68 Comments
Blastdoor - Tuesday, May 7, 2024 - link
It will be interesting to see if this marks a change to new nodes coming to the M lineup before the A lineup. It would make sense given that M-based devices are lower volume (as noted in the article).
On the other hand, maybe both Apple and TSMC are so eager to move from N3B to N3E that this is an unusual occurrence that we shouldn't read too much into...
name99 - Tuesday, May 7, 2024 - link
Apple may now have the protocols between its different IP blocks well enough abstracted that different IP blocks can be updated somewhat independently. So the A17 (on one track) gets a particular CPU, GPU, and ANE; the M3, on a different track, gets a newer CPU, the same GPU, and an older ANE. Then the M4 updates the ANE.
Going forward, an issue always to be remembered is that mask sets are ungodly expensive, meaning that hardware cannot be debugged like software ("compile it and see what happens"). Instead you have to do everything you can to test the HW on simulators and so on, and then test it as best you can on some other hardware platform.
There are different ways this plays out. One is what nVidia does: on their big chips they always have a few mm^2 of unused area in some corner or two, so they pack those unused areas with particularly tricky functionality that they want to test. Of course this is invisible to everyone else.
Another way to do it is to add the new functionality behind what are called chicken bits. If the bit (in some OS-controlled register) is set one way, the functionality is active; the other way, it's inactive. Then Apple can test the functionality (performance, security, stress conditions where it might fail). Eventually, at some point, it's considered safe for public use.
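A minimal sketch of the chicken-bit idea, with an invented register layout, bit position, and feature; nothing here reflects Apple's actual hardware:

```python
# Hypothetical illustration of a chicken bit: one bit in an OS-controlled
# register that switches a new hardware feature off if it turns out to be broken.
# The register value, bit position, and "new predictor" logic are all invented.

CHICKEN_REG_DEFAULT = 0x0        # all new features enabled
NEW_PREDICTOR_DISABLE = 1 << 7   # made-up bit position for a made-up feature

def predict_branch(history: int, chicken_reg: int) -> bool:
    """Return a taken/not-taken guess, gated by the chicken bit."""
    if chicken_reg & NEW_PREDICTOR_DISABLE:
        # Known-good legacy behaviour: use only the most recent outcome.
        return bool(history & 1)
    # New, still-being-validated behaviour: hash in older history bits.
    return bool((history ^ (history >> 3)) & 1)

# Normally the new path runs; if it misbehaves in the field, the OS or a
# firmware update sets the bit and the chip quietly falls back to the old path.
print(predict_branch(0b1011, CHICKEN_REG_DEFAULT))     # new path
print(predict_branch(0b1011, NEW_PREDICTOR_DISABLE))   # legacy fallback
```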
One way a company can run their OODA loop faster is more aggressive use of chicken bits on more different designs. I SUSPECT Apple does this aggressively, and that this is one reason they stagger product releases. Release version 1.a of an idea on an A chip and test it. See a few flaws, fix them, and release 1.b on an M chip. Repeat finding and fixing flaws. Try again with the Pro and Max chips.
If you can get it working by the time of the M chip, then we have some small piece of functionality (e.g. a feature of the branch predictor) that gives you a 1% speed boost of M over A. If you can get it working in Pro or Max, well, they get a 1% speed boost. All invisible, but that's how you get faster: 1% at a time, accumulated over every successive chip.
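Back-of-the-envelope, that compounding is what makes the invisible wins add up; the per-generation counts below are arbitrary, chosen only to illustrate the arithmetic:

```python
# How many small, individually invisible speedups compound across generations.
# The "5 features per generation over 4 generations" figures are arbitrary.
def cumulative_speedup(features_per_gen: int, generations: int, gain: float = 0.01) -> float:
    return (1.0 + gain) ** (features_per_gen * generations)

print(f"{cumulative_speedup(5, 4):.2f}x overall")   # twenty 1% wins -> ~1.22x
```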
M4 may have, for example, the "same" CPU as M3 (which was disappointing in that IPC was essentially the same as that of M2, although the branch predictors are supposed to be better, along with being 9-wide and having 8 rather than 6 integer units). But if much of the functionality that was supposed to boost the M3 CPU was behind chicken bits, and has now been debugged and fixed, maybe there's a 4-5% IPC boost just from those features?
Another way to see this is it's a good sign. It suggests that Apple volumes in different segments are high enough that Apple's willing to spin up a new mask set and integrate/validate a new design on a schedule that makes sense. We may not see "what makes sense" (some combination of TSMC process readiness, availability of certain parts like DRAM or screens, validated fix of some bug in an earlier part that held back important functionality) but it does suggest that Apple won't artificially hold back when all the pieces are available for a better product.
Sunrise089 - Thursday, May 9, 2024 - link
This is, I believe, the best AnandTech article comment I’ve read in the past seven years. I wish the site did more reviews, for example actually testing the M4 versus just reporting on it, but the comments section is still a must read at times.
Kangal - Tuesday, May 14, 2024 - link
I agree. One thing to remember is that Apple is still first and foremost The iPhone. Everything else takes a back seat. Apple could spin off the iPhone as its own company, and that would be the dominant one, as opposed to, say, "Apple Inc", which does iWatch, iPod, iPad, iTV, iMac, MacBook, Apple Music, Apple Cloud, Apple TV, Apple Messages, etc etc etc.
So, keeping that in mind, the M lineup is still following the A lineup. Remember, the A13 made a big leap forward, leaving "Android" in the dust. Then the A14 too. Then the A15 did the same, but with efficiency and not performance. Then it stopped there. The A16 was basically an overclocked A15. And the A17 was basically an overclocked A16.
And it is only NOW that the competitors are catching up in terms of efficiency AND performance. And that goes for the whole product stack, from the underclocked A16 (small iPhone 15) to the overclocked A17 (iPhone 15 Pro Max) to the M1 (Air tablet), M2 (Pro tablet), M3 (MacBook Air), M3 Pro (14in), M3 Max (16in), and M2 Ultra (Mac Studio).
So this M4 is really a knee-jerk reaction. It looks like a solid upgrade over the M2, and is really what the M3 was actually expected to be AND what it SHOULD have been. There are zero architectural improvements because Apple is now incompetent (allegedly), but the improvements do come in the form of more efficiency AND more performance due to the node jump. TSMC 7nm was legendary, TSMC 5nm+ was a decent upgrade, and this is the new, proper 3nm platform that is bringing +30% improvements. There is a high probability that the A18 coming in a few months will use the same process.
And when these corporations fall short and are not able to deliver on the basis of hardware, they default to using software as differentiation. We see this happening already. It started in late 2018 with Nvidia heading down the pathway of proprietary co-processors and software ("deep learning"), which marketing now just calls "AI", a label all the major corporations have agreed upon. Apple has started going down this pathway. So they will probably lean heavily into this at the end of next year with the A19 + M5, simply building a bigger chipset with more of these simple cores/accelerators and saying "hey guys, buy this, it can do ChatGPT, which is now built in, and do it at 3x the speed". By the time the A20 + MY rolls out, they will have had enough time to develop and improve the underlying architecture. Then it'll be time for another lithography upgrade, and so on and so forth. Basically like Intel's famous Tick-Tock cycle, but more like Architecture, Optimisation, Software, Process.
And when that fails, it'll just go back to the simple locked ecosystem, walled garden, marketing, etc etc. And when that fails, it will have to be lower prices, more volume, and somewhat of an assurance to the stakeholders and shareholders.
But before that last one, I think they will try to either disrupt the market with innovation, push a gimmicky fad device or service, or stir up controversy in the news with their designers/leaders. Whatever works to keep the profits high and the investors from getting scared.
Apple MAY have to eventually give up the crown for performance, and efficiency, and security, and price. But they'll hold onto it as long as they are able to. And they're not letting go of the "prestige" crown, which is kind of what happened to BlackBerry back in 2002-2008, if that analogy works. Or the opposite of KIA during their 2010 transition.
GeoffreyA - Tuesday, May 14, 2024 - link
Excellent. Thanks.
lemurbutton - Tuesday, May 7, 2024 - link
AMD and Intel still haven't caught up to M1 in perf/watt yet. There is still no good fanless laptop based on an AMD/Intel chip. Now we're at M4 already. Yikes.
Apple is really knocking it out of the park.
Dante Verizon - Tuesday, May 7, 2024 - link
Nah, AMD has similar or superior efficiency to Apple even though it uses inferior processes and a more conservative design. Apple, by controlling hardware and software, as well as cramming in a lot of ASICs, can present itself as much more efficient in specific or extremely light tasks such as web browsing.
lemurbutton - Tuesday, May 7, 2024 - link
Not even the most cringy AMD fanboy would claim that AMD has equivalent efficiency to Apple Silicon.
AMD still hasn't caught up to M1.
Dante Verizon - Tuesday, May 7, 2024 - link
Typical Apple fanboy, calling others what they are.
At high CPU load the 16-core chip (M3 Max) exceeds 100W; the 7950X configured at 65W beats the M3 Max in everything.
Boland - Tuesday, May 7, 2024 - link
When using 100% of the CPU AND 100% of the GPU at the same time, the M3 Max can consume up to 100W; the CPU power consumption is a fraction of that 100W.
Dante Verizon - Tuesday, May 7, 2024 - link
Nope. Stressing only the CPU, it reaches 100W and beyond. Cinebench, and an external monitor: https://www.notebookcheck.net/Apple-MacBook-Pro-14....
dada_dave - Tuesday, May 7, 2024 - link
It's a 55W CPU; the same wall-power measurement for the 7950X running in "65W" eco mode comes in at nearly 200W:
https://www.notebookcheck.net/AMD-Ryzen-9-7950X-Pr...
The Hardcard - Tuesday, May 7, 2024 - link
Reread your link. 56 Watts there.
Terry_Craig - Tuesday, May 7, 2024 - link
You are only reading the Boost TDP(PL2) specs on paper, which actually don't even exist on Apple's website, so they were probably invented. Real world testing shows 90-100w consumption from CPU alone.dada_dave - Tuesday, May 7, 2024 - link
"You are only reading the Boost TDP(PL2) specs on paper, which actually don't even exist on Apple's website, so they were probably invented. Real world testing shows 90-100w consumption from CPU alone." No, I measured it myself with powermetrics (the equivalent of HWInfo for Macs). Right now. A 10+4 CPU spikes at 44W, a 12+4 would spike at 50W for the tests I ran (GB6 multicore). So measuring from the wall yeah, a fully loaded M3 Max probably measures 70-80 watts under that load which is what notebook check puts the TDP at "78W". R15 seems particularly hard hitting and I haven't run that particular test so 93W at wall for ~60W at CPU is plausible but that would be the highest I've ever seen reported anyone, possibly due to Rosetta 2, so its 75-90 wall power, not 90-100.However, again the comparable R15 wall power measure for the 7950X at "65W" is 200W (90-100W if they haven't subtracted idle power which I'm pretty sure they haven't). Also, not to belabor the point though a 7950X has 32 threads and 16 full performance cores. A Threadripper pro at 105W would also outperform a 7950X at 105W. When comparing it to 7840HS and 7945HX3D ...
https://arstechnica.com/gadgets/2023/11/review-app...
The performance and performance per watt (as measured by powermetrics and HWinfo) of the M3 Max are significantly better.
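For readers trying to follow the wall-power vs. package-power back-and-forth, here is a rough sketch of the arithmetic using the wattages quoted in this thread; the rest-of-system draw and PSU efficiency are assumed round numbers, not measurements:

```python
# Why software-reported CPU package power and at-the-wall power are not the
# same number. The 56 W and 200 W figures are the ones quoted in this thread;
# the 15 W "rest of system" and 90% conversion efficiency are illustrative assumptions.

def wall_power(cpu_pkg_w: float, rest_of_system_w: float, psu_efficiency: float = 0.90) -> float:
    """Estimate wall draw from a software-reported CPU package power."""
    return (cpu_pkg_w + rest_of_system_w) / psu_efficiency

# M3 Max: ~56 W package under an all-core load (per the notebookcheck / powermetrics figures above)
print(f"M3 Max wall estimate: {wall_power(56, 15):.0f} W")   # ~79 W, close to the quoted '78W'

# 7950X in 65 W eco mode: a ~200 W wall reading implies a lot of power that is
# not the nominal 65 W package budget (PPT overshoot, dGPU, board, PSU losses).
print(f"Non-'65W' share of a 200 W wall reading: {200 * 0.90 - 65:.0f} W")   # ~115 W
```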
Terry_Craig - Wednesday, May 8, 2024 - link
https://www.anandtech.com/show/17641/lighter-touch...
dada_dave - Tuesday, May 7, 2024 - link
Though admittedly the M3 Max machine has far better idle power.
Boland - Tuesday, May 7, 2024 - link
You’re not very good at this. From your own link.
“ The power consumption of the M3 Max will drop from 56 to around 41 Watts in Automatic mode and about 50 Watts in High Performance mode.”
Blastdoor - Tuesday, May 7, 2024 - link
Any links to support those bold claims?
https://arstechnica.com/gadgets/2023/11/review-app...
shows all M-series chips with much better performance/watt than x86 competitors.
https://www.notebookcheck.net/Apple-M3-Max-16-Core...
Shows that under load the CPU part of the M3 Max consumes up to 56 watts.
Terry_Craig - Tuesday, May 7, 2024 - link
I think you have to learn to read more carefully to avoid embarrassing yourself; the link you are posting shows the average consumption in Cinebench MT at 90W. The test isolates CPU consumption, as they use an external monitor.
dada_dave - Tuesday, May 7, 2024 - link
It doesn't isolate the rest of the computer system: a wall-power measurement can be significantly higher than measuring CPU power by itself in software, with either powermetrics on Mac or HWinfo on PC. As notebookcheck says, the CPU uses about 56 watts under load. I can verify it on my own machine. The "65W" for the 7950X being quoted by Dante is for software measurements, not wall power (and typically, even when they say 65W TDP, x86 devices can still exceed that). Not for nothing, but notebookcheck did the same power-consumption test for the 7950X at eco mode 65 ... they measured 200W:
https://www.notebookcheck.net/AMD-Ryzen-9-7950X-Pr...
Dante Verizon - Tuesday, May 7, 2024 - link
A 7950X in 65W eco mode pulls up to 180W at high load, as PL2 should be around 170W. But you can manually set the CPU to run below 100W and still have more performance than the M3 Max.
lemurbutton - Wednesday, May 8, 2024 - link
Cinebench R23 uses the Intel Embree engine, which is hand-optimized for AVX. Also, no one uses Cinemark. It's a useless benchmark and can't be used to compare between ARM and x86 chips.
Use Geekbench, which highly correlates with SPEC, which is the standard. Cinebench R23 is horrendous as a CPU benchmark.
dada_dave - Tuesday, May 7, 2024 - link
Though admittedly the M3 Max machine has far better idle power.
Blastdoor - Tuesday, May 7, 2024 - link
Follow your own advice: it clearly says 56 W FOR CPU ONLY. Are you not able to comprehend that this is an SoC with both a CPU and a GPU, and that what you measure from the wall is not relevant to the discussion? It is so freaking simple: open your eyes, turn on your brain.
Dante Verizon - Tuesday, May 7, 2024 - link
Why don't you look at the test instead of the erroneous text?
Terry_Craig - Tuesday, May 7, 2024 - link
Yes, in high performance x86 is the king. Apple has a well-optimized design that lowers the voltage dramatically at low load, and the software helps a lot, as they control everything. But the M line also has LPDDR5X integrated in the same package as the SoC; no x86 design has this yet.
name99 - Tuesday, May 7, 2024 - link
High performance what?
The GB6 single-core Processor Benchmark shows the best x86 is the i9-13900KS at 3109; the best Mac is the M3 Max at 3131. These are GB6 scores averaged over a large number of users.
Yes, you can find individual crazy-high GB6 scores for rando Intel chips. These are mostly fake, and the ones that aren't fake represent some nitrogen-cooled nonsense. That's the advantage of the Processor Benchmark lists: they give the real-world performance, not tweaker nonsense.
Terry_Craig - Tuesday, May 7, 2024 - link
RealWorld =/= Geekbench.
GeekBench is a biased synthetic benchmark that doesn't represent anything, and conveniently gets a new version every time Apple releases a new CPU.
hecksagon - Tuesday, May 7, 2024 - link
Geekbench is actually a real shit benchmark for larger CPUs like this. For example, look at the Geekbench multi score for a 7995WX and then an M3 Max. There is no way an M3 Max is going to have a score even in the ballpark of a modern 96-core CPU.
hecksagon - Tuesday, May 7, 2024 - link
Ryzen 7 7840U actually beats the M3 Pro 12-Core and M3 Max 16-Core in Cinebench R23 Multi Power Efficiency on external monitor.
https://www.notebookcheck.net/Apple-M3-Pro-M3-Max-...
lemurbutton - Wednesday, May 8, 2024 - link
Cinebench R23 uses the Intel Embree engine, which is hand-optimized for AVX. Also, no one uses Cinemark. It's a useless benchmark and can't be used to compare between ARM and x86 chips.
Dante Verizon - Wednesday, May 8, 2024 - link
Do you want software that doesn't use the capabilities of the x86 CPU correctly to pretend that the M3 is better? Geekbench does it well. I wouldn't doubt it if it had Apple's hand behind it.
hecksagon - Tuesday, May 7, 2024 - link
Ryzen 7 7840U actually beats the M3 Pro 12-Core and M3 Max 16-Core in Cinebench R23 Multi Power Efficiency on external monitor. Its score and power draw are essentially identical to the 11-Core M3 Pro.
https://www.notebookcheck.net/Apple-M3-Pro-M3-Max-...
lemurbutton - Tuesday, May 7, 2024 - link
Ah yes. Cinebench R23. The benchmark that is hand-optimized for x86 AVX barely outperforms Apple Silicon.
Stop using Cinebench R23. No one runs that application and it doesn't correlate with any other applications.
Dante Verizon - Wednesday, May 8, 2024 - link
It does. It's the Cinema 4D engine. Most ARM CPU reviews use irrelevant benchmarks like 3DMark and Geekbench.
goatfajitas - Wednesday, May 8, 2024 - link
Dante, if it doesn't make Apple look good, it's an irrelevant benchmark. If it makes Apple look good, it's a good benchmark. Apple Fanboy 101. Lemur litters every thread with this one-sided nonsense.
thunng8 - Thursday, May 9, 2024 - link
You know, Cinebench 2024 has been released; it has been properly optimized for more platforms and includes a GPU test as well. No one should be using Cinebench R23 to compare an optimized AVX build vs Apple Silicon.
kn00tcn - Tuesday, May 7, 2024 - link
Manufacturers refusing to make a fanless laptop has nothing to do with AMD or Intel architectures; 10 watts is exactly the same power load and battery drain whether it's now or a console in the '80s. Intel was in fanless x86 tablets a decade ago. Y i k e s yourself.
dwbogardus - Thursday, May 9, 2024 - link
I had a very nice ASUS tablet powered by an Intel x86 processor, which I used for seven years. It worked well, but its Android 4.4.2 OS was never updated or supported by ASUS. It got only one security update, and no OS updates, and after about five years, apps were requiring newer versions of Android, and there was nothing I could do about it. I finally replaced it with a Samsung Galaxy Tab 7 Lite, which not only works well, but also has great support. Originally advertised with Android 11, mine came with Android 12, and Samsung updated it to Android 13, plus lots of other updates. Wonderful support from Samsung vs. near zero from ASUS, but the Intel chip was fine.
shadowjk - Sunday, May 26, 2024 - link
Indeed, I think I have the exact same tablet. The keyboard on mine was very nice for note taking, and I'm still using it with Google Docs and Sheets for as long as they'll still work.
It's a shame Intel sometimes has such a short attention span... (and probably the pricing was dumb)
GeoffreyA - Wednesday, May 8, 2024 - link
For us trapped in the backward x86 regime, it is quite unfortunate. Slowly, some of us are seeing the light and dream of crossing over to the enlightened world. Apple, the bastion of truth and performance, a dream of a better computer, where all MHz are equal.
Dante Verizon - Wednesday, May 8, 2024 - link
What baseless alienation.
GeoffreyA - Wednesday, May 8, 2024 - link
It's meant as a joke, not serious :)
goatfajitas - Wednesday, May 8, 2024 - link
Out of what park? The park they made and say they knocked it out of? LOL. ARM makes good chips for mobile devices, but it's not playing in the same "park" as x86 CPUs. You are comparing a sports car to an 18-wheeler. ARM processors are fantastic at multimedia tasks and perf/watt. Heavy lifting? Not so much.
meacupla - Tuesday, May 7, 2024 - link
38 TOPS paired with 8GB RAM and 256GB storage is going to be the hilarious configuration.
Dante Verizon - Tuesday, May 7, 2024 - link
"At some point, TSMC’s N3E production capacity with catch up, and then-some"I'd like my chips without ketchup please.The text needs careful revision :)
usiname - Tuesday, May 7, 2024 - link
You know the new SoC is full trash when they compare it with their own two-generations-older SoC.
Boland - Tuesday, May 7, 2024 - link
They’re comparing it to the previous iPad…
diastatic.power - Tuesday, May 7, 2024 - link
It's because the iPad model it's replacing used an M2. <eyeroll>
And conveniently, Apple has provided comparisons of both the M3 and the M4 vs. the M2, so, you know, it's not like it's hard to see how the M3 and M4 compare.
incx - Tuesday, May 7, 2024 - link
It is Apple. <eyeroll>
They will make a comparison to old Intel chips or even Brazilian oranges if they must, to make the Number Look Bigger Without Any Relevant Context. Which they kinda need to do when they barely improve most mainstream-relevant indicators over 3 gens.
name99 - Tuesday, May 7, 2024 - link
It was an iPad event. The iPad being replaced used the M2. Apple announcements are about PRODUCTS, not chips. This is just basic common sense.
And, BTW, since the M2 is a pretty damn kickass chip, I don't think your comment lands quite as powerfully as you seem to think it does...
usiname - Wednesday, May 8, 2024 - link
So impressive that the turd M3 has the same performance/watt in games and productivity, when there is no dedicated ASIC (90% of the time, when you actually do something and not FB/browsing BS), compared to Zen 4 on a worse node. Of course the price of an M3 with more than 8GB RAM and 256GB is double, but this is the price to have scrap on the back side of your device.
Dante Verizon - Friday, May 10, 2024 - link
Stop hurting their feelings.
kn00tcn - Tuesday, May 7, 2024 - link
The actual PR they released made claims of better TOPS than 'any AI PC'... in the same month that Qualcomm is about to release SoCs with higher TOPS, after Intel and AMD SoCs with GPU-combined higher TOPS, after years of the most basic low-end Nvidia dGPUs having higher TOPS that somehow don't count as an AI PC.
Instead of boasting about their app ecosystem having functional AI, instead of saying single chip or accelerator, they resort to misleading nonsense and altered definitions, the lying cult of Apple.
iAPX - Wednesday, May 8, 2024 - link
The M2 supports 3 displays, as does the M1, but with only 2 routings. For example, 1 internal display routed plus 2 external displays on the same port.
The M3 and M4 support only 2 displays (with 2 routings).
deil - Wednesday, May 8, 2024 - link
I've got to love Apple slides. I wonder why they don't say it's 98x faster than the Intel 8086?
I am sure Apple fans would cheer.
I wonder where that 50% comes from. Was the M2 overheating that much in this form factor?
repoman27 - Wednesday, May 8, 2024 - link
@Ryan Smith - there's a typo in the chart, it should be LPDDR5X-7500 SDRAM for the M4.
tafreire - Wednesday, May 8, 2024 - link
I don't understand Apple's approach of NOT putting an AV1 encoder in their CPUs. An AV1 encoder is a requirement for me to buy an Apple computer.
GeoffreyA - Thursday, May 9, 2024 - link
Indeed, it would be nice to see VVC in CPUs as well, now that playback has been enabled through FFmpeg's libavcodec.
CaptGingi - Friday, May 10, 2024 - link
I was intrigued by your comment (genuinely). So I asked two people who own large video production companies in my town, whom I've known and trusted for decades now. Their comments were surprisingly (to me) consistent. They rely on software encoding solutions for final renders going to broadcast, mass streaming, or even inclusion in software, video games, etc. Almost universally, when video comes in it is converted into ProRes, in both full resolution/fidelity and a smaller proxy copy that's used throughout their process, then software-rendered to all final outputs. Both said (in slightly different ways) that hardware encoders are simply not capable of producing anywhere near the same quality of final output as software renderers can. They said it's better to let YouTube, Twitch, X, etc. use their server farms to do that compression for the vast majority of people, since they're going to publish through a service like that 95% of the time anyway.
GeoffreyA - Tuesday, May 14, 2024 - link
I think for most final rendering, it's going to be perceptually transparent (ProRes, DNxHD), lossless (FFV1), or high-bitrate software encoding. Hardware encoding is useful for temporary encodes where time is of the essence, streaming, or recording. But this advantage is weakened by software encoders being quite fast on the higher presets today. On my computer, at 720p, SVT-AV1's fastest preset, 12, is quicker than x264's "medium", quite a feat, and libvvenc's "faster" beats libaom's "8".
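For anyone wanting to try that kind of preset timing comparison themselves, here is a minimal sketch; it assumes an ffmpeg build with libsvtav1 and libx264 on the PATH, and "clip720.mp4" is a placeholder input file:

```python
# Rough encode-time comparison along the lines described above.
# Assumes ffmpeg is installed with libsvtav1 and libx264; "clip720.mp4" is a
# placeholder 720p input. Quality settings are left at the encoder defaults,
# so only speed is compared; a real test should also match bitrate/CRF.
import subprocess
import time

def encode_seconds(codec: str, preset: str, outfile: str) -> float:
    cmd = ["ffmpeg", "-y", "-i", "clip720.mp4",
           "-c:v", codec, "-preset", preset, "-an", outfile]
    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.time() - start

print("SVT-AV1 preset 12:", round(encode_seconds("libsvtav1", "12", "out_av1.mkv"), 1), "s")
print("x264 medium:", round(encode_seconds("libx264", "medium", "out_x264.mkv"), 1), "s")
```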
Hresna - Saturday, May 25, 2024 - link
You’re not wrong. But in smaller productions or solo work, a single video might get exported dozens of times before it is "final" as part of the workflow. Also, for the YouTube crowd, it's a lot easier to upload an overkill-bitrate AV1 version and let YouTube optimize it than to spend a few hours doing a perfect encode that YouTube is just going to redo anyway. These more modest productions are also big consumers of Apple consumer-grade products.
flyingpants265 - Saturday, May 11, 2024 - link
Is it possible to make the neural accelerator thing useful for everyday activities? Rendering chrome, upscaling video... Unzipping files... It's gotta be useful for something, right?
tipoo - Sunday, May 12, 2024 - link
I'd love to see an AnandTech-style deep dive on this one.
ddps - Tuesday, May 14, 2024 - link
And how many of you AMD apologists have gone 3 years without ever hearing a fan — even once — or feeling warmth in your laptop? I have, and it's liberating. It would be good for us all to try to give some credit for the end result where credit is due.
flyingpants265 - Wednesday, May 15, 2024 - link
We have. Apple silicon is good.
Harry_Wild - Wednesday, May 15, 2024 - link
IMO, not that much different at all! Hold my 💵 for M5 or M6 to be released on 2nm!