493 Comments

  • JfromImaginstuff - Monday, October 25, 2021 - link

    Huh, nice
  • Kangal - Monday, October 25, 2021 - link

    What isn't nice is gaming on macOS.
    We all know how bad emulation is, and while Apple seems to have pulled off "magic" with the strong performance of its Metal/Rosetta 2 hybrid translation... at the end of the day it isn't enough.

    The M1X is slightly slower than the RTX-3080, at least on-paper and in synthetic benchmarks. This is the sort of hardware that we've been denied for the past 3 years. Should be great. It isn't. When it comes to the actual Gaming Performance, the M1X is slightly slower than the RTX-3060. A massive downgrade.

    The silver lining is that developers will get excited, and we might see some AAA-ports over to the macOS system. Even if it's the top-100 games (non-exclusives), and if they get ported over natively, it should create a shock. We might see designers then developing games for PS5, XSX, OSX and Windows. And maybe SteamOS too. And in such a scenario, we can see native-coded games tapping into the proper M1X hardware, and show impressive performance.

    The same applies for professional programs for content creators.
  • at_clucks - Monday, October 25, 2021 - link

    "The silver lining is that developers will get excited, and we might see some AAA-ports over to the macOS"

    I think that's their whole point. Make developers optimize for Mac knowing that gamers would very likely choose to have their performant gaming machine in a Mac format (light, cool, low power) rather than in a hot and heavy DTR format if they had the choice of natively optimized games.
  • bernstein - Monday, October 25, 2021 - link

    we now have 3 primary GPU APIs:
    - directx (xbox, windows)
    - vulkan (ps5, switch, steamos, android)
    - metal (macos, ios & derivatives)
    Because they’re all low level & similar, most bigger engines support them all.

    There used to be two for pc, one for mobile and three for consoles. And vastly different ones at that.

    So it will come down to the addressable market and how fast apple evolves the APIs. Historically Windows, with its build-once, run-two-decades-later approach, has made it much, much easier on devs.
  • yetanotherhuman - Tuesday, October 26, 2021 - link

    "how fast apple evolves the api‘s"

    That'll be very slow, given their history. Why they invented another API, I have no idea. Vulkan could easily be universal. It runs on Windows, which you didn't note, with great results.
  • Dribble - Tuesday, October 26, 2021 - link

    Vulkan is too low level; it assumes nothing, which means you have to write a ton of code to get to the level of Metal, which assumes you have an Apple device. If Metal/DX are like writing in assembly language, for Vulkan you start off with just machine code and have to write your own assembler first. Hence it's not really a great API to work with; if you're targeting Apple anyway, Metal is so much nicer.
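
    For a sense of how much boilerplate each API demands, here is a minimal Metal compute dispatch in Swift. This is an editorial sketch rather than anything from the article: the kernel, the buffer contents, and the force-unwraps are illustrative, and error handling is elided.

        import Metal

        // Tiny compute kernel, compiled from source at runtime for brevity:
        // double every float in a buffer.
        let source = """
        #include <metal_stdlib>
        using namespace metal;
        kernel void doubler(device float *data [[buffer(0)]],
                            uint id [[thread_position_in_grid]]) {
            data[id] *= 2.0f;
        }
        """

        let device = MTLCreateSystemDefaultDevice()!          // default GPU (Apple, AMD, or Intel)
        let queue = device.makeCommandQueue()!
        let library = try! device.makeLibrary(source: source, options: nil)
        let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "doubler")!)

        var input: [Float] = [1, 2, 3, 4]
        let buffer = device.makeBuffer(bytes: &input,
                                       length: input.count * MemoryLayout<Float>.stride,
                                       options: .storageModeShared)!

        let commands = queue.makeCommandBuffer()!
        let encoder = commands.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(buffer, offset: 0, index: 0)
        let grid = MTLSize(width: input.count, height: 1, depth: 1)
        encoder.dispatchThreads(grid, threadsPerThreadgroup: grid)
        encoder.endEncoding()
        commands.commit()
        commands.waitUntilCompleted()

        // Unified memory: read the result back without an explicit copy.
        let out = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
        print(Array(UnsafeBufferPointer(start: out, count: input.count)))   // [2.0, 4.0, 6.0, 8.0]

    A comparable "hello compute" in raw Vulkan needs an instance, a physical and logical device, a queue family, descriptor set layouts, a pipeline layout, and explicit memory allocation before the first dispatch, which is the boilerplate gap being described here.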
  • Gracemont - Wednesday, October 27, 2021 - link

    Vulkan is too low level? It’s literally comparable to DX12. Like bruh, if anything the Metal API is even more low level for Apple devices cuz of it being built specifically for Apple devices. Just like how the NVAPI for the Switch is the lowest level API for that system cuz it was specifically tailored for that system, not Vulkan.
  • Ppietra - Wednesday, October 27, 2021 - link

    Gracemont, the Metal API was already being used with Intel and AMD GPUs, so not exactly a measure of "low level"
  • NPPraxis - Tuesday, October 26, 2021 - link

    "Why they invented another API, I have no idea. Vulkan could easily be universal."

    You're misremembering the history. Metal predates Vulkan.

    Apple was basically stuck with OpenGL for a long time, which fell further and further behind as DirectX got lower level and faster. That put all of Apple's devices at a huge gaming handicap.

    Then Apple invented Metal for iOS in 2014 which gave them a huge performance rendering lead on mobile devices.

    They let the Mac languish for a couple of years, not even updating the OpenGL version. Macs got worse and worse for games. In 2016, Vulkan came out. People speculated that Apple could adopt it.

    In 2017, Apple released Metal 2 which was included in the new MacOS.

    Basically, Apple had to pick between unifying MacOS (Metal) with iOS or with Linux gaming (Vulkan). Apple has gotten screwed over before by being reliant on open source third parties that fell further and further behind (OpenGL, web browsers before they helped build WebKit, etc) so it's kind of understandable that they went the Metal-on-MacOS direction since they had already built it for iOS.

    I still wish Apple would add support for it (Mac: Metal and Vulkan, Windows: DirectX and Vulkan, Linux: Vulkan only), because it would really help destroy any reason for developers to target DirectX first, but I understand that they really want to push devs to Metal to make porting to iOS easier.
  • Eric S - Friday, October 29, 2021 - link

    Everyone has their own graphics stack: Microsoft, Sony, Apple, and Nintendo all have proprietary stacks. Vulkan wants to change that, but that doesn't solve everything. Developers still need to optimize for differences in GPUs. Apple is looking for full vertical integration, which having their own stack helps with.
  • darwinosx - Friday, November 5, 2021 - link

    Everything you said is wrong.
  • C@illou - Tuesday, October 26, 2021 - link

    Slight correction, Vulkan works great on windows (and also works on Linux, but that counts the same as "SteamOS"), that makes it the most compatible API.
  • xeridea - Tuesday, October 26, 2021 - link

    Vulkan runs on everything.
  • Qozmo - Tuesday, October 26, 2021 - link

    Worth mentioning that MoltenVK exists officially from the Khronos Group, which layers Vulkan on top of the Metal API, enabling Vulkan apps to run on macOS/iOS.
  • Wrs - Monday, October 25, 2021 - link

    Is it just me or does that make no economic sense? When I’m AAA gaming (flashy visuals, complex scenes, high fps) I don’t feel as if I’m looking for light and cool or portable. I’d be on a desk flinging a mouse, or wielding a controller in front of a TV.
  • michael2k - Monday, October 25, 2021 - link

    Maybe it isn't clear, but 'light and cool' means there is lots of headroom for overclocking. From the third page:
    Power Behaviour: No Real TDP, but Wide Range
    Apple doesn’t advertise any TDP for the chips of the devices – it’s our understanding that simply doesn’t exist, and the only limitation to the power draw of the chips and laptops are simply thermals. As long as temperature is kept in check, the silicon will not throttle or not limit itself in terms of power draw.

    You can imagine that in a desktop, with far better cooling and far more available power, the M1P/M1M might grow well beyond the 92W of observed package power. The Mac Pro with 28 cores and 2 GPUs today will allow the CPU to consume 902W; there is a lot of room for performance to grow!

    So imagine 10x more performance from a desktop Mac with 10 M1P in some kind of fabric (100 CPU cores and 320 GPU cores!), or a much smaller number of M1P, maybe 4 (40 CPU cores and 128 GPU cores), with each allowed to consume 2.5x as much power.
  • sean8102 - Tuesday, October 26, 2021 - link

    Problem is developer support. It seems there are only 2 "AAA" macOS AND ARM native games.

    https://www.applegamingwiki.com/wiki/M1_native_com...

    That has to improve A LOT for getting a ARM Mac for gaming to make any sense. Otherwise you're always taking the performance hit of Rosetta 2. Plus not many AAA games are releasing for macOS since they announced the switch to ARM.

    The chips are amazing in terms of performance and efficiency, but getting a Mac, especially an ARM-based one, for gaming wouldn't make much sense. At least for now, and not unless developer support improves A LOT.
  • AshlayW - Tuesday, October 26, 2021 - link

    Clock speeds do not scale with power consumption, and Firestorm cores are not designed to reach high clock speeds; these cores would likely not break 3.5 GHz if overclocked (wide, dense design for perf/W). AMD / Intel / NVIDIA's 5nm-class processors will put Apple back in its place for people wanting NOT to be locked into a walled garden from a company adamant on crushing consumer rights. It's just a shame that Apple's silicon engineers are so freakin' good; they're working for the wrong company (and hurting human progress by putting the best wafers/chips in Apple products).
  • valuearb - Tuesday, October 26, 2021 - link

    Lol apple is responsible for more human progress than all the other PC makers combined.
  • MooseNSquirrel - Friday, October 29, 2021 - link

    Only if your metric is marketing based.
  • michael2k - Thursday, October 28, 2021 - link

    Power consumption scales linearly with clock speed.

    Clock speed, however, is constrained by voltage. That said, we already know that the M1M itself has a 3.2GHz clock while the GPU is only running at 1.296GHz. It is unknown if there is any reason other than power for the GPU to run so slowly. If they could double the GPU clock (and therefore double its performance) without increasing its voltage, it would only draw about 112W. If they let it run at 3.2GHz it would draw 138W.

    Paired with the CPU drawing 40W, the M1M would still be well under the Mac Pro's current 902W. So that leaves open the possibility of a multi-chip solution (4 M1P still only draw 712W if the GPU is clocked to 3.2GHz) as well as clocking up slightly to 3.5GHz, assuming no need to increase voltage. Bumping up to 3.5GHz would still only consume 778W while giving us almost 11x the GPU power of the current M1P, which would be 11x the performance of the 3080 found in the GE76 Raider.

    Also, you bring up AMD/Intel/NVIDIA at 5nm, without also considering that when Apple stops locking up 5nm it's because they will be at 4nm and 3nm.
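
    A rough sketch in Swift of the scaling arithmetic above. The assumptions are the comment's own: dynamic power ≈ C·V²·f, so power scales linearly with clock at a fixed voltage, and the ~56 W GPU baseline is simply what the 112 W "doubled clock" figure implies rather than a measured value.

        // Dynamic power model: P ≈ C·V²·f, so at constant voltage P scales linearly with clock.
        let baselinePowerW = 56.0       // GPU power implied by the 112 W doubled-clock figure above
        let baselineClockGHz = 1.296    // M1 Max GPU clock

        func scaledPowerW(clockGHz: Double, voltageRatio: Double = 1.0) -> Double {
            baselinePowerW * (clockGHz / baselineClockGHz) * voltageRatio * voltageRatio
        }

        print(scaledPowerW(clockGHz: 2.592))                   // 112.0 W at double the clock, same voltage
        print(scaledPowerW(clockGHz: 3.2))                     // ~138 W at the CPU's 3.2 GHz, same voltage
        print(scaledPowerW(clockGHz: 3.2, voltageRatio: 1.1))  // ~167 W if voltage also had to rise 10%

    In practice higher clocks usually do require more voltage, so the last line (roughly cubic growth) is a more realistic ceiling than the purely linear figures.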
  • uningenieromas - Thursday, October 28, 2021 - link

    You would think that if Apple's silicon engineers are so freakin' good, they could basically work wherever they want...and, yep, they chose Apple. There might be a reason for that?
  • varase - Wednesday, November 3, 2021 - link

    We're glad you shared your religious epiphany with the rest of us 😳.
  • Romulo Pulcinelli Benedetti - Sunday, May 22, 2022 - link

    Sure, Intel and AMD would have put in all the hard work to advance humanity toward Apple-level chips if Apple was not there... believe that if you want.
  • Alej - Tuesday, October 26, 2021 - link

    I don't fully get the scarcity of native ARM Mac games; a lot of games get ported to the Switch, which is already ARM. And if they are using Vulkan as the graphics API, then there's already MoltenVK to translate it to Metal, which, even if not perfect and not using 100% of the available tricks and optimizations, would run well enough.
  • Wrs - Tuesday, October 26, 2021 - link

    @Alej It's a numbers and IDE game. 90 million Switches sold, all purely for gaming, supported by a company that exclusively does games. 20 million Macs sold yearly, most not for gaming in the least, built by a company not focused on gaming for that platform. iPhones are partially used for gaming, however, and sell many times the volume of the Switch, so as expected there's a strong gaming ecosystem.
  • Kangal - Friday, October 29, 2021 - link

    Apple is happy where they are.
    However, if Apple were a little faster/wiser, they would've made the switch from Intel Macs to M1 Macs back in 2018 using the TSMC 7nm node, their Tempest/Vortex CPUs and their A12-GPU. They wouldn't be too far removed from the performance of the M1, M1P, M1X if scaled similarly.

    And even more interesting, what if Apple released a great Home Console?
    Something that is more compact than the Xbox Series S, yet more powerful than the Xbox Series X. That would leave both Microsoft and Sony scrambling. They could've designed a very ergonomic controller with much less latency, and they could've enticed all these AAA developers to their platform (Metal v2 / Swift v4). It would be gaming-centric, with out-of-box support for iOS games/apps, and even limited-time support (Rosetta v2) for legacy OS X applications. They wouldn't be able to subsidise the pricing like Sony, but could basically front the costs from their own pocket to bring it to a palatable RRP. After 2 years, they would be able to turn a profit from its hardware sales and software sales.

    I'm sure it could have been a hit. And they would then pivot to making MacBook Pros more friendly for media consumption and better supported by developers, strengthening their entire ecosystem and leveraging their unique position in software and hardware to remain competitive.
  • kwohlt - Tuesday, October 26, 2021 - link

    I think it is just you. Imagine a hypothetical ultra thin, fanless laptop that offered 20 hours of battery under load and could play games at desktop 3080 levels...Would you wish this laptop was louder, hotter, and had worse battery?

    No of course not. Consuming less power and generating less heat, while offering similar or better performance has always been the goal of computing. It's this trend that allows us to daily carry computing power that was once the size of a refrigerator in our pockets and on our wrists.
  • Wrs - Wednesday, October 27, 2021 - link

    No, but I might wish it could scale upward to a desktop/console for way more performance than a 3080. :) That would also be an indictment of how poorly the 3080 is designed or fabricated, or how old it is.

    Now, if in the future silicon gets usurped by a technology that does not scale up in power density, then I could be forced to say yes.
  • turbine101 - Monday, October 25, 2021 - link

    Why would developers waste their time on a device which will have barely any sales?

    The M1 Max Mac costs NZ$6k. That's just crazy; even the most devout Apple enthusiasts cannot justify this. And the Mac is far less usable than iOS.
  • Hrunga_Zmuda - Monday, October 25, 2021 - link

    Everything you just wrote is wrong.

    The maxed-out computer is in the $6K range. They start at $1,999, quite in range of gaming machines from MSI and others. (And they are faster than the fastest MSIs.)

    Barely any sales? They are the #3 computer maker in the world. And they are growing way faster than the competition.

    Such thinking was legitimate 10 - 20 years ago. But not any longer.
  • sirmo - Monday, October 25, 2021 - link

    The full M1 Max starts at $3,099, and that's on the 14" model. On the 16" model it's $3,499.
  • valuearb - Tuesday, October 26, 2021 - link

    14-inch MBP w/ M1 Max & 32GB RAM, 512GB SSD is $2,899.
  • nico_mach - Tuesday, October 26, 2021 - link

    I think they overstated it, but it's a legitimate concern.
    Most gaming PCs are less than $2k. We can assume that Apple will release more Mac Minis, which would be cheaper than these, but will they be powerful enough? Will they support multiple monitors well? These are open questions. Apple clearly has different priorities and it seems that they don't want to court gamers/game publishers at all anymore.

    Also, if you compare benchmarks, there are places where AMD is very close simply from being on the most recent TSMC production line. They have a huge competitive advantage now: Intel fell behind, AMD is not well capitalized and fab space is very limited. They are on the top of their game, but also a little lucky. That won't last forever.

    Though with MS having their heads in the clouds, it might last forever. The pandemic could be a last gasp of sorts, even if gamers don't want to give up our PCs. Just look at those prices and new efficiency regulations.
  • sharath.naik - Monday, October 25, 2021 - link

    There is also the big elephant in the room: soldered SSDs. Every Mac has a shelf life of 3,000 writes. I do not see how spending $4,000 on a laptop that dies after a fixed number of data writes is a sensible choice for anyone.
  • valuearb - Tuesday, October 26, 2021 - link

    That’s a myth.
  • yetanotherhuman - Tuesday, October 26, 2021 - link

    3000 writes: full-drive writes, maybe. It's certainly not a myth that SSDs die. They die. If they're soldered, they take everything with them. That's not misleading at all.
  • web2dot0 - Tuesday, October 26, 2021 - link

    You know what's a myth? SSDs dying. Can you tell me the last time an SSD died on you?

    Every single ssd I’ve owned still works perfectly to this day.

    Hard drives? They have died on me.
  • Oxford Guy - Friday, October 29, 2021 - link

    'Can’t you tell me the last time a SSD died on you?'

    I have a stack of dead OCZ drives.

    I had an Intel that had the file corruption bug. It was eventually patched.
  • flyingpants265 - Sunday, October 31, 2021 - link

    Are you simple? SSDs absolutely die. Every single one of them will die after enough writes. Some will even die after only like 100TB of writes which is just filling the drive 100 times.
  • noone2 - Tuesday, October 26, 2021 - link

    The laptop will be worthless and insanely outdated by the time the SSD dies, making it irrelevant even if that were the case.
  • flyingpants265 - Sunday, October 31, 2021 - link

    What an extremely dumb comment. Old Macbooks aren't worthless, they hold their value extremely well. If you use the Mac for what it's intended (video work) it's possible that you'll do damage to the drive.

    You really are an absolute idiot if you think there's an excuse for soldering SSDs. It's like welding in the suspension on your car because "the car will be worthless and outdated". No. They have a limited lifespan and need to be replaced when they die.
  • varase - Wednesday, November 3, 2021 - link

    Then you take it in and have the logic board traded for a refurbished one, then restore your data.

    Anyone who doesn't take off-computer backups is an idiot and deserves to lose data, whether it sits on an HDD or SSD.

    All drives fail eventually. I have at least two backups of everything, including my ginormous disk array. And when your replaceable SSD dies, it will take everything with it too.
  • coolfactor - Tuesday, October 26, 2021 - link

    I'm typing this on a 2013 MacBook Air. 8 years and going strong. You chose "3000 writes" to sound dramatic, but that's the low end of low-end SSDs, none of which are used in Macs. SSDs can be rated up to 100,000 writes, and Samsung even promotes some of theirs as lasting 10 years under heavy usage. So your argument is weak, sorry.
  • AshlayW - Tuesday, October 26, 2021 - link

    Just as anecdotal as your emotional reply to defend your product/purchase decision. Look up Louis Rossmann on YT if you want to know what kind of company you are supporting.
  • caribbeanblue - Saturday, October 30, 2021 - link

    Unfortunately, these MacBooks are the best laptops on the market. Repairability is only part of the story, and the repairability of a device isn't just about the ease of repair enabled by hardware design choices; it's also about the company providing board-level schematics to 3rd-party repair shops, so users can have access to cost-friendly genuine repair. Apple does have a long way to go in that aspect, that is true; however, they *definitely* should not be forced to stop soldering down their SSD or DRAM. Soldering down such components earns you big improvements in terms of performance, energy efficiency, and space savings. If you want a laptop with socketed RAM & a removable SSD, then that's fine, buy something else, but don't act like MacBooks don't have any selling points; you would be delusional to think that.
  • varase - Wednesday, November 3, 2021 - link

    Louis Rossmann is a religious zealot.

    He's a repair gnome, not an innovator or designer.
  • UnNameless - Wednesday, November 17, 2021 - link

    LR sadly became a jest! A joker filled with hate! I respected him a lot back in the days when he mostly had content on repair stuff! I also agree with him about the Right to Repair and most of the issues regarding Mac repairs! But his war for RtoR became a crusade and went nuts in the last year or so, when he also started to bash Apple on software stuff and practically everything else he can find awful! And I wouldn't have said anything if he were an informed software guy, but he's a repair engineer, a good one, and he should have stuck with that! From the fiascos with the Apple services and firewall whitelists, to the Apple OCSP so poorly misunderstood by him and even worse presented, to Apple hashing, etc. Ever since he became a couch diva with that freaking cat, instead of a shop repair guy... his true colors and hate towards Apple just oozed so smoothly from his skin!
  • RealAlexD - Monday, November 1, 2021 - link

    3,000 write cycles is actually pretty good durability for a prosumer SSD. While it is true that some SSDs are rated for up to 100k writes, those are SLC devices, which are not really used anymore outside of special cases (Samsung Z-NAND). Normal SSDs will have either TLC or QLC flash cells (or maybe 2-bit MLC, but even Samsung Pro SSDs are now TLC), which don't last nearly as long.
    The TLC SSD with the most write cycles I could find was a Seagate Nytro write-intensive server SSD, which promises about 20k writes.

    Also, conditions affect the number of writes a flash cell will survive; running warmer can actually increase lifetime. And the advertised durability is a worst-case figure.
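
    As a back-of-the-envelope check on these endurance figures, a Swift sketch. The inputs are illustrative assumptions (a 1 TB drive, the thread's 3,000 P/E-cycle figure, a heavy 100 GB/day write load), and write amplification and over-provisioning are ignored.

        // Rated endurance ≈ capacity × program/erase cycles (ignoring write amplification).
        let capacityTB = 1.0
        let ratedPECycles = 3_000.0                        // the "3000 writes" figure from the thread
        let ratedWritesTB = capacityTB * ratedPECycles     // ≈ 3,000 TB ≈ 3 PB before rated wear-out

        let dailyWritesGB = 100.0                          // a fairly heavy daily workload
        let yearsToWearOut = ratedWritesTB * 1_000 / dailyWritesGB / 365
        print(ratedWritesTB, yearsToWearOut)               // 3000.0 (TB), ~82 (years at 100 GB/day)

    Even with pessimistic inputs, rated write endurance tends to be measured in petabytes and decades, which is consistent with the real-world figures quoted further down the thread.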
  • UnNameless - Wednesday, November 17, 2021 - link

    I agree! Been using my iMac Pro from 2018 till present! Before I started my little experiment with Chia plotting earlier this year, which is known to burn up SSDs like nothing else, I had 99% drive lifetime left on my 1TB SSD. Even before I ran my experiment, I tried to search online to find out what kind of NAND flash the iMP uses, but couldn't find anything concrete, as Apple has custom chips in those SSDs. So I took it for a spin, and over the course of weeks I wrote in excess of 1PB of data! SSD lifetime dropped from 99% to 86%. If this scales linearly, I reckon you'd have to write in excess of 10PB of data on a 1TB SSD to bring it to critical levels or burn it out completely! And I have never heard of anyone who does such a thing!
  • UnNameless - Wednesday, November 17, 2021 - link

    Where did you get the 3000 writes number from?

    I know for a fact that I did >1000 full drive writes on my 1TB SSD in my iMac Pro and it barely hit 86% SSD lifetime! So I wrote more than 1PB of data and still got 86% life in it!
  • coolfactor - Tuesday, October 26, 2021 - link

    Hey folks, listen to turbine! He really knows what he's talking about! I mean, he's never used a Mac, or he'd know better, but hey, listen to him anyway! He can't get the OS names correct (it's macOS and iOS, with a small "i"), but hey, he's making an important point! So important! More market share obviously means better! Yah? So that $2.00 cheeseburger from McDonalds is obviously the best because it's low-cost and everywhere! Yah, that's what matters, after all!
  • Daniel Egger - Wednesday, October 27, 2021 - link

    Don't be ridiculous. The 16" MBP with the M1 Max costs less than what I have paid for my TiBook way back when and that's without inflation considered. Oh, and back then I was just a student, tired of his Compaq Armada 1750 aka "the brick".
  • xeridea - Tuesday, October 26, 2021 - link

    Developers optimize for PC, knowing that Mac has virtually no marketshare for gamers. There are decent APUs and midrange gaming laptops that aren't hot and heavy.
  • Altirix - Monday, October 25, 2021 - link

    Actually, it could be unlikely; Apple is trying to kill off any open-source low-level graphics API in favour of their own API, Metal. Look at the smaller devs, who are going to be less likely to go out of their way to rewrite their engines to support Metal, especially when they also need to buy the hardware to test it on. Before, the attitude was: if macOS can run it, cool; if it can't, well, that's a shame. Big devs follow the money, so the rest will be up to Apple handing out engineers, or to there being enough people gaming on Mac.
  • photovirus - Monday, October 25, 2021 - link

    Apple doesn't try to kill Vulkan; it's just that they don't care. They've eaten OpenGL problems for years and they've had enough, hence the lack of regard for open source. What they want is a fast, modern, cross-platform framework, and that's Metal. It's tightly controlled, so it's easy to implement any new hardware feature in it.

    Since there's quite a number of iPads and Macs with M1, I think publishers will invest in Metal optimisation.
  • bernstein - Monday, October 25, 2021 - link

    Metal isn't cross-platform. iOS & macOS are the same OS with a different "skin" (UI/lifecycle model).
  • techconc - Monday, October 25, 2021 - link

    @bernstein... That's like saying Android and Linux aren't different platforms... you know, because they share some common ground. From a developer perspective, iOS and macOS are different platforms. Yes, there is much similarity, but there are also differences.
  • tunsten2k - Monday, October 25, 2021 - link

    No, it's like saying Android and ChromeOS aren't different platforms, and generally, that would be a reasonable statement. Regardless, "cross platform" doesn't mean "across 2 proprietary platforms, only one of which is non-mobile and makes up only 16% of the non-mobile market". Get a grip :)
  • Hrunga_Zmuda - Monday, October 25, 2021 - link

    No, they are not the same OS. They have the same base, but they are quite different in many ways. But Metal isn't one of those differences. Metal is powerful and any developer who wants to break into the Mac world will be going there in the future.
  • coolfactor - Tuesday, October 26, 2021 - link

    That's not true. Yes, they have common roots, but they are definitely not the same OS line-for-line. Prior to M1, they were even compiled for different architectures. The OS is much more than a "skin". Many people wish that macOS and iOS were skinned, so they could customize that skin!
  • darwinosx - Monday, October 25, 2021 - link

    Apple does a lot of open source and contributes to the community.
    https://opensource.apple.com
  • Oxford Guy - Friday, October 29, 2021 - link

    'They've eaten OpenGL problems for years and they've had enough, thus no respect for open-source.'

    My understanding is that Apple stuck with an extremely outdated version of OpenGL for years and years. Hard to claim that open source is the problem, since all the updates were ignored.
  • coolfactor - Tuesday, October 26, 2021 - link

    @photovirus is correct. Metal achieves much better performance because Apple can design it to work on their hardware. Open-source solutions are good in principle and have their solid place in the software universe, but that doesn't mean it's the best solution in _every_ case. Metal solves a problem that plagued Macs for too long.
  • varase - Wednesday, November 3, 2021 - link

    Well, Apple can design it to work with any hardware it uses.

    That has in the past included AMD graphics cards.
  • Eric S - Saturday, October 30, 2021 - link

    Not really. Metal makes sense for Apple. A graphics stack these days is a compiler. It is built on the LLVM project and the C++ toolchain that they already use for their other compiler work. They will likely base it on their Swift compiler eventually. You can still use Vulkan on Mac and iOS since its shading language can be translated to Metal.
  • Hifihedgehog - Monday, October 25, 2021 - link

    > What isn't nice is gaming on macOS

    That's a whole lot of damage control and pussyfooting around the truth. GFXBench is a joke for getting a pulse for real-world performance. In actuality, we are GPU bound at this point. Hence, the linear scaling from the M1 Pro to the M1 Max. The bottom line is this performs like an RTX 3060 in real-world games.
  • zshift - Monday, October 25, 2021 - link

    As noted in the article, these benchmarks were run on x86 executables. The fact that it can keep up with 3060 levels of performance is incredible, but we can’t make any real judgements until we see how natively-compiled games run.
  • sirmo - Monday, October 25, 2021 - link

    @zshift The 3060 uses a 192-bit memory bus; the M1 Max has a 512-bit bus and a huge GPU. Not to mention the 6600 XT does even better with less (only a 128-bit memory bus). It's also only 11B transistors, while this SoC is 57B, for perspective. It really isn't impressive tbh.
  • Ppietra - Monday, October 25, 2021 - link

    If they use different memory types, it's irrelevant to talk about bus width alone.
    Furthermore, it doesn't make much sense as an argument to compare a GPU's transistor count with an SoC's.
  • coolfactor - Tuesday, October 26, 2021 - link

    The M1 achieves that with far better performance-per-watt. In other words, same performance, far more efficiently.
  • AshlayW - Tuesday, October 26, 2021 - link

    TSMC N5 vs Samsung 8LPE, so you'd expect it to be better! It's an entire node more advanced!
  • sirmo - Tuesday, October 26, 2021 - link

    @coolfactor which makes sense, since it's definitely not built for performance per dollar.
  • varase - Wednesday, November 3, 2021 - link

    Apple - unlike other silicon developers - doesn't design for performance per dollar.

    They design for performance per task.

    CPU speed your problem - design a wider CPU. After designing an eight wide CPU, going wider wouldn't really produce much value. Video encode/decode your bottleneck - design a media engine with high speed encode/decode for H.264, H.265, and ProRes.

    First ProRes decode was in the $2,500 afterburner card in the Mac Pro. Now encode/decode is in the M1 Pro SoC - two in the M1 Max.
  • thecoffeekid - Thursday, October 28, 2021 - link

    It also has to run through Rosetta; if the games were native, the performance would be equivalent to, if not better than, a 3080. Pretty crazy.
  • nunya112 - Monday, October 25, 2021 - link

    If this is 3060 levels of performance, why doesn't Apple make discrete GPUs? Mid-range GPUs are where the money is!! Or they could add another 8 cores or 1024 units and guarantee a great GPU. Imagine Intel, Apple, AMD, and Nvidia all competing for your $$; we might even see affordable GPUs again...
  • michael2k - Monday, October 25, 2021 - link

    Apple’s revenue last quarter was $81b; NVIDIA’s in the same quarter was $6.5b

    There is absolutely no money in discrete midrange GPUs if Apple is going to be fighting for $3b in revenue between AMD and NVIDIA.
  • AshlayW - Tuesday, October 26, 2021 - link

    Apple's revenue is so high because plebs buy overpriced products :/
  • thecoffeekid - Thursday, October 28, 2021 - link

    yea can’t be the good products.
  • steven4570 - Friday, October 29, 2021 - link

    pleb? lol. k
  • caribbeanblue - Saturday, October 30, 2021 - link

    Lol, you're just a troll at this point.
  • sharath.naik - Monday, October 25, 2021 - link

    The only reason the M1 falls behind the RTX 3060 is because the games are emulated... if native, the M1 will match the 3080. This is remarkable... time for others to shift over to the same shared high-bandwidth memory on chip.
  • vlad42 - Monday, October 25, 2021 - link

    Go back and reread the article. Andrei explicitly mentioned that the games were GPU bound, not CPU bound. Here are the relevant quotes:

    Shadow of the Tomb Raider:
    "We have to go to 4K just to help the M1 Max fully stretch its legs. Even then the 16-inch MacBook Pro is well off the 6800M. Though we’re definitely GPU-bound at this point, as reported by both the game itself, and demonstrated by the 2x performance scaling from the M1 Pro to the M1 Max."

    Borderlands 3:
    "The game seems to be GPU-bound at 4K, so it’s not a case of an obvious CPU bottleneck."
  • web2dot0 - Tuesday, October 26, 2021 - link

    I've heard otherwise about M1-optimized games like WoW.
  • AshlayW - Tuesday, October 26, 2021 - link

    4096 ALUs at 1.3 GHz vs 6144 ALUs at 1.4-1.5 GHz? What makes you think Apple's GPU is magic sauce?
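
    For reference, the theoretical FP32 throughput those ALU counts imply, sketched in Swift. The clock figures are the approximate ones quoted in this thread (1.296 GHz for the M1 Max, ~1.45 GHz for the mobile GA104, which varies with the laptop's power limit), each ALU is assumed to retire one fused multiply-add (2 FLOPs) per cycle, and real game performance also depends on drivers, occupancy, and memory bandwidth.

        // Peak FP32 TFLOPS = ALUs × 2 FLOPs per FMA × clock (GHz) / 1000.
        func peakTFLOPS(alus: Double, clockGHz: Double) -> Double {
            alus * 2 * clockGHz / 1_000
        }

        print(peakTFLOPS(alus: 4096, clockGHz: 1.296))   // M1 Max: ≈ 10.6 TFLOPS
        print(peakTFLOPS(alus: 6144, clockGHz: 1.45))    // mobile 3080 (GA104): ≈ 17.8 TFLOPS

    That roughly 1.7x gap in peak throughput is the comparison this comment is pointing at; the replies below note why peak numbers alone don't settle it.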
  • Ppietra - Tuesday, October 26, 2021 - link

    Not going to argue that Apple's GPU is better; however, the number of ALUs and the clock speed don't tell the whole story.
    Sometimes it can be faster not because it can do more work, but because it reduces some bottlenecks and works in a smarter way (by avoiding work that is not necessary for the end result).
  • jospoortvliet - Wednesday, October 27, 2021 - link

    Thing is also that the game devs didn't write their game for and test on these gpus and drivers. Nor did Apple write or optimize their drivers for these games. Both of these can easily make high-double digit differences, so being 50% slower on a fully new platform without any optimizations and running half-emulated code is very promising.
  • varase - Thursday, November 4, 2021 - link

    Apple isn't interested in producing chips - they produce consumer electronics products.

    If they wanted to they could probably trash AMD and Intel by selling their silicon - but customers would expect them to remain static and support their legacy stuff forever.

    When Apple finally decided ARMv7 was unoptimizable, they wrote 32 bit support out of iOS and dropped those logic blocks from their CPUs in something like 2 years. No one else can deprecate and shed baggage so quickly which is how they maintain their pace of innovation.
  • halo37253 - Monday, October 25, 2021 - link

    Apple's GPU isn't magic. It is not going to be any more efficient than what Nvidia or AMD have.

    Clearly an Apple GPU that only uses around 65 watts is going to compete with an Nvidia or AMD GPU that only uses around 65 watts in actual usage.

    Apple clearly has a node advantage at work here, and with that said, it is clear that when it comes to actual workloads like games, Apple still has some work to do efficiency-wise, as an Nvidia chip on an older, less power-efficient node is still able to do better in the same performance/watt range.

    Apple's GPU is a compute champ and great for workloads the average user will never see. This is why the M1 Pro makes a lot more sense than the M1 Max. The M1 Max seems like it will do fine for light gaming, but the cost of that chip must be crazy. It is a huge chip. Would love to see one in a Mac mini.
  • misan - Monday, October 25, 2021 - link

    Just replace GPU by CPU and you will see how devoid of logic your argument is.

    Apple has much more experience in low-power GPU design. Their silicon is specifically optimized for low-power usage. Why wouldn't it be more efficient than the competitors?

    Besides, Andrei's tests already confirm that your claims are pure speculation without any factual basis. Look at the power usage tests for GFXBench: almost three times lower power consumption with a better overall result.

    These GPUs are incredible rasterizers. It's just that you look at bad-quality game ports and decide that they reflect the maximum reachable performance. Sure, GFXBench is crap; then look at Wild Life Extreme. That result translates to 20k points. That's on par with the mobile RTX 3070 at 100W.
  • vlad42 - Monday, October 25, 2021 - link

    And there you go making purely speculative claims without any factual basis about the quality of the ports. I could similarly make absurd claims such as: every benchmark Intel's CPU loses is just a bad port. Provide documented evidence that it is a bad port, since you are the one making that claim (and not bad Apple drivers, thermal throttling because they would not turn on the fans until the chip hit 85C, etc.).

    Face it, in the real world benchmarks this article provides, AMD's and Nvidia's GPUs are roughly 50% faster than Apple's M1 Max GPU.

    Also, a full node shrink and integrating a dGPU into the SOC would make it much more energy efficient. The node shrink should be obvious and this site has repeatedly demonstrated the significant energy efficiency benefits of integrating discrete components, such as GPUs, into the SOCs.
  • jospoortvliet - Wednesday, October 27, 2021 - link

    Well they are 100% sure bad ports as this gpu didn't exist. The games are written for a different platform, different gpus and different drivers. That they perform far from optimal must be obvious as fsck - driver optimization for specific games and game optimization for specific cards, vendors and even drivers usually make the difference between amd and nvidia - 20-50% between entirely unoptimized (this) and final is not even remotely rare. So yeah this is an absolute worst case. And Aztec Ruins shows the potential when (mildly?) optimized - nearly 3080 levels of performance.
  • Blastdoor - Monday, October 25, 2021 - link

    Apple's GPU isn't magic, but the advantage is real and it's not just the node. Apple has made a design choice to achieve a given performance level through more transistors rather than more Hz. This is true of both their CPU and GPU designs, actually. PC OEMs would rather pay less for a smaller, hotter chip and let their customers eat the electricity costs and inconvenience of shorter battery life and hotter devices. Apple's customers aren't PC OEMs, though, they're real people. And not just any real people, real people with $$ to spend and good taste .
  • markiz - Tuesday, October 26, 2021 - link

    When you say "Apple has made a design choice", who did in fact make that choice? Can it e attributed to an individual?
    Also, why is nobody else making this choice? Simply economics, or other reasons?
  • markiz - Tuesday, October 26, 2021 - link

    Apple customers having $$ and taste, at a time where 60% of USA has an iphone can not exactly be true. Every loser these days has an iphone.

    I know you were likely being specific in regards to Macbooks Pros, so I guess both COULD be true, but does sound very bad to say it.
  • michael2k - Monday, October 25, 2021 - link

    That would be true if there were an AMD or NVIDIA GPU manufactured on the TSMC N5P node.

    Since there isn't, a 65W Apple GPU will perform like a 93W AMD GPU at N7, and slightly higher still for an NVIDIA GPU at Samsung 8nm.

    That is probably the biggest reason they're so competitive. At 5nm they can fit far more transistors and clock them far lower than AMD or NVIDIA. In a desktop you can imagine they could clock higher than 1.3GHz to push performance even further: 2x the perf at 2.6GHz, and power usage would only go up from 57W to 114W if there is no need to increase voltage when driving the GPU that fast.
  • Wrs - Monday, October 25, 2021 - link

    All the evidence says M1 Max has more resources and outperforms the RTX 3060 mobile. But throw crappy/Rosetta code at the former and performance can very well turn into a wash. I don't expect that to change as Macs are mainly mobile and AAA gaming doesn't originate on mobile because of the restrictive thermals. It's just that Windows laptops are optimized for the exact same code as the desktops, so they have an easy time outperforming the M1's on games originating on Windows.

    When I wanna game seriously, I use a Windows desktop or a console, which outperforms any laptop by the same margin as Windows beats Mac OS/Rosetta in game efficiency. TDP is 250-600w (the consoles are more efficient because of Apple-like integration). Any gaming I'd do on a Windows laptop or an M1 is just casual. There are plenty of games already optimized for M1 btw - they started on iOS. /shrug
  • Blastdoor - Tuesday, October 26, 2021 - link

    As things stand now, the Windows advantage in gaming is huge, no doubt.

    But any doubt about Apple's commitment to the Mac must surely be gone now. Apple has invested serious resources in the Mac, from top to bottom. If they've gone to all the work of creating Metal and these killer SOCs, why not take one more step and invest some money+time in getting optimized AAA games available on these machines? At this point, with so many pieces in place, it almost seems silly not to make that effort.
  • techconc - Monday, October 25, 2021 - link

    It's hard to speak about these GPUs for gaming performance when the games you choose to run for your benchmark are Intel native and have to run under emulation. That's not exactly a showcase for native gaming performance.
  • sean8102 - Tuesday, October 26, 2021 - link

    What games could they have used? The only two somewhat demanding ARM native macOS games are WoW, and Baldur's Gate 3.
  • web2dot0 - Tuesday, October 26, 2021 - link

    You just made his case. 😂
  • melgross - Monday, October 25, 2021 - link

    We're already seeing optimized software from Blackmagic and others. Blackmagic is claiming anything from 2x to over 7x performance gains in Resolve 17.4 with the 32-core Max.

    Apple also has a power mode that we can access which will turn the fans to max level for extra performance. I’m looking to try that when my 16” comes in a few days. I wonder if that feature was tested, as it wasn’t mentioned.

    Gaming, well, yeah. Over the decades most ports were bad. I don’t have any thought that the ones tested here were much better. Then running under Rosetta 2, as good as it is, isn’t helping.
  • daveinpublic - Monday, October 25, 2021 - link

    Huh, nice way to hijack the top comment.
  • name99 - Monday, October 25, 2021 - link

    So nothing changes.
    Gamers have always hated Apple. Doesn't change.
    Developers have been uninterested in Apple. Probably doesn't change.

    None of this matters to Apple, or most of its customers. Doesn't change.

    *Perhaps* Apple will make some attempt to grow Arcade upwards, but honestly, why bother? Almost all the ranting about gaming HW in threads like this is trash talk and aspirational; it does not translate into purchases, certainly not of Apple HW, and usually not of Wintel HW. It's no different from the sort of comments you might read on a Maserati vs Ferrari comment board -- and just as uninteresting and unimportant to the engineers at both companies and to most of the *actual* customers.
  • web2dot0 - Tuesday, October 26, 2021 - link

    Developers love MacBookPro what are you talking about?!?
  • mlambert890 - Tuesday, October 26, 2021 - link

    He's talking about *game developers* targeting macOS, not about whether developers in general tend to like MacBooks (although certainly not Windows developers, obviously).
  • TheinsanegamerN - Tuesday, October 26, 2021 - link

    Apple developers love MacBook Pros. The rest of the world sees them for the overpriced, shiny Facebook machines they've become. Gone are the days of the 2010-era tank MacBook Pros that lasted forever; Chromebooks have better build quality and longer lifespans than modern Apple products.
  • steven4570 - Friday, October 29, 2021 - link

    "The rest of the world sees them for the overpriced shiny facebook machines they've become. "

    Not really
  • Hrunga_Zmuda - Monday, October 25, 2021 - link

    It's not M1X. There are two chips, M1 Pro and M1 Max.

    I'm guessing that next year, the entry level chips will be M2 and the big boys will be M2 Pro and M2 Max.
  • TedTschopp - Monday, October 25, 2021 - link

    The money in the gaming space is in mobile gaming, not in AAA gaming, and Apple is the leader in Mobile Gaming Revenue. So the classic pattern here would be for them to leverage themselves from the leader in the low-end market into a leader in the high-end market.

    And with their first attempt at a gaming-class machine, they came out with something in the first quartile. Next year, when the M2 Pro and Max arrive, my guess is that they will be accelerating faster than their competition, as their development and design processes have been designed to outcompete mobile competitors, not desktop competitors like Intel and Nvidia.
  • Speedfriend - Tuesday, October 26, 2021 - link

    This isn't their first attempt. They have been building laptop versions of the A-series chips for years now for testing. There have been leaks about this for years. Assuming that the world's best SoC design team will make a significant advancement from here, after 10 years of progress on the A series, is hoping for a bit much.
  • robotManThingy - Tuesday, October 26, 2021 - link

    All of the games are x86 titles translated by Apple's Rosetta, which means they are meaningless when it comes to determining the speed of the M1 Max or any other M1 chip.
  • TheinsanegamerN - Tuesday, October 26, 2021 - link

    Real-world software isn't worthless.
  • AshlayW - Tuesday, October 26, 2021 - link

    "The M1X is slightly slower than the RTX-3080, at least on-paper and in synthetic benchmarks."
    Not quite: it matches the 3080 in mobile-focused synthetics, where Apple is focusing on pretending to have best-in-class performance, and then its true colours show in actual video gaming. This GPU is for content creators (where it's excellent), but you don't just out-muscle the decades of GPU IP optimisation for gaming, in hardware and software, that AMD/NVIDIA have. Furthermore, the M1 Max is significantly weaker in GPU resources than the GA104 chip in the mobile 3080, which here is actually limited to quite low clock speeds; it is no surprise the 3080 is faster in actual games, by a lot.
  • TheinsanegamerN - Tuesday, October 26, 2021 - link

    Rarely do synthetics ever line up with real-world performance, especially in games. Matching mobile 3060 performance is already pretty good.
  • NPPraxis - Tuesday, October 26, 2021 - link

    Where are you seeing "actual gaming performance" benchmarks that you can compare? There's very few AAA games available for Mac to begin with; most of the ones that do exist are running under Rosetta 2 or not using Metal; and Windows games using VMs or WINE + Rosetta 2 has massive overhead.

    The number of actual games running is tiny and basically the only benchmark I've seen is Shadow of the Tomb Raider. I need a higher sample size to state anything definitively.

    That said, I wouldn't be shocked if you're right, Apple has always targeted Workstation GPU buyers more than gaming GPU buyers.
  • GigaFlopped - Tuesday, October 26, 2021 - link

    The games tested were already ported over to the Metal API; it was only the CPU side that was emulated. We've seen emulated benchmarks before, and the M1 and Rosetta do a pretty decent job of it, and when they ran the games at 4K, that would have pretty much removed any potential CPU bottleneck. So what you see is pretty much what you'll get in terms of real-world rasterization performance; they might squeeze an extra 5% or so out of it, but don't expect any miracles. It's an RTX 3060 Mobile competitor in terms of rasterization, which is certainly not to be sniffed at and a very good achievement. The fact that it can match the 3060 whilst consuming less power is a feat of its own, considering this is Apple's first real attempt at a desktop-level performance GPU.
  • lilkwarrior - Friday, November 5, 2021 - link

    These M1 chips aren't appropriate for serious AAA gaming. They don't even have hardware-accelerated ray tracing and other core DX12U/Vulkan tech that current-gen games will rely on moving forward. Want to preview that? Play Metro Exodus: Enhanced Edition.
  • OrphanSource - Thursday, May 26, 2022 - link

    you 'premium gaming' encephalitics are the scum of the GD earth. Oh, you can only play your AAA money pit cash grabs at 108 fps instead of 145fps at FOURTEEN FORTY PEE on HIGH QUALITY SETTING? OMG, IT"S AS BAD AS THE RTX 3060? THE OBJECTIVELY MOST COST/FRAME EFFECTIVE GRAPHICS CARD OF 2021??? WOW THAT SOUNDS FUCKING AMAZING!

    Wait, no, I misunderstood, you are saying that's a bad thing? Oh you poor, old, blind, incontinent man... well, at least I THINK you are blind if you need 2K resolution at well over 100fps across the most graphics-intensive games of 2020/2021 to see what's going on clearly enough to EVEN REMOTELY enjoy the $75 drug you pay for (the incontinence I assume because you 1. clearly wouldn't give a sh*t about these top-end, graphics-obsessed metrics and 2. have literally nothing else to do except shell out enough money to feed a small family for a week with the cost of each of your cutting-edge games, UNLESS you were homebound in some way?)

    Maybe stop being the reason why the gaming industry only cares about improving their graphics at the cost of everything else. Maybe stop being the reason why graphics cards are so wildly expensive that scientific researchers can't get the tools they need to do the more complex processing needed to fold proteins and cure cancer, or use machine learning to push ahead in scientific problems that resist our conventional means of analysis

    KYS fool
  • BillBear - Monday, October 25, 2021 - link

    The performance numbers would look even nicer if we had numbers for that GE76 Raider when it's unplugged from the wall and has to throttle the CPU and GPU way the hell down.

    How about testing both on battery only?
  • Ryan Smith - Monday, October 25, 2021 - link

    MSI took back the Raider, which is why we only have data for benchmarks we'd previously run on it.

    As for the Mac, I can confirm it runs at full perf even on a battery. I've not seen the GPU pass 45W.
  • daveinpublic - Monday, October 25, 2021 - link

    Huh, nice way to hijack the top comment.
  • FreckledTrout - Monday, October 25, 2021 - link

    "The chips here aren’t only able to outclass any competitor laptop design, but also competes against the best desktop systems out there, you’d have to bring out server-class hardware to get ahead of the M1 Max – it’s just generally absurd."

    Agree. It's crazy that a company whose side job is making chips is competing with Intel and AMD so well.
  • goatfajitas - Monday, October 25, 2021 - link

    Not really. Ever since Apple got caught cheating in benchmarks way back when it was PowerPC vs Intel, they set a goal to master the benchmark game, and that they have done - they have mastered it. If you are into benchmarks, Apple is a great thing for you to buy.
  • spdcrzy - Monday, October 25, 2021 - link

    This is my biggest worry. Benchmarks are all well and good, but many professional CAD, data analysis, and other performance-hungry applications are x64/x86 native. I have yet to see CFD, FEA, or BI programs natively supported on M1 silicon, for example. That will be the true test of performance.
  • foheng - Monday, October 25, 2021 - link

    This is literally the dumbest thing I’ve read.
  • Hifihedgehog - Monday, October 25, 2021 - link

    > This is literally the dumbest thing I’ve read.

    If you are speaking of the statement itself you wrote above, we appreciate your self-awareness.
  • foheng - Monday, October 25, 2021 - link

    How clever. “I know you are but what am I” springs to mind.

    The poster stated Apple cheats on benchmarks. Maybe they’ve forgotten all the years of iPhone processors and how well they performed and forgot about all the Android vendors caught cheating in benchmarks, many spotted right here by Anandtech.

    But so go on and explain to us all the examples of Apple getting caught cheating. Especially as they relate to the M1 versions of processors.
  • daveinpublic - Monday, October 25, 2021 - link

    He wasn't.
  • Makaveli - Monday, October 25, 2021 - link

    lol Burn!
  • daveinpublic - Monday, October 25, 2021 - link

    Agreed. How can we keep Apple from getting a win on this chip design? Oh, we'll say they're gaming the benchmarks!
  • wr3@k0n - Monday, October 25, 2021 - link

    Thats literally what this article shows. Did you even read the article?
  • michael2k - Monday, October 25, 2021 - link

    Yes. It sounds like you didn't read the article.

    Gaming anything means tailoring your software or hardware to succeed at benchmarks to the detriment of performance.

    Quoting the article, that directly contradicts you:
    For the true professionals out there – the people using cameras that cost as much as a MacBook Pro and software packages that are only slightly cheaper – the M1 Pro and M1 Max should prove very welcome. There is a massive amount of pixel pushing power available in these SoCs, so long as you have the workload required to put it to good use.
  • sirmo - Monday, October 25, 2021 - link

    I don't see anything dumb about the comment you're responding to. Also why so defensive?
  • blanarahul - Monday, October 25, 2021 - link

    Look, those who want Windows will buy Windows laptops, those who want Macs will buy Macbooks. Those who don't care about either also are likely to not give a shit about performance and will do whatever is convenient for them.
  • ThreeDee912 - Monday, October 25, 2021 - link

    Cheating? When was this?
  • sirmo - Monday, October 25, 2021 - link

    When they had PowerPC processors before they switched to x86
  • techconc - Monday, October 25, 2021 - link

    @sirmo, What is your specific example of Apple cheating on a benchmark? Please support the claim you've made with evidence.
  • sirmo - Monday, October 25, 2021 - link

    I only said when not what.
  • sirmo - Monday, October 25, 2021 - link

    But I found a news story on it: https://www.theregister.com/2003/06/24/apple_accus...
  • techconc - Monday, October 25, 2021 - link

    So Apple reported a 3rd party's test results, and the 3rd party provided full disclosure of the test conditions. That's as close as you can get to saying Apple cheated?

    To me, cheating is like we see on Android phones where the OEM looks for the existence of a specific benchmark and performs differently just for that benchmark.
  • steven4570 - Monday, October 25, 2021 - link

    "If you are into benchmarks Apple is a great thing for you to buy."

    Lol, k
  • daveinpublic - Monday, October 25, 2021 - link

    Amazing how many salty Android/Windows fans there are here.

    Don't worry you guys, Google is working on designing their own silicon, Microsoft will soon. Then you can fawn over the new chips once you have your own.
  • Lavkesh - Monday, October 25, 2021 - link

    Seriously. These Android and Windows shills are worse than a barking dog running behind a car.
  • AshlayW - Tuesday, October 26, 2021 - link

    You are aware of how Apple treats consumers, right? I mean, it's awfully distasteful to be supporting a company that gets sued for not honouring legal warranty requirements.
  • melgross - Monday, October 25, 2021 - link

    Maybe you're thinking of Intel. Apple didn't cheat on benchmarks. Point out an actual cheat.
  • sirmo - Monday, October 25, 2021 - link

    They allegedly cheated by tweaking register settings: https://www.theregister.com/2003/06/24/apple_accus...
  • web2dot0 - Tuesday, October 26, 2021 - link

    17 years ago 😂

    Were you born back then?
  • sirmo - Tuesday, October 26, 2021 - link

    I had a G3 mac. Still do in fact.
  • easp - Monday, October 25, 2021 - link

    Huh? Apple has long de-emphasized benchmarks. That may be changing a little for the Mac line now that they aren't using the same silicon as all the other PC makers, but their phone SoCs have been exclusive for, what, a decade, and they haven't leaned into benchmarks.

    Benchmarking well isn't the same thing as optimizing for benchmarks.
  • taligentia - Monday, October 25, 2021 - link

    M1 destroyed its competitors in real world use. Performance/battery life was unlike anything we had seen.

    And companies like Davinci are saying that M1 Pro/Max continue this even further.

    So the idea that Apple is just doing this for benchmarks is laughable.
  • techconc - Monday, October 25, 2021 - link

    @goatfajitas Apple has never been caught cheating in benchmarks. You seem to confuse Apple with your typical Android OEM in that regard.
  • Lavkesh - Monday, October 25, 2021 - link

    Ignore him. Just a butt hurt troll with nothing else to do.
  • name99 - Monday, October 25, 2021 - link

    :eyeroll:
  • schujj07 - Monday, October 25, 2021 - link

    Basically the M1 is great in synthetic benchmarks, but once you have to run applications it falls behind. Apple made a big deal about how their GPU could compete with the mobile 3080 at 1/3 the power, all based on synthetic benchmarks. However, once the GPU is actually used, you see it is only 1/3 as fast as the mobile 3080 in real scenarios. I also do not like the use of SPEC at all; it is essentially a synthetic benchmark as well. The problem is there aren't a lot of benchmarks for the Apple ecosystem that aren't like Geekbench.
  • SarahKerrigan - Monday, October 25, 2021 - link

    SPEC isn't a synthetic - it's real workload traces.
  • schujj07 - Monday, October 25, 2021 - link

    More like it's "real world." OEMs spend hours tweaking their platforms to get the highest SPEC score possible. That really shows how SPEC straddles real world and synthetic. I have been to many conferences and never once have the decision makers for companies said they made their decision based on SPEC performance. It is essentially nothing more than a bragging right for OEMs.
  • The Garden Variety - Monday, October 25, 2021 - link

    I did some googling and could not find measurements to back up your statements. I'm interested in learning more about how the M1's real performance is dramatically below the measurements of people like Andrei, et al. I've relied on Anandtech to provide a sort of quantitative middle ground, and I'm a little rocked to hear that I shouldn't. Could you point me in the right direction for articles or some kind of analysis? You don't have to do my homework for me, just let me know where I could read more.
  • schujj07 - Monday, October 25, 2021 - link

    That is the biggest problem with the Apple ecosystem. Typical benchmark suites aren't useful as many of the programs either don't run on ARM or macOS. Therefore you are left with things like Geekbench or SPEC. I will be interested in seeing what the M1 Max can do in things like Adobe. Puget Systems has their own Adobe Premiere benchmark suite but the M1 Max hasn't been benchmarked, however, the M1 has. https://www.pugetsystems.com/labs/articles/Apple-M...
  • Ppietra - Monday, October 25, 2021 - link

    Puget's Premiere Pro benchmark is in the article, though I would never classify that as a CPU benchmark, nor Premiere Pro as particularly suitable for making general conclusions, considering that it isn’t as optimised for macOS as it is for Windows.
  • arglborps - Friday, March 25, 2022 - link

    Exactly. In the world of video editing suites Premiere is the slowest, buggiest piece of crap you can think of, and not really a great benchmark except for how quickly it can crash an app.
    DaVinci and Final Cut run circles around it.
  • ikjadoon - Monday, October 25, 2021 - link

    AnandTech literally tested the M1 Max on PugetBench Premiere Pro *in this article*. Surprise, surprise: 955 points on standard, 868 on extended, thus just 4% slower than a desktop 5950X + desktop RTX 3080.

    "biggest problem with the Apple eco system" Huh? Premiere Pro has already been written in Apple Silicon's arm64 for macOS. It's been months now.

    >We’ll start with Puget System’s PugetBench for Premiere Pro, which is these days the de facto Premiere Pro benchmark. This test involves multiple playback and video export tests, as well as tests that apply heavily GPU-accelerated and heavily CPU-accelerated effects. So it’s more of an all-around system test than a pure GPU test, though that’s fitting for Premiere Pro giving its enormous system requirements.

    You clearly did not read the article, and made a misinformed "slight" against Apple's SoC performance: "These benchmarks disagree with my narrative, so I need to change the benchmarks quickly now."

    I don't get why so many people are addicted to their "Apple SoCs can't be good" narrative that they'll literally ignore:

    1) the AnandTech article that benchmarked what they claimed never got benchmarked
    2) the flurry of press when Adobe finally ported Premiere Pro to arm64
  • easp - Monday, October 25, 2021 - link

    So if one can't really compare "real-world" benchmarks between platforms, how are you so sure that Macs fall short?
  • sirmo - Monday, October 25, 2021 - link

    We aren't sure of anything. Why are we even here?
  • SarahKerrigan - Monday, October 25, 2021 - link

    Sure, OEM submissions are mostly nonsense. SPEC is a useful collection of real-world code streams, though. We use it for performance characterization of our new cores, and we have an internal database of results we've run in-house for other CPUs too (currently including SPARC, Power, ARM, IPF, and x86 types.) Run with reasonable and comparable compiler settings, which Anandtech does, it's absolutely a useful indicator of real world performance, one of the best available.
  • schujj07 - Monday, October 25, 2021 - link

    You are the first person I have talked to in industry that actually uses SPEC. All the other people I know have their own things they run to benchmark.
  • phr3dly - Monday, October 25, 2021 - link

    I'm in the industry. As a mid-sized company we can't afford to buy every platform and test it with our workflow. So I identify the SPEC scores which tend to correlate with our own flows, and use those to guide our platform evaluation decisions.

    Looking at specific spec scores is a reasonable proxy for our own workloads.
  • 0x16a1 - Monday, October 25, 2021 - link

    uhhhh.... SPEC in the industry is still used. SPEC2000? Not anymore, and people have mostly moved off of 2006 too onto 2017.

    But SPEC as a whole is still a useful aggregate benchmark. What others would you suggest?
  • sirmo - Monday, October 25, 2021 - link

    It's a synthetic benchmark which claims that it isn't. But it very much is. Anything that's closed source and compiled by some 3rd party that can't be verified can be easily gamed.
  • Tamz_msc - Tuesday, October 26, 2021 - link

    LOL more dumb takes. Majority of the benchmarks are licensed to SPEC under open-source licenses.

    https://www.spec.org/cpu2017/Docs/licenses.html
  • Ppietra - Tuesday, October 26, 2021 - link

    anyone can compile SPEC and see the source code
  • Ppietra - Monday, October 25, 2021 - link

    There aren’t that many games that are actually optimized for Apple’s hardware, so you cannot actually extrapolate to other scenarios, though we shouldn’t expect it to be the best anyway. We should look at other kinds of workloads to see how it behaves.
    SPEC uses a lot of different real world tasks.
  • FurryFireball - Wednesday, October 27, 2021 - link

    World of Warcraft is optimized for the M1
  • Ppietra - Wednesday, October 27, 2021 - link

    true, but it isn’t one of the games that were tested.
    What I meant is that people seem to be drawing conclusions about hardware based on games that have almost no optimisation.
  • The Hardcard - Monday, October 25, 2021 - link

    Please provide 1 example of the M1 falling behind on native code. As far as games, we’ll see if maybe one developer will dip a toe in with a native game. I wouldn’t buy one of these now if gaming was a priority.

    But note, these SPEC scores are unoptimized and independently compiled, so there are no benchmark tricks here. Imagine what the scores would be if time was taken to optimize to the architecture’s strengths.
  • name99 - Monday, October 25, 2021 - link

    Oh the internet...
    - Idiot fringe A complaining that "SPEC results don't count because Apple didn't submit properly tuned and optimized results".
    - Meanwhile, simultaneously, Idiot fringe B complaining that "Apple cheats on benchmarks because they once, 20 years ago, in fact tried to create tuned and optimized SPEC results".
  • sean8102 - Tuesday, October 26, 2021 - link

    From what I can find Baldur's Gate 3 and WoW are the only 2 demanding games that are ARM native on macOS.
    https://www.applegamingwiki.com/wiki/M1_native_com...
  • michael2k - Monday, October 25, 2021 - link

    From the article, yes, the benchmark does show the M1M beating the 3080 and Intel/AMD:
    On the GPU side, the GE76 Raider comes with an RTX 3080 mobile. On Aztec High, this uses a total of 200W power for 266fps, while the M1 Max beats it at 307fps with just 70W wall active power. The package powers for the MSI system are reported at 35+144W.

    In the SPECfp suite, the M1 Max is in its own category of silicon with no comparison in the market. It completely demolishes any laptop contender, showcasing 2.2x performance of the second-best laptop chip. The M1 Max even manages to outperform the 16-core 5950X – a chip whose package power is at 142W, with rest of system even quite above that. It’s an absolutely absurd comparison and a situation we haven’t seen the likes of.

    However, your assertion regarding applications seems completely opposite what the review found:
    With that said, the GPU performance of the new chips relative to the best in the world of Windows is all over the place. GFXBench looks really good, as do the MacBooks’ performance productivity workloads. For the true professionals out there – the people using cameras that cost as much as a MacBook Pro and software packages that are only slightly cheaper – the M1 Pro and M1 Max should prove very welcome. There is a massive amount of pixel pushing power available in these SoCs, so long as you have the workload required to put it to good use.
  • taligentia - Monday, October 25, 2021 - link

    Did you even read the article ?

    The "real world" 3080 scenarios were done using Rosetta emulated apps.

    When you look at GPU intensive apps e.g. Davinci Resolve it is seeing staggering performance.
  • vlad42 - Monday, October 25, 2021 - link

    Did you read the article? Andrei made sure the UHD benchmarks were GPU bound, not CPU bound (which would be the case if it were a Rosetta issue).
  • techconc - Monday, October 25, 2021 - link

    I guess you missed the section where they showed the massive performance gains for the various content creation applications.
  • GatesDA - Monday, October 25, 2021 - link

    Apple currently has the benefit of an advanced manufacturing process. If it feels like future tech compared to Intel/AMD, that's because it is. The real test will be if it still holds up when x86 chips are on equal footing.

    Notably, going from M1 Pro to Max adds more transistors than the 3080 has TOTAL. This wouldn't be feasible without the transistor density of TSMC's N5 process. M1's massive performance CPU cores also benefit from the extra transistor density.
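    For a rough sense of scale, here's the arithmetic behind that comparison, a minimal sketch assuming Apple's published transistor counts and the commonly cited figure for GA104 (the mobile RTX 3080 die); the desktop GA102 is larger at roughly 28B, so the claim applies to the laptop part being compared against in this review:

    ```python
    # A quick check on the transistor-count comparison above. These are public
    # ballpark figures (Apple's published counts; GA104 is the mobile RTX 3080 die),
    # not numbers from this review.
    m1_pro_bn = 33.7   # M1 Pro, billions of transistors
    m1_max_bn = 57.0   # M1 Max, billions of transistors
    ga104_bn  = 17.4   # GA104 (mobile RTX 3080), billions of transistors

    delta_bn = m1_max_bn - m1_pro_bn
    print(f"Pro -> Max adds ~{delta_bn:.1f}B transistors; "
          f"the whole mobile 3080 die is ~{ga104_bn}B")
    ```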

    Samsung and Intel getting serious about fabrication means it'll be much harder for future Apple chips to maintain a process advantage. From the current roadmaps they'll actually fall behind, at least for a while.
  • michael2k - Monday, October 25, 2021 - link

    That's a tautology and therefore a fallacy and bad logic:
    Apple is only ahead because they're ahead. When they fall behind they will fall behind.

    You can deconstruct your fallacy by asking this:
    When will Intel get ahead of Apple? The answer is never, at least according to Intel itself:
    https://appleinsider.com/articles/21/03/23/now-int...

    By the time Intel has surpassed TSMC, it means Intel will need to have many more customers to absorb the costs of surpassing TSMC, because it means Intel's process advantage will be too expensive to maintain without the customer base of TSMC.
  • kwohlt - Tuesday, October 26, 2021 - link

    It's pretty clear that Apple will never go back to x86/64, and that they will be using in-house designed custom silicon for their Macs. Doesn't matter how good AMD or Intel get, Apple's roadmap on that front is set in stone for as far into the future as corporate roadmaps are made.

    Intel saying they hope to one day get A and M series manufacturing contracts suggests they're confident about their ability to rival TSMC in a few years, not that they will never be able to reach Apple Silicon perf/watt.

    Intel def won't come close to M series in perf/watt until at least 2025 with Royal Core Project, and even then, who knows, still probably not.
  • daveinpublic - Monday, October 25, 2021 - link

    So by your logic, Apple is ahead right now.

    Samsung and Intel are behind right now. And could be for a while.
  • Sunrise089 - Tuesday, October 26, 2021 - link

    The Apple chips have perf/watt numbers in some instances 400% better than the Intel competition. Just how much benefit are you expecting a node shrink to provide? Are you seriously suggesting Intel would see a doubling, tripling, or even quadrupling of perf/watt via moving to a smaller node? You are aware node shrink efficiency gains don’t remotely approach that level of improvement be it on Intel or TSMC, aren’t you?

    “Samsung and Intel getting serious about fabrication.” What does this even mean? Intel has been the world leader in fabrication investment and technology for decades before recently falling behind. How on earth could you possibly consider them not ‘serious’ about it?
  • AshlayW - Tuesday, October 26, 2021 - link

    Firestorm cores have >2X the transistors as Zen3/Sunny Cove cores in >2X the area on the same process (or slightly less). The cores are designed to be significantly wider making use of the N5 process, and yes, I very much expect at LEAST a doubling of perf/w from N5 CPUs from AMD, since they doubled Ryzen 2000 with 3000, and +25% from 3000 to 5000 on the same N7 node.
  • kwohlt - Tuesday, October 26, 2021 - link

    Ryzen 3000 doubled perf/watt over Ryzen 2000?? Which workloads on which SKUs are you comparing?
  • dada_dave - Monday, October 25, 2021 - link

    So I wonder why Geekbench scores have so far shown the M1 Max very far off its expected score relative to the M1 (22K)? I've checked other GPUs in its score range across a variety of APIs (including Metal) and so far they all show the expected scaling (or close enough) between TFLOPS and GB score except the M1 Max. Even the 24-core Max is not that far off; it's the 32-core scores that are really far off. They should be in the 70Ks or even high 80Ks for perfect scaling, which is achieved by the 16-core Pro GPU, but the 32-core scores are actually in the upper 50Ks/low 60Ks. Do you have any hypotheses as to why that is? Also, does the 16" have the high performance mode supposedly coming (or here already)?
  • Andrei Frumusanu - Monday, October 25, 2021 - link

    The GB compute workloads run in bursts that are too short, so the GPU isn't ramping up to peak frequencies. Just ignore it.
  • dada_dave - Monday, October 25, 2021 - link

    Ah so. The Apple GPU takes a bit longer than others to ramp up which doesn't actually affect any real workloads, but does for GB because it's too short. Have I got it?
  • dada_dave - Monday, October 25, 2021 - link

    But why aren't the smaller Apple GPUs seemingly as affected? The larger one takes more time to spin up to peak?
  • zony249 - Monday, October 25, 2021 - link

    Smaller GPUs take longer to finish each task, and thus have the time to clock all the way up
  • dada_dave - Tuesday, October 26, 2021 - link

    That’s not it, as that would apply to Nvidia and AMD GPUs too and it doesn’t (well, not until those GPUs get *really* big). Ryan confirmed on Twitter the Max just doesn’t ramp its clocks up quite as fast, and while most workloads won’t notice the difference, GB workloads are small enough that they do.
  • sirmo - Monday, October 25, 2021 - link

    But if the Apple GPU takes a bit longer, so do other dGPUs. So why are we using this benchmark?
  • tipoo - Monday, October 25, 2021 - link

    Definitely interested to hear more about that. So Max has a delayed onset burst? Kind of unlike their GPU behavior before this.
  • neogodless - Monday, October 25, 2021 - link

    On page 5, you have four paragraphs duplicated starting with "The fp2017 suite has more workloads..."
  • Andrei Frumusanu - Monday, October 25, 2021 - link

    Thanks, edited.
  • unclevagz - Monday, October 25, 2021 - link

    On Page 5, why does the spec2017 estimated totals have two 11900Ks?
  • Andrei Frumusanu - Monday, October 25, 2021 - link

    Edited, graph bug.
  • unclevagz - Monday, October 25, 2021 - link

    Is the M1 Max here being tested on the so called "high power" mode?
  • Andrei Frumusanu - Monday, October 25, 2021 - link

    No.
  • unclevagz - Monday, October 25, 2021 - link

    Is the high power mode a selectable option on your review unit?
  • Ryan Smith - Monday, October 25, 2021 - link

    It's selectable on the 16-inch models. However it's not needed for anything short of the heaviest combined CPU + GPU workloads. Any pure CPU or GPU workload doesn't come close to the thermal limits of the machine. And even a moderate mixed workload like Premiere Pro didn't benefit from High Power mode.

    It has a reason to exist, but that reason is close to rendering a video overnight - as in a very hard and very sustained total system workload.
  • unclevagz - Monday, October 25, 2021 - link

    So from your comment, I take it that the high power mode doesn't do anything to performance except allow the fans to kick in more aggressively rather than clock down the CPU/GPU in thermally limited scenarios?
  • hmw - Monday, October 25, 2021 - link

    Ryan, Andrei - so would the high power mode be something that prevents throttling or frequency downshifting when running sustained CPU + GPU workloads that might otherwise cause the machine to throttle or run over the combined limit (example > 120W ) ? If so, makes sense - it's just adjusting the power profile much like in Windows ...

    Just trying to decide between the lighter 14" and the heavier 16", seems like both models score equally well on the benchmarks otherwise ...
  • Ryan Smith - Tuesday, October 26, 2021 - link

    " so would the high power mode be something that prevents throttling or frequency downshifting when running sustained CPU + GPU workloads that might otherwise cause the machine to throttle or run over the combined limit (example > 120W ) ? "

    Correct.

    But being aware of the throttling issues on the Touch Bar 15/16 Inch MBPs, I should emphasize that we're not seeing throttling to begin with. Even "typical" heavy workloads like games aren't having problems. HP mode exists because there are edge cases, but unlike past MBPs, it seems you have to really work at it to find them.
  • BillBear - Monday, October 25, 2021 - link

    I would expect "high power" mode to just be "crank the fans up and leave them up" mode.

    For instance, that MSI GE76 Raider with an RTX 3080 has such a mode:

    >The downside of the extreme performance mode though is noise, with the system peaking around 55 dB(A) measured one inch over the trackpad. If you are going to run at maximum, you would really need closed-back earphones to try and remove some of the noise.

    https://www.anandtech.com/show/16928/the-msi-ge76-...
  • Tyrone91 - Monday, October 25, 2021 - link

    No Cinebench R23? Those R23 MT scores seem much closer to the Intel/AMD laptop chips.
  • sirmo - Monday, October 25, 2021 - link

    The only useful test I wanted to see. What's weird is it's present in the power consumption chart. So why not post the results?
  • zony249 - Monday, October 25, 2021 - link

    Same reason why Andrei didn’t run only one of SPEC’s benchmarks, but rather ran all of them. When reviewing benchmarks, to compare the average performance of different CPUs, you’ll have to average the performance across all benchmarks. However, if you only want to determine the Cinebench performance, i.e. you want to determine which CPU to get to run Cinema4D, then by all means, only look at how it performs in Cinebench.
  • sirmo - Monday, October 25, 2021 - link

    I have found Cinebench to be a much better predictor of actual performance for workloads I care about than SPEC or Geekbench.

    I would like to see more benchmarks than just Cinebench, but Cinebench at least would be nice.
  • ikjadoon - Tuesday, October 26, 2021 - link

    Cheers for proving you have the mental stamina of a YouTube comment. Read a complete article for once in your life now, FFS.

    Cinebench is literally the first CPU performance benchmark AnandTech shows. CB23 is Cinebench R23. Let's wake up and smell the coffee *before* writing a comment.

    I can't understand how anyone could simultaneously

    1) understand the niche use of Cinebench scores in professional 3D rendering, and also
    2) lack the mental stamina to finish reading a seven-page article that's mostly charts and graphs, and
    3) make the time to log in to AnandTech to prove to everyone that they did not read the article
  • tipoo - Monday, October 25, 2021 - link

    Pretty amazing. Now I just wish Tim Apple would swing some of that Smaug hoard of cash around to get native first class game ports to macOS, as even with all this power in the chip the gaming results were the letdown. I know, you don't particularly buy one to game on, but if you needed it for video editing or another high performance compute job, it would be nice to have the option to do it better on the side.
  • Blastdoor - Monday, October 25, 2021 - link

    I wonder if we're on the cusp of Apple doing whatever it takes to get more AAA games on the Mac. It might not have made a lot of sense for Apple to make a push with developers when a large fraction of Macs were sold with Intel's integrated graphics. But with the M1 as the performance baseline, maybe now it makes sense? If they're willing to spend billions on Apple TV+ (including a TV show about a game studio, for goodness sake), then why not spend to get games on the Mac?
  • daveinpublic - Monday, October 25, 2021 - link

    Ya, seems like good timing for bringing AAA games into the fold. Could potentially port to Mac & iPad, even an iPhone version at the same time.
  • lemurbutton - Monday, October 25, 2021 - link

    Finally, a M1 Pro/Max performance review without just running the really bad Cinebench.

    Good bye AMD/Intel.
  • pSupaNova - Tuesday, October 26, 2021 - link

    Not at that Silicon Budget and price point.
  • qqii - Monday, October 25, 2021 - link

    The seems to be a minor spelling mistake on page 2 (Huge Memory Bandwidth, but not for every Block):

    > Starting off with our memory latency tests, the new M1 Max *changers* system memory behaviour quite significantly compared to what we’ve seen on the M1.
  • qqii - Monday, October 25, 2021 - link

    Page 3:

    Starting off with device idle, the chip reports a package power of around 200mW when doing nothing but idling on a static *scree*.
  • id4andrei - Monday, October 25, 2021 - link

    Hey Andrei, on gaming, how does the API difference weigh in? I assume that Apple expects all devs to design their new games with Metal in mind. On the Windows side you have DX, OpenGL, Vulkan and the GPUs themselves are tuned to various quirks of these APIs.
  • Ryan Smith - Monday, October 25, 2021 - link

    Metal 2 is close enough in functionality and overhead to DX12 that I don't lose any sleep. But it is an API like any other, so devs need to be familiar with it to get the best performance from Apple's platform. Especially as Apple's GPU is a TBDR.
  • Silma - Monday, October 25, 2021 - link

    I don't understand how Intel, AMD and Qualcomm did not respond more urgently to the threat after the marketing of the M1.
    If they don't hurry, some people like me, who never considered switching to Apple, will entertain the idea.
  • willis936 - Monday, October 25, 2021 - link

    It's 57 Bn transistors in a consumer SoC. How can companies who only sell SoCs compete with a company that sells their SoC at a loss?
  • Ppietra - Monday, October 25, 2021 - link

    "company that sells their SoC at a loss"????
    Apple is making profits when selling these machines so it cannot be selling at a loss.
    What you could say is that Apple is not bound by the same manufacturing cost constraints as Intel or AMD because it doesn’t have to convince any other company to buy its chips.
  • phr3dly - Monday, October 25, 2021 - link

    It would be really interesting to see the BOM for a MBP. It'll never happen of course, and direct comparisons to even an x86 MBP would be impossible.

    Certainly the vertical design gives Apple the opportunity to pocket what would otherwise be fat margins for Intel.
  • Ppietra - Monday, October 25, 2021 - link

    Without a doubt Apple is taking advantage of not having to pay for Intel's (CPU) or AMD's (GPU) margins.
    I am sure there will be some analyst trying to predict its cost – all the iPhone BOMs that you read do the same thing. Considering its size and some of the estimates for the previous M1, maybe somewhere close to 200 dollars for the M1 Max.
  • sirmo - Monday, October 25, 2021 - link

    This chip is more expensive to manufacture than the 3090. It even has the wider memory bus. No way could you sell this as a component alone. Apple is clearly subsidizing this run to convince customers to stay for the transition.
  • Ppietra - Monday, October 25, 2021 - link

    How do you know that it is more expensive to manufacture than the 3090? And do you know how much the 3090 actually costs in order to come to such a conclusion about "subsidizing"?? A wider memory bus has no bearing on how much the chip itself costs.
    Apple doesn’t sell computers at a loss; actually, Apple almost certainly pays less for the chip than if it had to buy from Intel.
  • sirmo - Monday, October 25, 2021 - link

    The 3090 is 28B transistors; this behemoth is 57B. As you go larger the yields get worse, so costs ramp up faster than linearly. Also, the M1 Max is made on the most advanced 5nm node, which is more expensive than the Samsung 8nm node the 3090 is made on.

    It would definitely cost more than a 3090, maybe even 3-4 times more expensive.
  • Ppietra - Monday, October 25, 2021 - link

    The number of transistors doesn’t tell you the price because, like you said, they are not manufactured on the same node, nor even by the same company, so you have no idea what the difference in yields is between the manufacturing processes used.
    Secondly, the 3090 is actually much bigger than the M1 Max (630 mm^2 vs an estimated 430 mm^2), and it’s the area that affects the yield per wafer on a given node.
    Thirdly, you still have no idea what the actual cost of the 3090 chip is, so you don’t have a clue about the price of the M1 Max to base your argument on.
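    To put numbers on that, here is a minimal sketch of the implied transistor densities, using the rough figures quoted in this thread (28B/630 mm^2 for the 3090 and 57B/~430 mm^2 for the M1 Max, both approximate):

    ```python
    # Implied transistor density from the thread's rough numbers. The point:
    # N5 packs roughly 3x more transistors per mm^2, so twice the transistor
    # count still fits in a smaller die, and die area is what drives dies per
    # wafer and yield.
    chips = {
        "RTX 3090 (Samsung 8nm)": (28.0, 630.0),   # (billions of transistors, mm^2)
        "M1 Max (TSMC N5, est.)": (57.0, 430.0),
    }
    for name, (tr_bn, area_mm2) in chips.items():
        density = tr_bn * 1000 / area_mm2          # million transistors per mm^2
        print(f"{name}: ~{density:.0f} MTr/mm^2")
    # Prints roughly 44 vs 133 MTr/mm^2.
    ```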
  • sirmo - Monday, October 25, 2021 - link

    TSMC discloses scaling between different nodes (84% better density from 7nm to 5nm). So we have a pretty good idea. I am 100% sure the M1 Max is more expensive to manufacture than the 3090.
  • Ppietra - Monday, October 25, 2021 - link

    :S what are you talking about!??? We know the size of the 3090 and we already have an estimate for the M1 Max based on the size of the M1. The 3090 is much, much bigger.
    And it is not enough to be more expensive than the 3090; you actually need a very high cost value for your argument about subsidies to make any sense - and by the way, you haven’t given any value that would make a case for it!
  • sirmo - Monday, October 25, 2021 - link

    5nm is 84% denser than 7nm, and even more so compared to Samsung's 8nm, which is why transistor count is easier to understand.
  • Ppietra - Monday, October 25, 2021 - link

    you continue to make no sense! What matters when talking about yields is die size. Transistor count doesn’t tell you anything about yields when comparing chips using different manufacturing processes.
  • vlad42 - Monday, October 25, 2021 - link

    The price per transistor has generally been on the rise since 14nm. So, just because the 3090 is larger, that does not mean it is more expensive.

    Also, I would not say the M1 Max MacBook Pro is being sold at a loss, but I would bet that the upgrade cost from the M1 Pro is less than the cost difference between the two chips. That is to say, I think Apple's margins on the M1 Max MacBook Pro are less than on the M1 Pro MacBook Pro (or the margins from the other extra components such as memory are making up for it).
  • valuearb - Tuesday, October 26, 2021 - link

    The higher the price point, the higher Apple’s margins are; that’s always been true. The margin on the $1999 14-inch M1 Pro MBP is always going to be less than the margin on a $3500 16-inch M1 Max MBP.

    Think about it. It’s basic pricing strategy. The high-end buyers aren’t as price sensitive, have higher disposable income, earn more from their MBP investment, etc.
  • Ppietra - Tuesday, October 26, 2021 - link

    vlad42, never have I said that the 3090 is more expensive... my argument is that you cannot say that the M1 Max is more expensive just because it has more transistors - yields are affected by die area, which affects the price, and there are other factors that affect price too. As for the price-per-transistor info, that assumes the yield is always the same.
    They charge 400 dollars more just to upgrade to the M1 Max. Considering the size of the M1 Max, that would give around 130 chips per wafer, and the price per wafer should be around 17,000 dollars (according to some estimates). That would mean about 130 dollars per chip at 100% yield. An extra 400 dollars should be more than enough to cover low yields and packaging.
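    For reference, a minimal sketch of that per-chip arithmetic; the ~430 mm^2 die size and ~$17,000 wafer price are this thread's assumptions rather than confirmed figures, and the die-per-wafer formula is the usual approximation:

    ```python
    import math

    def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
        """Common die-per-wafer approximation (gross dies, before defect yield)."""
        d = wafer_diameter_mm
        return (math.pi * d**2) / (4 * die_area_mm2) - (math.pi * d) / math.sqrt(2 * die_area_mm2)

    die_area = 430.0        # mm^2, estimated M1 Max die size (assumption)
    wafer_cost = 17_000.0   # USD per N5 wafer (assumption from this thread)

    gross = dies_per_wafer(die_area)   # ~130 candidate dies per wafer
    print(f"~{gross:.0f} dies/wafer")
    print(f"~${wafer_cost / gross:.0f} per die at 100% yield")
    print(f"~${wafer_cost / (gross * 0.6):.0f} per die if only 60% of dies are good")
    ```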
  • valuearb - Tuesday, October 26, 2021 - link

    And Microsoft puts 15B transistors in a $500 gaming device. I’m thinking a $3,000 laptop has plenty of margin to cover those big wafers.
  • valuearb - Tuesday, October 26, 2021 - link

    Apple never sells at a loss, always prices for high margins, and has zero reason to change that now given the new MBPs were instantly back ordered. Underpricing would be leaving mass amounts of money on the table.
  • tvrrp - Monday, October 25, 2021 - link

    Apple is making profits on products, not from the parts. This SoC costs much more to produce than similar ones from Intel and AMD.
  • sirmo - Monday, October 25, 2021 - link

    You are right. But I think M1 Max is a special case. I think Apple may even be losing money on it. Of course I don't know this to be true, but I can't see it being profitable with how large it is, as well as the exclusivity of being on 5nm first. This thing has more transistors than an A100.
  • photovirus - Monday, October 25, 2021 - link

    I don't think Apple has ever sold anything at a loss (at least since Jobs' return). Their hardware margin is very solid at around 30-35%.
  • sirmo - Monday, October 25, 2021 - link

    Apple is doing great in China, and the interesting thing about China is that it's heavily weighted towards the top end models. This is because those who buy iPhone in China do it as a status symbol. So they go all out. As a result Apple has like 79% margins in China. Plenty of cash to burn on a small number of M1 Max they have to make comparatively speaking.
  • sirmo - Monday, October 25, 2021 - link

    Exactly what I was thinking. All that 5nm silicon wasted on flexing how fast the computer can run benchmarks. Because it can't run most of the demanding software folks use.
  • michael2k - Monday, October 25, 2021 - link

    Well, there is 6 years of OS updates to consider.
  • vladx - Monday, October 25, 2021 - link

    Well said, real world performance is what matters not marketing BS. As
  • vladx - Monday, October 25, 2021 - link

    *As Andrei's countryman, I'm ashamed how much he tries to oversell M1 Max&Pro performance with shitty synthetic benchmarks
  • easp - Monday, October 25, 2021 - link

    Selling their SoC at a loss? That makes no sense.

    Apple's margins are the envy of the industry.

    And, of course, they aren't selling SoCs at all. However, these systems sell for similar price points to Intel (+AMD) MacBook Pro models so it would seem that their overall SoC costs are similar to what they were spending on Intel CPUs + AMD GPUs in the past.

    I suppose that means more of the BoM cost goes to silicon rather than to Intel's and AMD's margins, but the flip side is that that difference really just accrues to Apple's margins.
  • name99 - Monday, October 25, 2021 - link

    So what? Who cares about the number of transistors? What matters is that its area is comparable to a mid-size GPU (not outrageous) and that it likely costs Apple ~$130 or so from TSMC (post-yield).
    If Intel or AMD can't achieve similar behavior in similar area, whose fault is that?

    This complaint is no different from the earlier idiotic complaint that Apple is "cheating" by using so much cache -- once again, who's stopping Intel or AMD from doing the same thing?
  • photovirus - Monday, October 25, 2021 - link

    It ain't that easy. It took Apple a whole year just to scale the M1. And it took many more years to make M1 (namely, its Firestorm and Icestorm cores) in the first place.

    Also, Intel hasn't done anything nearly similar to what Apple is doing. AMD, on the other hand, did the consoles, which are somewhat close to what Apple offers. But still, it's not a walk in the park, and I'm quite sure both did spin up some projects to make a competing system. It might take them several years before these projects are fruitful (pun intended).
  • zodiacfml - Monday, October 25, 2021 - link

    Well, they can't; only Apple has the scale and premium pricing to do this. The closest thing is AMD's SoC in the PS5/Xbox, but they can't put that in a PC. AMD could make a similar SoC for the PC, but they certainly couldn't sell many.
  • phr3dly - Monday, October 25, 2021 - link

    My feeling: Apple is targeting a small market with a very small handful of SKUs. To this point their marketing seems very elegant, falling largely into the "Good, Better, Best" model that Steve Jobs introduced way back in 1998 or whenever the iMac came out.

    Intel meanwhile has a massive SKU database targeting hundreds (thousands?) of segments. On the one hand, that's enviable! It's a huge market, they make tons of money. On the other hand, it's not conducive to battling someone who is, at this point, tackling a very specific market.

    Intel's marketing department has segmented the product line like crazy. Battling the M1 in the laptop world requires undoing a lot of that effort. Interestingly, Intel is facing the same challenge from AMD in the server space, where the Epyc SKUs are dramatically "simpler" than Xeon offerings.

    I don't envy the position Intel is in, but it's their own fault for spending decades pursuing higher margins via market segmentation rather than focusing on the customer.
  • kwohlt - Monday, October 25, 2021 - link

    Intel and AMD are working on a response for sure - but CPUs/GPUs are a slow moving business. It takes years for a new design to hit the market - there's just no way AMD or Intel could have responded in the year since the M1's release.
  • taligentia - Monday, October 25, 2021 - link

    We are still waiting on a response to the M1 and it's been out for a year now.
  • jospoortvliet - Wednesday, October 27, 2021 - link

    A fully new CPU design takes at least 3 years, so expect a response at the soonest 2 years from now, and that is assuming they had a team ready to start, nothing else to do, and of course were interested in countering the M1 in the first place.
  • michael2k - Monday, October 25, 2021 - link

    Generally it takes 4 years for silicon design to be completed. That means AMD, Intel, and Qualcomm would have had to look at the A11 Bionic in 2017 and think, "Self, this is an existential threat"

    The A11 was showcased in the iPhone 8. That wasn't exactly going to show up on anyone's radar, except maybe Qualcomm.
  • sirmo - Monday, October 25, 2021 - link

    This SoC/APU would cost over $2000 and no one would buy it. Besides a few neck beards who just like to be different.
  • catinthefurnace - Monday, October 25, 2021 - link

    How many times do you need to be called out for making things up, and making no sense for you to stop?
  • Drash01 - Monday, October 25, 2021 - link

    End of the GPU performance section, the sentence ends abruptly: "But with the loss of Boot Camp, it’s".
  • Ryan Smith - Monday, October 25, 2021 - link

    Thanks. Fixed!
  • 5j3rul3 - Monday, October 25, 2021 - link

    It's amazing.

    Is there any analysis of ProMotion, M1 Max GPU ray tracing...?
  • dada_dave - Monday, October 25, 2021 - link

    Ray tracing is in Metal, but there is no GPU-hardware-accelerated ray tracing yet
  • Kangal - Monday, October 25, 2021 - link

    Really impressive chip.
    I noted my satisfaction/dissatisfaction a whole year ago with the original Apple M1. I suggested that Apple should release a family of chipsets for their devices, mainly to be more competitive and to have better product segmentation. This didn’t happen, and it looks like it's only somewhat happening now. Not to mention, they could've done this transition even earlier, like a year or two ago. They could also update their “chipset family” with the subsequent architectural improvements per generation. For instance;

    Apple M1, ~1W, only small cores, 1cu GPU... for 2in watch, wearables
    Apple M10, ~3W, 2 large cores, 4cu GPU... for 5in phones, iPods
    Apple M20, ~5W, 3 large cores, 4cu GPU... for 7in phablets or Mini iPad
    Apple M30, ~7W, 4 large cores, 8cu GPU… for 9in tablet, ultra thin, fanless
    Apple M40, ~10W, 8 large cores, 8cu GPU… for 11in laptop, ultra thin, fanless
    Apple M50, ~15W, 8 large cores, 16cu GPU… for 14in laptop, thin, active cooled
    Apple M60, ~25W, 8 large cores, 32cu GPU… for 17in laptop, thick, active cooled
    Apple M70, ~45W, 16 large cores, 32cu GPU… for 23in iMac, thick, AC power
    Apple M80, ~85W, 16 large cores, 64cu GPU... for 31in iMac+, thicker, AC power
    Apple M90, ~115W, 32 large cores, 64cu GPU…. for Mac Pro, desktop, strong cooling

    …and after 1.5 years, they can move on to the next refined architecture/node, and repeat the cycle every 18 months. The naming could be pretty simple as well; for example, in 2020 it was M50, then in 2021 their new model is the M51, then M52, then M53, then M54, etc. This was the lineup that I had hoped for. Kinda bummed they didn't rush out the gate with such a strong lineup, and they possibly may not in the future.
  • rmullns08 - Monday, October 25, 2021 - link

    With how many SKUs Apple already has with just the M1 Pro/Max configurations, it would likely be a supply chain nightmare to try to manage 10 CPUs as well.
  • gobaers - Monday, October 25, 2021 - link

    Page 4 should be "put succinctly" not "succulently." Even if we do appreciate water efficiency in our chip manufacturing process ;)
  • paulraphael - Monday, October 25, 2021 - link

    "Put succulently, the new M1 SoCs prove that Apple ...."
    A rare case of autocorrect improving an idea.
  • Hifihedgehog - Monday, October 25, 2021 - link

    Mmmunchy Krunchy Dee-licious.
  • GC2:CS - Monday, October 25, 2021 - link

    So the hardware is on one hand much upgraded, like in terms of memory architecture, but on the other hand it is still the year-old Firestorm/Icestorm cores, GPU and NPU.
    I wonder if LPDDR5 is simply not suited for iPhones, but it seems strange that the A15 gives some upgrade to everything while sticking with LPDDR4X.

    24 and 48 MB system caches were shown by Apple. For a brief moment they labeled an area of the M1 Pro with 24 little blocks as system cache; the Max doubles that same area. I just was not sure one little block of system cache equals 1 MB.

    So does making two independent clusters, each with their own L2, help compared to a single 8C/24 MB cluster?
    After all, Firestorm is about 5 W per core, so it is probably easy to fit many of them in, let's say, an upcoming 300 W desktop Apple Silicon chip. The question is, is there space for an even larger core than Firestorm? If a 5 W core is fast, why not make a 20 W core (even if less efficient) and put two of them into a desktop, along with a few dozen Firestorms? Like making Firestorm the little core of the desktop.
    Honestly I could not stand the idea that next year we will have desktop PCs with less powerful main cores than in a phone (how could I flex on my friends then?)

    While the M1s are fast, we have the A15, with supposedly large gains in CPU and GPU efficiency, a better NPU and a 32 MB system cache, already shipping. Seems like a good omen for the M2 generation?
    So M2/Pro/Max will get up to 10/20/40 GPU cores, 32/48/64 MB system cache, and 18/36/36 MB of L2?!?

    The Apple Silicon lineup is getting confusing - not liking it much. A4-A15 is the best naming scheme for any piece of silicon I have ever seen. (Would be better if they started at A1.)
  • StinkyPinky - Monday, October 25, 2021 - link

    Thanks for being the only place that actually did real world benchmarks. Some of these reviews around the web are god awful.

    Any chance you can do Civ 6? That always seems a good test of both CPU and GPU.
  • zodiacfml - Monday, October 25, 2021 - link

    Nice, it falls where I expected it since the announcement. Apple is now playing with the 5nm process like it is nothing.
  • LuckyWhale - Monday, October 25, 2021 - link

    Seems like a hasty and rushed article by a fanboy. So lacking (perhaps on purpose) in real-world general benchmarks; Could have added some encoding or compression, image manipulation, etc. benchmarks for several competing systems.
  • Hifihedgehog - Monday, October 25, 2021 - link

    > Seems like a hasty and rushed article by a fanboy.

    Unlike Ian Cutress, who is down to earth and a blast to send questions or pose hypotheses to, the author on occasion has been very rude and condescending if you disagree. Never mind statements from Andrei like this one about Arm that would make you question his motives: "No - Apple is indeed special and many Arm ISA things happen because of Apple." The Arm ISA has been progressing more and more autonomously and independently from Apple since the 2013 arm64 contribution. Apple indeed has a 1-2 year lead over the rest of the industry thanks to bets made around a decade ago, but they are not heaven's never-failing gift to humanity. Statements like this from Andrei should give you all the knowledge you need about his bias.
  • ikjadoon - Monday, October 25, 2021 - link

    I think the AT team work together on various pieces. Ian is, after all, the primary Intel & AMD author here, so his work is seemingly used here, too.

    >many Arm ISA things happen because of Apple

    You realise that statement has been directly validated by Apple employees...publicly? See Shac Ron, who commented in the very thread you're referring to.

    It's not that unexpected...Arm Ltd. was literally founded by Acorn, Apple and VLSI. An Apple VP was appointed as Arm's first CEO. Former Arm employees have confirmed that Apple was responsible for Arm's name removing all mention of "Acorn". Literally from Wikipedia, mate:

    >The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology. Acorn provided 12 employees, VLSI provided tools, Apple provided $3 million investment. Larry Tesler, Apple VP was a key person and the first CEO at the joint venture.

    Then this...

    >Apple indeed has a 1-2 year lead

    If you think Intel (and AMD) need just 1-2 years to overcome a 4x perf/watt gap, I'm not sure how long you have read AnandTech or followed this industry. If that timeline was anything close to true, we should see AMD & Intel matching M1 perf/watt next month, right? M1 launched a year ago.

    >they are not heaven's never-failing gift to humanity

    Don't think anyone has made that conclusion, have they? These are CPU reviews: there's data and there are straightforward conclusions from the data.
  • Hifihedgehog - Monday, October 25, 2021 - link

    If you are referencing the data, then you would observe it is a 2X efficiency advantage, not your grossly over-exaggerated 4X. Unless, of course, you are referencing Apple’s 3080 Laptop claims. The numbers here show 3060-class gaming performance. Adobe Premiere Pro, meanwhile, which has been enhanced for M-series silicon since July, is only showing numbers on par with an RTX 3050 Ti coming out of the M1 Max. Let’s be objective, but Andrei needs to broaden his benchmark horizons:
    https://twitter.com/TheRichWoods/status/1452639861...
  • ikjadoon - Tuesday, October 26, 2021 - link

    lmao: what kind of basic YouTube comment is this? You're thoroughly confused. And ignored every other silly point that was neatly debunked...

    1) Nope: XDA admits in their actual review that Premiere Pro hasn't yet been updated to activate the M1 Pro / Max video engines, lmao, and that's why it appeared "slow". Do you just copy-paste tweets that agree with you with zero critical review? XDA made it obvious in their review (which you happily refused to link).

    https://www.xda-developers.com/apple-macbook-pro-2...

    >I checked with Apple on the jarring discrepancy between Final Cut Pro and Adobe Premiere Pro rendering times (1:35 vs 21:11!) and a representative from Apple said it’s because Adobe Premiere Pro has not been optimized to use the M1 Pro/Max’ ProRes hardware for video encoding.

    2) You simply cannot read and I'm done wasting my time. Thanks for making me write this out, though: much easier to debunk other illiterate YouTube commenters. 😂

    Cinebench R23 ST: M1 Max has 4.7x higher perf/W than the i9-11980HK
    Cinebench R23 MT: M1 Max has 2.6x higher perf/W than the i9-11980HK
    SPEC2017 502 ST: M1 Max has 2.9x higher perf/W than the i9-11980HK
    SPEC2017 502 MT: M1 Max has 4.0x higher perf/W than the i9-11980HK
    SPEC2017 511 ST: M1 Max has 3.5x higher perf/W than the i9-11980HK
    SPEC2017 511 MT: M1 Max has 2.9x higher perf/W than the i9-11980HK
    SPEC2017 503 ST: M1 Max has 2.4x higher perf/W than the i9-11980HK
    SPEC2017 503 MT: M1 Max has 6.3x higher perf/W than the i9-11980HK

    The geometric mean perf/W improvement is 3.5x. :) Thank you for the laughs, though. Now I know who actually doesn't read the articles!
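    (For anyone checking the math, the geometric mean falls straight out of the ratios listed above; the per-benchmark numbers are as quoted in this comment from the review's power figures:)

    ```python
    import math

    # perf/W ratios of the M1 Max vs the i9-11980HK listed above
    ratios = [4.7, 2.6, 2.9, 4.0, 3.5, 2.9, 2.4, 6.3]
    geomean = math.prod(ratios) ** (1 / len(ratios))
    print(f"Geometric mean perf/W advantage: {geomean:.1f}x")  # ~3.5x
    ```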

    If you want objectivity, then bring real data next time. Please find someone else to "debate" lmao. See you once Intel & AMD release their "just 1-2 years behind" M1 competitors in the next two months.

    Will eagerly await for AnandTech's review...
  • ikjadoon - Monday, October 25, 2021 - link

    Did you read the article, though? AnandTech tested every single thing you asked for...

    All benchmarks need to be standardized between arm vs x86, Windows vs macOS, etc. What validated, repeatable cross-ISA, cross-OS "real-world" software do you suggest?

    Today, that is SPEC2017...which would you have understood had you read the article:

    557.xz_r = compression
    525.x264_r = encoding
    538.imagick_r = image manipulation

    They also ran PugetBench's Premiere Pro, one of the few Apple Silicon-native production applications.
  • michael2k - Monday, October 25, 2021 - link

    They did: That's what SPEC2017 was and they compared it to the Ryzen 5980HS and Core i9-11980HK
  • vladx - Monday, October 25, 2021 - link

    Yes at this point it's quite obvious Andrei is a big Apple fanboy with how he tries to oversell the performance of the Max SoC with synthetic benchmarks.
  • FurryFireball - Monday, October 25, 2021 - link

    For games, why don’t you use one that was converted for the M1? I know World of Warcraft was made native for the M1 by Blizzard, so that would give you a better idea of how well it would do in gaming. WoW can still beat down a 3080 Ti if you max everything out, so it’s not a slouch of a game to benchmark.
  • photovirus - Monday, October 25, 2021 - link

    Apple GPUs are quite different beasts from all others (see TBDR, tile-based deferred rendering). Porting games made for immediate-mode rendering isn't easy. One has to modify the rendering pipeline to leverage everything their GPUs have to offer. Few bother with this.

    I'd check something like Baldur's Gate 3. I'm not sure it has heavy enough graphics, but since they were on stage during one of Apple's events, maybe they did everything right.
    A properly adapted game would be quite easy to port to iOS/iPadOS (and M1 iPads will even have the proper hardware to support quite nice graphics).
  • hmw - Monday, October 25, 2021 - link

    Are there Geekbench scores for the M1 Pro vs M1 Max? Or the 14" vs the 16" ? Just wondering if there's any real performance difference between the 14" and 16"
  • Rudde - Tuesday, October 26, 2021 - link

    Geekbench is too short to measure actual performance, is what Anandtech answered on another comment.
  • Lock12 - Monday, October 25, 2021 - link

    Why is there no power consumption data in game mode?
  • sirmo - Monday, October 25, 2021 - link

    Where are the Cinebench results? Puget uses playback acceleration in its score, which makes it useless. How about some real workloads, like compiling a kernel in Docker?

    Like this review tells me nothing about the actual performance of the computer for anything I might use it for.
  • ginandxanax - Monday, October 25, 2021 - link

    Exactly! Why are there no benchmarks reported for “Absurd Trolling”?
  • steven4570 - Monday, October 25, 2021 - link

    "Where are Cinebench results?"

    lol oh lord
  • Ppietra - Monday, October 25, 2021 - link

    That’s the problem of believing that you can make any conclusion based only on 1 benchmark...
    You end up not accepting a benchmark that actually uses a real world workload just because computers can work faster without relying only on the CPU!
  • sirmo - Monday, October 25, 2021 - link

    I would want to see more benchmarks, of course, but just one would do for now. SPEC and Geekbench I have found to be unreliable predictors for my workloads.
  • Ppietra - Monday, October 25, 2021 - link

    And cinebench would? For someone that seems interested in compiling kernel in Docker, Cinebench would not be an adequate benchmark to ask for. SPEC and Geekbench would actually be far more informative considering that you can look at the performance in the compiling tests that are run!
  • sirmo - Monday, October 25, 2021 - link

    Cinebench is not perfect either. But I found it to be a pretty good predictor of performance for my workloads.
  • Ppietra - Monday, October 25, 2021 - link

    you seem to have weird workloads, because you are asking for Cinebench and after that asking for compiling.
  • catinthefurnace - Monday, October 25, 2021 - link

    Something tells me his workloads work similarly to his brain.
  • sirmo - Monday, October 25, 2021 - link

    Yes my brain can do more than one thing. Imagine that!
  • name99 - Monday, October 25, 2021 - link

    Then do a freaking google search. You do know how to use Google don't you?
  • Ppietra - Monday, October 25, 2021 - link

    And if you pay close attention, SPEC actually has a compiling test
  • name99 - Monday, October 25, 2021 - link

    And yet you feel the need to write at least ten comments about how much it sucks?
  • hmw - Monday, October 25, 2021 - link

    Interesting observation about running Aztec + 511MT on the 16" M1 Max. That model has a 140W power adapter and used 120W from the wall socket. Seeing that the 14" M1 Max has a 96W adapter, it would be logical to assume the 14" will perform slower on CPU+GPU heavy workloads simply because it is limited in how much power it can draw from the wall - unless of course it's possible to supply 140W via MagSafe to the 14" as well ...
  • guy23929 - Monday, October 25, 2021 - link

    The 14'' model is only sold with the M1 Pro, not the M1 Max
  • daveedvdv - Monday, October 25, 2021 - link

    The M1 Max is available on the 14” model as a built-to-order option (I have one arriving next week).
  • NCM - Monday, October 25, 2021 - link

    @guy23929: The M1 Max processor is a BTO option for the 14” model. See https://www.apple.com/shop/buy-mac/macbook-pro/14-...
  • Ppietra - Monday, October 25, 2021 - link

    it can also be sold with the M1 Max, it’s a build-to-order option.
  • photovirus - Monday, October 25, 2021 - link

    This is not correct. You can totally customize a 14" to have an M1 Max w/ 32-core GPU. (There's no Max-equipped *base* model, though.)
  • sirmo - Monday, October 25, 2021 - link

    My guess is the 14" will be power limited more, since I bet it doesn't have the cooling capacity to handle 120 watts.
  • hmw - Monday, October 25, 2021 - link

    Unless the 16" uses a very different cooling design from the 14", its cooling capacity is not likely to be that much bigger. Say it's capable of dissipating 120W and the 14" can handle 100W. In that case the 14" would spin the fans harder and handle the extra load. So perhaps the limitation is artificial?
  • photovirus - Wednesday, October 27, 2021 - link

    Sure, I think it will. However (IMO) there aren't that many workloads which will saturate both CPU and GPU simultaneously to stress the thermals. If any one is underutilized, then the bottlenecking one will run at 100% speed.

    I think, 14" Max will be as fast as 16" Max for most people. At 1.6 kg, this powerhouse is surprisingly portable.
  • maroon1 - Monday, October 25, 2021 - link

    Apple is bad for gaming.... fewer games than Windows 10, and it gets destroyed by Nvidia GPUs because of poor optimization
  • ComputeGuru - Monday, October 25, 2021 - link

    Sadly, until Apple gets on board with gaming, these Macs and their exceptional power efficiency are going to remain a niche product.

    These new Macs are definitely a very nice product for that small subset of people who need their processing power for their specific workloads, but this is still a product that is not for the masses. It would be nice to see all computers sip power like these new Macs, but until the Mac actually becomes a threat to the Windows market by giving the masses the biggest legit reason to buy a Mac (no-compromise gaming with access to all games), sadly this will take a while as well.
  • Focher - Monday, October 25, 2021 - link

    Can you let us all know when Apple gave a crap about gaming on Macs? Because every time Apple releases a new Mac product, there are inevitably a bunch of posters about Apple being bad at gaming. Very insightful.

    These machines are designed for digital workflows. Transcoding video. Rendering. Publishing. Machine learning. It's pretty much a guarantee that Apple's engineering teams do not have a single AAA game in their mind when designing silicon. Yet here we are again having to scroll through useless comments about gaming support on a Mac.
  • Alistair - Monday, October 25, 2021 - link

    It's because of the pricing. You can buy a $1000 laptop with a Ryzen 5900H if all you need is CPU performance. I don't find Apple products compelling without gaming chops. Neither do most people.
  • Focher - Monday, October 25, 2021 - link

    Now you're just being silly and possibly even intentionally ignorant. A $1000 laptop is not going to do ANY of the things that MacBook Pros are actually designed to do anywhere close to one of these machines.

    As for "most people", it appears Apple sells quite a few of their laptops so they seem to be doing fine without the gaming crowd. You might as well tell us how well a Toyota Corolla is at transporting people versus a Porsche 911.
  • Alistair - Monday, October 25, 2021 - link

    Excuse me, but our sysadmin just outfitted everyone at the company with brand new 16", 8-core laptops. $1000 each. There are many of them. And they have an RTX 3050 also.
  • name99 - Monday, October 25, 2021 - link

    Was gaming the priority that drove the configuration of those laptops?
  • ComputeGuru - Monday, October 25, 2021 - link

    @Alistair Pretty much this. Which is why they have been, and will continue to be, relegated to a small niche product. It doesn't need to be this way though.
  • Alistair - Monday, October 25, 2021 - link

    Sometimes I think Apple users are unaware that the MAJORITY of high-powered CPU laptops are under $1000 USD now. If you don't need more than an RTX 3050, there's no need to pay more.
  • Alistair - Monday, October 25, 2021 - link

    Not to mention Alder Lake is launching very soon.
  • valuearb - Tuesday, October 26, 2021 - link

    Apple has close to 10% of the PC market in units at nearly triple the industry ASP, giving it at least 20% of revenues and the majority of PC hardware profits. They’ve also outgrown the industry in almost every one of the last twenty years, going from 2% to over 8%.

    The new MBPs are massive improvements for one of their best selling Mac lines, they’ll be over 10% in industry units next year.
  • taligentia - Monday, October 25, 2021 - link

    Gamers really do have a hilariously high opinion of themselves.

    The computing world doesn't revolve around games. People do use computers for doing work as well.

    And for doing work Apple is pretty far ahead of the competition.
  • ComputeGuru - Monday, October 25, 2021 - link

    Sounds more like the small computing subset that uses Macs has an even higher opinion of itself.
  • valuearb - Tuesday, October 26, 2021 - link

    That small computing subset comprising graphics, illustration, publishing, digital video, and music production.
  • ComputeGuru - Tuesday, October 26, 2021 - link

    There are also Illustrators, publishers, digital video artists/editors and music producers that use Windows computers as well. Mac does not have a monopoly on these forms of employment, and a lot of these people game too in their spare time.

    Thanks for your moot point though.
  • web2dot0 - Tuesday, October 26, 2021 - link

    Some people use their computers to generate revenue as their source of income.

    But hey, what do you care about their livelihoods.

    You just want high frame rates when you play games … that don’t generate any income. In fact you spend your income on it
  • Focher - Monday, October 25, 2021 - link

    They also feel compelled to litter forums about computing devices that aren't designed for gaming with posts criticizing devices for not being better at gaming.

    Does anyone go into gaming forums and critique gaming computers for not being good at non-gaming stuff? I've never noticed it. Besides, no serious gamer uses a laptop of any kind.
  • ComputeGuru - Monday, October 25, 2021 - link

    @Focher they don't do that because that's not an issue.

    Don't let that stop you from making logical fallacies though!
  • ComputeGuru - Monday, October 25, 2021 - link

    >And for doing work apple is pretty far ahead of the competition.

    They are ahead, now. Pretty far ahead is a stretch, unless you're talking performance per watt, which, to be fair, most people still aren't concerned with at the moment. Also, gaming is a massive market that you'd do well to understand.
  • musiknou - Monday, October 25, 2021 - link

    It's true that gaming is a massive market, and Apple makes more money from gaming than just about all the gaming companies combined. This is what some folks don't get: they keep talking about gaming when casual gaming is where the bulk of the money is, and Apple is killing it there. Apple makes more money in gaming than Microsoft, Sony, Nintendo, etc. Yeah, keep saying Apple and gaming don't mix while Apple is laughing all the way to the bank with their gaming revenue.
  • ComputeGuru - Tuesday, October 26, 2021 - link

    We're talking about Mac, not Android and iOS. Do try to keep up.
  • musiknou - Wednesday, October 27, 2021 - link

    No need, my answer still applies. Apple does not care about gaming on the Mac because the ROI is simply not there. The energy and resource investment they would need to make the Mac a powerhouse gaming platform is simply not worth it, not when they are killing it on the mobile side. Mobile is the bigger gaming platform by revenue and profit by far, and desktop gaming isn't exactly a growing market. I don't see Apple doing much more there than they are already doing. If the chips get so good that developers decide to target them, fine, but I don't see Apple going after that market. Folks need to come to grips with that, but I think folks keep bringing it up because it's about the only thing left to badmouth these chips over.
  • web2dot0 - Tuesday, October 26, 2021 - link

    You don't care about performance per watt when you use your laptop?

    Why don't we all slap a 1000W CPU on your lap.

    Maybe you’ll care then
  • ComputeGuru - Tuesday, October 26, 2021 - link

    Nice hyperbole. You guys are very good at hyperbole.
  • Makaveli - Tuesday, October 26, 2021 - link

    Sadly this is true. We have a majority of children and man-children who just play games, really do nothing else on a PC, and think they drive the industry. You would expect more comments from people in the industry and IT professionals, then the children come in with "my games, my fps, I need more" blah blah blah. You have less of this on AnandTech compared to some other forums, but you cannot escape it, sadly. I'm not even a Mac guy (I don't have a single Apple device in my house), but I know that no one is dropping the kind of coin Apple wants for this to play games.
  • itsmrfungible - Tuesday, October 26, 2021 - link

    For 35+ years all we heard was "Macs are toys, get a real computer," then once Macs start beating Intel all we hear is "but muh games." Who's playing with toys now?
  • ComputeGuru - Tuesday, October 26, 2021 - link

    Intel isn't even the leader in the PC market anymore. Intel has become largely irrelevant due to their own stupidity.
  • valuearb - Tuesday, October 26, 2021 - link

    The biggest gaming market in the world by dollars is iOS.
  • ComputeGuru - Tuesday, October 26, 2021 - link

    We're talking about desktop/Mac gaming. Keep up with the convo.
  • photovirus - Wednesday, October 27, 2021 - link

    It's actually relevant.

    iOS has a lot of common frameworks with macOS (especially now with Catalyst). Studios which are proficient in high-end mobile graphics might want to take one step beyond and make visuals for M1 iPads and Macs. Especially since they have money from the biggest gaming market.

    With M1, Macs and iPads are ready for AAA-games, and also they sell well, so it's a good time to make some games for the platform.

    I think we'll see an inflection point quite soon. Tens of millions of units of gaming-worthy hardware can't go unnoticed by the publishers. :~)
  • ComputeGuru - Tuesday, October 26, 2021 - link

    Sure, the mobile gaming market that comprises Android and iOS is bigger; there are also a lot more people with access to mobile phones that can run casual games than computers capable of gaming, so that's no surprise. Since we're talking about Mac/desktop gaming here, not mobile, the point is moot.
  • ryne.andal - Monday, October 25, 2021 - link

    I'm sold. It's precisely what I want out of a productivity machine. Plenty of horsepower for when there's a need to compile and develop locally, plenty of battery life for remote/ssh work as well.
  • rmari - Monday, October 25, 2021 - link

    In regard to bandwidth, the author is too disparaging. The CPU cores max out in their ability to use bandwidth at 243 GB/s, but the author complained that they should be able to use all 400 GB/s. However, all the memory bandwidth is shared across all the system components. So if the GPU maxes out at 90 GB/s, the combined CPU+GPU bandwidth need is 333 GB/s. This leaves 67 GB/s (about the bandwidth of the M1) for the rest of the components, including the Neural Engine and the media engines, which can encode and decode multiple 8K videos simultaneously.
    Obviously the massive bandwidth allows the M1 Max to minimize CONTENTION for the memory resources used for Unified Memory. If the CPU hogged all 400 GB/s of bandwidth, then the GPU and the rest of the chip couldn't function, since they would have ZERO memory access.
    Duh.
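    If it helps, here is the same arithmetic as a tiny Python sketch, using only the figures quoted in the article and this thread (243 GB/s measured CPU ceiling, ~90 GB/s observed GPU draw, 400 GB/s aggregate); these are measured peaks, not fixed allocations, since the bus is shared rather than statically partitioned.

        # Back-of-envelope bandwidth budget from the numbers above.
        TOTAL_BW = 400   # GB/s, M1 Max aggregate LPDDR5 bandwidth
        CPU_PEAK = 243   # GB/s, highest CPU-only draw measured in the review
        GPU_SEEN = 90    # GB/s, highest GPU draw observed in the tested workloads

        headroom = TOTAL_BW - CPU_PEAK - GPU_SEEN
        print(f"CPU + GPU combined peak: {CPU_PEAK + GPU_SEEN} GB/s")                 # 333 GB/s
        print(f"Left over for NPU, media engines, display, I/O: {headroom} GB/s")     # 67 GB/s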
  • sirmo - Monday, October 25, 2021 - link

    Nah, it is clear Apple picked an oversized bus to save on power.
  • name99 - Monday, October 25, 2021 - link

    "oversized bus"?
    listen to yourself. These are the rantings of a lunatic.

    Maybe you want to follow this up with a complaint that Apple are using too many vector units, have grossly oversized caches, and can execute too many instructions simultaneously.
  • sirmo - Monday, October 25, 2021 - link

    Why are you being defensive? I am just speculating. Isn't that what the comment section is for? And if that's why they did it, that's clever. I wasn't even disparaging them for it.
  • ComputeGuru - Monday, October 25, 2021 - link

    @rmari Well said and I think this gets overlooked by almost everyone right now.
  • Alistair - Monday, October 25, 2021 - link

    That's not a good analysis. If I want to load up the GPU to 100 percent usage and only need 2 CPU cores for those tasks, getting 300GB/s+ to the GPU is a good idea, not 90GB/s.

    I think that's the main reason the gaming scores are so disappointing: the mobile RTX 3080 in your laptop has its own memory, and unified memory is worse than dedicated fast memory in these cases.

    NOT ALL ELSE IS EQUAL, which is why we don't see a lot of unified designs for PC products.
  • Ppietra - Monday, October 25, 2021 - link

    They don’t say that the GPU is only allowed to use 90GB/s, they mentioned that there could be other workloads that use more.
    Not enough data to say that memory is a bottleneck, and don’t forget that CPU and GPU can also share the system cache which reduces the reading and writing to RAM.
  • Alistair - Monday, October 25, 2021 - link

    it says right in this article they could not get the GPU to use a lot of bandwidth to the memory
  • Ppietra - Monday, October 25, 2021 - link

    What it says is that the workloads they used didn't use more than 90, but it also mentions that there could be other workloads that might use more bandwidth.
    In other words, they haven’t proven anything about a GPU memory bottleneck.
  • tvrrp - Monday, October 25, 2021 - link

    Do the FP results on Intel use AVX instructions? These numbers seem low.
  • imaskar - Monday, October 25, 2021 - link

    Where's M1 Pro in the CPU benchmarks? Came for this and disappointed =(
  • Alistair - Monday, October 25, 2021 - link

    It's the same; whether M1 Pro or Max, the CPU is the same.
  • Bik - Monday, October 25, 2021 - link

    I'm excited to see whether Apple's 10-core, 40W CPU beats Intel's 16-core, 300W Alder Lake desktop CPU coming up later this week.
  • lilo777 - Monday, October 25, 2021 - link

    What if it won't? Will you be equally excited?
  • Bik - Monday, October 25, 2021 - link

    Of course! That'll be a big surprise in itself lol
  • hemil0102 - Monday, October 25, 2021 - link

    Why didn't you check performance with Baldur's Gate 3?
    It supports M1 natively.
    I can't believe you only checked a game that runs through Rosetta.
  • Bik - Monday, October 25, 2021 - link

    It goes to show that Andrei and Ryan don't do much gaming on the Mac. Or do they game at all?
  • hemil0102 - Tuesday, October 26, 2021 - link

    I want specific data for an M1-native game, because I've played Baldur's Gate 3 on my MacBook Air (M1).
  • Ryan Smith - Tuesday, October 26, 2021 - link

    BG3 was considered. But it lacks a built-in benchmark, and there isn't a viable FRAPS/OCAT-like tool for macOS right now. The best you can do is the instantaneous framerate via the Steam overlay.
  • yeeeeman - Monday, October 25, 2021 - link

    well, I guess being on the best process there is, plus having full control of hardware and software gives you this result. Very impressive, but not so unexpected.
  • Robberbaron12 - Monday, October 25, 2021 - link

    This is definitely impressive, especially the perf/watt, which is absolutely the best. This is why Intel is panicking: if Apple can do this on Arm, then others can too (eventually), and we will start to see a whole lot more super-powerful Arm SoCs.
  • Bik - Monday, October 25, 2021 - link

    And where are the others in that picture? Qualcomm and Samsung both have their mobile SoCs several generations behind. Apple's SoCs are the result of heavy R&D to stay ahead. With x86 moving too, I doubt any other Arm company can wow us like this.
  • valuearb - Tuesday, October 26, 2021 - link

    Intel's also vulnerable in servers to Graviton, which has been growing at massive rates. Arm doesn't need Apple Silicon-level performance to win in servers if it's cheaper and cooler.
  • Hifihedgehog - Monday, October 25, 2021 - link

    Where is Cinebench R23? Show more benchmarks!
  • neural42 - Monday, October 25, 2021 - link

    If you check cpu-monkey, there are some benchmarks there. Another option is if you buy one of the computers you can benchmark it and see if it will work for your needs. If you don't like the results, return the computer and stick with what you have.
  • Silver5urfer - Tuesday, October 26, 2021 - link

    SPEC is all that matters. And real-life encode/decode also doesn't matter. Have to rely on Dave2D and other YouTubers lol.
  • OreoCookie - Tuesday, October 26, 2021 - link

    The purpose of this article was a low-level review of the M1 Max SoC's CPUs and GPU. SPEC is a widely accepted cross-platform, low-level benchmark where different benchmarks push different subsystems. This is complementary to real-world reviews where the gains are much more subtle. I remember seeing that with the M1 reviews where some testers did not see gains, because they were using the “wrong” codec in their workflow that wasn't hardware accelerated.

    Would I like to see similar low-level benchmarks for the NPU (i. e. the ML accelerator)? Yes. But testing that is much more difficult. Those are usually tied to platform-specific APIs. And even if you can run the same workload on x86 on the CPU cores, it isn't clear how much you'd learn from that — other than that dedicated custom hardware will be much faster and much more efficient.
  • DittyDan - Monday, October 25, 2021 - link

    there appears to be something more at the bottom edge of the Max version of the chip vs the Pro just under the GPU logic. Is this dead space from something that was pulled or is there something more? I was thinking it was part of the FaceID which I guess was pulled or will show up in the next versions of these systems (2022 model)
  • Tomatotech - Monday, October 25, 2021 - link

    No one knows yet if these die shots are actual die shots or just photoshopped PR images. AnandTech said they will wait for third-party die shots before commenting further on what's visible in them.
  • mkppo - Monday, October 25, 2021 - link

    So the M1 Max has triple the transistor count of a 5950X. Even if we add an RTX 3090 into the mix, it still has about 10 billion extra transistors. So yeah, the performance is impressive, but it's not like they've created a monster out of thin air: a lot of transistors go toward making the silicon wider and clocked lower to keep power in check. Impressive, but I know what I'd rather get if given a choice.
  • valuearb - Tuesday, October 26, 2021 - link

    If Intel could build at this transistor count they would, but they can’t as their CPUs would melt.
  • Ppietra - Tuesday, October 26, 2021 - link

    It’s an SoC, it’s more than a CPU+GPU.
  • Torrijos - Monday, October 25, 2021 - link

    It would have been nice to use a couple of native games (I know most of the M1-native games don't have a ready benchmark, but still). I feel like the emulation layer makes this a comparison of bananas and oranges...
    Baldur's Gate 3, World of Warcraft
  • kkromm - Monday, October 25, 2021 - link

    "This alone", not "this along", in the second-to-last paragraph.
  • kkromm - Monday, October 25, 2021 - link

    Would be more interested in a mac mini with the max hardware.
  • vladx - Monday, October 25, 2021 - link

    LMAO, the M1 Max doesn't even guarantee 60fps at 1080p, and Apple had the gall to compare their GPU to high-end discrete GPUs.

    Just as I expected, Apple talks big game but it's all marketing BS.
  • Leeea - Monday, October 25, 2021 - link

    You do not understand. Gamers are going to love the m1 Max!

    The specs on this thing make it look like an eth mining beast. The MH/s per watt on this thing is likely to be right off the charts. If you are a miner, this is the machine to have. Miners need to panic sell those PC GPUs now and transition over to the future of mining before they get left behind.

    The m1 max is going to be great for gamers!
  • vladx - Monday, October 25, 2021 - link

    Is there even an ARM variant of ethminer out there? I wouldn't jump the gun at making those assumptions if I were you.
  • Leeea - Monday, October 25, 2021 - link

    They will make one soon enough!

    They have miners for the Raspberry Pi's, and that thing is next to useless for it.

    This thing is going to rock.
  • Silver5urfer - Tuesday, October 26, 2021 - link

    A $3600 laptop for gamers, sure. I wonder if people are going to avoid buying those MSIs, Lenovos, and Alienwares for this pile of soldered junk in a closed ecosystem with a notch on top.
  • ComputeGuru - Thursday, October 28, 2021 - link

    @SilverSurfer lol
  • Leeea - Monday, October 25, 2021 - link

    This thing is going to be an excellent Ethereum miner with all that bandwidth.

    Likely beat out a 3090 with ease. Especially with the low wattage scores.

    -cackles manically-
  • zodiacfml - Monday, October 25, 2021 - link

    Are you a miner? Memory bandwidth maxes out at 400 GB/s, so basically 50-60 MH/s at half the power consumption of a 5700 XT or 3070. The M1 Max is at least $3500 and there's no miner software yet, though if it does mine, it could help alleviate the huge cost.
    Nice piece of tech, but I'd be happy with an M1 or M2 device.
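    For a rough sanity check on that 50-60 MH/s figure: Ethash is bound by DAG bandwidth, roughly 64 mix reads of 128 bytes (about 8 KiB) per hash, which is a common approximation rather than anything from the article. A quick sketch:

        BYTES_PER_HASH = 64 * 128   # ~8 KiB of random DAG reads per Ethash hash

        def ethash_ceiling_mhs(bandwidth_gb_s):
            # First-order hashrate ceiling set purely by memory bandwidth
            return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

        print(f"~400 GB/s -> ~{ethash_ceiling_mhs(400):.0f} MH/s ceiling")   # ~49 MH/s

    Real hashrates move around with memory clocks and access efficiency, but it shows why ~50 MH/s is the right ballpark for this much bandwidth, assuming anyone ever ports a miner.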
  • vladx - Tuesday, October 26, 2021 - link

    Yep, I guess he didn't look at the memory bandwidth of the RTX 3070 Mobile, which comes in laptops a third of the price of the cheapest M1 Max MacBook.
  • zodiacfml - Tuesday, October 26, 2021 - link

    Yeah, if one looks at mining alone. But considering the efficiency, the integration/small size, the display, the aluminum chassis, etc., it is not much more expensive than a thin-and-light laptop with a mobile RTX 3080. I still believe the M1 Max is equivalent to that card; it's just that no x86 game is natively ported to the M1.
  • vladx - Tuesday, October 26, 2021 - link

    A miner buys these laptops because they are more available than a desktop GPU. They'll buy a dozen of them solely for mining and nothing else, so they don't give a crap how thin they are or how much they weigh.
  • ComputeGuru - Thursday, October 28, 2021 - link

    The RTX 3070 in my Legion 5 Pro does 65 MH/s.
  • Oxford Guy - Monday, October 25, 2021 - link

    Several comments argue that Apple is making a good decision by exploiting casual mobile gaming and not 'AAA' gaming — as if those are mutually exclusive.

    While I don't know how much value there is in bribing companies to port to Metal + M-series silicon, what is stopping companies from doing it of their own volition? Here are some possible issues:

    1. The cost/difficulty of implementing invasive complex DRM that's designed for Windows.

    2. Apple's track record for breaking backward compatibility quickly, both with macOS internals and (judging by that record) with Metal specifically going forward.

    3. Perceived market share, not only in terms of capable M hardware but also in terms of its buyers demographic.

    4. The concern that Apple might lock all software in macOS (possibly with exceptions for MS Office and Adobe) behind the same paywall it uses for iOS, thus requiring a heavy royalty chunk. How the Epic lawsuit goes...

    5. How well will the hardware handle the very high sustained utilization of the GPU, in terms of noise and throttling? Will Apple throttle the laptops with a software update to preserve battery viability/life, as it did with the iPhone?
  • Oxford Guy - Monday, October 25, 2021 - link

    Also,

    6. Will Apple choose to make a 'console' instead of pushing macOS gaming?
  • StuntFriar - Tuesday, October 26, 2021 - link

    1. Not an issue. Existing cross-platform solutions exist to cater for different types of games.
    2. If you're maintaining your own game engine, then yes. Otherwise, licensed engines like UE4 and Unity sort most of the kinks out for you.
    3. This - games just don't sell on the Mac. I've worked with a few publishers over the years, and Mac versions are never considered, even if the engine supports it.
    4. This has never been a deterrent to publishers pushing games on consoles. Self-published Indie devs will complain because of lack of funds.
    5. Not an issue. We scale the quality on games based on the ability of the hardware. The same game can appear on Switch and PS4, but with some compromises on the former.
    6. Unlikely. It's a highly competitive market and Apple has to offer something that the big 3 don't have. They have no unique IP and would have to spend a boatload of money to get enough exclusive content to even be a viable secondary or even tertiary platform for people who own multiple consoles (let alone primary).
  • Oxford Guy - Tuesday, October 26, 2021 - link

    'Not an issue. Existing cross-platform solutions exist to cater for different types of games.'

    You're claiming that all the popular DRM is native in macOS? That's news to me!

    'It's a highly competitive market and Apple has to offer something that the big 3 don't have.'

    How about 'It's a highly-competitive market and Apple has to offer something that the big 2 don't have' and 'It's a highly-competitive market and Apple has to offer something that the big player doesn't have'.

    Claiming that three walled gardens is the limit needs to be supported with hard evidence. Let's see your data.
  • powerslave65 - Monday, October 25, 2021 - link

    For the work the M1 Pro and Max are designed for, they will no doubt deliver, and that would be work, not pounding Red Bull bleary-eyed playing video games for days and losing an entire frequency band of hearing to the howling fans of some jacked-up PC. If you haven't gotten there yet: video games are for children. Building things and making art is what one does when you grow up. Apple is clear-headed about the difference and thankfully doesn't give 2 f's of thought about what a child wants from a professional laptop.
  • Alistair - Tuesday, October 26, 2021 - link

    you're a typical defensive Apple moron, when the product sucks at gaming, you just insult gaming altogether...
  • coolfactor - Tuesday, October 26, 2021 - link

    Can your microwave oven cook a turkey? Didn't think so! Macs and gaming consoles are like microwaves and ovens. Different things. But wait a year... the trajectory that Apple is on is going to change the landscape just like iPhone did.
  • BushLin - Tuesday, October 26, 2021 - link

    https://www.panasonic.com/uk/consumer/home-applian...
  • Spunjji - Tuesday, October 26, 2021 - link

    "video games are for children"
    Opinion discarded
  • web2dot0 - Tuesday, October 26, 2021 - link

    The reality is many folks don't game. So to make gaming as your disqualifier for any computer buying decision is absurd.

    You do you, but millions of folks don't care about gaming.
  • Oxford Guy - Tuesday, October 26, 2021 - link

    If companies want to leave money on the table it's the choice of their boards/CEOs. Some don't have to worry about things like hostile takeovers.
  • Alistair - Tuesday, October 26, 2021 - link

    that's not really true, more than half the population plays games, it is the same thing as watching a movie or listening to music

    he was basically like "you must create" so I'd say, I guess he doesn't listen to music or watch movies, only makes them... bizarre attitude

    even my most non gaming friends have kids and will sit down and play multiplayer party games with them
  • Speedfriend - Wednesday, October 27, 2021 - link

    Making art with a Mac? More likely a bunch of kid-adults cranking out Instagram videos and mindless 'tunes' that they think change the world....
  • Frenetic Pony - Monday, October 25, 2021 - link

    Small correction. GDDR6 is more power efficient than DDR even at high clockspeeds. At lower clockspeeds it compares well to LPDDR in power/bandwidth terms.
  • markiz - Tuesday, October 26, 2021 - link

    Is there a simple way for someone to explain the main reasons for this insane perf/watt gap?
    Like, technical reasons in the chip itself, not reasons why neither Intel nor AMD have ever achieved anything close.

    But also, is it realistic to expect anyone in the near future to approach it?
  • _crazy_crazy_ - Tuesday, October 26, 2021 - link

    Yes, the simple way to explain it is packaging. While Intel and AMD are working on CPUs/APUs that need to communicate with external RAM (not on the CPU die) and need to account for a multitude of other peripherals and components, Apple just stuck everything in the CPU core; that's where most of the optimization comes from, put simply. It's another way of doing things.
    But then comes the flip side we have yet to see, and that's durability, because in normal hardware if a RAM module fails you can simply replace it; in Apple's case, well, it's all e-waste.
    It's a way of doing things that works for Apple since it's pretty much a closed ecosystem.
  • coolfactor - Tuesday, October 26, 2021 - link

    Good answer, but it's a SoC (System on a Chip), not just a "CPU Core" as you described it. This is an important distinction.

    One reason that computer hardware fails is due to heat, so if the system is way more efficient than traditional discrete components, it won't get as hot and therefore should not fail as much. A more reliable architecture in many ways.
  • _crazy_crazy_ - Tuesday, October 26, 2021 - link

    Thanks for the correction, I blame the lack of my morning coffee xD

    Well, heat is one reason for hardware failure, but there are a lot more, and assuming a normal failure rate in consumer electronics of around 10% in the first year of use, that's still a lot, since you cannot replace anything in these machines; everything is soldered and not that easy to salvage.
  • web2dot0 - Tuesday, October 26, 2021 - link

    When was the last time a consumer PC failed? Just more baseless fear mongering.
  • vladx - Tuesday, October 26, 2021 - link

    Indeed and at least PCs can be fixed without paying a fortune each time.
  • ComputeGuru - Tuesday, October 26, 2021 - link

    Very good point vladx. Hopefully Apple offers an excellent replacement warranty for those rare times when something on the SoC package will fail and render the whole system dead.
  • zodiacfml - Tuesday, October 26, 2021 - link

    The TSMC 5nm process and really large chips. Apple has the luxury of making large chips because they have plenty of loyal customers to buy their pricey products. In fairness to Apple, their M1 desktop computing products are better value than their iPhones.
  • kwohlt - Tuesday, October 26, 2021 - link

    I don't know how Intel / AMD can actually respond. If they take the very-large-physical-chip approach, this will certainly help performance for their higher SKUs, but would absolutely kill the costs of their low end chips that would need to share a socket. As long as Intel / AMD offer a lot of SKUs that can be mixed and matched with other components, they will always be at an efficiency disadvantage.

    As long as that gap isn't too large, I guess it'll just have to be something people will accept.
  • vladx - Tuesday, October 26, 2021 - link

    They don't need to respond, software support is 100x more important than hardware performance.
  • OreoCookie - Tuesday, October 26, 2021 - link

    I think Intel makes more than enough profit per chip to eat up the larger cost of larger dies. Intel's problem is that they cannot produce chips with as many transistors with competitive efficiency as they have been stuck on a worse process node. To stay competitive in terms of performance, they have been pushing clockspeeds to the limit, which kills battery life.

    AMD can respond more easily, because they can stitch together suitable chiplets. In the future they could also add RAM in the same fashion that Apple does or they already do for their GPUs (HBM). So they have the expertise and the technology to do that.

    Lastly, one small comment: there are over 1.5 billion active iOS devices out there. The image that Apple customers are comprised of “loyal customers”/“fanbois” is outdated, the user base is way too large for that.
  • Spunjji - Tuesday, October 26, 2021 - link

    It's a combination of 2 of the responses you got - advanced packaging (having memory close to the SoC) and using the most advanced manufacturing process to spend a *lot* of transistors.

    On the CPU side, Zen 4 on TSMC N5 is likely to approach the power and performance levels of M1 Pro, albeit with a higher idle power and (probably) higher peak performance if efficiency is sacrificed entirely.
  • Ppietra - Tuesday, October 26, 2021 - link

    A more advanced node is only a small part of the explanation.
    AMD is supposedly already using the 7nm+ node, and the difference between that and 5nm is not that big... at most a ~10% improvement in perf/W going by TSMC's own figures.
    As for the memory packaging advantage, as far as I understand, that is an extra that is not included in these values (except in the wall power consumption); they are only measuring processor package power. Even if you were talking about the memory controller, the difference would only be 1-2W, hardly enough to explain the total difference in power consumption. And this is before actually looking at performance per watt.
  • BushLin - Tuesday, October 26, 2021 - link

    Combination microwave ovens are a thing btw, most efficient way to roast turkey if it fits inside. Just sayin'

    https://www.panasonic.com/uk/consumer/home-applian...
  • BushLin - Tuesday, October 26, 2021 - link

    Was relevant to a comment above, now just looks weird
  • Youyou122 - Tuesday, October 26, 2021 - link

    Can the M1 Max with 32 GB of RAM reach the 400 GB/s bandwidth, or is that only on the 64 GB RAM version?
  • Spunjji - Tuesday, October 26, 2021 - link

    AFAIK the 32GB version retains the full memory bandwidth.
  • Spunjji - Tuesday, October 26, 2021 - link

    It seems Apple's M1 chips are destined to produce flame-wars in the comments...

    I'm genuinely impressed by these designs. The CPU side of the SoC is extremely capable and, even accounting for node differences, shows a very interesting perf/watt over Intel and (to a lesser extent) AMD.

    The GPU side, on the other hand, is less relevant to my interests. They've clearly designed an extremely capable arithmetic machine that's great for video filters. Unfortunately, between the near-non-existence of ARM binary games on Mac and the painful results of binary translation, it's apparently still not possible to game well on macOS. That perf/watt is still pretty promising, but they have less of an advantage over AMD/Nvidia here than on the CPU side. I suspect the transition to 5nm-class nodes will see AMD and Nvidia catching up again.
  • Farfolomew - Tuesday, October 26, 2021 - link

    I was a bit disheartened as well by the M1's game performance. It kind of stinks that you have this incredible new SoC, 2-3x more efficient than anything before it, hamstrung by its need to run binary translation. I suppose that was always going to be the case with Apple making the switch, but it's such a tease to see what a chip design can truly be capable of given near-infinite R&D dollars and few design constraints.

    I'll be sticking with x86 for now. I hope the players in that camp step up their game and copy Apple's approach to SoC design (or AMD produces a mainstream PC part that mimics their console SoCs).
  • Tigran - Tuesday, October 26, 2021 - link

    Why are there two different results for the MSI GE76 Raider in GFXBench 5.0 Aztec Ruins High offscreen: 266 fps (on the 'Power Behaviour' page) and 315 fps (on the 'GPU Performance' page)?
  • vladx - Tuesday, October 26, 2021 - link

    Because Andrei is a terrible reviewer
  • celeste_P - Tuesday, October 26, 2021 - link

    Does anyone know where I can find the policy on translating/reprinting this article? Does AnandTech allow it? What policies does one need to follow?
    This article is quite interesting and I want to translate/publish it on a Chinese website to share it with a broader range of people.
  • colinstalter - Wednesday, October 27, 2021 - link

    Why not just share the URL on the Chinese page? Do people in China not have translator functions built into their web browsers like Chrome does?
  • celeste_P - Wednesday, October 27, 2021 - link

    Of course they do XD
    But as you can imagine, the quality of machine translation won't be that great, especially considering all these domain specific terms within this article.
  • ABR - Tuesday, October 26, 2021 - link

    An excellent review.
  • ajmas - Tuesday, October 26, 2021 - link

    Given the number of games already available and running on iOS, I wonder how much work would be involved in making them available on macOS?

    As for effective performance, I am eagerly waiting to see what the real world tests reveal, since specs only say so much.
  • mandirabl - Wednesday, October 27, 2021 - link

    As a developer, technically you don't have to do much, just re-compile the game and check another box (for Mac), basically.

    The problem is: iOS games are mostly touch-focused, whereas macOS is mouse-first. So they have to check if that translates without changing anything. If it does, it's a matter of a couple of minutes. If it doesn't translate well ... they have a choice to release it anyway or blocking access on macOS. Yes, developers have to actually decide against releasing their app/game for macOS - if they don't do anything in that regard, the app/game simply shows up in an App Store search on a Mac.
  • Kevin45 - Tuesday, October 26, 2021 - link

    Apple's goal is very simple: if you are going to provide SW tools for Pro users of the macOS platform, you write to Metal - period.

    It IS the superior way to take advantage of what Apple has laid out to developers, and Apple's Pro users absolutely want the HW tools they buy to be maxed out by developers.

    Apple has taken an approach Intel and AMD cannot. Unified memory design aside, Apple has looked at its creative markets and developed sub-cores, which for this creative-focused segment Apple markets as its "Media Engine", with hardware H.264 and hardware ProRes engines that just crush those formats and codecs.

    The argument "Yeah, but the CPU and GPU cores aren't the most powerful that one can buy" is moot. They don't need to be, because there are dedicated cores where the power needs to be. Sure, in the Wintel world, or in Linux space, more powerful GPU and CPU cores are all they've got. When talking about those worlds, that's the correct argument. Not when talking about Apple HW with Apple silicon.

    Intel has fought NVIDIA to have its beefier and beefier cores do the heavy lifting, while NVIDIA wants the GPU to be the most important player in the mix. Apple has broken its SoC out into many subsystems to meet the high compute needs of its user base.

    Now more than ever, developers that have dragged their feet need to get on board. As companies continue to show off, such as Apple with FCP, Motion and Compressor optimized for the hardware, and even DaVinci (a niche player but powerful), they put pressure on other players, such as sloth-boy Adobe, to get going and truly write for Apple's tools that take advantage of such a well-thought-out HW + SW combo.
  • richardnpaul - Tuesday, October 26, 2021 - link

    The article comes across as a bit fanboyish. (Yes, yes, I know that this is usually the case here, but I just wanted to say it out loud again.) See below for why.

    You have covered in depth things like how the changed L3 design between Zen 2 and Zen 3 can cause big jumps in performance, and what was missing here was a discussion of how the 24/48MB cache in front of the memory interface impacts performance, especially when using the GPU (we saw AMD's designs do exactly this last year to improve performance by reducing the impact of calling out to slow GDDR6 RAM).

    The GPU is nothing special. 10 TFLOPS at ~1.3GHz puts it in roughly the same class as a Vega 64, a 14nm design which similarly used RAM packaged on an interposer with the GPU (being 14nm it was big; 5nm makes it much more reasonable). With the buffer cache I'd expect it might perform better, and the CPUs will also bump up performance (just look at how much more FPS you get with Zen 3 over Zen 2, and Zen 3 with V-Cache should be another ~15% on top from exactly the same GPU hardware, and that's with the CPU and GPU having to talk over PCIe).

    Also, Apple have made themselves second-class gaming citizens with their decision to build Metal and enforce it as the only API (I may be mistaken here, but as far as I'm aware the whole reason MoltenVK exists is that you have to use Metal on macOS, so developers introduced this Vulkan-to-Metal shim to ease porting). Also, as I understand it, you can't connect external dGPUs via Thunderbolt to provide comparisons. Apple's vendor lock-in at its worst (have I mentioned that Apple are their own worst enemy a lot of the time?).

    As such, the gaming performance doesn't surprise me; this is a technically much slower GPU than AMD's and NVIDIA's current designs on older processes (7nm and 8nm respectively). The trade-off is that whilst those are faster, they're larger and more power hungry, though a die shrink would bring something like an AMD 6600-based chip into the same ballpark.

    Also, on the 512-bit memory interface, I'd probably look at it more like 384-bit plus 128-bit, i.e. the GPU plus the usual CPU interfaces. The CPU is always going to contend for some of that 512-bit interface, so you're never going to see the full 512 bits for the GPU. On the other hand, you get whatever the CPU doesn't use for free, which is a great bonus of this design, and if the CPU needs more than a 128-bit interface can manage, it has access to that too if the GPU isn't heavily loading the memory interface.

    I kind of expect you guys to cover all this though in the article, not have me railing at the lack of it in the comments section.
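    As a rough check on the Vega 64 comparison above, the peak FP32 math works out as follows; the ALU counts are the commonly reported configurations (32 GPU cores x 128 FP32 lanes for the M1 Max, 4096 stream processors for Vega 64), not figures taken from the article.

        def fp32_tflops(lanes, clock_ghz):
            # Peak FP32 assuming one FMA (2 FLOPs) per lane per clock
            return lanes * clock_ghz * 2 / 1000

        print(f"M1 Max  (32 x 128 lanes @ ~1.3 GHz): ~{fp32_tflops(32 * 128, 1.3):.1f} TFLOPS")   # ~10.6
        print(f"Vega 64 (4096 SPs @ 1.25-1.55 GHz) : ~{fp32_tflops(4096, 1.25):.1f}-{fp32_tflops(4096, 1.55):.1f} TFLOPS")

    Peak TFLOPS says nothing about how either part behaves in an actual game, which is the point being argued here; it only supports the "same raw class as Vega 64" framing.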
  • richardnpaul - Tuesday, October 26, 2021 - link

    Oh, and you never mentioned that the trade-off of the design is that you need to buy all the RAM you'll ever need up front, because it's soldered to the SoC package. The reason we don't normally see such designs is that the trade-off is potentially expensive, unsaleable parts. The cost of these laptops is way above the usual, and whilst they have some really nice tech, this is one of the other downsides of this design (along with the 5nm node and the sheer amount of silicon).
  • OreoCookie - Tuesday, October 26, 2021 - link

    Or perhaps Anandtech gave it a glowing review simply because the M1 Max is fast and energy efficient at the same time? In memory intensive benchmarks it was 2-5 x faster than the x86 competition while being more energy efficient. What more do you want?

    And the article *was* including a Zen 3 mobile part in its comparison and the M1 Max was faster while consuming less energy. Since the V-Cache version of Zen 3 hasn't been released yet, there are no benchmarks for Anandtech to release as they either haven't been run yet or are under embargo.

    Lastly, this article is about some of the low-level capabilities of the hardware, not vendor lock-in or whether Metal is better or worse than Vulkan. They did not even test the ML accelerator or hardware codec bits (which is completely fair).
  • richardnpaul - Wednesday, October 27, 2021 - link

    I'm not saying that it's not a great and energy-efficient marvel of technology (although you're forgetting that the compared part is a 35W Zen 3 mobile part, which has 16MB rather than 32MB of L3, and that's partly because it's a small die on 7nm).

    They mentioned Metal, and they mentioned how they can't get directly comparable results; this is one of the downsides of this chip and the others from Apple. Great as it is, it has drawbacks that hamper it which have nothing to do with the architecture.
  • OreoCookie - Wednesday, October 27, 2021 - link

    I don’t think I’m forgetting anything here. I am just saying that Anandtech should compare the M1 Max against actual products rather than speculate how it compares to future products like Alderlake or Zen 3 with V cache. Your claim was that the article “comes across as a fanboi article”, and I am just saying that they are just giving the chip a great review because in their low-level benchmarks it outclasses the competition in virtually every way. That’s not fanboi-ism, it is just rooted in fact.

    And yes, they explained the issue with APIs and the lack of optimization of games for the Mac. Given that Mac users either aren’t gamers or (if they are gamers) tend to not use their Macs for gaming, we can argue how important that drawback actually is. In more GPU compute-focussed benchmarks (e. g. by Affinity that make cross-platform creativity apps), the results of the GPU seem very impressive.
  • richardnpaul - Thursday, October 28, 2021 - link

    My main disagreement was not with them comparing it to Zen 3, but more that I felt they failed to adequately cover how the change between M1 versions would impact this use case, given that the Zen 2 to Zen 3 comparison has been covered (and AMD have already said that V-Cache will mainly benefit gaming and server workloads, by around 15% on average) and shown in these specific use cases to have quite a large benefit. I'd just wanted that kind of abstract, logical analysis of how the Max might be better positioned for this use case than, say, the original M1. (I know they mentioned in the article that they didn't have the M1 anymore and that the actual AMD 5900HS device is dead, which has severely impacted their testing here.)

    I come to AnandTech specifically for the more in-depth coverage that you don't get elsewhere, and I come for all the hardware articles irrespective of brand, because I'm interested in technology, not brand names. That's why I dislike articles that come across as biased (while it'll never be intentionally biased, we're all human at the end of the day and it's hard not to let the excitement of novel tech cloud our judgement).
  • richardnpaul - Wednesday, October 27, 2021 - link

    Also my comparison was AMD to AMD between generations and how it might apply to increasing the cache sizes of the M1 and the positive improvement it might have on performance in situations using the GPU such as gaming.
  • Ppietra - Wednesday, October 27, 2021 - link

    You are so focused on a fringe case that you don't stop to think that "maybe" there are other things happening besides "gluing" a CPU and GPU onto the same silicon and having them fight for memory bandwidth. A unified memory architecture, plus the CPU and GPU sharing data through the system cache, has an impact on memory bandwidth needs.
    Besides this, looking at the data that is provided, we seem to be far from saturating memory bandwidth on a regular basis.
    It would be interesting, though, to actually see how applications behave when truly optimised for this hardware and not just ported with some compatibility abstraction layer in the middle. Affinity Photo would probably be the best example.
  • richardnpaul - Wednesday, October 27, 2021 - link

    This is exactly what I wanted covered in the article. If the GPU and CPU are both hitting the memory subsystem, they are going to be competing for cache hits. My point was that Zen 3 (desktop) showed a large positive correlation between doubling the cache (or unifying it into a single blob, in reality) and increased FPS in games, and that might also hold true for the increased cache on the M1 Pro and Max.

    Unfortunately, testing this chip is hampered by decisions completely unrelated to the hardware itself, and that also applies to certain use cases.
    It'll be more interesting to see the same games tested under Linux against Nvidia/AMD/Intel-based laptops, as then the only differences should be the ISA and immature drivers.
  • Ppietra - Wednesday, October 27, 2021 - link

    "hitting the memory subsystem they are going to be competing for cache hits"
    The CPU and GPU also have their own caches (the CPU has 24MB of L2 in total; I don't know how much the GPU has right now), which is very substantial.
    And I think you are not seeing the picture: the CPU and GPU not having to duplicate resources, working on the same data in an enormous 48MB system cache (when using native APIs, of course) before even needing to access RAM, reducing latency, etc. This can be very powerful. So no, I don't assume there will be any significant impact because of some fringe case while ignoring the great benefits that the sharing brings.
  • richardnpaul - Wednesday, October 27, 2021 - link

    One person's fringe edge case is another person's primary use case.

    The 24/48MB is a shared cache between the CPU and GPU (and everything else that accesses main memory).
  • Ppietra - Wednesday, October 27, 2021 - link

    No, it's a fringe case, period! You don't see laptop processors with these amounts of L2 cache and system cache anywhere, not even close, and yet for some reason you feel that it would be at a disadvantage, failing to acknowledge the advantages of sharing.
  • richardnpaul - Thursday, October 28, 2021 - link

    What you call a fringe case I call 2.35 million people. Okay, it's probably only about 1.5 to 2% of Mac users; it's ~2.5% of Steam users.

    I know people who play games on Windows machines because the GPUs in their Macs aren't good enough. People who are frustrated at having to maintain a Windows machine just to play games. Those people will buy into an M1 Pro or Max just so they can be rid of the Windows system. It won't be their main concern, and they're not going to be buying an M1 Pro/Max for rendering and the like when they're a web developer; they're going to buy it so that they can dump the pain-in-the-backside Windows gaming machine. Valve don't maintain their macOS version of Steam for no good reason.
  • Ppietra - Thursday, October 28, 2021 - link

    Gosh, no! not 2.35m people. You are so obsessed with a GPU having everything on silicon for itself that you fail to see how much more cache resources the SoC has when compared with other processors. Even if there was 1GB of cache you would still be complaining because the CPU can use it. Get some common sense.
  • richardnpaul - Friday, October 29, 2021 - link

    You're wrong. I've shown that your "fringe" is larger than some countries' populations, and you've dismissed it and pivoted back to another talking point, a point that is a misrepresentation of what I was saying.
    I was wondering what the effect on performance is when the CPU and GPU are both in use and both using the shared cache simultaneously, given that we know that in isolation, with their own cache, it improves efficiency. I'm not saying, and haven't been saying, that it's an actual issue; it's something that could be tested, and we have no clue as to whether it's a real-world problem or not.

    The article was the one that was talking about the GPU and it having access to all the 512bit memory interface, I was challenging that saying that actually the CPU is going to use some of that bandwidth, but the benefit of the design is that when the GPU needs more and the CPU isn't using it it has access to it and vice-versa.

    And if you knew anything about common sense you wouldn't say to get some of it. You're rude and dismissive of anyone else who doesn't fit into your world view, you might want to do something about fixing that about yourself; but probably you won't.
  • Ppietra - Friday, October 29, 2021 - link

    No, you haven’t shown anything, because for whatever reason you continue to ignore how big the cache is when compared with anything else out there, and how big the L2 cache is, also when compared with anything else out there - something that they don’t share. Thirdly, if you even tried to pay attention to what was said, you would see that the M1 Max has double the system cache size, and yet not much different CPU performance.
    You also continue to ignore that in a game (which is the thing you are obsessing about), CPU and GPU work together. Not having to send instructions to an external GPU, and CPU and GPU being able to work on the same data stored in cache, gives a big performance improvement, it removes bottlenecks. So you obsessing because the CPU can use system cache during a game makes no sense, because the sharing can actually give a boost in game performance.
    Fringe cases would never be equivalent to every gamer.
  • richardnpaul - Friday, October 29, 2021 - link

    "continue to ignore how big the cache is when compared with anything else out there"
    Like the previously mentioned RX 6800, which has 128MB? I've not mentioned the RX 6800's (Infinity) cache at all?

    The L2 cache is large, but then it doesn't have an L3 cache. This is a balancing act that chip architects engage in all the time. The Zen 3 and M1 Max latency graphs look very similar, with full random being a little higher but most everything else looking close enough that I'm not going to stick my neck out and declare either the winner.

    "and CPU and GPU being able to work on the same data stored in cache, gives a big performance improvement, it removes bottlenecks"
    This is not represented in the benchmarking, which might be because some specific optimisation is needed, or it could be due to something else. I expect the situation to improve though, probably with more focus on the M1 Pro, which will carry over to the Max.
  • Ppietra - Friday, October 29, 2021 - link

    You are not going to see something in a benchmark that is inherent to how the system works and how it manages memory; there is no off switch. You need to have knowledge of how things work.
    "The L2 cache is large, but then it doesn't have an L3 cache."????????????????
    The system cache behaves as if it were an L3 cache for the CPU. How can you say that Zen 3 and M1 are similar when the M1 Max has 3.5 times the cache of a laptop Ryzen??? Just its L2 cache is larger than all the cache available in a laptop Ryzen.
    "RX 6800 which has 128MB?" An RX 6800 isn't a laptop chip. ["laptop processors" - it's there in one of the first comments]
  • richardnpaul - Saturday, October 30, 2021 - link

    This is where you need to look at the latency graphs for the M1 Pro/Max and then go and find the Zen 3 article and compare the graphs for yourself. And I haven't been comparing the M1 Max to a laptop Ryzen; I have repeatedly compared it to a single Zen 3 core complex, where they are much closer in terms of total cache. Compare the 5nm M1 Max to the 7nm Zen 3 all you like, with its much higher transistor count; you're not talking about the same thing I was all along.

    I have repeatedly made whatever is the closest comparison, regardless of where it's used, to get a helpful idea of what benefits it could bring. That Apple have managed to do this in a laptop's power budget is, and I'll quote myself here, "a technological marvel". The M1 Pro/Max are combined GPUs and CPUs, which means you can compare them to standalone GPUs and to CPUs. You're the one who can't seem to understand that they need to stand on their own merits in both roles.
  • Ppietra - Saturday, October 30, 2021 - link

    Really!??? You want to compare a laptop processor with desktop chips that can consume 3-4 times more than the whole laptop, and you think that is close? No common sense whatsoever!
    But guess what: even then, an M1 Max has more cache available than a consumer desktop Ryzen!
    The latency graphs are for the CPU (where, by the way, you can actually see differences because of the size of the level 1 and 2 caches, even against desktop Ryzen); they don't tell you anything when you want to compare the response latency between CPU and GPU, nor about the performance boost from the CPU and GPU being able to process the same data in cache without having to access RAM.
    Who said you cannot compare with dedicated GPUs?
  • richardnpaul - Sunday, October 31, 2021 - link

    I'm comparing architectures, not products, that's why it seems to you like this is an "unfair" comparison. I also bear in mind what node the architecture is at, as that makes quite a marked difference due to transistor budget constraints.

    Yes, the M1 Max has more cache, and where you're not using the GPU (a bit difficult, as you'll be running an OS with a GUI, but let's say that's basically negligible) it should have a reasonable impact on usages that are heavy on memory bandwidth. In fact, you can see that in the benchmarks: there are a number which heavily reward the M1 Max over anything else. Not that many in total, but certain use cases will see great uplifts, just as Milan-X and the equivalent chiplets in Ryzen CPUs, which we'll get to see in the next few days, will have benefits in certain use cases.

    What I was saying way back was: what's the contention there when running a game, how much benefit is the GPU getting, and how much, if anything, is the CPU losing when contention starts to happen on the SLC? Caches usually work on some kind of LRU basis, so if two separate things are trying to use the same cache (which can have benefits where they are both using the cache for the same data), both suffer as their older cache data is evicted by the other processor. That should be measurable. Workloads that share the same data, if it's small enough to fit into the 48MB on the Max, should see huge benefits, and yes, one application that has been highlighted has taken advantage of this. But we are yet to see others take this up. AMD, having tried this before, will tell you that if you can't get broad software support it's a dead duck; however, Apple have often made long-term bets and stuck with them over a number of years, which could make the difference.

    Apple and AMD have approached this in two different ways. Apple have created a monster APU; AMD's effort was... safe. I think they thought they could iterate over time to larger, better designs, but no one wanted to put that much time and effort into a bet that AMD would deliver in the future when Intel wasn't making similar noises.
    They're on a cutting-edge node, with a cutting-edge design, and there's no other choice for Apple users. Sure, you can get the original M1 or M1 Pro, but there's no Intel to get in the way, and the only downside of the other chips is that they will be slower due to having fewer resources; it's all much the same design.
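    To illustrate the LRU-contention point above, here is a toy Python sketch (purely illustrative; Apple hasn't published the SLC's actual replacement policy, so straightforward LRU is an assumption) of two clients streaming through one shared cache. When the combined working sets fit, both keep high hit rates after warm-up; when they don't, they evict each other and hit rates collapse, which is exactly the effect that would be worth measuring.

        from collections import OrderedDict

        def shared_lru_hit_rates(cache_lines, cpu_set, gpu_set, passes=4):
            cache = OrderedDict()                  # most recently used keys at the end
            hits = {"cpu": 0, "gpu": 0}
            accesses = {"cpu": 0, "gpu": 0}

            def touch(client, key):
                accesses[client] += 1
                if key in cache:
                    hits[client] += 1
                    cache.move_to_end(key)         # refresh recency on a hit
                else:
                    cache[key] = None
                    if len(cache) > cache_lines:
                        cache.popitem(last=False)  # evict the least recently used line

            for _ in range(passes):
                # interleave the two clients, as a shared SLC would see their traffic
                for i in range(max(cpu_set, gpu_set)):
                    if i < cpu_set:
                        touch("cpu", ("cpu", i))
                    if i < gpu_set:
                        touch("gpu", ("gpu", i))
            return {c: round(hits[c] / accesses[c], 2) for c in hits}

        # Working sets that fit together vs. working sets that thrash each other:
        print(shared_lru_hit_rates(1000, cpu_set=400, gpu_set=400))  # high hit rates after warm-up
        print(shared_lru_hit_rates(1000, cpu_set=800, gpu_set=800))  # mutual eviction, hits collapse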
  • OreoCookie - Wednesday, October 27, 2021 - link

    No, the 24 MB = 2 x 12 MB are the shared L2 caches amongst the performance core clusters, the two efficiency cores share another 4 MB (so the M1 Pro and M1 Max have close to Zen 3 desktop-level L2 caches if you ignore the system level cache). These caches are not shared between CPUs and GPUs at all. Only the system-level cache of yet *another* 48 MB is shared amongst all logic that has access to main memory. Given that the total memory bandwidth is larger than what CPU and GPU need in a worst-case scenario, I fail to see how this is somehow an edge case.

    It seems the memory bandwidth is so large that it can accommodate all CPU cores running a memory-intensive workload at full tilt *and* the GPU running a memory-intensive workload, with room to spare. Even if you could saturate the memory bandwidth by also using the NPU (ML accelerator) and/or the hardware en/decoder, I think you are really reaching. This would be far beyond the capabilities of any comparable machine. Even much more powerful machines would struggle with such a workload.
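    To put numbers on the cache layout described above (M1 Max figures; the Pro halves the SLC to 24 MB), with a single-CCD desktop Zen 3 part alongside for scale; the Zen 3 numbers are the usual published ones, not something from this article:

        m1_max_mb = {"P-cluster L2 (2 x 12 MB)": 24, "E-cluster L2": 4, "System-level cache": 48}
        zen3_5800x_mb = {"L2 (8 x 0.5 MB)": 4, "L3": 32}

        cpu_private = m1_max_mb["P-cluster L2 (2 x 12 MB)"] + m1_max_mb["E-cluster L2"]
        print(f"M1 Max CPU-private L2 total: {cpu_private} MB")                  # 28 MB
        print(f"M1 Max total on-die cache  : {sum(m1_max_mb.values())} MB")      # 76 MB
        print(f"Ryzen 7 5800X (Zen 3)      : {sum(zen3_5800x_mb.values())} MB")  # 36 MB

    Only the 48 MB system-level cache is shared with the GPU and other blocks; the L2 pools are private to their CPU clusters.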
  • richardnpaul - Thursday, October 28, 2021 - link

    Yes sorry, I do know that, the 24 in 24/48MB was a reference to the M1 Pro which has half the shared buffer. That shared buffer, I'd need to go back and look at the access times (and compare it to Zen3 desktop) because it's almost on the other side of the chip from the cores.

    I do see that they tested a game at 4K, and I know that some games lean more heavily on the onboard RAM of dGPUs, and not all games have specific high-resolution 4K textures, so some use more RAM than others. It is mentioned on the second page that they didn't see anything push the GPU over 90GB/s of bandwidth, and I don't know if they were measuring during that testing run (I would expect that they were, but you know what they say about assumptions :D).

    I think that you're right and that the architecture team probably went overboard on the bandwidth anticipating certain edge case scenarios where the system has multiple tasks loading multiple parts of the CPU and we'll see some rebalancing in future designs. I would like to see a game run with or without mods that does stress the GPU memory subsystem (games aren't usually hammering the CPU bandwidth so more should be available to the GPU, which may very well never be able to saturate it by design, but the cache may be saturated). This will also tell us something about longevity of the SoC too.

    I don't think that I'm reaching; it's more that I see systems lasting 7+ years, and as newer generations of hardware move on, usage that was unusual when the hardware was new suddenly becomes commonplace, because newer hardware is an evolving target over time and sometimes software does actually utilise it. (Sometimes CPU bugs rob you of performance and make your hardware feel slow; other times it's just that software is a bit more demanding now than it was years ago when you bought it.)
  • OreoCookie - Friday, October 29, 2021 - link

    You shouldn't mix the M1 Pro and M1 Max: the article was about the Max. The Pro makes some concessions and it looks like there are some workloads where you can saturate its memory bandwidth … but only barely so. Even then, the M1 Pro would have much, much more memory bandwidth than any laptop CPU available today (and any x86 on the horizon).

    And I think you should include the L2 cache here, which is larger than the SL cache on the Pro, and still significant in the Max (28 MB vs. 48 MB).

    I still think you are nitpicking: memory bandwidth is a strength of the M1 Pro and Max, not a weakness. The extra cache in AMD's Zen 3D will not change the landscape in this respect either.
  • richardnpaul - Friday, October 29, 2021 - link

    The article does describe the differences between the two on the front page and runs comparisons throughout the benchmarks. Whilst it's titled as being about the Max, I found that it basically covered both chips, with the focus on what benefits, if any, the Max brings over the Pro, so I felt it natural to include what I now see is a confusing reference to 24MB, because you don't know what's going on in my head 😁

    From what I could tell, the SL cache was not described like a typical L3 cache, though I guess you could think of it that way; I was thinking of it as almost an L4 cache (hence my comment about its placement on the die: it sits next to the memory controllers and the GPU blocks, and quite far from the CPU cores themselves, so there would be a larger access penalty than for a typical L3, which sits very close to the CPU core blocks). I've gone back and looked again, and it's not as far away as I first thought; I'd mistaken where the CPU cores were.

    Total cache is 72MB (76MB including the efficiency cores' L2, and anything in the GPU); an AMD desktop Zen 3 chip has 36MB and will have 100MB with the V-cache, so it's certainly in the same ballpark, which is to say a lot for now (though I'm sure we'll see the famed 1GB within the next decade). The M1 Max is crazy huge for a laptop, which is why I compare it to desktop Zen 3, and also because nothing else with 8 cores is really comparable.

    I don't think it's a weakness; it's pretty huge for a 10TFLOPS GPU and an 8-core CPU (plus whatever the NPU etc. pull through it). I'm just not a fan of the compromises involved, such as RAM that can't be upgraded; though a 512-bit interface would necessitate quite a few PCB layers to achieve with modular RAM.
  • Oxford Guy - Friday, October 29, 2021 - link

    Apple pioneered the disposable closed system with the original Mac.

    It was so extreme that Jobs used outright bait and switch fraud to sucker the tech press with speech synthesis. The only Mac to be sold at the time of the big unveiling had 128K and was not expandable. Jobs used a 512K prototype without informing the press so he could run speech synthesis — software that also did not come with the Mac (another deception).

    Non-expandable RAM isn’t a bug to Apple’s management; it’s a very highly-craved feature.
  • techconc - Thursday, October 28, 2021 - link

    You're exactly right. Here's what Affinity Photo has to say about it...

    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."
  • richardnpaul - Thursday, October 28, 2021 - link

    They're right, which is why you see SAM (Smart Access Memory, i.e. Resizable BAR) on the newer AMD stuff these days, and why Nvidia did the custom interface tech with IBM and are looking to do the same in servers with ARM, to leverage these kinds of performance gains. It's also the reason AMD bought ATI in the first place all those years ago: the whole failed heterogeneous-compute effort (it must be galling for some at AMD that Apple have executed on this promise so well).
  • techconc - Thursday, October 28, 2021 - link

    You clearly don't understand what drives performance. You have a very limited view which looks only at the TFLOPS metric and not at the entire system. Performance comes from three things: high compute performance (TFLOPS), fast on-chip bandwidth, and fast transfer on and off the GPU.

    As an example, Andy Somerfield, lead for Affinity Photo app had the following to say regarding the M1 Max with their application:
    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."

    This is comparing the M1 Max GPU to a $6000, 300W part, and the M1 Max handily outperforms it. In terms of TFLOPS, the 6900XT has more than 2x the compute. Yet the high-speed, efficient design of the shared memory on the M1 Max allows it to outperform this more expensive part in actual practice, and it does so while using just a fraction of the power. That does make the M1 Max pretty special.
  • richardnpaul - Thursday, October 28, 2021 - link

    Yes, TFLOPS is a very simple metric and doesn't directly tell you much about performance, but it's a general guide (Nvidia got more out of their hardware than AMD, at least until the 6800 series, if you only looked at the TFLOPS figures). Please, tell me more about what I think and understand /s

    It's fastest for their scenario and for their implementation. It may be, and is very likely, that there's some specific bottleneck they're hitting with the W6900X that isn't a problem with the implementation details of the M1 Pro/Max chips. Their issue seems to be interconnect bandwidth: they're constantly moving data back and forth between the CPU and GPU, and with the M1 chips they don't need to do that, saving a huge amount of time, because the PCIe bus adds a lot of latency, from what I understand, so you really don't want to transfer back and forth over it. Maybe you don't need to; maybe you can do something differently in the software implementation, or maybe you can't and it's just a problem that's much more efficiently solved on this kind of architecture. I don't know, and wouldn't be able to comment, knowing nothing about the software or the problem it solves. What I don't take at face value is one person/company saying "use our software, it's amazing on only this hardware"; I mean, a la Oracle, right?
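
    To make the copy argument concrete, here's a minimal Metal sketch in Swift of what unified memory buys you. It's an illustration under my own assumptions, not Affinity's code; the buffer size is arbitrary and there's no compute kernel attached. A .storageModeShared buffer is a single allocation visible to both CPU and GPU, so there is no explicit upload or readback step, whereas on a discrete card you'd typically stage data into a .storageModePrivate buffer via a blit across PCIe.

    import Metal

    // Sketch: one unified-memory allocation, visible to CPU and GPU alike.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    let count = 1_000_000                      // arbitrary size for illustration
    let input = [Float](repeating: 1.0, count: count)

    // .storageModeShared: no separate VRAM copy, no PCIe transfer.
    let buffer = device.makeBuffer(bytes: input,
                                   length: count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    // The CPU can touch the very same bytes a GPU kernel would operate on.
    let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
    ptr[0] = 42.0

    // On a dGPU the equivalent flow is usually .storageModePrivate plus a
    // staging buffer and a blit; that round trip is what UMA removes.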

    When it comes to gaming performance, the 6900XT or the RTX 3080 seem to put this chip in its place, based on the benchmarks we saw (in fact, the mobile 3080 is basically an RTX 3070, so even more so, though that could be down to all sorts of issues already highlighted). You could say the GPU isn't great as a GPU but is great at one task, as a highly parallel co-processor for one piece of software; if that's the software you want to use, great for you, but if you want the GPU for actual GPU tasks it might underwhelm. Then again, in a laptop, with a maximum power draw of around 120W, it won't underwhelm for a few years yet, which is the point you're making and I'm not disputing; Apple will obviously launch replacements that put this chip in the shade in time.
  • Hrunga_Zmuda - Tuesday, October 26, 2021 - link

    From the developers of Affinity Photo:

    "The #M1Max is the fastest GPU we have ever measured in the @affinitybyserif Photo benchmark. It outperforms the W6900X — a $6000, 300W desktop part — because it has immense compute performance, immense on-chip bandwidth and immediate transfer of data on and off the GPU (UMA)."

    Ahem, a laptop that tops out at not much more than the top GPU. That is bananas!
  • buta8 - Wednesday, October 27, 2021 - link

    Please tell me how to monitor the CPU bandwidth (intra-cacheline reads and writes)?
  • mjptango - Wednesday, October 27, 2021 - link

    What I would like to see is a sustained benchmark comparison.

    What I mean is to run a CPU- or GPU-intensive test over an extended period of time to see the effect of thermal throttling.
    Clearly, with a more power-efficient SoC, the M1 family should demonstrate a considerable advantage over other mobile implementations.
    My Intel Mac heats up once the GPU kicks in, and I would bet that if an RTX 3080 is exercised long enough, its performance will drop much sooner than an M1 system's.
    So it would be very valuable to know these numbers, because when we work, we don't just work for a minute or so; we process files over the course of an hour.
  • Whiteknight2020 - Wednesday, October 27, 2021 - link

    On a screen the size of a magazine, with a keyboard 3 inches away from it? Ergonomic it is not. This obsession with laptops with tiny displays, and ergonomics that cause long-term physical harm, is just getting silly.
  • OreoCookie - Wednesday, October 27, 2021 - link

    My first computer (an Amiga 500) had a 14" CRT with a 12~13" usable screen area. My iPad Pro has more screen space than that. Ditto for my first PC. Only in the late 1990s did I get a used 19" Trinitron CRT that had about 18" usable. That isn't so different from my 16" MacBook Pro in terms of size, although the latter has much more screen real estate in practice. It covers my field of view without my having to swivel my head. Laptops are fine.

    Don't get me wrong, I still like external monitors, but laptops these days are great to get work done.
  • Whiteknight2020 - Wednesday, October 27, 2021 - link

    Specifically US developers; the rest of the world, not so much. And with no x86 virtualisation layer, the new M1 Macs are even less enticing: you can't run a full VMware/K8s stack, so you need two machines.
  • Focher - Wednesday, October 27, 2021 - link

    Funny how for the last year, developers have been going bonkers about their M1 MacBooks being superior to equivalent x86 hardware.
  • razer555 - Wednesday, October 27, 2021 - link

    The GPU performance for gaming is very disappointing. Rosetta 2 only translates CPU code, so it doesn't affect GPU performance. Apple said the 32-core GPU is roughly equal to a mobile RTX 3080, and yet it performs below a mobile RTX 3060. I don't think optimization is the problem; the raw gaming performance just isn't there so far.
  • Focher - Wednesday, October 27, 2021 - link

    It's almost like Apple has never cared about gaming on the Mac and engineered the hardware for entirely different purposes - which it totally excels at.
  • Vitor - Thursday, October 28, 2021 - link

    It is not disappointing, since it hasn't even been tried. You can say emulating x86 games is disappointing, but I bet a game fully optimized for this system would manage 1440p at 100fps, no problem.
  • Lock12 - Thursday, October 28, 2021 - link

    Why didn't the table show the power consumption in the game Shadow of the Tomb Raider?
  • aparangement - Thursday, October 28, 2021 - link

    Hi Andrei, thanks a lot for the review.
    About the memory bandwidth: I am wondering if the numbers are comparable with those from the STREAM Triad benchmark, like we saw in the EPYC 7003 review?
    e.g. https://images.anandtech.com/doci/16529/STREAM_tri...

    If so, that's very impressive; the M1 Max actually outperforms a 2-way x86 workstation.
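
    (For context, STREAM Triad is just a[i] = b[i] + q*c[i] over arrays far larger than the caches, with bandwidth computed from the bytes moved. A rough single-threaded Swift sketch of the idea is below; the array size is arbitrary, and an unoptimised, single-threaded loop like this won't come close to saturating an M1 Max, whereas the published numbers come from tuned, multi-threaded C/Fortran builds of STREAM.)

    import Foundation

    // Rough sketch of the STREAM "Triad" kernel: a[i] = b[i] + q * c[i].
    let n = 50_000_000                      // arbitrary; must dwarf the caches
    let q = 3.0
    var a = [Double](repeating: 0, count: n)
    let b = [Double](repeating: 1, count: n)
    let c = [Double](repeating: 2, count: n)

    let start = Date()
    for i in 0..<n {
        a[i] = b[i] + q * c[i]
    }
    let seconds = Date().timeIntervalSince(start)

    // Triad touches three 8-byte doubles per iteration (two reads, one write).
    let bytes = Double(3 * n * 8)
    print(String(format: "Triad: %.1f GB/s (check %.0f)", bytes / seconds / 1e9, a[n - 1]))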
  • ruthan - Friday, October 29, 2021 - link

    So, great on paper and for some number crunching, compiling and maybe some video editing... but where you really need performance, for gaming, it sucks, and all of Apple's lofty paper specs are gone. I know there is a translation layer involved, but it's Apple's choice to rely on it.
  • richardnpaul - Sunday, October 31, 2021 - link

    I think that that is a bit of an unfair characterisation at this stage.
  • jojo62 - Saturday, October 30, 2021 - link

    I am a programmer. Not a games programmer, but I use my MacBook Pro 2019 to connect to my work computer. I run databases like Oracle 21c, Microsoft SQL Server, and others in Windows 11 on my Mac. The performance is great and these laptops last forever. I still have my MacBook Pro 2012 and it works. I've had many, many computers over the years and they all seem to die after 3-4 years, but not my Apple computers. I think PC makers have implemented planned obsolescence in their products. I am upgrading to the new MacBook Pro with the M1 Max soon.
  • razer555 - Saturday, October 30, 2021 - link

    https://www.youtube.com/watch?v=xRPPLrlUeSA

    AnandTech, it seems you really need to test with Baldur's Gate 3, which can manage 100~120 FPS at 4K.
  • ailooped - Monday, November 1, 2021 - link

    Some 7(?) years back there were proof-of-concept ARM computers that showed you can run many, many processors in parallel. I am not that technically apt, of course, but this seems like Apple taking advantage of that fact.

    They are just doubling everything. I am guessing we will see 64 GPU cores and perhaps a maximum of 128 for the Mac Pro, with the M2's CPU cores also doubling to 24 or something like that.

    Yes, Apple chose to say goodbye to Windows compatibility. However, they have a HUGE developer base in iOS, and the two platforms (Mac and iOS) are now on par and running on the same silicone.

    This is a disruption to the PC world no matter how you slice it. Of course Intel can see it, hence the smear ads against Apple. Microsoft is quietly tinkering with its ARM version of Windows, just to see if Apple can actually take off with this.

    The PC ecosystem is already suffering from the influx of powerful smartphones/tablets. And now Apple is in 100% with ARM computers, with a HUGE iOS user base that might be seduced by a seamless transition from iPhones to Macs? Perhaps... Understandable that Apple is trying.

    Do you really mind, though? The Intel/AMD/Nvidia trifecta seems quite stagnant on CISC. Perhaps it's better for the PC ecosystem to be on the same silicone as phones and tablets, to benefit from ALL that R&D money going into it...
  • ailooped - Monday, November 1, 2021 - link

    silicon...
  • ailooped - Monday, November 1, 2021 - link

    To be quite honest, I am not sure I want to see Apple, with their approach to hardware, gain tons of market share in desktops/laptops: no upgradeability, RAM integrated into the CPU package... I DO however think Intel/AMD/Nvidia could do with a fourth player in the GPU/CPU game.
  • jmmx - Tuesday, November 2, 2021 - link

    It would be nice to see some discussion of the NPU. I imagine it would be hard to find any tests across platforms but some type of evaluation would be helpful.
  • bgnn - Tuesday, November 2, 2021 - link

    Clarification on the node advantage: I've designed in both 7nm and 5nm. The power and performance increases are marginal compared to the good old days. Back when we switched from 32nm to 28nm we had more than a 70% perf/power increase; 7nm to 5nm is more like 25% at best. Density is the main benefit. Interconnect is killing it for smaller nodes: gate contacts are tiny and incredibly resistive.
  • Hrunga_Zmuda - Sunday, November 7, 2021 - link

    Anyone who actually designs in this corner of the computer industry must be familiar with the law of diminishing returns. Right?
  • sthambi - Wednesday, November 3, 2021 - link

    Hi Anand, I stumbled across your blog post and enjoyed reading it. I'm a professional video editor and photographer. I ordered the 32-core, 64GB M1 Max for $3,900, upgrading from the late-2015 5K iMac. I personally feel like I'm over-speccing my configuration, but I don't want to look back two years from now and feel like I lost $4K when Apple doubles performance again. Do I really need this heavy a configuration for Premiere Pro CC, 5K video editing at most, Canon raw images, and simultaneously running Creative Cloud applications? What would you recommend that could help me save money without compromising on performance? Is my decision to go for the full configuration a bad one?
  • MykeM - Sunday, November 14, 2021 - link

    Read the byline (the names under the headlines). The site’s namesake- Anand- left a few years ago. He no longer writes here. The people replacing him are every bit as capable but none of them are actually named Anand.
  • razer555 - Thursday, November 4, 2021 - link

    https://www.youtube.com/watch?v=OMgCsvcMIaQ
    https://www.youtube.com/watch?v=mN8ve8Hp4I4

    AnandTech, your graphics tests seem to be wrong.
  • Sheepshot - Sunday, November 7, 2021 - link

    Anand tech = Apple shills.

    The M1 beats both the M1 Pro and the Max in power efficiency: it draws 50% of the watts but provides almost 65-70% of the performance in most relevant benchmarks.
  • Hrunga_Zmuda - Sunday, November 7, 2021 - link

    Shills?

    The 90s called and want your insult back.
  • evernessince - Wednesday, November 10, 2021 - link

    HWUB just did a review of the M1 Pro in actual applications, and performance is good, but not nearly as impressive as AnandTech suggests. These chips are competitive with laptop chips, but you certainly don't need to bust out server-class components for comparison as the article suggests. Performance is very good in certain areas and very poor in others; most of the time it's about as good as x86 laptop chips. The GPU is decent, but given the price, you can get much, much more performance on x86 for much less money.
  • Motti.shneor - Sunday, November 14, 2021 - link

    I think I heard the M1 Max and M1 Pro have different numbers of CPU cores? Here you say they're identical?

    Also, I keep asking myself why a tech visionary like yourself doesn't see the "big picture" and the bold transitional step in computing taken here.

    For me, that sheer "horse power" means very little. I'm using a 1st-generation M1 Mac mini in a medium configuration, alongside an i9 MacBook Pro from 2020, and the Mini is SO MUCH BETTER in each and every way that matters (except of course for the terrible bugs, deteriorating quality and bad behavior at boot time, the EFI and such).

    As a power user, and Mac/iOS software engineer/tech-lead for over 35 years, and with my pack of 400 applications installed, some native, some emulated, and my 0.5TB library of photos and 0.5TB library of music.... well, with all this, I can testify that MacMini "feels" 5 times faster than the MBP, in most everything I DO. Maybe it'll fail on benchmarks, but I couldn't care less. Rebuild a project? snap. Export a video while rescaling it? Immediate! heavy image conversion? no time. Launch a heavy app? before you know it. It FEELS very very fast, and that's 1st generation.

    What I think IS IMPORTANT, and is not being said by anyone, is that the whole mode of computing is moving back from "general purpose" to "specialized hardware". You can no longer appreciate a computer by its linear-programming CPU cycles, and if you do, you just get a completely wrong evaluation.

    Moreover, you CAN'T just port some general C code from somewhere and expect it to run fast. You MUST be using system APIs at SOME LEVEL that will dispatch your work onto the specialized hardware, so you gain from all those monstrous engines under the hood. If you just compile some neural-network engine, or drag it over in Python or something, it'll crawl and it will suck. But if you use the Vision framework from Apple, you'll get jaw-dropping performance. You MUST build software FOR the M1 to have the software shine. This is a paradigm shift that contradicts everything we've seen in the last 40 years (the move from custom hardware to general-purpose computing devices).
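
    (To make that concrete, here's a minimal Swift sketch of the kind of thing I mean; the image path is made up and it assumes a recent macOS SDK. You describe the request, and Vision decides where the work runs, Neural Engine, GPU or CPU, so you get the accelerators without touching them directly.)

    import Foundation
    import Vision

    // Sketch: image classification through the Vision framework, which
    // dispatches the model to the Neural Engine / GPU where available.
    let url = URL(fileURLWithPath: "/tmp/example.jpg")   // hypothetical input
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: url, options: [:])

    do {
        try handler.perform([request])
        for observation in (request.results ?? []).prefix(3) {
            print(observation.identifier, observation.confidence)
        }
    } catch {
        print("Vision request failed: \(error)")
    }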

    If history repeats itself, as it has so many times in the past, soon enough all the competitors in the computing arena will be forced into similar changes so as not to lose market share, and we'll have a very strange market that is much harder to compare, because the Apple guys will always bring Apple software highly optimized for their hardware, and the "other" guys will pull out their "specialized" software for their special processors...

    I am quite thrilled, and I really want to have one of them M1 Max machines, just to feel them a little.

    Despite the long threads underneath, I think gaming is not even secondary on the list of important aspects. I also predict that game makers will skip the Mac in the future just as they do today, not because they don't like it, but because of tradition and because of the high-priced entry point of powerful Mac computers. Still, corporate America is buying MBPs like mad, and they'll keep doing that for the coming 3-5 years.
  • Cloakstar - Tuesday, December 21, 2021 - link

    One reason the M1 Max performs so well is that, even though the CPU is in control, the memory hierarchy is more GPU+CPU than the typical CPU+GPU, so the usual APU memory bottleneck is gone. :D AMD APUs, for example, are highly memory bound, doubling in performance when you go from 1 stick of RAM to 4 sticks with bank+channel interleaving.
  • wr3@k0n - Friday, December 24, 2021 - link

    For $3,499, no shit it's competing with server-grade parts; it had better at that price. A PC still ends up being cheaper and infinitely more repairable and upgradeable. This article doesn't address many of the drawbacks of the Apple ecosystem, and it will take more than "close to PC" performance.
  • xol - Monday, March 21, 2022 - link

    quote "Apple’s die shot of the M1 Max was a bit weird initially in that we weren’t sure if it actually represents physical reality – especially on the bottom part of the chip we had noted that there appears to be a doubled up NPU – something Apple doesn’t officially disclose"

    It looks like the old die shots were deliberately misleading; it's easy to compare the old and new images of the M1 Max post-M1 Ultra release. Where the "2nd NPU" was appears to be where the interposer hardware is today.

    Probably intentional on Apple's part (perhaps the article could be updated retrospectively?).
  • techbug - Tuesday, August 16, 2022 - link

    Are the binaries compiled with the LP64 or ILP32 data model? Thanks.
