682 Comments

  • Der Keyser - Tuesday, November 17, 2020 - link

    This release is going to be an interesting change in the industry
  • Kevin G - Tuesday, November 17, 2020 - link

    There is also the business side of it, which is going to be full of Game of Thrones-like drama with nVidia attempting to acquire ARM. Apple and nVidia notoriously don't get along. Apple's relationship with Imagination Technologies is also strained, but they've seemingly made up so that Apple can gain access to some ray tracing acceleration designs. Apple still seems to be on good terms with AMD, but is moving away from them as a supplier on the Mac side due to the ARM transition in general.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Thankfully chaos is a ladder.
  • helios24 - Tuesday, November 17, 2020 - link

    Apple has an architecture license with ARM, basically the broadest license that ARM sells. If you didn't know this, ARM was founded as a joint venture between Apple and Acorn. Apple's license is also perpetual. That is the reason why Apple isn't interested in acquiring ARM. Apple does not use ARM designs; they use certain principles present in the ARM architecture and the ISA. Apple then makes CPUs that can run the ARM instruction set, which is why you see such a big difference between Apple-designed SoCs and Qualcomm or ARM designs.
  • Tams80 - Tuesday, November 17, 2020 - link

    Apple's not interested in buying ARM mainly because it would never be allowed to happen (without Apple splitting up), so trying to do so would be a complete waste of time.
  • dejuknow - Tuesday, November 17, 2020 - link

    Nope. Apple would still have no reason to purchase ARM even if it were allowed to happen. helios24 is completely correct. Apple has no interest in licensing technology to third parties.
  • nevernotmaybe - Friday, November 20, 2020 - link

    Apple cares about nothing other than making money. If they could have a subsidiary printing money while they continue as normal, they would do it instantly. The idea that they wouldn't is laughable.
  • skingers - Monday, November 23, 2020 - link

    Not laughable at all. Apple have tonnes of free cash that they could use to buy something they don't really need that makes money, but they don't do it. Instead they buy back their own stock. The other commenters here are correct: Apple already have, in perpetuity, what they need for their chip designs, and they are backing themselves to be the best in the business at it.
  • helpmeoutnow - Thursday, November 26, 2020 - link

    but they were never the best in the business, so how come?
  • alysdexia - Monday, December 28, 2020 - link

    die, troll http://google.com/search?q=Apple+TCO
  • Rrrumble - Tuesday, January 25, 2022 - link

    You may have missed the part where Apple started the "business" (the smartphone, with the iPod as a first step) and grew and expanded it with savvy products and marketing while others tried to catch up.
  • Henry 3 Dogg - Friday, November 27, 2020 - link

    "...Apple have tonnes of free cash that they could use to buy something they don't really need that makes money, but they don't do it. Instead they buy back their own stock. ..."

    Not true. Yes, Apple does buy back their own stock, but they also have a subsidiary called Braeburn Capital, which is sitting on around $250 billion worth of investments.
  • theonetruestripes - Tuesday, December 1, 2020 - link

    Apple _likes_ focus. When I worked there, SJ made the point that Apple had incredible brand loyalty, but that making some products that "make money" but are not as good as the rest of the lineup damages that brand loyalty. That seemed pretty obvious to me at the time. The other point he made is that projects cost attention. If Apple launched a line of drink bottles it would take his time, designers' time, marketing time, and many others'. Some of that you can "just hire" if you have enough money, but you can't "just hire" more hours into the CEO's day, or SVPs' days.

    To a certain extent that might not matter for something as distant from Apple's core business as, say, owning a CPU design firm. If Apple bought ARM and the ARM reference designs are "meh", very few people will decide that means the new MacBook Pro is "meh" by association. However, it would still require some CEO time to decide "this new ARM subsidiary can't be called anything Apple-related, and needs to make sure nobody is allowed to buy products from them and claim they are Apple-related - we absolutely don't want a new "Apple Powered Dell" marketing campaign anywhere!"; how much money is that time worth? I'm sure there is some number at which you could say "if buying ARM makes this much per year it is worth 300 hours of Tim's time to close the deal and 16 hours per quarter to make sure it doesn't screw anything up", but it may be a much higher number than ARM actually generates.

    (Or it might not. I expect a large part of Apple not being excited about buying ARM is the low likelihood of getting regulatory approval, the likelihood of needing to appear before Congress to defend the purchase if it is in fact approved, and the lack of any meaningful value to Apple (i.e. Apple has all the ARM license it needs to do anything it decides to).)
  • alysdexia - Monday, December 28, 2020 - link

    cares !-> they; 1 != 2; and you liar
  • dysonlu - Sunday, February 21, 2021 - link

    If Apple buys Arm, not only does it have to keep licensing out Arm's designs, but it may risk being forced to license its own CPU designs and innovations as well, since with Apple+Arm being a single entity there would no longer be any distinction in CPU intellectual property between the two companies.
  • Henry 3 Dogg - Friday, November 27, 2020 - link

    Apple owned 43% of ARM for several years. There are more reasons than licensing technology to buy all or part of a company.

    There are some very good reasons why Apple might choose to own 20% of ARM.
  • danbob999 - Tuesday, November 17, 2020 - link

    Actually it's the opposite. It's the cheapest/narrowest license from ARM. More expensive licenses include access to full Cortex cores, not only the instruction set.
  • Spiderman10 - Tuesday, November 17, 2020 - link

    No, helios24 is correct. The architecture license is at the top of the licensing pyramid. Anandtech actually wrote a piece about this here: https://www.anandtech.com/show/7112/the-arm-diarie...
  • ws3 - Tuesday, November 17, 2020 - link

    Danbob is confused by the fact that Apple doesn't use the ARM designs. He assumes that lack of use results from lack of access.
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    They still based some designs closely on the generic ones. I think the A7 was shown to be only slightly modified in Anandtech's review (the first 64-bit chip, with ARMv8(?)).
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    Obviously their new designs have by now moved way off the beaten path compared to the canned designs.
  • dotjaz - Wednesday, November 18, 2020 - link

    You must be smoking something really good. The A7 was a 6-wide design while the CA57 was only 3-wide. Cyclone also has 4/2/2/3 (Int/Branch/LS/NEON) units while the A57 only had (2+1)/1/2/2. That's a completely different design.
  • RedGreenBlue - Wednesday, November 18, 2020 - link

    I was thinking of the A6, which was the first modification of ARM's architecture; before that they were fundamentally copies. It's not easy to remember which article of Anand Shimpi's commentary I read 7 or 8 years ago. https://www.anandtech.com/show/6330/the-iphone-5-r...
  • danbob999 - Thursday, November 19, 2020 - link

    They are dumb if they pay for designs which they do not use. The instruction set must be cheaper; otherwise ARM has it backwards.
    It's like saying that the cost of food at a grocery store is higher than a complete meal at a restaurant.
  • michael2k - Thursday, November 19, 2020 - link

    It's actually more accurate to say, "Paying for the time of the restaurant's menu designer costs more than either the groceries or the meal"

    With an architectural license, they get access to a specification, which is closer to a menu, recipes, and a shopping list than to a meal or groceries.
  • dotjaz - Wednesday, November 18, 2020 - link

    No, helios24 is INCORRECT. It's the top of the licensing pyramid for sure, but it also doesn't include any hard IP. It's the broadest in use case, as you can do ANYTHING with it as long as you are ISA compliant. But it's also the narrowest in terms of the ARM IP portfolio. For example, Huawei still holds an ARM architectural license and can design their own ARM cores, but they don't have access to anything newer than the CA77 because that's a different license.

    An architectural license is also the cheapest *once you have a certain volume*. The initial licensing fee is high, BUT you don't pay much royalty on a per-core basis because you don't use any ARM IP other than the ISA.
  • dotjaz - Wednesday, November 18, 2020 - link

    Maybe this will help you understand more. The top of the pyramid actually doesn't have access to ARM's standard IP portfolio at all.

    https://semiaccurate.com/2013/08/07/a-long-look-at...
  • michael2k - Thursday, November 19, 2020 - link

    The article here contradicts you: https://semiaccurate.com/2013/08/07/a-long-look-at...

    On top of the pyramid is both the highest cost and lowest licensee count option, but those two factors are probably not directly related. The reason is this one is called an architectural license and you don’t actually get a core, you get a set of specs for a core and a compatibility test suite.
  • mjkpolo - Thursday, November 19, 2020 - link

    Actually nVidia purchased ARM lol
  • Henry 3 Dogg - Friday, November 27, 2020 - link

    "ARM was founded as a joint venture between Apple and Acorn."

    No. ARM was founded by Acorn spinning out its in-house-developed ARM chip as a separate company. Apple bought in later as an investment, and to prevent takeovers that might threaten its Newton product.
  • Ppietra - Friday, November 27, 2020 - link

    ARM really was founded as a joint venture between Apple, Acorn and a third company (VLSI Technology).
    The technology behind it was based on technology from Acorn, but the company was established as a joint venture to develop the processor for the Newton. That was the objective of the company's creation.
  • darwinosx - Wednesday, November 18, 2020 - link

    Money talks and bullshit walks.
  • lilmoe - Tuesday, November 17, 2020 - link

    Not sure how you came to this conclusion with such a poor and unprofessional review. It seemed to me that AMD is killing it on all fronts, but the folks here made it really hard to tell with all these purposefully misleading charts.

    AMD wins. Come 5nm with Zen4 on laptops, poof goes all the drivel currently in the tech media.

    It's disappointing to see this from Andrei nonetheless. Very poor quality, and very misleading benchmarks.
  • ws3 - Tuesday, November 17, 2020 - link

    Stage One: denial
  • Hifihedgehog - Tuesday, November 17, 2020 - link

    Stage One: comparing apples to apples, or 5nm to 5nm. AMD's 7nm Zen 2, not even their latest and greatest, is doing admirably against a 5nm product. Pit Zen 3, which is a good deal faster than Zen 2 though still held back by 7nm, against it and you have a totally different outcome. Pit Zen 4 against it, where the Zen microarchitecture is given the legs to run on 5nm, and it's no contest.
  • defferoo - Tuesday, November 17, 2020 - link

    Show me a 5nm Zen 4 CPU to test against then. Oh, it doesn't exist. I guess we can't do that comparison yet. What matters here is availability, and Zen 4 won't come for another year; the M1 is here now.

    The closest thing to apples to apples now is to use the same TDP for comparison. Stack up the Ryzen 7 4800U against the M1 in a MacBook Pro (~15W). The M1 is faster in both ST and MT despite the 4800U having 8 cores with SMT.

    When AMD was kicking Intel's butt on the 7nm process and Intel was on 14nm, nobody said "but you need to compare like to like!" except for Intel fans. Now it's Intel/AMD vs. Apple, and only those in denial are demanding a fair comparison on the same process node.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I don't see the M1 having faster multicore than the 4800U. Which benchmark am I missing?
  • Kuhar - Wednesday, November 18, 2020 - link

    You are absolutely right! All that hype around the M1 was just exaggerated.
  • defferoo - Wednesday, November 18, 2020 - link

    SPEC2017 MT in this very article, and Geekbench. We should not over-index on one very specific benchmark (Cinebench R23) when we have more comprehensive ways to measure performance.
  • halo37253 - Tuesday, November 17, 2020 - link

    Actually the Ryzen 4800U is not only beating the M1 in multithreaded Cinebench, but doing so with similar power usage. This is Zen 2. Zen 3 will easily compete with Apple silicon in terms of performance/watt, and in most cases beat it. At 7nm, no less. It only makes me wonder why Apple is so willing to fracture their already pretty small Mac OS fanbase.

    The 4800U in 15W mode uses around 20-25 watts of power running Cinebench, vs. the M1 using around 22 watts. Sure, it doesn't have the lead in single-thread performance, but it's pretty close when the M1 is running x86 apps. Zen 3, as we know, is a massive improvement in this area. And the 4800U is, single-thread-wise, largely clock-limited to keep its power usage in check.

    I just don't see Apple beating out AMD any time soon.

    The 4800U still uses GCN graphics cores, so expect a huge gain when they move up to RDNA or RDNA2 (hopefully they jump to 2).

    Apple does have years of experience building tightly integrated SoCs, and this is where this chip shines. It clearly shows how well ARM can perform. But this is about as cutting-edge as Apple has been able to get their chip. AMD's focus is still mostly on the data center, so the fact that their mobile devices do so well is a testament to how well suited Zen is to scaling down.

    Geekbench is a joke of a benchmark and was only ever good for comparing devices in the same family. There are wide score changes with the same hardware depending on the OS it's running on. Never use it to compare different CPU architectures or even two different operating systems.
  • Spunjji - Tuesday, November 17, 2020 - link

    @halo37253 I suspect you're largely correct based on what we're seeing in the benchmarks here.

    Of course, the answer to why Apple would do it is clear: they love vertical integration. They'll eventually be able to translate this into power/performance advantages that will be difficult to assail with apps written specifically for their platform.
  • mdriftmeyer - Friday, November 20, 2020 - link

    Apple will have to modify their future M1s to accommodate PCIe, because a large portion of the audio/video professional world needs it--in fact we all rely on DMA over PCI for Thunderbolt to reduce latency. There's nothing like throwing away a $5k-$25k stack of audio interfaces, mic pres and more just because Apple wants to drop that; or we just simply dump Apple, move back to Windows and deal with DLLs. I hate Windows but I sure as hell won't drop expensive gear tied to Dante Ethernet and TB3 interfacing with various audio interfaces and rack mount hardware because Apple thinks the Pro market only needed the Mac Pro one-off before dropping us off a cliff.

    No one in the world of professional music uses Logic Pro's stock plugins, and the average track has anywhere between 80-200 channel strips to manage one mix. If you think the M1 or its successors with this type of tightly joined unified memory system will satisfy people, you are just not familiar with how many resources professional music or film production requires.

    Let's not even talk about 3D modeling for F/X in films or full-blown Pixar-style film shorts, never mind full-length motion pictures. Working in 8K and soon 16K film with real-time scrubbing will demand new versions of the Mac Pro's Afterburner and upgraded Xeons [or, if they were smart, Zens], but definitely not M-series SoCs.
  • Spunjji - Monday, November 23, 2020 - link

    @mdriftmeyer - I don't see that any of the requirements you've mentioned here would preclude Apple producing an M1 successor that would be capable of fulfilling them. In particular you mentioned 8K video scrubbing, which the M1 can already do better than the average Xeon. I doubt they'd throw away the audio market entirely over this switch - I guess we'll just have to wait and see what the next chips look like.
  • varase - Wednesday, November 25, 2020 - link

    Most people are looking at these first Apple Silicon Macs wrong - these aren't Apple's powerhouse machines: they're simply the annual spec bump of the lowest end Apple computers with DCI-P3 displays, Wifi 6, and the new Apple Silicon M1 SoC.

    They have the same limitations as the machines they replace - 16 GB RAM and two Thunderbolt ports.

    These are the machines you give to a student or teacher or a lawyer or an accountant or a work-at-home information worker - folks who need a decently performing machine who don't want to lug around a huge powerhouse machine (or pay for one for that matter). They're still marketed at the same market segment, though they now have a vastly expanded compute power envelope.

    The real powerhouses will probably come next year with the M1x (or whatever), rumored to have eight Firestorm and four Icestorm cores. Apple has yet to decide on an external memory interconnect and multichannel PCIe scheme, if they decide to move in that direction.

    Other CPU and GPU vendors and OEM computer makers take notice - your businesses are now on limited life support. These new Apple Silicon models can compete up through the mid-high tier of computer purchases, and if as I expect Apple sells a ton of these many will be to your bread and butter customers.

    In fact, I suspect that Apple - once they recover their R&D costs - will be pushing the prices of these machines lower while still maintaining their margins - while competing computer makers will still have to pay Intel, AMD, Qualcomm, and nVidia for their expensive processors, whereas Apple's cost per SoC goes down the more they manufacture. Competing computer makers may soon be squeezed by Apple Silicon price/performance on one side and high component prices on the other. Expect them to be demanding lower processor prices from the above manufacturers so they can more readily compete, and processor manufacturers may have to comply, because if OEM computer manufacturers go under or stop making competing models, the processor makers will see a diminishing customer base.

    I believe the biggest costs for a chip fab are startup costs - no matter what processor vendors would like you to believe. Design and fab startup are _expensive_ - but once you start getting decent yields, the additional costs are silicon wafers and QA. The more of these units Apple can move, the lower the per unit cost and the better the profits.

    The real threat to OEM computer and processor makers is economic - and the fact that consumer publications like Consumer Reports will probably _gush_ over the improvements in battery life and performance.

    Most consumers are not Windows or macOS or ChromeOS fanboys - they just want a computer which is affordable, has decent build quality, and gets the job done. There are aspirational aspects of computer purchases, and M1 computers shoot waaayyy above their peers. This can mean a potential buyer _doesn't_ have to buy way up the line for capabilities he or she may want sometime during their ownership window, and these computers will last a long, long time and will not suffer slowdowns due to software feature creep.
  • Eric S - Tuesday, November 17, 2020 - link

    Remember that this is designed to be Apple’s lowest end Mac chip. Their Intel i3. Wait until the big chips come out next year.
  • BushLin - Wednesday, November 18, 2020 - link

    ... Your speculation may or may not be correct, but next year will see 5nm Zen 4, which is actually announced rather than rumored.
  • jospoortvliet - Wednesday, November 18, 2020 - link

    Sure, and a 3nm M2. Different generation with different processes, etc. But today, the M1 has the best single core and at lower power comes close to octacores despite having only 4 fast and 4 slow cores. I wish I could buy it with Linux on it...
  • dysonlu - Sunday, February 21, 2021 - link

    "makes we wonder why Apple is so willing to fracture their already pretty small Mac OS fanbase"

    You have it upside down. It is exactly BECAUSE it has a small fanbase that it can afford to do this kind of migration. (The large and heterogeneous "fanbase" on Windows is the big Achilles' heel for Microsoft when it comes to making any significant change.) There will be very little "fracture" of Apple's fanbase, if any at all. The fans will gladly move to Mx CPUs given the advantages over Intel.
  • adriaaaaan - Thursday, November 19, 2020 - link

    People are giving Apple too much credit here; this is only impressive because of the process advantage, which has nothing to do with Apple.

    People are forgetting that Macs have a tiny market share and that's not likely to change any time soon. You wouldn't know it because journos tend to use Macs, therefore they think everyone does.

    If anything I hope this kicks AMD into gear; they are still releasing GCN designs. Let's see who's boss when they release 5nm RDNA 2.
  • Spunjji - Thursday, November 19, 2020 - link

    "this is only impressive because of the process advantage"

    False. A crap core on a high-tech process will still produce bad results; you only have to look at the last bunch of Zhaoxin CPUs based on the old Via tech.

    If this were just about process node you'd expect to see lower power but with limited performance. As it is, they manage both extremely low power *and* very competitive performance. Beating Intel is no small feat, even in their current incarnation.
  • Zagor Te Nay - Sunday, November 22, 2020 - link

    Unrelated to how good the M1 is - and I think it is really darn good, and will only get better as devs start supporting it natively (although Rosetta 2 seems to be doing fine, all things considered) - Intel is not hard to beat. AMD has done it while having much less money than Apple has.

    As someone said, Intel has stagnated themselves out of competition. They are more responsible for their own sad current situation than AMD or Apple, really.
  • Spunjji - Monday, November 23, 2020 - link

    @Zagor Te Nay - I don't think the fact that AMD have finally clawed out a lead over Intel indicates that they're easy to beat.

    Nvidia had a crack at CPU design a while back and were forced to pack it in. Samsung have tried to out-engineer Apple with large ARM core designs and have failed. It's not clear whether Qualcomm can't compete or can't be *bothered* to compete, but they've never come within a year of Apple's designs and are usually around 18 months behind.

    These are all large, wealthy, serious organisations. To be honest I'm impressed by Apple, and even more so by AMD.
  • beowulfey - Tuesday, November 17, 2020 - link

    I mean, the point of benchmarks is to compare CPUs that are available today, right?

    In the hypothetical future where Zen 4 is comparable to an M1, I would counter that by then the latest Apple M3 or whatever will have improved as well, so...
  • Tams80 - Tuesday, November 17, 2020 - link

    But Zen 2 is roughly comparable to the M1.

    No one is claiming that other future processors will only match the M1. Well, perhaps other than you in your imagination.
  • halo37253 - Tuesday, November 17, 2020 - link

    The M1 is slower than the 4800U when it comes to multithreaded workloads, even on these video compression tests... while the 4800U uses only slightly more power at most: 22 watts vs. 25 watts while running Cinebench...

    Zen 3 mobile will be out before the M2, and will most likely have no problem matching or beating the M2 in nearly any task while using the same amount of power. While being 7nm.

    The only reason the M1 is even remotely impressive is largely thanks to 5nm. Apple managed to compete with Zen 2 in terms of power efficiency using 5nm, even though Zen 2 is 7nm. This M1 chip is no more impressive than the 4800U in terms of performance/watt. The M1 just has stronger single-thread and weaker multi-thread performance...
  • Spunjji - Tuesday, November 17, 2020 - link

    @halo37253 - have to disagree with you on that part. A 5nm process does not magically make a 3.2GHz CPU act like one boosting north of 4GHz. The M1 is particularly impressive for power draw, which has a lot to do with that process, but it's also quite fast in its own right. Beating out Intel and duelling with a newly-resurgent AMD is an impressive showing for their first SoC designed for anything more than an iPad.

    It's also impressive that it is even on 5nm in the first place. It would have taken lots of work between designers and the foundry to pull that off a year before AMD will make the move.
  • halo37253 - Tuesday, November 17, 2020 - link

    I probably was a little too critical.

    Yes, the M1 deserves all the praise it can get. But some of that praise should go to TSMC; they are on fire. TSMC and Samsung have leapfrogged Intel. And honestly, if Intel's fabs were able to keep up, this move to ARM would have been more questionable. I too think it made more sense to go with their own chip than to risk the Mac lineup with AMD processors.

    I just wonder how well they can scale their ARM chips up, if they ever do. If they ever really want to transition the MacBook Pro 16, iMac or Mac Pro, we need a Ryzen competitor. Intel was already behind in these areas and users have been wanting a high-core-count Mac for a long time now. Sadly, the idea of running VMs on a Mac is looking grim.

    Apple's silicon is just what I figured it would be when it was allowed to actually suck power and stay cool. The 22-25 watt wall is most likely firmware-enforced to keep the chip from pulling more power than it was designed for. GPU and CPU performance is top notch. This is what ARM should be. Now if only Apple and MS would work together to get Windows on ARM working in Boot Camp.

    I just hope we one day see an 8-16 core M-series chip from Apple packing only high-power cores. I'd love to see an ARM chip with a TDP of 65-95 watts that doesn't consist of 100 cores.

    Many people have been under the spell that Apple's silicon is somehow magically leagues above everyone else's. They are no doubt good, and do give AMD a run for their money. Funny to say that both AMD and Apple are making Intel look bad.

    I've been wanting an ARM laptop for a long time now, TBH, and have been putting off getting the wife a MacBook till these new ARM chips hit the market. Now I just need to hope MS works with Apple on getting Windows onto this. As soon as Windows on ARM allows x64 apps to run, Windows would be the choice for getting games running on these devices.
  • [email protected] - Tuesday, November 17, 2020 - link

    Why Windows...??? All apps can run on ARM. Windows is not something you run; it's a desktop, and one that I personally practically never use. Applications, now that's what I use. MS has announced a native ARM M1 version of Office, and I am in 100%.
  • tuxRoller - Tuesday, November 17, 2020 - link

    Oh yeah, this is far less critical🙄
  • Eric S - Tuesday, November 17, 2020 - link

    TSMC has been doing very well. Although remember that a lot of their work is financed by a cash infusion from Apple.
  • Eric S - Tuesday, November 17, 2020 - link

    That cash is the reason Apple is getting so much of the 5 nm node production.
  • varase - Wednesday, November 25, 2020 - link

    You can pretty much forget Microsoft - there is no compelling ARM Microsoft software, and if ARM Windows does do x86 transcompiling it will almost certainly be to the standard ARM instruction set, or worse, to a Microsoft/Qualcomm mutated instruction set. In no case would it be Apple Silicon's AArch64 implementation.

    If they relied on interpretation instead of transcompilation, expect performance to be less than stellar.

    Now Parallels seems to be working on something that they're keeping pretty mum about - my guess would be a hypervisor running x64 clients. Using Rosetta 2-like transcompilation, they could front-end OS boot and segment loaders, read x64 code segments, and return Apple Silicon AArch64 code to the client virtual machine. They'd probably have to front-end code signing segments if those exist. To sustain performance, they'd also have to cache transcompiled code using a source/CRC/length key to prevent having to transcompile all the code all the time.
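
    To make the caching idea concrete, here is a minimal sketch (in Python, purely for illustration) of a translation cache keyed the way the paragraph above guesses - by source identifier, CRC, and length. This is speculation about an unannounced product; every name here is hypothetical, not Parallels' actual design.

    ```python
    import zlib

    class TranslationCache:
        """Hypothetical cache of transcompiled code segments.

        Keyed by (source id, CRC32, length) so an x64 code segment only
        has to be translated to AArch64 once, as suggested above.
        """

        def __init__(self):
            self._cache = {}  # (source, crc, length) -> translated bytes

        def get_or_translate(self, source: str, x64_code: bytes, translate):
            key = (source, zlib.crc32(x64_code), len(x64_code))
            if key not in self._cache:
                # The expensive transcompile only happens on a cache miss.
                self._cache[key] = translate(x64_code)
            return self._cache[key]

    # Usage (with a hypothetical translator function):
    # cache = TranslationCache()
    # arm_code = cache.get_or_translate("segment:ntdll", seg_bytes, transpile_x64_to_aarch64)
    ```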

    They wouldn't have Metal access to GPUs to lean on, so unless Apple implements PCIe-attached graphics cards I wouldn't expect gaming to ever be performant enough to be practical.
  • hlovatt - Tuesday, November 17, 2020 - link

    Isn't the 22 W you quote for Apple for the whole system, whereas the 25 W you quote for AMD is just the processor? Isn't a 10 W Apple processor competitive with a 25 W AMD processor?
  • halo37253 - Wednesday, November 18, 2020 - link

    No, that's full system power from the wall for AMD's 4800U in 15-watt mode.
  • whatthe123 - Tuesday, November 17, 2020 - link

    By Anandtech's standards these benchmarks are highly misleading, though. The 5950X is capped to its lowest possible package power. In MT testing they switch back to Zen 2 chips, also power-limited. Everything is focused on efficiency, which is admittedly very, very good, but the article frames it as though these are stock comparisons, when in reality most of these CPUs would be able to draw more power and dwarf the M1 in performance.

    If they are only looking at efficiency, why make such a misleading article? Just focus on efficiency and the results are stellar. Instead they went with this mess of an article. I'm honestly shocked, considering this site normally tries to be as unbiased as possible.
  • Spunjji - Thursday, November 19, 2020 - link

    @whatthe123 - "5950x is capped off to its lowest possible package power" - where did you get that impression from?
  • Spunjji - Thursday, November 19, 2020 - link

    @whatthe123 - Regarding your complaint that they only compare the 5950X to the M1 in ST testing, of course they do, it's a 16-core 32-thread chip with a 105W TDP and ~140W power limit. It'll demolish the ~24W 4+4 M1 in MT. That's just... not a useful comparison.

    The point is to show AMD's peak Zen 3 performance against Apple's peak Firestorm performance. Once AMD have Zen 3 processors in the ~25W range, then a valid MT comparison in comparable designs can be made.
  • helpmeoutnow - Thursday, November 26, 2020 - link

    @Spunjji but to be fair, we want to see the difference between this M1 and a fully utilized 5950X, just to see the real difference. Now it looks like the M1 is a good CPU, but as was said, they just cap everything else.
  • Stephen_L - Tuesday, November 17, 2020 - link

    Nah, just listen to the OP: we HAVE to compare chips on the same node! Let's wait for Intel 7nm too, hopefully in 2022, but God knows if they can do it before 2024, so we can have an "apples to apples comparison". Let's come back in 2-4 years, guys. /s
  • magreen - Tuesday, November 24, 2020 - link

    Totally agree. The Cyrix M3 Jalapeno when it comes out on 5nm is going to crush this little M1.
  • Spunjji - Tuesday, November 17, 2020 - link

    Apples to Apples is comparing what's on offer to what's on offer. If AMD were on the verge of releasing Zen 4 I'd say hey, let's see what's in store, but you have to admit that right now Apple have leveraged 5nm to build a fundamentally solid design that has some significant competitive advantages.

    I suspect M1 will lose to AMD's offerings at the ~25W mark once Cezanne hits, which means good things for Zen 4 when it does indeed arrive, but this is a mighty good CPU as it stands at this time in 2020.
  • lilmoe - Tuesday, November 17, 2020 - link

    Beating Intel doesn't say much. Intel is a well-known problem in the tech industry; Apple isn't the only one complaining. Mobile Ryzen goes neck and neck with Intel desktop. This is a well-known fact. Apple doesn't have any breakthrough here; AMD did earlier this year. TSMC has a great 7nm and a breakthrough 5nm process (your move, Sammy).

    The M1 is Apple's A-game in single thread. You won't see double-digit improvements YoY.

    Apples to apples (pro review):
    - Consistent Charts.
    - M1+Rosetta2 VS Zen2(4800U): THAT's what's available today.
    - M1 Native vs Zen3 (Prediction/Analysis for Zen4): THAT is what M1/M2 Native will go up against.
    - M1 Chrome/JavaScript VS 4800U Chrome, NOT Safari+M1(Native) VS Chrome/Edge+4800U. That's a browser benchmark, not a CPU benchmark. Chrome all the way, then an educated prediction of how much native code would improve that.
    - Actual popular games.

    I'm dismissing this entire review. Any monkey can install and run a couple of benchmark apps.
  • andreltrn - Tuesday, November 17, 2020 - link

    Why Chrome? What you people don't get is that people buying a Mac mini or a MacBook Air are buying a device, like you would buy a refrigerator or a Nest thermostat. They will use what is best for that DEVICE. They don't care about the processor. Even a processor (CPU) comparison is bulls... when comparing the M1 with AMD's laptop offerings. The M1 is an SoC with way more built-in functionality, such as an ML processor and accelerators and much more. An (Intel or AMD) laptop CPU couldn't do what the M1 does in a properly coded app, e.g. Final Cut Pro or Logic or Safari for that matter. A device is the sum of its parts, not only a CPU, especially in a laptop.
  • vlad42 - Tuesday, November 17, 2020 - link

    Why Chrome? Well, because it is available for both operating systems. Of course another browser such as Firefox could also be used.

    We have seen time and time again that the web browser used can have an enormous impact on the results of browser-based benchmarks. As you can see in Anandtech's browser comparison from 9/10/2020, https://www.anandtech.com/show/16078/the-2020-brow... the Chromium version of Edge outperforms Firefox in Speedometer 2.0 by roughly 35%! Since this article is not trying to compare the performance of different web browsers, the browser used should be kept the same.

    In addition, since Speedometer 2.0 was made by Apple, it is highly likely that they put more weight on Safari updates improving the Speedometer score than, say, Google does with Chrome.
  • helpmeoutnow - Thursday, November 26, 2020 - link

    @andreltrn lol, keep it real. We are talking benchmarks. You can only compare software that runs on both systems.
  • Spunjji - Tuesday, November 17, 2020 - link

    Also, Zen 3 is in some of those comparisons... It wins as you'd expect, but dropping to 5nm wouldn't magically bring it to M1 power levels.
  • vlad42 - Tuesday, November 17, 2020 - link

    5nm would help by reducing the voltage, and thus power draw, for the chip. The bigger thing to remember is that there are no mobile versions of Zen 3 yet. Consider that the 5950X is only ~37% faster than the 4800U in single-threaded Cinebench despite having a TDP 7 times higher. If the 5800U ends up having the same clocks as the 4800U, then the M1 would roughly have a 7% perf/W advantage. Granted, this assumes the 5800U's score would be 19% faster than the 4800U's 1199.

    So, given the expected benefits that TSMC, Samsung, etc. have touted about 5nm, a die shrink from 7nm to 5nm would easily make up for this difference in power efficiency.
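
    For what it's worth, here is the back-of-the-envelope arithmetic behind those numbers as a small Python sketch. The 4800U score and the 19% uplift are from the comment above; the M1 single-thread score of ~1520 is an assumption for illustration, and equal package power is assumed so perf/W reduces to a score ratio.

    ```python
    # Reproduce the rough perf/W comparison sketched above.
    score_4800u = 1199   # 4800U Cinebench single-thread (from the comment)
    zen3_uplift = 1.19   # assumed 5800U gain over the 4800U at equal clocks
    score_m1 = 1520      # assumed M1 single-thread score (illustrative)

    score_5800u = score_4800u * zen3_uplift   # ~1427 projected
    advantage = score_m1 / score_5800u - 1    # perf/W gap at equal power

    print(f"Projected 5800U ST score: {score_5800u:.0f}")
    print(f"M1 perf/W advantage at equal power: {advantage:.0%}")  # ~7%
    ```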
  • R3lay - Wednesday, November 18, 2020 - link

    You can't compare single-core performance and then compare that to the TDP. At single core, the 5950X doesn't use 7x more power.
  • Kangal - Saturday, November 21, 2020 - link

    To be honest, a lot of comparisons of the Apple Silicon M1 are vague, misrepresentative or blatantly off. The best representative benchmarks I've seen are:

    Single Core, Geekbench v5, 5min run, Rosetta2
    2020 Macbook Air (10W M1): ~1300 score
    2019 MacBook Pro 16in (35W i9-9880H): ~1100 points
    AMD Zen2+ Laptop (35W r9-4900HS): ~920 points
    2019 Macbook Pro 13in (15W i5-8257U): ~900 points
    AMD Zen2+ Laptop (20W r7-4800U): ~750 points

    Multi-Thread, CineBench r23, 10min run, Rosetta2
    AMD Zen2+ Laptop (35W r9-4900HS): ~11,000 score
    AMD Zen2+ Laptop (20W r7-4800U): ~9,200 score
    2019 MacBook Pro 16in (35W i9-9880H): ~9,100 score
    2020 Macbook Air (10W M1): ~7,100 score
    2019 Macbook Pro 13in (15W i5-8257U): ~5,100 score

    Rendering Performance, Final Cut ProX, 10min clip
    AMD Zen2+ Laptop (35W r9-4900HS): error on ryzentosh
    AMD Zen2+ Laptop (20W r7-4800U): error on ryzentosh
    2019 MacBook Pro 16in (35W i9-9880H): ~360 seconds
    2020 Macbook Air (10W M1): ~410 seconds
    2019 Macbook Pro 13in (15W i5-8257U): ~1100 seconds

    GPU Performance, GFXBench v5 Aztec Ruins High, Rosetta2
    2019 MacBook Pro 16in (i9 5600M): ~79 fps
    2020 Macbook Air (M1 8CU): ~76 fps
    AMD Zen2+ Laptop (r9 Vega-8): ~39 fps
    AMD Zen2+ Laptop (r7 Vega-7): ~36 fps
    2019 Macbook Pro 13in (i5 Iris Pro): ~20 fps

    Gaming Performance, Rise of the Tomb Raider, 1080p High
    2019 MacBook Pro 16in (i9 5600M): ~70 fps
    2020 Macbook Air (M1 8CU): ~40 fps
    AMD Zen2+ Laptop (r9 Vega-8): ~23 fps
    AMD Zen2+ Laptop (r7 Vega-7): ~21 fps
    2019 Macbook Pro 13in (i5 Iris Pro): ~12 fps

    ....so I share the well-grounded outlook that Dave Lee (D2D) has on the matter. Linus (LTT) was more pessimistic than Dave, but I think his opinions are pretty neutral overall. I simply outright reject the unprofessional and unrealistic take that Andrei (Anandtech) displayed in the previous article. Nor am I fully on board with the overly-optimistic perspective that Jonathan Morrison demonstrated.
  • Kangal - Saturday, November 21, 2020 - link

    More thoughts on the matter...

    I get that there's an argument to be made that new, modern and more efficient apps are coming natively, that single-core is most important, that low TDP is very important, and that race to idle (or at least race to the small cores) is important. From that perspective, the M1 in the MacBook Air is the best by a HUGE margin. We're talking a 3x better overall experience than the best x86 devices in such comparisons.

    Then there's the alternate debate: that what you get is what you get. Legacy program performance is most important, single-core is no longer the be-all-end-all, multi-thread is relevant for actual "Pro" users, and sustained performance is just as important as TDP. Looking from that perspective, the Apple M1 in a MacBook Pro 16/13 is only equivalent to the very best x86 devices. So basically a meh situation, not a paradigm shift.

    So what can we realistically postulate from this, and expect from Apple and the industry?
    Firstly, Apple disappointed us with the M1. In short, Apple played it safe and didn't really do their best. That means they purposely left performance on the table; it was artificial and it was deliberate. The why is simple: so that they can introduce these increases incrementally, and that way keep incentivising customers.

    At length: what they have now, the 4/8 setup, is somewhat reminiscent of the current high-end phablets, or the 4c/8t hyperthreading setup of Intel CPUs, or the older AMD Bulldozer setup. At these chassis thicknesses there's really no need for the medium cores; they should have killed them and stuck with an 8-large-core design instead. These large ARM cores aren't too different from x86 cores in size, so they could have afforded that silicon cost. As for operation, simply undervolt/underclock (1.5GHz) the whole stack, and ramp up 1-2 cores to high clocks/volts (3.5GHz) dynamically when necessary. That makes thread allocation simple, and here simple means more efficient software. And this means we could see a performance difference moving from an 11in passively cooled device to a 17in actively cooled device - for example, 8 cores running at 4.0GHz versus 2 cores running at 3.5GHz.

    And let's not forget the GPU, which is fine as an 8CU part (~GTX 1050) on an "ultraportable" like an 11in MacBook Air. But we were expecting something more like 16CU (~GTX 1660) for the "regular laptop" 13in MacBook, and an even beefier 32CU (~RTX 2070) for a "large laptop" 17in MacBook Pro. On top of this, the new SoC demands less space internally, so we should have seen much more compact Mac devices, and Apple didn't take advantage of this.

    Other places Apple dropped the ball: they have fewer PCIe ports allocated. There is no dedicated GPU or eGPU option available. Their current iGPU is about on par with a GTX 1050, so impressive against AMD's and Intel's iGPUs... but still behind modern (low-profile) dedicated GPUs from Nvidia's Volta or AMD's RDNA2. There's no support for 32-bit x86 programs. And lastly, there is no bootloader support, so people could run another OS such as Android, a Linux distro, or Windows 10 ARM/Mobile (or perhaps even boot an x86 OS via a low-level translator).

    And here's what Apple got right.
    They released the Mac Mini Zero/Development device a year early to get developers primed. Their new operating system, which is definitely NOT the same OS X (macOS) but an "iOS Pro OS", actually is stable. Their forwards compatibility with iOS apps runs without issues. Their backwards compatibility for 64-bit macOS apps actually runs very, very well (some code, such as the GPU APIs, is actually processed natively). And we can surmise that most current apps will run almost as well as natively (60% on average, ranging from 49% to 94%), something Microsoft dropped the ball on with Windows 8/RT and has dragged its feet on since. In the near future (3-4 years), they will remove the actual hardware coprocessors that handle this x86-to-ARM translation and use that "silicon budget" elsewhere in the SoC, slightly improving native performance further. So with updated applications, improved microarchitecture, improved lithography, an increased silicon budget, and thus a move from an efficient design (4 big + 4 small) to a performance design (8 big)... we will see performance literally 2x-4x in the coming 2-4 year timeframe (think Apple M2, M3, M4 in 2024). And I didn't even mention GPU improvements. That's just too much pressure on the whole industry (Lenovo, HP, Dell, ASUS), and more specifically on Microsoft, AMD Zen, and Intel (lol) when it comes to their roadmaps.

    Plus, the current setup of 4 big and 4 medium cores is adequate, and works wonders for low-impact tasks and thermally limited devices. And they have demonstrated that their software is mature in handling these hybrid systems. So the current setup means the MacBook Air (ultra thin/light) gets a phenomenal leap, and future iterations will benefit from this setup too. It also means less R&D time/effort/cost is necessary, as most of the work between the smallest iPhone mini, the medium-sized iPad mini, and the much larger Macs is closely related, as far as the SoC is concerned. And it's a brilliant move to keep the current x86 line and launch identical hardware with M1 silicon, so all feedback will provide insight for future Apple Silicon designs.

    I personally think they're going to move to a better quality keyboard (bye, crappy butterfly) now that there is more internal space to play around with. And they will add new features to the Macs that are already included in iPhones, like a barometer, GPS, etc. Also, they will add Apple Pencil support (but no silo), probably with a magnetic holder. Lastly, I think they're going to evolve the design of the MacBooks... they will all have OLED HDR10+ displays, maybe in 4K-5K resolutions, have a proper touchscreen, and mimic the Lenovo Yoga style with a 360° hinge.
  • Spunjji - Monday, November 23, 2020 - link

    @Kangal - I have a few disagreements with what you've written here.

    Firstly, I'm a little confused about why you see the Rosetta-based benchmarks as most relevant. I doubt that anyone buying an M1 device today will be getting rid of it before the majority of apps are converted across, so that performance is going to become increasingly *less* relevant as time passes.

    Secondly, this quote: "In short, Apple played it safe and didn't really do their best. That means they purposely left performance on the table, it was artificial and it was deliberate." - I just don't see how you could draw that conclusion. They used their highest-performing cores in the largest chip yet produced on 5nm. It would be bizarre for them to begin such a grand experiment from the top-down - it would produce an odd situation where their most demanding users, who are most likely to be using applications that currently need translation, would be expected to transition to an incomplete ecosystem with performance that doesn't exceed existing systems.

    To me, it makes perfect sense from both an engineering and a product perspective. They begin the transition with a relatively small (and thus high-yielding, despite the new process) chip as part of a platform for users who are relatively performance-insensitive, but who will still appreciate the immediate benefits of reduced heat and increased battery life.

    I'm also a bit confused about your perspective on their GPU. AFAIK the most modern low-profile low-power GPU out there is Nvidia's 1650 - and in terms of performance-per-watt, this iGPU thrashes it, with absolute performance being not far behind. Perf/Watt appears to be Apple's primary concern (for a given degree of absolute performance), so I see it as a resounding (and surprising) success. It's down to AMD and Nvidia to respond now.
  • Kangal - Wednesday, November 25, 2020 - link

    @Spunjji
    Thanks for the read, sorry it's quite long.

    I mean, the Apple Silicon M1 as it is, is very good for the new MacBook Air. I guess for the cheap/budget Mac mini it is also decent. However, it's kind of out of place in the Pro. Perhaps they will launch more Macs in the next 6 months, something beefy for their larger MacBook Pro, and maybe something desktop-worthy in an iMac and Mac Pro. I completely agree with your points. Apple now has the best chipset in the world, their large cores are highly competitive, and their GPU tech is the most efficient. In fact, their medium cores are the best: they're an out-of-order processor which sucks slightly less power than a Cortex-A53 (or slightly more than an A55?), yet they're slightly faster than a Cortex-A73 (or slightly slower than an A72?). Either way, that's stupidly impressive.

    But as it stands, Apple has done the work but pulled its punches in the last yard.... and I say that since they're saving money on the SoC by sourcing it themselves, and not paying those exorbitant Intel prices. So there's definitely (money and silicon) budget there to go more ambitious. I just wanted to see more competitive/better product segmentation, e.g.:

    Apple M10, ~10W, 8 large cores, 8cu GPU... for 11in laptop, ultra thin, fanless
    Apple M13, ~15W, 8 large cores, 16cu GPU... for 14in laptop, thin, active cooled
    Apple M15, ~25W, 8 large cores, 32cu GPU... for 17in laptop, thick, active cooled
    Apple M17, ~45W, 16 large cores, 32cu GPU... for 29in iMac, thick, AC power
    Apple M19, ~95W, 16 large cores, 64cu GPU.... for Mac Pro, desktop, strong cooling

    ...and after 1.5 years, they can move onto the next refined architecture/node (e.g. Apple M20, M23, M25, M27, M29, etc.).
  • Sherlock - Monday, November 30, 2020 - link

    I believe the iPad Pros (if not all iPads) will move to the M1 chip and run macOS with the ability to run iPadOS/iOS apps. With the detachable keyboards and Apple Pencil support, they will become the ultimate portable workstation. Knowing Apple's penchant for a limited product line, they may even drop the MacBook Air.
  • BushLin - Saturday, November 21, 2020 - link

    "To be honest, a lot of comparisons of the Apple Silicon M1 are vague, misrepresentative or blatantly off..."
    <proceeds to list unattributed benchmark results with incorrect power labels>
  • Spunjji - Thursday, November 19, 2020 - link

    @vlad42 - I'm aware of how process node can affect voltage requirements and power draw, and the various TDP differences.

    I wasn't arguing that TSMC 5nm wouldn't help AMD's power efficiency; I was arguing with the nonsensical statement that it's the *sole reason* for Apple's good showing in that area. lilmoe's salty opinions aren't supported by the facts.

    You're correct that AMD at 5nm would probably regain an advantage over M1 in mobile devices, but that will be in a year's time, and Apple aren't standing still. It's likely we'll be seeing them leapfrog each other. In the meantime, it'll be interesting to see how competitive Cezanne ends up being with M1 and/or whatever Apple's next-largest chip will end up being.
  • vlad42 - Saturday, November 21, 2020 - link

    But if shrinking Zen 3 to the same 5nm process would make its mobile variant more energy efficient, then that would imply that Zen 3 is a more efficient architecture. It just happens that the architecture is held back in this specific comparison by the manufacturing process.

    We do not know if AMD will bother to port Zen 3 to 5nm; they could skip straight to Zen 4. Who knows what process Apple will be using by the time AMD moves to 5nm. 3nm could still be too expensive for chips larger than those used for phones.

    Granted, if the energy efficiency of Zen 3 were to equal the M1's when both are on 5nm, then the M1's efficiency cannot be solely due to 5nm, since the same node would be helping Zen 3 too.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    Zen 4 is scheduled to have samples in Q1 2021 on TSMC's advanced 5nm node. The fact you don't know this tells me you don't follow AMD.
  • Spunjji - Monday, November 23, 2020 - link

    @mdriftmeyer - You'd be wrong in assuming both that I don't know and that I don't "follow AMD". Samples in Q1 2021 do not equal a released product in Q1 2021, do they? I'm talking about product availability, and you're moving the goalposts for reasons that aren't clear to me.
  • magreen - Tuesday, November 24, 2020 - link

    @Spunjji - Thanks for your insightful responses, as usual. Sometimes I'm tempted to just hit Ctrl-F to find your comments and ignore the rest.
  • haghands - Tuesday, November 17, 2020 - link

    Cope
  • Tams80 - Tuesday, November 17, 2020 - link

    Or not believing the ridiculous claims that phenomenal leaps in computing power can be made with no equal leaps in technology.
  • tempestglen - Tuesday, November 17, 2020 - link

    LOL
  • patel21 - Tuesday, November 17, 2020 - link

    Yes, AMD wins, even though it uses 5x more power to do that. So live in your cocoon.
  • BlackHat - Tuesday, November 17, 2020 - link

    With ~25W power consumption, I think a Zen 3 Cezanne can match that at very similar power draw.
  • Hifihedgehog - Tuesday, November 17, 2020 - link

    Apple shills. There is a reason certain investors and others are poo-pooing this and have even pulled out: the writing is on the wall. Apple is going to double down on the walled garden to get that juicy 30%, and developers who cater to open development environments, ones outside of the paid ad spots that we saw in the presentation, will not stand for it. Plain and simple.
  • Dolda2000 - Tuesday, November 17, 2020 - link

    What you say may very well be true, but is a completely different question from the technical examination of Apple's microarchitecture.
  • Spunjji - Thursday, November 19, 2020 - link

    The goalposts have to be moved *somewhere*, why not there? :D
  • xenol - Tuesday, November 17, 2020 - link

    Why don't you make your own "professional quality" review?

    Oh right.
  • WinterCharm - Tuesday, November 17, 2020 - link

    Fanboys are going to fanboy. The first stage of grief is denial. Anandtech's review integrity is above question here.
  • melgross - Tuesday, November 17, 2020 - link

    The problem here is your mental state, not the state of the review, which as always, is studded with facts and knowledgeable conclusions.

    Sorry that your world has burst, but it will happen again and again. Get used to it.
  • shadowii - Tuesday, November 17, 2020 - link

    You're not sure because you must not have read the article. May I suggest decoupling your self-esteem from the chip performance of companies that don't personally care about you?
  • PhotinoBird - Tuesday, November 17, 2020 - link

    This is literally Apple's first and lowest-end chip. Like, this is the CPU they put in the laptop with no fan. AMD is certainly top of the heap here in a lot of ways, and that's great. But you can't deny that Apple's first kick at the can with their least powerful and least expensive chip is very impressive, delivering extremely high performance at a power draw that is simply unheard of right now in laptops.
  • YesYesNo - Tuesday, November 17, 2020 - link

    This is their highest end chip.
  • nico_mach - Tuesday, November 17, 2020 - link

    Well, that's one way of putting it, but it's clearly a first effort for this segment and a shot across Intel's bow. AMD at least has a chance to catch up, and isn't losing a customer over this. Whereas Intel just got blown up. They'll be selling off the splinters soon to maximize shareholder value - they've been quite cynically milking the platform and failing; a breakup is the next logical step.
  • YesYesNo - Tuesday, November 17, 2020 - link

    There is no way to know if this is the best they can do currently or if they are holding something back.

    I see no reason to consider what might come out from Apple, AMD or Intel compared to what is currently out.
  • hugi - Tuesday, November 17, 2020 - link

    Apple is replacing the processors in their entire Mac lineup in the next 24 months. There will not be M1s in the Mac Pro.
  • mdriftmeyer - Friday, November 20, 2020 - link

    That's the plan, but first they have several more Intel-based products in the works to extend over the next three years. And if the rumors were factual, it is clear they know they can't replace the Mac Pro's 28-core Xeon with any possible ARM equivalent of their own, hence the Mac Pro Mini whispered about being half the size of the current Mac Pro.

    If they were smart they'd drop Xeon during the transition and go Zen 3.
  • jbelkin - Thursday, November 19, 2020 - link

    They usually announce new products around CES so look for the next batch (iMac, MBP 16"? ... 12-16 Core??) in the next 3 months.
  • Spunjji - Tuesday, November 17, 2020 - link

    For now, yes. It very soon won't be.
  • YesYesNo - Tuesday, November 17, 2020 - link

    And when it isn't, I will reconsider.
  • andrewaggb - Tuesday, November 17, 2020 - link

    Pretty much. There's no reason to think the cores will be better on a chip with more of them. The only thing that is a possibility (certainly not a given) is that the clock speed will be substantially higher, which should put Apple in the lead. That said, the previous review showed a very modest IPC improvement this time around even with huge reorder buffers and an 8-wide design. So I suspect Apple's best course for improved performance is higher clocks, but that always runs counter to power usage, so we'll see. AMD and Intel will probably have to go wider to compete with Apple on single-thread IPC in the long run.

    GPU-wise it's pretty decent for integrated graphics but if you want to play games you shouldn't be running Mac OS or using integrated graphics. It'll be interesting to see if Apple's market share jumps enough to pull in some game development.
  • Eric S - Tuesday, November 17, 2020 - link

    I don't think any of these benchmarks are optimized for TBDR. Memory-bound operations could be significantly faster if optimized for the chip. Many render pipelines could run 4X faster. I'm curious to see iOS graphics benchmarks run on this that are more representative. Of course I hope we see apps and games optimized for TBDR as well.
  • Spunjji - Thursday, November 19, 2020 - link

    @andrewaggb - Agreed entirely. The cores themselves aren't going to magically improve, and it's not clear from the meagre scaling between the A14 at 5-10W and the M1 at 10-25W that they can make them a lot faster with clock speed increases. But a chip with 12 Firestorm cores and 4 Icestorm cores would be an interesting match for the 5900X, and if they beef the GPU up to 12 cores with a 192-bit memory interface and/or LPDDR5 then they could have something that's actually pretty solid for the vast majority of workloads.

    I don't think games are going to be moving en-masse from Windows any time soon, but I guess we'll see as time goes on.
  • Stephen_L - Tuesday, November 17, 2020 - link

    I feel very lucky that I didn’t use your mindset when I decided to buy AMD R5-1600X instead of an Intel i5 for my pc.
  • Spunjji - Thursday, November 19, 2020 - link

    @YesYesNo - you responded to a comment about how they *will* be releasing faster chips by talking about how they haven't done so yet. This is known. You're kind of talking past the people you're replying to - nobody's asking you to reconsider how you feel about the M1 based on whatever comes next, but it doesn't make sense to assume this is the absolute best they can do, either.
  • andreltrn - Tuesday, November 17, 2020 - link

    This is not their high-end chip! This is a chip for low-end devices such as fanless laptops. They attacked that market first because this is where they will make the most money. High-end pros won't go for a new platform until it is proven and they are 100% sure they will be able to port their workflow to it. They are starting with the low end and will follow up with probably a 10- or 12-core chip in the spring for the high-end laptops and the iMac.
  • vlad42 - Tuesday, November 17, 2020 - link

    I just do not see Apple using anything but a low-power mobile chip for consumer devices.

    Think about it: roughly half the time, Apple did not even release a tablet-optimized A#X chip for the iPad. In their recent earnings reports the combined iPad and Mac revenue is still only half that of the iPhone. By using the same chip for the iPad and all Mac machines except the Mac Pro, maybe Apple will actually update the SoC every year.

    If Apple were to provide a higher-performing chip for consumer devices, then it would probably be updated only once every few years. Apple just does not make enough money from high-end laptops and the iMac to justify dedicated silicon for those products without pulling an Intel and reusing the SoC for far too many product cycles. Just look at the Mac Pros. The engineering resources needed to design the most recent x86 Mac Pro are a drop in the bucket compared to designing and taping out a new SoC. Despite this, Apple has only been updating the Mac Pro lineup once every 5-7 years!

    The problem is that by the time they are willing to update those theoretical high-end consumer chips, the chips will long since have been made obsolete. Who in their right mind would purchase a "high end" laptop or an iMac if it is outperformed by an entry-level Air or an iPad, or is lacking important features (hardware codec support, the next stupid version of HDCP needed for movies/TV shows, etc.)? Even worse for Apple is if their customers buy a non-Apple product instead. Much of Apple's current customer base does not actually need a Mac. They would be fine with any decent-quality high-end laptop, or any all-in-one with a screen that is not hot garbage.
  • Eric S - Tuesday, November 17, 2020 - link

    They are working on updates for the high end. I expect they will be amazing. At least two higher end chips are in late design or early production.
  • Eric S - Tuesday, November 17, 2020 - link

    You are probably right in that they may only be updated every few years, but the same can be said of the Xeon which also skips generations.
  • vlad42 - Tuesday, November 17, 2020 - link

    But the Xeon chips are a bad example because Intel shot themselves in the foot through a combination of complacency, tying their next gen products too tightly to the manufacturing process and a shortage of 14nm capacity. We used to get new Xeons if not every year, then at least every time there was an architecture update.

    A better more recent comparison would be with AMD which has always updated the Threadripper lineup. Granted, we technically do not know if the Threadripper Pro lineup will be updated every year, but it very likely will be.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    Threadripper Zen 3 is Q1 2021. Lisa Su and team have already confirmed it.
  • vlad42 - Tuesday, November 17, 2020 - link

    It is interesting that they might be working on higher end parts. However, I fear that only companies that are dedicated chip manufacturers/designers such as AMD, Intel, Arm, etc. can financially justify maintaining a sufficient update pace for low volume high end chips due to the fact that they have a much larger addressable market. The costs for those high end parts need to be made up after all.

    Since people have complained for a long time about the slow update pace for the iMac, Mac Mini, Mac Pro and any other desktop/workstation Mac I may be forgetting, maybe it will not matter?

    I wonder if those dedicated high end chips could be a Mac Pro's CPU and GPU?
  • ABR - Wednesday, November 18, 2020 - link

    I'm afraid this is what it looks like. The high-end Macs will be updated even less often than they are now, and will fall even further behind both the lower-end models and PCs that can use the latest discrete graphics.
  • alexvoda - Wednesday, November 18, 2020 - link

    I anticipate that there will be no Apple Silicon Mac Pro.
    Apple will most probably introduce another M CPU for the MacBook Pro and the iMac, simply because this one is capped at 16GB of RAM. It may even be the same chip, but clocked higher thanks to the thermal headroom, and without on-package RAM.
    But I do not think Apple will develop a chip for a very niche product like the Mac Pro. Apple is not SGI. Apple's core market is not high end workstations.
    We will probably see the Mac Pro continue to be updated as long as new x86 macOS versions are released and as long as Intel offers something worth updating to.

    Or maybe a future Mac Pro will just be a multisocket design with regular iMac CPUs.
  • colinstalter - Wednesday, November 18, 2020 - link

    I would totally agree with you, but they did say that they plan on doing the ENTIRE line. Maybe it will just be a chiplet design like AMD's. I really don't know; it's hard to imagine them competing in the high-TDP space, but if they say they'll do it I'm sure they will. Their problem will be that their main strong point is great perf per watt. For the Mac Pro no one cares about that and just wants the most power possible within a 100-300 watt TDP.
  • jbelkin - Thursday, November 19, 2020 - link

    If Apple says 24 months, they mean in about 15 they will be done with the transition. They have already announced the new Mac Pro will be about half the size of the current one. The high-end pro market moves the slowest, with plug-ins and dongles, so it makes sense they'll move slower.
  • jbelkin - Thursday, November 19, 2020 - link

    Apple owns the high end in laptops (the $1K+ market, as the industry counts it): 80-90% market share. Macs are about the size of McD's or State Farm; ONLY at Apple can a $25 BILLION business unit be dismissed.
  • BushLin - Thursday, November 19, 2020 - link

    "Apple owns the high end in laptops (the 1K+ market as the industry counts). 80-90% market share" [Citation needed] (not really as it's obviously nonsense)
    Apple's valuation and profits largely come from iPhone sales and services.
  • MrCrispy - Tuesday, November 17, 2020 - link

    First chip?? They've spent a decade designing and building iPhone/iPad SoCs, which is exactly what the M1 is, just with a different layout.

    This is a natural evolution of those. The most impressive part of this is actually Rosetta 2, and Apple's ability to transition the entire line - which comes from having a walled garden and captive developers/users who feed on hype, and from not giving a crap about backwards compatibility.

    Other companies don't have this luxury.
  • TEAMSWITCHER - Wednesday, November 18, 2020 - link

    I don't see Apple hyping these products anymore than Intel, Nvidia, or AMD are hyping their own products. I do think that Apple delivered silicon that is competitive with what AMD and Intel are selling today, and it's now a three way race. I think that's a good thing.
  • Kuhar - Wednesday, November 18, 2020 - link

    You are wrong. This is literally Apple's ONLY chip, so I can say it is their highest-end chip.
  • TEAMSWITCHER - Wednesday, November 18, 2020 - link

    Not for long...
  • Hrunga_Zmuda - Wednesday, November 18, 2020 - link

    It's not a chip. It's an SoC. But be that as it may, Apple is literally using multiple chips right now, and they are going to be replacing their whole line right up to the Mac Pro. People think the Mac is small potatoes, but it's the equivalent of a Fortune 500 company. It just looks small because of how massive the iOS ecosystem is. They will make money just fine with the whole line updated to the M series. Why? Because Apple doesn't have to sell anything to other companies, so every single thing they make doesn't have to make money by itself. So the Mac Pro's processor might not make a profit itself, but the Mac Pro will.
  • Spunjji - Thursday, November 19, 2020 - link

    "Is" is not the same as "will be"

    Reading comprehension in the comments is not strong.
  • Spunjji - Tuesday, November 17, 2020 - link

    Oh dear. Please don't blame the graphs - or, indeed, the author - when they show you something you didn't want to see.

    What you see here is extremely competitive performance, that AMD may well exceed when they get to 5nm - but they're not there just yet. For the end-user, what counts is what you can get.

    AMD need to get their chips into more designs and with any luck they will; Intel can't bribe away a performance advantage like Zen 3 has forever.
  • markiz - Thursday, November 19, 2020 - link

    What the end-user can get isn't really relevant for this particular discussion, I think?
    I think the discussion is "philosophical" in nature, as in: are there intrinsic differences and advantages of one over the other?

    E.g., can AMD (or Intel, or Qualcomm) in, let's say, 2 years offer an SoC as efficient and as performant as Apple can?

    So as to say: is it a matter of time, is that time reasonable, or is it insurmountable?

    If I knew Qualcomm would offer a comparable Snapdragon in 2022 (and MS sorts out the emulation issues), or that AMD would offer a comparable chip in 2022, I'd be good, and would pick from a vastly wider pool of hardware designs in the Windows ecosystem. I like convertibles.
    If on the other hand this time frame is larger, or if they will never offer either the efficiency or the performance, I would switch to Apple and be damned.
  • BushLin - Thursday, November 19, 2020 - link

    ..."can AMD (or Intel, or Qualcomm) in lets say 2 years offer a SOC as efficeint and as performant as apple can?"

    AMD have a comparable chip available now in performance and power; it's been out for ages and it's in the benchmarks. If you need your system to do some actual work, the 4800U is a better chip. If your workload doesn't scale to many threads and the software is available for the new ARM platform, then Apple's silicon looks pretty sweet.
  • haghands - Tuesday, November 17, 2020 - link

    Cope
  • adt6247 - Tuesday, November 17, 2020 - link

    The parts that beat the M1 have way more cores, a higher thermal budget, and higher clock.

    There's a lot of things to optimize for, and in its current form, Apple silicon doesn't offer solutions to all desktop workflows -- number of PCIe lanes comes to mind as a limitation.

    AMD isn't wholly beaten, but they're also not playing the same game. The best thing to come out of this would be lighting a fire under AMD's butt.

    But AMD will be chasing higher IPC and performance per watt, while Apple will be chasing higher core counts, higher thermal and power budget for desktop parts, and higher clocks. I'm hoping Intel is going to rebound with competitive parts in a couple years. Competition makes everyone better!
  • BushLin - Tuesday, November 17, 2020 - link

    Er... similar power is drawn by the old Zen 2 design at 7nm, which is giving better multithreaded performance.
  • Kuhar - Wednesday, November 18, 2020 - link

    Don't bother, you won't convince an Apple fanboy.
  • Hrunga_Zmuda - Wednesday, November 18, 2020 - link

    Or the Apple haters.
  • Hrunga_Zmuda - Wednesday, November 18, 2020 - link

    You think you're telling people something? They know the fantastic performance of the M1 is in the single-threaded category. They said that from the keynote on Nov. 10th to today.

    If you think Apple will never go for threaded performance in future chips, or discrete GPUs, you are living in a delusion.
  • BushLin - Thursday, November 19, 2020 - link

    Calm down dear, I was just addressing "The parts that beat the M1 have way more cores, a higher thermal budget, and higher clock" which simply isn't an accurate reflection of even the limited benchmarks in the article, let alone other real world scenarios which aren't a quick burst of single threaded activity.
  • [email protected] - Tuesday, November 17, 2020 - link

    Very good review with industrial benchmarks. Apple is first to get 5nm out there, which must upset a few to start... I understand a gamer reading this would be blinded by a noisy laptop kicking out hairdryer volumes of hot air from a non-aluminum-styled case, but you gotta admit... pretty darn good, for a tweaked iPhone 12 CPU.
  • BushLin - Wednesday, November 18, 2020 - link

    I would guess a gamer's Dell/Lenovo/Microsoft laptop to be silent while browsing this site since gaming laptops are for freaks.
  • Spunjji - Thursday, November 19, 2020 - link

    Weird flex but okay
  • BushLin - Thursday, November 19, 2020 - link

    Do you ever read the comment people are replying to?
  • Spunjji - Monday, November 23, 2020 - link

    I was responding specifically to "gaming laptops are for freaks". As I said, weird flex.
  • tempestglen - Tuesday, November 17, 2020 - link

    A 4C8T Zen 3 CPU will be beaten badly by the M1; when the 16" MBP with 8 big cores comes out, it's game over for Zen 3.
  • BushLin - Wednesday, November 18, 2020 - link

    8-core mobile Zen 2 chips have been available for nearly a year now. By the time you can buy that unannounced product you speculate about, it'll be competing against 5nm Zen 4, and it would still be a toss-up in performance against 7nm Zen 2.
  • Spunjji - Thursday, November 19, 2020 - link

    You're both wrong.

    Zen 4 is due out in at least a year's time, possibly 18 months. I'll eat my hat if Apple haven't released their higher-end chip with larger cores by then.

    That said, there's no reason to assume its CPU performance will be significantly higher than AMD's mobile Zen 3 designs. GPU will be for sure, but you're locked to a platform without access to decent games so that will limit the appeal to a certain audience.

    So it's not "game over" for Zen 3 - especially as they don't directly compete - but BushLin's completely wrong about how an 8-core variant of this would stack up to Zen 2 and 3.
  • BushLin - Thursday, November 19, 2020 - link

    So a 15W 8-core Zen 2 beats a 15W 4+4-core M1 in multithreaded, close-to-real-world tests; but a mythical 25-30W 8+4 CPU using the same design (which hasn't scaled well from the additional watts it uses over the A14 chip) is going to definitely, defiantly and majestically beat all comers, including Zen 3? We'll see, but random guy on the internet is probably just pulling stuff out of their ass.
  • Spunjji - Monday, November 23, 2020 - link

    @BushLin - Please check what I said again: "there's no reason to assume [M1's] CPU performance will be significantly higher than AMD's mobile Zen 3 designs". So no, I don't think it's going to "definitely, defiantly and majestically beat all comers, including zen 3" and you're kind of an ass for straw-manning me like that. Please don't.

    You keep making false comparisons with TDP too. Zen 2 is 15W at base clocks, but most of the tests seen so far take place largely within its turbo window of ~30W. Zen 3 Cezanne on 7nm will be in the same ballpark. A theoretical (not "mythical") 8+4 design should provide very similar performance in a very similar TDP, with the performance edge likely going to AMD. That indicates that Zen 4 on 5nm should be a superior option for both perf/watt and absolute performance, but we just don't know that yet as, in your terms, Zen 4 is still "mythical".

    But sure, equally-random guy on the internet. We'll see when we see.
  • BushLin - Monday, November 23, 2020 - link

    Both the M1 and 4800U are drawing more than 15W depending on workload, both settling around 22-24W after the initial boost.
  • mdriftmeyer - Friday, November 20, 2020 - link

    Zen 4 is out Nov 2021, announced Oct 2021. It's already known; Zen 4's design was nearly complete back in September. What's coming with Zen 4 is the technology of Xilinx: Neural Engine, check; Machine Learning Accelerators, check; DSPs for focused A/D convert encode/decode, check.

    People, the single biggest news out of SV this year isn't ARM+Nvidia or the Apple M1 series. It's Xilinx merging to become part of AMD.

    The IP, 13k engineers, and portfolio of best-in-breed products at Xilinx [run by a former AMD executive] is massive.

    And neither Apple nor Intel nor Nvidia saw this coming.

    The Zen 4 APU will be a 5W-or-less CPU with specialized add-ons, a massive Infinity Fabric interconnect, RAM not constrained like Apple's, 8, 12, or 16 CPU cores in dual chiplets, and an RDNA 3.0 CU GPU.

    Fall 2021 will be Zen 4 CPU, APU/RDNA 3.0 and RDNA 3.0 discrete GPUs with CDNA 2.0 M series Compute Processors expanding their footprint into HPC.

    You'll see the Zen 4/CDNA 2.0 solutions on El Capitan Fall 2021/Spring 2022. Clearly, to win that $600 million contract AMD showed their plans 12 months ago.

    From March 04, 2020 Press Release

    AMD technology within El Capitan includes:

    - Next generation AMD EPYC processors, codenamed “Genoa” featuring the “Zen 4” processor core. These processors will support next generation memory and I/O sub systems for AI and HPC workloads,
    - Next generation Radeon Instinct GPUs based on a new compute-optimized architecture for workloads including HPC and AI. These GPUs will use the next-generation high bandwidth memory and are designed for optimum deep learning performance,
    - The 3rd Gen AMD Infinity Architecture, which will provide a high-bandwidth, low latency connection between the four Radeon Instinct GPUs and one AMD EPYC CPU included in each node of El Capitan. As well, the 3rd Gen AMD Infinity Architecture includes unified memory across the CPU and GPU, easing programmer access to accelerated computing,
    - An enhanced version of the open source ROCm heterogenous programming environment, being developed to tap into the combined performance of AMD CPUs and GPUs, unlocking maximum performance.
    “This unprecedented computing capability, powered by advanced CPU and GPU technology from AMD, will sustain America’s position on the global stage in high performance computing and provide an observable example of the commitment of the country to maintaining an unparalleled nuclear deterrent,” said LLNL Lab Director Bill Goldstein. “Today’s news provides a prime example of how government and industry can work together for the benefit of the entire nation.”

    Note the emphasis on Genoa Zen 4 processor core, not Genoa Zen 4 CPUs.
  • Spunjji - Monday, November 23, 2020 - link

    @mdriftmeyer - do you have a source for the 2021 claim? The last roadmap I'm aware of had a Zen 3 refresh on desktop in 2021 (likely on AM5) followed by Zen 4 some time in 2022.

    Seeing as the rest of your post appears to consist mostly of wild speculation and unsupportable assertions (e.g. if the Zen 4 design is already locked in, it's NOT going to contain Xilinx IP), I'm not going to hold my breath.
  • tempestglen - Tuesday, November 17, 2020 - link

    BTW, the M1 is an SoC, so please add the GPU and RAM power of a Zen 3 system when comparing.
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    Benchmarks are benchmarks. I love AMD, but in this power envelope the M1 is better. Also consider that this chip maxes out at 3.2GHz, not 4+. It's just that x86-64 is a hindrance to AMD and Intel. That's the real reason Apple had to switch; they knew it back in 2011 under Steve Jobs. Intel is supposedly working on an x86 replacement. Haven't heard anything new about it in years. But if they're still working on it, it was expected in 2020-2022.
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    This will also vastly improve their slim profit margins on Macs. Intel was charging such ridiculous prices for mediocre chips it was unbelievable. This is the best business model.
  • BushLin - Wednesday, November 18, 2020 - link

    Do you have any idea what you're talking about, or is it simply that whatever Apple does must be ideal?
    I'm sure Intel are gutted about continuing to not meet the huge demand for their x86 chips on a fabrication process two generations out of date.
    Also, in the same power envelope AMD is beating the M1 with an old design and fab process.
  • Spunjji - Monday, November 23, 2020 - link

    @BushLin - "I'm sure Intel are gutted about continuing to not meet the huge demand for their x86 chips on a fabrication process two generations out of date."

    This is the exact kind of nonsense that makes me disappointed every time I see a reply from you to one of my posts on this thread.
  • BushLin - Monday, November 23, 2020 - link

    I was replying to "It’s just simply that x86-64 is a hinderance to AMD and Intel"
    Maybe don't take technical/factual matters so emotionally. Also, wasn't a reply to you unless you have many accounts.
  • NetMage - Monday, November 23, 2020 - link

    Maybe take your own advice?
  • BushLin - Tuesday, November 24, 2020 - link

    How many accounts do you need?
  • markiz - Thursday, November 19, 2020 - link

    Ok, but do you imagine Apple will not have advanced by then?
    I'm pretty sure they have a long pipeline ready for the next decade.
  • Steven Choi 4321 - Friday, November 20, 2020 - link

    Sure, AMD wins with a $700 chip vs. a $40 M1. AMD and Intel are the Nokia and BlackBerry of our time.
  • hagjohn - Tuesday, November 24, 2020 - link

    AMD is killing it. The M1 is the entry-level CPU (SoC) attempt from Apple. I think it is pretty good, considering it can go up against an i9. The way the M1 SoC is put together has some advantages and disadvantages. A big disadvantage is that everything is built into the SoC, so if you want to add memory or change out an SSD, you are out of luck. If anything breaks in the SoC, you need a new computer. A big advantage is that with everything on the SoC, Apple has removed a lot of the latency that we see in Intel/AMD systems.

    And remember... the M1 is the entry-level CPU (SoC) from Apple. Wait till we get to the more Pro versions.
  • name99 - Wednesday, November 18, 2020 - link

    Andrei, are those L1$ bandwidth numbers correct? They look off to me.
    Specifically 100 ≈ 3*2*16, i.e. 3GHz times 2 loads/cycle, each 128 bits (i.e. 16B) wide. (Either a load pair of int registers, or a load of a NEON register.)
    BUT the A14 article said there were three load units...
    Are three loads/cycle only sustainable for a very short time?

    A second item of interest: does the test even try to use Load Pair or Load Pair Non-temporal of two NEON registers? Earlier A cores had a 128-bit-per-load/store-unit path to L1, so there was no bandwidth win in loading a pair of vectors, but at some point presumably this might change...
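    (Spelling out the arithmetic I'm assuming above: peak L1 load bandwidth = clock x loads/cycle x bytes/load, so 3 GHz x 2 x 16 B ≈ 96 GB/s, which lines up with the ~100 GB/s measured. Three sustained 16 B loads per cycle would instead predict ~144 GB/s, hence the question.)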
  • Frantisek - Sunday, December 20, 2020 - link

    Are you planning to review any of the M1 laptops so you can cover results in comparable laptop tests?
  • Holliday75 - Tuesday, November 17, 2020 - link

    I pray to the computer gaming gods that I do not have to purchase an Apple product 10 years from now.
  • nandnandnand - Tuesday, November 17, 2020 - link

    You might need to make a sacrifice while you're at it.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    That is not happening. Apple is always thin and light. They don't even sell their HW to others in a B2B situation for the server market or such; there's no DIY in Apple land, it's all proprietary and gated. AMD is not going to sit idle, and neither is Intel: investor pressure, market demands. AWS needs to put more of their HW in their services; Oracle recently started with Xeon and EPYC.

    Windows abandoning DX is never going to happen; they are pushing too hard to make DX12 the base for all Xbox games, and DX11 is about to die, sadly. And MS wants the gaming market: with the Xbox's failure and their own studios' dizzying output of garbage games (Gears 5, Halo 5, Infinite), they are betting on xCloud, like Luna and Stadia, but only the market will decide how far that goes.
  • nico_mach - Tuesday, November 17, 2020 - link

    The same MS that's putting everything in the cloud via subscriptions so that 'thin and light' devices can play AAA games? THAT MS?
  • taligentia - Tuesday, November 17, 2020 - link

    AWS doesn't care about AMD.

    They have their AWS Graviton (ARM) CPUs, which destroy AMD/Intel. So much so that they have recently been transitioning all of their managed services to them, e.g. S3, RDS.

    ARM is going to eat everything.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Uhh, what? "Destroys AMD and Intel"? Is this a joke or what? Go and read articles on STH first before writing such useless trash...

    "RDS instances are available in multiple configurations, starting with 2 vCPUs, with 8 GiB memory for M6g, and 16 GiB memory for R6g with up to 10 Gbps of network bandwidth, giving you new entry-level general purpose and memory optimized instances. The table below shows the list of instance sizes available for you:"

    That is from an Oct 2020 AWS blog on RDS with Graviton 2. Destroys? Utter BS; notice that line about "entry-level".

    ARM is not AWS, nor Apple. Is Amazon stupid to buy tons of machines based on EPYC and Xeon? And what about the PCIe-based HPC accelerator market, with FP64 compute on MI RDNA2 and GA100? Step back to reality, look at the economies of scale, and read about it before writing such lines as "AWS doesn't care"; it's their business to provide enterprises whatever they require. ARM is not competing in any case with x86: Marvell Thunder is dead; they moved X3 from off-the-shelf to custom designs like Graviton 2, built upon a client request. AMD bought Xilinx FPGA too, for boosting their server market and HPC, and Altera has yet to show what their case is. Nuvia is all smoke show; Qualcomm abandoned theirs. Huawei is banned and not even present in the North American datacenter market. What on earth are you talking about?
  • Hifihedgehog - Tuesday, November 17, 2020 - link

    No worries. Developers are already in a strongly hostile posture against Apple, and Apple is going to try to pull a Microsoft, and it is going to blow up miserably in their faces. The writing is on the wall that they are going to double down on the App Store in macOS. That alone is reason to look long and hard, and use some objective common sense, in light of the history of what Apple has done, can do, does do, and will do to punish developers. Fool me once...
  • taligentia - Tuesday, November 17, 2020 - link

    You are delusional. Developers love the current situation.

    They can write ONE app and have it run on all Macs, iPads and iPhones.
  • nevcairiel - Wednesday, November 18, 2020 - link

    I'm a developer of desktop software, and my targets are Windows, Linux, and macOS; macOS is the worst part of the job by far. And it's not getting better.

    Developers entrenched in Apple's ecosystem might like it, but someone like myself absolutely hates the direction all of this is going. macOS is already the worst desktop OS to develop a cross-platform app for.
  • Kuhar - Wednesday, November 18, 2020 - link

    100% agree on that.
  • Hrunga_Zmuda - Wednesday, November 18, 2020 - link

    You clearly are not a real Apple developer; you are a Windows developer forced to violate your prejudice. There are plenty of top-flight developers who love to develop for macOS.
  • Spunjji - Thursday, November 19, 2020 - link

    Most of the devs I know like macOS as a development platform *and* as a target, and none of them are Apple fans outside of that. I guess it varies.
  • allajunaki - Tuesday, November 17, 2020 - link

    On the contrary, one of the first things they showcased in the initial demos was running virtualization and Linux.
  • andreltrn - Tuesday, November 17, 2020 - link

    Yes, for web developers that is important. Most web apps run on Linux, not Windows. I recruit developers for the IT industry, and most of the server apps are developed for Linux servers.
  • toke - Tuesday, November 17, 2020 - link

    It would have been so nice to compare this to the 2018 Mini's i7 (i7-8700B + UHD 630),
    meh...
  • Ryan Smith - Tuesday, November 17, 2020 - link

    There are about half a dozen further Macs I would have liked to include. Unfortunately securing them is easier said than done, especially in the middle of a pandemic. So we had to take what we could get.

    The i7 would certainly have performed better than the i3 on the CPU side thanks to the additional cores and added frequency. The GPU side would have been almost as dire, however. Apple really wants to move the baseline for their systems far beyond what Intel (and really, the other PC OEMs) deem acceptable.
  • DeathArrow - Tuesday, November 17, 2020 - link

    Does it run Crysis?
  • Ryan Smith - Tuesday, November 17, 2020 - link

    Sadly, no. There never was a Crysis port for the Mac. In fact without Bootcamp, it's even less capable of running Crysis than the 2018 Mini.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Ask about SOTTR: yes, it exists on macOS, and with a proper port. But it won't be able to run, as there are GPU requirements in play which this hardware can't match due to VRAM, plus the Rosetta 2 layer, so the dev (Feral) would need to update it. Unless they do that, there are no big-name gaming benches, just the garbage Apple Arcade mobile ones.
  • tipoo - Tuesday, November 17, 2020 - link

    I believe Apple had mentioned 128 ALUs per GPU core at the keynote around the same time they mentioned the flops, so that's confirmed.

    Worth keeping in mind, this is the slowest M chip Apple will ever ship, the very baseline, and it's already doing this. Can't wait for the "ARM can't scale" x86 die hards to pretend they never said such things.
  • YesYesNo - Tuesday, November 17, 2020 - link

    It is also the fastest chip they have shipped so far.
  • Spunjji - Thursday, November 19, 2020 - link

    Useless response is useless.
  • BushLin - Tuesday, November 17, 2020 - link

    So what higher-spec chip have Apple announced? There are lower-spec versions.
  • Hrunga_Zmuda - Wednesday, November 18, 2020 - link

    That there is a straw man. Apple practically never announces ahead of time what they are going to bring out.
  • Spunjji - Thursday, November 19, 2020 - link

    Apparently some people get up in their feelings when other people use the present to extrapolate to a likely future 🤷‍♂️
  • BushLin - Thursday, November 19, 2020 - link

    There's a difference between discussing what might be vs. "this is the slowest M chip Apple will ever ship", which to me sounds like declaring a certainty... And the only people in a position to make such a statement are Apple, and they haven't.
  • Spunjji - Monday, November 23, 2020 - link

    It *is* a certainty. They've announced that they'll transition their entire range, which means more chips to come. The remaining products in the Mac lineup sit higher in the stack, so the next chips will have higher performance. Unless you can think of a lower product category in their lineup than the ones serviced by M1, there's no reason for them to release an M chip with a lower spec than this one - everything below the MacBook Air is already covered by A14.

    This is simple logic, and you're being needlessly obtuse.
  • BushLin - Monday, November 23, 2020 - link

    Not obtuse. It wouldn't be out of character for Apple to abandon the high end and target the M1 at the much bigger market below that.
    Maybe you're right and there are higher-clocked/higher-core-count versions on the way, but the only certainty is in your mind.
  • Kirfkin - Tuesday, November 17, 2020 - link

    I'm impressed. It's definitely better than I expected. I didn't think ARM performance would be there QUITE yet. It'll be interesting as more applications and games etc. make it to native ARM. I suspect performance will be up and down, but this is absolutely impressive.

    (I use both x86/AMD64 machines and ARM machines such as the Pinebook Pro; I'd absolutely love to see ARM succeed if it ultimately proves better).
  • haghands - Tuesday, November 17, 2020 - link

    I really hope that somehow some of the engineering advances they've made with these chips are able to influence ARM IP in the rest of the industry. Obviously Apple doesn't want this and will in fact do everything in their power to prevent it, but I'm sure other vendors are studying these things closely and I hope it bears fruit lol. Who knows, maybe even some of their engineers leave Apple at some point to help build up some more open platforms; that's exactly what Chris Lattner of LLVM/Clang and Swift did, after all.
  • Spunjji - Thursday, November 19, 2020 - link

    Agreed with both of you, here. It seems that ARM have likely taken some of Apple's design principles into account with the X1, and with any luck future iterations on that design will close the gap.
  • Kishoreshack - Tuesday, November 17, 2020 - link

    People are saying Apple has smashed Intel & AMD.
    The only question is: how?
  • ElvenLemming - Tuesday, November 17, 2020 - link

    5nm process node and an extremely wide pipeline design that's much harder to implement in x86.
  • tipoo - Tuesday, November 17, 2020 - link

    Look at the 630-entry-deep ROB, the cache specs, the front-end decode, or the back-end width. It's a wildly ambitious core and they were not afraid to go very big.
  • StormyParis - Tuesday, November 17, 2020 - link

    Thank you, especially for including modern, competitive x86 CPUs and not just the relatively outdated ones in current x86 Macs.
    I've looked at the new MacBook Air; it's the same price and speed as a Core i7 (IIRC) Dell XPS 13, so it comes down to battery life and silence vs. RAM and storage expandability, ports, software compatibility, and peripheral compatibility. But not really a game changer?
  • ingwe - Tuesday, November 17, 2020 - link

    I'd say that on the power consumption side it is a game-changer. I would expect their next gen to be even better. I am actually considering an Air now and I thought I would be getting an XPS 13 for my next machine.
  • YesYesNo - Tuesday, November 17, 2020 - link

    If Dell would just start using AMD already, the XPS 15 will be my next work laptop. I'm not holding my breath, though.
  • Spunjji - Thursday, November 19, 2020 - link

    Same here on both counts. An XPS 15 with Cezanne and an RDNA 2 GPU would be the bomb; I doubt it's going to happen, though.
  • misan - Tuesday, November 17, 2020 - link

    It’s faster and cheaper than the XPS 13, and it has better battery life.
  • taligentia - Tuesday, November 17, 2020 - link

    20 hours battery life with incredible performance is a game changer.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    Performance per watt is significantly higher than Intel's current offerings. It outperforms i7 and i9 in certain tasks, let alone other laptop chips
  • Kishoreshack - Tuesday, November 17, 2020 - link

    The only thing these Macs are better at is internet browsing & watching videos.
    I would happily do that on my tablet too; I don't see the point of using a Mac for serious productivity stuff.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Unfortunately the market is shifting towards that disaster; this Mac will make marketing-BS waves among people who want a thin-and-light, use-and-throw machine for such work. But thanks to Ryzen we have more DIY market incoming, more GPUs with the next-gen arrivals, and more compute-intensive work like streaming, etc.
  • melgross - Tuesday, November 17, 2020 - link

    That’s just your opinion.
  • misan - Tuesday, November 17, 2020 - link

    They are also better at compiling code, editing photos and videos and *shock* gaming than comparable computers ;)
  • BushLin - Tuesday, November 17, 2020 - link

    What games? iPhone games?
  • misan - Wednesday, November 18, 2020 - link

    All kinds of games. Larian has shown a native Baldur's Gate 3 version running smoothly at 1080p with the highest settings. Not too shabby for a chip that runs at 15-20 watts.
  • BushLin - Wednesday, November 18, 2020 - link

    I'm sure it'll be good for playing Myst
  • Spunjji - Thursday, November 19, 2020 - link

    ROTTR performance beats any other integrated graphics out there, even under emulation.

    Facts must hurt you?
  • BushLin - Thursday, November 19, 2020 - link

    GPU performance on the M1 is its best quality, don't see any contradiction of that from me.
    Jokes about the platform limiting game availability hurt you?
  • Spunjji - Monday, November 23, 2020 - link

    Not really - I don't own a Mac and don't ever plan to unless Boot Camp comes back. You're mistaken to assume I'm personally invested in this; I just don't like shitposters very much.
  • taligentia - Tuesday, November 17, 2020 - link

    Not sure what you're rambling about.

    The MacBook Air, Mac Mini, and the 13" MBP were always their low-end models.

    Wait until their iMac Pro and Mac Pro are released. They will truly shake up the professional market.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    They're better at video editing, color correction, and image processing than other PCs.

    Show me a desktop PC that can scrub 8K video in DaVinci Resolve with no frame drops like the entry-level M1 in a 13" MBP can. It's exceedingly rare and incredibly expensive. I have a Threadripper workstation and am salivating at these.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    If this is "entry level" then I can't wait to see what their high end machines with 32GB+ of RAM look like next year
  • haghands - Tuesday, November 17, 2020 - link

    Did you look at a single benchmark? Are benchmarks too complicated for your single brain cell?
  • Silma - Tuesday, November 17, 2020 - link

    It's good to have more competition.
    However, let's temper the praise: two of the most important reasons for this chip's speed are not Apple: the microarchitecture is ARM, and production is TSMC 5 nm.

    The speed and power consumption of any Intel processor, if made on 5 nm, would be much more similar to that of the M1.

    Even from AMD we can automatically expect much better results on 5 nm than on 7 nm.

    I would be very interested in seeing how competitive another ARM-based TSMC 5 nm SoC would be vs. the M1. Perhaps from Qualcomm or someone else.
  • Otritus - Tuesday, November 17, 2020 - link

    TSMC's 5nm node offers about a 30% decrease in power consumption over 7nm. That would imply a 28-34 watt TDP at 7nm, which still keeps it ahead in efficiency. In terms of performance, no, AMD and Intel would not perform better on 5nm. There is no reason to believe the TSMC 5nm process can clock higher than (or even as high as) the 7nm process. Since performance is equal to IPC times clock speed, a new microarchitecture would be needed to perform better. Apple has over a 50 percent IPC lead over Intel and over a 40 percent IPC lead over AMD.
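    (The arithmetic behind that estimate, assuming the ~30% figure holds at iso-performance: P_7nm ≈ P_5nm / (1 - 0.3), so an M1 drawing roughly 20-24 W works out to about 20/0.7 ≈ 29 W up to 24/0.7 ≈ 34 W on 7nm.)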
  • AlexDaum - Tuesday, November 17, 2020 - link

    An AMD or Intel CPU on 5nm would not gain performance from higher clock speeds, but would gain it from a wider CPU design: larger caches, more ALUs, probably a better branch predictor and a larger reorder buffer. The M1 has a whopping 16 billion transistors, AMD's Renoir only 9.6 billion, so they could add a lot more logic at the same chip size, which would lead to better performance.
    The biggest challenge with scaling x86 CPUs to higher performance seems to be building wide decoders (AMD and Intel only have 4-wide decode, while the M1 is 8-wide; one x86 instruction can do more than one ARM64 instruction, but not twice as much).
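    A toy sketch of why variable-length decode resists parallelism (illustrative only; real x86 decoders use predecode bits and length speculation, not a serial loop like this, and length_of here is a hypothetical stand-in for a hardware length decoder):

        # Fixed-width ISA (ARM64): every instruction is 4 bytes, so all
        # instruction boundaries are known up front and N decoders can
        # work on N instructions in parallel.
        def arm64_boundaries(code: bytes) -> list:
            return list(range(0, len(code), 4))

        # Variable-length ISA (x86): instruction i's length must be decoded
        # before instruction i+1's start is even known, so boundary-finding
        # is inherently serial.
        def x86_boundaries(code: bytes, length_of) -> list:
            offsets, pos = [], 0
            while pos < len(code):
                offsets.append(pos)
                pos += length_of(code, pos)  # serial dependency chain
            return offsets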
  • andreltrn - Tuesday, November 17, 2020 - link

    The problem with the x86 instruction set is that it has instructions of different sizes, which don't suit a wider architecture. It is not that easy to achieve the high degree of parallelism of the M1 architecture with the x86 instruction set. Also, the 16 billion transistors are not only for the CPU and the GPU; there is a lot more in there! You can't compare transistor counts like that. This is an SoC.
  • Otritus - Wednesday, November 18, 2020 - link

    @AlexDaum

    1) As I said, a new microarchitecture would be needed.

    2) Ice/Tiger Lake already has 5 decoders (4 simple + 1 complex), and Tremont has 6 decoders (run as 2 sets of 3, but Intel says they can run as 1 set of 6).

    3) Apple's Firestorm cores were designed for 2020 using the latest technology. Sunny Cove was supposed to launch in 2017/18, with the next-gen microarchitecture coming in 2020/21. Intel also got lazy and didn't feel a need to speed up microarchitecture development and improve performance tremendously. So Intel is at 4/5 decoders with weak IPC compared to the competition.

    4) AMD is still at 4 decoders because they have a limited engineering budget and need to focus on getting the best ROI. I would imagine Zen 4 or Zen 5 will look into going much wider, because many of the other performance-enhancing routes have already been investigated, and AMD is in a better financial situation.
  • BushLin - Tuesday, November 17, 2020 - link

    Meanwhile in reality, 7nm Zen 2 is trading results with an impressive initial offering from Apple. 7nm Zen 3 is available on desktop, and 5nm Zen 4 becomes available when Apple's exclusivity at TSMC ends.
  • Spunjji - Thursday, November 19, 2020 - link

    5nm Zen 4 will be contemporaneous with the second generation of M1 on N5P.
  • SarahKerrigan - Tuesday, November 17, 2020 - link

    The microarchitecture is Apple, not ARM; this is a fully-custom core.
  • RogerShepherd - Tuesday, November 17, 2020 - link

    The architecture is ARM's; the microarchitecture is Apple's. And you can speculate about Apple's influence on the ARMv8 architecture.
  • BlackHat - Tuesday, November 17, 2020 - link

    Yeah, even the Ryzen 4500U is 27% slower in single-core than this chip, at 18W power draw; a Zen 3 version with another 19% IPC improvement should be very competitive.
  • BushLin - Wednesday, November 18, 2020 - link

    ...and that's before we focus on doing some real work in multithreaded tasks.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    Qualcomm's implementation is a joke. Their performance has been 18 months behind Apple's for the better part of a decade. It isn't as simple as saying "this is an ARM processor" while disregarding that Apple's chip design is so far ahead of everyone else in that space.

    Intel has been trying for over a year now to poach their head of chip development (who, prior to working at Apple, was one of Intel's higher-ups) to be their new CEO. Apple poached the best around 2008 and this is the payoff.
  • Nick B - Tuesday, November 17, 2020 - link

    Apple has a perpetual ARM licence because they co-founded ARM. Apple does NOT use ARM reference designs, just the licence.
  • haghands - Tuesday, November 17, 2020 - link

    The ISA is ARM but the microarchitecture is entirely Apple. They've been designing their own cores from scratch since 2012.
  • andreltrn - Tuesday, November 17, 2020 - link

    The microarchitecture is not ARM's. The microarchitecture is from Apple, and they use the ARM instruction set. Apple was the first to design and produce a 64-bit ARM processor. They did that a year before ARM came out with their own 64-bit design.
  • cfenton - Tuesday, November 17, 2020 - link

    The Rosetta 2 performance is what I was most interested in. Being able to mostly match Tiger Lake even in apps that haven't been ported yet is incredible. It means there's no real downside to these new Macs, unless you need Boot Camp.
  • eastcoast_pete - Tuesday, November 17, 2020 - link

    Thanks Andrei! I agree that the M1 is a (needed) jolt to the CPU ecosystem; very impressive. Two questions: 1. Did you have a chance to try out Rosetta with, let's say, MS Office for Mac (maybe when writing this review?), and 2. Especially if not, any plans to write a separate review on Rosetta and how well the x86 emulator does when running key productivity/creative software on it, maybe some basic office and Adobe programs? Would find it really useful to know how memory-dependent (RAM) it is - after all, can't add any, since it's an SoC design.
  • Andrei Frumusanu - Tuesday, November 17, 2020 - link

    We didn't have time to properly put any of that into concrete numbers, as I noted I'm sure other publications will have done a way better job at those things. I'll try to experience it as much as possible, it's just been 3 days of compiling stuff.
  • eastcoast_pete - Tuesday, November 17, 2020 - link

    Thanks Andrei! Any insight on Rosetta and use of "legacy" applications (so, almost all right now) is appreciated.
  • melgross - Tuesday, November 17, 2020 - link

    It’s not an emulator. It’s virtualization.
  • jpcyr - Tuesday, November 17, 2020 - link

    None of the above. It’s translation. Once open and translated, the app runs.

    If an executable contains only Intel instructions, macOS automatically launches Rosetta and begins the translation process. When translation finishes, the system launches the translated executable in place of the original. However, the translation process takes time, so users might perceive that translated apps launch or run more slowly at times.

    https://developer.apple.com/documentation/apple_si...
  • eastcoast_pete - Tuesday, November 17, 2020 - link

    Thanks! If I understand that correctly, that translation needs to happen every time the x86-native application is opened; if so, how much wait time is to be expected?
  • Blark64 - Tuesday, November 17, 2020 - link

    The translation happens only once on first launch of an app, and the result is cached; after that, app launches are as fast as usual.
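    Conceptually it behaves like a memoized compile step; a minimal sketch of the behavior described in Apple's docs (not Rosetta's actual implementation; translate_to_arm64 is a hypothetical stand-in for the real translator):

        import hashlib

        translation_cache = {}  # the real system persists this to disk

        def translate_to_arm64(x86_code: bytes) -> bytes:
            return b"arm64:" + x86_code  # hypothetical stand-in

        def launch(x86_code: bytes) -> bytes:
            key = hashlib.sha256(x86_code).hexdigest()
            if key not in translation_cache:       # first launch: pay the translation cost
                translation_cache[key] = translate_to_arm64(x86_code)
            return translation_cache[key]          # later launches: cache hit

    First launch pays the translation cost; every launch after that is a cache hit.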
  • tipoo - Thursday, November 19, 2020 - link

    Except in code-within-code cases like JavaScript in a browser, etc., where it must fall back to emulation.
  • Retycint - Tuesday, November 17, 2020 - link

    Even as a consumer that would never buy a Mac, maybe ever, I have to thank Apple for pushing ARM processors in computers, and to accelerate the shift (hopefully?) towards ARM. Hopefully Qualcomm and Microsoft get their shit together and produce a comparable ecosystem/CPU within the next 2 years. Intel, meanwhile, can die a slow painful death, especially after, what, 10-12 years of 15W dual core sadness
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Why do you think that is going to happen? Because Apple is making an ARM chipset, everyone in the world will do it? Microsoft already has Qualcomm 8cx hardware in the Surface Pro X, and it's garbage, since it has to translate 32-bit x86 at a slower rate; with the huge ecosystem of Windows exes it gets even harder, and on top of that 64-bit translation is not licensed, and if they do that there will be an even bigger perf hit.

    The server market is not there; just forget it. Marvell abandoned ThunderX3 as an off-the-shelf CPU; only Graviton 2 exists, but it is limited in scope and only for AWS. ARM is increasingly walled-garden BS: look at the ARM ThunderX3. Marvell mentioned they will do a custom chip for any company that wants such a machine, but nothing off the shelf; Graviton 2 is only for Amazon, they don't sell it.

    With the increasingly vertical nature of ARM, why do you guys always want ARM to replace x86? Do you like your software and hardware gated? Look at this Mac: Apple's Rosetta 2 doesn't allow 32-bit x86; that's dead. The last version to support it was Mojave, and many users still use that. On top of that, the HW is fully BGA; no user can do anything with this. Even a repair would be near impossible due to too many black boxes.

    10-12 years of dual-core sadness, lol. When there is no competition, that is what happens. Do you think Apple would do anything if there were no competition in the mobile space? AMD provided that competition, and now we have superb processors from AMD which are not only more powerful but also more user-friendly, with DIY socket support, ECC RAM, etc.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Why would he want a fast, efficient processor?

    Yeah, I have no idea why someone would want that.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    This CPU is only for light workloads, not a replacement for an i7-class machine. People think it's going to destroy the 5950X and 10-series i9 by looking at the previous article's comparisons of SPEC scores and GB single-threaded perf. When more cores come into play, the M1's advantage diminishes, which is why Apple is still selling Intel Macs at much higher prices.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I don't think it is a replacement for a decent desktop. Actually, I don't know why I should care about low power in a desktop at all. But it is decent for a laptop if you want battery life without having to give up too much performance.
  • SarahKerrigan - Tuesday, November 17, 2020 - link

    It sure looks like a replacement for laptop i7's, which is the real comparison point here.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I agree with this, it looks like it can replace a laptop i7, similar performance but better efficiency.
  • melgross - Tuesday, November 17, 2020 - link

    Ah, you just said it: ONLY when more cores are available, and, by the way, three times as much power draw, does AMD show any consistent advantage.

    Do you really think Apple isn’t working on 6 and 8 core designs with even faster cores? Or even more powerful GPUs?
  • YesYesNo - Tuesday, November 17, 2020 - link

    If you want to talk about hypothetical things Apple are working on you have to compare them to hypothetical things Intel and AMD are working on too.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Why don't we just talk about Zen 4, Nvidia Hopper, and AMD RDNA 3? TSMC announced that 2nm risk production is on track for 2023; that should be the discussion right now. Or how SpaceX is going to take us to Mars, for instance, while their Crew-1 mission is all over the news today.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    It's already a replacement for i7-class machines in many productivity workloads, and this is just the entry level.

    If this can scrub 8k video in Resolve with no dropped frames then I can't wait to see what their beefy configurations are capable of.
  • andreltrn - Tuesday, November 17, 2020 - link

    Because pros won't integrate new equipment into their workflow until they know it is flawless. Wait until spring 2021, when the next version (the M2 or M1X) comes out. Then they will stop selling Intel-based MacBooks. Probably in 2022 they will ditch the Xeon in the Mac Pro as well.
  • Spunjji - Thursday, November 19, 2020 - link

    "People think this is going to destroy 5950X and 10 series i9"

    People are indeed capable of extrapolating daft conclusions from data, but that's not what the data says. The data says that a multicore chip based on this architecture and a comparable number of cores to an i9 or 5950X *could* be competitive in its performance, and for some reason a lot of responses are assuming that means it's *already the case with M1*.
  • KPOM - Tuesday, November 17, 2020 - link

    Very impressive. I have a MacBook Air arriving today and can hardly wait to test it out.
  • robco - Tuesday, November 17, 2020 - link

    Only the Air has the 7 core GPU at the entry level. The MBP and Mini both have 8 core GPUs.
  • KPOM - Tuesday, November 17, 2020 - link

    I have the Air with the 8-core GPU, though it's only about 6-7% faster than the 7-core GPU. I get Metal scores about 20,500 in Geekbench, FWIW.
  • MonkeyPaw - Tuesday, November 17, 2020 - link

    "Finally, it should be noted that Apple is shipping two different GPU configurations for the M1. The MacBook Air and MacBook Pro get chips with all 8 GPU cores enabled. "

    Small correction here, the MBA also has a 7 core GPU option at the entry level. I guess time will tell on how much more performance that extra GPU core will get you.
  • KPOM - Tuesday, November 17, 2020 - link

    My guess is that the “missing” GPU core won’t make much difference for the average consumer. It’s likely just Apple binning chips and using ones with a “defective” GPU in the base Airs. It’s $50 less than the 8-core models similarly equipped.
  • Stuka87 - Tuesday, November 17, 2020 - link

    I have gone through three previous Apple hardware migrations (68K-16bit --> 68K-32bit --> PPC --> Intel) so they certainly know how to make it not terrible. Will be curious how this one goes. But the performance shown here is better than I expected, which is a plus. But losing the ability to run Windows apps is a big issue for me, so I doubt I will be moving over anytime soon.
  • YesYesNo - Tuesday, November 17, 2020 - link

    The lack of windows makes it a no for my work machine and no external GPU makes it a no for my home machine.
  • biigD - Tuesday, November 17, 2020 - link

    Their products aren't for me either, but I'm happy to see them come out swinging. It'll be good for all of us in the long run.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Competition is good. Hopefully Intel will start competing again. AMD and Apple aren't competing with each other at all, in my opinion. Everyone using an AMD processor was on Windows, and Apple moving to their own silicon only really hurts Intel.
    The only way I could see any competition between Apple and AMD is if Apple lowered prices enough to become an option for people who are OS-agnostic.
  • Spunjji - Thursday, November 19, 2020 - link

    @YesYesNo - agreed on this. I don't think there are really that many OS agnostic people out there, either; the ones I do know of mostly use stuff like Chromebooks, as they just care about the internet and basic productivity apps.
  • Nick B - Tuesday, November 17, 2020 - link

    I have a Vega 64 external GPU connected to my 3.1GHz i7 2017 MBP for FCP use. Nice to know I don't need it anymore if I buy one of these 1st gen M1 Macs.

    I'm waiting for the new iMac with its M1 Pro (or whatever they call that SoC) in Q1 2021.
  • vaddieg - Tuesday, November 17, 2020 - link

    Parallels has announced Windows-on-ARM support on the M1. The next version of Windows is expected to support x86_64 emulation like Rosetta does.
  • Stuka87 - Tuesday, November 17, 2020 - link

    Yeah. But running Windows software in Parallels is currently super fast, as there is no RISC/CISC translation.

    Even if MS does add ARM support (which I am not going to hold my breath for), it still means translating software that doesn't have an ARM binary.

    Plus, being able to fully boot into Windows is a very nice thing to have. That may return if ARM support does come to Windows (let's ignore Windows RT for this), but not short term.
  • Eric S - Tuesday, November 17, 2020 - link

    Microsoft announced it will be out this month. No need to hold your breath very long.
  • Hulk - Tuesday, November 17, 2020 - link

    Wow, this is impressive work from Apple. Anand must be behind it ;)
    A couple years ago I would never have thought we'd have an actual three-way CPU battle among Intel, AMD, and Apple. Capitalism at work. Fantastic.
  • name99 - Tuesday, November 17, 2020 - link

    Kinda amazing how everyone wants to cheer on capitalism as the hero here...
    Was Qualcomm not capitalist? nVidia? ARM corporate?

    The issue is not capitalism per se, that's merely a tool. The issue is that certain companies have an engineer mindset, willing to take risks and try very new things for the sake of doing things better.
    Intel used to be such a company. QC never was, after they got rich off CDMA. AMD and nV still are. Apple is the peak in this regard.
    And of course the side effect of being the company that takes risks and tries new things is that you are the company that's the butt of jokes from every internet idiot. If you think "haha" when Apple said "it takes courage to drop the headphone jack", rather than thinking "exactly", then you might just be ...
  • MrCrispy - Tuesday, November 17, 2020 - link

    dropping the headphone jack, and HDMI, and any other port, so you can sell overpriced dongles is not an engineering decision, it's profit-driven. There's absolutely nothing brave about it.

    You can talk about adopting Firewire/Thunderbolt as an example, that'd be valid.
  • Spunjji - Thursday, November 19, 2020 - link

    Taking the piss out of Apple for dropping the floppy drive and switching to USB with the iMac turned out to be silly.

    Taking the piss out of Apple for removing the headphone jack, on the other hand, is completely valid. No consumer benefits have materialised from it, and it hasn't been replaced with something superior. If they'd used the opportunity to put a second Lightning port in, then maybe you'd have a point.
  • flyingpants265 - Tuesday, December 1, 2020 - link

    And there probably never will be a consumer benefit.

    The correct move would be to have wireless headphones with a little (wired) base thing which contains a battery that can last up to a week. This would be great for usability as a 3rd party accessory. No more charging your wireless earbuds since they've got a giant battery! Well, does this exist anywhere? For under $100? Without DAC quality suffering? I doubt it.
  • PeterSun - Tuesday, November 17, 2020 - link

    I am curious: can an M1-based Mac mini or MacBook connect to an eGPU through the TB4 ports?
  • Silver5urfer - Tuesday, November 17, 2020 - link

    No, eGPU is out of the equation.
  • chang3d - Tuesday, November 17, 2020 - link

    I think it has something to do with not having any drivers from AMD for this new architecture. I doubt Nvidia would get a driver working here.
  • Eric S - Tuesday, November 17, 2020 - link

    At least not out of the gate. Maybe that will change. Apple hasn't actually said anything about this outside a compatibility list. However, Apple is big on TBDR GPUs. Currently all eGPUs are immediate-mode, which Apple wants to discourage since it is inferior in many ways. Apple is rumored to have a discrete GPU in development, possibly for release next year. It might make sense to embed it in eGPUs or displays.
  • mrdude - Tuesday, November 17, 2020 - link

    If this is what Apple are capable of with a bit of hard work and time, it makes you wonder just what on earth is going on at Intel, ARM, and Qualcomm? I'll excuse AMD for the time being, given their recent generational improvements have been impressive -- though not to the same extent as witnessed above.

    Apple has made it evident that a pinpoint focus on microarchitecture improvements built upon the ARM ISA can match, and exceed, x86 at the top end in performance, while being far better in efficiency. I despise Apple's anti-consumer antics and would prefer to avoid them if I could. However, a rising tide lifts all boats... except for the x86 ones? Those are looking like they may drown.
  • YesYesNo - Tuesday, November 17, 2020 - link

    They haven't exceeded the i9 or 5950X as far as I am reading the data. Efficiency yes, performance no.
  • Zerrohero - Tuesday, November 17, 2020 - link

    What do you think will happen when Apple releases a proper desktop chip?

    Look at the computers in which they are using the M1.

    Is 5950x really the proper comparison?
  • YesYesNo - Tuesday, November 17, 2020 - link

    I don't know, I can't see the future. I think that Apple have released their best effort so far. I don't think it is unreasonable to assume that they want to show off the best they can currently do when bringing out a new chip. And currently they aren't beating the top end. If they were, they would have released that.

    I was replying to a comment about the top end of x86, which is why I mentioned the top (consumer) end of x86.
  • augustofretes - Tuesday, November 17, 2020 - link

    I don't know why people feel the need to delude themselves. Apple replaced the chips in their lowest-end products first simply because those products are far more important to them. Is that clear? Apple sells far, far, far, far more base MacBook Airs than they do 16" Pros.

    Moreover, the people buying their top-of-the-line products need all of the software ported first, so they will also be the last people to make the transition. It's that simple.

    Apple can obviously add four more Firestorm cores to their SoCs and will most certainly do so in the coming months (probably starting with the high-end MacBook Pro 13"). Their desktop-class processors are the last thing they will update, because that segment is the least relevant and the one where customers are most likely to wait until everything is ported to make a purchasing decision.

    The M1 performs exactly as anyone could've estimated (and actually did estimate) based on their A14 chip.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Yes of course, apple just didn't want to impress everyone with the performance of their best chips so they are keeping them hidden away. Makes perfect sense.

    Apparently being deluded means looking at what is released and not dreaming about what isn't.
  • Zerrohero - Tuesday, November 17, 2020 - link

    Only a die-hard Apple hater isn't impressed by what they have shown now.

    Read the reviews, check Twitter for plenty of tests like compiling huge codebases. Even the fanless new Air beats the i9 16” MBP in those while using drastically less power.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I haven't said they aren't impressive, quite the opposite. But yes, because I'm not saying "well, game over, Apple wins everything", I must be a hater, of course.
  • Kuhar - Wednesday, November 18, 2020 - link

    I am impressed by the results; they are very good. But beating the i9 MBP isn't really a big thing, since the i9 in the MBP is so crippled by the poor design of the cooling solution and chassis that it actually works like an i7 in well-designed notebooks.
  • Spunjji - Thursday, November 19, 2020 - link

    @Kuhar - one thing I will say in apple's defence is that the MBP cooling design is actually quite good, it's just not particularly *powerful* in an objective sense. For the size and weight of the device, it's well above average. The problem was that Apple kept iterating their designs at the release rate of Old Intel, and Intel's foundries haven't caught up - see also the MacBook and MacBook Air, which were clearly designed in expectation of processors with an actual 10W TDP instead of a "10W" mode that's really just permanent throttling.
  • PickUrPoison - Saturday, November 28, 2020 - link

    The i9 operates at well above Intel's rated speeds; there is no thermal throttling. Unless by thermal throttling you mean it doesn't run at 5.0GHz with all 8 cores at 100% util.

    It’s a 2.4GHz part, and it operates somewhere around 3.2GHz in Apple’s 16” MBP enclosure iirc.

    So apparently a lot of thermal headroom is available to that i9.
  • apoctwist - Tuesday, November 17, 2020 - link

    When Apple made their move to Intel, their first Intel-based machines were the MacBooks - low-tier, low-priced machines. They didn't release the MacBook Pros until later. Apple likes to release high-volume products first to get the kinks out and then move on to the higher-performing stuff later. It's pretty much the opposite of how most other companies do it: they start at the low end and move up.

    Apple doesn't care about impressing tech nerds on Anandtech; they're focused directly on consumers, and all a consumer needs to know is that the new consumer-facing Macs perform as well as, and in many cases better than, the Intel-based machines.
  • Glaurung - Tuesday, November 17, 2020 - link

    Chronology of the PPC-Intel transition:

    January 10 2006: Intel-based iMac, and 15-inch MacBook Pro
    February 28: Mac mini
    April 24: MacBook Pro 17"
    May 16: MacBook 13" (13" macbook pro did not exist yet)
    August 7: Mac Pro

    Keeping in mind that 15 years ago desktops made up much more of the market and laptops much less than now, I'd say that what they did both then and are doing again now is starting with their top-selling devices. Today, their top sellers are also their lowest-end models, back then it was more muddled.

    That said, I think it's bankable that the M1 is the best low-power chip they can make *for ultraportables,* and not the most performant chip they can make. Next year, they will come out with a higher core count, possibly higher clock speed chip for their iMacs and high-end MacBooks. And then this time next year or sometime in 2022 they'll come out with something even faster and more power-hungry for their Mac Pro desktop.
  • YesYesNo - Tuesday, November 17, 2020 - link

    News flash: tech next year will be an improvement on tech this year.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Why would they have to? Everyone knew what an Intel processor could do, as they existed whether they were in a Mac or not.
  • Nick B - Tuesday, November 17, 2020 - link

    I'm sorry that, in the midst of a global pandemic, Apple wasn't able to refresh every single product they offer on day one.

    Nvidia and AMD stagger the release of their latest cards due to engineering and logistic constraints. A bit of a double standard on your part if you expect Apple to perform miracles.

    Rest assured they will release new desktops and pro laptops in early 2021 and I'd pay to watch you eat your jealous words.

    In my case I'm placing my 5950X build plans on hold until I see what GPU magic Apple comes out next year.
  • Kuhar - Wednesday, November 18, 2020 - link

    :) I really like your comments! Very reasonable and objective.
  • Kuhar - Wednesday, November 18, 2020 - link

    @YesYesNo - that was to your comments that i like: reasonable and objective.
  • PickUrPoison - Saturday, November 28, 2020 - link

    re: “apple just didn't want to impress everyone with the performance of their best chips so they are keeping them hidden away”... Nobody said that; that's a straw man of your own creation, isn't it? 🙄

    But obviously it upsets you to contemplate future Apple releases. Given how frightened you are of Apple silicon, your head is likely to explode as higher core offerings will certainly blow your mind 🤯

    Some think the M2 will be 12-core (8+4) and the M3 16-core (12+4), but I don't think they can replace the 28-core Mac Pro with just 12 high-performance cores. A 4+4 M1, an 8+4 M2, and a 16+4 M3 make sense to me. Maybe there will be an additional 32- or 64-core M4 just for the Mac Pro, who knows.
  • PickUrPoison - Saturday, November 28, 2020 - link

    Apple has just released their first few Macs with the lowest performing Apple silicon they will ever release. At this point we can say the M1’s performance is amazing for a processor with 4 high-performance cores. That’s just a fact, though I’m sure that upsets some of the Apple bashers.

    I think it’s foolish to think Apple will never make a CPU with more than 4 high performance cores, but fools will do what fools will do ¯\_(ツ)_/¯

    I also think it’s foolish to think the M1 should be able to beat CPUs “at the top end”. A $799 mini won’t beat a $50,000 Mac Pro with a 28-core CPU just because it smokes it in single-core performance. That’s just a little too much to ask.

    But as Apple steps up the cores, they’ll replace higher performing, higher priced Macs. It’s really so simple as to be laughable that some people don’t get it lol.

    Apple’s laptops represent >80% of their sales, and they just kicked Intel out of the best selling ones. On day one, they already have custom Mac silicon that is sufficient for many, many users—consumer and pro alike. Who but Apple could have pulled this off 🤣
  • ingwe - Tuesday, November 17, 2020 - link

    Apple probably optimized the power draw for the laptop space. So it will likely be less efficient at higher powers (unless they just cram more cores in). Just something to think about.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I'm not sure why some people think that apple have something even better just waiting for the right moment to be released.

    Will they continue to improve, of course. Will everyone else stand still? That seems unlikely.
  • prisonerX - Tuesday, November 17, 2020 - link

    Here in the real world, there are capacity constraints for everything, even for massive corporations. For example, TSMC doesn't have unlimited wafer capacity at 5nm.

    Apple said they would take 2 years to do the transition. The high-end desktops are probably the most expensive to develop with the lowest volume, so you can expect them to come last, and probably late.
  • YesYesNo - Tuesday, November 17, 2020 - link

    So do they have more impressive chips available or not?

    Are we going to ignore binning too? Are these the top binned processors or the bottom binned?
  • Glaurung - Tuesday, November 17, 2020 - link

    Of course they do. They said in their presentation, right at the start, that the M1 is optimized *for low power*. It's designed for the MacBook Air, and they threw it into the low-end MacBook Pro and low-end mini because those are designed around a similar kind of buyer - people who want light weight, long battery life (or low cost in the case of the mini), and don't care about more than 16GB of RAM or discrete-level graphics performance.

    Apple has a high-end mini, two high-end MacBook Pros, and their iMac line, which they sell to people who care very much about performance more than battery life. You can be absolutely certain that those computers are going to get a faster (more cores and/or more clocks), more power-hungry chip than the M1.
  • Nick B - Tuesday, November 17, 2020 - link

    The M1 SoC is currently limited to 16GB of RAM. Mac Intel laptops currently offer 32 GB options, the iMac goes up to 128 GB, the iMac Pro maxes out at 256 GB.

    Apple is a multi-trillion-dollar company because it executes consistently. Without a doubt, SoCs with additional TB4 ports and larger RAM capacities are coming. You'll just have to wait like the rest of us.
  • Nick B - Tuesday, November 17, 2020 - link

    Not "right timing" but engineering and logistics capacity. And yes they have a desktop class version of the M1 that's been under development for as long as this M1. That's how engineering and business planning works. Apple is consistent at delivering products on schedule when they control the process from start to finish. M1 Macs will be no different.
  • MrCrispy - Tuesday, November 17, 2020 - link

    and by the time Apple releases a new chip next year, I assume you think Intel/AMD will have spent the year sleeping?

    M1/Apple Silicon is meant for mobile/laptop, not desktop. And when Zen 3/4 come out and move to 5nm, the perf gap and perf/watt gap will be very small.

    Unless you think other companies don't have engineers or are run by idiots.
  • Spunjji - Thursday, November 19, 2020 - link

    Apple's next release is likely to be around the end of Q1, maybe early Q2 next year.

    At that time, the AMD competitor will still be Zen 3 (with the addition of Cezanne on the mobile side of things) and the closest Intel competitor will be 8-core Tiger Lake, with Rocket Lake on desktop. It's hard to take the latter seriously as a competitor, though, as its power usage will be on a different order of magnitude.

    That picture only changes significantly if it takes Apple until very late in 2021 or early 2022 to release a larger chip, in which case it might face off against Alder Lake. That would be more interesting as an outright performance comparison, but in power/performance terms it doesn't seem like Intel are likely to gain back much ground.
  • Spleter - Wednesday, November 18, 2020 - link

    What are they doing? Intel, AMD, and Qualcomm are trying to sell designs marked up at a profit that OEMs are willing to pay for; in turn, the OEMs are out to produce a product that they hope to mark up at a good profit, but they face a situation of having to vend product that must run Windows to address as many potential customers as possible. The world of a WinTel OEM's market is highly competitive, and it seems most consumers and corporate IT purchase decision makers only view PCs as simple commodities, so you just get an OEM race to the bottom in prices. Outside of creative pros and gaming enthusiasts, it seems the average WinTel consumer is not willing to pay what it takes for the PC industry/ecosystem to

    Apple, on the other hand, does “the whole widget” and increasingly takes on the risk of in-house responsibility for design and production of critical components of their end-product solutions. And since they are not trying to market those components as products that have to be individually marked up for sale, they only have to worry about one single profit objective. Apple only has to focus on what is technically needed to achieve performance targets for some user-experience objective, and can just fold all the cost considerations together as they pick a product price that has their desired profit margin.

    Intel, AMD, and Qualcomm don’t have enough mass-market consumer demand for their highest-end designs and must make chips that address many different OEM segment needs. In one sense it’s easy to see what Apple is doing for performance with their big/wide SoC designs, but will any OEMs pay the price it takes to match Apple’s designs, knowing that they may price themselves out of the market with end consumers? Qualcomm’s latest mobile chips are now bigger than ever and more expensive than ever; could they price themselves out of some OEM markets trying to match Apple SoC performance? (Or is it that Microsoft, Intel, and Qualcomm take “too much” value out of their part of the end product such that there is not enough for OEMs to really sustainable

    Apple has the unit volumes and only makes what they need for their requirements, and they make their critical components the best they can be made at the time. Plus, Apple has the premium segment of the phone, mobile, and desktop PC market, with customers willing to pay for quality.

    M1 is not some low cost
  • Spleter - Wednesday, November 18, 2020 - link

    Wasn’t ready to submit that...

    M1 is not some low-cost chip; it is the chip in Apple's “low cost”/low-end mobile PC offerings. M1 is the low-power/high-efficiency offering in the family of chips that will be based on this architecture.

    This is the lowest-end baseline SoC of the Mac Apple Silicon family. The real performance-oriented mobile/desktop workstation SoCs will reveal the true performance chasm between x86 and Apple Silicon.
  • km1810vm4 - Tuesday, November 17, 2020 - link

    Any chance of getting some compiler benchmarks? Building a toolchain or the Linux kernel?
  • dgb448 - Tuesday, November 17, 2020 - link

    Not complete benchmarks, but early signs are that compiling is crazy fast.

    https://twitter.com/wvo/status/1328739313132077056
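
    For anyone who wants to reproduce these informal numbers, here is a minimal sketch of a wall-clock build timer. It assumes a checked-out Linux kernel source tree and a working native toolchain on PATH; the job count is a placeholder to adjust per machine.

        # Minimal sketch: wall-clock timing of a clean kernel build.
        # Assumes it runs from a Linux kernel source tree with make
        # and a native compiler available; job count is a placeholder.
        import subprocess
        import time

        def timed_kernel_build(jobs: int = 8) -> float:
            """Run a clean defconfig build and return elapsed seconds."""
            subprocess.run(["make", "mrproper"], check=True)   # wipe old build state
            subprocess.run(["make", "defconfig"], check=True)  # generate default config
            start = time.perf_counter()
            subprocess.run(["make", f"-j{jobs}"], check=True)  # the timed compile
            return time.perf_counter() - start

        if __name__ == "__main__":
            print(f"Kernel build: {timed_kernel_build():.1f} s")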
  • MoosBadda - Tuesday, November 17, 2020 - link

    Wonder why the 16" MBP didn't get featured in the benchmark comparisons. I see a few people waiting to upgrade being very interested in that comparison.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I think moving from the 16" to the 13" with M1 would lose you more than you gain regardless of performance.

    I've seen a couple of real world comparisons with the i9 16", the 16" coming out on top. Though i beleive the drive speeds in the new 13" pro are better.
  • MoosBadda - Tuesday, November 17, 2020 - link

    I meant those owners deciding whether to bite the bullet and prepare for the eventual 16-inch MBP M1. I for one am currently deciding whether to get the 2019 16" or wait for the eventual M1 16", as whichever one I do get will be the main driver for a minimum of 5 years.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I'd wait in that case, unless you currently use an eGPU and the i9.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Or need Boot Camp, too.
  • ABR - Wednesday, November 18, 2020 - link

    I've got a 16" and was debating over whether to upgrade to the same with a 5600M. Now I'm going to wait and see whether the first gen M 16's (maybe same time next year) solve the eGPU and x86 virtualization issues.
  • Aquaschaf - Tuesday, November 17, 2020 - link

    Looking at how it compares to the Ryzen 7 4800U I speculate that the Zen 3 based successor could be at least within striking distance in terms of performance and performance/watt. Of course, that product does not yet exist, so these results are impressive.
  • BlackHat - Tuesday, November 17, 2020 - link

    But the fact that it can get close on a less advanced node is proof that x86 is far from dead.
  • Spunjji - Thursday, November 19, 2020 - link

    100%. There's a lot of exaggerated doomsaying.

    Similarly, though, there were a lot of people who insisted ARM architecture CPUs could never play at this level. It's good to see!
  • id4andrei - Tuesday, November 17, 2020 - link

    I think it was worth mentioning that the GPU benchmarks on macOS use Metal while those on Windows use DirectX. The real-workload scenario, Tomb Raider, shows a larger difference between systems than the synthetic benchmarks.

    The same goes for the CPU. The heavier Cinebench workload is enough to reverse the standings shown by SPEC, Geekbench, and Speedometer.
  • Kuhar - Wednesday, November 18, 2020 - link

    True, my comments too.
  • Spunjji - Thursday, November 19, 2020 - link

    Tomb Raider is being run under x86 code translation, though, so that's not exactly an entirely fair comparison either. It's not clear what "fair" would mean given the relative lack of native macOS games, though.
  • galad - Tuesday, November 17, 2020 - link

    Can you add some HandBrake benchmarks? There is an ARM-native 1.4.0 beta available.
  • sqrt(-1) - Tuesday, November 17, 2020 - link

    The benchmark comparisons with scores on Windows are not valid because many of them deliver very significantly different scores depending on the OS used.

    For example, Geekbench spews out ~20% higher scores on macOS than Windows with the same CPU.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Wow, I didn't know this, lol; that makes GB even more useless.

    I know that whenever a new GB version comes out it puts the A series in a major light. The easiest way to see it is to look at the GB4 benches for the S20 vs the iPhone 11/12, and then GB5 for the same set. You will observe a massive difference, with the S20 losing to the iPhone 11 on GB5 vs the GB4 run.
  • marrakech - Tuesday, November 17, 2020 - link

    I saw some reviews; all new models come with their new OS, Big Sur, out of the box.
    I think if you could run Windows 10 on these M1 parts you would see how far along it really is in performance.
    They made a nice OS for an underpowered iPhone 2, so they know what to do to make a new OS have good battery life.
    For those who were saying it's 2x faster, look at that:
    https://youtu.be/BUBYO591Fo8?t=79
    I bet you if they ran PassMark CPU Mark you would see how poorly it performs for its 8 cores.
  • Solidstate20 - Tuesday, November 17, 2020 - link

    @Andrei - under the GPU section (second sentence) you wrote "if your computers" - I think you mean "of", not "if".
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Why not put up the CBR23 benches for the 5950X and 10900K when those are included for SPEC and GB? You don't even need a Mac mini for that; just put up the runs of those CPUs. If you did, it would show the reality of the M1 vs x86.
  • YesYesNo - Tuesday, November 17, 2020 - link

    I don't know why you are so obsessed with trying to make out that this chip isn't an achievement.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    They have a decade of experience with the ARM ISA and they make their own CPUs; it's a modified A14. Why is this a ground breaking achievement when the company is making broad BS claims like "faster than 98% of laptops" and "world's fastest core" rubbish? And moving architectures is not a new thing for Apple either.

    Intel's heat output is not helping them either. Apple is the one which pushed the BGA bandwagon on the laptop market for high-core i7 chips, and Intel applied the same to all mobile processors, from basic Celerons to the i9, which is horrible for lots of reasons: BIOS voltage/clock controls are lost (before you say no one uses them, go to TPU and check the ThrottleStop userbase, or NBR as well), and there's throttling due to subpar cooling - this applies to Macs, which throttle a lot and use poor VRM components on top to make it worse.

    The achievement is AMD: from near bankruptcy, almost written off, to a market leader on all fronts from DIY to the server/datacenter market. Qualcomm tried to make an achievement in breaking Intel's market at DC with the Centriq lineup and failed. Apple could've gone to AMD for a better perf/watt ratio, but their core userbase doesn't even need such power, and paying tons of cash to Intel while paying billions to TSMC on the side painted a poor picture for the Intel Mac business.

    This is not ground shattering. Sorry, nope. That crown goes to AMD this decade.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Ground breaking and ground shattering are your words not mine.
  • Spunjji - Thursday, November 19, 2020 - link

    "it's a modified A14"
    And the A14 is itself quite the achievement. Have you seen the graph of Apple's performance increases, year on year? It's kind of silly.

    Then there's the fact that no other ARM vendor has done this. A few have tried with server chips and failed, the few who have succeeded have done so with caveats. There are no caveats with this - it beats out the best Intel could offer in the same market segment, and an entire segment above, and is still competitive even with code translation in play.

    It's taken them a decade to go from being dependent on Intel to competing with them and winning. Not bad going, really.
  • The Garden Variety - Tuesday, November 17, 2020 - link

    Because that's what Anandtech is all about: Articles that celebrate and dissect new technology, test it, poke it, see what it does, and hundreds of comments that are the forum equivalent of someone interrupting someone else to yell, "ACTUALLY..."
  • vais - Wednesday, November 18, 2020 - link

    Well, it wouldn't paint the M1 in the picture the author wanted, of course!
    How dare you suggest showing benchmarks where it doesn't seem to be the best of the best, crushing the filthy x86 plebs?

    To me this article and the last one about the A14 seemed like sponsored pieces... There are a lot of interesting details, but the benchmarking part is 90% advertisement for the M1/A14. When a chart shows it is near or beats a desktop x86 chip, it is praised (as it should be). On other benchmarks, this same desktop x86 chip is omitted, as you said. And yet claims are made that the M1 somehow outperforms even the best of x86 not only in performance/watt but in absolute performance - what?!

    And then we have this:

    https://images.anandtech.com/graphs/graph16252/119...

    Comment: Apple doesn't win, let's downplay the x86 lead.
    - I agree M1 competing with a much more power hungry desktop CPU is amazing, even if only single core.

    The more interesting multi-thread Cinebench:
    https://images.anandtech.com/graphs/graph16252/119...

    Comment: M1 is destroyed by AMD's mobile CPUs, so let's completely ignore this fact and instead praise how M1 is crushing Intel's offerings.
    - Also, the 5950X is completely omitted; we don't want to skew the chart _that_ much...

    Don't get me wrong, I do think the M1 is fantastic for Apple, and it really does outperform Intel's CPUs - but guess what - those aren't the leaders in x86. It is a better CPU than those previously used by Apple, but x86 has better CPUs, with better perf/watt, from AMD alone.

    The GPU however is a beast!
  • Spunjji - Thursday, November 19, 2020 - link

    Leaving a 16-core chip that pulls 142W during the benchmark out of the MT chart makes sense.

    Keeping it in the ST chart also makes sense - it gives you a ballpark idea of where Cezanne will be playing, as AMD don't lose much ST performance in their mobile chips.
  • vailr - Tuesday, November 17, 2020 - link

    My question would be: how far away is Microsoft from enabling the Windows 10 ARM edition to boot and run on an Apple M1 machine? Either in Boot Camp or "bare metal" mode?
  • YesYesNo - Tuesday, November 17, 2020 - link

    I don't see it happening anytime soon. At least not until Microsoft themselves have better ARM-based options.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Boot Camp is EOL; it's dead. Win10 ARM is a long shot: they are trying very hard with the Qualcomm SQ1-based Surface Pro X, and the lack of software on ARM plus translation overhead won't help them get there either.
  • Eric S - Tuesday, November 17, 2020 - link

    64-bit translation will be out this month. The question is whether Microsoft will license it. I'm leaning toward yes - Macs are too popular among developers.
  • Glaurung - Tuesday, November 17, 2020 - link

    Dual booting? Probably never happening: Apple would have to do the work of creating Windows drivers (for bootcamp they just repackaged generic windows drivers for the hardware they were using) and they don't care to.

    Running windows ARM in a VM? Microsoft only has to make their ARM version available for purchase - right now you can only get it preinstalled on ARM devices. The rest is up to parallels/virtualbox/VMware. I'd say a year, maybe less, but first MS has to come through with the necessary licensing policy.

    Running ARM Windows in a VM with native graphics drivers? Would require Apple to write the drivers, and again, that's not happening. So, kiss GPU intensive windows gaming on Apple hardware good bye.
  • Tomatotech - Tuesday, November 17, 2020 - link

    To be honest, if you have a fast internet connection, the need for a beefy GPU is going away. I've just joined Geforce Now for £4 per month, and very impressive it is. I'm working through my Steam backlog on a laptop with no GPU enjoying the 2080 level graphics via Geforce Now.

    Is it perfect? No. But the economics are good for the providers, witness the rash of online game streaming services coming out.
  • BushLin - Wednesday, November 18, 2020 - link

    Enjoy your laggy games!
  • Eric S - Tuesday, November 17, 2020 - link

    Never. However, Big Sur can run VMs, and at near-bare-metal speeds.
  • Eric S - Tuesday, November 17, 2020 - link

    Including near-bare-metal graphics, if that wasn't obvious, with Mac, Linux, and Windows (assuming licensed) clients.
  • Speedfriend - Tuesday, November 17, 2020 - link

    So on SPEC 2017, the M1 beats the 15W 4800U by 10% in int and 35% in fp. Am I right that the M1 is estimated to be using around 24W here? Isn't Cezanne meant to have a 20% IPC improvement over Renoir? And what would a move to 5nm bring in terms of higher clocks at 15W? It doesn't seem like AMD is that far behind Apple.
  • BlackHat - Tuesday, November 17, 2020 - link

    Indeed, I would like to see what Cezanne can do, but the tech press goes too far saying that this is the end of x86.
  • vais - Wednesday, November 18, 2020 - link

    SPEC is not the real measurement anyway.
    If we look at multithreaded Cinebench, AMD is already winning:

    https://images.anandtech.com/graphs/graph16252/119...
  • defferoo - Wednesday, November 18, 2020 - link

    Why exactly do you think Cinebench is a better benchmark than SPEC? Cinebench does one thing; SPEC actually tests a bunch of different workloads.
  • Spunjji - Thursday, November 19, 2020 - link

    This seems to be the other side of the fanboy coin from "Cinebench is useless".
  • mekpro - Tuesday, November 17, 2020 - link

    Where is the gold award?
  • Kuhar - Wednesday, November 18, 2020 - link

    It goes to AMD for having a better CPU. Apple gets a gold-rookie award.
  • nevcairiel - Tuesday, November 17, 2020 - link

    The performance of these chips is entirely irrelevant until such a day that I can buy them on the open market and build a PC in the configuration I want, which runs exactly the software I need (which does not include anything from Apple).

    Thus, someone wake me when this becomes relevant.
  • DeathArrow - Tuesday, November 17, 2020 - link

    Exactly my thoughts.
  • DPUser - Tuesday, November 17, 2020 - link

    You left out the words "to me".

    Relevant is relative. In case you hadn't noticed, there are a few folks out here who prefer to run macOS and who find the products Apple offers appealing.
  • nevcairiel - Wednesday, November 18, 2020 - link

    Even to someone that chooses to run macOS, being able to freely build a system to your specifications would be an advantage.
  • The Hardcard - Wednesday, November 18, 2020 - link

    To each their own. I agree with you that free systems are better. I wish that Apple was open in both hardware and software. But, I can achieve so many more of my goals and desires with a faster closed system than a slower open system.

    And Apple knows this as well. That's why they spent so much money developing these processors: to give a reason to buy that counters the disadvantage of having a closed system. And they got me. All of my foreseeable tech purchases are going to have M chips and A chips, because they are faster and make what I want happen sooner.
  • grant3 - Friday, November 20, 2020 - link

    ... speaking of 'entirely irrelevant' ...

    You opened up an Apple hardware review article just to make an announcement about how you don't use any software which runs on Apple computers?
  • WaltC - Tuesday, November 17, 2020 - link

    It's too bad for Apple, though, that hardware is merely one side of the compute equation. Software is the other side. And here Apple goes with yet another CPU transition with partially functional emulator software. As Intel is no longer the performance leader in x86, Apple has to beat AMD, and that isn't going to happen, imo. The great thing about "x86" (although that's a misnomer, as current x86 CPUs bear no resemblance to the 8088/286/386/486 CPUs of yesterday) is that AMD can beat everything out there and yet still maintain backwards compatibility with software written 30 years ago--which is still useful today--with only minor tweaks required for SoA Win10 x64 compatibility. ARM is a great choice for low-power device environments--but that's always been the case for ARM, of course. *Nothing* new there! But if raw performance is required and software and hardware compatibility is required--nothing comes close to AMD "x86-64." It's always baffled me to hear people state that raw performance and 3rd-party software and hardware compatibility simply don't matter in the scheme of things. Those things matter so much, in fact, that it is impossible to overstate their importance. Indeed, the entire consumer compute industry today is built on them! Apple CPUs: too little, far too late, as usual (for Apple.)
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Very well said. The whole of x86 is built on how it works on the software side of things: a simple program from 1993 (DOOM) will work on the latest x86 HW without anything extra, and the same goes for Linux. On the ARM side, just look at the phones: Android phones are EOL after 2 years. Okay, custom ROMs keep them alive, but at what expense? Loss of OEM functionality (LG, Sony, Samsung) like camera processing, video, and OS optimization. You need CAF and GPL v2 compliance, which Huawei and Mediatek garbage fail at. It's a disaster on the ARM side when the software stack is so broken.

    Apple land is the same. Their new OS abandoned the old Xeon-based Mac HW - for what reason? No idea. Whereas with Linux and Windows one can install both on any Celeron or pre-i7 machine too, and the latest games which need SSE also work with a patch to the game exe. The power of computing comes when everyone can run whatever software they want, which is why Microsoft succeeded in the PC market: they put computers in everyone's hands, while IBM mainframes were big machines restricted to a giant room and accessed by only a select group. Gated software is a trait of Apple corp. Their HW is similar; any sort of intervention will be blocked at all costs. Look at the new fancy encryption technology of the T2 chip on Macs: one cannot even repair the damned machine unless it goes through proprietary Apple software, licensed over the Internet at an Apple store, or certified-only repairs at high cost. It doesn't end there: they even managed to get Intersil to make chips supplied only to them and not available to the repair market - the ISL9240 is the one I'm talking about. They abandoned 32-bit x86 - Mojave was the last - imagine how many applications lose the ability to run on the latest OS.

    So great, yet so much lockdown.
  • Eric S - Tuesday, November 17, 2020 - link

    Going to the extent of patching out new instructions in noncompliant executables is *not* compatibility. You can do the same on Mac and many people do to extend the life of really old machines, but you can’t expect a stable system on any OS once it hits that point.
  • Spunjji - Thursday, November 19, 2020 - link

    Troll agrees with troll, news at 10.
  • ws3 - Tuesday, November 17, 2020 - link

    Within a year or two we shall see if you are right about performance vs. AMD, when Apple releases an ARM architecture Mac Pro. I suspect you will be wrong, but we’ll see.
  • BlackHat - Tuesday, November 17, 2020 - link

    The thing is, with AMD going to Zen 4 on 5nm and another 20% IPC improvement, it isn't like they are sleeping.
  • ws3 - Tuesday, November 17, 2020 - link

    I agree. I also agree with the proposition that this in no way means that x86 is dead. But I also see Apple's low-end fanless laptop chip putting up amazing single-thread numbers and, given the abundant evidence that Apple's chip design teams are competent, expect that Apple will be quite able to improve upon this chip's single-thread performance as well as expand to significantly more performance cores.
  • DPUser - Tuesday, November 17, 2020 - link

    I want the X-Mac!
  • marrakech - Wednesday, November 18, 2020 - link

    A mobile chip in a 15-inch is a lot faster: https://i.imgur.com/ISL1mbf.png - 14 vs 26, 22 vs 42, 56 vs 107.
    In the BRAW 3:1 CPU test the lower scores are all the M1... so why isn't it outclassing a one-year-old chip?

    How much of it is the fast SSD? I bet you 50% of the speed people will see is only from the SSD in the new Macs.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    Given that they're getting this sort of performance from an "entry level" chip, I think you'll be eating crow. An entry level machine capable of outperforming an i7 or i9 in single-core performance while being able to scrub 8k video in DaVinci Resolve without dropping any frames at all is insanely impressive.

    I can't wait to see what their pro chips alongside 32GB+ of RAM will be capable of delivering. Your assumptions, as usual, will be shown to be baseless, and then you'll go back to shifting goalposts again.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    What are you talking about, outperforming the i7 and i9, LOL? Did you even see the damn bench? It is losing to a Zen 2 based APU. It's almost comedy when you talk out of your rear about beating a desktop chip; don't mention the useless Geekbench trash, please. Why didn't the AT bench include a 5950X or 10900K in CBR23 SMT? Because it would kill the M1's insane hype in one shot, as opposed to the stupid Geekbench score and mega-synthetic workloads of SPEC.

    32GB+? 8GB is the peak for this chip; for that high amount of DRAM Apple has to go HBM2e route, and that is not at all cheap. Also the PRO chips do not exist - come back when they do - and why Intel i9 is still being listed at Apple website when the so called M1 is able to outperform the i7, what a joke really. Apple transition plan is for 2 years, until then Intel and ARM will coexist, and AMD and Intel are not going to sit idle. Both are on track: AMD with Zen 3 based Milan is going to crush Intel's leftover Xeons, and Intel is preparing the Ice Lake server chips; they have to deliver or AMD will eat them up - they ate DIY, mobile is already coming from AMD, and server has already begun. Meanwhile Apple will eke out a few more chips with "world's fastest core" and "98% faster than all PC laptops" BS claims.
  • defferoo - Wednesday, November 18, 2020 - link

    try harder brah
  • Spunjji - Thursday, November 19, 2020 - link

    "8GB is the peak for this chip"
    Somebody didn't read the article (it's 16GB)

    "for that high amount of DRAM Apple has to go HBM2e route"
    Says who?

    "and why Intel i9 is still being listed at Apple website"
    Because the larger chip isn't ready yet, and those are completely different products from the ones with M1.

    You seem to think that M1 is the only chip Apple will have for the next 2 years. That's about as daft as assuming that people writing positive comments don't know that AMD have an entire roadmap ahead of them.
  • Silver5urfer - Saturday, November 21, 2020 - link

    Very good way to pick strawman lol, and thanks for repeating what I said...

    "and why Intel i9 is still being listed at Apple website when the so called M1 is able to outperform the i7, what a joke really. Apple transition plan is for 2 years, until then Intel and ARM will coexist"

    Says who, lol? You - and people here - are dreaming of chips in the Pro at up to 100W, lol, with some magical (your word only) "bolt"-on cores. You think the IHS size would just keep on decreasing while the transistor density increases rapidly, on top of Apple's magic that keeps on giving? They couldn't cheat physics for their life until this omega chip. They used trash components in their Intel Macs, which already perform like garbage because of trash cooling systems, and comparing that garbage to this new shiny garbage is the worst comparison.
  • Spunjji - Monday, November 23, 2020 - link

    @Silver5urfer - One does not "pick" a straw man, it has to be built. I just replied to a specific part of what you wrote...

    The rest of your post is similarly not-good. Says Apple, is the answer - they're transitioning their entire range, that means there will be higher-performing chips on the way. Who knows what they'll look like. I'm interested to see, because unlike you, I don't have an axe to grind.
  • Eric S - Tuesday, November 17, 2020 - link

    There was a large effort to get Windows to run on AMD64. Think of all the drivers. Windows will likely run in a VM on the M1 using its existing ARM support; Parallels already wrote the drivers. No software compatibility issues. Remember that we are getting close to the smallest node possible. Apple's path represents the future, where you need close integration between software and specialty hardware like the neural engine in the M1.
  • Spunjji - Thursday, November 19, 2020 - link

    Oh gee, it's another random internet comment guy who thinks they know more about selling computing devices than the world's first trillion dollar company.

    To a lot of people raw performance and backwards compatibility do not, in fact, matter at all. Case in point: Apple's market is not built on them. To those for whom they matter, the M1 isn't an option, but that doesn't somehow make it too little too late. Absolute hogwash.
  • IntoGraphics - Tuesday, November 17, 2020 - link

    The burning question : M1 16GB vs Intel 16GB vs Intel 32GB vs Intel 64GB ?
    M1's 16GB of unified memory is equivalent to having an Intel 2018/2020 Mac Mini with xxGB of DDR4 SODIMM?
    Under which load(s) does the Mac Mini's 16GB suffer ?
  • Glaurung - Tuesday, November 17, 2020 - link

    If this matters to you, wait six months for the next group of Macs to transition - this round they only did machines you could not get with more than 16gb of RAM. Next round they'll do the higher end machines that can be configured with 32gb or more.
  • IntoGraphics - Friday, November 20, 2020 - link

    I'm only interested in the Mac Mini. With Intel or Apple silicon.
  • IntoGraphics - Friday, November 20, 2020 - link

    And the next group of transitioned Macs (with or without more than 16GB Unified Memory) will still not answer any of my questions.
  • misan - Tuesday, November 17, 2020 - link

    Why didn't you run 3DMark Wild Life? The M1 mini should be able to run the iOS version, right?
  • Andrei Frumusanu - Tuesday, November 17, 2020 - link

    We didn't have results on anything else, for now.
  • Fstein - Tuesday, November 17, 2020 - link

    Pardon a newbie. I use my 2012 Mac Mini for music playback and for photography. 8GB minis will appear soon; fewer 16GB are available. Is more memory always better? Is there a "sweet spot" re $$$?
  • andreltrn - Tuesday, November 17, 2020 - link

    8GB is more than enough. More memory is better for some uses, but it depends on the workload. If you want to keep this machine for a long time and you can afford it, do go for the 16GB.
  • BlackHat - Tuesday, November 17, 2020 - link

    Isn't the 4800U drawing 40 watts according to Notebookcheck? Isn't that far from the 31W of the M1 in multicore?
  • BlackHat - Tuesday, November 17, 2020 - link

    Anyway, the 4800U is winning in CB23 multicore with relatively the same power consumption; I won't bet on the death of x86 coming soon.
  • vaddieg - Tuesday, November 17, 2020 - link

    We don't know its AC wall consumption.
  • lebe0024 - Tuesday, November 17, 2020 - link

    Can you add the configuration for the various machines used? (CPU, motherboard, RAM & configuration, etc.)
  • sonicmerlin - Tuesday, November 17, 2020 - link

    This is so cool. ARM chips being used in a major desktop OS on laptops and desktops. So so cool.
  • twotwotwo - Tuesday, November 17, 2020 - link

    Who at AT is rocking the Asus G14? (I see the new Cinebench R23 tested on a 4900HS.) Whoever it is is my comrade in enjoying a fast laptop CPU and no Home key 🤣
  • Spunjji - Thursday, November 19, 2020 - link

    I feel like I'm the only person around who never ever uses the Home key 😅
  • jvl - Tuesday, November 17, 2020 - link

    > It’s to be noted that currently we do not have a functional Fortran compiler on Apple Silicon macOS systems, (...)

    Hmm, any bets on numpy then?
    .. I guess it's time to switch to Julia for good ..
    .. once *they* compile it to ARM-macOS :-/
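
    In the meantime, a quick sketch of how one might check whether a given Python interpreter (and its numpy wheel, if any) is running natively or as x86_64 under Rosetta - assuming only the standard library plus an optionally installed numpy:

        # Sketch: is this interpreter native arm64, or x86_64 under Rosetta?
        import platform

        print(platform.machine())  # 'arm64' when native, 'x86_64' under Rosetta

        try:
            import numpy as np
            print(np.__version__)
            np.show_config()  # reports which BLAS/LAPACK numpy was built against
        except ImportError:
            print("numpy is not installed for this interpreter")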
  • andreltrn - Tuesday, November 17, 2020 - link

    That shouldn't be too long.
  • hellocopter - Tuesday, November 17, 2020 - link

    Few to no compromises, other than the fact that you can't even run Linux on it...
  • boeush - Tuesday, November 17, 2020 - link

    Once these chips become widespread, there will quickly emerge Linux patches and distros to support them. Chicken-vs.-egg stuff.
  • Nicon0s - Sunday, November 29, 2020 - link

    What exactly makes you think they will become widespread?
  • Ppietra - Monday, November 30, 2020 - link

    20 million Apple computers per year seems a good start to become widespread.
  • YesYesNo - Tuesday, November 17, 2020 - link

    Didn't they show linux running in parallels during the announcement?
  • UNCjigga - Tuesday, November 17, 2020 - link

    With the number of IT departments transitioning to web-based/cloud-based apps (G Suite etc.) and concerns regarding IT security for remote workers, I could see the new MBA/MBP 13 with M1 being a popular corporate-issued laptop.

    Well, maybe once Apple restores full VPN functionality in Big Sur...
  • Alistair - Tuesday, November 17, 2020 - link

    This is the ideal "for my dad" computer. No more tech support required.
  • bigvlada - Tuesday, November 17, 2020 - link

    Apple lost the corporate world in the mid-eighties, when they tried to prevent users from upgrading RAM themselves. You needed a bayonet-sized screwdriver and a soldering iron in order to expand memory. And of course, you lost the warranty. They tried something similar in the late eighties, with a FUD campaign against much cheaper VGA monitors.
  • Zerrohero - Tuesday, November 17, 2020 - link

    Macs are everywhere in the corporate world, especially software companies.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    You can't swing a dead cat around Google, Facebook, or even Microsoft without hitting a Mac. What are you on about?
  • grant3 - Friday, November 20, 2020 - link

    it's actually because PC clones were prolific, cheaper, and had a lot more software available.

    The desire to crack open a computer case and install more ram in your brand new purchase (instead of just buying a higher-ram machine in the first place) wasn't even in the top-10 considerations for most corporate PC purchases of any decade.
  • c933103 - Tuesday, November 17, 2020 - link

    Would it be possible to install an alternative system like a Linux distro or Chromium OS for ARM onto the system and try to run some tests in a non-macOS environment?
  • 8steve8 - Tuesday, November 17, 2020 - link

    While Intel's current/best 28W 1165G7 is on a few graphs, I wish it was on more of them. Comparing a new chip to old chips isn't that interesting to consumers, IMO.
  • Alistair - Tuesday, November 17, 2020 - link

    Can't buy one anyway; every Tiger Lake laptop at Best Buy where I live is 18W maximum. And the DDR4-2666 they come with puts the Tiger Lake GPU at less than half of what Intel led us to believe. It gets stomped by the Mini.
  • Spunjji - Thursday, November 19, 2020 - link

    To be fair, most Apple buyers will be comparing it to other Apple devices.
  • 8steve8 - Saturday, November 21, 2020 - link

    It's still early; there will be more Tiger Lake laptops coming soon. But here's one 28W one: https://www.costco.com/hp-pavilion-15.6%22-touchsc...
  • Spunjji - Monday, November 23, 2020 - link

    Based on how other Tiger Lake laptops are performing, I'd advise checking reviews before buying. So far only a few of them live up to the promise of the demo unit, and HP have a long history of cheaping out on cooling - especially in the low-end devices.
  • 5080 - Tuesday, November 17, 2020 - link

    When a first-generation ARM desktop/notebook CPU is on par with an eighth-generation x86 CPU, it clearly shows that the x86 instruction set architecture is close to reaching its limits. Looking forward to other architectures like RISC-V and competing ARM SoCs, and what they'll be capable of in the near future.
  • bigvlada - Tuesday, November 17, 2020 - link

    When a first-generation PowerPC desktop CPU is on par with a fifth-generation x86 CPU, it clearly shows that the x86 instruction set architecture is close to reaching its limits.
  • dgb448 - Tuesday, November 17, 2020 - link

    ROFL. Yeah, I remember those days... PowerPC showed us that a better instruction set ≠ long-term dominance. You also need focus and competence, which IBM never had with the Power series. My suspicion is that Apple will have both with the Mx series.
  • BushLin - Wednesday, November 18, 2020 - link

    https://en.m.wikipedia.org/wiki/POWER9

    Power 9 pisses in your face.
  • Kamen Rider Blade - Tuesday, November 17, 2020 - link

    What's going to stop Apple from doing well is the closed nature of the Apple/Mac ecosystem, not the CPU performance. The CPU performance is impressive, but until it has the openness of x86 and Windows, along with its backwards compatibility, it's going to suffer from being a pretty golden cage.
  • Eric S - Tuesday, November 17, 2020 - link

    Backwards compatibility is about the same. Macs average 10 years.
  • Eric S - Tuesday, November 17, 2020 - link

    I’d also argue the closed nature of the hardware will be a huge advantage when we hit the smallest possible process node.
  • grant3 - Friday, November 20, 2020 - link

    Apple has had a closed ecosystem for basically forever. How's that stopped them from doing well over the past 15 years?
  • garygech - Tuesday, November 17, 2020 - link

    I already purchased the MacMini. It was discussed for years they would deliver on this, and for the non-gamer, it is more than enough power. Apple will deliver a beautiful system that will last for years. At $699 for 8GB/256GB it is a bargain. I checked my MacMini 2010, opened 12 programs and hit 6 GB of RAM used. I hardly use any SSD space these days. I use OneDrive as a backup, and it works great. I decided on OneDrive after I crashed an archival hard drive that fell off a table. My experience with the MacMini has been A-plus. I have recorded music and done video, but to be honest, what I like is the Operating System and simplicity. Does it have the power of a New PC with a New Gaming card, no, but then you can just buy a new X-Box Series X if you want to game, and that follows Apple's lead, innovate, integrate, update, repeat.
  • vais - Wednesday, November 18, 2020 - link

    Your use case is exactly what M1 is targeted to. The Macbook Air is also a stunning little thing - cool, long battery, enough performance for most stuff.

    But when an article tries to claim it is revolutionary, destroying all of the competition and somehow spells the doom of x86, it's just laughable.
  • grant3 - Friday, November 20, 2020 - link

    Couldn't find the "x86 is doomed" portion of this article. Are you sure you commented on the right review?
  • Spunjji - Monday, November 23, 2020 - link

    For the first time ever, Intel and AMD fanboys are united in attacking one common (fictional) enemy: the Anandtech writer announcing that x86 is doomed.
  • marrakech - Thursday, November 19, 2020 - link

    Thanks for telling us that 8GB of RAM and a 256GB SSD can keep up with an office workload.
  • Spunjji - Thursday, November 19, 2020 - link

    That describes my Windows work notebook, so yeah, it definitely is!
  • Kamen Rider Blade - Tuesday, November 17, 2020 - link

    The backwards compatible Software Library of Linux/Windows is the real power behind x86.

    Apple isn't even backwards compatible with its own software spanning back several major OS versions. Apple abandoned old PowerPC / Motorola-based Mac apps when they transitioned.

    That's the real power behind any great architecture, the software library.

    Console war history has proven this: it's not the most powerful console that wins, but the one with the library behind it.

    What x86 has is that giant library of software, and that's only going to grow.
  • apoctwist - Tuesday, November 17, 2020 - link

    But it's also a huge crutch, as MS has tried several times to move the technology forward but still has to cater to legacy. Apple can do things like move to a completely new CPU arch because they are not tied to legacy.

    At the end of the day x86 doesn't matter. The apps do, and if developers are using Apple's core APIs then cross-compiling is almost trivial.
  • Silver5urfer - Tuesday, November 17, 2020 - link

    I agree with this point; this is the biggest thing. Just look at the emulators on Android, Windows, and even Linux: every single emulator works, and on the Windows platform applications can utilize every piece of HW in the machine without any BS restrictions from APIs or other crap.

    Software is what makes HW shine. Android abandoning open storage for an iOS-style sandbox was the biggest blow. Thankfully, though greedy MS is going that way, their core business is enterprise, so having such restrictions will backfire big. They are trying, though, to make the application ecosystem a pain with that UWP trash and more and more of a mobile-based UX, with an as-a-service garbage OS update system. There is still time; Windows can only be ruined at a limited rate per year. Also, Linux is shining bright thanks to Proton and Wine with Vulkan - man, even Vulkan doesn't work on Mac. What a mess of software.

    If we factor in DIY, this whole proprietary Apple utopia just crumbles. Look at the SSD: soldered, and the asking price is $200 for 256GB, and the same for RAM - wth is that pricing? For that price I could get a C15 3600MHz B-die RAM kit, or even a C17 4000MHz B-die DDR4 kit. The custom black-box encryption is even worse, and they recently got caught in a privacy issue, but had a bunch of excuses prepared too. Security through obscurity is never real.

    9to5mac.com/2020/11/15/apple-explains-addresses-mac-privacy-concerns/
  • Spunjji - Thursday, November 19, 2020 - link

    Oh damn, are you actually Quantumz0d? Or are you another person who just happens to have the same bad, angry takes and bizarre obsession with Android's storage API? 🤔
  • Eric S - Tuesday, November 17, 2020 - link

    It is the system libraries more than the arch. Apple has some great libraries and Unix compatibility. An architecture change is usually just a recompile. Apple has already helped the major affected open-source projects that use assembly convert their code.
  • Spunjji - Thursday, November 19, 2020 - link

    Then why do so many people buy Apple devices?

    There's a place in the world for both strategies.
  • grant3 - Friday, November 20, 2020 - link

    How many applications do you rely on that haven't been updated in the past 5+ years?
    Would you choose which brand of $2000 laptop to buy based primarily on a need to keep using those obsolete apps, regardless of which machine is better for modern software?
    Why do you think the ~70+% performance that Rosetta seamlessly provides wouldn't be good enough to run these ancient apps?
  • tkSteveFOX - Tuesday, November 17, 2020 - link

    The only missing benches are sustained performance and thermals + overclocking.
    While the last option may not be available, you could have run the 3DMark stress tests and compared to Intel and AMD in terms of scores and temperature.
  • Alistair - Tuesday, November 17, 2020 - link

    You don't need sustained benchmarks; sustained is the same. Apple doesn't use boost like Intel, and there is no thermal or power limit in the Mini.
  • BlackHat - Tuesday, November 17, 2020 - link

    Why not? We need to know if our PC is cooking inside its shell even when performance isn't dropping.
  • Alistair - Tuesday, November 17, 2020 - link

    No you don't need to know, it runs within normal parameters and doesn't throttle.
  • vais - Wednesday, November 18, 2020 - link

    And how do you know that? Corporate daddy told you something and you just trust it?
  • Kuhar - Wednesday, November 18, 2020 - link

    Are you serious, or is your short-term memory damaged? A few of Apple's machines had problems with overheating, throttling and lowered performance. All thanks to bad designs. But they are beautiful to look at.
  • Alistair - Thursday, November 19, 2020 - link

    It is the Mini; it has never throttled like that. You are thinking of the base MacBook Air.
  • Kuhar - Thursday, November 19, 2020 - link

    Alistair, sorry, my bad, I apologize for my rude comment. I really have 0 experience with minis.
  • Spunjji - Thursday, November 19, 2020 - link

    At 24W, in the Mac Mini chassis? Really?

    Reeaaallllyyy?
  • Eric S - Tuesday, November 17, 2020 - link

    They do use boost, it just doesn’t need to throttle down so much.
  • BushLin - Wednesday, November 18, 2020 - link

    Source for this information?
  • marrakech - Thursday, November 19, 2020 - link

    A mobile chip in a 15-inch is a lot faster: https://i.imgur.com/ISL1mbf.png - 14 vs 26, 22 vs 42, 56 vs 107.
    In the BRAW 3:1 CPU test, the lower scores are all the M1.
  • ET - Tuesday, November 17, 2020 - link

    Thanks for the review. Would be nice if you could fill out some of these benchmarks as they seem to be all over the place in terms of the hardware being benchmarked.

    Anyway, quite impressive, and that GPU is definitely powerful for an integrated one, and, more than this, one running on standard RAM (even if fast LPDDR4x).
  • AntonErtl - Tuesday, November 17, 2020 - link

    Thanks for this interesting mini-review. A very impressive CPU, GPU and emulator by Apple. Too bad that they will be locked up behind Apple's garden walls.

    And from what I hear, these Apple boxes are also very expensive: EUR 1200 for a non-expandable Mac Mini with 16GB RAM and 512GB SSD; for comparison, we recently built a box with a Ryzen 3700x and 500GB SSD for <EUR 600 (16GB RAM would cost EUR 50, but we put in 32GB ECC RAM that we had lying around; something that you cannot do with the Apple machine despite its high cost).
  • Eric S - Tuesday, November 17, 2020 - link

    The Mac is open except for hardware. The walled garden is iOS.
  • AntonErtl - Wednesday, November 18, 2020 - link

    My information is that you can only run this Mac Mini under MacOS; other OSs run only in virtualized environments. It's a walled garden.
  • Eric S - Tuesday, November 17, 2020 - link

    May not be a spec for spec comparison. The SSD and RAM are both very high end in the mini. It also has the neural engine and other functionality. However, you are not buying these for the hardware. You buy them for the OS.
  • AntonErtl - Wednesday, November 18, 2020 - link

    16GB is not a high spec for RAM; non-expandable is not a high spec; and non-ECC is not a high spec, either. I could not care less for the neural engine, but sure, if it's worth EUR 600 for you, just go for it.

    I certainly don't buy Apple hardware for the OS. In 2004 the iBook G4 was the cheapest 12" laptop, so I bought it. The first thing I did after turning it on was to insert a Linux CD and overwrite the MacOS installer (no, I did not make a backup of the installer first).
  • Spunjji - Thursday, November 19, 2020 - link

    Then you're in that tiniest of tiny minorities who don't buy Apple products for their OS. It's not representative of the general public, though.
  • gotnate - Tuesday, November 17, 2020 - link

    Substitute Qualcomm and Samsung for AMD, and this sounds just like the last decade of disbelieving reactions to Apple Silicon. "Just you wait! The next ~~Qualcomm~~ AMD chip will crush the current Apple chip!"

    The only difference? Apple started out a lap ahead of Qualcomm and Samsung, while they're starting out neck and neck against AMD. And these will be the _slowest_ AS chips to ever ship in a Mac.
  • WaltC - Tuesday, November 17, 2020 - link

    Interesting link here--if the source is trustworthy--multicore CB 23 results where the M1 is blown away by AMD and Intel:

    https://wccftech.com/intel-and-amd-x86-mobility-cp...
  • ThreeDee912 - Tuesday, November 17, 2020 - link

    They’re comparing 8 core, 35W TDP chips to a 4+4 lower power chip with one benchmark. Yes, they’re faster at the one thing. Anandtech did the same Cinebench test on page 2 which shows the same thing. The only thing the wcct article has to add is “[Apple] can use this node advantage and combine it with its ARM architecture to significantly increase its profit margins” which is a dumb take.
  • BlackHat - Tuesday, November 17, 2020 - link

    Well, AnandTech is only providing Geekbench and Speedometer, which isn't any better.
  • Spunjji - Thursday, November 19, 2020 - link

    Guess who didn't RTFA before posting. Iiiiit's BlackHat!
  • hanskey - Tuesday, November 17, 2020 - link

    The 4600U and 4800U from the wccftech article, which also outperform the M1 in multi-core performance, have a cTDP of 10-25 watts, not 35W.
  • bcortens - Thursday, November 19, 2020 - link

    Unfortunately, we don't actually know how much power the AMD chips are using. AnandTech has tested them in other, longer tests, and when measured during those tests the AMD chips (rated at 15 watts) can consume up to 25 watts for prolonged periods and peak at up to 40 watts.

    Without better data, taking the AMD-provided (or Apple-provided) power ratings at face value is a mistake.

    https://www.anandtech.com/show/16084/intel-tiger-l...
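    On the Mac side at least, package power is easy to sanity-check yourself, since macOS ships a sampling tool. A hedged sketch (sampler name and flags as I recall them from the Big Sur man page, so treat them as assumptions):

        # Sample CPU/package power 5 times at 1-second intervals (needs root)
        sudo powermetrics --samplers cpu_power -i 1000 -n 5

    There is no equally simple first-party one-liner on the AMD/Windows side, which is rather the point.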
  • Spunjji - Thursday, November 19, 2020 - link

    They usually consume around 40W when boosting, which most designs out there can sustain for the majority of a single run of Cinebench.

    We have no idea what cTDP the chips are running at, though. Or the cooling solution. Or the RAM speeds. Just... nothing you might need to make a fair comparison.
  • vais - Wednesday, November 18, 2020 - link

    And how is “[Apple] can use this node advantage and combine it with its ARM architecture to significantly increase its profit margins” any dumber than implying x86 is reaching its "limits" and is "doomed"?
    Actually, the profit margins are likely to increase, and doing something for bigger margins is a pure Apple move - they are a very successful company after all.
  • Spunjji - Thursday, November 19, 2020 - link

    Ugh. So many things wrong with that article.

    The whole thing is written to spin M1 as negatively as possible - for instance, they say M1 is "thrashed" by Intel because Intel have a ~2% single-core performance advantage, even though they only just got done minimising Intel's 17% multi-core deficit (at twice the power draw under load!). They even do a bit of singing about how Intel's upcoming 8-core Tiger will top the chart, whilst failing to mention that it will need more than double the power to do so - 45-65W TDP, ~100W boost.

    I think my favourite was how they describe Tiger Lake as having "half the cores" of M1. Apparently they don't understand how Big/Little works - it's roughly analogous to HyperThreading from a performance perspective, so these processors are eminently comparable.

    They make no mention of the systems used for the testing, so we don't even have any idea what boost clocks and TDP the processors were actually running at. WCCFT suck.
  • vais - Tuesday, November 17, 2020 - link

    Seems the technological revolution some people were expecting is postponed:

    https://images.anandtech.com/graphs/graph16252/119...

    The multithreaded Cinebench shows the Ryzen 4800U has a significant advantage over the M1, at around the same TDP as well - so much for unprecedented efficiency. And this is comparing the M1 to a Zen 2 CPU; when Apple Silicon comes to the desktop it will have to compete with Zen 3 as well.

    Yet the article says: "In the multi-threaded R23 runs, the M1 absolutely dominates past Macs with similar low-power CPUs. Just as of note, we're trying to gather more data on other systems as we have access to them, and expand the graph in further updates of the article past publishing."
  • hanskey - Tuesday, November 17, 2020 - link

    The performance measured would have to exist for Apple to be able to dump Intel, so it makes sense - and kudos to the tax-dodgers on the achievement afforded to them by poaching everyone else's engineers with their tax-free $billions.

    That said, this device is totally irrelevant for me, for all the normal reasons the vast majority of the computing market also does not choose Apple products: they are a terrible value and don't run my software.

    What's on offer here from Apple is simply not good enough to con me into giving up choice, third-party hardware and software, the right to repair, a solid upgrade path, and competitive pricing for all the above. If I'm not mistaken, they are still up-charging by multiple $100s for 512GB of storage, which I can get for around $30 in a MicroSD card or HDD; not to mention I added a 1TB NVMe SSD to my non-Apple laptop for around ~$130!
  • marrakech - Wednesday, November 18, 2020 - link

    Apple goes for the people who work in browser and office software -
    a part of the market that can run at 2.1 GHz on 4 cores, with 10-second bursts at 3 GHz.
    For the professionals who do 3D work or need 32 GB of RAM as a minimum, this is like giving you a 6-year-old smartphone to work with.
  • Duncan Macdonald - Tuesday, November 17, 2020 - link

    Why are the single-threaded benchmarks using the Zen 3 chips for comparison, while the multithreaded benchmarks use the Zen 2 chips? This has the unfortunate effect of making the multithreaded performance seem much better than it is.
  • BlackHat - Tuesday, November 17, 2020 - link

    There isn't a Zen 3 mobile part yet, and according to other sources the multi-core score is equal to an R5 3600X.
  • guycoder - Tuesday, November 17, 2020 - link

    The comparison with Zen 3 is only for the purpose of demonstrating the single-core performance that Apple has achieved against the current architectural leader, the recent AMD Zen 3. The multithreaded comparison is like-for-like and shows what you can expect against a comparable laptop product available today. There are no Zen 3-based laptop processors in the market today, only Intel Tiger Lake and AMD Renoir. Comparing the M1 against desktop-class processors is not relevant, as you might as well compare it against a 64-core Rome.
  • Makste - Wednesday, November 18, 2020 - link

    What he says.

    Except for the epyc rome part
  • dan82 - Tuesday, November 17, 2020 - link

    Awesome review, thank you so much for putting this together!

    I think the Rosetta2 vs native scores % in the end are particularly interesting. This made me wonder: How does this compare to Microsoft's x86 emulator in Windows? The Surface Pro X doesn't perform particularly well and I'd be curious to know how much of that is the weaker chip (it seems that the m1 is about 2.2x as fast as the SQ1 on the cpu side) vs how much is due to emulation.

    Maybe we could have a deep-dive emulator Rosetta2 vs Microsoft's emulator comparison? :-)
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    The level of salt and cope in these comments is something else.

    People should be excited for this, not mad. This is a staggering level of performance compared to what Intel and AMD are delivering. Even if you have no intention of ever buying a Mac you should be glad that this will spur competition from other companies, whether it is from AMD or Nvidia perhaps (maybe Jensen will finally commit to entering the CPU game after the ARM acquisition).

    I've lost all hope that Intel does anything worth a damn again, but we'll see, more unlikely comebacks have happened.
  • Makste - Wednesday, November 18, 2020 - link

    Agreed.

    I'm expecting Apple and its M1 derivatives to be looking at AMD's competition and, for the most exciting part, AMD to be comparing its performance to Apple silicon 😁. It is such a pleasant surprise for AMD and its customers, because no sooner had AMD defeated Intel than Apple showed up with the M1 at equal or even better performance (coz let's face it, Zen 3-based laptops will not be at the same performance level as their desktop counterparts due to power limits). But this means, overall, that AMD cannot relax, and if it (AMD's Ryzen, newer or
    older architecture) can keep up with this new architecture, that'll take it well ahead. By the power vested in me as a customer, I give this new competition between AMD and Apple my blessing 😌
  • BushLin - Wednesday, November 18, 2020 - link

    The only thing that's mad is fanboys saying the Apple chip is far ahead of anything when all it's done is show impressive single thread performance on a 5nm process Apple paid to have to itself. If those comments didn't exist and need correcting it'd just be enthusiasts congratulating Apple on the wide design and GPU efficiency.
  • Spunjji - Thursday, November 19, 2020 - link

    "all it's done is show impressive single thread performance on a 5nm process"

    That's... impressive, you know? It's cleanly ahead of Intel's best and thrashes both AMD and Intel on the iGPU side, so that's kind of more than I expected and certainly more than enough.
  • BushLin - Thursday, November 19, 2020 - link

    Thanks for repeating my point.
  • Spunjji - Monday, November 23, 2020 - link

    I was disagreeing with your tone, not your point - although your larger point that "fanboys" need "correcting" is hogwash. The majority of comments are from angry Intel/AMD fanboys declaring how rubbish this chip is.
  • Calista - Tuesday, November 17, 2020 - link

    Not sure if I'm facing this with a yawn or a cheer. On the one hand, what Apple have accomplished is very impressive, running head to head with the very best x86 chips in the market. On the other hand, that's all they do: running head to head. It's a bit like moving from a 4C/8T i7 to an 8C/16T Ryzen. Sure, the CPU is faster and consumes less power, but in the end everything is the same as before. It can also be argued that where the M1 shines, in ultraportables, the performance we have today is more than sufficient for an overwhelming majority of users.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    If we're looking at ultraportable performance that is comparable to or faster than an i7 or i9 that draws significantly more power, what does that mean when they start going with higher power draw chips in their higher end models?

    I have a Threadripper workstation and I am incredibly excited to see what this means for their higher end desktops and laptops
  • BlackHat - Tuesday, November 17, 2020 - link

    The 4800U gets close with a similar power envelope; Zen 3 units should be closer still.
  • KoolAidMan1 - Tuesday, November 17, 2020 - link

    Curious to see application comparisons once all of these are out. Scrubbing 8k files in Resolve without dropping any frames is kind of incredible even for a normal desktop chip, let alone something like the entry-level M1 that draws hardly any power at all.
  • Spunjji - Thursday, November 19, 2020 - link

    This is going into a form-factor (MacBook air) that was previously stuck with Intel's 10W misery. It's running far more than head-to-head in that respect.
  • The Hardcard - Tuesday, November 17, 2020 - link

    For everybody arguing about what this means for the computer market, remember this is the low-end, first-gen Apple Silicon. A 12-core (8+4) version will be here early next year, and a 16-core version is also possible for 2021. It's not hard to see where they will stand in multicore: the 12-core will be the fastest mobile CPU, and the 16-core would be the fastest desktop.

    The question is how many cores and memory controllers they will want to put in a Mac Pro replacement. They are clearly capable of having the dominant processor in whatever market they choose to compete in.
  • vais - Wednesday, November 18, 2020 - link

    Jumping to astonishing conclusions, just like this article.

    How exactly are they "clearly capable of having the dominant processor in whatever market they choose to compete in"?

    Did you miss this? :
    https://images.anandtech.com/graphs/graph16252/119...

    The AMD 4800U (15W) slams the M1 into the ground - how is the M1 as amazing in performance as you claim?
  • bcortens - Thursday, November 19, 2020 - link

    For one, we don't know the power consumption of the 4800U (15W) during the test... it may say 15W on the box, but if we look here: https://www.anandtech.com/show/16084/intel-tiger-l... we can see that the power consumption can vary quite a bit depending on actual use.

    Until there is an actual power measurement rating for both CPUs we don't know if the 4800U is actually beating the M1 at perf/watt.
  • Spunjji - Thursday, November 19, 2020 - link

    Their comment was explicitly speculating about what the higher-core-count variants will bring, not M1.

    If you're not bright enough to extrapolate how an 8+4 core chip based on this architecture with a ~40W TDP (4800U's actual power under load) would look on that chart, then that's your problem.
  • The Hardcard - Thursday, November 19, 2020 - link

    No, I saw that; you missed my point, which is that M chips with more cores are coming. Given the huge gap in power efficiency, not only are Firestorm cores the most powerful cores available to consumers, but for any given power envelope Apple can put more of them on a die than AMD, Intel, or Qualcomm.

    The upcoming 12-core M will be faster than any mobile Intel or AMD CPU. Faster than the H's, with less power than the U's.
  • Spunjji - Monday, November 23, 2020 - link

    @The Hardcard - steady on there. If a 12-core M comes out, it will be in an H-class power envelope.
  • milli - Tuesday, November 17, 2020 - link

    Andrei, I'll give you one more data point.
    Ryzen 5 4600U MT R23: 8000pt.
    So AMD's one-year-old 7nm midrange CPU beats the M1 while consuming a similar amount of power.
    I hope you eat your Geekbench-branded shorts.
  • Nick B - Tuesday, November 17, 2020 - link

    I must have missed the part of the review where Andrei personally insulted your manhood.
  • milli - Wednesday, November 18, 2020 - link

    You did, but replace manhood with just common respect.
  • Spunjji - Thursday, November 19, 2020 - link

    If by "similar amount of power" you mean ~50% more then sure.
  • Aragole - Tuesday, November 17, 2020 - link

    Can it output 4K@120Hz?
  • Spunjji - Thursday, November 19, 2020 - link

    It can do 6K at 60Hz, so I think it probably can - though what exactly you'd do with that I'm not sure.
  • paviko - Tuesday, November 17, 2020 - link

    The most important sentence in the whole article: "Since our A14 results, we’ve been able to track down Apple’s compiler setting which increases the 456.hmmer by such a dramatic amount – Apple defaults the “-mllvm -enable-loop-distribute=true” in their newest compiler toolchain".
    We are comparing "apples" to "oranges": M1 benchmarks use Apple's compiler, while AMD/Intel use a 3rd-party one. I remember articles at AnandTech showing that when using the Intel compiler, Intel CPUs suddenly gained 20-30% in SPEC results. Intel did the same with its 56-core Xeon, showing how it beats AMD's 64-core Zen - but using its own compiler. The winner is AMD Zen 3; unfortunately there is no mobile part yet. Then I think comes Intel, which struggles with 10nm - what could it do with 5nm? Nonetheless the M1 is great, but let's not create hype that it's the best.
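    For anyone who wants to see how much that one setting can matter, here's a hedged sketch (the flag itself is quoted from the article; the source file name is just a placeholder, and -mllvm options aren't guaranteed stable across clang versions):

        # Build the same hot-loop source twice, toggling only the LLVM
        # loop-distribution pass the article says Apple now enables by default.
        clang -O3 -mllvm -enable-loop-distribute=true  -o bench_dist bench.c
        clang -O3 -mllvm -enable-loop-distribute=false -o bench_base bench.c
        time ./bench_dist; time ./bench_base   # a 456.hmmer-style gap shows up here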
  • Luminar - Tuesday, November 17, 2020 - link

    You must have missed the part of the article where they said they are purposely using an old version of SPEC, from before Intel and AMD started tampering with the code.
  • Andrei Frumusanu - Tuesday, November 17, 2020 - link

    There is no code tampering in any version, don't spread misinformation.
  • Makste - Wednesday, November 18, 2020 - link

    The difference is not too much though. Third-party software for x86-based processors is optimised for those processors (although not so optimised for AMD, which wins anyway), in the same way these compilers are optimised for Apple. And since Apple has been using Intel processors, the software should be compatible enough to run on Intel and AMD processors; yet still, AMD performs well enough on this software these days. It would be fun if, against all odds, AMD took the performance crown while running software optimised for corporations with different processor architectures, and still beat them. That would showcase the performance superiority of AMD's architecture design over other architectures.
  • Dodozoid - Tuesday, November 17, 2020 - link

    I wonder if AMD keeps the K12 project alive in some way, so they could continue with high-performance Arm cores if the industry shifts that way. Also, what does nVidia's Arm acquisition mean for that prospect?
  • Dodozoid - Tuesday, November 17, 2020 - link

    Could Andrei or Dr. Ian maybe bring this topic up in some discussion with AMD officials? I mean, with their current architecture, they could really just change the core and tape out a relatively small CCD...
  • Silver5urfer - Tuesday, November 17, 2020 - link

    Lisa Su already mentioned x86 is their commitment: they are not pursuing the same Alder Lake big.LITTLE garbage (Intel lost SMT perf, and their ring bus doesn't scale up at high power), and ARM isn't in the cards for AMD either.

    The M1 is nothing to sneeze at, but look at the benches: a Zen 2-based BGA product beats it, with more choice of HW, vs a soldered BGA mess with beta software. The Zen 3-based Cezanne will crush this M1. And Zen 4 is going to be an even bigger IPC change, with TSMC's next node on top.
  • GeoffreyA - Tuesday, November 17, 2020 - link

    I must admit, the M1 is making even Zen 3 look bad. Faintly echoes, to me at any rate, the Pentium M vs. Athlon 64.
  • vais - Wednesday, November 18, 2020 - link

    How exactly does it do that? In the speculative synthetic benchmarks or in some multithreaded real world example?
  • GeoffreyA - Wednesday, November 18, 2020 - link

    Well, first of all, let me just add that I can't stomach Apple or ARM. I hope AMD (and even Intel) thrash the folk from Cupertino with their upcoming products; but the thing is, while Zen 3 does prevail in most, or even all, tests, and leaves Apple's product dead on the floor when it comes to multi-threading, I feel M1 does tarnish its image somewhat, where, only a few weeks ago, it had complete dominion over its market, and now you've got this low-power design striking at it.

    Perhaps not a balanced statement on my part, I am not sure, and still trying to let it all sink in and see exactly what the M1 means.
  • GeoffreyA - Saturday, November 21, 2020 - link

    To find truth here, it's important to keep the 5950X out of the picture, and look closely at Renoir and the M1. As others have already pointed out, 15W Renoir is not so far behind in single-threaded performance (which is the building block of everything else). Zen 3 + 5 nm should put them on a roughly equal footing. Give or take. Also, according to the TGL review, I see that 15W Renoir draws about 23W on a multi-threaded load: similar to the M1 as far as I can tell.* And Z3 uses about the same, or even less, power than Z2.

    This suggests that Apple's and AMD's microarchitectures, according to Cinebench and some of SPEC at any rate, are roughly the same from a performance and power point of view. Great work from Apple, certainly. But hardly a slaughter. Seems these old-fashioned x86 CPUs, while shackled by the variable-length chains, are able to hold up against ARM after all. Who knows, maybe Rosetta 3 will be translating back in a decade's time ;)

    * Did see a test, on Notebookcheck, where the 4800U drew about 50W, but that's in Prime95 and likely using a higher than 15W TDP. A review of the Yoga Slim 7 on Ultrabookreview showed a range of TDPs.
  • GeoffreyA - Wednesday, November 18, 2020 - link

    As for the Pentium M and Athlon 64, I am making that statement from memory and need to double-check the actual review: I remember Anand did that test a long time ago, before the Core days, and what I remember (only read it a few years ago) was that the Pentium M was too close for comfort to the K8, even beating it quite a few times.
  • Spunjji - Thursday, November 19, 2020 - link

    You're 100% accurate with that recollection. I bought a Pentium M at the time and rolled it into a desktop system because I couldn't afford an Athlon 64. It could run passively under a Zalman Flower cooler with a pin mod that took it from 1.6Ghz to just over 2Ghz, and at that rate it was competitive with (not better than!) some of the best gaming systems out there.

    This is quite reminiscent of that - a new low power CPU showing up and bloodying the nose of the incumbent heavyweight(s). Only difference is we don't get to build this into our own systems :/
  • GeoffreyA - Friday, November 20, 2020 - link

    Great memory of your Pentium M clocked at 2 GHz. Though I never owned one, I hold this CPU in very high regard, simply because it was such an excellent piece of work (nice intelligent additions on top of the Pentium III's design) and the foundation of Core. Its surprising performance against the "infallible" Athlon 64 was a foreshadowing of things to come.
  • GeoffreyA - Friday, November 20, 2020 - link

    As for me, I got an Athlon 64 back then, but it was before the X2 came out, so prices weren't that bad. Wasn't very high-end though: the Socket 754 3000+ clocked at 2 GHz. So I missed out on Socket 939 and dual channel memory. My motherboard was a Chaintech VNF3-250. Unfortunately, it came with a no-name brand 300W PSU, which slowly fried something over time. All in all, could barely boot after just 4 years.
  • Spunjji - Monday, November 23, 2020 - link

    The AOpen MoDT board I built my Pentium M system with failed after just under a year - and that was with a very high-quality PSU. The RMA replacement failed again in a similar time-frame. I think a lot of boards from that era just weren't that great!
  • GeoffreyA - Thursday, November 26, 2020 - link

    Yes, those old boards were a mess, packed to the brim with components. It's startling to reflect how clean boards have become over the past decade.
  • trboyden - Tuesday, November 17, 2020 - link

    Still seems to be a lot of fanboyish behavior when it comes to Apple performance testing. Apple has long been known not to put the best performing Intel CPU in their machines which is why they always suffered performance-wise against performance oriented PCs. Then you have these test results where the tester claims a win for Apple when the M1 can't even beat an Intel 1165G7 processor at 10nm and 4 cores. Geekbench scores have always been suspect, so no one really trusts those. The Cinebench scores are probably the closest to reality, but the testing suffers from a lack of testing against modern Intel 8-core processors such as the i9-10885H which themselves are still only 14nm. I think the Apple ARM chip is definitely going to be competitive, but it is far too early to call it a dominant chip. With Apple turning all of their devices into glorified tablets running mobile apps, it is not exactly the best platform to benchmark the performance of a working class PC.
  • bernstein - Tuesday, November 17, 2020 - link

    One should note that performance doesn't scale with power... or rather, not linearly. On the other hand, having the fastest core that uses less power than competitors' mostly negates this, as scaling from a 4-core to a 16-core should be easily doable. This would put them at roughly comparable MC perf to a Ryzen 5950X.
  • xenol - Tuesday, November 17, 2020 - link

    I'd argue the M1's GPU is a much bigger eyebrow raiser than the CPU. 21W system power consumption (so GPU is probably floating around 12-15W) yet gets within spitting distance of the 1650. Even if we take the laptop version, it's still 35W-50W for just the graphics module. That's nuts.

    Though one has to wonder how much of the 1650's inefficiency is down to software.
  • demofly - Tuesday, November 17, 2020 - link

    I wish there were more benchmarks besides Geekbench and Cinebench. They are unreliable in comparing processors of the same ISA, let alone completely different ISAs.
  • morgenrot - Tuesday, November 17, 2020 - link

    Well, there's compiler performance, where the M1 outdoes an 8-core i7 both in speed: https://techcrunch.com/wp-content/uploads/2020/11/... and in energy efficiency: https://techcrunch.com/wp-content/uploads/2020/11/...

    I think this is great. It puts to bed the fairytale that ARM processors are only good for iPads or cell phones and couldn't possibly compete with "real" processors (i.e. x86) - of course, that's what people said in the '90s about x86, lol. It also adds extra spice to the amazing resurgence of AMD, and puts extra fire under Intel's butt to get its act together. Hopefully Intel comes back and produces the successor of Tiger Lake at below 10 nm, and when Zen 4 comes out on 5 nm on AM5 with DDR5, and Apple adds more memory channels and DDR5 to its desktop processors, this is all going to get quite exciting. Watching processor tech hasn't been this much fun in a long time! Now I gotta go and get more popcorn.
  • Farfolomew - Tuesday, November 17, 2020 - link

    I'm curious how much the man, the myth, the legend, Anand Lal Shimpi, has had a hand in developing/architecting Apple's custom silicon, up to, and including, this M1. Would be cool to hear an update from him :-). This is a major moment in CPU history.
  • Tomatotech - Tuesday, November 17, 2020 - link

    Most likely none. The Ax series was well advanced before Anand joined Apple. Also remember that Anand was a journalist here, not a computer science developer. I'm sure he's doing good work at Apple though. If I take a wild guess, he's possibly working in some sort of communications capacity, translating between the people who write reports / press releases and the wild-haired uber-nerds who only speak binary. That kind of role would use the skills he built at AnandTech: understanding technical content and making it accessible to non-technical people.
  • realbabilu - Tuesday, November 17, 2020 - link

    Great.
    Apple can control all software and hardware for optimization. The result is amazing.
    But please don't deliberately make old Macs sluggish so customers want to buy a new one, like with the iPhone.
    The GPU is crazy here; please let more Mac games be developed, unlike on Intel Macs. Also more CAD and CAE - this small M1 needs a killer application.
  • Farfolomew - Tuesday, November 17, 2020 - link

    I don't think it will happen, but imagine if you could Bootcamp these new Macbooks and install Windows? If they ran translated x86-64 at 66% performance vs that of native AArch64 (based on Rosetta2 results), the hardware alone would be some of the best PC Laptop/Ultrabook hardware we've ever seen, with massive battery improvements and low TDP.

    Just a thought
  • Makste - Wednesday, November 18, 2020 - link

    I like it
  • zodiacfml - Tuesday, November 17, 2020 - link

    Thanks AnandTech. What do you think is the power required to run the integrated RAM? Is the M1 in the Mini a 15W TDP chip? If it is, then Apple's work here is on par with Intel's and AMD's current gen.
    One thing left out, though, is Apple's more powerful part for larger products. I think they will have a 16-core CPU and a 16-core discrete GPU.
  • Ryan Smith - Tuesday, November 17, 2020 - link

    "Thanks Anandtech. What do you think is the power required to run the integrated RAM?"

    It looks to top out at around 1.6 Watts, judging from some additional power info we've been able to collect this afternoon.
  • zodiacfml - Wednesday, November 18, 2020 - link

    Nice. Thanks!
  • Soul_Master - Tuesday, November 17, 2020 - link

    For Cinebench R23, you cannot directly compare between macOS and Windows. In Dave2D's video, he already compared Intel and M1 on the same macOS platform. Scores of Intel CPUs are significantly lower than on the Windows platform.

    https://www.youtube.com/watch?v=XQ6vX6nmboU&fe...
  • realbabilu - Wednesday, November 18, 2020 - link

    I will check using my Hackintosh, running all the workloads.
  • Dug - Tuesday, November 17, 2020 - link

    Impressive launch.
    I was expecting Rosetta to cripple it, but they've done a great job.

    The fact that this is such low power and high performance, and even benchmarked against desktop cpu's, speaks volumes.
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    Woot! Finally. After waiting over 5 years for them to do this, and explaining to people the prowess of the ARM microarchitecture in Apple engineers' hands, it's finally done.
  • RedGreenBlue - Tuesday, November 17, 2020 - link

    I still expect an 8+4 CPU design for higher thermal and power envelopes. They got this far. Might as well rub it in.
  • Leeea - Tuesday, November 17, 2020 - link

    Those rosetta emulation results are very impressive.

    They really knocked this one out of the park.

    People are proclaiming this to be the death knell of x86, but it is too soon for that. In the expensive low-power laptop segment, however, they are the clear winner.

    That said, the only things keeping x86 laptops in the running are multithreading and discrete GPUs - and nothing is stopping Apple from adding their own discrete GPU.
  • ABR - Wednesday, November 18, 2020 - link

    Please add the Macbook Pro 16 to the comparisons, and drop the old 15.
  • RSAUser - Wednesday, November 18, 2020 - link

    "Naturally, in higher power-level, higher-core count systems, the M1 can’t keep up to AMD and Intel designs, but that’s something Apple likely will want to address with subsequent designs in that category over the next 2 years."

    That comment is very strange; that would be a completely different chip design. ARM excels at <20W, x64 excels at 30W+, with many parts having a ~45-60W sweet spot. They're competing in different markets.
  • scottrichardson - Wednesday, November 18, 2020 - link

    This is what I'm interested in seeing too - how Apple scales these chips up with higher clock speeds, higher TDP, and larger memory pools.
  • Spunjji - Thursday, November 19, 2020 - link

    The theory goes that because Apple now have a core design that can compete with Intel's and AMD's single-thread performance, if they can put more of them together, they'll also compete with those companies' multi-thread performance.

    It depends a lot on how well their core design scales up in terms of inter-core communication fabric and memory access, but in theory, they could compete with AMD's 105W 7nm 16-core chips just by bolting 12 more large cores onto this - with a resulting TDP in the 80W region.

    Whether they can get one of those out before AMD gets out their own 5nm chips is another matter entirely.
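    For what it's worth, the arithmetic behind that 80W figure is simple enough to run in a shell. The ~24W package number comes up elsewhere in this thread; the ~4W share for uncore plus efficiency cores is purely my assumption:

        # Back-of-envelope only -- assumptions, not measurements
        echo $(( (24 - 4) / 4 ))   # ~5W per Firestorm core at ~24W package power
        echo $(( 4 + 16 * 5 ))     # 16 big cores (4 + 12 more) -> ~84W, the "80W region"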
  • briceio - Wednesday, November 18, 2020 - link

    Oh crap, I just sold my MBP16, my Apple TVs, my iMac, and my wife's iPad Pro... thinking Apple wasn't going anywhere these last years... talk about good decisions & timing.
  • dontlistentome - Wednesday, November 18, 2020 - link

    Going to buy one, mostly out of geeky fascination for new stuff (even if it looks and acts like old stuff).
    Re the power consumption - this does read a little like you're disappointed by it: lots of excuses/reasons for it being broadly in line with what's already out there at the low-power end. Would love to see a Zen 3 laptop running an external display (lid shut) benchmarked as a comparison. Would there be any difference? I suspect not...
  • dizzynosed - Wednesday, November 18, 2020 - link

    Me too. Which one will you buy? I will go for the cheapest MacBook Air.
  • joms_us - Wednesday, November 18, 2020 - link

    It is laughable to see how you continue to use Geekbench and a primitive version of SPEC to compare ARM to its x86 counterparts and then tout how godly superior it is. Just look at the huge discrepancies between native and Rosetta mode. That is why you should compare two processors on the same OS or the same architecture (be it emulated).
  • dizzynosed - Thursday, November 19, 2020 - link

    Why? Why should we not compare how fast it can accomplish a task?
  • abufrejoval - Wednesday, November 18, 2020 - link

    Honestly, I am perhaps most impressed with how well the Ryzen 4800U did in Cinebench, especially the multicore. Sure, it's 8 big cores vs. probably just 4 big ones on the M1, but it's Zen 2 and 7nm so there is still potential to reduce the gap.

    And when you look at Geekbench results, there is one observation that sticks out very visibly and applies to both Geekbench 4 and 5:

    Windows results for identical hardware are the worst, significantly bettered by Linux results and again trumped by macOS results.

    The explanation: The compilers are simply not the same and it shows.

    My Ryzen 7 5800X stays shy of 1700 on Windows for single score, but goes beyond on Linux.
    The first leaks in excess of 2000 on Ryzen 5000 were all on "MacOS/iMacPro1,1" or Hackintoshes.

    I'd guess they are using a Microsoft compiler for Windows, GCC on Linux and LLVM on MacOS.

    So to avoid comparing apples to lemons, you need to compare the M1 to a 4800U Hackintosh, or better yet, run Linux on both.
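    The effect is easy to demonstrate at home, too. A minimal sketch, assuming both toolchains are installed (bench.c here stands in for any compute-heavy source file of your choosing):

        # Same source, same flags, two compilers -- scores diverge
        # before the CPU itself is even a factor.
        gcc   -O3 -o bench_gcc   bench.c
        clang -O3 -o bench_clang bench.c
        time ./bench_gcc; time ./bench_clang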

    Alas, without Linux the M1 is useless to me. Shame, because the silicon and the box seem pretty nice, apart from that partially eaten fruit that's turned a rather less appetizing black over time.

    I am ready to bet a case or two of beer that if Apple sold the M1 to PC makers and opened their M1 hardware up to run Windows and Linux/Android for ARM, those machines would outsell the ones running MacOS by far.
  • milli - Wednesday, November 18, 2020 - link

    That's exactly the thing that bothers me about Geekbench, but this fact is mostly missed by the community.
    It is kind of funny that the AnandTech staff thinks it can draw so many conclusions about the underlying chip performance, all the while running different OSes on these CPUs. Obviously, for now, running anything other than macOS on the M1 is far-fetched, but as you say, you can run a Hackintosh on AMD. Failing that, it's at most a platform comparison.
    Even running SPEC with the same compiler is up for debate vs optimized compilers.
  • blackcrayon - Wednesday, November 18, 2020 - link

    I'm going to guess this isn't much of an issue; if it were, Primate Labs themselves wouldn't keep it a secret. You don't think they've tested the same chips running on different OSes (wherever possible)?
  • thunng8 - Wednesday, November 18, 2020 - link

    All the 2000+ results on Hackintoshes were wildly overclocked to 6GHz+. Geekbench does not run any faster on macOS compared to Windows. Linux does seem to run a bit faster than Windows, though.
  • Vik32 - Wednesday, November 18, 2020 - link

    Andrei, thanks for the work done! Great review as always.
  • tyger11 - Wednesday, November 18, 2020 - link

    Ouch. Thanks for the work on the article, but it's time to use an editor again; this was a painful read.
  • marrakech - Wednesday, November 18, 2020 - link

    Blackmagic has a test for raw video stuff:
    https://i.imgur.com/ISL1mbf.png
    The M1 test is legit, from a user.
    Take a look at the right, where it says CPU and then 3:1, 5:1 or 8:1.
    I compare BRAW 3:1 at 8K and 6K:
    the M1 scored 14 at 8K and 22 at 6K;
    the AMD mobile chip gets 26 at 8K and 42 at 6K.
    Apple did well making a chip for its OS and packing a fast SSD into the MacBooks.
    The new OS, and the way the GPU talks to the system, is nowhere else to be found - just on the new MacBooks.
    It's like a Lamborghini 0-to-60 vs Tesla Ludicrous stuff; it's different.
  • samerakhras - Wednesday, November 18, 2020 - link

    Looks like a paid review by Apple.

    Cinebench R23:

    when it is about single-thread, it is compared against the 5950X,

    and in multi-thread suddenly the Ryzen 5950X disappears and only the 4900HS is there...

    Yeah, right!
  • mjtomlin71 - Wednesday, November 18, 2020 - link

    He's testing for two different things:
    1. Single core performance across the spectrum... this tells you about raw performance.
    2. Multi core performance in a low'ish watt design... this tells you about efficiency.

    No one is trying to say the M1 has the best-performing CPU in the world. There's no contest when it comes to multicore performance; the M1 can't touch those other CPUs. So the second test is to see how well it performs against high-end "mobile" designs.
  • samerakhras - Wednesday, November 18, 2020 - link

    Sorry, he is trying to say the M1 is competing with everything, and he was hinting at that in the earlier article as well - it's his narrative.

    That's why in single-thread he put in all the chips, and in multi-thread he put in only the lower ones.

    I have never in all my life seen the AnandTech site fall this far and become paid like the others.
  • Silver5urfer - Thursday, November 19, 2020 - link

    It reads as though they are paid - look at the comments, all over the place about the M1 being ultra fast, same for other MSM sites like Ars, etc. Glorifying it to the moon.
  • Spunjji - Thursday, November 19, 2020 - link

    Weird, because a huge chunk of the comments are you trying to rules-lawyer your way out of admitting that Apple made a surprisingly good chip.

    The idea that Ars and AnandTech are "MSM" is *chef's kiss*.
  • Spunjji - Thursday, November 19, 2020 - link

    Your inability to read correctly does not signify an error on the part of the person who wrote the article. You just had it explained to you and you're still substituting your own absurd conspiracy theory. That's not a great look.
  • samerakhras - Wednesday, November 18, 2020 - link

    Oh, and one more thing:

    this is a MAC MINI, not a notebook. You can install a full 65-watt desktop CPU in that case and it will work fine.
  • Makste - Wednesday, November 18, 2020 - link

    I'm still impressed by this M1 chip. It goes to show what ditching the L3 cache for a larger L2 cache can do for performance uplift (not that it's the only reason for the M1's performance). Fellow x86 fans should accept this challenge for the greater good of technological advancement, especially considering the integrated M1 graphics and a fan-free system. AMD should be able to come up with an equal or better option, since it deals in both GPU and CPU technology. What it requires is reasonable funding.
  • ricebunny - Wednesday, November 18, 2020 - link

    Why is the Intel i7 1185G7 (28W) over 20% slower in ST SPECfp 2017 in this review, compared to your review from 2 months ago?

    The score from the original review was: 6.42,10.75 for int,fp respectively. In this review it scores 6.17,8.60. That would make it marginally faster than m1 in fp and marginally slower in int.

    Link: https://www.anandtech.com/show/16084/intel-tiger-l...
  • ChrisGX - Thursday, November 19, 2020 - link

    Yep, that is a discrepancy that needs to be explained. I say that as reader with great admiration for Andrei Frumusanu, not an Intel fanboy.
  • The Hardcard - Thursday, November 19, 2020 - link

    It is explained in the article. He doesn't have a Fortran compiler for the Apple hardware, so the composite numbers for this article exclude those tests.
  • trboyden - Thursday, November 19, 2020 - link

    Because, as usual in biased reporting, if it goes against the agenda you have for an article, you skew the numbers and hope not to get caught. Similar to why they didn't compare to the i9 mobile processor that has 8 cores like the M1, and instead compared to an i7 that only has 4. The i9 would toast the M1, but that goes against the author's agenda for the article.
  • ChrisGX - Thursday, November 19, 2020 - link

    We need some discrepancies explained not a tale of woe infected by a fanboy nightmare.
  • Spunjji - Thursday, November 19, 2020 - link

    👍👍👍
  • Spunjji - Thursday, November 19, 2020 - link

    Alternatively, as noted in the article, they couldn't run the whole SPEC 2017 suite yet due to the current lack of an appropriate Fortran compiler.
  • KPOM - Friday, November 20, 2020 - link

    For starters, 4 of the cores run at 1/10 the power of the other 4 and are intended for lower-demanding tasks. That said, Apple said that those 4 cores have the same overall power as the dual-core MacBook Air from earlier in 2020.
  • Spunjji - Thursday, November 19, 2020 - link

    I'm pretty sure it's because of this:
    "It’s to be noted that currently we do not have a functional Fortran compiler on Apple Silicon macOS systems, thus we have to skip several workloads in the 2017 suite, which is why they’re missing from the graphs. We’re concentrating on the remaining C/C++ workloads."

    Hard to know exactly what effect that would have on the overall rankings. Good spot, though!
  • ognacy - Wednesday, November 18, 2020 - link

    Performance aside, I'd point out that unlike most of what has been released this year by AMD, nVidia and Intel, it's actually available to buy. I find this amazing.

    Also, the performance. As a developer and a software architect, I always wanted a design like this one: quiet and fast. The Xcode performance is jaw-dropping. My Xeon-equipped notebook, heavy and noisy, is on its way out.
  • mjtomlin71 - Wednesday, November 18, 2020 - link

    Not sure if people understand that this first M1 was specifically designed for the MacBook Air (a fanless notebook), and that this article is comparing it against CPUs from AMD and Intel that were designed for high-end desktops. It only made it into the 13" MacBook Pro and Mac mini because even Apple was surprised how much performance they were able to get from it. You can be sure this is NOT Apple's highest-performing SoC; it is just the one they decided to release first.
  • blackcrayon - Wednesday, November 18, 2020 - link

    Yep - some of the comparisons are pretty embarrassing.
    "Hey, look at this 45-watt chip that's slightly faster than the M1 in certain tests. Sure, it runs in laptops with a 50% larger battery and 50% shorter run time. Anyway, Apple should exit the market and give the money back to the shareholders!"
  • scottrichardson - Wednesday, November 18, 2020 - link

    Thank you. Quite a lot of folk are desperately trying to pluck examples from the tests that refute the M1's capabilities. Nobody is saying this chip is meant to be the most powerful CPU available across all markets. This is Apple's first chip, designed for their cheapest, lowest-end notebooks and mini desktop. Andrei didn't even need to include high-end desktops in his review comparisons, but has done so because the M1 is simply so good it deserves to be compared against these too. For a first attempt at a new computer CPU platform, you can hold back all your specific examples of where a chip may beat the M1, because the overwhelming, obvious conclusion is that this new M1 is groundbreaking, considering its use case, power consumption, and performance, and the fact it's the first of likely many CPU designs from Apple. Heck, even clocking one of these exact chips at ~4GHz would make these benchmarks a whitewash - with no other changes to anything architecturally.

    I'm interested in what Apple can do with a larger memory pool, and how they will handle the needs of people who use Macs like the iMac 27", iMac Pro, and the Mac Pro, who want variable amounts of RAM.
  • Spunjji - Thursday, November 19, 2020 - link

    The only disagreement I have is with this bit of speculation:
    "Heck, even by clocking one of these exact chips at ~4GHz would make these benchmarks a whitewash - with no other changes to anything architecturally."

    It's not clear that they *can* clock the chips that high - they would probably need to change them architecturally in order to do so.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    Correct. They would have to abandon their extreme low-power model for it to hit 4GHz. This, and many other reasons, is why x86 general-purpose computing wins out in the end.
  • velanapontinha - Wednesday, November 18, 2020 - link

    Are you guys done with reviewing GPUs?
  • silverblue - Wednesday, November 18, 2020 - link

    I doubt it, though it helps to be sent samples - certainly now that having to buy your own, when none are available, is the norm.
  • velanapontinha - Wednesday, November 18, 2020 - link

    No Ampere, no RDNA2 - seems strange. Not the AnandTech I've followed for over 20 years.
    I wish they would at least say something about the absent reviews.
  • Silver5urfer - Thursday, November 19, 2020 - link

    More Apple, and less of everything else.
  • Arnulf - Wednesday, November 18, 2020 - link

    "also seeing 4x Thunder efficiency cores at 2064MHz"

    Thunder or Icestorm?
  • realbabilu - Wednesday, November 18, 2020 - link

    Andrei, could you please type this in the terminal: sysctl -a | grep machdep.cpu.features - to see the CPU features?
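    A hedged aside: machdep.cpu.features is an x86-era key and is reportedly absent on Apple Silicon, with the feature bits exposed under hw.optional instead (an assumption based on Big Sur behaviour):

        sysctl machdep.cpu.brand_string    # should report "Apple M1" on these machines
        sysctl hw.optional | grep arm      # ARM feature flags, e.g. hw.optional.arm64: 1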
  • damianrobertjones - Wednesday, November 18, 2020 - link

    ...2031 - Apple moves to Intel processors.
  • TouchdownTom9 - Wednesday, November 18, 2020 - link

    Seems like Apple punched out a processor that matches Tiger Lake in single-thread perf while also doubling battery life and outperforming TGL in multithread performance. Very impressive stuff for a first go at a laptop/desktop-class chip. While it's also in the Mac mini, this is clearly still a laptop-class chip. It will be very interesting to know how much of the benefit is being driven by the 5nm node, to compare against Zen 3 laptop parts when they arrive in Q1, and to see how it (or the M2 chips) compares once all of them are on 5nm nodes.
  • Kuhar - Wednesday, November 18, 2020 - link

    Exactly as I predicted in my comment on the previous article on the M1 CPU. It is very good, but far from being the best. From the graphs we can see that in quite a few scenarios even the one-year-old AMD Ryzen 4800U (at 15 watts) kicks the M1 right out of the field. And using Rosetta is, in some scenarios, around a 40% handicap. Too much hype on your side, Andrei - I would like to see the same test with a Zen 3 U-series Ryzen (when available, of course). I am sure Ryzen will surprise you.
  • TEAMSWITCHER - Wednesday, November 18, 2020 - link

    I think the next couple of iterations of Apple's M family of processors will tell the tale. How does the professional M1 silicon (for the 16" MacBook Pro, iMac, and Mac Pro) perform - if it exists at all? How will the second generation (M3 processors) perform in comparison, and how soon will Apple have it ready? Finally, will AMD be able to keep Apple's pace? Apple is a far more diversified company, with the deepest pockets, and customers with deep pockets.
  • Nicon0s - Sunday, November 29, 2020 - link

    "Finally, will AMD be able to keep Apple's pace. "

    Undoubtedly, yes.
    AMD is actually more diversified in the context of the computing world and has access to way more of the market overall than Apple. Just now the latest-generation consoles have launched, and both use semi-custom AMD designs. The race is not over anyway: AMD and Intel will continue to improve year after year, and they still have the majority of the computer market in their pockets. Also, I think Apple's update cycle for its chips will be slower than AMD's anyway.
  • Ppietra - Monday, November 30, 2020 - link

    It depends on what you consider computing. Apple products scale from smartwatches to workstations, and all of them will have Apple SoCs. Apple doesn't have a console yet, but there are strong rumours that it will have one in the near future. And considering all Apple products, it sells more processors through its products than AMD.
    Apple's update cycle is already known: one new CPU core every year, always on the latest fabrication process, ahead of AMD.
  • zodiacfml - Wednesday, November 18, 2020 - link

    What is the power consumption in the Cinebench MT run?
  • EthiaW - Wednesday, November 18, 2020 - link

    Intel can stay competitive in the laptop market by offering an 8-core Tiger Lake at 25W TDP, at least in the next year. What happens to them thereafter, however, is unpredictable.
  • Spunjji - Thursday, November 19, 2020 - link

    Their 8-core Tiger won't come in under 45W.
  • hanskey - Wednesday, November 18, 2020 - link

    Yeah.

    Apple has historically only been relevant to users with more money than sense, and this is why their PC market share is a joke and their tablet and phone market shares are ever shrinking - because they are not competitively priced.

    Generally speaking, Apple products are for computer-device users who are not very tech savvy, who do not seek the best total cost of ownership for their performance requirements, who are easily swayed by "the cool factor", and who also don't mind vendor lock-in, the lack of reasonable pricing for minor upgrades, expensive repairs, and giving Apple control over which software you get to use. These are all problems for the vast majority of computer-device users, and in addition to being very overpriced, they are why Apple just gets less relevant with each passing year in the long term.

    A nice CPU for some use cases will not solve any of that, I'm afraid. Don't get me wrong: I'm a CPU architecture nerd, and they've impressively reached close to single-threaded parity, with some of the neatest bits I can think of, though it remains to be seen if these cores can scale as they have been. But none of that matters like it would if VIA or a Chinese x86 manufacturer did the same on an open platform, because of Apple's terrible business practices.
  • Upsider - Wednesday, November 18, 2020 - link

    Had to log in to thank you for this. I needed the laugh today!
  • KoolAidMan1 - Wednesday, November 18, 2020 - link

    People on day one are literally running 12K raw video files with no drops, beyond Apple's example of running 8K video with no stutter in DaVinci Resolve. Their bottom-end chip is performing as well as or better than Intel-based systems that cost twice as much - but yeah, it's for people with more money than sense.

    Cope.
  • blackcrayon - Wednesday, November 18, 2020 - link

    "Generally speaking, Apple products are for computer-device users who are not very tech savvy"
    Must be why Google uses them. And that UNIX shell. Another non tech savvy user feature. The vast majority of desktop users use Windows PCs. And 99% of them design their own computers from scratch, then write software to run on top of it.

    You should be ashamed of yourself.
  • Silver5urfer - Friday, November 20, 2020 - link

    Googlers are the worst fools in software development. They are chasing the same locked-down MacBook model of garbage and fake privacy features, and Android is rotting from the inside out: they are shaking up fundamentals like filesystem support, replacing it with the sandboxed Scoped Storage mess with the SAF framework on top. Their Chromebooks are utter garbage too: the OS doesn't even have native apps, it uses Android's, yet it can't handle SD card access properly, and the machines can't properly boot other OSes either. The UI is a mess, made for genuinely non-tech-savvy users who just need a basic computer, like kids; that's why many US schools bought them, and they were very cheap too.

    As for MS, it's the same story. The Windows 10 UX is a mess because the OS looks like a mobile OS first, killing information density to accommodate touch screens. They could simply have offered two UX options on first boot, but they don't; on top of that they are killing the Control Panel and the more desktop-centric parts of the system in favor of the subpar Metro/Fluent UI, which emphasizes ugly design over a productive desktop workflow. Their Surface Books are the same kind of trash: they use Intel, but they gate every single BIOS control with zero user-tweakable options, and they added S mode and all sorts of garbage to block .exe files, which failed, so they backtracked. They are going down the same walled-garden utopian path of Windows as a Service, an unstable mess of an OS with six-monthly beta hell for a desktop OS, using the Home-edition userbase as guinea pigs.

    Google gets the prize, since the Pixel was an iPhone clone from day one. They aped the iPhone 6 design with the HTC-built Pixel, advertised the 3.5mm jack and then axed it with the Pixel 2, copy-pasted that insane notch onto the 3XL as the world's worst notch ever, and then gutted the Pixel 4's hardware so badly that it shipped without a proper battery and missing a ton of features. Their software is a subpar beta: with apps like Pixel Recorder, you record a sound and can't even see the file in storage thanks to the Scoped Storage disaster; you have to hit Share. That's the level of copying going on at Google. And the best part: their market share over five generations is less than 3%, which is less than Huawei, a company blocked by CFIUS, and even that Huawei share came through Honor.

    Of course you're not even capable of understanding all of this, yet you say things like what you said to the OP. Keep it up. Before others come and try: google "Scoped Storage CommonsWare", and if you understand it, then we can talk.
  • Spunjji - Monday, November 23, 2020 - link

    Would love to see Silver5urfer's posts run through analysis software and compared to Quantumz0d. Even if they're not the same person, the overlap is fascinating.
  • Spunjji - Thursday, November 19, 2020 - link

    One of my most frugal developer friends codes on an ancient MacBook Air. But sure. 😂
  • Sandbo - Sunday, November 22, 2020 - link

    And with the M1, that history is now history, actually. Try to show me a laptop with the performance of the base-model MacBook Air that maintains the same battery life.

    RAM is one limit, but for lots of people who only use a laptop for web browsing and maybe Zoom, like students, this is the no-brainer choice to me.
  • Xanadu1977 - Wednesday, November 18, 2020 - link

    It would be interesting to see whether (if ever) an ARM-based CPU could be paired with a discrete high-end GPU from AMD or Nvidia, like a 6800 XT, 2060 Super, 3070 or better.
  • sfwineguy - Wednesday, November 18, 2020 - link

    Andrei and Ryan (or others who have seen tests): any insight yet into how much the M1-equipped machines benefit from 16GB vs 8GB of RAM? Traditionally, Apple has certainly seemed to do a better job with less RAM than others in either the Intel or ARM worlds (and they certainly position themselves that way), so I'm curious whether my usual impulse to buy the upgrade is even less justified now. I have seen plenty of commenters in many places weigh in on this, but they are all just generalizing from past experience; I have not yet seen anyone test loads and compare. Thanks for a great and fast look at the M1 stuff, especially combined with Andrei's other article on the SoCs.
  • Steven Choi 4321 - Wednesday, November 18, 2020 - link

    I see Intel and AMD as the BlackBerry and Nokia of back in the day. x86 chips are like cooking burners, and it's getting worse and worse no matter what they try to sell.
  • krakennuts - Thursday, November 19, 2020 - link

    Except that BlackBerry used an ARM processor after switching from Intel. Hmmmm.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    AMD gets more and more energy efficient with each Zen release. Intel is the exact opposite.
  • kkromm - Wednesday, November 18, 2020 - link

    2018 Mac mini: 4 Thunderbolt and 2 USB ports. 2020 Mac mini: 2 Thunderbolt ports, and that's it. Completely useless.
  • Alistair - Thursday, November 19, 2020 - link

    No, it has 2 Thunderbolt and 2 USB ports.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    The latest Intel Mac mini has 4 Thunderbolt 3 ports and 2 USB ports. Just read the specs.

    https://en.wikipedia.org/wiki/Mac_Mini#Fourth_gene...

    4× Thunderbolt 3 (USB-C 3.1 Gen 2)
    2× USB 3.0 Type-A
    HDMI 2.0
    3.5 mm headphone jack

    2020 M1

    2× Thunderbolt 3 / USB 4 (USB-C)
    2× USB 3.0 Type-A
    HDMI 2.0
    3.5 mm headphone jack

    Note the lack of HDMI 2.1 in 2020, and the lack of Thunderbolt 4. Why? Thunderbolt 4 requires Intel VT-d-based DMA protection, PCIe at 32Gb/s, and a minimum port speed of 40Gb/s. Apple is capping Thunderbolt at 3. TB4 also requires PC charging on at least one port; that's another area where Apple knows its reduced-power solution would have to jump up quite a bit to provide support.
  • blackcrayon - Friday, November 20, 2020 - link

    If you're a bot, you probably need your algorithm patched for the word "useless".
  • tomInCanada2020 - Thursday, November 19, 2020 - link

    On overall performance: one more reason the M1 could show even greater real-world application performance is that SPECint and SPECfp stress the chip's integer and floating-point performance. That's useful, but it doesn't tell the full story once the additional on-chip accelerators (machine learning, the Neural Engine, the ISP, video encode/decode, etc.) come into play.
    I can only imagine that in real-world workloads where developers use Apple's core libraries such as CoreML, CoreImage, CoreMedia, etc., performance would tilt even further in favour of the M1 SoC design compared with traditional x86 silicon (a rough sketch of the idea follows below).
    When you factor in that the chip does all this AND manages to do it on a skimpy power budget... mind. Blown!!
    I'm really hoping this move lights a fire under Microsoft and Qualcomm to get a more aggressively performing successor to the Surface Pro X out the door!
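    For what it's worth, here is a minimal Swift sketch of that framework argument, using Accelerate's vDSP as a stand-in for the libraries named above (my own illustrative example, not anything from the article). The call site stays the same on every Mac; the framework underneath dispatches to whatever silicon-tuned path Apple ships:

    ```swift
    import Accelerate

    // Element-wise multiply, then sum: a dot product routed through vDSP.
    // The specific kernel doesn't matter; the point is that framework calls
    // like this (and CoreML, CoreImage, CoreMedia) land in Apple-tuned code
    // paths, so apps inherit M1-specific optimisations just by linking the
    // system frameworks rather than hand-rolling their own loops.
    let a: [Float] = (0..<1024).map { Float($0) }
    let b: [Float] = (0..<1024).map { Float($0) * 0.5 }
    let dot = vDSP.sum(vDSP.multiply(a, b))
    print(dot)
    ```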
  • lucam - Thursday, November 19, 2020 - link

    The M1 with the PowerVR Series A GPU is a real killer! Look fwd to M1X!
  • Alistair - Thursday, November 19, 2020 - link

    Yeah I don't want any more CPU cores, I want clock speed increases. I also want double the GPU. M1G with double the GPU, make it happen Apple.
  • ChrisGX - Thursday, November 19, 2020 - link

    Long ago, I was in the computer trade, for a time. People, from time to time, still ask me about product purchases and issues of the moment shaping computing products for consumers. From the information already available on the M1 I can say that, at this point, it won't be hard to recommend the MacBook Air (and MacBooks and Mac Minis more generally) to anyone who wants to know what I think. I would mention all the standard caveats, of course. The Apple bear hug is not for everyone but it is hard to miss the fact that Apple is now making better consumer computers than its competitors and, for the first time from any computer products supplier, Apple's computers for average users rate very well on both performance and energy efficiency.

    As far as I am concerned, the tiger in Tiger Lake turned out to be toothless. Meanwhile, AMD has been stung but can't be counted out. If anyone wants a computer less expensive than a MacBook Air, I would advise an ARM-based Chromebook or tablet.
  • nicball - Thursday, November 19, 2020 - link

    Is it true that the SPEC scores are run in WSL for the Windows machines? If so, they are only comparable among Windows machines, not across OSes.
  • ChrisGX - Thursday, November 19, 2020 - link

    Is it true? Hmm...NO!

    SPEC isn't a secret cabal. You can look this stuff up on their website. Try here:
    https://www.spec.org/cpu2017/Docs/system-requireme...
    https://www.spec.org/cpu2017/Docs/install-guide-wi...
    https://www.spec.org/cpu2017/Docs/
    https://www.spec.org/cpu2006/Docs/system-requireme...
    https://www.spec.org/cpu2006/Docs/install-guide-wi...
    https://www.spec.org/cpu2006/Docs/
  • substance90 - Thursday, November 19, 2020 - link

    The denial of AMD fanboys is so strong.. I can't resist getting the popcorn out.
  • Spunjji - Thursday, November 19, 2020 - link

    Yeah, I've been disappointed to see a few familiar names howling with rage at this one.

    That said, the Intel shills are clearly here too - just under new aliases, as seems to be customary.
  • vaddieg - Thursday, November 19, 2020 - link

    Andrei, why haven't you used 'sudo powermetrics' for chip power introspection? AC wall measurement doesn't seem very accurate.
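    For anyone who wants to try it at home, an invocation along these lines should work. It needs root, and the sampler and flag names here are from memory rather than the article, so check `man powermetrics` on your own machine:

    ```
    # sample CPU and GPU package power once a second, ten samples
    sudo powermetrics --samplers cpu_power,gpu_power -i 1000 -n 10
    ```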
  • Spunjji - Thursday, November 19, 2020 - link

    I'd like to see that, too.
  • ricebunny - Thursday, November 19, 2020 - link

    AnandTech, please clarify which compilers and platforms you are using. I do hope you are using the Intel compiler for the Intel chips?

    The same Tiger Lake chip scored 20% less in this review than it did 2 months ago. This destroys the credibility of your results.
  • Spunjji - Thursday, November 19, 2020 - link

    Only in the SPEC 2017 benches, which they explicitly noted aren't running with the full suite...
  • realbabilu - Thursday, November 19, 2020 - link

    Since Catalina, the Intel compiler has had issues and incompatibilities. Big Sur's compatibility is even worse.

    https://community.intel.com/t5/Intel-C-Compiler/In...

    That's why my hackintosh stays on Mojave. The only compiler that works for now is Apple's GCC, and it doesn't have gfortran. I don't know whether the latest GNU GCC for Catalina would work under Big Sur's Rosetta 2.
  • realbabilu - Thursday, November 19, 2020 - link

    Damn T9.
    That should read: the Intel compiler has issues.
  • Phemg - Thursday, November 19, 2020 - link

    It would be nice to see a multi-threaded comparison against the efficiency cores...
  • QuantumKot - Thursday, November 19, 2020 - link

    I wonder whether Apple has removed support for 32-bit instructions from the M1 hardware altogether, or whether it is still there just in case.
  • Jenoin - Thursday, November 19, 2020 - link

    "it takes a Ryzen laptop with a Radeon 560X to finally pull even with the Mac Mini"

    Oh really? A Ryzen that was the slower of two products in its product stack when it launched three(!) years ago on a 14nm process, coupled with a two-year-old refresh of a refresh of a refresh of a 14nm GCN-based GPU. What do you think the market value of such a laptop would be today, assuming you could find one new? $250? $300? This comparison, and the language used for it, is disingenuous at best.

    What a joke.
  • hagjohn - Thursday, November 19, 2020 - link

    Apple Silicon puts Apple in the driver's seat. There is no more waiting for Intel to release products, which has been pretty hard lately. They have done a damn good job with their first product. I'm excited to say I ordered an M1 mini (16GB/1TB), and I can't wait to get it.
  • Alexvrb - Friday, November 20, 2020 - link

    I'd like to see Rosetta 2 compared to Windows on ARM's x86 support. Obviously the QC chip isn't as fast, but I mean as a percentage of native performance. I'm sure Apple's version has less overhead, but I'm still curious how big the gap is.
  • alufan - Friday, November 20, 2020 - link

    Hmm, no Nvidia GPU reviews and now no AMD GPU reviews?
    It seems AnandTech is becoming an Intel and Mac site; either that, or you have seriously annoyed some of the folks who send out review kits.
  • tuxRoller - Saturday, November 21, 2020 - link

    This is a FAR more interesting story. We've finally got a major desktop player going all-in with ARM, and the results are astonishing.
    That kind of change doesn't happen often.
    If you want to read about how many more fps you can get with the new cards, read any of more than a dozen benchmarking sites.
  • Spunjji - Monday, November 23, 2020 - link

    I was hoping for an architectural deep-dive on both of them, the kind you don't usually get from other sites.

    I'm still hoping for that, tbh. I don't much care about it being "on time". It is starting to get worrying how badly delayed things have become, though.
  • tkSteveFOX - Friday, November 20, 2020 - link

    Just imagine if they up the TDP to 40W on next year's 3nm process.
    Perhaps the CPU part won't see big gains (let's face it, the CPU is better than anything on the market up to 60W), but the GPU should double in performance. By then, 95% of apps will be native as well, so M1 performance will gain another 10-20% in all scenarios.
  • mdriftmeyer - Saturday, November 21, 2020 - link

    Double the GPU performance? That's delusional right there. Nothing in Apple's licensed IP will ever touch AMD's GPU performance moving forward. Your claim that the CPU is better than anything on the market up to 60W is also delusional.

    Enjoy 2021, when very little changes inside the M series while AMD keeps moving forward. As a NeXT/Apple alum, I was hoping my former colleagues were wiser and had moved to AMD three years ago, pushing this ARM jump back two more years.

    ARM designs have reached their zenith in the embedded space, and that is why all the "wow" marketing is about camera lenses. Fab processes are reaching their zenith as well.

    The M1 is 12 years of ARM development by Apple's teams since buying P.A. Semi in 2008. It took 12 years with unlimited budgets to produce the M1. It's far less impressive than people realize.

    Most of Apple's frameworks are already fully optimized after the past 8 years of in-house development. People keep thinking this code base is young. It's not; it's mature. We never released anything young back at NeXT or in Apple engineering. That hasn't changed since I left; it's been the mantra from day one.

    The architecture teams at AMD have decades more experience in CPU design, and nothing released here is something they haven't already worked on in-house.

    Intel's arrogance has produced one of the greatest falls from the top in computing history. And it's only going to get worse for the next five years.
  • dontlistentome - Saturday, November 21, 2020 - link

    Not sure when Intel will learn. They let the gigahertz marketeers ruin them the last time AMD had a lead, and this time it was the accountants. I wonder who will screw them up 15 years from now.
  • corinthos - Monday, November 23, 2020 - link

    Intel made so many mistakes and brought in outsiders who just didn't have the goods to set it on a good path forward. The board members who chose these folks are partially to blame. Larrabee was just one of the earlier warning signs.
  • corinthos - Monday, November 23, 2020 - link

    So are you saying that there's more limited growth opportunity for Apple going down the ARM path than people realize, and that the prospect of AMD producing competitive/superior low-powered processors is going to be much better?

    For now, it seems that for the power consumed, the M1 products have a leg up in power-performance over Intel or AMD-based competing products. Can Apple take that and scale it upwards to be competitive or even a leader in the desktop space?

    I think about the early test results already showing a 2019 Mac Pro, with a 10-core CPU, an expensive discrete AMD GPU, and loads of RAM, being outshone by these M1s in some video-editing workloads, and I wonder whether a powerhouse desktop is such a good investment these days. That thing came out about a year ago and probably cost around $10K.
  • Focher - Tuesday, November 24, 2020 - link

    From your post, I suspect you are not going to enjoy the next 2 years. Saying things like the code is already fully optimized is so ridiculous on its face, it’s hard to believe someone wrote it. If time led to full optimization, then what’s the magic time horizon where that happens? If you think Apple just played its full hand with the M1, you’ve never paid attention to Apple.
  • blackcrayon - Tuesday, November 24, 2020 - link

    I would think doubling GPU performance would be one of the easier updates they could make. More cores, more transistors with their existing design - the same thing they've done year after year to make "X" versions of their iPhone chips for the iPad. The M1 isn't at the point where doubling the GPU cores would make it gigantic and unsuitable for a higher end laptop or desktop. Unless you thought he meant "double Nvidia's best performance" or something which isn't going to be possible currently :)
  • zodiacfml - Friday, November 20, 2020 - link

    Coming back here just to leave a comment: though the M1 truly leaves any previous Apple x86 product in the dust, it is far from the performance of a Ryzen 4800U, which has a 15W TDP; the M1 is on 5nm while consuming 20-24W.
    The M1's iGPU is mighty, though; it can only be equaled or beaten by next year's generation of APUs or Intel iGPUs.
  • thunng8 - Saturday, November 21, 2020 - link

    The 4800U uses up to 50W running benchmarks and under load. You will never see a current-gen Ryzen without active cooling. In laptops running benchmarks, the fans ramp up to 6000rpm, while the M1 can run in the MacBook Air with no cooling and hardly any performance degradation. And there's also the issue of battery life, where the M1 laptop with a smaller battery can far outlast any Ryzen laptop.

    In short, you cannot compare Intel's and AMD's dubious TDP numbers with the figures measured for the M1.
  • BushLin - Saturday, November 21, 2020 - link

    The M1 and the 4800U (in 15W mode) consume a similar 22-24W, while the 4800U shows better Cinebench performance; no denying the M1's single-thread advantage, though.
    If you've seen a 4800U anywhere near 50W, it won't have been in its 15W mode, and it will have been running an unrealistic test like Prime95.
    Just the facts, Jack.
  • thunng8 - Sunday, November 22, 2020 - link

    What are the Cinebench results of the 4800U running at 15W? Everywhere I see the 4800U running benchmarks, it is well over 40W.

    For example here: https://www.notebookcheck.net/The-Ryzen-7-4800U-is...

    Power usage peaks at 57W, and in games the laptop sustains 49W indefinitely. That is very, very far from 15W.

    AnandTech actually tested the M1 with a fan in the mini: it uses 15W in Cinebench and scores 7700. The passively cooled MacBook Air uses 7W when thermally throttled (in the 30-minute run) and scores approximately 6000.
  • BushLin - Sunday, November 22, 2020 - link

    Notebookcheck is testing a 4800U in 25W mode while stressing the CPU and GPU simultaneously.
    Here are accurate charts of the 4800U in 15W mode:
    https://www.anandtech.com/show/16084/intel-tiger-l...
  • thunng8 - Sunday, November 22, 2020 - link

    Here are some actual figures for a Renoir system.

    https://forums.anandtech.com/threads/new-apple-soc...

    Constrained to 25W, it scores 6600,

    while the M1 scores 7700 at 15W.

    https://twitter.com/i/web/status/13287773335122780...

    The 4800U would need 40W+ to outperform the M1 at 15W.
  • BushLin - Sunday, November 22, 2020 - link

    Those results are for a different CPU, the 4700U, and were posted by some random person.
    Why not simply look at the results from the article you're commenting on, which are more likely to have been taken under controlled conditions?
  • thunng8 - Sunday, November 22, 2020 - link

    I am looking for actual performance results at 15W, not boosted to 40W or higher, which all 4800U laptops seem to do.
  • BushLin - Sunday, November 22, 2020 - link

    It didn't take much effort to find that the 15W M1 and the 15W 4800U have very similar power draw, if that's what you're actually looking for.
    They both boost above their rated power in a similar way.

    https://images.anandtech.com/graphs/graph16252/119...

    https://images.anandtech.com/doci/16084/Power%20-%...
  • thunng8 - Sunday, November 22, 2020 - link

    I was looking for the specific Cinebench results you alluded to, where it runs faster than the M1 while using the same power. The few videos I have seen show 4800U laptop fans spinning up to maximum for most of the 10-minute Cinebench run, while the M1's fan (in the MacBook Pro) was not audible, and the Air has no fan at all.
  • BushLin - Monday, November 23, 2020 - link

    https://images.anandtech.com/graphs/graph16252/119...
  • Ppietra - Monday, November 23, 2020 - link

    BushLin
    But those aren't the real power-draw numbers for the M1 and 4800U laptops in the tests. The 4800U's 15W is just a reference; no measurements were made of the power consumption during Cinebench, and the 4800U can draw much more than 15W.
    As for the Mac mini, the power draw shown is for the whole computer, not just the processor. If you had bothered reading thunng8's links, you would see that M1 power consumption probably tops out at 15W during Cinebench.
    I have seen tests where a 4800U laptop consumes almost 3 times more power than a MacBook Pro with an M1 chip during Cinebench. That's the power consumption of the laptops, not the chips, but the difference is gigantic.
  • BushLin - Tuesday, November 24, 2020 - link

    The power measurement for both systems is at the wall rather than just the SoC, and the numbers available from this site, taken under controlled conditions, show a similar power draw under a multithreaded workload. There is some variation depending on factors like AVX instructions, but the 4800U in 15W mode is demonstrably pulling similar amounts of power to an M1 in 15W mode running a similar workload.
    The 4800U can be run in a higher 25W power mode, but not on the pages I've linked, as that mode wasn't used to get the Cinebench results in this article.
  • Ppietra - Tuesday, November 24, 2020 - link

    Sorry, but no! What you showed for the 4800U isn't the power draw of the computer; it's the power draw of a processor doing a completely different test, in an unnamed computer, with unknown settings. Not to mention that this M1 analysis is of a desktop computer, not a laptop. Why are you trying to deceive?
    If you want to see computer power consumption, go to the Tally Ho Tech YouTube channel; you will see how a 4800U laptop consumes around 3 times more power in Cinebench than a laptop with an M1 chip.
  • BushLin - Tuesday, November 24, 2020 - link

    Oh... I have to wade through some random YouTube channel to see some other flawed comparison? If you can link to an objective test showing actual power draw, post the link.
  • Ppietra - Tuesday, November 24, 2020 - link

    Hmmm! So you complain about a channel doing a straightforward comparison under similar conditions, but you have no problem using different kinds of power-consumption figures from completely different tests to validate your "theory". Grand!!
    For the record, thunng8's link shows you the power consumption of the M1 in a Cinebench test done by the AnandTech author.
  • BushLin - Tuesday, November 24, 2020 - link

    No, I'm continuing to complain that you don't post a link to your evidence. I'm not sitting through every YouTube video from some random person when, in the next breath, you again refer to a forum post linked by thunng8 that is for a different CPU, the 4700U.
    No amount of FUD changes the fact that the M1 is great at single-thread, especially for the power, but it isn't the Jesus CPU you dream it to be in multithreaded workloads.
    Put up or shut up.
  • Ppietra - Tuesday, November 24, 2020 - link

    I have said nothing about a 4700U CPU, nor have I referred to links to talk about AMD CPUs; I very clearly mentioned only M1 data when referring to thunng8's links. Nor have I said that the M1 has the highest performance of all; I only talked about power consumption.
    https://www.youtube.com/watch?v=wuvZQOUDCKY
  • BushLin - Tuesday, November 24, 2020 - link

    Great YouTube video; it didn't measure the power draw. Yet you're the one throwing around words like "deceive".
  • Ppietra - Tuesday, November 24, 2020 - link

    Didn't measure the power draw? Really?
    So showing the battery-level change after a 10-minute Cinebench test, with known battery capacities, isn't sufficient data to determine approximate power draw? Not even enough to give you an idea? A little bit of basic math, then!
    A 16% drop in a 60.7Wh battery gives 9.7Wh of energy used by the AMD laptop. If you prefer watts: since it was a 10-minute run, that works out to 58W.
    A 6% drop in a 58Wh battery gives 3.5Wh of energy used by the MacBook Pro. Over a 10-minute run, that gives 21W to do the same task.
    From a previous video, the Cinebench MT scores were 9976 for AMD and 7800 for the MacBook.
    If you normalize energy consumption to the Cinebench score, the MacBook Pro consumes less than half the energy for the same performance!
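    Written out as a formula (same numbers as above, assuming the reported battery percentages are accurate and the drain is roughly linear over the 10-minute run):

    ```latex
    P \approx \frac{\Delta\mathrm{SoC}\cdot C}{\Delta t},\qquad
    P_{\mathrm{4800U}} \approx \frac{0.16 \cdot 60.7\,\mathrm{Wh}}{(10/60)\,\mathrm{h}} \approx 58\,\mathrm{W},\qquad
    P_{\mathrm{M1}} \approx \frac{0.06 \cdot 58\,\mathrm{Wh}}{(10/60)\,\mathrm{h}} \approx 21\,\mathrm{W}
    ```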
  • BushLin - Tuesday, November 24, 2020 - link

    What are the Cinebench results when running on battery? What is the battery's actual capacity, and how fast does it drain when fully loaded? How does the system report battery percentage versus cell voltage? How much power does the screen consume? Can you trust the person making the video to be thorough and objective?
    I could go on and on; or you could just stick a $15 meter on the AC plug and actually measure it.
  • Ppietra - Tuesday, November 24, 2020 - link

    ?? I gave you the Cinebench results. I gave the actual battery capacities, and I gave how much they drained while running Cinebench and over what time interval. It is all in the videos!
    How much power does the screen consume??? REALLY!!? For someone who argued for "power measurement" "at the wall rather than just the SoC" to now argue against measuring battery drain over a 10-minute test is laughable. Even more so when, in the end, you make another U-turn and argue for measuring power at the plug. So the screen doesn't matter now!??
    Just face reality: the M1's power consumption has been measured at 15W in Cinebench by AnandTech. The 4800U's power consumption can go higher than 25W, since it can boost to 35W.
  • BushLin - Tuesday, November 24, 2020 - link

    Since you're either a troll or insane, I'm not going to expend any further effort beyond answering this:
    "Just face reality M1 power consumption has been measured at 15W in Cinebench, by Anandtech"

    With this:
    https://images.anandtech.com/graphs/graph16252/119...
  • Ppietra - Tuesday, November 24, 2020 - link

    No,
    with this:
    https://twitter.com/i/web/status/13287773335122780...
    That's the AnandTech author's Twitter account, something that for some reason you have refused to read!
    Amazing how you cannot distinguish between processor power consumption and whole-machine power consumption.
  • Nicon0s - Monday, November 30, 2020 - link

    "?? I gave you the Cinebench results. I gave the actual battery capacity, I gave how much it drained when running Cinebench and in what time interval - it is all in the videos!"

    LoL, you only linked to an unprofessional, amateurish video. Nobody in their right mind would suggest that a video like that is conclusive about how efficient a 4800U chip is.
  • Nicon0s - Monday, November 30, 2020 - link

    "Didn’t measure the power draw? Really?

    Yeas, really. You have so clear perception problems.
  • Nicon0s - Monday, November 30, 2020 - link

    Exactly. Quite a flawed way of comparing processor efficiency; he also didn't run the laptops down to the end to see whether charge is lost at a steady rate on all of them.
  • Nicon0s - Monday, November 30, 2020 - link

    "Sorry, but no!"

    WoW, some Apple fanboys here are really dense.
    Also, recommending an amateur YouTuber, that was cringey.
  • Pacinamac - Saturday, November 21, 2020 - link

    I am smiling, knowing who had huge involvement in the development of the M1...

    You all know who I am talking about...

    Anand. :)
  • GeoffreyA - Sunday, November 22, 2020 - link

    My brother and I were talking about the same thing a week or two ago. I reckon he's been giving Apple advice/guidance on this whole matter. Someone who can see the whole forest in one fell swoop, but understands the wood and bark as well. "The Enemy is moving with their armies of Zen, while the Coves of Sunny languish in the Lakes of Ice, under spell of the Willow Lady. The hero Rocket is detained in the Lost Fields of 14. Now, now is the time to strike with the M1, smiting down the Free Peoples of PC-earth, while the wind speaketh the language of ARM." ;)
  • GeoffreyA - Sunday, November 22, 2020 - link

    On a serious note, I think he's been giving the CPU team a bit of design advice too.

    Anyhow, it would be great to hear an update from him, even if it were just a personal one, telling us how he's doing.
  • Tomatotech - Sunday, November 29, 2020 - link

    He can’t. A lot of contracts specify absolutely no talking to media (which would include Anandtech) without corporate approval. For someone coming from a journalist background, even running a personal blog might raise eyebrows as it is ‘public media’. I don’t like it, but many non-Apple companies are sensitive and paranoid about this kind of thing. I can only imagine it 100x worse at Apple.
  • Tomatotech - Sunday, November 29, 2020 - link

    To add: even if he got corporate approval for a screened interview or statement, it almost certainly couldn't be with AnandTech. Apple and others might see that as unduly favouring one media outlet because of his personal history with AnandTech. The choice of outlet has to be made by someone else, choosing for professional reasons and fitting in with Apple's media strategy.
  • GeoffreyA - Thursday, December 3, 2020 - link

    Sad, but true. I remember his "Road Ahead" article in 2014 and that was it. Gone. Well, let's hope the man leaves Apple someday and comes back home to Anandtech. I know, the chances of that happening are exceedingly slim.
  • pjc15 - Saturday, November 21, 2020 - link

    This might be evidence that ARM is the way of the future, but Intel, AMD, and Microsoft are too entrenched in x86 to do anything about it. They would have to maintain x86 and ARM versions of everything, and on the hardware side especially, they don't seem to have the bandwidth to do that. Maybe custom servers will move to ARM, maybe some types of pros will move to macOS if Apple's lead extends, but for the vast majority of cases, x86 is good enough and cheap enough that Wintel will be unaffected. In order to upend the industry, Apple would have to sell its chips to OEMs, or sell sub-$500 computers, both of which have almost no chance of happening.
  • profquatermass - Sunday, November 22, 2020 - link

    So where does the M1 go from here?

    How can they make the M2?
  • Spunjji - Monday, November 23, 2020 - link

    The next move will probably be a larger variant with more (8?) large cores, more (+50%?) GPU resources, a wider (256bit?) memory bus, and LPDDR5. They're all fairly obvious ways to provide higher performance, but they mean a larger die, which means lower yields on a 5nm process that's still in its early stages.
  • utferris - Wednesday, November 25, 2020 - link

    I hope there will eventually be an Apple ISA. The ARM ISA could become limiting at some point if Apple wants its own custom instructions for better performance.
    And Nvidia will be owning ARM, which is the worst thing for Apple and the world.
  • Focher - Thursday, November 26, 2020 - link

    Nvidia owning ARM will have zero impact on Apple.
  • tokale - Friday, November 27, 2020 - link

    This is the biggest misconception out there: some folks think Apple's chip fate is tied to ARM. The truth is, the only ARM-related thing in Apple's chips is the instruction set; everything else is Apple's.
  • PickUrPoison - Saturday, December 12, 2020 - link

    True. But the OP is right about instruction-set extensions. Do we know there aren't any? I expected some, perhaps for Rosetta 2 performance acceleration, especially for on-the-fly emulation as opposed to Rosetta 2's ahead-of-time translation.
  • helpmeoutnow - Thursday, November 26, 2020 - link

    Looks like another bubble from Apple, but they will sell it as something amazing.
  • PickUrPoison - Saturday, December 12, 2020 - link

    Yes, congratulations to Apple, absolutely! As Andrei said, “overall, Apple hit it out of the park with the M1”.
  • MH_Muso - Sunday, December 20, 2020 - link

    The M1 Macs need a faster GPU or eGPU support; the M1 is losing to a lowly 1660 Ti. I hope it will improve next year. The good thing is that the M1 Macs are cheap, and the new architecture is innovative.
  • BR117 - Saturday, May 1, 2021 - link

    Hey Andrei, I’m revisiting this review to compare but I’m not sure why your Speedometer scores are so low for Ryzen. My 5800X scores 189-200... are you guys using Chrome? Wondering also if there has been a browser or benchmark update since then that could affect this?
