644 Comments

  • :nudge> - Tuesday, November 10, 2020 - link

    ...waiting for the M3
  • linuxgeex - Tuesday, November 10, 2020 - link

    Samsung released the M3 (Meerkat) in 2018 ;-)
  • Luminar - Tuesday, November 10, 2020 - link

    Like the Tesla?
  • wolrah - Thursday, November 12, 2020 - link

    > Like the Tesla?

    @Luminar no. The M3 is a BMW, and has been for decades before anyone had heard of Elon Musk.
  • Peskarik - Wednesday, November 11, 2020 - link

    M3 CSL
  • Agent Smith - Wednesday, November 11, 2020 - link

    M3x Pro 😀
  • Agent Smith - Wednesday, November 11, 2020 - link

    oh, and a Pro version not limited to only 16GB of memory. It's a misuse of the word "Pro" to use the same M1 chip and restrict the memory like that.
  • melgross - Wednesday, November 11, 2020 - link

    Apple’s IP doesn't require the amount of RAM that x86 (or Android) does. Going by the ability of my 2020 12.9” iPad Pro to edit 4K faster than most laptops with only 6GB of RAM, it's very likely that 8GB and 16GB are more than enough for these new machines.

    It's the more performant 16” MacBook Pro that will likely get more RAM, and the 27" or rumored 28” iMacs as well.
  • Rcko - Wednesday, November 11, 2020 - link

    The amount of "memory" needed is a big "it depends". The OS will often cache things in memory whether they're needed or not; it doesn't know, and it's pointless to clear them, so it uses the space if available and gives it up if not. Swap isn't a performance killer, not when ssd of today is *faster* than memory of even a few years ago. It is true that the unified memory built into the SoC is the fastest memory in this system. I'm not trying to confuse the conversation, but the SSD "disk" subsystem is being used as a kind of second-tier memory, and that's fine. Some folks may want more unified memory than 16GB and may wish to wait for Apple to release their higher-end systems, but most people who think they need more don't...just buy a big SSD, you're fine.
  • alysdexia - Wednesday, November 11, 2020 - link

    Who is a folk?
  • 29a - Thursday, November 12, 2020 - link

    Folks are people, but you probably knew that.
  • CharonPDX - Saturday, November 14, 2020 - link

    My sister has needed a new laptop for a while - she has a Windows laptop now, but her work laptop is an Apple, and she likes Apple. (She's a government employee, so she *CAN'T* use her work laptop for personal use.)

    So, I ordered a MacBook Air for her for Christmas (splitting the cost with our parents).

    While I was at it, I ordered myself a Mini to play with, probably to replace my old 2012 Mini in "home server" use.
  • Alej - Saturday, November 14, 2020 - link

    There's a running theme on Twitter and in a thread on the Blender Artists forums pointing out that the memory and texture compression available to the M1's memory hardware would let it fit scenes with 100+GB of textures inside those 16GB for GPU rendering.
    Apple has long touted clever RAM management at the OS level, and I truly believe it: I can use an 8GB 2015 MacBook for light-to-mid work and it's pleasant, while a Windows laptop with the same 8GB can prove sluggish even for plain browser navigation (although many variables besides the available memory could explain that).
  • mdriftmeyer - Saturday, December 5, 2020 - link

    Must be some mysterious fractal compression [BS] to pull off that fantastical feat of creative storytelling.
  • mattbe - Saturday, November 14, 2020 - link

    "not when ssd of today is *faster* than memory of even a few years ago."

    This is simply not correct. Please show me an example where SSD is faster than memory (DDR4) from a few years ago.
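    For rough context: a fast NVMe SSD today peaks at around 3-7 GB/s with access latencies in the tens of microseconds, while even a few-years-old dual-channel DDR4-2400 setup delivers roughly 38 GB/s (about 19 GB/s per channel) at around 100 ns latency - several times the bandwidth and two to three orders of magnitude lower latency.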
  • alysdexia - Wednesday, November 11, 2020 - link

    faster -> swiftlier; Macbook -> MacBook; will -> shall
  • adt6247 - Thursday, November 12, 2020 - link

    But regardless of OS, some workloads need more RAM. It's why I jumped ship from Mac laptops a couple of generations ago. In situations where I'm running tons of VMs and Docker images, building enterprise Java apps, etc., that 16GB of RAM gets chewed through real quick.

    It's not to say that 16GB isn't good enough for most people -- it certainly is. But in 2020, calling it a "Pro" laptop when it can't handle certain classes of increasingly common professional workloads is not great. That being said, this is a first step. It's clear they're going to have to go to off-package RAM for some machines (at least the Mac Pro); hopefully that will trickle down to the Pro laptops.

    Also, I'd LOVE to see Linux distros targeting this hardware. Should be fun to run some benchmarks.
  • danbob999 - Friday, November 13, 2020 - link

    The required RAM has nothing to do with the architecture.
    ARM requires as much RAM as x86 to run the same applications. There's no reason why macOS would use half the RAM on Apple silicon. With universal binaries taking more space, RAM usage could even go up a bit.
  • NetMage - Friday, November 13, 2020 - link

    Universal binaries may need more storage but there is no reason they should need more RAM.
  • alysdexia - Wednesday, November 11, 2020 - link

    an iPad Pro version?
  • dotjaz - Wednesday, November 11, 2020 - link

    16Gb? That's less than an iPhone of yesteryear. Apple have the choice of 32/48/64/80/96Gb chips, so they can do 8/12/16/20/24GB versions as they wish. And it'll be ready within a week.
  • NetMage - Friday, November 13, 2020 - link

    What iPhone has more than 16GB of RAM? (Hint: none)
  • NetMage - Thursday, November 12, 2020 - link

    I know you're just trolling, but let's pretend for a minute you didn't know that the MacBook Pro 13" two-port Intel model was also limited to 16GB and just commented from ignorance.
  • realbabilu - Monday, November 16, 2020 - link

    A Pro needs a fan, to indicate you are working it in a non-sleep condition.
  • YB1064 - Wednesday, November 11, 2020 - link

    What a terrific article! Thank you.
  • Gondalf - Thursday, November 12, 2020 - link

    A very expensive piece of 5nm silicon. It costs twice as much as the A13 per square millimeter.
    As usual, boutique products for the few, irrelevant at medium/low volume at best.
    If you go to a finer process, you are obviously more power efficient than your competitors; the downside
    is that Apple laptops will remain a niche product in a big world of 8 billion people.

    No problem for AMD or Intel
  • SuperSunnyDay - Thursday, November 12, 2020 - link

    Big niche though, and could get a lot bigger
  • Spunjji - Thursday, November 12, 2020 - link

    "If you go to a finer process, obviously you are more power efficent than competitors"
    The why does Intel's 10nm suck so hard for power efficiency? 🤔
    And what of Apple's measurably superior architectural efficiency? 🤔🤔

    Oh, Gondalf.
  • daveedvdv - Thursday, November 12, 2020 - link

    I'm surprised by that claim. Sure, it's currently probably more expensive than the A13, but that's not what it's replacing. It's replacing chips that Intel sells at quite a premium to Apple. I suspect the M1 saves Apple quite a bit, in fact. Maybe that contributes to the lowering of the Mac mini price?

    I think AMD and Intel are toast in the medium term. The problem is that Apple is showing it can be done. The Huaweis, Samsungs, and Qualcomms of this world are not going to just sit there: They're going to come for AMD & Intel's income streams, especially with the pressure that will come from other PC OEMs (Huawei and Samsung included).
  • vais - Friday, November 13, 2020 - link

    ARM server CPUs have existed for quite a while now but are still a very low percentage of total server CPUs in use - I wonder why that is.
  • BedfordTim - Friday, November 13, 2020 - link

    The reason is that until recently they weren't good enough, and that there was a chicken-and-egg situation with software.
    That changed with the Neoverse cores, and you could equally ask why Amazon and others are now offering ARM servers.
  • Silver5urfer - Sunday, November 15, 2020 - link

    People forget things, and people do not know what they are even talking about. The Marvell ThunderX3, in an article right here on AT, was purported to be challenging AMD and Intel. Then they abandoned the general-purpose SKUs, meaning they won't be selling off-the-shelf parts. They will do a custom chip if a company wants one, the way Graviton is custom for AWS; Marvell will do the same for company X.

    And going further back: Qualcomm Centriq. Remember Cloudflare's advertising around Centriq? They were over the top about that CPU infrastructure. And then what happened? It fizzled out. Qualcomm axed the whole engineering team, the same team that made the fully custom Kryo in the 820 (why did Qualcomm go full custom? The 810 and the 64-bit disaster). Now we have Nuvia and Altera, and both still have to prove themselves. The EPYC 7742 is the reigning champion in the arena right now, and Intel is trying so hard to contain the beast that is AMD, still with no viable product until their 10nm SF is ready.

    And here we have people reading this article, which shows only Geekbench, Apple's graph, and SPECint performance charts with single-core performance only, and then deciding that Intel and AMD are dead along with x86. Hilarious. Even on Hacker News, people are discussing some BS Medium post on how Intel is going to get shunned...

    The world runs on Intel, and AMD is just 6.6%, yet we have people thinking Samsung, Qualcomm, and Huawei are going to come at Intel. An epic joke, really. The even funnier part is that Intel Macs are still on the Apple.com website at a higher price tag. WHY WOULD APPLE STILL KEEP INTEL CPUs IF THIS M1 IS FASTER THAN A 5950X AND A 10900K? GOD KNOWS!!!
  • Sailor23M - Friday, November 13, 2020 - link

    ^^This. I fully expect Samsung to launch a laptop based on their own Arm chip and then grow from there to perhaps servers as well.
  • duploxxx - Monday, December 7, 2020 - link

    Medium term?
    You mean people moving to Apple's overly expensive ecosystem because they have a better device that is only compatible with 5-10% of the available software?

    Moving away from x86 won't happen either, except for the parts that are written for mobile devices. Back-end software is not just shifting over. ARM servers are evolving but are far from the x86 portfolio. Only Amazon is showing some progress with their own platform, which they offer very cheaply to bring people over. Those who develop in the cloud look at future portability and complain. Much to discuss about that.
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    The number of in-flight loads and stores the Firestorm core can sustain is just crazy.
  • lilmoe - Tuesday, November 10, 2020 - link

    Numbers are looking great for Zen3. The ARM X1 should also give Firestorm a run for its money. That being said, the key advantage of the M1 platform will be its more advanced co-processors, which, coupled with mainstream adoption, should drive devs to build highly optimized apps that run circles around any CPU-bound workload... Sure, NVidia has CUDA, but it won't be nearly as widespread as the M1, and not nearly as efficient. If done right, you can say bye-bye to CPU-first compute.
  • Ppietra - Tuesday, November 10, 2020 - link

    Considering what is known, the ARM X1 will still be quite a bit behind.
  • Kurosaki - Tuesday, November 10, 2020 - link

    Yes, several generations behind, but Intel would still be worse. Crazy world nowadays! :D
  • Mgradon - Tuesday, November 10, 2020 - link

    Well, it is not only a question of CPU speed. PowerPC was better than Intel, but that did not help Apple. Moving the code of all key software to native Apple silicon code will take a long time, and in emulation the speed of those CPUs will be low. I am using a MacBook Pro now; do I want to be a beta tester on an OS based on the iPhone ecosystem? Not really convinced.
  • Kilnk - Tuesday, November 10, 2020 - link

    Apple reports that graphics-intensive apps run faster on Rosetta on this chip than they were running natively on previous Macs with integrated graphics. It's in the keynote at 18:26. That's how fast it is.
  • Mgradon - Tuesday, November 10, 2020 - link

    Fantastic: a chip with a strong integrated GPU against an Intel chip without one. But let's see the reality once the first notebooks launch.
  • richardshannon77 - Tuesday, November 10, 2020 - link

    Intel client CPUs have integrated graphics. The Tiger Lake GPU looks impressive.
  • Luminar - Wednesday, November 11, 2020 - link

    It's good that Intel's "UHD" chips finally have real competition.
  • skavi - Wednesday, November 11, 2020 - link

    Apple always opted for the best available Intel graphics.
  • kaidenshi - Wednesday, November 11, 2020 - link

    GMA950 on the first Intel Macs would like a word.
  • Spunjji - Thursday, November 12, 2020 - link

    @kaidenshi - "best available Intel graphics". At the time the first Intel Macs came out, GMA 950 was the best they did. It's notable how rapidly they improved when they had Apple pushing them to do so.
  • Billrey - Wednesday, November 11, 2020 - link

    The first notebooks have already launched: the MacBook Air and MacBook Pro. They are generally faster in emulation than Intel is natively.
  • Luminar - Wednesday, November 11, 2020 - link

    Will the emulator support DLSS and DX12 Ultimate?
  • Ppietra - Wednesday, November 11, 2020 - link

    why would Apple support DX12 when it has never supported DirectX?
  • blppt - Wednesday, November 11, 2020 - link

    Heck, they won't even allow Vulkan onto macOS; you have to run a translation layer like MoltenVK.

    DX12? Highly unlikely.
  • dotjaz - Wednesday, November 11, 2020 - link

    They are probably referring to BootCamp.
  • daveedvdv - Thursday, November 12, 2020 - link

    Apple has confirmed that Apple Silicon Macs won't support BootCamp anymore.

    No, they were definitely talking about Rosetta 2. That said, I don't think they said it was a common occurrence. Just that it had been observed on a real application. At least, that's my recollection.
  • guycoder - Wednesday, November 11, 2020 - link

    I suspect a lot of the graphics primitives are translated to run natively and are getting an uplift from the new GPU. Sure, this is a fast SoC, but Rosetta is also converting large parts of any x86 application to calls into the corresponding Apple ARM system libraries and taking advantage of the extra hardware on the M1. Still, it's all very impressive so far, and it's going to be fun to see what having the CPU / GPU / NPU so tightly integrated will bring.
  • valuearb - Tuesday, November 10, 2020 - link

    It's running macOS, which isn't based on iOS. Actually, watchOS is based on iOS, which is based on macOS, which is based on NeXTSTEP.

    Benchmarks have already shown emulated Intel code running roughly as fast on Apple Silicon as it runs on Intel Macs. These benchmarks came from developers using the DTK, the ARM-based Mac mini Apple gave developers to develop on earlier this year, and it was running a year-old iPad processor.

    And PowerPC wasn't better than Intel, especially at the end.
  • Luminar - Wednesday, November 11, 2020 - link

    PowerPC had the advantage of 3DNow! Instructions
  • Zoolook - Wednesday, November 11, 2020 - link

    3DNow! was AMD's answer to MMX; it was (mostly) dropped later when they licensed SSE.
  • Zoolook - Wednesday, November 11, 2020 - link

    Wish we had an edit function. @Luminar, maybe you are thinking of AltiVec, which was the PowerPC SIMD extension; it was indeed powerful when it was introduced.
  • Ppietra - Wednesday, November 11, 2020 - link

    moving the code shouldn't take that long for many apps; for many it will only need a recompile and debugging.
  • Cubesis - Wednesday, November 11, 2020 - link

    Um, PowerPC was not better than Intel... it was way more power hungry. That's why they switched to Intel in the first place. Even if moving the code did take a long time (it won't), Rosetta 2 improvements combined with the chips being faster than Intel's mean that, although I can't wait for everything to be optimized, I don't really care, when everything performs about the same emulated with Rosetta 2, sometimes even outperforming native Intel binaries.
  • Cubesis - Wednesday, November 11, 2020 - link

    By the way, although I understand where you were going with your admittedly justified criticisms of Big Sur, your Intel MacBook Pro will be running that same operating system, unless you plan on letting it go obsolete on an OS that gets left behind. Fun fact: iOS was technologically derived from Mac OS X, so saying it's being done the other way around makes no sense. macOS can't be a copy of iOS when iOS is already a copy of macOS. If your criticisms are solely about interface changes and methodology, then I couldn't agree with you more, and I hope it's at least part of their secret long-term plan to incorporate touch into macOS. That would be the only smart reason to make it shittier to navigate with a keyboard and mouse, lol.
  • daveedvdv - Thursday, November 12, 2020 - link

    Fun fact: One of the ways Rosetta 2 is faster than the original Rosetta (which was no slouch) is that Rosetta 2 doesn't have to do endianness transcription.
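    To make "endianness transcription" concrete, here's a rough illustrative sketch in C (my own example, not actual Rosetta code): a PowerPC-to-x86 translator (big-endian to little-endian) has to wrap many loads and stores in a byte swap like this, while x86_64-to-arm64 translation (both little-endian) can skip it entirely.

        #include <stdint.h>

        /* The kind of byte swap a big-endian -> little-endian translator must
         * emit around memory accesses; Rosetta 2 gets to omit this step. */
        static inline uint32_t bswap32(uint32_t v) {
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
                   ((v << 8) & 0x00FF0000u) | (v << 24);
        }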
  • dotjaz - Wednesday, November 11, 2020 - link

    Several? More like one. X1 is quite close to Lightning (int) and Firestorm (fp).
  • bobdesnos - Tuesday, November 10, 2020 - link

    We need to wait 4 years, then we will see.
  • Luminar - Wednesday, November 11, 2020 - link

    X1 is probably dead in the water
  • Spunjji - Thursday, November 12, 2020 - link

    Not really - it won't have to compete with Apple's CPU in any market besides phones, and TBH Android and iOS aren't really "competitors" in that sense.
  • misan - Wednesday, November 11, 2020 - link

    I think you are missing the point where Zen3 is consuming 4 times the power to get comparable performance. These tests are comparing a sub-5-watt phone CPU to the latest and greatest x86 desktop CPUs. I believe the ridiculousness of this situation has not properly sunk in yet.
  • vais - Wednesday, November 11, 2020 - link

    I don't understand what exactly those benchmarks are testing. Even if Apple's architecture is 3-4 times more efficient, such close scores are unexpected between a 5-watt and a 110-watt CPU.
    To me it seems this benchmark doesn't accurately represent the real-world performance of the different CPUs.
  • Coldfriction - Wednesday, November 11, 2020 - link

    My opinion is that using dedicated silicon for a specific task and not generic CPU computing is where almost ALL of the improved performance comes from. Apple is including a lot of dedicated silicon that isn't just general computing. A Zen 3 Ryzen has some of that, but most of it is general computing. Programmable silicon will never be as fast as dedicated silicon. The M1 will look great for very specific things, and admittedly they'll be the very specific things that Apple users use the most, so it's likely a win for them. The technological claims however are bogus. Like you said, there's essentially no way that Apple made a general computing CPU that is faster than Intel or AMD; it's all the dedicated silicon.
  • michael2k - Wednesday, November 11, 2020 - link

    > Like you said, there's essentially no way that Apple made a general computing CPU that is faster than Intel or AMD; it's all the dedicated silicon.

    It's like you didn't read this article or any other AnandTech article on Apple's A-series CPUs. It's got an 8-wide CPU design, wider than Intel's or AMD's; it's got 12MB of L2 cache and a much larger reorder buffer. It's also got 8 cores, more than Intel's default 4 or 6, even if half of them are the efficiency cores.
  • Coldfriction - Wednesday, November 11, 2020 - link

    No doubt what they have is efficient. But their claims are out of this world high. If it was simply a matter of making a wider CPU design, AMD or Intel would have done exactly that years ago. If it were simply a matter of making a larger L2 cache, AMD or Intel would have done that years ago.

    The claims are extraordinary, very extraordinary. Extraordinary claims need extraordinary evidence, and Apple didn't provide that. We have yet to see any real details, and this article is very specific, using a very small-scale test.

    CPU design is somewhat of a solved art these days. When you normalize things a specific way, it looks like things are better than they are in absolute terms. I've no doubt Apple has the best performing CPU cores for the power, but that isn't in absolute terms.

    What this article needs to emphasize is that custom built silicon ALWAYS punches above its weight and performs better than generic computing devices EVER will. I watched the M1 presentation. The games looked anemic and low FPS. That's not 3X more powerful or whatever they were claiming. I want absolute performance, not normalized against specific metric performance. I also want full compatibility and not a locked ecosystem. I'll never buy a locked down Apple product like this because of the cage it would put me in.
  • grayson_carr - Wednesday, November 11, 2020 - link

    The article explains that a wider CPU design would be much more difficult for x86. You should probably read it.
  • vais - Thursday, November 12, 2020 - link

    A wider instruction decoder doesn't always mean better. And remember you are comparing two entirely different instruction sets. As mentioned in the article, x86 has variable-length instructions, and one instruction can be decoded into multiple micro-operations.

    Anyway, take a look at this quote from the article specifically:

    "On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores had been steadily going wider with each generation, currently 4-wide in currently available silicon"

    So Samsung's core is 6-wide, while Cortex cores are 4-wide. Isn't Snapdragon using Cortex cores? Assuming "only" 4-wide, how the hell does it outperform the "superior" architecture?
    Apple's A14 is 30-40% faster than Exynos and Snapdragon (let's say). And how do Exynos and Snapdragon compare to desktop x86 CPUs? They are light-years behind, and this is normal.
  • rtharston - Thursday, November 12, 2020 - link

    Hear hear. Armv8 (ARM64) is a newer, cleaner ISA with different design decisions, including purposefully breaking compatibility with ARM32 in many ways* (hence why iOS and macOS dropped 32-bit app support a while back), which means Apple has an easier time making some things bigger, because they are simpler than in x86.

    *Yes, you can run 32-bit ARM apps on ARM64 chips if the necessary support is included, but it is separate hardware and instructions. Apple decided to rip out that extra hardware a few generations of A-chips back, freeing up space and complexity for other things. x86 hardware still supports running code meant for the 8086, which was released nearly 40 years ago, and that adds a lot of complexity.
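    To make the decode-width point concrete, here's a toy sketch in C (purely illustrative, not how real decoders are built): with a fixed 4-byte encoding the start of the Nth instruction is known immediately, so many decoders can work in parallel, whereas with a variable-length encoding each instruction's start depends on the lengths of all the ones before it.

        #include <stddef.h>
        #include <stdint.h>

        /* Fixed-width (ARM64-style): instruction i always starts at i * 4,
         * so N instructions can be handed to N decoders at once. */
        static inline size_t fixed_inst_offset(size_t i) { return i * 4; }

        /* Variable-length (x86-style, 1..15 bytes): finding where instruction i
         * starts requires walking (or predicting) every earlier length first. */
        size_t variable_inst_offset(const uint8_t *code, size_t i,
                                    size_t (*inst_len)(const uint8_t *)) {
            size_t off = 0;
            for (size_t k = 0; k < i; k++)
                off += inst_len(code + off);   /* serial dependency */
            return off;
        }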
  • misan - Thursday, November 12, 2020 - link

    I really don't know what more evidence you need. You have been shown various common CPU algorithms running with comparable performance on Apple's phone chip and on AMD/Intel desktop chips. It's literally there in the article you are commenting on. If you don't find this evidence convincing, how can you be sure that Zen 3 is a fast CPU? That statement is based on exactly the same kind of evidence.
  • vais - Friday, November 13, 2020 - link

    @misan - the problem is this "evidence" is circumstantial at best and does not reflect actual performance for a normally running A14 vs a normally running (all cores, at full power) AMD 5950X.
    It does show the architecture is good, but the claims about a 5W chip somehow slamming a 105W, 16-core Zen3 CPU into the ground are nothing but hilarious. Otherwise Crysis would have long since been ported to the magically powerful iPhone...
  • Spunjji - Thursday, November 12, 2020 - link

    @Coldfriction - You seem to be getting confused. Custom silicon does indeed punch above its weight, but none of the "small scale tests" done in this article will take advantage of any of it.

    SPEC results don't translate readily to application performance, but they do serve as excellent ways to compare CPU architectures; whichever way you slice it this architecture is demonstrably impressive.
  • Mgradon - Thursday, November 12, 2020 - link

    Agree fully - for the MacBook Air it might be the right thing to do, if the new OS works well. For the MacBook Pro I am not sure; I would like to see what it can and cannot do.
  • daveedvdv - Thursday, November 12, 2020 - link

    > No doubt what they have is efficient. But their claims are out of this world high. If it was simply a matter of making a wider CPU design, AMD or Intel would have done exactly that years ago. If it were simply a matter of making a larger L2 cache, AMD or Intel would have done that years ago.

    Of course. So that's not what dominates Apple's lead. For example, a significant part of Apple's advantage is power management: PA Semi had critical patents in that area and I believe they're still in effect.
  • techconc - Wednesday, November 11, 2020 - link

    "...there's essentially no way that Apple made a general computing CPU that is faster than Intel or AMD..."

    That's exactly what they've done. They've also done so with a far more power efficient solution. It's funny how people can deny reality even while seeing the results in articles like this. Take the blinders off and see reality.
  • Coldfriction - Wednesday, November 11, 2020 - link

    What results? The benchmarks here are extremely limited in scope. That's not "generic computing".
  • grayson_carr - Wednesday, November 11, 2020 - link

    Don't worry. Macs with the M1 are shipping to consumers right now. We'll have more proof one way or the other very soon.
  • novastar78 - Wednesday, November 11, 2020 - link

    Yeah, well, let's see it first. These are all very generic and not relevant. Let's see real-world benchmarks of this thing blowing away a Ryzen 5950 desktop in gaming performance and then you have my attention.

    Otherwise ....yawn
  • Spunjji - Thursday, November 12, 2020 - link

    @novastar78 - Why would you expect to see a mobile chip that's going to be used for MacOS "blowing away" a 5950X in games? Who's going to be running games on this thing as a primary purpose? The platform doesn't have many of them and it can't do Bootcamp.

    The fact remains that gaming performance is one aspect of a CPU, and in the other ones that are more relevant, this architecture kicks ass.
  • star-affinity - Friday, November 13, 2020 - link

    @Spunjji

    iOS/iPadOS games will be able to run directly on the Apple Silicon Macs, so there will instantly be a lot more games on macOS. It should be relatively easy for developers to adjust those "mobile" games to the more powerful hardware in a Mac. But sure, so far we don't have any Apple GPU in a Mac that can compete with AMD's and NVIDIA's latest offerings.
  • vais - Thursday, November 12, 2020 - link

    Sorry to say it, but you are deluding yourself. On one hand the article compares apples to oranges without saying what exactly it's measuring - is it width, density, total mass, circumference? Or are we just comparing which colour is "better"?
    Continuing the fruit analogy, it seems the article is measuring performance/watt, so density instead of total mass - get the difference?

    It's a shame, as the architecture details are actually amazing, but when it gets to the benchmark part comparing the A14 to x86 it goes off a cliff hard. A measurement without any units or clear information on what is being measured is just drawing colourful lines...
  • Spunjji - Thursday, November 12, 2020 - link

    @vais - are you even reading the same article? If you want an in-depth evaluation of what the SPEC tests do, go and read one of the dedicated articles that explain it. Otherwise it's "bigger bar better" unless the chart says otherwise, and it provides the units right alongside them. Bigger score in a SPEC test = better CPU for that task. Better scores overall = faster CPU overall.
  • vais - Friday, November 13, 2020 - link

    @Spunjji - let's wait a few weeks for actual workload tests like rendering and see how the miraculous M1 measures up against a desktop CPU. I would bet my Sandy Bridge i5 will be better, but we will see.

    If it somehow renders a scene faster than AMD's 5950X a lot of people will be jumping on it, although this is just a wet dream of Apple's marketing department.
  • Spunjji - Friday, November 13, 2020 - link

    @vais - Sure, let's wait, I'm in no hurry. Your comment was hogwash, though, irrespective of what more complex benchmarks may show. They will clearly *not* show it beating a 5950X in multi-core rendering tasks, and nobody's expecting it to - it's a ~10-20W mobile chip. That doesn't change the fact that on a per-core level A14 is surprisingly competitive at a lower TDP, and M1 is based off the same tech.

    M1 is going to kick the pants off your Sandy Bridge i5, though - you already have the data required to draw that conclusion (SPEC here, and Apple's own data comparing Coffee Lake and M1).
  • Irish910 - Tuesday, November 17, 2020 - link

    Sounds like some salty Intel/AMD fans!!!! 😂
  • sonny73n - Saturday, November 14, 2020 - link

    We all know Apple is always BS. Somehow their BS always works on some people.
  • Alej - Saturday, November 14, 2020 - link

    @sonny73n: really man, who is 'we all'? That doesn't include the AnandTech writers, for sure. They didn't find this BS.
    But alright, fine, let's not listen to Apple; let's listen to anybody else NON-Apple: check the leaked M1 benchmarks, single-core and multi-core through the roof. Integrated GPU compute scores on par with a discrete Radeon 580X. Octane X GPU renderer devs stating that they can fit scenes with 100+GB worth of textures in those 16GB of UMA. Affinity Photo devs saying they have seen several-fold increases over the equivalent tier from the previous generation. DaVinci Resolve seeing 5x in some parts of their editing. (Again, these are not Apple's statements.)

    Will it beat the latest RTX cards? Of course not... in the same sense that the latest power-hungry, constantly thermally throttled, crazy heavy RTX laptops won't last the 20hrs a MacBook Pro M1 can. The two are aiming at different goals.

    Also, all the 'general purpose' or 'not general purpose' comparison chit-chat... where the heck is that coming from? I'm just a normal user; I boot my machine and launch ANY GENERAL PURPOSE program... and it runs faster, better, leaner, quieter; but then I'll complain? "Fine, but maybe it's not general purpose, so I'll go to these other systems that do exactly the same but slower, but hey, they are REAL general purpose."

    Excuse the rant vibe. I understand there are the blind-sheep Apple followers and the gratuitous Apple haters. There's a middle ground: just look at the numbers for the fun of it... and actually, maybe don't even do that if the device wouldn't be something you'd consider purchasing. You know why I never go around Lamborghini shops bashing or criticizing their cars? Because I'm honestly not interested in buying one.
  • novastar78 - Wednesday, November 11, 2020 - link

    You got it, man!

    The question I have is: can it run Crysis?

    This is all just Apple fanboys getting worked up. I'll most likely never own an Apple product, so for us old guys this is just fluff.

    It's just an ARM chip with a PowerVR GPU (that they stole). Not seeing the big deal here except for the Appleites.
  • Spunjji - Thursday, November 12, 2020 - link

    @novastar78 - "Its just an ARM chip with a PowerVR GPU (that they stole). Not seeing the big deal here except for the Appleites"

    One of the world's most popular and influential computer manufacturers is producing an ARM-based chip that beats the pants off Intel's best in terms of power-efficiency *and* performance, and slaps AMD around for power... and that's NBD to you? Okay. I guess you're just not that interested in tech 🤷‍♂️
  • vais - Friday, November 13, 2020 - link

    @Spunjji - three cheers for jumping to conclusions! But it's not really your fault; the benchmarks don't say they are single-threaded only... The M1 will absolutely be more power efficient than both Intel and AMD - but to have more total performance, when all of them are running all cores at normal power? People's daydreams will be shattered when real benchmark results are released.
  • star-affinity - Friday, November 13, 2020 - link

    @vais

    https://browser.geekbench.com/v5/cpu/4648107

    Daydream?
  • vais - Friday, November 13, 2020 - link

    I'm not convinced by Geekbench results. If we look there, the Snapdragon 865 has half the single-core performance of a 5900X.

    If that were true and actually translated to real-world performance, a lot of services would have transitioned to those fabled ARM chips.
  • Spunjji - Friday, November 13, 2020 - link

    @vais - "If that was true and actually translated to real world performance, a lot of services would have transitioned to those fabled ARM chips" - have you even paid attention to what's happening with Graviton?

    https://www.anandtech.com/show/15578/cloud-clash-a...
  • Alej - Saturday, November 14, 2020 - link

    @Spunjji just wanted to chime in, as I have noted the effort at keeping it all calm and factual, not jumping to hasty conclusions, and pointing out facts/numbers where available. I was already convinced but even got extra pointers out of the sometimes out-of-whack discussions.

    Hats off, that’s hard work.
  • GeoffreyA - Sunday, November 15, 2020 - link

    100%. Spunjji is always bringing sense and sanity back to conversations, and he's critical and objective. I always look forward to his comments.
  • Spunjji - Friday, November 13, 2020 - link

    @vais - Perhaps I should have clarified that my comment was indeed regarding the single-core results? But I sort-of assumed anybody reading it would have *read the article* and thus was aware of that context. 🤦‍♂️

    We won't get multi-core data until later, for sure, but your attempt to pretend that we therefore have *no idea* what's coming is mere sophistry. I'd advise you adjust your expectations accordingly, as you've already indicated a level of certainty in the outcome (mY i5 WiLl BeAt It) that you are simultaneously arguing nobody else is entitled to. That cognitive dissonance has been noted.
  • vais - Friday, November 13, 2020 - link

    I had missed Graviton, thanks for the link!

    It seems very interesting and is a much more fair comparison as all 3 CPUs have similar TDPs. AMD are still some way from Zen3 based Epyc CPUs, but even if they have better performance than Graviton 2, they still could be far behind in the performance/$.

    As for M1 all I'm saying is that comparing it to other laptop CPUs with similar TDP (of course higher too since it is more efficient) is one thing. But comparing it to the latest desktop CPUs is another story and reality might not reflect the synthetic benchmarks that well.
    If say M2 was positioned for the Mac Pro at 50-60-70W TDP, then comparing it to 5950X would make sense and it really could have better performance - that is all.
  • misan - Thursday, November 12, 2020 - link

    Dedicated silicon for compiling code? Or for doing scientific computation? Or for traversing linked lists? SPEC benchmark suite is extensively documented and the individual benchmark behavior is understood. This is all plain C code that targets the CPU.

    I understand that it might be hard to accept it, but the simple fact is that Apple has made a substantially better chip. They can do much more work per clock than anything else on general-purpose code, which allows them to be fast without needing very high clocks.
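    For instance, something like SPEC's mcf spends much of its time in plain pointer-chasing C code along these lines (a simplified illustration of the pattern, not actual SPEC source); no accelerator helps here, only the core's caches, reorder window and prefetchers:

        #include <stddef.h>

        struct node { struct node *next; long weight; };

        /* Latency-bound traversal: each iteration waits on the previous load. */
        long sum_weights(const struct node *n) {
            long total = 0;
            while (n) {
                total += n->weight;
                n = n->next;
            }
            return total;
        }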
  • Spunjji - Thursday, November 12, 2020 - link

    @Coldfriction - the "dedicated silicon" you refer to played no part in any of the tests in this article.
  • Coldfriction - Thursday, November 12, 2020 - link

    You mean all of the memory on package with the CPU doesn't make a difference? That's the "dedicated silicon" sort of thing I'm talking about. How fast was the SNES CPU? 3.58 MHz. It took a massively more powerful Intel, AMD, or Cyrix chip to do what the SNES could do. MASSIVELY more powerful. What Apple is doing here is making a console PC. The performance isn't all derived from their CPU cores independently of the rest of the system. Everything is tightly integrated with no flexibility on the user's end. The Cell architecture boasted similar stuff back in the day. Yes, Apple has a strong ARM CPU here, but it's the tight integration that makes it so strong, not the core itself. There's a reason the memory is in the package and non-upgradable. The functions tested in this article may drastically favor the cache system of the M1, but once you go outside of that, you lose a lot.

    It's ALWAYS been the case that custom-built systems have outperformed generic computing devices.

    This article doesn't test very many things. It certainly doesn't test demanding workloads that saturate much of the system's capabilities.

    I owned Amigas back in the day. I had access to a variety of computing devices. The IBM-compatible PCs were the ugliest, slowest machines around, but they succeeded where everyone else's prettier systems failed. Why? Compatibility and the ability to swap software and hardware from different vendors around. They became cheap and were maintained by a variety of people because of that. The Apple Lisa I had as a kid blew away my first 386, but Apple still nearly went bankrupt a decade later.

    Custom-built design is great for a very short-term solution, but Apple's leash of leather is being swapped out for a leash of steel chain with this move.
  • daveedvdv - Thursday, November 12, 2020 - link

    > You mean all of the memory on package with the CPU doesn't make a difference? That's the "dedicated silicon" sort of thing I'm talking about.

    That's quite the stretch. What exactly is that memory dedicated to? It's not like other manufacturers cannot package main memory with their chips either. It sounds like you're grasping at straws because you don't like the news.
  • Spunjji - Friday, November 13, 2020 - link

    @daveedvdv - I think you nailed it there. There's a lot of that going on in these comments.

    Hell, I don't *like* the news. I'm a Windows guy and I don't buy Apple devices; if it turns out they'll have exclusive access to some of the best-performing mobile silicon on the planet it'll be kind of a bitch. But it is what it is.
  • daveedvdv - Friday, November 13, 2020 - link

    @Spunjji:
    Thanks. And, FWIW, while I'm an Apple eco-system person, I'm under no illusion that others will be able to match the achievement relatively soon. There are lots of players in the ARM ISA world, and they each have some serious talent working for them.
  • Spunjji - Friday, November 13, 2020 - link

    @coldfriction - On-package memory isn't "custom silicon". Either you're talking about "dedicated silicon" - i.e. the accelerators that Apple's chip has and others don't - or you're talking about shared caches and memory interfaces that every other chip out there has / can have.

    The SNES CPU thing is a weird flex - everybody knows that emulation requires more resources than the original system, but the SNES CPU wasn't remarkable in any way. A better example would have been the Amiga's video controllers, but then you'd run straight into what I pointed out, which is that such a comparison is irrelevant to what was actually tested in this article - the CPU architecture (including caches and, in some tests, memory performance).

    You're right that it doesn't test demanding workloads or the entire system - that wasn't the remit of the article. We'll see that stuff when they have actual M1-based systems to test; running those tests on A14 in an iPhone would be worse than useless for estimating M1 performance.
  • magreen - Sunday, November 15, 2020 - link

    Keep it up, Spunjji! Thanks for injecting rational discourse into these comments. I have no horse in this race, but I recognize measured statements and rational arguments based on evidence when I see them.
  • rtharston - Thursday, November 12, 2020 - link

    These benchmarks are all CPU only (with some memory bandwidth too, since the CPU can't do anything without memory...). All these figures are CPU only. Yes, Apple managed to make a CPU that is as fast or faster than Intel and AMD. Just read the article and you'll see how. They have more ALUs. They have more registers (thanks to ARM64). They have larger (and faster) caches across the board. All that adds up to a higher performance CPU.

    You are right about the dedicated silicon being better at other things though, so now imagine how much more performance Apple's devices will have when using these fast CPUs *and* the dedicated silicon to do other things.
  • daveedvdv - Thursday, November 12, 2020 - link

    > My opinion is that using dedicated silicon for a specific task and not generic CPU computing is where almost ALL of the improved performance comes from.

    No. Neither SPEC nor GeekBench would take advantage of that.

    Furthermore, the applications that Apple uses to boast about the CPU (not GPU) performance are things like Clang, Ninja, and CMake, which wouldn't benefit from that either.
  • millfi - Wednesday, June 23, 2021 - link

    Dedicated silicon is very fast and efficient when running specific tasks with a high degree of parallelism, such as ML. But this cannot explain the M1 chip's high IPC recorded in SPECint, because that benchmark runs general tasks written with general CPU instructions. If Apple could offload such a task to a dedicated circuit, that would be magic.
  • melgross - Wednesday, November 11, 2020 - link

    The M1 isn’t that much of an advance.
  • BlackHat - Tuesday, November 10, 2020 - link

    Guys, the main reason I love your website is that you dig into the marketing footnotes. I don't know if you've already read them (apparently not), but they are out, and Apple's claim of "the most powerful chip" or "the most power-efficient" is against their own 2018 MacBook, a 14nm Skylake i7 with LPDDR3, not even against their latest Ice Lake model, let alone Renoir (yes, I know Apple doesn't have Ryzen products). I don't know if I'm missing something, but isn't "ARM destroying x86" going a bit too far? Yes, it's power efficient, but is the difference big enough to ignore all the performance lost?
    Greetings.
  • BlackHat - Tuesday, November 10, 2020 - link

    And the single-core claims are based on single-core peak performance in "industry-leading benchmarks," whatever that means, and a combination of JavaScript tests and Speedometer (the latter of which, I heard from you, is close to Apple), so anyway, we will wait for benchmarks.
  • Kilnk - Tuesday, November 10, 2020 - link

    https://browser.geekbench.com/processor-benchmarks
    https://browser.geekbench.com/ios-benchmarks#
    https://www.cpu-monkey.com/en/compare_cpu-apple_a1...

    These are the benchmarks they are talking about.
    Single core is faster than anything else in the consumer market. Yes. ANYTHING else.
    Multicore lands just above the 9700KF.
  • BlackHat - Tuesday, November 10, 2020 - link

    The thing is, people don't trust Geekbench because even its makers confirmed that old Geekbench versions (basically 4 and older) were inaccurate due to their dependency on bigger caches, meaning that a CPU with bigger caches could beat other CPUs in short workloads (which is what Geekbench runs), but I could be wrong.
  • name99 - Tuesday, November 10, 2020 - link

    You mean Apple "cheat" by adding to their CPUs the pieces that make CPUs run faster, like a larger cache?
    OMG, say it isn't so!

    The pretzels people twist themselves into when they don't want to face reality...

    Just as a guide to the future, look to what really impresses those "skilled in the art" about this CPU. It's explicitly NOT the cache sizes; those are nice but even more impressive are the LSQ sizes, the MLP numbers (not covered here but in an earlier AnandTech piece) and the spec numbers for mcf and omnetpp.
    Understand what those numbers mean and why they are impressive and you'll be competent to judge future CPUs.
  • BlackHat - Tuesday, November 10, 2020 - link

    Talking about twisting, you twisted my comment. What the Geekbench makers themselves said is that, because their benchmarks are short runs, CPUs with big caches show a big margin (no matter whether from Apple or another maker; in fact, they showed how Samsung's Exynos Mongoose took advantage of this), and for long workloads the benchmark was useless.
  • hecksagon - Tuesday, November 10, 2020 - link

    There is also the issue of the benchmarks not being long enough to cause any significant throttling. This is the reason Apple mobile devices are so strong in this benchmark. Their CPUs provide very strong peak performance that slows down as the device gets heat-soaked. That's why it looks like an iPhone can compete with an i7 laptop according to this benchmark.
  • misan - Wednesday, November 11, 2020 - link

    Apple delivers performance comparable to that of the best 5.0 GHz x86 chips while running at 3 GHz and drawing under 5 watts. Your argument does not make any sense logically. Yes, there will be throttling, but at the same cooling performance and consuming the same power, Apple's chips will always be faster. In fact, their lead over x86 chips will increase when the CPUs are throttled, since Intel will need to drop clocks significantly; Apple doesn't.
  • hecksagon - Tuesday, November 10, 2020 - link

    No, he is saying that Geekbench's weighting toward cache-bound workloads doesn't represent reality.
  • techconc - Wednesday, November 11, 2020 - link

    GB5 scores are in line with SPEC results, so there is no merit to the claim that they don't match reality.
  • chlamchowder - Wednesday, November 11, 2020 - link

    The large LSQ/other OoO resource queues and high MLP numbers are there to cover for the very slow L3 cache. With 39ns latency on the A13 and similar-looking figures here, you're looking at over 100 cycles to get to L3. That's worse than Bulldozer's L3, which was considered pretty bad.
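    (Rough math from those same figures: 39 ns at a ~3 GHz clock is about 39 × 3 ≈ 117 cycles, hence "over 100 cycles".)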
  • name99 - Wednesday, November 11, 2020 - link

    Why not try to *understand* Apple's architecture rather than concentrating on criticism?
    (a) Apple's design is *their* design, it is not a copy of AMD or Intel's design
    (b) Apple's design is optimized for the SoC as a whole, not just the CPU.

    The L3 on Apple SoC's does not fulfill the role of a traditional L3, that is why Apple calls it an SLC (System Level Cache). For traditional CPU caching, Apple has a large L2 (8MiB A14, 12MiB M1).

    The role of the L3 is PRIMARILY
    - to save power (everything, especially on the GPU side, that can be kept there rather than in DRAM is a power advantage)
    - to communicate between different elements of the SoC.
    The fact that the SLC can act as a large (slow, but still faster than DRAM) L3 is just a bonus, it is not the design target.

    Why did Apple keep pushing the UMA theme at their event? The stupid think it's Apple claiming that they are first with UMA; but Apple never said that. The point is that UMA is part of what enables Apple's massive cross-SoC accelerator interaction; while the SLC is what makes that interaction fast and low power. How many accelerators do you think are on the A14/M1? We don't know -- what we do know is that there are 42 on the A12.
    42 accelerators! Did you have a clue that it was anything close to that?
    Sure, you know the big picture, things like ISP, GPU and NPU working together for computational photography, but there is so much more. And they can all interact together and efficiently via SLC.

    https://arxiv.org/pdf/1907.02064v1.pdf
    discusses all this, along with pointing out just how important it is to have fast low energy communication between the accelerators.
  • techconc - Wednesday, November 11, 2020 - link

    Why are we still arguing about the validity of Geekbench? The article even states the following from their own testing...
    "There’s been a lot of criticism about more common benchmark suites such as GeekBench, but frankly I've found these concerns or arguments to be quite unfounded."
  • BlackHat - Wednesday, November 11, 2020 - link

    Because the creators of the benchmark themselves admitted that their old versions were somewhat inaccurate.
  • Spunjji - Thursday, November 12, 2020 - link

    Just as well we're not really discussing those here, then 😬
  • hecksagon - Tuesday, November 10, 2020 - link

    Too bad the links are all for Geekbench. That's about as far from a real-world benchmark as you can get.
  • Spunjji - Thursday, November 12, 2020 - link

    Did you even skim through the SPEC benchmark sections here, or..?
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    Look at Anandtech's benchmarks in this article. What "performance lost" are you seeing?
  • BlackHat - Tuesday, November 10, 2020 - link

    Those benchmarks are for performance per watt; very efficient, yes, but it doesn't mean very powerful. The MacBook Pro is supposed to be for heavy workloads, but Apple compared these chips against an i7 Skylake-U. That doesn't give me good vibes, but we will have to wait for the benchmarks.
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    No. They aren't. I'm not talking about Apple's numbers. I'm talking about Anandtech's numbers, on the fourth page of this very article, based on tests they ran themselves on the A14, which show the A14 generally exceeding Tiger Lake ST perf.
  • BlackHat - Tuesday, November 10, 2020 - link

    And that is why this article should take that into account: why is Apple claiming less performance than the benchmarks run here show? Maybe I'm missing something; I'm trying to understand.
  • ikjadoon - Tuesday, November 10, 2020 - link

    Exactly. Why on Earth should anyone care about Apple's marketing, Apple's benchmarks, or Apple's comparisons...when Andrei, Ryan & Anandtech have *tested* *independent* *benchmarks*?

    Firestorm is the fastest perf/W general computing uarch in the world & on the latest 5nm TSMC node, what ... else do people want?

    Companies sandbag performance *ALL* the time.
  • BlackHat - Tuesday, November 10, 2020 - link

    Companies sandbagging performance? Sorry, but the last time I checked, all of them put their products in the best light possible. Apple could have said "our new chips are as powerful or more powerful than the last-generation MacBook (Ice Lake models)," but they said "in the world," ignoring Ice Lake, Tiger Lake, and Renoir.
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    What's your problem here? We benchmarked the A14 being faster than every other chip out there except the Zen3 parts. You do realise this is a 5 page article?
  • BlackHat - Tuesday, November 10, 2020 - link

    Sorry, I'm not trying to be a troll or something. It's just that you said you don't know which chip Apple compared these products against, and your supposition was that it's against the latest chips, but Apple's footnotes show that they are comparing against the two old Skylake MacBooks, which is odd when they have these numbers that you show here.
  • mmrezaie - Tuesday, November 10, 2020 - link

    They have compared it to the latest and greatest. What else can we expect from them? I think they have done the best possible so far.
  • valuearb - Tuesday, November 10, 2020 - link

    Sure they are comparing against Skylake, but if it's nearly 3x faster than Skylake, that still means it's significantly faster than Tiger Lake, and uses less than half the power.

    And this is Apple's ENTRY LEVEL Apple Silicon CPU. What awaits us this spring?
  • Dolda2000 - Tuesday, November 10, 2020 - link

    Regardless of whom Apple are comparing themselves against, the point here is that AnandTech has compared them against the actual latest Tiger Lake and Zen 3 parts.
  • tuxRoller - Tuesday, November 10, 2020 - link

    What does it matter?
    Anandtech posted their own benchmarks which can be directly compared to other cpus...including zen3.
    If you're simply interested in the tech it doesn't matter what the marketing claims.
    If the marketing aspect is what interests you this might not be a good forum for that discussion.
  • trixn86 - Wednesday, November 11, 2020 - link

    They compared it to what they think is currently the most commonly used product in their product line. And 2.8x faster sounds better than 1.5x faster or whatever it would be compared to a current Tiger Lake. So even from a marketing perspective it is wise to choose Skylake over Tiger Lake for the comparison.
  • KarlKastor - Wednesday, November 11, 2020 - link

    @Andrei
    The question is, what is your problem?
    He is arguing factually. Maybe he is wrong; then prove that with facts.

    Yes, it is a 5-page article, and these pages are full of single-threaded tests. I agree that they have one of the fastest core designs. But the performance in longer multi-threaded real-world benchmarks might look different.

    Your article is superb, but even you are not perfect.
  • techconc - Wednesday, November 11, 2020 - link

    @KarlKastor - The point of the article is to compare the architectural performance of a core design, NOT to compare every multi-core implementation of these designs. Of course there will be Intel and AMD chips with MANY cores that will be faster than the phone-based A14 chip. That's not the point of this article or even this discussion.
  • KarlKastor - Thursday, November 12, 2020 - link

    @techconc
    You cannot talk about desktop-class performance and stick to architectural performance by itself.
    Yes, the architecture looks great; that is not new. But is it better than other CPUs? The tests here can't answer that. I hope Andrei has already ordered a new MacBook. ;)

    Another thing is comparing the package power of the 3950X and the M1. The 3950X has 16 cores and tons of IO; sure it consumes much more. And also, sure, the A14 is still impressive, and it is not even the M1.
  • Spunjji - Thursday, November 12, 2020 - link

    @KarlKastor - they don't *have* the new M1 chip to compare, so how would they? They offered the best comparison they can give right now - one that simply measures architectural efficiency. No doubt the multicore benchmarks will come later, and no doubt the goalposts will be moved to a new reason why they don't matter. The fact remains that right now, with the best information we have available, it's clear that Apple have built a VERY capable chip.
  • KarlKastor - Monday, November 16, 2020 - link

    No one denied that it is a capable chip.
    The discussion is more about how to interpret these results.
  • techconc - Thursday, November 12, 2020 - link

    @KarlKastor - Let's take the A14 and the M1 as an example. Is there an architectural difference between the CPU or even GPU cores in these chips? No. The only difference is that the M1 chip is bigger (perhaps with more bandwidth), but it's just using the same cores and adding more of them. I'd expect Apple's next chip in this family to follow that formula exactly... same architecture, more cores. So, yes, if you understand the performance of one core, it's not hard to extrapolate performance for the same architecture as it scales with more cores. Do you think a 4-core Intel chip is fundamentally different from an 8-core Intel chip of the same family? No, of course it isn't.
  • KarlKastor - Monday, November 16, 2020 - link

    @techconc
    Do you think a 4-core Zen 2 was different from an 8-core Zen 2? Yes, it is quite different. Zen 1 was even more inhomogeneous with increasing core count.

    Apple can't just put 8 big cores in and call it finished. All cores with one unified L2 cache? The core interconnect will be different for sure. The cache system too, I bet.

    M1 and A14 will be very similar, yes.
    But you can't extrapolate from a single-threaded benchmark to a multi-threaded practical case. It can work, but it doesn't have to.
    The cache system, core interconnect, and memory subsystem all matter much more with many cores working at the same time.
  • Kangal - Thursday, November 12, 2020 - link

    Hi Andrei,
    I'm very disappointed with this article. It is not very professional, nor up to Anandtech standards. Whilst I don't doubt the Apple A14/A14X/M1 is a very capable chipset, we shouldn't take Apple's claims at face value. I feel like you've just added more fuel to the hype fire.

    I've read the whole thing, and you've left me thinking that this ARM chipset supposedly sits near the 5W TDP we have on iPhones/iPads, yet is able to compete with 150W desktop x86 chipsets. While that's possible, it doesn't pass the sniff test. Even more convoluted, this chipset is supposed to extend battery life notably (from 10 hrs up to 17 or 20 hrs), a factor of 1.7x-2.0x, yet the difference in TDP is far greater (5W compared to 28W), a factor of 4.5x-6.0x. So it is losing efficiency somewhere; otherwise we should have seen battery life estimates of 45 to 60 hrs. Both laptops have the same battery size.

    Apple has not earned the benefit of the doubt; instead they have a track record of lying (or "exaggerating"). I think these performance claims, and the estimates by you, really needed to be downplayed. We should be comparing ACTUAL performance when that data is available. By that I mean running it within proper thermal limits (i.e. a 10-30 min runtime), with more rounded benchmarking tools (Cinebench R23?), to deduce the performance deficits and improvements we are likely to experience in real-world conditions (medium-duration single-core, thermally throttled multi-thread, GPU gaming, and power drain differences). Then we can compare that to other chipsets like the 15W MacBook Air, the 28W MacBook Pro, and Hackintosh desktops with Core i9-9900K or R9 5950X chips. And if the Apple M1 passes with flying colours, great, hype away! But if it fails abysmally, then condemn it. Or if the results are very mixed, then give it only a lukewarm reception.

    So please follow up this article with a more accurate and comprehensive study, and return to the professional standards that allow us readers to keep sharing your site with others. Thank you for reading my concerns.
  • Kangal - Thursday, November 12, 2020 - link

    I just want to add that during the recent announcement by Nvidia, we were led to believe that the RTX 3080 has a +100% performance uplift over the RTX 2080. Now that tests have been conducted by trustworthy, professional, independent reviewers, it is actually more like a +45% uplift. Getting to the +70% to +90% uplift requires some careful cherry-picking of data.

    My fear is that something similar has happened with the Apple M1. With your help, they've made it look like it is as fast as an Intel Core i9-9900K. I suspect it will be much, much slower when looking at non-cherry-picked data, though I suspect it will still be a modest improvement over Intel's 28W laptop chipsets. But that is a far cry from the expectations that have been set up, just like with the RTX 3000 hype launch.
  • Spunjji - Thursday, November 12, 2020 - link

    @Kangal - Personally, I'm very disappointed in various commenters' tendency to blame the article authors for their own errors in reading the article.

    Firstly, it's basically impossible to read the whole thing and come away with the idea that M1 will have a 5W TDP. It has double the GPU and large-core CPU resources of A14 - what was measured here - so logically it should start at somewhere around 10W TDP and move up from there.

    To your battery life qualms - throwing in some really simple estimates to account for other power draw in the system (storage, display, etc.) gets you to an understanding of why the battery life is "only" 1.7x to 2x that of their Intel models (see the rough sketch at the end of this comment).

    As for Apple's claims needing to be "downplayed" - sure, except that there's *actual test data* in here that appears to validate those claims. I don't know why you think Cinebench is more "rounded" than SPEC - the opposite is actually true; Cinebench does lots of one thing that's easily parallelized, whereas SPEC tests a number of different aspects of a CPU across a large range of workloads.

    In summary: your desire for this not to be as good as it *objectively* appears to be is what's informing your comment. The article was thoroughly professional. In case you're wondering, I generally despise Apple and their products - but I can see a well-designed CPU when the evidence is placed directly in front of me.
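
    To make that battery-life arithmetic concrete, here's a minimal sketch in Python. Every number in it (battery capacity, platform overhead, average SoC draw) is an assumption chosen purely for illustration, not a measurement of any real machine:

        # Rough, illustrative battery-life model. All numbers are assumptions
        # picked to show the shape of the math, not measurements.
        BATTERY_WH = 50.0   # assumed battery capacity in watt-hours
        PLATFORM_W = 5.0    # assumed display + storage + radios + everything that isn't the SoC

        def battery_hours(avg_soc_watts: float) -> float:
            """Estimated runtime for a given average SoC package draw."""
            return BATTERY_WH / (PLATFORM_W + avg_soc_watts)

        intel_style = battery_hours(5.0)  # assume ~5 W average SoC draw under a light mixed workload
        m1_style = battery_hours(1.0)     # assume ~1 W average SoC draw for the same workload

        print(f"Intel-style SoC: {intel_style:.1f} h")            # ~5.0 h
        print(f"M1-style SoC:    {m1_style:.1f} h")                # ~8.3 h
        print(f"Uplift:          {m1_style / intel_style:.2f}x")   # ~1.67x, not 5x

    Swap in different assumed numbers and the exact factors move around, but the point stands: fixed platform power dilutes whatever efficiency gain the SoC delivers, so a roughly 5x lower SoC draw only roughly doubles battery life.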
  • Kangal - Friday, November 13, 2020 - link

    @Spunjji

    First of all, you are objectively wrong. It is not debatable; it is a fact that this article CAN be (could be, would be, has been) read and understood in a manner different to yours. So you can't just use a blanket statement like "you're holding it wrong" or "it's the reader's fault" when there are clearly things that could be done to mitigate the issue, and that was my qualm. This article glorifies Apple when it should be cautioning consumers. I'm not opposed to glorifying things, credit where due.

    The fact is that Andrei, who is representing Anandtech, is assuming a lot of the data points. He's taking Apple's word at face value. Imagine the embarrassment of taking a stance like this, only to be proven wrong a few weeks later. What should have been done is that more effort and more emphasis should have been placed on comparisons to x86 systems. My point still stands that there's a huge discrepancy between "user interface fluidity", "synthetic benchmarks", "real-world applications", and "legacy programs". And there's also the entire matter of power-draw limitations, heat dissipation, and multi-threaded processing.

    Based on this article, people will see that the ~6W* Apple A14 chipset is only 5% to 10% slower than the ~230W (or 105W TDP) AMD R9 5950X that just released and topped all the charts. So if the Apple Silicon M1 is supposed to be orders of magnitude faster (6W vs 12W, or maybe even more), then you can make the logical conclusion that the Apple M1 is +80% to +290% faster than the R9 5950X. That's insane, yet it could be plausible. So the sensible thing to do is to be skeptical. As for Cinebench, I think it is a more rounded test. I am not alone in this claim; many other users, reviewers, testers, and experts also vouch for it. Now, I'm not prepared to die on this hill, so I'll leave it at that.

    I realised the answer to the battery life question as I was typing it. And I do think a +50% to +100% increase is revolutionary (if tested/substantiated). However, the point was that Andrei was supposed to look into little details like that, and not leave readers guessing. I knew that Apple would extend the TDP of the chip; that much was obvious to me even before reading anything. The issue is that this point itself was never actually addressed.

    Your summary is wrong. You assume that I have a desire to see Apple's products perform worse than claimed. I do not. I am very unbiased, and want the data to be as clean as possible. Better competition breeds better progress. In fact, despite my reservations about the company, this very comment is being typed on an early-2015 13-inch MacBook Pro Retina. The evidence that's placed in front of you isn't real; it is a guesstimate at best. There are many red flags from watching their keynote and reading this article. Personally, I will wait for the devices to release and for people to start reviewing them thoroughly, and I will have to think twice about how I digest the Anandtech version when it's released. However, I'm not petty enough to boycott something for subjective reasons, and will likely give Anandtech the benefit of the doubt. I hope I have addressed some of your concerns.

    *based on a previous test by Anandtech.
  • Spunjji - Friday, November 13, 2020 - link

    @Kangal - The fact that a reader *can* get through the whole thing whilst imposing their own misguided interpretations on it doesn't mean it's the author's fault for them doing so. Writers can't spend their time reinventing the wheel for the benefit of people who didn't do basic background reading that the article itself links to and/or acknowledge the article's stated limitations.

    Your "holding it wrong" comparison is a funny one. You've been trying to chastise the article's author for not explicitly preventing people from wilfully misinterpreting the data therein, which imposes an absurd burden on the author. To refer back to the "holding it wrong" analogy, you've tried to chew on the phone and are now blaming the phone company for failing to tell people not to chew on it. It's not a defensible position.

    As it stands, he assumes nothing - nothing is taken at face value with regard to the conclusions drawn. He literally puts their claims to the test in the only manner currently available to him at this point in time. The only other option is for him to not do this at all, which would just leave you with Apple's claims and nothing else.

    As it is, the article indicates that the architecture inside the A14 chip is capable of single-core results comparable to AMD and Intel's best. It tells us nothing about how M1 will perform in its complete form in full applications compared with said chips, and the article acknowledges that. The sensible thing to do is /interpret the results according to their stated limitations/, not "be sceptical" in some generic and uncomprehending way.

    I think this best sums up the problem with your responses here: "The evidence that's placed in front of you isn't real, it is a guesstimate at best". Being an estimate doesn't make something not real. The data is real, the conclusions drawn from it are the estimates. Those are separate things. The fact that you're conflating them - even though the article is clear about its intent - indicates that the problem is with how you're thinking about and responding to the article, not the article itself. That's why I assumed you were working from a position of personal bias - regardless of that, you're definitely engaged in multiple layers of flawed reasoning.
  • Kangal - Friday, November 13, 2020 - link

    @Spunjji

    I agree, it is not the writer's fault when readers misinterpret some things. However, you continue to fail to acknowledge that a writer actually has the means and opportunity to greatly limit such misreadings. It is not about re-inventing the wheel; that's a fallacy. This is not about making misguided people change their minds, it is about allowing neutral readers to be informed with either tangible facts or clear disclaimers on claims and estimates. I even made things simple and said that Andrei merely needed to make clear that the figures are estimates, so that the x86 comparisons aren't absurd.

    "You're holding it wrong" is an apt analogy. I'm not chewing on the phone, nor the company. I've already stated my reservations (they've lied before, and aren't afraid of exaggerating things). So you're misguided here, if you actually think I was even defending such a position. I actually think you need to increase your reading comprehension, something that you actually have grilled me on. Ironic.

    I have repeated myself several times: there are some key points that need to be addressed (e.g. legacy program performance, real-world applications, multi-threaded workloads, synthetic benchmarks, and user experience). None of these have been addressed. You said the article acknowledges this, yet you haven't quoted anything. Besides, my point was that this needed to be stressed in the article multiple times, not just in an off-hand remark (and even that wasn't made).

    Being an estimate doesn't make something not real? Well, sure it does. I can make estimates about a certain satellite's trajectory, yet it could all be bogus. I'm not conflating the issue; you are. I've shown how the information presented could be misinterpreted. This is not flawed reasoning; it is an example of how loosely this article has been written. I never stated that I've misinterpreted it, because I'm a skeptical individual and prefer to dive deeper; read back my comments and you can see I've been consistent on this point. Most other readers can and would make that mistake. And you know what, a quick look at other sites and YouTube shows that is exactly what has happened (there are people thinking the MBA is faster than almost all high-end desktops).

    I actually do believe that some meaningful insights can be gathered from guesstimates. Partial information can be powerful, but it is not complete information. Hence, estimates need to be taken with a pinch of salt, sometimes a little, other times a lot. Any professional who's worth their salt (pun intended) will make these disclaimers. Even when writing scientific articles, we're taught to always add disclaimers when making interpretations. Seeing the quality of writing drop on Anandtech begs one not to defend them, but to pressure them instead to improve.
  • varase - Wednesday, November 11, 2020 - link

    Remember that the M1 has a higher number of Firestorm cores, which produce more heat - though not as much as x86 cores.

    There may be some throttling going on - especially on the fanless laptop (MacBook Air?).

    Jeez ... think of those compute numbers on a fanless design. Boggles the mind.

    Whenever you compare computers in the x86 world at any performance level at all, the discussion inevitably devolves into, "How good is the cooling?" Now imagine a competitor who can get by with passive heat pipe/case radiation cooling - and still sustain impressive compute numbers. The mechanical fan energy savings alone can go a good way toward preserving battery life, not to mention a compute unit with such a low TDP.
  • hecksagon - Tuesday, November 10, 2020 - link

    These benchmarks don't show time as an axis. Yes, the A14 can compete with an i7 laptop in bursty workloads. Once the iPhone gets heat-soaked, performance starts to tank pretty quickly. This throttling isn't represented in the charts because these are short benchmarks and performance isn't plotted over time.
  • Zerrohero - Wednesday, November 11, 2020 - link

    Do you think that the M1 performance with active cooling will “tank” like A14 performance does in an iPhone enclosure?

    Do you understand how ridiculous your point is?
  • Jasonovich - Wednesday, November 11, 2020 - link

    Active cooling in wafer-thin Apple Mac notebooks? You've got to be kidding me!
    The company ethos relies on 99% marketing and 1% substance and a massive cult following of Libdroids, which will always net in the converted, but I'm a pagan and I will eat my sock if the phone CPU proves me wrong!
  • Ppietra - Wednesday, November 11, 2020 - link

    the MacBook Pro has Active Cooling
  • Billrey - Wednesday, November 11, 2020 - link

    Happy sock eating then.
  • SuperSunnyDay - Thursday, November 12, 2020 - link

    I don't disagree with your claim about the amount of marketing that Apple engage in, but you are seriously misdirected by prejudice if you actually believe your assertion of only 1% substance. The M1 is a revolutionary component, and Apple's hardware and software are pretty much the definition of state of the art at present - who else has any claim to be more advanced in the mass market? Wake up and remove your blinkers. The truth is what is important, not what you want to believe.
  • Spunjji - Thursday, November 12, 2020 - link

    @hecksagon - "Once the iPhone gets heat soaked performance starts to tank pretty quickly."
    ...that's also true of the Intel processors in mobile systems; especially the Coffee / Ice Lakes.

    The point isn't to compare A14 in an iPhone to an i7 in a laptop, it's to let you know what Apple's architecture is capable of on a theoretical level - then we'll see how the finished product puts that into practice *when the finished product arrives*.
  • iphonebestgamephone - Tuesday, November 10, 2020 - link

    At least Geekbench isn't performance per watt.
  • vladx - Tuesday, November 10, 2020 - link

    Yep, seriously disappointed in Andrei for writing such a lazy article.
  • ikjadoon - Tuesday, November 10, 2020 - link

    brahhh...if we could delete comments on Anandtech, this should be the first candidate.

    vladx, you realize this comment will be here in 10 years when people are quoting this article on the x86 demise?

    Hi, everyone from 2025. Yes, we know.
  • vladx - Tuesday, November 10, 2020 - link

    Lol ok, good luck with that if you think ARM Macs and MacBooks will cause the demise of x86 and its 30-year software library. Ten years from now, you're gonna look back and wish you could've deleted your foolish statement.
  • Kurosaki - Tuesday, November 10, 2020 - link

    AArch64 emulates x86 faster than x86 runs software natively. Let that sink in, Vlad... Let it sink in for a moment.
    If the x86 gang doesn't do something drastic, my next gaming rig will be an ARM one. PC and Windows on ARM, for sure.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    I will wait for that ARM gaming PC; I wonder who is going to make it: Apple / Nuvia / Centriq / Qualcomm / Samsung / Huawei / MediaTek / Marvell? Please let me know where I can buy it.

    "AArch64 emulates x86 faster than native": can you please show me a benchmark or an application where it is doing that? I want to see it. Please don't write some useless HW or SW trash or compiler math. A simple benchmark or a video will do, much easier for all of us.
  • grant3 - Tuesday, November 10, 2020 - link

    "AArch64 emulates x86 faster than x86 runs software natively. "

    I'm highly doubtful, especially since this claim has no context. Which AArch64 chip vs. which x64 chip? (I assume you mean x64, because x86 has been superseded for so long.)

    If it's already proven that AArch64 is the fastest at running x64 code, then why are you and others not already buying them for their gaming rigs?
  • Calin - Wednesday, November 11, 2020 - link

    Basically, Rosetta 2 will run x86 code on the M1 cores fast enough that the better M1 graphics cores will be able to run graphically intensive applications faster than the 2018 MacBook Air.
    Also, this is a "work done per watt used" comparison, and the 5nm TSMC processor is better than Intel's 2018 14nm.
    Could an unrestricted x86 behemoth with graphics similar to M1's integrated GPU do better than the M1 running emulation? I bet it could; however, the comparison was between the 2018 MacBook Air and the 2020 MacBook Air (so total power limited to some 10 watts).
  • Speedfriend - Wednesday, November 11, 2020 - link

    "AArch64 emulates x86 faster than x86 runs software natively."

    That is not what they said; they said that under emulation it ran faster than on the old Macs, which in this case had a 2018 Intel processor or a dual-core part.

    It is not faster than a current Intel chip.
  • Ladis - Wednesday, November 11, 2020 - link

    I remember when Apple switched from Motorola to PowerPC: the lowest-spec PPC PowerBooks were slow at emulating the Motorola code because the emulator plus the app's code didn't fit in the CPU cache. However, it's the opposite this time - Apple Silicon's cache is huge. Also, as others stated, it's comparing against a years-old, TDP-limited MacBook.
  • Spunjji - Thursday, November 12, 2020 - link

    Backwards compatibility isn't going to be a good enough reason for x86 to stick around forever - especially not if power/performance ends up tilting so far towards ARM designs.
  • hlovatt - Tuesday, November 10, 2020 - link

    @vladx That is an utterly unfair comment. Anandtech have done in-depth independent testing of the A14 chip that is beyond anything else you will find. I think you should apologise to them and retract your comment.
  • Spunjji - Thursday, November 12, 2020 - link

    Whereas I have come to expect this sort of baseless accusation from you. I don't understand why you still come here now that reality is increasingly poorly aligned with your team-sports fantasies.
  • augustofretes - Tuesday, November 10, 2020 - link

    There's no performance lost at all. The M1 is obviously faster than anything Intel can put in a MacBook Air or MacBook Pro. The A14 is stuck in a phone, and it's giving processors with a much higher TDP a run for their money.
  • BlackHat - Tuesday, November 10, 2020 - link

    Apple compared the M1 against their 2018 MacBook with Skylake and LPDDR3, not even the 2020 model with Ice Lake; that is a chip that throttles heavily. Why do this if the A14 is giving others "a run for their money"?
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    Look at Anandtech's benchmarking in this very article. I'm sure you can find it if you look carefully.
  • BlackHat - Tuesday, November 10, 2020 - link

    Yes, I see the benchmarks, but that doesn't calm my concern: if those numbers look so good here, why compare their chips against a Skylake chip?
  • michael2k - Tuesday, November 10, 2020 - link

    Because they didn't pay Intel for any Skylake chips?

    Essentially, Intel's designs only improve by about 10% per generation, so given they've been stuck at 14nm and only have some 10nm parts, you might see a 10% or 15% improvement with the newest Intel parts.

    A 10% or 15% improvement won't beat the M1.
  • BlackHat - Tuesday, November 10, 2020 - link

    What do you mean? Apple has Ice Lake MacBooks that they could have used for the comparison.
  • valuearb - Tuesday, November 10, 2020 - link

    Apple compared the M1 Macs vs. their exact predecessors, which in the case of the two-port 13-inch MacBook Pro is a 1.7 GHz i7.

    Note that the 13-inch Tiger Lake MBP had to be made thicker and heavier to accommodate Tiger Lake with adequate battery life.

    Also note that Anandtech's own benchmarks show Tiger Lake on par with the iPhone 12's A14 CPU, at four times the power usage.
  • name99 - Tuesday, November 10, 2020 - link

    Give it a fscking rest. Go read Apple's web site:
    Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as ***production 1.2GHz quad-core Intel Core i7-based MacBook Air systems***, all configured with 16GB RAM and 2TB SSD.

    They tested against the currently shipping Mac equivalents. If you want to test against your favorite laptop, do it yourself; Apple doesn't owe you every benchmark you can possibly imagine.
    Apple tested against the competitor product as far as THEIR CUSTOMERS ARE CONCERNED. Everyone else in the world understands this, why can't you?
  • trixn86 - Wednesday, November 11, 2020 - link

    Exactly this. They compared it to the competing device in their product line that most of their customers currently have in use. And those are also the customers most likely to consider upgrading to a new device. Also, 2.8x faster sounds better than 1.whatever-x faster if they'd picked the best Intel notebook CPU available, whose users wouldn't want to upgrade anyway.
  • Spunjji - Thursday, November 12, 2020 - link

    Because it's like-for-like *within their product range*.

    Apple customers don't care about how fast it is when compared with somebody else's products.
  • Glaurung - Tuesday, November 10, 2020 - link

    This article, which you seem not to have read, benchmarks the A14, in the latest iPhone, against Intel and AMD laptop/desktop chips.

    And the phone chip, with a 5W TDP, is as fast as or faster than anything except the latest AMD desktop chip.

    Given that there is no way the M1 is going to be *slower* than the chip in this year's iPhones, I think the article's conclusions are fair. And we'll have benchmarks of actual M1 Macs to look at in another week or so.
  • BlackHat - Tuesday, November 10, 2020 - link

    I'm just saying that Apple not comparing their chips against Ice Lake, or an even more recent chip, gives grounds for a lot of skepticism. Ice Lake is better than Renoir in SPEC but not so much in real-world tests, even leaving multi-core aside.
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    But Apple literally DID test the newest chips:

    https://www.apple.com/mac/m1/

    "Our high‑performance core is the world’s fastest CPU core when it comes to low‑power silicon.3"

    "Comparison made against latest‑generation high‑performance notebooks commercially available at the time of testing. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro."
  • chlamchowder - Wednesday, November 11, 2020 - link

    No doubt it does well in a low power envelope. However for desktops, it's not particularly relevant when Zen 3 can win in a power envelope that's still very acceptable and easy to cool.
  • mesahusa - Wednesday, November 11, 2020 - link

    They literally state on their website that they tested against currently offered MacBook CPUs. People like you are arguing in bad faith when you wheel out "bEtTeR CoMpArEd To WhO?", as if Apple hasn't *always* referred to their own previous-generation offerings when making claims about new products. When Apple says that the new iPhone screen is 20% brighter, do you think to yourself, "20% bRiGhTeR tHaN wHaT? a HoLe? luuuul"? And EVEN if that were true, and they were comparing to something like Kaby Lake or Coffee Lake, it would still be an astronomical leap. Intel has seen a measly 5-10% improvement YoY, and while AMD has now surpassed Intel, they aren't even in the ballpark of the perf/watt that Apple is providing.
  • vais - Wednesday, November 11, 2020 - link

    Benchmarks of what? Performance/watt or absolute performance?
  • unclevagz - Tuesday, November 10, 2020 - link

    *Testing conducted by Apple in October 2020 using preproduction MacBook Air systems with Apple M1 chip and 8-core GPU, as well as production 1.2GHz quad-core Intel Core i7-based MacBook Air systems, all configured with 16GB RAM and 2TB SSD. *

    Last time I checked, the only quad-core MBA Apple has been selling uses Ice Lake...
  • repoman27 - Wednesday, November 11, 2020 - link

    Correct. The 13-inch MBP with two Thunderbolt 3 ports that the new M1 version replaces was using an Apple-exclusive Coffee Lake 4+3e part, though. The Ice Lake models with four Thunderbolt 3 ports are still available.

    M1 is Apple’s answer to Intel’s Y and 15W U series chips. It has not supplanted 28W U or 45W H in MacBook Pros or 65W H in the Mac mini.
  • Luminar - Wednesday, November 11, 2020 - link

    x86 will always be better than ARM at the high end, as in 95 watts and above.
  • chlamchowder - Wednesday, November 11, 2020 - link

    Not necessarily. It doesn't have anything to do with instruction set.

    But right now Intel and AMD have expertise at hitting higher performance/power targets than ARM manufacturers. And ARM people have experience in the opposite (lower power targets).
  • misan - Wednesday, November 11, 2020 - link

    As others have said, Apple's marketing is largely irrelevant at this point, since we have some independent data to draw from. But even if we accept the fact that they compare against a 2018 MBP... it doesn't change anything. The performance increase between Skylake and Ice Lake is meager at best. The M1 will outperform Tiger Lake by a very healthy margin while being more power efficient, and that is what is relevant here.
  • 5j3rul3 - Tuesday, November 10, 2020 - link

    Great Revolution👍👍👍
  • armchair_architect - Tuesday, November 10, 2020 - link

    Great review! Could you share your methodology for power and energy measurement?
  • Speedfriend - Tuesday, November 10, 2020 - link

    All the claims for the uplift in performance appear to be against the base model of the particular product, which for the Mini and the Pro would appear to be Intel processors from 2018, and for the Air, a dual-core processor. Now, clearly Apple Silicon is an amazing achievement, so why cheapen it with these frankly ridiculous comparisons against products Apple took the decision not to update?

    The other thing I don't understand is why they didn't make the 13-inch MacBook even more powerful, unless they are planning a second version with a 12-core processor?
  • Glaurung - Tuesday, November 10, 2020 - link

    The M1 is optimized for battery life. It's the equivalent of Intel's U-series chips. And Apple has limited it quite a bit - max 16GB of RAM, max two Thunderbolt ports, and so on.

    (consults tea leaves) The next round of Apple Silicon (M2 or whatever) will almost certainly have a much higher TDP, and be for people who need 32GB of RAM, lots of ports, and even more performance at the expense of battery life. Those will go into the 16" MBP, the 4-port 13" MBP, and the iMac, plus maybe a higher-performance model of the Mini.
  • trixn86 - Wednesday, November 11, 2020 - link

    They also compared it to the highest-performing CPUs for notebooks commercially available at the time of testing, not only to their 2018 MacBook Pro, as you can read in footnote 3 of their MacBook Pro teaser page. And the claim they justified with that is: "Our high‑performance core is the world's fastest CPU core when it comes to low‑power silicon".

    "why cheapen it with these frankly ridiculous comparisons against products Apple took the decision not to update"

    So they can claim it is 2.8x faster than what most of their customers are currently using, instead of having to say that it is 1.x faster than what is currently available. First of all, 2.8 sounds better than a lower number, and secondly, their target group for an upgrade is more likely to be found among people who don't have the most powerful device currently available, but rather a 2018 MacBook or older.

    So how does that cheapen anything? It is a rather smart marketing decision.
  • eastcoast_pete - Tuesday, November 10, 2020 - link

    Thanks Andrei and Ryan! Great deep dive; maybe I just didn't pick up on it on first read, but am I correct that neither the performance nor the efficiency cores "do" SMT (not that they need it)? I read on another site that the efficiency cores, but not the performance cores, can multithread, and that didn't make any sense to me, but then, I'm not the expert.
    Now I'll wait for the first actual tests; besides Apple, have others (Adobe, MS) "gone native" for the A14 arch in time for launch? If not, how about emulation?
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    There's no SMT from Apple, and I don't expect it to ever be a thing.
  • voicequal - Wednesday, November 11, 2020 - link

    Really good review; I read every word. It seems the weakness of the M1 will be its throughput, with only 4 high-perf cores, but there were no multithreaded benchmarks to illustrate this?
  • Spunjji - Thursday, November 12, 2020 - link

    They haven't benchmarked the M1 yet. Benchmarking the multi-core performance of A14 in an iPhone wouldn't give you much of a useful idea about how a much larger chip in a far larger chassis would perform.
  • name99 - Tuesday, November 10, 2020 - link

    The moral equivalent of SMT is the 4 small cores. SMT gets you about a 25% boost, and a small core is worth about 25% of a big core (rough arithmetic at the end of this comment).

    You can quibble about minor details but that's the big picture as far as *throughput* is concerned:
    people who think 8 vCores/8 hyperthreads = 8 large cores are morons
    people who think 4+4 big+little cores = 8 large cores are morons
    Morons dominate numerically on both sides of the discussion.
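
    Working those rules of thumb through (treating both 25% figures purely as rough assumptions), the two layouts land in roughly the same place on throughput, and neither is anywhere near eight full cores:

        # Back-of-the-envelope throughput in "big-core equivalents", using the rough
        # rules of thumb from the comment above (assumptions, not measured data):
        #   - SMT adds roughly 25% throughput on top of a big core
        #   - a small/efficiency core delivers roughly 25% of a big core's throughput
        SMT_UPLIFT = 0.25
        SMALL_CORE_FACTOR = 0.25

        smt_4c8t   = 4 * (1 + SMT_UPLIFT)        # 4 big cores with SMT  -> 5.0 equivalents
        big_little = 4 + 4 * SMALL_CORE_FACTOR   # 4 big + 4 small cores -> 5.0 equivalents
        eight_big  = 8.0                         # what neither configuration actually is

        print(smt_4c8t, big_little, eight_big)   # 5.0 5.0 8.0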
  • bji - Tuesday, November 10, 2020 - link

    Morons are especially prone to over-generalization.
  • chlamchowder - Wednesday, November 11, 2020 - link

    The difference between 4 small cores and 4-way SMT is that with 4-way SMT, a single active thread generally gets full access to the core's resources, giving better ST performance when you're not pegging all threads.
  • Speedfriend - Tuesday, November 10, 2020 - link

    Given Zen 3 is on 7nm, any guess as to what a move to 5nm would do for performance and efficiency?
  • Jaianiesh03 - Wednesday, November 11, 2020 - link

    By TSMC's figures, 30% better efficiency; in other words, a Zen 3 part would consume about 14 watts at max ST perf. And because Apple has such a wide core, it would still have roughly 2x the perf/watt.
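
    Spelling that estimate out as a quick sketch: the ~20 W Zen 3 single-thread baseline and the ~5 W Firestorm figure are assumed round numbers, and the 30% is TSMC's headline N7-to-N5 power claim at iso-performance.

        # Sketch of the node-scaling estimate above (assumptions, not measurements).
        ZEN3_N7_ST_WATTS = 20.0      # assumed per-core package power at max ST perf on 7nm
        N5_POWER_REDUCTION = 0.30    # TSMC's quoted power reduction for N5 vs N7 at iso-perf
        FIRESTORM_ST_WATTS = 5.0     # assumed Firestorm power at broadly comparable ST perf

        zen3_n5_watts = ZEN3_N7_ST_WATTS * (1 - N5_POWER_REDUCTION)
        print(zen3_n5_watts)                          # 14.0 W, the figure quoted above

        # Assuming single-thread performance in the same ballpark, perf/W still favours
        # the wider, lower-clocked core by roughly:
        print(zen3_n5_watts / FIRESTORM_ST_WATTS)     # ~2.8x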
  • vladx - Tuesday, November 10, 2020 - link

    "Intel has stagnated itself out of the market, and has lost a major customer today."

    This statement is pure nonsense, how many Macs are sold per year? This will have a negligible impact on Intel's revenue.
  • IBM760XL - Tuesday, November 10, 2020 - link

    7% of all computers (laptops and desktops) worldwide in 2019, according to Gartner. All of which used Intel CPUs, while some of the other 93% used AMD's CPUs.

    So ballpark it as 10% of Intel's CPU revenue. Definitely negligible. /s
  • vladx - Tuesday, November 10, 2020 - link

    Consumer products represent only 60% of Intel's revenue, so yeah, 5-6% is pretty negligible alright.
  • diehardmacfan - Tuesday, November 10, 2020 - link

    Imagine telling a multi-billion dollar company that 5-6% of their revenue is negligible.
  • vladx - Tuesday, November 10, 2020 - link

    Yes, it's negligible, because other divisions can easily make up that 5-6% if they grow, and the server and FPGA businesses are guaranteed to keep growing.
  • GodHatesFAQs - Tuesday, November 10, 2020 - link

    Their growth is not accelerated as a result of losing Apple's business. Apple's business is the 5-6% that they'll never get back, while they may grow other parts of their business.
  • Wilco1 - Tuesday, November 10, 2020 - link

    I guess you also didn't notice that Intel's server market share is being eaten away fast by AMD and Arm servers?

    And the financial hit is just the start - the move to Arm in servers, laptops and desktops is a clear inflection point.
  • hecksagon - Tuesday, November 10, 2020 - link

    Haha, you said ARM servers. It will be quite a while before that's a problem. Hell, even AMD isn't really eroding Intel's server market share all that much. That market is all about platform maturity.
  • Wilco1 - Wednesday, November 11, 2020 - link

    Graviton was 10% of total AWS instances back in August and will likely be close to 20% by the end of the year: http://pdf.zacks.com/pdf/JY/H5669536.PDF (chart 6 shows how quickly Arm and AMD are eating Intel's share - from 12% to 30% in just over a year)
  • Spunjji - Thursday, November 12, 2020 - link

    @hecksagon - "Will be quite a while before that's a problem" seems to have been Intel's approach to their foundry plans eating dirt and, oh dear, now they've gone and lost performance leadership and Apple. 😬😬😬
  • Chinoman - Tuesday, November 10, 2020 - link

    You forget that this is likely to send customers in Apple's direction as well. People who would have otherwise bought non-Apple, Intel-powered products could now be persuaded to buy a Mac, since it now has a compelling enough technological gap between itself and the competition. Competition is good for consumers but not desirable for businesses. Intel is going to lose more than the share of the market that Apple currently occupies; it's only a matter of time. And that isn't even considering that they haven't been as competitive across the board against AMD as in years past.
  • Nicon0s - Thursday, November 12, 2020 - link

    Software availability and compatibility are very important. I'm planning on buying a Zen 3 laptop next year. Better battery life is great, but for me it is also very important to be able to run all my software and all the games I own. For that reason, a MacBook with an ARM processor is not even on my list.
  • diehardmacfan - Tuesday, November 10, 2020 - link

    That's not how businesses work, my guy.
  • bji - Tuesday, November 10, 2020 - link

    You can't be serious. So you're saying that if I steal $100 from you it's OK, because if you work hard enough overtime you can make that $100 back. And you don't even realize that you'd be $100 richer in the 'working overtime' scenario if I hadn't stolen $100 from you.

    I think I'll steal $100 from you, have you wash my car and mow my lawn and do my laundry, and then pay you the $100 that I stole from you and you'll be like, THAT WORKED OUT!
  • grant3 - Tuesday, November 10, 2020 - link

    Fun moving target.

    "I don't know how many units apple buys, but i'm sure they're not a major customer!"
    "Oh, it's 10% of their processors? Well now I want to include unrelated revenues to dilute the loss"
    "Oh, it's still 5% of their revenue? Oh well I'll pretend that doesn't matter because they plan to EVENTUALLY make 5% somewhere else"

    .... and that's how you avoid admitting your original claim was wrong. Make it seem like Apple is doing Intel a favour by not burdening them with billions in sales.
  • Nicon0s - Thursday, November 12, 2020 - link

    Intel has barely been able to cope with demand in recent years; I'm sure they will be able to find customers for their chips.
  • vladx - Thursday, November 12, 2020 - link

    @Nicon0: Exactly, losing Apple as a customer is no big loss to Intel.
  • Spunjji - Thursday, November 12, 2020 - link

    @grant3 - you nailed the exact brand of sophistry vladx likes to engage in. Unsupportable statement followed by layers of equivocation.
  • Spunjji - Thursday, November 12, 2020 - link

    To be fair, imagine a multi-billion dollar company giving a flying shit what vladx thinks 😅
  • valuearb - Tuesday, November 10, 2020 - link

    Apple's Macintosh line was actually close to 20% of PC market revenues in Q3. The ASP of Macs is around $1,400 vs. an industry average of $500, so that 7% by unit sales works out to nearly triple the revenue share.

    That likely means Apple is much higher than 10% of Intel's consumer revenues, since they buy a higher proportion of high-end Intel CPUs than other PC makers (basically buying near-zero low-end CPUs). So it's possible Apple's CPU purchases were close to 10% of Intel's total revenues.
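
    As a rough check of that claim, here is the arithmetic with round, assumed figures; it lands in the high teens, which is in the same neighbourhood as the "close to 20%" number:

        # Illustrative revenue-share arithmetic using the round numbers from the comment
        # above (approximate unit share and ASPs, not exact market data).
        mac_unit_share = 0.07   # ~7% of worldwide PC unit sales
        mac_asp = 1400.0        # approximate Mac average selling price, USD
        pc_asp = 500.0          # approximate industry-average PC selling price, USD

        mac_revenue   = mac_unit_share * mac_asp
        other_revenue = (1 - mac_unit_share) * pc_asp
        share = mac_revenue / (mac_revenue + other_revenue)

        print(f"{share:.1%}")   # ~17% of PC market revenue from ~7% of units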
  • BlackHat - Wednesday, November 11, 2020 - link

    You are comparing the full machine price with the CPU price. It doesn't matter whether Apple sells an i7 machine for $1,400 or Dell sells one for $700; both (in theory) buy the chips at the same price from Intel, so the revenue for Intel is the same.
  • Nicon0s - Thursday, November 12, 2020 - link

    Funny, PC OEMs generally adopt the newest and best Intel CPUs faster and in greater volume than Apple does, so Apple is not as large a customer as you imagine; it's definitely much smaller than Dell or Lenovo (which also sell a lot of servers and professional laptops to companies).
  • Spunjji - Thursday, November 12, 2020 - link

    Apple were the only OEM to regularly use Intel's highest-end large-die CPU+GPU offerings with eDRAM. They literally subsidized Intel's development efforts in those areas, and you can guarantee the margins on those were better than the average crap Intel sling to OEMs. The only reason Apple didn't push harder to adopt Ice Lake is because for the longest time after release, Intel literally could not make a variant that met their requirements.
  • jabber - Tuesday, November 10, 2020 - link

    Apple sales pay for the coffee machines at Intel maybe.
  • bji - Tuesday, November 10, 2020 - link

    You have exposed yourself as clueless! Embarrassing!
  • Luminar - Tuesday, November 10, 2020 - link

    Meanwhile, my Intel stock surged and I just made $4,000 despite this ARM announcement.
  • TechnicalEntry - Wednesday, November 11, 2020 - link

    "Surged" lol come on bud. Intel stock is down 14% in the last month, 24% in the past 6 months and is currently plumbing levels it touched at the depths of the March stock market crash. In that same time frame apple has DOUBLED.
  • Spunjji - Thursday, November 12, 2020 - link

    @TechnicalEntry - nice. Factual burn!
  • name99 - Tuesday, November 10, 2020 - link

    Look at the long-term picture.
    Graviton2 has validated that ARM is a good-enough alternative for many data warehouse use cases. Graviton3 (probably introduced at the end of this month) should be a nice boost.
    Apple has validated that ARM is a superior alternative on the desktop.
    AMD is ready to pick up as much of the x86 business as TSMC can supply them.

    Hard to see how Intel maintains its value with this 1/2/3 punch.
  • Luminar - Tuesday, November 10, 2020 - link

    Intel isn't going anywhere.
  • Spunjji - Thursday, November 12, 2020 - link

    "Hard to see how Intel maintains value with this 1/2/3 punch."
    Vs.
    "Intel isn't going anywhere."

    See how you replied to a point that name99 didn't make? Intel aren't going anywhere *yet* - their sheer size and market penetration precludes that. But damn if they aren't facing a worse situation than they have faced in decades.
  • Spunjji - Thursday, November 12, 2020 - link

    I like to visualise it as a counterfactual - if an Intel exec could stride into the boardroom and announce that he could increase Intel's CPU revenue by 10% in one smooth motion over the course of two years, he'd get a bonus.

    I'm sure they'll all still get bonuses anyway, though, because most people think like vladx.
  • Glaurung - Tuesday, November 10, 2020 - link

    Apple sells about half as many Macs per year as Dell sells PCs. They are definitely a major customer for Intel.
  • name99 - Tuesday, November 10, 2020 - link

    Half as many Macs right now. It will be fascinating to see what sort of elasticity exists in this market; I don't think anyone REALLY knows.

    There's obviously an industrial segment to PCs that will stay on x86 for a long while.
    The business segment is conservative and demands Enterprise Windows compatibility.

    But how large is the "flexible" PC segment? The people who just want a nice PC and don't have stringent Windows or other compatibility issues? I suspect it's at least 30 to 40%.
    Those won't all move this year. Hell, plenty of Apple people won't buy this year, waiting at the very least for machines that are closer to what they need (iMac style, or larger memory); some waiting for the real performance machines, the ones that come with at least LPDDR5, SVE/2 and some HBM.
    But in three years...?

    There's also (another HUGE unknown) how much benefit mass consumers will see in being able to run iPhone/iPad apps on their Macs. Is this a nice convenience (the way I think of it)? Or is it a defining breakthrough, something that massively changes the PC value proposition for people who are iPhone natives and thoroughly familiar with those apps?

    Anyone who tells you they know the answer to either of these questions today is lying. Maybe in a year we'll start to have an idea.
  • pSupaNova - Wednesday, November 11, 2020 - link

    @name99 Stop sitting on the fence; you have been watching these hardware wars for nearly as long as I have, and you probably subscribed to Byte magazine as a kid. Please tell us what you really think.
  • vladx - Wednesday, November 11, 2020 - link

    Remember that outside the US, iOS only has around 20% of the mobile market. So no, ARM Macs won't attract that many new users just because iPhone/iPad apps run on them.
  • markiz - Thursday, November 12, 2020 - link

    Of those 30-40%, how many are willing to pay what Apple demands?
  • Nicon0s - Thursday, November 12, 2020 - link

    "But how large is the "flexible" PC segment? The people who just want a nice PC and don't have stringent Windows or other compatibility issues? I suspect it's at least 30 to 40%."

    A lot of "but" and "what if".
    Even for general consumers, software compatibility and availability are very important. Windows also has a very large library of free software, and especially PC games. I suspect most users wouldn't be willing to give those up to move to a Mac. Also, there are a lot of people who own Android phones, and how well these new MacBooks work with an Android phone matters too. If I only run light workloads, I can already look at Chromebooks, so why would I pay a premium for an ARM MacBook?
  • Spunjji - Thursday, November 12, 2020 - link

    Agreed with you 100% on this. It's going to be interesting times.
  • valuearb - Tuesday, November 10, 2020 - link

    And their Macintosh revenues are significantly higher than Dell's PC revenues, given Mac's much higher average sales price. Macs on average use significantly more expensive Intel processors than Dells do.
  • Nicon0s - Thursday, November 12, 2020 - link

    "And their Macintosh revenues are significantly higher than Dell's PC revenues"

    I doubt it's much higher.
  • Spunjji - Thursday, November 12, 2020 - link

    Dell's entire "Client Solutions Group" netted 45.84 billion USD in revenue in FY 2020. That includes a lot more things than just PCs, of course.
    Apple brought in just over 28 billion across a similar time frame for "sales of Macintosh computers". (Source for those is statista.com)

    Dell sell a LOT more devices than Apple - the only sound conclusion is that Apple make a lot more per device sold.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Intel will face the impact for sure; 10% of the worldwide market share is owned by Mac trash. So that whole chunk is lost, plus the reputational damage of losing a customer like Apple and then losing to TSMC and AMD.

    But the biggest joke about these Macs and this processor is how it is being compared to a fucking 10900K and 5950X, WTF. As if both can do the same goddamn work. And if Apple is doing this at 5W vs fucking 100W processors, why the hell aren't the others doing the same; is x86 just a shitty uArch? What else does the article infer... I don't know.

    Look at the benchmarks: there's no fucking Cinebench for this CPU, nor any real-world test like Blender or 7-Zip compression and decompression. Nothing but SPEC bars and GB scores, as always, while in reality all of the iPhones lost on application performance, a.k.a. real-world performance, to the pathetically slow Qualcomm processors.

    This x86-vs-ARM stuff is a horrible dick-measuring contest. Apple will force all application devs to transition to their ARM-only system; expect Rosetta 2 to drop 32-bit x86 as well once they get the full transition done. And on top of that, why would I want to run iOS apps on a Mac? Do I like crippled software? Especially given how pathetic the filesystem sandbox on iOS is vs. the Macintosh OS. Apple is already converging both; their Mac market is only the people who use Macs for Xcode, music creation, and some light work.

    Don't even start on the GPU side; it's even more pathetic to see how the article portrays it, while the game they were showing at the event ran at absolutely dogshit FPS and looked like PS2 or early PS3 graphics. Even the Switch runs better than that with its ultra-garbage ARM processor. They are claiming some magical 2.3-teraflop figure in such a small package, so does that mean it will be faster than a 780 Ti or a GTX 970? Fucking nope, not even close.

    AT's Apple articles are extreme tbh, too much limelight. There's no mention of the I/O issues, the locked-down garbage ecosystem, the lack of user-replaceable parts, the mega anti-consumer BS. Just glowing reviews and all...
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    Meth: Not even once.
  • name99 - Tuesday, November 10, 2020 - link

    Oh Sarah! It's funny because it's true.
  • McCartney - Tuesday, November 10, 2020 - link

    It's so funny how you fools think English is his first language; it's pretty obvious that it's not. And a lot of what he says is true, but the sad thing is the number of stock-pumping losers on the internet has grown exponentially.

    Instead of understanding his point that x86 is not going to be overtaken by ARM anytime soon (with which I agree), outside of the legacy software dominance that others have pointed out here, you attack him.

    It's clear from the content of his posts that this person has both knowledge and opinions. Instead of respecting his right to express that diversity, like the follower you are, you suggest he's a drug user. He is not a drug user. He is simply voicing his passion for an architecture that raised the real contributors to the computing world, whom you continue to sponge off with your casual interest in processor and compiler design. You are a noob, like name99.

    i bet the majority of ARM defenders here are so invested in AAPL (foolish to buy it at 600 in 2014, now hoping their put sells to some uneducated pinoy who trades on cramer's tips).

    da NAZ going 20k baby!
  • Chinoman - Tuesday, November 10, 2020 - link

    Someone doesn’t need to have speak English natively to know how to write prose without injecting useless subjective commentary to make a point. He just comes off as unhinged and immature which I believe are traits which transcend language barriers. His main point of contention was that x86 chips are being portrayed unfairly against the A14 in these benchmarks and that he wishes to see more figures before “declaring” some sort of victor like it’s a video game. Computer drag racing isn’t really the whole point here, though.

    The market has decided that Intel failed to produce a compelling product and this review, despite the limitations of not even having an M1 at hand, tries to demonstrate why Apple decided they should invest billions into their own processor instead of continuing to count on Intel or AMD.
  • name99 - Tuesday, November 10, 2020 - link

    "you are a noob, like name99"
    OK then.

    Meanwhile:
    https://www.anandtech.com/comments/16214/amd-zen-3...
  • AshlayW - Tuesday, November 10, 2020 - link

    ^ Agreed. The OP was rude and didn't need to swear but had some really valid points.

    Waiting for real-world tests now.
  • Qasar - Tuesday, November 10, 2020 - link

    Yes, very rude, and swears needlessly.
  • Spunjji - Thursday, November 12, 2020 - link

    Out of interest, which of z0d's points did you think were valid that aren't addressed by the article itself and/or are fundamental limitations of not yet having M1 hardware?
  • techconc - Wednesday, November 11, 2020 - link

    I think you're missing the point. The M1 is Apple's first entry in a family of Mac-specific SoCs. It's their low-end variant. The fact is, Apple is able to ship a far more compelling product for these form factors using their own silicon. The industry will absolutely take notice of this. No, it's not the end of Intel overnight, but it does demonstrate how bad things are with them right now.

    The lucky thing for Intel is that there is no real ARM-based competitor that is not Apple. X1-based SoCs will be more compelling, but they won't be enough for people to dump Intel and switch to ARM. Intel had better hope that situation remains the same in the years to come. All the same, Apple has once again established a hardware advantage in addition to the software advantage they've always had in the market.
  • palladium - Thursday, November 12, 2020 - link

    Is the M1 awesome? For sure, even allowing for the process node advantage and the extra power the interconnect consumes (on the Ryzen/Intel CPUs). Will it dominate mobile performance? I'm sure it will. Will a higher-clocked, higher-core-count variant of it dominate the desktop market? I don't think we have enough information to make the call - the Mac Pro market is dominated by AVX-heavy workloads. Does the M1 even have SIMD-equivalent instructions? How well would this bigger M1 perform in MT workloads? How well does the architecture scale at a 50W or 100W TDP? How will Apple implement a higher-core-count version? Are they going with a big monolithic die like Intel (in which case, what are the yields going to be like? What about cache?), or a chiplet design like AMD (in which case they need a good interconnect - we saw how problematic interconnects can be in the early Threadripper chips)?
  • Spunjji - Thursday, November 12, 2020 - link

    @Palladium - "Does M1 even have a SIMD equivalent instructions"
    Yup - "The four 128-bit NEON pipelines thus on paper match the current throughput capabilities of desktop cores from AMD and Intel, albeit with smaller vectors."

    All good points about scaling up to desktop-level performance with more cores and higher TDPs, but one thing to bear in mind is that with IPC like this, they don't necessarily need to in order to be competitive.
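
    The "on paper match" is just width-times-pipes arithmetic; a minimal sketch, assuming two 256-bit FP/FMA pipes on a typical current desktop x86 core (an assumption about a common configuration, and paper math only - real throughput depends on instruction mix, ports, and clocks):

        # Per-cycle vector width, paper math only.
        firestorm_bits = 4 * 128   # four 128-bit NEON pipelines
        desktop_bits   = 2 * 256   # assumed two 256-bit FP/FMA pipes

        print(firestorm_bits, desktop_bits)   # 512 512 -- equal on paper

    Code hand-tuned for wider vectors (AVX-512, for instance) can still pull ahead, which is part of why sustained MT and AVX-heavy results remain the open question.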
  • techconc - Thursday, November 12, 2020 - link

    @palladium
    Yes, ARM has SIMD instructions. This is nothing new. Apple has even had its own custom AMX units since the A13, primarily used for machine learning, etc.

    As for scale, yes, we'll have to wait and see what Apple does. I don't think I'm particularly clairvoyant by expecting another iteration of scale (maybe called M1x) that will be used for iMacs, etc. with likely 12 cores and an even bigger GPU.

    So, then, beyond that, the only question becomes how Apple plans to address the Mac Pro market. I suspect we're at least a year away from that happening. If rumors are accurate, then it looks like Apple will be creating their own custom GPU card. That makes sense. They're not likely to have an SoC for their Mac Pro product. My guess is they'll use their own tech but on a more traditional bus architecture with expandable memory, discrete (Apple) GPU, and a CPU chip with many cores.
  • Spunjji - Thursday, November 12, 2020 - link

    This comment is... *chef's kiss*

    Where to begin? White knighting for Quantumz0d is already a hilarious place to be, but doing it with "woker than thou" blather about non-native English speakers is gold. It's not about z0d's grammar, it's about the barely-relevant ranting that inevitably turns to one or two unrelated pet topics. If you're going to invoke identity as a defence, make sure it's actually in play first!

    Then to say that it's clear z0d has knowledge and "opinion". Sure, but those opinions suck and often aren't based on their knowledge! that's the problem!!! 😂

    But my absolute favourite bit - the BEST - was where you whined about people attacking the man and not the idea, then started calling other people noobs and sponges. 🤣

    Is this you, z0d? Or maybe a sock of TheJian, given the whine about "stock-pumping losers"? Regardless - most of us don't care about stonks, but feel free to keep getting angry when line go up. 🤭
  • ex2bot - Tuesday, November 10, 2020 - link

    Thank you. Sad to see so many trolls here.
  • vladx - Tuesday, November 10, 2020 - link

    "Intel will face the impact for sure, 10% of the world wide marketshare is owned by Mac trash. "

    Even if Macs represent 10% of all PCs, Intel's consumer division represents only 60% of Intel's revenue, which means the loss is only 6% of Intel's total revenue at most.
  • GodHatesFAQs - Tuesday, November 10, 2020 - link

    That's very significant. iPads are also 7% of Apple's revenues, imagine if Apple lost that entirely. Nobody would say that's negligible.
  • Chinoman - Tuesday, November 10, 2020 - link

    Also, people cross-shopping between a MacBook Pro and a Dell XPS would now choose the MBP, since it has better battery life and performs the tasks people buy laptops for better than an XPS does. Intel is gonna feel pressure from OEMs to produce better products, or they'll shift to ARM chips as well.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    The thin-and-light BGA market is not applicable to DIY; granted, these machines move a lot of volume. But the majority of people shop for gaming PCs. And factor in the price too: Apple products are expensive; look at the MBP with the same M1, it costs more than $1K. And the Intel MBPs are more expensive than the M1 models at the moment, so Intel Macs still exist. AMD Ryzen is very good at limited TDPs, and with RDNA1 or RDNA2 on the chip competing against Intel's Xe it's not an easy battle. And nope, that 10% market share is not going to accelerate, tbh; more people are buying smartphones and tablets than computers.

    And on top of that, the GaaS offerings of Stadia / xCloud / Luna might be OS/platform agnostic. They still have to prove themselves, given the lockdown of every piece of software, gaming, and entertainment, but that's not the point of this discussion. Gaming is one area where Apple can never touch AMD / Nvidia, and with Intel spending more money on Xe GPUs that is not going to change either.
  • Chinoman - Tuesday, November 10, 2020 - link

    Yeah, but not everyone is trying to build a gaming PC. Have you not noticed the massive growth of ARM-based games in recent years?

    Most gaming titles now aren't even about pushing graphical or technical limits like back in the days of Crysis. People just want a system that can run the shit they wanna play, like Among Us or Fall Guys. Obviously, there's going to be a niche of PC gamers who want to push the envelope. These SoCs aren't for those people. They're for people who want the performance of a fast laptop and the battery life of a phone.
  • Qasar - Tuesday, November 10, 2020 - link

    keep in mind people, to vladx, intel is god.
  • vladx - Wednesday, November 11, 2020 - link

    Lol, nice attempt trying to "poison the well" against me. Sorry that speaking facts triggers you so much.
  • Qasar - Wednesday, November 11, 2020 - link

    that fact is, your previous posts on here, have been pro intel. anytime something bad is said about intel, you are right there defending them. sorry that your god intel, is in a tough time right now.
  • vladx - Thursday, November 12, 2020 - link

    "that fact is, your previous posts on here, have been pro intel. anytime something bad is said about intel, you are right there defending them. sorry that your god intel, is in a tough time right now."

    Yeah sorry that I'm not a mindless sheep that goes with the opinion of the majority, I prefer using my brain instead. You should try it sometimes.
  • Spunjji - Thursday, November 12, 2020 - link

    @vladx - "Yeah sorry that I'm not a mindless sheep that goes with the opinion of the majority" is an interesting flex for someone who routinely shows up to defend the honour of the world's largest and most profitable desktop, datacentre and laptop CPU designer and manufacturer, whose products have been considered to be a default by every PC-illiterate person on the planet for decades now.

    You're reminding me of the right-wing tools who say they reject the mainstream media and then obsessively parrot Fox, Breitbart and Daily Mail talking points - or say that "conservativism is the new punk". 🤦‍♂️
  • Nicon0s - Thursday, November 12, 2020 - link

    Wow those are quite weak excuses.
  • Qasar - Thursday, November 12, 2020 - link

    yea ok, that's why you keep defending intel, cause you are a mindless sheep.
    and loss of business is not a good thing, no matter how small.
  • Spunjji - Thursday, November 12, 2020 - link

    Apparently vladx would, because vladx is smort.
  • iphonebestgamephone - Tuesday, November 10, 2020 - link

    Ah yes, the YouTube speed tests where Qualcomm consistently beats the A-series in Blender and Cinebench every year...
  • ThreeDee912 - Tuesday, November 10, 2020 - link

    Well for one thing, they already dropped 32-bit x86 support last year. The summer event was on an A12Z from an iPad Pro on an unoptimized beta running an emulated x86 game. Wait for the actual M1 devices to ship. Once Cinebench, Blender, and 7zip are ported you can have all of those benchmarked too.
  • bji - Tuesday, November 10, 2020 - link

    Your profanity says a lot about your mind and makes you easy to ignore. Keep it up, I appreciate when walls can be ignored so easily.
  • MetaCube - Sunday, November 15, 2020 - link

    Not sure if retarded or dumb
  • eastcoast_pete - Tuesday, November 10, 2020 - link

    In addition to the number of actual units/CPUs sold, Apple also buys or bought Intel's higher revenue/margin CPUs, not the $ 60 Celerons. Furthermore, Apple is a marquee customer; with Apple going ARM-based, ARM's X1 design is more likely to land wins, especially if MS and others (Adobe, Autodesk..) make their software run native on ARM-based chips.
  • Nicon0s - Thursday, November 12, 2020 - link

    "Apple also buys or bought Intel's higher revenue/margin CPUs, not the $ 60 Celerons."

    Apple uses lower-performance CPUs from Intel than most PC OEMs do. Even if Windows PCs are often hundreds of dollars cheaper, they generally have better CPUs than the ones found in competing MacBooks, not to mention dedicated GPUs.
  • Spunjji - Thursday, November 12, 2020 - link

    You've said this twice, so I'mma point out that it's wrong twice.

    Apple have consistently used Intel's largest, most costly mobile chips in their MacBooks and hilariously expensive Xeons in the Mac Pro / iMac Pro devices. Everywhere else they tend to use comparable chips to PC OEMs, with the exception that unlike all the others, they have literally never produced a device with an Atom, Pentium or Celeron CPU in it.
  • grayson_carr - Wednesday, November 11, 2020 - link

    There are about 260 million x86 computers sold per year, and the latest estimates say 62.5% (162 million) of those contain Intel chips. Apple sells around 20 million Macs per year, so Apple buys over 12% of all Intel x86 chips. If that's not a major customer, then I don't know what is. The only companies that sell more computers than Apple are Lenovo, HP, and Dell.
  • GC2:CS - Tuesday, November 10, 2020 - link

    Have you tried to run SPEC in low power mode to see how the power efficiency changes?
    Or at least the clock speed table. It is 1.85 GHz, down from 2.66 for Lightning.

    If the small cores are significantly more powerful now, how do they compare to the A55 (sigh)?
    Which big core would be in some way equivalent to this year's and previous years' small cores?
    Can we say something like the A10's Zephyr was like the A4's main CPU, and Icestorm is like a Cyclone core now?
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    I'll follow up on LPM figures, but it's running at 1294MHz which is kinda slow.

    ~4x faster than an A55, ~ same power, ~2x better energy efficiency: https://www.anandtech.com/show/14072/the-samsung-g...

    The new small cores are roughly in line with an A9 big core.

    I don't have data pre-A9.
  • Fritzkier - Tuesday, November 10, 2020 - link

    I still don't understand why ARM doesn't make new efficiency cores...
  • eastcoast_pete - Tuesday, November 10, 2020 - link

    I guess ARM is too busy chasing after "big game", servers etc. Good efficiency cores aren't sexy enough for Aging Risc Machines, I guess. Let's hope archs like RISC-V catch on and up, and bust the ARM monopoly in mobile.
  • Spunjji - Thursday, November 12, 2020 - link

    My best guess: they have limited resources, and they don't *have* to improve on A55. Most of their customers don't really seem to care.
  • Spunjji - Thursday, November 12, 2020 - link

    Damn! Thanks for the additional context, Andrei.
  • name99 - Tuesday, November 10, 2020 - link

    You can ballpark a lot of it. If you're willing to live on the sane side of reality, for most purposes, and within a sane range of interpolation:
    - IPC is pretty constant across frequency (so performance scales as f^1)
    - total energy use scales like f^2
    - power scales like f^3
  • name99 - Friday, November 13, 2020 - link

    Follow up to the above. If we assume the frequency multiplier is ~6%, which seems to be what all the data indicates (3 vs 3.2GHz), then f^3 gives us that the power of an M1 big core at full speed is ~1.2x that of the A14 big core, so 6W rather than 5.
    So the worst case CPU scenario would be maybe 24W (sure, also the small cores, but they're going to be minuscule). Throw in a worst case GPU of maybe, what, 8W?

    This suggests that
    - in theory an MBA might have to throttle. What can that chassis dissipate comfortably? 15W? So you might have to dial the cores down to 2.5GHz or so. But not a huge deal.
    - and that MBP and mac mini are both going to be way overspecced in their fans, unless you plan to run them in hot poorly ventilated closets.
    - perhaps if you're planning to play some seriously well threaded games you may notice the MBA vs MBP/mini difference, but not otherwise.

    Can anyone see a fault in my reasoning? Or to put it differently, do people find that the iPad Pro ever throttles when playing the most aggressive games? I'd guess the iPad Pro has about the same thermal dissipation as an MBA.
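
    (A minimal back-of-the-envelope sketch of the arithmetic above, in Python; the ~5 W per A14 big core comes from the article's single-threaded power figures, while the 3.0 GHz vs 3.2 GHz clocks are this thread's assumption.)

        f_a14, f_m1 = 3.0, 3.2                       # GHz; the M1 clock is assumed here
        p_a14_big = 5.0                              # W per A14 big core, approximate
        p_m1_big = p_a14_big * (f_m1 / f_a14) ** 3   # power scaling ~ f^3
        print(round(p_m1_big, 1))                    # -> 6.1 W per big core
        print(round(4 * p_m1_big, 1))                # -> ~24.3 W worst case for four big cores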
  • HurleyBird - Tuesday, November 10, 2020 - link

    "We don’t know if this is comparing M1 to an AMD Renoir chip or an Intel ICL or TGL chip"

    You're being extremely generous. This is Apple we're talking about so that kind of comparison would be a nice surprise, but we're probably dealing with a low-end Comet Lake, assuming this graph is actually based on any kind of real data and isn't just a wishful guesstimation.

    For context, on the new MBP page, for the claim "up to 2.8x faster processing performance than the previous generation," the footnote lists a 1.4GHz quad-core processor. This would presumably be the i5-8257U in the previous gen MBP.

    In Apple speak, "up to" means "we cherry picked the absolute best benchmark." I'm sure you can find benchmarks where, say, a 4800U is around that 2.8x speedup compared to the i5-8257U. I found a benchmark (CB R15 Multi) that is around 2.43x almost immediately.

    So, that's Renoir out of the question. It's too close to M1 performance to ever work in that graph. ICL/TGL might be possible in an MT workload, but I wouldn't put money on it.

    I'll note that if M1 is faster than Renoir, which is certainly possible, it's likely slower than Cezanne.
  • HurleyBird - Tuesday, November 10, 2020 - link

    And I almost forgot that you also can't rule out these claims being based on GPU or NPU acceleration.
  • Glaurung - Tuesday, November 10, 2020 - link

    Apple has fine print details of what they're comparing the M1 to on their website. It's not as detailed as an Anandtech review, but they do identify exactly which Intel Macs they are benchmarking against for each claim.

    https://www.apple.com/mac/m1/#footnote-1
  • valuearb - Tuesday, November 10, 2020 - link

    A14 is 50% faster in single core than a Ryzen 4500U (Renoir) and only about 20% slower Multicore. M1 isn't just going to be faster than the A14 in single core, it's going to at least double A14 multicore performance.

    Cezanne doesn't have a shot at the M1 in single core, though it might get close in multicore. It also will likely remain far behind in performance per watt and cost.
  • Nicon0s - Thursday, November 12, 2020 - link

    >A14 is 50% faster in single core than a Ryzen 4500U<

    Maybe in some synthetic benchmark. The 4500U is AMDs low end Renoir after all.

    "M1 isn't just going to be faster than the A14 in single core, it's going to at least double A14 multicore performance."

    It only has 2 additional performance cores. In what world does that translate as double the multicore performance?

    >Cezanne doesn't have a shot at the M1 in single core, though it might get close in multicore.<

    You must be joking

    >It also will likely remain far behind in performance per watt and cost.<

    7nm is less efficient than 5nm but I don't see how it's also more expensive.
  • Spunjji - Friday, November 13, 2020 - link

    @Nicon0s - "It only has 2 additional performance cores. In what world does that translate as double the multicore performance?"

    To be fair to valuearb, that's likely not far off. The performance cores are between 3X (Int) and 4X (FP) the performance of the efficiency cores, so from a multi-core performance perspective it's like going from a 3.2 core chip to a 5.2 core chip. Working from the fairly safe assumption that M1 has higher power limits and/or runs at a slightly higher clock speed, a doubling of A14's sustained multi-core performance isn't an unreasonable outcome.
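
    (A quick sketch of that "effective core count" framing, under the assumption that one Icestorm core is worth roughly 0.3 of a Firestorm core, per the 3-4x gap above:)

        small_per_big = 0.3                    # assumed Icestorm : Firestorm performance ratio
        a14_equiv = 2 + 4 * small_per_big      # 2 big + 4 small -> ~3.2 "big-core equivalents"
        m1_equiv = 4 + 4 * small_per_big       # 4 big + 4 small -> ~5.2
        print(m1_equiv / a14_equiv)            # -> ~1.6x before higher clocks/power limits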

    We'll see soon enough how things shake out for Cezanne vs. M1. My bet would be on a narrow single-core victory in SPEC for M1 and a commanding multi-core lead for Cezanne.
  • Awanderer - Tuesday, November 10, 2020 - link

    I posted this in the live blog comments but am posting it here as well, because we do have some idea of what Apple is measuring their performance gains against if you dig deep:

    Found some apples to apples .... Apple says: "With an 8‑core CPU and 8‑core GPU, M1 on MacBook Pro delivers up to 2.8x faster CPU performance¹ and up to 5x faster graphics² than the previous generation." in their disclaimers, they say 1) Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad‑core Intel Core i7‑based 13‑inch MacBook Pro systems, all configured with 16GB RAM and 2TB SSD. Open source project built with prerelease Xcode 12.2 with Apple Clang 12.0.0, Ninja 1.10.0.git, and CMake 3.16.5. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.
    2) Testing conducted by Apple in October 2020 using preproduction 13‑inch MacBook Pro systems with Apple M1 chip, as well as production 1.7GHz quad‑core Intel Core i7‑based 13‑inch MacBook Pro systems with Intel Iris Plus Graphics 645, all configured with 16GB RAM and 2TB SSD. Tested with prerelease Final Cut Pro 10.5 using a 10‑second project with Apple ProRes 422 video at 3840x2160 resolution and 30 frames per second. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro.

    Correct me if I'm wrong, but isn't the 1.7GHz/Iris 645 TWO generations old? Tiger Lake would have something to say about this... Is Tim Cook talking up 5nm vs 14nm TWO MacBook generations back? The most recent MacBooks are the 10th gen and are 2.3GHz. Also, 10 seconds of stress-testing the GPU hardly allows for thermal throttling.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Apple is able to fool people to max by removing 3.5mm jack and removing charger from box and making phones with dead pixel zones for 4 years. So yeah logic doesn't work when the product is all shiny and has best in class synthetic performance tests and ridiculous comparisons to Cometlake and Vermeer.
  • valuearb - Tuesday, November 10, 2020 - link

    Anandtech is the one that did the benchmarks that show the iPhone A14 being on par with the best Intel and AMD can make, not Apple.
  • Spunjji - Friday, November 13, 2020 - link

    Apparently being able to read graphs and interpret the results makes us all gullible rubes who don't care about 3.5mm headphone jacks. News to me 🤷‍♂️
  • BlackHat - Tuesday, November 10, 2020 - link

    Yes, they are comparing against the SkyLake version when they have an Ice Lake model.
  • Awanderer - Tuesday, November 10, 2020 - link

    A flat out lie opens them up to litigation. I was wondering how Apple can say "previous generation" when we're talking TWO Macbook generations back, and it not be a flat out lie. I guess the only answer is Ice Lake isn't the previous generation Macbook until M1 is actually released. Tim Cook should get into politics with that sort of dirty twist if that's the explanation.
  • valuearb - Tuesday, November 10, 2020 - link

    They are comparing against the previous two port 13 inch MacBook Pro with the fastest Intel processor that it offered.

    Their only Ice Lake model was a one-off that was heavier and required a fatter case to achieve decent battery life with Ice Lake. And there is no doubt the M1 smokes it too, given Ice Lake was only 25% faster than the model they benchmarked against. So instead of 2.8x faster, it's "only" 2.2x faster?
  • BlackHat - Wednesday, November 11, 2020 - link

    Not really, because they compare product with product, like MacBook Pro vs MacBook Pro.
  • defferoo - Tuesday, November 10, 2020 - link

    They're comparing against the outgoing model of that machine. In this case, it's the 2-port 13" Macbook Pro. When they replace the 4-port version I'm sure they'll compare against the Ice Lake based Intel CPU.

    I know it isn't what we wanted to see (we love CPUs here so we want Apple compared to the latest Tiger Lake and Zen3 processors from Intel and AMD) and I'm not giving Apple a pass for the shady marketing speak that this entire presentation was filled with, but for the average consumer, it's easier to compare previous generation to this new generation.

    That said, since we understand these things, we have other benchmarks that we can use to infer the relative performance of M1 vs. TGL or Zen3. Even better, we can just wait until the machines come out and somebody will benchmark the crap out of it. In a couple weeks there are going to be so many M1 benchmarks flying around, we'll have a pretty good idea what the actual performance is like without vague multipliers.
  • Awanderer - Tuesday, November 10, 2020 - link

    I see what you're saying... it's definitely a notch above shady, instead bordering on deceptive trade practices, which is unusual even for Apple. A lot of companies engage in smoke and mirrors and unverifiable claims, but "previous generation" is tough to justify when the two-port model essentially carried over from 2019, after the introduction of the latest-generation 2020 Ice Lake model.

    Go to Apple's website, and for the M1 press release Apple says "Introducing the next generation of Mac"... So, shall we say Ice Lake is also the same generation, since it's still available as the 4-port? If M1 is next generation, then what is Ice Lake? Or are we saying we bifurcate the 13" and have different generations of Mac depending on Apple intentionally making minor modifications and keeping a model around, so it won't be a flat out lie?

    I agree, wait to see the real life tests. The deception here is inexcusable.
  • Jbmorty - Tuesday, November 10, 2020 - link

    I’m not sure I agree. If BMW say the 2021 3-series is 10% faster than the last generation (2019) 3-series, I don’t assume that includes the 2020 M3...
  • Chinoman - Tuesday, November 10, 2020 - link

    These commenters literally never cared about the performance of a MacBook Air or 13” MBP but now act like they’re up to date on this. Technology has been stagnant for too damn long. It’s turned everyone skeptical that we can even get gains like these anymore.
  • Awanderer - Tuesday, November 10, 2020 - link

    Not quite. I'm a BMW guy so I'll modify your analogy. It would be more like Toyota saying they're dropping BMW for the next-gen Supra, saying they made their own engine and it's better in every way, the greatest thing in 6-cylinder gasoline engine technology compared to the previous gen, and then comparing it to the N- rather than B-series 4-cylinder BMW engine. It's shady at best, a lie at worst
  • valuearb - Tuesday, November 10, 2020 - link

    You are so very easily butthurt over your own mistakes.
  • Chinoman - Tuesday, November 10, 2020 - link

    Let’s be real though is there a big difference between Intel from 2018 and Intel in 2020?

    Their numbers are probably accurate even if we used the latest offerings from Intel.
  • ioiannis - Wednesday, November 11, 2020 - link

    yeah that's right
  • Jbmorty - Wednesday, November 11, 2020 - link

    I like this way of thinking about it. And not just because I’m rocking an ‘18 B48 myself!
  • Spunjji - Friday, November 13, 2020 - link

    To be fair to Awanderer, the 3-series and the M3 are far more distinct than the "MacBook Pro 13" with 2 ports" and the "MacBook Pro 13" with 4 ports".

    On the flip side, the histrionics over Apple marketing to their own customers with the same comparisons they always use are more than a little bit tiresome.
  • ioiannis - Wednesday, November 11, 2020 - link

    well, imo comparing the M1 graphics to such an old product allows them to say 5x faster, which is better than saying 1.2x faster. Besides, I don't think the new Xe graphics are that much of an improvement (5x), but correct me if I'm wrong. In terms of throttling, the MacBooks with Intel also suffer from it. In any case, of course they are using the best case scenario, but I think it's still an incredible performance gain
  • Spunjji - Friday, November 13, 2020 - link

    The full-fat 96EU Xe iGPU in Tiger Lake is roughly 2.5x faster than the Iris Plus 645 in Apple's previous MBP 13" with 2 ports. So, even if they made the comparison to Intel's best, they could still claim it's 2x faster.

    We'll see how that shakes out in practice, of course, but I find it believable in a purely theoretical sense. Xe isn't particularly efficient in terms of performance-per-watt or performance-per-area, and Apple are working with a node advantage too.
  • YaleZhang - Tuesday, November 10, 2020 - link

    128 KiB L1 cache with 3 cycle latency. I must be dreaming. For a long time, 32 KiB was the standard. You can't make it any bigger than 32K or else you have to increase the associativity to > 16 (at the cost of power) or give up virtually indexed, physically tagged (at the cost of latency).
    How do they do this?
  • MetalPenguin - Tuesday, November 10, 2020 - link

    Apple designs their own custom memory circuits. You also have to take into account the minimum page size when comparing cache capacity and associativity. Apple has been using a minimum page size of 16KB for a while, not 4KB. Meaning that even without including techniques to handle aliasing, with 8-way associativity they could have a 128KB cache.
  • YaleZhang - Wednesday, November 11, 2020 - link

    Very good point. With 16 KiB pages, you have 14 physical address bits to use, so the capacity can be 4x larger: 64 bytes/line * 8 lines/set * 256 sets = 128 KiB. Earlier I assumed 16 way associativity. Ignore that - that's not what Intel or AMD uses; way too power hungry.
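
    (Restating that constraint as a tiny Python sketch, purely illustrative: a VIPT cache that avoids aliasing can only index within the page offset, so its maximum capacity is page size x associativity, independent of line size.)

        def max_vipt_capacity(page_bytes, ways):
            # set-index + line-offset bits must fit in the untranslated page offset,
            # so capacity is capped at page_size * ways (the line size cancels out)
            return page_bytes * ways

        print(max_vipt_capacity(4 * 1024, 8) // 1024)    # -> 32  (KiB, with 4 KiB pages)
        print(max_vipt_capacity(16 * 1024, 8) // 1024)   # -> 128 (KiB, with 16 KiB pages)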
  • name99 - Friday, November 13, 2020 - link

    But Apple may also be using a way predictor. (And likely are for power reasons.)
    A way predictor is problematic if you are doing speculative scheduling (which Apple also likely are) IF you're a lazy moron and handle replay via flushing or something similarly heavyweight.

    But if you have a quality replay mechanism (which is a gift that keeps giving in terms of how much additional speculation it allows) then a way predictor is probably well worth doing (and could allow them to grow the L1 even larger if time of flight allowed -- perhaps when 3D becomes available).
  • x064 - Tuesday, November 10, 2020 - link

    > The one thing that stands out the most are the memory-intensive, sparse memory characterised workloads such as 429.mcf and 471.omnetpp where the Apple design features well over twice the performance, even though all the chip is running similar mobile-grade LPDDR4X/LPDDR5 memory.

    The chip comes packaged with HBM, not LPDDR: this can be seen on the store page, by clicking on the "how much memory do I need" button.
  • DarkXale - Tuesday, November 10, 2020 - link

    There is no mention of HBM on the store page, including that button.
  • x064 - Tuesday, November 10, 2020 - link

    Rough translation from the Finnish store page: "The M1 chip brings the extremely fast unified memory (architecture) to the MacBook Air. The low-latency HBM memory is integrated into one component, which enables apps to powerfully share data between the CPU, the GPU, and the Neural Engine – so that all the things you do are done fast and smoothly."
  • defferoo - Tuesday, November 10, 2020 - link

    wow, you're right. I wonder why they didn't call this out? https://translate.google.com/translate?hl=en&s...

    unified HBM would be huge
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    Just to be clear, that product page is wrong.

    The M1 is running on 128-bit LPDDR4X.
  • defferoo - Tuesday, November 10, 2020 - link

    figures :) too good to be true.
  • name99 - Tuesday, November 10, 2020 - link

    The (non-translated US Store) English equivalent is
    "The M1 chip brings up to 16GB of superfast unified memory. This single pool of high‑bandwidth, low‑latency memory"
    I think there's a Translation issue here. US English says high-bandwidth. Finnish says HBM (in the original). I'm guessing a not-too-tech-savvy translator did not know that HBM means more than just "high bandwidth DRAM".

    You can compare the physical package to the A12X package. Looks the same.

    I'm sure HBM will come to the mac soon enough. But not in this context of the lowest end products.
  • Spunjji - Friday, November 13, 2020 - link

    Yeah, that sounds like the sort of thing they'd reserve for a high-end chip.
  • GC2:CS - Tuesday, November 10, 2020 - link

    Those are A14 benchmarks.
    Previous chips were PoPs, but the A12X came out with something very similar to what the M1 package shows.

    The A12X did not show the huge increase in memory perf that HBM would bring (I think).

    If the A14 used HBM then it would be a stacked TSV solution. But I think it would show more of a performance and latency gain if that were the case.
    Or a big efficiency gain.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Woah, as expected. Apple was able to beat both 10900K and 5950X at fraction of power consumption is it magic ? or something else... I think guys lets retire all our computers and buy this garbage product made of BGA trash and low I/O and virtually no Software or HW advantage over a PC. Where they showcased nothing but bs games and other drama of cringe worthy video montage from a group of developers.

    So x86 is dead ? Why don't we run a software which can run on both this BGA trash product and the x86 part and decide instead of these useless SPEC trash. 5nm and all drama but how much fast it is over the application performance of 865+ and the Kirin ? Not much, Youtube speed tests are all we need to see how fast the phone is performing in real life.

    Apple is shooting for Stars x86 beware, yea. I will wait for 2023 when the transition is fully done, if this is the world's fastest processor why do their Mac Book Pros with Intel CPUs are costing more ?

    I hope Andrei gets a Job at Apple's Srouji team that's all I have.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    And ST garbage is all of these synthetic bullshit tests, wonder what will happen once SMT kicks in the x86 to this ARM space age technology. Maybe Intel and AMD will go out of their businesses and AWS should abandon buying the Milan and start ordering Graviton 3 and M1.
  • SarahKerrigan - Tuesday, November 10, 2020 - link

    You seem like a sane, well-balanced person.
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    Unfortunately we can't ban people for being insane. Apple is literally ditching x86 and he's still in denial.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Keep it up Mr. Andrei. Wonder why didn't Apple employ you like how they did Anand. Yes I'm insane for calling out your article which has ZERO cross OS benchmarks, which doesn't translate to real world application performance.

    Just like all of your iPhone articles showing extreme performance increases over Android processors like Kirin and Qcomm and the even worst part is how they are priced exactly in conjunction with Apple's iPhones and iPads. Rather not cheap since there's a huge deficit of performance. Which can be observed with CPU pricing in Mainstream and HEDT (how X299 is dead now vs basic AM4), GPU pricing, Cars, Houses and everything, "You pay what you get" nothing more nothing less.
  • Chinoman - Tuesday, November 10, 2020 - link

    You can’t deny that Apple’s architecture has improved rapidly while Intel has stagnated. I don’t really understand why you’re upset when it’s clear that Intel has been sleeping while competition has worked hard to catch up.

    Apple’s execution within its model of vertical integration is really what’s been outstanding here. Just appreciate it and keep buying your x86 i7’s. No one will stop you. I promise.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Read my post again. There are no real world benchmarks, why would I run a silly goose chase benchmark with ST core only crippling it's performance (i.e x86 hw like i9 and Ryzen 9) vs the ARM cores here and declare the winner ? Run it at all cores, and then translate to the performance. The market will decide it's price.

    This blatant green pass quoting Apple in that small glorified bs SPEC bar and GB is lame like seeing half picture and deciding the winner.

    There's TDP throttling here too, run a strong workload and see how it crumbles in that pathetic fanless machine of Macbook Air. Ryzen 9 achieved the performance it has at whatever TDP they had and Intel i9 at whatever TDP they have or break the TDP with full blown max power at 100-200W, they do the workload which ranges from ultra high FPS or super fast Encoding / Decoding or Rendering work with ALL CORES. Beat that then we talk and decide who is fast and which is old, or the hypothetical scenario of being on a bigger TDP socket and blowing the same processors.

    ARM companies tried it and failed. Centriq abandoned, Marvell is full custom for the consumer, Fujitsu is again customer centric, Graviton 2 is only comparable one we have right now and it is also limited, EPYC7742 wipes the floor clean, with Milan dropping it's going to be a nightmare for Intel and the innovation that AMD did is spectacular vs this small pathetic feat of running a SPEC score powering up a BGA garbage anti repair product.
  • valuearb - Tuesday, November 10, 2020 - link

    Good point. Once Apple revs the M1 to run above 30W, how badly does it crush AMD and Intel then?
  • Chinoman - Tuesday, November 10, 2020 - link

    I think x86 is effectively dead for anything non-enthusiast level with this release. It’ll just take time to saturate the market, but the real world gains like battery life are hard to beat without going Arm. 99.9% of people are fine with the performance this will provide. At this point you’re arguing for a niche crowd more so than the average person. Just be happy this exists and think of it as a step forward in computing.
  • bigvlada - Wednesday, November 11, 2020 - link

    You might want to read the archive of any decent computer magazine from mid nineties. The same claim was repeated over and over again for new Motorola and IBM PowerPC machines and new Apple Power Macs. A few years later Apple almost went bankrupt. IBM should be the prime example of what happens when you try to do everything in house.

    Arm architecture does not have any intrinsic advantage over x86. In order to beat the opponent, you have to be much faster than him. Even that does not guarantee success, as we saw with DEC Alpha.
  • markiz - Thursday, November 12, 2020 - link

    Don't they state in the article that, in fact, this intrinsic advantage is the ability to go wider, because they don't have variable instruction length?
  • ex2bot - Tuesday, November 10, 2020 - link

    Intel paid you to say that. Incidentally, how do you get a gig like that, Quantum? Do they pay you in chips or wat? Bitcoin?
  • Kuhar - Wednesday, November 11, 2020 - link

    I must say I mostly agree with you, though your language is a bit offensive.

    What you do not realize is that the world is changing and Apple just made a CPU that is "the best" for the average person. By average I mean in the sense of the Gaussian function. One shouldn't fool oneself that Apple is made for computer enthusiasts, gamers, researchers, etc. who have heavy workloads on multiple cores, no. The world is now all about Instagram and instant media, burst information: get in fast, finish fast, next thing on the list. Apple's CPU is the best for this kind of work.

    Apple's CPU sure is way below average in most other CPU-bound tasks, but who really cares? The Instagram generation? Nopes. Apple figured out that the CPUs in most average people's machines idle 90% of the time, so they built their CPU accordingly, while Intel and AMD have overbuilt CPUs.

    I also think AnandTech could and should focus more on this part of CPU reviews.
  • Spunjji - Friday, November 13, 2020 - link

    @Quantumz0d - "I demand that you compare apples to entire trees of oranges, otherwise I'mma get irrationally upset about it. I shall attempt to justify this with valid but totally unrelated complaints about Apple's poor repairability."
  • ex2bot - Tuesday, November 10, 2020 - link

    You’re an Intel plant! I just realized. Nice try, Intel boy. Your dinosaur is going down and taking you with it.

    - Ex2bot, not owned by Intel (yet)
  • Spunjji - Friday, November 13, 2020 - link

    Like most of the angry people in here, it seems like you skipped to the bar charts without reading any of the explanatory pages. Or maybe you did and didn't understand them. Or hey, maybe you just like complaining and ranting whether or not it has any bearing on what's actually going on? That does seem to be 90% of your output here.
  • name99 - Tuesday, November 10, 2020 - link

    Like I always say -- every mental pathology that is ascribed to politics or religion is in fact a human universal and is seen as frequently in non-political domains.

    The Seventh Day Adventists still exist many years after William Miller, but that's unfair; the entire Church still exists many years after Paul.

    You can adjust your beliefs after reality doesn't go as expected. Or you go in uh, another, direction. The second choice has universally been more popular...
  • Spunjji - Friday, November 13, 2020 - link

    Sad but true.
  • vegaskew - Tuesday, November 10, 2020 - link

    Are you taking core count into account when comparing power draw? It wasn't clear to me. Also, I think it could be clearer that you are using a single-threaded benchmark here, so I think the overall performance of the 5950X is much higher than the A14, right?
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    Those power figures I'm referring to are single-threaded load power.
  • headeffects - Tuesday, November 10, 2020 - link

    Andrei, do you think this M1 is the same die as the upcoming A14X? I wonder this because of the same 4+4 CPU and 8-core GPU configuration. One thing that gives me pause is that the M1 uses in-package DRAM while the A-series chips stack it on top of the SoC, but I wouldn't think that necessarily makes it a different die.
  • sergebeauchamp - Tuesday, November 10, 2020 - link

    The PowerPC to Intel transition had a precedent for Apple: the transition of Mac hardware from Motorola 68k chips to PowerPCs a decade before.
  • blackcrayon - Tuesday, November 10, 2020 - link

    And a decade before that, they started the 6502/65816 transition to 68k =) With hardware backward compatibility solutions in some cases.
  • dontlistentome - Tuesday, November 10, 2020 - link

    I can understand why Ian didn't write this - all this talk of speeding on the M1 would have him coming out in a cold sweat expecting the rozzers on his tail.
  • Spunjji - Friday, November 13, 2020 - link

    😂
  • Sub31 - Friday, November 13, 2020 - link

    🤣🤣🤣🤣
  • Arian_amg - Tuesday, November 10, 2020 - link

    I had pretty high hopes for the X1; it looks like it's already well behind Firestorm 😂
  • Jaianiesh03 - Wednesday, November 11, 2020 - link

    About 26%. ARM's roadmap says they will have a 30% improvement in the coming years, so yeah, they are behind by about 2 years
  • GeoffreyA - Tuesday, November 10, 2020 - link

    Am at a loss for words. Such a giant ROB and massive PRFs. Was I dreaming or did I actually read all that? (It's 11.30 p.m.)
  • MetalPenguin - Tuesday, November 10, 2020 - link

    The Apple cores have very large structures. But in terms of ROB size, there are techniques you can use to have a window of (for example) 400 instructions without a 400-entry ROB.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    Thanks :)
  • name99 - Friday, November 13, 2020 - link

    Giant ROB is easy. Be more impressed by
    - the PRFs
    - the LSQ sizes
    - the renamer

    Those are the hard parts to keep scaling up.

    (Slight technical note. Growing the ROB is easy, getting value from it is not. Unless you grow your branch prediction quality at the same time, the larger ROB will not achieve much because after a certain point the unresolved branches piling up behind the blocking load will mispredict and work done after that point is mostly useless.)
  • GeoffreyA - Sunday, November 15, 2020 - link

    Thanks. It makes good sense. When I look at Sunny Cove, with its 352-entry ROB, I get the same feeling. Z3 was able to surpass Sunny's IPC using a much smaller 256-entry ROB (and smaller load/store), which I reckon is a better achievement. Incidentally, Sunny's L/S is fairly close to Apple's design.
  • GeoffreyA - Sunday, November 15, 2020 - link

    (To be fair to Sunny, Z3 has got a much larger micro-op cache, almost double in size.)
  • name99 - Sunday, November 15, 2020 - link

    Unlikely that the LSQ is close to Apple's design.
    Intel pay for the Moshovos patent so are presumably using store sets.
    WARF tried to sue Apple for this patent and ultimately lost, which suggests that Apple is using something different. I know of at least one superior alternative to store sets that has been published, so Apple are presumably using that or something even better.

    This is a pattern with which I have personal familiarity. When I was at Apple we were sued by ATT over the way I implemented random access in MPEG. At first the lawyers simply could not imagine that anyone had a better solution to the problem than their patent, until I slowly walked them through my mechanism, one step at a time.
    The people who invent things often find it very difficult to imagine that anyone could invent a better solution. If there is one single phrase that seems to encapsulate Apple's HW success, it's that they don't seem to know the meaning of the phrase NIH.
  • GeoffreyA - Thursday, November 19, 2020 - link

    Agreed: companies and their patents and their lawsuits make me a bit sick. I wonder how VVC is going to be affected by all the patent rubbish.
  • ChrisGX - Tuesday, November 10, 2020 - link

    "Apple’s performance and power efficiency claims here are really lacking context as we have no idea what their comparison point is."

    The way Apple is presenting things is inadequate but not exactly without context. Where the target of a comparison is undefined it is always against the previous generation of the same Apple product. Of course, a number of different chips with different performance characteristics may have been options in earlier generations of a particular product making the comparisons rather loose. Still, it is likely that comparisons are against the baseline model in a product line.

    There is a further complication, though. It appears that there might be different performance/power tunings for the M1 SoC, judging from the products it has been incorporated into. The fanless Macbook Air clearly has a more constrained tuning. It will be interesting to see whether we get different TDP variants of the M1 or whether everything is handled by (programmable?) power management and performance controller logic.
  • ChrisGX - Tuesday, November 10, 2020 - link

    What is the "Latest PC laptop chip"? Okay, I admit that is lacking context. It is hardly going to hurt Apple, however, if serious tech journalists start plotting these power/performance graphs and, without ever discovering what the latest PC laptop chip is, confirm that there is no x86-64 chip out there that can match the energy efficiency of the M1.
  • Quantumz0d - Tuesday, November 10, 2020 - link

    Go to the Apple website and check out the bottom of the page footnotes for the chips they compared to & nope these are not at all going to beat 10900K from what this article infers to at all, unless we see a benchmark that translates to the real world application performance, which is what a fair comparison would be not some SPEC benchmark and ST performance with burst load.
  • ChrisGX - Tuesday, November 10, 2020 - link

    Yes, footnote at
    https://www.apple.com/mac/m1/
    does tell us quite a lot about how Apple did its benchmarking.

    The point of my comments was energy efficiency, not outright performance, so I am happy repeating the claim that there is no x86-64 chip out there that can match the energy efficiency of the M1. Also, I am happy to wait and see what more 'real world' benchmarking shows about the performance of the M1 (and whatever higher end M series chips follow) rather than making the assumptions that you are making about relative performance.

    I would say that Intel just suffered a lot of damage to its reputation as a top rank supplier of microprocessors - today, it is looking distinctly second rate. It is up to Intel to provide the technologies and products to arrest further damage to its reputation.
  • Jbmorty - Tuesday, November 10, 2020 - link

    It might though. Apple says 2.8 x i7-8557U. Passmark give this a score of 7820. 7820 x 2.8 = 21896 as an estimate for the M1 in passmark. Passmark scores the 10900k at 24262. This is a wholly unreliable method of estimating the M1’s performance but certainly suggests Apple may be in the right ballpark. The claim is up to 2.8 x performance so it is conceivable that the M1 could outperform the 10900k on some benchmarks and maybe even some real world use cases.
  • Spunjji - Friday, November 13, 2020 - link

    Which 13" notebooks out there have a 10900K in them, z0d?
  • GeoffreyA - Tuesday, November 10, 2020 - link

    Armies of Santa Clara, of the Core and the Zen, 'tis time to stand together and fight, fight for the future of PC-earth. Even now the Enemy is moving, and secretly forges an uncanny weapon in the fires of TSMC, ever watching and brooding from his fastness in Cupertino ;)
  • bobdesnos - Tuesday, November 10, 2020 - link

    APPLE MARKETING
    this is all blah blah; Apple pays for blah blah. Use it with apps like Photoshop, AE and plugins, do 3D rendering, the real world baby; 2K $ is not free
  • thedillyo - Tuesday, November 10, 2020 - link

    I appreciate the time you must have spent on the article and actual facts divulged but the piece seems rushed, at times incoherent and is riddled with typos and does not flow properly.
  • WaltC - Tuesday, November 10, 2020 - link

    This "deep dive" to me seems long on speculation but short on concrete facts--which is not your fault--only thing, shouldn't call it a "deep dive" into something you don't even have on hand to test...;) It's more of a "speculation" piece, I expect. It's interesting to look at single-core benchmarks, I suppose, that likely have not yet been optimized for Zen 3, but doesn't that sort of miss the point of what AMD is doing with 8/12/16/32 & 64-core CPUs with SMT? At this point, we have nothing but marketing from Apple--which is typical of the company--always has been. Apple frequently states it's "best" at this and that when plainly it isn't...;)
  • Hul8 - Tuesday, November 10, 2020 - link

    Wait for it: Walled garden. iMac App Store. Apple taking 30% off all purchases and subscriptions - including professional software.
  • valuearb - Tuesday, November 10, 2020 - link

    The Mac App Store has been around for nearly a decade. And Apple has already confirmed they aren't locking down the Mac like iOS, they understand different use cases better than you apparently.
  • ex2bot - Tuesday, November 10, 2020 - link

    Apple’s cut is way too high. They need to look at Microsoft and Steam, for example, for more sane percentage takes.

    Microsoft is a more reasonable 30% while Steam has the volume to only take 30%. Makes Apple’s app stores look downright greedy.

    Sources:
    https://www.theverge.com/2018/11/30/18120577/valve...

    https://en.wikipedia.org/wiki/Microsoft_Store
  • Kilnk - Tuesday, November 10, 2020 - link

    ex2bot Apple's cut is 30% too wtf you talking about lmao
  • ex2bot - Tuesday, November 10, 2020 - link

    Oh, it is?? Hmm. They ALL take 30%? (I was poking fun at Hul8.)
  • amouses - Tuesday, November 10, 2020 - link

    As a predominantly Windows and MS guy since well about 1979 .... if fluppin Apple can do Rosetta2 emulation of X64 better than Microsoft's current lame attempts for their ARM .. then I will finally give up. Apple wins.
  • Kilnk - Tuesday, November 10, 2020 - link

    The problem with Microsoft is that it has neither direction nor commitment, and it takes both to do something like this. It's not that they technically can't, it's that they just won't. Every few years they have some project about creating a new version of Windows from the ground up, but they always end up trashing it in favor of this weird Frankenstein OS that we have. It will get worse and worse because Microsoft cares less and less about Windows as they just want to throw everyone into the cloud. If gaming wasn't holding me back I would have switched a while ago.
  • ioiannis - Wednesday, November 11, 2020 - link

    Another thing is that Microsoft isn't fully committed to ARM; they release one product a year, and the rest of their lineup is still x86. The sales of the Surface Pro X - or whatever it is called - are not enough to push developers to make their apps run natively on ARM. Apple, on the other hand, just made their most successful laptop an ARM laptop
  • Cubesis - Wednesday, November 11, 2020 - link

    Love to see people finally get over their Apple hate and give credit where it’s due lol
  • Spunjji - Friday, November 13, 2020 - link

    Hey, I still hate them, but I'll be damned if I let that get in the way of being excited about a new CPU core design!
  • citan x - Tuesday, November 10, 2020 - link

    This is a great article and a lot faster than I was expecting. These Apple chips have really become something.

    The silicon is very powerful, but we have yet to see if Mac OS can take full advantage.
  • Vya - Tuesday, November 10, 2020 - link

    No one is paying attention to the kind of transistor budgets and logic that Apple needs to get these sorts of numbers. See what's going to happen when AMD, Intel and ARM eventually expand their designs to have 200KB+ instruction/data caches and 600+ entry instruction windows and all of these other things that boost performance in those lovely SPEC and Geekbench tests. The fact of the matter is their designs are way more efficient in terms of extracting similar or better performance from much smaller and narrower cores.
  • Chinoman - Tuesday, November 10, 2020 - link

    The whole point is that Apple isn’t even a traditional chip company but still managed to come out and drop this on their competition. How is it that AMD and Intel, when their entire job is to produce competitive chips, are now somehow behind and need to prove themselves like some sort of underdog?
  • Vya - Wednesday, November 11, 2020 - link

    I don't know what "a traditional chip company" is supposed to mean. But what I do know, and it looks like everyone else doesn't, is where these chips are meant to compete. AMD and Intel cater primarily to the server space, where their designs would crush anything Apple could make any day of the week; their designs are much more area efficient and can therefore fit a tremendous amount of compute per package, more efficiently so as well. Apple doesn't need or want to compete with them in that regard, so the areas where AMD and Intel operate mostly do not overlap with theirs. So they don't need to prove anything; it simply doesn't matter to either party.
  • Chinoman - Wednesday, November 11, 2020 - link

    I don’t think you understand the point that’s being made. Intel has failed to come to market with a compelling product and allowed somebody else to fill that niche, resulting in a loss of revenue from that segment.

    They had a lead years back but squandered it on incremental improvements instead of providing broader, more versatile offerings. You mention compute power as if most of the population buying these will notice the difference between an i9 and an M1 when using their machines daily. x86 will stick around but is increasingly being relegated to business use cases like servers rather than consumers. I think it's a good shift in the way things have gone; as even you've noted, AMD and Intel primarily cater to the cloud farms now.
  • Vya - Thursday, November 12, 2020 - link

    The point that is made seems convoluted. If most of the population won't notice the differences between an i9 and an M1, then how does that fit with your idea that Intel has not come to market with a compelling product? How do you make such a product if most of your users can't tell the difference? See the problem?
  • Spunjji - Friday, November 13, 2020 - link

    Intel's customer is Apple in this context, not the end-user. Apple have to use Intel's CPUs to make a compelling product for their own end-users - and as the MacBook Air has shown, that's been a struggle recently (check out the stories of it throttling).
  • Ppietra - Saturday, November 14, 2020 - link

    It seems like a problem when an entry-level chip is able to beat a high-end chip at a lower price point and with lower power consumption. People will notice that!
  • techconc - Wednesday, November 11, 2020 - link

    It's even worse than that. This article is exclusively focused on the CPU portion of the SoC that Apple makes. I'd argue that what really sets Apple Silicon apart from Intel counterparts are things like the dedicated neural engine, the AMX matrix multiplication units, the ISP used for cameras, the secure enclave which makes a more secure computer and boot process, etc, etc.. CPU aside, there is nothing from Intel that competes with that.
  • ElvenLemming - Wednesday, November 11, 2020 - link

    That's true, but I think it's up for debate still how much the average consumer will see an immediate benefit from all of that. People with FCP or Garageband workflows almost certainly will, but I'm really curious how long it takes for the average media-consumption-only use case to see any noticeable benefit.
  • techconc - Thursday, November 12, 2020 - link

    A lot of that optimization is automatic and built into common APIs, etc. If you do encryption, you get built in hardware acceleration. If you're doing HEVC video, you get built in hardware encoding/decoding. If you use machine learning frameworks, you automatically get AMX acceleration, etc. If you use the camera, you get ISP features and acceleration, etc. Remember, they did say Big Sur was optimized for Apple Silicon. I'm betting you're getting much of that benefit for free or very little additional work in some cases.
  • Sub31 - Friday, November 13, 2020 - link

    "The secure enclave which makes a more secure computer and boot process"
    Intel ME, AMD PSP, have existed for years. The secure enclave is not necessarily a good thing either. There is a reason why OEMs sell devices to government agencies with them disabled.
  • ex2bot - Tuesday, November 10, 2020 - link

    You’re comparing Apple’s current chips to Intel and AMD’s theoretical chips from the future? That’s a really good point. I’m kind of jealous I didn’t think of it.
  • Vya - Wednesday, November 11, 2020 - link

    I am not comparing anything, I just pointed out that their designs would inevitably grow wider as time goes on. People are overestimating Apple's ability to make really good microarchitectures; what's certain is that they make really wide microarchitectures, the rest is highly debatable. Everyone can make something bigger, that's not particularly impressive to me; it's well known that as cores grow wider you get ever increasing diminishing returns. If you throw in everything you have from the get-go, you'll end up with a really bloated and difficult to optimize design.
  • Ppietra - Saturday, November 14, 2020 - link

    seems to me that Apple has an optimised design, considering what it is able to achieve at a lower power consumption, year after year!
  • Kilnk - Tuesday, November 10, 2020 - link

    As the article points out on page 2 : "The huge caches also appear to be extremely fast – the L1D lands in at a 3-cycle load-use latency. We don’t know if this is clever load-load cascading such as described on Samsung’s cores, but in any case, it’s very impressive for such a large structure. AMD has a 32KB 4-cycle cache, whilst Intel’s latest Sunny Cove saw a regression to 5 cycles when they grew the size to 48KB. Food for thought on the advantages or disadvantages of slow of fast frequency designs."
  • techconc - Wednesday, November 11, 2020 - link

    That's just it, Apple Silicon requires less transistor budget to implement such things as they don't have to deal with things like unequal-sized instructions like Intel does. Also, how many transistors are wasted converting x64 CISC instructions into workable micro-ops? etc... If you bothered to read the article, you'd see why Apple is easily able to make their chip wider and provide higher IPC than Intel can.
  • name99 - Wednesday, November 11, 2020 - link

    You have it exactly backwards.
    Apple gets their performance from MASSIVE numbers of transistors. They understood when they started designing that the future was transistors that kept getting smaller but not much faster.

    Intel and AMD both have substantially lower transistor density because they have prioritized transistor speed, ie GHz. Is this a sensible tradeoff? I'd say no! Apple can match their performance with less silicon area and a lot less power.
    Apple is designing to make best use of today's processes; Intel (and even to some extent AMD) are still stuck in a design mentality of 2000, still not truly accepting just how powerful small (slower, but many many more) transistors can be when organized optimally.
  • Vya - Thursday, November 12, 2020 - link

    Things such as humongous instruction windows and massive numbers of in-flight instructions are extremely inefficient in terms of power/area as they do not scale performance linearly, not even close. There is only so much ILP that can be extracted; you still need clock speed. The best way to design processors is to strike a balance between clock speed and transistor budget, and the only ones I have seen do that are ARM themselves; their cores have never been particularly wide or high frequency and they are very power and area efficient.
  • Vya - Thursday, November 12, 2020 - link

    never been*
  • techconc - Thursday, November 12, 2020 - link

    No, you're missing the point. Why do you think the Intel architecture is only 4 wide? It's due to the complexity of decoding CISC instructions. That's a limitation Apple Silicon doesn't have to deal with. Yes, Apple silicon is faster due to more transistors. That allows them to have deeper caches, etc. However "silicon density" is not a factor of processor speed. How many stages your pipeline is broken up into is the biggest factor which determines how high you can clock. However, as we've seen with the Pentium 4 Netburst architecture, highest clock speed does not equal highest performance. That trade-off was a bad route which is why Intel back tracked with their Core designs.
  • Vya - Thursday, November 12, 2020 - link

    Not that many, a CPU today needs no more logic for translating an x64 instruction to microcode than one did 15 years ago. And you are completely wrong with your assertion that Apple needs less transistors, just those L1 caches alone take up an insane amount of space. Apple can make their chip wider because they want to, I've already explained in a comment above, they don't need to fit many cores on a single chip hence these massive cores. Intel and AMD need to, hence narrower cores.
  • Kilnk - Tuesday, November 10, 2020 - link

    Isn't the consistent trajectory of performance increase a little odd, as if it was purposefully staggered?
  • Chinoman - Tuesday, November 10, 2020 - link

    You’re seeing TSMC’s staggering of their process tech. All companies using them for fabrication are basically bottlenecked by TSMC’s ability to fabricate at a certain node.
  • Kilnk - Tuesday, November 10, 2020 - link

    I see. Thank you for the clarification.
  • linuxgeex - Tuesday, November 10, 2020 - link

    Samsung released the M3 (Meerkat) in 2018 ;-)
  • odin607 - Tuesday, November 10, 2020 - link

    Why does the Frequency vs Thread count chart only go up to 6 instead of 8? Did I misinterpret this as having 4 big + 4 little?
  • odin607 - Tuesday, November 10, 2020 - link

    I guess the better question is why are only 2 big cores listed.
  • Andrei Frumusanu - Tuesday, November 10, 2020 - link

    Those are A14 numbers, which has 2+4 cores.
  • odin607 - Tuesday, November 10, 2020 - link

    Reading is hard, I see that now thanks.
    In retrospect that makes more sense anyways given the section.
  • id4andrei - Tuesday, November 10, 2020 - link

    Mr. Frumusanu, is a 5W CPU in the same ballpark as a 50W Zen3? What is the difference in TDP between the Ryzen5 and the A14 for? I always read the SPEC bench and GB respectively as efficiency benchmarks, performance per watt.

    If, theoretically, I could run the A14 at 5W in a desktop, would I achieve the same results in intensive CPU workloads as a 50W Zen3?
  • techconc - Wednesday, November 11, 2020 - link

    How many cores in the Zen3? Make sure you're doing an "apples to apples" comparison (pardon the pun).
  • nwerner7 - Tuesday, November 10, 2020 - link

    I am concerned that wider architectures are going to repeat the mistakes of Bulldozer by AMD. Why are architectures for processors going wider again?
  • BlackHat - Wednesday, November 11, 2020 - link

    Also, isn't ARM more transistor-demanding than x86? I'm just asking because many sites, including WikiChip, say that.
  • Unashamed_unoriginal_username_x86 - Thursday, November 12, 2020 - link

    I think you've confused wide with deep. The A14 performs better than the 10900K at <60% of the clock speed. The FX-8150 performs worse than the 2600K at 110% of the clock speed. The A14 is 16(?) cycles deep with an 8-wide decode; Bulldozer is 20 cycles deep with a 4-wide decode per module.
    I got most of that from Wikipedia, don't quote any of it.
  • name99 - Friday, November 13, 2020 - link

    Apple have shipped, what, by my count 8 wide (and wider every few years) CPUs since they started this game, and have not made "the mistakes of Bulldozer".

    (BTW, think of that. New model every year, impressive changes every year, they're now doing three core designs every year [big core, small core, tiny core] plus GPU plus NPU! And no screw-ups.)

    I think we can rest assured that they know what they are doing.
  • boozed - Tuesday, November 10, 2020 - link

    Back when the rumours of Apple switching from Power to <something else> were swirling I predicted ARM.

    It seems I wasn't wrong, I was just 15 years early...
  • boozed - Tuesday, November 10, 2020 - link

    I should add that, realistically, it was a guess. The word "prediction" is too often misused these days.
  • abufrejoval - Tuesday, November 10, 2020 - link

    It really fits very nicely with the Nuvia story and claims you reported before: I really appreciate the insight, and indeed foresight, you gave with them.

    I have not found that anywhere else, at least not this side of a paywall (hi Charly!).
  • mannyvel - Tuesday, November 10, 2020 - link

    I was wondering why the instruction cache on the M1 was bigger than the data cache. Now I think I know: it's to keep those pipelines full. That implies Apple believes it's more important to keep the speculative execution machinery fed than to get the data in faster. I suppose it's optimized for typical macOS workloads, where most of the time you're not plowing through big chunks of data; most of the time the code is figuring out what to do with small chunks of data.
  • MetalPenguin - Tuesday, November 10, 2020 - link

    There are a few factors here. It depends on the workloads that Apple is optimizing for, which may have larger instruction footprints. So it could be more performance/watt efficient to increase the size of the instruction cache versus the data cache. Also, for these high-performance cores it's generally harder to increase the size of the data cache for a variety of reasons.
  • abufrejoval - Tuesday, November 10, 2020 - link

    It is very hard to overestimate the industry impact of full desktop performance without any moving parts, on a single charge, for all the waking moments of a day...

    ...except perhaps when you're staring at a 42" 4K screen in your lock-down home office, typing on an original IBM PS/2 keyboard which no butterfly or scissor switch can match, while Ryzens and 250-watt dGPUs sleep like dragons at your feet, only waiting for you to join them in a quick game.

    Yes, my Whiskey Lake ultrabook suddenly looks very old and "breathless". But I haven't touched the thing since I stopped flying every other week. And even then I really never work for any length of time without a power outlet, two small or one big extra screen, and a LAN connection via the dock.

    Outside I enjoy two free hands, rarely even pull my mobile and should I need a computer I have a car to carry it. I don't jump from coffee shop to coffee shop, customer to customer. I'd use my Android tablet if I wanted to.

    Yes, the computing power/consumption proposition of M1 is very attractive. As an incremental value to any of my current laptops, I'd go for it as a replacement part.

    But not at the price of becoming an iSlave.

    Please everybody, get to work, Apple has put down a new mark.

    But understand that I don't want to be an mSlave or gSlave (nor WeSlave or AliSlave for that matter) either.

    x86 is about personal computers, freedom of choice, and an open architecture.
    I don't mind changing the instruction set, but I do mind changing the culture.
  • webdoctors - Tuesday, November 10, 2020 - link

    x86 definitely isn't as open as ARM; no idea what you're talking about. Everyone and their dog can license an ARM core, and they're cheap as fast food.
  • BlackHat - Wednesday, November 11, 2020 - link

    I've always been curious about that: ARM is really open, but it is owned by a single entity (ARM Holdings). Doesn't that make it more powerful in terms of control than x86? I mean, you have Intel, IBM, VIA and AMD with x86 licences, but ARM Holdings is just one company.
  • abufrejoval - Wednesday, November 11, 2020 - link

    ARM is open, but the M1 isn't.

    That's why I asked SoC design houses to get to work and create an M1 alternative that doesn't require you to become an iSlave to use it.

    And I fully understand that such an alternative won't be as cheap as dog food: 16bn transistors on a single chip at 5nm won't come cheap for a while.
  • samsonjs - Wednesday, November 11, 2020 - link

    x86 isn't going away; it's just not the fastest consumer CPU for "computers" on the market by all measures anymore. As you pointed out, there's no road for Apple's chips to replace x86 ones.

    (I put "computers" in quotes since the A-series chips have been beating Intel laptop chips in single-threaded performance for a couple of years now, but in iPhones and iPads instead of Macs)
  • Chinoman - Wednesday, November 11, 2020 - link

    Interesting choice of words. I don't think anyone's trying to enslave you. They're just gadgets. You can own them all if you choose to; it's not some sort of allegiance.
  • zodiacfml - Tuesday, November 10, 2020 - link

    I hate Apple more than Intel, but this is impressive, referring to the whole SoC. Intel should have been at this tech level at a minimum, but they operate at high efficiency; they operate like a car maker.
    What is the RAM in the package, DRAM or HBM?
  • ioiannis - Wednesday, November 11, 2020 - link

    I think it is DRAM; apparently the HBM claim came from a Finnish translation.
  • realbabilu - Tuesday, November 10, 2020 - link

    Does the M1 support special instructions like SSE or AVX2, or did Apple create new instructions for integer/floating point?
    I wish Maxon would build an M1 version of Cinebench; it would tie all the CPU benchmarks together.
    For GPU benchmarks, I wish 3DMark could do the same: DX12 vs Metal.
  • Blark64 - Wednesday, November 11, 2020 - link

    https://www.maxon.net/news/press-releases/article/...
  • realbabilu - Saturday, November 14, 2020 - link

    Meanwhile, vector and raster benchmarks (single/multi CPU and GPU, Metal vs DX12) have already come out via Affinity (an Adobe competitor), which has native M1, PC, and Intel Mac compatible software.

    https://forum.affinity.serif.com/index.php?/topic/...

    Vector
    M1 (2032) vs i9 9900K (1762) vs Ryzen 9 3900X (2079) vs 4700U (1371)

    Man, that tiny M1 on the vector bench is as fast as a 105W 12-core! It means that for floating-point calculation the M1 may be fast even though Intel has AVX.

    Raster bench
    M1 (538) vs i9 9900K (668) vs 4700U (312) vs 3900X (877)

    Raster: the desktops win against the M1.

    How about the GPU?
    M1 GPU raster (7907) vs 2080 Super (12237) vs 1650 Super (8125) vs Vega Radeon APU (339)

    It has a GPU that is almost as powerful as a 1650 Super.
  • realbabilu - Saturday, November 14, 2020 - link

    I forgot Swift already has the Accelerate API, which has SIMD and BLAS/LAPACK integrated.
  • realbabilu - Monday, November 16, 2020 - link

    An M1 Cinebench result just came out:
    M1 multi-core 4530 vs
    4800HS multi-core 10600

    So PC guys can relax now;
    it's not going to replace your render PC or render farm. Still, the M1 is about as powerful as an i5-8300H laptop.
  • realbabilu - Monday, November 16, 2020 - link

    https://m.hexus.net/tech/news/cpu/146878-apple-m1-...
  • Ppietra - Monday, November 16, 2020 - link

    They say M1, but the benchmark seems to have been run on the Developer Transition Kit, which uses an A12Z. You can find those results in some forums, where you can clearly see that the chip was running at 2.5GHz/2.3GHz, which are the A12Z clock frequencies.
    Considering what we already know about the M1, we should expect at least 50% better performance than the A12Z. In multi-core it will still be behind the current high-end notebook CPUs, but considering the M1 only has 4 performance cores (+4 low-power cores) and consumes less than half the power, it would be quite an achievement.
    Next year Apple’s high-end notebook SoC, with 8+4 cores, could overtake the best x86 notebook CPU in Cinebench while consuming less.
  • hmdqyg - Tuesday, November 10, 2020 - link

    I was always curious about the A12Z's performance if it had a fan. Now Apple brings us the M1, which is essentially an A14X, and gives it a fan.
  • name99 - Tuesday, November 10, 2020 - link

    If we assume that the ONLY constraint (over a sensible extrapolation range) to running the A14/M1 faster is thermals, then we can use scaling. The cube root of 2 is, close enough, 1.25; so if you're willing to generate twice as much heat (say 10W rather than 5W), you can run at a 25% higher frequency, which will be, close enough, 25% higher single-threaded performance.

    Are thermals the only constraint? Who knows? Other constraints could be electromigration, current delivery, or raw transistor switching speed.
    (Apple designs its circuits around small, dense, low power transistors. So the fact that AMD can switch TSMC transistors at 5GHz doesn't mean Apple can necessarily do the same. Even apart from FO4 issues [how transistor speed translates into CPU clock cycle] the raw transistor speed of the Apple transistors in critical paths is almost certainly lower than for AMD.)

    3.75GHz (3*1.25) may be too high, but 3.5GHz strikes me as probably feasible.
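
    A minimal worked version of that scaling argument, assuming dynamic power dominates and supply voltage scales roughly linearly with frequency (the standard back-of-envelope model, not a claim about the actual M1 voltage/frequency curve):

        P ≈ C · V² · f,  with V ∝ f  ⇒  P ∝ f³
        f_new / f_old = (P_new / P_old)^(1/3) = 2^(1/3) ≈ 1.26
        3.0 GHz × 1.26 ≈ 3.8 GHz

    In practice static leakage, voltage floors and the other constraints listed above all eat into that headroom, which is why something nearer 3.5GHz is the more cautious guess.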
  • nyoungman - Tuesday, November 10, 2020 - link

    Regarding the GPU, Apple has posted a "Tailor your Metal apps for Apple M1" session with some technical details. At 6:45 they demonstrate Baldur's Gate III running at 1080p with Ultra settings, though they don't state the frame rate. https://developer.apple.com/videos/play/tech-talks...
  • ex2bot - Tuesday, November 10, 2020 - link

    I’m tempted to get BG3 early to compare the new Pro to my 16” i9 5500m.
  • Vik32 - Tuesday, November 10, 2020 - link

    Andrei, thanks for your work! Great review as always. Looking forward to your review of the M1 SoC and the new iPhone 12.
  • nukunukoo - Wednesday, November 11, 2020 - link

    The M1 seems like a CPU that was still designed for the iPad. Likely a last-minute modified A14X with 2 USB4 controllers added and an extra bit in the memory controller to support 16GB. This actually tells us a lot: the next iPad Pro will have a USB4/TB3 port and 8GB of memory (Samsung has already been making the newer, higher-capacity mobile chips since last year). The M2 will likely show up on the bigger MBP next year, the M2 being a "true" Mac chip, likely with SMT and 4 independent USB4 channels.
  • KarlKastor - Wednesday, November 11, 2020 - link

    I agree. The M1 looks like what you would have expected of an A14X.
    So this is fine for the first transition of the lower-power devices.

    Maybe next year they will come with three designs.
    A15 for phone, lower iPad
    A15X/M2 for iPad pro and lower option notebook
    M2X (or whatever they call it) for higher-TDP notebooks and maybe the lower-option Mac Pro

    But for a proper Mac Pro CPU they will need to put more effort into it. The interconnect for all the cores is a completely different matter from what they have been dealing with up to now.

    Even if they just want to go with more than four big cores in the MacBook Pro, it will be more challenging. Maybe they will have to go to a different cache design, with a private L2 and a large L3.
  • markiz - Thursday, November 12, 2020 - link

    What are some of the demanding things you think an average air user would do that an average ipad user does not?
  • jvakon - Wednesday, November 11, 2020 - link

    Can someone please explain to me why the deprecated SPEC2006 suite of benchmarks is still used for comparison of mobile devices? There is a reason SPEC puts out new suites (the current one is 2017), and that's because over time devices and compilers "learn" how to implement optimisations for the benchmarks that are not found in real-world software.
  • techconc - Wednesday, November 11, 2020 - link

    SPEC takes longer to run and thereby helps prevent the thermal-throttling cheats that we see all too often from Android OEMs. Anyone can compile SPEC themselves, so you can remove compiler differences from the equation. Then again, Geekbench attempts to do that with the compiler as well. No synthetic benchmark is perfect, but it does provide a foundation for this kind of analysis.
  • jvakon - Wednesday, November 11, 2020 - link

    I was not suggesting not using SPEC benchmarks. Rather, I would prefer using the more recent ones! But, I was informed elsewhere the reason is practical; SPEC2017 requires a Fortran compiler for some elements, and such a compiler is absent in iOS & Android.
  • Coldfriction - Wednesday, November 11, 2020 - link

    Because Apple asked them to use it and required it in the agreement with Anandtech in exchange for either access to the chip or due to a marketing agreement most likely. Hardware vendors provide "recommended" test suites and Anandtech has a special love for Apple. Anand himself started becoming very pro-Apple before he sold Anandtech.
  • millfi - Wednesday, June 23, 2021 - link

    You state this with certainty, but do you have any proof?
  • Zoolook - Wednesday, November 11, 2020 - link

    Great article Andrei!

    However you write "The long time PowerPC using company", but Apple used PowerPC processors for less time than they did Motorola 680x0 and Intel x86.
    Their switch from Motorola 680x0 to PowerPC would be the template for their forthcoming transitions of architectures.

    The current switch would be the third architecture transition since the introduction of the Apple Lisa (before that they used a MOS 6502)
  • JalaramMurti - Wednesday, November 11, 2020 - link

    I was thinking that too... Correct!
  • blackcrayon - Wednesday, November 11, 2020 - link

    Yeah, crazy. 5 major chip architectures, 4 transitions...
  • JalaramMurti - Wednesday, November 11, 2020 - link

    Question is, will I be able to BootCamp into Windows 10 ARM?

    That would be sick
  • mandirabl - Wednesday, November 11, 2020 - link

    Nope, won't be possible. a) Boot Camp is history, b) Microsoft isn't releasing Windows on ARM to private individuals, c) WoA isn't 64-bit (x64 emulation) ready yet, and d) that kills all backward compatibility. So there's next to no reason for booting into Windows 10 on ARM.
  • Jaianiesh03 - Wednesday, November 11, 2020 - link

    Virtualisation is the way to go
  • DanD85 - Wednesday, November 11, 2020 - link

    It's another step in papa Apple's plan to strangle user upgradability and choice! Apple's grand plan is to force users to depend totally on their ecosystem, from hardware to software. No freedom for you! Big brother Apple has spoken! It's either Apple's way or no way at all! They've just stepped up their anti-end-user behaviour tremendously! By using an in-house chip they cut their costs greatly, yet there's no price cut for you, and you still have to pay papa Apple $250 to go from 256GB to 512GB! So generous! Make no mistake, papa Apple is still pretty much the evilest, biggest bully in the yard!
  • techconc - Wednesday, November 11, 2020 - link

    Have you performed any sort of performance analysis to determine if you get anything extra for that money? Apple has apparently used higher performance controllers in the M1 and they have consistently been ahead of what you'd get in the PC market in terms of controller performance.
  • blackcrayon - Wednesday, November 11, 2020 - link

    Not sure why it's another step- the previous versions of these models had similar limitations. They also all have Thunderbolt which is this port you can use to "upgrade" the storage yourself (granted, more conveniently on the desktop).
  • UNCjigga - Wednesday, November 11, 2020 - link

    So now my only question is when will the $699 Mac Mini beat an Xbox Series X/PS5 for gaming? Two generations from now? One? :)
  • Agent Smith - Wednesday, November 11, 2020 - link

    Most probably not.

    Apple graphics doesn’t have Ray Tracing, VRR, Freesync or anything else you would want to support modern game titles. The most you’re going to get are media codecs and pixel pushing speed for Apple Store games I suspect.
  • Billrey - Wednesday, November 11, 2020 - link

    Yet. Apple haven't released their chips for high end laptops or any desktops yet. Wait and see what the iMac Pro or Mac Pro will be like with future SoCs.
  • techconc - Wednesday, November 11, 2020 - link

    I don't think the Mac mini was ever, or ever will be, optimized for gaming performance. People mostly use these as servers.
  • Leeea - Wednesday, November 11, 2020 - link

    They have been proclaiming x86s death since before I was born.

    The 3rd party reviews will be more interesting. I noticed a lot of Apple marketing speak was "up to" language.
  • alufan - Wednesday, November 11, 2020 - link

    Another step in the direction of Apple being a standalone ecosystem with zero end-user choice. What happens if you decide you don't want to use Apple's particular brand of program and want to try another? iSheep never learn; they just slavishly queue up day after day to hand over the cash for less choice. I say this as someone forced to use an iPhone for work; it's so backwards it's criminal.
  • back2future - Wednesday, November 11, 2020 - link

    A15 Bionic mass production is scheduled for Q3 2021, and x64/x128 on desktops is more versatile (which adds to power consumption), but this kind of performance below 5W is a technical achievement that is reason enough for rethinking CPU/GPU/NPU cores and their scheduling?
  • Wision - Wednesday, November 11, 2020 - link

    I seem to remember they mentioned a 10W target for MacBook Air, but didn’t mention their target for the mini or MacBook Pro. The store lists the mini with 150 W maximum power draw, could that be used to guess a probable target for M1 power in that machine, and from that some performance scaling compared to A14?
  • DeathArrow - Wednesday, November 11, 2020 - link

    Regarding this: " x86 CPUs today still only feature a 4-wide decoder designs that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions."

    1. Isn't x86 supposed to use internal RISC-like micro-instructions with fixed length?
    2. x86 instructions do more than simpler ARM instructions, so executing 4 x86 instructions in parallel might actually do more work than executing 8 ARM instructions.
  • Agent Smith - Wednesday, November 11, 2020 - link

    The advancement of x86 is about to be severely tested.
  • YaleZhang - Wednesday, November 11, 2020 - link


    1. True that x86 translates CISC instructions into micro-ops, but it's the translation (decoding) itself that's the bottleneck due to the variable length. Intel/AMD will have to beef up the micro-op cache to get around this handicap. 2. A lot of x86 instructions allow one operand to be in memory, so one instruction becomes 3 micro-ops: address calculation, the memory load/store, and the actual operation. But high-performance code will try to keep data in registers as much as possible, so I don't expect the fused load/stores to help that much.
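
    A small illustration of that point (a toy example, not any particular compiler's actual output): the same C statement can compile to a single x86 instruction with a memory operand, which the front end then cracks into separate address-generation, load, and ALU micro-ops, whereas a value already held in a register needs only the ALU micro-op.

        #include <stdint.h>

        /* Memory-operand form: a compiler may emit something like
           `add rax, QWORD PTR [rdi+8]` here, which the CPU cracks into roughly
           three micro-ops: address generation (rdi+8), the load, and the add. */
        int64_t add_from_memory(const int64_t *p, int64_t acc)
        {
            return acc + p[1];
        }

        /* Register form: once both values live in registers, the add is a
           single ALU micro-op with no memory access, which is why hot loops
           try to keep their working set in registers. */
        int64_t add_registers(int64_t a, int64_t b)
        {
            return a + b;
        }

    How much the extra micro-ops matter therefore depends on how often the hot path actually has to reach out to memory, as the comment above says.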
  • DeathArrow - Wednesday, November 11, 2020 - link

    @YaleZhang, great explanation. Thanks.
  • atomek - Wednesday, November 11, 2020 - link

    This is the death of x86 and the birth of a new CPU architecture king. Hopefully Microsoft will quickly release Windows 10 for the new architecture. And I really hope I'll see desktops on Apple Silicon. This is a historic moment for the industry; the performance and efficiency leap is absolutely mind-blowing. Congratulations Apple. I hope AMD and Intel will be able to compete in the ARM segment. x86 is dead. Let ARM rule!
  • techconc - Wednesday, November 11, 2020 - link

    The problem is, most of the industry is dependent upon ARM reference designs. Someone like Microsoft would have to commission ARM, or maybe Qualcomm, to design a CPU core that can compete with Apple Silicon in order for ARM to ever take off in the PC market. I'm sure Apple's new products will turn some heads, as they will be the envy of the market, especially for laptops. However, I never underestimate the PC market's ability to keep the status quo. The enterprise market doesn't like change. There are still companies running Windows Server 2008 for production tasks, for example.
  • Coldfriction - Wednesday, November 11, 2020 - link

    And there's no indication the M1 runs generic ARM instructions only and doesn't have additional stuff tagged on by Apple. Very likely this thing isn't compatible with any other ARM chips anywhere in any way.
  • techconc - Thursday, November 12, 2020 - link

    I think you missed the point of my post. Regardless of how much "potential" ARM-based chips have, I don't see anyone besides Apple making a run for the desktop market. I don't know if that's where ARM/Nvidia will take it. I also don't know how much incentive someone like Qualcomm has to make a desktop-class chip of their own design rather than use the ARM reference designs.
    Your response seems completely irrelevant to that point. Having said that, yes, Apple does have custom instructions, such as for their custom AMX units. However, they are not exposed to the public; only Apple can use them. Developers can benefit from them by using Apple's ML APIs. Should these instructions disappear from future chips, it wouldn't impact existing code, as those instructions are abstracted away from developers.
  • millfi - Wednesday, June 23, 2021 - link

    >there's no indication the M1 runs generic ARM instructions only and doesn't have additional stuff tagged on by Apple.

    No, there is an indication that the M1 runs generic ARM instructions only: it's the Armv8-A architecture license. That license prohibits adding your own instructions that are usable from apps. So proprietary instructions cannot account for the benchmark scores in this article.
  • Silver5urfer - Thursday, November 12, 2020 - link

    MS already has custom chips, lol, from Qualcomm in the Surface Pro X, and x86-to-ARM translation is going to be a nightmare. Especially since the x86-64 translation isn't fully done yet and has a severe performance impact on top of the 32-bit translation, there's no reason why MS should follow.

    Windows runs on all kinds of machines on the planet, from SCADA systems to AWS. They all run x86 code. Linux runs on the same hardware as well, and the vast software libraries for x86 hardware are not going to magically come to ARM. Of the many companies in the datacenter market trying to break Intel's monopoly, only one succeeded, and that was AMD, also an x86 processor maker.

    And please stop with the birth-and-death talk, lmao; AWS / GCP / Azure make trillions of dollars. Oracle Cloud also runs x86 only, with Intel Xeon and AMD EPYC, with no ARM in sight like AWS's Graviton 2. So first of all, come back down to the ground.

    Apple has a worldwide market share of less than 10% for their MacBooks. My reasoning behind this move is two-fold: Intel CPUs have throttling issues at high performance because of 14nm, and Apple puts garbage VRM components in their Macs, absolute trash grade, no contest versus the usual DIY arena or even DTR laptops. Next, they have to pay billions to Intel every year, on top of their ARM investments for the iPhone, which is 55%+ of Apple's revenue. So the most logical step, once their engineers could create a translation layer, is to converge the software and move away from x86: the whole stack is now controlled by them and it saves them billions in the end. Plus they can get away with it because they do not compete in the DC market, the HPC market, the gaming market (including consoles and the server farms of xCloud, which runs on Scorpio silicon, PSN, which runs on Sony's SoC farm, and Luna, which runs on Intel+Nvidia hardware in AWS) or the DIY market.

    So tell me again, lol. People here just fail to understand the market dynamics, logistics, enterprises, the consumer market, business decisions, and barely know how a CPU even works or what freedom is; this machine lacks it. No Boot Camp, it's dead. No more 32-bit apps; Apple will axe x86 32-bit soon. No more repairs by third parties; this hardware is fully controlled end to end by Apple, worse than the T2 chip in their MacBooks for SSD encryption, with BGA RAM, BGA SSD, less I/O, tons of glue, and the battery and keyboard soldered/sealed tight to the chassis, and uber expensive because of Apple brand value. This is a big thing, since this is the side most exposed to consumers rather than the pure hardware benches and marketing mumbo jumbo, lol. A Dell G5 will net them upgradable hardware in a few places, and they can game on it along with MATLAB and other software, while this one? lol again.
  • hellocopter - Wednesday, November 11, 2020 - link

    I had to register just to say that this is the most misleading article ever posted on this site. No, the M1 is nowhere near the performance of what AMD and Intel have to offer. Not even the same order of magnitude. The posted benchmarks are 100% meaningless with 0 real world value
  • Kuhar - Wednesday, November 11, 2020 - link

    Lol, I just registered today for almost the same reason, and my first (and now second) comments are on this article.
  • thunng8 - Wednesday, November 11, 2020 - link

    Haha, x86 fanboy comments
  • hellocopter - Wednesday, November 11, 2020 - link

    At home I have a headless rPi3 running minimal Raspberry OS and basically just working as a server for my Anki Vector robot, 4GB rPi4 running LibreELEC (hooked to my TV), 8GB rPi4 running Raspberry OS (my desktop), and an ancient MacBook Air which just gathers dust. I'm hardly a x86 fanboy..
  • biigD - Wednesday, November 11, 2020 - link

    Okay, so you’re saying M1 is at most a tenth the speed of what Intel and AMD have to offer, and that the benchmarks are meaningless. Show your work then - what makes you say that?
  • Kuhar - Wednesday, November 11, 2020 - link

    He is just being objective. It is hard to come to some meaningful conclusion on two very narrow benchmarks of which one absolutely favors Apple CPU. I agree with you that 1/10th the speed is a bit exaggerated.
  • The Hardcard - Wednesday, November 11, 2020 - link

    He is not being objective, he is being obtuse. While benchmarks are not the end-all and be-all, they are also not narrow. If you look at the SPEC workloads, they test a broad array of CPU core capabilities across a broad array of real-world tasks. Take a minute to read about them. Actually, if you are familiar with modern computing work, you'll recognize a lot just from the titles.

    It is odd how many people seem to be so shaken by this achievement that they refuse to accept hard evidence directly in front of their face.
  • Coldfriction - Wednesday, November 11, 2020 - link

    We need more than SPEC. Very likely ALL of what SPEC is doing fits in the cache of the chip, which makes it very fast. For the same reason, a Threadripper with a very large cache can do things very quickly that a non-Threadripper chip can't, simply due to cache size. There's also the question of sustained performance. These SPEC tests don't put the CPU under a stressful sustained load.

    We simply have almost no data. Unfortunately, due to the lack of native software on both the M1 and on other hardware being identical, it's impossible to know soon what the actual performance is. If Adobe focused on the 3950X and only on the 3950X, I'm certain they could increase their performance quite drastically too. Apple is forcing people to develop for a single chip. That software doesn't have to work on a huge variety of CPUs.
  • Kuhar - Thursday, November 12, 2020 - link

    True, SPEC tests a broad array of CPU core capabilities, but only one by one. That is why it is narrow. Few "real world applications" use only one of those; most applications are a mix of different CPU core capabilities. We will have to wait for some real-world application benchmarks to see how it works out on both, and we will see who is surprised by the results; I bet it will be Apple fanboys. I work in a very specific IT space, and software we use runs about 40 times faster on a 10-year-old Core 2 Duo than on a 2-year-old Apple with a Core i7. Thanks to macOS, that is.
  • millfi - Wednesday, June 23, 2021 - link

    Is it really due to the OS? If the hardware is that old, isn't the binary running on it completely different? The difference in speed must have been brought about by the added functionality of the app! And without any concrete examples, it's not very convincing.
  • techconc - Wednesday, November 11, 2020 - link

    No Kuhar, "being objective" is looking at the facts and commenting on them. If you have a problem with the benchmarks provided, perhaps you can show a few examples of "real world application" performance that differs so greatly from these benchmarks.
  • Silver5urfer - Wednesday, November 11, 2020 - link

    Unfortunately that's the case here, judging from all the comments: x86 seems to be dead and Apple is the future, based on single-core SPEC2006 and Geekbench scores. Ever wonder why Cinebench doesn't exist on Mac? I also do not know. There won't be any 3rd-party benchmarks that will put this through V-Ray or Handbrake.

    The benches are done like this:

    1 big Apple core at 5W vs 1 x86 Zen 3 Ryzen core at 50W for the SPEC chart.
    The missing link is: where are the other 15 cores and 32 threads, lol? Based on that perception this processor wins, but if we consider the rest, we can see what is real.
  • Andrei Frumusanu - Wednesday, November 11, 2020 - link

    > ever wonder why Cinebench doesn't exist on Mac ? I also do not know.

    It does exist now, so that argument is dead.
  • Silver5urfer - Thursday, November 12, 2020 - link

    Yeah, cool, chill out lol. Maxon updated it, so now the score scaling is also different vs R15, R20 and now R23. I will wait, man; I will wait for the benchmarks and performance of this super-fast processor.

    I don't get one thing: how can an ARM design beat x86 performance in such a small power envelope and small package without any sacrifice? Granted, this CPU is within spitting distance of their A13 processor in the same benchmarks, but so far I've never seen its performance vs a Windows / Linux machine.

    If the Apple M1 is soooo much faster than a new Ryzen / Intel CPU, the market will see who stands, because from the market's perspective anyone who doesn't hold the performance crown is headed to oblivion. This applies universally to any consumer good; the best example is AMD pre-Ryzen. Their stock valuation today shows how performance dictates success and market presence: from almost bankrupt to exponential growth, and now strides in AWS / Azure / GCP for their EPYC.

    I will wait to see the macOS version of Steam updated for the M1 and then Dota 2 performance on this processor, but sadly there's no eGPU compatibility with this, meaning we can't even test its gaming performance.
  • Blark64 - Wednesday, November 11, 2020 - link

    Cinebench has existed for Mac as long as Cinebench has existed, and today it was released supporting the M1: https://www.maxon.net/news/press-releases/article/...
  • vais - Thursday, November 12, 2020 - link

    Damn, this single line should have been included in each bar chart. Now apple maniacs are jumping to staggering conclusions...
  • techconc - Wednesday, November 11, 2020 - link

    For single core performance, yes, the M1 is exactly what was described in this article. Maybe you should actually read the article before commenting.
  • ChrisGX - Wednesday, November 11, 2020 - link

    Do you imagine that fanboyism rather than sound engineering practice advances the art of processor design? Synthetic benchmarks are an aid to the engineers who design the microprocessors that we use. All of the assumptions of a chip designer are tested against benchmark workloads that show whether a processor is meeting expectations or underperforming in this or that respect. So, once the work of the engineers is done, a good synthetic benchmark like SPEC (it is by no means the only one) will tell us a lot about how successful the engineers have been in their work.

    Now, it is a defeasible assumption that synthetic benchmarks will align with 'real world' benchmarks and the experience of users running applications and workloads. And, as it happens, commonly synthetic benchmarks, real world benchmarks and user experience do all paint the same picture about the performance of a processor or a computer system. Sometimes, however, there is a misalignment between these different approaches to the performance of a processor or a computer system. Following sound engineering practice that situation would normally lead to very deep technical investigations that would result in either: a) a revision to the processor or computer system or b) a revision to one or other benchmark. Engineering practice doesn't deny the importance of benchmarks; it contributes to developing and improving them.

    It seems to me you are acting like a fanboy and throwing a hissy fit rather than asking any genuine technical questions about the M1 processor. The M1 does indeed now need to be tested against a bunch of real world workloads to see where it stands in performance terms. But, the indications from the synthetic benchmark data are good and if the M1 performs better than what has gone before it no one has anything to complain about.
  • ChrisGX - Wednesday, November 11, 2020 - link

    That comment was in response to @hellocopter who thinks the performance of a processor can be known from intuition without ever testing/measuring/benchmarking it.
  • Bartels - Wednesday, November 11, 2020 - link

    Rosetta was dropped with introduction of Snow Leopard, not Lion. Snow Leopard installation was much smaller in size because it dropped Universal binaries (although there are snippets on the web that some initial builds of Snow Leopard still had Rosetta)
  • thunng8 - Wednesday, November 11, 2020 - link

    That’s incorrect. It was in snow leopard. It was just not installed by default. You can easily add it in during install or even after install of the OS.
  • Oxford Guy - Wednesday, November 11, 2020 - link

    "The last time Apple ventured into such an undertaking in 2006, the company had ditched IBM’s PowerPC ISA and processors in favor of Intel x86 designs."

    Apple: "System 7 means you'll have to buy new versions of a lot of your software."
    Mac users: "Okay."

    Apple: "OS X Server is a huge improvement over AppleShare, at a bargain price of $500" ($780 in today's money).
    Mac users: "Uh... I just bought an expensive Mac tower and a SanCube but OS X Server doesn't seem to support Apple's firewire. You know... the super-duper great thing that you invented as an alternative to SCSI."
    Apple: "Hey hey hey! There's a brand new version of OS X Server. We'll call it OS X Server 10.0. We skipped past 9 versions just now because this one supports firewire! You can buy it for the low low price of no discount for having purchased OS X Server, which we call 1.x and don't talk about anymore!"

    Apple: "Carbonlib is fantastic."
    Mac users: "Okay."
    Apple: "Carbonlib has been deprecated. We broke the software. Yay."

    Apple: "You don't need us to license IBM's Rosetta anymore. We said it was really awesome but we changed our mind. "
    Mac users: "Can't you give us the option to pay for it? Some of the software isn't being re-released."
    Apple: "So? You'll have to stop using all that software. And, why would you want to? It's been deprecated!"

    Apple: "We've released a new version of OS X and so you'll have to pay for new versions of expensive software again. You know... Adobe and Microsoft stuff... And all that. It's fun!"

    Apple: "You don't need 32-bit software anymore. 32 bits aren't much fun. We deprecated them!"
  • atomek - Wednesday, November 11, 2020 - link

    Cutting out technological debt can be painful for customers. But if no one were brave enough to do it, we would be sentenced to the x86 architecture for eternity. I really hope AMD and Intel will jump on the ARM wagon as soon as possible to establish healthy competition against Apple.
  • vladx - Wednesday, November 11, 2020 - link

    "But if there was no one brave enough to do it,..."

    Apple and "courage", name a better duo.
  • Coldfriction - Wednesday, November 11, 2020 - link

    How about Apple just start selling these CPUs as generic processors so others can get in on the standard and make it ubiquitous? Oh, what's that? Apple doesn't share? Apple doesn't like open standards? This thing isn't a simple ARM chip but likely has its own unique quirks that belong only to Apple?

    Yeah, I don't want anyone moving away from multi-vendor x86 to an Apple-only world.
  • Oxford Guy - Wednesday, November 11, 2020 - link

    I knew I’d forget things from the seemingly endless laundry list of Apple’s wanton breakage of software compatibility, which this article ridiculously (in the case of Rosetta) claims is done for consumers and not for Apple’s margin (which includes greasing big developers by giving them a constantly moving target that causes their customers to have to keep forking over money for “upgrades”).

    One of them is the way the company simply killed Classic. You don’t need that anymore, because we said so.
  • Oxford Guy - Wednesday, November 11, 2020 - link

    Even braver and more efficient is to hook customers directly to Apple’s cloud with a biological interface to remove the need for marketing and things like that. Let them pay in blood.
  • GeoffreyA - Thursday, November 12, 2020 - link

    Project directly onto one's retina or rather render the iOS screen straight to the visual cortex in the brain.
  • GeoffreyA - Thursday, November 12, 2020 - link

    Or, even better, direct notifications to the brain via 6G. Any time of the day, get the latest from Twitter, Facebook, and Instagram, straight to the brain. In the shower, no sweat. No need to miss out on the latest. And because Apple cares about our health, we can silence those notifications any time.
  • Silver5urfer - Thursday, November 12, 2020 - link

    Haha, lmao. Apple doesn't deal in what AMD and Intel do, nor IBM's Power. They do not overlap; Apple is a consumer-centric corporation and their whole marketing power comes from that alone, not the server market and the margins there. It's not the same battlefield.

    AMD and Intel hold tons of patents and exclusive deals with each other for x86; will they abandon all of that for Apple's tiny MacBook market? Please ask someone working at Apple what server processors they use for iCloud and the whole infrastructure of services they run. Hint: not the A series, but rather Intel or AMD.

    ARM is not the thing people are imagining. With Ryzen Threadripper there's no contest in the HEDT arena, and once the Zen 3 based Threadripper drops, the world (Intel's world, lol) will crumble. ARM scaling to EPYC 77xx levels hasn't happened so far; all I saw were magical numbers for Nuvia and Altra, Thunder X3 is yet to be shown, and all of them are yet to be deployed and benched at STH. So far there's only Graviton 2, which is exclusive to AWS...
  • chlamchowder - Friday, November 13, 2020 - link

    Thunder X3 is dead by the way. Marvell cancelled it. The team got laid off
  • trini00 - Thursday, November 12, 2020 - link

    With Intel owning all the x86 patents and not really enjoying competition, or any business with less than a 40% margin, it's not so likely. Intel would rather sell off their CPU business, like their RAM business, phone business, modem business, 5G business, etc.
  • Teckk - Wednesday, November 11, 2020 - link

    Until now it was Intel that was responsible for the 16 GB RAM limit, and now it's Apple, releasing a new laptop in 2020 with the same limit. Great.
  • techconc - Wednesday, November 11, 2020 - link

    Yeah, that's definitely a black mark from my perspective. I think it's criminal to ship a Mac with 8GB in 2020. The minimum should be 16GB and lower end machines such as those announced this week should have at least an option to go to 32GB. Let's hope Apple rectifies this problem with the iMac, and 16" MacBook.
  • alysdexia - Wednesday, November 11, 2020 - link

    I don't need more than 8 GB but still like Mac; fuck off.
  • techconc - Thursday, November 12, 2020 - link

    Maybe you should pull up your Activity Monitor and see how much memory you're using now before making additional ignorant comments.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    As much as I hate to say it, this is impressive work from Apple. Besides the scores, where a 5W A14 is on the heels of 105W Zen 3, leaving me dumbfounded, the chart showing yearly execution is striking. 10/10.

    Anyhow, this isn't some "debunking" of Intel/AMD and x86. Yes, the former are carrying the weight of x86 decoding, which is where a lot of power gets eaten; and their cores are narrower but going for higher frequency. But I'm confident that, if they used the M cores as a measure, they would wake up and ramp up their own cores, or, if push comes to shove, even design an ARM one, and give Apple a run for its money round the Park.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    As for the ARM-taking-over debate, how it will go remains to be seen and depends on Windows. Firstly, even if M1 is the best CPU in the world, these and future Apple cores will never go into ordinary (non-Apple) computers. Which means other ARM CPUs, if they want to take over, will have to beat Ryzen/Core on Windows---and this is assuming Windows on ARM is running x86-64 perfectly (presently it lacks x64 emulation but Microsoft will be implementing it soon). Now, AMD and Intel can extend their cores to knock those other fellows down; and even if they couldn't do it on x86 any more, I'm confident they could implement the ARM ISA and thrash the others, including Apple. Nvidia will play a part (a dirty part?) in this picture too.

    What might happen is a transition period, where we've got both Windows x86 and ARM running side by side (cf. x86-64 in the 2000s). Assuming software works transparently, people could build a computer with, say, a MediaTek CPU or stick with Ryzen on x86. Then, somewhere along the line, Intel and AMD release their own top-performing ARM cores and eventually phase out x86. Now, being old-school, I hope none of this happens and x86 wins the day.
  • Silver5urfer - Wednesday, November 11, 2020 - link

    Look at what AMD CEO Lisa Su said when asked about ARM vs x86: x86 remains their focus and there is innovation left to be done there. And recently Intel started doing big.LITTLE garbage (why would a desktop S-series unlocked-TDP processor with high clocks want a slow x86 core?) for Alder Lake, and they even screwed up their own AVX-512 with that, as the small cores won't be able to do it.

    Moving to AMD again, they mentioned they are not going this route, and that it is not some miraculous route either, since it has been around for a long time, which is true. And they are not chasing that idiotic big.LITTLE approach in PCs.

    Man, one has to understand what this benchmark is about; people are parading around saying x86 is dead. The benchmark has to be read like this:

    5W for an M1 core on TSMC 5nm doing the SPEC and GB workloads is almost the same as 50W (no idea why one core is 60W for the Ryzen 5950X, btw, since its max TDP is 105W and it stays very efficient, unlike Intel MCE) for one x86 Zen 3 Ryzen core.

    What about the other 15 cores and 32 threads? Same for Apple's other big and small cores. What about the combined performance? It's not like this is the first time Apple did something like this; the A13 CPU was also fast per previous articles, but did it materialize in a Mac? No, only this one.

    Now, the software package: do we have anything to compare both on a Mac? I think we will have to wait until Adobe optimizes their software for ARM-based macOS, because Apple paid them to do so, and until then we won't know. We will also have to wait for Steam running on the same hardware, and then for other software, etc.; until then one cannot determine the winner.

    This article gives a false impression based on how the benchmarks are carried out and what they are compared with. On top of that there's no VM on this new Mac afaik; they threw the whole of Boot Camp out the window, adding a VM is a pain for them, and so no more Linux or Windows on this hardware. This is a mega closed box; even the hardware is heavily soldered.

    x86 is still the leader; AMD as of today has 6.6% of server market share and Intel has over 92%. Add to that the consoles running x86 hardware and the vast majority of PCs running x86 hardware, barring a stupid Surface Pro X. And Macs account for only 9-10% of market share as well.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    Great comment. I'm in 100% agreement. The point about a single core of Ryzen and the TDP escaped me, and that changes the whole picture. Also, lower Ryzen's clocks, drop it down to 5 nm, and suddenly Apple isn't looking too shiny any more, and its gigantic ROB, PRFs, etc., begin to look pretty overblown and silly.
  • mdriftmeyer - Wednesday, November 11, 2020 - link

    Add the Xilinx acquisition with FPGAs for their own Neural Engine, etc., and this wonder by my former employer (which I will always support, but not blindly) is just a niche for consumer laptops that kids and parents will enjoy.
  • GeoffreyA - Thursday, November 12, 2020 - link

    And for show in films: it can never be Windows. Always got to be Apple ;)
  • mdriftmeyer - Wednesday, November 11, 2020 - link

    FWIW: Apple's GPU is 8 cores and it's M1 CPUs are 4 Big, 4 Little.
  • alysdexia - Wednesday, November 11, 2020 - link

    its, hick
  • ex2bot - Friday, November 13, 2020 - link

    Um, actually, its “‘’it’s’’”
  • alysdexia - Wednesday, November 11, 2020 - link

    PCs -> Wintels, DTs; ? isn't a word; big -> great; fast -> swift; optimizes !-> their; "H.W": where'd you learn abbreviations? shit-head; etc -> etc.
  • Sub31 - Friday, November 13, 2020 - link

    AMD too had plans for an ARM architecture of their own to go alongside "Zen" back in 2015, but the entire project was apparently scrapped (perhaps unsatisfying results, or not enough money).

    I can envisage that idea coming back in a few years.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    On the x86/ARM debate, the conversation between mode13_h and deltaFx2 is worth reading:

    https://www.anandtech.com/comments/16176/amd-zen-3...
  • GeoffreyA - Wednesday, November 11, 2020 - link

    Hope my comment didn't sound hateful against Apple. They're brilliant at their job, but something about them has always put me off.
  • TouchdownTom9 - Wednesday, November 11, 2020 - link

    It will be worth seeing if they live up to their claimed numbers or if they are misleading. It will be very interesting to see how these compare to Renoir CPUs as well as their Zen 3 successors. Given the node difference, there should be greater efficiency from the M1, but given that the prices for the new MacBooks (Pro/Air) have dropped by a few hundred while now also offering the consumer 8-core CPUs, it seems like Apple is well on its way to regaining a justifiable claim to having the top premium notebooks on the market. It seems the x86-to-ARM transition was successful, and it's pretty crazy how, now in 2020, Intel is being outclassed by both Apple and AMD in terms of their chips (though it isn't a slaughter when comparing Tiger Lake on a strictly single-threaded basis).
  • KPOM - Wednesday, November 11, 2020 - link

    Geekbench numbers came out today. Single core around 1680 and multicore around 7400. That’s the MacBook AIR.
  • SydneyBlue120d - Wednesday, November 11, 2020 - link

    No AV1 decoding?
  • floatingbones - Wednesday, November 11, 2020 - link

    Any reports about UWB in the M1? I expect Apple to have that on all new platforms, but I haven't seen any tech reports (including Apple's writeups) mention UWB. Thanks.
  • Oxford Guy - Wednesday, November 11, 2020 - link

    I will consider being duped by Rosetta 2 as soon as Apple gives us back Rosetta 1 and 32-bit compatibility so I can play SimCity 4 again — you know, like I could before Apple’s “upgrades” took away that functionality.

    But, what’s clearly much more trendy and hip is calling functionality “deprecated”.
  • GeoffreyA - Wednesday, November 11, 2020 - link

    Well, you must remember that we at Apple are forward-thinking. Change, change is always for the better, right? Can't have any old-fashioned things lying about, can we? Progressive, modern, up-to-date: such are our pillars here at the Park.
  • eatrains - Thursday, November 12, 2020 - link

    SimCity 4 was upgraded to 64-bit back in February.
  • Islapitirre - Wednesday, November 11, 2020 - link

    I have a 2010 MBP that has been pleading to be put to rest now for a couple of months. It's crazy that it still works but I have only used it for personal purposes as I have another work related computer. I have been thinking of upgrading my MBP but always felt I should wait. Well, I think now is finally the time. My question is... would the M1 MBP be a more reliable (i.e. work well for a long period of time) laptop due to the fan (thinking that it might not let the M1 chip overheat) or would the M1 MBA be more reliable because it doesn't have a fan (doesn't have a mechanical component that could fail)? I will be doing a 20 gun salute funeral for my 2010 MBP as soon as I receive my new MB_.
  • Islapitirre - Friday, November 13, 2020 - link

    Any assistance here? Just want to know which one might be more reliable between the M1 MBA and the M1 MBP13.
  • Joe Guide - Wednesday, November 11, 2020 - link

    First reported benchmark. Yikes.

    https://appleinsider.com/articles/20/11/12/apples-...
  • KPOM - Wednesday, November 11, 2020 - link

    Impressive stuff. The MacBook Air outperforms the 16” MacBook Pro i9 in short-burst tasks. Think about that. The 13” Pro has a fan and can sustain that performance, as well.
  • Silver5urfer - Thursday, November 12, 2020 - link

    Yikes indeed. A GB score.
  • ex2bot - Friday, November 13, 2020 - link

    It’s a good benchmark. Like all benchmarks, it’s not sufficient by itself for an overall assessment.
  • darealist - Wednesday, November 11, 2020 - link

    This marks the downfall of x86. Cmon Nvidia give us fast Arm processors for Windows!
  • nonoverclock - Thursday, November 12, 2020 - link

    I think we still need to see benchmarks on a wide range of applications. How does code compiling perform? How about AAA PC games and ability to run the huge existing catalog of PC games?
  • vais - Thursday, November 12, 2020 - link

    Please don't confuse the poor apple fans with such logical and complicated thoughts!
  • Zerrohero - Friday, November 13, 2020 - link

    “How about AAA PC games and ability to run the huge existing catalog of PC games?”

    So...AAA PC gaming is something that people buy a fanless MacBook Air or an entry level MBP for?

    It’s amazing how you keep shifting the goal posts.

    The current M1 products should be compared first and foremost to existing Macs they replace. And that comparison looks favourable, for M1.

    Not to a massive desktop gaming PC, or a gaming laptop with poor battery life.

    You all know this *perfectly* well.

    Even if Apple M1 couldn’t beat the higher end x86 laptop chips, but come close with significantly smaller power draw - that’s very respectable.

    If Intel or x86 did the same, your reaction would be totally different.

    And you know it.
  • Zerrohero - Friday, November 13, 2020 - link

    “ If Intel or x86 did the same”

    Intel or AMD, obviously.
  • vais - Friday, November 13, 2020 - link

    If you read some of the comments, a lot of people think exactly this - a low power M1 will somehow compare to massive desktop CPUs.

    I completely agree, it might well be a big upgrade in the low TDP area and be perfect for Macbook Air.
    But some people don't understand this and think that this same Macbook Air will have the performance of a gaming PC with latest and greatest CPU and GPU.
  • eastcoast_pete - Thursday, November 12, 2020 - link

    Just to ask about real-world usefulness: which programs ("apps") are currently native for M1, and which ones are announced to be available by January 2021?
  • Joe Guide - Thursday, November 12, 2020 - link

    Native programs will of course include the standard Apple Work suite (Garageband, Pages, Numbers, Keynote) as well as ProLogic and Final Cut Pro. Microsoft will have their Office suite native. Adobe should be getting their stuff native soon. All others will work via Rosetta 2.

    https://www.macrumors.com/2020/11/12/microsoft-off...
  • mdriftmeyer - Thursday, November 12, 2020 - link

    Logic Pro X
  • vais - Thursday, November 12, 2020 - link

    Great article, until it reached the benchmarks-against-x86 part.
    I am amazed how something can claim to be a benchmark comparison and yet leave out what is being measured, what the criteria are, whether the results are adjusted for power, etc.

    Here are some quotes from the article and why they seem to be biased towards Apple, bordering on fanboyism:

    "x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions"

    And whoever said wider is always better, especially across two different instruction sets? We're comparing apples to melons here...

    "On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores had been steadily going wider with each generation, currently 4-wide in currently available silicon"

    Based on that alone, would you conclude Exynos is some miracle of CPU design and that it somehow comes anywhere close to the performance of a full-blown, enthusiast-grade desktop CPU? I sure hope not.

    "outstanding lode/store:
    To not surprise, this is also again deeper than any other microarchitecture on the market. Interesting comparisons are AMD’s Zen3 at 44/64 loads & stores, and Intel’s Sunny Cove at 128/72. "

    Again, comparing different things and drawing conclusions as if it were a linear scale. AMD's load/store queues are significantly smaller than Intel's, and yet AMD Zen3 CPUs outperform their Intel counterparts across the board. I'd say biased as hell...

    "AMD also wouldn’t be looking good if not for the recently released Zen3 design."
    So we're comparing a yet-unreleased core to the latest already available from the competition, and somehow the competition is in a bad place because "only" its latest product is better? Come on...

    "The fact that Apple is able to achieve this in a total device power consumption of 5W including the SoC, DRAM, and regulators, versus +21W (1185G7) and 49W (5950X) package power figures, without DRAM or regulation, is absolutely mind-blowing."

    I am really interested in where those package power figures come from, specifically for the 5950X. AMD's site lists it as 105W TDP. How was the 49W figure measured?

    I've read other articles from Andrei which have been technical, detailed and specific marvels, but this one misses the mark by a long shot in the benchmarks and conclusion parts.
  • Bluetooth - Thursday, November 12, 2020 - link

    They don’t have an actual M1 to test, as they say in the article. The M1 will be available on the 24th.
  • GeoffreyA - Thursday, November 12, 2020 - link

    I think it would be instructive to remember the Pentium 4, which had a lot of "fast" terms for its time: hyper-pipelined this, double pumped ALUs, quad pumped that; but we all know the result. The proof of the pudding is in the eating, or in the field of CPUs, performance, power, and silicon area.

    AMD and Intel have settled down to 4- and 5-wide decode as the best trade-offs for their designs. They could make it 8-wide tomorrow, but it would likely be of no use, and would cause disaster from a power point of view.* If Apple wishes to go wide, good for them, but the CPU will be judged not on "I've got this and that," but on its final merits.

    Personally, I think it's better engineering to produce a good result with fewer ingredients. Compare Zen 3's somewhat conservative out-of-order structures with Sunny Cove's: smaller, yet it still beats it.

    When the M1 is on an equal benchmark field with 5 nm x86, then we'll see whether it's got the goods or not.

    * Decoding takes up a lot of power in x86, that's why the micro-op cache is so effective (removing fetch and pre/decode). In x86, decoding can't be done in parallel, owing to the varying instruction lengths: one has to determine first how long one instruction is before knowing where the next one starts, whereas in fixed-length ISAs, like ARM, it can be done in parallel: length being fixed, we know where each instruction starts.
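    To illustrate the boundary problem with a toy sketch (not any real ISA or decoder; insn_length below is a made-up stand-in): with fixed-length instructions the Nth start point is pure arithmetic, while a variable-length stream naively has to be walked serially.

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-in for length decoding in a variable-length encoding. */
    static size_t insn_length(const uint8_t *p) {
        return (size_t)(*p & 0x3) + 1;  /* pretend lengths are 1..4 bytes */
    }

    /* Fixed-length ISA (e.g. 4-byte instructions): the i-th start point is
       just i * 4, so many decoders can begin work in parallel. */
    static size_t fixed_start(size_t i) {
        return i * 4;
    }

    /* Variable-length ISA: the naive way to find the i-th start point is a
       serial walk, because each boundary depends on all the earlier lengths. */
    static size_t variable_start(const uint8_t *code, size_t i) {
        size_t off = 0;
        for (size_t k = 0; k < i; k++)
            off += insn_length(code + off);  /* serial dependency chain */
        return off;
    }
    ```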
  • Joe Guide - Thursday, November 12, 2020 - link

    The benchmarks are coming out, and it looks like the pudding is quite tasty. But you have a good point. When in 2025 or 2026 Intel or AMD releases their newest 5 nm x86, you will be proven to be prophetic that the new Intel chip resoundingly beats the base M1 chip from 5 years ago.
  • GeoffreyA - Thursday, November 12, 2020 - link

    That line about the M1 and 5 nm is silly on my part, I'll admit. Sometimes we write things and regret it later. Also, if you look at my comment from the other day, you'll see the first thing I did was acknowledge Apple's impressive work on this CPU. The part about the Pentium 4 and the pudding wasn't in response to the A14's performance, but this whole debate running through the comments about wide vs. narrow, and so I meant, "Wide, narrow, doesn't mean anything. What matters is the final performance."

    I think what I've been trying to say, quite feebly, through the comments is: "Yes, the A14 has excellent performance/watt, and am shocked how 5W can go up against 105W Ryzen. But, fanboy comment it may be, I'm confident AMD and Intel (or AMD at any rate) can improve their stuff and beat Apple."
  • Joe Guide - Thursday, November 12, 2020 - link

    I see this as a glass half full. There has been far too much complacency in CPU development over the last decade. If it takes Apple to kick the industry in the butt, well then, how is that bad?

    Moore's Law has awoken from a deep slumber, and it is hungry and angry. Run, Intel. Run for your life.
  • GeoffreyA - Friday, November 13, 2020 - link

    Agreed, when AMD was struggling, Intel's improvements were quite meagre (Sandy Bridge excepted). Much credit must be given to AMD though. Their execution of the past few years has been brilliant.
  • chlamchowder - Friday, November 13, 2020 - link

    In x86, decoding is very much done in parallel. That's how you get 3/4/5-wide decoders. The brute force method is to tentatively start decoding at every byte. Alternatively, you mark instruction boundaries in the instruction cache (Goldmont/Tremont do this, as well as older AMD CPUs like Phenom).
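    A rough sketch of the boundary-marking idea (purely illustrative; the structure and insn_length below are hypothetical, not how any real predecoder is organized): the serial walk is paid once when the cache line is filled, and later fetches can hand several recorded start points to parallel decoders.

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define LINE_BYTES 64

    /* Hypothetical predecoded I-cache line: raw bytes plus one "this byte
       starts an instruction" bit per byte. */
    typedef struct {
        uint8_t bytes[LINE_BYTES];
        bool    starts[LINE_BYTES];
    } icache_line;

    /* Stand-in for real length decoding. */
    static size_t insn_length(const uint8_t *p) {
        return (size_t)(*p & 0x3) + 1;
    }

    /* Done once per cache fill: walk the bytes serially and record boundaries.
       Later fetches can pick several marked start points for parallel decoders
       without repeating this walk (instructions straddling the line end are
       ignored in this toy version). */
    static void predecode(icache_line *line) {
        for (size_t off = 0; off < LINE_BYTES; off += insn_length(&line->bytes[off]))
            line->starts[off] = true;
    }
    ```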
  • GeoffreyA - Saturday, November 14, 2020 - link

    Thanks for that. I'm only a layman in all this, so I don't know the exact details. I did suspect there was some sort of trick going on to decode more than one at a time. Marking instruction boundaries in the cache is quite interesting because it ought to tone down, or even eliminate, x86's variable-length troubles. Didn't know about Tremont and Goldmont, but I was reading that the Pentium MMX, as well as K8 to Bulldozer, perhaps K7 too, used this trick.

    My question is, do you think AMD and Intel could re-introduce it (while keeping the micro-op cache as well)? Is it effective or does it take too much effort itself? I ask because if it's worth it, it could help x86's length problem quite a bit, and that's something which excites me, under this current climate of ARM. However, judging from the results, it didn't aid the Athlon, Phenom, and Bulldozer that drastically, and AMD abandoned it in Zen, going for a micro-op cache instead, so that knocks down my hopes a bit.
  • vais - Thursday, November 12, 2020 - link

    A great article up until the benchmarking and x86-comparison part. Then it turned into something reeking of a paid promotion piece.
    Below are some quotes I want to focus the discussion on:

    "x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions"
    - This implies wider decoder is always a better thing, even when comparing not only different architectures, but architectures using different instruction sets. How was this conclusion reached?

    "On the ARM side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores had been steadily going wider with each generation, currently 4-wide in currently available silicon"
    - So Samsung’s Exynos is 6-wide - does that make it better than Snapdragon (which should be 4-wide)? Even better, does anyone in their right mind think it performs close to any modern x86 CPU, let alone an enthusiast grade desktop chip?

    "To not surprise, this is also again deeper than any other microarchitecture on the market. Interesting comparisons are AMD’s Zen3 at 44/64 loads & stores, and Intel’s Sunny Cove at 128/72. "
    - Again this assumes more loads & stores is automagically better. Isn't Zen3 better than its Intel counterparts across the board, despite the significantly smaller load/store figures?

    "AMD also wouldn’t be looking good if not for the recently released Zen3 design."
    - What is the logic here? The competition is lucky it released a better product before Apple did? How unfair that Apple has to compete with the latest (Zen3) instead of the previous generation - then their amazing architecture would have really shone!

    "The fact that Apple is able to achieve this in a total device power consumption of 5W including the SoC, DRAM, and regulators, versus +21W (1185G7) and 49W (5950X) package power figures, without DRAM or regulation, is absolutely mind-blowing."
    - I am specifically interested in where the 49W figure for the 5950X comes from. AMD's specs list the TDP at 105W, so where does this draw of only 49W, for an enthusiast desktop processor, come from?
  • thunng8 - Thursday, November 12, 2020 - link

    It is obvious that the power figure comes from running the SPEC benchmark. SPEC is single-threaded, so the Ryzen package draws 49W while turbo boosting to 5.0GHz on a single core to achieve the score on the chart, while the A14, under the exact same conditions, uses 5W.
  • vais - Thursday, November 12, 2020 - link

    How is it obvious? Such things as "this benchmark is single threaded" must be stated clearly, rather than relying on everyone looking at the benchmarks to know it. Same with the power.
  • thunng8 - Friday, November 13, 2020 - link

    The fact that it is single-threaded is in the text of the review.
  • name99 - Friday, November 13, 2020 - link

    If you don't know the nature of SPEC benchmarks, then perhaps you should be using your ears/eye more and your mouth less? You don't barge into a conversation you admit to knowing nothing about and start telling all the gathered experts that they are wrong!
  • mandirabl - Thursday, November 12, 2020 - link

    Pretty cool, I came from this video https://www.youtube.com/watch?v=xUkDku_Qt5c and the analogy is awesome.
  • atomek - Thursday, November 12, 2020 - link

    If Apple plays it well, this is the dawn of the post-x86 era. They'll just need to open their M1 to OEMs/builders, so people could actually build gaming desktops on their platform. That would be the end of AMD/Intel (or they will quickly, within 2-5 years, release an ARM CPU, which would be very problematic for them). I wouldn't mind moving away from x86, but only if Apple opens their ARM platform to enthusiasts/gamers and doesn't lock it to macOS.
  • dodoei - Thursday, November 12, 2020 - link

    The reason for the great performance could very well be that it’s locked to the MacOS
  • Zerrohero - Friday, November 13, 2020 - link

    Apple has spent billions to develop their own chips to differentiate from the others and to achieve iPad/iPhone like vertical integration with their own software.

    Why would they sell them to anyone?

    It seems that lots of people do not understand why Apple is doing this: to build better *Apple* products.

    There is nothing wrong with that, even if PC folks refuse to accept it. Every company strives to do better stuff.
  • corinthos - Thursday, November 12, 2020 - link

    Cheers to all of those who purchased Threadrippers and high-end Intel Extreme processors plus the latest 3080/3090 GPUs for video editing, only to be crushed by the M1 with its iGPU due to its more current and superior hardware decoders.
  • mdriftmeyer - Thursday, November 12, 2020 - link

    This is an utterly ignorant comment.
  • vais - Thursday, November 12, 2020 - link

    Please preorder a macbook air and run any of this years AAA games on highest settings. Take a screenshot of the FPS and post it. Those losers will cry!

    Or maybe you will be the one crying, who knows.
  • corinthos - Thursday, November 12, 2020 - link

    won't need to.. i can get a ps5 or xbox series x for games.. for video editing, i'll use the measly macbook air m1 and blow away the pcmr "beasts"
  • vais - Friday, November 13, 2020 - link

    Oh yes, I bet Macbook air M1's will soon replace the crappy hardware in render farms too.
  • MrCrispy - Thursday, November 12, 2020 - link

    There is no doubt that Apple Silicon/M1 is a great technical achievement.

    There is also no doubt about Apple's consistent history of lies, misleading statements, consistently refusing to provide technical data or meaningful comparisons and relying on RDF/hype.

    e.g. 'x86 via emulation on AS performs as fast as native x86' - complete nonsense, since they compared against a 2-year-old CPU, have not bothered to test any edge cases, and offer no guarantee of full backward compatibility.

    In fact, backcompat is almost 100% NOT guaranteed, as Apple in general scoffs at the concept and has never bothered to engineer it, unlike Microsoft; and companies with much more experience, e.g. Microsoft/IBM, have failed to provide it when they tried.

    Will an M1 MBP be faster than the latest Ryzen/Intel desktop class cpu in any general purpose computing task, in all perf bands and not just in burst mode, whilst having 2x battery life, AND provide perfect emulation for any x86 app including 30-40 years of legacy code - highly doubtful.

    And even if it does, it comes at a massive price premium, with completely locked hardware and software, no upgrades, servicing, or any reuse/recycling at all: extremely bad for the environment.

    Apple is the very definition of a closed, anti-consumer, for-profit company. Do I want the massive plethora of x86 PCs at every possible price range to disappear in favor of $$$$$$ Apple products that exclude 99% of the world population - hell no!!
  • andynormancx - Thursday, November 12, 2020 - link

    Apple aren't claiming 100% backwards compatibility with all x86 code, they don't need to support "30-40 years of legacy code" and they aren't saying they are going to.

    For a start, they only support 64 bit x86 code; they don't need to support 32 bit code as they already depreciated support for that in the last version of MacOS (no doubt partly to support the x86 - ARM transition).

    They have also said that some instructions won't be supported, I think it was some of the vector based ones.

    It of course isn't Apple's first time doing this either. They had the very successful PowerPC-to-Intel emulation when they moved to Intel. People, including me, ran lots of PowerPC apps on Intel for several years; it worked very well.
  • andynormancx - Thursday, November 12, 2020 - link

    I should say that emulation isn't really the right word to be using. Rosetta 2 isn't really attempting to model the inner workings of a x86 chip.

    It translates the x86 code to equivalent ARM code when the app is first run. This includes wiring it up to the equivalent native ARM versions of any system frameworks the app uses. So apps end up spending a lot of their run time in native, non-translated ARM code.

    For things like games that use the Metal GPU framework extensively, there is every chance that they _could_ end up running faster on the new ARM machines with their translated code than they did in the outgoing x86 MacBooks.
  • andynormancx - Thursday, November 12, 2020 - link

    Microsoft have posted that x86 Office 2019 works on the ARM translation and "There are no feature differences". And for performance they say "The first launch of each Office app will take longer as the operating system has to generate optimized code for the Apple Silicon processor. Users will notice that the apps 'bounce' in the dock for approximately 20 seconds while this process completes. Subsequent app launches will be fast."

    https://support.microsoft.com/en-us/office/microso...
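    A loose sketch of that translate-once, reuse-later behaviour (purely illustrative; none of these names are real Apple or Rosetta 2 APIs, whose internals aren't public): the slow translation path runs only on first launch, after which the cached ARM64 result is reused.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define MAX_APPS 16

    /* Hypothetical cache mapping an x86-64 binary (by hash) to its stored
       ARM64 translation. This only illustrates why the first launch is slow
       and later launches are fast. */
    typedef struct {
        char source_hash[65];
        char translated_path[128];
    } entry;

    static entry cache[MAX_APPS];
    static int   cache_count = 0;

    static const char *get_translation(const char *source_hash) {
        for (int i = 0; i < cache_count; i++)
            if (strcmp(cache[i].source_hash, source_hash) == 0)
                return cache[i].translated_path;       /* fast path: reuse */

        /* slow path, first launch only: the x86-64 -> ARM64 translator
           would run here before the result is cached (no overflow handling
           in this sketch) */
        snprintf(cache[cache_count].source_hash,
                 sizeof cache[0].source_hash, "%s", source_hash);
        snprintf(cache[cache_count].translated_path,
                 sizeof cache[0].translated_path, "/translated/%s.arm64", source_hash);
        return cache[cache_count++].translated_path;
    }

    int main(void) {
        puts(get_translation("abc123"));   /* slow: translated now */
        puts(get_translation("abc123"));   /* fast: cached */
        return 0;
    }
    ```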
  • MrCrispy - Thursday, November 12, 2020 - link

    I understand Rosetta 2 won't support virtualization or AVX, that's fine. The question is: will an x86 app run identically on an Intel and an AS MBP? I don't know how much demand there is going to be for this, as Apple is pretty strict about forcing devs to upgrade to the latest, and app developers will be quick to claim 'natively runs on AS' badges, which I'm sure will be forthcoming.

    The question of speed still remains. When does embargo lift on M1 MBP reviews?

    Also M1 will not be able run a VM. MBPs are very popular with devs and running VMs is a big part of that.
  • andynormancx - Thursday, November 12, 2020 - link

    There will be plenty of demand for it. There will be plenty of apps that will take months or years to get fully updated and plenty that never get updated (though admittedly quite a few of those probably went away with the 32 bit depreciation).

    The embargo will typically lift the day before people starting receiving the first orders. So probably on Monday.

    Not being able to run x86 in a VM will certainly be a disadvantage for some people. I would have been one of those people until four years ago, I spent a lot of time in VirtualBox running Visual Studio on my Mac. An ARM laptop certainly wouldn't have been an attractive option for me then.

    There will I'm sure be truly emulated options, to run Windows/Linux in x86 on Apple Silicon, but then of course that will have a whole different level of performance overhead. If I need to run x86 Windows on my ARM Mac, when I have one, I expect I'll opt for a VM in the cloud instead.

    Also, I'm sure if performance under Rosetta 2 was bad then there would have been mutterings from the developers who've had the A12X based machines, NDA or no NDA...
  • mdriftmeyer - Saturday, November 14, 2020 - link

    The word is deprecated, not depreciated.
  • Zerrohero - Friday, November 13, 2020 - link

    ”no upgrade or servicing or any reuse/recycling at all, extremely bad for environment.”

    Apple does repair MacBooks, including battery replacements. Apple recycles your Apple products for free.

    I’m pretty sure that the average lifespan of a MacBook isn’t any shorter than it is for Windows laptops.

    Also, Mac volumes are way, way smaller.

    If you truly care about the environment, think of the Windows laptops and especially Android phones and their ridiculously short SW support. They are filling the landfills, not Apple products.

    I’m also pretty sure that your interest in the environment strangely stops at Apple. Do you eat meat? Do you drive a car? Do you ever fly?

    By the way, take a look at Greenpeace’s Green Electronics Rankings. Care to guess which brand is number one of all the big names?

    Hint: it starts with an A.
  • Tomatotech - Friday, November 13, 2020 - link

    I don't have a link to hand, but I've seen figures indicating that Apple devices are used for approx 3x longer than Windows devices. Comparing the used prices for Android phones vs iPhones, Macbooks vs Windows laptops, seems to bear this out.

    A 5 year old 2015 Macbook Pro still fetches high prices, but try selling a 5-year old Windows laptop...

    This indicates that the daily use base of Apple laptops could be 2x to 3x higher as a proportion of total laptop daily use than their sales figures indicate.

    (That's consumer market only. For business many staff suffer using 5-10 year old windows desktops and laptops that won't get replaced till they break down.)
  • Oxford Guy - Sunday, December 6, 2020 - link

    "A 5 year old 2015 Macbook Pro still fetches high prices"

    Because it's the last one with a good keyboard and no need for dongle hell?
  • Oxford Guy - Sunday, December 6, 2020 - link

    Also, it has MagSafe and no T2, afaik.
  • ex2bot - Friday, November 13, 2020 - link

    Stick with Microsoft and Google. They’re pro-consumer. They love you. They want what’s best for you, not just your money. They’re your corporate mommy. Embrace the sweet, sweet Windows ecosystem (and Android, too) that’s made to nurture you. Because they care about you. They love you and are proud of you. Momma Microsoft. Papa Google.
  • ex2bot - Friday, November 13, 2020 - link

    (Some of the above may be /s.)
  • GeoffreyA - Saturday, November 14, 2020 - link

    Well, we're living in Brave New World almost. For my part, *all* these companies are rotten eggs. Their job is to make money out of us, so whatever they do is shaped to that end. How kind soever their words are, no matter how sweet, it's with a motive. I think the ones I hate the most are Google and Facebook. Don't be evil indeed. When the dollars start rolling into Mountain View's coffers, when the immoral power over people's lives grows, then it's "do the right thing."

    I won't defend Microsoft either. I grew up with Windows, so will always have a fondness for it, but Microsoft went off the tracks from the Windows 8 era. As for Apple, I just don't like them. Perhaps it's how they paint themselves as this big humane company, of great heart and greater ideals---but at the bottom, what is it all about? They're just making money, and a lot of it too. Apple's products are not cheap. And I'd say, Microsoft is impotent and too silly: Apple is the one to be alarmed about. They've got genius and a relentless focus, and worst of all, a despotic streak running through them.
  • Oxford Guy - Sunday, December 6, 2020 - link

    "For my part, *all* these companies are rotten eggs."

    Corporations are soulless non-living financial inventions. They're designed to have no morality so amoral behavior on the part of the owners' class can be rationalized away.

    They are financial shields against culpability. The Justice Department, according to insiders on MSNBC, has long had a policy of finding corporations instead of jailing executives when laws are broken.
  • Oxford Guy - Sunday, December 6, 2020 - link

    fining, not "finding"
  • GeoffreyA - Saturday, December 12, 2020 - link

    I tend to agree. You know, these technology companies of today are wielding great power in this world, yet it's not very visible perhaps. They control with a smile on their face. Chains work best when people don't know they're shackled.
  • alpha754293 - Thursday, November 12, 2020 - link

    With x86/x64 MacBooks, I can do more than just whatever Apple offers.

    Now, it's back to not really being able to do much with Macs again.

    (It took probably close to a decade before technical computing apps were ported to MacOS on x86/x64. And now with this switch to ARM, I guess I won't be doing THAT anymore on any of the new Macs, since the apps, once again, aren't available; hopefully it won't take ANOTHER 10 years before the companies that make those programs port them over. But then again, at least with x86/x64, they already had the code for other OS platforms. With this, I'm not sure what any of the existing software vendors are going to do.)
  • samcolam - Friday, November 13, 2020 - link

    Such an interesting thread.
  • Tomatotech - Friday, November 13, 2020 - link

    Lovely article. I read every word. Looking forward to actual reviews of the M1 soon.

    I think Apple has played a blinder here. Scores of millions of people have built up years of experience of using the iOS ecosystem to fill their wants. They are people who don’t mind paying extra for an iDevice. Now Apple is presenting them with the chance to transfer that experience to a laptop or desktop where they can run all their iOS apps (plus macOS apps.)

    Now they don’t need to learn both iOS AND Windows or both iOS AND macOS. At work I see many people struggling with desktop OSes.

    Apple has just said bye to all that. I’m old school IT, I can run 3 OSes on the same laptop without breaking a sweat, but that isn’t what the masses want, and it doesn’t help non-IT people achieve what they want. iOS & Android have done all that and more.

    Google is trying with its Android-based Chromebooks, but they seem stuck at the low end for mass adoption. If Apple can deliver on this, and it looks like they are doing so, their sales will soar.
  • MrCrispy - Friday, November 13, 2020 - link

    You're talking about people with disposable income who want and can afford Apple's premium $$$ devices. It's not 'the masses', far from it. 'People who don’t mind paying extra for an iDevice' are first-world rich people.

    There's nothing special about iOS or MacOS. They are sold based on hype and because its tied to the hardware.

    btw Windows runs the same OS on everything from a wearable to a server and they did it years before anyone else. But Microsoft isn't 'cool' so no one talks about it.
  • Joe Guide - Friday, November 13, 2020 - link

    There may be some truth to your comment. Apple's products can be more expensive, but if you match performance to price against, say, Dell's comparable computers, the difference becomes smaller.
    But now the game is flipped upside down. The new M1 chips seem unbeatable based on performance, efficiency and price. The base, cheapo, affordable MBA is proving to be world class.

    Geekbench seems to suggest top single thread performance, and now actual performance on Affinity Photo suggests it's not hype.
    https://twitter.com/andysomerfield/status/13268661...

    Your point about iOS or MacOS being no better than Windows may be right. But you can't put Windows on these new machines yet.
  • Silver5urfer - Friday, November 13, 2020 - link

    Lol, unbeatable? Please go to Apple.com, click on Mac, and see how Macbooks are priced. Do you know why Apple, or any company for that matter, prices its devices higher?

    Performance / Features.

    These new M1 Macs are not unbeatable, dude. Macbooks running Intel only look unbeatable because they are compared with each other and not with Windows machines. Nothing is world class about this, for many reasons:

    - Software is in a beta phase: there's no Adobe software ready, as the Apple M1 is only just shipping to consumers and Adobe needs time to optimize; Office is announced but runs slowly under Rosetta 2; 32-bit x86 died with the last macOS release, which puts off many users who want a desktop OS; ARM translation means this machine is not going to run a VM, and it discarded Boot Camp. Then the OS ecosystem: macOS software is not equal to iOS, running iOS apps on a Mac is nothing groundbreaking, iOS has a gated filesystem whereas the Mac's is open, it's still leagues behind Windows and Linux, and many normies are not going to mess with Gatekeeper to change the application installation settings, etc.
    - Hardware: the machine costs over $1K and the RAM is 8GB, an epic joke. The SSD upgrade from 256GB (which is a joke at this price) costs 200 bucks, and on top of that everything is soldered, so no one can replace anything. Gaming performance is unknown at this point; Mac games like SOTTR and BioShock need to be updated for M1, and on top of that their minimum spec requires a dedicated GPU, which so far means AMD, while Apple is moving closer to its own ecosystem, so those games are not going to work. Geekbench is a shit benchmark: just compare GB4 scores on iPhones vs Android, then GB5 on the same devices, and notice the difference between the Apple and Android SoCs; it's worthless trash. The only viable one is Cinebench R23 from Maxon. ST is high there, but once we add SMT, or here big.LITTLE, we do not know how performance scales, throttling, etc., since the Mac Mini has fans while the MBA, the "world class" laptop, doesn't; expect performance loss, no one can cheat physics. DIY repair is dead on Macs anyway since they removed the replaceable SSD and RAM, but on these and the Touch Bar Macs the keyboard is also sealed shut and the battery is heavily glued in.
  • Joe Guide - Friday, November 13, 2020 - link

    You should read Paul Spector's assessment about where things stand. I think it is a fair analysis.

    https://componentplanet.wordpress.com/2020/11/13/b...

    "By the same token, however, Apple’s marketing overreach on that stupid claim shouldn’t lull anyone into ignoring this chip. It represents the most potent threat to x86 dominance that I’ve seen in my entire career."
  • name99 - Friday, November 13, 2020 - link

    Seriously?
    If an author can't tell the difference between 8 CPUs(+SMT) and 4 CPUs (+4 small) then he doesn't deserve my time.
  • NetMage - Saturday, November 14, 2020 - link

    And if you can't tell the difference between Apple's low end passively cooled introductory chip, and what the future will bring, I think we know who's not worth any time.
  • name99 - Saturday, November 14, 2020 - link

    Dude, I'm on *your* side...
    The fact that the article makes one correct point doesn't mean it isn't mostly garbage.
    It's not a fair analysis in that it bends over backward to make x86 look good -- and still doesn't succeed.
  • Arsani - Saturday, November 14, 2020 - link

    You know Apple is the largest PC manufacturer by revenue, right? I don't even own a Mac, but it seems they provide the best customer experience from what I see on the internet. Apple laptops are known to come with the fewest defects and their customer service is the best. Another factor is that their UI is aesthetically more polished than Windows and their products are just cooler than the rest. Now, I don't own a Mac because, while I might be willing to pay their starting prices, what they charge for upgrading memory and storage is just a rip-off. I also don't like that I can't upgrade the parts myself. But hey, it is a business, and the people who just want something that works, hassle-free and premium, buy Apple. Many people just don't have time to deal with the problems of Windows like drivers, upgrading and viruses.
  • Silver5urfer - Sunday, November 15, 2020 - link

    Man, what are you talking about? Least defects? Go to Louis Rossman and learn how garbage Macbooks are. They have tons of problems: keyboard issues, bad wiring for the display flex cables, horrible VRM component problems, data recovery issues due to the T2 chip, and no cheaper 3rd-party repair options versus their overpriced Genius service (the people at the Genius Bar are quite the opposite of that; Apple's technical support people are bots programmed to use Apple software and run a software-based test which is supplied by Apple and 100% proprietary; no one can bypass it, it has activation systems and licensing on top, and the latest iPhone blocks swapping of modules).

    In fact, all those heating and throttling issues are the primary reason Apple is moving away from Intel x86. They could go to AMD for better compatibility, but their market and their investors want profit, not more expenditure; re-tooling every single piece of software for another x86 processor, one that also has a TDP over 50W, is not suitable for Apple's computers. Their userbase doesn't crunch numbers on Macs, nor game on them, nor do any heavy lifting; it's mostly a consumption device like the iPad and less of a proper workstation / desktop class machine. The OS is also leaning heavily towards mobile: just look at Big Sur, it's another Win10-style transitional OS towards a mobile UX, with the only missing part being a touchscreen, which Apple doesn't want because it would kill the iPad. So having a cTDP (configurable TDP) part would mean another VRM adjustment on top of heavy changes to the software / repair / manufacturing for one more x86 chip, and that won't do the job for them when they have poured millions into TSMC (yeah, they poured tons of cash into TSMC's R&D) and their own A-series / ARM-derivative silicon engineering.

    It all boils down to the numbers and the math, on top of the very important factor of appeasing their customers, who are unskilled in understanding computers, versus the usual *nix and Windows users / gamers.
  • Lizardo - Wednesday, November 18, 2020 - link

    Macs are garbage? Wife has a 2013 Air she won't part with, Macs got both kids through college, and I've been working at home on a Mac for 15 years without some Windows tech support guy telling me he doesn't know either. Proper maintenance and they last a long while. And Big Sur is now running on my 2013 MB Pro.
  • Oxford Guy - Sunday, December 6, 2020 - link

    "nor do any heavy lifting"

    Like pro video and audio?
  • Alistair - Saturday, November 14, 2020 - link

    I bought an Intel Tiger Lake laptop today, i7-1165G7. What a piece of crap. The new Macbook is going to walk all over the PC competition. Windows notebooks look good until you realize the fan will be running loud just browsing websites. That, and the graphics performance is laughable (Xe may be a big leap for Intel, but you aren't getting a solid 60fps in Overwatch at 50 percent of 1440p at low settings...).
  • Alistair - Saturday, November 14, 2020 - link

    also the screen, wow... so bad... 768p and TV in a $1000 CAD plus tax laptop

    meanwhile I have to add $500 CAD to get the same CPU in the next cheapest laptop with a good screen, so the Macbook's $1300 looks better and better now that I know how bad the $1000 CAD one is...
  • Alistair - Saturday, November 14, 2020 - link

    TN
  • timecop1818 - Saturday, November 14, 2020 - link

    You may not be getting 60fps in overwatch but at least it runs. It's not going to run on this new useless apple shittycon laptop at all.
  • Alistair - Saturday, November 14, 2020 - link

    I'm averaging 34 fps in Overwatch at 1080p LOW settings. Way worse than any review would suggest. I re-watched Dave Lee's Razer Blade laptop review with the same CPU, the score was 117 fps.

    https://www.bestbuy.ca/en-ca/product/hp-15-6-lapto...

    Why is it so slow? I'm using the built-in Xe graphics (Iris Xe) while the Razer is using an external Xe add-on GPU?

    Better than AMD? Don't make me laugh, this is the worst $1000 laptop I've bought. Integrated AMD graphics crushes it. Apple's will be more than twice as fast easily.
  • timecop1818 - Saturday, November 14, 2020 - link

    You were fully aware of what you were buying. Why did you even consider something with a 720p screen? I have an HP Spectre 13" with a 4K OLED and it's great. It wasn't much more expensive than this thing either.
  • Alistair - Saturday, November 14, 2020 - link

    I don't care about the screen, it is hooked up to a monitor. I care about the abysmal performance. The screen has nothing to do with anything, I don't even know why you are mentioning it.
  • Alistair - Saturday, November 14, 2020 - link

    I mean, I traded the screen since it wasn't that important for the i7 upgrade for "free".
  • Alistair - Saturday, November 14, 2020 - link

    Can probably emulate Overwatch faster...
  • Samus - Saturday, November 14, 2020 - link

    How is this even possible from a CPU that is passively cooled...
  • Alistair - Saturday, November 14, 2020 - link

    Here's one reason I found out why the Intel laptop is so bad: my Tiger Lake laptop is configured down to an 18W maximum, but it doesn't say that anywhere. Intel is cheating by not calling it a "U" series part. You can't tell until you get home, install HWInfo, and, wondering at your low performance, discover that your laptop is limited to 18W and not 28W.
  • Atom2 - Saturday, November 14, 2020 - link

    Perhaps it is not meaningful to use benchmarks from 2006. Apple would look even better with whatever was used in the 1980s.
  • TotalLamer - Sunday, November 15, 2020 - link

    Alright, so help me out here... isn't ARM inherently unable to be as capable as x86 because it's a RISC architecture? I always took it to mean that while it's more efficient than x86 (and thus why it's used in phones) it's more limited in its abilities. Is that wrong?
  • thunng8 - Sunday, November 15, 2020 - link

    Yes it is wrong
  • andrewaggb - Sunday, November 15, 2020 - link

    It's pretty impressive. I hope Apple will eventually change their business model on the hardware side and start selling chips without requiring iPhones, Macs, etc. I'm just not very interested in macOS and most of their hardware (other than Apple TV, because the Siri remote control is awesome). I'd rather run Windows and Linux and use my own case with big video cards and a half dozen drives.
  • Just_Looking - Sunday, November 15, 2020 - link

    What happened to "There is no denying that SPEC CPU2006 was never one of our favorite benchmarks in the Professional IT section of AnandTech."
  • hlovatt - Sunday, November 15, 2020 - link

    Looks like M1 is very fast even using Rosetta to translate x86 code:

    https://www.macrumors.com/2020/11/15/m1-chip-emula...

    Appears to take a 30% hit, but that still makes it very fast.
  • realbabilu - Monday, November 16, 2020 - link

    Auckland got it first.
    Cinebench R23 benchmark: 7566 multi-core

    https://mobile.twitter.com/mnloona48_/status/13284...
  • Joe Guide - Monday, November 16, 2020 - link

    Wow. The single-core score is quite impressive as well for a fanless low-end chip.
  • GeoffreyA - Tuesday, November 17, 2020 - link

    Impressive, yes, but still behind Tiger Lake (1165G7).
  • hlovatt - Monday, November 16, 2020 - link

    Looks like the GPU is good too:

    https://forums.macrumors.com/threads/m1-chip-beats...
  • magfal - Tuesday, November 17, 2020 - link

    The SLC cache on the die shot scares me a bit. Is that NAND in the CPU?

    Could it be a potential endurance issue where the chip dies in some years?
  • Oxford Guy - Sunday, December 6, 2020 - link

    NAND in a CPU is a great idea. Forensics will celebrate.
  • Makste - Wednesday, November 18, 2020 - link

    I was never interested in Apple devices, but now things are getting interesting with this M1 chip. It's just like how no one was expecting COVID-19, and then suddenly everybody turned their attention to it to try and find a cure 🤭
  • wrbst - Wednesday, November 18, 2020 - link

    @Andrei Frumusanu, when will we see a comparison between the Microsoft SQ2 and the Apple M1?
  • pcordes - Thursday, November 19, 2020 - link

    Thanks for the microarchitectural testing and details!
    However, some current Intel / AMD numbers you use for comparison aren't right. (Also, your ROB-size and load/store buffer graphs are missing labels on the vertical axis; I assume that's time or cycles or something similar.)

    Intel since Skylake has 5-wide legacy decode (up from 4-wide in Haswell) and 6-wide fetch from the decoded-uop cache. (The issue/rename stage is still only 4-wide in Skylake, but widened to 5 in Ice Lake. Being wider earlier in the front-end can catch up after stalls, letting buffers between stages hide bubbles) https://en.wikichip.org/wiki/intel/microarchitectu...
    https://en.wikichip.org/wiki/intel/microarchitectu...
    (The decoders can also macro-fuse a cmp/jcc branch into 1 uop, so max decode throughput is actually 7 x86 instructions per clock, into 5 uops.)

    AMD Zen 2 can decode up to 4 x86 instructions per clock. (Not sure if that includes fusion of cmp/jcc or not). This is probably where you got your 4-wide number that you claimed applied to Intel. But that's just legacy-decode. Most code spends a lot of time in non-huge loops, and they can run from the uop cache. Zen 2's decoded-uop cache can produce up to 8 uops/clock.

    https://en.wikichip.org/wiki/amd/microarchitecture...

    The actual bottleneck for sending instructions into the out-of-order back-end is the issue/rename stage as usual: 6 uops, but I think those can only come from up to 5 x86 instructions. I thought I remembered reading that Zen 1 could only sustain 6 uops / clock when running code that included some AVX 256-bit instructions or other 2-uop instructions. Maybe that changed with Zen2 (where most 256-bit SIMD instructions are still 1 uop), I don't have an AMD system to test on, and stuff like https://uops.info/ only tests throughput of single instructions, not a mix of integer, FP, and/or loads/store.

    Anyway, Zen's front-end is at least 5-wide, and 6-wide for at least some purposes.

    ---

    You seem to be saying M1 can do 4 FADDs *and* 4 FMULs in the same cycle. That doesn't make any sense with "only" 4 FP execution units. Perhaps you mean 4 FMAs per cycle? Or can each execution unit really accept 2 instructions in the same clock cycle, like Pentium 4's double-pumped integer ALUs?

    That's only twice the throughput of Haswell/Skylake, or the same throughput if you take vector width into account, assuming Apple M1 doesn't have ARM SVE for wider vectors.
    (Skylake has FMA units on ports 0 and 1, each 256-bit wide. FP mul / add also run on those same units, all with 4-cycle latency. So using FMAs instead of `vaddps` or `vmulps` gives Skylake twice the FLOPS because an FMA counts as two FLOPs, despite being a single operation for the pipeline.)

    Zen2 runs vaddps on ports FP2 / FP3, and vmulps or vfma...ps on FP0 / FP1. So it can sustain 2/clock FADD *and* 2/clock FMUL/FMA, unlike Skylake that can only do a total of 2 FP ops per cycle. (Both with any width from scalar to 256-bit). Zen1 has the same port allocations, but the execution units are only 128 bits wide. (Numbers from https://uops.info/)
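    To put rough numbers on that (illustrative arithmetic only, derived from the port counts and vector widths quoted above; real code rarely sustains these peaks):

    ```c
    #include <stdio.h>

    /* Theoretical peak single-precision FLOPs per cycle per core, from the
       execution-port figures quoted above. One 256-bit FMA = 8 fp32 lanes
       x 2 FLOPs; one 256-bit add or mul = 8 lanes x 1 FLOP. */
    int main(void) {
        int lanes = 8;  /* 256-bit vectors, fp32 */

        /* Skylake: 2 FMA-capable ports (0 and 1); adds/muls share them. */
        printf("Skylake peak: %d FLOPs/cycle\n", 2 * lanes * 2);             /* 32 */

        /* Zen 2: 2 FMA ports (FP0/FP1) plus 2 FADD ports (FP2/FP3), so the
           upper bound with a mix of FMAs and adds is higher. */
        printf("Zen 2 peak:   %d FLOPs/cycle\n", 2 * lanes * 2 + 2 * lanes); /* 48 */
        return 0;
    }
    ```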

    https://en.wikichip.org/wiki/amd/microarchitecture... doesn't indicate any more FMA or SIMD FP mul/add throughput, except reduced competition from FP store and FP->int.

    You weren't looking at actual legacy x87 `fadd` / `fmul` instruction mnemonics, were you? Modern x86 does FP math using SSE / AVX instructions like scalar addsd / mulsd (sd = scalar double), with fewer execution units for legacy 80-bit x87. (Unfortunately FMA isn't baseline, only available as an extension, unlike with AArch64.)
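    For what it's worth, a minimal example of that last point (a sketch; actual compiler output depends on flags and version): ordinary scalar double math in C compiled for x86-64 normally goes through the SSE/AVX units, not legacy x87.

    ```c
    /* On x86-64, this typically compiles to SSE2 scalar instructions
       (mulsd/addsd) rather than legacy x87 fmul/fadd, so the SIMD FP units
       are the ones that matter even for scalar code. With -mfma and FP
       contraction enabled, the multiply-add may become a single vfmadd. */
    double mul_add(double a, double b, double c) {
        return a * b + c;
    }
    ```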
  • LYP - Sunday, May 23, 2021 - link

    I'm happy that I'm not the only one who thinks there is something wrong here ...
  • peevee - Wednesday, December 9, 2020 - link

    "Intel has stagnated itself out of the market, and has lost a major customer today."

    A decade+ concentrating on "diversity and inclusion" vs competency can do that to you. Their biggest problem today might be the Portland location and culture.
  • IntoGraphics - Wednesday, December 16, 2020 - link

    "If Apple’s performance trajectory continues at this pace, the x86 performance crown might never be regained."
    If Apple's performance trajectory does continue at this pace, the x86 performance crown will be irrelevant.
