M1 graphics turned out to be absolutely dismal, with the M1 Ultra unable to even keep up with an RTX 3050, and absolutely left in the dust by an RTX 3090.
Maybe the M2 Ultra will finally match the RTX 3050?
If we consider price per transistor, the M1 and M2 Air are actually much cheaper than Intel- or AMD-based laptops. The latest AMD 6000-series APU only has 13.1 billion transistors.
There are different methodologies for counting transistors. Unless we are sure that the same methodology is used in both cases, these numbers are meaningless. Besides, the end user does not care about the number of transistors. They care about the performance.
That is nothing like saying a cylinder is just a cylinder and then using different engine arrangements, and ACTUAL CYLINDER COUNT, as a metaphor to support it. At the end of the day, neither tells you anything about performance out of context, without other metrics.
A cylinder *is* a cylinder. And most people don't care about that number any more than they care about the number of transistors. Recently people have become completely disinterested in cylinders, going for "cylinderless" electric motors. Which, according to that logic, don't do much because... no cylinders, right?
Nobody buys cylinders or transistors, they buy expected results: it's fast, it consumes little. Sure, they may buy "frequency" or "number of cores" but mostly because they keep being told that those numbers matter more than everything. I mean they matter, there's a correlation between frequency, transistors, cores and actual performance but they're just proxies. Like buying an engine based on fuel consumption, more fuel = more power.
I think the point of the original post was that there is no difference between the function of one single transistor vs. another, which is true: they have a binary state of open or closed. The size of them makes no difference. Engine cylinders, on the other hand, differ greatly based on displacement and materials, so one can have far more impact than another.
The analogy was terrible to begin with, but taking the argument to car engines doesn't make sense, given the comparison is completely inappropriate.
Cylinder size and count loosely translate to performance insofar as a larger cylinder can burn more fuel. A small bore vs large bore, then, means different cylinders aren’t the same.
Just like transistors. Even at the same process node a larger transistor can be switched faster; it’s pretty basic physics in both cases.
So no, a transistor isn't just a transistor and a cylinder isn't just a cylinder. There are multiple dimensions to their design that impact the performance of the whole based on the individual component.
A cylinder *is not* just a cylinder. Bore, stroke, surrounding block material, surrounding cooling properties, lining material, how many valves are feeding fuel and air and by what design, etc. are all tremendously important to the performance of a given cylinder for an intended use.
There are two different transistor counts: schematic and layout, if I recall correctly. Schematic transistors are the transistors in the design blueprints, and layout transistors are the transistors physically put on the processor. Transistors are not counted, but estimated instead. If I recall correctly there was an Intel product with 1.2B schematic and 1.4B layout transistors (either Ivy Bridge-S or Haswell-S). Sandy Bridge-S had 956M schematic and over 1B layout transistors. They should be within 10-20% of each other. Regardless of methodology, Apple is definitely providing more transistors in the M2 than AMD or Intel.
When you lay out a design you might need buffers to help drive signals further across the chip. These aren't functional; they won't be written in the design, but they are required to make the design work in a physical sense.
The clock distribution network, for example: we know where the clock is going to go, but we won't know how it needs to be balanced until we go from the Verilog design to the physical design.
We also sometimes duplicate logic, etc., in order to get a working physical design.
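As a rough back-of-envelope of how much a clock tree alone can add (the sink count and fanout below are invented for illustration, not any real design):

```python
import math

# Back-of-envelope clock-tree size: with a maximum fanout per buffer, the
# number of buffer levels and total buffers follow a geometric series.
# The sink count and fanout are arbitrary illustrative values.
sinks = 2_000_000   # flip-flops receiving the clock
fanout = 20         # max loads a single buffer can drive

levels = math.ceil(math.log(sinks, fanout))
buffers = sum(math.ceil(sinks / fanout**k) for k in range(1, levels + 1))
print(f"~{levels} buffer levels, ~{buffers:,} clock buffers")
```

None of those buffers exist in the RTL; each one is a couple of layout-only transistors, and the clock tree is only one of several such structures.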
A transistor isn't really just a transistor, for a multitude of reasons. For one thing, transistors can differ in geometry (and size) in order to obtain different tradeoffs in switching speed, leakage current, gate capacitance and other metrics, and one IC design often uses many different transistor designs for different parts of the chip. For another, it is common for several transistors to share various parts, such as one gate serving to switch several different channels, which saves space compared to having the same number of distinct transistors. Such aspects make a very distinct difference to total density; for instance, GPUs often pack a significantly larger number of transistors into the same area as a CPU (even on the same process), because they use different designs to achieve different goals.
By your logic the CPU with the most transistors is the fastest. Why do we need AMD and Intel anyway, it's just to make sure enough transistors get manufactured on a large enough die....
The RTX 3090 is faster than the 3070 because the 3090 has more cores. The 5950X is faster than the 5800X in multithreading because of more cores. More cores, more transistors.
For CPUs, single-core performance improvement today is mostly done by lengthening branch prediction, which needs bigger caches. More caches, more transistors.
> single core performance improvement today is mostly done by lengthening branch prediction which needs bigger caches.
LOL. You're joking, right?
Intel and AMD put lots of effort into making their cores faster, and it's achieved by the sum of very many individual tweaks and changes. Try going back and reading the Deep Dive coverage on this site of all the new cores that have been introduced over the past 5 years.
The other thing you're missing is that new process nodes don't only increase transistor counts, they also move the transistors closer together. This enables longer critical paths, in terms of the number of transistors that can be chained together, for a given clock frequency. That, in turn, allows the complexity of pipeline stages to be increased, which can result in fewer of them. Or, you can have the same number of pipeline stages, but each can have more complexity than before.
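As a very simplified sketch of that timing argument (ignoring plenty of real-world detail), a pipeline stage has to fit its logic depth and wire delay into one clock period:

$$T_{clk} \;\ge\; D \cdot t_{gate} + t_{wire} + t_{setup} + t_{skew}$$

So when a denser node shrinks the wire-delay term, you can spend the slack either on a higher clock or on a larger logic depth D per stage, which is exactly the "more complexity per stage, possibly fewer stages" tradeoff described above.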
It depends on your needs. Certain MacBooks have been great on release and over their lifetime, and basically have a legendary status similar to that of ThinkPads. We just came out of the "dark ages", with 2016-2020 having the worst Mac lineup in a while. The M1 variant was great, and has forced MS/Windows and the likes of Intel and AMD to innovate or catch up. But that was a first-generation attempt, and I have been holding out for the second-gen release.
I'm hoping to see a MBP 14in, 32GB RAM, 1TB storage, with the M2 Pro chipset, with hopefully a +50% increase in the iGPU performance. That would ensure it remains competitive for a long duration (4-8 years).
As a reference, here are the well-praised devices:
- The last upgradeable / unibody model (mid-2012 MacBook Pro 15 with GT 650M)
- The last medium retina model (early-2015 MacBook Pro 13in Retina with HD 6100)
- The last large retina model (late-2015 MacBook Pro 15in Retina with M370X)
- The last x86 model (early-2020 MacBook Pro 16in with RX 5500)
You are the first person I have ever encountered who counts "price per transistor". Everyone else I have encountered thus far uses "performance/price".
perf/transistor is a useful metric for comparing uArch sophistication. In Apple's case, it's a better comparison than perf/$, because the true price of the silicon is somewhat obscured. However, when talking about anyone else, it roughly correlates.
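A toy illustration of why the two metrics answer different questions (all numbers below are invented):

```python
# Two hypothetical chips (all numbers invented). Chip A wrings more performance
# out of each transistor; Chip B is the better deal per dollar.
chips = {
    "Chip A": {"score": 1600, "price_usd": 700, "transistors_bn": 16},
    "Chip B": {"score": 1400, "price_usd": 500, "transistors_bn": 20},
}

for name, c in chips.items():
    perf_per_dollar = c["score"] / c["price_usd"]
    perf_per_btrans = c["score"] / c["transistors_bn"]
    print(f"{name}: {perf_per_dollar:.2f} pts/$, "
          f"{perf_per_btrans:.0f} pts per billion transistors")
```

perf/$ is the buyer's view; perf/transistor is closer to a measure of how much work the microarchitecture extracts from a given transistor budget.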
Pushing higher frequency is very difficult. That's why manufacturers simply put in more cores and caches to increase parallelism. If AMD wants to reach 3090 RT performance, then AMD must put in more RT cores. More caches/cores means more transistors.
Not really. It’s pretty good. It just depends on what you’re using it for. A number of scientific uses seem to be much faster on the M1 Ultra than anything else. A problem is that a number of games and other apps weren’t optimized for it. But now, performance in those apps is much better than it was in the beginning. For games, it’s up to developers, who so far haven’t cared all that much. But there are some games coming out later this year that might work very well.
I reckon that anything which benefits from ample memory bandwidth (as some scientific applications do) does well. The M1 Pro/Max/Ultra and the M2 have quite a bit. The M1 Ultra has about as much memory bandwidth as top-end graphics cards and compute cards. It also depends on what kind of optimizations are possible (are you comparing against an Intel CPU that supports AVX-512, and is the code optimized for that?).
But overall, that's not a new problem. Before knowing what is different about the cores in the M2 compared to the M1 (apart from larger cache sizes), it'll be hard to explain which workloads see performance gains and to what degree.
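If you want a crude way to check whether a workload is bandwidth-bound, a STREAM-triad-style probe is the usual trick; here's a minimal sketch in Python/NumPy (array size is arbitrary, shrink it if RAM is tight):

```python
import time
import numpy as np

# STREAM-triad style probe: a[:] = b + s*c streams three large arrays through
# memory, so the achieved GB/s approximates usable memory bandwidth.
N = 100_000_000                 # ~0.8 GB per float64 array
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
s = 3.0

t0 = time.perf_counter()
a[:] = b + s * c                # two reads + one write per element
t1 = time.perf_counter()

gbytes = 3 * N * 8 / 1e9        # 8 bytes per float64
print(f"effective bandwidth ~ {gbytes / (t1 - t0):.1f} GB/s")
```

If the result is close to the machine's rated memory bandwidth, extra compute (AVX-512 or otherwise) won't help much; if it's far below, the bottleneck is elsewhere.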
I think the mismatch between M1 Graphics compute performance versus gaming performance comes down largely to software (both MacOS and actual games) more than the underlying hardware. Few developers organically develop for Metal (more often they get other developers to port the game across to Metal for them, which is never as efficient as developing from day 1 for Metal support), and the degree of optimisation for Metal comes down to the skill of the developer and how much effort they deem worth investing to target a pretty small market. There is also the question of how well optimised MacOS itself is for the needs of gaming, such as low-latency input management. It has never been a priority for Apple, who have largely targeted the creator market that doesn't really benefit from those things.
Personally, I think Apple is going to need to take a leaf from nVidia's book and bankroll a bunch of developers to develop Metal optimised versions of popular games, and through that try to nurture MacOS gaming as a viable, worthwhile target for all gaming developers. Until they do, most developers will continue to ignore MacOS or treat it as a porting afterthought with only just enough time put into it to make it run but not enough to make it well optimised. At present, the main way MacOS users play games is through Parallels or CrossOver, which doesn't help address the situation because it further disincentivises developers to develop decent MacOS ports.
Basically, Apple could make an RTX 3090 Ti-equivalent GPU, but it would still game poorly if every game is gimped by poor Metal optimisation or running through a clunky Vulkan-to-Metal translation layer.
You assume Apple is interested in pursuing the "hard core/console" gamer market. I see no evidence for this, now or ever. Apple is interested in the reverse issue of "make the mac experience richer for people who are already mac fans", but they have zero interest in "converting" non mac fans, especially the sort who rant on the internet about how much Apple sucks...
I agree with this, with the caveat that they also do a great job of appealing to extremely casual gamers/mobile gamers/children using Apple devices. They would clearly eyeball this as a potential future market and track it to see if is worth investing more in, as evidenced by them already moving the Apple TV platform further into their gaming ecosystem.
Of course all companies would be eyeballing crossover mobile users now, so you could easily file that under your point about these people already being 'mac fans'.
I agree that they seem to have little interest in targeting "gamers". It's a low profit market with lots of entrenched competitors.
What I don't get is why you think they should target the people "who rant on the internet about how much Apple sucks". Trying to "convert" people who never matured past a grade-school mentality of brand tribalism is a complete waste of time. They're impervious to reason.
Example: This thread was started by someone who actually believes the M1 Ultra's 64-core GPU is outranked by an RTX 3050. I don't know where they're getting that info, but it's pretty silly.
I didn't see any ranting about how much Apple sucks in my post. While I agree that Apple isn't interested in going after the 'hard core gamer' market, I strongly doubt that Apple doesn't see business benefits from broadening the appeal of its Macintosh line beyond current 'mac fans', given the Mac also serves as a gateway into the broader Apple ecosystem. The hardware is capable of gaming, but the software is lacking.
There are many people out there that would love to buy a Mac but also occasionally play games. Most people are generalists who want something that can do everything ok (think about how popular SUVs are now, even if they are rarely used offroad). Do you think the Mac's market share wouldn't increase significantly if it could provide all the performance it does, all the battery life it does, and could also reliably run the majority of AAA titles? Do you really think Apple isn't interested in increasing MacOS's marketshare? Sell a great laptop, and then next time that person buys a phone suddenly that iPhone has a lot more value add, suddenly Apple Music subscription seems like a better deal, etc. Before you know it, that individual who sat outside the Apple ecosystem is fully embedded. Or they can continue to target only 'mac fans' who are already in the ecosystem....
(a) Go read a site like phoronix. No matter WHAT Apple does, it's not good enough and is interpreted in the worst possible way.
(b) Remember this? https://www.escapistmagazine.com/john-romero-apolo... Apple does not want to be associated with that culture. And sure, so OK, 20 years on Romero grew up. Problem is, there is a constant stream of new teenagers with the exact same mentality to take his place.
(c) What do you want Apple to do? What developers say they want is Apple devices that look exactly like PCs. They want x86, and they want DX or Vulkan. Apple isn't going to give them either of those (for the very simple reason that they're not going to destroy their long term goals to gain 5% extra sales this year).
That's my point -- gamer whining is not rational, actionable whining. It is not a list of realistic things that Apple could do, it's whining for the sake of whining. For example, something Apple has just added as part of Metal3 is a technical call to get the base address of the GPU. This apparently will make Vulkan much more performant. But notice -- no gaming whiner was asking "provide this API call", something realistic and actionable. It was all "Apple sucks and they cost too much and they should be using Vulkan because it's a million times better than Metal, and I would never buy a Mac even if you paid me"...
> no gaming whiner was asking "provide this API call", something realistic and actionable.
If you want to know what actual game *developers* think of Apple, you might be looking in the wrong places. There's a lot of noise on the internet that can easily drown out whatever signal you're seeking, if you don't filter it carefully.
> Go read a site like phoronix. No matter WHAT Apple does, it's not good enough and is interpreted in the worst possible way.
Phoronix is specifically focused on Linux and open source. Apple is famously secretive, prefers proprietary standards, practically invented walled gardens in computing, and caters towards less technical users. You basically couldn't find an audience more hostile to Apple, if you tried. To cite them as characteristic of some general reception towards Apple is either seriously misguided or disingenuous.
"There are many people out there that would love to buy a Mac but also occasionally play games." There are tons of options for those people. Get a mac and a cheap PC or a Console.
"Most people are generalists who want something that can do everything ok" Most people **aren't** hard core gamers who care about hitting 60 fps on max settings. Most hard core gamers think everyone is a hard core gamer. There are plenty of people who just want to play a bit of Stardew Valley or The Sims or Minecraft and they're perfectly happy doing it on an old PS 4.
"Do you really think Apple isn't interested in increasing MacOS's marketshare?" Yes. I do. Apple puts profit margin ahead of market size. If the market is huge, but the profits are razor thin, then you're dancing on a thin line between success and bankruptcy. It only takes small fluctuations in component prices for your margin to go negative and sink the ship. Game machines are notoriously a low-margin business. Apple has no interest in that.
"Sell a great laptop, and then next time that person buys a phone suddenly that iPhone has a lot more value add" The halo effect cuts both ways: enter a low-margin market and you'll have to compromise on quality to compete, then it tarnishes the whole brand.
"Or they can continue to target only 'mac fans' who are already in the ecosystem." It's rather self-centered to assume that Apple's only play is to cater to hard-core gamers or 'mac fans'. There couldn't possibly be other markets for Apple than the "coveted" gamer market, right? They just dropped a machine that can encode/decode 18 streams of 8K video simultaneously. Do you think they did that for shits and giggles? Don't you think they **might** be targeting some sort of market with that capability? <<rolls eyes>>
> Apple ... have zero interest in "converting" non mac fans
They are continually searching for ways to increase their revenues. With revenues as large as they already are, that's not an easy task. Therefore, I absolutely expect them to make a play to significantly enlarge their user base.
If you think about it, they're already in people's living rooms with Apple TV. So, wouldn't it be interesting for them to make a backdoor push for the console market, by introducing a significantly upgraded model? Nintendo has long shown that you don't need top HW specs to build a significant userbase - you just need the right content.
IMHO that's a very outdated view. Apple's most popular platform is iOS, and it has billions of users. All of the technology in the M2 will come to the next iPhone and next iPad. It seems likely that Apple will literally put the M2 in the next-gen iPad Pro.
By numbers iOS is likely the #1 gaming platform on the planet. And it isn't just candy crush-type games either. I live in Japan and I see plenty of people playing real games on their phones (RPGs are big here).
There's a certain logic in Apple approaching the console gaming market. There's a sizeable userbase, willing to spend significant amounts of money on software and subscriptions, which has already demonstrated a willingness to live within a walled garden. Plus, they value a simplified low-friction user experience vs. the technically rich environment of PC gaming.
It almost makes *too much* sense, if you think about it like that.
Shadow of the Tomb Raider has been updated to the Metal API and uses 64-bit macOS support. Perhaps it's not fully ARM-compatible and needs Rosetta translation, but guess what? Apple's so-called "in house" a.k.a. PowerVR stolen/poached technology, powered by a fully maxed-out M1 ULTRA 48GB config, gets a pathetic 82 FPS on the HIGH preset at 1080p, rofl. Versus a 3090 at ULTRA with RT at 135-140 FPS, and that's on a 5800X; with a 10900K or 12900K it would perform even better.
Source: YouTube search for that game on the Apple M1 Ultra. Apple got OWNED.
Apple is not going to spend millions or billions on the gaming market just because they have free cash. Gaming is a strong market that demands heavy DIY or console backing, and Apple has ZERO IP or experience in this industry since ATARI. Apple is not fit for this task, which is why they are targeting more ecosystem benefits to make their fans buy more of their walled-garden utopia products, and venturing into services since the iPhone is slowly fading into that "consumption" market. Soon Apple will launch AR and a Car to diversify further.
RE8 Village is being ported to Mac by Fall 2022. The RE Engine is not heavy at all, so it will be easy to run that game for sure, because it runs on Gen 8 consoles which are weaker than a Q6600 lol. Anyways, the thing is I want benchmarks, which would be interesting with the new M2 debut.
Yeah, when performance is in tatters you switch gears to max TDP ratings.
600W? The 3090 FE is 350W, and you can underclock it and lose at most 5% at 300W. And for the 12900K, what makes you think gaming hits the 250W PL2? Not even the 10900K goes that high; in gaming the 12900K draws 120W tops and the 10900K goes to 150W max. The 5800X3D, which can offer similar performance, is far, far less, and if you take a 6800XT it will be about 10% behind the 3090 at the same resolution. So that's out of the window. We were talking about how Apple claimed 3090 performance, charged an arm and a leg, and got beaten badly on all occasions, except the specialized ARM-core workloads where they have IP blocks on the silicon. Once FPGAs get embedded onto x86 by AMD it will be beaten again.
So does the M1 Ultra have any expansion slot? Nope. Proprietary junk SSD, forget PCIe expansion. I can make any x86 PC into a Proxmox machine with HBA cards and whatnot. This pile of junk can only boot macOS, at best with the best compatibility.
Ultimately? Apple got SHREDDED: a $4000+ piece of junk that can barely beat 2020 hardware. So much for bleeding-edge PR smoke and mirrors.
PlagueStation is irrelevant; Apple got destroyed by a far cheaper, direct competitor product. The PS5 is utter junk because DMC 5 on it runs at 4K 30 FPS lol, and at 1080p it cannot get more than 120 FPS, which is shameful since the 1080 Ti is 6 years old and runs the same title at 170 FPS at 1080p.
Pretty pathetic that you are just making stuff up to defend Apple. A 600W RTX generator? That would probably involve ridiculously high voltage cooled by liquid nitrogen, and even then, the results the other poster mentioned didn't use that extreme overclocking setup.
Realistically, a 350W 3090 would get those results.
Strange how you point out that Shadow of the Tomb Raider is running in emulation, and yet Apple got "owned" on that game. Talk about unmaking one's own point.
I'm not sure what point you are trying to make by comparing non-native games running under emulation on Apple hardware and then pretending to make a hardware performance comparison based on it. That shows that you really don't understand what you are comparing.
Realistically, the M1 Ultra is more of an upper mid-range offering. It's not Apple's Mac Pro offering. In something like GFXBench, it's getting very close to 3080 scores. That's not bad for the range it's targeting, especially considering the power usage. We'll see the numbers continue to improve and scale well, as they have. The second-generation silicon looks to be a nice improvement overall.
People keep whining about gaming performance on Apple Silicon Macs. I guess those people don't realize that the mobile gaming market, which Apple dominates, is larger than the entire console gaming market and entire PC gaming market COMBINED.
Apple dominates gaming; people who read forums like Anandtech don't accept that because they can't play the latest AAA FPS on a Mac. Like it or not, people playing games on phones is where the money is today, and has been for a while. PC gaming that requires an "RTX 3090 Ti equivalent GPU" is a niche market, sorry to have to break that news to you.
Most of us here don't care much for the next version of Candy Crush and the other PTP derivatives that crowd the iOS gaming app store. I decided to keep my 2019 MBP 16 for now for Boot Camp purposes, as I would rather not carry two laptops on the go when I'm working on the road, since I also game. I tried an MBP 16 M1 Max and it was pitiful for the few games I could run. Even lightweight FFXIV ran horribly on it.
We are here comparing the performance of the M1 and M2 to their Intel, AMD, and Nvidia counterparts, not Snapdragon vs. the A series, where Apple is lost lmao. The worldwide Android userbase is exponentially larger than Apple's. Apple has even lost the top 2 ranks now, to Samsung at rank 2 and Xiaomi at rank 1; with BBK it will slip even further down.
Clown Apple is lost everywhere; they rely only on the American market and on those fanboys plus rich people who can afford their BGA machines and smartphones, a.k.a. social media toys.
The mobile gaming joke is filled with gacha trash like Genshin Impact MTX GaaS nonsense, or NetEase-powered MTX garbage games. Psst... with all that power, Apple A-series processors do not have emulation support. So all the Android phones are emulation kings; going by the sheer library of what Android can do, it's no contest.
Nobody plays AAA on Mac; that junk already axed 32-bit support, so even old games are dead on that platform, while DOOM from 1993 can still be played on a Windows machine. Yeah, nobody plays on an RTX 3090 Ti, that's a very thin market share. But many own 1080-class cards, and the new consoles are approximately 1080 Ti to 2080 class, which Apple's own M1 Ultra lost to a 2080 Ti, rofl.
In gaming, Apple is a goner on all sides. Their latest brag about Metal 3 is just trying to stay competitive, since DX12 and Vulkan left them in the dust ages ago. You don't need RE8 to evaluate anything, since SOTTR on the M1 Ultra, using Rosetta but Metal, absolutely got destroyed by a 3090 with 2x the performance. Once the new generation drops that gap will double, and yeah, even a CPU with an 8700K can upgrade to an RTX 4090 or RTX 5090, while the M1 Ultra? Stuck, coping about how great the M series is and how nice its power consumption is.
Feel free to live in denial by looking at sales figures, which include lots of low end Androids sold to people who are not paying app developer salaries.
Apple has 84% of mobile gaming revenue, and as I said mobile gaming is bigger than console and PC gaming combined. Apple is the biggest gaming company in the world, you considering them a joke doesn't change that fact.
Just noticed that was US revenue only. In worldwide revenue, according to Statista, Apple had $42 billion in H2 2019 - H1 2020 and Android had $27 billion. Still a clear majority.
A platform minting money because of dumb normies playing MTX drivel != a gaming market. If you want to count that as gaming because of the tag associated with it, so be it. As I said, the mobile gaming industry is a joke; it's filled with pure trash with zero art style, passion, or anything worthwhile.
By 2025 mobile gaming trash is going to take 60% of the industry, leaving only 40% to consoles and PC. Now you will probably celebrate hearing that, about how Apple is amazing since they make a ton of cash on the mobile platform.
Even if they take up 99%, those are not games. Just as single-player is being slowly phased out for MP MTX GaaS junk, single-player titles are what define games. Quality >>>> quantity; passion to tell stories, and worlds worth spending time in, is what it's about.
I have an RTX 3090, and while I would certainly choose it for a top-tier game in 4K and content-dev uses (3ds Max/Maya/Blender/Substance/etc.), there are certainly Mac-optimized applications where the M1 Ultra is no slouch for its thermal/performance envelope.
And that's the actual point for some, especially a recording engineer/home DAW user who might worry about fan noise for obvious reasons. I fit into all the categories mentioned here, as well as some sci-viz uses and optics etc., and am extremely aware of the power envelope of the GPUs in here, and the sheer number of high-performance but high-power-draw, high-thermal-output machines. I'm constantly battling excess heat, and work to keep a constant temperature in here at all times.
The successor to the 3090/3090Ti ups the power envelope even more, to the point where it's going to be the primary power draw even in higher end Workstations by quite a large degree. One difference to Intel (even though comparing CPU & GPU is problematic) is that nVIDIA seems to use TDP well as a performance upper limit, whereas Intel allows burst power draw and thermals that fall well outside of the simplified stated specs (you have to understand the implementation and timings allowed for to understand why). So that comparison is again problematic...and just problematic overall for building rigs for various usages indoors and outdoors.
Performance in a tight thermal envelope is irrelevant to comparisons between the M1 Ultra and the RTX 3090. Apple claimed the M1 Ultra could match or even beat the RTX 3090. Instead, the RTX 3090 clapped the M1 Ultra in all relevant workloads. The M1 Ultra truly is impressive from a bandwidth and GPU perspective in the thermal package of the Mac Studio, but the clowns in Apple marketing focused on false performance claims instead of real efficiency superiority.
The 3090 does better than the M1 Ultra on things like hardware-based raytracing. However, other general GPU benchmarks such as 3DMark, GFXBench, etc. show the M1 Ultra to be roughly on par with the 3090. Both products are optimized for different workloads: the 3090 is more for gaming and raytracing, the M1 Ultra more for video editing, filters, etc.
You are aware that the M1 Ultra doing well on benchmarks means nothing, right? Like, do you buy the product to get work done, or are you buying it to run benchmarks?
The M1 Ultra had all the raw performance; the problem was with TBDR vs. IMR. Most of the time the GPU was sitting idle, and that's why people thought it was underperforming.
Why should we care about transistors? Let's compare end product to end product, where Apple is at best matched to x86 competitors, and often it loses on price to performance ratio. And this is even before you consider software compatibility issues due to ARM architecture (and due to macOS, but pre-existing Mac fanatics already learned to cope with that).
For me, the M1 Ultra matches the RTX 3070 under sustained GPU load. The M2 Ultra, with its expected 28.8 TFLOPS, will surpass the RTX 3070. Of course it's nowhere near the RTX 3090, especially in gaming, which is what Nvidia is known for. (My testing is Maya 3D modelling.)
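For anyone wondering where figures like that come from, peak FP32 TFLOPS is usually just cores × ALUs per core × 2 (FMA) × clock; a quick sketch using commonly cited (not official) M1 Ultra numbers:

```python
# Peak FP32 estimate: GPU cores x FP32 ALUs per core x 2 ops (FMA) x clock.
# 128 ALUs/core and ~1.3 GHz are commonly cited M1 Ultra figures, not
# official Apple specs - treat them as assumptions.
def fp32_tflops(gpu_cores: int, alus_per_core: int, clock_ghz: float) -> float:
    return gpu_cores * alus_per_core * 2 * clock_ghz / 1000.0

print(f"{fp32_tflops(64, 128, 1.296):.1f} TFLOPS")  # ~21.2 for a 64-core M1 Ultra
```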
100% of the poor performance you encounter in games with Apple Silicon is due to lack of proper optimization and support from game devs (not saying this to *blame* them).
The RTX 3050 is a 75W card (IIRC), and the 3090 is a 250W (I think) card. Meanwhile, the M2 uses up to 15W for graphics. The power discrepancy is too great here; there's no chance the M2 will beat a card running at five times the power budget. Even assuming the M2 Ultra Pro Max triples the power budget for the integrated GPU, it still won't reach parity with the 3050.
No, the comparison was between the M1 Ultra and the RTX 3050. The M1 Ultra consumes 215W at full load, whereas the RTX 3050 only draws around 75W at full load. Guess which one is faster, by about 35%? It's the RTX 3050. So not only is the RTX 3050 more power efficient than the M1 Ultra, it does it on a worse node and remains faster.
Yet, somehow, I am expected to believe an M2 Ultra is going to match an RTX 3090, when the base-model M2 only has a 35% improvement in graphics over the M1.
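Taking the numbers claimed in this thread at face value (not measured data), the perf/W gap works out to roughly:

```python
# Perf/W from the figures claimed above (not measured here). Note the 215 W
# number is likely whole-SoC/system power, so this overstates the GPU-only gap.
def perf_per_watt(relative_perf: float, watts: float) -> float:
    return relative_perf / watts

m1_ultra = perf_per_watt(1.00, 215)   # baseline
rtx_3050 = perf_per_watt(1.35, 75)    # "~35% faster at ~75 W"
print(f"RTX 3050 perf/W ~ {rtx_3050 / m1_ultra:.1f}x the M1 Ultra's")  # ~3.9x
```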
Replace "graphics" with "most current gaming and 3D modeling" and I'll agree with you. In other software-optimized, GPU-bound tasks, the M1-series chips are class competitive or leading when evaluated on power efficiency, which is super important for laptops. Software optimization is just a matter of time now that there is such a quickly growing market of Apple Silicon Macs, owned by affluent users, to sell software for.
Initially I was thinking Apple would go for N3, but then it didn't make sense given how fast Apple could realistically switch, considering Macs' small share of Apple's total revenue, on top of the increasing costs. So maybe N3 will debut with the M3.
So a modest gain. I wonder how the cost scaling is; it's annoying to keep track of M1 vs. M2 release prices. I expect a bump of at least $250-300. Anyways, Apple fans can probably buy this, but looking at this refresh cadence, it's really stupid. I hate the trend nowadays; it started with smartphones and their yearly refreshes. With the BGA machines it's even worse: you cannot even replace anything, let alone upgrade. On top of that, Apple's HW is super anti-DIY, locked down to the brim, especially that Secure Enclave, which is basically an Intel ME / AMD PSP-type backdoor system on the silicon.
Their graphs are super idiotic, lol; just remove that garbage curve and draw a bar chart and you get the performance. Also, their GPU is going to be decimated soon by upcoming Nvidia and AMD designs on TSMC N5.
The M1's CPU already got beaten by existing laptops when Andrei tested it; it was only a meager percentage better than Zen 2+ Ryzen 4000 ultra-low-power BGA designs, which were far inferior to even Comet Lake BGA parts and had pathetic GPUs (1650 mobile LOL). Its IPC was already beaten by Alder Lake; forget about MT, since it cannot even beat many BGA processors.
The M1 Ultra was faster in GPU, but it was nothing in front of already-existing designs, and Apple got slapped even by The Verge on the RTX 3090-class claim, beaten by an RTX 2080 Ti, a 4-year-old product that costs less than their highly expensive ($4000), transistor-bloated design.
I wonder if they are going to target RTX4090Ti / RX7950XT MCM this time with M2 Ultra ;)
You’re assuming there will be an M2 line structured the same way as the M1 line. I don’t believe so. Notice that no M2 mini was announced…
My guess is the M line will be (approximately) the battery powered line, like old Intel U-class. There will be a new line announced soon’ish after A16/iPhone14 that will span the equivalent of H-class to Xeon, not attempting the ultra-low energies of the M line, with consequent higher performance. If I’m right, don’t expect an M2 Pro, Max; the new line, call it Q1, will start at the Pro level, presumably in an 8+2 config (probably there’ll be a 6+2 yield-sparing version) and go up from there, with the Max being an ultra config (two chips) and the Ultra being a four chip (32+8) config.
Well, we’ll see! 3.5 months till September event, then maybe two months after that…
The M1 Ultra was actually part of a three-chip set developed for their flailing AR glasses project until Jony Ive decided to gimp the project (like he did the butterfly keyboard) by nixing the auxiliary processing unit. Source: The Information.
The M1 Ultra in AR Glasses? You mean in a hip-mounted rendering pack, a la Magic Leap? Because even at the lowest power threshold, it doesn't seem like you could put it in anything head-mounted that Apple would actually sell.
Another reason I'm skeptical the Ultra was really designed for AR is that you really don't need so many general-purpose cores for it. The computer vision portions of the pipeline are more suitable to run on DSPs or GPUs, while the rendering is obviously GPU-heavy. In MS' Hololens 2, did they even do any compute outside of the Snapdragon?
Not hip mounted. What's suggested in the patents is a base-station in the same room as the glasses.
I think it's foolish to make predictions about how much compute the glasses will or will not need for various tasks, especially if Apple will be, for example, making aggressive use of new technology like Neural Radiance Fields as part of their rendering.
Your argument is essentially "A fitbit can use some dinky ARM chip, therefore an Apple watch can also use same dinky ARM chip"; that only works if you assume that Apple wants to do the exact same things as fitbit, nothing more...
> What's suggested in the patents is a base-station in the same room as the glasses.
Are you sure they were talking about AR, then? Because AR is really about being fully untethered. Making an AR system with a basestation would be like an iPhone that had only wifi connectivity. You can do it, but it doesn't make a ton of sense - especially when the rest of the world is making proper cell phones.
> I think it's foolish to make predictions about how much compute the glasses will or will not need for various tasks
Set aside your mild disdain, for a moment, and let's look at what I actually said:
"you really don't need so many general-purpose cores for it."
Nothing you said contradicts that. I simply pointed out that general-purpose CPU cores aren't the best tool for the heavy computational tasks involved in AR. That (and power/heat) are the reasons it sounds strange to use something like the M1 Ultra, for it.
> for example, making aggressive use of new technology like Neural Radiance Fields
Sure, and you need neural engines for that - not general-purpose CPU cores. So, that tells us that the Ultra has too many CPU cores to have been designed *exclusively* for AR/VR. That's not saying AR/VR wasn't _on_the_list_ of things it targeted, but @fazalmajid suggested it was developed *specifically* for the "AR glasses project".
> Your argument is essentially "A fitbit can use some dinky ARM chip,
Not exactly. There's a baseline set of tasks needed to perform AR, and Hololens is an existence proof of what hardware specs you need for that. Sure, you could do a better job with even more compute, but at least we know an Ultra-class SoC isn't absolutely required for it. And when we're talking about a little battery-sucking heater you wear on your head, the baseline specs are very important to keep in mind. That said, Hololens 2 is a great improvement over the original, but still could use more resolution + FoV, so the benefit of more GPU horsepower is obvious.
I think this take on the AR project is pretty dumb. Jony Ive screwed up a few things (yes, the butterfly keyboard) but that doesn't mean he's the idiot that the nerd-o-sphere believes him to be. He was a big part of resurrecting Apple and building it into one of the most powerful companies in the world. It's not like Apple's comeback happened *despite* Jony Ive.
It doesn't take a genius to figure out that if you have to tether AR glasses to a base-station or even a dedicated pocket computer, it pretty much defeats the whole point of AR. Tethered AR isn't AR: It's a gimmick. He pushed for an untethered device that had lower fidelity graphics. That sounds like the right decision to me. Nobody is going to give a rat's anus if your AR is photo-realistic when they're tethered to a brick.
If Apple killed the AR project it's because the tech isn't ready yet (kinda like VR). Apple learned a long time ago that there's no prize for being the first to market. The iPhone wasn't the first smart phone, but it was the first smart phone to feel like a complete design rather than a prototype.
> It doesn't take a genius to figure out that if you have to tether AR glasses to a base-station or even a dedicated pocket computer, it pretty much defeats the whole point of AR.
Yes, that's what I'm talking about. Microsoft showed you can do it with tech of several years ago, and it's quite usable for several things. For Apple to then release a more restrictive or bulky product would basically be a non-starter.
Whatever they do, it has to be at least as good as Hololens 2, in all respects (and significantly better, in at least a couple).
Timing of specific product announcements is no indication that the M2 line won't be structured in the same way. The M1 iMac wasn't announced with all of the other M1 products either. It came later. If you haven't noticed there is a major supply chain global problem right now. Apple is only announcing what it thinks it can ship and they are struggling at that.
So you're dinging Apple for coming out with a new SOC with ~20% perf improvement after 18 months!? Just want to make sure I have that straight. Maybe you can remind me by the way, what was a typical increment for an annual Intel tick or tock?
Agreed. Also, Ryan is wrong in his assumption that the M2 uses A15 cores. The M1 was developed alongside the A13 and used many aspects of the then-upcoming A14. And performance was basically equivalent between the A13, A14, and A15. Since they're essentially on the same node, even on TSMC 4nm, you don't have enough room to squeeze out that much performance.
All signs point to this using a new architecture, as has been rumoured for a long time. The ~20% performance uplift is impressive, even if users only see about 10% of it in real-world use cases. The upcoming iPhone 14 (A16 chipset) will likely boast a similarly impressive increase in performance.
The only thing that I am not thrilled about is the graphics. Apple did very well with the M1, basically beating both Intel Xe and AMD Vega, even when relying on Rosetta 2. But the market has changed. Intel laptops are getting a graphics bump thanks to Nvidia dGPUs getting cheaper, whereas AMD has started competing with their RDNA 2 iGPU. I feel like the +35% upgrade is barely enough, and I was hoping for +50% or so.
On the optimistic side, it is possible that Apple has saved the best for last. Maybe we will get M2 Plus, M2 Pro, and M2 Max chipsets soon. Perhaps these will cost more money, but come with more substantial upgrades. They certainly hinted at that with the collaboration for Biohazard / Resident Evil Village, which is getting a macOS port using the latest Metal API, which supports things similar to FSR/DLSS and DirectStorage. So Apple is right up there with the competitor APIs like Vulkan/AMD/Nvidia/DirectX 12 Ultimate.
Some of that was from AVX-512 and some of it was from raising the thermal ceiling even higher (which isn't an option for mobile-oriented CPUs). Also, it's easier to improve IPC, when you're starting from a lower base.
Not enough people are focusing on LPDDR5. Just look at what a benefit it was for Alder Lake's all-core performance! Surely, a chunk of the M2's IPC gains must be coming from this.
LOL... no, that's not how it works. If multithreaded performance is up 18%, that means the average across all cores is up 18%. You don't divide the 18% by the number of cores. Realistically, just like the A15, the M2 has modest increases in its performance cores and very significant increases in the efficiency cores, which average out to 18% across the board.
Reading nanoreview's comparison between AMD's 6800U and 5800U, there is a 4% single-threaded Cinebench advantage for the 6800U and a 24% advantage in multi-threaded Cinebench.
I've always read IPC increase as a measure of single-core perf improvement rather than of the total SoC. Even if I am wrong, an 18% increase in a multithreaded benchmark score does not necessarily translate to an 18% increase in per-core score.
"18% increase in a multithreaded benchmark score does not necessarily translate to a 18% increase per core score result"
On average, yes, it does. Again, in this particular case, the performance cores saw a more modest gain but the efficiency cores saw much bigger gains. Across all 8 cores, the increase averages to 18%. That's exactly what that means.
I'd also note that it becomes increasingly silly to just look at the CPU when discussing an SoC. The GPU increased by 35%, the NPU increased by 50%, and it has a more powerful ISP, better media encoders/decoders, much higher memory bandwidth, etc., etc. The net improvement of the M2 over the M1 is pretty substantial.
Let's wait and see the official data on the benchmark workloads and 3rd-party reviews. Apple, like all companies, is notorious for cherry-picking metrics and using wording like "up to". For instance, the 35% is at peak consumption on the 10-core variant of the M2. You cannot infer per-GPU-core progress when you compare 10 cores vs. 8 cores (or maybe vs. the M1 variant with 7 cores).
The question isn't whether Apple is cherry picking benchmarks or not. They very well may be. The point I was making is that if ANY device shows an 18% increase on multithreaded workloads, then logically, that's an average increase of 18% ON EACH CORE. From the A15, we already know the performance cores get a smaller bump and the efficiency cores get a bigger bump. Your original comment was suggesting that you'd have to split that by the number of cores to see the IPC improvement for any single core. That's simply not true.
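To make the averaging point concrete, here's a toy example with made-up weights (not Apple's numbers): if the P-cluster provides most of the baseline MT score and gains modestly, while the E-cluster gains a lot, the chip-level uplift can still land at 18%.

```python
# Toy split (made-up numbers, not Apple's): P-cluster provides 80% of the
# baseline MT score and gains 12%; E-cluster provides 20% and gains 42%.
p_share, p_gain = 0.80, 0.12
e_share, e_gain = 0.20, 0.42

overall = p_share * p_gain + e_share * e_gain
print(f"chip-level MT gain: {overall:.0%}")   # 18%
```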
"For instance the 35% is at peak consumption on the 10 core variant of the M2. You cannot infer GPU core progress when you compare 10 core vs 8 core(or maybe vs the M1 variant with 7 cores)."
If you look at their graphs, they show a 25% GPU improvement at the same 12W power level. It goes to 35% at 15W when all GPU cores are active. It's not exact, but it gives you a pretty good idea.
RKL had double-digit IPC growth, 19%, but it failed to translate to the real world, and the 11900K lost hard against the 10900K due to the loss of 2C/4T and the 14nm backport. The 12900K improves IPC even more, 20% on top of RKL.
Apple is not some magical unicorn that keeps going without regard for physics and technical limitations and beats everything, as Apple fans see it. In reality they got slaughtered in the benchmarks; check how the M1 got destroyed by puny laptops and puny GPUs. The M1 Max, which is the expensive option, barely matches a 3060 mobile BGA GPU.
SPEC CPU, which Anandtech always steers towards, is the only place where it did something, and Zen 3+ 6nm designs will destroy that; on top of that, if Zen 4-based BGA trash comes out, the M2 is toast. With Zen 4 and Raptor Lake, Apple cannot compete on ST and MT performance AND SPEC. And of course for GPUs, once the new Ada and RDNA 3 BGA junk comes, it's over.
Intel's improvements were amazing given that they had no CPU designers from 2014-2020; they laid off most of them in 2014, instead promoting marketing nerfs into their jobs and into the executive and CEO positions...
1. Kind of hard to release a product on a node that has been delayed, and currently isn’t producing chips.
2. 10% inflation, so the real price increase isn’t as large as people make it out to be. Still a $100 premium, so there is good reason to be concerned. However, Apple doesn’t typically raise prices consecutively, so I would expect M2 to slot into M1, and M3 to replace M2.
3. The performance/watt curve is incredibly significant. Setting aside the academic interest of seeing intergenerational advantages and the microarchitecture's efficiency sweet spot, Apple products do not run at max TDP for extended durations except for certain models. That means the curve tells us the real performance differences depending on the chassis the M2 chip is in.
4. The M1 had the best GPU in its class, and its CPU was highly competitive, offering comparable performance in a significantly reduced power envelope. The 4000U-class products from AMD are not designed to maximize performance, but to deliver good performance with excellent battery life. The M1 chip simply does this better. The M2 will not cede ground in this regard.
5. Such claims are lies because Tim Cook is the messiah that can only deliver the truth of Apple supremacy.
6. With how they got burned on the last claim, I think marketing might switch to the massive efficiency advantage the M2 Ultra will have.
If Apple can save money using AV1 and increase profits they will do it. They have an obligation to their shareholders. AV1 support will eventually appear on one of their next SOCs.
Apple will continue to double down on whatever codec their iPhones produce when recording video. They will not shift to AV1 if it doesn't provide a substantial improvement over H.265, as die space is constrained. It speaks volumes that they implemented ProRes decode before AV1.
Yeah, I did a double take on reading that too… I thought for a second my memory might be playing tricks on me but I went and checked and yep, the M1 Pro/Max/Ultra all use LPDDR5…
Why is this only for Macs? If it is a fast chip, it might be interesting to put in a motherboard that actually had PCIe slots, m.2 slots, DDR5 DIMM slots, and SATA ports. Put it in along with an RTX 3090, or the next gen graphics cards when they come out. Might make for a nice system, I wonder how it would compare to something based on Alder Lake or Zen 4.
It would kick their butts, as the M1 already did that. It would need some software to be able to manage the new hardware, though. Personally I believed that Windows on ARM would explode, and that new ARM designs would come up and compete; that has not been the case yet. Apple is the CPU king as we speak, and there seems to be no serious competition, even on the horizon. Still, it's more expensive than ever to buy CPUs for stationary gaming PCs, which get beaten by the 10-50W M1 design.
Apple is the most anti-consumer corporation on the planet: they axed 3.5mm jacks and I/O ports only to bring them back now, and soldered everything. What makes you think Apple will make an LGA socket with a full chipset? Not even in 100 parallel universes will this be possible.
Also, ARM junk will never make it to DIY machines, because of how awful the software support is. Mass market means you need heavyweight support, and without that ARM cannot sustain it. On top of that, every single ARM processor out there relies on proprietary junk blobs, including the most dev-friendly vendor, Qualcomm, with its CAF; they will never release the important base firmware. Moreover, ARM processors are always suited to lower-performance workloads. Please do not bring Graviton 3 into the equation; it's again fully custom proprietary silicon with support offered only by AWS. If we want, we can talk Altera or Marvell, but they are also slowly moving to full-custom, off-the-shelf parts. Fujitsu's A64FX is niche.
So ultimately x86-64 reigns supreme. Zen 4 will eat the M2 for breakfast; just wait until AMD releases it.
I do not have any AMD product, man.. lol. I didn't get Zen 3 because of its crappy IO die and instability for me; it works great for many, but I need OC and tuning, which AMD cannot provide, so I got CML. But I can see how AMD is much superior in MT workloads and how their AM4 platform is such a boon for DIY use and longevity, which Intel doesn't care about. Forget this BGA-trash Apple technology, which is use-and-throw: you cannot even replace an SSD, forget the rest. Heck, the MacBooks have keyboards held in with copious amounts of glue.
AMD is taking a huge chunk of datacenter market share from Intel; AMD gained more than 8% of server market share. In the mainstream, AM4 saturation is massive, which is why AMD decided not to put more money into R&D for an X3D refresh across the entire Zen 3 stack, as Alder Lake is already selling a lot to system integrators and in BGA junk.
Zen 4 will destroy Raptor Lake; you cannot simply get more MT performance from weak x86 E-cores. AMD's Zen 4 got a massive boost in clocks as well; Apple has no chance. Not even their M2 Ultra can beat it, as mentioned in the articles. Look at how the M1 Ultra gets slaughtered by the 3990WX, a 2019 product in the same HEDT class. That's where your beloved Apple is in reality. Don't even waste time on the BGA M1 and M1 Max, as they are already destroyed by lowly Ryzen 4000 BGA parts and a 1650 mobile vs. the M1, with the M1 Max barely scratching a 3060 mobile, rofl.
Qasar, that is hilarious you are calling lemurbutton an apple fanboy, when you yourself are a SilverSurfer fanatic. :P
The fact is x86_64 and ARMv9 are just trivial points. What matters is the end goal. What things can they compute? Which can compute faster? How much energy does it use? You usually cannot remove software (or hardware) from the equation. You have to look at both together as the software-hardware characteristics. Just like you can't remove Space when making calculations about Time, they are intertwined as the Space-Time fabric.
" Qasar, that is hilarious you are calling lemurbutton an apple fanboy, when you yourself are a SilverSurfer fanatic. :P um yea ok sure, i guess you havent seen any of lemurbuttons posts then, as they ALL praise apple in some way, and bash every thing else in another.
I hadn't noticed. But calling someone a fanboy (even when true) and ending the comment there achieves nothing. More often than not, drags the convo back.
The fact is SilverSurfer doesn't have a coherent argument, when he is comparing x86_64 to ARMv9. And he won't do anything to address the point that I made; you cannot abstract the "software" portion out of the "hardware" for comparison. That's one of the things Steve Jobs got right.
I don't think it is possible to reverse the trend of ARM in laptops, it is game over for Intel/AMD in laptops. Huge performance/watt gains compared to X86. ARM works for most of the people using laptops.
> put in a motherboard that actually had PCIe slots, m.2 slots, DDR5 DIMM slots
Who knows about the M2, but the M1 didn't have many more PCIe lanes than needed for an M.2 slot. It also didn't have an extra memory controller for external DDR5.
The M1 chips were designed for thin & light laptops, first and foremost. They'll make a CPU for the next Mac Pro that has more PCIe lanes and supports external DRAM.
"They'll make a CPU for the next Mac Pro that has more PCIe lanes and supports external DRAM." Sources?
Apple is Apple because of their vertical integration, and the massive savings plus walled garden that come with it. I think it's an oxymoron for them to make a socketed CPU+GPU.
The current Mac Pro has 64 PCIe lanes. They aren't going to release something with much less than that, which tells us it's not going to use an M1 Max die.
Also, it probably has to support a couple TB of DDR5, which suggests they're going to have to use external DRAM.
No, I'm not a Mac whisperer. Watch name99's posts. He seems to follow them more closely than most. He also has a twitter (under a different name). You can find it if you search for his M1 Explainer.
The 100GB/s main memory bandwidth seems quite modest for a combined CPU+GPU, but Apple has large on-chip caches. They like to make comparisons with x64, which makes sense as it is their main competitor.
But there's another direct chip comparison that is interesting to make (though unfortunately hard to test easily): Nvidia's Orin chips, in fact just their half-size Orin NX.
Orin (vs. the M1/M2):
- Bandwidth: 200GB/s (double)
- GPU: ~4 TFLOPS (close)
- CPU: 8/12 A78 cores @ 2GHz+ (less, but could be close with 12 cores)
- "Neural": similar, 10-20 TOPS
- AV encode/decode: Nvidia seems better here (?)
- Power: 15-60W
- Transistors: 17 billion (with 12 cores)
It would be an interesting comparison with the M1/M2, even more so since Orin is made on the inferior Samsung 8nm node.
The comparison would be difficult in practice, but could shine some light on Apple's skill as a chip maker (and not necessarily the 'flawless victory' they can claim against x64)
Orin is a much larger chip (hence the doubled bandwidth). It plays closer to the M1 Pro than the M1/M2 proper.
Which may also play into why the AGX Orin module is $900 (for 1K units) for the mostly-fully-enabled version, and $1600 for the full-fat experience.
The Orin NX module variants are probably closer to this thing, conveniently with equal memory bandwidth too, and to get there they chop off/harvest half of the GPU.
That being said, Orin is also made for a different market with different requirements. It'd probably be pretty interesting if Nvidia built a proper Tegra SoC again for actual mobile products and how that might compare to this. Like a Tegra Windows on Arm SoC.
Yeah, they are in a better position than most to do such a thing (including Qualcomm).
I think their compute customers would be interested in a datacenter "APU" akin to Intel's Falcon Shores, which they could bin and spin into a M1-Max-esque product.
"Orin is a much larger chip" - well it has 17B transistors, vs 16/20B for the M1/M2 .. a larger chip by die size though
"It'd probably be pretty interesting if Nvidia built a proper Tegra SoC again for actual mobile products and how that might compare to this." - supposedly Nvidia has designed a Orin variant "T239" for Nintendo, presumably for a Switch 2 (?) .. all I know is it has 1536 Ampere cores halfway between Orin and Orin NX. It seems to be real based based on API leaks and other stuff.. I'm assuming it's another mobile part but I may be wrong.
Definitely far more capable than anything Qualcomm has right now, maybe Nuvia will change that..
Qualcomm claims 1.8 TFLOPS for the 8cx, and the 8cx Gen 3 is claimed to have a 60% GPU perf increase. They are also using X1 cores in it; it looks like they are just behind in multicore for now, compared to this T239.
I'd love it if nVidia got back to making Tegra SoCs for Windows on ARM! Last one they did was the Tegra4 back in 2013 I believe. I don't mind the Qualcomm chip in the Surface Pro X, but I suspect nVidia could do better. Especially on the GPU side, as they've been writing Windows drivers for a long time.
True, but by comparison it's also not much more than the Steam Deck APU at 88GB/s (1.6 TFLOPS GPU). I wonder if Apple is still using a variant of PowerVR's tile-based deferred rendering to reduce GPU bandwidth requirements?
> I wonder if Apple is still using a variant of PowerVR's tile based deferred rendering
I think everyone is using tile-based deferred rendering, now. Nvidia switched in Maxwell (GeForce 900-series). AMD started to switch over in Vega (search for "DSBR", which didn't yet work at launch) and presumably refined it in RDNA. And I think Intel switched with Xe.
It could go some ways towards explaining why Infinity Cache was such a huge win for AMD, in RDNA2. One thing TBDR does is improve access coherency, which should increase cache efficiency.
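For readers wondering why tiling helps bandwidth at all, here is a deliberately simplified toy model in Python - not a description of any vendor's actual pipeline - of framebuffer write traffic with and without on-chip tile resolves (resolution, overdraw, and frame rate are assumed values):

# Toy model: an immediate-mode renderer writes shaded pixels to DRAM every time a
# triangle covers them (overdraw), while a tile-based renderer resolves each tile
# on-chip and writes each pixel out once. Ignores depth traffic, compression, etc.
width, height = 3840, 2160      # 4K framebuffer (assumed)
bytes_per_pixel = 4             # RGBA8
overdraw = 3.0                  # average shades/writes per pixel (assumed)
fps = 60
immediate_gbs = width * height * bytes_per_pixel * overdraw * fps / 1e9
tiled_gbs = width * height * bytes_per_pixel * 1.0 * fps / 1e9
print(f"framebuffer writes: {immediate_gbs:.1f} GB/s immediate vs {tiled_gbs:.1f} GB/s tiled")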
The issue outside of Apple's garden is that, with what is going on in the world and how important water and energy have become, x86 will hit a limit. Servers and gamers will soon draw 500W from the GPU alone. This must come to an end; server buildings need A/C and water cooling because of how much power they draw and how much heat they produce. Apple made a statement that this is what the world needs. I hope Microsoft, with their Mac mini-style Arm machine, will take a step in the right direction. This Nvidia 4090-5090-9090 trend of drawing significantly more power for only a 30-40% increase has to end (Intel CPUs included here too). We should not reach 1000W just for the GPU - it's outrageous.
At least between my GTX 3070 and M1 Ultra, in Maya 3D modelling they are equal under sustained full load, but the power meter at the wall tells a different story: the M1 Ultra system draws around 210W in total, while my GTX 3070 system draws over 620W. This is insane.
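Taking the wall-power figures in that comment at face value, and assuming both systems finish the same Maya job in the same time, the energy gap is easy to put into numbers (a sketch, not a measurement):

# Energy comparison using the reported wall-power figures (210 W vs 620 W); assumes
# equal completion time for the same sustained Maya workload.
hours = 1.0
m1_ultra_system_w = 210
gtx3070_system_w = 620
m1_kwh = m1_ultra_system_w * hours / 1000
pc_kwh = gtx3070_system_w * hours / 1000
print(f"{m1_kwh:.2f} kWh vs {pc_kwh:.2f} kWh -> {pc_kwh / m1_kwh:.1f}x the energy for the same job")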
Are you having issues cooling a sub 700W PC ? If that's the case you should sell the PC and buy a laptop or a Macbook.
I do not get this argument at all. A basic PC on an inferior node has a ton of flexibility compared to an M1 Ultra, which is $4,000+ with zero user expansion options, and yet the argument is its lower power consumption. Specialized workloads on ARM are not a new thing; ARM has IP blocks dedicated to those types of workloads.
The 4090 is just a 30% increase over the 3090, I presume? Where did you see this claim? I would like to read the source.
1000W for a GPU - well, I would like to know which consumer GPU this is. The RTX 3090 Ti? That's limited to 600W peak and 450W max, and it's the worst card to buy: $2,000 for a very minor boost over the 3090, which is already only a small 15% increase over the 3080. Or are you talking about the HPC cards from AMD or Nvidia?
The remark is: the newest CPU/GPU power consumption is crazy high. And the trend is still going up, while we have limited resources. They (Intel, AMD, Nvidia, etc.) should make something that is more 'power friendly'.
Sure, Ryzen and Core are using more power than the M series at present; but there seems to be an unspoken assumption that ARM, Apple, and kin are radically more power efficient, good for the planet, and will take us far into the future, while poor x86 will go up in flames.
ISA and microarchitecture do make a difference, but at the end of the day, more performance will take more energy, and it will go on like that until the laws of physics change. It's conceivable that, in principle, there's even a limit to computation.
In any case, if Apple were so eco-friendly, they should just close shop and call it a day; for that surely would use less power.
The way you are framing this argument is silly, because if you want to know what consumes a lot of resources to run, it's data centers.
Google doesn't even want to let anyone know how much fresh water they are consuming in their data centers. Probably because if the public knew, law makers would be forced to do something about how wasteful data centers are.
For those who value energy efficiency, Apple's silicon has no peer. The idea of a kW desktop system in the home has little appeal except to those who are gamers (I'm not a gamer).
As a parent of a young child, I gave up building Gaming PCs because I do not have the time to play with drivers and settings for hours. Maybe your PCMR experience is different, and we all know half of the joy is fighting the system to make it “better”.
Now I console game because I just turn the system on and a minute later I am playing.
I think Apple has some serious opportunities to get me to play games on their systems if they can give a mix of console ease and PC customization.
Also: very possibly, by the time they get there my kid will be old enough to want to build gaming PCs with me. Who knows.
Apple really needs to make an AppleTV with real gaming chops. Then some game makers will take Apple silicon seriously. For me, I stopped buying games for Macs because they all stopped working at some point. Unlike the PC that supports even many DOS games to this very day.
Yeah, agreed. The AppleTV is seriously a missed opportunity. Apple should have 2 versions: a simple streaming-only version that maybe plugs into an HDMI port like a stick, and an M1/M2 version that's capable of gaming.
We'll have to wait for dedicated hardware AV1 codec support. I've read it can handle AV1 software decoding. Though, idk how much that affects battery life.
We can expect an A16, which should have a better core as everything we know indicates M2 uses A15 cores.
I would not necessarily assume the A16 will be ARMv9, and if it is, that's mostly irrelevant because v9 doesn't bring anything even slightly revolutionary to the table.
SVE2 (SIMD) support is the only cool feature of ARMv9. SVE2 supports vector lengths of up to 2,048 bits. Total speculation, but Apple could include a 512-bit SVE2 SIMD implementation on a future M series processor, much like Intel's AVX-512.
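Since SVE2 is vector-length agnostic, the same binary runs on any hardware width from 128 to 2,048 bits; a small Python sketch of how many lanes a few hypothetical widths would give (the 512-bit Apple implementation above is pure speculation):

# Lanes per vector for a few hypothetical SVE2 vector widths (128-bit steps allowed).
for vl_bits in (128, 256, 512, 2048):
    print(f"{vl_bits:4d}-bit vectors: {vl_bits // 32:2d} FP32 lanes, {vl_bits // 64:2d} FP64 lanes")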
A16 cores will likely be more efficient/better IPC than those in M2, but not faster. M2 cores are clocked higher due to the higher power envelope. A16 will also be 5nm process…I don’t expect much of a performance gain in the CPU, but they will no doubt continue to push GPU and neural engines.
It took 18 months to roll out the M1 family of chips. Now that the M2 is starting to roll out, I wonder if they will take about 18 months again. It seems likely. So: M3 on a 3-nanometer node at the end of 2023. M3 will likely skip the A16 architecture and jump straight to the A17 architecture. Including the node jump and two architecture generations, M2 to M3 should be a much bigger jump than M1 to M2.
Still, the improvements here are nothing to scoff at. M2's biggest areas of improvement are the areas that make the most sense to invest in: GPU and neural engine. Too bad about the 1 external monitor limit; I really think this machine is more than enough for most, but a lot of people have become used to having a dual monitor setup. This move begs Apple (or anyone) to make a much cheaper 6K display.
> It took 18 months to roll out the M1 family of chips. Now that the M2 is starting to roll out,
> I wonder if they will take about 18 months again.
We don't know whether they truly planned all 3 M1 variants from the beginning. Also, I think you should start counting the M1 rollout from the A14, which is the first time they'll have gotten some experience with the cores and could start to see how well they might scale.
I wonder if they'll take a more modular approach, this time. Like, maybe doing just two chips, but with more interconnects?
If the M2 can scale to a 2-chip configuration and the M2+ can also scale to a 2-chip config, then you get the same 4 products that comprised the M1 line, but with approx 2/3rds of the silicon design & validation effort. However, maybe M2+ would scale all the way to a 4-chip config? For single-hop latency, each chip would just need to have 3 links, which seems manageable.
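The link-count arithmetic behind that speculation, as a minimal Python sketch (the 2- and 4-chip configurations are hypothetical, not announced products):

# Links needed for a fully connected (single-hop) package of n chips:
# each chip needs n-1 links, and each link is shared between two chips.
def link_counts(n_chips):
    per_chip = n_chips - 1
    total = n_chips * (n_chips - 1) // 2
    return per_chip, total

for n in (2, 4):
    per_chip, total = link_counts(n)
    print(f"{n} chips: {per_chip} links per chip, {total} links total")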
The base model only has 8 GPU cores, so I'm guessing we'll see a 15% advantage over the M1 GPU. Though if you need those 2 extra cores you'll pay a ridiculous $100, along with $200 for each 256GB storage and 8GB RAM upgrade, coming in at a ridiculous $2,500 for the top configuration. You might as well purchase a Pro model for that price, since you get the 16" base model with the 10/16 SoC, 16GB RAM, and 512GB storage, or a 14" model with a 1TB SSD. So you lose a lot of storage and 8GB of RAM, but gain a larger mini-LED screen and a much faster SoC.
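For what it's worth, here is one way the ~$2,500 figure can be reconstructed; all prices below are assumptions for illustration (a hypothetical $1,199 base and Apple-style upgrade steps), not official pricing:

# Hypothetical build-up of the top configuration mentioned above (assumed prices).
base = 1199          # assumed base price: 8-core GPU / 8GB RAM / 256GB SSD
gpu_upgrade = 100    # 8 -> 10 GPU cores (assumed)
ram_upgrade = 400    # 8GB -> 24GB (assumed)
ssd_upgrade = 800    # 256GB -> 2TB (assumed)
print(f"top configuration: ${base + gpu_upgrade + ram_upgrade + ssd_upgrade}")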
"Carried over to M2, and it’s not unreasonable to expect similar gains, though the wildcard factor will be what clockspeeds Apple dials things to" A preposition is an awkward thing to end a sentence with. please learn english.
Obviously, these are hastily-written articles. If they're not published in a timely fashion, they're practically worthless. And they can't afford a copy editor, due to ad blockers.
I am thinking that the days of Intel + Nvidia laptops are DEAD. AMD is shipping a 2 TFLOPS integrated GPU (6xxx series), Apple is shipping 3.6 TFLOPS integrated GPUs, and Intel can't ship anything new in the GPU area - for 8 full freaking years, nothing, nil, nada; its zenith was the Iris Pro 5200.
It is a shame the x86/Windows crowd can't be a little more appreciative of Apple's role in the tech industry; maybe in a few years' time, when the Windows ecosystem follows Apple's *successful* transition to ARM-based SoCs...
Meanwhile, for those worried that Apple hasn't been giving non-mobile gaming enough love, I imagine their experience in designing SoCs for headphones/watches/mobiles/tablets/laptops/desktops could come in quite handy when developing an AR/VR headset?
FWIW, I *do* respect Apple and what they've achieved. I wouldn't exactly use the word "appreciate", but the competitive pressure is certainly a positive aspect.
I have no love for x86. Even 5 years ago, I thought the ARM transition would be much further along, by now.
The industry has a pattern. Apple does something different like removing the floppy, no removable battery, adds working secure biometric security, etc. First, the industry mocks it until they end-up doing the same thing. Going the SoC route is really a no-brainer, especially for laptops, etc. Intel/AMD will have to eventually field something similar to be competitive. Some will complain because they can't add more memory, etc. Most will appreciate the much greater efficiency and battery life. Anyway, we're just in the early stages of the industry mocking Apple on this approach... they'll follow soon enough. They have to in order to remain competitive.
I think the reason Apple was first to deploy in-package memory, at a mass-market scale is due to being vertically integrated. That lets them absorb higher CPU costs more easily than a traditional laptop maker.
The other thing is the GPU, which is the main beneficiary of moving DRAM in-package. Until now, laptops wanting a powerful GPU would deploy one on a separate die. AMD and Intel are now both tapping this market for additional silicon. In Apple's case, they had no separate GPU and didn't want to support it. So, they could more easily decide to go all-in on the iGPU approach.
Agreed, but I think it’s more than that. Modern workloads are increasingly benefitting from more than just the CPU and even GPU. For good well rounded performance, a modern system needs things like a Secure Enclave, dedicated Neural Engine, matrix multiplication unit, dedicated media blocks for common formats, etc. These dedicated units not only bring great performance, but they also bring great efficiency with their special purpose processing.
> dedicated Neural Engine, matrix multiplication unit
Those are mostly about AI, except that matrix multiplies could be useful for HPC, if high-precision formats are supported.
And hard-wired AI provides the most benefit in mobile applications, due to the efficiency benefit vs. a CPU or even a GPU. In a desktop with a bigger power budget and often a bigger GPU, you're better off just offloading it to the GPU.
BTW, both ARM and Intel have matrix arithmetic ISA extensions.
ballsystemlord - Monday, June 6, 2022 - link
What do you mean? A transistor is a transistor, right?
michael2k - Tuesday, June 7, 2022 - link
That’s like saying a cylinder is just a cylinder when comparing a boxer engine, a v8, or an i3 engine.
michael2k - Wednesday, June 8, 2022 - link
Uh, transistor size scales to power usage and performance, just like cylinders.
https://en.wikipedia.org/wiki/Engine_displacement
https://docs.lib.purdue.edu/cgi/viewcontent.cgi?ar...
ballsystemlord - Tuesday, June 7, 2022 - link
Thanks! That explains it nicely.
boozed - Thursday, June 9, 2022 - link
It's fascinating that they wouldn't know exactly how many transistors their designs contain.
Do they conduct their simulations on simplified versions of the circuits?
jab701 - Thursday, September 8, 2022 - link
When you lay out a design you might need buffers to help drive signals further across the chip; these are not functional, and they won't be written in the design, but they are required to make the design work in a physical sense.
The clock distribution network, for example: we know where the clock is going to go, but we won't know how it needs to be balanced until we go from the Verilog design into the physical design.
We also sometimes will duplicate logic etc in order to get a working physical design.
Dolda2000 - Thursday, June 9, 2022 - link
A transistor isn't really a transistor for a multitude of reasons.
For one thing, they can differ in geometry (and size) in order to obtain different tradeoffs in terms of switching speed, leakage current, gate capacitance and other metrics, and one IC design often uses many different transistor designs for different parts of the design.
For another thing, it is common for several transistors to share various parts, such as one gate serving to switch several different channels, which saves space compared to having the same number of distinct transistors.
Various such aspects can be seen making a very distinct difference to total density; for instance, GPUs often pack a significantly larger number of transistors into the same area as a CPU (even on the same process), because it uses different designs to achieve different goals.
zamroni - Tuesday, June 7, 2022 - link
Transistor count correlates strongly with performance, and it's linear in most cases.
More caches, CPU pipelines, and GPU ALUs need more transistors.
SunMaster - Wednesday, June 8, 2022 - link
By your logic the CPU with the most transistors is the fastest. Why do we need AMD and Intel anyway, it's just to make sure enough transistors get manufactured on a large enough die....
zamroni - Friday, July 1, 2022 - link
Basically yes, by increasing parallelism, because pushing for higher frequency is much harder to do.
zamroni - Friday, July 1, 2022 - link
The RTX 3090 is faster than the 3070 because the 3090 has more cores.
The 5950X is faster than the 5800* in multithreading because of more cores.
More cores, more transistors.
For CPUs, single-core performance improvement today is mostly done by lengthening branch prediction, which needs bigger caches.
More caches, more transistors.
mode_13h - Monday, July 4, 2022 - link
> single core performance improvement today is mostly done by lengthening
> branch prediction which needs bigger caches.
LOL. You're joking, right?
Intel and AMD put lots of effort into making their cores faster, and it's achieved by the sum of very many individual tweaks and changes. Try going back and reading the Deep Dive coverage on this site of all the new cores that have been introduced over the past 5 years.
The other thing you're missing is that new process nodes don't only increase transistor counts, they also move the transistors closer together. This enables longer critical paths, in terms of the numbers of transistors to be chained together, for a given clock frequency. That, in turn, allows the complexity of pipeline stages to be increased, which can result in fewer of them. Or, you can have the same number of pipeline stages, but they can have more complexity than before.
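A toy model of that critical-path argument, in Python; every number below is illustrative only and not a measurement of any real process:

# Logic depth that fits in one pipeline stage ~= clock period / per-gate delay.
clock_ghz = 3.5
period_ps = 1000 / clock_ghz        # ~286 ps per cycle
gate_delay_old_ps = 12.0            # assumed gate delay on an older node
gate_delay_new_ps = 9.0             # assumed faster gates on a newer node
print(f"gates per stage: {period_ps / gate_delay_old_ps:.0f} (old) vs {period_ps / gate_delay_new_ps:.0f} (new)")
# More gates per stage at the same clock means either fewer stages or more work per stage.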
Dante Verizon - Monday, June 6, 2022 - link
Nah... It's not even close; you can get a good laptop with a discrete GPU and 16GB for this price, and best of all it's not limited to Apple's system.
Kangal - Wednesday, June 8, 2022 - link
It depends on your needs. Certain MacBooks have been great at release and throughout their lifetimes, and basically have a legendary status similar to the likes of ThinkPads. We just came out of the "dark ages", with 2016-2020 having the worst Mac lineup in a while. The M1 variant was great, and has forced MS/Windows and the likes of Intel and AMD to innovate or catch up. But that was a first-generation attempt, and I have been holding out for the second-gen release.
I'm hoping to see an MBP 14in with 32GB RAM, 1TB storage, and the M2 Pro chipset, with hopefully a +50% increase in iGPU performance. That would ensure it remains competitive for a long duration (4-8 years).
As a reference, here are the well-praised devices:
- The last upgradeable / unibody model (mid-2012 MacBook Pro-15 with GT650M)
- The last medium retina model (early-2015 MacBook Pro 13in Retina with HD-6100)
- The last large retina model (late-2015 MacBook Pro 15in Retina with M370X)
- The last x86 model (early-2020 MacBook Pro 16in with RX5500)
meacupla - Monday, June 6, 2022 - link
You are the first person I have ever encountered that counts "price per transistor".
Everyone else I have encountered thus far always uses "performance/price".
mode_13h - Tuesday, June 7, 2022 - link
perf/transistor is a useful metric for comparing uArch sophistication. In Apple's case, it's a better comparison than perf/$, because the true price of the silicon is somewhat obscured. However, when talking about anyone else, it roughly correlates.
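As a concrete (if crude) illustration of the metric, with placeholder numbers rather than real benchmark scores or verified transistor counts:

# perf/transistor with made-up numbers; only the ratio matters, not the values.
chips = {
    "chip_a": {"score": 1700, "transistors_billion": 20.0},
    "chip_b": {"score": 1650, "transistors_billion": 13.1},
}
for name, c in chips.items():
    print(f"{name}: {c['score'] / c['transistors_billion']:.1f} points per billion transistors")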
zamroni - Friday, July 1, 2022 - link
https://en.wikipedia.org/wiki/Transistor_count
Pushing higher frequency is very difficult.
That's why manufacturers simply put more cores and caches in to increase parallelism.
If AMD wants to reach 3090 RT performance, then AMD must add more RT cores.
More caches/cores means more transistors.
TheinsanegamerN - Tuesday, June 7, 2022 - link
On price per transistor Apple sucks; my 6800 XT and 5900X combined cost a fraction of a MacBook.
michael2k - Wednesday, June 8, 2022 - link
I’m sure an M2 also costs a fraction of a MacBook.
melgross - Monday, June 6, 2022 - link
Not really. It's pretty good. It just depends on what you're using it for. A number of scientific uses seem to be much faster on the M1 Ultra than anything else. A problem is that a number of games and other apps weren't optimized for it. But now, performance in those apps is much better than it was in the beginning. For games, it's up to developers, who so far haven't cared all that much. But there are some games coming out later this year that might work very well.
TheinsanegamerN - Tuesday, June 7, 2022 - link
LOL mac gaming.......
OreoCookie - Wednesday, June 8, 2022 - link
I reckon that anything which benefits from having ample memory bandwidth (as some scientific applications do) benefits here. The M1 Pro/Max/Ultra and the M2 have quite a bit; the M1 Ultra has about as much memory bandwidth as top-end graphics cards and compute cards. It also depends on what kind of optimizations are possible (are you using an Intel CPU that supports AVX-512, and is the code optimized for that?).
But overall, that's not a new problem. Before knowing what is different about the cores in the M2 compared to the M1 (apart from larger cache sizes), it'll be hard to explain which workloads see performance gains and to what degree.
Tom_Yum - Monday, June 6, 2022 - link
I think the mismatch between M1 graphics compute performance and gaming performance comes down largely to software (both macOS and the actual games) more than the underlying hardware. Few developers organically develop for Metal (more often they get other developers to port the game across to Metal for them, which is never as efficient as developing from day 1 for Metal support), and the degree of optimisation for Metal comes down to the skill of the developer and how much effort they deem worth investing to target a pretty small market. There is also the question of how well optimised macOS itself is for the needs of gaming, such as low-latency input management. It has never been a priority for Apple, who have largely targeted the creator market that doesn't really benefit from those things.
Personally, I think Apple is going to need to take a leaf from Nvidia's book and bankroll a bunch of developers to develop Metal-optimised versions of popular games, and through that try to nurture macOS gaming as a viable, worthwhile target for all gaming developers. Until they do, most developers will continue to ignore macOS or treat it as a porting afterthought, with only just enough time put into it to make it run but not enough to make it well optimised. At present, the main way macOS users play games is through Parallels or CrossOver, which doesn't help address the situation because it further disincentivises developers from making decent macOS ports.
Basically, Apple could make a RTX3090Ti-equivalent GPU, but it would still game poorly if every game is gimped by poor Metal optimisation or running through a clunky Vulkan to Metal translation layer.
name99 - Monday, June 6, 2022 - link
You assume Apple is interested in pursuing the "hard core/console" gamer market.
I see no evidence for this, now or ever.
Apple is interested in the reverse issue of "make the mac experience richer for people who are already mac fans", but they have zero interest in "converting" non mac fans, especially the sort who rant on the internet about how much Apple sucks...
Scalarscience - Monday, June 6, 2022 - link
I agree with this, with the caveat that they also do a great job of appealing to extremely casual gamers/mobile gamers/children using Apple devices. They would clearly eyeball this as a potential future market and track it to see if it is worth investing more in, as evidenced by them already moving the Apple TV platform further into their gaming ecosystem.
Of course, all companies would be eyeballing crossover mobile users now, so you could easily classify this under your comment about these people already being 'mac fans'.
Abe Dillon - Thursday, June 9, 2022 - link
I agree that they seem to have little interest in targeting "gamers". It's a low profit market with lots of entrenched competitors.
What I don't get is why you think they should target the people "who rant on the internet about how much Apple sucks". Trying to "convert" people who never matured past a grade-school mentality of brand tribalism is a complete waste of time. They're impervious to reason.
Example: This thread was started by someone who actually believes the M1 Ultra's 64-core GPU is outranked by an RTX 3050. I don't know where they're getting that info, but it's pretty silly.
Tom_Yum - Tuesday, June 7, 2022 - link
I didn't see any ranting about how much Apple sucks in my post. While I agree that Apple isn't interested in going after the 'hard core gamer' market, I strongly doubt that Apple doesn't see business benefits from broadening the appeal of their Macintosh line beyond current 'mac fans', given the Mac also serves as a gateway into the broader Apple ecosystem. The hardware is capable of gaming, but the software is lacking.
There are many people out there that would love to buy a Mac but also occasionally play games. Most people are generalists who want something that can do everything OK (think about how popular SUVs are now, even if they are rarely used offroad). Do you think the Mac's market share wouldn't increase significantly if it could provide all the performance it does, all the battery life it does, and could also reliably run the majority of AAA titles? Do you really think Apple isn't interested in increasing macOS's marketshare? Sell a great laptop, and then the next time that person buys a phone, suddenly that iPhone has a lot more value add, suddenly an Apple Music subscription seems like a better deal, etc. Before you know it, that individual who sat outside the Apple ecosystem is fully embedded. Or they can continue to target only 'mac fans' who are already in the ecosystem....
name99 - Tuesday, June 7, 2022 - link
(a) Go read a site like phoronix. No matter WHAT Apple does, it's not good enough and is interpreted in the worst possible way.
(b) Remember this?
https://www.escapistmagazine.com/john-romero-apolo...
Apple does not want to be associated with that culture. And sure, so OK, 20 years on Romero grew up. Problem is, there is a constant stream of new teenagers with the exact same mentality to take his place.
(c) What do you want Apple to do? What developers say they want is Apple devices that look exactly like PCs. They want x86, and they want DX or Vulkan. Apple isn't going to give them either of those (for the very simple reason that they're not going to destroy their long term goals to gain 5% extra sales this year).
That's my point -- gamer whining is not rational, actionable whining. It is not a list of realistic things that Apple could do, it's whining for the sake of whining.
For example, something Apple has just added as part of Metal3 is a technical call to get the base address of the GPU. This apparently will make Vulkan much more performant. But notice -- no gaming whiner was asking "provide this API call", something realistic and actionable. It was all "Apple sucks and they cost too much and they should be using Vulkan because it's a million times better than Metal, and I would never buy a Mac even if you paid me"...
mode_13h - Saturday, June 11, 2022 - link
> no gaming whiner was asking "provide this API call", something realistic and actionable.
If you want to know what actual game *developers* think of Apple, you might be looking in the wrong places. There's a lot of noise on the internet that can easily drown out whatever signal you're seeking, if you don't filter it carefully.
mode_13h - Saturday, June 11, 2022 - link
> Go read a site like phoronix. No matter WHAT Apple does, it's not good enough
> and is interpreted in the worst possible way.
Phoronix is specifically focused on Linux and open source. Apple is famously secretive, prefers proprietary standards, practically invented walled gardens in computing, and caters towards less technical users. You basically couldn't find an audience more hostile to Apple, if you tried. To cite them as characteristic of some general reception towards Apple is either seriously misguided or disingenuous.
magreen - Wednesday, June 8, 2022 - link
Your posts are eminently reasonable and thoughtful. Perhaps that's why they garner such a strong reaction.
Abe Dillon - Thursday, June 9, 2022 - link
"There are many people out there that would love to buy a Mac but also occasionally play games."There are tons of options for those people. Get a mac and a cheap PC or a Console.
"Most people are generalists who want something that can do everything ok"
Most people **aren't** hard core gamers who care about hitting 60 fps on max settings.
Most hard core gamers think everyone is a hard core gamer. There are plenty of people who just want to play a bit of Stardew Valley or The Sims or Minecraft and they're perfectly happy doing it on an old PS 4.
"Do you really think Apple isn't interested in increasing MacOS's marketshare?"
Yes. I do. Apple puts profit margin ahead of market size. If the market is huge, but the profits are razor thin, then you're dancing on a thin line between success and bankruptcy. It only takes small fluctuations in component prices for your margin to go negative and sink the ship. Game machines are notoriously a low-margin business. Apple has no interest in that.
"Sell a great laptop, and then next time that person buys a phone suddenly that iPhone has a lot more value add"
The halo effect cuts both ways: enter a low-margin market and you'll have to compromise on quality to compete, then it tarnishes the whole brand.
"Or they can continue to target only 'mac fans' who are already in the ecosystem."
It's rather self-centered to assume that Apple's only play is to cater to hard-core gamers or 'mac fans'. There couldn't possibly be other markets for Apple than the "coveted" gamer market, right? They just dropped a machine that can encode/decode 18 streams of 8K video simultaneously. Do you think they did that for shits and giggles? Don't you think they **might** be targeting some sort of market with that capability? <<rolls eyes>>
mode_13h - Tuesday, June 7, 2022 - link
> Apple ... have zero interest in "converting" non mac fans
They are continually searching for ways to increase their revenues. With revenues as large as they already are, that's not an easy task. Therefore, I absolutely expect them to make a play to significantly enlarge their user base.
If you think about it, they're already in people's living rooms with Apple TV. So, wouldn't it be interesting for them to make a backdoor push for the console market, by introducing a significantly upgraded model? Nintendo has long shown that you don't need top HW specs to build a significant userbase - you just need the right content.
gund8912 - Tuesday, June 7, 2022 - link
Mac sales increased considerably last quarter, so I don't think just "Mac fans" are buying them.
Sailor23M - Wednesday, June 8, 2022 - link
I bought my first Mac last year, an M1 MBA. Best decision ever.
OreoCookie - Wednesday, June 8, 2022 - link
IMHO that's a very outdated view. Apple's most popular platform is iOS, and it has billions of users. All of the technology in the M2 will come to the next iPhone and next iPad. It seems likely that Apple will literally put the M2 in the next-gen iPad Pro.
By the numbers, iOS is likely the #1 gaming platform on the planet. And it isn't just Candy Crush-type games either. I live in Japan and I see plenty of people playing real games on their phones (RPGs are big here).
mode_13h - Saturday, June 11, 2022 - link
There's a certain logic in Apple approaching the console gaming market. There's a sizeable userbase, willing to spend significant amounts of money on software and subscriptions, which has already demonstrated a willingness to live within a walled garden. Plus, they value a simplified low-friction user experience vs. the technically rich environment of PC gaming.
It almost makes *too much* sense, if you think about it like that.
Silver5urfer - Monday, June 6, 2022 - link
Shadow of the Tomb Raider has been updated to the Metal API and uses 64-bit macOS support - perhaps not full ARM compatibility, and it needs Rosetta translation, but guess what? Apple's so-called "in-house" (a.k.a. PowerVR stolen/poached) technology, powered by an M1 ULTRA in a fully maxed out 48GB config, gets a pathetic 82FPS on the HIGH preset at 1080P resolution, rofl. Versus a 3090 at ULTRA with RT at 135-140FPS, and that's on a 5800X; if we use a 10900K or 12900K it will have even more performance.
Source - YouTube; search for that game on the Apple M1 Ultra. Apple got OWNED.
Apple is not going to spend millions or billions on the gaming market just because they have free cash. Gaming is a strong market that demands heavy DIY or console backing, and Apple has had ZERO IP or experience in this industry since ATARI. Apple is not fit for this task, which is why they are targeting more ecosystem benefits to make their fans buy more of their walled-garden Utopia products and venture into Services, since the iPhone is slowly fading into that "consumption" market. Soon Apple will launch AR and a Car to further diversify.
RE8 Village is being ported to Mac by this Fall 2022. RE Engine is not heavy at all, so it will be easy to run that game for sure because it runs on Gen 8 consoles which are weaker than a Q6600 lol. Anyways the thing is I want benchmarks which would be interesting with their new M2 debut.
gescom - Tuesday, June 7, 2022 - link
"M1 ULTRA fully maxed out 48GB config gets pathetic 82FPS on HIGH preset at 1080P resolution rofl"Well, in that case, a MacBook Pro & PS5 combo would solve it and a 600W RTX generator + a 250W Intel's generator "solution" isn't needed anymore.
Silver5urfer - Tuesday, June 7, 2022 - link
Yeah, when performance is in tatters you switch gears to max TDP ratings.
600W? The 3090 FE is 350W, and you can underclock it to 300W and lose at most 5%. And for the 12900K, what makes you think gaming draws 250W PL2? Not even the 10900K goes that high; in gaming the 12900K draws 120W tops and the 10900K goes to 150W max. The 5800X3D, which can be at similar performance, is far, far less, and if you take a 6800XT it will be about 10% behind the 3090 at the same resolution. So that's out of the window. We were talking about how Apple claimed 3090 performance, charged an arm and a leg, and got beaten badly on all occasions - except the specialized workloads where they have IP blocks on the silicon, and once FPGAs get embedded into x86 by AMD it will be beaten there again.
So does M1 Ultra have any expansion slot ? nope. Proprietary junk SSD, forget PCIe expansion, I can make any x86 PC into a Proxmox machine with HBA cards and what not. This pile of Junk can only boot Mac OS at best with best compat.
Ultimately? Apple got SHREDDED: a $4,000+ piece of junk that can barely beat 2020 hardware. So much for the bleeding-edge PR smoke and mirrors.
PlagueStation is irrelevant; Apple got destroyed by a far cheaper, direct competitor product. The PS5 is utter junk because DMC 5 on it runs at 4K30FPS, lol, and at 1080P it cannot get more than 120FPS, which is shameful since the 1080Ti is 6 years old and runs the same title at 170FPS at 1080P.
mattbe - Tuesday, June 7, 2022 - link
Pretty pathetic that you are just making stuff up to defend Apple. A 600W RTX generator? That would probably involve a ridiculously high voltage cooled by liquid nitrogen, and even then, the results the other poster mentioned didn't use that kind of extreme overclocking setup.
Realistically, a 350W 3090 would get those results.
blackcrayon - Sunday, June 12, 2022 - link
Strange how you point out that Shadow of the Tomb Raider is running in emulation, and yet Apple got "owned" on that game. Talk about unmaking one's own point.
techconc - Monday, June 13, 2022 - link
I'm not sure what point you are trying to make by comparing non-native games running under emulation on Apple hardware and then pretending to make a hardware performance comparison based on it. That shows that you really don't understand what you are comparing.
Realistically, the M1 Ultra is sort of an upper mid-range offering. It's not Apple's Mac Pro offering. In something like GfxBench, it's getting very close to 3080 scores. That's not bad for the range it's targeting, especially considering the power usage. We'll see the numbers continue to improve and scale well, as they have. Second-generation silicon looks to be a nice improvement overall.
Doug_S - Tuesday, June 7, 2022 - link
People keep whining about gaming performance on Apple Silicon Macs. I guess those people don't realize that the mobile gaming market, which Apple dominates, is larger than the entire console gaming market and the entire PC gaming market COMBINED.
Apple dominates gaming. People who read forums like AnandTech don't accept that because they can't play the latest AAA FPS on a Mac. Like it or not, people playing games on phones is where the money is today, and has been for a while. PC gaming that requires an "RTX3090Ti equivalent GPU" is a niche market, sorry to have to break that news to you.
Pneumothorax - Tuesday, June 7, 2022 - link
Most of us here don't care much for the next version of Candy Crush and the other PTP derivatives that crowd the iOS gaming App Store. I decided to keep my 2019 MBP 16 for now for Boot Camp purposes, as I would rather not carry two laptops on the go when I'm working on the road, since I also game. I tried an MBP 16 M1 Max and it was pitiful for the few games I could run. Even the lightweight FFXIV ran horribly on it.
Silver5urfer - Tuesday, June 7, 2022 - link
What a massive joke.
We are here comparing the performance of the M1 and M2 to their Intel, AMD, and Nvidia counterparts, not Snapdragon vs. the A series, in which Apple is lost, lmao. The worldwide Android userbase is exponentially higher than Apple's. Apple even lost the top 2 ranks now, with Samsung at rank 2 and Xiaomi at rank 1. With BBK it will slowly fall even further.
Clown Apple is lost everywhere they rely on American market only and those fanboys plus Rich people who can afford their BGA machines and Smartphone a.k.a Social media toys.
Mobile gaming Joke is filled with Gacha trash called Genshin Impact MTX GaaS nonsense or Netease powered MTX garbage games. Psst... with all that power Apple A series processors do not have Emulation support. So all the Android phones are emulation kings, going by sheer library of what Android can do it's no contest.
Nobody plays AAA on Mac that junk already axed 32bit support so even old games are dead on that platform meanwhile DOOM from 1993 can be played on a Windows machine. Yeah nobody plays on an RTX3090Ti that is very thin marketshare. But many own 1080 class and the new consoles are 1080Ti-2080 class approx. Which Apple's own M1 Ultra lost to 2080Ti rofl.
Gaming on all sides Apple is goner. Their latest brag about Metal 3 is just trying to be competitive since DX12 and Vulkan left them in the dust since decades. You do not need RE8 to evaluate anything since SOTRR on Apple M1 Ultra using Rosetta but Metal absolutely got destroyed by a 3090 with 2X more performance. Once new gen drops that gap will double, and yeah even a CPU with 8700K can upgrade to an RTX4090, and RTX5090 while M1 Ultra ? Stuck in coping on how great M series is and how nice it is in power consumption.
Doug_S - Tuesday, June 7, 2022 - link
Feel free to live in denial by looking at sales figures, which include lots of low-end Androids sold to people who are not paying app developer salaries.
Apple has 84% of mobile gaming revenue, and as I said, mobile gaming is bigger than console and PC gaming combined. Apple is the biggest gaming company in the world; you considering them a joke doesn't change that fact.
https://venturebeat.com/2012/05/06/mobile-gaming-r...
Doug_S - Tuesday, June 7, 2022 - link
Just noticed that was US revenue only. In worldwide revenue, according to Statista, Apple had $42 billion in H2 2019 - H1 2020 and Android had $27 billion. Still a clear majority.
Silver5urfer - Wednesday, June 8, 2022 - link
A platform minting money because of dumb normies playing MTX drivel != a gaming market. If you want to count that as gaming because of the tag associated with it, let it be. As I said, the mobile gaming industry is a joke; it's filled with pure trash with zero art style, passion, or anything worthwhile.
By 2025, mobile gaming trash is going to take 60% of the traditional industry, leaving only 40% to consoles and PC. Now you will probably celebrate hearing that, about how Apple is amazing since they make a ton of cash on the mobile platform.
Even if they take up 99%, those are not games. Just as single-player is slowly being phased out for MP MTX GaaS junk, single-player titles are what I'd define as games. Quality >>>> quantity, and passion to tell stories and build worlds worth spending time in is what it's about.
Doug_S - Wednesday, June 8, 2022 - link
OK, so "gaming" is whatever you define it to be. If you can make up your own definitions, your opinions are worthless to everyone but yourself.
magreen - Wednesday, June 8, 2022 - link
I only need to read the first sentence of your posts and I know to ignore them. That is very helpful--thank you!
Sailor23M - Wednesday, June 8, 2022 - link
Lol, true
Scalarscience - Monday, June 6, 2022 - link
I have an RTX 3090, and while I would certainly choose it for a top-tier game in 4K and content dev uses (3DSMax/Maya/Blender/Substance/etc), there are certainly Mac-optimized applications where the M1 Ultra is no slouch for its thermal/performance envelope.
And that's the actual point for some, especially a recording engineer/home DAW user who might worry about the fan noise for obvious reasons. I fit into all categories mentioned here, as well as some sci/viz uses and optics etc, and am extremely aware of the power envelope of the GPUs in here, and the sheer number of high-performance but high power draw and thermal output machines. I'm constantly battling excess heat, and work to keep a constant temperature in here at all times.
The successor to the 3090/3090Ti ups the power envelope even more, to the point where it's going to be the primary power draw even in higher end Workstations by quite a large degree. One difference to Intel (even though comparing CPU & GPU is problematic) is that nVIDIA seems to use TDP well as a performance upper limit, whereas Intel allows burst power draw and thermals that fall well outside of the simplified stated specs (you have to understand the implementation and timings allowed for to understand why). So that comparison is again problematic...and just problematic overall for building rigs for various usages indoors and outdoors.
Otritus - Tuesday, June 7, 2022 - link
Performance in a tight thermal envelope is irrelevant to comparisons between the M1 Ultra and the RTX 3090. Apple claimed the M1 Ultra could match or even beat the RTX 3090. Instead, the RTX 3090 clapped the M1 Ultra in all relevant workloads. The M1 Ultra truly is impressive from a bandwidth and GPU perspective in the thermal package of the MacBook Pro, but the clowns at Apple marketing focused on false performance claims instead of real efficiency superiority.
techconc - Wednesday, June 8, 2022 - link
The 3090 does better than the M1 Ultra on things like hardware-based raytracing. However, other general GPU benchmarks such as 3DMark, GfxBench, etc. show the M1 Ultra to be roughly on par with the 3090.
Both products are optimized for different workloads. The 3090 is more for gaming and raytracing; the M1 Ultra is more for video editing / filters, etc.
meacupla - Wednesday, June 8, 2022 - link
You are aware that the M1 Ultra doing well on benchmarks means nothing, right?
Like, do you buy the product to get work done, or are you buying it to run benchmarks?
Abe Dillon - Thursday, June 9, 2022 - link
That's... exactly what techconc's point was. Did you even read the post?
AS_0001 - Tuesday, June 7, 2022 - link
The M1 Ultra had all the raw performance; the problem was with TBDR and IMR. Most of the time the GPU was sitting idle, and that's why people thought it was underperforming.
Why should we care about transistors? Let's compare end product to end product, where Apple is at best matched to x86 competitors, and often it loses on price to performance ratio. And this is even before you consider software compatibility issues due to ARM architecture (and due to macOS, but pre-existing Mac fanatics already learned to cope with that).
MayaUser - Tuesday, June 7, 2022 - link
For me, the M1 Ultra matches the GTX 3070 in sustained GPU load.
The M2 Ultra, with its upcoming 28.8 TFLOPS, will surpass the GTX 3070 - of course nowhere near the GTX 3090, especially in gaming, which is what Nvidia is known for.
(My testing is in Maya 3D modelling.)
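The 28.8 TFLOPS figure looks like a straight linear scaling of the base M2 GPU's quoted 3.6 TFLOPS to a hypothetical Ultra-class part with 8x the GPU cores - an assumption, shown here as a sketch:

# Projection only: assumes 8x the base GPU cores, perfect scaling, and equal clocks.
m2_tflops = 3.6
core_multiplier = 8
print(f"projected Ultra-class GPU: {m2_tflops * core_multiplier:.1f} TFLOPS")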
shabby - Tuesday, June 7, 2022 - link
So the M2 won't be faster than a 4090? Darn...
caribbeanblue - Tuesday, June 7, 2022 - link
100% of the poor performance you encounter in games with Apple Silicon is due to lack of proper optimization and support from game devs (not saying this to *blame* them).
Calin - Wednesday, June 8, 2022 - link
The GTX3050 is a 75W card (IIRC), and the 3090 is a 250W (I think) card.
Meanwhile, the M2 uses up to 15W for graphics.
The power discrepancy is too great here, no chance the M2 will beat a card running at five times the power budget.
Assuming the M2 Ultra Pro Max triples the power budget for the integrated GPU, it still won't reach parity with the 3050.
meacupla - Wednesday, June 8, 2022 - link
No, the comparison was between the M1 Ultra and the RTX 3050.
The M1 Ultra consumes 215W at full load, whereas the RTX 3050 only does around 75W at full load.
Guess which one is faster by about 35%? It's the RTX 3050.
So not only is the RTX 3050 more power efficient than the M1 Ultra, it does it on a worse node, and remains faster.
Yet, somehow, I am expected to believe an M2 Ultra is going to match a RTX 3090, when the base model M2 only has a 35% improvement in graphics over the M1.
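Putting that comparison into numbers, using only the figures stated in the comment above (the 35% delta and the two power figures are taken as given, not independently verified):

# Perf/W ratio implied by the comment's figures; M1 Ultra throughput normalized to 1.0.
m1_ultra_perf, m1_ultra_watts = 1.00, 215
rtx3050_perf, rtx3050_watts = 1.35, 75
ratio = (rtx3050_perf / rtx3050_watts) / (m1_ultra_perf / m1_ultra_watts)
print(f"RTX 3050 perf/W is ~{ratio:.1f}x the M1 Ultra's, under these assumptions")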
Ashan360 - Friday, June 10, 2022 - link
Replace "Graphics" with "most current gaming and 3D modeling" and I'll agree with you. In other software-optimized, GPU-bound tasks, the M1 series chips are class-competitive or leading when evaluated on power efficiency, which is super important for laptops. Software optimization is just a matter of time, now that there is such a quickly growing market of Apple Silicon Macs owned by affluent users to sell software for.
Willx1 - Friday, June 10, 2022 - link
I thought the Max kept up with a 3060; how does the Ultra only compete with a 3050? I assume you're talking about gaming only?
Silver5urfer - Monday, June 6, 2022 - link
Initially I was thinking Apple would go for N3, but then it didn't make sense how fast Apple could switch, given Macs' % revenue cut of the entire Apple stream, on top of the increasing costs. So maybe N3 will debut with the M3.
So, a modest gain. I wonder how the cost scaling is; it's annoying to keep track of M1 release vs M2 release prices. I expect a bump of at least $250-300. Anyway, Apple fans can probably buy this, but looking at this cadence of refresh, it's really stupid. I hate how the trend has become nowadays; it started with smartphones and their yearly refreshes. With the BGA machines it's even worse: you cannot even replace anything, let alone upgrade. On top of that, Apple's HW is super anti-DIY and locked down to the brim, esp. that Secure Enclave, which is basically an Intel ME / AMD PSP type backdoor system on the silicon.
Their graphs are super idiotic lol just remove that garbage curve and draw a bar chart you get the performance. Also their GPU is going to be decimated soon by upcoming Nvidia and AMD's new designs on TSMC 5N.
The CPU of M1 already got beaten by existing laptops when Andrei tested, esp it was only a meager % better over Zen 2+ Ryzen 4000 ultra low power BGA designs which were super inferior over even Comet Lake BGA parts and pathetic GPUs (1650 mobile LOL). IPC already was beaten by Alder Lake forget about MT since it cannot even beat many BGA processors.
M1 Ultra was faster in GPU but it was nothing in front of already existing designs and Apple got slapped by even Verge on RTX3090 class claim and beaten by an RTX 2080Ti, which is 4 year old product and costs less than their highly expensive ($4000) transistor bloated design.
I wonder if they are going to target RTX4090Ti / RX7950XT MCM this time with M2 Ultra ;)
name99 - Monday, June 6, 2022 - link
You're assuming there will be an M2 line structured the same way as the M1 line.
I don't believe so. Notice that no M2 mini was announced…
My guess is the M line will be (approximately) the battery powered line, like old Intel U-class. There will be a new line announced soon’ish after A16/iPhone14 that will span the equivalent of H-class to Xeon, not attempting the ultra-low energies of the M line, with consequent higher performance.
If I’m right, don’t expect an M2 Pro, Max; the new line, call it Q1, will start at the Pro level, presumably in an 8+2 config (probably there’ll be a 6+2 yield-sparing version) and go up from there, with the Max being an ultra config (two chips) and the Ultra being a four chip (32+8) config.
Well, we’ll see! 3.5 months till September event, then maybe two months after that…
fazalmajid - Monday, June 6, 2022 - link
The M1 Ultra was actually part of a three-chip set developed for their flailing AR glasses project until Jony Ive decided to gimp the project (like he did the butterfly keyboard) by nixing the auxiliary processing unit. Source: The Information.
mode_13h - Tuesday, June 7, 2022 - link
The M1 Ultra in AR Glasses? You mean in a hip-mounted rendering pack, a la Magic Leap? Because even at the lowest power threshold, it doesn't seem like you could put it in anything head-mounted that Apple would actually sell.
Another reason I'm skeptical the Ultra was really designed for AR is that you really don't need so many general-purpose cores for it. The computer vision portions of the pipeline are more suitable to run on DSPs or GPUs, while the rendering is obviously GPU-heavy. In MS' Hololens 2, did they even do any compute outside of the Snapdragon?
name99 - Tuesday, June 7, 2022 - link
Not hip mounted.
What's suggested in the patents is a base-station in the same room as the glasses.
I think it's foolish to make predictions about how much compute the glasses will or will not need for various tasks, especially if Apple will be, for example, making aggressive use of new technology like Neural Radiance Fields as part of their rendering.
Your argument is essentially "A fitbit can use some dinky ARM chip, therefore an Apple watch can also use same dinky ARM chip"; that only works if you assume that Apple wants to do the exact same things as fitbit, nothing more...
mode_13h - Tuesday, June 7, 2022 - link
> What's suggested in the patents is a base-station in the same room as the glasses.
Are you sure they were talking about AR, then? Because AR is really about being fully untethered. Making an AR system with a basestation would be like an iPhone that had only wifi connectivity. You can do it, but it doesn't make a ton of sense - especially when the rest of the world is making proper cell phones.
> I think it's foolish to make predictions about how much compute the glasses will
> or will not need for various tasks
Set aside your mild disdain, for a moment, and let's look at what I actually said:
"you really don't need so many general-purpose cores for it."
Nothing you said contradicts that. I simply pointed out that general-purpose CPU cores aren't the best tool for the heavy computational tasks involved in AR. That (and power/heat) are the reasons it sounds strange to use something like the M1 Ultra, for it.
> for example, making aggressive use of new technology like Neural Radiance Fields
Sure, and you need neural engines for that - not general-purpose CPU cores. So, that tells us that the Ultra has too many CPU cores to have been designed *exclusively* for AR/VR. That's not saying AR/VR wasn't _on_the_list_ of things it targeted, but @fazalmajid suggested it was developed *specifically* for the "AR glasses project".
> Your argument is essentially "A fitbit can use some dinky ARM chip,
Not exactly. There's a baseline set of tasks needed to perform AR, and Hololens is an existence proof of what hardware specs you need for that. Sure, you could do a better job with even more compute, but at least we know an Ultra-class SoC isn't absolutely required for it. And when we're talking about a little battery-sucking heater you wear on your head, the baseline specs are very important to keep in mind. That said, Hololens 2 is a great improvement over the original, but still could use more resolution + FoV, so the benefit of more GPU horsepower is obvious.
Abe Dillon - Thursday, June 9, 2022 - link
I think this take on the AR project is pretty dumb.
Jony Ive screwed up a few things (yes, the butterfly keyboard) but that doesn't mean he's the idiot that the nerd-o-sphere believes him to be. He was a big part of resurrecting Apple and building it into one of the most powerful companies in the world. It's not like Apple's comeback happened *despite* Jony Ive.
It doesn't take a genius to figure out that if you have to tether AR glasses to a base-station or even a dedicated pocket computer, it pretty much defeats the whole point of AR. Tethered AR isn't AR: It's a gimmick. He pushed for an untethered device that had lower fidelity graphics. That sounds like the right decision to me. Nobody is going to give a rat's anus if your AR is photo-realistic when they're tethered to a brick.
If Apple killed the AR project it's because the tech isn't ready yet (kinda like VR). Apple learned a long time ago that there's no prize for being the first to market. The iPhone wasn't the first smart phone, but it was the first smart phone to feel like a complete design rather than a prototype.
mode_13h - Saturday, June 11, 2022 - link
> It doesn't take a genius to figure out that if you have to tether AR glasses to a base-station
> or even a dedicated pocket computer, it pretty much defeats the whole point of AR.
Yes, that's what I'm talking about. Microsoft showed you can do it with tech of several years ago, and it's quite usable for several things. For Apple to then release a more restrictive or bulky product would basically be a non-starter.
Whatever they do, it has to be at least as good as Hololens 2, in all respects (and significantly better, in at least a couple).
techconc - Wednesday, June 8, 2022 - link
Timing of specific product announcements is no indication that the M2 line won't be structured in the same way. The M1 iMac wasn't announced with all of the other M1 products either; it came later. If you haven't noticed, there is a major global supply chain problem right now. Apple is only announcing what it thinks it can ship, and it is struggling at that.
ABR - Tuesday, June 7, 2022 - link
So you're dinging Apple for coming out with a new SOC with ~20% perf improvement after 18 months!? Just want to make sure I have that straight. Maybe you can remind me by the way, what was a typical increment for an annual Intel tick or tock?mode_13h - Tuesday, June 7, 2022 - link
Yeah, 20% seems rather good, considering they're both made on roughly similar process nodes.Kangal - Tuesday, June 7, 2022 - link
Agreed.Also Ryan is very wrong with his assumption with the M2 using A15 processors. The M1 was developed during the A13 processor, and used many aspects of the upcoming A14 processor. And the performance was basically equivalent between A13, A14, and A15. Since they're essentially on the same node, even on the TSMC-4nm, you don't have enough room to squeeze out that much performance.
All signs point to this using a new architecture, as has been rumoured for a long time. The ~20% performance uplift is impressive, even if users only see about 10% of it in real-world usecase. The upcoming iPhone 14 (A16 chipset) will likely boast an impressive increase in performance similar to this.
The only thing that I am not thrilled about is the graphics. Apple did very well with the M1, basically beating both Intel Xe and AMD Vega, even if they relied on Rosetta2. But the market has changed. Intel Laptops are getting getting graphics bump thanks to Nvidia dGPUs getting cheaper. Whereas AMD has started competing with their RDNA-2 iGPU. I feel like the +35% upgrade is barely enough, and was hoping for a +50% or so.
On the optimistic side, it is possible that Apple has saved the best for last. Maybe we will get an M2 Plus, M2 Pro, and M2 Max chipsets soon. Perhaps these will cost more money, but come with more substantial upgrades. They certainly hinted towards that with the collaboration they had for Biohazard/Resident Evil Village which is getting a macOS port, with the latest Metal API which supports things similar to FSR/DLSS and DirectStorage. So Apple is right up there with the Competitor APIs like Vulkan/AMD/Nvidia/DirectX 12U.
Silver5urfer - Tuesday, June 7, 2022 - link
10th gen CML vs 11th gen RKL has 19% IPC improvement, 14nm++ same node.mode_13h - Tuesday, June 7, 2022 - link
> 11th gen RKL has 19% IPC improvementSome of that was from AVX-512 and some of it was from raising the thermal ceiling even higher (which isn't an option for mobile-oriented CPUs). Also, it's easier to improve IPC, when you're starting from a lower base.
Not enough people are focusing on LPDDR5. Just look at what a benefit it was for Alder Lake's all-core performance! Surely, a chunk of the M2's IPC gains must be coming from this.
id4andrei - Wednesday, June 8, 2022 - link
That 18% is multithreaded, so not IPC. Split that among 8 cores and you will not have a huge IPC advance.techconc - Wednesday, June 8, 2022 - link
LOL... no, that's now how it works. If multithreaded performance is up 18%, that means the average of all cores is up 18%. You don't divide the 18% by the number of cores. Realistically, just like the A15, the M2 has modest increases with their performance cores and very significant increases with the efficiency cores which average out to 18% across the board.id4andrei - Wednesday, June 8, 2022 - link
Reading nanoreview's comparison between 6800U and 5800U from AMD, there is a 4% ST Cinebench advantage for 6800U and 24% advantage MT perf in Cinebench.I've always read IPC increase as a measure of single core perf improvement instead of total SoC. Even if I am wrong, 18% increase in a multithreaded benchmark score does not necessarily translate to a 18% increase per core score result
techconc - Wednesday, June 8, 2022 - link
"18% increase in a multithreaded benchmark score does not necessarily translate to a 18% increase per core score result"On average, yes, it does. Again, in this particular case, the performance cores saw a more modest game but the efficiency cores saw much bigger gains. Across all 8 cores, the increase averages to 18%. That's exactly what that means.
I'd also note that it becomes increasingly silly to just look at the CPU when discussion an SoC. The GPU increased by 35%, the NPU creased by 50%, it has a more powerful ISP, better media encoders/decoders, much higher memory bandwidth, etc, etc. The net improvement of the M2 over the M1 is pretty substantial.
id4andrei - Wednesday, June 8, 2022 - link
Let's wait and see the official data on benchmark workload and 3rd party reviews. Apple, like all companies, are notorious at cherry picking metrics and making use of wording like "up to". For instance the 35% is at peak consumption on the 10 core variant of the M2. You cannot infer GPU core progress when you compare 10 core vs 8 core(or maybe vs the M1 variant with 7 cores).techconc - Wednesday, June 8, 2022 - link
The question isn't whether Apple is cherry picking benchmarks or not. They very well may be. The point I was making is that if ANY device shows an 18% increase on multithreaded workloads, then logically, that's an average increase of 18% ON EACH CORE. From the A15, we already know the performance cores get a smaller bump and the efficiency cores get a bigger bump. Your original comment was suggesting that you'd have to split that by the number of cores to see the IPC improvement for any single core. That's simply not true.id4andrei - Wednesday, June 8, 2022 - link
I get what you mean and accept my fault. I did rationalize it as a split.Ashan360 - Friday, June 10, 2022 - link
You’re never going to convince these people. The ego doesn’t allow for being incorrect on the internet.Ashan360 - Friday, June 10, 2022 - link
LOL reading this thread 2 days late and the very next response was on the following page refuting my claim. We’ve all been proven wrong!techconc - Wednesday, June 8, 2022 - link
"For instance the 35% is at peak consumption on the 10 core variant of the M2. You cannot infer GPU core progress when you compare 10 core vs 8 core(or maybe vs the M1 variant with 7 cores)."If you look at their graphs, they show a 25% GPU improvement at the same 12W power level. It goes to 35% at 15W when all GPU cores are active. It's not exact, but it gives you a pretty good idea.
Silver5urfer - Tuesday, June 7, 2022 - link
RKL had Double Digit IPC growth 19% it failed to translate to Real world and lost hard 11900K vs 10900K due to loss of 2C4T and 14nm backport. 12900K improves even more IPC 20% on top of RKL again.Apple is not some magical unicorn that keeps on going without considering the physics and technical limitations and beat everything out as Appe fans see. In reality they got slaughtered in the benchmarks check on how M1 got destroyed by puny laptops and puny GPUs. For the M1 Max which is expensive option, it barely matches 3060 mobile BGA GPU.
For SPEC CPU which Anandtech always steers towards is the only place where it did something, also Zen 3+ 6nm designs will destroy that, on top if Zen 4 based BGA trash comes out M2 is toast. With Zen 4 and Raptor Lake, Apple cannot compete on ST and MT performance AND SPEC. And ofc GPUs once the new Ada and RDNA3 BGA junk comes. It's over.
systemBuilder33 - Saturday, June 11, 2022 - link
Intel improvements were amazing given that they had no CPU designers from 2014-2020, they laid off most of them in 2014, instead promoting marketing nerfs into their jobs and into the executive and ceo positions ...Otritus - Tuesday, June 7, 2022 - link
1. Kind of hard to release a product on a node that has been delayed, and currently isn’t producing chips.2. 10% inflation, so the real price increase isn’t as large as people make it out to be. Still a $100 premium, so there is good reason to be concerned. However, Apple doesn’t typically raise prices consecutively, so I would expect M2 to slot into M1, and M3 to replace M2.
3. The performance/watt curve is incredibly significant. Excluding the academic usage to see the inter generational advantages and micro architecture efficiency sweet spot, Apple products do not run at max TDP for extended durations except for certain models. Meaning that curve tells as the real performance differences depending on the chassis the M2 chip is in.
4. M1 had the best GPU in class, and it’s CPU was highly competitive by offering comparable performance in significantly reduce power envelopes. The 4000U class products from AMD are not designed to maximize performance, but deliver good performance with excellent battery life. The M1 chip simply does this better. M2 will not cede ground in this regard.
5. Such claims are lies because Tim Cook is the messiah that can only deliver the truth of Apple supremacy.
6. With how they got burned on the last claim, I think marketing might switch to the massive efficiency advantage the M2 Ultra will have.
Piotrek54321 - Monday, June 6, 2022 - link
Wow, it's 2022 and still not even decode support for AV1.fazalmajid - Monday, June 6, 2022 - link
Apple’s always been in bed with H.264/H.265 and presumably now VVC.nandnandnand - Tuesday, June 7, 2022 - link
Apple joined the Alliance for Open Media as a governing member in January 2018.Jorge Quinonez - Wednesday, June 8, 2022 - link
If Apple can save money using AV1 and increase profits they will do it. They have an obligation to their shareholders. AV1 support will eventually appear on one of their next SOCs.Ashan360 - Friday, June 10, 2022 - link
Apple will continue to double down on whatever codec their iPhones produce when recording video. They will not shift to AV1 if it doesn’t provide substantial performance improvement over h265, as die space is constrained. It speaks volumes that they implemented proRes decode before AV1.StrongDC - Monday, June 6, 2022 - link
"M2 is the first Apple SoC to support the newer LPDDR5 memory standard."This is wrong. M1 Pro/Max/Ultra all use LPDDR5 memory running at 6400MT/s.
Only M1 uses LPDDR4x.
bcortens - Monday, June 6, 2022 - link
Yeah, I did a double take on reading that too… I thought for a second my memory might be playing tricks on me but I went and checked and yep, the M1 Pro/Max/Ultra all use LPDDR5…Ryan Smith - Monday, June 6, 2022 - link
Thanks. That one is on me; I had forgotten about the Pro/Max/Ultra when writing that section up.Shmee - Monday, June 6, 2022 - link
Why is this only for Macs? If it is a fast chip, it might be interesting to put in a motherboard that actually had PCIe slots, m.2 slots, DDR5 DIMM slots, and SATA ports. Put it in along with an RTX 3090, or the next gen graphics cards when they come out. Might make for a nice system, I wonder how it would compare to something based on Alder Lake or Zen 4.Kurosaki - Monday, June 6, 2022 - link
It would kick their butts, as the M1 already dit that.. Needs some software to be able to manage the new hardware though.Personally I believed that Windows for ARM would explode, and that new ARM designs would come up and compete. has not been the case yet. Apple is the CPU king as we speak, and there seems not to be any serious competition to be seen, even on the horizon. Still its more expensive than ever to buy cpus for stationary gaming PCs, that gets beaten by the 10-50W M1 design.
mode_13h - Tuesday, June 7, 2022 - link
> It would kick their butts, as the M1 already dit that.No, go back and check the SPECbench results in the Alder Lake review article. It beat even the M1 Max.
Of course, the M1 Ultra hadn't been released at that point, but the Ultra also has more cores than either Alder Lake or the Ryzen 5950X.
Silver5urfer - Monday, June 6, 2022 - link
LOLApple is the most Anti-Consumer corporation on the planet, they axed 3.5mm jacks, I/O ports only to bring back now, Soldered everything and what makes you think Apple will make an LGA socket with full chipset ? Not even in 100 parallel universes this will be possible.
Also ARM junk will never make it to DIY machines. Because of how awful their Software support is. Mass market means you need the heavyweight support without that ARM cannot sustain. Also on top every single ARM processor out there relies on proprietary junk blobs including the most dev friendly Qcomm with it's CAF. They will never release the important base firmware. Plus more over ARM processors are always suited for lower performance workloads. Please do not bring Graviton 3 into equation. It's again a full custom proprietary silicon with support only offered by AWS. If we want we can talk Altera, Marvell but they are also slowly moving full custom off shelf parts. Fujitsu A64FX is niche.
So ultimately x86-x64 reigns supreme. Zen 4 will eat M2 for breakfast. Just wait until AMD releases it.
lemurbutton - Tuesday, June 7, 2022 - link
Lol at this AMD fanboy. AMD is done. It needs to beat Intel before it can think about Apple.Silver5urfer - Tuesday, June 7, 2022 - link
I do not have any AMD product man.. lol I didn't get Zen 3 because it's crappy IO Die and instability for me, It works great for many and I need OC and tuning which AMD cannot provide. So I got CML, But I can see how AMD is much superior in MT workloads and how their AM4 platform is such a boon for DIY to use and have longevity which Intel doesn't care. Forget this BGA trash Apple technology which is use and throw, you cannot replace even an SSD forget the rest. Heck the MacBooks have KBs with copious amounts of Glue.AMD is taking a huge chunk of Datacenter Marketshare from Intel. AMD gained more than 8% of Server Marketshare. In Mainstream AM4 saturation is massive, which is why AMD decided to not put more money in R&D for X3D refresh for enter Zen 3 stack as Alder Lake is already selling a lot for System Integrators and BGA junk.
Zen 4 will destroy Raptor Lake, you cannot simply get more MT performance from weak x86 E cores. AMD's Zen 4 got a massive boost in clocks as well, Apple has no chance. Not even their M2 Ultra can beat as mentioned in the articles, look at how M1 Ultra gets Slaughtered by 3990WX a 2019 product, same HEDT class. That's where your beloved Apple is in reality. Don't even waste time on BGA M1 and M1 Max as they are already destroyed by lowly Zen 4000 BGA parts and 1650 mobile vs M1, and M1 Max barely scratching 3060 Mobile rofl.
Qasar - Tuesday, June 7, 2022 - link
lemurbutton, hilarious, calling someone a fanboy, when you your self are an apple fanboy, hell, fanatic is more what you should be called.Kangal - Wednesday, June 8, 2022 - link
Qasar, that is hilarious you are calling lemurbutton an apple fanboy, when you yourself are a SilverSurfer fanatic. :PThe fact is x86_64 and ARMv9 are just trivial points. What matters is the end goal.
What things can they compute? Which can compute faster? How much energy does it use?
You usually cannot remove software (or hardware) from the equation. You have to look at both together as the software-hardware characteristics. Just like you can't remove Space when making calculations about Time, they are intertwined as the Space-Time fabric.
Qasar - Wednesday, June 8, 2022 - link
" Qasar, that is hilarious you are calling lemurbutton an apple fanboy, when you yourself are a SilverSurfer fanatic. :Pum yea ok sure, i guess you havent seen any of lemurbuttons posts then, as they ALL praise apple in some way, and bash every thing else in another.
Kangal - Wednesday, June 8, 2022 - link
I hadn't noticed. But calling someone a fanboy (even when true) and ending the comment there achieves nothing. More often than not, drags the convo back.The fact is SilverSurfer doesn't have a coherent argument, when he is comparing x86_64 to ARMv9. And he won't do anything to address the point that I made; you cannot abstract the "software" portion out of the "hardware" for comparison. That's one of the things Steve Jobs got right.
gund8912 - Tuesday, June 7, 2022 - link
I don't think it is possible to reverse the trend of ARM in laptops, it is game over for Intel/AMD in laptops. Huge performance/watt gains compared to X86. ARM works for most of the people using laptops.mode_13h - Tuesday, June 7, 2022 - link
> put in a motherboard that actually had PCIe slots, m.2 slots, DDR5 DIMM slotsWho knows about the M2, but the M1 didn't have much more PCIe lanes than needed for a M.2 slot. Also, didn't have an extra memory controller for external DDR5.
The M1 chips were designed for thin & light laptops, first and foremost. They'll make a CPU for the next Mac Pro that has more PCIe lanes and supports external DRAM.
t.s - Tuesday, June 7, 2022 - link
"They'll make a CPU for the next Mac Pro that has more PCIe lanes and supports external DRAM." Sources?Apple is Apple because their vertical integration and massive savings + walled garden because of that. I think it's oxymoron for them to make a socketed CPU+GPU.
mode_13h - Wednesday, June 8, 2022 - link
> Sources?The current Mac Pro has 64 PCIe lanes. They aren't going to release something with much less than that, which tells us it's not going to use a M1 Max die.
Also, it probably has to support a couple TB of DDR5, which suggests they're going to have to use external DRAM.
No, I'm not a Mac whisperer. Watch name99's posts. He seems to follow them more closely than most. He also has a twitter (under a different name). You can find it if you search for his M1 Explainer.
xol - Monday, June 6, 2022 - link
The 100GB/s main memory bandwidth seems quite modest for CPU+GPU, but Apple has large on chip caches .. they like to make comparison with x64 which makes sense as it is their main competitor..But there's another direct chip comparison that is interesting to make (but unfortunately hard to test easily) - that's with Nvidia's ORIN chips - in fact just their half-size ORIN NX.
ORIN 200GB/s bandwidth(double) ; ~4TFLOPS GPU (close), 8/12 A78 cores @2GHz+ (less, but could be close with 12 cores) ; "Neural" - similar 10-20 TOPS ; AV encode/decode - Nvidia seems better here (?) ; power 15-60W ; transistors (17 billion when 12 cores) ..
It would be an interesting comparison with M1/M2 , even more so since Orin is made on the inferior Samsung 8nm node ..
The comparison would be difficult in practice, but could shine some light on Apple's skill as a chip maker (and not necessarily the 'flawless victory' they can claim against x64)
anonomouse - Monday, June 6, 2022 - link
Orin is a much larger chip (hence the doubled bandwidth). It plays closer to the M1 Pro than the M1/M2 proper.Which may also play into why the AGX Orin module is 900$ (for 1K units) for the mostly-fully-enabled version, $1600 for the full-fat experience.
The Orin NX module variants are probably closer to this thing, conveniently with equal memory bandwidth too, and to get there they chop off/harvest half of the GPU.
That being said, Orin is also made for a different market with different requirements. It'd probably be pretty interesting if Nvidia built a proper Tegra SoC again for actual mobile products and how that might compare to this. Like a Tegra Windows on Arm SoC.
brucethemoose - Monday, June 6, 2022 - link
Yeah, they are in better position than most to do such a thing (including Qualcomm).I think their compute customers would be interested in a datacenter "APU" akin to Intel's Falcon Shores, which they could bin and spin into a M1-Max-esque product.
xol - Tuesday, June 7, 2022 - link
"Orin is a much larger chip" - well it has 17B transistors, vs 16/20B for the M1/M2 .. a larger chip by die size though"It'd probably be pretty interesting if Nvidia built a proper Tegra SoC again for actual mobile products and how that might compare to this." - supposedly Nvidia has designed a Orin variant "T239" for Nintendo, presumably for a Switch 2 (?) .. all I know is it has 1536 Ampere cores halfway between Orin and Orin NX. It seems to be real based based on API leaks and other stuff.. I'm assuming it's another mobile part but I may be wrong.
Definitely far more capable than anything Qualcomm has right now, maybe Nuvia will change that..
iphonebestgamephone - Tuesday, June 7, 2022 - link
Qualcomm claims 1.8 tflops for 8cx, and 8cx gen 3 is claimed to have 60% gpu perf increase. And they are also using the x1 cores in it, looks like they are just behind in multicore for now, compared to this t239.domboy - Tuesday, June 7, 2022 - link
I'd love it if nVidia got back to making Tegra SoCs for Windows on ARM! Last one they did was the Tegra4 back in 2013 I believe. I don't mind the Qualcomm chip in the Surface Pro X, but I suspect nVidia could do better. Especially on the GPU side, as they've been writing Windows drivers for a long time.mode_13h - Tuesday, June 7, 2022 - link
> The 100GB/s main memory bandwidth seems quite modest for CPU+GPUIt's significantly more than any other laptop or desktop APU.
Not more than consoles, though. Even the first-gen PS4 had more.
xol - Tuesday, June 7, 2022 - link
True, but also by comparison it's not much more than the Steam Deck APU at 88GB/s (1.6TFLOP GPU) - I wonder if Apple is still using a variant of PowerVR's tile based deferred rendering to reduce gpu bandwidth requirement ?mode_13h - Tuesday, June 7, 2022 - link
> I wonder if Apple is still using a variant of PowerVR's tile based deferred renderingI think everyone is using tile-based deferred rendering, now. Nvidia switched in Maxwell (GeForce 900-series). AMD started to switch over in Vega (search for "DSBR", which didn't yet work at launch) and presumably refined it in RDNA. And I think Intel switched with Xe.
It could go some ways towards explaining why Infinity Cache was such a huge win for AMD, in RDNA2. One thing TBDR does is improve access coherency, which should increase cache efficiency.
MayaUser - Tuesday, June 7, 2022 - link
The issue outside of Apples garden is that with what is going on in the world and how water and energy become so important...the x86 will come to an limitServers, and gamers soon will draw 500W just from the gpu only. This must go to an end, servers needs buildings cold with A/C and water tubes because how much power and heat produce
Apple made a statement that the world needs this. I hope Microsoft with their mac mini style arm will take a step in the right direction
This nvidia 4090-5090-9090 that will draw significantly more power just for 30-40% increase must to end (Intel cpu included here too). We should not reach 1000W just for the gpu...is outrageous
At least from my Gtx 3070 and M1 ultra, in Maya 3d modelling there are equal in sustain full load, but wall power source says a different story. M1 Ultra system draws in total around 210 W, while my GTX 3070 system over 620W...this is insane
Silver5urfer - Tuesday, June 7, 2022 - link
Are you having issues cooling a sub 700W PC ? If that's the case you should sell the PC and buy a laptop or a Macbook.I do not get this argument at all. Basic PC and on inferior node but have a ton of flexibility compare to an M1 Ultra which is $4000+ with zero user expansion options and say how it's lower power consumption. Specialized workloads on ARM is not a new thing, ARM has IP blocks that are dedicated for that type of workloads.
4090 is just 30% increase over 3090 I presume, where did you see this claim I would like to read the source.
1000W for GPU, well I would like to know which consumer GPU is this. RTX3090Ti ? that's 600W peak limited, and 450W max and its the worst card to buy $2000 for very minor boost over 3090 which is already a small 15% increase over 3080. Or you are talking about the HPC cards from AMD or Nvidia ?
t.s - Tuesday, June 7, 2022 - link
The remark is: Newest CPU/GPU power consumption is crazy high. And the trend is still going up, while we have limited resources. They (Intel, AMD, NVidia etc) should make something that more 'power friendly'.And I agree with him/her.
GeoffreyA - Wednesday, June 8, 2022 - link
Sure, Ryzen and Core are using more power than the M series at present; but there seems to be an unspoken assumption that ARM, Apple, and kin are radically more power efficient, good for the planet, and will take us far into the future, while poor x86 will go up in flames.ISA and microarchitecture does make a difference, but at the end of the day, more performance will take more energy, and it will go on like that till the laws of physics change. It's conceivable that, in principle, there's even a limit to computation.
In any case, if Apple were so eco-friendly, they should just close shop and call it a day; for that surely would use less power.
meacupla - Wednesday, June 8, 2022 - link
The way you are framing this argument is silly, because if you want to know what consumes a lot of resources to run, it's data centers.Google doesn't even want to let anyone know how much fresh water they are consuming in their data centers. Probably because if the public knew, law makers would be forced to do something about how wasteful data centers are.
Jorge Quinonez - Wednesday, June 8, 2022 - link
For those who value energy efficiency, Apple's silicon has no peer. The idea of a kW desktop system in the home has little appeal except to those who are gamers (I'm not a gamer).TerberculosisForgotHisPassword - Tuesday, June 7, 2022 - link
As a parent of a young child, I gave up building Gaming PCs because I do not have the time to play with drivers and settings for hours. Maybe your PCMR experience is different, and we all know half of the joy is fighting the system to make it “better”.Now I console game because I just turn the system on and a minute later I am playing.
I think Apple has some serious opportunities to get me to play games on their systems if they can give a mix of console ease and PC customization.
Also: very possibly, by the time they get there my kid will be old enough to want to build gaming PCs with me. Who knows.
TEAMSWITCHER - Tuesday, June 7, 2022 - link
Apple really needs to make an AppleTV with real gaming chops. Then some game makers will take Apple silicon seriously. For me, I stopped buying games for Macs because they all stopped working at some point. Unlike the PC that supports even many DOS games to this very day.techconc - Wednesday, June 8, 2022 - link
Yeah, agreed. The AppleTV is seriously a missed opportunity. Apple should have 2 versions. Simple streaming only version that maybe plugs into an HDMI port like a stick and a M1/M2 version that's capable of gaming.mode_13h - Thursday, June 9, 2022 - link
Does anyone remember the Apple Pippin?JRThro - Tuesday, June 7, 2022 - link
"4x High Performnace (Avalanche?)"Performnace? Come on!
ddhelmet - Tuesday, June 7, 2022 - link
AV1 decode?SydneyBlue120d - Tuesday, June 7, 2022 - link
No dedicated AV1 decoding.Jorge Quinonez - Wednesday, June 8, 2022 - link
We'll have to wait for dedicated hardware AV1 codec support. I've read it can handle AV1 software decoding. Though, idk how much that affects battery life.SydneyBlue120d - Tuesday, June 7, 2022 - link
Do you think iPhone 14 PRO SOC will be the same M2 or we can expect an ARMv9 ?Doug_S - Wednesday, June 8, 2022 - link
We can expect an A16, which should have a better core as everything we know indicates M2 uses A15 cores.I would not necessarily assume A16 will be ARMv9, and if it is that's mostly irrelevant because v9 doesn't add bring anything to the table even slightly revolutionary.
Jorge Quinonez - Wednesday, June 8, 2022 - link
SVE2 (SIMD) support is the only cool feature of ARMv9. SVE2 can handle up to 2,048 bit length instructions. Total speculation, but, Apple could include SVE2 512-bit SIMD instruction capability on a future M series processor just like Intel's AVX-512.mode_13h - Thursday, June 9, 2022 - link
They already have some matrix-multiply extension, in the M1.https://medium.com/swlh/apples-m1-secret-coprocess...
Ashan360 - Friday, June 10, 2022 - link
A16 cores will likely be more efficient/better IPC than those in M2, but not faster. M2 cores are clocked higher due to the higher power envelope. A16 will also be 5nm process…I don’t expect much of a performance gain in the CPU, but they will no doubt continue to push GPU and neural engines.mobutu - Wednesday, June 8, 2022 - link
"... is also seemingly why Apple has seemingly decided to ..."Ryan Smith - Wednesday, June 8, 2022 - link
Thanks!Ashan360 - Friday, June 10, 2022 - link
It took 18 months to roll out the M1 family of chips. Now that the M2 is starting to roll out, I wonder if they will take about 18 months again. It seems likely. So M3 on a 3 nanometer node at the end of 2023. M3 will likely skip the A16 architecture and jump straight to the A17 architecture. When including the node jump and two architecture generations, M2 to M3 should be a much bigger jump than M1 to M2. Still, the improvements here are nothing to scoff at. M2s biggest areas of improvement are in areas that make the most sense to invest in: GPU and neural engine. Too bad about the 1 external monitor limit, I really think this machine is more than enough for most but a lot of people have become used to having a dual monitor setup. This move begs Apple (or anyone) to make a much cheaper 6k display.mode_13h - Friday, June 10, 2022 - link
> It took 18 months to roll out the M1 family of chips. Now that the M2 is starting to roll out,> I wonder if they will take about 18 months again.
We don't know whether they truly planned all 3 M1 variants from the beginning. Also, I think you should start counting the M1 rollout from the A14, which is the first time they'll have gotten some experience with the cores and could start to see how well they might scale.
I wonder if they'll take a more modular approach, this time. Like, maybe doing just two chips, but with more interconnects?
If the M2 can scale to a 2-chip configuration and the M2+ can also scale to a 2-chip config, then you get the same 4 products that comprised the M1 line, but with approx 2/3rds of the silicon design & validation effort. However, maybe M2+ would scale all the way to a 4-chip config? For single-hop latency, each chip would just need to have 3 links, which seems manageable.
Willx1 - Friday, June 10, 2022 - link
The base model only has 8 gpu cores so I’m guessing we’ll see a 15% advantage over m1 gpu. Though if you need those 2 cores you’ll pay a ridiculous $100 along with $200 for each 256gb storage and 8GB ram upgrade, coming in at a ridiculous $2500 for the top configuration. You might as well purchase a pro model for that price since you get the 16” bae model with 10/16 Soc, 16GB ram and 512 Gb storage or a 14” model with 1TB SSD. So you lose a lot of storage and 8GB ram but gain larger mini led screen and a much faster SOC.systemBuilder33 - Saturday, June 11, 2022 - link
"is comprised of" is not english. please learn english.systemBuilder33 - Saturday, June 11, 2022 - link
"as Apple invested most of their improvements into improving overall energy efficiency" is poor english. please learn english.systemBuilder33 - Saturday, June 11, 2022 - link
"Carried over to M2, and it’s not unreasonable to expect similar gains, though the wildcard factor will be what clockspeeds Apple dials things to" A preposition is an awkward thing to end a sentence with. please learn english.web2dot0 - Saturday, June 11, 2022 - link
Constructive criticism/feedback is part of public discourse. Please learn proper social etiquettemode_13h - Sunday, June 12, 2022 - link
Obviously, these are hastily-written articles. If they're not published in a timely fashion, they're practically worthless. And they can't afford a copy editor, due to ad blockers.Maybe you should ask for a refund.
mode_13h - Sunday, June 12, 2022 - link
"is comprised of" is absolutely correct. Ignoramus.https://en.wiktionary.org/wiki/comprise#English
systemBuilder33 - Saturday, June 11, 2022 - link
I am thinking that the days of Intel + NVidia laptops are DEAD. AMD is shipping 2 Tflops integrated GPU (6xxx series), Apple is shipping 3.6 Tflops integrated GPUs, and Intel can't ship anything new in the GPU area, for 8 full freaking years, nothing, nil, nada, its zenith was iris pro 5200.mode_13h - Sunday, June 12, 2022 - link
If you're going to count Iris Pro, then why are you excluding Tiger Lake's 96 EU Xe iGPU?scottkrk - Monday, June 13, 2022 - link
It is a shame the x86/windows crowd can't be a little more appreciative of Apple's role in the tech industry, maybe in a few years time when the Windows ecosystem follows Apple *successful* transition to ARM based SOCs....Meanwhile, for those worried that Apple hasn't been giving non-mobile gaming enough love, I imagine their experience in designing SOCs for headphones/watches/mobiles/tablets/laptops/desktops could come in quite handy when developing a AR/VR headset?
mode_13h - Monday, June 13, 2022 - link
FWIW, I *do* respect Apple and what they've achieved. I wouldn't exactly use the word "appreciate", but the competitive pressure is certainly a positive aspect.I have no love for x86. Even 5 years ago, I thought the ARM transition would be much further along, by now.
techconc - Monday, June 13, 2022 - link
The industry has a pattern. Apple does something different like removing the floppy, no removable battery, adds working secure biometric security, etc. First, the industry mocks it until they end-up doing the same thing.Going the SoC route is really a no-brainer, especially for laptops, etc. Intel/AMD will have to eventually field something similar to be competitive. Some will complain because they can't add more memory, etc. Most will appreciate the much greater efficiency and battery life.
Anyway, we're just in the early stages of the industry mocking Apple on this approach... they'll follow soon enough. They have to in order to remain competitive.
mode_13h - Tuesday, June 14, 2022 - link
I think the reason Apple was first to deploy in-package memory, at a mass-market scale is due to being vertically integrated. That lets them absorb higher CPU costs more easily than a traditional laptop maker.The other thing is the GPU, which is the main beneficiary of moving DRAM in-package. Until now, laptops wanting a powerful GPU would deploy one on a separate die. AMD and Intel are now both tapping this market for additional silicon. In Apple's case, they had no separate GPU and didn't want to support it. So, they could more easily decide to go all-in on the iGPU approach.
techconc - Thursday, June 16, 2022 - link
Agreed, but I think it’s more than that. Modern workloads are increasingly benefitting from more than just the CPU and even GPU. For good well rounded performance, a modern system needs things like a Secure Enclave, dedicated Neural Engine, matrix multiplication unit, dedicated media blocks for common formats, etc. These dedicated units not only bring great performance, but they also bring great efficiency with their special purpose processing.mode_13h - Friday, June 17, 2022 - link
> dedicated Neural Engine, matrix multiplication unitThose are mostly about AI, except that matrix multiplies could be useful for HPC, if high-precision formats are supported.
And hard-wired AI provides the most benefit in mobile applications, due to the efficiency benefit vs. a CPU or even a GPU. In a desktop with a bigger power budget and often a bigger GPU, you're better of just offloading it to the GPU.
BTW, both ARM and Intel have matrix arithmetic ISA extensions.