I still wouldn't want it in my laptop, I just don't trust ARM to play well with all my software.
But, assuming the GPU has no issues beating the now fairly elderly one in the Nvidia X1 SoC, give me a new Nvidia Shield TV with this kind of power, and I will start tossing money by the bagful.
I know, they specifically made this for Windows laptops, but I already have Windows laptops that more than fulfill all my laptop and laptop-related needs, in every shape and size.
But my Shield Pro and Shield tubes are starting to show their age, and this, this is the kind of power that could keep a new version future-proof for a good long while.
Would it be complete overkill, and way more expensive than what 99% of humanity would be willing to pay for an Android TV box? Of course. But I still want it. Nvidia has been dragging their heels on this, and if they don't want to make their own new SoC, just buy it. Of course, we wouldn't be getting anything as small as the tube with this anymore, but something Roku Ultra sized, at minimum. But who cares? Just give it the same endless support they have given the Shield TVs thus far, and I'll buy. Hell, I'll buy one for every TV in the house.
Software is also my concern, but not compatibility. I have a Snapdragon 8cx Gen 3 system and it's fine for all the software I use, but once the GPU goes above 50% load the whole system crashes and I can't figure out why. No logs, no temperature sensors or voltage sensors I have been able to access. It could be a driver issue, but there's only been one update since launch. It's a shame, as the GPU feels competitive for an iGPU right up until it crashes (e.g. Half-Life 2 at 5120x1440 is no problem).
If it's a driver issue, Qualcomm hasn't released a fix. If it's something else, monitoring software hadn't caught up with the platform when I last checked. I'd love a newer-generation chip, but it has to be better supported.
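One place that sometimes does record GPU trouble is the Windows System event log: display-driver timeout/recovery (TDR) events usually land there even when nothing else gets written. A rough sketch of pulling them with the stock wevtutil tool, assuming the usual Display provider and EventID 4101 (assumptions on my part, not something verified on the 8cx dev kit):

```python
import subprocess

# Query the Windows System log for display-driver TDR (timeout-and-recovery) events.
# EventID 4101 ("Display driver ... stopped responding and has successfully recovered")
# is the usual marker; adjust the ID if this platform logs something different.
cmd = [
    "wevtutil", "qe", "System",
    "/q:*[System[(EventID=4101)]]",  # XPath filter on the System log
    "/f:text",                       # human-readable output
    "/c:20",                         # last 20 matching events
    "/rd:true",                      # newest first
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or "No TDR events logged.")
```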
Good idea, but for one, this is the official Windows on ARM 2023 dev kit, so you'd expect it to be stable and well supported under Windows. For two, since it's ARM, switching OSes isn't anywhere near as easy as on x86.
I too experience graphics (driver) problems with my Windows Dev Kit 2023 ARM mini PC. When playing YouTube videos in Firefox, the driver frequently restarts, resulting in the screen going black or white. In Edge, the Edge window flickers briefly.
In addition, I sporadically get the message "Audio renderer error. Please restart your computer." Windows Sandbox crashes every time I maximize its window (which might be related to the uncommon 3840×2560 resolution of my MateView 28 monitor).
Windows Sandbox performance is barely acceptable and unpredictable. Sometimes even the volume control is extremely slow to open or does not open at all. I assume other virtualization solutions have similar problems, but I did not test this.
MS has not fixed all of these problems in a year and three firmware updates.
This is why I'm rather excited about the rumor of nVidia getting back into the Windows on ARM game. They have much more experience writing Windows graphics drivers than Qualcomm. Not saying the latter is horrible, but Qualcomm doesn't even currently have proper OpenGL support in Windows.
I reckon a new Shield TV refresh will be announced after the release of the next Nintendo Switch follow-up next year, and the Shield TV refresh will probably use a binned SoC from the Switch follow-up as its internals.
Agreed, this won't work well with Windows and typical Windows software. Android and Linux are the obvious choices. It wouldn't take much to make a Linux laptop that runs emulated Windows apps better than x86 laptops do.
I don't think you realize how much of an improvement this is over existing Intel-based solutions. Performance per watt is a very big deal for any mobile device, including laptops. It's not just about peak performance; it's about longer battery life, less heat and less fan noise. Apple's MacBooks have had an embarrassing lead in laptops for several years now. This is exactly what's needed to bring some level of parity to the PC market.
Or so Qualcomm claims. For the entirety of its life cycle, software will be running in compatibility mode, and that's not good news for either performance or power.
I'm sceptical that battery life, heat, and fan noise are worse primarily because of a lack of good chips. It seems to me it's more of a conscious decision by PC manufacturers, or a lack of trying.
I imagine the biggest advantage that Apple has is that they also control the OS, and have better ability to make sure the OS doesn't spend 2 hours on some "maintenance tasks" while running on battery power.
More ARM cores are shipped each year than cores based on any other instruction set. ARM is commonly found in powerful servers these days. The Fujitsu Fugaku supercomputer running on an ARM server chip sat at number 1 on the Top500 for a year or more very recently. Suspicions are irrelevant to actual computer performance.
For some reason, AnandTech is using the non-final slides from the presentation, whereas Ars Technica has the final ones, which show the actual model numbers of the competitor chips Qualcomm is comparing against. This includes the ARM slide, which explicitly states it is the 12-core Oryon at peak power vs the Apple M2 at peak power. Now, that isn't exactly a fair fight: the 12-core Oryon is running at 50 watts, while the M2 is only 8 cores running at 25 watts. A fairer fight is the M2 Pro, also 12 cores (8P+4E), which is closer to 40 watts. And in that comparison, Oryon seems to be losing: it is roughly the same performance for slightly more power, but again, Oryon is using 12 performance cores to match Apple's 8 performance and 4 efficiency cores...
Price has never been a factor when comparing CPU architectures, though, only when talking about final products. Also, it says a lot that in the x86 competition Qualcomm is happy to compare against Intel 12- or 14-core processors, but only an 8-core Apple chip, even though a 12-core Apple SoC also exists. I will say this: in the GPU department, Qualcomm might have a winner. Again, no idea why AnandTech doesn't have these slides, but Qualcomm shows it being 80% faster than AMD's Radeon 780M in the Ryzen 7940HS - that is pretty big. One downside though: it will only have DirectX 12 driver support at launch, no Vulkan support.
The fact that it's shipping with DirectX 12 first is not a downside. That's the hard work getting done first. Qualcomm has loads of experience with Vulkan on Snapdragon processors, so that will come sooner rather than later. But it's refreshing to hear their DX12 driver is done, and that performance comparisons have been made against the industry leader.
However, like all things, we will have to wait to get our hands on it, test it properly, then draw conclusions. It seems the direct comparisons will be against the AMD 7840u and M2 Max.
A different way to look at this is that they only provide a "single" memory controller (depending on how you slice these things up, but the equivalent of the M2's memory controller), rather than the "dual" memory controllers of an M2 Pro.
You can view this as:
- unbalanced for the amount of compute they provide? OR
- the iGPU is M2-class, not M2 Pro-class? OR
- the new norm going forward for the ARM world will be more cores than M2?
It's unclear to me which of these is correct. There are indications (so, yes, very reliable!) that the M3 Pro/Max will have 16 "cores", for which the most obvious assumption is three 4-core P-clusters and one 4-core E-cluster. But another option is two 6-core P-clusters (there's no law that a cluster has to be 4 cores, and I'm unaware of any simulations that suggest, for example, the bandwidth of 4 cores to a shared L2 is high enough that 6 cores sharing that bandwidth would be a bad idea).
Which in turn opens up interesting options for M3. Maybe it gets a single 6P-cluster, and Pro/Max get 2 6P-clusters? Or maybe M3 gets 2 4P-clusters (so it's at least 8+4 cores, if "12" is expected to be the new low-end norm going forward) and Pro/Max get 3 4P-clusters?
"For some reason, AnandTech is using the non-final slides from the presentation"
For what it's worth, Qualcomm silently updated the pre-brief deck multiple times. So the version I had, which was supposedly final and is what I used to file this story Sunday night, was in fact not. I've since updated the images in the article, but I'll have to tweak the text later when I have time.
No problem Ryan! I was just really confused, because I read the AnandTech story first (it is always my first stop!) then Ars Technica second. So then I was like, why did Ryan have to do all of this speculation on what CPU this might be and what is implied here, when it is all laid out in the slides Ars Technica has? I would love for you to go back to this story now and fully update it with some comparisons of what you know of Intel's & Apple's current lineup. Like I said above, at first glance, it doesn't appear as though Oryon is going to reach M2 levels of IPC/efficiency. Though it is of course a league ahead of all other current ARM designs out there.
The extra memory bandwidth from the Pro likely helps a fair bit (just guessing since adding 4P cores seems to increase their score by ~45%) which is part of the reason I'm disappointed Qualcomm stuck with a 128 bit bus maximum. Though it does make sense given the market they're aiming at as it'll be cheaper and they don't really have to compete with Apple.
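For rough context on how much that wider bus buys Apple, here is the back-of-the-envelope comparison, assuming the commonly cited configurations (X Elite at 128-bit LPDDR5X-8533, M2 at 128-bit LPDDR5-6400, M2 Pro at 256-bit LPDDR5-6400):

```python
# Peak theoretical bandwidth = transfer rate (MT/s) * bus width (bits) / 8 / 1000 -> GB/s
def bandwidth_gbps(mts, bus_bits):
    return mts * bus_bits / 8 / 1000

print(f"Snapdragon X Elite (LPDDR5X-8533, 128-bit): {bandwidth_gbps(8533, 128):.1f} GB/s")  # ~136.5
print(f"Apple M2           (LPDDR5-6400, 128-bit):  {bandwidth_gbps(6400, 128):.1f} GB/s")  # ~102.4
print(f"Apple M2 Pro       (LPDDR5-6400, 256-bit):  {bandwidth_gbps(6400, 256):.1f} GB/s")  # ~204.8
```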
"... Oryon seems to be losing: it is roughly the same performance, for slightly more power..." Your calculations are way off. The M2 Pro is only 21% faster than M2 in multicore (not 50% as you claim) despite having 50% more cores (12 vs 8). If Snapdragon X Elite has 50% better multicore performance than M2, then it is still SIGNIFICANTLY AHEAD of M2 Pro.
Geekbench 6 (multi-core):
M2 Pro: 12222 (+21%)
Apple M2: 10094
@lionking80 - The majority of the M2 Pro scores I see are in the ~14000 range, with only a few as low as 12000. Let's go in the middle, though, and say 13000. If Oryon is 50% better than the base M2, that puts it at 15000 in Geekbench 6 multi-core, or about ~15% better than the M2 Pro's 13000 average. That is 15% better while consuming more power, though, and with 12 performance cores vs 8P+4E.
Again, my point wasn't that Oryon isn't faster than M2 Pro. It absolutely is. It is that M2, as a CPU architecture, still seems to be king in the IPC/efficiency. Also, 15% is a close enough gap that may very well evaporate next week when M3 releases.
It is still a VERY good outing though, and puts Qualcomm way ahead of all other ARM designs. And will probably get Apple to actually start making big jumps in performance with their own chips again.
@NextGen_Gamer We are reading the data very differently. There isn't anything in the benchmark numbers for the M2 (of whatever variety) that suggests to me that Apple's performance cores steal the win on efficiency even while being behind on performance.
Admittedly, though, there isn't a lot of useful benchmark data to call upon at this point that illuminates the comparative perf/W and power consumption picture beyond the CPU peak performance scenario that Geekbench focuses on. And Geekbench numbers, by themselves, are hardly satisfactory. Additionally, there is a need to properly confirm Qualcomm's benchmark numbers. Still, the picture will become much clearer in the coming months. I would definitely like to see AnandTech act in computer users' interests and do a thorough performance report on the X Elite.
Yes, the release of the M3 will be interesting. That will figure into an increasingly interesting picture for ARM computing going forward.
Look at the official listings on the Geekbench site:
M2 in the Mac mini - 9742
M2 Pro in the Mac mini - 14251 (46% higher)
If the X1 is 50% faster than the M2, that would put it at 14613... which is effectively the same.
Also, keep in mind that Qualcomm's numbers are typically higher than actual shipping products using their chips. They always show the best-case scenario with some sort of reference device that has no thermal limitations, etc.
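Spelled out, the arithmetic behind those figures (with the 1.5x multiplier being Qualcomm's claim rather than a measurement):

```python
# Project the X Elite's Geekbench 6 multi-core score from Qualcomm's
# "50% faster multi-core than M2" claim, using the listed M2 / M2 Pro scores.
m2, m2_pro = 9742, 14251
x_elite_projected = m2 * 1.5  # 14613

print(f"Projected X Elite:           {x_elite_projected:.0f}")
print(f"M2 Pro uplift over M2:       {(m2_pro / m2 - 1) * 100:.0f}%")                 # ~46%
print(f"Projected X Elite vs M2 Pro: {(x_elite_projected / m2_pro - 1) * 100:+.1f}%")  # ~+2.5%
```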
From the slides, it would appear that multi-threaded Geekbench is around 14500 (M2 x 1.5), which would put it in line with the M2 Max but with higher power consumption. However, the flip side is that it offers 15% better single-threaded performance at 30% less power and has the ability to ramp two cores to high clock speeds.
For a first attempt, this is far better than I expected and will cause serious issues for Intel and AMD unless they pull something out of the bag
Yes, Qualcomm's presentation was intentionally misleading. For example, they compared single core performance to the M2 Max. Why not compare multi-core performance as well? For that matter, the base M2 has the same single core performance but I guess that comparison didn't sound as impressive. Worse, I've seen general news coverage (forget which channel) which parroted that claim and said this new chip is faster than an M2 Max. Well played Qualcomm marketing... it worked on non-technical types.
The comparison made in the keynote was clearly with the M2 MAX - Apple's highest performing chip. Qualcomm claimed the Snapdragon X Elite outperforms that chip (presumably on the Geekbench 6 single-thread benchmark). Furthermore, at ISO performance Qualcomm claimed the Snapdragon chip uses about 30% less power than the Apple chip (at the latter's peak performance).
There is nothing unfair about that comparison. If further testing confirms Qualcomm's claims, then the Oryon core will take the crown as the fastest ARM core and perhaps the fastest CPU core, bar none. And it will be efficient to boot.
The news is even worse for Intel than it is for Apple. Still, Apple somehow managed to alienate the designers of the Oryon core and I'm pretty sure the responsible parties over at Apple would be regretting that now.
I hope this is true across all benchmarks. This will either completely nuke x86 or force Intel/AMD to respond. How do these compare with Phoenix? It's talking about mid-2024, when MTL will be there and probably Zen 5 laptops too. Even Lunar Lake is supposedly coming at the end of 2024 targeting ULV, as is Arrow Lake for higher-TDP laptops. Apple will have M3-based SoCs. Great time to be a consumer; no company can overcharge for its chips, especially in this recessionary market.
Yeah it's going to be wild times having a multi way comparison between Nvidia's 2025 ARM windows chip, Qualcomm here, Apple, and then Intel and AMD as the old pair, rather than the cold old duopoly.
I'll miss x86 for sentimental reasons, but it is long past time for a more efficient architecture to take over. It was long past time a decade ago. That said, having better performance and better power consumption will start the long road to replacement; this is certainly not the end of that road. Most Windows software is still x86, and compatibility software will work for most tasks, but not all.
There's no way x86 goes away. All it'll take is some compatibility tweaks with the proposed x86_64S and dropping non-standard length instructions to make things a real competition. If things look dire, this effort will be expedited.
The only way x86 doesn't go away is if they adapt to the kind of architecture designs that Qualcomm and Apple are going with. Things like unified memory, etc., to improve efficiency. I think AMD is more likely to get there before Intel if it's a contest between the two.
AMD tried to get HSA going as far back as Kaveri, but back then had little clout and since Intel didn't do it the x86 industry didn't follow, and eventually gave up on it. Unified isn't impossible on x86 land, we just need Microsoft, Intel, and AMD to agree to it.
To be honest, AMD is in the best position at the moment. They can develop some Zen 6 cores that are VERY BIG and put these in a big.LITTLE configuration. But instead of using some medium cores based on Zen 4, they can go for ARM.
Their Infinity Fabric allows them to do hybrid computing fairly well. Just imagine a 1+3 budget configuration of Zen 6 + Cortex-A730 for the 5W form factor, but clocking anywhere between 1W-20W on the efficiency curve. Or a more capable 2+4 design for the 10W tablet form, clocking between 2W-40W. Then a 3+5 design for the 15W ultrabook form, clocking between 4W-45W. Then a 4+4 design for the 25W laptop form, clocking between 5W-50W. Stepping up to a 4+8 design for the 40W gaming laptop and mini-tower form. Then scaling up as you go...
It’s a combination of things, not just unified memory. I was simply using that as an example as to where modern SoCs are drawing their efficiency from. Another common efficiency/performance trait is more heterogeneous computing such as dedicated media blocks, encryption, NPU, etc, etc. You may not need that type of a design in a desktop with unlimited power and thermal constraints, but in ANY mobile device with a battery, you most certainly do.
> as these are notoriously memory bound; according to the company, the chip will have enough resources to run a 13 billion parameter Llama 2 model locally.
This is kind of a nothing statement. How is it run? What framework? How fast is it?
Basically that's the same as saying "has access to ~8GB of RAM".
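Rough math behind the ~8 GB figure (a sketch that counts weights only, assuming 4-bit quantization; KV cache and runtime overhead come on top):

```python
# Approximate memory needed just to hold a 13B-parameter model's weights.
params = 13e9
for bits in (16, 8, 4):
    gib = params * bits / 8 / 1024**3
    print(f"{bits:>2}-bit weights: ~{gib:.1f} GiB")
# 16-bit: ~24.2 GiB, 8-bit: ~12.1 GiB, 4-bit: ~6.1 GiB (plus KV cache / activations)
```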
With enough system RAM, it *would* be neat to run a 13B model in the background reasonably fast. You can do that on GPUs or with hybrid inference now, but it will obliterate battery life (and some CPU/GPU perf). I don't know of a way to do this on AMD SoCs at the moment, even though they technically have an AI block.
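For reference, hybrid inference today looks roughly like this with llama.cpp's Python bindings (a sketch: the model file and the layer split are placeholders, and whether anything comparable will target Qualcomm's NPU is an open question):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Split a quantized 13B model between GPU and CPU: n_gpu_layers controls how many
# transformer layers are offloaded, and the remainder runs on the CPU.
llm = Llama(
    model_path="llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path to a 4-bit GGUF file
    n_gpu_layers=20,   # partial offload; 0 = CPU only, -1 = offload everything
    n_ctx=2048,
)
out = llm("Q: How much RAM does a 4-bit 13B model need? A:", max_tokens=64)
print(out["choices"][0]["text"])
```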
['power-hungry tensor block' - is there a guess/number for wattage at top performance (~45 TOPS at INT4), to get some clue about the power requirements? (Thx)]
Leave it to Qualcomm to include a cellular modem in every model, despite almost no one wanting that in a PC. And they will make you pay for it, and when they do price comparisons undoubtedly include a high end plug-in 5G modem in the competitor's config to make their pricing look more reasonable.
Meh, we'll see how it shakes out when it launches. How are Qualcomm's drivers nowadays? I remember reading a dev blog by the Dolphin team years ago, where they rated GPU drivers, and Adreno drivers were rated poorly, with broken functions and poor documentation. Is that still the case?
"Microsoft needs to step up and deliver some major Windows x86-to-ARM translation support for this revolution to occur."
What are you talking about? They can already translate both x86 and x64 apps in Windows 11. The only thing really holding that back is CPU performance (or the lack thereof), which is why people think Apple's Rosetta is so amazing compared to Microsoft's very similar solution.
"which is why people think Apple's Rosetta is so amazing compared to Microsoft's very similar solution."
I've heard that Apple included additional instructions and tweaked other things to make the translation from x86 to Apple Silicon flavor of ARM faster than it would be just using software-only options.
"I've heard that Apple included additional instructions and tweaked other things to make the translation from x86 to Apple Silicon flavor of ARM faster than it would be just using software-only options."
Well, they can do that since they control both the hardware and the software. From what I've read, looking at benchmarks, Rosetta has a similar translation overhead to Microsoft's solution; it just is less noticeable due to the M1/M2 being much faster than what Qualcomm has offered so far.
Yeah, but Microsoft's solution is nowhere near as good as Apple's. Qualcomm has delivered on the hardware this time. Microsoft also needs to deliver with a Rosetta quality emulator both in terms of compatibility and performance. If they can deliver this, the transition is seamless. There has been a fair amount of criticism for Microsoft's half baked attempts so far.
AS even switches its memory model when translating AMD64 (32-bit x86 support was dropped in macOS a while back, so it only needs to do so for the 64-bit ISA, making things a bit tidier), a core advantage.
Oh, absolutely. Apple dropped not only x86 but also 32-bit support. Doing so cost some legacy software support, but it also made for a very clean and efficient emulator.
The transition is not going as seamlessly as the x86 to x64 one. Back then, even if one had an x64 CPU, it was optional to use 64-bit Windows, which was problematic until Vista. And when one did, it was mostly transparent whether the apps were 32- or 64-bit. Credit must be given to AMD for designing x64 in such a way that it was backwards compatible at native performance.
This time, it's one or the other, x86 or ARM, and there are no official ARM CPUs from Intel or AMD. Apple, having both the hardware and software, was able to force the whole transition in one stroke, at the right time. On the "Wintel" side, the problem is vaguely circular. Microsoft needs ARM CPUs from AMD and Intel (both, otherwise it won't work), and AMD and Intel don't have a proper ARM Windows yet, so naturally will put their effort on x86. All these parties, arguably, need to work together or at least silently acquiesce.
If we look back, x64 would not have prevailed if Intel tried to be difficult and went their own way. So something similar must happen here.
All right. I guess I'm out of touch with what's going on in computing these days. I do hope RISC-V wins the day. If x86 must go, we might as well migrate to the best alternative, rather than just the marginally better, ARM. Problem is, Microsoft has already heavily invested in the latter, so if Intel pushes forward with RISC-V, the software side on Windows is going to be a problem.
I'd hazard a guess this is why Microsoft left a new Surface Pro out of its most recent event; they were waiting for this bad boy. Mid-2024 is a ways out, but it'll be awesome to see what kind of devices appear with this chip. I'd love a 5G-capable Windows device that really performs and has the app compatibility to back up my use case on the go. Bonus points if those GPU claims are genuine and I can run some games at decent enough graphics while sipping battery like my wife is able to on her M1 MBP.
Is this going to lead to more Windows laptops with non-upgradable RAM? Will the PC manufacturers follow Apple’s lead and charge through the nose for more RAM and storage at the point of sale? I wouldn’t put it past them.
I think the LPDDR4X bandwidth has the 6 and the 8 swapped (i.e., it should be 68.3, not 86.3). I was confused why 5X had twice the transfer rate but only ~50% more bandwidth.
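Running the numbers with the usual bandwidth formula (assuming a 128-bit bus for both rows of the table) backs that up:

```python
# bandwidth (GB/s) = transfer rate (MT/s) * bus width (bits) / 8 / 1000
print(4266 * 128 / 8 / 1000)  # LPDDR4X-4266: ~68.3 GB/s, not 86.3
print(8533 * 128 / 8 / 1000)  # LPDDR5X-8533: ~136.5 GB/s, i.e. exactly double
```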
I'm more interested in seeing some possible gaming handheld designs based on some variant of this processor, seeing as how Apple isn't targeting anything specifically intended for handheld gaming with the M-series processors. There is gaming on iPhones and iPads, but no beefy cooling solutions like on some actual gaming handhelds.
The biggest issue with ARM SoCs capable of running Windows is that M$ tried to turn them into an Apple walled garden clone, instead of a Personal Computer that happens to have an ARM CPU.
No thank you, we already have one of those and there is nothing that makes a M$ prison more attractive than the fruity cult.
So I sincerely hope these ARM SoCs will be true PCs, capable of running any current OS compiled for ARM.
If people want locked-down hardware, M$ and the rotten fruit company can deliver today, no problem. Making that the only choice is quite simply something that requires regulators handing out more than a slight slap on the wrist a decade later.
I don't think this is true? People have run Linux on the Surface Pro X as part of the linux-surface project, for example. There's been more work put into running Linux on the Thinkpad X13s AFAICT, but I guess Linux users love Thinkpads. Unlike other ARM devices, Microsoft even insists on a sane (UEFI) boot process for Windows on Arm laptops. I believe you can literally go into the BIOS and boot from USB if you want. WoA laptops are not, as far as I can tell, locked down any more than Intel or AMD laptops. My real concern is about non-upgradable memory, but that's not an ARM problem uniquely.
It may just be a holdover from the very first Windows-for-ARM devices they sold.
At the time, I am quite sure that Windows variant was the Windows Home S variant, which could only take software from the M$ store, which I find quite unacceptable on my personal computer.
Microsoft also locked down their mobile phones, and have tried at every point to make Linux very unattractive unless it now runs inside their OS and ensures a M$ tax will be paid for something they didn't create.
If it's better than I fear, I'm glad to hear it. But I'd rather see it proven out, too, so that's why I am raising the point early on.
Non-upgradable memory isn't that much of a problem for me, especially when there is a sound technical reason or benefit for it. The "Lego" 1/2/4/8-size chip approach with matching RAM channels makes a lot of sense, and not adding pads and amplifiers for off-chip DRAM support saves tons of silicon real estate and power.
Crazy prices for what I consider sane RAM amounts and limited SKU availability is another issue, and Apple charging HBM prices for LPDDR RAM delivering GDDR performance is just the type of extortion that keeps spoiling any remaining appetite after looking beyond their walled-garden fences.
I can see that being a technical constraint/optimization: due to the exponential power increase near the top of the CMOS curve, power delivery may just be limited to a certain amount per cluster. It points toward very decoupled power delivery per cluster, and then having to support peak clock rates on neighbouring cores doesn't just require accommodating clocks, but also heat. Two cores running at 4.3 GHz side-by-side or in adjacent clusters just aren't quite the same up close.
As tiny as these chips are, spreading it across the clusters may help reduce the amount of dark silicon otherwise required to sustain that, and seems very much in line with what those Nuvia/Apple engineers have been doing all along.
Then again the difference between 3.8 and 4.3 GHz is so low that I can't work up the least bit of outrage at these numbers.
But I am wary we'll actually see 3.8GHz on all cores at ultrabook power ranges of 10-15 Watts.
There is much less left in terms of clocks on my Ryzen 5800U notebook once all 8 cores spring into sustained action, and for all the 4nm and Nuvia magic, physics is still hard to beat.
But 12 cores at 3.8 GHz in a NUC-alike running a mix of Linux and Windows on a hypervisor with iGPU pass-through sounds really attractive...
So let's make sure M$ keeps their greedy paws from locking down a market segment they are crazy enough to consider their own.
I think you're just naming marketing material without understanding what it is. Hyperthreading doesn't make sense on an ARM chip with a reasonably wide decoder. The short version is this: on Intel-based designs you only have something like a 4-wide decoder, whereas Apple's latest A17 Pro is up to a 9-wide decoder. This is because the ARM (RISC) instruction set is much more predictable in terms of length, etc., while x86/x64 still has CISC-based instructions that are less predictable in length. Hyperthreading is effectively just a hack to recapture some of the efficiency lost to a poor decoder.
No, SMT is nothing more than a hack to make up for a poor decoder. It's also proven to be a security liability in many implementations as well. Don't get me wrong, it's a clever hack and an example of Intel pushing a legacy ISA far beyond what should be possible. But, at the end of the day, it's a hack and not needed in modern chip/ISA designs.
You don't need hyperthreading if you can fill the execution hardware without it, it's a band-aid for that. There was that Chinese designed Kirin SoC that just came out after the export bans with hyperthreading because they reused server cores, but people found it raised power use more than performance.
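As an aside, on Linux you can check (and, given the security issues mentioned above, disable) SMT at runtime through sysfs; a minimal sketch, assuming a kernel new enough to expose the standard SMT control interface:

```python
from pathlib import Path

SMT = Path("/sys/devices/system/cpu/smt")

# "control" reads back on/off/forceoff/notsupported; chips without SMT
# (most ARM designs) typically report it as not supported.
print("SMT control:", (SMT / "control").read_text().strip())
print("SMT active: ", (SMT / "active").read_text().strip())

# Disabling needs root and takes effect immediately:
#   echo off | sudo tee /sys/devices/system/cpu/smt/control
```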
The efficiency looks very good. Assuming that the Core 13900 uses ~2.5x the peak power of the Ryzen 7940, on the CPU side the SD should be on par with Ryzen, which is not too bad.
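To make that reasoning explicit (a sketch with illustrative numbers: the 2.5x figure is my assumption above, and the roughly-70%-less-power-at-matched-performance figure is Qualcomm's own claim against the Intel part):

```python
# Normalize peak CPU power so the Ryzen 7940-class part = 1.0
ryzen_power = 1.0
intel_power = 2.5 * ryzen_power   # assumption: the Core i9 draws ~2.5x at peak
power_saving_vs_intel = 0.70      # Qualcomm's claim at matched peak performance (approximate)

snapdragon_power = (1 - power_saving_vs_intel) * intel_power
print(f"Snapdragon vs Ryzen power at matched performance: ~{snapdragon_power / ryzen_power:.2f}x")
# ~0.75x, i.e. roughly the same ballpark as the Ryzen, as argued above
```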
And the GPU looks good too, although it's easy to cherry-pick one parameter that makes it look better than the competition. An apples-to-apples comparison on GPUs is difficult (more so than on CPUs). But it seems a massive step forward from the Gen 2. Catching up with Apple, finally. I'd want something like this in my phone (I know, this is for laptops - hopefully something similar trickles down to phones).
Yeah, everyone's future products always look better than competitor's currently shipping products. Realistically, this chip will be competing against Intel Meteorlake and Intel's 4nm process. Similarly, it will be competing with Apple's M3 series of chips, not the current M2 series.
Hopefully these ARM chips will make Linux mainstream. There is no reason anyone should still be using Windows in 2023. It's sad gaming companies don't go exclusively Linux, because the business world is ready to drop Windows. Gamers prop up Windows, which needs to end. I have used Windows on ARM and the software compatibility issues are horrible.
The key point is ISO power; this ARM processor will never beat that Intel or AMD BGA chip. They are comparing these within a 50W power cap. Once x86 lets go of that power limit, it will destroy this garbage. And to those who are dreaming about desktop-class performance, look at Tenstorrent's projection of AMD's Zen 5 x86 performance; that will wipe the floor. And once AMD's 2024 CPU debuts, this joke will be DOA, as Qcomm is targeting 2024.
Apple also claimed a lot when they launched the M series, and it got curb-stomped on the day of release. Then their entire GPU PR went out the window once people ran something other than specialized macOS-exclusive workloads. Even the M2 line cannot beat the latest AMD and Intel processors.
Everyone here loves to bring up that BS efficiency argument. You are running a use-and-throw machine; for this you want 14900K or 7950X performance? Not even Apple or any other firm can deliver that. It's simple physics, and you cannot cheat physics.
Next is ARM/x86 translation. Rosetta does it in real time; Microtrash cannot even deliver a Windows 11 as stable as Windows 10 LTSC, and this translation layer will be bugged up the wazoo with no chance that Microsoft will optimize it, let alone make it like Rosetta. Second, ARM is all proprietary junk. Look at Android OS upgrades: for every single damn thing you need the blobs and source modules, which Qcomm or Exynos share; the rest of the OEMs do not, and those devices end up in the junkyard. Even with Qcomm and Exynos the support is limited. The same will be repeated here.
It's an experiment by the big powers and the Bilderberg crowd to try and change the world computing standard. x86 will die, but not now; once x86 dies, the concept of a "Personal Computer" will be dead. Look at phones: do you think this ARM junk will succeed the x86 PC and overtake it? Be careful what you wish for.
For Microsoft, there's Windows 11, which cannot even beat Windows 7 as a stable PC experience despite 10 years of software advancement; or Unreal Engine 5's absolutely horrendous optimization, hyped 100x over and still failing to beat Rockstar's RAGE-based RDR2 or the likes of Crytek's old Crysis 3 in pure graphics quality and optimization.
quiksilvr - Tuesday, October 24, 2023 - link
Glad to see some competition in this space and legit laptop CPUs now and not just slightly overclocked phone chips.garblah - Tuesday, October 24, 2023 - link
The AV1 ENCODING capability on chip is unexpected. I wonder how it will compare to hardware AV1 encoding of the latest gen of desktop GPUs.PurposelyCryptic - Tuesday, October 24, 2023 - link
I still wouldn't want it in my laptop, I just don't trust ARM to play well with all my software.But, assuming the GPU has no issues beating the now fairly elderly one in the Nvidia X1 SoC, give me a new Nvidia Shield TV with this kind of power, and I will start tossing money by the bagful.
I know, they specifically made this for Windows laptops, but I already have Windows laptops that more than fulfill all my laptop and laptop-related needs, in every shape and size.
But my Shield Pro and Shield tubes are starting to show their age, and this, this is the kind of power that could keep a new version future-proof for a good long while.
Would it be complete overkill, and way more expensive than what 99% of humanity would be willing to pay for an Android TV box? Of course. But I still want it. Nvidia has been dragging their heels on this, and if they don't want to make their own new SoC, just buy it. Of course, we wouldn't be getting anything as small as the tube with this anymore, but something Roku Ultra sized, at minimum. But who cares? Just give it the same endless support they have given the Shield TVs thus far, and I'll buy. Hell, I'll buy one for every TV in the house.
CampGareth - Wednesday, October 25, 2023 - link
Software is also my concern but not compatibility. I have a snapdragon 8cx gen3 system and it's fine for all the software I use but once the GPU goes above 50% load the whole system crashes and I can't figure out why. No logs, no temperature sensors or voltage sensors I have been able to access. It could be a driver issue but there's only been one update since launch. It's a shame as the GPU feels competitive for an iGPU up until it crashes (e.g. half life 2 at 5120x1440 is no problem).If it's a driver issue Qualcomm haven't released a fix. If it's something else monitoring software hasn't caught up with the platform when I last checked. I'd love a newer generation chip but it has to be better supported.
Mantion - Thursday, October 26, 2023 - link
My guess is you are using windows. Switch to an Arch based or NIX linux.CampGareth - Thursday, October 26, 2023 - link
Good idea but for one this is the official Windows on ARM 2023 dev kit so you'd expect it to be stable and well supported under Windows. For two since it's ARM switching OS isn't anywhere near as easy as x86.pmeinl - Thursday, October 26, 2023 - link
I too experience graphics (driver) problems with my Windows Dev Kit 2023 ARM Mini-PC. When playing YouTube videos in Firefox the driver frequently restarts, resulting in the screen going black or white. In Edge the Edge Windows flickers shortly.In addition I sporadically I get the message (audio renderer error, please restart your computer“. Windows Sandbox crashes every time when maximizing its Window (might be related to the uncommon resolution 3840×2560 of my MateView 28 monitor).
Windows Sandbox performance is barely acceptable and unpredictable. Sometimes even the volume control is extremely slow to open or does not open at all. I assume other virtualization solutions have similar problems, but did not test this.
MS did not fix all these problems in a year with three firmware updates.
domboy - Friday, October 27, 2023 - link
This is why I'm rather excited about the rumor of nVidia getting back into the Windows on ARM game. They have much more experience writing windows graphics drivers than Qualcomm. Not saying the latter is horrible, but Qualcomm doesn't even currently have proper OpenGL support in Windows.lordlad - Thursday, October 26, 2023 - link
i reckon a new Shield TV refresh will be announce after the release of the Next Nintendo Switch followup next year and the Shield TV refresh will probably uses the binned SOC from the switch followup as its internal.Mantion - Thursday, October 26, 2023 - link
Considering Nvidia plans on making their own desktop chips Not sure why people think a Shield refresh would have a qcom chip.Mantion - Thursday, October 26, 2023 - link
Agreed this won't work well with windows and typical windows software. Android and Linux are the obvious choice. Wouldn't take much to make a linux laptop that runs windows apps emulated better than x86 laptops.techconc - Thursday, October 26, 2023 - link
I don't think you you realize how much of an improvement this is over existing Intel based solutions. Performance per watt is a very big deal for any mobile device, including laptops. It's not just about peak performance, it's about longer battery life, less heat and less fan noise. Apple's MacBooks have had an embarrassing lead in laptops for several years now. This is exactly what's needed to bring some level of parity to the PC market.dotjaz - Sunday, October 29, 2023 - link
Or so Qualcomm claims, for the entirety of its life cycle, software will be in compatibility mode. And that's not good news for either performance or power.shadowjk - Monday, October 30, 2023 - link
I'm sceptical whether battery life, heat, and fan noise is worse primarily because of the lack of availability of good chips. It seems to me it's more of a conscious decision by PC manufacturers, or lack of trying.I imagine the biggest advantage that Apple has is that they also control the OS, and have better ability to make sure the OS doesn't spend 2 hours on some "maintenance tasks" while running on battery power.
ChrisGX - Friday, October 27, 2023 - link
More ARM cores are shipped each year than cores based on any other instruction set. ARM is commonly found in powerful servers these days. The Fujitsu Fugaku supercomputer running on an ARM server chip sat at number 1 on the Top500 for a year or more very recently. Suspicions are irrelevant to actual computer performance.NextGen_Gamer - Tuesday, October 24, 2023 - link
For some reason, AnandTech is using the non-final slides from the presentation, whereas ArsTechnica does have the final ones, that show the actual model numbers of which competitor chips Qualcomm is comparing to. This includes the ARM slide, which explicitly states it is peak power 12-core Oyron vs Apple M2 peak power. Now, that isn't exactly a fair fight: 12-core Oyron is running at 50-Watts, while M2 is only 8-cores and running at 25-watts. A fairer fight is M2 Pro, also 12-cores (8P+4E) that is closer to 40-watts. And in that comparison, Oryon seems to be losing: it is roughly the same performance, for slightly more power, but again, Oryon is using 12 performance cores to match 8 performance & 4 efficiency cores of Apple's...brucethemoose - Tuesday, October 24, 2023 - link
I would bet the M2 Pro is a much more expensive system.NextGen_Gamer - Tuesday, October 24, 2023 - link
Price has never been a factor though when comparing CPU architectures, but only when talking about final products. Also, it says a lot that in the x86 competition, Qualcomm is happy to compare it to Intel 12- or 14-cores processors, but only an 8-core Apple chip - even though a 12-core Apple SoC also exists. I will say this: in the GPU department, Qualcomm might have a winner. Again, no idea why AnandTech doesn't have these slides, but Qualcomm shows it being 80% faster than AMD's Radeon 780M in the Ryzen 7940HS - that is pretty big. One downside though: it will only have DirectX 12 compatibility drivers at launch, no Vulkin support.Kangal - Friday, October 27, 2023 - link
The fact that it's shipping with DirectX v12 first is not a downside. That's the hardwork getting done first. Qualcomm has loads of experience with Vulkan on Snapdragon processors. So that will come sooner than later. But it's refreshing to hear their DX12 is done, and that's performance comparisons have been done against the industry leader.However, like all things, we will have to wait to get our hands on it, test it properly, then draw conclusions. It seems the direct comparisons will be against the AMD 7840u and M2 Max.
name99 - Thursday, October 26, 2023 - link
A different way to look at this is that they only provide a "single" memory controller (depending on how you slice these things up, but the equivalent of the M2's memory controller), rather than than "dual" memory controllers of an M2 Pro.You can view this as
- unbalanced for the amount of compute they provide? OR
- the iGPU is M2-class, not M2 Pro class? OR
- the new norm going forward for the ARM world will be more cores than M2?
It's unclear to me which of these is correct. There are indications (so, yes, very reliable!) that the M3 Pro/Max will have 16 "cores", for which the most obvious assumption is three 4-core P-clusters and one 4-core E-cluster. But another option is two 6-core P-clusters (there's no law that a cluster has to be 4 cores, and I'm unaware of any simulations that suggest, for example, the bandwidth of 4 cores to a shared L2 is high enough that 6 cores sharing that bandwidth would be a bad idea).
Which in turn opens up interesting options for M3.
Maybe it gets a single 6P-cluster, and Pro/Max get 2 6P-clusters? Or maybe M3 gets 2 4P-clusters (so it's at least 8+4 cores, if "12" is expected to be the new low-end norm going forward) and Pro/Max get 3 4P-clusters?
Ryan Smith - Tuesday, October 24, 2023 - link
"For some reason, AnandTech is using the non-final slides from the presentation"For what it's worth, Qualcomm silently updated the pre-brief deck multiple times. So the version I had, which was supposedly final and is what I used to file this story Sunday night, was in fact not. I've since updated the images in the article, but I'll have to tweak the text later when I have time.
NextGen_Gamer - Wednesday, October 25, 2023 - link
No problem Ryan! I was just really confused, because I read the AnandTech story first (it is always my first stop!) then Ars Technica second. So then I was like, why did Ryan have to do all of this speculation on what CPU this might be and what is implied here, when it is all laid out in the slides Ars Technica has? I would love for you to go back to this story now and fully update it with some comparisons of what you know of Intel's & Apple's current lineup. Like I said above, at first glance, it doesn't appear as though Oryon is going to reach M2 levels of IPC/efficiency. Though it is of course a league ahead of all other current ARM designs out there.thestryker - Tuesday, October 24, 2023 - link
The extra memory bandwidth from the Pro likely helps a fair bit (just guessing since adding 4P cores seems to increase their score by ~45%) which is part of the reason I'm disappointed Qualcomm stuck with a 128 bit bus maximum. Though it does make sense given the market they're aiming at as it'll be cheaper and they don't really have to compete with Apple.lionking80 - Wednesday, October 25, 2023 - link
"... Oryon seems to be losing: it is roughly the same performance, for slightly more power..."Your calculations are way off. The M2 Pro is only 21% faster than M2 in multicore (not 50% as you claim) despite having 50% more cores (12 vs 8). If Snapdragon X Elite has 50% better multicore performance than M2, then it is still SIGNIFICANTLY AHEAD of M2 Pro.
Geekbench 6 (multi-core)
M2 Pro 12222 (+21%)
Apple M2 10094
NextGen_Gamer - Wednesday, October 25, 2023 - link
@lionking80 - Majority of the M2 Pro scores I see are in the ~14000 range, with 12000 being only a few in the lowest. Let's go in the middle though, and say 13000. If Oryon is 50% better than base M2, that puts it at 15000 Geekbench 6 Multi-Core. Or, about ~15% better than M2 Pro/13000 average. That is 15% better though for consuming more power, and have 12 performance cores vs 8P+4E.Again, my point wasn't that Oryon isn't faster than M2 Pro. It absolutely is. It is that M2, as a CPU architecture, still seems to be king in the IPC/efficiency. Also, 15% is a close enough gap that may very well evaporate next week when M3 releases.
It is still a VERY good outing though, and puts Qualcomm way ahead of all other ARM designs. And will probably get Apple to actually start making big jumps in performance with their own chips again.
ChrisGX - Saturday, October 28, 2023 - link
@NextGen_Gamer We are reading the data very differently. There isn't anything in the benchmark numbers for the M2 (of whatever variety) that suggest to me that Apple's performance cores steal the win on efficiency even while being behind on performance.Admittedly, though, there isn't a lot of useful benchmark data to call upon at this point that illuminates the comparative Perf/W and power consumption picture beyond the CPU peak performance scenario that Geekbench focuses on. And, Geekbench numbers, by themselves, are hardly satisfactory. Additionally, there is a need to properly confirm Qualcomm's benchmark numbers. Still, the picture will become much clearer in the coming months. I would definitely like to see Anandtech act in computer users interests and do a thorough performance report on the X Elite.
Yes, the release of the M3 will be interesting. That will figure into an increasingly interesting picture for ARM computing going forward.
techconc - Thursday, October 26, 2023 - link
Look at the official listings on the Geekbench site.M2 in the Mac mini - 9742
M2 Pro in Mac mini - 14251 - 46% higher
If the X1 is 50% faster than M2, that would put it at 14613.... which is effectively the same.
Also, keep in mind that Qualcomm's numbers are typically higher than actual shipping products using their chips. They always show the best case scenario with some sort of reference device that has no thermal limitations, etc.
Speedfriend - Thursday, October 26, 2023 - link
From the slides, it would appear that multl thread on Geekbench is around 14500 (M2 x 1.5) which would put it in line with M2 Max with higher power consumption. However,the flip side is that it offers 15% better single-threaded at 30% less power and has the ability to ramp two cores to high clock speeds.For a first attempt, this is far better than I expected and will cause serious issues for Intel and AMD unless they pull something out of the bag
techconc - Thursday, October 26, 2023 - link
Yes, Qualcomm's presentation was intentionally misleading. For example, they compared single core performance to the M2 Max. Why not compare multi-core performance as well? For that matter, the base M2 has the same single core performance but I guess that comparison didn't sound as impressive. Worse, I've seen general news coverage (forget which channel) which parroted that claim and said this new chip is faster than an M2 Max. Well played Qualcomm marketing... it worked on non-technical types.ChrisGX - Thursday, October 26, 2023 - link
The comparison made in the keynote was clearly with the M2 MAX - Apple's highest performing chip. Qualcomm claimed the Snapdragon X Elite outperforms that chip (presumably on the Geekbench 6 single-thread benchmark). Furthermore, at ISO performance Qualcomm claimed the Snapdragon chip uses about 30% less power than the Apple chip (at the latter's peak performance).There is nothing unfair about that comparison. If further testing confirms Qualcomm's claims then the Oryon core will take the crown as the fastest ARM core and perhaps the fastest CPU core bare none. But, it will be efficient to boot.
The news is even worse for Intel than it is for Apple. Still, Apple somehow managed to alienate the designers of the Oryon core and I'm pretty sure the responsible parties over at Apple would be regretting that now.
trivik12 - Tuesday, October 24, 2023 - link
I hope this is true across all benchmarks. This will either completely nuke x86 or force Intel/AMD to respond. How do these compare with phoenix. Its talking about mid 2024 when MTL will be there and probably Zen 5 laptops? Even Lunar Lake is supposedly coming up end of 2024 which is targeting ULV as are Arrow Lake for higher TDP laptops. Apple will have M3 based SOC. Great time to be a consumer. no company can over charge for their chips especially in this recessionary market.brucethemoose - Tuesday, October 24, 2023 - link
Yeah, there are rumors of 256-bit APUs from both camps.tipoo - Tuesday, October 24, 2023 - link
Yeah it's going to be wild times having a multi way comparison between Nvidia's 2025 ARM windows chip, Qualcomm here, Apple, and then Intel and AMD as the old pair, rather than the cold old duopoly.yankeeDDL - Wednesday, October 25, 2023 - link
Indeed. But it looks like a massive step forward for the Snapdragons. That's always a good thing.Sivar - Tuesday, October 24, 2023 - link
I'll miss x86 for sentimental reasons, but it is long past time for a more efficient architecture to take over. It was long past time a decade ago.That said, having better performance and better power consumption will start the long road to replacement; this is certainly not the end of that road. Most Windows software is still x86, and compatibility software will work for most tasks, but not all.
lmcd - Tuesday, October 24, 2023 - link
There's no way x86 goes away. All it'll take is some compatibility tweaks with the proposed x86_64S and dropping non-standard length instructions to make things a real competition. If things look dire, this effort will be expedited.techconc - Thursday, October 26, 2023 - link
The only way x86 doesn't go away is if they adapt to similar architecture designs that Qualcomm and Apple are going. Things like unified memory, etc. to improve efficiency. I think AMD is more likely to get there before Intel if it's a contest between the two.tipoo - Friday, October 27, 2023 - link
AMD tried to get HSA going as far back as Kaveri, but back then had little clout and since Intel didn't do it the x86 industry didn't follow, and eventually gave up on it. Unified isn't impossible on x86 land, we just need Microsoft, Intel, and AMD to agree to it.Kangal - Friday, October 27, 2023 - link
To be honest, AMD is in the best position at the moment. They can develop some Zen6 cores that are VERY BIG. And put these in a big.LITTLE configuration. But instead of using some Medium Cores like based on Zen4, they can go for ARM.Their Infinity Fabric allows them to go hybrid computing fairly well. Just imagine a 1+3 budget configuration of Zen6=Cortex-A730 for the 5W form, but clocking anywhere between 1W-20W on the efficiency curve. Or a more capable 2+4 design for the 10W tablet form, clocking between 2W-40W. Then a 3+5 design for the 15W ultrabook form, clocking between 4W-45W. Then a 4+4 design for the 25W laptop form, clocking between 5W-50W. Stepping up to the 4+8 design for the 40W gaming laptop and mini tower form. Then scaling up as you go....
dotjaz - Sunday, October 29, 2023 - link
Unified memory? What do you think APUs use? Two separate memory?techconc - Monday, October 30, 2023 - link
It’s a combination of things, not just unified memory. I was simply using that as an example as to where modern SoCs are drawing their efficiency from. Another common efficiency/performance trait is more heterogeneous computing such as dedicated media blocks, encryption, NPU, etc, etc. You may not need that type of a design in a desktop with unlimited power and thermal constraints, but in ANY mobile device with a battery, you most certainly do.brucethemoose - Tuesday, October 24, 2023 - link
> as these are notoriously memory bound; according to the company, the chip will have enough resources to run a 13 billion parameter Llama 2 model locally.This is kind of a nothing statement. How is it run? What framework? How fast is it?
Basically that's the same as saying "has access to ~8GB of RAM".
With enough system RAM, it *would* be neat to run a 13B model in the background reasonably fast. You can do that on GPUs or with hybrid inference now, but it will obliterate battery life (and some CPU/GPU perf). I dont know of a way to do this on AMD SoCs at the moment, Even though they technically have an AI block.
back2future - Wednesday, October 25, 2023 - link
[ 'power-hungry tensor block'is there a guess/number for wattage with top performance on ~45TOPS/4bit, for having some clue about power requirements? (Thx) ]
Doug_S - Tuesday, October 24, 2023 - link
Leave it to Qualcomm to include a cellular modem in every model, despite almost no one wanting that in a PC. And they will make you pay for it, and when they do price comparisons undoubtedly include a high end plug-in 5G modem in the competitor's config to make their pricing look more reasonable.tipoo - Tuesday, October 24, 2023 - link
A number of people spend every new product year asking why Apple doesn't include cellular in Macbookslemurbutton - Wednesday, October 25, 2023 - link
Yes, all 5 of them asked.ChrisGX - Saturday, October 28, 2023 - link
@Doug_S The Snapdragon X Elite has been billed as a laptop chip. The rationale for including a 5G modem is perfectly obvious.wrkingclass_hero - Tuesday, October 24, 2023 - link
Meh, we'll see how it shakes out when it launches. How are Qualcomm's drivers nowadays? I remember reading a dev blog by the Dolphin team years ago, where they rated GPU drivers, and Adreno drivers were rated poorly, with broken functions and poor documentation. Is that still the case?iphonebestgamephone - Thursday, October 26, 2023 - link
Mali is better now.tipoo - Tuesday, October 24, 2023 - link
Some impressive claims here. I wonder if the M3 family is enough to keep ahead since it won't be competing with M4 when it launches.Farfolomew - Tuesday, October 24, 2023 - link
Microsoft needs to step up and deliver some major Windows x86-to-ARM translation support for this revolution to occur.domboy - Wednesday, October 25, 2023 - link
"Microsoft needs to step up and deliver some major Windows x86-to-ARM translation support for this revolution to occur."What are you talking about? They can already translate both x86 and x64 apps in Windows 11. The only thing really holding that back is cpu performance (or lack thereof), which is why people think Apple's Rosetta is so amazing compared to Microsoft's very similar solution.
questionlp - Wednesday, October 25, 2023 - link
"which is why people think Apple's Rosetta is so amazing compared to Microsoft's very similar solution."I've heard that Apple included additional instructions and tweaked other things to make the translation from x86 to Apple Silicon flavor of ARM faster than it would be just using software-only options.
domboy - Thursday, October 26, 2023 - link
"I've heard that Apple included additional instructions and tweaked other things to make the translation from x86 to Apple Silicon flavor of ARM faster than it would be just using software-only options."Well, they can do that since they control both the hardware and the software. From what I've read, looking at benchmarks, Rosetta has a similar translation overhead to Microsoft's solution; it just is less noticeable due to the M1/M2 being much faster that what Qualcomm has offered so far.
techconc - Thursday, October 26, 2023 - link
Yeah, but Microsoft's solution is nowhere near as good as Apple's. Qualcomm has delivered on the hardware this time. Microsoft also needs to deliver with a Rosetta quality emulator both in terms of compatibility and performance. If they can deliver this, the transition is seamless. There has been a fair amount of criticism for Microsoft's half baked attempts so far.tipoo - Friday, October 27, 2023 - link
AS even switches its memory model when translating AMD64 (x86 base was dropped in macOS a while back, so it only needs to do so for the ISA extension making things a bit tidier), a core advantagetechconc - Monday, October 30, 2023 - link
Oh, absolutely. Apple not only dropped support for x86 based but also 32 bit. Doing so did drop some legacy software support, but it also made for a very clean and efficient emulator.GeoffreyA - Saturday, October 28, 2023 - link
The transition is not going as seamlessly as the x86 to x64 one. Back then, even if one had an x64 CPU, it was optional to use 64-bit Windows, which was problematic until Vista. And when one did, it was mostly transparent whether the apps were 32- or 64-bit. Credit must be given to AMD for designing x64 in such a way that it was backwards compatible at native performance.This time, it's one or the other, x86 or ARM, and there are no official ARM CPUs from Intel or AMD. Apple, having both the hardware and software, was able to force the whole transition in one stroke, at the right time. On the "Wintel" side, the problem is vaguely circular. Microsoft needs ARM CPUs from AMD and Intel (both, otherwise it won't work), and AMD and Intel don't have a proper ARM Windows yet, so naturally will put their effort on x86. All these parties, arguably, need to work together or at least silently acquiesce.
If we look back, x64 would not have prevailed if Intel tried to be difficult and went their own way. So something similar must happen here.
GeoffreyA - Saturday, October 28, 2023 - link
"naturally will put their effort on x86"Well, truth be told, it's in Intel's benefit to thwart non-x86 designs.
mode_13h - Sunday, October 29, 2023 - link
Intel has been investing in the RISC-V ecosystem for several years now.
GeoffreyA - Monday, October 30, 2023 - link
All right. I guess I'm out of touch with what's going on in computing these days. I do hope RISC-V wins the day. If x86 must go, we might as well migrate to the best alternative, rather than the merely marginally better ARM. Problem is, Microsoft has already heavily invested in the latter, so if Intel pushes forward with RISC-V, the software side on Windows is going to be a problem.
TheinsanegamerN - Thursday, October 26, 2023 - link
The CPUs perform fine; MS's Rosetta competitor performs like arse.
spendable7528 - Tuesday, October 24, 2023 - link
I'd hazard a guess this is why Microsoft left out a new Surface Pro from its most recent event; they were waiting for this bad boy. Mid-2024's a ways out, but it'll be awesome to see what kind of devices appear with this chip. I'd love a 5G-capable Windows device that really performs and has the app compatibility to back up my use case on the go. Bonus points if those GPU claims are genuine and I can run some games at decent enough graphics while sipping battery like my wife is able to on her M1 MBP.
windywoo - Tuesday, October 24, 2023 - link
Is this going to lead to more Windows laptops with non-upgradable RAM? Will the PC manufacturers follow Apple's lead and charge through the nose for more RAM and storage at the point of sale? I wouldn't put it past them.
TheinsanegamerN - Thursday, October 26, 2023 - link
LPDDR5X is the future right now; SO-DIMM DDR5 can't compete on power use or bandwidth.
Unashamed_unoriginal_username_x86 - Tuesday, October 24, 2023 - link
I think the LPDDR4X bandwidth has the 6 and the 8 swapped (i.e. should be 68.3, not 86.3). I was confused why 5X had twice the transfer rate but only ~50% more bandwidth.
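For reference, the arithmetic behind that suspicion, as a quick sketch (the 128-bit bus width is an assumption here; the transfer rates are the standard LPDDR4X-4266 and LPDDR5X-8533 speeds):

```python
# Peak memory bandwidth: GB/s = (MT/s) * (bus width in bits) / 8 / 1000.
# The 128-bit bus width is an assumption for this comparison; adjust as needed.
def peak_bandwidth_gbps(transfer_rate_mts: float, bus_width_bits: int = 128) -> float:
    return transfer_rate_mts * bus_width_bits / 8 / 1000

print(f"LPDDR4X-4266: {peak_bandwidth_gbps(4266):.1f} GB/s")  # ~68.3 GB/s
print(f"LPDDR5X-8533: {peak_bandwidth_gbps(8533):.1f} GB/s")  # ~136.5 GB/s
```

Doubling the transfer rate on the same bus width doubles the bandwidth, so 68.3 GB/s lines up with a ~136 GB/s LPDDR5X figure in a way that 86.3 GB/s would not.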
FWhitTrampoline - Tuesday, October 24, 2023 - link
I'm more interested in seeing some gaming-handheld possibilities for a variant of this processor, seeing as how Apple isn't targeting any designs specifically intended for handheld gaming with its M-series processors. There is gaming on iPhones and iPads, but no beefy cooling solutions like on some actual gaming handhelds.
abufrejoval - Wednesday, October 25, 2023 - link
The biggest issue with ARM SoCs capable of running Windows is that M$ tried to turn them into an Apple walled-garden clone, instead of a Personal Computer that happens to have an ARM CPU. No thank you, we already have one of those, and there is nothing that makes an M$ prison more attractive than the fruity cult.
So I sincerely hope these ARM SoCs will be true PCs, capable of running any current OS compiled for ARM.
If people want locked-down hardware, M$ and the rotten fruit company can deliver today, no problem. Making that the only choice is quite simply something that requires regulators to hand out more than a slight slap on the wrist a decade later.
Lettuce - Wednesday, October 25, 2023 - link
I don't think this is true? People have run Linux on the Surface Pro X as part of the linux-surface project, for example. There's been more work put into running Linux on the Thinkpad X13s AFAICT, but I guess Linux users love Thinkpads. Unlike other ARM devices, Microsoft even insists on a sane (UEFI) boot process for Windows on Arm laptops. I believe you can literally go into the BIOS and boot from USB if you want. WoA laptops are not, as far as I can tell, locked down any more than Intel or AMD laptops. My real concern is about non-upgradable memory, but that's not a problem unique to ARM.
abufrejoval - Wednesday, October 25, 2023 - link
It may just be a holdover from the very first Windows-for-ARM devices they sold. At the time, I am quite sure that Windows variant was the Windows Home S variant, which could only take software from the M$ shop, and which I find quite unacceptable on my personal computer.
Microsoft also locked down their mobile phones, and has tried at every point to make Linux very unattractive, unless it now runs inside their OS and ensures an M$ tax will be paid for something they didn't create.
If it's better than I fear, I'm glad to hear it. But I'd rather see it proven out, too, which is why I'm raising the point early on.
Non-upgradable memory isn't that much of a problem for me, especially when there is a sound technical reason or benefit for it. The "Lego" 1/2/4/8-size chip approach with matching RAM channels makes a lot of sense, and not adding pads and amplifiers for off-chip DRAM support saves tons of silicon real estate and power.
Crazy prices on what I consider sane RAM amounts, and limited SKU availability, are another issue, and Apple charging HBM prices for LPDDR RAM delivering GDDR performance is just the type of extortion that keeps spoiling any remaining appetite after looking beyond their walled-garden fences.
techconc - Thursday, October 26, 2023 - link
You do know that you can dual-boot an ARM-based MacBook into Asahi Linux, right?
abufrejoval - Wednesday, October 25, 2023 - link
On the single peak clock per cluster issue: I can see that being a technical constraint/optimization. Due to the exponential power increase near the top of the CMOS curve, power delivery may simply be limited to a certain amount per cluster. That points toward very decoupled power delivery per cluster, and supporting peak clock rates on neighbouring cores isn't just about accommodating the clocks but also the heat. Two cores running at 4.3 GHz side by side, or in adjacent clusters, just aren't quite the same up close.
As tiny as these chips are, spreading the peak-clock load across the clusters may help reduce the amount of dark silicon otherwise required to sustain it, and it seems very much in line with what those Nuvia/Apple engineers have been doing all along.
Then again, the difference between 3.8 and 4.3 GHz is so small that I can't work up the least bit of outrage at these numbers.
But I am skeptical that we'll actually see 3.8 GHz on all cores at ultrabook power ranges of 10-15 Watts.
There is much less left in terms of clocks on my Ryzen 5800U notebook once all 8 cores spring into sustained action, and for all the 4nm and Nuvia magic, physics is still hard to beat.
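For a rough sense of why that last 0.5 GHz is disproportionately expensive, here is a toy calculation using the first-order CMOS dynamic power relation P ∝ C·V²·f; the voltage points are illustrative placeholders, not measured values for this SoC:

```python
# Toy model of CMOS dynamic power: P is proportional to C * V^2 * f.
# The voltages below are made-up placeholders; the shape of the curve
# (voltage rising with frequency) is the point, not the exact numbers.
def relative_power(freq_ghz: float, volts: float) -> float:
    return volts ** 2 * freq_ghz  # the constant C cancels out in ratios

p_base  = relative_power(3.8, 0.90)  # assumed all-core operating point
p_boost = relative_power(4.3, 1.00)  # assumed dual-core boost point

print(f"4.3 GHz costs ~{p_boost / p_base:.2f}x the per-core power of 3.8 GHz")
# With these placeholder voltages that works out to ~1.40x per core, which is
# why boosting two cores side by side is a much bigger ask than boosting one.
```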
But 12 cores at 3.8 GHz on a NUC-alike running a mix of Linux and Windows on a hypervisor with iGPU pass-through sounds really attractive...
So let's make sure M$ keeps their greedy paws from locking down a market segment they are crazy enough to consider their own.
[email protected] - Wednesday, October 25, 2023 - link
Memory, NPU and GPU: nice. No M.2? No hyperthreading? No power saving. Compare this to the latest AMD SoC?
techconc - Thursday, October 26, 2023 - link
I think you're just naming marketing material without understanding what it is. Hyperthreading doesn't make sense on an ARM chip with a reasonably wide decoder. The short version is this: on Intel-based designs you only have something like a 4-wide decoder, whereas Apple's latest A17 Pro is up to a 9-wide decoder. This is because the ARM (RISC) instruction set is much more predictable in terms of length, etc. x86/x64 still has CISC-based instructions that are less predictable in length. Hyperthreading is effectively just a hack to recapture some of the lost efficiency that comes with a poor decoder.
mode_13h - Sunday, October 29, 2023 - link
SMT is about much more than simply circumventing decoder bottlenecks. See also: GPUs.
techconc - Monday, October 30, 2023 - link
No, SMT is not more than a hack to make up for a poor decoder. It's also proven to be a security liability in many implementations. Don't get me wrong, it's a clever hack and an example of Intel pushing a legacy ISA far beyond what should be possible. But, at the end of the day, it's a hack and not needed in modern chip/ISA designs.
tipoo - Friday, October 27, 2023 - link
You don't need hyperthreading if you can fill the execution hardware without it; it's a band-aid for that. There was that Chinese-designed Kirin SoC that just came out after the export bans with hyperthreading, because they reused server cores, but people found it raised power use more than performance.
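To put some numbers on the "filling the execution hardware" argument, here is a deliberately crude toy model (every figure in it is made up for illustration): a core with a fixed issue width runs one or two instruction streams that stall some fraction of the time, and SMT pays off only to the extent that the single stream leaves issue slots empty.

```python
import random

# Crude toy model (all numbers are made up): each cycle a thread is either
# "ready" or stalled on a miss. The core can issue up to ISSUE_WIDTH micro-ops
# per cycle from whatever the ready threads supply; SMT only helps when one
# thread's stalls leave issue slots that another thread can fill.
ISSUE_WIDTH = 8        # assumed issue width of a wide core
READY_IPC = 6          # assumed micro-ops a ready thread can supply per cycle
STALL_PROB = 0.30      # assumed fraction of cycles a thread spends stalled
CYCLES = 100_000

def throughput(num_threads: int) -> float:
    rng = random.Random(42)
    issued = 0
    for _ in range(CYCLES):
        available = sum(READY_IPC for _ in range(num_threads) if rng.random() > STALL_PROB)
        issued += min(ISSUE_WIDTH, available)
    return issued / CYCLES

for threads in (1, 2):
    print(f"{threads} thread(s): ~{throughput(threads):.2f} uops/cycle")
```

With these placeholder numbers the second thread recovers a fair chunk of throughput, and the gain shrinks as a single thread gets better at keeping the core busy on its own, which is roughly the trade-off both sides of this exchange are describing.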
satai - Wednesday, October 25, 2023 - link
Does it Linux?
yankeeDDL - Wednesday, October 25, 2023 - link
The efficiency looks very good. Assuming that the Core 13900 uses ~2.5x the peak power of the Ryzen 7940, on the CPU side the Snapdragon should be on par with the Ryzen, which is not too bad.
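Spelled out, that back-of-the-envelope reasoning looks like the sketch below (all inputs are placeholders: equal benchmark scores are assumed, the 2.5x figure comes from the comment above, and the 0.4 is a hypothetical fraction of the Intel chip's power that the Snapdragon would need to match its score):

```python
# Back-of-the-envelope perf-per-watt comparison. With equal scores assumed,
# perf/W scales as 1 / power, so only the power ratios matter.
def perf_per_watt_vs_ryzen(intel_power_vs_ryzen: float,
                           snapdragon_power_vs_intel: float) -> float:
    snapdragon_power_vs_ryzen = intel_power_vs_ryzen * snapdragon_power_vs_intel
    return 1.0 / snapdragon_power_vs_ryzen

# Hypothetical example: the Core part draws ~2.5x the Ryzen's power, and the
# Snapdragon matches the Core part's score at ~40% of the Core part's power.
print(f"~{perf_per_watt_vs_ryzen(2.5, 0.40):.2f}x the Ryzen's perf/W")  # ~1.00x, i.e. on par
```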
And the GPU looks good too, although it's easy to cherry-pick one parameter that can make it look better than the competition. An apples-to-apples comparison on GPUs is difficult (more so than on CPUs). But it seems a massive step forward from the Gen 2. Catching up with Apple, finally.
I'd want something like this in my phone (I know, this is for laptops - hopefully, something similar trickles to phones).
shabby - Wednesday, October 25, 2023 - link
Lol at those graphs, I'm ready to be disappointed...
techconc - Thursday, October 26, 2023 - link
Yeah, everyone's future products always look better than competitors' currently shipping products. Realistically, this chip will be competing against Intel's Meteor Lake on the Intel 4 process. Similarly, it will be competing with Apple's M3 series of chips, not the current M2 series.
Nazoshadow - Wednesday, October 25, 2023 - link
Hey Ryan S - I think there is a typo in your table... shouldn't the # of TOPS for the Snapdragon X Elite say "45 TOPS" instead of "46 TOPS"?
Mantion - Thursday, October 26, 2023 - link
Hopefully these ARM chips will make Linux mainstream. There is no reason anyone should still be using Windows in 2023. It's sad gaming companies don't go exclusively Linux, because the business world is ready to drop Windows. Gamers prop up Windows, which needs to end. I have used Windows on ARM, and the software compatibility issues are horrible.
Silver5urfer - Monday, October 30, 2023 - link
The key point is iso-power: this ARM processor will never beat that Intel or AMD BGA chip. They are comparing these within a 50 W power cap. Once x86 lets go of that power limit, it will destroy this garbage. And to those who are dreaming about desktop-class performance, look at Tenstorrent's projection of AMD's Zen 5 x86 performance; that will wipe the floor. And once AM4's 2024 CPU debuts, this joke will be DOA, as Qcomm is targeting 2024.

Apple also claimed a lot when they launched the M series, and it got curbstomped on the day of release. Then their entire GPU PR went out the window once the tests were not specialized macOS-exclusive workloads. Even the M2 line cannot beat the latest AMD and Intel processors.
Everyone here loves to bring up that BS efficiency argument. You are running a use-and-throw machine; from this you want 14900K or 7950X performance? Not even Apple or any other firm can deliver that. It's simple physics, and you cannot cheat physics.
Next is ARM/x86 translation. Rosetta does it in real time; Microtrash cannot even deliver a Windows 11 as stable as Windows 10 LTSC. This translation layer will be bugged up the wazoo, and there's no chance that Microsoft will optimize it, let alone make it anything like Rosetta. Second, ARM is all proprietary junk. Look at Android OS upgrades: for every single damn thing you need the blobs and source modules, which Qcomm or Exynos shares; the rest of the OEMs do not, and their devices end up in the junkyard. Even with Qcomm and Exynos the support is limited. The same will be repeated here.
It's an experiment by the big powers and the Bilderberg crowd to try and change the world's computing standard. x86 will die, but not now; once x86 dies, the concept of a "Personal Computer" will be dead. Look at phones: do you think this ARM junk will succeed the x86 PC and overtake it? Be careful what you wish for.
As for Microsoft, Windows 11 cannot even beat Windows 7 as the foundation of a stable PC, despite 10 years of software advancement. Or take Unreal Engine 5's absolutely horrendous optimization, hyped with 100x the PR yet still failing to beat Rockstar's RAGE-based RDR2 or the likes of Crytek's old Crysis 3 in pure graphics quality and optimization.