52 Comments
hnlog - Monday, October 30, 2023 - link
Hello Andrei
joelypolly - Monday, October 30, 2023 - link
Kinda weird that they compare the 4 high performance core M2 instead of the M2 Pro
name99 - Monday, October 30, 2023 - link
Many things will change over the next few years...
Obviously the tribal lines will change, as the only interesting competition is Apple vs QC.
But the other interesting thing is that both Apple AND QC are only cost-constrained by reality (ie silicon area), not by artificial market segmentation – neither of them is beholden to an Intel or an ARM Ltd that ramps up prices for more cores in a non-linear way.
We've obviously seen this right away in this chip, where QC seems to feel they can hit lower than MacBook Pro prices while giving a lot more cores. (The issue of desirability is somewhat different. Apple will have longer battery life for sure, which is all-important for many people; QC still have to design their matching E-core – AND get MS to build the correct OS changes for optimal use...)
But the point is: Intel and AMD, and Apple so far, have felt that they could "get away with" providing ~8 performance cores in this space (and realistically, 8 at the high end, 6 for most people – most people buy the M2 Pro, with 6 rather than 8 P-cores). I don't know if that can last. If one company (ie QC) is willing to sell cores at essentially their silicon area cost, not artificially higher, the competitors can't drift TOO far from that...
lmcd - Monday, October 30, 2023 - link
On the flipside, how much does it really matter? If the larger Qualcomm cores are eating into the chip's thermal headroom and/or don't scale that well, then maybe 4/6 big cores is a better split?
Intel currently sells a product with 2P8E in their U series, and most normal users don't seem to have really noticed.
ifThenError - Thursday, November 2, 2023 - link
↑ This!
While I can totally understand the enthusiasm about the sneak preview of this really interesting SoC, these cores obviously don't scale well within the shown envelope. Did you observe the constrained vs. unconstrained figures above?
If you give at least a bare minimum of credibility to the shown benchmarks and to whatever QC understands as "TDP", the unconstrained setup seems to be running WAY out of the chip's sweet spot. I mean come on! The MT benchmarks achieve improvements of only about 5-25% at the price of around 250% more "power".
We certainly need more detailed and independent reviews, but from the shown figures the constrained setup looks much more promising. And for competitors this might mean that chips with a clever mix of a limited number of performance cores and enough low-power cores could be the better choice for many use cases.
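For what it's worth, the arithmetic is easy to check. A minimal Python sketch using the quoted ballpark figures (normalized scores, illustrative numbers rather than measurements):
```python
# Perf-per-watt sanity check of the quoted figures: ~5-25% more MT
# performance for ~250% more power (23W -> 80W). Normalized scores,
# illustrative numbers only, not measured data.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

constrained_score, constrained_watts = 100.0, 23.0   # 23W result = 100
unconstrained_watts = 80.0                           # ~3.5x the power

for gain in (0.05, 0.25):
    unconstrained_score = constrained_score * (1 + gain)
    ratio = (perf_per_watt(unconstrained_score, unconstrained_watts)
             / perf_per_watt(constrained_score, constrained_watts))
    print(f"+{gain:.0%} at 80W -> {ratio:.2f}x the efficiency of 23W")

# Prints ~0.30x and ~0.36x: the unconstrained setup delivers roughly
# a third of the constrained setup's perf-per-watt.
```
By this rough measure, the 80W configuration trades away about two-thirds of the chip's efficiency for its last 5-25% of performance.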
KPOM - Tuesday, October 31, 2023 - link
And yet today Apple released M3 chips with fewer performance cores in the base configurations.
AmaraSheikh - Saturday, December 23, 2023 - link
nice post
tipoo - Monday, October 30, 2023 - link
So it seems like:
ST is level with or ahead of any M2, as they're all similar
MT is like M2 Max more than Pro
But the GPU is just ahead of base M2, or maybe closer to M3 coming today
So they spent the die size on meeting that multicore. I guess it's just different choices: they paired M2 Max-like multicore performance with an M2/M3-base-like GPU. Depending on average and idle power use, this seems pretty promising for now, and there's also the question of Windows on ARM support and whether you're running a bunch of x86 emulation.
name99 - Monday, October 30, 2023 - link
M2 Pro and Max are essentially IDENTICAL as far as CPU perf is concerned. Not EXACTLY (because Max has more memory bandwidth and SLC) but essentially the same.
If you're building a theory based on the supposed CPU performance differences between M2 Pro and Max you're wasting your, and everyone else's, time.
boozed - Monday, October 30, 2023 - link
You can just say they're very similar, you know.
tipoo - Tuesday, October 31, 2023 - link
But that would be normal 🤣
valuearb - Thursday, November 2, 2023 - link
The M2s started volume sales in mid 2022, the X-Elite will ship in volume in mid 2024. The M3 Pro and Max are starting volume shipments this month, and they are faster in both single and multicore benchmarks (5-10% in single core, Max up to 40% faster in multicore). So what does this mean?
I don't know. Maybe the X-Elite is on a larger process and will catch up on 3 nm. Maybe it's cheaper. The most important takeaway is that Apple has been showing what can be done with ARM on the desktop (laptop) for three years and now someone else has taken up the baton for Windows PCs. Surface RT is about to become Microsoft's best hardware product, and Intel should be very concerned.
Hyper72 - Tuesday, November 7, 2023 - link
Ye, the comparison with the Apple M-series is mostly academic. QC is trying to break into the PC market and grab a slice of the Intel/AMD cake.
StormyParis - Monday, October 30, 2023 - link
Very interesting, thank you.
lemurbutton - Monday, October 30, 2023 - link
The CPU uses as much as 80w of power? I'm pretty sure the M2 Max will only range from 35-40w at most for the CPU.
tipoo - Monday, October 30, 2023 - link
It's just a TDP. That doesn't speak to what it's doing in an average mixed-load or even a full-load scenario, it's just what the limit is.
olorinh - Monday, October 30, 2023 - link
TDP = Thermal Design Power. It's given to OEMs/ODMs as an indication of how to design the thermals of their products.
For example, most AMD Ryzen "U" series CPUs have a TDP of 15w; this means the laptop OEM/ODM should ensure their heatsink can dissipate at least 15w of heat while keeping the hotspot under X degrees. This does NOT mean that the power limit or consumption of the chip will be 15w, and indeed, "U" series can consume 60w+ under turbo conditions.
80w TDP is abnormally high, with even Intel pulling back at 55w TDP guidelines for their most powerful chips. Who knows what the actual power consumption will be under general workloads, though.
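To make the TDP-vs-consumption distinction concrete, here is a toy model loosely patterned on Intel's PL1/PL2/tau turbo scheme. Every number in it is an illustrative assumption, not a spec for any real chip:
```python
# Toy model of why TDP != power consumption, loosely patterned on
# Intel's PL1/PL2/tau turbo scheme. All numbers are illustrative
# assumptions, not the specs of any real chip.

def average_power_w(tdp_w: float, turbo_w: float,
                    turbo_s: float, total_s: float) -> float:
    """Average draw for a load that turbos, then settles to TDP."""
    turbo_time = min(turbo_s, total_s)
    energy_j = turbo_w * turbo_time + tdp_w * (total_s - turbo_time)
    return energy_j / total_s

# A hypothetical 15W-TDP "U" chip allowed to pull 60W for 28 seconds:
print(average_power_w(15, 60, 28, total_s=30))    # short burst: 57W avg
print(average_power_w(15, 60, 28, total_s=600))   # long render: ~17W avg
```
The same silicon averages ~57W over a 30-second burst but only ~17W over a 10-minute render, which is why short benchmarks and marketing TDPs can tell such different stories.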
James5mith - Tuesday, October 31, 2023 - link
I would also guess that Qualcomm's TDPs are actual nominal TDPs, and not Intel's fictional version these days. A TDP of 55w from Intel means nothing if the SoC can pull 150w+ some of the time. If Qualcomm only ever hits a max of 80w draw, it's way ahead of Intel and even AMD.
Ryan Smith - Monday, October 30, 2023 - link
Specifically, this is device TDP. So 80W is for the whole kit and caboodle: storage, screen, you name it. It is not just the CPU TDP, which is the more typical metric here.
dudedud - Monday, October 30, 2023 - link
I don't think the 23W unit can manage everything with that little amount...
bigmig - Tuesday, October 31, 2023 - link
Almost surely not. It sounds like they're trying to conflate TDP and total device power consumption, which are not the same thing. Technically what they're saying is correct... it's just deceptive (and likely intentionally so).
bigmig - Tuesday, October 31, 2023 - link
That sounds like a distinction without a difference. After the CPU/GPU, the most power hungry component is the screen. But the screen has tremendous surface area and is set away from the rest of the laptop, allowing it to be passively cooled without interfering with the CPU's cooling. Bottom line: if you design for a TDP of 23W, it won't matter whether your screen draws 1 watt or 30 watts; you're still designing to manage 23W of continuous heat dissipation from the CPU.
meacupla - Monday, October 30, 2023 - link
How many cores does X Elite have in total?
NextGen_Gamer - Monday, October 30, 2023 - link
All of them (DTR and thin-and-light laptops) have the same CPU/GPU config: 12 performance Oryon cores (no efficiency cores), plus an undisclosed number of Qualcomm's latest GPU cores.
tipoo - Monday, October 30, 2023 - link
Latest but no RT? I'm confused why SD8 Gen 2 with Adreno had that a year ago but this doesn't
NextGen_Gamer - Monday, October 30, 2023 - link
It does have RT - it will be DirectX 12 Ultimate compatible when released. However, no Vulkan support is planned (so, no Vulkan games like DOOM will run at all on it).
meacupla - Monday, October 30, 2023 - link
Ohhh, that sounds excellent. No wonder they can pull these benchmark numbers
DigitalFreak - Monday, October 30, 2023 - link
So… These chips are for running Windows for ARM, but there were no benchmarks for Win32 apps under emulation? Despite how hard Microsoft has tried to push UWP apps, they are but a small fraction of Windows apps. Seems like they already know their emulation performance is going to suck.
In 6 months, they’ll be up against the M3 and Intel’s 14th gen mobile CPUs, which, I believe, are a new architecture. Plus whatever AMD will have.
JakoDel - Monday, October 30, 2023 - link
what's the correlation between UWP apps and ARM? there are already a bunch of programs available compiled for ARM (the first that come to mind are 7-zip and qbittorrent) which aren't UWP. the latter is dead and buried since at least 2021... emulation performance will obviously be worse than an Apple SoC with the same perf, but this one has basically the power of an M2 Max, so no, in absolute terms it won't suck at all.
also why are you dismissing this so fast? developers will start supporting ARM on windows too once they see a decent platform, they're already doing so for macOS. this is their first gen based on the new architecture, aren't you going too fast?
melgross - Monday, October 30, 2023 - link
There’s a difference though. Apple pushes developers by giving them a timeline: you have two years. Then one year, then three months, and they stick by it. Microsoft doesn’t do that; they’re too afraid of defections from Windows if they do, so they keep both in parallel for quite a few years. So developers who are lazy (many of them) will do nothing until sales of the new technology are significant.
name99 - Monday, October 30, 2023 - link
You are correct about the Apple vs MS timelines.
On the other hand
(a) most people only use Apple apps. I imagine much the same is true for MS.
(b) the companies that, in the past, have been slow to match Apple transitions basically lost massive amounts of goodwill and Apple market share. Adobe was hit hard by their slowness, Intuit lost the Quicken market and has never recovered.
It might be good for the Windows ecosystem as a whole to clear out zombie companies that last innovated fifteen years ago and are unwilling/unable to adapt to ARM, and let fresh blood take their place.
domboy - Tuesday, October 31, 2023 - link
Microsoft isn't ditching x86... there is no timeline to give. They are adding an additional architecture, so they have to have a totally different approach.
Could they be harder on developers? Maybe. Especially with someone like Epic that intentionally put in an architecture check to block their Unreal engine from running on ARM machines, even via emulation. That annoys the heck out of me as it was totally uncalled for, unless they're planning to actually support ARM64 natively in the near future.
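Nothing public says how Epic's check works, but for illustration, one way a Windows program could gate itself on architecture is the IsWow64Process2 API (available since Windows 10 1709), which reports both the process's machine type and the host's. A hypothetical ctypes sketch, not Epic's actual code:
```python
# Hypothetical sketch of how a Windows app *could* gate itself on CPU
# architecture via IsWow64Process2 (Windows 10 1709+). An illustration
# only, not Epic's actual check.
import ctypes
from ctypes import wintypes

IMAGE_FILE_MACHINE_UNKNOWN = 0x0000   # "not WOW64": process is native
IMAGE_FILE_MACHINE_ARM64 = 0xAA64

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.IsWow64Process2.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(wintypes.USHORT),   # machine the process runs as
    ctypes.POINTER(wintypes.USHORT),   # machine of the host OS
]
kernel32.IsWow64Process2.restype = wintypes.BOOL

process_machine = wintypes.USHORT()
native_machine = wintypes.USHORT()
if kernel32.IsWow64Process2(kernel32.GetCurrentProcess(),
                            ctypes.byref(process_machine),
                            ctypes.byref(native_machine)):
    if native_machine.value == IMAGE_FILE_MACHINE_ARM64:
        emulated = process_machine.value != IMAGE_FILE_MACHINE_UNKNOWN
        print(f"ARM64 host; running under emulation: {emulated}")
```
Used benignly, the same call lets an installer offer a native ARM64 build instead of refusing to run.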
lmcd - Monday, October 30, 2023 - link
The correlation is that building, packaging, and distributing for ARM was far easier using UWP than any other stack until quite recently.
tipoo - Monday, October 30, 2023 - link
This is Control in emulation, coming close to the 7840HS, and the GPU staying a bit ahead of the M2 like normal seems like a good sign for emulation performance:
https://twitter.com/MehNitesh2/status/171900890208...
SarahKerrigan - Monday, October 30, 2023 - link
Kinda weird that you're conflating Win/ARM native apps and UWP like it's 2018, ngl
NextGen_Gamer - Monday, October 30, 2023 - link
For some comparison, here are some benchmarks from CNET's latest article with iPhone 15 Pro Max on iOS 17.1.
Geekbench 6 Single-Core: 2,961
Geekbench 6 Multi-Core: 7,385
3DMark Wildlife Extreme: 25.1 fps
Again, that is A17 Pro, which has 2 performance + 4 efficiency cores, and 6 GPU cores.
Silver5urfer - Monday, October 30, 2023 - link
GB is a joke of a benchmark. Real benchmarks are Cinebench R20, Blender workloads, and POVray for a processor.
name99 - Monday, October 30, 2023 - link
In other words, "real" benchmarks are MT benchmarks?
And specifically the MT benchmarks that make x86 look good?
OK then.
Silver5urfer - Monday, October 30, 2023 - link
I would like to know the transistor count on these. They are made on TSMC's 4nm process, while the existing AMD products are on 5nm and Intel is on a 7nm design. So they are already using a brand new, far more expensive node than these processors.
Apple will be releasing the M3 processor today on TSMC's 3nm node, so this will be beaten. But again, ARM processors fighting against x86 need a ton of transistor density to achieve that, and they do not have the SMT advantage, which is why Apple M series processors are far more expensive to manufacture.
x86 Hyperthreading / SMT eats these processors. And AMD's Bergamo is the counter for ARM in HPC space for the high density without SMT design, which means ARM is already dead. Only Graviton designs are offered in higher volume because Amazon wants everything in-house.
Still, the newer Zen 5 x86 design for 2024 will be outperforming Neoverse ARM and even Tenstorrent's (Jim Keller) high performance RISC-V processor, as per his own prediction. Then we have the OS component: Windows is heavily designed around x86, and unless those rumors of Microsoft killing the NT kernel in favor of a Linux kernel come true, things will be worse for ARM because x86-to-ARM translation will be a mess and a massive disaster. Microsoft cannot even maintain their own engine-based games (Halo Infinite, Slipspace), and their QA of Windows 11 is a gigantic disaster. The insane problems range from the core kernel components to the absolute BS of the UI, which is inspired by Apple and mobile UIs, causing a massive downgrade vs Windows 10 and 10 LTSC in stability.
All in all, Qcomm wants to do something, but by 2024 Q3 AMD Zen 5 will stomp on the HPC space and the Consumer Client space. Plus this is a pure marketing and mental campaign to get people to switch from x86 compute to ARM for something sinister in the future: a total clamping down of Operating System freedom. Look at Windows, it is going down a downgrade path; Android is now the same, a simple sandboxed iOS clone without any user / power user control (Scoped Storage, blacklisted APIs, horrendous UI, Play Store restrictions), and Windows is slowly becoming the same.
PC = Personal Computer will be a thing of the past once x86 is dead, but that death is not near; it will perhaps happen by 2030+, once the Automotive industry totally kills real cars with their new software-riddled disposable electronic ones, and god knows what will become of US economy and technology advancements, or how Orwellian it will get.
pleoxy - Monday, October 30, 2023 - link
Point one: TSMC 4nm is 5nm++. It's not a "new node", it's improved 5nm. 4nm is a marketing number for the improvements.
Point two: the 7840HS is built with 4nm.
Not only will Zen 5 be on the market by the time this really launches, but so will Meteor Lake. It will compete with both head to head. I welcome more players into the space, so I wish them well. But it's going to be very uphill for ARM on PC.
pSupaNova - Tuesday, October 31, 2023 - link
If the chipset can run Linux then it's not locked down.
I would say Windows is more open than ever, because I have been using WSL 2 daily for years to make software that runs on Graviton-based servers.
ah06 - Monday, October 30, 2023 - link
Can you help contextualise these results a bit more for the average user who browses the web, watches videos and does a bit of office productivity?
For such users (pls correct me if I'm wrong), single core scores matter by fractions of seconds, dual and quad core scores matter by fractions of seconds, and max multi score almost doesn't matter at all (there are no casual browser apps that max out 12 or 16 core systems).
What really matters is the performance hit on battery and more than everything else, BATTERY LIFE.
I was under the impression that Apple was far and away the leader in this, so imagine my surprise when the hackjob-optimised Framework laptops with the 7840U provide better ppw than M2 Pro machines with similar performance. The Zen 4 H series didn't suggest this would be the case, but the barely available U series seems to be just as good as Apple's M Pros?
Where is everyone currently and expected to be in 6 months in terms of energy efficiency?
Looking at the TDPs and performance numbers of the Snapdragon X Elite, it seems like it'll *not* be leading competitors in that respect, considering Intel is about to jump architectures and node, AMD will further optimise from their already strong position, and Apple will also jump architecture *and* node.
pleoxy - Monday, October 30, 2023 - link
Singlecore is the most important metric for most workloads. There are limits to how much you can easily break a problem down into parallel threads. It costs a lot more to do this, and software is already expensive.
As a user, you will probably wait for singlecore to finish a task more than anything (faster feels "snappier"). The exceptions are things like video encoding, compression, and servers, where the multi-core muscle shows up. You are correct that most users won't hit their core/thread limit in regular use. Fewer and fewer power users need all those cores. Some users have pretty much unlimited appetite for compute though, so more is *generally* better.
Battery life is generally going to be dominated by SoC power and its ability to do light loads at low power. Full-tilt system speed also affects battery life, but that really depends on your workload and how often you peg your CPU.
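The scaling limit being described here is Amdahl's law: if only a fraction p of a task can be parallelized, no number of cores can speed it up by more than 1/(1-p). A quick illustration, with arbitrarily assumed p values:
```python
# Amdahl's law: with a fraction p of the work parallelizable, the
# speedup on n cores is 1 / ((1 - p) + p / n). The p values below are
# arbitrary assumptions chosen to illustrate the point.

def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (0.5, 0.8, 0.95):
    line = ", ".join(f"{n} cores: {amdahl_speedup(p, n):.2f}x"
                     for n in (4, 8, 12))
    print(f"p={p:.2f} -> {line}")

# Even at 95% parallel, 12 cores manage only ~7.7x; at 80%, ~3.8x.
# Past that point, a faster single core beats more cores.
```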
dudedud - Monday, October 30, 2023 - link
If Andrei helped, why didn't they use SPEC?
Tams80 - Monday, October 30, 2023 - link
Those are some damn impressive numbers!
What's the catch?
meacupla - Monday, October 30, 2023 - link
We'll have to wait and see.
How much does it cost?
How efficient is it?
How many hours can it playback videos over wifi with a 45Whr battery, etc?
How well does it do x86 emulation?
How well does it game?
bigmig - Tuesday, October 31, 2023 - link
Why? In 3 out of 4 of Qualcomm's own CPU benchmarks, the 80W TDP version shows only single-digit percentage gains versus an Intel CPU that's not top of the line today and will be >18 months old by the time X Elite ships. And that's not even considering that x64 has far broader native Windows software support than ARM. What's the draw here for OEMs and end users?
TEAMSWITCHER - Wednesday, November 1, 2023 - link
It's not a product... it's only a component.
techconc - Monday, October 30, 2023 - link
Overall, this chip is very impressive for Qualcomm and for the Windows PC market in general. I came away from Qualcomm’s demonstration very impressed. However, the more I’ve seen since Qualcomm’s presentation, the less impressed I am. For example, even in Qualcomm’s performance machine, their Geekbench 6 numbers didn’t match the numbers in the slide for their presentation where they compared it to the M2 Max (single core of course).
I find this chip a bit odd as compared to Apple’s lineup though. They attempt to compare it to the M2 chip, but realistically, with 12 performance cores and an 80 watt TDP, they are clearly targeting something closer to an M2 Pro.
Either way, as this article mentions, Qualcomm is trying to drive a narrative of how their future chip will perform better than present chips from Apple, AMD and Intel. As we know, Apple has already released their M3 FAMILY of chips, which already looks to provide a considerable advantage over the X Elite. Still, this is a massive improvement over previous offerings available for Windows on ARM. Hopefully, that will kickstart the competition once again.
Hyper72 - Tuesday, November 7, 2023 - link
Yes, I think we need to ignore the QC marketing droids; they're just trying to drum up excitement. And it's a bit much to expect that the first gen Nuvia chip will match Apple's mature M-series on every metric next year; even if it doesn't, it's still impressive and a good start. The important part is that for the first time there's a viable and performant ARM chip for the PC - something Intel said publicly just last week was not even remotely a threat and not on their list of concerns.
t.s - Thursday, November 2, 2023 - link
Q: "And what will be the price?"A: "Relax. It's affordable. Just $1999 for our standard model"
:p
Carson Lujan - Monday, November 6, 2023 - link
Apple: cracks knuckles.
"Now introducing M3"
Qualcomm: sweating profusely
Benchmarks: M3 beats Qualcomm