
  • Makaveli - Monday, August 28, 2023 - link

    2024 looks like it's going to be a big year in the server space for these processors. Can't wait to see how they stack up against AMD's EPYC competition.
  • Exotica - Monday, August 28, 2023 - link

    And Zen 5 is coming. So EPYC vs. Xeon. Competition is good for the marketplace.
  • Samus - Tuesday, August 29, 2023 - link

    On the desktop side, the amount of performance available has gotten almost ridiculous, while nothing is utilizing it. Client-side AI seems to be on the back burner, as everything is connected and presumed to eventually become a (datacenter-driven) service. Games are bottlenecked by GPUs at 4K. And a 10-year-old Haswell-era system is still adequate for 90% of office PC environments, where web browsing, office tasks, and communications are not demanding.

    And Microsoft knows it. Why else the BS restrictions on Windows 11 (and even more demanding hardware requirements, 10th gen or Zen 2+, for Windows 12), disguised as 'security improvements'?
  • James5mith - Tuesday, August 29, 2023 - link

    I'd be happy if we got back to 65W TDP (or, you know, whatever they ACTUALLY call power consumption these days) for the mainstream parts. Having a nominally 65W TDP chip consume 200W+ under load is stupid.

    Especially since, as you said, older chips that actually stayed within their power envelopes still do everything they need to do for the majority of consumers.
  • meacupla - Tuesday, August 29, 2023 - link

    Yeah, but back in the 65W i5 days, the i5 and i7 were 4-core parts and had very little performance gain over their previous generation.

    There is even a power limit setting in the BIOS for Intel chips, so that they don't consume 200W+.
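    (For reference: the BIOS power limit mentioned above maps to Intel's RAPL package power limits, which Linux exposes through the powercap interface. A minimal sketch, assuming the intel_rapl driver is loaded, that constraints 0/1 correspond to PL1/PL2 on this system, and that the 65 W cap is purely an illustrative value; writing requires root.)

    ```python
    # Read and cap the package power limits (PL1/PL2) via the Linux
    # intel_rapl powercap driver -- the same knobs a BIOS power limiter sets.
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

    def read_watts(name: str) -> float:
        # sysfs reports these values in microwatts
        return int((RAPL / name).read_text()) / 1_000_000

    print("domain:", (RAPL / "name").read_text().strip())
    print(f"PL1 (long term):  {read_watts('constraint_0_power_limit_uw'):.0f} W")
    print(f"PL2 (short term): {read_watts('constraint_1_power_limit_uw'):.0f} W")

    # Cap sustained package power at 65 W (illustrative; needs root)
    (RAPL / "constraint_0_power_limit_uw").write_text(str(65 * 1_000_000))
    ```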
  • nandnandnand - Tuesday, August 29, 2023 - link

    If you can tweak any chip you buy to act like the "35W" variants (e.g. 13900T = 35-106W, 13400T = 35-82W), then it's not really a big deal. You have the silicon, make it run efficiently.

    On AMD's side, they need to make some 65W desktop APUs for AM5. Although the 610M-equivalent iGPU in Ryzen 7000 is a positive development that will allow many chips (all but the 7500F) to be repurposed without a discrete GPU.
  • Chokster - Tuesday, August 29, 2023 - link

    True. I'm browsing this website right now on a ThinkPad W530 with Ivy Bridge. I rarely have to turn on the desktop unless I'm gaming or editing photos.
  • Hlafordlaes - Monday, October 16, 2023 - link

    I popped a used 12C/24T Xeon into my old X99 board and it is a joy to use. Ditto another Z97 build using a low-power 4C/8T Xeon. Up to Windows 7, Socket 775 still rules, and it's fine for Linux.
  • 0iron - Tuesday, August 29, 2023 - link

    "As announced by Intel back in 2024," (5th paragraph)
    I forgot what year it is now?
  • Ryan Smith - Tuesday, August 29, 2023 - link

    Look, it's tough being a time traveler, okay? Tomorrow is yesterday. Yesterday is next week. And next week is the day after tomorrow.
  • GeoffreyA - Monday, September 4, 2023 - link

    We may have to instantiate a Reality Change to erase this information...
  • Notmyusualid - Saturday, September 9, 2023 - link

    To be pedantic, 2566.

    https://thaiguider.com/what-year-is-it-in-thailand...

    :)
  • duploxxx - Tuesday, August 29, 2023 - link

    So after many, many years Intel is cloning AMD; history repeats itself...

    The first EPYC had its flaws; the third generation (Genoa) was the first to get everything straight (mainly I/O-wise). Let's see how many of the flaws Intel copied over... They don't really have a track record over the last decade of getting things right from the start.

    Interesting to see that Sierra comes first in H1 and Granite shortly after, so that is for sure H2, else they would have mentioned H1... and a very late H1 release and a possible paper launch mean Q3 for actual systems to buy and Granite in Q4... so much for 'on track'. Which means all of these products will be up against full AMD Zen 5.

    No word on Granite's real core count beyond 'higher', but Genoa already has 96, which is 60% more than Sapphire Rapids.
    Sierra has 144 mentioned; Bergamo already has 128 today and is rumoured to double with Zen 5...

    No word on HT for Granite.

    Intel stating Granite and Sierra are the next big leap, ehhh, is that the same marketing team? Sapphire Rapids was going to be the next big leap from Cascade Lake and Ice Lake, and we were all advised to wait (and wait again) for it... today Intel DC sales are pushing a Sapphire Rapids refresh as the next thing, which shows that all these announcements are far, far away.

    Marketing and the CEO (Patje) are all over the place these days stating how well Intel will do in the future; it shows how desperate they are. It is visible in sales, and it is visible in benchmarks (unless you believe Intel marketing).
  • nandnandnand - Tuesday, August 29, 2023 - link

    It is interesting that it's apparently using a tweaked version of the E-core (Sierra Glen) that diverges from the consumer version (Crestmont).

    Maybe 144c/144t Sierra Forest could fare better than expected against 128c/256t Bergamo.
  • jjjag - Tuesday, August 29, 2023 - link

    Not sure what you think Intel is cloning. This architecture is very different from what has been shown of the Zen DC CPUs. The cache and memory architecture is completely different here: Intel has a small I/O die with apparently just I/O, and the DDR interface appears to be on the CPU chiplets, as does the cache. AMD has a large central I/O die with both DDR and cache, and much smaller CPU chiplets. Unless you mean using chiplets at all? AMD was certainly not the first to move away from monolithic, and even Intel's Sapphire Rapids uses a 4-chiplet package. We know that the Sapphire design started MANY years ago and that it's late due to execution. So the entire industry realized giant monolithic server chips were not feasible probably 10 years ago. But you are giving AMD all the credit?
  • DannyH246 - Tuesday, August 29, 2023 - link

    Imitation is the highest form of flattery! All Intel have done over the last 20 years is copy AMD, and now they're doing it again. Lol.
  • jchang6 - Tuesday, August 29, 2023 - link

    The Intel P-cores are designed for 5-6 GHz operation, which is achievable in the desktop K SKUs. They are limited to the mid-3 GHz range in the high-core-count data center processors. OK, sure, we would rather have 60+ cores at 3 GHz than 8 cores at 6 GHz.
    Now consider that the L1 and L2 caches are locked at 4 and roughly 14 cycles. At 5 GHz, this corresponds to 0.8 ns and 2.8 ns for L1 and L2 respectively. Instead of turbo boost, if we limited the P-core to 3 GHz, could L2 be set to 10 cycles?
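    (The cycles-to-nanoseconds arithmetic above, plus the 3 GHz / 10-cycle scenario being asked about, worked out in a few lines. Note that 10 cycles at 3 GHz is still more wall-clock time than 14 cycles at 5 GHz, so the win would be in cycles saved, not nanoseconds.)

    ```python
    # Cache latency in wall-clock time is cycles / frequency (GHz -> ns).
    def latency_ns(cycles: int, ghz: float) -> float:
        return cycles / ghz

    cases = [
        ("L1,  4 cycles @ 5 GHz", 4, 5.0),   # 0.80 ns
        ("L2, 14 cycles @ 5 GHz", 14, 5.0),  # 2.80 ns
        ("L2, 10 cycles @ 3 GHz", 10, 3.0),  # 3.33 ns -- more absolute time
    ]
    for label, cycles, ghz in cases:
        print(f"{label}: {latency_ns(cycles, ghz):.2f} ns")
    ```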
  • name99 - Wednesday, August 30, 2023 - link

    Honestly, by far the most interesting presentation is (surprise!) the ARM one.
    https://www.servethehome.com/arm-neoverse-v2-at-ho...

    The reason it's so valuable is that it contains by far the most technical data. It's especially interesting to compare against Apple (which doesn't make this public, but which we know about if you read my PDFs).
    The quick summary is that ARM continues to lag Apple in the algorithms they use for the CPU core, but with a constant lag – which is fine, since it means both are moving forward. (Unlike, say, Intel, where I can't tell WTF is happening; they seem to have given up on anything except moar GHz.) If you simply go by IPC (SPEC2017), then Apple remains a few percent ahead, though by less than in the past! Apple are still ahead in terms of power and the frequency they are willing to reach, but overall they're closer than they have been in a long time. I suspect this is basically a side effect of the exact timing relative to the M3 not yet being out. But still, good going, ARM!

    Especially interesting are the callouts to how different improvements affect performance – and just how much better prefetch (and ARM's prefetch was already not bad) helps things. As far as I can tell, ARM and Apple are now essentially equal in DATA prefetch ALGORITHMS. Apple is probably ahead in terms of resources (i.e. table sizes and suchlike) and in coordination across the entire SoC.
    Apple remain far ahead in terms of INSTRUCTION prefetch, running a doubly-decoupled fetch pipeline rather than the singly-decoupled pipeline of ARM V2 (and Veyron V1), along with a whole set of next-level branch prediction implementation details; but you don't see instruction fetch seriously tested on SPEC – for that you need to look at large codebases like browsers or sophisticated database benchmarks.

    And it's not over yet. I think there's at least a 50% IPC boost available just by continuing the current round of careful tweaks and improvements, even without grand new ideas like PIM or AMX-style accelerators.

    The Veyron paper is mildly interesting: https://www.servethehome.com/ventana-veyron-v1-ris... but they clearly don't have the manpower that ARM and Apple have, and it shows. They kinda know where they need to go, but who knows if they will ever be able to get there. You can compare it to the V2 paper to see just how much is missing – and how much it matters (basically they get about 50% of V2's IPC).

    MIA are IBM (no idea what that's about, given their previous track record) and anything useful, or impressive, from team x86. Even as everyone on team RISC understands where to go (how to run an optimal fetch pipeline, an optimal branch prediction design, an optimal prefetch engine), team x86 seems determinedly stuck in the 2010s and incapable of moving on.
  • back2future - Saturday, September 2, 2023 - link

    Are there built-in system statistics (or tools) in the various OSes (Linux, macOS, Windows, Android) on hardware-acceleration or hardware-extension utilization, to inform developers which extensions and which areas of the CPU chiplets (efficiency) their own working profiles exercise, and to what extent? At the least, do CPU/SoC vendors analyze that area and provide statistics? (Thx)
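    (A partial answer for Linux: there is no universal built-in utilization statistic, but you can list which extensions the CPU advertises and count how often a workload actually retires, e.g., 256-bit FP instructions using a perf hardware event. A sketch; the perf event name below is Intel-specific (Skylake and later), and "./workload" is a placeholder for your own binary.)

    ```python
    # Check advertised ISA extensions, then measure actual AVX2-class usage
    # of a workload with a perf hardware counter.
    import subprocess

    with open("/proc/cpuinfo") as f:
        flags = next((line.split(":", 1)[1].split()
                      for line in f if line.startswith("flags")), [])
    print("AVX2 supported:", "avx2" in flags)

    # Count 256-bit packed-single FP instructions retired by the workload.
    subprocess.run([
        "perf", "stat", "-e",
        "fp_arith_inst_retired.256b_packed_single",
        "./workload",  # placeholder binary
    ])
    ```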
