26 Comments

  • Dante Verizon - Wednesday, February 21, 2024 - link

    Probably loses to Bergamo. They are comparing it to Genoa
  • Dividend3990 - Wednesday, February 21, 2024 - link

    Where is the comparison?
  • Terry_Craig - Wednesday, February 21, 2024 - link

    https://www.anandtech.com/Gallery/Album/9432#16
  • Soulkeeper - Wednesday, February 21, 2024 - link

    Can someone define "CSS Design" ?
  • mode_13h - Thursday, February 22, 2024 - link

    I think it's basically like ordering an entire meal, instead of ordering separate appetizer, soup, main course, dessert, and drinks. I'm pretty sure (not certain) that the CSS options are just designs where all the pieces you need are already put together, for you, saving you the work of having to integrate the IP, yourself.
  • lemurbutton - Wednesday, February 21, 2024 - link

    AMD is screwed. Big Cloud companies will make their own ARM chips to cut out AMD's margins. AMD doesn't have a foundry business to fall back on.
  • PixyMisa - Thursday, February 22, 2024 - link

    ARM has been promising that for a long time now.
  • Dante Verizon - Thursday, February 22, 2024 - link

    Dumb awards.
  • spaceship9876 - Thursday, February 22, 2024 - link

    AMD could make risc-v cores.
  • mode_13h - Thursday, February 22, 2024 - link

    AMD could sell chiplets. Next year, they'll have their own ARM cores, so they could give customers an option of either x86 or ARM chiplets.

    AMD also has their CDNA and XDNA IP to offer customers.
  • Dante Verizon - Thursday, February 22, 2024 - link

    You keep talking to a troll as if he were a normal, educated person backed by arguments.
  • SarahKerrigan - Thursday, February 22, 2024 - link

    I seriously doubt AMD will have their own ARM cores. I expect the forthcoming PC chip to be Cortex-X.
  • mode_13h - Monday, February 26, 2024 - link

    Why is it so hard to believe that AMD made their own ARM cores? Didn't you hear about the (now-canceled) K12?

    Furthermore, what would be the point of them just using a Cortex-X core? Just so they could ship it with a RDNA4 iGPU and XDNA NPU? That doesn't sound worthwhile.
  • SarahKerrigan - Thursday, February 29, 2024 - link

    I don't doubt AMD's capability to design ARM cores. I doubt the inclination of their leadership to throw large sums of money at the R&D associated with designing custom ARM cores for the potato-scale WoA market.

    We'll see in a year, I guess.
  • mode_13h - Friday, March 1, 2024 - link

    What if AMD has its eye on the server market, also? Or primarily?
  • Kangal - Sunday, March 3, 2024 - link

    I'm with Sarah on this one.
    Nvidia did design their own ARM core architecture; sometimes it paid off, other times not really. And from what it seems, AMD has more talent on their team, so I don't doubt they could design and build a competitive solution, one that is fully custom.

    However, there is very little reason for AMD to do so. For something worthwhile to come of it, it takes years. Qualcomm was on to something special with their Kryo-100 architecture, but they fired the whole division and went with stock (or mildly tuned) ARM cores. Samsung tried to follow Apple and do something ambitious with their Mongoose cores, but weren't as talented, and by the time it was paying off they were at a big lithographic disadvantage.

    With all that in mind, it's highly unlikely AMD even dips their toes into the ARM environment. And if they did, they would definitely go stock, and would lean on Samsung for expertise, like with the Exynos 2400 or Tensor G3 systems.

    What I'm hoping for is that AMD is able to "marry" x86 and ARM together physically on the same chipset: use an over-engineered Ryzen CPU as an accelerator to process the x86-specific threads, while the bulk of the mundane threads are handled by the ARM cores. That could save a lot on heat and energy, and could power the next generation of handhelds, tablets, and laptops, and maybe spill over into more efficient/performant chips for desktop and cloud as well.

    But these are long-term visions. Perhaps they have already had these in the lab for years, and they might surprise us in the near future; they have managed to surprise us with Zen, X3D, and the AM5 platform, so they might be cooking something. Otherwise, don't hold your breath. Even Qualcomm took until 2025's Oryon to productize 2019's Nuvia, which is an eternity in the fast-paced tech industry.
  • mode_13h - Sunday, March 3, 2024 - link

    > Nvidia did design their own ARM core

    AFAIK, those were weird VLIW cores with JIT translation. Makes sense for some embedded applications, but I think had no real mainstream potential.

    > However, there is very little reason for AMD to do so.

    If we assume that ARM has an inherent efficiency advantage, then AMD should struggle to compete (via x86) in the two most important markets for them: laptops and cloud CPUs. That's the upside of them doing a custom ARM core.

    > Qualcomm was on to something special with their Kryo-100 architecture,

    IIRC, they didn't perform much better than ARM's own IP. Considering the expense of doing in-house designs, I don't really blame Qualcomm for pulling the plug on that effort.

    > it's highly unlikely AMD even dips their toes into the ARM environment.
    > And if they did, they will definitely go Stock.

    And what's the point of that? What would their competitive advantage be vs. Mediatek or the hyperscalers with their own ARM CPUs containing ARM IP?

    > What I'm hoping for, is that AMD is able to "marry"
    > x86 and ARM together physically on the same chipset.

    Won't happen. The ARM and x86 system architectures are too different for them to play nicely in the same machine and within the same kernel.

    > Even Qualcomm took ages to move 2019 Nuvia into a 2025 Oryon,

    They were starting from scratch, other than the knowledge in their heads. Also, I think they missed more than one process window and had to then port their design to a new node, which also takes time.

    In AMD's case, Jim Keller said the K12 was basically a project to replace Zen's frontend with ARMv8-A. If that's the approach they're taking, and if he believed it was feasible even with the resources they had 10 years ago, then I'm sure they could make a go of it now!
  • Kangal - Sunday, March 10, 2024 - link

    I don't think you comprehend the situation.
    Sure, a corporation like AMD has a huge advantage over a company like Google AND Microsoft combined when it comes to taking stock ARM technology and adapting it into a new architecture.

    But AMD does NOT have that advantage over the rest of the market, where other players have a lot more heritage. The K12 was nothing special; that's a fact. If it were special, the technology would have been sold or licensed out, or better yet released for sale. Besides, the market has moved on since then: first the big leap in 2016 with 16nm/Cortex-A73, then incremental leaps to 8nm/Cortex-A78 in 2020, and then 4nm/Cortex-X3 in 2023. That all happened in a fairly small space of time, which is important context for the Apple/Nuvia/Qualcomm/Oryon situation. In the same timeframe we saw Intel struggle from 14nm 4-core Skylake, to faster 8-core Skylake, then 12th-gen, and now the refresh. If you think AMD can enter the server space with ARM technology, you must appreciate that nothing is stopping the likes of Apple, Samsung, Qualcomm, Nvidia, MediaTek, or other players. Heck, even Amazon. These companies could potentially enter the market with a better product than AMD and compete with, or even dominate, it.

    Where AMD actually has an advantage is with x86 and the legacy code it supports. That's what is keeping them competitive. And with the limited budget and R&D they have, they are making the right calls this decade.
  • Kangal - Sunday, March 10, 2024 - link

    What makes the most sense for AMD is to:
    - out-compete Intel (main rival)
    - keep pushing x86 forward (ergo, trucks vs cars)
    - look into new ways of innovating hardware/software

    ...and if they want to incorporate ARM technology, do so by adopting it into their own systems. Just because it is difficult does not mean it is impossible; there are niche systems out there with ARM cores and with RISC-V cores. My idea makes the most sense: use the ARM cores, which are more efficient, for most tasks, and use the Zen cores for the odd thread. It's like going back to the '80s, when devices had more freedom in their designs; modern systems stick to fairly well-understood setups.

    So yeah, AMD will either steer clear of ARM technology, or they will innovate with it and incorporate it. They won't try to compete with the established ARM players on their turf, as that is not a wise business decision. Even if they MIGHT have a chance, it's a high-risk/low-reward scenario: throwing good money after bad.
  • Findecanor - Thursday, February 22, 2024 - link

    This must be the least comprehensible article I've seen on this web site yet.
  • mode_13h - Thursday, February 22, 2024 - link

    I didn't find any noteworthy issues with its comprehensibility. Sure, it doesn't do a whole lot of explanation, pretty much narrating and summarizing most of the slides, but I thought it was okay.
  • mode_13h - Thursday, February 22, 2024 - link

    > I suspect we’re looking at a CPU core design that borrows heavily from Cortex-X5 –
    > Arm’s next-generation Cortex-X design –
    > in keeping in line with the use of X1 and X3 for V1 and V2 respectively.

    Didn't those other examples typically lag the X-series core on which each was based? In that case, V3 would be derived from X4.

    I think it's interesting how the V-series was initially positioned as more of a HPC-oriented core, yet it's been going into mainstream applications, like Graviton 3 & 4. Meanwhile, the N-series gave the impression of being the main Neoverse workhorse, yet it's been sliding down towards the more power-sensitive edge applications. As for the E-series, I've barely even heard about anything using those.

    > A Look Towards The Future: Adonis, Dionysus, and Lycius

    Why do they even bother with code names? They're pretty obviously going to be launched as the V4, N4, and E4! It's not as if calling them by those names would really tie their hands, should they wish to rebrand the Neoverse lineup.

    > fourth generation Neoverse IP is likely a couple of years out.

    One of their slides (#11) seems to indicate these cores took only 9 months to tape out. Therefore, we could be looking at the next gen in just 1 year.

    * There are some interesting details in the End Notes slides, such as that the V3 features 4x 128-bit SVE pipelines, like its predecessor. They also mention performance estimates for the N3 were based on a "3 nm" process node.

    ** Another key detail they mention is that their SPEC2017 performance estimates use GCC's -Ofast option, which breaks standards compliance and could change the results of some computations. They also use LTO (link-time optimization).
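
    For illustration, here is a minimal C sketch (my own example, not from Arm's slides) of how -Ofast can quietly change a result: -Ofast turns on -ffast-math, which includes -ffinite-math-only, so the compiler is allowed to assume NaN never occurs and may fold the isnan() check away (exact behaviour varies by GCC version).

    /* Build and compare:
     *   gcc -O2    nan_demo.c && ./a.out   -> "NaN detected"
     *   gcc -Ofast nan_demo.c && ./a.out   -> may print "looks finite"
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double zero = 0.0;   /* volatile blocks compile-time folding */
        double x = zero / zero;       /* produces NaN at runtime */

        if (isnan(x))
            puts("NaN detected");
        else
            puts("looks finite");
        return 0;
    }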
  • Wilco1 - Friday, February 23, 2024 - link

    Using GCC with -Ofast and LTO is a fairly minimal setup, since that is how many people compile their own applications. Intel and AMD use their own compilers, which have benchmark-specific optimizations for SPEC. It's something that has been going on for many years; many Intel scores were recently invalidated because of this cheating.
  • mode_13h - Monday, February 26, 2024 - link

    > Using GCC with -Ofast and LTO is a fairly minimal setup,
    > since that is how many people compile their own applications.

    I'm skeptical that -Ofast is very common. Because it's dangerous, you'd want to use it with care and on code you either know quite well or that you can test comprehensively.

    I also think LTO is probably more the exception than the norm.
  • Wilco1 - Monday, February 26, 2024 - link

    Several distros enable LTO by default: https://www.phoronix.com/news/Arch-Linux-LTO-Propo...

    If you want your integer code to be vectorized, you need -O3. If you want your floating point code to be vectorized, you need -Ofast. It works fine in SPEC and most applications. Dangerous? Only if you have no idea what you're doing...
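
    To make that concrete, here is a minimal sketch (my own example, not from the linked article): a floating-point reduction like the one below usually stays scalar at -O3, because GCC has to preserve the IEEE order of the additions, but it can be vectorized under -Ofast, which permits reassociation via -ffast-math.

    /* Check the vectorizer's decision with -fopt-info-vec:
     *   gcc -O3    -fopt-info-vec -c sum.c   -> typically no "loop vectorized" note
     *   gcc -Ofast -fopt-info-vec -c sum.c   -> "loop vectorized"
     */
    float sum(const float *a, int n)
    {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += a[i];   /* FP reduction: reordering it requires -ffast-math */
        return s;
    }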
  • [email protected] - Friday, February 23, 2024 - link

    Using HBM3 is a big win. Core needs to be CPU/GPU independent?
