
  • Kangal - Wednesday, March 2, 2022 - link

    I wonder if this means Personal Computing will move onto an ARMv9 or v10 standard. That's iOS/macOS, Linux, Android, and Windows.

    Everything will run that for better performance, better efficiency, and better security.

    Whilst x86 features are kept for legacy program support, added to a system much like the way you would add an ML/AI/NN co-processor, a GPU, or even something like the Tensor cores of a GPU.

    It's possible, but not elegant and I get a headache just thinking about such a future.
  • brucethemoose - Wednesday, March 2, 2022 - link

    This doesn't have anything to do with the ISA, though? And the x86 incumbents are all part of this anyway.
  • brucethemoose - Wednesday, March 2, 2022 - link

    Wow, that's quite a list. Now I'm interested in who is *not* part of this, and why.

    -Nvidia
    -Broadcom (who has a long history with chiplets IIRC)
    -Chinese big tech (Tencent, Alibaba, Mediatek and others)
    -GlobalFoundries
    -IBM
  • melgross - Wednesday, March 2, 2022 - link

    Apple?
  • brucethemoose - Wednesday, March 2, 2022 - link

    Yeah. As insular as Apple is, even they would want to integrate 3rd party blocks into their own designs.
  • Doug_S - Wednesday, March 2, 2022 - link

    They don't need to integrate 3rd party blocks on separate chips, at least not where high bandwidth matters (e.g. they use Broadcom chips for WiFi, but that's only a couple gigabits of bandwidth, which is easily done with current technology).

    I'm sure Apple knew what was going on via TSMC, given how closely they cooperate, so they didn't have any need to participate directly. If they experiment with this sort of thing, it will be for lower-volume, higher-ASP products like the Mac Pro. They have to be pretty conservative with high-volume products like the iPhone; they'll let their lower-volume products and other companies take the risk of launch delays.
  • ikjadoon - Wednesday, March 2, 2022 - link

    Would they? What high-bandwidth third-party die would Apple need to connect?

    Apple's high-bandwidth silicon (GPU, CPU, NPU, key video accelerators, etc.) is all Apple-owned, so an Apple-designed, private interconnect can be used whenever Apple's dies grow too large.

    I'm struggling to find any third-party, high-bandwidth IP that can't be integrated into the die and that Apple would want badly enough. Apple only makes consumer/business, on-premises, single-user hardware. No enterprise, no hyperscaling, no cloud/hosted hardware, etc. If the software can't run on a local macOS client, then Apple doesn't bother accelerating it.

    Thus, Apple can roll its own interconnects for the foreseeable future: it's not too surprising that it would ignore an industry coalition targeting third-party die interconnects.
  • dotjaz - Sunday, March 6, 2022 - link

    So are Intel and AMD. What 3rd-party high-speed die do you see Intel or AMD integrating on-package?
  • fishingbait15 - Wednesday, March 2, 2022 - link

    What unites the players on this list: they are either cutting-edge foundries/packagers (TSMC, ASE), designers of general-purpose CPUs for end-user products (ARM, Google, Microsoft, Meta), or some combination (Intel, Samsung, Qualcomm, AMD). As for your list:

    Nvidia: basically exited the general-purpose CPU market years ago, though they still supply them (with 2016-era designs) for the Nintendo Switch and Shield TV. They do GPUs, specialty CPUs, and their next big play is data center/cloud server products.

    Broadcom: mostly non-CPU components with a few very generic CPUs.

    MediaTek: Taiwanese, not Chinese. They don't do R&D. Their thing is to take others' last-year innovations and sell them cheaper this year.

    Chinese big tech: banned due to fears that they will appropriate IP without paying for it and use it in their military and intelligence hardware

    GlobalFoundries: general-purpose foundry, not cutting-edge.

    IBM: see Nvidia. Their CPUs are for servers and mainframes.

    Amazon: see Nvidia and IBM. Their CPUs are for cloud servers.

    Ampere: see a pattern? Their ARM CPUs are server only.

    Perhaps down the line there will be advantages to chiplets in servers. At that point you will see Nvidia, IBM, Amazon, Ampere, etc. take an interest. Right now, the goal appears to be increased performance on consumer devices that want to put the CPU, GPU, NPU, DPU, SPU, and misc. ASICs on "the next best thing" to an SoC. But you can't put a 128-core AMD Epyc server CPU in the same "chiplet" package with an Nvidia data center GPU. Nvidia created NVLink just to get them into the same rack.
  • Pelle Hermanni - Thursday, March 3, 2022 - link

    MediaTek very much designs their own 5G and 4G modems and video blocks (first company with AV1 hardware decoding). Their GNSS chip has a resolution of one meter; Qualcomm's GNSS chip had a resolution of three meters.
  • dotjaz - Sunday, March 6, 2022 - link

    Oh, and how exactly do you ban somebody for fear of non-payment when it's free?

    Oh, one more thing, since it happened 7 years ago and you might not remember: AMD is using MediaTek wireless chips in their platform. Pretty impressive considering MTK has "no R&D" and AMD had to rely on them; wouldn't you imagine AMD has some R&D?

    https://www.anandtech.com/show/16666/amd-wifi-6e-rz...
  • dotjaz - Sunday, March 6, 2022 - link

    Sorry, replied to the wrong level. Should have been the idiot racist.
  • timecop1818 - Wednesday, March 9, 2022 - link

    MediaTek GPS has always been dogshit compared to literally every other GPS chipset. u-blox is vastly superior.
  • dotjaz - Sunday, March 6, 2022 - link

    You might be too stupid and racist to comment. MediaTek has their own modem, codec, NPU, etc. They are currently the only company using Mali-G610/G710. And they have been using every Cortex-A series within months of launch, barely a quarter behind Qualcomm and Samsung.
  • dotjaz - Sunday, March 6, 2022 - link

    The Dimensity 9000 launched merely weeks after the Exynos 2200 and barely 2 months after the Snapdragon 8 Gen 1. I assume your lifespan is about 2500 days, so 30 days might be a "year" for you.
  • eastcoast_pete - Thursday, March 3, 2022 - link

    I had the same question. But many of these current non-participants might well avail themselves of the tech outlined in UCIe. Several (such as Apple or NVIDIA) can access the technology through their foundry partners, especially TSMC and Samsung.
  • Pelle Hermanni - Thursday, March 3, 2022 - link

    MediaTek is Taiwanese like TSMC, not Chinese.
  • brucethemoose - Wednesday, March 2, 2022 - link

    Also, I see "MEMORY" is one of the blocks in those slides... would this standard be used for memory too?

    Maybe it will be merged with a future HBM standard?

    I can also envision CPUs with low-latency, low-power DRAM sitting right on the (traditional) package; I'd gladly trade the expandability of DIMMs for that.
  • Wereweeb - Wednesday, March 2, 2022 - link

    HBM is just a chiplet, so there's a possibility. And HBM has comparable latency to standard DIMMs. I'd much rather have OMI DIMMs, for more flexibility.

    Don't forget that silicon devices will have ever-increasing failure rates as we shrink them to sizes where the position of one atom makes a difference. Losing modularity for marginal power gains just isn't worth it, IMO.
  • brucethemoose - Wednesday, March 2, 2022 - link

    Depends on the gains and the form factor.

    In a CPU, every bit of memory latency you can shave off is huge... though I'm kinda surprised HBM latency isn't lower than GDDR or DIMMs. Maybe it's just not tuned for that?

    And in stuff like laptops or consoles, you don't get much modularity anyway, and the saved space on the PCB is a bigger benefit.
  • JasonMZW20 - Thursday, March 3, 2022 - link

    HBM latency is much lower than any off-package memory technology, given the short trace lengths and on-package location. It also consumes much less power than off-package memory.

    Nvidia's (Micron) 21 Gbps GDDR6X can consume 65-80 W alone (when all chips are used and maximum bandwidth is needed), so it's logical that Nvidia uses HBM in datacenter/server products, where the extra cost can be offset.
  • evilspoons - Wednesday, March 2, 2022 - link

    I dunno, "oops I bought the 16 GB CPU instead of the 32 GB one" sounds kinda like hell.

    I suppose it'd be fine as long as whatever on-CPU DRAM you crammed on was also supplemented by modules. Return of the L4 cache?
  • brucethemoose - Wednesday, March 2, 2022 - link

    That could work in desktop workloads if the "L4 Cache" is exclusive. Data/code that's actually getting used gets pushed to the on-package memory, while code/data just sitting there taking up space (which desktops tend to have a whole lot of) can be banished to cheap DIMMs.
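    Something like this toy Python sketch, purely for illustration (the page counts, names, and eviction policy here are all made up, not any real memory controller's design): hot pages get promoted into the on-package tier, and since the tiers are exclusive, whatever falls cold gets demoted back down to the DIMMs.

    ```python
    # Toy sketch of an *exclusive* two-tier memory policy (hypothetical):
    # hot pages live in a small on-package tier, cold pages get demoted to
    # DIMMs, and a page is never resident in both tiers at once.
    from collections import OrderedDict

    ON_PACKAGE_PAGES = 4  # tiny capacity, just for illustration

    on_package = OrderedDict()  # page_id -> data, LRU page at the front
    dimm = OrderedDict()        # the big, cheap, expandable tier

    def access(page_id):
        """Touch a page: promote it on-package, demoting the LRU
        on-package page to DIMMs if the fast tier is full."""
        if page_id in on_package:
            on_package.move_to_end(page_id)          # refresh recency
            return "on-package hit"
        data = dimm.pop(page_id, f"data-{page_id}")  # exclusive: leaves slow tier
        if len(on_package) >= ON_PACKAGE_PAGES:
            cold_id, cold_data = on_package.popitem(last=False)  # evict LRU...
            dimm[cold_id] = cold_data                            # ...down to DIMMs
        on_package[page_id] = data
        return "promoted from DIMM"

    for p in [1, 2, 3, 4, 1, 5]:  # page 5 pushes out page 2, the coldest page
        print(p, access(p), list(on_package))
    ```

    Real hardware would track heat per cache line or page with far more nuance, but the appeal of the exclusive arrangement is the same: the two capacities add up instead of duplicating each other.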
  • myself248 - Thursday, March 3, 2022 - link

    typo "seems" should be "deems"
  • Ryan Smith - Thursday, March 3, 2022 - link

    Thanks!
