14 Comments

  • Morawka - Wednesday, July 25, 2018 - link

    This is what I've been dreaming of for years. Makers and inventors will be pleased.

    I always wondered if DARPA was secretly working with Intel. With DARPA's nanorobotics designs and Intel's lithography, the possibilities become endless.
  • austinsguitar - Wednesday, July 25, 2018 - link

    I give this a solid 20 years until it's actually realized. Nothing to see here; move along.
  • smilingcrow - Wednesday, July 25, 2018 - link

    Stevie Ray is disappoint.
  • skavi - Wednesday, July 25, 2018 - link

    Ryzen is already using multiple dies.
  • Dragonstongue - Thursday, July 26, 2018 - link

    ^ This. AMD, with Infinity Fabric and their APU designs, has IMHO the leadership when it comes to multi-die design. Are they as fast per core? No, but when they figure out the secret sauce of reducing latency, sort out the uncore (voltage/power/heat reduction), and make XFR even more robust, Intel will be in a world of hurt.

    GPUs have been built like this for many years in their own fashion... I think stacked die is that much more impressive. If they can figure out how to keep multiple high-performance dies cool under pressure, the upcoming 2.5D/3D stacked chips are going to be sick.

    Cannot wait to see integrated liquid cooling in the core, something IBM was working on for many years; a good chunk of their design team is helping AMD going forward.

    Will be very interesting over the next couple of years ^.^
  • edzieba - Thursday, July 26, 2018 - link

    Kind of. Infinity Fabric over PCB makes a Ryzen/Threadripper 'chip' effectively a multi-CPU board stuffed into a tiny package, with all the NUMA issues that entails. The idea of Modular Chips is to be able to assemble the (already logically discrete) components into a single chip that, for all intents and purposes, performs identically to a monolithic die.
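
    One concrete way to see that distinction is the NUMA topology the OS reports. Below is a minimal Python sketch (Linux-only, assuming the standard sysfs layout; not anything from the article): a first-gen Threadripper typically reports two nodes, one per active die, whereas a chip that truly behaved like a monolithic die would report one.

        # List NUMA nodes and the CPUs on each, via Linux sysfs.
        from pathlib import Path

        for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
            cpus = (node / "cpulist").read_text().strip()
            print(f"{node.name}: CPUs {cpus}")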
  • Samus - Thursday, July 26, 2018 - link

    They aren't completely "modular," though. You can't just pull one out and have it continue working. You can't even really disable one without screwing up the Infinity Fabric.

    There is nothing even remotely close to production in what is being described here.
  • Alexvrb - Thursday, July 26, 2018 - link

    Well... AMD can disable them. First-gen TR had two disabled dies.
  • rahvin - Wednesday, July 25, 2018 - link

    I give it infinity. Just like the modular cell phone, it will be too complicated, too expensive, and just plain not worth it once they try to engineer it. Like most ideas, it will die during the engineering phase, where they actually try to make it work.
  • Dragonstongue - Thursday, July 26, 2018 - link

    HBM is basically EXACTLY this, as is Ryzen in many ways: much more modular than any previous processor AMD has made. The APU ties the concepts of CPU and GPU together, the HBM interposer takes the memory integration a step further, and Infinity Fabric is the backbone.

    So basically AMD has been doing this for what, ~7 years or so, little chunks at a time, refined and perfected in their own manner.

    Small blocks that perform well are much easier to tie together than a massive die, glue or not. Do it right, and latency matters very little at the end of the workload.

    If anything, newer memory (i.e., DDR4, DDR5, etc.), which has increasingly high latency BUT higher bandwidth, will vastly benefit the concept compared to trying to do this many years ago with very low-latency but slow memory.

    Interesting if you spin the thought in different ways... I think Intel is likely to cheap out and use thermal paste, AH HA HA.
  • Arnulf - Thursday, July 26, 2018 - link

    The newer memory interfaces you mention (DDR4, DDR5) don't have higher latency than their predecessors; it's just that as the clock frequency goes up and the clock cycle time goes down, the latency expressed as a number of clock cycles goes up, yielding similar latency when calculated in time units.
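
    A minimal sketch of that arithmetic in Python, using typical assumed retail timings (DDR3-1600 CL9 vs. DDR4-3200 CL16; illustrative values, not figures from the thread): the CL number nearly doubles between generations, yet the absolute latency barely moves.

        # Absolute CAS latency in nanoseconds: CL cycles * I/O clock period.
        # DDR transfers twice per clock, so the I/O clock is half the MT/s rating.
        def cas_latency_ns(data_rate_mts: float, cl_cycles: int) -> float:
            clock_mhz = data_rate_mts / 2
            cycle_time_ns = 1000.0 / clock_mhz
            return cl_cycles * cycle_time_ns

        print(cas_latency_ns(1600, 9))   # DDR3-1600 CL9  -> 11.25 ns
        print(cas_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns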
  • edzieba - Wednesday, July 25, 2018 - link

    I wonder how this ties into IDEA and POSH.
  • iwod - Thursday, July 26, 2018 - link

    Speaking of royalty-free, where is the Thunderbolt 3 license?
  • mode_13h - Saturday, July 28, 2018 - link

    I'm skeptical of how well this would *really* suit CPUs and GPUs. A CPU wants low latency and medium memory bandwidth (assuming the sort of lower-core-count chips that would include a GPU), while GPUs crave bandwidth. Adding a generic bus would seem to add some latency versus a purpose-built, tightly integrated bus, and it probably wouldn't scale as well.

    As for other blocks, sure. But I still foresee monolithic dies with CPU, GPU, and memory controller. Something like this can be used to tie in everything else.
