9 Comments

  • nandnandnand - Tuesday, February 11, 2020 - link

    16+ layers is exciting to be sure, but it sounds like the interconnect density is a bigger deal. What magnitude of improvements will that bring?
  • lightningz71 - Wednesday, February 12, 2020 - link

    In the areas where this is being looked at, one of the biggest current roadblocks to performance is getting enough individual electrical paths between devices. Given how small current lithography is, many IC packages are needlessly large for the single purpose of providing enough area for connection pins. Scaling connection density from the current ~625/mm-sq to "100,000+"/mm-sq gives you roughly a 160x to 1,600x increase in the number of connections. Now, because you're dealing with taller stacks, those connections get spread across more individual devices, but even if you quadruple the number of units in the stack, that increase in connection density still easily allows 10-100X the bandwidth between devices at current per-pin data rates.

    Conveniently, devices largely can't use that magnitude of bandwidth increase at present; there just isn't enough processing power to sustain that level of throughput in most cases. Instead, you can cut the per-pin data rate by a factor of ten, which requires SUBSTANTIALLY less power. So, cutting the data rate by a factor of ten while increasing the number of channels by a factor of 100-1,000 still nets you 10-100X the data throughput at a potentially significant power savings (rough numbers sketched below). And all of this can be accomplished with even smaller packages because of the crazy high pin density, which means less package material used and more devices packed into a given footprint.
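
    Here is a minimal sketch of that arithmetic, assuming illustrative numbers: the interconnect densities are the figures quoted in the article, while the bond area and per-pin data rate are made-up placeholders chosen just to show how the scaling works.

        # Back-of-envelope sketch of the density/bandwidth/power tradeoff described above.
        # Densities come from the article; area and data rate are illustrative assumptions.
        old_density = 625        # connections per mm^2 (traditional copper pillar)
        new_density = 100_000    # connections per mm^2 (DBI Ultra, low end of the claimed range)
        area_mm2 = 25            # assumed bond interface area between two stacked dies
        per_pin_gbps = 4.0       # assumed per-pin data rate with today's interconnects
        rate_cut = 10            # drop the per-pin rate 10x to save power

        old_pins = old_density * area_mm2
        new_pins = new_density * area_mm2
        old_bw_gbps = old_pins * per_pin_gbps
        new_bw_gbps = new_pins * (per_pin_gbps / rate_cut)

        print(f"connections: {old_pins:,} -> {new_pins:,} ({new_pins / old_pins:.0f}x)")
        print(f"bandwidth:   {old_bw_gbps:,.0f} -> {new_bw_gbps:,.0f} Gb/s ({new_bw_gbps / old_bw_gbps:.0f}x)")
        # Even with the per-pin rate cut 10x, aggregate bandwidth still rises ~16x here;
        # the 1,000,000/mm^2 upper end of the claimed range would push that towards ~160x.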
  • nandnandnand - Thursday, February 13, 2020 - link

    I like this answer. Thanks.
  • Averant - Wednesday, February 12, 2020 - link

    Well, consider: traditional copper pillar interconnects are what, 600-ish per square mm? This technology is promising 100,000 - 1,000,000 per square mm. More interconnects means more throughput and higher bandwidth. All things being equal (which we don't have the information to confirm), we're talking several orders of magnitude more potential performance (see the quick calculation below).
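
    To put a number on "several": here is a quick sketch, taking the rough 600/mm-sq figure above and the claimed DBI Ultra range at face value.

        # How many orders of magnitude separate the quoted interconnect densities?
        import math

        copper_pillar = 600               # connections per mm^2 (rough figure above)
        dbi_ultra = (100_000, 1_000_000)  # claimed DBI Ultra range

        for density in dbi_ultra:
            ratio = density / copper_pillar
            print(f"{density:>9,}/mm^2 -> {ratio:,.0f}x denser "
                  f"(~{math.log10(ratio):.1f} orders of magnitude)")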
  • nandnandnand - Thursday, February 13, 2020 - link

    Or a couple of orders of magnitude at reduced power consumption.

    Hopefully we will see at least 4-8 GB of HBM being stacked on all CPUs (Zen 4?), GPUs, and APUs. And then 3DSoC for the ultimate performance gains.
  • Santoval - Wednesday, February 12, 2020 - link

    "All of this does come at a cost, however, and that's something that Invensas keeps to itself. The firm does not disclose how much DBI Ultra costs to use, or how that compares to the technologies used today."
    Confidential licensing terms (particularly licensing fees) are always suspect. One can assume that DBI Ultra costs much more than the competition, and that secret licensing fees let the company ask for different amounts on a case-by-case basis. But why keep that secret? They could always negotiate openly.
    Volume discounts (larger device volumes = lower fees) don't need to be secret either. Is the cost of DBI Ultra so much higher than "copper pillar" that they are afraid of scaring off interested parties? Wouldn't under-the-table negotiations scare off more clients, though?
  • valinor89 - Wednesday, February 12, 2020 - link

    I actually read that as them not having disclosed the actual cost difference between the old method and the new one. Apart from licensing costs, the most important thing would be what it actually costs to use on a product. Does it need new machines and lengthy or fiddly processes?
  • SaberKOG91 - Wednesday, February 12, 2020 - link

    Doesn't really matter. SK can afford it and it will get used across all of their memory product lines.
  • carcakes - Sunday, February 23, 2020 - link

    It could have something like 8 GB of HBM2 memory, versus a triple-socket 7 TB DDR4 configuration.
