jjj - Tuesday, August 22, 2017 - link
You live blog the most boring presentations, prioritizing brand affiliations vs. what's interesting.
Ian Cutress - Tuesday, August 22, 2017 - link
Have you told me which presentations you'd find interesting? I've not had an email. I can't read your mind.
Quite apart from which, *I* find it interesting. If I only posted presentations you found interesting, it'd be called jjj-tech. If you don't find it interesting, there's a front page of other content, or multiple ways to go look at something else. It's optional for you to read this. If someone is forcing you, there are bigger issues here.
LeftSide - Tuesday, August 22, 2017 - link
This is huge: simplifying SoC development so that pieces can be developed separately rather than on one monolithic die. It's a very interesting topic, and will shape the future of processor development at Intel.
Thanks for the coverage, Ian
cekim - Tuesday, August 22, 2017 - link
It used to be that MCMs were higher risk, lower yield, and MUCH higher cost. It's pretty fascinating that they are now becoming preferred even where not required by gargantuan dies (Nvidia - looking at you...). It speaks to confidence in pre-package testing and bump bonding tech even at these tiny scales.
ddriver - Tuesday, August 22, 2017 - link
It is huge alright: Intel went from mocking AMD about "gluing chips together" to promoting the bright future of their own glued-together chips. Let's hear it for hypocrisy and lame publicity.
Kevin G - Tuesday, August 22, 2017 - link
AMD mocked Intel for this during the days of the dual-core Pentium 4s.
Lolimaster - Tuesday, August 22, 2017 - link
For sub-5-core CPUs a monolithic die is the better way; AMD was right. In the multicore era, modular is the way to go; AMD was right on that too.
Strunf - Wednesday, August 23, 2017 - link
AMD's 8-core CPUs are monolithic...
There's no right or wrong; there is just cost, performance, and business planning. What we see is that multi-die does come at a performance cost.
Lolimaster - Wednesday, August 23, 2017 - link
They had 2 integer units but only 1 floating-point unit per module.
Xajel - Wednesday, August 23, 2017 - link
AMD mocked them because the Pentium D wasn't actually a dual core: there was no core-to-core communication inside the package. For one core to communicate with the other, it had to send the data out to the chipset, which then returned it to the other core via another bus... Intel's first glue was an actual glue of two separate CPUs into a single CPU package.
Intel's current method is almost an exact replica of what AMD is doing; Intel is just replacing the in-package data route with a more advanced silicon-based one. While a silicon-based route isn't new (i.e. a silicon interposer), Intel's approach is more cost-effective, as you don't need a huge silicon die that covers all the original dies.
MrSpadge - Tuesday, August 22, 2017 - link
Kick their marketing for that and let everyone else move on with technology.
MrSpadge - Tuesday, August 22, 2017 - link
+1
SarahKerrigan - Tuesday, August 22, 2017 - link
Any chance you'll be liveblogging z14? That and Centriq were the only ones I was really looking forward to this year, so I opted to go on vacation instead.
Ian Cutress - Tuesday, August 22, 2017 - link
That's the plan.
SarahKerrigan - Tuesday, August 22, 2017 - link
Thanks! Looking forward to it.
On a related note, I'm astounded you're able to liveblog at all. Last year WiFi was totally unusable in the event hall except during breaks (it wouldn't even associate, most of the time), and I couldn't get a reliable cell connection either. Guess it's improved.
Ian Cutress - Tuesday, August 22, 2017 - link
It hasn't. The WiFi is basically dead with periods of clarity, so everyone fired up their hotspots, which compounded the issue in a giant Faraday cage. This year they blame it on the venue WiFi channels being identical, but they don't have access to change it. I've got my devices on max WiFi power and max aggressiveness. It helps, somewhat. If there's a stall, I queue up what's being said.
chaddichor - Thursday, August 24, 2017 - link
Brilliant reply, Ian. Get your troll-spray out!
SaolDan - Friday, August 25, 2017 - link
Just ignore him.
HStewart - Tuesday, August 22, 2017 - link
I would give Ian more credit on this topic. It may seem uninteresting, but it may quite possibly be a glimpse of what is coming in the future - especially with the challenges of 10nm and below.
But I think the really interesting thing to watch is the system-on-chip connection with this process. It will allow 10nm to be used for CPU(s) and GPU(s), and less demanding 14nm and 22nm for other components. Just keep in mind, Intel's CPU interests have changed, and more focus is on mobile now.
On a side note, this is quite a technical discussion and brings me back to my days at Georgia Tech and an electronic CPU design course - but I ended up on the software side.
Kevin G - Tuesday, August 22, 2017 - link
No, this presentation had some ramifications for Intel's road map going forward - especially that reticle limits and die size are no longer issues.
vladpetric - Tuesday, August 22, 2017 - link
"I don't understand squat, so don't cover it".There, I fixed it.
ZeDestructor - Tuesday, August 22, 2017 - link
Uninteresting to you. I'm stoked and super-happy this stuff is coming along. Yes, it's a bit boring on the surface, but this bodes well for massive scale-out machinations.
Gondalf - Wednesday, August 23, 2017 - link
If you do not understand that reliable packaging solutions are the future of silicon tech, then you have some problems in your brain to fix.
The interposer was an attempt that proved too expensive for large volumes; let's hope EMIB ends up cheaper for other customers (Intel will utilize it a lot).
Many thanks to AnandTech for the coverage.
melgross - Tuesday, August 22, 2017 - link
It looks to be rather sparsely attended. Was the interest level that low?
Ian Cutress - Tuesday, August 22, 2017 - link
Hot Chips is a specialised conference with little press; it's mostly developers and companies. This year was the highest attendance ever, and today is the busiest day. It was early in the morning, however.
SarahKerrigan - Tuesday, August 22, 2017 - link
Different presentations have different attendance. It's usually a couple hundred people in any given one, at least going by previous years.
IME processor presentations tend to have higher attendance than interconnects.
e36Jeff - Tuesday, August 22, 2017 - link
Intel last month: Glue is terrible for microprocessors and leads to inconsistent performance; monolithic designs are the best.
Intel this month: Glue is amazing. It's the best thing ever. GLUE ALL THE THINGS!
cekim - Tuesday, August 22, 2017 - link
Yeah, whoever published that drivel should be fired - twice. It didn't change anyone's mind for the better. There seems to be an industry-wide problem with understanding a pretty basic truth: don't make your customers and reviewers furious with you...
Silicon-based links vs. substrate links. Big difference in power and latency.RedGreenBlue - Tuesday, August 22, 2017 - link
Meh. Looks glued together. It must be terrible.
Proven System Performance and Innovation ❌
Inconsistent Performance From 5 Glued-Together Dies ✅
Dr. Swag - Tuesday, August 22, 2017 - link
*4
Dr. Swag - Tuesday, August 22, 2017 - link
Actually no, you're right. I was thinking about EPYC instead of the pic they published :p
name99 - Tuesday, August 22, 2017 - link
Not to be an Apple fanboy, but what is new here compared to, say, the packaging used by the Apple Watch S1 and S2?
https://www.ifixit.com/Teardown/Apple+Watch+X-ray+...
What it looks like to me (I'd be happy to be corrected) is
- the essential idea is not new, not specific to Intel
- the pitches are not especially extreme compared to what Apple has done
BUT
- maybe substantially cheaper than Apple (? but I suspect this is unlikely)
- maybe substantially more pins, or faster than Apple (? this is definitely possible)
More significant, it seems to me, are two points where Intel seems begrudging in how strongly it is pushing them:
- this tech moving to the high end.
Intel wants to portray this as a great step forward - mix-and-match technology, blah blah. MY GUESS is that it's making lemonade out of lemons. All the advantages and possibilities of this tech were available, at costs the high end is willing to pay, a few years ago. If they weren't adopted, it's because staying on-die was more appealing (?)
Moving off-die in this way looks to me like something Intel has been forced to do (maybe because of upcoming EUV focusable-area limitations, maybe yield issues, maybe fear of how they will be forced to be more cost-competitive soon in the many-core market).
- Intel's claims to be pushing this as a standard. I think we can all agree that a standard is desirable, and would be appreciated by everyone (including Apple and all the other phone makers, watch makers, wireless headphone makers etc). The question is how honest Intel is about really pushing this as a standard. For every USB they've pushed, there's also been a Thunderbolt -- choices made (deliberately or foolishly) that have caused prices to be way higher than necessary and have prevented mass adoption.
I can see Intel management saying that no way ARM should get to benefit from their design effort and so making choices that mean that, for all their talk of standardization and multiple foundries, the reality is that in a year the ARM eco-system announces their own equivalent of EMIB, perhaps slightly less performant but WAY cheaper, more flexible, and a better match to standard ARM protocols.
Ian Cutress - Tuesday, August 22, 2017 - link
The S1 is like mobile, using package-on-package (PoP). PoP uses microbumps around an SoC to keep x/y dimensions on a PCB small and minimise the distance data has to travel; it's used extensively in mobile. PoP works when thermals are suitable, and is usually used for DRAM, but you're still limited by copper connections through a package. EMIB is different.
name99 - Tuesday, August 22, 2017 - link
How is it different? What I mean is, what does it enable that's different from existing PoP and similar tech?
I'm not trying to be bitchy here, I'm trying to get at what's supposedly new here. I've given my analysis (based, I'll admit, on limited knowledge) and I'm trying to get at what aspects of that analysis are and are not correct.
It sounds like what you are saying is that this
- is different from PoP because PoP ALWAYS involves only perimeter micro-bumps and this involves area micro-bumps (so allows for many more connections)
- is different from an interposer how? As far as I can tell, the only difference is that it's smaller than traditional interposers.
When Intel first announced this, *I* thought the big deal - the significant difference compared to PoP or interposers - was the provision of some level of logic (e.g. for routing or signal shaping/equalization/translation purposes) within the sliver of silicon. Hence the name Embedded Multi-Die Interconnect BRIDGE.
And that would be cute and likely valuable.
But they've said nothing in this presentation to sustain that interpretation.
extide - Tuesday, August 22, 2017 - link
EMIB is different because it can pack a LOT more connections into the same space. Essentially, the bumps are a LOT smaller. It's like using an interposer, except cheaper, because you only need those small EMIB dies where you need the high-density interconnects, instead of underneath the whole thing.
extide - Tuesday, August 22, 2017 - link
To expand on that: EMIB gets you the performance advantages of an interposer but is much cheaper, because you only need those little tiny dies instead of a giant interposer that is as big as the area of all of the dies on top. An interposer is essentially a partially processed die, so THAT is the reticle limit they are talking about. Giant interposer = expensive, yield issues, hard to manufacture, etc.
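A rough back-of-envelope sketch of that area gap (all die sizes, margins, and bridge dimensions below are assumptions for illustration, not Intel figures):

```python
# Silicon area needed for a full interposer vs. EMIB bridges.
# All numbers are illustrative assumptions, not Intel figures.

dies_mm2 = [600, 100, 100, 100]   # one big die plus three satellites (assumed)
RETICLE_LIMIT_MM2 = 858           # typical maximum lithography field size

# A silicon interposer must span the whole assembly, plus some
# margin for die spacing and routing.
interposer = sum(dies_mm2) * 1.15

# EMIB only needs a small bridge die under each die-to-die edge.
bridge_mm2 = 2 * 8                # ~2 mm x 8 mm per bridge (assumed)
emib_total = 3 * bridge_mm2       # one bridge per satellite die here

print(f"interposer: {interposer:.0f} mm^2 "
      f"(exceeds reticle limit: {interposer > RETICLE_LIMIT_MM2})")
print(f"EMIB bridges: {emib_total} mm^2 of silicon total")
```

Under these assumptions the interposer needs over 1,000 mm^2 of silicon (past the reticle limit, so it would need stitching or multiple exposures), while the bridges need about 48 mm^2 in total.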
edzieba - Wednesday, August 23, 2017 - link
On top of that, the EMIB bridges all have pads on only one face, so they can be built using a normal process. An interposer requires pads on both sides, which means extensive use of TSVs across a massive die area.
melgross - Tuesday, August 22, 2017 - link
Whatever the exact method, it's just a matter of trying to save money and lower costs. Chips are just getting too big, which means lower yields. I would add that, as I think you mentioned (without having to go back to verify it), this also helps thermals.
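On the yield point, a quick worked example with the first-order Poisson yield model (the defect density and die areas below are assumed, illustrative values):

```python
import math

# Poisson yield model: Y = exp(-A * D0), a standard first-order
# approximation. D0 and the die areas are assumed values.
D0 = 0.2  # defects per cm^2 (assumed)

def die_yield(area_mm2: float) -> float:
    """Expected fraction of defect-free dies for a given area."""
    return math.exp(-(area_mm2 / 100.0) * D0)

monolithic = 700.0            # one big die, mm^2
chiplet = monolithic / 4      # the same silicon split into four dies

print(f"{monolithic:.0f} mm^2 monolithic: {die_yield(monolithic):.0%} yield")
print(f"{chiplet:.0f} mm^2 chiplet:    {die_yield(chiplet):.0%} yield")
```

Roughly 25% vs. 70% per die under these assumptions - which is why known-good-die testing before packaging makes the multi-die approach pay off.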
ats - Tuesday, August 22, 2017 - link
Besides the fact that it is completely different...
S1/S2 are basically PoP designs where interconnect bandwidth isn't an issue and signalling density is extremely low.
The closest thing to EMIB is Si interposers, where you are going for extremely high interconnect bandwidths and densities. Si interposers have their limitations, though, requiring TSVs for external connectivity and having limits on package size based on reticle limits. EMIB gives all the advantages of Si interposers with basically none of the downsides.
The EMIB dies just need to bridge between two chips and can therefore be extremely small, and because you can use multiple EMIBs, you aren't constrained by things like reticle limits, allowing rather large-scale multi-die designs. In addition, external connections work just like in any other basic packaging technology and don't require TSVs (which add a lot of process steps and risk).
To put things in perspective, one EMIB connection has orders-of-magnitude higher interconnect bandwidth than the entire S1/S2 package has across all its connections. Each EMIB is capable of Tbps bandwidth.
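The arithmetic behind a Tbps-class figure is simple - aggregate bandwidth is wire count times per-wire rate (the pitch, edge length, and signalling rate below are assumptions, not published EMIB specs):

```python
# Aggregate bridge bandwidth = wires x per-wire data rate.
# All three inputs are assumed values for illustration.

wires_per_mm = 250     # dense microbump pitch, a few rows deep (assumed)
edge_mm = 8            # die edge length sitting over the bridge (assumed)
gbps_per_wire = 2      # per-wire signalling rate (assumed)

wires = wires_per_mm * edge_mm
tbps = wires * gbps_per_wire / 1000

print(f"{wires} wires x {gbps_per_wire} Gb/s = {tbps:.1f} Tb/s aggregate")
```

Even these conservative numbers give 4 Tb/s across a single bridge, far beyond what a PoP perimeter can carry.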
TeaMat - Tuesday, August 22, 2017 - link
Pfft, this is just glued together ;)
Lieutenant Tofu - Wednesday, August 23, 2017 - link
Wait, Intel is planning on... GLUING TOGETHER multiple IP blocks?! ;)
Smell This - Wednesday, August 23, 2017 - link
So...
I don't want to incorrectly label or falsely characterize this, but is Intel "Stratix" the continuation/integration, or 2017 modernization, of Altera IP (which Intel acquired in 2015)... as a response to Advanced Micro Devices' decade-long quest for '3D-IC' and bolt-on fusion of disparate technologies??
If this is correct, I guess imitation (in efforts to exert market heft) is truly the greatest form of flattery ...
patrickjp93 - Friday, August 25, 2017 - link
Except Intel has completely overhauled Altera's designs. There's practically nothing in common with the old designs. Block- and cell-level programming, not to mention transform times and overall performance/watt, have gone from incredible to a scale Xilinx is in no position to compete with.
kfishy - Saturday, August 26, 2017 - link
Stratix 10 was developed long before the Intel acquisition. Intel FPGA is essentially still clearing the pipes of old Altera designs.
patrickjp93 - Monday, September 4, 2017 - link
Hah! No. Altera never had integration this broad, and as for the raw design of the FPGA itself, it's in a completely different league than before thanks to Intel's design libraries. The caching structure is completely different too.
No, it may not be a 100% Intel design, but it is very far removed from Altera already.
Ian Cutress - Friday, August 25, 2017 - link
Android Password Breaker hacking tutorials hacking ebooks hacking news hacking tools android technology https://myhacker.netSaolDan - Friday, August 25, 2017 - link
did u just get hacked?
Santoval - Saturday, August 26, 2017 - link
So Intel will move to "glued" dies, just as AMD did, but - in their analogy - due to EMIB they will instead use... super-glue?
Santoval - Saturday, August 26, 2017 - link
p.s. Unless I missed something and they intend to use "super-glued" dies only for their FPGAs. But I don't think they developed EMIB solely for them, did they?patrickjp93 - Monday, September 4, 2017 - link
No. See Kaby Lake G.