"Being a socket maker, TE indicates that key features of Socket P4 and Socket P5 are the same: they have the same pin count, the same 0.9906 mm hex pitch, the same SP height of 2.7 mm, and the same mounting mechanisms."
Leading in with the words "being a socket maker" is unnecessary and doesn't add anything but word count to an already lengthy sentence.
Likely one socket is for Cooper Lake and the other for Ice Lake. Both SKUs will are mutidie but Cooper reporderly will be three dies (two compute, one I/O) Ice Lake could be four dies (three compute, one I/O) AKA 26x3=78 cores instead of 56. The Ice Lake large die is 26 cores so the 3x configuration is almost sure because only 52 cores could be useless against Epyc in some workloads. Another secret are the thermals but a 300W Cooper is likely dropping the base speed and the all core turbo speed versus actual AP line. About Ice Lake there is a old slide saying 230W but all depend on how many 300mm2 10nm dies Intel will put together.
Considering that Ice and Copper lake are advertised as part of the same platform (Whitley), the LGA4189-5 variant could be intended for the DDR5/PCIe5 successor (Sapphire/Granite Rapids)
If that was the case the socket would not be apparently identical. Intel are not going to switch to DDR5 before their Tiger Lake Xeon series (to be released in the holiday season of 2020 or in Q1 2021), which is the next Xeon platform after Whitley. Apparently that would also be the platform to introduce PCIe 5.0, since as we know PCIe 4.0 is going to be very short lived.
*If*, and that's still a big "if" they manage to produce Tiger Lake desktop CPUs at an acceptable yield and clock frequency and launch them some time in Q3 or Q4 of 2020 (their Ice Lake desktop CPUs were canned) they might switch them to DDR5 first. As things stand now, that might not happen : Ice Lake desktop was replaced by Comet Lake and Tiger Lake desktop will probably be replaced by Rocket Lake.
More likely in mid-late 2020 to early 2021 we will have a repeat of the same : 4-core (no more) Tiger Lake-U/Y parts and later in 2021 Tiger Lake Xeons, with no 10nm fabbed desktop CPUs. Unless Intel fix their shit, that is.
>If that was the case the socket would not be apparently identical.
Why not? These things tend to have a fair number of reserved/unused pins. They don't need to physically change the socket, other than re-keying it to prevent customer support nightmares.
Unlike the consumer world where Intel has learned its lesson with LGA775 a decade+ ago and the ongoing repeat of that compatibility mess that AMD is doing with their current mainstream sockets; Intel is willing to reuse sockets for mutually incompatible systems in the workstationish/server markets. LGA2011 went through three such revisions before being replaced with LGA2066.
Yes. Cooper and Ice will share a socket that is for 2S platform. For 4S platform, it will be Cooper only. LGA4189-4 and LGA4189-5 are for 2S and 4S platform separately. From Huawei's leaked roadmap, it shows that 2S and 4S platform are different. https://www.tomshardware.com/news/intel-server-ddr...
That's a useful leak! It implies that Cooper Lake is a 6-channel/6-UPI chip, and that Intel will sell a "48 core" multi-die variant of Cooper Lake for 2S configs to better compete with AMD's Epyc Rome. That being said, the multi-die setup will have only 8 of the 12 memory controllers active, which would help Intel differentiate pricing. And the 4S Cooper Lake setup will probably have more PCIe lanes than the dual-die 2S Cooper Lake setup.
1) Skylake and Cascade Lake have 28 cores (6*5 mesh minus 2 for memory controllers). I'd be shocked if Cooper Lake isn't the same mesh layout. 2) I've seen rumors that Ice Lake has 34 cores (6*6 mesh minus 2 for memory controllers). This seems conservative and realistic given the die shrink to 10nm. That being said, a 6*7 mesh minus 2 for memory controllers would also be reasonable given the die shrink, so either Intel has dramatically increased the core count for Ice Lake, or caches are way bigger, or they're being conservative due to yields.
The other way is to put the pins on the CPU. The mid way was slots but that won't scale to that many contacts. Guess which way is easier? And look at Threadripper mounting problems with their cartridge system.
Well, the pins used to be on the CPU but that arrangement was swapped around because a broken pin on a motherboard is less likely to happen and almost always cheaper to replace than the CPU.
If you're asking if there is some better way to physically link the CPU to the rest of the system, not that I've ever heard of. All those pins are necessary for all the links (memory, PCIe, etc.) that we expect for a modern system. We could, in theory, reduce the number of pins by making more connections serial but that's a whole different can of worms.
Yeah, I'm familiar with the history and necessity for so many pins. Just wondering if there is a better mechanical alternative to make the connection. I personally have never had a problem, but I've seen several people who have.
The only alternative I know is too use thousands of solder balls instead. That gives better electrical contacts; and as a bonus means the whole 2 generations per mobo issue is avoided because swaps are impossible.
Intel and IBM has explored replacing some of the copper links with optical which mechanically would be safer since the methods they use don't have any point that could bend. However, these companies have only shown off prototypes and there is a such a crisscross of patents that without any sort of mutual licensing agreement, keeps this technology in their labs.
It's a natural consequence of more cores per CPU. The benefit is that you need fewer discrete CPUs to achieve a given core count and that means fewer sockets and motherboard traces between them. So, if you need a certain number of cores, increased integration actually provides a cost savings.
A good question whether server CPUs will top out at 8-channel memory and 130-lane PCIe, or just keep on going. A lot of that probably has to do with efficiency-scaling beyond 64 cores. At some point, the interconnect power & maintaining cache coherency starts to dominate power dissipation.
Cranking bitrate burns up power and transistor budget. Keeping it low means more pins (at significant cost). Which solution is better is mostly dictated by the economics at the time and what workload the system is tuned for.
"As discovered previously, Intel’s next-generation Xeon Scalable processors in LGA4189 packaging will feature a native eight-channel memory controller and PCIe 4.0, with at least eight channel memory on standard configurations." You link only has information about Ice Lake Xeon CPUs, not Cooper Lake ones. That paragraph -and your entire article basically- makes it sound like Cooper Lake Xeons will also support PCIe 4.0 (and 8-channel memory?) but I doubt that's the case. It rather looks like that Cooper Lake Xeons will be used on a new platform that fully supports PCIe 4.0 but the CPUs themselves will only have a PCIe 3.0 controller. So while moving to Ice Lake Xeons (and taking advantage of PCIe 4.0) later on would be an easy switch, since they share the same platform, Cooper Lake Xeons will only work in PCIe 3.0 mode. Which makes the point of Cooper Lake even more questionable, unless Ice Lake Xeons are going to be released in 2H 2020.
My initial guess as to what there are two LGA4189 sockets would be the same reason why the Purley platform has two different socket at launch: on package fabric. Two more Purley sockets were released later for the variant that had an on-package FPGA and another for the models with 64 PCIe lanes to the motherboard.
However, Intel has pretty much killed off Omnipath recently which would negate the reason for the second socket type as Purley had.
Chaitanya - Monday, September 2, 2019 - link
In the last paragraph, the word "sufficient" is out of place.
PeachNCream - Tuesday, September 3, 2019 - link
Also.."Being a socket maker, TE indicates that key features of Socket P4 and Socket P5 are the same: they have the same pin count, the same 0.9906 mm hex pitch, the same SP height of 2.7 mm, and the same mounting mechanisms."
Leading in with the words "being a socket maker" is unnecessary and doesn't add anything but word count to an already lengthy sentence.
Gondalf - Monday, September 2, 2019 - link
Likely one socket is for Cooper Lake and the other for Ice Lake. Both SKUs will be multi-die, but Cooper reportedly will be three dies (two compute, one I/O) while Ice Lake could be four dies (three compute, one I/O), i.e. 26x3=78 cores instead of 56. The Ice Lake large die is 26 cores, so the 3x configuration is almost certain, because only 52 cores could be useless against Epyc in some workloads.
Another unknown is the thermals, but a 300W Cooper is likely dropping the base speed and the all-core turbo speed versus the current AP line. About Ice Lake there is an old slide saying 230W, but it all depends on how many ~300mm2 10nm dies Intel will put together.
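A quick sketch of the core-count arithmetic in this comment; every number here is the rumor above, not a confirmed spec:

```python
# Back-of-the-envelope check of the rumored die configurations above.
# All die counts and per-die core counts are the commenter's speculation,
# not confirmed Intel specs.
ICE_LAKE_CORES_PER_DIE = 26  # rumored large-die core count

for compute_dies in (2, 3):
    total = compute_dies * ICE_LAKE_CORES_PER_DIE
    print(f"{compute_dies} compute dies -> {total} cores")
# 2 compute dies -> 52 cores (the "could be useless against Epyc" case)
# 3 compute dies -> 78 cores (the 26x3=78 figure above)
```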
Ian Cutress - Monday, September 2, 2019 - link
Intel has already stated that Cooper and Ice will share a socket and be upgradeable.
Arsenica - Monday, September 2, 2019 - link
Considering that Ice and Cooper Lake are advertised as part of the same platform (Whitley), the LGA4189-5 variant could be intended for the DDR5/PCIe5 successor (Sapphire/Granite Rapids).
Santoval - Monday, September 2, 2019 - link
If that were the case the socket would not be apparently identical. Intel is not going to switch to DDR5 before their Tiger Lake Xeon series (to be released in the holiday season of 2020 or in Q1 2021), which is the next Xeon platform after Whitley. Apparently that would also be the platform to introduce PCIe 5.0, since as we know PCIe 4.0 is going to be very short lived.
*If*, and that's still a big "if", they manage to produce Tiger Lake desktop CPUs at an acceptable yield and clock frequency and launch them some time in Q3 or Q4 of 2020 (their Ice Lake desktop CPUs were canned), they might switch them to DDR5 first. As things stand now, that might not happen: Ice Lake desktop was replaced by Comet Lake and Tiger Lake desktop will probably be replaced by Rocket Lake.
More likely in mid-late 2020 to early 2021 we will have a repeat of the same: 4-core (no more) Tiger Lake-U/Y parts and later in 2021 Tiger Lake Xeons, with no 10nm-fabbed desktop CPUs. Unless Intel fix their shit, that is.
III-V - Tuesday, September 3, 2019 - link
>If that were the case the socket would not be apparently identical.
Why not? These things tend to have a fair number of reserved/unused pins. They don't need to physically change the socket, other than re-keying it to prevent customer support nightmares.
Kevin G - Tuesday, September 3, 2019 - link
Example: see the three different variants of LGA 2011, each of which had different memory technologies.
DanNeely - Tuesday, September 3, 2019 - link
Unlike in the consumer world, where Intel learned its lesson with LGA775 a decade-plus ago (a compatibility mess AMD is now repeating with its current mainstream sockets), Intel is willing to reuse sockets for mutually incompatible systems in the workstation/server markets. LGA2011 went through three such revisions before being replaced with LGA2066.
Arsenica - Tuesday, September 3, 2019 - link
They are not identical. LGA4189-4 has a key 28mm away from the center of the package while LGA4189-5 has it 25mm away from the center.
JJWu - Tuesday, September 3, 2019 - link
Yes. Cooper and Ice will share a socket that is for the 2S platform. For the 4S platform, it will be Cooper only. LGA4189-4 and LGA4189-5 are for the 2S and 4S platforms respectively. Huawei's leaked roadmap shows that the 2S and 4S platforms are different.
https://www.tomshardware.com/news/intel-server-ddr...
Elstar - Friday, September 6, 2019 - link
That's a useful leak! It implies that Cooper Lake is a 6-channel/6-UPI chip, and that Intel will sell a "48 core" multi-die variant of Cooper Lake for 2S configs to better compete with AMD's Epyc Rome. That being said, the multi-die setup will have only 8 of the 12 memory controllers active, which would help Intel differentiate pricing. And the 4S Cooper Lake setup will probably have more PCIe lanes than the dual-die 2S Cooper Lake setup.
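Rough numbers behind that reading of the leak; the per-die channel count and the 24-core die are the commenter's inference, not published specs:

```python
# Speculative arithmetic for a dual-die "48 core" 2S Cooper Lake built
# from two 6-channel/24-core dies. Nothing here is confirmed by Intel.
dies = 2
cores_per_die = 24      # 2 x 24 = the rumored "48 core" part
channels_per_die = 6    # if Cooper Lake is a 6-channel chip

total_channels = dies * channels_per_die  # 12 controllers on package
active_channels = 8                       # per the 8-channel platform above
print(f"{dies * cores_per_die} cores, "
      f"{active_channels} of {total_channels} memory channels active")
```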
lefty2 - Monday, September 2, 2019 - link
> Both SKUs will be multi-die ...
Yeah? Provide a link with your source.
ilt24 - Tuesday, September 3, 2019 - link
@Gondalf ... "The Ice Lake large die is 26 cores"
Are you sure about that? It seems odd that they would lower their core count when they are moving to a smaller process.
Elstar - Friday, September 6, 2019 - link
1) Skylake and Cascade Lake have 28 cores (6*5 mesh minus 2 for memory controllers). I'd be shocked if Cooper Lake isn't the same mesh layout.
2) I've seen rumors that Ice Lake has 34 cores (6*6 mesh minus 2 for memory controllers). This seems conservative and realistic given the die shrink to 10nm. That being said, a 6*7 mesh minus 2 for memory controllers would also be reasonable given the die shrink, so either Intel has dramatically increased the core count for Ice Lake, or caches are way bigger, or they're being conservative due to yields.
In short, 26 cores seems like a typo.
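The mesh arithmetic from that comment, written out; the grids beyond 6*5 are rumors, and the two tiles given over to memory controllers follow the known Skylake-SP layout:

```python
# Cores on an Intel-style mesh: rows x columns minus the tiles used
# for memory controllers. Grid sizes other than 6x5 are rumors.
def mesh_cores(rows: int, cols: int, imc_tiles: int = 2) -> int:
    return rows * cols - imc_tiles

print(mesh_cores(6, 5))  # 28 -> Skylake-SP / Cascade Lake
print(mesh_cores(6, 6))  # 34 -> the Ice Lake rumor
print(mesh_cores(6, 7))  # 40 -> the more aggressive possibility
```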
quorm - Monday, September 2, 2019 - link
Are these kinds of sockets with thousands of fragile pins on the motherboard ever going away? Is there really no better way to do it?
ipkh - Monday, September 2, 2019 - link
The other way is to put the pins on the CPU. The middle way was slots, but that won't scale to this many contacts. Guess which way is easier?
And look at Threadripper's mounting problems with its cartridge system.
jordanclock - Monday, September 2, 2019 - link
Well, the pins used to be on the CPU but that arrangement was swapped around because a broken pin on a motherboard is less likely to happen and almost always cheaper to replace than the CPU.
If you're asking if there is some better way to physically link the CPU to the rest of the system, not that I've ever heard of. All those pins are necessary for all the links (memory, PCIe, etc.) that we expect for a modern system. We could, in theory, reduce the number of pins by making more connections serial but that's a whole different can of worms.
quorm - Monday, September 2, 2019 - link
Yeah, I'm familiar with the history and necessity for so many pins. Just wondering if there is a better mechanical alternative to make the connection. I personally have never had a problem, but I've seen several people who have.
DanNeely - Monday, September 2, 2019 - link
The only alternative I know of is to use thousands of solder balls instead. That gives better electrical contacts, and as a bonus the whole two-generations-per-mobo issue is avoided because swaps are impossible.
Kevin G - Tuesday, September 3, 2019 - link
Intel and IBM have explored replacing some of the copper links with optical, which mechanically would be safer since the methods they use don't have any point that could bend. However, these companies have only shown off prototypes, and there is such a crisscross of patents that, without some sort of mutual licensing agreement, the technology stays in their labs.
mode_13h - Monday, September 2, 2019 - link
It's a natural consequence of more cores per CPU. The benefit is that you need fewer discrete CPUs to achieve a given core count, and that means fewer sockets and motherboard traces between them. So, if you need a certain number of cores, increased integration actually provides a cost savings.
A good question is whether server CPUs will top out at 8-channel memory and 130-lane PCIe, or just keep on going. A lot of that probably has to do with efficiency scaling beyond 64 cores. At some point, the interconnect power and maintaining cache coherency start to dominate power dissipation.
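A trivial illustration of the integration argument; the target core count, the per-socket options, and the fully connected topology are all invented for the example:

```python
# Illustration only: more cores per socket means fewer sockets and fewer
# inter-socket links for a fixed core budget. All numbers are made up.
import math

target_cores = 256
for cores_per_socket in (28, 48, 64):
    sockets = math.ceil(target_cores / cores_per_socket)
    links = sockets * (sockets - 1) // 2  # assumes fully connected sockets
    print(f"{cores_per_socket} cores/socket: {sockets} sockets, "
          f"{links} socket-to-socket links")
```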
PixyMisa - Monday, September 2, 2019 - link
IBM is using 25Gbps memory interfaces on their new Power 9 chips, so they need far fewer pins for the same memory bandwidth.
So we're probably stuck with pins, but we could reduce the number significantly if we run the signals faster.
willis936 - Monday, September 2, 2019 - link
TNSTAAFL.
Cranking bitrate burns up power and transistor budget. Keeping it low means more pins (at significant cost). Which solution is better is mostly dictated by the economics at the time and what workload the system is tuned for.
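The tradeoff in rough numbers; the bandwidth target and per-pin rates are arbitrary examples, and this ignores differential pairs and power/ground pins:

```python
# For a fixed aggregate bandwidth, raising the per-pin bitrate cuts the
# pin count but costs SerDes power. All figures are arbitrary examples,
# ignoring differential pairs and power/ground pins.
target_gbps = 400  # aggregate off-package bandwidth wanted

for gbps_per_pin in (3.2, 8.0, 25.0):  # DDR4-ish, PCIe 3-ish, 25G SerDes
    pins = round(target_gbps / gbps_per_pin)
    print(f"{gbps_per_pin:>4} Gbps/pin -> ~{pins} data pins")
```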
stephenbrooks - Monday, September 2, 2019 - link
0.9906 mm hex pitch = 39 mils exactly (39 x 0.0254 mm = 0.9906 mm).
Santoval - Monday, September 2, 2019 - link
"As discovered previously, Intel’s next-generation Xeon Scalable processors in LGA4189 packaging will feature a native eight-channel memory controller and PCIe 4.0, with at least eight channel memory on standard configurations."You link only has information about Ice Lake Xeon CPUs, not Cooper Lake ones. That paragraph -and your entire article basically- makes it sound like Cooper Lake Xeons will also support PCIe 4.0 (and 8-channel memory?) but I doubt that's the case.
It rather looks like Cooper Lake Xeons will be used on a new platform that fully supports PCIe 4.0, but the CPUs themselves will only have a PCIe 3.0 controller. So while moving to Ice Lake Xeons (and taking advantage of PCIe 4.0) later on would be an easy switch, since they share the same platform, Cooper Lake Xeons will only work in PCIe 3.0 mode. Which makes the point of Cooper Lake even more questionable, unless Ice Lake Xeons are going to be released in 2H 2020.
Kevin G - Tuesday, September 3, 2019 - link
Cooper Lake has a handful of instructions not found on Ice Lake that will be useful for the AI/machine learning crowd.
It also doesn't hurt Intel to have a plan B available if they run into 10 nm supply issues.
dairyAT - Monday, September 2, 2019 - link
IN B4 software companies start pricing licenses per pin.
Xyler94 - Wednesday, September 4, 2019 - link
Don't give them ideas!
MDD1963 - Tuesday, September 3, 2019 - link
The 2nd paragraph references both LGA4189-4 and LGA3189-5... is that accurate, or a typo?
Kevin G - Tuesday, September 3, 2019 - link
My initial guess as to why there are two LGA4189 sockets would be the same reason why the Purley platform had two different sockets at launch: on-package fabric. Two more Purley sockets were released later, one for the variant that had an on-package FPGA and another for the models with 64 PCIe lanes to the motherboard.
However, Intel has pretty much killed off Omni-Path recently, which would negate the reason for a second socket type that Purley had.