54 Comments
deil - Wednesday, September 18, 2019 - link
Can we get an image comment? That "offer an 1U solution featuring eight of these CPUs, all liquid cooled." deserves
the Jurassic park glasses gif....
~2500 W per blade, how to even cool that beast....
drexnx - Wednesday, September 18, 2019 - link
...with the aforementioned liquid?
If you're talking about the liquid cooling, a water/water heat exchanger to a secondary common loop and that to an outside cooling tower/chiller.
google image search "ibm water cooling specification" and the first result will be a nice diagram
ACE76 - Thursday, September 19, 2019 - link
That makes more sense...kinda misleading the way they framed it in the article though.
DanNeely - Wednesday, September 18, 2019 - link
drexnx gave a pointer to an existing implementation. The only thing I'd like to add is that this isn't anything new outside of starting to migrate toward commodity-ish systems. Enterprise scale watercooling has been used in some mainframe and super computer applications for something like half a century.
FunBunny2 - Wednesday, September 18, 2019 - link
"used in some mainframe and super computer applications for something like half a century."to be more specific, liquid cooling came first, then air cooling. in the IBM case, it was the smaller, nearly mini-computer, systems that went air. pre-semiconductor mainframes had the chilled air blown over the tubes.
rahvin - Wednesday, September 18, 2019 - link
Not only is it not new, they've had this cooling infrastructure since the first mainframes in the 60's that were water cooled. I can remember a tour of a university in grade school where they showed us the now decommissioned mainframe from (IIRC) '62 that had water cooled CPUs hooked up to water chillers on the roof of the building.
This is actually coming back into popularity because it can be quite a bit cheaper to run than the standard hot/cold aisles in a traditional data center, and it allows higher density because water can move way more heat than air.
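(For a rough sense of "way more heat than air": the back-of-envelope Python sketch below uses textbook values for density and specific heat to compare how much heat a given volume of water versus air can carry per degree of temperature rise. Real data-center figures depend on flow rates and allowable temperature deltas, so treat it as an illustration only.)

```python
# Back-of-envelope: heat carried per cubic metre of coolant per kelvin of
# temperature rise (volumetric heat capacity = density * specific heat).
# Textbook values near room temperature; real systems will differ.

WATER_DENSITY = 997.0    # kg/m^3
WATER_CP      = 4186.0   # J/(kg*K)
AIR_DENSITY   = 1.2      # kg/m^3
AIR_CP        = 1005.0   # J/(kg*K)

water_vol_capacity = WATER_DENSITY * WATER_CP  # ~4.2 MJ/(m^3*K)
air_vol_capacity   = AIR_DENSITY * AIR_CP      # ~1.2 kJ/(m^3*K)

print(f"Water: {water_vol_capacity / 1e6:.2f} MJ per m^3 per K")
print(f"Air:   {air_vol_capacity / 1e3:.2f} kJ per m^3 per K")
print(f"Ratio: ~{water_vol_capacity / air_vol_capacity:.0f}x more heat per unit volume")
```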
peevee - Thursday, September 19, 2019 - link
Where do you see that "offer an 1U solution featuring eight of these CPUs"? Something not quite believable...
A5 - Wednesday, September 18, 2019 - link
16k cores/32k threads per rack is really impressive.
dcl88 - Wednesday, September 18, 2019 - link
That's at least 2M *just* in CPUs.
ACE76 - Thursday, September 19, 2019 - link
They get big volume discounts.
web2dot0 - Wednesday, September 18, 2019 - link
1U (8 Rome CPUs) = 512 cores / 1,024 threads (~2,500 W)
42U per rack = 21k cores / 42k threads (94 kW)
Probably need 200Gbps Infiniband to hook them up too.
Probably not realistic in terms of power density, but it's fun to just write about it.
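(Just to make the arithmetic explicit, here is a quick Python sketch under the assumptions used in this thread: 64 cores / 128 threads and 280 W TDP per socket, 8 sockets per 1U blade, and a hypothetical 42U of pure compute per rack. The article itself only claims 32 blades per rack, so treat the 42-blade figure as the thought experiment it is.)

```python
# Rack-density arithmetic using the assumptions discussed in this thread,
# not vendor specifications.

CORES_PER_CPU   = 64
THREADS_PER_CPU = 128
TDP_PER_CPU_W   = 280   # EPYC 7H12 TDP
CPUS_PER_1U     = 8     # per the Atos 1U blade mentioned in the article
BLADES_PER_RACK = 42    # hypothetical "all 42U is compute"; 32 is the claimed figure

cores   = CORES_PER_CPU * CPUS_PER_1U * BLADES_PER_RACK
threads = THREADS_PER_CPU * CPUS_PER_1U * BLADES_PER_RACK
cpu_kw  = TDP_PER_CPU_W * CPUS_PER_1U * BLADES_PER_RACK / 1000

print(f"{cores:,} cores / {threads:,} threads per rack")  # 21,504 / 43,008
print(f"~{cpu_kw:.0f} kW of CPU TDP alone per rack")      # ~94 kW
```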
Supercell99 - Wednesday, September 18, 2019 - link
AOC and the demonuts have just stated that they are carbon capping all new CPU's starting in 2021 to 15W and 42U racks to 300W. All new datacenters must be carbon and toxic neutral.
JoeAceJR - Thursday, September 19, 2019 - link
Don't worry Intel will just lower the frequency like crazy.JoeAceJR - Thursday, September 19, 2019 - link
Supercell99, where did you find that information? I am no demoncrat but I really want to see that article that says they want to carbon cap processors. I tried googling it and found nothing.
ACE76 - Thursday, September 19, 2019 - link
They aren't carbon capping CPUs per se...they will be taxing companies that expend a lot of energy...so as a result, the companies themselves will take measures to lower the energy consumption...or they will invest in green energy that can be used to keep data centers operating at full tilt.
peevee - Thursday, September 19, 2019 - link
Humor alert.Manabu - Monday, September 23, 2019 - link
And the Coal lobby and repnuts just stated they are banning all the more energy efficient server offerings to force companies to spend more on energy and save their obsolete industry that is dying a natural death. See, I can create stupid fake news too.
MonkeyPaw - Wednesday, September 18, 2019 - link
“For a base frequency, the EPYC 7H12 will be set at 2.6 GHz, and a turbo frequency of 3.3 GHz. Compared to the EPYC 7742, that’s +350 MHz on base and -100 MHz on turbo, for an increase in +55W TDP.”
The chart currently shows a 100MHz turbo reduction for the 7H12. Me thinks something is wrong?
MonkeyPaw - Wednesday, September 18, 2019 - link
Edit: and by that, I mean, why the turbo drop but TDP increase? Higher base clock keeps it from boosting?
DanNeely - Wednesday, September 18, 2019 - link
I'm assuming that the customer who drove these has loads that make them more concerned about sustained performance at full load rather than short term peak performance on bursty loads. In which case AMD might be binning the chips for this based off of the clocks achieved by the slowest core on the chip, not the fastest one. (See yesterday's article on how AMD Turbo works if you don't understand why this is relevant.)
Freeb!rd - Wednesday, September 18, 2019 - link
TDPs are a funny thing and even more complex with the new 7nm chiplet CPUs with more sensors and logic to control boost frequencies.
https://www.anandtech.com/show/13124/the-amd-threa...
thomasg - Wednesday, September 18, 2019 - link
The 3.4 GHz boost clock will be only available for some cores.
The 7H12 will likely boost many more cores, even if not that high.
Thus singlethreaded-performance (which nobody cares about in that segment) will be slightly worse, but multithreaded-performance a lot better.
The reason for lowering the maximum clock might be to make sure that as many cores as possible can run at maximum clockspeed, which they might not, were it higher.
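(A toy Python sketch of that tradeoff. The clock figures are invented purely for illustration and are not AMD's boost tables; the point is just that aggregate throughput tracks the clock all cores can hold together, not the headline single-core boost.)

```python
# Toy model: aggregate throughput ~ sum of sustained per-core clocks.
# All clock figures below are invented for illustration, not AMD boost tables.

CORES = 64

def summary(single_core_boost_ghz, all_core_sustained_ghz):
    """Compare peak 1T clock with what all cores hold together at full load."""
    return {
        "1T peak (GHz)": single_core_boost_ghz,
        "all-core aggregate (core-GHz)": CORES * all_core_sustained_ghz,
    }

# Hypothetical 7742-like bin: higher single-core boost, lower sustained all-core clock.
print("7742-ish:", summary(3.4, 2.8))
# Hypothetical 7H12-like bin: slightly lower boost, higher sustained all-core clock.
print("7H12-ish:", summary(3.3, 3.0))
```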
peevee - Thursday, September 19, 2019 - link
As long as you consider "up to 11%" a lot.
Duncan Macdonald - Wednesday, September 18, 2019 - link
And a peak CPU power consumption of over 94kW in a 42U rack is also impressive (280W x 8 CPUs per 1U x 42U). With memory and peripherals and PSU losses, the total peak rack power will probably exceed 120kW.
peevee - Thursday, September 19, 2019 - link
Hard to believe, sorry. And the claim is only about 32 blades per rack (you need power, cooling, routing...), and I don't see 8 CPU/U anywhere... With 8ch of RAM/CPU it is not going to fit.
phoenix_rizzen - Tuesday, October 1, 2019 - link
It's listed right in the article that the OEM in question has a 1U blade that will support 8 CPUs, with 32 blades per rack.
phoenix_rizzen - Tuesday, October 1, 2019 - link
Last sentence of the second paragraph:
"One of AMD’s main partners, Atos, is set to offer an 1U solution featuring eight of these CPUs, all liquid cooled."
Gondalf - Wednesday, September 18, 2019 - link
No increase in turbo clock speed......very strange thing indeed. ST performance stays ~20% worse than Intel's offering. This is a limiting factor that severely shrinks the number of customers interested in the SKU. I suspect they utilized a manufacturing process pretty similar to the SoC process for phones, all this to lower the thermals and fit 64 cores in only 225W at the expense of the responsiveness of the SKUs to the demand of tasks.
This is the reason Xeon is selling like donuts right now. The whole Epyc 2 server line is without common sense up to 32 cores; only the 64 core SKUs are interesting for some applications.
vanilla_gorilla - Wednesday, September 18, 2019 - link
You're far too focused on single threaded performance on a server CPU. These are designed to be deployed by the hundreds for massively parallel workloads. What matters is total throughput.
FunBunny2 - Wednesday, September 18, 2019 - link
"by the hundreds for massively parallel workloads"outside of weather and nucular bombs, I doubt that there are more than a dozen of such workloads. be careful: multitasking and multiprocessing does not equal parallel. the former two were implemented in the early 60s in the mainframe world.
Joshua-Graham - Wednesday, September 18, 2019 - link
And you are correct - Weather and Nuclear simulations (and other science stuff like DNA sequencing). In the article AMD specifically said this is aimed at the HPC market and said they had other SKUs for traditional enterprises. HPC is a science-application-heavy market.
BeCurieUs - Wednesday, September 18, 2019 - link
Also, virtualization servers, right? Or does that put itself on single-core performance as well?
Supercell99 - Thursday, September 19, 2019 - link
you haven't seen multithread porn downloads on infiniband :D
jordanclock - Wednesday, September 18, 2019 - link
Gondalf, it's time for your meds. You're talking crazy again.
Korguz - Wednesday, September 18, 2019 - link
hstewart... is that you ???
Aephe - Wednesday, September 18, 2019 - link
He's not even trying to hide it...
Korguz - Wednesday, September 18, 2019 - link
nope.. not in the least.
Qasar - Wednesday, September 18, 2019 - link
hey Gondalf, whatever it is you are smoking, could i have some?? that must be some pretty good stuff you are on!!!
alufan - Wednesday, September 18, 2019 - link
really?
https://www.redsharknews.com/technology/item/6647-...
https://hexus.net/tech/news/systems/133871-gigabyt...
Am no AMD or Intel fanboi, I just want the best bang for my hard-earned cash. In a million years I would never need a quarter of the processing power of one of these CPUs, but I can't resist a good troll. Bear in mind AMD has been nowhere for the last, say, 10 years in CPUs; all of a sudden they are back and, in everything other than games, are royally kicking Intel's ass, just like Intel did with the Core 2 Duo, so swings and roundabouts. Mark the date, then see what market share AMD has grown in 6-12 months and tell me the same about doughnuts. BTW, doughnuts are cheap, disposable and very bad for your health, and learn to spell whilst you're at it.
Schmide - Wednesday, September 18, 2019 - link
Here's a fun exercise. Let the 56-core 400W Cascade Lake have the turbo frequency and let the 64-core 280W Rome have the base frequency. Before we even talk about IPC we get:
Rome: 2.6 x 64 / 280 = 0.59 GHz·cores/W
Cascade Lake: 3.8 x 56 / 400 = 0.53 GHz·cores/W
So in this idealized case Intel's IPC must be ~11% greater just to match perf per watt.
The above numbers are meaningless except to appease the single thread hounds in the server arena.
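(Checking Schmide's arithmetic with a few lines of Python, under the same simplifying assumptions as the comment: clock x cores / TDP as a stand-in for throughput per watt, ignoring IPC, memory, and actual sustained clocks.)

```python
# Schmide's back-of-envelope perf/W: (clock * cores) / TDP, ignoring IPC.
# Same simplifying assumptions as the comment above, nothing more.

def perf_per_watt(clock_ghz, cores, tdp_w):
    return clock_ghz * cores / tdp_w

rome         = perf_per_watt(2.6, 64, 280)   # EPYC 7H12 base clock, 280 W TDP
cascade_lake = perf_per_watt(3.8, 56, 400)   # 56-core Cascade Lake turbo clock, 400 W TDP

print(f"Rome:         {rome:.2f} GHz*cores/W")          # ~0.59
print(f"Cascade Lake: {cascade_lake:.2f} GHz*cores/W")  # ~0.53
print(f"IPC advantage Intel needs to break even: ~{(rome / cascade_lake - 1) * 100:.1f}%")  # ~11-12%
```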
prisonerX - Wednesday, September 18, 2019 - link
How's your Intel stock going, buddy?
Irata - Thursday, September 19, 2019 - link
I think servethehome very much disagrees with you.
twtech - Wednesday, September 18, 2019 - link
AMD isn't trying to use "leetspeak" in their model names now, are they?
Dragonstongue - Wednesday, September 18, 2019 - link
AMD on massive roll, just like me, 2019-2020 is massive world change or very much gearing up for it.
THE PROOF is what is taking place last few months alone, rocket speed style...things not ALL drop all at the same time for NO reason....nothing is that "random" to be that "unrelated"
not according the many years this Dragon has roamed the land.
Dragonstongue - Wednesday, September 18, 2019 - link
shame investors are NOT really giving them a very well earned and solid pat on the butt, so to speak.
i.e. pulled from the grave to run the market more or less completely for the next say 3-4 years .. in a roundabout way at the very least, as the big players (Amazon etc) have all jumped on board with AMD......they must all know we NEED to do this...whatever the THIS is....
I am very much <3 my Ryzen 3600 (keeps up with the 8700/9600 / 2700X / 3700X all day long, of course max 12 threads) .. got CPU-Z to back my claim ^.^ (going ~15% higher average results on my anything-BUT-stable BIOS lol... but the 3200 RAM runs even tighter @ 3600 and the CPU boosts for ms-level bursts up to 4.52 GHz). Not at all complaining about my choice to save myself some $$$$$$$$$ .. it could easily have been sold as a 3600X without a doubt..... maybe AMD wanted good faith in some of their line, no question???
as the others supposedly "not doing so well"
guess it pays to save money more oft than not
Ah ha
ACE76 - Thursday, September 19, 2019 - link
How the hell do you water cool 8 separate 64-core CPUs in a 1U enclosure!!
Xyler94 - Thursday, September 19, 2019 - link
Rather easily, actually. As long as there's room for the tubing, it's very easy to do... so long as your pump and radiators aren't inside the same enclosure, which is how these systems usually work: no pump/res/rads inside the 1U case. They've got quick disconnect fittings at the back, and transfer the water to a radiator array via powerful pumps. Remember Linus's Whole Room Watercooling project? Imagine that, on a different scale.
peevee - Thursday, September 19, 2019 - link
Racks in a meat freezer... Time to buy meat freezer stocks? ;)
itonamd - Thursday, September 19, 2019 - link
Maybe they should try dielectric liquid cooling like 3M Novec..
Arnulf - Friday, September 20, 2019 - link
"Dielectric liquid" ... you mean oil? Tried and tested.Zoolook13 - Friday, September 20, 2019 - link
In Sweden all the main cloud providers are building massive complexes and hooking them up to municipal district heating systems, selling the heat they create.
Quite a few are also built in the north of Sweden where the outside temperature is below freezing for 6 months/year; cheap cooling and selling the heat, combined with plenty of renewable energy, equals great operating costs.
mikegrok - Friday, September 20, 2019 - link
When you buy a system that will go into a data center, you often price the machine that you are going to install against 3 to 5 years of its operating cost.
When paying for power in the data center, you usually get charged about triple, because for every kW of power consumed, it may take 2 kW of power to chill the air back to its initial temperature. This is where the benefit of water cooling comes into play.
A water cooled CPU does not need the inlet water temperature to be the same as the air temperature. In fact, an inlet water temperature of 140°F is often acceptable. Cooling water to 140°F can be accomplished using radiators and pumps nearly anywhere in the world, without the need to "chill" the water below outside ambient temperature.
The above power reduction can drop the cost of running the CPU to 1/3, justifying installing a CPU and system that costs 3 times as much, because the budget for upkeep got moved to the hardware purchase price.
The building will probably have several water "chillers" available for water cooled CPUs.
Processors often have a target temperature to reach.
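(A small Python sketch of mikegrok's arithmetic under the assumptions stated above: roughly 3x effective power billing for air-cooled racks versus close to 1x for warm-water cooling. The electricity price and the extra ~10% allowance for pumps are invented for illustration; mikegrok's own correction below puts the justifiable hardware premium closer to double rather than triple.)

```python
# mikegrok's back-of-envelope: air cooling bills ~3x the IT load (1 kW of
# compute + ~2 kW of chilling), warm-water cooling bills close to ~1x.
# Electricity price and the pump overhead below are assumed for illustration.

IT_LOAD_KW     = 1.0        # per "unit" of compute, arbitrary
PRICE_PER_KWH  = 0.10       # USD, assumed
HOURS_PER_YEAR = 24 * 365
YEARS          = 4          # within the "3 to 5 years" lifetime from the comment

def lifetime_power_cost(effective_multiplier):
    """Total electricity cost over the lifetime for 1 kW of IT load."""
    return IT_LOAD_KW * effective_multiplier * PRICE_PER_KWH * HOURS_PER_YEAR * YEARS

air_cooled   = lifetime_power_cost(3.0)   # IT load + ~2x for chillers
water_cooled = lifetime_power_cost(1.1)   # IT load + a little for pumps (assumed)

print(f"Air-cooled lifetime power cost:   ${air_cooled:,.0f}")
print(f"Water-cooled lifetime power cost: ${water_cooled:,.0f}")
print(f"Savings available for pricier hardware: ${air_cooled - water_cooled:,.0f}")
```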
mikegrok - Friday, September 20, 2019 - link
The above power reduction can drop the cost of running the CPU to 1/3, justifying installing a CPU and system that costs nearly double, because the budget for upkeep got moved to the hardware purchase price.Ozymankos - Monday, September 23, 2019 - link
Finally we have 128-thread processors again on our blue planet Gaia. Let us count:
65 nm - 4 cores - Q6600
45 nm - 8 cores - Nehalem from Intel
32 nm - 16 cores - Opteron from AMD
22 nm - 32 cores - Zen from AMD
14 nm - 64 cores - Epyc from AMD
10 nm - 128 cores or threads - Epyc again
7 nm - 256 cores
Bear in mind that what is advertised as the 7 nanometer process is in reality 10 nanometers.