"energy is reused by the heat exchangers" What does this mean? Does it mean they just use the natural movement of the liquid from convection and avoid some pumps or do they actively harvest energy using some sort of peltier or such?
Could be, but power plants do a lot of work to generate hot gas to drive turbines, so recovering some power doesn't seem out of the question in a fully contained system like this.
I don't recall when IBM released it's first air-cooled full mainframe (3X0 class machines), but for the first 40 or 50 years of mainframe computing, liquid cooling was all there was. Whether it was total immersion, in some definition of 'total', I don't know. Back To The Future.
The first immersion-cooled system I know of was the Cray-2. Water inside pipes is a diffrent and more mundane solution than sticking electrical components into an inert fluid.
Data center power consumption is significantly impactful given the sheer number of racked systems people operate globally. It's great to see attempts to handle waste heat more efficiently, but the core problem is that modern civilization is broadly compelled to process vast (and every growing) amounts of information for a variety of reasons. Certainly, we can now handle individual chunks of data more efficiently than we could when people kept filing cabinets crammed to the brim with papers, but the problem is that as we've gotten more efficient, we have been storing and processing far more data to offset the advantages our technological systems provide without fully considering each processing or storage requirement and whether or not it really is necessary to begin with. I'm afraid the net gains from immersion cooling will be wiped out by dumping more power into more processing we would have otherwise not done at all which will continue to result in a net loss of quality of life for civilization as a whole for obvious reasons anyone can reach on their own.
So while this sound like a neat way to go about it, the fact that you still need to MOVE the water or the servers is a failure point, unless its some form of gravity fed system. I don't know, seems kinda like a product apple would make, self contained, so only they can know how to fix it and charge a lot for something that can be done cheaper.
"Low boiling point" - the liquid boils when it gets hot, taking the heat away in its gaseous state - the heat is then transferred into the exchanger at the top, the gas turns back to liquid and re-enters the pool.
No need for any active "moving" of water, just the need to make sure you don't create "pockets" in which gaseous coolant can get trapped instead of moving up to the exchanger at the top - actually a more difficult issue than it might first seem.
I don't suppose latent heat of vaporization is of any significance then, here? I kept thinking about water evap coolers that cooled through the sole property of water to require so much heat to vaporise.
It is. Latent heat of vaporization is relevant to all vaporization.
All the energy being carried away by the gas will be the vaporization energy(with the fluid as a whole sitting at the boiling point, much as a boiling pot of water won't go over 212 no matter how high the fire is). It just happens at a much lower threshold here than it does with water(exact temperature to be determined), so it will maintain a constant computer-safe temperature.
I wonder how close you get to a useful thing if you spin some rack units around into a drawer that's 36" tall instead of deep, take out some heatsinks and fans, roughen heatspreader surfaces, pick reasonable components, then fill the thing up and put the condenser setup on top?
I'd guess the answer is "not very close" but I'd be fascinated to know more about why.
Also curious what happened with ZTE saying they were looking into cheaper fluids than Novec in the earlier story. Guessing, again, not much; optimistic claims are cheap but fancy hydrocarbon liquids are (sadly) not.
"The idea is that because these units are a lot easier to manage, operational expenses will be severely reduced regardless."
I continue to wonder about that assertion. In the early days of the industrial revolution, millions of folks were made redundant by capital, thus gaining profit for the capitalist. Most production these days has so little labour in it, I don't see the basis for such an assertion. Just what is the dollar cost of labor per server per year?
Other than people, what are the "operational expenses" to be reduced? That is, have to be paid with conventional data centre infrastructure? I just don't see it. A solution in search of problem.
And, by the bye, the laws of thermodynamics demand that such an indirect method must be less efficient. Whether that inefficiency is compensated by other savings is another question.
Indirect? It is pretty much the most direct cooling you get. Liquid in contact with hot components vaporizes, bubble rises to surface, carrying heat. Vapor cools, condenses, falls back down into tank.
it's indirect since, finally, the heat absorbed liquid is cooled by air. the laws say that heat exchange is less than 100% at each interface. now, one could use a geothermal engine, which might be more globally efficient. and, there's all that additional electricity needed to move the liquid around. the ultimate radiators will just as much (the laws say more) air flow to dissipate the removed heat.
beyond the laws, though, is how 'operational expenses' are reduced. no explanation yet offered.
My brain hurts from trying to understand what you just typed... the dielectric fluid very very very likely boils at ~56°C. The condenser loop very very very likely runs water at or around 40-45°C, the then heated water can then be pumped through a liq-air hx that needs little airflow at relativly mild ambient temperatures of 20-30°C to complete the circuit of heat exchange. Energy transfers passively to the boiling heat sinks -> fluid through evaporation -> condenser through condensation -> water loop pumped -> liq-air hx -> environment. So the pump and potentially fans are added energy consumption devices but the pump can be sized in a manor to use little energy (compared to overall IT power) at the needed mass flow rates likewise with the fans.
The way this is advertised, it probably factors HVAC operating costs into conventional air cooling which simply dumps waste heat into the data center. The HVAC is then responsible for removal of said heat. Citing thermodymanics alone ignores the physical placement of the system and the method by which cool air is provided to said system along with a bunch of other factors that are more nuanced, unique to each data center, and unknown based on the information we have available.
20 Comments
prime2515103 - Tuesday, November 19, 2019 - link
When is the desktop version coming out?
The True Morbus - Tuesday, November 19, 2019 - link
If only they had gone AMD, they wouldn't need ridiculous cooling like this, and would be faster :P
The server market turns slooooowly.
Santoval - Tuesday, November 19, 2019 - link
I wonder if they will also provide an AMD Epyc version. If Intel is one of the partners they helped in the design I would assume they will not.
Santoval - Tuesday, November 19, 2019 - link
edit : "...*that* helped in the design..."valinor89 - Tuesday, November 19, 2019 - link
"energy is reused by the heat exchangers"What does this mean? Does it mean they just use the natural movement of the liquid from convection and avoid some pumps or do they actively harvest energy using some sort of peltier or such?
saratoga4 - Tuesday, November 19, 2019 - link
Typo for "removed by the heat exchangers" I think.surt - Tuesday, November 19, 2019 - link
Could be, but power plants do a lot of work to generate hot gas to drive turbines, so recovering some power doesn't seem out of the question in a fully contained system like this.
FunBunny2 - Tuesday, November 19, 2019 - link
I don't recall when IBM released its first air-cooled full mainframe (3X0 class machines), but for the first 40 or 50 years of mainframe computing, liquid cooling was all there was. Whether it was total immersion, in some definition of 'total', I don't know. Back To The Future.
Lord of the Bored - Wednesday, November 20, 2019 - link
The first immersion-cooled system I know of was the Cray-2. Water inside pipes is a different and more mundane solution than sticking electrical components into an inert fluid.
PeachNCream - Tuesday, November 19, 2019 - link
Data center power consumption is significantly impactful given the sheer number of racked systems people operate globally. It's great to see attempts to handle waste heat more efficiently, but the core problem is that modern civilization is broadly compelled to process vast (and ever-growing) amounts of information for a variety of reasons. Certainly, we can now handle individual chunks of data more efficiently than we could when people kept filing cabinets crammed to the brim with papers, but the problem is that as we've gotten more efficient, we have been storing and processing far more data to offset the advantages our technological systems provide, without fully considering each processing or storage requirement and whether or not it really is necessary to begin with. I'm afraid the net gains from immersion cooling will be wiped out by dumping more power into more processing we would otherwise not have done at all, which will continue to result in a net loss of quality of life for civilization as a whole, for obvious reasons anyone can reach on their own.
imaheadcase - Tuesday, November 19, 2019 - link
So while this sounds like a neat way to go about it, the fact that you still need to MOVE the water or the servers is a failure point, unless it's some form of gravity-fed system. I don't know, seems kinda like a product Apple would make: self-contained, so only they know how to fix it and can charge a lot for something that can be done cheaper.
Father Time - Tuesday, November 19, 2019 - link
"Low boiling point" - the liquid boils when it gets hot, taking the heat away in its gaseous state - the heat is then transferred into the exchanger at the top, the gas turns back to liquid and re-enters the pool.No need for any active "moving" of water, just the need to make sure you don't create "pockets" in which gaseous coolant can get trapped instead of moving up to the exchanger at the top - actually a more difficult issue than it might first seem.
ads295 - Wednesday, November 20, 2019 - link
I don't suppose latent heat of vaporization is of any significance then, here? I kept thinking about water evap coolers that cool solely through water's property of requiring so much heat to vaporise.
Lord of the Bored - Wednesday, November 20, 2019 - link
It is. Latent heat of vaporization is relevant to all vaporization.
All the energy being carried away by the gas will be the vaporization energy (with the fluid as a whole sitting at the boiling point, much as a boiling pot of water won't go over 212°F no matter how high the fire is).
It just happens at a much lower threshold here than it does with water (exact temperature to be determined), so it will maintain a constant computer-safe temperature.
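Back-of-the-envelope, as a sketch only (the 88 kJ/kg latent heat is an assumed value for an FC-72-class fluid, not a figure from the article):

    # All the heat leaves the components as latent heat: m_dot = Q / h_fg
    q_node = 1_000.0          # W, assumed heat output of one server node
    h_fg_coolant = 88_000.0   # J/kg, assumed latent heat, FC-72-class fluid
    h_fg_water = 2_257_000.0  # J/kg, water at 100 deg C, for comparison

    m_dot_coolant = q_node / h_fg_coolant   # kg/s of vapor produced
    m_dot_water = q_node / h_fg_water

    print(f"Coolant boiled off: {m_dot_coolant * 1000:.1f} g/s per kW")      # ~11.4 g/s
    print(f"Water would only boil off {m_dot_water * 1000:.2f} g/s per kW")  # ~0.44 g/s
    # Either fluid sits pinned at its boiling point; the engineered fluid just
    # pins the pool at a computer-safe temperature instead of 100 deg C.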
twotwotwo - Tuesday, November 19, 2019 - link
I wonder how close you get to a useful thing if you spin some rack units around into a drawer that's 36" tall instead of deep, take out some heatsinks and fans, roughen heatspreader surfaces, pick reasonable components, then fill the thing up and put the condenser setup on top?
I'd guess the answer is "not very close" but I'd be fascinated to know more about why.
Also curious what happened with ZTE saying they were looking into cheaper fluids than Novec in the earlier story. Guessing, again, not much; optimistic claims are cheap but fancy fluorinated liquids are (sadly) not.
FunBunny2 - Tuesday, November 19, 2019 - link
"The idea is that because these units are a lot easier to manage, operational expenses will be severely reduced regardless."I continue to wonder about that assertion. In the early days of the industrial revolution, millions of folks were made redundant by capital, thus gaining profit for the capitalist. Most production these days has so little labour in it, I don't see the basis for such an assertion. Just what is the dollar cost of labor per server per year?
Other than people, what are the "operational expenses" to be reduced? That is, what has to be paid for with conventional data centre infrastructure? I just don't see it. A solution in search of a problem.
And, by the bye, the laws of thermodynamics demand that such an indirect method must be less efficient. Whether that inefficiency is compensated by other savings is another question.
Lord of the Bored - Wednesday, November 20, 2019 - link
Indirect? It is pretty much the most direct cooling you get.
Liquid in contact with hot components vaporizes, bubble rises to surface, carrying heat. Vapor cools, condenses, falls back down into tank.
FunBunny2 - Wednesday, November 20, 2019 - link
it's indirect since, finally, the heat-absorbing liquid is cooled by air. the laws say that heat exchange is less than 100% at each interface. now, one could use a geothermal engine, which might be more globally efficient. and, there's all that additional electricity needed to move the liquid around. the ultimate radiators will need just as much (the laws say more) air flow to dissipate the removed heat.
beyond the laws, though, is how 'operational expenses' are reduced. no explanation yet offered.
destorofall - Wednesday, November 20, 2019 - link
My brain hurts from trying to understand what you just typed... the dielectric fluid very very very likely boils at ~56°C. The condenser loop very very very likely runs water at or around 40-45°C; the heated water can then be pumped through a liq-air hx that needs little airflow at relatively mild ambient temperatures of 20-30°C to complete the circuit of heat exchange. Energy transfers passively from the boiling heat sinks -> fluid through evaporation -> condenser through condensation -> water loop pumped -> liq-air hx -> environment. So the pump and potentially fans are added energy consumption devices, but the pump can be sized in a manner to use little energy (compared to overall IT power) at the needed mass flow rates, likewise with the fans.
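To put rough numbers on the water side of that chain (a sketch only; the 100 kW tank load, 150 kPa loop pressure drop, and 70% pump efficiency are assumptions, the 40-45°C figures are from above):

    q_tank = 100_000.0   # W, assumed IT load on one tank
    cp_water = 4_186.0   # J/(kg*K), specific heat of water
    t_in, t_out = 40.0, 45.0   # deg C, condenser water in/out

    # Mass flow needed to carry the heat: m_dot = Q / (cp * dT)
    m_dot = q_tank / (cp_water * (t_out - t_in))   # ~4.8 kg/s
    vol_flow = m_dot / 1000.0                      # m^3/s, water ~1000 kg/m^3

    # Hydraulic pump power: P = dP * V_dot / efficiency
    dp_loop = 150_000.0   # Pa, assumed pressure drop around the loop
    pump_eff = 0.7        # assumed pump efficiency
    p_pump = dp_loop * vol_flow / pump_eff

    print(f"Water flow: {m_dot:.1f} kg/s ({vol_flow * 60_000:.0f} L/min)")
    print(f"Pump power: {p_pump:.0f} W, about {100 * p_pump / q_tank:.1f}% of the IT load")

Roughly 1% of IT power for the pump under these assumptions, which is why sizing it to "use little energy" compared to the IT load seems plausible.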
PeachNCream - Thursday, November 21, 2019 - link
The way this is advertised, it probably factors HVAC operating costs into conventional air cooling, which simply dumps waste heat into the data center. The HVAC is then responsible for removal of said heat. Citing thermodynamics alone ignores the physical placement of the system and the method by which cool air is provided to said system, along with a bunch of other factors that are more nuanced, unique to each data center, and unknown based on the information we have available.