11 Comments

  • YukaKun - Monday, March 12, 2018 - link

    I think it would need a pool and some scuba-diving technicians.

    Singing "Under the Sea" might or might not be necessary.

    Cheers!
  • lilmoe - Monday, March 12, 2018 - link

    Maybe a heat sink/block on top of the CPU would allow for even more heat dissipation?
  • jtd871 - Monday, March 12, 2018 - link

    The IHS is already larger than the die. Hope they're using good TIM for the IHS...
  • DanNeely - Monday, March 12, 2018 - link

    If they're boiling the liquid, the surface needs to be hot enough to do so; a heat sink lowering the surface temps would be counterproductive.
  • Santoval - Monday, March 12, 2018 - link

    I highly doubt combining passive cooling with immersion cooling would provide any benefit. The results might even be worse, since you'd be inserting an unnecessary middleman that was specifically designed to dissipate heat into air, not liquid. Besides, you would also defeat one of the main benefits of immersion cooling: the possibility of very compact/dense designs.
  • sor - Thursday, March 15, 2018 - link

    As long as the heat sink is more thermally conductive than the cooling liquid, the chip would absolutely benefit from one to spread and aid the transfer of heat. In fact, that’s the whole point of roughing up the lid: increasing surface area and creating micro fins/pins.

    Now, it may be that soldering on a 1 cm tall copper-finned heatsink only improves things by a tiny fraction versus just roughing up the lid, or it might be that they actually do want to run the systems hot and boil liquid at the surface in order to create convection currents.
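
A minimal sketch of the surface-area point above, assuming simple single-phase convection (Q = h * A * dT) and made-up numbers; real two-phase (boiling) heat transfer behaves differently, so this only illustrates the scaling, not the actual design:

```python
# Hypothetical illustration: more wetted surface area moves more heat for the
# same surface temperature, assuming simple convection Q = h * A * dT.
# Every number here is an assumption, not a measured value.

h = 500.0    # W/(m^2*K): assumed convective coefficient for the dielectric fluid
dT = 30.0    # K: assumed surface-to-fluid temperature difference

lids = {
    "plain lid": 0.0016,      # m^2, bare ~40 mm x 40 mm IHS (assumption)
    "roughened lid": 0.0048,  # m^2, ~3x effective area from micro fins/pins (assumption)
}

for label, area in lids.items():
    q = h * area * dT         # heat removed, in watts
    print(f"{label}: {q:.0f} W")
```
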
  • DanNeely - Monday, March 12, 2018 - link

    For cloud-scale datacenters, not being able to easily service individual servers might not be a major problem either. In some of their previous data centers (I haven't seen anything about their most recent ones), MS was bringing in servers pre-assembled into shipping-container or prefab-building-module sized lumps, with the intent of just connecting power, data, and cooling to the modules at setup and then never opening them until they were due to be replaced wholesale. Any dead servers would just be shut down at the administrative level, and when the total number of dead ones got high enough, or when new generations of hardware got enough better, the entire module would be pulled out as a unit and sent to the recycler.
  • Holliday75 - Monday, March 12, 2018 - link

    I admit it's been almost 3 years since I worked in an MS data center, but I've never seen it work like this. Are you talking about Dell Nucleon racks or HP/Dell containers?

    Azure was using Dell Nucleons when I left, and they were fixed on the fly when blades went down or drives failed.
  • DanNeely - Monday, March 12, 2018 - link

    I don't think any of the articles I read ever named the suppliers. I did find one article from '08 (longer ago than I thought) talking about the early shipping-container data centers, where the plan was to be hands-off until the entire container was yanked.

    "Once containers are up and running, Microsoft's system administrators may never go inside them again, even to do a simple hardware fix. Microsoft's research shows that 20% to 50% of system outages are caused by human error. So rather than attempt to fix malfunctioning servers, it's better to let them die off. "

    and

    "As more and more servers go bad inside the container, Microsoft plans to simply ship the entire container back to the supplier for a replacement. "

    https://www.computerworld.com/article/2535531/data...

    If MS has decided since then that in-place repair is worth a larger on-site staff, well, that's why I'd noted not having seen anything about how they're running their more recent centers. *shrug*
  • GeorgeH - Monday, March 12, 2018 - link

    That's the first time I've seen a windowed computer case I actually want.

    The big headline here is that they think they can get the fluid down to $10 a gallon, though. Assuming they hit that price point with good performance characteristics, immersion cooling could finally go mainstream.
  • Blazorthon - Wednesday, March 14, 2018 - link

    $100 to $300 per gallon is expensive, but is it really prohibitive for these systems? If a server needs a few gallons and we assume $300 per gallon, that's about $1000. Cheaper servers wouldn't make sense with that, but some higher-end servers might have four or eight processors per board, and each processor could cost several thousand dollars, plus thousands of dollars more for the RAM. If the server uses SSDs and/or other additions like GPUs, that could mean many thousands more. Is another $1000 or so for the liquid really that big a deal if it lets you stack things a little denser, especially if you can reuse the liquid every time you replace the server?
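
A back-of-the-envelope sketch of the cost argument above; every figure is an assumption chosen to match the rough numbers in the comment, not a quoted price:

```python
# Hypothetical cost comparison: immersion fluid vs. the rest of a dense server.
# All prices are assumptions for illustration only.

fluid_price_per_gallon = 300.0   # USD, high end of the range mentioned
gallons_per_server = 3.0         # "a few gallons" (assumption)

cpu_cost = 4 * 3000              # four processors at ~$3000 each (assumption)
ram_cost = 4000                  # USD (assumption)
storage_gpu_cost = 5000          # SSDs/GPUs and other additions (assumption)
server_cost = cpu_cost + ram_cost + storage_gpu_cost

fluid_cost = fluid_price_per_gallon * gallons_per_server
share = 100 * fluid_cost / server_cost
print(f"Fluid: ${fluid_cost:.0f}  (~{share:.0f}% of a ${server_cost} server)")
```
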
