eSyr - Thursday, August 6, 2020 - link
"big server juggernaut"—are you referring to IBM?Spunjji - Friday, August 7, 2020 - link
Intel - they haven't released PCIe 4.0 capable server hardware yet.
eSyr - Sunday, August 9, 2020 - link
For the record, that was a joke about IBM being a big iron vendor that has had PCIe 4.0 since 2017.
danjw - Thursday, August 6, 2020 - link
"The G242-Z11 also has support for four 3.5-inch SATA pays at the front, and two NVMe/SATA SSD bays in the rear."I think you mean "3.5-inch SATA bays at the front,"
PeachNCream - Thursday, August 6, 2020 - link
It looks like one GPU will be forced to ingest waste heat from another one in front of it if the server is fully populated. On top of that, there are four system fans supporting three GPUs (so ~1.33 fans each), and the one that will eat waste heat is cooled by just one fan. Maybe it wouldn't be a problem, but it's a noteworthy compromise to achieve that sort of density in a 2U chassis.
DanNeely - Thursday, August 6, 2020 - link
The nature of long, narrow rackmount systems is such that some components inevitably get heat-soaked by something else. The systems are designed to handle it, mostly by using extremely high-RPM fans so that even hot air is sufficient to keep hardware operating within thermal limits. The catch is that keeping a high-TDP part from overheating with hot air requires extremely loud fans; and while that doesn't matter in the data center, it's definitely not something you want in your office rack next to your desk.
PeachNCream - Thursday, August 6, 2020 - link
Yes, thank you, but the explanation is not necessary. My professional responsibilities presently include a modest data center, and enterprise-scale equipment administration has been a substantial chunk of the last couple of decades of my work in information technology.
crimsontape - Saturday, August 8, 2020 - link
Man, the internet is savage and petty. Dan, let me apologize on this persona's behalf - he is not peachy nor creamy. All that work and no play.
PeachNCream - Tuesday, August 11, 2020 - link
I'm not sure why an observation about GPU placement and its implications for heat is worth getting so triggered about. Some of you infer a lot of meaning that is simply not stated in order to spin yourselves into a frantic rage.
Smell This - Thursday, August 6, 2020 - link
I think you wayyy over-thought this. I suspect residual pressure in the system would suck the chrome off a trailer hitch.
PeachNCream - Thursday, August 6, 2020 - link
No, those fans will generate significant airflow, but that's quite an exaggeration.
Spunjji - Friday, August 7, 2020 - link
I was thinking that, too. I doubt it will have a substantial impact for the most part. If it's really an issue for the end user, though, they could always test the GPUs individually and place the one with the best voltage characteristics in that location.
Ktracho - Friday, August 7, 2020 - link
There are similar concerns with server boards that have two GPUs per board, where one GPU receives the hot air from the other GPU. The system has to be designed and tested to ensure both GPUs are within spec given the air flow the system provides. Sometimes air temperature depends on the position within a rack. Though not ideal, this type of situation is not uncommon.
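For anyone curious what that kind of in-spec verification can look like in practice, here is a minimal sketch that polls per-GPU temperatures while a burn-in workload runs, assuming NVIDIA GPUs with nvidia-smi available on the PATH. The 83 C limit and 30-second interval are illustrative placeholders, not vendor specifications; check the actual spec sheet for the cards in question.

# Minimal sketch: poll per-GPU temperatures via nvidia-smi and flag any GPU
# that exceeds an assumed thermal limit while a load test is running.
import subprocess
import time

TEMP_LIMIT_C = 83      # assumed limit for illustration; use the GPU's documented spec
POLL_INTERVAL_S = 30   # how often to sample during the burn-in workload

def read_gpu_temps():
    """Return a list of (gpu_index, temperature_c) tuples reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    temps = []
    for line in out.strip().splitlines():
        idx, temp = (field.strip() for field in line.split(","))
        temps.append((int(idx), int(temp)))
    return temps

if __name__ == "__main__":
    while True:
        for idx, temp in read_gpu_temps():
            status = "OVER LIMIT" if temp > TEMP_LIMIT_C else "ok"
            print(f"GPU {idx}: {temp} C ({status})")
        time.sleep(POLL_INTERVAL_S)

Logging the per-slot readings over a full run is also a quick way to see how much extra margin the downstream GPU loses from ingesting the upstream card's exhaust.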