
  • drexnx - Monday, November 16, 2020 - link

    I wonder how they did that refrigerant cooling system - even with refrigerant, you still need a big air to air condenser and fan.
  • edzieba - Monday, November 16, 2020 - link

    The radiator doesn't need to be all that much larger than the equivalent water-cooled variant, if it's sized to fit the load. What you lose to the extra thermal input from the compressor's input energy, you gain back from the increased heat rejection per unit area when the radiator runs at a higher temperature.

    As they don't specify their sub-ambient technology, they could even be using a COTS Peltier cooling system, like the ones Intel is shipping and supporting under the 'cryo cooler' branding. Those are Peltier TEC 'hot ends' attached to AIO loops. Not sure who the OEM is for the TEC heads though.
    TEC sub-ambient cooling obviously has much worse thermal efficiency than compressor-loop cooling (at a minimum double the power to be dissipated), so only the 'maintenance free' claim would point in that direction. (A rough heat-rejection comparison is sketched below.)
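    As a rough illustration of that efficiency gap, here's a minimal sketch of the heat-rejection arithmetic. The chip load and COP figures are assumptions for illustration, not numbers from the article or from any vendor spec:

```cpp
// Back-of-envelope heat-rejection comparison for sub-ambient cooling.
// A sub-ambient cooler must dump the chip's heat PLUS its own input power
// into the radiator: Q_reject = Q_load + Q_load / COP.
#include <cstdio>

int main() {
    const double chipLoadW      = 400.0; // hypothetical GPU/CPU heat load (assumed)
    const double copCompressor  = 3.0;   // illustrative vapor-compression COP (assumed)
    const double copTec         = 0.7;   // illustrative Peltier COP at a useful delta-T (assumed)

    const double rejectCompressor = chipLoadW * (1.0 + 1.0 / copCompressor);
    const double rejectTec        = chipLoadW * (1.0 + 1.0 / copTec);

    std::printf("compressor loop radiator load: %.0f W\n", rejectCompressor); // ~533 W
    std::printf("TEC radiator load:             %.0f W\n", rejectTec);        // ~971 W
    return 0;
}
```

    With those assumed figures, the TEC setup has to shed nearly twice as much heat at the radiator as the compressor loop, which is the efficiency penalty described above.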
  • FloridaMan - Friday, November 20, 2020 - link

    So who gets to suck the charge out each year? Guess all interns are going to have an HVAC compliance power seminar.
  • FloridaMan - Friday, November 20, 2020 - link

    And a way to keep condensation from leaking onto the board.
  • psyclist80 - Monday, November 16, 2020 - link

    Looking forward to MI100 vs A100 reviews!
  • rsandru - Monday, November 16, 2020 - link

    Meanwhile, on the more common computers we all use, software (aka games) support for multiple GPUs is still nearly non-existent.

    I'm surprised that the usual 'smart' commenters who are quick to dismiss multi-accelerator support in games with the "just buy a faster card" line don't say the same about this NVIDIA workstation.

    Give me a modular multi-GPU capability (HW+SW) that I can expand as needed/desired/can afford, let's say 1-4 GPUs, and I'll sign up tomorrow.
  • edzieba - Monday, November 16, 2020 - link

    "I'm surprised that the common 'smart' commenters who are quick to criticize multi-accelerator support in games with the "just buy a faster card" comment don't say the same about this nVidia workstation."

    Why would they? Gaming and HPC (and ML) are VERY different workloads. It's like complaining "Mammoet can just add more SPMTs under a load to move it, why can't I just add more engines to my truck?".
  • waterdog - Monday, November 16, 2020 - link

    I went and looked up Mammoet SPMT. There goes the afternoon's productivity as I spend the rest of the day watching heavy equipment videos. Thanks. :)
  • FloridaMan - Friday, November 20, 2020 - link

    Yes!
  • FloridaMan - Friday, November 20, 2020 - link

    You can. We Florida men know these things. Though the results are less surprising, if not more entertaining.
  • jesuscat - Monday, November 16, 2020 - link

    If you spent upwards of $100k on a DGX workstation to game on it, you're doing it wrong.
    It's targeted way past your needs as a consumer.
  • FloridaMan - Friday, November 20, 2020 - link

    I think as APIs trend toward DX12 and Vulkan we'll see more of a soft implementation of multi-GPU support. However, tech companies and media won't talk about it, as there's been next to no support for this for almost a decade. DX11 swung its massive **** around for far too long. I think it's a rather inferior API, at least on the shader side, and multi-GPU support has to be separately coded. DX12 streamlines this a bit, but it still has to be tweaked into the mainline. Vulkan is much easier since you're doing the work up front, which makes it much less of a headache than having to insert alternating lines within base instructions. (A rough sketch of what explicit multi-GPU enumeration looks like in Vulkan follows below.)
    I'm a huge fan of the multi-GPU concept. I'd much rather grab a mid-tier GPU that's about 30% faster than the previous generation, then drop in a second one a few months later. It's much easier to explain two $300-$400 purchases to the wife than one $800 purchase, which is too much for a toy IMO.
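    For what it's worth, here's a minimal sketch of that explicit model: in Vulkan 1.1 the application itself asks the loader which GPUs can be grouped under one logical device, and then has to split the rendering work across them on its own. This is generic illustration code, not tied to any particular engine or vendor, and error handling is trimmed to the bare minimum:

```cpp
// Minimal sketch: enumerate Vulkan device groups, i.e. the sets of GPUs a
// driver will let a single logical device span. Assumes a Vulkan 1.1 loader
// and driver are installed.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;   // device groups are core in 1.1

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    // Each group lists GPUs that can back one VkDevice; distributing frames
    // or frame regions across them is still entirely the application's job.
    for (uint32_t i = 0; i < groupCount; ++i)
        std::printf("group %u: %u physical device(s)\n", i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

    The API only exposes the hardware; alternate-frame or split-frame distribution of work is left to the engine, which is exactly why broad game support remains rare.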
  • dgz - Monday, November 16, 2020 - link

    The entire gaming industry decided to abandon multi-GPU support not because it doesn't make much sense at the moment, but to annoy you.

    They could easily fix all the latency/stuttering/race conditions and help AMD/NVIDIA sell infinitely more video cards, but opted not to because they don't like your attitude.
  • kgardas - Monday, November 16, 2020 - link

    Come on. How many gamers are willing to use 2 GPUs? And how many will use (and pay for!) 3 GPUs or more? Even if 100% went with 2 GPUs, that would mean selling just twice as many cards, not infinitely more. Also, the market for 2+ GPU gaming machines is very small due to the much higher price and much bigger scalability problems (power, cooling, space, bus, etc.).
  • Kjella - Monday, November 16, 2020 - link

    Don't forget stability issues. I foolishly bought two cards for SLI in the GTX 9xx generation and games crashed much more often than when running a single card. Personally I think it worked well when it worked, but since nobody would bother to find and fix the crash bugs, I ended up selling the second card. Nothing like destroying a good gaming session with a crash to desktop; the FPS wasn't worth it.
  • Holliday75 - Tuesday, November 17, 2020 - link

    Multiple GPU does not necessarily mean multiple cards.
  • haukionkannel - Tuesday, November 17, 2020 - link

    No, but the problems are the same!
    Do you want something that works, or something that crashes, works sometimes... maybe?
    Multi-GPU in gaming is dead because it doesn't work without constant driver fu!
    We need operating systems that support multi-chip GPUs from the core... there are none on the market at the moment.
  • brantron - Monday, November 16, 2020 - link

    Infinite times zero RTX 3000 cards is still zero.

    There are bigger problems to fix than stuttering, or both NVIDIA and AMD wouldn't have these absurdly drawn-out rollouts, starting specifically with the price ranges that sell the least.
  • Unashamed_unoriginal_username_x86 - Monday, November 16, 2020 - link

    If you came here to complain about multi-GPU in video games or want a $10k+ GPU, you're less than 1% of the market and you probably don't have the money for it anyway. Give up.

    With that said, I wonder if they'll introduce any SKUs with 6 active HBM stacks or fewer SMs, perhaps next year as a refresh for AI workloads.
  • rsandru - Tuesday, November 17, 2020 - link

    Well, you're wrong :-) I do have an SLI setup.

    I'm just complaining that the investment made in software to leverage multiple CPU cores, multiple AI accelerators, multiple network interfaces that can be pooled, etc., didn't extend to video games.

    Hey, it's almost Christmas, I can make a wish lol :-D
  • Adam7288 - Monday, November 16, 2020 - link

    But.....can it run Commander Keen?
  • KusheYemi - Monday, November 16, 2020 - link

    Hi Ryan! Why is there no article or write-up about the launch of CDNA and the AMD Instinct MI100 accelerator? It is quite a big deal for the HPC market.
  • Cooe - Monday, November 16, 2020 - link

    Where's the AMD Instinct MI100 (aka "Arcturus") article, guys??? Also, I could totally see AMD pulling the exact same move NVIDIA pulled here later on.

    (Aka, releasing a 64GB MI100 w/ HBM2e instead of the 32GB model's HBM2, both doubling capacity AND bumping the memory bandwidth up from 1.23TB/s to 1.6TB/s. And they could potentially even unlock the full 128CU die on top of that [aka +8 more CUs vs the standard MI100], to sweeten the deal for the flagship part just that little bit more.)
  • Sychonut - Monday, November 16, 2020 - link

    Looking forward to the 500W successor to Ampere. Should be fun getting a second PSU just to power your GPU. The generation after that should be directly plugged into your city's nuclear power plant.
  • Lord of the Bored - Monday, November 16, 2020 - link

    But where will we install the second power supply now that cases lack 5.25" bays?

    You jest, but we used to do exactly that.
  • TheHughMan - Monday, November 16, 2020 - link

    If only we could all get corporate subsidies and tax breaks to afford GPUs and CPUs that perform at maximum threshold.
  • urbanman2004 - Monday, November 16, 2020 - link

    Well, I'm sure either option is def above my pay grade, lol. Just wondering how AMD's Instinct MI100 fares against either the 40GB or 80GB version.
  • Kevin G - Monday, November 16, 2020 - link

    I wonder when we'll see more fully enabled GA100 chips with the full six stacks of HBM2E enabled. An extra 20% of memory bandwidth and 16 GB of memory would be a benefit for many workloads, on top of the extra compute units. There is still plenty of performance left in those rare golden samples. I have a feeling NVIDIA is just stockpiling them for a viable supply, as they can't be that common given the yields on an 826 mm² die.
  • Santoval - Monday, November 16, 2020 - link

    "Both models are shipping using a mostly-enabled GA100 GPU with 108 active SMs..."
    The Ampere generation is quite confusing; how/why can the "full" GA100 die have 108 SMs, a die size of 826 mm², 6912 CUDA cores and an FP32 performance of "just" 19.5 TFLOPS, while the smaller GA102 die of the RTX 3090 has 82 SMs, a die size of 628 mm², *10496* CUDA cores and an astounding FP32 performance of 35.58 TFLOPS?

    Is the counter-intuitive difference in cores and FP32 performance simply because the INT (integer) cores were retained on the GA100 but repurposed as FP32 cores on the GA102? If that's the case, is the 3090's FP32 performance somehow "artificially inflated", i.e. it cannot be compared, apples to apples, with the FP32 performance of Turing? Are the integer cores that were "removed" really important for premium consumer cards, or are they just used for things like AI (but not games)?

    In other words, are AMD's 6800 series cards able to compete with (or even slightly surpass) consumer Ampere cards that have far more (nominal) FP32 TFLOPS just because the INT cores of the Ampere cards were repurposed, or because Navi 2 is a more efficient and effective design? (A quick arithmetic sanity check of the core counts and TFLOPS figures follows below.)
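    For what it's worth, the numbers line up once you know that GA100 keeps 64 dedicated FP32 lanes per SM, while GA102 counts 128 "CUDA cores" per SM because one of its two datapaths can issue either FP32 or INT32. A back-of-envelope check using the published boost clocks, offered as a rough sanity check rather than anything authoritative:

```cpp
// Back-of-envelope reconstruction of the published FP32 figures.
// Cores-per-SM and boost clocks are from NVIDIA's public specs;
// peak FP32 = cores x 2 FLOPs per FMA x clock.
#include <cstdio>

int main() {
    // A100 (GA100): 108 active SMs x 64 FP32 cores, ~1410 MHz boost
    const int a100Cores = 108 * 64;                                // 6912
    const double a100Tflops = a100Cores * 2 * 1.410e9 / 1e12;      // ~19.5

    // RTX 3090 (GA102): 82 SMs x 128 "CUDA cores", ~1695 MHz boost
    const int ga102Cores = 82 * 128;                               // 10496
    const double ga102Tflops = ga102Cores * 2 * 1.695e9 / 1e12;    // ~35.6

    std::printf("A100:     %5d cores, %.1f FP32 TFLOPS\n", a100Cores, a100Tflops);
    std::printf("RTX 3090: %5d cores, %.1f FP32 TFLOPS\n", ga102Cores, ga102Tflops);
    return 0;
}
```

    In game workloads that still issue a lot of integer work, part of that flexible second datapath ends up doing INT math instead of FP32, which is one reason the headline TFLOPS gap doesn't show up 1:1 in framerates.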
