
  • Threska - Thursday, February 15, 2018 - link

    One-second increments for a per-hour charge? Plus, at those rates there's going to be some job packing.
  • CiccioB - Thursday, February 15, 2018 - link

    A few questions:
    1. What's the power consumption of one of these things?
    2. Is the HBM mounted on the motherboard instead of sitting on the same package next to the TPU?
    3. The slide says the accumulators are 32-bit registers but the multipliers run at reduced precision, so those don't look like 128x128 true 32-bit multipliers (maybe 16x16 or 24x24). (A small numeric sketch of that multiply/accumulate split follows this comment.)
    4. How does it compare in real performance (theoretical TFLOPS aside) with other AI computing devices like the monster (in all senses) GV100? Any tests?
    5. Can it mine bitcoins or ethereums? :D
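
    For what it's worth, here is a minimal NumPy sketch of the multiply/accumulate split question 3 describes: reduced-precision inputs (the TPU2 is widely reported to use bfloat16) feeding 32-bit accumulators. The truncation helper and the 128x128 sizes are illustrative assumptions, not Google's actual hardware:

    ```python
    import numpy as np

    def to_bfloat16(x):
        # Emulate bfloat16 by keeping only the top 16 bits of a float32:
        # 1 sign bit, 8 exponent bits, and 7 mantissa bits survive.
        # Illustrative rounding (truncation), an assumption, not the real chip.
        x = np.asarray(x, dtype=np.float32)
        return (x.view(np.uint32) & np.uint32(0xFFFF0000)).view(np.float32)

    rng = np.random.default_rng(0)
    a = rng.standard_normal((128, 128)).astype(np.float32)
    b = rng.standard_normal((128, 128)).astype(np.float32)

    # Multiply at reduced precision, then accumulate in full 32-bit floats,
    # mirroring the reduced-multiplier / 32-bit-accumulator split.
    products = to_bfloat16(a)[:, :, None] * to_bfloat16(b)[None, :, :]
    acc = products.sum(axis=1, dtype=np.float32)

    # Error relative to a full fp32 matmul comes only from rounding the inputs.
    print(np.max(np.abs(acc - a @ b)))
    ```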
  • Yojimbo - Saturday, February 17, 2018 - link

    You can get the answers to 1 and 2 in various Next Platform articles. As for how the TPU2 compares to the GV100 in real-world performance and price/performance, you can find some limited data here: https://www.forbes.com/sites/moorinsights/2018/02/...

    No, you can't mine bitcoin or ether. You can't even run non-TensorFlow neural networks on it.
  • MrSpadge - Thursday, February 15, 2018 - link

    So that's where all the HBM went, leaving none for AMD's big Vega.
  • Yojimbo - Saturday, February 17, 2018 - link

    I don't think they are using all that much HBM for their pods. Maybe they have built more, but from what I remember, they initially had 3 pods. Each pod has 256 TPUs, if I remember correctly. That's 768 TPUs, each with 16 GB of HBM2. That's a piddly amount compared to what NVIDIA has used in producing the Tesla P100, Quadro GP100, Tesla V100, and Titan V.
  • Yojimbo - Saturday, February 17, 2018 - link

    Sorry, I misremembered. There are 256 TPUs in a rack and 4 racks in a pod, so there are 1024 TPUs in a pod and 3072 across the three pods.

    As a comparison, just one supercomputer that uses the Tesla P100, Piz Daint, uses 5320 P100s, each with 16 GB of HBM2. And the vast majority of P100s and V100s are being used by companies for deep learning, not in supercomputers.
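
    To put rough numbers on that comparison, a quick back-of-the-envelope tally (this sketch assumes only the device counts and the per-device 16 GB HBM2 figure quoted above; actual deployments may differ):

    ```python
    # HBM2 tallies from the figures in this thread.
    tpus = 256 * 4 * 3            # 256 TPUs/rack, 4 racks/pod, 3 pods
    tpu_hbm_gb = tpus * 16        # 16 GB of HBM2 per TPU

    p100s = 5320                  # Piz Daint's Tesla P100 count
    p100_hbm_gb = p100s * 16      # 16 GB of HBM2 per P100

    print(tpus, tpu_hbm_gb / 1024)    # 3072 TPUs, 48.0 TB of HBM2
    print(p100s, p100_hbm_gb / 1024)  # 5320 P100s, ~83.1 TB of HBM2
    ```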
  • mode_13h - Tuesday, February 20, 2018 - link

    If you just consider raw computational power, I wonder how that picture compares to a mammalian brain.