19 Comments
Terry_Craig - Thursday, November 30, 2023 - link
Please refrain from comparing the low-precision performance mentioned by Amazon with the FP64 performance of supercomputers. Thank you :)
Kevin G - Thursday, November 30, 2023 - link
I'll second this. Many of these AI operations are lower precision on sparse matrices. The TOPs metric (meaning tensor operations per second, not trillions of operations per second) would be a better comparison point, as the unit of work is better defined in this context.
Yojimbo - Thursday, November 30, 2023 - link
Where did he do that? He compared the low-precision performance mentioned by Amazon with the low-precision performance of supercomputers. Low-precision performance is what is important here. FP64 performance of Trainium is entirely irrelevant (it doesn't even look like it supports the precision).
He did declare that 65 exaflops is of the same level as the 93 exaflops of Jupiter without making it clear that he was comparing 65 with 93. I think that's a stretch.
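As a quick check on the figures in question (taking both as the low-precision numbers the thread is discussing):

\[
\frac{93 - 65}{65} \approx 43\%, \qquad \frac{93 - 65}{93} \approx 30\%
\]

That is, Jupiter's quoted throughput is about 43% above the 65-exaflop figure, or equivalently the Trainium cluster lands about 30% below Jupiter; whether that counts as "the same level" is exactly the judgment call under debate.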
haplo602 - Friday, December 1, 2023 - link
Yeah, that is the problem: the linked Jupiter article talks about low-precision performance as well, but that is not obvious when reading just the current text. Anyway, comparing 65 and 93 is a stretch; that's a 50% difference in performance, which is a lot ... it's not even in the same ballpark ...
Yojimbo - Friday, December 1, 2023 - link
It is obvious they are talking about low precision, as they call Jupiter an "AI supercomputer". Besides, why would one operate under the assumption that the article was making a mistake? The article mentions the low-precision performance of the Trainium 2s and then compares it to Jupiter. So the correct thing would be to compare it to the low-precision performance of Jupiter, which is exactly what it is doing.
The problem isn't with the article, the problem is that some people have the idea that FP64 is the "real" way to measure supercomputers, which is bunk. In fact, the majority of new supercomputers aren't on the TOP500 list and their FP64 performance doesn't matter as far as their supercomputing capability. AI is eating HPC. Jupiter itself, which will be on the TOP500, is being built with the idea that AI will be one of its most important workloads.
Dante Verizon - Friday, December 1, 2023 - link
FP64 is a common metric for evaluating supercomputers. While low precision may be acceptable in AI, many fields demand high precision and arithmetic stability, where FP64 is essential.
They shouldn't compare projects like Frontier with those focused on AI.
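To make the stability point concrete, here is a minimal sketch of precision-driven drift, assuming NumPy; the toy summation and its values are illustrative, not from the article:

```python
import numpy as np

# Toy example: add 1e-4 ten thousand times. The exact answer is 1.0.
n = 10_000
term = 1e-4

# float16: once the running total reaches 0.25, the gap between adjacent
# representable values (2**-12 ~ 2.4e-4) exceeds twice the addend, so each
# addition rounds back to the same value and the sum stops growing.
total_fp16 = np.float16(0.0)
for _ in range(n):
    total_fp16 = np.float16(total_fp16 + np.float16(term))

# float64: the same loop accumulates with negligible rounding error.
total_fp64 = np.float64(0.0)
for _ in range(n):
    total_fp64 += term

print(f"float16 sum: {float(total_fp16):.4f}")  # stalls near 0.25
print(f"float64 sum: {total_fp64:.4f}")         # ~1.0000
```

AI training tolerates (and works around) this kind of error, e.g. by accumulating in higher precision, which is why low-precision exaflops and FP64 exaflops measure genuinely different capabilities.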
PeachNCream - Thursday, November 30, 2023 - link
I for one welcome our new AI masters and encourage them to cull the weak and the disapproving from among the human population while leaving the rest of us that will not pose a problem to an AI-ruled empire as an interesting and/or amusing science experiment.
Oxford Guy - Friday, December 1, 2023 - link
The biggest threat from AI is that it will replace the always-irrational human governance with rational governance — a first.
skaurus - Saturday, December 2, 2023 - link
Don't worry, politicians won't allow themselves to be replaced until the very last possible moment, and then some.
easp - Saturday, December 2, 2023 - link
LOL. I'm sure you are much less rational than your self-image.
mode_13h - Monday, December 4, 2023 - link
A vengeful god made humans in his image, and look at the result!
Sadly, humans didn't learn this lesson. We designed AI to think much the same way we do, and are training AI models on our own works and culture. You can't *really* expect such a process to yield terribly rational AI, can you?
Garbage in, garbage out.
GeoffreyA - Wednesday, December 6, 2023 - link
Exactly my thought. Rather than rational, it's going to be a monstrous version of humanity. The sad, ironic part is how companies are chasing after AI, scared to lose out on the billions of dollars, yet this technology is likely going to unravel the world. At the very least, make us a lot more stupid. Why think, when AI can think for us?
mode_13h - Thursday, December 7, 2023 - link
Good points.
Also, "hi!" It's been a while since we crossed posts!
: )
GeoffreyA - Thursday, December 7, 2023 - link
Yes, "hi," my friend! We used to have some great conversations, you, me, and Oxford Guy, and it's good to see everyone back. Well, I've just been caught up in personal life, problems with love and my lady, and it's been tough. I felt out of touch with a lot, but it's coming right, slowly.mode_13h - Thursday, December 7, 2023 - link
Regarding your personal life, I hope things work out for the best.
OG and I are pretty much like oil & water. I don't miss some of those arguments.
GeoffreyA - Friday, December 8, 2023 - link
Thank you.
That's true. Seems like only yesterday. How time flies.
Well, concerning computing, the big change that strikes me these days is AI, and while I see the advantages, I've got a bad feeling about all of it.
skaurus - Saturday, December 2, 2023 - link
I'd like to note for future reference that I am of the same opinion.
quorm - Friday, December 1, 2023 - link
Trainium is still a terrible name.eastcoast_pete - Sunday, December 3, 2023 - link
Maybe to differentiate it from the "impossibletotrainium" setups? And yes, that's a pretty bad pun. Maybe ChatGPT can do better 😁?