2 Comments
erinadreno - Monday, June 3, 2024 - link
It's either that a slot's worth of metal costs $500 or that GPU was not selling well.

abufrejoval - Monday, June 3, 2024 - link
I'm not sure the "server" vs. "workstation" distinction really applies here.

My guess is that it's really professional high-density servers vs. low-cost bitbarns, which don't care about noise but want to pack denser inference workloads onto entry-level workstation boards and chassis, in facilities that used to host crypto workloads.
Not necessarily the highest quality or highest ethics inference workloads, either.
Pretty sure that scale-up across PCIe wouldn't work, so it's more about fitting more sub-card-size inference workloads into a given volume and budget. And ROCm means significant software adaptation effort versus anything from team green, which implies a deployment at scale to pay for it.
If I were in intelligence, I'd take a closer look at who is buying this stuff and who they are reselling it to.