66 Comments
Jansen - Monday, November 20, 2023 - link
Bit disappointing that the memory controller only supports DDR5-5200, considering that JEDEC compliant DDR5-6400 RDIMMs are available.
Ryan Smith - Monday, November 20, 2023 - link
At the end of the day it's the same I/O die as Genoa. So it comes with roughly the same restrictions.
TEAMSWITCHER - Monday, November 20, 2023 - link
Not surprised by this at all. My 3960X Threadripper system was never able to run 64GB (16GB x 4) at even the promised DDR4-3200 speed. I tried three different RAM kits and even a different CPU (replaced by AMD), and the problem never went away. In the end, I believe it was a motherboard issue; it was simply incapable of running stable with any RAM faster than DDR4-3000.
After spending so much on the ASUS Zenith II Extreme Alpha motherboard, 64GB of DDR4-3600 RAM, and a $1400 CPU, the end result was very disappointing. Support from ASUS, AMD, and G.Skill was a long process, and eventually I had to just accept what was working and move on.
Ultimately, I don't believe that AMD and ASUS can properly deliver and support any HEDT platform that is worth the money they ask for it. I sincerely wish Intel would return to this segment, as I never had a problem with my X99 Deluxe II motherboard.
lemans24 - Monday, November 20, 2023 - link
Intel is definitely in HEDT with their Xeon W-2400 chips.
StormyParis - Monday, November 20, 2023 - link
I've blacklisted ASUS. Lots of issues with specs, reliability, and service.
vfridman - Monday, November 20, 2023 - link
I have two systems with 3990X and two systems with 3970X, with the ASUS Zenith II Extreme Alpha motherboard and 256GB of 3600-speed G.Skill RAM in each system. Everything runs perfectly and completely stable, even with a maxed-out PBO overclock. I regularly run compilation jobs that require almost the entire 256GB of RAM and have never experienced any problems. I suspect you got unlucky with your CPU memory controller.
Mikewind Dale - Tuesday, November 21, 2023 - link
I have a ThreadRipper Pro 3950X on a Supermicro WRX80 motherboard. I run 8x64 (512) GB of Supermicro-branded DDR4 3200 ECC RDIMM without a problem.
Adam7288 - Wednesday, November 22, 2023 - link
Same exact config! Ram Bros.
tygrus - Saturday, January 6, 2024 - link
How are you going with those >200GB matrices & statistics?
Many years ago I had to use raw frequency stats, then a program to generate blocks of SAS code that could analyse cross-tab by cluster (weighted) with smaller subsets of interest from every possible combination (multi-morbidity data). Making sure the stats methods still gave correct results. Divide & conquer to fit in limited RAM of circa 2013 computers. In those days it was mostly constrained by single thread & disk/network IO speed (~100MB/s).
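The divide-and-conquer approach described above can be sketched in a few lines: accumulate weighted cross-tab counts chunk by chunk so the full dataset never has to fit in RAM. This is only an illustration of the pattern; the column names and numbers are made up, not from the original analysis.

```python
from collections import Counter

def weighted_crosstab(chunks):
    # Accumulate weighted (cluster, condition) counts one chunk at a
    # time, so only a single chunk ever has to fit in memory.
    counts = Counter()
    for chunk in chunks:
        for cluster, condition, weight in chunk:
            counts[(cluster, condition)] += weight
    return counts

# Two small hypothetical chunks of (cluster, condition, weight) rows.
chunks = [
    [("urban", "diabetes", 1.5), ("rural", "asthma", 2.0)],
    [("urban", "diabetes", 0.5), ("urban", "asthma", 1.0)],
]
print(weighted_crosstab(chunks)[("urban", "diabetes")])  # 2.0
```

The same idea scales to streaming chunks off disk, which is what makes it IO-bound rather than RAM-bound.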
TEAMSWITCHER - Friday, November 24, 2023 - link
Ya know... I have yet to build an AMD system that didn't suffer from some kind of issue. I don't think I'm unlucky, either. I need to stop buying AMD gear thinking "this time will be different." Because it never is.
tamalero - Tuesday, December 5, 2023 - link
One of the most common issues with AMD is sketchy performance and stability with non-Samsung chips. If your RAM had Hynix or similar chips, it would usually not POST at the advertised max speed.
As soon as I moved to Samsung chips in all my AMD builds, all the problems went away.
JRF68 - Tuesday, December 12, 2023 - link
Just unlucky. Should RMA the motherboard. ASUS, IMHO, is still the best for HEDT, whether AMD or Intel.
clsmithj - Sunday, December 31, 2023 - link
I have a 3960X Threadripper on an MSI TRX40 Creator, and a 2990WX Threadripper on an MSI X399 MEG Creation board.
Both systems use 4 sticks of Kingston 64GB (16GB x 4) DDR4 3200 dual-rank memory (HyperX Predator/Fury) that worked fine at their C16 3200 MT XMP rate.
I used the DRAM Calculator app and managed to get the RAM of my 3960X to run stable at gear 1 C16 3800 MT with UCLK=MCLK set to 1900 MHz.
The Zen 2 CPU is definitely capable of this high a memory OC. I've watched Buildzoid clips showing that Zen 2 TR can go even higher than 3800.
My Zen+ 2990WX I have running at gear 1 C16 DDR4-3466 speed, and ZenTimings revealed its UCLK=MCLK is at 1733 MHz.
The high memory clocking is doable, but you do need a good motherboard. The TRX40 generation had a fair number of decent boards with a high VRM phase count, whereas the X399 generation only had two good boards: the MSI MEG Creation and the ASUS ROG Zenith Extreme.
thestryker - Monday, November 20, 2023 - link
Where have you seen JEDEC compliant 6400 RDIMMs? The highest I've seen is 5600.
Rοb - Wednesday, November 29, 2023 - link
Here: https://www.anandtech.com/show/18988/teamgroup-unv... and https://www.anandtech.com/show/21129/micron-introd...
demu - Monday, November 20, 2023 - link
I was able to run my 3960X with G.Skill Trident Neo 4x16GB 3600CL16 (B-die) at 3600CL14 and/or 3766 CL16 (Asus ROG Strix TRX40-E Gaming) for weeks without any problems.
The processor died about two months ago. I accidentally touched the USB-A port with a USB-C cable, and the processor no longer booted (or booted, went through all but the last POST test, and then got stuck - tried two different motherboards) :(
Put all the components into a new 5800X3D system and got the 4x16GB memory to work @3600CL16.
Now considering the new 7960X.
Makaveli - Monday, November 20, 2023 - link
So it looks like the difference between your system, Demu, and Teamswitcher's is that you paired yours with high-quality memory.
Which is no surprise, and it's why I never go cheap on memory.
meacupla - Monday, November 20, 2023 - link
It's not about cheap or expensive, it's whether you got Samsung B-die or not.
I have G.Skill F4-3600C18-16GVK, and it only works at 3200 CL18. Why? Because I was told it would be Samsung B-die, but it turned out to be SK Hynix, and the AM4 platform doesn't like working with anything other than Samsung B-die.
Makaveli - Monday, November 20, 2023 - link
I have B-die, and that is what I'm talking about when I said don't buy cheap memory. B-die = expensive, but you get what you pay for and it just works. And you should have done your own research; I would have caught that the memory was Hynix before purchase.
meacupla - Monday, November 20, 2023 - link
That's the thing though, I did do my research.
Makaveli - Monday, November 20, 2023 - link
So how did you not catch that the memory you were looking at was using Hynix?
I just had to look at the specs, and I knew by the CAS latency. There was no good memory in the DDR4 range that came in at CL18.
meacupla - Tuesday, November 21, 2023 - link
Well, clearly, the site that I used thought it was B-die, when it was false info for the 2x16GB model.
tamalero - Tuesday, December 5, 2023 - link
I remember when Samsung released low-quality "B" dies. It was in the news, I think on Tom's Hardware.
These were used as "B-dies" in Corsair high-end RAM.
And they were not as good as the top-class, high-binned true B-die.
Someone correct me if I am wrong.
29a - Tuesday, November 21, 2023 - link
The PCB the memory is mounted on matters too.
kn00tcn - Tuesday, November 21, 2023 - link
micron m-die(?) 3600c16 working great on am4, it just wasn't available until a couple years after launch, samsung isn't the only choice
and actually hynix had different dies, some tighter and more stable than others
first hand anecdotal:
1) in 2018 hynix cjr(?) 3200c16 / 2600x / msi b450m mortar = never fully stable, had to tweak low-level ohms, maybe the cpu had an issue, linux randomly showed amdgpu pcie timeouts in the log
2) though in 2023 the same hynix sticks work fine with 5600g / asrock deskmeet
3) in 2020 micron 3600c16 / 3600x / asus tuf b450m = solid
4) in 2023 different micron 3600c16 / 5600 with pbo / same msi b450m mortar from 2018, everything fine
demu - Tuesday, November 21, 2023 - link
Before the B-die memory I also had a G.Skill Trident set with Hynix ICs (G.Skill 64GB (4 x 16GB) Trident Z, DDR4 3600MHz, 1.35V, CL17-19-19-19).
They also worked @3600 CL16-18-18-18 or 3733 CL17.
iamkyle - Monday, November 20, 2023 - link
I see unlike previous generations of Threadripper, AMD and its board partners are abandoning the "content creator/gamer" segment.
Great for the workstation crowd, a loss for the aforementioned.
Threska - Monday, November 20, 2023 - link
The people who created Crysis could have used this. :-)
kn00tcn - Tuesday, November 21, 2023 - link
and what does a content creator need tons of pcie lanes and quad+ channel memory for? regular desktop isn't weak with 16 high-freq cores and 3d cache
if workloads like rendering scale so well with cores then they also scale across multiple networked computers for a cost-effective render farm instead of a single expensive threadripper
main issue is probably the daw niche of extremely complex realtime audio synths/effects, but this has had workarounds for years (prerender specific tracks), and it's not like we had better performance available in the past
thestryker - Monday, November 20, 2023 - link
HEDT isn't back when the platform price of entry is more than double that of the top desktop setups. AMD did release Zen 4 TR cheaper than Intel's closest current equivalent ($1500 vs $2100), but when $1500 is the cheapest it gets CPU-wise, you could put together a desktop CPU/DRAM/mobo combo for less. This is why I've contended that TR 3xxx actually marked the end of HEDT, as that is when the price of entry became significantly higher than desktop.
thestryker - Monday, November 20, 2023 - link
Forgot to add: these are just the lower-SKU workstation parts, not a resurrection of HEDT.
wujj123456 - Monday, November 20, 2023 - link
> the AMD Ryzen Threadripper 7980X ($4999), despite having eight fewer cores than the W9-3495X ($5889), half the memory channels (4 vs. 8) and being ultimately cheaper, it is the better option.
Am I reading it wrong? The 7980X has eight more cores than the W9-3495X, not fewer. I don't think it changes the conclusion though.
rUmX - Tuesday, November 21, 2023 - link
You're right
Gavin Bonshor - Tuesday, November 21, 2023 - link
Thanks for highlighting that obvious error, edited!
bernstein - Monday, November 20, 2023 - link
What has been true for every Threadripper remains true: if your software allows for computing on more than one node, using 5-10 Ryzen servers for the same money gives you more performance, redundancy, more IO bandwidth, and for many use cases even more total RAM.
vfridman - Monday, November 20, 2023 - link
There are a lot of so-called "professional" use cases that require a lot of RAM on a single machine. It is often possible to split calculations across a cluster of machines, but not so with RAM.
quorm - Monday, November 20, 2023 - link
A nice increase in performance, but it seems like almost everyone would be better off with either desktop Ryzen or Pro/Epyc.
Thunder 57 - Monday, November 20, 2023 - link
You should either use bar graphs that show the 14900K's performance when limited to 125W, or you should just change the graphs and list the 14900K as 428W.
AMD doesn't get a pass either, but at least they are more honest. With these new Threadrippers they are actually spot on. Meanwhile, the "350W" Xeon uses just over 500W. At the very least, maybe include some efficiency charts?
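An efficiency chart of the kind suggested above is easy to derive from score and power columns; a quick sketch, with placeholder numbers rather than the article's measurements:

```python
# Hypothetical benchmark scores and measured package power draw;
# these are illustrative placeholders, not figures from the review.
systems = {
    "TR 7980X": {"score": 128000, "watts": 350},
    "14900K":   {"score":  42000, "watts": 428},
}

# Points per watt: higher means more work done per unit of power.
for name, s in sorted(systems.items()):
    print(f"{name}: {s['score'] / s['watts']:.1f} points/W")
```

The interesting comparison is exactly the one the comment raises: perf/W at the rated TDP versus perf/W at the power actually drawn.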
thestryker - Monday, November 20, 2023 - link
Not that the power consumption is good, but these represent the absolute maximum power draw seen; they do not represent workload power draw. If they were to pick "real" power numbers, they would have to measure power consumption for every single test and show that.
Oxford Guy - Tuesday, November 21, 2023 - link
Deceptive power usage needs to be stopped.
GeoffreyA - Thursday, November 23, 2023 - link
Yes. Deceptive everything.
boozed - Monday, November 20, 2023 - link
"While it's clear in multi-threaded workloads such as rendering, the Ryzen Threadripper 7980X and 7970X are more potent with higher core counts, there are certain situations where the current desktop flagship processors still represent a better buy."
Good to know if I ever start playing Dwarf Fortress?
FatFlatulentGit - Monday, November 20, 2023 - link
One test I'd like to see is encoding 4+ videos at once. One 4K AV1 or HEVC encode is not going to top out all of the cores on the 7980X, but enough parallel encodes will blast the thing.
I also wouldn't mind seeing how they stack up against the WX series, especially in regard to RAM channels when the CPU is saturated.
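A batch like that is simple to script; a minimal sketch, assuming ffmpeg built with libsvtav1 is on the PATH (the filenames, preset, and CRF here are made-up examples):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def svt_av1_cmd(src, dst, preset=8, crf=30):
    # One SVT-AV1 encode. A single encode only scales to a handful of
    # cores, so it's the number of simultaneous jobs that fills the chip.
    return ["ffmpeg", "-y", "-i", src, "-c:v", "libsvtav1",
            "-preset", str(preset), "-crf", str(crf), dst]

def encode_batch(jobs, parallel=4):
    # Run up to `parallel` encodes simultaneously; each worker thread
    # just blocks on its own ffmpeg process.
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        futures = [pool.submit(subprocess.run, svt_av1_cmd(s, d), check=True)
                   for s, d in jobs]
        for f in futures:
            f.result()

jobs = [(f"clip{i}.mkv", f"clip{i}.av1.mkv") for i in range(4)]
print(svt_av1_cmd(*jobs[0]))   # inspect the command; encode_batch(jobs) would run them
```

Sweeping `parallel` from 1 upward and watching total throughput would show exactly where the 7980X stops scaling.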
garblah - Tuesday, November 21, 2023 - link
So, even with a 5,000 dollar CPU, encoding an hour of 1080p AV1 video at 30fps with the medium quality preset would take nearly 2 hours? I guess AV1 software encoding is still pretty slow.
GeoffreyA - Tuesday, November 21, 2023 - link
Just raising the preset a few steps can cut down the time considerably, without too much loss of quality. On my system, SVT-AV1's fastest preset, 12, approaches x264's preset medium in speed, if I remember right, and the quality is still better than the latter.
GeoffreyA - Tuesday, November 21, 2023 - link
And preset 6, which is medium, is roughly similar to libaom's fastest, cpu-used 8.
FatFlatulentGit - Tuesday, November 21, 2023 - link
A single AV1 encode is not going to saturate 64/128 cores. The advantage is being able to do multiple simultaneous encodes.
GeoffreyA - Thursday, November 23, 2023 - link
Or splitting into scene-based chunks.
SanX - Wednesday, November 22, 2023 - link
These new processors are just BS and an utter ripoff. Look at supercomputers, which use very similar processors: you can find a lot of different models there and test them. What these tests show is that during simulations they almost always stay around base frequency, which for this article's 64-core 2.5 GHz processor is equivalent to 32 cores of a standard consumer ~5 GHz 7950X, which costs ~$500. So you pay 10x the money for just a 2x increase in performance. And what is a 2x increase in performance? NOTHING! When you compare computers, remember, you are not comparing a salary, game fps, or your weight loss :) Stop thinking that way; in computers, and specifically in supercomputers, it is at 3-10x that things are really different. Typically, if a usual PC is really not enough for you, then the next step you need is 10x or 100x more, or even 1000x. So these hellishly expensive toys make no economic sense for almost anyone. Just get supercomputer time if you need more than your PC gives you, and stop wasting your money. By the way, these processors, made of $10 chiplets, probably cost $100 to manufacture.
Thunder 57 - Wednesday, November 22, 2023 - link
You're all over the place. First of all, a 7950X has 16 cores. Even if two of those could match a 64-core TR (they won't), you'd need all of the other parts associated with a second computer. You are also forgetting about PCIe and memory bandwidth.
Then you say maybe $100 to manufacture. You know how much it costs to develop these chips? An insane amount of money. You make it sound like AMD is selling a $100 widget for $5000 because they can. People will buy these for thousands. If they didn't sell, AMD would have to lower prices. The market will determine what is "fair".
Threska - Wednesday, November 22, 2023 - link
Sounds like the complaint of a cheap person that doesn't want to spend their money on anything. Starts with a fruit-vegetable comparison and ends with an absurdly low-balled figure.
SanX - Thursday, November 23, 2023 - link
It is better to be cheap than dumb. I wrote that the TR is 2x faster than the consumer 7950X? Let's take this more precisely from "Science and Simulation", for example, as scientists should do. Out of its 13 tests, the TR 7980X won only 5. Even more, taking the geometric mean of the test ratios, we get that the TR is actually only 33% faster than the 7950X3D. A couple of the tests look single-core; taking them out changes this outcome by just 5%. What a misery, it is actually a TOTAL DEBACLE! By the way, just in case, tell your relatives to take the credit card away from you.
BushLin - Thursday, November 23, 2023 - link
Tonight's Headlines:
Guy on the internet with a narrow use case decrees AMD's entire HEDT lineup BS. His application runs just as well on a consumer platform, so no one else could possibly find value...
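For anyone wanting to reproduce the kind of summary computed upthread: the standard way to average benchmark ratios is the geometric mean. A quick sketch with made-up per-test speedup ratios (not the article's actual results):

```python
import math

# Hypothetical per-test speedup ratios (TR 7980X vs 7950X3D);
# illustrative placeholders, not the review's numbers.
ratios = [1.8, 1.2, 0.9, 2.4, 1.1, 1.0, 1.6, 0.8, 1.3, 1.5, 1.0, 1.2, 1.4]

# Geometric mean: the n-th root of the product of n ratios,
# computed via logs to avoid overflow.
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"mean speedup: {geo_mean:.2f}x")
```

The geometric mean is preferred over the arithmetic mean here because ratios compose multiplicatively, so a single outlier test can't dominate the summary.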
SanX - Sunday, November 26, 2023 - link
YMMV
SanX - Thursday, November 23, 2023 - link
"You know how much it costs to develop these chips? An insane amount of money."
OK, tell us how much exactly.
AMD first introduced chiplets in 2015. The cost of that development has been returned many times over since. As to the cost of the chiplets themselves, Zen 4 chiplets have around 6B transistors. Apple's A14 Bionic chip has twice that and costs $17. Do the math.
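Taking the figures in the comment above at face value (they are rough, unconfirmed estimates, not audited manufacturing costs), "do the math" works out like this:

```python
# Comment's claims, not verified numbers: the A14 has roughly twice the
# transistors of a ~6B-transistor Zen 4 chiplet, at a claimed $17 cost.
a14_transistors = 12e9
a14_cost = 17.0                  # claimed, USD
zen4_ccd_transistors = 6e9

cost_per_transistor = a14_cost / a14_transistors
ccd_cost = cost_per_transistor * zen4_ccd_transistors
print(f"implied Zen 4 chiplet cost: ${ccd_cost:.2f}")  # half of $17 = $8.50
```

That is where the "$10 chiplets" figure upthread comes from; it ignores yield, binning, packaging, the IO die, and R&D amortization, which is the crux of the disagreement.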
Shmee - Wednesday, November 22, 2023 - link
I wonder why there is no 16 core option. It would be nice to have a less expensive HEDT CPU for gaming, with higher clocks. Also, why no gaming benchmarks?
Oxford Guy - Wednesday, November 22, 2023 - link
Games aren't designed to leverage these chips (too many cores, not enough clock, no 3D cache, too much inter-module latency).
Games are designed for low-end CPUs, comparatively.
As for a 16-core version, it wouldn't be enough cores to justify the cost of the motherboard unless AMD were targeting extreme clocks, which the company isn't.
mvkorpel - Thursday, November 23, 2023 - link
The 7970X actually has a max boost clock of 5.3 GHz, according to AMD. It is reported as 5.1 GHz in the article.
PeachNCream - Sunday, November 26, 2023 - link
HEDT is a terribly scammy space for CPUs. The markup for overall compute power is high, the maximum CPU clocks are low, power consumption and cooling are crazy, and then there is the biggest issue - per-CPU memory bandwidth to RAM. Modern 4-8 core laptop CPUs get two memory channels. This chip gives you a measly 4 channels for far more processor cores to squabble over. That's woefully inefficient scaling, to say the least. And I'm sure someone will start crying about wiring complexity, in a world where we have 172-layer stacked NAND and hundreds of CPU cores on a single chip package, while ignoring that wiring for 8 memory channels would be trivial with a little bit of effort and thought put into it.
TomWomack - Monday, November 27, 2023 - link
Usually secondhand last-generation servers are a better source of pure computrons than HEDT; on the other hand, third-generation Xeon Scalable with eight channels per processor hasn't made it to the second-hand market yet, and whilst the less-popular many-core Skylake CPUs are under £100, the base systems are still quite expensive and the stock levels aren't great.
JRF68 - Tuesday, December 12, 2023 - link
FYI, the Xeon W3300 series (Ice Lake), introduced in the 3rd quarter of 2021, has 8-channel memory. However, the secondhand market for the W3300 is atrocious. On eBay, prices for a W3365 are 1200 USD, and that's for a QS sample. For the required motherboard, the SuperMicro X12SPA-TF (C621A, LGA4189), sellers on eBay are asking 1200-1500 USD. So ya, skip it and go new W3400 series; that's what I'm in the process of doing.
SanX - Wednesday, November 29, 2023 - link
AVX512 shows 5 times, 8 times, even 10+ times speedup. Can anyone on the planet show me any real app, not a test whose source code no one has seen, which benefited from AVX512 even by a factor of 2-3?
Frank_M - Tuesday, December 5, 2023 - link
Waves Plug-ins for DAWs.
R
Mathematica
Pretty much anything that uses the Intel Math Kernel Library.
SanX - Tuesday, December 12, 2023 - link
And the speedup there is? 10%
Frank_M - Tuesday, December 5, 2023 - link
Enjoyed the stats.
It would be interesting to read some articles on what would be the best hardware configs to run TensorFlow, oneAPI, and/or the GNU Scientific Library, assuming a budget of $5000.
SanX - Wednesday, December 13, 2023 - link
Are AnandTech's Ian Cutress AVX2 and AVX512 codes available for users to test? Are the source codes available? On how many cores do they work - all, or just one?