
  • Silma - Tuesday, February 9, 2021 - link

    TL;DR: unless you absolutely need a 3995WX feature not included in the 3990X, the 3990X is a much better choice: almost the same performance for $1,500 less.
  • ingwe - Tuesday, February 9, 2021 - link

    Yeah absolutely. Still exciting to me with the increased DDR capacity.
  • kgardas - Tuesday, February 9, 2021 - link

    Well, not always! In some cases the W-3175X was better, and in some cases even the poor little 5950X was better. So the target application always matters here.
    For me, the AVX-512 perf of the 3175X is breathtaking (an 8x speedup of AVX-512 code in comparison with non-AVX!), and it's a pity that AMD is not supporting it yet. Speaking of specialized code, Sapphire Rapids' AMX will be something to look for.
  • frbeckenbauer - Tuesday, February 9, 2021 - link

    It's unlikely AMD will ever implement AVX-512 directly. It's too niche and takes up a huge amount of die space; you're better off going to the GPU instead.
  • Oxford Guy - Tuesday, February 9, 2021 - link

    If AMD were to implement it, Intel has AVX1024 waiting in the wings.
  • Smell This - Tuesday, February 9, 2021 - link


    Get me the Nuke Plant ... STAT!
    We have an order for Chipzillah AVX1024, and need more Gigawatts.
  • kgardas - Tuesday, February 9, 2021 - link

    Not AVX1024, but AMX: https://fuse.wikichip.org/news/3600/the-x86-advanc...
  • ishould - Tuesday, February 9, 2021 - link

    Is AMX something that could be implemented as a chiplet? If so, AMD might be going this route so that the customers who need it can get it.
  • Elstar - Friday, February 12, 2021 - link

    I can't find a link to it, but during the Xeon Phi era (rest in peace), the Intel engineers were proud of the EVEX encoding scheme and that it could support AVX1024 or AVX2048 someday. I think now that Xeon Phi is dead and normal Xeons have embraced AVX512BW and AVX512VL, this dream is dead too.
  • kgardas - Tuesday, February 9, 2021 - link

    Look at the benchmark numbers and then think about what TR will be able to do with proper AVX-512 support. Yes, AMD definitely needs to implement it. It'll also need to implement AMX in the future if it doesn't want to become a second-class x86 citizen again.
  • YB1064 - Tuesday, February 9, 2021 - link

    You are kidding, right? Intel has become the poor man's AMD in terms of performance.
  • kgardas - Wednesday, February 10, 2021 - link

    From a general computing point of view, yes, but from a specific one, no. Look at 3D Particle Movement! The 3175X, with fewer than half the cores and at least $1k cheaper, is able to provide more than 2x the performance of the best AMD chip. So if you have something hand-optimized for AVX-512, then old, outdated Intel is still able to kick AMD's ass, and with quite some style.
  • Spunjji - Wednesday, February 10, 2021 - link

    @kgardas - Sure, but not many people can just throw their code at one of only a handful of programmers in the world with that level of knowledge and get optimised code back. That particle movement test isn't an industry-standard thing - it's Ian's personal project, hand-tuned by an ex-Intel engineer. Actual tests using AVX512 aren't quite so impressive because they only ever use it for a fraction of their code.
  • Fulljack - Thursday, February 11, 2021 - link

    Not to mention that any processor running AVX-512 will have its clock speed tanked. Unless your program maximizes the use of AVX-512, the net result will be an application that's slower than with AVX/AVX2 or none at all.
  • sirky004 - Tuesday, February 9, 2021 - link

    What's your deal with AVX-512?
    The usual workloads it's designed for are better offloaded to the GPU.
    There's a reason why Linus Torvalds hates that "power virus".
  • kgardas - Wednesday, February 10, 2021 - link

    Usually, if you write the code yourself, it's much easier to add a few AVX-512 intrinsic calls than to rewrite the software for GPU offload. But yes, the GPU will be faster *if* the perf is not killed by PCIe latency. E.g. if you need to interact with data on the CPU and perform just a few calcs on the GPU, then moving data CPU -> GPU -> CPU and looping over that will kill perf.
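    A minimal sketch of the kind of change being described (my own illustration, not code from the thread; it assumes a CPU with AVX-512F and a compiler flag such as -mavx512f):

        #include <immintrin.h>
        #include <cstddef>

        // Scale an array in place, 16 floats per AVX-512 instruction.
        // n is assumed to be a multiple of 16 to keep the sketch short.
        void scale(float* data, std::size_t n, float factor) {
            const __m512 f = _mm512_set1_ps(factor);              // broadcast factor into 16 lanes
            for (std::size_t i = 0; i < n; i += 16) {
                __m512 v = _mm512_loadu_ps(data + i);             // load 16 floats
                _mm512_storeu_ps(data + i, _mm512_mul_ps(v, f));  // multiply and store back
            }
        }

    The data never leaves the CPU's caches, which is exactly the advantage over a GPU round trip when only a small part of the work is vectorisable.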
  • kgardas - Wednesday, February 10, 2021 - link

    AFAIK, Linus hates that AVX-512 is not available everywhere in the x86 world. But this will be the same with the upcoming AMX, so there is nothing Intel can do about it. Not sure if AMD would need to pay some money for an AVX-512/AMX license or not...
  • Qasar - Wednesday, February 10, 2021 - link

    Sorry kgardas, but Linus HATES AVX-512:
    https://www.extremetech.com/computing/312673-linus...
    https://www.phoronix.com/scan.php?page=news_item&a...
    "I hope AVX512 dies a painful death, and that Intel starts fixing real problems instead of trying to create magic instructions to then create benchmarks that they can look good on… "
    Where did you get that he likes it? And chances are, unless Intel makes AMX available with no issues, AMX may end up just as niche as AVX-512 is.
  • kgardas - Wednesday, February 10, 2021 - link

    Yes, I know he hates the stuff, but I'm not sure he hates it for the right reasons. In fact I think AVX-512 is the best AVX so far. I've read some of his rants and it was more about AVX-512 not being everywhere like AVX2 etc. Also, Linus was very vocal about his move from an Intel workstation to AMD, and since AMD does not provide AVX-512 yet, it may well be just pure engineering laziness -- don't bother me with this stuff, it doesn't run here. :-)
  • Qasar - Wednesday, February 10, 2021 - link

    I don't think it has to do with laziness; it has to do with the overall performance hit you get when you use it, not to mention the power usage and the die space it needs. From what I have seen, it still seems to be a niche and, overall, not worth it. It looks like AMD could add AVX-512 to Zen at some point, but maybe AMD has decided it isn't worth it?
  • brucethemoose - Tuesday, February 9, 2021 - link

    There are large efforts in various communities, from Rust and C++ to Linux distros and game engines, to improve autovectorization.

    This means that more software should support AVXwhatever with minimal or no support from devs as time goes on.

    Also, one can't always just "go to the GPU." Video encoding is a great example where this won't work.
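    For illustration (my own sketch, not from the comment): a plain scalar loop like this is already auto-vectorised by current GCC and Clang at -O3 with, say, -march=skylake-avx512, without any intrinsics or extra developer effort:

        #include <cstddef>

        // The compiler is free to turn this loop into 256- or 512-bit vector
        // instructions on its own; the source stays portable scalar C++.
        void saxpy(float* y, const float* x, float a, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }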
  • RSAUser - Wednesday, February 10, 2021 - link

    NVENC seems to work pretty well.
  • eastcoast_pete - Wednesday, February 10, 2021 - link

    The newest generation of NVENC (since Turing) is indeed quite capable of encoding and transcoding to AVC and H.265. That ASIC has come a long way and is really fast. However, for customized encoding situations the story shifts; that's where CPU-based encoding comes into its own. Also, if you want AV1 encoding, we're still out of luck on GPUs; if that has changed, I'd love to know. Lastly, a lot of the work these workstations are used for is CGI and editing, and for those, a combination of lots of RAM, many-core CPUs like these, and 1-2 higher-end GPUs is generally what people who do this for a living use.
  • GeoffreyA - Friday, February 12, 2021 - link

    NVENC, VCE, etc., work brilliantly when it comes to speed, but if one is aiming for best quality or size, it's best to go for software encoding. x264 does support a bit of OpenCL acceleration but there's hardly any gain, and it's not enabled by default. Also, even when AV1 ASICs are added, I've got a feeling it'll fall well below software encoding.
  • phoenix_rizzen - Tuesday, February 9, 2021 - link

    More like, unless you absolutely NEED AVX-512 support, there's absolutely no reason to use an Intel setup.

    Pick a Ryzen, Threadripper, or EPYC depending on your needs, they'll all be a better value/performer than anything Intel currently has on the market.
  • twtech - Tuesday, February 9, 2021 - link

    It depends on the workload and on what you're looking for. While regular TR supports ECC, it doesn't support registered ECC, which is 99% of ECC memory. The 8 memory channels also make a big difference in some workloads, such as compiling code. The TR Pro also has a lot more PCIe lanes, as well as better support for business/corporate management.

    So I agree with your statement if you are a solo operator with a workload that is neither very memory-capacity dependent nor very memory-bandwidth dependent.
  • eastcoast_pete - Wednesday, February 10, 2021 - link

    For the use case covered here (customized CPU rendering), those many cores are hard to beat. In that scenario, the main competitors for the TR Pros are the upcoming Zen 3-based Epyc and non-Pro TRs. Unfortunately, Intel is still way behind. What I also wonder is whether some of these rendering suites are available for ARM's Neoverse arch; some of those 128+ core CPUs might be quite competitive, if the software exists.
  • wumpus - Wednesday, February 10, 2021 - link

    I'm guessing that a lot of the time that feature will be ECC. How many tasks worth doing on a 5-figure workstation are worth having botched bytes thanks to a memory error?

    Granted, a lot of the cases where ECC is needed are also going to be the >256GB cases, but ECC is pretty important (if only to know that there *wasn't* a memory error).
  • The Hardcard - Tuesday, February 9, 2021 - link

    While the EPYC name is epic, it doesn't scream "for heavyweight computational workloads" like Threadripper does. Both names are candidates for bonuses, but whoever thought up Threadripper should get the bigger one.

    If the same person is responsible for both names, then the title AMD Marketing Fellow was earned.
  • Oxford Guy - Tuesday, February 9, 2021 - link

    Threadripper is even cornier than EPYC. Neither inspires much confidence from a professional standpoint; they sound like gamer speak.
  • CyberMindGrrl - Tuesday, February 9, 2021 - link

    Speak for yourself. I work in the animation industry and many indies like myself have gone Threadripper thanks to the lower cost/performance ratio as compared to Intel. I myself built a 64 core Threadripper last year and I run three RTX 2080 ti's in that machine so I can do both CPU and GPU rendering AT THE SAME TIME. While "Threadripper" is an absolutely stellar name, they could have called it "Bob" and we'd be just as happy.
  • Oxford Guy - Tuesday, February 9, 2021 - link

    Very nice demonstration of a ‘rebuttal’ that, in no way, addresses the original claim.
  • lmcd - Tuesday, February 9, 2021 - link

    Pentium is equally meaningless. And Intel's name is "Core" which is completely useless.
  • Oxford Guy - Wednesday, February 10, 2021 - link

    Meaninglessness is one issue, hokiness is another.
  • Spunjji - Thursday, February 11, 2021 - link

    Ah yes, the time-honoured empirical concept of "hokiness" 🤭
  • Oxford Guy - Thursday, February 11, 2021 - link

    You’re really trying to suggest that all names are equal in perceived seriousness?

    The desperation continues...
  • Fujikoma - Wednesday, February 10, 2021 - link

    I take it your profession involves glitter and coloured lights. In a professional setting, the names EPYC and Threadripper are as unimportant as Xeon, Core i9, Bulldozer... et al. They're naming conventions that might garner some water cooler chatter, but don't impact the purchase orders.
  • Oxford Guy - Wednesday, February 10, 2021 - link

    Your response is to the wrong post.
  • Spunjji - Thursday, February 11, 2021 - link

    @Oxford Guy - no, it was totally to you. You claim to factor the names of your tools into your assessment of their worth, so your work can't be that important. 👍
  • Oxford Guy - Thursday, February 11, 2021 - link

    So, your brilliant claim is that the names don’t matter. Well, in that case you’re saying names like Threadripper are worthless.

    So much for a defense of the corny naming practice, there.
  • Holliday75 - Friday, February 12, 2021 - link

    Oxford Guy: I just want to argue.
  • Spunjji - Thursday, February 11, 2021 - link

    They're a professional, they don't care about the name - that was the rebuttal, and it directly addressed your "claim". It doesn't need to be more than their opinion, because your "claim" was just your own turgid opinion.

    Pseudo-rationalists are a plague.
  • Oxford Guy - Thursday, February 11, 2021 - link

    Another comment that fails to rebut the original claim.
  • grant3 - Friday, February 12, 2021 - link

    An industry professional, in the target market for this product, tells you "The name is stellar."

    Yes, that both addresses + contradicts your claim that you know what inspires "professional confidence" better than the actual professionals.
  • Spunjji - Thursday, February 11, 2021 - link

    p r o f e s s i o n a l
    Some professionals are gamers. Some have a sense of humour. Some even make games!

    But sure, it's not "Xeon", which is "professional" by virtue of being duller than a water flavoured lollipop.
  • Oxford Guy - Thursday, February 11, 2021 - link

    The obsession continues...
  • Qasar - Thursday, February 11, 2021 - link

    And it's your obsession. There's nothing wrong with the name Threadripper. If you want to complain about product names, go look at some of the names for video cards, or even some motherboards.
    Hmm, maybe you don't like the name because Intel didn't make it?
  • schujj07 - Thursday, February 11, 2021 - link

    Xeon is an even number name than Threadripper or Epyc. The name is too close to xenon, the noble gas. If you just heard the name, you would think the Intel CPU doesn't play well with anything besides itself.
  • schujj07 - Thursday, February 11, 2021 - link

    *dumber not number...stupid autocorrect
  • GeoffreyA - Friday, February 12, 2021 - link

    For my part, I think Threadripper is a pretty nice name and have always liked it (it gives the impression of tearing through work with mighty, relentless threading). But I also agree with Oxford Guy's sentiment that names in computing are often made in poor taste (Bulldozer, Netburst, and Core 2 Duo are "choice" specimens). A subjective thing, to be sure, but that's how I feel.
  • Oxford Guy - Friday, February 12, 2021 - link

    Bulldozer was indeed particularly awful. Abstract names like Xeon are generally less annoying than misapplied real-world names.
  • Qasar - Friday, February 12, 2021 - link

    Too bad Bulldozer and NetBurst were code names for the architectures, and not marketing names like Xeon and Threadripper.
  • GeoffreyA - Saturday, February 13, 2021 - link

    You're right, but even FX and Phenom were in poorer taste than Athlon, which was sheer gold, I'd say. Is Threadripper good or bad as a name? What I say, or anyone else here says, doesn't matter. Only from a survey can we get a better picture, and even there it's a reflection of popular opinion, a blunt instrument, often misled by the times.

    Is there a standard of excellence, a mode of naming so tuned to the genius of the language that it never changes? It's evident to everyone that "Interstellar" sounds better than "Invisible Invaders from Outer Space," but we could be wrong and time only can decide the matter. If, in 500 years, people still get a shiver when they hear Interstellar, we'll know that Nolan named his film right.

    Back to the topic. I think the spirit of Oxford Guy's comment was: TR and Epyc aren't that good as names (which I partly agree with). Whether they inspire confidence in professionals is a different matter. A professional might be an expert in their field, but that doesn't mean they're an expert on good names (and I'm not claiming I am one either). It matters little: if the target demographic buys, AMD's bank account smiles. But it's a fair question to ask, apart from sales: is a name good or bad? Which were the best? Does it sound beautiful? Names, in themselves, are pleasurable to many people.
  • jospoortvliet - Saturday, February 13, 2021 - link

    Names should have a few properties if they are to be good.
    Easy to pronounce (cross-culturally!)
    Easy to remember (distinctive)
    Not (too) silly/funny
    Bonus: have some (clever) relation to the actual product.

    Threadripper certainly earns the bonus but might arguably perhaps maybe lose out on the 3rd 'silly' point. However, in that regard I would argue it makes a difference how well the product fulfils that rather ambitious title, and as we all know the answer is "very well". Now, if Threadripper were a mediocre product, not at all living up to its name, I'd judge differently, but as it stands I would say it is a brilliant name.
  • GeoffreyA - Saturday, February 13, 2021 - link

    Good breakdown that, to test names against. Simplicity, too, wins the day.
  • GeoffreyA - Saturday, February 13, 2021 - link

    "Bulldozer was indeed particularly awful"

    One of the worst. AMD's place names were good in the K8 era, and the painter ones are doing a good job too.
  • danjw - Saturday, February 13, 2021 - link

    You may not be aware of this, but Threadripper actually comes from the '80s fashion fad of ripped clothing. ;-)
  • GeoffreyA - Saturday, February 13, 2021 - link

    Well, I like it even more then, being a fan of the 80s.
  • Hulk - Tuesday, February 9, 2021 - link

    Is the difference in output quality strictly due to rounding/numerical errors when using GPU vs CPU or are there differences in the computational algorithms that calculate the numbers?
  • Kjella - Tuesday, February 9, 2021 - link

    Not in terms of color accuracy; it's not a problem making 10/12-bit non-linear color from 32-bit linear - even 16-bit is probably near perfect. But for physics engines, ray tracing, etc., errors can compound a lot - imagine a beam of light hitting a reflective surface, where a fraction of a degree means the light bounces in a completely different direction. Or you're modelling a long chain of events that causes compounding errors, or the sum of a million small effects, or whatever. But it can also just be that the algorithms are a bit "lazy" and expect everything to fit because they've got 64 bits to play with. I doubt that much needs ~1.0000000001 precision, much less ~1.0000000000000000001.
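    A toy illustration of that compounding (my own sketch, with made-up numbers): rotate a direction vector a million times in FP32 and in FP64 and compare where the two end up.

        #include <cmath>
        #include <cstdio>

        template <typename T>
        void rotate_many(T& x, T& y, int steps) {
            const T angle = static_cast<T>(0.123456789);   // radians per bounce
            const T c = std::cos(angle), s = std::sin(angle);
            for (int i = 0; i < steps; ++i) {
                T nx = c * x - s * y;                      // a tiny rounding error every step
                T ny = s * x + c * y;
                x = nx; y = ny;
            }
        }

        int main() {
            float  xf = 1.0f, yf = 0.0f;
            double xd = 1.0,  yd = 0.0;
            rotate_many(xf, yf, 1000000);
            rotate_many(xd, yd, 1000000);
            // The accumulated angular drift between the two runs.
            std::printf("drift: %g rad\n",
                        std::atan2(yd, xd) - std::atan2(double(yf), double(xf)));
        }

    A single step is accurate to roughly seven decimal digits in FP32; a million steps later the two paths point in measurably different directions, which is the kind of divergence that matters for bouncing rays or long event chains.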
  • avb122 - Tuesday, February 9, 2021 - link

    Those cases do not matter unless you are checking that the result is the same as a golden reference. Otherwise the image it creates is just as if the object being rendered had moved 10 micrometers. To our brain it doesn't matter.

    Being off by one bit with FP32 for geometry is about the same magnitude as modeling light as a particle instead of a wave. For color intensity, one bit of FP32 is less than one photon in real-world cases.

    But, CPUs and GPUs all get the same answer when doing the same FP32 arithmetic. The programmer can choose to do something else like use lossy texture compression or goofy rounding modes.
  • avb122 - Tuesday, February 9, 2021 - link

    It's not because of the hardware. AMD's and NVIDIA's GPUs have IEEE-compliant FPUs, so they get the same answer as the CPU when using the same algorithm.

    With CUDA, the same C or C++ code doing computations can run on the CPU and GPU and get the same answer.

    The REAL reasons not to use a GPU are that the non-compute parts (threading, memory management, synchronization, etc.) are different on the GPU, and that not all GPUs support CUDA. Those are very good reasons. But it is not about the hardware; it is about the software ecosystem.

    Also, GPUs do not have a tiny amount of cache. They have more total cache than a CPU; it's the ratio of cache to "threads" that is lower. That requires changing the size of the block that each "thread" operates on. Ultimately, GPUs have so much more internal and external bandwidth than a CPU that only in extreme cases, where everything fits in the CPU's L1 caches but not in the GPU's register file, can a CPU have more bandwidth.

    Ian's statement about wanting 36 bits so that it can do 12-bit color is way off. I only know CUDA and NVIDIA's OpenGL. For those, each color channel is represented by a non-SIMD register. Each color channel is then either an FP16 or FP32 value (before neural networks GPUs were not faster at FP16, it was just for memory capacity and bandwidth). Both cover 12-bit color space. Remember, games have had HDR for almost two decades.
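    A minimal sketch of the "same code, same answer" point (my own illustration, assuming an nvcc toolchain; note that nvcc contracts multiply-add into FMA by default, so a bit-for-bit match needs matching contraction settings on both sides, e.g. --fmad=false):

        #ifdef __CUDACC__
        #define HOST_DEVICE __host__ __device__
        #else
        #define HOST_DEVICE
        #endif

        // The same FP32 arithmetic compiled for host or device. Both FPUs are
        // IEEE-754 compliant, so with matching rounding and contraction settings
        // the CPU and GPU results are identical.
        HOST_DEVICE inline float dot3(const float* a, const float* b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }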
  • Dug - Tuesday, February 9, 2021 - link

    It's software.

    But sometimes you don't want perfect. It can work to your benefit, depending on what end results you view and interpret.
  • Smell This - Tuesday, February 9, 2021 - link


    Page 4
    Cinebench R20
    Paragraph below the first image
    **Results for Cinebench R20 are not comparable to R15 or older, because both the scene being used is different, but also the updates in the code bath. **

    I do like my code clean ...
  • alpha754293 - Tuesday, February 9, 2021 - link

    It's a pity for this processor, and for the platform, that you can buy a used dual EPYC 7702 server and still reap more multithreaded performance from its 128 cores/256 threads than you would be able to get out of this processor.

    I wish this review had actually included results for a dual EPYC 7702/7742 system for the purpose of comparing the two, as I think the dual EPYC 7702/7742 would still outperform this Threadripper Pro 3995WX.
  • Duncan Macdonald - Tuesday, February 9, 2021 - link

    Given the benchmarks and the prices, the main reason for using the Threadripper Pro rather than the plain Threadripper is likely to be the higher memory capacity (2TB vs 256GB).
    Even a small overclock on a standard Threadripper would allow it to be faster than a non-overclocked Threadripper Pro for any application that fits into 256GB.
  • twtech - Tuesday, February 9, 2021 - link

    There are a couple of other pretty significant differences that matter perf-wise in some scenarios - the Pro has 8-channel memory support and more PCIe lanes.

    Significant differences not directly tied to performance include registered ECC support, and management tools for corporate security, which actually matters quite a bit with everyone working remotely.
  • WaltC - Tuesday, February 9, 2021 - link

    On the whole, a nice review...;)

    Yes, it's fairly obvious that one CPU core does not equal one GPU core; comparatively, the latter is wide and shallow and handles fewer instructions, lower IPC, etc. GPU cores are designed for a specific, narrow use case, whereas CPU cores are much deeper (in several ways) and designed for a much wider use case. It's nice that companies are designing programming languages to utilize GPUs as untapped computing resources, but the bottom line is that GPUs are designed primarily to accelerate 3D graphics and CPUs are designed for heavy, multi-use, multithreaded computation with a much deeper pipeline, etc. While it might make sense to use GPUs and CPUs together in a more general computing case once the specific-case programming goals for each kind of processing hardware are reached, it makes no sense to use GPUs in place of CPUs or CPUs in place of GPUs. AMD has recently made no secret that it is splitting its GPU line: one branch with more 3D-acceleration circuitry and less compute circuitry for gaming, and another branch with more compute circuitry and less gaming-use 3D-acceleration circuitry. 'Bout time.

    The software rendering of Crysis is a great example - an old, relatively slow 3D accelerator paired with a CPU can bust the chops of even a 3995WX *if* the 3995WX is tasked with rendering Crysis sans a 3D accelerator. When the Crysis engine talks about how many cores and so on it will support, it's talking about using a 3D accelerator *with* a general-purpose CPU. That's what the engine is designed to do. Take the CPU out and the engine won't run at all; try to use the CPU as the API renderer and it's a crawl that no one wants...;) Most of all, using the CPU to "render" Crysis in software bears no comparison to a CPU rendering a ray-traced scene, for instance. Whereas the CPU is rendering to a software D3D API in Crysis, ray tracing is done by far more complex programming that will not be found in the Crysis engine (of course.)

    I was surprised to read that Ian didn't think 8-channel memory would add much of anything to performance beyond 4-channel...;) Eh? It's the same principle as expecting 4-channel to outperform 2-channel, everything else being equal. Of course it makes a difference - if it didn't, there would be no sense in having the 3995WX support 8 channels. No point at all...;)
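    For what it's worth, the arithmetic behind that scaling (my own back-of-the-envelope, assuming DDR4-3200 on every channel):

        #include <cstdio>

        int main() {
            const double transfers_per_s = 3200e6;  // DDR4-3200: 3200 MT/s
            const double bytes_per_xfer  = 8.0;     // one 64-bit channel
            for (int channels : {2, 4, 8})
                std::printf("%d-channel: %.1f GB/s theoretical peak\n",
                            channels, channels * transfers_per_s * bytes_per_xfer / 1e9);
            // ~51.2, ~102.4 and ~204.8 GB/s. Whether a given workload ever sees the
            // difference depends on how bandwidth-bound it actually is.
        }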
  • Oxford Guy - Tuesday, February 9, 2021 - link

    Yes, the same principle as expecting a dual core to outperform a single core - which is why single-core CPUs are still dominant.

    (Or, we could recognize that diminishing returns only begin to matter at a certain point.)
  • tyger11 - Tuesday, February 9, 2021 - link

    Definitely waiting for the Zen 3 version of the 3955X. I'm fine with 16 cores.
  • Fellovv - Tuesday, February 9, 2021 - link

    Agreed - I picked up a P620 with 16 cores for $2,500, and could have gotten it for less from Lenovo if they didn't have weeks of lead time. Ian, you may see Lenovo discount all the crazy prices by about 50% all year, and sometimes there are Honey coupons to knock hundreds more off.
    I have read that the 16c, 2-CCX 3955WX may only get 4-channel RAM, not the full 8. I may be able to confirm in the near future. Gracias for the fine and thorough review. My only request is to ensure the TR 3990X is included in every graph - it was MIA or AWOL in several. I went with the TR Pro for the RAM and PCIe 4.0 lanes. Seeing the results confirms it was a good choice for me. Can't wait for Zen 3!
  • realbabilu - Tuesday, February 9, 2021 - link

    Nice 👍. Regarding MKL: how about BLIS and OpenBLAS, do they suffer from the same high-core-count problem?
  • MonkeyMan73 - Wednesday, February 10, 2021 - link

    AMD has the performance crown in most scenarios, but it comes at an extremely high price point. It might not be worth this kind of money even for the most extreme power user. Maybe get a dual core Xeon? Might be cheaper.

    BTW, your last pic of this review is definitely not an OPPO Reno 2 :)
  • MonkeyMan73 - Wednesday, February 10, 2021 - link

    Apologies, not a dual core Xeon - that will not cut it - I meant a dual socket Xeon setup.
  • Oxford Guy - Wednesday, February 10, 2021 - link

    The worst aspect of the price-to-performance is that it’s using outdated tech rather than Zen 3.
  • MonkeyMan73 - Sunday, February 28, 2021 - link

    Correct, there is always some sort of trade-off.
  • Greg13 - Wednesday, February 10, 2021 - link

    I feel like you guys really need to get some more memory-intensive workloads to test. So often in these Threadripper / Threadripper Pro / EPYC reviews, the consumer CPU (5950X in this case) is faster or not far behind, even in highly multithreaded applications. I do some pretty large thermal-fluid system simulations in Simscape, whereby once a system is designed I use an optimisation algorithm to find the optimal operating parameters of the system. This involves running multiple simulations of the same model in parallel using the Matlab Parallel Computing Toolbox along with their Global Optimization Toolbox. Last year I bought a 3950X and 128GB of RAM to do this, but as far as I can tell it is massively memory-bandwidth limited. It's also memory-capacity limited... Each simulation uses around 10GB of RAM, so I generally only run 12 parallel workers to keep within the 128GB. However, in terms of throughput I see barely any change when dropping down to 8 parallel workers, suggesting, I think, that with 12 workers it's massively memory-bandwidth limited. This also seems to be the case in terms of CPU power: even with 12 workers going, the CPU power reported is pretty low, which leads me to think it's waiting for data from memory?

    I assume this would be better with Threadripper, or better still with Threadripper Pro, with their double and quadruple memory bandwidth. However, I don't have the funds to buy a selection of kit and test it to see if the extra cost is worth it. It would be good if you guys could add some more memory-intensive tests to the suite (ideally for me, some parallel Simscape simulations!) to show the benefit these extra memory channels (and capacity) offer.
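    A crude sketch of the effect being described (my own toy model in C++, not the Simscape setup; array sizes are made up): every extra worker streams its own large array, so aggregate throughput flattens out once the memory system is saturated, even while the cores sit mostly idle.

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            const std::size_t n = 1 << 25;  // ~256 MB of doubles per worker
            for (int workers : {1, 4, 8, 12}) {
                std::vector<std::vector<double>> data(workers, std::vector<double>(n, 1.0));
                auto t0 = std::chrono::steady_clock::now();
                std::vector<std::thread> pool;
                for (int w = 0; w < workers; ++w)
                    pool.emplace_back([&, w] {  // each thread streams its own array
                        double sum = 0;
                        for (std::size_t i = 0; i < n; ++i) sum += data[w][i];
                        if (sum < 0) std::printf("unreachable\n");  // keep the loop from being optimised away
                    });
                for (auto& t : pool) t.join();
                double s = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
                std::printf("%2d workers: %.1f GB/s aggregate\n",
                            workers, workers * n * 8.0 / 1e9 / s);
            }
        }

    On a dual-channel desktop the aggregate number stops growing well before 12 workers, which matches the "barely any change from 12 down to 8" observation; the 4- and 8-channel parts raise that ceiling.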
  • Shmee - Wednesday, February 10, 2021 - link

    Yeah, I would wait for Zen 3 TR for sure. That said, this would only make sense because X570 has limited I/O. It would be great to have a nice 16-core TR that had great OC capability and ST performance, was great in games, and did not have the I/O limitations of X570. I really don't need all the cores; mainly I care about gaming, but the current gaming platforms just don't have the SATA and M.2 ports I would like. The extra memory bandwidth is also nice.
  • eastcoast_pete - Wednesday, February 10, 2021 - link

    Thanks Ian! I really wanted one, until I saw the system price (: But, for what these Pro TRs can do, it's a price many are willing and able to pay.
    Also, since it almost always comes up in discussions of AMD vs Intel workstation processors: could you write a backgrounder on what AVX is and is used for, and how open (or open-source) extensions like AVX-512 really are? My understanding is that much of this is proprietary to Intel, but are those AVX-512 extensions available to AMD, or do they have to engineer around them?
  • kgardas - Wednesday, February 10, 2021 - link

    AVX-512 is an instruction set invented and implemented by Intel. It's currently available in Tiger Lake laptops and Xeon W desktops, plus of course server Xeons. The previous generation was AVX2, and the one before that AVX. AVX came with Intel's Sandy Bridge cores 9 years ago IIRC, AVX2 with Haswell.
    For various reasons, IIRC, AMD and Intel cross-licensed their instruction sets years ago; Intel needed AMD's AMD64 to compete. Not sure if the deal also covers future extensions, but I would guess so, since AMD has since implemented both AVX and AVX2. Currently AMD sees no big pressure from Intel, hence I guess it's not motivated enough to implement AVX-512. Once it is, I guess we will see AMD chips with AVX-512 too.
  • kwinz - Wednesday, February 10, 2021 - link

    Really? CPUs are in high demand because GPU programming is hard? That's what you're going with?
  • Gomez Addams - Wednesday, February 10, 2021 - link

    Good heavens, that was painful to read. Some of the worst writing I have had to suffer through in a while. One tip: compute is a verb and not a noun, just as simulation is a noun and simulate is a verb. Just because marketing droids use a term does not mean it is correct.
  • Spunjji - Thursday, February 11, 2021 - link

    Sorry pal, language doesn't work that way. You may not *like* it, but that's the way it is!
  • croc - Wednesday, February 10, 2021 - link

    Are we now ignoring the elephant? EPYC was supposed to launch in 2020. Actually, AMD said that Zen 3 would launch in 2020, but there's a weasel in them words... SOME of the Zen 3 CPUs DID launch, mostly looking like a paper launch though. EPYC is sort of launching as we speak, and Zen 3 Threadripper is a no-show.

    I have said this elsewhere, and I will say it here. It would appear that AMD's lack of fab experience is showing, as they seem to be having issues getting their designs to fab properly at 7nm. Low to no yields? And TSMC is having issues of their own with China buying up as much talent as it can, while threatening to just grab it all in a military takeover. TSMC should have already built an advanced fab somewhere in the west, out of China's reach. Europe? Canada? After Trump, I would say to avoid the US as much as it needs to avoid China.
  • Spunjji - Thursday, February 11, 2021 - link

    AMD were hoping to get Milan into production in Q3 2020 and have it shipping to some customers by Q4, which they did. It's not available to OEMs yet, so hardly a fanfare moment, but not the "elephant" you're trying to paint it as either.

    Same goes for Zen 3, too - it absolutely wasn't a "paper launch" - but I see you're just here to push FUD rather than discuss *the article*.

    Like, what's this "they seem to be having issues getting their designs to fab properly at 7nm" crap? Whose backside are you pulling that out of?

    Amazing how many people seem to think these comment sections are the ideal place to grind their own personal political axes.
  • Oxford Guy - Thursday, February 11, 2021 - link

    ‘Amazing how many people seem to think these comment sections are the ideal place to grind their own personal political axes’

    You seem to think this is your personal website.
  • Qasar - Thursday, February 11, 2021 - link

    As do you. What's your point?
  • Oxford Guy - Wednesday, March 10, 2021 - link

    It’s not the tu quoque fallacy.
  • tygrus - Thursday, February 11, 2021 - link

    If total desktop sales were 20 million in 2020 Q4, and if AMD sold 5M with about 0.95M of those being Zen 3, then Zen 3 could have been ~19% of AMD's desktop CPU sales. That's a good start, and a lot better than the <5% you might expect from a paper launch. The notebook market adds another 50M/qtr (20% AMD?) and tablets (probably not AMD) on top of that, so Zen 3 would look like ~7% of AMD consumer CPUs sold that quarter.

    Not all sales & deliveries are publicised, so server sales of Zen 3 may have already happened. The fab capacity & yield were more than enough, because it was the substrate & final assembly that limited supply.
  • ipkh - Sunday, February 28, 2021 - link

    Really, they have GlobalFoundries to thank for this. GlobalFoundries misread the market and decided to drop high-end node production. This left TSMC as the only high-end node company left standing (that does third-party fab). GlobalFoundries and Samsung could have had a much better roadmap working together with IBM's researchers. But they didn't, and now we see how much it is costing the entire industry. AMD may be forced to use Samsung's foundries if TSMC production gets tied up.
  • Giocic - Thursday, February 11, 2021 - link

    I think this well-done review is missing tests of the most important applications that really show the difference the memory bandwidth available in this machine makes: CFD applications like Ansys Fluent or STAR-CCM+. If you get the chance to test them, especially on machines with a lot of cores and memory, you'd really show the people who need this kind of machine why it makes sense. Imagine people who design jet engines, or those in the automotive industry.
  • Techlover 23 - Tuesday, February 23, 2021 - link

    Can't wait for the Threadripper Pro CPUs. I did some research, and the Asus Pro WS WRX80E-SAGE SE WiFi is the best workstation motherboard to support these CPUs. I need it for my engineering rendering application for sure!!
