Actually, Threadripper 1950X was rumoured to debut at $849, but I'm guessing the reason it is now $999 is because Intel played its hand early with the i9-7900X at $999, which then made AMD say, "hell, we've got a 12-core part that keeps up with this thing. We'll slightly undercut the 7900X with that and price our flagship at the notorious Intel Extreme Edition price point of $1,000. Everyone will be happy!"
They do not "play with prices"; they set their prices somewhere between respectable (or very high, in Intel's case) profit margins and the maximum price they think their customers will endure. The price their customers will swallow also depends on the price and price/performance of the competition. Intel's earlier release of CPUs up to 12 cores, the fact that they announced prices before AMD, and the subsequent benchmarks made on them allowed AMD to "test the waters" and adjust their prices accordingly. AMD and Intel are a duopoly in their sector, and duopolies always take into account the price of their competitor.
Thanks for the microeconomics lecture. My doubts are related to the validity of the claim that the 1950X was ever going to be an $850 part. As a betting man, the narrative of "gee, I wish this amazing part was $850" seems more likely than "AMD leaked that this part could be valued/sold at $850".
Dude, I found the links doing one search on Google. Some peeps just like to whine and throw the fanboy word around like it's a valid argument before getting some information..... wow, just WOW!
Maybe you need to READ the articles you linked to in your post... the linked articles clearly state 16-core ENTRY level, not flagship/top-end 16-core TR.
Slight correction. Intel's only released up to their 10-core part (i9-7900X) at this point. 12-core and up is the HCC (High-Core-Count) die coming sometime in the fall (rumored October-ish). Till then, AMD's currently got them beat on core count at the high end, even with their lower end Threadripper part.
lol, I strongly encourage u to take a basic business class. Do you honestly believe that if either of the 2 could get away with pricing their CPUs for less, they wouldn't? Why wouldn't they? Do they not want sales? Obviously lower prices mean increased sales, and neither of these 2 giant tech companies is that dumb. Margins on tech in general are extremely small because there is a ton of R&D cost that companies need to recoup in the first year or ASAP, hence prices on new tech are generally higher. That's how it works and that has always been the case. Now as far as pricing, they are priced to recover those costs plus a small or big enough margin where they think they can maximize sales... and most likely that margin is on the low side.
No, uklio, the 1950X was never rumored to be $849; it was the lowest-end 16-core, the 1950, that had that rumor, and looking at pricing it seems to be fairly on point.
I imagine the non-"X" parts (along with the heavily rumored $550 8-core 1900X) will come along at some point later, with lower clocks & XFR, and split the current prices.
There might be some discounts after the initial surge. But even still, at $1000 for 16c/32t, 60+4 PCIe lanes, quad channel RAM? That's a freaking discount.
Of course, the logical counter is that the EPYC 7401P, a 24c/48t CPU, is $1,075, with 128 lanes and even better RAM support. Depending on what your workload is, it could very well be worthwhile to pursue EPYC, despite its slower clock speed.
In order to use EPYC, you'd need to use a server motherboard. Those may not be readily available from regular retail, and there probably will be few models available in general, unless you purchase a complete server.
The server boards may also not include things that a workstation needs, or things that may be desirable in gaming, such as SLI or Crossfire.
Heat production is a consequence of die size, power draw and leakage. A monolithic chip isn't much different from 2 chips that take up the same area. Processor design & manufacturing process are contributing factors also. The biggest problem with a large-area (monolithic) die is cost, since fewer usable chips can be harvested due to defects.
"Heat production is a consequence of die size, power draw and leakage. A monolithic chip isn't much different from 2 chips that take up the same same area"
This may be true, but that doesn't mean cooling performance will be the same.
If you look at a thermal image of a monolithic design, there will be one hot spot right in the middle.
In contrast, a dual die will have two cooler spots, forming an overall bigger but cooler spot.
You will still be limited to how much heat your cooler can remove, but the dual die having heat spread over a larger area will have a significant effect on core temperature.
It wouldn't have much of an effect if the dies were immediately next to each other, but AMD has their dies like half an inch apart. The further apart they are, the less thermal effect the dies will have on each other.
It wouldn't hurt if Threadripper was priced lower.
Epyc's 4 lowest priced SKUs range from $1100 to $475.
While the clock is not as fast, the Epyc CPU has enterprise features and is guaranteed socket-compatible for a few years (i.e. the 7nm, 64-core, 4GHz+ CPU drops in, same TDP as the 7601).
See the GloFo 7nm webpage for the performance improvements from going to 7nm from 14nm, and AMD's videos for the word on server socket compatibility.
Game makers should start targeting these high core count procs ASAP to ensure there is a market for sophisticated games with high end compute requirements. It can only benefit them.
Honestly, I'd just like games to expand beyond 4 cores at all. I doubt we're going to see developers targeting 16-core CPUs, as the percentage of users with those will be very small. I think 6-core utilization is a more practical, but still useful, goal.
With Steam showing 42% of gamers on dual-core systems (most presumably with Hyper-Threading, but the data isn't broken out to show it) and only 4.2% on systems with more than 4 cores, 2 big threads plus miscellaneous small ones, or 4 equal threads, is still probably the sweet spot for optimization. If Ryzen and Intel's upcoming 6-core mainstream CPUs show strong adoption, we might start to see more widespread use of engines that can smoothly scale to larger numbers of cores. Currently it doesn't offer much benefit for the amount of work involved.
Unfortunately, you're right. At least Intel and especially AMD are bringing the cost of more cores down, which will increase adoption of 4+ core systems. My i7-6850K for ~$550 could buy me 8 cores now with an 1800X, or an i7-7800X.
Wait, isn't Steam based mostly on laptops? I bet the i7 skews the data since they are probably crappy U chips. For PC, most people buy the software but use a crack to avoid the DRM hassle and don't use Steam at all. Which is why supporting DRM through Steam is a terrible idea to begin with, like when Keurig tried to DRM its coffee. Anyways, good luck AMD; you may have really good CPUs this time around, but the motherboard support is terrible, esp. Gigabyte and their AM4 soft-brick issue.
Your perception may be slanted, those of us who were gamers ten years ago definitely remember Bioshock and other games with such heavy DRM that it was installing root-kits into people's PCs. DRM isn't heavy-handed anymore BECAUSE of the backlash. I am anti-pirate but also anti-malware from software companies.
I'll have to side with the major general on this one. Most gamers right now are mobile and console, despite growth in the PC gaming market. Even if we cut down to just PC gamers, most don't bother checking forums or gaming news more than occasionally. Many don't play games outside of a few favored series. Most gamers only notice about DRM when it happens in their game and causes problems for them. When their only encounter with DRM is Steam itself, most folks aren't too bothered by it just being a slow to start launcher. It's not that DRM isn't worse for them than the alternative, it's just not their priority and the push back is done by folks who care much more about it than they do. The more successful the vocal side is at keeping DRM out of their games, the less reason they have to join the vocal side.
There's a lot of games on steam that don't enable its copy protection, and steam is often just a convenient update + sync client for things obtained via GOG or Humble Bundles/Humble Store
Yup, almost entirely laptops (with some old desktops as well), and heavily weighted by users in developing countries like Brazil and India with older models. The Steam survey is a terrible representation of just the North American desktop PC market.
It's kind of a chicken-and-egg thing. Game developers will not use more than 4 cores because PC gamers rarely buy CPUs with more than 4 cores, and PC gamers rarely buy CPUs with more than 4 cores because games do not use more than 4 cores. I thought that the adoption of 8-core consoles over the last 1.5 generations would have broken that circle, but it did not. Both main consoles are x86-based, so ports of their games are relatively straightforward. Does this mean that even console games do not use more than 4 cores (maybe because their PC ports would waste them?), or do they, and developers downgrade that support when making the PC ports? What about console-exclusive games?
I think that's a bit excessive. I don't see 8 cores picking up into even most gaming oriented builds. 6 cores is still more reasonable, and is easier to achieve than 8 cores.
I'm not sure whether you mean this is the minimum game developers should be using for their workstations or whether you mean 8 cores is the target they should optimize their games for. If you meant the latter: no excuse? How's this for an excuse?
"It's wasteful to go out of my way optimizing my software for the benefit of an extremely tiny subset of my target audience".
The number of people running 8 or more cores is literally sub-1%. And mind you this survey is inherently skewed towards higher-end-than-average users (gamers on steam who know enough about their computers to report their specs). For reference into how skewed to the high end this is, the most popular GPU in the survey is the GTX 1060.
Developing games for 8C/16T CPUs would be great, but it's not practical because most of the user base is either on mid-tier laptops or dual- & quad-core desktops. If they go high core count too soon, they pretty much remove 3/4 of their sales market. In 3-5 years we will most likely see games and software all being made for higher core count systems. It all takes time to get the ball rolling. Having AMD releasing high core counts, and rumors of Intel doing the same in the consumer market, is a good first step in the right direction for sure.
I agree games must be optimised for 2 threads ideally (but it's a bonus if they scale with more cores), as that can impact their sales (really the bottleneck should be the GPU). The other issue is that a game that can use more than 4 cores can cause issues for streamers who are still using quad-core CPUs (or even Ryzen 6-8 core CPUs, if the game is using all threads).
Before too long (I hope), game engines will be poly-core aware from the ground up so they'll automatically and correctly scale with as many or as few cores as the system has with zero extra configuration. One can dream. It might take an engine written in Rust or Go to actually happen anytime soon.
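For what it's worth, the detect-and-scale part is already simple in most languages; the hard part is making the work divisible. A minimal sketch of the idea (Python for illustration only; a real engine would use a native job system, and update_entity is a made-up placeholder):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def update_entity(entity):
    # Stand-in for per-entity work: animation, pathfinding, etc.
    return entity * 2

entities = list(range(10_000))

# Size the worker pool to whatever the machine offers, 2 cores or 32.
workers = os.cpu_count() or 2
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(update_entity, entities, chunksize=256))
```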
They should be there already, but for different reasons. Considering how weak consoles' cores are, you cannot really get decent performance in many games if you don't properly utilize all available cores - I think both PS4 and X1 will allow up to 7 cores to be available to devs.
It really starts with console gaming. Until consoles graduate to more cores, neither will PC games.
The problem is everything is so GPU-limited that strong CPU performance is irrelevant for most gaming. Where CPUs like this come in handy for typical home use is transcoding, encoding, even just decompressing RARs. The platform itself offers so many PCIe lanes you could definitely utilize it for multi-GPU scenarios as well. This is a really low price for a capable platform.
"Until consoles graduate to more cores, neither will PC games." The two main consoles are both 8-core, and we are talking about PC games limited to 4 cores. How many cores do you think would consoles need for PC games to move beyond 4 cores?
I'm aware, but the original post was about games, so I answered about games. Professional software should also target high core counts, but that wasn't the topic.
My post was not a reply to yours, mind the indentation.
Games are years away from properly utilizing 8 cores. They are just not that complex. You can throw more threads at them, but at this point it will result in a performance loss because of the thread synchronization overhead.
In contrast, HPC workloads scale very well, most of the really time-consuming ones almost perfectly. Gaming is misplaced in HEDT today. You spend a lot of money on hardware you can't take advantage of, and even though it's far more expensive, it will actually perform worse than much more affordable products.
That's not strictly true. GTA V, for example, scales very well with cores/threads. An Intel i9-7900X with 10 cores gives 10fps more than its 4-core i7-7700K cousin when paired with a GTX 1080! Also, all current-gen consoles (PS4/XB1) are 8-core CPUs, so developers are somewhat familiar with optimising code for more cores... just saying.
You're proving his point. You're increasing core count by 2 and a half times and getting 10 extra FPS, which depending on what the baseline framerate was might mean an extra 10-20%? 250% the cores for 10% performance gains is not an example of "scaling well".
Actually this might be one of those times when internet sarcasm is hard to detect and I'm actually being daft while your joke flies over my head.
Well yes, but if not for GTA etc., there wouldn't be so much carnage out there, and surgeons wouldn't need so many powerful imaging computers like these.
Games are going to be optimised for the machines of today, as you are selling the game today for $$$; there's no point optimising it for 3 years away when the game will be in the bargain bucket. Game engines, however, can afford to be a bit more forward-thinking.
It's not an easy undertaking. Splitting tasks into independent threads can be done in some instances, but doesn't make sense in others. And sometimes when you split tasks, the overhead of doing so may cost performance rather than saving anything. Perhaps with the massive amounts of threads we're seeing, newer techniques come to light, but then those might cause trouble on, say 4c8t machines. So you might have to have two code paths, ugly and hard to maintain. It's very complex, never easy.
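One common way to dodge that overhead on small workloads is a grain-size cutoff: stay sequential below a threshold, fan out above it. A rough Python sketch (the threshold is invented and would need tuning per task):

```python
from concurrent.futures import ThreadPoolExecutor

def maybe_parallel_map(fn, items, pool, min_parallel=10_000, chunk=1_000):
    # Below the cutoff, splitting/dispatching/joining costs more than it saves.
    if len(items) < min_parallel:
        return [fn(x) for x in items]
    return list(pool.map(fn, items, chunksize=chunk))

with ThreadPoolExecutor() as pool:
    small = maybe_parallel_map(abs, range(100), pool)      # stays sequential
    large = maybe_parallel_map(abs, range(100_000), pool)  # fans out
```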
My current software compilation machine is a 4C/8T Xeon 1650v3. I'll be happy to ditch it for a Threadripper, should shave hours off my parallel builds. The max RAM supported should also be an improvement over the paltry 32GB/64GB most non-extortionately priced workstations offer (think HP Z2xx/Z4xx class).
An ANSYS user once told me that his ideal workstation would have 1TB RAM. He didn't care about cores, even one would be enough he said, he just wanted RAM, lots of it. Funny how applications can vary so widely in what they need for optimal throughput. Btw, I've seen an After Effects render gobble 40GB RAM in seconds, and a guy at a US movie company told me some of their really heavy renders can pull in 500GB of data for a single frame, which of course really hammers a machine with only 64GB.
Don't even get me started on GIS, or defense imaging systems. :D Years ago this was the sole domain of SGI, producing systems for ERDAS that could handle datasets of hundreds of GB.
I talked to a UK movie company that's working on the necessary tech for an uncompressed 8K workflow in apps like Flame. So far they're tinkering with PCIe SSDs to get about 9GB/sec, but they need more, especially for moving the data around.
There's big compute, there's big data, and a whole world of stuff in between.
Then what's the point of high core chips like Threadripper or i9? They don't have server admin features like on Epyc or Xeon and they're crazy expensive for typical office desktops.
Would Threadripper make sense for scientific workstations, video editing and CAD rigs? As compared to using Epyc or Xeon for these tasks.
"Big memory", much of which involves virtualising nvme striped raid arrays, is a big trend big servers.
AMD seem in the process of introducing it much further down the IT food chain.
epyc has an awesome number of native nvme ports as i recall - 16 maybe?
even the fabric enabled vega frontier gpu has its own nvme ports (& plenty of reserved lanes) & a 1TB striped pair option as a sorta L3 for the vega gpuS hbm2 cache.
amd seem to have restricted vega frontiers use of the nvme array as virtual memory to about 256GB - speed reasons presumably (the published max is 512TB of address space), but its an interesting concept, aye? 256GB+ of gpu workspace on a vega gpu?
A similar virtualising facility seems buried in epyc/Fabric/HBCC, but better hidden or less publicised.
Most have a mindset that its silly to virtualise storage to simulate memory, and indeed it was historically, but storage has changed beyond recognition.
AMD has also focused on the smarts to enhance the virtualisation process.
I wonder what 4x striped samsung 500GB 960 proS would bench at on an epyc mobo? 10GBps sustained? Even much cheaper 128GB nvme drives would yield v fast yet capacious arrays.
So yes, its not dram/vram, & there has to be a perf penalty, but there is to swapping data in and out of workspace too. Epyc handles all that for the app in background.
So maybe your friends, and they are not alone i hear, CAN have ~infinite memory on an affordable AMD HEDT~.
I would love to hear your thoughts on this. apologies for being v tired.
Games can definitely use more cores. The two big areas that come to my mind are physics and AI. Will developers go for it? Probably not, since as someone pointed out, most users are still hanging on to dual-core/hyper-threading machines. In 3-4 years maybe they will, as these machines become pretty much obsolete and 6-8 cores become the norm. Or at least we can hope so.
Physics can and should be accelerated by a co-processor and more advanced AI is likely to also become GPU accelerated. Schemes to allow an IGP to work on these problems in parallel with a rendering GPU would be preferable.
Many, many years ago, Havok Inc., the physics middleware vendor, was in the final stages of releasing an OpenCL branch of their physics engine, to be accelerated by GPUs as well, not only CPUs.
And then Intel went and bought Havok. And we never heard a single peep about that OpenCL version again.
@Imcd Physics I can agree with. I will, however, point out that in such a GPU limited gaming market, it would be better if the physics were not executed on the rendering GPUs. A dedicated physics card, IGP, or excess CPU cores in heavily threaded processors are all viable options. Given the state of the market, with only one (GPU vendor locked) GPU accelerated physics engine and the only commercialized dedicated physics hardware bought out and deprecated by that same vendor, thread heavy CPUs may be the only viable option for reducing GPU load in the short term. I would, however, welcome the return of some dedicated co-processor that doesn't put the load on the rendering GPU. Though, I would prefer some sort of vendor independent standard to avoid lock-in shenanigans.
I don't really see AI as a generically good fit for GPU acceleration. Perhaps crowd AI in RTS games would fit well, but in many games, AI is too branch-heavy to execute optimally on a GPU. I figure AI is a pretty good fit for heavily threaded processors.
Consumer tasks (e.g. gaming, but any task that interacts with a human rather than being queued up well in advance and running at 100% utilization all the time) scale by Amdahl's Law, not Gustafson's Law. There are very diminishing returns to parallelisation, and many tasks cannot be split at all.
The old adage is: bringing a child to term will take 12 months, regardless of the number of women assigned to the task.
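For anyone who wants the actual numbers behind that, Amdahl's Law fits in a few lines of Python (the 70% parallel fraction is an assumption for illustration, arguably generous for a game loop):

```python
def amdahl_speedup(p, n):
    """p = fraction of the work that parallelizes, n = core count."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (4, 8, 16, 32):
    print(f"{cores} cores: {amdahl_speedup(0.70, cores):.2f}x")
# 4: 2.11x, 8: 2.58x, 16: 2.91x, 32: 3.11x -- returns fall off a cliff.
```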
And that's the problem with this thread (pun intended). Frame time is determined by how long the GPU has to spend retrieving resources + how long it takes the GPU to process the resources and render the frame from them. With most major engines supporting DX12 and Vulkan, and greater flexibility in GPU architectures + huge amounts of VRAM, CPUs are becoming comparatively less relevant. Most core count increases will help by keeping OS threads out of cores the game is using.
From a bio of an Aussie criminal: a cellmate had been cuckolded by his wife while he was inside, & he was furious, but it was twins, & he figured they take twice as long, so all was well.
[The old adage is that bringing a child to term is going to take 12 months...]. I think it speeds up to 9 months as soon as you add 1 woman to the parallelism mix!!!!!
@edzieba: "The old adage is: bringing a child to term will take 12 months, regardless of the number of women assigned to the task." If you want to produce more children in a year, throwing more women at it helps: by having 12 of them you get 12 kids in a year (or 9 months, whatever).
So while you can't complete a single kid/task in 1 month, you can complete many more by having more women/cores.
Though I probably got your point: when something really cannot be parallelized and you only need to have one child, having 12 won't help...
Parallelization is a very very hard problem, and in many cases (audio production for example), there are many limitations that can't be overcome, since you must have result A before computing result B. Those 32 core monsters will be only interesting for very few tasks that are naturally very parallelizable, such as many video production tasks. In gaming, I see immediate potential for servers, not for PC games.
Most of us already have 4 threads, even on laptops. If that were enough of a push, we should have many more quad/octa-threaded games. It's just dead hard to make some parts of a game asynchronous.
The core count will be bananas. I'm happy to see AMD finally revving up the competition quite significantly. If they can manage to release Bristol Ridge, I can divert a lot of my mainstream builds for customers their way.
What I'm waiting for is the next Zen revision in general. It seems like they have a hot spot somewhere in their design, which is what prevents higher clocks, because their power efficiency is pretty good. They are far from hitting thermal limits at 4 GHz; something else is holding those cores back.
That's a pretty likely situation. I'd also like to see higher IPC, because they could lower clocks slightly and keep the same performance for some SKUs (i.e. not top end), and that would even further improve power efficiency.
I suspect, given some of their comments, that (among other things) they will be improving IPC. However, it will certainly not be as dramatic as Excavator -> Zen. They will likely deliver IPC improvements reminiscent of Intel's last several generations (5% - 10%). If they claim more than 15%, I'll be simultaneously extremely enthusiastic and deeply skeptical.
Caches (memories in general) take up a rather large area compared to the amount of circuitry that is active at any given time. Though the entire cache may be considered to run at full speed, only a single address per port is accessed at any given time. That said, the interface circuitry (ports) doing the accessing can get warm, and has in the past. I believe Rambus had this heat issue in the complex interface circuitry of their RDRAM on the early P4s.
@ddriver: "Seems like they have a hot spot somewhere in their design, which is what prevents higher clocks, because their power efficiency is pretty good."
It is entirely likely that they have a hot spot that isn't doing them any favors. I haven't checked the extreme overclocking scene lately, but if there is a critical hot spot, then LN2 should cool it sufficiently to allow for some pretty impressive overclocks.
However, it is also possible that there is a critical path with too much capacitance on the lines. There could be too many splits off of a single driver, not enough transmission buffers, improperly sized transmission buffers, or too much (sequential) logic between flip-flops (too many transmission buffers, multiplexers, comparators, gates, etc.), just to name a few. Given the large number of cores accessing the same two memory controllers, and the memory issues AMD has been having, I figure that would be a good place to start checking.
I could be wrong, but I don't think you want to overclock your RAM when completing mission-critical tasks. I don't know how well ECC works with overclocking either, and when you are talking 128GB+ of RAM, chances are you want/need ECC. For the 1800X what you are saying is correct; in this space, I'm not as confident.
You can clock the memory to whatever maximum speed the RAM allows. You're right that you should not overclock RAM for any mission-critical tasks, but that's not the same as saying you can't overclock the interface safely, which gives the performance boost of decreased latency and increased bandwidth on the Infinity Fabric, as those are connected. If you're going for a big-memory build you're limited to DDR4-2400 ECC RAM unless you're willing to spend silly money, as 128GB of the former in a 4x32GB configuration is already about $1,600. If it's less critical, or you generally need less memory, you can use faster-spec'ed RAM and get the mentioned performance boost.
It depends a little on how you see this platform being used. I see it as mainly being a replacement for 1P/2P Xeon workstations. That it can compete in, and basically win in, the high end desktop market as well is just an added bonus.
Yeah, sure, higher margins than server, because we deserve to be robbed. They can't support the platform with those prices.
As it is, only 3 of their SKUs are selling well: the two 6-cores and the 1700; all else needs tuning. The 1800X and 1700X need to be closer to where the cheapest prices are now, and the quads need higher single-core turbo, but then again their quads are dead; they can't sell them now, and Intel has Coffee Lake soon. Nice that AMD made sure to ship Ryzen 3 only when it doesn't matter anymore. Threadripper needs a 16-core at up to $800 and a 12-core below $600.
They got so used to not making any money that they refuse to change course and actually try to sell some products.
Very interesting to have just 2 models, but then again, how many do you really need? Pricing also makes a surprising amount of sense. People are pushed towards the more expensive and powerful CPU as it is better perf/$; plus these prices are low enough to make a lot of sense. In stark contrast with desktop pricing, where the 1600 is incredible perf/$ while the 1800X is far too expensive.
As for IPC, AMD's Cinebench table gives Intel an IPC of 65.7 points/(cores*GHz); the 12C TR gets 57.9 and the 16C gets 56.3. That's about a 15% IPC advantage for Intel here. But this is using just base clocks, so mostly the worst case for AMD.
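Spelling out the arithmetic behind that estimate, using the derived figures above (base clocks only, so take it as a rough worst case for AMD):

```python
points_per_core_ghz = {"Intel": 65.7, "TR 12C": 57.9, "TR 16C": 56.3}

intel = points_per_core_ghz["Intel"]
for name in ("TR 12C", "TR 16C"):
    print(f"Intel vs {name}: {intel / points_per_core_ghz[name] - 1:+.1%}")
# +13.5% and +16.7% -- roughly the "about 15%" quoted above.
```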
I'm no longer interested in Bristol Ridge. I assume it will appear when Raven Ridge appears, as a low end option, at which point it won't be of much interest.
At this point, it's looking more and more like that. Even if it releases RIGHT NOW, a lot of people will probably say "just wait for Raven Ridge when it comes out before the end of the year, it'll be 1000% better"
Excavator's numbers were pretty solid on mobile -- biggest problem though is it's probably 28nm from what I'm seeing around the internet, so it's pretty hard for it to compete.
No word on other core counts or SKUs? I'd be interested in an 8-core or 10-core because I want the PCIe lanes over Ryzen 7/5 but don't want to pay the premium for 12 cores. Plus they might turbo higher than the 12/16-core.
You know it's effectively 2 Ryzen 8-core CPUs under the heat spreader. As they will want to keep the dies the same, you're going to end up with either 2*8=16 or partly disabled 2*6=12.
I think GlobalFoundries' yields are just too high to give you a 10-core CPU - not only do most of the cores always work, most overclock to within 4% of each other! AMD wins!
At $799 and $999 for the 12C and 16C respectively, it sort of tells us that yields are too high for even the 12C variant, and they'd rather sell the 16C, or use the 6-core-capable dies they do have for R5s.
Apart from the number of cores, the SKUs are pretty much identical. Same amount of RAM, same amount of PCIe, same amount of L3 cache. 80% price for 75% of the cores isn't overpriced if you consider that.
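The per-core arithmetic makes the upsell explicit (list prices from this article):

```python
skus = {"1920X": (799, 12), "1950X": (999, 16)}

for name, (price, cores) in skus.items():
    print(f"{name}: ${price / cores:.2f} per core")
# 1920X: $66.58/core vs 1950X: $62.44/core -- the flagship is the better
# per-core deal, which is exactly how you nudge buyers up the stack.
```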
Yes, I suspect they are "wasting" many good cores by addressing many of their price points, like the 1500X.
Still, it illuminates what a fantastic business model they have. They can still extract more value by restricting supply of the wasteful SKUs if the 6 & 8 core models are selling well.
If multi-die packages become more common again (ah, it's the days of the Core 2 Quads all over again!) then these gargantuan packages are bad news for ITX. ASRock have demonstrated that LGA2011-3 and LGA2066 can be crammed onto an ITX board, but even they have baulked at trying to fit Threadripper onto ITX.
To be honest, how many people want a system this powerful crammed into that small of a package? I feel like they would have the room for at least a microATX case and board.
What are you talking about? Ryzen ITX has already been launched, allowing for 8 cores in the ITX format even with the non-optimized chipset. Multi-die packages at the professional level have been common for higher core counts, and since Threadripper is effectively a "prosumer" part this isn't at all surprising, nor is it a worrying trend because it doesn't deviate from existing trends.
I don't expect mobile Threadripper or anything similar. I expect a 4C/8T mobile chip with 8MB of cache and a 2.5-3GHz clock with a TDP of 30-50W for $250. More than enough to make the i7-7700HQ and i5-7300HQ the unreasonable, overpriced choices they are. We just needed an example, and luckily we've been getting a lot of those since March 2nd.
Intel have just gone on a huge AMD slating exercise in their Press Workshop 2017, and one of the things in their presentation was that EPYC was just "4 Glued-together Desktop Die". It'd be wonderfully hypocritical of them to follow suit, and yet I, too, expect it to happen.
You are correct, and that goes for quad cores as well! The delicious irony is that about 10 years back it was Intel offering a 2x2 CPU branded as quad-core, like the legendary Q6600. AMD had the Phenom chips, which were a single monolithic die but were significantly behind in several performance metrics.
Intel looks to have a theoretical TDP advantage between Threadripper and the Core i9-7980XE, but given how poorly the i9-7900X behaves in that regard, I have my doubts. Really, really looking forward to seeing if AMD's power efficiency with Ryzen scales well to this core count!
I seriously doubt Intel's 14/16/18-core parts will come anywhere close to AMD's frequency ranges. I think the 16/32 Intel part will likely be 3.0 - 3.7 at best, if that, and the 18/36 part will likely be 3.0 - 3.4, if not a base frequency under 3.0.
Sustained rumours put the 18C at 2.7GHz base, 3.7GHz all-core boost, and 4.2GHz two-core boost. That's on a 165W package, but unless Intel stops using Colgate as TIM and actually starts soldering the heat spreader on, it will be even more thermally constrained than the 7900X is on its 140W TDP package.
This would be nice for my Android builds... But if an 8-thread configuration takes ~10GB RAM to comfortably build Nougat (I have to kill Xorg to do it with 8GB + a bit of swap thrashing), I suspect 32GB is going to be a bare minimum for any of these chips in a buildbox, preferably 64GB - not that that should be a huge issue for somebody with $1000 to burn on a CPU.
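Scaling that observation up linearly (a back-of-the-envelope estimate from this comment's own numbers, not an official requirement):

```python
gb_per_job = 10 / 8   # ~1.25 GB per compiler job, from "8 threads ~ 10 GB"

for jobs in (8, 16, 24, 32):
    print(f"-j{jobs}: ~{jobs * gb_per_job:.0f} GB")
# -j32: ~40 GB, which is why 32 GB looks like a floor and 64 GB comfortable.
```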
I'm pretty sure at 8GB you're bottlenecked by RAM super hard. I have no idea what your configuration is, but a 6-core Sandy Bridge-E + 24GB RAM is fast enough I can comfortably reserve a core for the rest of my OS while compiling most large projects I've tried within reasonable timeframes. 8GB sticks are quite cheap.
Of course not. If they did that they would handicap their entire operation when Intel stops sending them review samples. This is an old practice at Anandtech, sweep Intel foibles under the rug, lambast AMD at every turn.
Yes, they screwed up the TIM (once again) and there are major heat issues with these chips. Ian just decided that wasn't worth discussing in this piece. Draw your own conclusions on why that might be.
Game developers shouldn't target any specific number of cores. They should instead focus on scalable computing. Games should be able to detect how many cores the system has and use them appropriately. Don't look at a game as being designed for a quad-core system. Games should be developed like workstation software is developed. More cores? Better performance. Period. Up to infinity cores.
Easier said than done. CPUs in games handle the way the world works rather than looks. If that differs too greatly between machines you're essentially talking about different games at that point.
e.g. If AI-controlled NPCs scaled with cores (would be an easy way to accomplish what you're suggesting) the game would literally be harder on a more powerful computer. Just the first example I could think of, I'd love to hear some examples you can think of where this would not be the case.
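The usual dodge is to scale only work that can't change outcomes: simulation stays fixed so every machine plays the same game, while cosmetic systems soak up spare cores. A toy sketch (all numbers invented):

```python
import os

cores = os.cpu_count() or 4

# Gameplay-affecting systems: fixed, so the game is identical everywhere.
NPC_COUNT = 64
AI_TICK_HZ = 10

# Cosmetic-only systems: free to scale with available hardware.
particle_budget = 2_000 * cores
cloth_iterations = min(8, 2 + cores // 4)

print(f"{cores} cores -> {particle_budget} particles, "
      f"{cloth_iterations} cloth passes, NPCs fixed at {NPC_COUNT}")
```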
One thing that really bothers me is that it shows each Ryzen die in the TR has 2x16 PCIe lanes, so why do we only get 24 in the mainstream Ryzen chip? Do all the Ryzen chips actually have 32 PCIe lanes, with 8 lanes just not able to be used? If so, that's annoying. If not, then it looks like TR is using a different die from the mainstream Ryzen chip.
I think this has to do with cost. More PCIe means more pins and a bigger socket. It also means more complex motherboards. That would make mainstream/consumer-oriented Ryzen computers too expensive for the target market. Most people won't use SLI.
With Threadripper, you get 32 PCIe lanes per chip, but no USB support. Ryzen processors using the AM4 socket support four 10 Gbps USB connections. Due to encoding overhead, the actual data rate is reduced to about 9.7 Gbps, but that's still faster than a single PCIe 3.0 lane. My guess is that the Ryzen chip contains four USB controllers, each of which is fed by two PCIe lanes in order to be able to operate at full speed. For Threadripper and EPYC, these PCIe lanes are connected to the socket rather than the USB controllers.
You have a similar situation with the SATA controllers on the Ryzen chip, but these are electrically switchable. The AM4 socket has 24 PCIe lanes and 2 SATA interfaces, but for two of the PCIe lanes, you have to decide whether they should be connected to the socket (in which case the SATA interfaces are disabled) or to the SATA controllers (in which case you have only 22 PCIe lanes going off chip). In EPYC and (I'm guessing) in Threadripper, the socket doesn't include pins for these SATA interfaces, so the SATA controllers are useless and the PCIe lanes are always connected to the socket.
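The 9.7 Gbps figure above is consistent with 128b/132b line encoding (USB 3.1 Gen 2), and the PCIe comparison follows from PCIe 3.0's 128b/130b encoding - a quick check:

```python
usb_payload = 10.0 * 128 / 132   # 10 Gbps raw -> ~9.70 Gbps usable
pcie3_lane = 8.0 * 128 / 130     # 8 GT/s raw  -> ~7.88 Gbps usable

print(f"USB 3.1 Gen 2: {usb_payload:.2f} Gb/s")
print(f"PCIe 3.0 x1:   {pcie3_lane:.2f} Gb/s")
# One 10 Gbps USB port does indeed outrun a single PCIe 3.0 lane.
```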
Still, screw the already over-catered SATA ports; 2x USB 3 ports would do fine, thanks, & let us have a second 4-lane NVMe socket for ~7GB/s RAID NVMe for paged virtual memory.
That is a ridiculous chip for $1000. The whole platform cost is cheaper than Intel too. Essentially you could build a pretty ridiculous PC for <$2500. (16 core CPU, 32GB RAM, X399 motherboard, 512GB SSD, 4TB HDD, GTX1080, and a Windows 10 license)
I'm sort of pissed I just built a ryzen 1800x system and watercooled it. I'd buy threadripper for my workstation but... it's too close to me completing this build. I'M NOT MADE OF MONEY AMD GEEZ!
Any news about APUs? Haven't heard a peep about this product category. I'm in the market for a new laptop and a Ryzen APU would be perfect, as long as TDP issues are handled properly.
According to ark.intel.com, the max turbo frequency of the Core i7-7820X and Core i9-7900X is 4.3 GHz, not the 4.5 GHz listed in the article.
Based on price, the Threadripper 1950X is competing with the i9-7900X. They are both listed as $999, but the i9-7900X price is for quantity 1000, so the 1950X should be a few dollars less than the i9-7900X.
I wonder if the $200 gap between the 12-core and 16-core Threadrippers means that the 10-core and 14-core CPUs will launch at $699 and $899 respectively. So, $100 steps for each 2 additional cores. Sounds, er, reasonable?
Yeah, completely reasonable if you forget 14 and 10 cores aren't reasonable to expect in the first place. Each die needs to have the same number of cores per CCX. Epyc had all dies the same; why wouldn't TR? For TR, the sensible chips are 4C, 8C, 12C and 16C. AMD decided to release just 12C and 16C at this point, I guess because an 8C at a hypothetical $600 would have stiff competition from 8C Intel and could probably lead to lower sales of the TR platform as a whole. I just wonder why there is no 8C Epyc with high clocks and a 180W TDP to better take care of the same niches that 8C and 4C TR would target.
First off, as stated above, there will not be a 10C or 14C version, and I strongly doubt an 8C either.
Yields are simply too high. There's no benefit for AMD in disabling 2 cores on a fully functional 8C die to make 12C chips when they can get a higher profit margin selling them in 6C R5s instead. Basically the pricing is telling you: "Please buy the 1950X... although we know the 1920X actually outperforms the 7900X, the 1920X doesn't actually exist" *waves hand*
They always are on AnandTech. Any price reduction on Intel CPUs is immediately included in all future reviews, while AMD prices are almost always listed at the highest price point of the previous 6 months. AnandTech is just as heavily Intel-biased as Tom's Hardware.
The base frequency is just insane for both processors for such a high core count and especially for the power consumption... Makes Intel's entire "i9" series look very lame especially with so few PCI-E lanes and on top of that no ECC support.
"SHED", eh Ian? :) As a fellow Brit, I'm sure the slang use of that word isn't lost on you if you happened to be telling people your computer 'is a bit of a shed' :D
Shouldn't the original Ryzens be classified as mainstream and Ryzen Threadripper HEDT? It's silly to invent new classes just because at long last, the mainstream is getting more cores.
A thought that bears on many of the gaming comments, imo, is the sheer impossibility of gamers ever being content.
i.e., every time resolution increments, grunt needed almost squares it seems :).
I think that's the attraction. They hate gaming, but must act keen in order to have an excuse for a whomper computer. It's not like you can brag about your word processor anymore.
AMD is smart. They priced it high enough to prevent Intel from slashing their prices, because AMD knows that most gamers will buy the Intel chip if it's halfway close to the AMD in cores and performance. AMD is giving a solid 40% discount here; you just have to deal with an inferior chipset.
In particular, this already requires 40 address bits, before getting into virtual, rather than physical, memory.
The original AMD64 specification had the MMU set up such that any address where the upper 16 bits aren't all the same would be an automatic page fault. So basically in every 64-bit OS, 0xFFFF000000000000-0xFFFFFFFFFFFFFFFF is the kernel space, and 0x0000000000000000-0x0000FFFFFFFFFFFF is user space, with anything else being invalid -- effectively only 48-bit addressing (256TB of virtual address space) is possible.
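In code, the canonical-address rule is just a sign-extension check on bit 47 (strictly, the spec requires bits 63:48 to copy bit 47, so the user half tops out at 0x00007FFFFFFFFFFF). A sketch of the rule itself, not any OS's actual implementation:

```python
def is_canonical_48bit(addr: int) -> bool:
    """Bits 63..47 of a 64-bit address must all match (the upper bits
    'sign-extend' bit 47), leaving a low user half and a high kernel half."""
    top = (addr >> 47) & 0x1FFFF   # the 17 bits that must agree
    return top == 0 or top == 0x1FFFF

assert is_canonical_48bit(0x00007FFFFFFFFFFF)      # top of user space
assert is_canonical_48bit(0xFFFF800000000000)      # bottom of kernel space
assert not is_canonical_48bit(0x0000800000000000)  # the non-canonical hole
```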
Will this be changing in the next 5-10 years, with multi-socket servers starting to approach this limit?
"The original AMD64 specification had the MMU set up such that any address where the upper 16 bits aren't all the same would be an automatic page fault. So basically in every 64-bit OS, 0xFFFF000000000000-0xFFFFFFFFFFFFFFFF is the kernel space, and 0x0000000000000000-0x0000FFFFFFFFFFF is user space, with anything else being invalid -- effectively only 48-bit addressing (256TB virtual address space) is possible."
When they did the AMD64 spec, there was no point in putting all 64 memory address lines in hardware. You couldn't buy 16 exabytes of memory, so putting in the extra transistors to handle it would have been a waste of die space. So they chose to just implement 48 bits of space, and hardwire the upper 16 to 1s.
This is a normal thing to do. Intel did it in the x86 spec as well. The first 32-bit processors did not support 32 bits in hardware for memory. I think they started with 24, but I can't remember. The upper 8 bits left over were just set to 1s. Later on they implemented the additional hardware to support the rest of the bits. AMD just extended that practice with the AMD64 x86 extension.
Any address >48 bits is a fault, because there is no hardware to connect more than 48 bits' worth. The same is true of Intel chips, except I think they only do 40 or 44 bits currently, not 48. This may have changed with Skylake or Kaby Lake. It's been a while since I looked up these metrics, but it was definitely less than 48 in Intel's implementation of AMD64 when they first started to adopt it.
This is fine because you can't buy that much memory anyway. Remember this is a physical RAM limit, not a virtual RAM limit. However, there will also be virtual limits, because it makes no sense to allocate a page table for the entire 64-bit address space when you can't use that much. All you would do is slow down page lookups, and waste RAM doing it.
These limits should be pretty easy to increase. For hardware you just add in the missing bit lines, and change the fault bit mask to check fewer bits. I think it's likely they go to 56 bits next, not the full 64, though they could do 52 bits next, which covers 4 petabytes. Increases are already in the spec, I believe (it's been many years since I looked at it, but I seem to remember it).
Right now the most you can do anyway is 2 terabytes on Epyc, so only 41 bits are necessary; still got a ways to go before an increase in hardware is needed. Probably good till 2025-2030 for servers, and for desktop we're good for..... I dunno, maybe 2040.
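Quick arithmetic on those limits:

```python
import math

TB = 2**40
print(math.ceil(math.log2(2 * TB)))  # 41 bits for today's 2 TB Epyc ceiling
print(2**48 // TB, "TB")             # 256 TB reachable with 48-bit addressing
print(2**52 // 2**50, "PB")          # 4 PB with a hypothetical 52-bit step
```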
"Ryzen is ok if u dont want raid nvme - u can have 2 x nvme using onboard x370 m.2 ports, but one is fast and the other, very fast."
Err, you realize that RAID NVMe makes no sense on Intel, right? The M.2 NVMe slots are connected to the chipset. The chipset has the equivalent of 4x PCIe 3.0 lanes of bandwidth to the CPU, and that's shared by everything connected to the chipset: SATA, USB, sound, network, everything else shares it. A single NVMe drive can saturate a 4x link. If you try to do RAID 0 with 2 NVMe drives, you would effectively slow each drive down to 2x speed when you used them. There isn't enough bandwidth to do RAID NVMe. I mean, you can do it, just don't expect the speed increase you would normally get from RAID.
I do not know if any of the X299 boards have connected 2 M.2 NVMe slots directly to the CPU. There are enough PCIe lanes on Skylake-X to do it. But for compatibility reasons, because they have to support 16-, 28- and 44-lane configs, they will likely connect all the M.2 slots to the chipset, since there isn't enough room to do it on the Kaby Lake-X version.
2 NVMe drives on Ryzen vs Kaby Lake: On Ryzen, you have 1 connected directly to the CPU with a full 4x link that it does not share. The 2nd one is connected to the chipset, likely at 4x PCIe 2.0 speed (2x 3.0 equivalent), or half speed. On Kaby Lake you have 2 drives connected to the chipset, electrically 4x PCIe 3.0, but they have to share the equivalent of 4x PCIe 3.0 lanes to the CPU.
For Ryzen, that means you can access 1 drive at full speed, and 1 at half speed (or a bit more, depending on the drive). But you get that speed full-time, regardless of whether you access 1 drive or both drives at the same time.
For Kaby Lake, that means you can access either drive at full speed by itself, but if you try to use both at the same time, both run at half speed, since they have to share the single 4x link to the CPU.
This assumes you also aren't maxing out USB ports at the same time. If you do that on Kaby Lake, they also share the 4x link to the CPU, which means it will slow down either drive if used at the same time. If you do this on Ryzen, some of the USB ports have their own link to the CPU, so if you used those it wouldn't slow either drive down. Other ports are on the chipset and would share with the 2nd drive. However, the chipset link is 4x and the 2nd drive is likely at 2x, so there is plenty of bandwidth to do a lot of USB at the same time before you start slowing down the 2nd drive.
In most workloads you wouldn't notice a difference between either platform, though. Normally you don't use everything flat-out at the same time.
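Rough numbers behind the sharing argument (line-encoding overhead only; the 3.5 GB/s drive figure is an assumption for a fast PCIe 3.0 x4 SSD):

```python
pcie3_lane = 8e9 * 128 / 130 / 8 / 1e9   # ~0.985 GB/s usable per PCIe 3.0 lane
dmi_uplink = 4 * pcie3_lane              # chipset-to-CPU link: ~3.94 GB/s

drive = 3.5                              # GB/s, assumed per NVMe drive
raid0_ceiling = min(2 * drive, dmi_uplink)

print(f"uplink {dmi_uplink:.2f} GB/s, RAID 0 ceiling {raid0_ceiling:.2f} GB/s")
# Two ~3.5 GB/s drives behind the chipset still top out at ~3.94 GB/s combined.
```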
Yes, I was aware of Intel's chipset deviousness, but thanks for documenting this amazingly neglected Intel gotcha - oh, & btw, you can only use ~one device at a time. It's like a house that only allows the use of one water tap at a time.
I was simply saying that your best option w/ Ryzen is a 16-lane GPU & my recommended 2x NVMe M.2 ports onboard the mobo, and it's not a bad option - one NVMe at ~3x the best SATA SSD speed, and one NVMe at 5-7x the best SATA SSD speed.
Intel's chipset ports, as you describe, certainly preclude RAID 0 PCIe 3.0 NVMe. Using a top SSD as it should be used, on the Intel onboard M.2 port, would max out the entire chipset's 4-lane bandwidth.
Yet they claim you can connect 2 such devices.
A notable rule of thumb w/ Intel is that only $1k+ CPUs offer 40+ PCIe 3.0 lanes. Some only offer 16 lanes - the scoundrels - what a dead-end PC.
I would be interested in your take on the Intel non-chipset architecture also. It seems suspiciously similar, using cumbersome (multi-hop data path, vs AMD) switches and crossbars on the CPU IO lanes - sharing limited bandwidth among many ports?
I can't believe it's legal to tell such ~lies~ (even in the pinko EU, it seems) as Intel do with their 4-lane chipset - seeming to promise an endlessly expandable PC to newbs.
Ok. Great. 16 cores and 32 threads. Not worth anything for gamers, so who is the target audience for threadripper? Encoders? Renderers? People who feel inadequate?
No question that it is for higher end users and I don't see it as a good choice for a mostly gamer. I am quite interested in the 12 core for running Adobe content creation projects (video and photos) and still have some ability to do minor background tasks.
It is also helpful for me that it has the ability to handle a bunch of cards so I won't have any issue with my Adaptec 8805/Intel X520DA2/and several other cards that hog PCIe lanes.
And yup, between projects there may be some gaming that it will handle without a problem but that's not even close to critical for me.
Of course, I will wait for the reviews but I'm pretty excited about the possibilities here.
Video editing, software-based rendering, engineers and designers, developers working on very large software projects. The market that used to buy SGI and Sun workstations back in the day.
2 points, PixyMisa, for just mentioning SGI. :D Sun didn't serve quite the same markets, as they never had the equivalent 3D/visualisation product line. Sun were stronger in business markets, though SGI did do well there too in some cases.
Anyway, this is why the strong I/O of TR (and Naples) is so interesting. SGI's traditional markets focused on big data, where it wasn't so much how fast one could process data as how much (or as John Mashey once put it, "It's the bandwidth, stupid!"). SGI scaled its I/O to tens of GB/sec using vastly parallel FC and scalable shared-memory NUMA, which even with the earlier Onyx2 model could handle large data sets (e.g. in the early 2000s, the 64-CPU Group Station for Defense Imaging could load and display a 67GByte 2D image in less than 2 seconds, a sustained I/O rate of 40GB/sec), but the systems to do this were very expensive and could easily fill a machine room (I used to run a 16-CPU system with 6 gfx pipes for driving a RealityCentre and CAVE). Medical, GIS, defense, aerospace, auto, etc., they all generate huge amounts of data. Finally having a platform that can support such complex loads even at the desktop level will be a boon. It's a pity SGI has effectively gone; it would have been so funny if they'd released a Naples-based server product to succeed UV.
It's almost a 1P Epyc with a decent clock speed - i.e., they pulled off pairing 2x Zeppelin dies w/o too much latency creeping in.
That's a nice niche.
Just saying, but there has long been a ~prestige/HEDT/workstation market for rigs with $1k CPUs & $1k GPUs, so what about a $2k monster APU? That's a tasty sale for AMD.
As I figure it, a TR MCM has space for 8 cores and 2x Vega GPUs, and could extend to "adjacent" HBM2 cache & NVMe on the MCM - all on the very impressive Infinity Fabric.
Fabric is fundamentally about maintaining coherency between teamed processors, and there is no reason to think it won't solve those ~Crossfire-type problems for GPUs as it has for CPUs.
They are forced to produce a die w/ a single Zen 4-core CCX and a single Vega GPU for the economy desktop APU and mobile market, but I have a feeling that beyond that, their heart is in a 2x Vega die, like Zeppelin, and we will see Epyc-like MCMs w/ 16 cores & 4x GPUs (or 2x Zeppelin dies & 2x 2-GPU dies).
I'm curious whether any company will make a good microATX X399 motherboard... but looking at the status of AM4, where there's still no "good-enough" microATX board, we may have to wait a long time...
@Ian Cutress: "Up until this point, we knew a few things – Threadripper would consist of two Zeppelin dies featuring AMD’s latest Zen core and microarchitecture, and would essentially double up on the HEDT Ryzen launch. Double dies means double pretty much everything: Threadripper would support up to 16 cores, up to 32 MB of L3 cache, quad-channel memory support, and would require a new socket/motherboard platform ...".
Incorrect, Threadripper is made from the Epyc CPU.
nathanddrews - Thursday, July 13, 2017 - link
I'm glad to see the $999 being the top, but I was hoping it would be $799 for the 16/32 model.nathanddrews - Thursday, July 13, 2017 - link
...as opposed to $1299 or $1499 as some rumors stated...uklio - Thursday, July 13, 2017 - link
Actually Threadripper 1950X was rumoured to debut at $849 but i'm guessing the reason it is now $999 is because Intel played its hand early with the i9 7900X at $999 which then made AMD say, "hell we've got a 12-core part that keeps up with this thing. We'll slightly undercut the 7900X with that and price our flagship at the notorious intel extreme edition price point of $1000. Everyone will be happy!"willis936 - Thursday, July 13, 2017 - link
Rumored by who? The fanboys lving in their fantasy world? Neither company has room to play with prices right now.Santoval - Friday, July 14, 2017 - link
They do not "play with prices", they set their prices somewhere between respectable (or very high, in Intel's case) profit margins and the maximum price they think their customers will endure. Thje price their customers will swallow also depends on the price and price/performance of the competition. Intel's earlier release of CPUs up to 12 cores, the fact that they announced prices before AMD and the subsequent benchmarks that were made on them, allowed AMD to "test the waters" and adjust their prices accordingly. AMD and Intel are a duopoly in their sector, and duopolies always take into account the price of their competitor.willis936 - Friday, July 14, 2017 - link
Thanks for the microeconomics lecture. My doubts are related to the validity of the claim that the 1950X was ever going to be an $850 part. As a betting man the narrative of "gee I wish this amazing part was $850" is more likely than "AMD leaked that this part could be valued/sold at $850".Galid - Friday, July 14, 2017 - link
http://wccftech.com/amd-ryzen-threadripper-16-core...https://www.techpowerup.com/234114/amds-entry-leve...
Dude, I found the links doing 1 search on google.
Some peeps just like to whine and throw the fanboy word around like it's a valid argument
before getting some information..... waw, just WAW!
Freebyrd26 - Saturday, July 15, 2017 - link
Maybe you need to READ the articles you linked to in your post... the linked articles clearly state 16-core ENTRY level; not Flag-ship/top-end 16-core TR.Cooe - Friday, July 14, 2017 - link
Slight correction. Intel's only released up to their 10-core part (i9-7900X) at this point. 12-core and up is the HCC (High-Core-Count) die coming sometime in the fall (rumored October-ish). Till then, AMD's currently got them beat on core count at the high end, even with their lower end Threadripper part.Hxx - Tuesday, July 18, 2017 - link
lol i strongly encourage u to take a basis business class. Do you honestly believe that if any one of the 2 can get away with pricing their CPUs for less they won't? Why wouldnt they? Do they now want sales? obviously lower prices mean increased sales and neither of these 2 giant tech companies are that dumb. Margins on tech in general are extremely small because there is a ton of R&D costs that companies need recoup in the first year or ASAP hence prices on new tech are generally higher. That's how it works and that has always been the case. Now as far as pricing, they are priced to recover those costs and a small or big enough margin where they think they can maximize those sales...and most likely that margin is on the low side.FreckledTrout - Friday, July 14, 2017 - link
No Ukilo the 1950x was never rumored to be $849 is was the lowest end 16 core the 1950 that had that rumor and looking at pricing it seems to be fairly on point.Cooe - Friday, July 14, 2017 - link
I imagine the non-"X" parts (along with the heavily rumored $550 8-core 1900X) to come along at some point later with lower clocks & XFR, and split the current prices.johnpombrio - Thursday, July 13, 2017 - link
There may be lower clocked versions that will be cheaper.bill.rookard - Thursday, July 13, 2017 - link
There might be some discounts after the initial surge. But even still, at $1000 for 16c/32t, 60+4 PCIe lanes, quad channel RAM? That's a freaking discount.nathanddrews - Thursday, July 13, 2017 - link
Of course, the logical argument is that EPYC 7401P 24c/48t CPU is $1,075. 128 lanes and even better RAM support. Depending on what your workload is, it could very well be worthwhile to pursue EPYC, despite its slower clock speed.ERJ - Thursday, July 13, 2017 - link
Although you are correct, for most desktop / workstation tasks the 2ghz 24 cores will not outpace 3.4ghz w/ 16 cores.deil - Friday, July 14, 2017 - link
tell me 2 years ago I will get 16 cores for 1k I would tell you're insane.Hul8 - Sunday, July 16, 2017 - link
In order to use EPYC, you'd need to use a server motherboard. Those may not be readily available from regular retail, and there probably will be few models available in general, unless you purchase a complete server.The server boards may also not include things that a workstation needs, or things that may be desirable in gaming, such as SLI or Crossfire.
deil - Friday, July 14, 2017 - link
I just hope it will not overheat like they say about intel big boysnathanddrews - Friday, July 14, 2017 - link
Pretty sure Intel's heat problem is their unsoldered IHS.msroadkill612 - Friday, July 14, 2017 - link
Yes, but heat is also an inherent problem in monolithic chips.Intel will suffer disproportionately beyond 10-12 core.
Freebyrd26 - Saturday, July 15, 2017 - link
Heat production is a consequence of die size, power draw and leakage. A monolithic chip isn't much different from 2 chips that take up the same same area. Processor design & manufacturing process are contributing factors also. The biggest problem with a large area size die (monolithic) chip is cost, since fewer usable chips are able to be harvested due to defects.ddriver - Friday, July 21, 2017 - link
"Heat production is a consequence of die size, power draw and leakage. A monolithic chip isn't much different from 2 chips that take up the same same area"This may be true, but that doesn't mean cooling performance will be the same.
If you look at a thermal image of a monolithic design, there will be one hot spot right in the middle.
In contrast, a dual die will have two cooler spots, forming an overall bigger but cooler spot.
You will still be limited to how much heat your cooler can remove, but the dual die having heat spread over a larger area will have a significant effect on core temperature.
It wouldn't have much of an effect if the dies were immediately next to each other, but AMD has their dies like half an inch apart. The further apart they are, the less thermal effect the dies will have on each other.
Rοb - Saturday, July 22, 2017 - link
It wouldn't hurt if Threadripper was priced lower. Epyc's 4 lowest-priced SKUs range from $1,100 to $475.
While the clock is not as fast, the Epyc CPU has enterprise features and is guaranteed socket-compatible for a few years (i.e. the 7nm, 64-core, 4GHz+ CPU drops in, same TDP as the 7601).
See the GloFo 7nm webpage for the performance improvements from going to 7nm from 14nm, and AMD's videos for the word on server socket compatibility.
Rickyxds - Monday, July 24, 2017 - link
You could have been born in Brazil. Threadripper will cost here about $1,800 or more!
See the example:
Ryzen 7 1800X in Brazil: R$2,235
One dollar is R$3.14
Current Ryzen 7 1800X price: $420
420 x 3.14 = R$1,318.80
2,235 - 1,318.80 = R$916.20 difference
Difference in dollars: $291.78
Average Brazilian salary: R$2,000, or $636.90
Such a joke
prisonerX - Thursday, July 13, 2017 - link
Game makers should start targeting these high core count procs ASAP to ensure there is a market for sophisticated games with high end compute requirements. It can only benefit them.
MajGenRelativity - Thursday, July 13, 2017 - link
Honestly, I'd just like games to expand beyond 4 cores at all. I doubt we're going to see developers targeting 16-core CPUs, as the percentage of users with those will be very small. I think 6-core utilization is a more practical, but still useful, goal.
DanNeely - Thursday, July 13, 2017 - link
With Steam showing 42% of gamers on dual core systems (most presumably with hyperthreading, but not readily broken out to show) and only 4.2% on systems with more than 4 cores, 2 big threads and misc small ones, or 4 equal threads, is still probably the sweet spot for optimization. If Ryzen and Intel's upcoming 6-core mainstream CPUs show strong adoption we might start to see more widespread use of engines that can smoothly scale to larger numbers of cores. Currently it doesn't offer much benefit for the amount of work involved.
MajGenRelativity - Thursday, July 13, 2017 - link
Unfortunately, you're right. At least Intel and especially AMD are bringing the cost of more cores down, which will increase adoption of 4+ core systems. My i7-6850K for ~$550 could buy me 8 cores now with an 1800X, or an i7-7800X.
adobepro - Thursday, July 13, 2017 - link
Wait, isn't Steam based mostly on laptops? I bet the i7 skews the data since they are probably crappy U chips. For PC, most people buy the software but use a crack to avoid the DRM hassle and don't use Steam at all. Which is why supporting DRM through Steam is a terrible idea to begin with, like when Keurig tried to DRM its coffee. Anyways, good luck AMD, you may have really good CPUs this time around, but the motherboard support is terrible, esp. with Gigabyte and their AM4 soft brick issue.
I can fairly confidently say most gamers would not know what DRM is, let alone how to get around it.
fanofanand - Thursday, July 13, 2017 - link
Your perception may be slanted; those of us who were gamers ten years ago definitely remember Bioshock and other games with DRM so heavy it was installing rootkits on people's PCs. DRM isn't heavy-handed anymore BECAUSE of the backlash. I am anti-pirate but also anti-malware from software companies.
desolation0 - Thursday, July 13, 2017 - link
I'll have to side with the major general on this one. Most gamers right now are mobile and console, despite growth in the PC gaming market. Even if we cut down to just PC gamers, most don't bother checking forums or gaming news more than occasionally. Many don't play games outside of a few favored series. Most gamers only notice DRM when it happens in their game and causes problems for them. When their only encounter with DRM is Steam itself, most folks aren't too bothered by it just being a slow-to-start launcher. It's not that DRM isn't worse for them than the alternative, it's just not their priority, and the pushback is done by folks who care much more about it than they do. The more successful the vocal side is at keeping DRM out of their games, the less reason they have to join the vocal side.
lmcd - Thursday, July 13, 2017 - link
There's a lot of games on Steam that don't enable its copy protection, and Steam is often just a convenient update + sync client for things obtained via GOG or Humble Bundles/Humble Store.
Cooe - Friday, July 14, 2017 - link
Yup, almost entirely laptops (with some old desktops as well), and heavily weighted by users in developing countries like Brazil and India with older models. The Steam survey is a terrible representation of just the North American desktop PC market.
Santoval - Friday, July 14, 2017 - link
It's kind of a chicken and egg thing. Game developers will not use more than 4 cores because PC gamers rarely buy CPUs with more than 4 cores, and PC gamers rarely buy CPUs with more than 4 cores because games do not use more than 4 cores. I thought that the adoption of 8-core consoles over the last 1.5 generations would have broken that circle, but it did not. Both main consoles are x86 based, so their ports to PCs are relatively straightforward. Does this mean that even console games do not use more than 4 cores (maybe because their PC ports would waste them?), or do they, and developers downgrade that support when making the PC ports? What about console exclusive games?
Krysto - Thursday, July 13, 2017 - link
8-core/16-thread should be the MINIMUM in all new game development right now. No excuses.
MajGenRelativity - Thursday, July 13, 2017 - link
I think that's a bit excessive. I don't see 8 cores making it into even most gaming-oriented builds. 6 cores is still more reasonable, and is easier to achieve than 8 cores.
OceanJP - Thursday, July 13, 2017 - link
I'm not sure whether you mean this is the minimum game developers should be using for their workstations or whether you mean 8 cores is the target they should optimize their games for. If you meant the latter: no excuse? How's this for an excuse? "It's wasteful to go out of my way optimizing my software for the benefit of an extremely tiny subset of my target audience."
Seriously, look at the latest steam survey: http://store.steampowered.com/hwsurvey/cpus/
The number of people running 8 or more cores is literally sub-1%. And mind you, this survey is inherently skewed towards higher-end-than-average users (gamers on Steam who know enough about their computers to report their specs). For a sense of how skewed to the high end this is, the most popular GPU in the survey is the GTX 1060.
RadiatingLight - Thursday, July 13, 2017 - link
Actually, the Steam survey auto-detects your hardware, so gamers don't need to know anything about their systems.
Ammaross - Thursday, July 13, 2017 - link
Or know enough to opt out during install.
msroadkill612 - Monday, July 17, 2017 - link
A way of looking at it is to aim for four non-distracted cores. Sure, the real-time nature of gaming limits threading, it seems, but clearly it also has some benefits.
In the real world, many gamers have the equivalent of 2 cores of intermittent distractions happening, knowingly or not, arguably of course.
It bears out in the many comments on the smoothness of gaming on Ryzen's 8/6 cores.
lmcd - Thursday, July 13, 2017 - link
Or, you know, you could use DX12 + Vulkan and barely touch a 4th thread, let alone a 4th core.
msroadkill612 - Monday, July 17, 2017 - link
How practical is that? Can all games run using it? Are you saying DX12 & Vulkan mean you don't need as fancy hardware to game?
rocky12345 - Friday, July 14, 2017 - link
8C/16T game development would be great but not practical, because most of the user base is either on mid-tier laptops or dual & quad core desktops. If they go to high core counts too soon they pretty much remove 3/4 of their sales market. Now, in 3-5 years we will most likely see games and software all being made for higher core count systems. It all takes time to get the ball rolling. Having AMD releasing high core counts, and Intel rumored to be doing the same in the consumer market, is a good first step in the right direction for sure.
leexgx - Monday, July 24, 2017 - link
I agree games must ideally be optimised for 2 threads (but it's a bonus if they scale with more cores), as that can impact their sales (really the bottleneck should be the GPU). The other issue: if the game can use more than 4 cores, it can cause problems for streamers who are still on quad core CPUs (or even Ryzen 6-8 core CPUs if the game is using all threads).
BreakArms - Thursday, July 13, 2017 - link
Before too long (I hope), game engines will be poly-core aware from the ground up so they'll automatically and correctly scale with as many or as few cores as the system has with zero extra configuration. One can dream. It might take an engine written in Rust or Go to actually happen anytime soon.
MajGenRelativity - Thursday, July 13, 2017 - link
That's simply not possible in almost every case. Some tasks cannot be infinitely, or even reasonably, parallelized.
lmcd - Thursday, July 13, 2017 - link
It's also pointless with Vulkan and DX12 for the most part.
lmcd - Thursday, July 13, 2017 - link
Why? DX12 and Vulkan are here to reduce CPU utilization anyway!
tipoo - Thursday, July 13, 2017 - link
https://en.wikipedia.org/wiki/Amdahl%27s_law
nikon133 - Thursday, July 13, 2017 - link
They should be there already, but for different reasons. Considering how weak consoles' cores are, you cannot really get decent performance in many games if you don't properly utilize all available cores - I think both PS4 and X1 will allow up to 7 cores to be available to devs.
Samus - Thursday, July 13, 2017 - link
It really starts with console gaming. Until consoles graduate to more cores, neither will PC games. The problem is everything is so GPU limited that strong CPU performance is irrelevant for most gaming. Where CPUs like this come in handy for typical home use is transcoding, encoding, even just decompressing RARs. The platform itself offers so many PCIe lanes you could definitely utilize it for multi-GPU scenarios as well. This is a really low price for a capable platform.
Santoval - Friday, July 14, 2017 - link
"Until consoles graduate to more cores, neither will PC games."The two main consoles are both 8-core, and we are talking about PC games limited to 4 cores. How many cores do you think would consoles need for PC games to move beyond 4 cores?
Hurr Durr - Friday, July 14, 2017 - link
About 32, if we go by IPC.
ddriver - Thursday, July 13, 2017 - link
It is not all about games, you know. People actually use computers to do work and such.
MajGenRelativity - Thursday, July 13, 2017 - link
I'm aware, but the original post was about games, so I answered about games. Professional software should also target high core counts, but that wasn't the topic.
ddriver - Thursday, July 13, 2017 - link
My post was not a reply to yours; mind the indentation. Games are years away from properly utilizing 8 cores. They are just not that complex. You can throw more threads at them, but at this point it will result in a performance loss, because of the thread synchronization overhead.
In contrast, HPC workloads scale very well, most of the really time-consuming ones almost perfectly. Gaming is misplaced in HEDT today. You spend a lot of money on hardware you can't take advantage of, and even though it is far more expensive, it will actually perform worse than much more affordable products.
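The synchronization-overhead point is easy to demonstrate. A minimal sketch (Python; the iteration count and thread count are arbitrary assumptions, and Python's GIL exaggerates the effect, but lock contention penalizes fine-grained sharing in any language):

```python
import threading, time

N = 2_000_000
counter = 0
lock = threading.Lock()

def work(iterations):
    # Every increment grabs a shared lock - worst-case synchronization.
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

# Single thread doing all the work
start = time.perf_counter()
work(N)
t1 = time.perf_counter() - start

# Same total work split across 8 threads contending for one lock
counter = 0
threads = [threading.Thread(target=work, args=(N // 8,)) for _ in range(8)]
start = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
t8 = time.perf_counter() - start

print(f"1 thread: {t1:.2f}s, 8 threads: {t8:.2f}s")  # 8 threads is often slower
```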
MajGenRelativity - Thursday, July 13, 2017 - link
My apologies, I missed the indentation.
uklio - Thursday, July 13, 2017 - link
That's not strictly true. GTA V for example scales very well with cores/threads. An Intel i9-7900X with 10 cores gives 10fps more than its 4-core i7-7700K cousin when paired with a GTX 1080! Also, all current gen consoles (PS4/XB1) have 8-core CPUs, so developers are somewhat familiar with optimising code for more cores... just saying.
OceanJP - Thursday, July 13, 2017 - link
You're proving his point. You're increasing core count by two and a half times and getting 10 extra FPS, which, depending on what the baseline framerate was, might mean an extra 10-20%? 250% the cores for 10% performance gains is not an example of "scaling well". Actually this might be one of those times when internet sarcasm is hard to detect and I'm actually being daft while your joke flies over my head.
lmcd - Thursday, July 13, 2017 - link
The 10FPS is relevant for hitting 120FPS, possibly. It's all pretty silly though; it's probably mostly poor OS scheduling that creates the difference, not any scaling built into the game itself.
msroadkill612 - Monday, July 17, 2017 - link
Well yes, but if not for GTA etc., there wouldn't be so much carnage out there, and surgeons wouldn't need so many powerful imaging computers like these.
Dribble - Thursday, July 13, 2017 - link
Games are going to be optimised for machines today, as you are selling the game today for $$$; no point optimising it for 3 years away when the game is in the bargain bucket. Game engines, however, can afford to be a bit more forward-thinking.
hansmuff - Thursday, July 13, 2017 - link
It's not an easy undertaking. Splitting tasks into independent threads can be done in some instances, but doesn't make sense in others. And sometimes when you split tasks, the overhead of doing so may cost performance rather than saving anything. Perhaps with the massive amounts of threads we're seeing, newer techniques come to light, but then those might cause trouble on, say, 4c8t machines. So you might have to have two code paths: ugly and hard to maintain. It's very complex, never easy.
MajGenRelativity - Thursday, July 13, 2017 - link
Exactly. My Java book had a chapter on multithreading, and I gave up on trying to learn it until I was ready to try it XD
fazalmajid - Thursday, July 13, 2017 - link
My current software compilation machine is a 4C/8T Xeon 1650v3. I'll be happy to ditch it for a Threadripper; it should shave hours off my parallel builds. The max RAM supported should also be an improvement over the paltry 32GB/64GB most non-extortionately priced workstations offer (think HP Z2xx/Z4xx class).
lmcd - Thursday, July 13, 2017 - link
I was going to comment on how jumping to 8 with Ryzen was sufficient, then I read the "paltry 32GB/64GB" :)
If you don't mind me asking, what do you build that takes quite so long?
mapesdhs - Thursday, July 13, 2017 - link
An ANSYS user once told me that his ideal workstation would have 1TB RAM. He didn't care about cores - even one would be enough, he said - he just wanted RAM, lots of it. Funny how applications can vary so widely in what they need for optimal throughput. Btw, I've seen an After Effects render gobble 40GB RAM in seconds, and a guy at a US movie company told me some of their really heavy renders can pull in 500GB of data for a single frame, which of course really hammers a machine with only 64GB. Don't even get me started on GIS, or defense imaging systems. :D Years ago this was the sole domain of SGI, producing systems for ERDAS that could handle datasets of hundreds of GB:
http://www.hexagongeospatial.com/products/power-po...
I talked to a UK movie company that's working on the necessary tech for uncompressed 8K workflow in apps like Flame. So far they're meddling with PCIe SSDs to get about 9GB/sec, but they need more, especially for moving the data around.
There's big compute, there's big data, and a whole world of stuff in between.
Ian.
serendip - Thursday, July 13, 2017 - link
Then what's the point of high core count chips like Threadripper or i9? They don't have server admin features like on Epyc or Xeon, and they're crazy expensive for typical office desktops.
Would Threadripper make sense for scientific workstations, video editing and CAD rigs, as compared to using Epyc or Xeon for these tasks?
msroadkill612 - Friday, July 14, 2017 - link
Ian, you are in a zone that fascinates me. "Big memory", much of which involves virtualising striped NVMe RAID arrays, is a big trend in big servers.
AMD seem to be in the process of introducing it much further down the IT food chain.
Epyc has an awesome number of native NVMe ports as I recall - 16 maybe?
Even the Fabric-enabled Vega Frontier GPU has its own NVMe ports (& plenty of reserved lanes) & a 1TB striped-pair option as a sort of L3 for the Vega GPU's HBM2 cache.
AMD seem to have restricted Vega Frontier's use of the NVMe array as virtual memory to about 256GB - speed reasons presumably (the published max is 512TB of address space), but it's an interesting concept, aye? 256GB+ of GPU workspace on a Vega GPU?
A similar virtualising facility seems buried in Epyc/Fabric/HBCC, but better hidden or less publicised.
Most have a mindset that it's silly to virtualise storage to simulate memory, and indeed it was historically, but storage has changed beyond recognition.
AMD has also focused on the smarts to enhance the virtualisation process.
I wonder what 4x striped Samsung 500GB 960 Pros would bench at on an Epyc mobo? 10GB/s sustained? Even much cheaper 128GB NVMe drives would yield very fast yet capacious arrays.
So yes, it's not DRAM/VRAM, & there has to be a perf penalty, but there is one in swapping data in and out of workspace too. Epyc handles all that for the app in the background.
So maybe your friends, and they are not alone I hear, CAN have ~infinite memory on an affordable AMD HEDT.
I would love to hear your thoughts on this. Apologies for being very tired.
cocochanel - Thursday, July 13, 2017 - link
Games can definitely use more cores. The two big areas that come to my mind are physics and A.I. Will developers go for it? Probably not, since as someone pointed out, most users are still hanging on to dual core/hyper-threading machines. In 3-4 years maybe they will, as these machines become pretty much obsolete and 6-8 cores become the norm. Or at least we can hope so.
lmcd - Thursday, July 13, 2017 - link
Physics can and should be accelerated by a co-processor, and more advanced AI is likely to also become GPU accelerated. Schemes to allow an IGP to work on these problems in parallel with a rendering GPU would be preferable.
LordanSS - Thursday, July 13, 2017 - link
Many, many years ago, Havok Inc., the physics middleware vendor, was in the final steps of releasing an OpenCL branch of their physics engine, to then be accelerated by GPUs as well, not only CPUs. And then Intel went and bought Havok. And we never heard a single peep about that OpenCL version again.
BurntMyBacon - Friday, July 14, 2017 - link
@lmcd
Physics I can agree with. I will, however, point out that in such a GPU limited gaming market, it would be better if the physics were not executed on the rendering GPUs. A dedicated physics card, IGP, or excess CPU cores in heavily threaded processors are all viable options. Given the state of the market, with only one (GPU vendor locked) GPU accelerated physics engine and the only commercialized dedicated physics hardware bought out and deprecated by that same vendor, thread-heavy CPUs may be the only viable option for reducing GPU load in the short term. I would, however, welcome the return of some dedicated co-processor that doesn't put the load on the rendering GPU. Though, I would prefer some sort of vendor independent standard to avoid lock-in shenanigans.
I don't really see AI as a generically good fit for GPU acceleration. Perhaps crowd AI in RTS games would fit well, but in many games AI is too branch-heavy to execute optimally on a GPU. I figure AI is a pretty good fit for heavily threaded processors.
msroadkill612 - Monday, July 17, 2017 - link
It's hobbies that sell powerful consumer PC hardware in volume, and I suspect vid editing is a big one these days. I suspect having a cool computer is the real hobby :)
edzieba - Thursday, July 13, 2017 - link
Consumer tasks (e.g. gaming, but any task that interacts with a human rather than being queued up well in advance and running at 100% utilization all the time) scale by Amdahl's Law, not Gustafson's Law. There are very diminishing returns to parallelisation, and many tasks cannot be split at all. The old adage is: bringing a child to term will take 12 months, regardless of the number of women assigned to the task.
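For a concrete feel of those diminishing returns, here is a minimal Amdahl's Law sketch (Python; the 90% parallel fraction is an illustrative assumption, not a measurement of any real game):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the work, n = core count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
# 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 16 -> 6.40x: even at 90% parallel,
# 16 cores buy well under half their nominal throughput.
```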
MajGenRelativity - Thursday, July 13, 2017 - link
Indeed, but I feel like 6-core utilization is probably not totally unachievable.
TheinsanegamerN - Thursday, July 13, 2017 - link
It's certainly doable. But would it help at all? Unless you are an MMO or a multiplayer shooter with a ridiculous number of players, it seems the answer is no.
MajGenRelativity - Thursday, July 13, 2017 - link
You can always use more FPS!
lmcd - Thursday, July 13, 2017 - link
And that's the problem with this thread (pun intended). Frame time is determined by how long the GPU has to spend retrieving resources + how long it takes the GPU to process the resources and render the frame from them. With most major engines supporting DX12 and Vulkan, and greater flexibility in GPU architectures + huge amounts of VRAM, CPUs are becoming comparatively less relevant. Most core count increases will help by keeping OS threads out of cores the game is using.
TheinsanegamerN - Monday, July 17, 2017 - link
More cores ≠ more FPS. If the type of game you are playing cannot use them, you have just wasted money.
goatfajitas - Thursday, July 13, 2017 - link
I know a lot of women that can do it in 8 or less. :D
Ammaross - Thursday, July 13, 2017 - link
They must be ovaryclocked. :P
/coat
goatfajitas - Friday, July 14, 2017 - link
Well played.
BurntMyBacon - Friday, July 14, 2017 - link
@goatfajitas: "I know a lot of women that can do it in 8 or less. :D"
Yes, but will adding more women to the task speed it up?
msroadkill612 - Friday, July 14, 2017 - link
From a bio of an Oz crim: a cellmate had been cuckolded by his wife while he was inside, and he was furious - but it was twins, and he figured they take twice as long, so all was well.
systemBuilder - Thursday, July 13, 2017 - link
[The old adage is that bringing a child to term is going to take 12 months...] I think it speeds up to 9 months as soon as you add 1 woman to the parallelism mix!!!!!
serendip - Thursday, July 13, 2017 - link
I think it was an engineer who thought he could speed up gestation to 1 month by splitting it up among 9 women...
wolfemane - Friday, July 14, 2017 - link
I'm pretty sure it's 9 months to bring a child to term. At least it was both times with my wife. Am I missing something?
lordken - Sunday, July 16, 2017 - link
@edzieba: "The old adage is: bringing a child to term will take 12 months, regardless of the number of women assigned to the task."
If you want to produce more children in a year, throwing more women at it helps: by having 12 of them you get 12 kids in a year (or 9 months, whatever).
So while you can't complete a single kid/task in 1 month, you can complete many more by having more women/cores.
Though I probably got your point: when something really cannot be parallelized and you only need to have one child, having 12 won't help...
TheinsanegamerN - Thursday, July 13, 2017 - link
Assuming the engines support this, or that there is any benefit to be had from running so many threads.
Silma - Thursday, July 13, 2017 - link
Parallelization is a very, very hard problem, and in many cases (audio production for example) there are limitations that can't be overcome, since you must have result A before computing result B. Those 32-core monsters will only be interesting for the few tasks that are naturally very parallelizable, such as many video production tasks.
In gaming, I see immediate potential for servers, not for PC games.
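The audio example is apt: in a recursive filter, each output sample depends on the previous output, so the work can't be split across cores no matter how many are available. A minimal sketch (Python; the filter coefficient is an arbitrary assumption):

```python
# A one-pole IIR low-pass filter: each output sample depends on the
# previous output, so the loop is inherently serial - you can't hand
# sample n to one core and sample n+1 to another.
def iir_lowpass(samples, alpha=0.1):
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)   # y[n] = y[n-1] + a*(x[n] - y[n-1])
        out.append(y)
    return out

print(iir_lowpass([1.0] * 5))  # each value needs the one before it
```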
boozed - Thursday, July 13, 2017 - link
I'd settle for the professional software I use at work supporting more than one thread.
deil - Friday, July 14, 2017 - link
Most of us already have 4 threads, even on laptops. If that were the push, we should have far more quad/octa-threaded games. It's just dead hard to async some parts of a game.
wiineeth - Thursday, July 13, 2017 - link
Can't wait for next year's 7nm Threadripper! I will buy it!
MajGenRelativity - Thursday, July 13, 2017 - link
The core count will be bananas. I'm happy to see AMD finally revving up the competition quite significantly. If they can manage to release Bristol Ridge, I can divert a lot of my mainstream builds for customers their way.
ddriver - Thursday, July 13, 2017 - link
What I wait for is the next Zen revision in general. Seems like they have a hot spot somewhere in their design, which is what prevents higher clocks, because their power efficiency is pretty good. They are far from hitting thermal limits at 4 GHz; something else is holding those cores back.
MajGenRelativity - Thursday, July 13, 2017 - link
That's a pretty likely situation. I'd also like to see higher IPC, because they could lower clocks slightly and keep the same performance for some SKUs (i.e. not top end) and that would even further improve power efficiency.
BurntMyBacon - Friday, July 14, 2017 - link
I suspect, given some of their comments, that (among other things) they will be improving IPC. However, it will certainly not be as dramatic as Excavator -> Zen. They will likely see IPC improvements reminiscent of Intel's last several generations (5% - 10%). If they claim more than 15% I'll be simultaneously extremely enthusiastic and deeply skeptical.
cheshirster - Friday, July 14, 2017 - link
Zen's L3 must run at the speed of the fastest core. So they have 8MB of "hot spot".
BurntMyBacon - Friday, July 14, 2017 - link
Caches (memories in general) take up a rather large area compared to the amount of circuitry that is active at any given time. Though the entire cache may be considered as running at full speed, only a single address per port is accessed at any given time. That said, the interface circuitry (ports) doing the accessing can, and has been seen in the past to, get warm. I believe Rambus had this heat issue in the complex interface circuitry of their RDRAM on the early P4s.
BurntMyBacon - Friday, July 14, 2017 - link
@ddriver: "Seems like they have a hot spot somewhere in their design, which is what prevents higher clocks, because their power efficiency is pretty good."
It is entirely likely that they have a hot spot that isn't doing them any favors. I haven't checked the extreme overclocking scene lately, but if there is a critical hot spot, then LN2 should cool it sufficiently to allow for some pretty impressive overclocks.
However, it is also possible that there is a critical path with too much capacitance on the lines. There could be too many splits off of a single driver, not enough transmission buffers, improperly sized transmission buffers, or too much combinational logic between flip-flops (too many transmission buffers, multiplexers, comparators, gates, etc.), just to name a few. Given the large number of cores accessing the same two memory controllers, and the memory issues AMD has been having, I figure that would be a good place to start checking.
MajGenRelativity - Thursday, July 13, 2017 - link
I just want to say, that's a lot of pins. It's just so many. Not a bad thing, just a lot XD
SetiroN - Thursday, July 13, 2017 - link
At this point the pin mass is basically a secondary heatsink :D
MajGenRelativity - Thursday, July 13, 2017 - link
Interesting idea
lmcd - Thursday, July 13, 2017 - link
I think that's the opposite of what everyone wants.
msroadkill612 - Monday, July 17, 2017 - link
Yep, so why never a fan under the CPU? I never checked, but I imagine there is considerable heat in those little soldered pin protuberances, which would dissipate nicely.
TheinsanegamerN - Monday, July 17, 2017 - link
It would be fine if the motherboard wasn't in the way.
Filiprino - Thursday, July 13, 2017 - link
You can overclock the memory. By changing the memory frequency you can increase the interconnect frequency, reducing latency.
fanofanand - Thursday, July 13, 2017 - link
I could be wrong, but I don't think you want to overclock your RAM when completing mission-critical tasks. I don't know how well ECC works with overclocking either, and when you are talking 128GB+ of RAM, chances are you want/need ECC. For the 1800X what you are saying is correct; in this space, I'm not as confident.
SaturnusDK - Friday, July 14, 2017 - link
You can clock the memory to whatever maximum speed the RAM allows. You're right that you should not overclock RAM for any mission-critical tasks, but that's not the same as saying you can't overclock the interface safely, which gives the performance boost of decreasing latency and increasing bandwidth on the Infinity Fabric, as those are connected. If you're going for a big memory build you're limited to DDR4-2400 ECC RAM unless you're willing to spend silly money, as 128GB of the former in a 4x32GB configuration is already about $1,600. If it's less critical, or you generally need less memory, you can use faster-spec'd RAM and get the mentioned performance boost.
It depends a little on how you see this platform being used. I see it as mainly being a replacement for 1P/2P Xeon workstations. That it can compete in, and basically win, the high end desktop market as well is just an added bonus.
vanilla_gorilla - Thursday, July 13, 2017 - link
You have the Intel Core i9-7900X listed in the table as 10 cores / 12 threads instead of 20.
Very excited about Threadripper; looks like the wait was worth it.
jjj - Thursday, July 13, 2017 - link
Yeah, sure, higher margins than server, because we deserve to be robbed. They can't support the platform with those prices.
As it is, only 3 of their SKUs are selling well: the two 6-cores and the 1700. All else needs tuning.
The 1800X and 1700X need to be closer to where the cheapest prices are now, and the quads need higher single core turbo, but then again their quads are dead; they can't sell them now, and Intel has Coffee Lake soon. Nice that AMD made sure to ship Ryzen 3 only when it doesn't matter anymore.
Threadripper needs a 16-core at up to $800 and a 12-core below $600.
They got so used to not making any money that they refuse to change course and actually try to sell some products.
MajGenRelativity - Thursday, July 13, 2017 - link
These prices are NOT robbery. I would suggest that you look at Intel's comparable i9 products, and tell me that these are overpriced.
Stuka87 - Thursday, July 13, 2017 - link
You are nuts to say these are overpriced. They came in way less than rumored, and are HALF the price of the i9s.
systemBuilder - Thursday, July 13, 2017 - link
Disagree. Any day a company undercuts a competitor by 50% is an awesome day, BUT ONLY FOR AMD, LATELY!!!
Ranger1065 - Friday, July 14, 2017 - link
Sad panda :(
Zizy - Thursday, July 13, 2017 - link
Very interesting to have just 2 models, but then again, how many do you really need? Pricing also makes a surprising amount of sense. People are pushed towards the more expensive and powerful CPU as it is better perf/$; plus these prices are low enough to make a lot of sense. In stark contrast with desktop pricing, where the 1600 is incredible perf/$ while the 1800X is far too expensive.
As for IPC, AMD's Cinebench table gives Intel an IPC of 65.7 points/(cores*GHz), while the 12C TR gets 57.9 and the 16C gets 56.3. About a 15% IPC advantage for Intel here. But this is using just base clocks, so mostly the worst case for AMD.
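Checking that arithmetic with the figures quoted above (a rough sketch; the points/(cores*GHz) values are the ones in the comment, with all its base-clock caveats):

```python
# Cinebench points per (cores * GHz) as a crude per-clock-throughput proxy.
intel = 65.7     # Skylake-X figure quoted above
tr_12c = 57.9    # Threadripper 1920X figure quoted above
tr_16c = 56.3    # Threadripper 1950X figure quoted above

for name, v in [("1920X", tr_12c), ("1950X", tr_16c)]:
    print(f"Intel advantage vs {name}: {(intel / v - 1) * 100:.1f}%")
# ~13.5% and ~16.7%, i.e. "about 15%"
```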
T1beriu - Thursday, July 13, 2017 - link
TDP is 180W for both, as seen in their video.
Cliff34 - Thursday, July 13, 2017 - link
Here comes the core arms race :)
MajGenRelativity - Thursday, July 13, 2017 - link
Didn't you see that ThreadRipper 2 picture? Obviously we'll be up to around 100 cores next generation!
PixyMisa - Friday, July 14, 2017 - link
Corean War.
0ldman79 - Saturday, July 15, 2017 - link
Hahahaha! You win.
ET - Thursday, July 13, 2017 - link
I'm no longer interested in Bristol Ridge. I assume it will appear when Raven Ridge appears, as a low end option, at which point it won't be of much interest.
MajGenRelativity - Thursday, July 13, 2017 - link
At this point, it's looking more and more like that. Even if it releases RIGHT NOW, a lot of people will probably say "just wait for Raven Ridge when it comes out before the end of the year, it'll be 1000% better".
lmcd - Thursday, July 13, 2017 - link
Excavator's numbers were pretty solid on mobile -- biggest problem though is it's probably 28nm from what I'm seeing around the internet, so it's pretty hard for it to compete.
TheinsanegamerN - Monday, July 17, 2017 - link
To clarify: they were pretty solid for construction cores. They still suck compared to Intel. Raven Ridge can't come soon enough.
Amoro - Thursday, July 13, 2017 - link
No word on other core counts or SKUs? I'd be interested in an 8-core or 10-core because I want the PCIe lanes over Ryzen 7/5 but don't want to pay the premium for 12 cores. Plus they might turbo higher than the 12/16-core parts.
Dribble - Thursday, July 13, 2017 - link
You know it's effectively 2 Ryzen 8-core CPUs under the heat spreader. As they will want to make them the same, you're going to end up with either 2*8=16 or partly disabled 2*6=12.
oldlaptop - Thursday, July 13, 2017 - link
There are octo-core Epyc parts (that is *four* "Ryzen" dies, each with six cores out of eight disabled) for precisely this kind of reason.
Amoro - Thursday, July 13, 2017 - link
Yeah, but the clock speeds are really low for some reason.
systemBuilder - Thursday, July 13, 2017 - link
I think GlobalFoundries' yields are just too high to give you a 10-core CPU - not only do most of the cores always work, most overclock to within 4% of each other! AMD wins!
SaturnusDK - Friday, July 14, 2017 - link
At $799 and $999 for the 12C and 16C respectively, it sort of tells us that the yields are too high for even the 12C variant, and they'd rather sell the 16C, or use the dies with 6 working cores for R5s.
Hul8 - Sunday, July 16, 2017 - link
Apart from the number of cores, the SKUs are pretty much identical. Same amount of RAM, same amount of PCIe, same amount of L3 cache. 80% of the price for 75% of the cores isn't overpriced if you consider that.
msroadkill612 - Monday, July 17, 2017 - link
Yes, I suspect they are "wasting" many good cores by addressing many of their price points, like the 1500X. Still, it illuminates what a fantastic business model they have. They can still extract more value by restricting supply of the wasteful SKUs if the 6 & 8 core models are selling well.
edzieba - Thursday, July 13, 2017 - link
If multi-die packages become more common again (ah, it's the days of the Core 2 Quads all over again!) then these gargantuan packages are bad news for ITX. ASRock have demonstrated that LGA2011-3 and LGA2066 can be crammed onto an ITX board, but even they have baulked at trying to fit Threadripper into ITX.
MajGenRelativity - Thursday, July 13, 2017 - link
To be honest, how many people want a system this powerful crammed into that small of a package? I feel like they would have the room for at least a microATX case and board.
TheinsanegamerN - Thursday, July 13, 2017 - link
ITX boards can already handle 8 Ryzen cores right now, and X299 will have ITX options. Mini-ITX is rarely used for mega powerhouses; Threadripper doesn't make much sense in that context.
tuxfool - Thursday, July 13, 2017 - link
You have 60 PCIe lanes and you want to waste them all on ITX?
lmcd - Thursday, July 13, 2017 - link
What are you talking about? Ryzen ITX has already been launched, allowing for 8 cores in the ITX format even with the non-optimized chipset. Multi-die packages at the professional level have been common for higher core counts, and since Threadripper is effectively a "prosumer" part this isn't at all surprising, nor is it a worrying trend, because it doesn't deviate from existing trends.
Nem35 - Thursday, July 13, 2017 - link
Haha! Intel should put a lock on their factory. Same price, 50% more performance.
Can't wait for AMD mobile chip prices.
HighTech4US - Thursday, July 13, 2017 - link
180 watts, and you expect it to go in a mobile device. As for Intel, they too can go multi-die in the future, just like AMD. So AMD's lead here might be short-lived.
Nem35 - Thursday, July 13, 2017 - link
I don't expect a mobile Threadripper or anything similar. I expect a 4C/8T mobile chip with 8MB of cache and a 2.5-3GHz clock at a 30-50W TDP for $250. More than enough to make the i7-7700HQ and i7-7300HQ the unreasonable, overpriced choices they are. We just needed an example; luckily we've been getting a lot of those since March 2nd.
silverblue - Thursday, July 13, 2017 - link
Intel have just gone on a huge AMD slating exercise in their Press Workshop 2017, and one of the things in their presentation was that EPYC was just "4 Glued-together Desktop Die". It'd be wonderfully hypocritical of them to follow suit, and yet I, too, expect it to happen.
tamalero - Thursday, July 13, 2017 - link
Isn't that what they did when they rushed their first dual-core processor, to beat AMD's monolithic die to market?
tk.icepick - Friday, July 14, 2017 - link
You are correct, and that goes for quad cores as well! The delicious irony is that about 10 years back it was Intel offering a 2x2 CPU branded as quad-core, like the legendary Q6600. AMD had the Phenom chips, which were a single monolithic die but were significantly behind in several performance metrics. Now AMD, with its "glued together" chips, seems to have turned the tables on Intel. :3
http://www.anandtech.com/show/2378
http://www.anandtech.com/show/2477
http://hexus.net/tech/reviews/cpu/10427-when-quad-...
https://en.wikipedia.org/wiki/Kentsfield_(micropro...
https://en.wikipedia.org/wiki/AMD_Phenom
Luckz - Saturday, July 15, 2017 - link
...and back then, AMD complained about Intel doing the glue thing, much like Intel does now when AMD succeeds. *sniffs glue*
eek2121 - Thursday, July 13, 2017 - link
"Threadripper 2"
Whaaatt???
"Perhaps not - just a clever photoshop. Source: Reddit"
I just got trolled by Anandtech.
Spunjji - Thursday, July 13, 2017 - link
Intel looks to have a theoretical TDP advantage between Threadripper and the Core i9-7980XE, but given how poorly the i9-7900X behaves in that regard, I have my doubts. Really, really looking forward to seeing if AMD's power efficiency with Ryzen scales well to this core count!
FMinus - Thursday, July 13, 2017 - link
I seriously doubt Intel's 14/16/18 core parts will come anywhere close to the frequency ranges of AMD. I think the 16/32 Intel part will likely be 3.0 - 3.7 at best, if at all, and the 18/36 part will likely be 3.0 - 3.4, if not a base frequency under 3.0.
SaturnusDK - Friday, July 14, 2017 - link
Sustained rumours put the 18C at 2.7GHz base, 3.7GHz all-core boost, and 4.2GHz two-core boost. That's on a 165W package, but unless Intel stops using Colgate as TIM and actually starts soldering the heat spreader on, it will be even more thermally constrained than the 7900X is on its 140W TDP package.
Azurael - Thursday, July 13, 2017 - link
This would be nice for my Android builds... But if an 8-thread configuration takes ~10GB RAM to comfortably build Nougat (I have to kill Xorg to do it with 8GB + a bit of swap thrashing), I suspect 32GB is going to be a bare minimum for any of these chips in a buildbox, preferably 64GB - not that that should be a huge issue for somebody with $1000 to burn on a CPU.
bcronce - Thursday, July 13, 2017 - link
Memory is the cheapest part per unit of a computer. 32GiB is only $150-$220 unless you're buying ultra high end.
lmcd - Thursday, July 13, 2017 - link
I'm pretty sure at 8GB you're bottlenecked by RAM super hard. I have no idea what your configuration is, but a 6-core Sandy Bridge-E + 24GB RAM is fast enough that I can comfortably reserve a core for the rest of my OS while compiling most large projects I've tried within reasonable timeframes. 8GB sticks are quite cheap.
Total Meltdowner - Thursday, July 13, 2017 - link
Seems like the price per performance is heavily in AMD's favor.
XiroMisho - Thursday, July 13, 2017 - link
You mention a "better chipset" and lower wattage, but are you honestly ignoring the i9's horrific thermal waste unless it's delidded? Intel botched the i9 lid and the thermal conductivity has been discovered to be pretty terrible. Are we going to get a good breakdown on that Intel flaw?
fanofanand - Thursday, July 13, 2017 - link
Of course not. If they did that they would handicap their entire operation when Intel stops sending them review samples. This is an old practice at Anandtech: sweep Intel foibles under the rug, lambast AMD at every turn.
HomeworldFound - Thursday, July 13, 2017 - link
Are these chips going to be outputting a lot of heat compared to something like a 5930K?
fanofanand - Thursday, July 13, 2017 - link
Yes, they screwed up the TIM (once again) and there are major heat issues with these chips. Ian just decided that wasn't worth discussing in this piece. Draw your own conclusions on why that might be.
DigitalFreak - Thursday, July 13, 2017 - link
Conspiracy! Conspiracy!
HomeworldFound - Thursday, July 13, 2017 - link
I'll still give one a go, I think.
TheinsanegamerN - Monday, July 17, 2017 - link
Why pay more for a throttling, inferior product?
ReclusiveOrc - Thursday, July 13, 2017 - link
I don't know if this has been asked or stated in the documentation, but does Windows 10 see this CPU as 1 or 2 sockets, since licensing is per socket?
TheinsanegamerN - Monday, July 17, 2017 - link
Most likely just 1, as it is a single socket. It sees Ryzen as one CPU; no reason Threadripper wouldn't be.
You're all dumb. Game developers shouldn't target any specific number of cores. They should instead focus on scalable computing. Games should be able to detect how many cores the system has and use them appropriately. Don't look at a game as being designed for a quad core system. Games should be developed like workstation software is developed. More cores? Better performance. Period. Up to infinity cores.
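A minimal sketch of that "detect and scale" idea (Python; the chunked workload and pool sizing are illustrative assumptions, not how any particular engine does it):

```python
# Size a worker pool from the detected core count instead of hard-coding 4.
import os
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(chunk_id: int) -> int:
    # Stand-in for an independent slice of work (physics island, tile, etc.).
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    workers = os.cpu_count() or 4          # fall back if undetectable
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_chunk, range(workers)))
    print(f"ran {len(results)} chunks on {workers} workers")
```

Of course, this only pays off when the work really does split into independent chunks, which is exactly what the replies below dispute.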
lazarpandar - Thursday, July 13, 2017 - link
Easier said than done. CPUs in games handle the way the world works rather than how it looks. If that differs too greatly between machines, you're essentially talking about different games at that point. E.g. if AI-controlled NPCs scaled with cores (which would be an easy way to accomplish what you're suggesting), the game would literally be harder on a more powerful computer. That's just the first example I could think of; I'd love to hear some examples you can think of where this would not be the case.
dgingeri - Thursday, July 13, 2017 - link
One thing that really bothers me is that it shows each Ryzen die in the TR has 2x16 PCIe lanes, so why do we only get 24 in the mainstream Ryzen chip? Do all the Ryzen chips actually have 32 PCIe lanes, and 8 lanes are just not able to be used? If so, that's annoying. If not, then it looks like TR is using a different die from the Ryzen mainstream chip.
Glock24 - Thursday, July 13, 2017 - link
I think this has to do with cost. More PCIe means more pins and a bigger socket. It also means more complex motherboards. That would make mainstream/consumer oriented Ryzen computers too expensive for the target market. Most people won't use SLI.
KAlmquist - Friday, July 14, 2017 - link
With Threadripper, you get 32 PCIe lanes per chip, but no USB support. Ryzen processors using the AM4 socket support four 10 Gbps USB connections. Due to encoding overhead, the actual data rate is reduced to about 9.7 Gbps, but that's still faster than a single PCIe 3.0 lane. My guess is that the Ryzen chip contains four USB controllers, each of which is fed by two PCIe lanes in order to be able to operate at full speed. For Threadripper and EPYC, these PCIe lanes are connected to the socket rather than the USB controllers.
You have a similar situation with the SATA controllers on the Ryzen chip, but these are electrically switchable. The AM4 socket has 24 PCIe lanes and 2 SATA interfaces, but for two of the PCIe lanes, you have to decide whether they should be connected to the socket (in which case the SATA interfaces are disabled) or to the SATA controllers (in which case you have only 22 PCIe lanes going off chip). In EPYC and (I'm guessing) in Threadripper, the socket doesn't include pins for these SATA interfaces, so the SATA controllers are useless and the PCIe lanes are always connected to the socket.
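The encoding-overhead arithmetic is easy to verify (a sketch using the standard published encoding ratios; these are generic spec numbers, not anything specific to these chips):

```python
# USB 3.1 Gen2 uses 128b/132b encoding; PCIe 3.0 runs 8 GT/s with 128b/130b.
usb_gen2 = 10 * 128 / 132        # ~9.70 Gbps effective, matching the ~9.7 above
pcie3_lane = 8 * 128 / 130       # ~7.88 Gbps effective for one PCIe 3.0 lane
print(f"USB 3.1 Gen2 ~{usb_gen2:.2f} Gbps vs PCIe 3.0 x1 ~{pcie3_lane:.2f} Gbps")
```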
msroadkill612 - Monday, July 17, 2017 - link
Sounds right, ta for the explanation. Still, screw the already over-catered SATA ports; 2x USB3 ports would do fine, thanks, & let us have a second 4-lane NVMe socket for ~7GB/s RAID NVMe for paged virtual memory.
babadivad - Thursday, July 13, 2017 - link
Cheeky picture at the end.
Samus - Thursday, July 13, 2017 - link
That is a ridiculous chip for $1000. The whole platform cost is cheaper than Intel too. Essentially you could build a pretty ridiculous PC for <$2500 (16-core CPU, 32GB RAM, X399 motherboard, 512GB SSD, 4TB HDD, GTX 1080, and a Windows 10 license).
Total Meltdowner - Friday, July 14, 2017 - link
I'm sort of pissed I just built a Ryzen 1800X system and watercooled it. I'd buy Threadripper for my workstation but... it's too close to me completing this build. I'M NOT MADE OF MONEY, AMD, GEEZ!
serendip - Thursday, July 13, 2017 - link
Any news about APUs? Haven't heard a peep about this product category. I'm in the market for a new laptop and a Ryzen APU would be perfect, as long as TDP issues are handled properly.
Troll_Slayer - Thursday, July 13, 2017 - link
Actually, Ryzen and Threadripper have the same IPC as Skylake-X. Intel having better IPC is a myth.
Total Meltdowner - Friday, July 14, 2017 - link
I don't believe this is true, my man. Care to explain? I've seen a lot of people discuss the differences.
tamalero - Thursday, July 13, 2017 - link
Isn't that a bit stupid? I mean, they did glue 2 cores together for their first dual-core processor!
KAlmquist - Thursday, July 13, 2017 - link
According to ark.intel.com, the max turbo frequency of the Core i7-7820X and Core i9-7900X is 4.3 GHz, not the 4.5 GHz listed in the article.
Based on price, the Threadripper 1950X is competing with the i9-7900X. They are both listed at $999, but the i9-7900X price is for quantity 1000, so the 1950X should be a few dollars less than the i9-7900X.
Santoval - Friday, July 14, 2017 - link
I wonder if the $200 gap between the 12-core and 16-core Threadrippers means that 10-core and 14-core CPUs will launch at $699 and $899 respectively. So, $100 steps for each 2 additional cores. Sounds, er, reasonable?
Zizy - Friday, July 14, 2017 - link
Yeah, completely reasonable, if you forget that 14 and 10 cores aren't reasonable to expect in the first place. Each die needs to have the same number of cores per CCX. Epyc had all dies the same; why wouldn't TR? For TR, sensible chips are 4C, 8C, 12C and 16C. AMD decided to release just 12C and 16C at this point, I guess because an 8C at a hypothetical $600 would have stiff competition from 8C Intel and could probably lead to lower sales of the TR platform as a whole.
I just wonder why there is no 8C Epyc with high clocks and 180W TDP to better take care of the same niches as 8C and 4C TR would target.
Meaker10 - Friday, July 14, 2017 - link
Because the cores break down past 4GHz for most chips, so offering more TDP headroom is not that helpful.
SaturnusDK - Friday, July 14, 2017 - link
First off, as stated above, there will not be a 10C or 14C version, and I strongly doubt an 8C either. Yields are simply too high. There's no benefit for AMD in disabling 2 cores on fully functional 8C dies to make 12C chips when they can get a higher profit margin selling them in 6C R5s instead. Basically the pricing is telling you: "Please buy the 1950X... although we know the 1920X actually outperforms the 7900X, the 1920X doesn't actually exist" *waves hand*
Ranger1065 - Friday, July 14, 2017 - link
The prices in your "AMD Ryzen SKUs" table are inflated.
SaturnusDK - Friday, July 14, 2017 - link
They always are on Anandtech. Any price reduction on Intel CPUs is immediately included in all future reviews, while AMD prices are almost always listed at the highest price point of the previous 6 months. Anandtech is just as heavily Intel-biased as Tom's Hardware.
TheinsanegamerN - Monday, July 17, 2017 - link
Both owned by the same company....
hahmed330 - Friday, July 14, 2017 - link
The base frequency is just insane for both processors for such a high core count, and especially for the power consumption... Makes Intel's entire "i9" series look very lame, especially with so few PCIe lanes and, on top of that, no ECC support.
Ebonstar - Friday, July 14, 2017 - link
"SHED", eh Ian? :) As a fellow Brit, I'm sure the slang use of that word isn't lost on you if you happened to be telling people your computer 'is a bit of a shed' :D
mapesdhs - Saturday, July 15, 2017 - link
I thought that as well. :D
Hul8 - Sunday, July 16, 2017 - link
Shouldn't the original Ryzens be classified as mainstream, and Ryzen Threadripper as HEDT? It's silly to invent new classes just because, at long last, the mainstream is getting more cores.
corinthos - Friday, July 14, 2017 - link
AMD 1700X 8 cores/16 threads = $290/8 = $36.25 per core. Threadripper 12 cores/24 threads = $799/12 = $66.58 per core. Threadripper 16 cores/32 threads = $999/16 = $62.44 per core.
Best value looks kind of clear?
chuychopsuey - Friday, July 14, 2017 - link
Maybe, if all that matters to you is core count and if you don't care about extra PCIe lanes or quad-channel memory.
msroadkill612 - Friday, July 14, 2017 - link
Ryzen is OK if you don't want RAID NVMe - you can have 2x NVMe using the onboard X370 M.2 ports, but one is fast and the other very fast. How significant is quad-channel memory?
Hul8 - Sunday, July 16, 2017 - link
You could always sacrifice one of the PCIe 3.0 x16@x8 slots and use an M.2 NVMe adapter on it.
msroadkill612 - Monday, July 17, 2017 - link
Yes, most non-heavy gamers could do that with impunity. Only a few games tax more than 8 GPU lanes, I hear. Onboard, CPU-adjacent, direct-to-CPU NVMe ports are bound to perform better, though.
TheinsanegamerN - Monday, July 17, 2017 - link
For non-professional uses? Almost nil.
msroadkill612 - Friday, July 14, 2017 - link
A thought that bears on many of the gaming comments, IMO, is the sheer impossibility of gamers ever being content. I.e., every time resolution increments, the grunt needed almost squares, it seems :).
I think that's the attraction. They hate gaming, but must act keen in order to have an excuse for a whomper computer. It's not like you can brag about your word processor anymore.
Chaotic42 - Friday, July 14, 2017 - link
Shut up and rip my threads, AMD. $$$
Morawka - Saturday, July 15, 2017 - link
AMD is smart. They priced it high enough to prevent Intel from slashing their prices, because AMD knows that most gamers will buy the Intel chip if it's halfway close to the AMD one in cores and performance. AMD is giving a solid 40% discount here; you just have to deal with an inferior chipset.
TheinsanegamerN - Monday, July 17, 2017 - link
That "superior" one requires a several-hundred-dollar USB key just to work properly.
glugglug - Saturday, July 15, 2017 - link
The 1TB of RAM support is interesting... In particular, this already requires 40 address bits, before getting into virtual, rather than physical, memory.
The original AMD64 specification had the MMU set up such that any address where the upper 16 bits aren't all the same would be an automatic page fault. So basically in every 64-bit OS, 0xFFFF000000000000-0xFFFFFFFFFFFFFFFF is the kernel space, and 0x0000000000000000-0x0000FFFFFFFFFFFF is user space, with anything else being invalid -- effectively only 48-bit addressing (256TB of virtual address space) is possible.
Will this be changing in the next 5-10 years, with multi-socket servers starting to approach this limit?
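The canonical-address rule described above is simple to express in code. A sketch (Python; the 48-bit width is the AMD64 default under discussion, passed as a parameter since it may grow):

```python
# "Canonical" 48-bit addresses: bits 48-63 must be copies of bit 47,
# otherwise the access faults.
def is_canonical(addr: int, va_bits: int = 48) -> bool:
    top = addr >> (va_bits - 1)              # bit 47 and everything above it
    return top == 0 or top == (1 << (65 - va_bits)) - 1

print(is_canonical(0x00007FFFFFFFFFFF))  # True  (top of user space)
print(is_canonical(0xFFFF800000000000))  # True  (bottom of kernel space)
print(is_canonical(0x0001000000000000))  # False (non-canonical, faults)
```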
msroadkill612 - Monday, July 17, 2017 - link
Vega is specced as having 512TB of address space, if that helps or is relevant. Given Vega is also Fabric, I suspect Epyc may also have a 512TB limit.
none12345 - Saturday, July 15, 2017 - link
"The original AMD64 specification had the MMU set up such that any address where the upper 16 bits aren't all the same would be an automatic page fault. So basically in every 64-bit OS, 0xFFFF000000000000-0xFFFFFFFFFFFFFFFF is the kernel space, and 0x0000000000000000-0x0000FFFFFFFFFFF is user space, with anything else being invalid -- effectively only 48-bit addressing (256TB virtual address space) is possible."When they did the amd64 spec, there was no point in putting all 64 memory address lines in hardware. You couldnt buy 16 exabytes of memory, so putting in the extra transistors in hardware to handle it would have been a waste of die space. So, they choose to just implement 48 bits of space, and hardwire the upper 16 to 1s.
This is a normal thing to do. Intel did it in the x86 spec as well. The first 32-bit processors did not support 32 bits in hardware for memory. I think they started with 24, but I can't remember, and the upper 8 bits left over were just set to 1s. Later on they implemented the additional hardware to support the rest of the bits. AMD just extended that practice with the AMD64 x86 extension.
Any address of more than 48 bits is a fault, because there is no hardware to connect more than 48 bits' worth. The same is true of Intel chips, except I think they only do 40 or 44 bits currently, not 48. This may have changed with Skylake or Kaby Lake. It's been a while since I looked up these metrics, but it was definitely less than 48 on Intel's implementation of AMD64 when they first started to adopt it.
This is fine because you can't buy that much memory anyway. Remember, this is a physical RAM limit, not a virtual RAM limit. However, there will also be virtual limits, because it makes no sense to allocate a page table for the entire 64-bit address space when you can't use that much. All you do is slow down page lookups, and waste RAM doing it.
These limits should be pretty easy to increase. For hardware you just add in the missing bit lines, and change the fault bit mask to check fewer bits. I think it's likely they go to 56 bits next, not the full 64, though they could do 52 bits next, which covers 4 petabytes. Increases are already in the spec, I believe (it's been many years since I looked at it, but I seem to remember it).
Right now the most you can do anyway is 2 terabytes on Epyc, so only 41 bits are necessary; still got a ways to go before an increase in hardware is necessary. Probably good until 2025-2030 for servers, and for desktop we're good for... I dunno, maybe 2040.
Freebyrd26 - Saturday, July 15, 2017 - link
Skynet will go "live" before we reach that limit... ;)
none12345 - Saturday, July 15, 2017 - link
"Ryzen is ok if u dont want raid nvme - u can have 2 x nvme using onboard x370 m.2 ports, but one is fast and the other, very fast."
Err, you realize that RAID NVMe makes no sense on Intel, right? The M.2 NVMe slots are connected to the chipset. The chipset has the equivalent of 4x PCIe 3.0 lanes of bandwidth to the CPU, and that's shared by everything connected to the chipset. SATA, USB, sound, network - everything else shares it. A single NVMe drive can saturate a 4x link. If you try to do a RAID 0 with 2 NVMe drives, you would effectively slow each drive down to 2x speed when you used them. There isn't enough bandwidth to do RAID NVMe. I mean, you can do it, just don't expect the speed increase you would normally get from RAID.
I do not know if any of the x299 boards have connected 2 m.2 nvme slots directly to the cpu. There are enough pci lanes on skylake-x to do it. But for compatability reasons, because they have to support 16 28 and 44 configs, they will likely connect all the m.2 slots to the chipset. Since there isnt enough room to do it on the kabylake-x version.
2 nvme on ryzen vs kabylake. On ryzen, you have 1 direct connected to the cpu at a full 4x link that it does not share. The 2nd one is connected to the chipset, likely at 4x 2.0 speed(2x 3.0 equivilent), or half speed. On kabylake you have 2 drives connected to the chipset, electrically 4x pci 3.0, but they have to share the equivilent of 4x pci 3.0 lanes to the cpu.
For Ryzen, that means you can access one drive at full speed and one at half speed (or a bit more, depending on the drive). But you get that speed all the time, regardless of whether you access one drive or both at once.
For Kaby Lake, that means you can access either drive at full speed by itself, but if you hit both at the same time, both drop to half speed, since they share the single 4x link to the CPU.
This assumes you aren't also maxing out USB ports at the same time. If you do that on Kaby Lake, the USB traffic shares the same 4x link to the CPU, which slows down either drive being used at the same time. On Ryzen, some of the USB ports have their own link to the CPU, so using those wouldn't slow either drive down. Other ports are on the chipset and would share with the second drive; however, the chipset link is 4x and the second drive likely runs at 2x, so there's plenty of bandwidth for a lot of USB alongside it before you start slowing that drive down.
In most workloads you wouldn't notice a difference between the two platforms, though. Normally you don't use everything flat out at the same time.
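To put rough numbers on that sharing argument, here is a back-of-the-envelope sketch in C. The 3.2 GB/s per-drive figure and the 3.9 GB/s usable figure for a 4x PCIe 3.0 (or DMI 3.0) link are assumptions for illustration, not measurements.

#include <stdio.h>

int main(void)
{
    /* Assumed round numbers: a fast NVMe drive reads ~3.2 GB/s;
     * a 4x PCIe 3.0 / DMI 3.0 link carries ~3.9 GB/s usable. */
    const double drive = 3.2, link4x = 3.9;

    /* Kaby Lake-X: both M.2 drives funnel through the one 4x chipset
     * link, so concurrent access splits it between them. */
    double kbl_each = link4x / 2.0;                        /* ~1.95 GB/s each */

    /* Ryzen: drive 1 owns a dedicated 4x CPU link; drive 2 sits behind
     * the chipset at roughly half speed, but neither steals from the other. */
    double ryzen_d1 = drive;                               /* ~3.2 GB/s */
    double ryzen_d2 = (drive < link4x / 2.0) ? drive : link4x / 2.0;

    printf("Kaby Lake-X, both drives busy: ~%.2f GB/s each\n", kbl_each);
    printf("Ryzen, both drives busy: ~%.2f and ~%.2f GB/s\n", ryzen_d1, ryzen_d2);
    return 0;
}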
msroadkill612 - Monday, July 17, 2017 - link
Yes, I was aware of Intel's chipset deviousness, but thanks for documenting this amazingly neglected Intel gotcha - oh, and btw, you can only use ~one device at a time. It's like a house that only allows use of one water tap at a time.
I was simply saying that your best option with Ryzen is a 16-lane GPU plus my recommended 2x onboard NVMe M.2 ports on the mobo, and it's not a bad option: one NVMe at 3x the best SATA SSD speed, and one at 5-7x the best SATA SSD speed.
Intel's chipset ports, as you describe, certainly preclude RAID 0 of PCIe 3.0 NVMe. Using a top SSD as it should be used, on the Intel onboard M.2 port, would max out the chipset's entire 4-lane bandwidth.
Yet they claim you can connect 2 such devices.
A notable rule of thumb with Intel is that only $1k+ CPUs offer 40+ PCIe 3.0 lanes. Some offer only 16 lanes - the scoundrels - what a dead-end PC.
I would be interested in your take on the Intel non-chipset architecture too. It seems suspiciously similar, using cumbersome (multi-hop data path, vs. AMD) switches and crossbars on the CPU I/O lanes - sharing limited bandwidth among many ports?
I can't believe it's legal to tell such ~lies (even in the pinko EU, it seems) as Intel does with their 4-lane chipset - seeming to promise an endlessly expandable PC to newbs.
Outlander_04 - Saturday, July 15, 2017 - link
Ok. Great. 16 cores and 32 threads.
Not worth anything for gamers, so who is the target audience for Threadripper?
Encoders? Renderers? People who feel inadequate?
RealBeast - Sunday, July 16, 2017 - link
No question that it's for higher-end users, and I don't see it as a good choice for someone who's mostly a gamer. I am quite interested in the 12-core for running Adobe content-creation projects (video and photos) while still having some ability to do minor background tasks.
It's also helpful for me that it can handle a bunch of cards, so I won't have any issue with my Adaptec 8805/Intel X520DA2/and several other cards that hog PCIe lanes.
And yup, between projects there may be some gaming that it will handle without a problem but that's not even close to critical for me.
Of course, I will wait for the reviews but I'm pretty excited about the possibilities here.
PixyMisa - Sunday, July 16, 2017 - link
Video editing, software-based rendering, engineers and designers, developers working on very large software projects. The market that used to buy SGI and Sun workstations back in the day.
mapesdhs - Monday, July 17, 2017 - link
2 points PixyMisa for just mentioning SGI. :D Sun didn't serve quite the same markets, as they never had an equivalent 3D/visualisation product line. Sun was stronger in business markets, though SGI did do well there too in some cases.
Anyway, this is why the strong I/O of TR (and Naples) is so interesting. SGI's traditional markets focused on big data, where it wasn't so much how fast one could process data as how much (or, as John Mashey once put it, "It's the bandwidth, stupid!"). SGI scaled its I/O to tens of GB/sec using vastly parallel FC and scalable shared-memory NUMA, which even with the earlier Onyx2 model could handle large data sets (eg. in the early 2000s, the 64-CPU Group Station for Defense Imaging could load and display a 67 GB 2D image in less than 2 seconds, a sustained I/O rate of 40 GB/sec), but the systems to do this were very expensive and could easily fill a machine room (I used to run a 16-CPU system with 6 gfx pipes driving a RealityCentre and CAVE). Medical, GIS, defense, aerospace, auto, etc., they all generate huge amounts of data. Finally having a platform that can support such complex loads even at the desktop level will be a boon. It's a pity SGI has effectively gone; it would have been so funny if they'd released a Naples-based server product to succeed UV.
Lolimaster - Sunday, July 16, 2017 - link
GUYS
Ryzen 7 1700 8c/16t $269 on AMAZON, say goodbye to your money.
Outlander_04 - Sunday, July 16, 2017 - link
Excellent bargain. Just like the old days when both Intel and AMD regularly discounted... except the Intels are still overpriced.
msroadkill612 - Monday, July 17, 2017 - link
$33.63 per core.
msroadkill612 - Monday, July 17, 2017 - link
The Ryzen range from Amazon: https://smile.amazon.com/s/ref=nb_sb_noss_2?url=no...
msroadkill612 - Monday, July 17, 2017 - link
It's almost a 1P EPYC with a decent clock speed - i.e., they pulled off pairing 2x Zeppelin dies without too much latency creeping in.
That's a nice niche.
Just saying, but there has long been a ~prestige/HEDT/workstation market for rigs with $1k CPUs and $1k GPUs, so what about a $2k monster APU? That's a tasty sale for AMD.
As I figure it, a TR MCM has space for 8 cores and 2x Vega GPUs, and could extend to "adjacent" HBM2 cache and NVMe on the MCM - all on the very impressive Infinity Fabric.
Fabric is fundamentally about maintaining coherency between teamed processors, and there is no reason to think it won't solve those ~Crossfire-type problems for GPUs as it has for CPUs.
They are forced to produce a die with a single 4-core Zen CCX and a single Vega GPU for the economy desktop APU and mobile market, but I have a feeling that beyond that, their heart is in a 2x Vega die, like Zeppelin, and we will see EPYC-like MCMs with 16 cores and 4x GPUs (or 2x Zeppelin dies plus 2x dual-GPU dies).
Xajel - Monday, July 17, 2017 - link
I'm looking forward to a company making a good microATX X399 motherboard, but looking at the state of AM4, where there's still no "good-enough" microATX board, we will probably have to wait a long time...
Lolimaster - Tuesday, July 18, 2017 - link
Why would you want mATX for such a monster? The socket itself will allow some funny 140mm heatsink+fan.
corinthos - Monday, July 17, 2017 - link
Ryzen 1700 = about half the price per core. Value King!
Rοb - Thursday, July 27, 2017 - link
@Ian Cutress: "Up until this point, we knew a few things – Threadripper would consist of two Zeppelin dies featuring AMD’s latest Zen core and microarchitecture, and would essentially double up on the HEDT Ryzen launch. Double dies means double pretty much everything: Threadripper would support up to 16 cores, up to 32 MB of L3 cache, quad-channel memory support, and would require a new socket/motherboard platform ...".
Incorrect: Threadripper is made from the EPYC CPU package.
See this DeLidding Video: https://youtu.be/ZoVK6rJR5VE?list=PLWa6uO3ZUweCJdk...
sebacorp - Thursday, August 10, 2017 - link
What are the WinRAR 5.40 benchmark scores for the Threadripper 1920X and 1950X?