
  • Jeff72 - Wednesday, February 1, 2023 - link

    Subject: the $499 price needs correction. Also, the second-to-last sentence has 7900X3D/7900X3D, and I assume you meant 7950X3D/7900X3D.
  • Ryan Smith - Wednesday, February 1, 2023 - link

    Ack! You are correct. There are too many nines in all of these prices!

    Thanks!
  • Hulk - Wednesday, February 1, 2023 - link

    One lower-clocked chiplet with V-cache, one without V-cache and clocked higher. Could be construed as a hybrid approach.
  • godrilla - Wednesday, February 1, 2023 - link

    One that can make or break the Zen 4 3D R9s if not scheduled to perfection. Will professionals pay more for potentially lower performance in professional apps, when the 7950X offers a significant number of cores with higher boost and base clocks and will be selling for about $120 less?
  • Cooe - Thursday, February 2, 2023 - link

    Why would professionals be looking at the X3D parts? O_o🤦😑 They are EXPLICITLY MARKETED TOWARDS GAMERS! (And multi-use users who heavily game). Literally NOBODY building a pure workstation is shopping the X3D chips unless they KNOW one of their major workloads responds to extra cache!

    And for any "mixed workload" users, single-core/lightly-threaded boost isn't any higher on a regular R9 7950X vs the R9 7950X3D! And all-core boost won't be that much lower than the regular parts! (Think non-X Ryzen 7000, NOT R7 5800X3D.)

    The vanilla 7950X hits ≈5.1GHz all-core, while the R7 7800X3D shows that a Zen 4 X3D CCD can hit 5GHz. Aka, we're talking about a loss of only ≈100-300MHz on the all-core boost (≈4.8-5GHz vs ≈5.1GHz) for the highest-clocking & highest-core-count part! (Or basically EXACTLY what you'd expect from the 50W TDP reduction going from 170W to 120W!)

    Anyone who isn't a complete & utter idiot and who ACTUALLY uses their PC to game a lot won't even REMOTELY CARE about losing a mere ≈6% of their all-core boost clock (≈4.8GHz vs ≈5.1GHz) in productivity workloads in exchange for X3D's absolutely GARGANTUAN gaming performance gains! And if losing that much clock-speed/productivity performance IS a "game ender" for you, then you were never actually considering an X3D chip in the first place!
  • godrilla - Thursday, February 2, 2023 - link

    Wait, so you are saying that professionals who do not game would not be smart to buy these Zen 4 3D R9s, because the non-3D ones are significantly cheaper?
  • godrilla - Thursday, February 2, 2023 - link

    Update: the same goes for people who just game*. This is a niche-of-a-niche product. Don't forget the non-3D parts don't stop being good at gaming either.
  • godrilla - Thursday, February 2, 2023 - link

    Update: the 7950X can overclock to 5.8 GHz all-core
    https://www.techpowerup.com/299126/amd-ryzen-9-795...

    or 5.95 GHz according to

    https://youtu.be/yXU1FJxbToY

    FYI
  • Zoolook - Saturday, February 4, 2023 - link

    Correct, and the gain is only massive in certain games and at low-to-mid resolutions; otherwise you are GPU-bottlenecked anyway. A niche of a niche is a very apt description.
  • Targon - Wednesday, February 8, 2023 - link

    For those who pay attention, that is what everyone has been saying. The vast majority of programs, and even some games, do not get extra performance from the extra cache. In the case of the 5800X3D, you already saw how the CPU was actually slower in many/most programs compared to the Ryzen 7 5800X, due to the clock speed difference and the cache not resulting in higher performance. The Zen 4 chips with 3D stacked cache won't have the clock speed hurt quite as much, and for the Ryzen 9 versions the 3D stacked cache is only on one of the CCDs, not both, so it won't even hurt performance for most programs.

    For the PROFESSIONAL market that actually uses 16+ cores, the X3D would result in lower performance in those professional applications due to lower clock speeds on one CCD. It would be better to get the non-3D cache versions at that point.
  • Hifihedgehog - Wednesday, February 22, 2023 - link

    Correct. They are professionals so knowing the right tools for the job comes with the territory.
  • Dribble - Friday, February 3, 2023 - link

    Why would gamers need X3D parts with the uncached cores? They would just get the 7800X3D; the only market for the 79?0X3D is the professional who also games a bit.
  • godrilla - Friday, February 3, 2023 - link

    There are some rumors that AMD is attempting to use clever scheduling to utilize the faster cores on one chiplet and the extra cache on the other. Source: Hardware Unboxed show on YouTube, 2/2/23, but we would have to wait for some benchmarks to determine this. Plus the 7900X3D would be more practical. Also, we would still have to wait for the 7800X3D to launch for a direct comparison to see if this holds any weight.
  • Tom Sunday - Friday, February 3, 2023 - link

    I am not quite an idiot, as I do drive a forklift at Walmart on the third shift and at age 36 am still living in Mom's basement with full fridge benefits. But I am most definitely one who is on the lower rung of the food chain and who uses his PC to game in order to exercise his brain while drinking loads of Mountain Dew. Like with most AMD CPUs or products these days… I will wait until this coming November, when the 7950X3D series processors will be available at about 1/3 of their price. I will feel smarter that way!
  • Gastec - Friday, February 3, 2023 - link

    I don't think X3D gaming chips compute sarcasm.
  • Gastec - Friday, February 3, 2023 - link

    There's no need to be so condescending; we're not noobs here on AnandTech. Also, there is no such thing as a "gaming" CPU, that is just a marketing term for the sheeple. Some CPUs are faster than others in certain applications, but cost more money and/or consume more power, and that's all there is to it.
  • nandnandnand - Thursday, February 2, 2023 - link

    It absolutely can. Reviewers should be on the hunt for games and applications using the "wrong" chiplet. Running a game and workload at the same time could also be interesting.
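
    The gist of such a hunt is just an A/B pinning test. A minimal sketch in Python (using the third-party psutil package; the CCD-to-logical-CPU mapping below is an assumption that has to be verified per system, and game_benchmark is a hypothetical workload):

        import subprocess
        import time

        import psutil

        VCACHE_CPUS = list(range(0, 16))      # assumption: CCD0 = V-cache CCD
        FREQUENCY_CPUS = list(range(16, 32))  # assumption: CCD1 = frequency CCD

        def time_on_ccd(cmd, cpus):
            """Run a workload pinned to one CCD and return its wall-clock time."""
            start = time.perf_counter()
            proc = subprocess.Popen(cmd)
            psutil.Process(proc.pid).cpu_affinity(cpus)  # pin right after launch
            proc.wait()
            return time.perf_counter() - start

        benchmark = ["./game_benchmark", "--headless"]  # hypothetical binary
        print("V-cache CCD:   %.1f s" % time_on_ccd(benchmark, VCACHE_CPUS))
        print("Frequency CCD: %.1f s" % time_on_ccd(benchmark, FREQUENCY_CPUS))

    If the "wrong" chiplet wins by a large margin for a given game, the scheduler placed it badly.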
  • Chaitanya - Thursday, February 2, 2023 - link

    That was probably the reason why MS staff was present when AMD originally presented these parts at CES a few weeks back. AMD needs MS to properly program the scheduler to keep latency to a minimum when accessing the cache.
  • haukionkannel - Thursday, February 2, 2023 - link

    That can also be the main reason to delay the release of these. The Windows scheduler may need some "fine tuning" before it can really benefit the V-cache versions... But let's see. The 7800X3D will easily be the real deal, so releasing it later does not fit well with the speculation that the scheduler needs tweaking for asymmetric cores.
    The tests will be interesting!

    The easy solution would be to treat the V-cache cores as the main cores and the high-boosting ones as the "weak" cores; when an application can use all cores, the high boost will still benefit there. But that would leave some productivity performance on the table when only a few cores are used, so it will be really interesting to see how the cores and scheduler behave!
  • Zoolook - Saturday, February 4, 2023 - link

    The most effective approach would be application profiling, but I'm sure they will go with some "clever" estimation hack that will put things on the right core around 60% of the time.
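
    For what it's worth, a naive version of such an estimation hack is easy to sketch: sample a process's cache behaviour and guess a CCD from the miss ratio. A rough Python wrapper around Linux perf (the event names are standard perf aliases, but the 5% threshold is a purely illustrative assumption, not AMD's actual heuristic):

        import re
        import subprocess
        import sys

        def cache_miss_ratio(pid, seconds=5):
            """Sample cache counters for pid and return misses/references."""
            out = subprocess.run(
                ["perf", "stat", "-e", "cache-references,cache-misses",
                 "-p", str(pid), "--", "sleep", str(seconds)],
                capture_output=True, text=True).stderr  # perf stat prints to stderr
            counts = {}
            for event in ("cache-references", "cache-misses"):
                m = re.search(r"([\d,]+)\s+" + event, out)
                counts[event] = int(m.group(1).replace(",", "")) if m else 0
            return counts["cache-misses"] / max(counts["cache-references"], 1)

        ratio = cache_miss_ratio(int(sys.argv[1]))
        # Naive rule: many misses suggest the working set overflows L3, so the
        # extra V-cache might help; otherwise prefer the higher-clocked CCD.
        print("miss ratio %.1f%% -> %s CCD"
              % (ratio * 100, "V-cache" if ratio > 0.05 else "frequency"))

    You can see how easily a one-number heuristic like that lands on the wrong core for bursty or phase-changing workloads.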
  • Bruzzone - Tuesday, February 7, 2023 - link

    According to Kevin Krewell, CCD-to-CCD and CCD-to-I/O-die traffic on and off the fabric connects through SerDes; it's not a direct bus yet. mb
  • godrilla - Wednesday, February 1, 2023 - link

    Gaming performance is probably the same, hence why they wanted to push the cheaper 7800X3D CPU to the end of April. Also, if the 7900X3D has a 6-core chiplet with 3D cache, does that mean a 7600X3D is possible? Or is it an 8-core 3D chiplet with a 4-core second chiplet, making a 4-core Zen 4 part possible as well?
  • brucethemoose - Wednesday, February 1, 2023 - link

    That's a very interesting question, as it would definitely affect scheduling.

    I would probably buy a 7600X3D myself. I am dying for ST performance (not just for simulation games, but for Python and some other niches), and almost anything I need more than 6 threads for has either moved to GPUs, is not time-sensitive, or would be "good enough" on a 7600X3D.
  • Cooe - Thursday, February 2, 2023 - link

    3D V-Cache doesn't improve single-core performance (unless that single thread manages to be cache/memory bound)... In fact, for a single-CCD part it actually makes it WORSE because of the lower clocks!

    And plenty of things still want CPU threads, which increasingly includes current-gen games. With the current-gen consoles having straight-up 8c/16t Zen 2 mobile CPUs in them (aka, basically pulled right outta Renoir/Ryzen Mobile 4000), more and MORE games are able to utilize a full 8-core CPU! If you play modern games AT ALL, you are setting yourself up for massive failure later in this console generation if you only get a 6c/12t CPU and don't plan on upgrading it anytime in the next couple of years. 🤷

    (This is EXACTLY analogous to how 4c/4t i5's were PLENTY for most of the last console generation until the very last couple of years, when all 8x wimpy CPU threads on the PS4/XBO were being COMPLETELY TAPPED by developers, causing SIGNIFICANT performance gaps between the 4c/4t i5's and the full-fat, also 8-thread 4c/8t i7's.)
  • brucethemoose - Thursday, February 2, 2023 - link

    This is untrue for me; the simulation-heavy games I play are bound to one or a few threads and respond very well to VCache.

    The few AAAs I play are fine on older CPUs.

    I am less sure about Python stuff (where the bottleneck shows up in little bursts where the GPU isn't working). But I think VCache should be kind to JIT languages as well.
  • lmcd - Friday, February 3, 2023 - link

    This isn't a good summary of the scenario, because the PS4/XBO CPU problem only happened out of desperation due to the abysmal IPC of each PS4/XBO core. Games that drop XBO/PS4 can revert to primarily loading up a few threads heavily, which results in less total work (as synchronizing workloads that were not meant to be split across threads ends up increasing the total amount of work).

    Zen 2 can bottleneck the XSX and PS5 GPUs, but not proportionately as much. There are way more optimizations to exploit that will prevent overloading all 8 cores, and in a situation where 4 of the 8 threads are truly loaded up, a 6c12t part that is already substantially ahead of the 8c16t part in IPC will be absolutely fine.

    The bigger concern is memory bandwidth, which is what the V-Cache solves. If anything, I think parts that don't have V-Cache will get shredded once games start truly taking advantage of direct GDDR access from the CPU (remember, Xbox One OG/S had DDR3 and a 32MB cache, which did very little to solve the DDR3 problem).
  • nandnandnand - Thursday, February 2, 2023 - link

    I don't think AMD is required to do 8+4 due to a limitation of how the cache works, so a 7600X3D should be possible. But if they want to keep X3D premium, they won't make one. It would be interesting if the 7900X3D was 8+4, but I expect 6+6.

    4-cores are obviously possible, just unlikely because of chiplets and yields. AMD will even disable 6-7 cores in some cases. Like the Ryzen 3 5125C with only 2 of 8 cores (monolithic die), or the EPYC 72F3 with only 1 core enabled per chiplet. There is already a Zen 4 chip that enables 4 cores on each chiplet: the EPYC 9374F.
  • Silver5urfer - Friday, February 3, 2023 - link

    AMD will never release a 7600X3D processor; that would eat all the gaming sales for the rest of the Zen 4 lineup. Plus, binning a 6C CCD for an X3D stack is not easy, because you already have defective silicon with 2C disabled. And these X3D parts run high voltage, unlike the Zen 3 X3D, which is low voltage.
  • Bruzzone - Tuesday, February 7, 2023 - link

    A 7600X3D is possible. Its retail would be $368, down to $276 at run-end clearance, but on more cache per core I agree on the octa cannibalization question. I also question whether a hexa CCD can be used, on bonding pad placement. mb
  • Zoolook - Saturday, February 4, 2023 - link

    Considering that the boost clock is much lower on the 7800X3D than the 7700X, I'd say it's one CCD.
  • Bruzzone - Tuesday, February 7, 2023 - link

    I think you're correct. The 7900X3D is 8 + 4, so the non-hybrid-bonded quad, like its octa sibling, is tuned for frequency. I question whether a hexa CCD can be relied on for the hybrid-bonded L3 add-on, on pad placement. mb
  • Samus - Thursday, February 2, 2023 - link

    Hella money, but pretty insane to have 128MB+ on-die cache in a consumer CPU.
  • nandnandnand - Thursday, February 2, 2023 - link

    The best you're getting is still 96 MB per CCD.

    Intel did have 128 MB of eDRAM on some Haswell through Coffee Lake chips, although it was a lot slower than 3D V-Cache.

    https://www.anandtech.com/show/16195/a-broadwell-r...

    It would be nice to see big L4 on consumer CPUs again in the future.
  • Samus - Thursday, February 2, 2023 - link

    The 5775C cache was external to the CPU die, not stacked; kind of a glorified external cache on a package vs a PCB (à la Pentium II).

    And as you mentioned, whether directed at your eDRAM/Haswell statement or not, it was technically L4 and a victim cache in the hierarchy. Who knows, maybe it inspired AMD to do what they did here, but Intel killed it because manufacturing just wasn't ready to make it worthwhile... for the price.
  • geniekid - Thursday, February 2, 2023 - link

    Indeed. The 5800X3D reminds me a lot of the 5775C where specific workloads would gain such a large performance boost that the chip continued to be competitive in overall benchmarks across multiple generations.

    I do wonder how a modern "L4" implementation might compare against stacked L3. Surely there would be trade-offs worth considering between speed/latency and size/thermals/power.
  • brucethemoose - Thursday, February 2, 2023 - link

    We will get to see with Sapphire Rapids, which can use its HBM as L4 cache.
  • brucethemoose - Thursday, February 2, 2023 - link

    TBH Intel killed it because the only OEM who wanted IGP performance was Apple.
  • Bruzzone - Tuesday, February 7, 2023 - link

    The Broadwell 5775C was one of the most costly Core-line products Intel ever made. mb
  • abufrejoval - Monday, February 20, 2023 - link

    I have a NUC8 like that, 48EU iGPU with the 128MB eDRAM chip and a NUC11 with the 96EU iGPU that doesn't have the eDRAM chip. Both run DDR4-3200. And I also have a NUC10 with the normal 24EU iGPU, no extra eDRAM and DDR4-3200 as well (in sticks, it might actually operate slightly slower).

    The theory behind the eDRAM chip was that it would enable the iGPU to scale, since DRAM bandwidth was the main obstacle to benefiting from additional iGPU cores.

    And it was quite obvious that the eDRAM wasn't nearly as effective as it should have been, as the 48EU iGPU only delivered around "32EU" worth of performance, 50% more iGPU power than the 24EU baseline. So even if the eDRAM-enhanced part was designed for Apple, Apple evidently wasn't very happy with the power it got.

    So when Tiger Lake came out with 96EU and zero eDRAM, I was actually expecting to be quite disappointed, because DRAM bandwidth hadn't really improved by much: the values reported by GPU-Z for all three variants differed only slightly.

    But the 96EU iGPU did deliver almost exactly 4x the performance of the 24EU baseline, which was never well researched or explained by anyone in the press. My only explanation would be that an internal cache was now acting as a scratchpad for all sorts of texture operations, and that it was thus able to make do without using invisible parts of the frame buffer, running shader operations at the full speed of the EUs.

    It still couldn't convert the current 96EU iGPUs into speed monsters that could rival a dGPU, but in its niche it was pretty impressive.

    Outside graphics, I didn't notice any significant benefits from the eDRAM. Playing the NUC10 cores (no eDRAM) and NUC8 cores (128MB eDRAM) against each other in scalar workloads, the NUC10 always came out ahead ever so slightly, because it had 200MHz more clock. IPC was pretty much unchanged between those two, while the 4 Tiger Lake cores performed pretty much exactly the same as the 6 Comet Lake cores on multi-threaded workloads.

    Very long story short: I don't believe you'll see any benefit from an L4 on any software you actually run. It would take a tailor-made synthetic workload to eke out any benefit, nothing on par with what the iGPUs could do with it. And even those struggled...
  • Cooe - Thursday, February 2, 2023 - link

    The only real downside of AMD's asymmetric approach to the multi-CCD Ryzen 7000X3D parts is that they will require OS-side optimizations to thread scheduling to work correctly (although these are relatively easy changes to make, as it's BLATANTLY APPARENT to the OS how cache/memory-heavy a given process is, and even how often its cache accesses are hitting vs missing [indicating a need for more/less L3]), meaning they will be limited in terms of operating system selection to prolly just Windows 11 & anything using the Linux kernel. 🤷

    Otherwise it's basically ALL win, as a real "Have your cake and eat it too!" kinda genius lightbulb moment on AMD's part!
  • nandnandnand - Thursday, February 2, 2023 - link

    Probably true, or AMD wouldn't do it, but it still needs to be confirmed by reviews.

    Intel going first with even more asymmetric CPUs probably helped lay the groundwork.
  • James5mith - Thursday, February 2, 2023 - link

    Why are base clock speeds so RF#$*() high these days?

    Why not run at 1-2GHz and keep power consumption to an absolute minimum until needed?
  • Glaurung - Thursday, February 2, 2023 - link

    They do. My old 3GHz CPU runs at something like 800MHz whenever the system is idle or mostly idle. The "base" clock speed is what you are guaranteed to get when you ask the CPU to do work. When you're not asking it to do work, it idles at a fraction of that. The turbo clock speed is not guaranteed; you get it only if parameters (mostly heat, but also esoterica like power demands and the speed of the other cores) are not exceeded.
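
    Easy to check on Linux, for what it's worth. A small Python sketch that reads each core's current frequency from the standard cpufreq sysfs interface (with the caveat, noted further down the thread, that some tools misreport sleeping Ryzen cores):

        from pathlib import Path

        # At idle these values should sit far below the advertised base clock.
        for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
            freq_file = cpu_dir / "cpufreq" / "scaling_cur_freq"
            if freq_file.exists():
                mhz = int(freq_file.read_text()) / 1000  # sysfs reports kHz
                print(f"{cpu_dir.name}: {mhz:.0f} MHz")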
  • meacupla - Thursday, February 2, 2023 - link

    They don't. Desktop Ryzen chips have a rather high power draw when they are idle, because they don't bump down below 1GHz.
    Unless there is some discrepancy in how low the multiplier goes on X, X3D, G, and regular chips?

    As for why AMD doesn't have a more aggressive idle? IDK. Their laptop Ryzen parts can idle as low as 800MHz, so there really should be no reason for the very high idle clocks of desktop Ryzen, unless there is some kind of software or hardware power delivery issue.
  • brantron - Thursday, February 2, 2023 - link

    I disable that on both my Tiger Lake Surface and Skylake XPS 13 because it does not change idle power by even 0.1W or the temperature by 1 degree.

    They are always 2.4 to 3 GHz and it's much more likely I see 4+ GHz that way doing normal things. I can set the Surface as high as 3.8 GHz minimum, and it still doesn't matter, even though it's fanless.

    Just leave the C states alone, and it will run at a very low voltage and gate whatever isn't in use. CPUs have been able to run around 4 GHz at 1.0v or less for years. It wasn't long ago that 0.9v was for the 800 MHz lowest power state.

    Many laptops overdo it and hang around the high latency power states. It's like they're stuck in battery saver mode, but while plugged in.

    Go to a store with different laptops on display, set them to the high power profile, and watch how the 4 to 5 GHz i7s and Zen 3s stay closer to 3 GHz.

    Desktops have no use for that, so it isn't supported.

    The difference with some AMD desktop CPUs is that the separate controller chip does not have the same fine-grained power control. Some use 10 watts minimum, which may add up to 20+ for the CPU package because of BIOS settings.

    Supposedly Zen 4 addressed that by making the controller chip more like the Zen 3+ system on a chip. It remains to be seen if the Dragon Range laptops use the same chips.
  • meacupla - Thursday, February 2, 2023 - link

    Okay, so since you don't seem to understand that the desktop and laptop parts are not comparable, here is the gist of it.

    Intel laptop CPUs have very poor idle power draw
    Intel desktop CPUs have good idle power draw
    AMD laptop CPUs/APUs have good idle power draw
    AMD desktop CPUs have poor idle power draw

    Ryzen desktop CPU idle is so bad that it can pull twice the wattage at the wall compared to Intel desktop CPUs.
    On the other hand, Intel laptop CPUs implemented efficiency cores, and they still can't keep up with how efficient Ryzen laptop CPUs are.
    The Ryzen controller on Ryzen laptop CPUs will make a measurable difference in results.
  • JasonMZW20 - Thursday, February 2, 2023 - link

    AMD’s desktop cores using CPPC2 don’t report lower clocks because they’re sleeping; desktop parts should report about 3.6-3.7GHz, but are actually off. If you see something like 1700MHz at idle, CPPC2 has been disabled in power plan and CPU has reverted to legacy P-states. Only Ryzen Master can show true core info.

    As for the high idle draw on desktop parts, that’s mostly a consequence of having chiplets and needing to keep PHYs powered for IOD/CCD communication. This is also why laptop parts are power sippers: they’re monolithic, so more hardware can be power-gated.
  • abufrejoval - Sunday, February 19, 2023 - link

    I was going to post something like "your tool is reading it wrong", need to use Ryzen Master to check core clocks, but you've done it much better ;-)
  • Golgatha777 - Thursday, February 2, 2023 - link

    I'm feeling pretty good about taking Micro Center up on their ASUS ROG B650E-F, 32GB DDR5 6000, and 7900X for $599 deal. I'll wait for the price reductions and do UEFI updates in the meantime to be ready for a drop-in replacement somewhere down the line once all the bugs are worked out.
  • nandnandnand - Saturday, February 4, 2023 - link

    Wait for Zen 5 X3D or something.
  • Bruzzone - Tuesday, February 7, 2023 - link

    The 7900X (non-3D) price to high-volume retail is $274, so the clearance price will be a smidge above. AMD's contribution to the kit bundle when finding the 7900X at $439 is around $50. The 7900X3D retail low price is $462, but they're buying them from AMD for $299. Retail pays $349 for the 16C 3D, so there's likely a reason to upsell from 12C to 16C. BUT here is a question: if on the 7900X3D the frequency CCD is a quad, is that component higher clocked than a full octa? I don't think a hexa is bondable. mb
  • GreenReaper - Friday, February 10, 2023 - link

    Same configuration we ended up with, too. Don't really need more for most things.
  • lopri - Thursday, February 2, 2023 - link

    It must not cost that much to make these compared to regular Zen 4 CPUs. They are staggering the releases to squeeze the most out of the market. I am not a fan of AMD's sales tactics as of late.
  • lmcd - Friday, February 3, 2023 - link

    or it takes additional time to append a feature?

    seriously, come on
  • lopri - Sunday, February 5, 2023 - link

    Re-reading it, my comment was not clear. I was referring to the later release of 7800X3D, after that of 7900X3D and 7950X3D.

    On the other hand, I hear that AMD has lowered the price of regular Zen 4 CPUs even more. In that case I would retract the above criticism as well. I take no issue with AMD going for the high-end market segment while catering to the other market segments.
  • nandnandnand - Sunday, February 5, 2023 - link

    We can't know for sure but they probably want to sell 7950X3D and 7900X3D first because they are more expensive but have roughly the same gaming performance as the cheaper 7800X3D, and because they want the first reviews showing absolute dominance over Intel. In theory these will look better in review charts because the games that don't need the extra cache can use the other chiplet. They must be confident in the asymmetric cache and clock speed approach.

    Zen 4 X non-3D CPUs are heavily discounted from MSRPs and will stay that way. Non-X get most of the performance of X. 7950X3D dropped in at the same $700 MSRP as the 7950X. 7900X3D got +$50, leaving it weirdly close to the superior 7950X3D (85.7% the price for 75% of the cores). 7800X3D also got +$50. At $450, it's the same price the 5800X3D was at launch.

    These are expensive CPUs, but will most likely top the gaming charts for the rest of 2023. 7800X3D will be in high demand, and the 7900X3D is fascinating because it's teasing the $100 upsell to 7950X3D. If AMD set the wrong MSRP for 7900X3D, they can just let the price fall $20-80.
  • Bruzzone - Tuesday, February 7, 2023 - link

    I've read through the comments and am curious not necessarily about gaming performance but about the professional modeling and simulation performance of 3D, where an entire model fits in L3.

    According to wccftech on April 12, 2022, AMD would supply 50K units of 5800X3D per quarter, and on channel data 1% of Vermeer CPUs produced in q1+q2 of '22 = 234,207 units. Back in April I originally estimated 18 to 24 months' worth of 5800X3D, in the 365K-unit range as I recall.

    In q4 2022 AMD produced 4 M R7K, following 13 M in q3. I suspect the entire lot of q4 Raphael production is 3D, and 4 M is approximately AMD's dGPU quarterly finished-goods production, on financials sustaining channel inventory holding. CPU volume in unison with dGPU volume is a planning tactic, and at 4 M 3D units AMD would hold CPU gaming for the moment.

    For those interested in holding out until run-end R7K X3D inventory clearing, here are the targets to know you are close:

    7950X3D at $699: low retail is $576 and high-volume procurement price is $349
    7900X3D at $599: low retail is $463 and high volume to OEM/retail = $299
    7800X3D at $499: low retail is $351 and high-volume procurement = $249

    See how AMD encourages stepping up to the 16C.

    Note: 70% of R7K channel (production) availability is 79x0_, suggesting why the octa is pushed out.

    Intel Raptor desktop margins are up to 100%, goosing the channel to push them through. R7K channel margins have been 30% and are being pulled through; think about it. R7K3D margin potential means they will be pushed, all 4 M of them.

    If there were to be a hexa, which I discount (the hybrid bonding interconnect pads require an octa CCD), and I agree with others pointing out it would essentially cannibalize the 8C on more cache per core, its retail MSRP would be $368, down to $275 at run-end clearance.

    Mike Bruzzone, Camp Marketing
  • deil - Wednesday, February 8, 2023 - link

    Availability will feel like they were released on Feb 29th.
  • nandnandnand - Wednesday, February 8, 2023 - link

    I'll be surprised if the 7900X3D flies off the shelves. The MSRP is too close to the 7950X3D.
  • Sliderpro - Thursday, February 9, 2023 - link

    I find it a bit disappointing that we don't get a 6-core CPU, which would make sense as an 8-core die with 2 cores disabled to help with yield.
    Personally, I wish there were a 4-core with 64MB cache at maybe 5.3-5.5 GHz and a roughly $250 price point. It would slap the 5600X in single-core and be perfectly good for gaming. Very few games would be able to max out 4 cores as powerful as these.
  • nandnandnand - Thursday, February 9, 2023 - link

    Premium quad-cores don't make sense with AMD's 8-core chiplet. It's dead, get it out of your head.

    7600X3D would be nice, but AMD would rather keep X3D prices higher, especially if they all beat the 13900KS on average.

    It would be perfect if they could use a defective cache chip to make a 7600X3D with somewhere between 32 and 96 MB, and launch that at a later date. Maybe at that $250 price point.
  • abufrejoval - Sunday, February 19, 2023 - link

    As I understand it, there are two major reasons for binning: outright defects, and excess voltage requirements (or clock limits) due to borderline connections driving a chip beyond the allowable power boundary.

    I am assuming that outright defects with "defect craters" taking out full cores have become very rare indeed, and that the second type is far more common, especially near the wafer edges or when masks need cleaning. Most 6-core CCDs might actually contain 8 functional cores, but get the worst two voltage or clock offenders cut to fit inside the permissible power envelope and boost range.

    A fabbing process that regularly resulted in half of the 8 cores designed into a CCD getting evicted would mean 100% cost and below 50% revenue. And that's only if the internal L3 cache survives, which incidentally already uses much more surface area than the cores themselves.

    Latching the 64MB V-cache add-on onto such a broken chip can only add marginal margin at full 3D cost, so it makes absolutely no sense economically.

    AMD's cost structure is based on reusing a single basic CCD design across the largest range of SoCs, which simply doesn't include quad-core CPUs any more.

    A separate 4-core design might perhaps deliver a half-size CCD and twice as many of them, but a finished SoC would be difficult to produce at 50% cost, while it would be far too likely to sell at less than that.

    Intel's internal charging model doesn't need to account for the silicon surface area consumed as they go to these small and low-revenue dies, often with large iGPUs thrown in for next to nothing.

    But AMD has to pay for every square millimeter consumed and needs to make the highest revenue from it to thrive. It's more likely that further generations of CCDs will go to even higher minimum design core counts; quad-core survivors might still accumulate like iGPU-less APUs did, but they are more likely to be sold as scrap to OEMs, e.g. in China, than to become an official SKU.

    Well, there are reports of 8-core SKUs made from dual quad-survivor CCDs... but again, that's recycling, not design. And no one will fit V-cache tiles on known crap.
  • abufrejoval - Sunday, February 19, 2023 - link

    I doubt something like an OS scheduler could properly decide if a process is better served by a core with more cache or by higher clocks: it requires extensive profiling data and something of an expert system to make and review such decisions, and nobody would accept a scheduler that might need more time than what fits into a hardware clock tick.

    True, you could fit the evaluation of an external bias collected by such an expert system into a modified scheduler, but a "consumer friendly" qualified decision maker would need to come as an app or service, fed with previously collected game profile data and perhaps enhanced via runtime data collection (and likely with little respect for the GDPR).

    Perhaps a basic mode would simply consist of assigning each application a primary CCD type (something like the sketch below), but in a complex game different processes or even threads might actually be better served by one type or the other.
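
    Such a basic mode is simple enough to sketch as a user-space service in Python (using the third-party psutil package; the executable names, CCD core ranges, and polling interval are all illustrative assumptions):

        import time

        import psutil

        VCACHE_CPUS = list(range(0, 16))      # assumption: CCD0 carries the V-cache
        FREQUENCY_CPUS = list(range(16, 32))  # assumption: CCD1 clocks higher

        # Hypothetical per-application bias table an "expert system" might maintain.
        PROFILES = {
            "cache_hungry_sim.exe": VCACHE_CPUS,
            "shader_compiler.exe": FREQUENCY_CPUS,
        }

        while True:
            for proc in psutil.process_iter(["name"]):
                preferred = PROFILES.get(proc.info["name"])
                if preferred is None:
                    continue
                try:
                    if proc.cpu_affinity() != preferred:
                        proc.cpu_affinity(preferred)  # apply the primary-CCD bias
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    pass  # a real service would log processes it can't touch
            time.sleep(10)  # rescan for newly launched processes

    Per-thread decisions inside a single game process are exactly where a coarse map like this falls apart, of course.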

    One thing already seems pretty sure to me: there will be endless debates on right, better, and wrong, with plenty more opinion than tangible benefit in real games.

    While I consider the mixed type CCD approach technically brilliant, I'm not so sure it will survive publicity that doesn't understand it.
  • Lawzy - Monday, February 27, 2023 - link

    Nice, and way overkill. It depends on your use though; if someone is playing at 1080p @ 144Hz, even the 5600X and a 2070 are more than plenty. Want more? Add a 3070.

    Same for Intel: those i9s and massive cards are a waste for the majority of gamers. Again, it depends on use and how much you want to blow on useless, unnoticed performance (waste).
