"Moving forward into 2019, AMD debuted Zen 2 or, as it is more widely known, Ryzen 3000. Switching to TSMC's high-performance 7 nm manufacturing process, AMD delivered higher performance levels over Zen/Zen+, with double-digit gains in IPC performance and a completely new design shift through the use of chipsets."
The very notion that the NVMe slots on this socket will have as much bandwidth as the GPU slot on my current home computer (Z270, lol) is kinda blowing my mind a bit.
Yeah, but there aren't M.2 drives that fast, and there won't be for a while. That makes this whole rush to PCIe 5.0 a bit ridiculous.
About the only place where PCIe 5.0 really makes much sense is in the chipset link, yet some posters below claim the first AM5 chipsets will only run the link at PCIe 4.0 speeds.
The first one has already been announced, and AMD claims that more will come when the platform launches (in about 4-5 months).
But I agree with the second point. I really hoped the chipset link would be PCIe 5.0 x4 so it could give full bandwidth to the rest of the I/O; even a PCIe 4.0 NVMe drive at full blast would only take half of it. Maybe they should have done that for the X670(E) to give it more differentiation from the rest.
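For rough numbers (a back-of-the-envelope sketch; per-lane figures are post-encoding and ignore packet/protocol overhead), this also explains the Z270 comparison above:

```c
#include <stdio.h>

/* Approximate usable bandwidth per PCIe lane, in GB/s, after 128b/130b
   line encoding (Gen 3+); real-world throughput is a bit lower still. */
static double lane_gbps(int gen) {
    switch (gen) {
        case 3: return 0.985;   /*  8 GT/s * 128/130 / 8 */
        case 4: return 1.969;   /* 16 GT/s * 128/130 / 8 */
        case 5: return 3.938;   /* 32 GT/s * 128/130 / 8 */
        default: return 0.0;
    }
}

int main(void) {
    printf("PCIe 3.0 x16 (Z270 GPU slot): %.1f GB/s\n", 16 * lane_gbps(3));
    printf("PCIe 5.0 x4  (AM5 M.2 slot):  %.1f GB/s\n",  4 * lane_gbps(5));
    printf("PCIe 4.0 x4  (chipset link):  %.1f GB/s\n",  4 * lane_gbps(4));
    return 0;
}
```

A 5.0 x4 M.2 slot (~15.8 GB/s) really does match a 3.0 x16 GPU slot, and a single PCIe 4.0 NVMe drive at full tilt (~7.9 GB/s) would saturate a 4.0 x4 chipset link entirely, but only half of a 5.0 x4 one.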
> The first one has already been announced, and AMD claims that more will come
It took at least a year between when AMD added PCIe 4.0 support and when the first NVMe drives appeared that could actually surpass PCIe 3.0 x4 speeds. I'm expecting the lag for PCIe 5.0 will be even longer.
So, you'd rather your motherboard limit you in 2022, 2023, 2024, and possibly 2025? If you plan to JUST upgrade your CPU and video card, but keep your motherboard and RAM over the next 3-4 years, then having a motherboard that gives you PCIe 5.0 will keep you from feeling you NEED a motherboard upgrade sooner.
Faster is better, but it is overkill for now. SSDs that support that speed are coming, but to copy what? You only get that speed in the real world when copying from one PCIe 5.0 drive to another... my old PCIe 3.0 SSD was fast as hell at nearly 3500 MB/s, and so is my PCIe 4.0 SSD at nearly 7000 MB/s, but I really can't tell the difference... still, I want it.
Chicken or the egg. This has happened with every PCIe bump and every early-adopter technology. It also makes perfect sense for the primary drive and video card to have the most bandwidth. Nothing on a chipset link would have bandwidth priority over those two for the vast majority of users.
Not really. When NVMe drives aren't even maxing out PCIe 4.0, there's really no case to be made for them moving to PCIe 5.0. It just burns more power and creates more heat, which in turn causes more thermal throttling that hurts both performance and data retention.
You probably haven't considered that if you aren't pushing the bandwidth, the power draw and heat won't be very high. Stick to PCIe 4.0 for NOW, and in another two years, you can go to PCIe 5.0 NVMe drives.
> Stick to PCIe 4.0 for NOW, and in another two years, you can go to PCIe 5.0 NVMe drives.
So, why are we wasting money on PCIe 5.0-capable boards, then? PCIe 5.0 requires more exotic construction techniques and components, which is one of the reasons Alder Lake boards are so expensive.
Keep in mind that AMD keeps socket compatibility far longer than Intel. If you buy an X670 or X670E motherboard in 2022, then in 2024 or 2025 you can drop in a new CPU. At that point, will you be upset that your motherboard is horribly held back by ONLY supporting PCIe 4.0, right when you want to upgrade?
Now, as far as running the slots at PCIe 4.0 speeds: if you see motherboards with extra PCIe lanes connected via the chipset instead of directly to the CPU, then sure, those will be limited. Only the 20 PCIe lanes off the CPU will definitely be PCIe 5.0: 16 for the primary PCIe slot and 4 for the first M.2 slot. Beyond that is where the debates will be.
"Keep in mind that AMD keeps socket compatibility for far longer than Intel. If you buy a X670 or X670e based motherboard in 2022, in 2024 or 2025 you can drop in a new CPU. "
You can, but why would you? Increases in computing performance have slowed to a crawl. Consider this: new process node + 65% more power + twice the L2 cache + architectural improvements + DDR5 = +15% performance at best. They threw everything at it at the same time, and this is as much as they got.
There will be nothing in 2024-2025 warranting an upgrade over this, and probably not ever, the way things are going, if you buy an 8-16 core part this fall.
If you understand demographic trends too, you'll understand that this is essentially the end.
Come on! There's a large clock-speed lift, the bigger L2, faster DRAM, plus around 8% IPC, and MT performance gains greater than ST. If you look at the Zen 3 launch, its enhancements seemed smaller too, but the result in practice was much more than it appeared at first sight; and here the main goal is the AM5 introduction. Zen 5 is expected sooner and is a full redesign, not an incremental improvement. Those who have enough performance can look forward to a more efficient and cost-effective design, while others can expect decent performance lifts, not just 5% each generation.
It's a minor use case in the market. PCIe 3.0 systems started to add more lanes to compensate for the prolonged development time of PCIe 4.0. Once those faster lanes showed up, the chipset could reasonably handle all the peripherals with fewer lanes.
> more bandwidth means you must have fewer peripherals?
No, it means two things:
1. More bandwidth (potentially) to the chipset, which can fan out into more lanes.
2. Direct-connected peripherals can use fewer lanes @ same bandwidth -> more peripherals.
Case in point would be AMD's ratcheting down of video card lane widths. With PCIe 5.0 coming onto the scene, we can expect that to continue.
You must have missed that AMD didn't even lay out the details of the Ryzen 7000 series. Describing Zen 4 in very general statements without specifics means that AMD isn't letting Intel know the full details of what to expect.
10+% clock speed improvement and we get a 15+% performance benefit. Considering it's a marketing slide, it's safe to assume they cherry-picked the best result. This would imply that we are looking at a 4-5% IPC improvement in a best-case scenario. Interesting.
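Spelling out that arithmetic (a simplification that treats performance as IPC x frequency and takes the >15% and ~10% slide figures at face value):

```latex
\mathrm{perf} \propto \mathrm{IPC} \times f
\quad\Rightarrow\quad
\frac{\mathrm{IPC}_{\mathrm{Zen\,4}}}{\mathrm{IPC}_{\mathrm{Zen\,3}}}
  \approx \frac{1.15}{1.10} \approx 1.045
```

i.e., roughly 4-5% IPC, if the whole gain really does decompose that cleanly.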
On the other hand, I am quite surprised they have managed to push those tiny 5 nm transistors to 5.5 GHz. There was a time when we needed LN2 to go that high. I wonder what the all-core boost for an 8-core part would be. 5 GHz across all 8 cores seems pretty doable.
I will say, though: 170 watts of power across those tiny chiplets... ouch, that's a lot of heat density. Anyone buying these should expect to bring something like an NH-D15 or an AIO just to run the thing at stock at comfortable temperatures under full load.
What a time to live in. Consumer CPUs are reaching 170 watts for AMD and 241 watts for Intel. Consumer GPUs are reaching 350 watts by design and 450 watts for enhanced cards. And this is with boost algorithms keeping the power consumption somewhat in check for all products.
Cooling is not the bottleneck. The problem is that there's a practical ~1500 W limit, in many parts of the world, if you don't want to redo your house wiring just to connect a computer.
And this is if you have a dedicated circuit just for your computer. Forget about having two of them on the same circuit.
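The ~1500 W figure checks out if you assume a typical North American 15 A, 120 V branch circuit with the usual 80% continuous-load derating (other regions differ, but land in the same ballpark):

```latex
15\,\mathrm{A} \times 120\,\mathrm{V} = 1800\,\mathrm{W},
\qquad
0.8 \times 1800\,\mathrm{W} = 1440\,\mathrm{W} \approx 1500\,\mathrm{W}
```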
>What a time to live in. Consumer CPUs are reaching 170 watts for AMD and 241 watts for Intel. Consumer GPUs are reaching 350 watts by design and 450 watts for enhanced cards. And this is with boost algorithms keeping the power consumption somewhat in check for all products.
While electric power prices are going up. On the upside, we won't need additional heating in the winter.
Let’s not ignore the elephant in the room. Apple and their vastly lower power solutions on 5nm and competing with performance. Although I absolutely expect single and multi core performance on a majority of these 7000 series chips to beat an M1, I’m intrigued by what M2 and derivatives can muster against the high power Intel and AMD bad boys.
… Which is better than Windows, far less obtrusive to productive workflows (what an overwhelming majority use high-end hardware for), and orders of magnitude more UI-friendly out of the box.
Power consumption is pretty far out of control for computer components so while competition is good, the most significant outcome of it is that the few companies that can afford the R&D are incrementing power input and consequently heat output in order to keep pace with one another in terms of compute performance. I gave up keeping up with that quite some time ago and just run whatever a cheap laptop can handle. It's been refreshingly nice to not fuss over upgrades and to just work within the limits of my hardware rather than chase after benchmark results or FPS in the latest games.
Anyone else think of Willow Cove in 11th gen? Node improvement and a way bigger L2, negligible IPC gain but a big frequency boost? It also claimed a 10-20% ST bump: https://www.anandtech.com/show/16084/intel-tiger-l... Obviously quite different; Tiger Lake had much better than 1.1x frequency relative to Ice Lake, plus an L3 upgrade...
I was assuming the same, even before this announcement. Something like a +5% IPC gain at slightly lower power consumption, but in practice using more power and generating around +15% performance. This puts pressure on Intel's 12th-gen and similarly performing 13th-gen processors. What surprised me was the inclusion of an (RDNA 2) iGPU in the mainstream processors (e.g., the R9 7950X), which is handy for AI tasks, troubleshooting graphics issues, and biding time during unpredictable GPU shortages.
Overall, this is AMD adapting its late Zen 3+ architecture to its new platform. No major surprises, and a wise move indeed for a smooth launch and healthy, long-term AM5 support. So think of this somewhat like the 14 nm Zen 1 to the 12 nm Zen+. When Zen 5 arrives by early 2024, it will be a proper new design, and that's when Intel's 14th gen will be in trouble.
I would say people should take a pause on this one; the older AM4 is still plenty competitive. Meanwhile, on the cutting edge, there is the new Apple A16 Bionic architecture coming, and it is being adapted to the M2 family of chips. It looks like a decent upgrade over the previous M1 = A13/A14/A15 performance level, so it would be wise to see how Intel and AMD respond to it (though it's very predictable).
"When Zen5 arrives by Early-2024, it will be a proper new design, and that's when Intel's 14th-gen will be in trouble."
There is so much speculation here. Neither Arrow Lake nor Zen 5 has even taped out yet, and their designs are not finalized. There's no way anyone can truly say which will be better; not even AMD's and Intel's own engineers could be sure.
Arrow Lake should be 15th gen, right? Barring any revisions or naming scheme changes (and at this point the Core iX naming scheme is almost as old as x86 was when the last new 486 variant came out), Raptor Lake is 13th, Meteor Lake is 14th, and Lunar Lake is 16th.
Arrow Lake is 15th gen. Intel's 14th gen is launching in spring of 2023. Arrow Lake will be 2024 and compete against Zen 5. Either 16th or 17th gen is going to be the biggest fundamental change to Intel's architecture since the debut of "Core" back in the mid-2000s; the "Royal Core Project," as it's known, is partly what Keller was brought back to start work on. It'll be a "brand new architecture" in the same way Zen 1 was for AMD. I wonder if they'll keep the "Core" naming/branding scheme, or if, like how BMW still calls a car the "328i" despite no longer using a 2.8L engine because the name has become too recognizable to change, they'll just keep the same branding despite the brand-new architecture.
> the "15% IPC gain" figure is measured using Cinebench and compares a Ryzen 9 5950X processor (not 5800X3D), on a Socket AM4 platform with DDR4-3600 CL16 memory, to the new Zen 4 platform running at DDR5-6000 CL30 memory. If we go by the measurements from our Alder Lake DDR5 Performance Scaling article, then this memory difference alone will account for roughly 5% of the 15% gains.
This is according to techpowerup.com
So we have 0 IPC improvement. Zen 4 is just a node shrink + clock speed boost + upgraded memory compared to Zen 3.
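For what it's worth, the decomposition behind that "0 IPC" claim (treating the gains as multiplicative, which is itself a simplification):

```latex
\frac{1.15}{1.10 \times 1.05} \approx 0.996
\quad\Rightarrow\quad
\text{implied IPC gain} \approx 0\%
```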
I'll bet AMD fell into the AVX-512 trap. They probably sunk so many engineering resources and so much die area into implementing it that it starved other areas of the chip.
They did a good job delaying it for so long, but I guess they're probably facing growing demands for it, in the server market.
It was a worthwhile investment. They would have had to take the leap on AVX-512 someday, and this way they can just do it and get it out of the way. And the sooner the better, as software adapts to it. If anything, this was the perfect time: it's on a brand-new platform, and they have plenty of performance and efficiency on the table to spare. Intel is going to be dominating them with the 12th-gen and 13th-gen products.
AMD still has options, such as:
- slight tweaks to Zen 4
- overclocking it
- adding more cores per chiplet
- adding their 3D V-Cache
- removing the iGPU
- leaning more on AVX-512 as software picks it up
So they can easily sit on Zen 4/Zen 4+ for the next 2-3 years and buy themselves some time until Intel becomes a threat. Then they can respond swiftly with Zen 5, or whatever they want to call the new architecture they've been working on.
Meanwhile, Apple is going to smoke every competitor with their A16 family, all the way from the silicon in their Watch to the Mini Phone, Large Phone, Tablet, Ultrabook, Laptop, and Desktop offerings. And possibly even an Apple cloud service against Intel's Xeon, AMD's EPYC, Amazon's Graviton, and the Ampere Altra lineup.
Die area isn't as much of a concern for AVX512 as most people think. It's on the order of 1% of the total die area for a 12900K. It does add about 10-15% to an FPU/SIMD block, but you can see from annotated die shots that it's still a comparatively small portion of the 12900K die.
Nonsense. They didn't cherry-pick anything. They used CB R23 ST, which is something like Intel's ST best-case scenario. And they didn't say 15%, they said >15%. Depending on the clock speed, it could mean a ~10% IPC improvement. Though that's still underwhelming after two years; people expected more like 15-20%. OTOH, 46% faster than the 12900K in Blender is quite good.
Google was in Vegas to show Android to carriers (as a clone of Blackberry) when Jobs demoed the iPhone. They immediately dropped plans to have Android be a Blackberry clone and copied the iPhone instead.
That's true. Amazon's Graviton3 (64 cores, roughly Cortex-X1 class) is launching, and there's the Ampere Altra Max (128 cores, roughly Cortex-A76 class). The bigger concern is ARMv9 and the coming Cortex-A710 family of cores, combined with TSMC's ~6 nm node.
They're serious competitors in the cloud and server market, maybe even in "supercomputers" in the future. Not just on cost, but on energy and multithreaded performance, and they're still not bad on latency/single-core performance either. The biggest hurdle for ARM seems to be features and software, but that gap is shrinking every year.
This would make ST slower than Alder Lake and likely well behind Raptor Lake.
In addition, M2 is coming and will almost certainly be based on the A16 rather than the iPhone 13's A15. There should be a very sizable increase in CPU performance from the M2 over the M1.
Of course it will be based on the A16. Apple isn't going to design and manufacture a new M series every year; imagine having to design an M Ultra every year. It'll be every two years, which means Apple will always use cores from the next A series.
M2 = A16
M3 = A18
M4 = A20
And so on...
In addition, the M1 and A14 were released within a month of each other. If we haven't seen the M2 by now, it's not going to be based on the A15.
Performance between the A14 CPU and A15 CPU, where it matters, is almost flat, only a few % gain. So for Apple's sake, they'd better base it on the A16; otherwise it will not improve by much.
What does that garbage mobile SoC even have to do with AMD's Zen 4 or Intel's RPL platforms?
Apple is BGA trashware, a use-and-throw mobile toy. Do not compare it to the powerhouses called AMD and Intel. M-series laptops have non-modular, soldered, proprietary garbage designs, while these have M.2 and PCIe expansion slots.
Second, Alder Lake trashed Apple in IPC and ST flat out. Then Nvidia destroyed it, and all that with Apple getting first dibs on TSMC N5 months before, vs. Intel 7 and Nvidia's Samsung 8N.
The M2 will be using TSMC's new node again, so it's comparing apples to oranges; on top of that, the transistor count on Apple CPUs is very high vs. these. That shows what Apple's scaling really is.
Yeah sure, I have a long history of comments here, but since I mocked that BGA dumpster design, which costs north of $4000 for high performance in the Max and Ultra variants, with fully soldered BS designs and proprietary non-standard HW, suddenly that's a problem.
First, you can go and buy it, mate; nobody is stopping you. Second, you don't need to reply to every single comment here as if we need your validation.
A decommissioned Xeon has more use than that BGA, proprietary, ecosystem-locked, overpriced silicon trash trying to compete in the ranks of AMD and Intel.
It's "overpriced" because it doesn't fit your value calculus. The world doesn't revolve around you or any particular person's tastes. Apple makes computers for a particular demographic, and that has fitted them just fine. Same with AMD and Intel.
Agreed. Apple's is a trash ecosystem. If they at least had non-soldered storage, I could recommend them to new users who want to buy a PC. As it is, since when they break they go in the bin, I won't recommend them, except to Apple fanatics.
They have non-soldered storage on their desktop platforms, just not the all-in-ones or portables.
My Mac Pro has 192GB of RAM and a 4TB RAID 0 SSD array, in addition to the 4TB storage module in a non-PCIe form factor that's swappable with a number of options from Apple or outside of Apple anytime.
I’m thinking it could be a 170 W package power tracking (PPT) limit, as Ryzen is currently at a 142 W PPT with actual power usage at about 140 W. The CPU AI acceleration is also an interesting approach, as it’s on the CPU rather than the GPU.
If nothing else, this shows a messed-up release by AMD. They aimed for a 10% clock speed increase that comes at the expense of increased TDP, while the competition has a 19% IPC uplift. Maybe Zen 3 is already a dead end for their engineers?
LOL, the trolls are active quickly on here. So the 5.5 GHz running during gameplay on all cores was not an advance? Because trust me, that was all-core, not just one or two cores peaking. The Blender demo was a fail? The power budget sticking below 200 W for the CPU whilst achieving all this is also a fail, whilst the competition can easily eat its way to over 270 W doing the same, and as we know, watts equal heat. I am reserving judgement until launch, because Intel won't stand still just as AMD have not, and we the consumers are living in the golden age right now.
I'm an Intel fan but this is a poor interpretation.
This will be the Bristol Ridge of AM5: pleasant platform intro that brings the platform into availability but really doesn't reinvent much else.
The new I/O die, cache ratios, and memory bandwidth should be able to feed more interesting CCD designs, but AMD siphoned resources into Zen 3+ for back-to-school timing.
I applaud the smart roadmap strategy, since Zen 4 is probably doomed to land post-crash and Zen 3+ probably isn't.
Bristol Ridge was anything but pleasant. It was a cut-down version of the Construction cores. Although IPC had improved a bit over Piledriver, the lack of cores and cache did not help.
This release is offering competitive parts. Bristol Ridge was anything but.
You are assuming that the max TDP for the socket is the TDP for the chips. Also, if you are concerned about dead ends, you might ask yourself why Alder Lake gets no uplift from the use of DDR5.
AMD clearly knows Intel is capped at 8C max for gaming, so they will have the 7800X to top out RPL or ADL processors. IPC uplift doesn't mean JACK; look at Rocket Lake, which had a double-digit IPC boost over Comet Lake but lost 2 cores / 4 threads. What happened? Massacred by their own 10900K.
AMD's Zen 4 packs full-fat x86 cores, all with more cache than Zen 3, on top of using TSMC N5 with 5 GHz+ boost clocks. This will hemorrhage Intel's Raptor Lake in MT workloads; AMD's SMT/MT performance is always very high. Alder Lake was barely able to edge out Zen 3 processors, which were from 2020, vs. 2022.
Also, Intel's Core is the oldest microarchitecture around in CPU designs; it has been the same thing since Nehalem first came out. Intel kept on improving it and hit big walls with Comet Lake and Raptor Lake, on both nodes and uarch. They cannot add more than 8 P-cores, so they widened it and gave it ultra-high clocks, 14nm++ style, on 10nm ESF / Intel 7. And you think AMD's Zen is already a dead end? lol.
AMD doesn't have to push X3D on this; there's no need, as MT will completely eat RPL for breakfast, and in gaming they will be on par with, if not able to beat, Intel's RPL. Intel knows it very well; that's why they have 8 extra E-cores, to offset AMD's MT lead by a few % points. So it's always like this: a major CPU uarch gets improved over time, and you don't need to pump double-digit growth when there's no need to.
All in all, it's not a major release but a very welcome change: improved Zen 3 on all fronts, from the IOD to the uarch and clocks, plus new chipsets.
This reads very emotionally invested in AMD... Rocket Lake being downgraded to 8 cores was specifically the result of the backport process. Obviously the backport process was due to 10nm failing to hit high enough clock speeds for desktop, and this was a bandaid measure.
As for "they cannot add more than 8P cores" - that's just totally false. Every cluster of 4 E cores is the same size/thermals as a P core. 8+8 was chosen instead of 10+0. For 13900K, 8+16 instead of 12+0.
As for your assumption that Zen4 will completely dominate RpL in MT, there's just no reason to believe that at this point. You think 7600X with 6 cores will dominate 13600K with 6+8 in MT? You think 7800X with 8 cores will dominate 13700K with 8+8 in MT? RpL will almost certainly beat Zen4 consumer in MT across the whole product line, with the only uncertainty being 7950X vs 13900K.
And in addition, the uarch changes from Skylake -> Alder Lake are greater than the uarch changes from Zen 3 -> Zen 4. You can call heterogeneous compute a "bandaid", but I'm more convinced that disaggregated heterogeneous design is the future of CPUs.
I do not have any AMD processor, lol. I hate Zen 3 because of its stupid IOD issues and the awful firmware, plus the IMC being worse than the one RKL incorporated later on.
8 P-cores is Intel's MAX. You cannot say it's false by adding the junk E-core nonsense to the equation. Go read Ian's coverage, and Intel's engineers made the same point (it's on InfoWorld, I guess): they cannot add more than 8 to the mainstream socket, it will blow the roof off the TDP. Period. E-cores are added because there's no other way for Intel, which is why SPR Xeon has more P-cores and LGA1700 doesn't. E-cores are just the garbage Intel used to add MT and Intel Thread Director to compete vs. AMD with their aging SKL-derived designs.
A 10+0 part doesn't exist, so you cannot talk about it, but it would shred the 8+8 garbage; ADL's P-core is a high-performance design, and no amount of garbage E-cores can make up for that. "Across the whole product line" is too far-fetched, mate. AMD's Zen 4 will bludgeon Intel; there's no way Intel's big.LITTLE junk can keep up with high-frequency, big, fat x86 cores.
The heterogeneous garbage was copied by Intel for BGA; that's where their money is in mainstream. It's because of the TDP and power budget of their aging Core uarch.
Xeon and EPYC E-core-only parts are made to compete vs. upcoming ARM parts, which have high thread density without HT/SMT. There's not a single server CPU that mixes core types, which is why ARM server SKUs also have all uniform cores.
And I cannot see why "Intel is 8P max", as if Intel has no choice in the matter. I thought the other poster explained it to you rather well, so I guess a fanboi, with or without an AMD purchase, hath been exposed...
" And I cannot see why 'Intel is 8P max " thats simple, any more then 8 p cores, and the power and thermals would be through the roof. thats why for the desktop, intel is maxed out at 8 performance cores, and makes up for the " thread parity, with the Efficiency cores
All that's needed to keep thermals in check is to lower the all-core boost, and seeing how 99.9% of all games can't even put 6 cores to good use, the 8P-max claim is remarkably silly.
If it were possible, why wouldn't Intel launch a 10+8 instead of going all the way to 8+16? There should be room in between, and it's not like Intel to go easy on diversification.
" All that's needed to keep thermals in check is to lower the all core boost and seeing how 99.9% of all games can't even put 6 cores to good use the 8P max claim is remarkably silly. " you are forgetting one thing, intel pretty much NEEDS the high clocks in order to compete with AMD.
" You OTOH seem to think you live an age where cores can't be individually clockgated. " still doesnt change the fact that intel needs its higher clocks in order to compete with amd. intel lowers its clocks, it looses performance.
> Intel's Core is the oldest microarchitecture around in CPU designs > it has been the same thing since Nehalem first came out
A few years ago, I went back and read the coverage on this site about Nehalem and Sandy Bridge. From the sound of it, Nehalem was a smaller change vs. Core 2 than Sandy Bridge was vs. Nehalem.
Now I'm laughing at all the AMD fanboys who thought 5 nm would make Zen 4 competitive with the M1 in terms of perf/watt. Not even close. AM5 goes up to 170 W and has to boost to 5.5 GHz just to get a 15% increase in ST.
I’m laughing at Intel even more: bloated core architecture, 7nm++++, E-cores that aren’t that efficient (only space-efficient), and a single P-core that uses more power than an entire M1 Pro CPU. That means they are even further away from Apple in terms of performance per watt.
I beg your pardon? My 12900K gets a 15k+ CB R23 score @ 35 watts. That handily beats the M1 that was benched on this very website in terms of performance per watt.
Too bad that non-stock configs aren’t relevant for the broad mass of people, and hence not really relevant at all. No, Intel is nowhere near AMD or Apple in terms of efficiency; on top of that, AMD can be optimized as well. Apple probably can’t, but it doesn’t need to be.
I don’t think so; a lot of people buy it, not just enthusiasts, and most people don’t OC or touch settings at all. A lot of them are also delivered via prebuilt systems, and those users are even less likely to touch settings.
But you misunderstand: the M1 is as relevant to the PC world as a raindrop in the ocean. The biggest advantage the M1 has is its software; it's designed from the ground up as a walled system for Apple, along with all the software limitations that come with that requirement. If Macs are so great, why are they not the dominant force in computing?
No -- that's not at all necessary and probably counterproductive. By standardizing hardware (Apple silicon) and software (Metal, Swift) across iPhone, iPad, and Mac, Apple has the technological foundation for games, as well as economies of scale.
What they need is to jumpstart development. They can do that by buying a studio, building one internally, and/or subsidizing the porting of lots of AAA games. The main issue is just laying down the cash to, one way or another, get games developed for and/or ported to all of Apple's platforms.
Apple has tried to take an "if we build it, they will come" approach, and that just hasn't worked. It also didn't work for video content, which is why we now have Apple TV+. Apple needs to throw a similar amount of $$ at video games as they've thrown at video content.
Apple does NOT want to release a console, and it doesn't want PC gaming. The biggest GAMING COMPANY in the world is actually Apple. That's a fact. Their gaming division makes more money than Nintendo, Sony, Xbox, and Epic combined, and it dwarfs Valve and Google. Their gaming division is very successful, and they achieve this by taking a 30% cut of every in-app purchase on the iPhone (the iPod touch, iPad, and iMac are nice too).
If Apple wanted to, they could definitely dominate the gaming industry directly. But they choose to approach it indirectly. Even though the iPhone accounts for only 20% of phones on the market, they command around 80% of the profits. There is every incentive for them to keep the status quo.
Remember, they have Apple Silicon, iOS, Metal, Swift, and a closed ecosystem. They would absolutely destroy "consoles" the likes of the PS Vita, Xbox Series X, and Google Stadia (pocketable, home, cloud). But that would be foolish. I am not a fan of them, but even I can admit this. If they wanted to, they could easily pay to license iconic IP or outright buy a game development studio (anywhere from Pokemon, GTA, NFS, FIFA, Battlefield, Elder Scrolls, Fallout, Final Fantasy, Metal Gear, Witcher, etc.). They have the cash.
It doesn't matter if the M1 is more or less efficient or has better or worse performance or anything else. It's like comparing the motor in a sports car with the motor in a speed boat. I mean, I guess it's good for Mac users that the M1 isn't a turd, as it's their only option.
Well it does matter. You see, it shows to Intel and AMD that bloating up the architecture will not be that effective. It’s also a more significant impact on mobile. People are being drawn away from x86 laptops which have bad battery life, and are being led to Apple M1 laptops which have far better battery life. This is why Intel or AMD have never tried x86 on phones. Apple is simply scaling up a super efficient architecture, and it works. Despite the fact that it’s within Apple’s own walled garden, it is impacting things outside too.
Which failed. Intel heavily subsidized Atom for phones, and as Intel was bleeding money, a lot of it, they then gave up on the mobile market. They couldn't compete with ARM on perf/watt and price.
It's more like Intel's inferior production lines couldn't compete with TSMC on a perf/watt basis, and half-assing a medium-size core down in size didn't make it better. It was a lame effort, not really putting their best in.
Besides, much of the perf/watt advantage of SoCs is all the specialized function blocks outside the cores; to reach the efficiency of mobile SoCs you need to add all the fixed-function encode/decode, image DSPs, etc. Without those, mobile SoCs would lose half their efficiency.
Not sure as to the point of your comment? ARM will be more efficient than x86 at lower power envelopes (<30 W or thereabouts), since there's a minimum cost to x86's architectural overhead. Above that power envelope, AMD is quite competitive with these scores, as is Intel with their energy-efficient cores (I don't know if they're ahead, since AMD hasn't actually released these yet; these are engineering samples).
Pretty much everything I do on a PC requires x86, since I code, and most games I play are x86, so the M1 is pretty useless to me. Anything I'd do in that power envelope could technically be done on my phone or NUC at its tiny power envelope, where most of the work is handled by hardware AV1 decode or is just powering the screen for a book. The M1 isn't that interesting for most of the market, its marketing is a bit overblown, and repairability is near non-existent.
That! A well-thought-out and impartial comment. Let me add some more empirical data for those who are fanboys (on either side).
If you use your MacBook Pro in an average way (a little productivity/browsing/remote terminal/etc.), the battery life is phenomenal. Once you do a Zoom or Teams meeting, battery life drops back to "Intel" levels.
I second
1. ARM is super efficient at lower power envelopes.
2. A walled garden allows for optimizations that are not otherwise possible when you have to address so many more use cases.
3. Engineers on both ISA sides are equally capable, but they are focusing on different markets.
See it in the server space. ARM derivatives got some 10-15% share among the hyperscalers because they perfectly address the low-utilization cases. And then the growth "stalled".
Again, I am not predicting the future; things do change (on both sides), it is a constantly evolving field, but the low-hanging fruit has been harvested already.
Too bad that Zen 4 competes across all markets, including servers, and Apple doesn’t; meaning Apple already lost the biggest fight. Your comment is childish and ill-informed.
Another post on Apple's BGA garbage vs. x86 desktop parts. Why is the M series even in the picture? That piece of junk got slaughtered by Alder Lake, didn't it? Also, Apple's garbage claim about the 3090, ROFL. Wait for the RTX 4090 and RDNA 3 with MCM; it's going to be shredded, yeah, the M2 too.
As for the TDP increase: yeah, higher performance and more work done in less time, with new CPUs drinking more power, is a bad thing, right? Just use the M series' low-power trash processors, which cannot run any OS other than Apple's garbage, with no Vulkan, on top of all-BGA hardware.
Zen 4 vs. M2 will be a bloodbath, because these are not some big.LITTLE junk: full-fat x86 cores with a brand-new DDR5 IMC, and from here on DDR5 will only improve. Yeah, it's not as wide as the M series' unified memory, but that thing costs north of $3000, while you can max out an x86 PC to the gills with the same cash. This thing will also beat Intel's MT performance, because it packs proper cores and not those small toy cores for lapjokes.
The subchannels on a DDR5 DIMM are half width (32 bit for non-ECC), so it's still a bit disingenuous to call it quad channel, given the traditional usage of the term. It's still one pair of memory traces being wired to the IMC and DIMM slots.
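A quick sketch of why the subchannel split doesn't change the headline bandwidth (per-DIMM peak figures; real throughput depends on timings and controller efficiency):

```c
#include <stdio.h>

int main(void) {
    /* DDR4: one 64-bit channel per DIMM.
       DDR5: two independent 32-bit subchannels per DIMM --
       same total width, just finer-grained access. */
    double mts      = 6000.0;  /* DDR5-6000: mega-transfers per second */
    int    subch    = 2;       /* subchannels per DIMM                 */
    int    sub_bits = 32;      /* width of each subchannel, in bits    */

    double gbs = mts * subch * sub_bits / 8.0 / 1000.0;
    printf("DDR5-6000 DIMM peak: %.0f GB/s (identical to one 64-bit channel)\n", gbs);
    return 0;
}
```

The win is concurrency (two independent command streams per DIMM), not raw width, which is why calling a two-DIMM board "quad channel" overstates things.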
Something is rotten in the state of Denmark. 10% more clock, 100% more L2 cache, and ~30% more memory bandwidth would result in 15% more performance. Has AMD not made any improvement to the Zen 3 core in two years??? I think they just didn't want to show everything that is coming.
They managed to shrink it and reduce the power budget while adding PCIe 5.0 and RDNA 2. This may not wow people (it's a node shrink, not a new architecture, get over it), but what they are doing is safe and careful when moving over to the new process, so they don't fux things up.
Also, Intel skipped COMPUTEX, so AMD had the floor all to themselves. What a great opportunity to just crap all over Intel and sell their new product. I don't think AMD is sandbagging. I think they are being deliberately cagey about what Zen 4 can really do, and people here are misinterpreting that kind of careful demeanor. Ryzen 7000 is on a new chipset, new socket, with an iGPU added, PCIe 5.0, DDR5......
WTF are you people talking about? Ryzen 7000 is going to be the most popular Zen EVER.
I completely agree. I’m an AMD fan (or certainly at least not an Intel fanboy), but this seems very much Zen 3+, and very akin to the Zen+ launch with the 2000 series. I suspect there are basically zero core improvements here, just maybe some changes oriented around the higher power envelope. Whether that means they’re poised for a big jump in Zen 5, or whether it means their roadmap has stagnated due to leadership changes, I don’t know. The fact that their release cadence is slowing doesn’t make me especially optimistic either.
Someone below says they’re sandbagging and not announcing a 32core chip this early, and that at least seems a credible reason for optimism for heavily threaded loads.
They invested a lot of resources into making DDR5, PCIe 5.0, and other I/O happen, is my take. It’s a platform-changing architecture and hence the lower IPC gain; I bet Zen 5 will bring more IPC and could be huge. But 15% is a lot, IPC or not doesn’t matter if you get 15% single-core performance and over 30% multi-core. Look at performance, not IPC.
Semiconductor engineering wisdom 101 is to never combine a new production process and a new architecture; there have been exceptions in the past, but it can go either way, and there are an unprecedented number of new technologies in this release. TLDR: It would be stupid to add a new architecture into the mix while introducing everything else.
> It would be stupid to add a new architecture into the mix while introducing everything else.
Zen 2 was a pretty major change & a new process node (plus, it added PCIe 4.0 and chiplets)! Intel's new Golden Cove & Gracemont debuted together on the new "Intel 7" node, for Alder Lake.
Like it or not, AMD can't afford to change only process node or micro-architecture, along the lines of Intel's old tick-tock model. At least, not when they've been sitting on Zen 3 for 2 years.
> But 15% is a lot, IPC or not doesn’t matter if you get 15%
I agree that it doesn't *really* matter where it comes from, for the end user. I'm most interested in how it compares on perf/W.
However, for judging AMD's execution, microarchitecture efficiency and sophistication are indeed relevant. They suggest how competitive it'll be in perf/W, and that will be especially important for their competitiveness in laptops and servers.
Xilinx was a much better target for AMD than ARM was for Nvidia. NV corporate culture is uncouth at best. ARM + NV would never have worked.
The future of mobile is AMD. I predict that future Ryzens will have both x86 CISC cores and ARM/Xilinx RISC cores, kind of like big.LITTLE but way more advanced. Intel has been playing the same game for years. Sooo boring to talk about them. 128-bit RISC cores sound interesting to anyone?
Interesting statement. I'm genuinely curious why you think that. I'm neither a current nor former employee.
> future Ryzens will have both x86 CISC cores and ARM/Xilinx RISC cores
I don't see a mixing of x86 + ARM making much sense. Sometimes, people talk about a core with both front-ends, but it's hard to have a core that's efficient at both ISAs, because certain ISA details permeate far deeper than the front end.
Xilinx is a different matter. I can see an embedded FPGA being useful for certain datacenter tasks (mostly some variation on software-defined networking), but I don't see consumer use cases for it.
> 128-bit RISC cores sound interesting to anyone?
No, why? 128-bit integer arithmetic is niche and I think probably addressed in various SIMD extensions, to the extent it's useful (e.g. crypto). And 128-bit addressing seems a bit nuts.
It might be there is Xilinx tech in the new IO-chip; it will be interesting to see what they have added in there. Even at 6 nm it is much bigger than the Zen 4 cores, which is interesting even considering the added RDNA cores.
> It might be there is Xilinx tech in the new IO-chip
Besides possible security mitigations, I don't really know what for. And there aren't too many security mitigations you can do in the I/O die. Motherboards are going to tie down whatever I/O capabilities the CPU offers, so there's not much benefit in making it reconfigurable.
Apple’s Neural Engine is an FPGA solution borrowed from the Afterburner project. It is used extensively in macOS. The same will happen to Zen 5 and beyond: with mature software in ROCm via HIP, processing compute calculations most won’t understand, happening under the hood, accelerating system-wide performance.
>NV can be a-holes, YMMV
I'll say this about NV instead: they are the best at what they do. They can be annoying to work with because they act like they deserve preferential treatment. I follow the gaming scene closely, and I've read reports about how they treat channel partners, competitors, and vendors. Sometimes they are quite bad: locking people into proprietary technology, treating channel partners roughly, sometimes trying to force partners into marketing programs that punish them for working with AMD. They threatened the website Hardware Unboxed because they felt its reviews weren't positive enough. Other tech sites have noted problematic behavior from NV reps.
>Hybrid cores
There is a tiny ARM core in Ryzen Pro that handles some security functions. In the future this might be expanded to handle certain instructions on a RISC core, like AVX-512 or something like that, for efficiency reasons. AVX-512 causes a lot of heat on x86; maybe on ARM it might not be so challenging to integrate.
>big bits
well at the very least you can run 2 64-bit ops in parallel, which might be useful when transcoding video or something.
Okay, I get your point about their external behavior. That's not news to me. What surprised me was the idea that their corporate culture was rotten. I don't see how they could continue to execute on such a high technical level, if that were true. Maybe I'm just naive.
> AVX-512 causes a lot of heat on x86
It causes a lot of heat because they introduced 512-bit wide vector arithmetic @ 14 nm with CPUs that clocked pretty high. The energy footprint would be similar, if you did that with any other ISA.
> well at the very least you can run 2 64-bit ops in parallel
You can already run 8 64-bit ops in parallel, using AVX-512. I really don't see a case for CPUs widening their addressing beyond 64-bit, and that means their general-purpose registers are going to stay 64-bit.
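For illustration, here's what "8 64-bit ops in parallel" looks like with AVX-512 intrinsics (a minimal sketch; needs an AVX-512F-capable CPU and compilation with something like -mavx512f):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    long long a[8] = { 1,  2,  3,  4,  5,  6,  7,  8};
    long long b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    long long c[8];

    __m512i va = _mm512_loadu_si512(a);    /* load 8 x 64-bit integers   */
    __m512i vb = _mm512_loadu_si512(b);
    __m512i vc = _mm512_add_epi64(va, vb); /* 8 additions, 1 instruction */
    _mm512_storeu_si512(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%lld ", c[i]);             /* 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}
```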
When Intel had the IPC lead, it was relevant. When AMD took the IPC crown, suddenly it wasn't about IPC anymore. Then 12th-gen Core arrived, Intel started crowing about IPC again, and meanwhile AMD is putting their resources into engineering instead of marketing like Intel. Single-threaded CPU performance might seem important, but actually the main use case where single-threaded matters is gaming, where the CPU is not nearly as important as the GPU anyway.
> 10% more clock, 100% more L2 cache, and ~30% more memory bandwidth > would result in 15% more performance.
Doubling cache typically adds just a couple percent. It's a reliable, but expensive way to pad the performance margins.
The additional memory bandwidth is mainly useful for highly-threaded workloads, and that shouldn't be where their 15% figure is coming from. 15% should be the median or average performance increase, across a mix of lightly- & heavily- threaded workloads.
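One empirical rule of thumb behind "doubling cache adds a couple percent" (an approximation, not a law; the exponent varies a lot by workload) is that miss rate falls as a power of capacity:

```latex
m(C) \propto C^{-\alpha}, \quad \alpha \approx 0.3\text{--}0.5
\quad\Rightarrow\quad
\frac{m(2C)}{m(C)} = 2^{-\alpha} \approx 0.70\text{--}0.81
```

So doubling L2 trims misses by maybe 20-30%, and since only a fraction of runtime is miss-bound, the end-to-end gain usually lands in the low single digits.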
The question seems to be about Zen2 and Zen3. The article repeatedly calls it 14 nm, but the chart says 12 nm. It's hard to tell if this is a true inconsistency, or where the error lies.
The next iteration of DDR5 platforms should add support for the DDR5-5600 JEDEC speed grade. At the least, that should be the JEDEC speed both AMD and Intel target with their upcoming chips, in my opinion. If all goes well, maybe we'll even get tested support for the more interesting capacities and for the 6400 grade.
"In particular, X670 does not require PCIe 5.0 support for the PCIe x16 slots – while many boards will offer it, an X670 board would also be allowed to implement PCIe 4.0 instead. Do note, however, the PCIe 5.0 is still required for at least one M.2 slot for NVMe SSDs."
I'm not sure where you are getting your PCIe 5.0 "required" info from, but I think you have misinterpreted what you've read. The way it works now on X570 is that PCIe 4.0 and PCIe 3.0 devices are supported universally, including M.2. The slots autoconfigure themselves to run at the PCIe level of the device. My first M.2 was a PCIe 3.0 960 Evo, running in the first slot; my second M.2 was a 980 Pro, a PCIe 4.0 drive, which now runs in the first M.2 slot alongside my PCIe 3.0 drive. The 980 runs at PCIe 4.0, my 960 Evo runs at PCIe 3.0 in the second slot, but I could just as well be running two PCIe 4.0 M.2s.
I think you misinterpreted the slide here... the top chipset will support PCIe 5.0 universally, exactly like the X570 supports PCIe 4.0. But that doesn't mean PCIe 5.0 devices are in any way required, imo. The second chipset supports up to PCIe 5.0 for the GPU and storage, but I gather the rest of the bus is up to PCIe 4.0 only. The second chipset does PCIe 5.0 like Intel does it, supporting it only partially, whereas the top chipset will support PCIe 5.0 universally, just like the X570 supports up to PCIe 4.0 across all buses.
I mean, there are no "PCIe 5.0" GPUs for sale atm, so I kind of doubt that the user will have to install PCIe 5.0 devices or the motherboards won't function... That would be a huge step backwards from the current X570 standards. That's what that slide says to me.
When it comes to PCIe slots, motherboard vendors are only required to make their X670E boards PCIe 5.0 compliant. For X670 it's optional (so they can just do PCIe 4.0 instead), and B650 will be PCIe 4.0 out of the gate.
I have a suspicion. The gains in performance/power from TSMC N7 to N5 were not as great as they were hyped up to be (for AMD). Going by this review, Zen 4 will not even match Raptor Lake in performance, which is fabricated on that much-ridiculed Intel 7, or 10nm ESF, whatever you want to call it.
Intel 7 is good. It was just very late and seems to be expensive. This is just another data point that it's good.
Intel messed up one node and tried to mend it on the cheap while executing stock buybacks, and everyone decided they forgot how to engineer transistors, while suddenly TSMC became, by default I guess, the gods of transistors. It wasn't all that long ago that TSMC messed up at 20 nm (2014). Leading-edge semiconductor manufacturing is hard, and even the best mess up. Others, such as GlobalFoundries, dropped out of the game completely.
By all accounts TSMC messed up their 10 nm; it's just that 7 nm became such a huge success, with high initial yields, that no one looked back, and pairing that with Intel's floundering just made it even more of a hype.
Your take is a bit dumb. The power savings were invested into higher clocks, of course; do you think 5.5 GHz is free? Intel needed 250 W to make it happen. And we will see how fast Raptor Lake will be; you’re in for a surprise if you think Raptor Lake is great. It’s more or less just Skylake with 8 cores instead of 6, not that great, a typical second-platform architecture.
Not sure what you are trying to say. Are you under the impression that Alder Lake needs 5.5GHz to beat Zen 3 in single threading?
Skylake? You're a delusional fanatic. So if Intel's architecture is Skylake and their process is shit, then why is Golden Cove faster than Zen 3? Intel doesn't need 250 W to be faster in ST. Alder Lake is faster in games at lower power usage. Missed that somehow? Or just ignored it?
It’s really funny if someone who extremely toxically sounds like a fanboy calls me “fanatic” and I’m not gonna debate things with such a weird person. Bye
"It’s really funny if someone who extremely toxically sounds like a fanboy calls me “fanatic”"
That's what's known as word salad.
"and I’m not gonna debate things with such a weird person."
And that's called not having a rational response to my points. If you truly do get so upset by being called a delusional fanatic I suggest you don't start your first response by telling the other person his "take is a bit dumb". Battleship mouth and rowboat rump?
Yeah, likening Golden Cove to Skylake is enough to render that post not worth responding to. The point about 8 cores instead of 6 is even weirder. It's like they read something about Coffee Lake-R and are confusing it with Raptor Lake.
Only the E-cores are Skylake-class, not the P-cores; the latter are very powerful cores and do not exhibit SKL's flaws with cache errors, because they maxed out at 8 and the ring is locked to a low stop count even though they run a dual ring due to the E-cores. Raptor Lake will have a lot of gaming potency due to the higher L2 cache added on the ring bus. And the E-cores added on top will let them approach the Zen 4 arena slightly, but not beat it.
Also, they will get DLVR, which will optimize the voltage on Raptor Lake so it doesn't guzzle a lot and end up in flames like Rocket Lake. Still, it's not worth it, because of the big.LITTLE BS. It all depends on how fast the Zen 4 7000 series is. Plus whether Z790 motherboards fix the ILM this time, as Z690's is a disaster and must be avoided, since it bends the CPUs and even the mobos.
If you're wondering what AMD used that 80% increase in logic density on TSMC 5 nm for, here's a plausible answer. We know AMD is able to push its next-gen server parts (Genoa) to 96 and 128 cores; they may very well also double the core count on desktop. I believe today's announcement is AMD teasing Intel and us. They never said the chip they're showing is the highest-end part; it may be the successor to the 5800X. The real surprise may come later in the form of a 32-core mainstream monster.
That really stretches the definition of a desktop, then. If a server board is placed in a tower (sometimes called a "pedestal") case, does that *also* make it a desktop?
I think there's a pretty clear delineation of the workstation market, if you look at price and capabilities. In the past, I'd have added power to that list, but that particular line is now getting very blurry.
Who wants a 32-core desktop part? Only a minority of users with workloads for which it's useful. AMD knows they need to keep ST performance up to compete in the market. But their core is clock-for-clock slower than Intel's. So they give up a little power efficiency to push up clock rates, and they expand the cache so the processor performs better in certain workloads, including games. The cache density improvement from TSMC's 7 nm to 5 nm is about 1.25x, not 1.8x. The 5 nm process is likely more expensive than the 7 nm process at this point, so perhaps the Zen 4 desktop parts will have a smaller compute-die area, allowing AMD to compete on price and still make profits now that they no longer have a performance advantage over Intel.
32 cores is what I need :D It may sound overkill, but think about it: when AMD introduced 8- and 16-core parts, the majority of people still depended on 4-core Intel chips. But times change; Raptor Lake may have 8 + 16 small cores. Your argument is sound, though; the economics may play a significant part in preventing that from happening.
Out of curiosity, what do you "need" 32 cores for?
I can't think of any consumer software which needs 32, or even 8+, cores of current MT processing power. Sure, you can put more cores to good use when transcoding, but if it's imperative to you that such a task takes 10 minutes instead of 20, you're in professional use-case territory.
Well, the extra cores are there if you need them, in case you want to try your hand at making YouTube vids or something and want good performance editing video. Also, it doesn't make any sense to criticize AMD for not optimizing Ryzen on 5 nm when it's the first time they've used it. After they get Zen 4 to market, they will introduce Zen 4+, and there you will see the big IPC gains.
Of course it’s not the highest-end part; it doesn’t have 3D V-Cache. But 16 cores will be the high end: the single dies top out at 8 cores, and there is no room for more than 2 chiplets. Raptor Lake could easily be countered by another “5800X3D”, named 7800X3D or higher; it depends on whether it’s needed.
I remember when they showed off Zen 2, it had room for one more chiplet. Not this time around, but AMD may not necessarily have only one package design to begin with.
The whole package just seems too small for a 3rd chiplet, aside from the news I’ve read that it’s 2 chiplets max. I don’t think AMD cares for more cores on the normal desktop platform; anyone who needs more can buy Threadripper.
On an important die shrink, you probably want to make sure that your current architecture at the very least isn't slower on 5 nm vs. 7 nm. I mean, how embarrassing would that be? And are we forgetting that we can just get more cores on the same size die as before with die shrinks?
If the chiplet gets expanded to 12 or 16 CPU cores and however many RDNA 3 cores, then AMD will be able to offer the same number of cores as Ryzen 9 but with only 1 chiplet instead of two. Why not do this?
If people want 24 or 32 cores, they can buy the 2-CPU-chiplet Ryzen 6900X or 6950X or whatever they are going to name them. If you don't need that many cores, buy the 6600/6700/6800 with just one CPU chiplet of 12 or 16 cores.
You may think you will never need that many cores, but high-res VR is coming. I'm talking something like a Quest 2 but with 8K for each eye, power-efficient enough that it doesn't scorch your scalp and sips power like an old lady. If you've ever played VR games before, you know what I'm talking about. Remember when AMD bought a small company doing high-bandwidth short-range wireless networking? That was part of a tech portfolio that will become an AMD-designed VR headset made custom for Sony or Microsoft.
I'm optimistic that they are being very conservative with their performance claims here; a 5-10% IPC increase two years after Zen 3 was released (with most of the uplift coming from the 5 nm switch and the associated clock speeds) would be disappointing. I was optimistic that they could get a 15-20% IPC-level improvement, but that seems unlikely at this point. The real bummer here is that it will result in very small increases in laptop performance, since those TDPs don't have the wiggle room to grow like desktop does.
It didn’t happen because they invested time into the new platform and the new IOD design, is my take. Users should look at total performance gain and not IPC, though; it’s 15% ST and over 30% MT, nothing to scoff at.
Are we so spoiled that a 5-10% IPC increase is disappointing? And why are people ignoring all the stuff they are adding to the CPU chiplets to focus solely on IPC? IPC is only important when Intel has the IPC crown. When AMD has the IPC lead, Intel says it's not about IPC, it's about something else, preferably something in which AMD is inferior to Intel.
"What this means for the future of AMD's monolithic desktop APUs is uncertain, but at a minimum, it means that all (or virtually all) of AMD's CPUs will be suitable for use in systems without discrete graphics, which although not a huge deal for consumer systems, is very much a big deal for corporate/commercial systems."
Still worth a lot in consumer systems; it helps resale quite a bit. It also means that later on, if I e.g. reuse the same system as a NAS or something, I can run it without a dedicated graphics card. It can also help diagnose whether an issue is with the graphics card.
We consumers forget that ours is a side market, like grease is to an oil refiner. The main game is the premium datacenter/workstation/... tier. Our market is nice, but it is a way of unloading "harvested", lower-binned product.
This is fundamental to the whole AMD architecture and its incredible cost structure: the same lowly mass-produced core unit is teamed in progressively greater multiples from bottom to top.
Whatever Zen 4 may be, it is that because it is what the big guys wanted.
They continue to have the odd problem of seeming overpowered: 6 cores is about as low as they go these days, and many users could make do with less ATM.
Previously, before the Ryzen era, Intel used lower bins cut down from the Xeon HCC dies; then, once AMD started their HEDT-level core-count generations, Intel had to improve a lot, adding STIM (as Sandy Bridge had), moving their entire Xeon lineup to the mesh interconnect, and reserving even more high-binned cores for mainstream. That held true until Comet Lake: those CML bins needed to hit 5.2 GHz on all cores, which requires a high bin ratio. Granted, it's nothing vs. actual Xeon money, but still, that's what it is.
With AMD, your point is even stronger, because AMD heavily relies on EPYC binning, and their chiplet supply cannot simply prioritize mainstream over that. That held until the X3D processor, the 5800X3D; it was an experiment they didn't want to commit to fully by refreshing the whole stack, as it would eat into their Threadripper Pro (dead now, I think, as the sTRX-series mobos are basically abandoned; X299 was even better, lol, it had good PCIe and had something going vs. this orphaned junk) and their Milan EPYC cut. So they just dropped a simple gaming-tagged product for the masses, with restricted voltage controls and all; it's castrated silicon, but for the masses it might work.
But if you look at this product, Zen 4 is hitting 5.5 GHz on many cores of that beast 7950X (assuming that's the SKU, since Lisa Su said it's a 16C part, and we do not know whether a 24C part even exists for mainstream; I doubt it). That clock rate is exactly like Intel's Comet Lake era: you need ultimate binning to get there, and normal crap won't hit that high a speed, unless TSMC N5 has a super-high yield rate, which I doubt.
Also, it's a great time: we've had a solid PC ecosystem for 30 years, and all the games and programs from that era, and even from the 4th-6th gen console era before it, can be preserved. And we can preserve the WW2 content too; this is a boon to mankind. Even a PC from the past decade is superb today and can do what it does because it's a PC, not a toy garbage social-media junk device a la smartphone.
So I welcome with my open hands that we are getting a superb Socketed Ecosystem and upgradeable modular ATX standard still to date plus more over a ton of HW options to choose from rather than Apple type castrated crippled anti consumer soldered garbage hardware and locked down software. Oh you can even buy a decommissioned XEON and run ProxMox, Unraid, FreeNAS, Arch, Mint, VSphere ESXi or tons of others and even Windows 7 SP1 with Simplix Pack for latest updates without telemetry, Windows 10 LTSC2019 or Windows 10 LTSC 21H2 both de-bloated with DISM based tools.
Only with open-source hardware, software, and networks — all of which are constantly vetted by rigorous trustworthy independent researchers — can there be the possibility of privacy.
What we have is an ‘all telemetry all the time’ situation unless you’re in government, in which case the spying is there but more known/controlled.
Honestly, if you want to speak about 100% facts: once you are on the grid you cannot simply evade it. Only super-talented folks can even try, and even then it's impossible, thanks to the CIA and Mossad.
Windows 7 with the Simplix pack and the latest updates is better than Windows 10 Home / Pro / Enterprise, even LTSC, because there's a lot left in 10 which you need to remove manually; 7 only got telemetry added through cumulative updates after Windows 10 debuted. So my point still stands. Also, with enough knowledge of DISM tools you can remove the bloatware and telemetry from Windows 10 LTSC, both 21H2 and 1809. It's possible, but it may break a few things: for example, if you completely axe Xbox, it will break the OS for the Game Pass GaaS cancer and for the latest games that use Xbox services, like Flight Sim 2020 or Forza 4 and up.
Especially if you want to talk Linux, I've heard a lot about systemd too. But yes, I agree that Microsoft's OS is not 100% private.
But let's go a step further: did you see that Pluton processor? It's literally Mossad/CIA-level silicon, in the same vein as that NSO Group state-sponsored spyware tool from the recent mobile-spyware news, which even bypassed the so-called "Apple is built with privacy in mind" platform. And AMD's BGA Zen 3+ has Pluton on die. AMD silicon also has the PSP, an ARM unit that runs its own OS, like Intel's ME and the Secure Enclave processor in Apple's A series.
So yeah, there you go: you cannot escape no matter what. Which is where conscious purchasing and knowledge of tech are needed, as I mentioned above.
Threadripper isn't dying, and a lot of your points don't make sense. You're talking about binning when dealing with chiplets? These dies are tiny precisely because you get better yields that way. Threadripper is more than a bigger version of Ryzen; workstations have different needs: multiple Instinct cards, for example, tons of RAM, ECC, multi-display on an iGPU, and lots of NVMe drives, which means lots of PCIe 5 lanes. Threadripper is not simply a relabeled low-binned EPYC.
Binning is the key point here; the 5800X3D is also a reject EPYC die. Binning is central to silicon manufacturing. You think AMD runs something different and makes special chiplets for mainstream Ryzen? lol.
Threadripper is already dead: they axed the socket after a single generation and left it hanging, and apart from Threadripper Pro, the entire TR line is no longer being made. AMD killed HEDT entirely because there's less money in it than in EPYC, especially once Intel's own HEDT died with X299 (the better platform, at that), and because AMD's RAID is garbage (Level1Techs, go watch his video) and the platform was insanely overpriced.
I forgot to add: high-leakage chips go into desktop K SKUs because they can operate at much higher voltage (more clocks), while low-leakage dies often go into laptops (which are garbage silicon anyway, since they can't really be binned high, running at high voltages despite their binning). Intel used to do socketed Mobile Extreme parts, ending with Haswell's i7-4940MX rPGA CPU; they made one more special one, the i7-4980HQ with L4 eDRAM, built specially for Apple. The latest mobile BGA HQ parts and the like are not great bins; some could be, but most are the worst.
Binning is a core aspect of lithography. And your point about tiny dies is funny: the 5900X literally carries a reject die with 4 cores disabled, else it would have been a 5950X, and the 10850K is the same silicon as the 10900K, but Intel didn't label it the same. Why? Because it won't hit more than 5.2 GHz no matter what you do. That's what binning does. And of course the 5800X3D operates at 1.3 V versus the 5800X's 1.4 V; it runs at lower voltage because it's a higher-quality bin. If AMD used an ordinary die, it wouldn't work, as the heat would be too much. It's proof that X3D would cut into Milan profits, especially when no Threadrippers with X3D exist.
What does it mean for the chipset to have Wifi 6E support? Surely you can just plug any Wifi 6E NIC into any PCIe slot the NIC fits in and it will work?
So do these 15% theoretical gains in single-threaded performance mean it will equal Intel's 12th-gen Core architecture? I hope the price will be better than Intel's; otherwise I don't think it will have much success.
https://twitter.com/PaulyAlcorn/status/15287574538... AMD has confirmed that the 170W figure for AM5 is PPT, not TDP. That means this is peak power for the socket, so, *assuming* that AMD sticks with the standard PPT = 1.35X TDP, the maximum TDP for AM5 should be 125W.
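Quick sanity check on that math, as a sketch; the 1.35 ratio is the AM4 convention, and it is only an assumption that it carries over to AM5:

# PPT-to-TDP napkin math; the 1.35 ratio is assumed to carry over from AM4
ppt_watts = 170
tdp_watts = ppt_watts / 1.35
print(round(tdp_watts))   # 126 -> consistent with a 125 W TDP tier

# Cross-check with the known AM4 pairing: 105 W TDP -> 142 W PPT
print(105 * 1.35)         # 141.75, i.e. the familiar 142 W PPT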
What will be the next bottleneck on upcoming 700-series (Intel) or X670 (AMD) mainboards: memory bandwidth (latency), storage bandwidth, heat removal/cooling, power envelopes for peak demand, or cost? (PCIe 5.x will be a Q3 2022 reality for the consumer market, https://www.anandtech.com/comments/17203/pcie-60-s... and hardware seems to be outpacing software iterations at the moment, including compatibility between x64/arm64/RISC-V systems?)
Meaning, are there statistics on external and/or internal interrupt latencies (e.g. those related to CPU, discrete/integrated GPU (even TPU/NPU?), or memory-controller clocks)?
It's surprising, because if it isn't a substantial improvement at the architectural level, this is the first time they've iterated the number with scarcely any changes. It leaves an uneasy feeling when marketing outstrips engineering.
I like what I saw: a proper x86 successor to Zen 3 with a good chunk of the most important updates. The IOD on TSMC 6N means the horrendous USB and other I/O-related stability issues should be fixed with this. And a new DDR5 IMC, not the old memory standard, here even running at 6000 MT/s; a much-welcome change.
They boosted the clocks to 5.5 GHz, even; that's a great achievement for AMD. The thing is, the X670E segmentation is really unfortunate, as I fear HEDT-level costs will come with it. And AMD's overclocking support on these boards is questionable. I hope their Zen 4 CPUs get a full TDP unlock and boost to the maximum, but I fear TSMC 5N won't let all 16 of those cores boost that high. Intel knows 8 P-cores is the max, too; high clocks and high core counts together are no longer possible on these Core and Zen CPU designs.
The 15% ST growth is the hottest topic right now among the tech people who follow this. Sure, the IPC looks lower, because they are just barely beating Alder Lake in CB R23 ST scores (1800s vs 2000s), but think logically: AMD knows Intel is maxed out at 8 P-cores, with high clocks and more cache for gaming, and E-cores are how Intel tries to fight AMD. So their approach is simple: make the 7800X boost to high clocks and be the gaming SKU, while the 7950X is the top MT champion, beating the 13900K. Eight more E-cores isn't going to cut down Ryzen 7000's MT performance; you can clearly see how the 12900K is barely able to beat the 5950X from 2020.
Also, AMD doesn't need to field a brand-new design again right now, because they simply don't need to. Intel is on LGA1700 for RPL and *maybe* Meteor Lake; AMD can bring a drastically improved new design later, and in the meantime let the DDR5 and PCIe 5.0 ecosystems mature a bit and optimize the R&D costs. Same for Intel; they are riding the same bandwagon, but the shame is the E-core filler. Well, that's how their Core uArch is: it cannot scale past 8 cores because it's too hot, so they improve IPC and chase all sorts of power improvements on top. DLVR is coming to RPL for better voltage regulation and clock-speed balance.
All in all, any good x86 CPU is welcome, even Intel's big.LITTLE junk; because ARM is pure use-and-throw garbage running on garbage platforms and pathetic OSes like iOS and Android (which, with scoped storage and the post-Oreo versions, mimics Apple very hard).
If it's just a "tick," why is it so much faster and more efficient than Zen 3? No, AMD doesn't copy Intel's terrible strategy; they don't want to lose.
The clock and L2 increases could explain most of the performance uplift, suggesting the architectural changes to the cores are minimal. Or maybe AMD is sandbagging Zen 4 to keep Intel asleep.
What was so bad about the tick/tock strategy? From a risk mitigation perspective it makes a lot of sense and risk mitigation is a great engineering practice. They only stopped talking about it because they couldn't get a new process to work so the clock stopped.
Looks like late 2022 will be upgrade time for me. With my pre-existing servers, I'll need the time to upgrade my office power. Fortunately I tore out my pool, so I have a spare 100 A bus on the wall outside the office; now for an extra 30 A circuit through the office walls. 10-gauge wire is such a pain in the ass to work with, though.
Maybe AMD is saving something for later. Or maybe it still has issues and they're using Intel's withdrawal of the feature from consumer CPUs to iterate on their own implementation.
I'm a little surprised not to see the GPU as a separate chiplet. I had a hunch the RX 6500 XT would make an appearance here. For an iGPU, that would actually be pretty impressive.
Without a GPU, that 6N I/O chip doesn't make much sense. Well, instead of a GPU they could have put a really large LLC on it, but the consumer value would be very low compared to a built-in GPU. Unless they eliminated the L3 caches from the CPU chiplets and either made them smaller/cheaper or added a couple more cores to each. They very well might do just that for server parts, but I doubt it here.
Why keep bringing Apple into it? Windows users or Linux users aren't switching OS or paying the Cupertino tax. Corporates still rely on AD & Group policy to manage their fleets and that's not there with MacOS. Plus of course all those who need to run x86/64 VMs.
It's just the usual Mac crowd; most of them love Apple, so it's just a side effect of the Macboys finally having their own silicon (please do not talk about price, rofl), never mind its inability to run the majority of software, or how "open" it is on all fronts. We even got a notch on a laptop this time.
The funny part was when AnandTech's own benchmarks of the M1 went up and showed how it ate the lowly BGA processors and even the pathetic low-quality mobile BGA GPUs from Nvidia. Then they use the M1 Max and Ultra, which are a joke, because of the too-high cost ($3K+) for such a locked-down hardware and software ecosystem (please do not bring up how Apple claimed RTX 3090-class performance and ended up getting slammed, even by The Verge). Oh yes, let's talk about DaVinci Resolve; why not, because ARM has specialized silicon cores, so it excels there. I'll just wait for AMD's Xilinx FPGA integration into x86 parts; it will beat the ARM specialized-silicon joke.
The PC ecosystem is a boon to computing for many people; sadly it's now glorified mostly for gaming, yet there are so many who use PCs for far more than anything these Macs can do. And don't forget the AM4 socket, or any socket for that matter, with a great PCIe ecosystem. None of that matters to these folks.
The funny thing is, these Apple proponents have been using Intel x86 for about 15 years or so. Now that they've got their own blazing CPU, they can turn their tails and run down Intel, AMD, and x86. It doesn't smack of good sportsmanship. On a more general note, the hate we sometimes see directed at x86 is due, I would say, to its being "old-fashioned," and ARM being (apparently) the shiny "new" thing.
What is more important is the UI degradation. For instance, the original Mac operating system from 1984 gave people control over how many times menus flash at them when items have been selected.
GHz billion-transistor CPUs, mountains of blazingly fast storage, and OS RAM requirements that would be incomprehensible to a normal 1984 consumer later, we have the situation where people are forced to endure three flashes in a row for each menu item.
Apparently, Apple thinks making it more difficult for epileptics to use their operating systems is progress. This is the price that's paid for cutting corners on macOS development team quality.
The quality of the UI continues to decline beyond the rather disastrous initial OS X release. The quality of bundled software is so atrocious in key cases (e.g. the Music program) as to leave one speechless.
Apple products, beginning with Lisa, were supposed to offer a more efficient experience than Microsoft’s offerings. That was supposed to be Apple’s angle. The company has replaced that goal with margin chasing, realizing that people, when given two bad UI options, will continue to patronize their option out of the inertia of familiarity.
If you can believe it, I've never used a Mac, so I can't comment from first-hand experience. But having been brought up on Windows, Apple's UI often seems nonsensical to me. At least with iTunes on Windows, I battled to make sense of it. I could be wrong, but I often felt Apple's philosophy was about removing unnecessary UI features (the best design being about removing, not adding).
As for Windows, it just gets worse and worse. Microsoft seems even more incompetent than Apple. Vista's copying of Mac's gloss, 8's preposterous tiles, and now the heaven-knows-what-to-call-it of 11.
‘In any case, the long and short of it is that user control has been and is going out of the window.’
Yes. The experience of being a computing consumer is steadily transforming into the old joke about Soviet Russia: the computer uses you.
The same is true of televisions, where Vizio is more of an ad company than a TV producer.
Going back to MS vs. Apple. The Lisa was a quantum leap over what Microsoft was offering. The Mac UI had some good points and bad (versus Lisa) but was vastly better than Windows up to W95. The Mac UI was still superior until OS X. Although that had good points, it had serious defects.
OS X has been degraded in recent years to the point where I am actually almost ambivalent about which platform to use.
From a superficial glance, I've noticed it has taken on a lot of mobile design motifs, and on the Windows side of the coin, the same thing has been happening, much to the dislike of many of us.
That is not true generally. It is only true some of the time. That ‘laziness is efficiency’ design philosophy is a very large part of the problem, when it comes to Apple.
Personal computers thrive via increasing, not decreasing, sophistication. Individual needs, such as those of the people with epilepsy I mentioned, warrant a feature-rich design, one where people have control over the machine. Lazy design substitutes a particular design team's conception of Joe Average, and/or the team's arbitrary ideas, for individual agency.
Good design is efficient, with efficiency taking into full account the needs of individuality.
PCs have the resources to have an extremely fine-grained level of user control over the OS — without that being a burden for those who are content with defaults. That a machine with an OS that fit into a 64K ROM gave users control over features, like menu flashing, and a machine with a 2 TB SSD and 32 GB of RAM does not says one thing only: failure.
Oxford Guy, I agree with your comment about design, but I was careless in my phrasing. I think the best design has full functionality for the artefact, but no more, and is as simple as possible: generally, adding functionality and then stripping it away to its essentials. Indeed, this principle carries through the whole length and breadth of life (Occam's razor). While I like extensive functionality myself, there's a hazy line where it becomes feature creep. In games, simpler interfaces, such as those of Half-Life or StarCraft, have appealed to me, whereas today I am repelled by the information plastered all over the screen. And I use Notepad more than any word processor.
There is a big difference between feature richness and poor design that impedes efficiency, despite having many features. Implementation quality is just as crucial as having enough feature richness.
Occam’s heuristic is, like any heuristic, capable of being very misleading. It works better for math than for people, as individuality is important for the latter and not the former. Humans like to give short shrift to individuality, probably due to egocentrism and overestimation in terms of the accuracy of focus groups and similar research.
The simplest design that manages to encompass individuality is much different from simplistic design that only works well for a subset of the population.
Well said. At the end of the day, implementation quality carries the design, and shows the skill, or want thereof, of those that have put it together. Taking an example, I would say Firefox is excellent, being plain and simple on the surface but powerful beneath the bonnet; it can take a great deal of customisation.
We should have Minority Report UIs by now. By that I mean, everything should have hotkeys, and windows should be flying around the screen like lightning. Menus and dialog boxes should be fully featured and organized. Also, you should be able to enable or disable any UI feature you want. Every time you are forced to slow down, awkwardly click around, manually type something you've already typed a million times, or fight with a feature you don't really want, that's a massive waste of time and effort. No thought has gone into improving the UI since it was first released.
Except windows users are, in fact, switching to Apple. Broadly. Heck even here in my local town, I personally know dozens of folk who have switched to Mac over the last couple years. People want simplicity. They don’t want to switch storage. And most of them don’t know what a GPU is. They want to just get their work done and do it without worrying about IT, virus protection, and the myriad specs that come with computers. I appreciate this may not be the thinking of the Anandtech reader base, but let’s face it, us Anandtech readers are tech enthusiasts. Like cars, most people are not car enthusiasts. They just drive cars as they serve a purpose. Just my $0.02
" Except windows users are, in fact, switching to Apple. Broadly. " i know no one that has switch to apple from windows, most cite the reason as ? cost. apple is just too expensive for what you get.
Could be location based but among my friends and colleagues, family etc, practically everyone I know has switched to Apple 100% in recent years. A few creative/dev fold but the rest average joes. They also acknowledged the cost but many didn’t. Many just purchased MacBook airs which really are quite good value given the performance, reliability, retina screen etc.
maybe, but still, to claim everyone is switching to apple or not, based on location, is moot. the people i know that do use apple only use the iphone, and 1 with a mac due to his music thing. the rest are windows comp based
I think the status take is kind of lazy. Apple has passed the point of status and is very mainstream. And you can get an air for what is pretty cheap in terms of a new ultralight laptop.
I don't know about how much switching is going on, but I definitely think people have been gravitating more and more to simplicity and that this is being reflected in tech design. There is so much complexity in the world that people (those that aren't power users that need and/or enjoy the complexity) don't care if features are locked out as long as "it just works" etc. Corporations are happy to give this to them because they can also market it and make more money. Perhaps I am wrong though.
Thank you for the highly anticipated Ryzen 7000 coverage. While I haven't finished reading the article, I'd like to report a typo in the word "discuss": "While AMD isn't going into great detail on the Zen 4 architecture today – they have to save something to disucss for later in the year"
What AMD means by +15% Zen 4 Raphael performance is a +15% price increase, regardless of the actual performance delta relative to V5x. TSMC is raising prices into the second half of 2022.
Intel Alder Lake's (and Raptor's top-bin) missing price points are $749 and $999. Here's a second take at AMD Zen 4 pricing, mirroring the forgotten [?] Intel Extreme Edition price ladder.
I've spent more time estimating what the Zen 4 Raphael price "is." The framework to consider in the equation: 1) TSMC and AMD, a +20% combined price increase into the second half of 2022; 2) from the 7 nm to 5 nm shrink, +40% more good dies per wafer. Aha, the 20:40 rule.
I disqualified TSMC's potential 81% density increase from 7 nm to 5 nm on manufacturability grounds for a high-frequency part.
What is good silicon per wafer? On a CCX + I/O normalized basis across the product stack, V5x + I/O = 245 mm^2 normalized (and yes, in the napkin math the Ryzen 5000 I/O die is fabricated on 12 nm+). That moves you from 216 good complete components to 302 good complete components per wafer. Recall that "normalized" is a full-product-stack calculation weighting each percent-grade SKU by its die count (1 or 2) plus its I/O composition.
So what's R7X 1K?
The Ryzen performance-desktop average price to AMD from TSMC across the whole product stack is $163. At plus 20%, it's now $195-ish, on which AMD adds its traditional 1.5x margin for OEM high-volume procurement. The range is 1.5x to 2x for low-volume procurement.
So what's R7X 1K?
Does +40% good components offset the +20% price increase from TSMC to AMD?
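Taking those 20:40 inputs at face value (they are the commenter's estimates, not published figures), the arithmetic is straightforward; a sketch:

# Assumed inputs (the commenter's estimates, not published figures):
price_factor = 1.20   # wafer price +20%
yield_factor = 1.40   # good components per wafer +40%

relative_cost = price_factor / yield_factor
print(round(relative_cost, 3))               # 0.857 -> relative cost per good component
print(round((relative_cost - 1) * 100, 1))   # -14.3 -> ~14% cheaper per component

# And the 216 -> 302 components-per-wafer figure above is indeed ~+40%:
print(round((302 / 216 - 1) * 100, 1))       # 39.8

So under these inputs, the extra good components more than offset the wafer price hike, before packaging and test costs.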
Just more personal-opinion BS and garbage from the pretend couch analyst known as Mike Bruzzone, who posts NO sources other than his own BS page, which also has no direct links to sources. I'd like to see where he gets his numbers from. My guess: thin air.
As said before, nothing but fluff posts from a fluff "analyst," if he can even be called that.
As I said before, my numbers are from primary research, and the base data for the worldwide channel is eBay offers. There are other electronic seller sites, like Amazon, that can also be relied on, as AMD, Intel, and Nvidia have a segment sales manager for every seller site. eBay, however, is what the industry, the channel itself, relies on for worldwide inventory supply management, as Intel's supply signal cipher was relied on until 2012, as Price Watch was relied on in the 1990s, and as Storeboard was relied on in the 1980s.
Pursuant to my specific post here, TSMC's price to AMD and AMD's price to OEMs are calculated on the stakeholder/3 rule: TSMC gross, AMD gross, and OEM NRE and margin, validated against AMD financial reconciliation.
The specific Raphael component-count increase comes from good wafer area, roughly 53,000 mm^2 divided by die size, on 7 nm versus 5 nm, subject to the V5x-normalized full-stack CCX + I/O area and the eBay percent-grade SKU weighting, to determine the normalized silicon-area requirement across the full product stack.
The supply-wave slides at my Seeking Alpha blog are the eBay primary research and proof of work.
Then post direct links to your sources. Your own page is NOT a source; it's self-promotion, nothing more. I think you don't post direct links to your sources because you don't have any, period.
The more you post this BS, the more it looks like you are a fluff analyst, posting fluff, and a fraud.
Mike Bruzzone: enlisted by attorneys of the Federal Trade Commission for the FTC v. Intel docket in May 1998 at FTC HQ, Washington DC. Lettered to work reporting to a California Department of Justice Deputy State AG in March 2000, inputting to the antitrust division primarily on Intel vertical tying and horizontal contract in combine, beginning March/April 1998. Informal work for the EUCC began in 2001, and by 2009 there was an honor by letter from an EU Commissioner. Referred by Congress and a House subcommittee on commerce, and by the FTC's Harvard production economist from Docket 9288 (we were aware of each other's work), then back to attorneys' enlisted aid for the FTC v. Intel Docket 9341. The same year, retained by the Congress of the United States on a USDOJ contract, and in 2010 made a Docket 9341 consent-order monitor by an FTC attorney, monitoring AMD, Intel, Nvidia, and VIA. I said that if you need a work assignment, I'd give you one, so you too can contribute productively and know what primary research is and how primary research makes a data source; regards.
Mike Bruzzone: former Orchid Technology, Arche Technology, Cyrix, ARM, NexGen, and AMD employee; Radius, C-Cube, Samsung Alpha, Intel, and IDT Centaur consultant. With the FTC since May 1998. First through eleventh non-Intel x86 and ARM introductions, over 30 PC introductions, the first symmetrical PC processor, the first PC MPEG encoder, the first 3D PC graphics card (Orchid, IBM AT, Turbo PGA).
Mike, you are the source? Yeah, OK, sure. And you pull these numbers out of thin air? And the whole post above?
"I said if you need a work assignment..." So I can post made-up numbers and data? No thanks.
"and know what primary research is and how primary research makes a data source" Without LINKS to where you get this "data source," it's meaningless fluff and drivel.
IF I wanted to, I could post a bunch of text like you do, claiming the same things. Point is? It doesn't prove ONE thing. LINKS would, which you STILL refuse to post. So again, all of your posts are just spam, fluff, and BS from a fraud couch-potato "analyst."
Qasar, all primary auditors and analysts inferring from data, whether quantitative, such as the eBay supply-chain data I examine, or qualitative, as from, say, a cross-matrix organization, production, supply-chain, or channel audit, are doing nothing other than comparing, reflecting, and inferring in their search for things in concert. Confirmation or contradiction across all the fuzzy ambiguities that exist in between? On which, perhaps, you'll make a new discovery?
Are you drunk??? That looks like the ramblings of someone who drinks too much.
The only discovery I would like to make is where you come up with this BS. No links means NO ONE can prove you right OR wrong. How convenient.
WHY won't you post direct links to your sources? Is it because you don't want to? Or is it because you have NO sources, so you can't? I guess it's difficult to show your sources when it's all made up.
Ian Cutress asked Robert Hallock about the PCIe bandwidth on the chipset link too. It is also PCIe 5.0, so all 28 lanes from CPU are 5.0. It looks like the first gen of Prom21 chipsets are 4.0 capable only.
This means that Alder Lake has double bandwidth on the chipset link. DMI stands at x8 PCIe 4.0.
AMD either needs to enable a 5.0 chipset link in the second generation of chipsets, or sacrifice one M.2 on the second chipset chip and allocate another x4 link from the CPU (on motherboards without the ASM4242 USB4 chip) to create a doubled x4 chipset link.
Alder Lake's DMI is PCIe 4.0 x8; Ryzen 7000's is PCIe 5.0 x4. That's the same bandwidth. It was the same case with Z590: the 11th-gen DMI is 3.0 x8, which matches the PCIe 4.0 x4 link on X570.
The CPU lanes number 24, the same as with Zen 3. Intel had 16 lanes until 10th gen, upgraded to 20 lanes with 11th, and it's the same for 12th. They might upgrade to 24 on Raptor Lake; we don't know yet.
I'm not sure the DMI link speed is a measure of anything useful. My PCIe 4 SSDs, which individually can do 6-7 GB/s, max out at 3.3 GB/s in SSD-to-SSD transfers over DMI on a Z690 board.
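For reference, the link-bandwidth equivalences claimed above are easy to verify from the per-lane rates; a rough sketch using headline figures after 128b/130b encoding (real links lose a bit more to packet overhead, and as the transfer numbers above show, drives rarely see the full figure):

# Per-direction GB/s per lane, using headline rates after 128b/130b encoding
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen, lanes):
    """Per-direction bandwidth of a PCIe link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(round(link_bw(4, 8), 1))   # 15.8 -> Alder Lake's DMI 4.0 x8
print(round(link_bw(5, 4), 1))   # 15.8 -> a PCIe 5.0 x4 chipset link matches it
print(round(link_bw(3, 8), 1))   # 7.9  -> Z590's DMI 3.0 x8
print(round(link_bw(4, 4), 1))   # 7.9  -> X570's PCIe 4.0 x4 uplink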
The first comments (and, after a name change, the second comment series) were related to some extent to the initial comments, as far as I looked into it, and some content (one paragraph) was from reddit.com as well. A bot with context-duplication ability (keyed to initial catchwords): either a spamming script generating comment text with a web-search tool, or AI testing (statistically weighing content already on the site into answering comments).
(If it had been less intrusive, it might have been interesting to see the secondary or multi-iteration effects of generated comments on comments; probably some kind of summary of the most-used lemmas.)
Crossfire hasn't been a thing for years and hybrid crossfire is even less likely to make a comeback. I wouldn't hold my breath. Switchable graphics might make it, but the utility on desktop is extremely limited.
Even x16 PCIe 5.0 isn't fast enough for full memory coherency, so for it to work it'd need to be a CrossFire/SLI-like technology, which is really not in vogue right now.
Hybrid crossfire was available on A8/A10, with MUCH, MUCH slower main memory and PCIe. The code is still probably in the drivers. And that seems like an incentive for AMD CPU users to buy an AMD card instead of Nvidia.
Devo2007 - Monday, May 23, 2022 - link
The Zen 2 recap on page 1 says the new design shift came "through the use of chipsets." Should be "chiplets." (Pg 1)
Ryan Smith - Monday, May 23, 2022 - link
Thanks!
mode_13h - Tuesday, May 24, 2022 - link
The article also misquoted PCIe 5.0 x4 bandwidth as "32GB/sec of bandwidth in each direction." Should be 16. An easy way to remember it: PCIe 3.0 is roughly 1 GB/sec per lane per direction. Going to PCIe 5.0 makes it 4, then going to x4 makes it 16.
peevee - Friday, May 27, 2022 - link
16 PCIe 5.0 lanes for graphics (2x8 for two slots), 4 for NVMe. That leaves 4 more for the chipset.
Targon - Thursday, May 26, 2022 - link
As for running the slots at PCIe 4.0 speeds: if you see motherboards with extra PCIe lanes connected via the chipset instead of directly to the CPU, then sure, those will be limited. Only the 20 PCIe lanes off the CPU will definitely be PCIe 5.0, 16 for the primary PCIe slot and 4 for the first M.2 slot. Beyond that is where there will be debates.
peevee - Friday, May 27, 2022 - link
"Keep in mind that AMD keeps socket compatibility for far longer than Intel. If you buy a X670 or X670e based motherboard in 2022, in 2024 or 2025 you can drop in a new CPU. "You can but why would you do that? Increases in computing performance have barely crawl now.
Consider this:
new process node + 65% more power + twice the L2 cache + architectural improvements + DDR5 = +15% of performance at best.
They threw everything at it at the same time, and this is as much as they got.
There will be nothing in 2024-2025 warranting an upgrade over this, and probably not ever, the way things are going, if you buy an 8-16 core part this fall.
If you understand demographic trends too, you'll understand that this is essentially the end.
RobATiOyP - Friday, August 12, 2022 - link
Come on! There's a large clock-speed lift, the bigger L2, faster DRAM, plus around 8% IPC, and MT performance gains greater than ST. If you look at the Zen 3 launch, its enhancements seemed smaller too, but the result in practice was much more than it appeared at first sight, and here the main goal was the AM5 introduction.
Zen 5 is expected sooner and is a full redesign, not an incremental improvement. Those who have enough performance can look forward to a more efficient and cost-effective design, while others can expect decent performance lifts, not just 5% each generation.
rmfx - Monday, May 23, 2022 - link
Quite a disappointing upgrade.
I hope they show better results with RDNA 3.
ballsystemlord - Monday, May 23, 2022 - link
I'm only disappointed that they stuck with 24 PCIe lanes. In previous CPUs, when they actually had a North and South bridge, AMD gave us a lot more.
mode_13h - Tuesday, May 24, 2022 - link
That was also PCIe 2.0. 5.0 is 8x as fast. The x4 link to the chipset is like another x16 of PCIe 3.0 lanes, all by itself!
ballsystemlord - Thursday, May 26, 2022 - link
So more bandwidth means you must have fewer peripherals? No, of course not. That's why I'm complaining.
Molor1880 - Thursday, May 26, 2022 - link
It's a minor use case in the market. PCIe 3.0 systems started to add more lanes to compensate for the prolonged development time of PCIe 4.0. Once those faster lanes showed up, the chipset could reasonably handle all the peripherals with fewer lanes.
mode_13h - Thursday, May 26, 2022 - link
> more bandwidth means you must have fewer peripherals?
No, it means two things:
1. More bandwidth (potentially) to the chipset, which can fan out into more lanes.
2. Direct-connected peripherals can use fewer lanes @ same bandwidth -> more peripherals.
Case in point would be AMD's ratcheting down of video card lane widths. With PCIe 5.0 coming onto the scene, we can expect that to continue.
ballsystemlord - Thursday, June 2, 2022 - link
1: But that's not the path the MB manufacturers are taking...
2: I think this is great, but we're still getting fewer PCIe connectors on MBs.
Targon - Thursday, May 26, 2022 - link
You must have missed that AMD didn't even lay out the details of the Ryzen 7000 series. Describing Zen 4 in very general statements without specifics means AMD isn't letting Intel know the full details of what to expect.
blanarahul - Monday, May 23, 2022 - link
A 10+% clock-speed improvement, and we get a 15+% performance benefit. Considering it's a marketing slide, it's safe to assume they cherry-picked the best result. This would imply that we are looking at a 4-5% IPC improvement in a best-case scenario. Interesting.
On the other hand, I am quite surprised they have managed to push those tiny 5 nm transistors to 5.5 GHz. There was a time when we needed LN2 to go that high. I wonder what the all-core boost for an 8-core part would be; 5 GHz across all 8 cores seems pretty doable.
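The implied-IPC arithmetic works out as follows (a sketch using the round numbers above):

# Performance = clocks x IPC, so the IPC factor falls out by division
perf_gain = 1.15    # AMD's ">15%" single-thread claim
clock_gain = 1.10   # ~10% boost-clock increase

ipc_gain = perf_gain / clock_gain - 1
print(round(ipc_gain * 100, 1))  # 4.5 -> the 4-5% best-case IPC estimate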
blanarahul - Monday, May 23, 2022 - link
I will say, though: 170 watts of power across those tiny chiplets... ouch, that's a lot of heat density. The expectation would be that anyone purchasing these brings something like an NH-D15 or an AIO just to run the thing at stock at comfortable temperatures under full load.
What a time to live in. Consumer CPUs are reaching 170 watts for AMD and 241 watts for Intel. Consumer GPUs are reaching 350 watts by design and 450 watts for enhanced cards. And this is with boost algorithms keeping the power consumption somewhat in check for all products.
Threska - Monday, May 23, 2022 - link
It'll mean that purchasing a computer will take the same amount of commitment as a home furnace.
ballsystemlord - Monday, May 23, 2022 - link
I wonder what the energy density is going to be.
Khanan - Monday, May 23, 2022 - link
GPUs are already at 450 W stock at Nvidia. And unless the EU or some other body limits it, they will go higher and higher until they hit a cooling limit.
Cooling is not the bottleneck. The problem is that there's a practical ~1500 W limit in many parts of the world, if you don't want to have to redo your house's electricity just to connect a computer.
And that's if you have a dedicated circuit just for your computer. Forget about having two of them on the same circuit.
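One plausible reading of that ~1500 W figure, assuming a common North American 120 V / 15 A branch circuit and the usual 80% continuous-load derating:

volts, amps = 120, 15                        # assumed: a common NA branch circuit
continuous_derate = 0.80                     # code practice for continuous loads
print(volts * amps)                          # 1800 W absolute circuit limit
print(int(volts * amps * continuous_derate)) # 1440 W continuous -> the ~1500 W rule of thumb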
jakky567 - Wednesday, May 25, 2022 - link
You forgot about connecting monitors on the same circuit.
TheinsanegamerN - Wednesday, May 25, 2022 - link
Cooling the 3090 Ti is a difficult job, and pushing it any higher via overclocking gets temps dangerously close to throttling even with a 4 (!) slot cooler.
>What a time to live in. Consumer CPUs are reaching 170 watts for AMD and 241 watts for Intel. Consumer GPUs are reaching 350 watts by design and 450 watts for enhanced cards. And this is with boost algorithms keeping the power consumption somewhat in check for all products.
While electric power prices are going up. On the upside, we won't need additional heating in the winter.
scottrichardson - Wednesday, May 25, 2022 - link
Let's not ignore the elephant in the room: Apple, with its vastly lower-power solutions on 5 nm, competing on performance. Although I absolutely expect single- and multi-core performance on a majority of these 7000-series chips to beat an M1, I'm intrigued by what the M2 and its derivatives can muster against the high-power Intel and AMD bad boys.
Sailor23M - Wednesday, May 25, 2022 - link
Absolutely correct.
TheinsanegamerN - Wednesday, May 25, 2022 - link
Let's not ignore Apple's other elephant: MacOS.
lilkwarrior - Wednesday, May 25, 2022 - link
… Which is better than Windows, far less obtrusive to productive workflows (what an overwhelming majority use high-end hardware for), and orders of magnitude more UI-friendly out of the box.
at_clucks - Wednesday, May 25, 2022 - link
De gustibus, my friend. But it's always a treat to see young'uns so victoriously declaring what's better in this world and what isn't.
haukionkannel - Wednesday, May 25, 2022 - link
TDP has been confirmed to be 145 W max, so not as high as was expected. The 170 W is just the upper limit for the socket.
PeachNCream - Wednesday, May 25, 2022 - link
Power consumption is pretty far out of control for computer components, so while competition is good, the most significant outcome of it is that the few companies that can afford the R&D keep incrementing power input, and consequently heat output, to keep pace with one another in compute performance. I gave up keeping up with that quite some time ago and just run whatever a cheap laptop can handle. It's been refreshingly nice to not fuss over upgrades and to just work within the limits of my hardware rather than chase benchmark results or FPS in the latest games.
Unashamed_unoriginal_username_x86 - Monday, May 23, 2022 - link
Anyone else think of Willow Cove in 11th gen? Node improvement and a way bigger L2, negligible IPC gain but a big frequency boost? It also claimed a 10-20% ST bump:
https://www.anandtech.com/show/16084/intel-tiger-l...
Obviously quite different, Tiger Lake had much better than 1.1x freq relative to Ice Lake and an L3 upgrade...
mode_13h - Tuesday, May 24, 2022 - link
This also has DDR5, which should be a big boost for the 12- and 16-core models.
Kangal - Monday, May 23, 2022 - link
I was assuming the same, even before this announcement.
Something like a +5% IPC gain at slightly lower power consumption, but in practice using more power and generating around +15% performance. This is to put pressure on Intel's 12th-gen and the similarly performing 13th-gen processors. What surprised me was the inclusion of an (RDNA 2) iGPU in the mainstream processors (e.g. the r9-7950x), which is handy for AI tasks, troubleshooting graphics issues, and biding time during unpredictable GPU shortages.
Overall, this is AMD adapting its late Zen 3+ architecture to its new platform. No major surprises, and a wise move indeed for a smooth launch and healthy, long-term AM5 support. So think of this somewhat like the 14nm Zen 1 to the 12nm Zen+. When Zen 5 arrives by early 2024, it will be a proper new design, and that's when Intel's 14th-gen will be in trouble.
I would say people should take a pause on this one; the older AM4 is still plenty competitive. Meanwhile on the cutting edge there is the new Apple A16 Bionic architecture coming, being adapted into the M2 family of chips. It looks like a decent upgrade over the previous performance level of M1=A13/A14/A15, so it would be wise to see how Intel and AMD respond to it (though it's very predictable).
kwohlt - Monday, May 23, 2022 - link
"When Zen5 arrives by Early-2024, it will be a proper new design, and that's when Intel's 14th-gen will be in trouble."There is so much speculation here. Neither Arrow Lake nor Zen 5 are even taped in yet and are not design finalized. There's no way anyone can truly say which will be better - not even AMD and Intel's own engineers could be sure.
drothgery - Monday, May 23, 2022 - link
Arrow Lake should be 15th gen, right? Barring any revisions or naming-scheme changes (and at this point the Core iX naming scheme is almost as old as x86 was when the last new 486 variant came out), Raptor Lake is 13th, Meteor Lake is 14th, and Lunar Lake is 16th.
kwohlt - Tuesday, May 24, 2022 - link
Arrow Lake is 15th gen. Intel's 14th gen is launching in spring of 2023; Arrow Lake will be 2024 and compete against Zen 5.
Either 16th gen or 17th gen is going to be the biggest fundamental change to Intel's architecture since the debut of "Core" back in the mid-2000s. The 'Royal Core Project,' as it's known, is partly what Keller was brought back to start work on. It'll be a "brand new architecture" in the same way Zen 1 was for AMD. I wonder if they'll keep the "Core" naming/branding scheme, or if, like how BMW still calls a car the "328i" despite no longer using a 2.8L engine because the name has become too recognizable to change, they just keep the same branding despite the brand-new architecture.
blanarahul - Tuesday, May 24, 2022 - link
> the "15% IPC gain" figure is measured using Cinebench and compares a Ryzen 9 5950X processor (not 5800X3D), on a Socket AM4 platform with DDR4-3600 CL16 memory, to the new Zen 4 platform running at DDR5-6000 CL30 memory. If we go by the measurements from our Alder Lake DDR5 Performance Scaling article, then this memory difference alone will account for roughly 5% of the 15% gains.This is according to techpowerup.com
So we have 0 IPC improvement. Zen 4 is just a node shrink + clock speed boost + upgraded memory compared to Zen 3.
Wow. Just... wow.
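To spell out that decomposition (taking the 5% memory attribution at face value; treating clocks, memory, and IPC as independent multiplicative factors is itself an assumption):

perf_gain = 1.15     # claimed single-thread gain
clock_gain = 1.10    # ~10% from frequency
memory_gain = 1.05   # ~5% attributed to DDR5-6000 (techpowerup's estimate)

residual_ipc = perf_gain / (clock_gain * memory_gain) - 1
print(round(residual_ipc * 100, 1))  # -0.4 -> essentially nothing left over for IPC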
blanarahul - Tuesday, May 24, 2022 - link
Btw, they mistyped. It's not a 15% IPC claim but a 15% single-thread claim.
mode_13h - Tuesday, May 24, 2022 - link
I'll bet AMD fell into the AVX-512 trap. They probably sank so many engineering resources and so much die area into implementing it that it starved other areas of the chip.
They did a good job delaying it for so long, but I guess they're probably facing growing demands for it in the server market.
Kangal - Thursday, May 26, 2022 - link
It was a worthwhile investment.
They would have to make the leap to AVX-512 some day, and this way they can just do it and get it out of the way; the sooner the better, as software adapts to it. If anything, this was the perfect time: it's on a brand-new platform, and they have plenty of performance and efficiency on the table to spend. Intel is going to be dominating them with the 12th-gen and 13th-gen products.
AMD still has options such as:
- slight tweaks to Zen 4
- overclocking it
- adding more cores per chiplet
- adding their X3D cache
- removing the iGPU
- leaning more on AVX-512 in software by that point
So they can easily sit on this Zen 4/Zen 4+ for the next 2-3 years, buy themselves some time until Intel becomes a threat, and then respond swiftly with Zen 5, or whatever they want to call the new architecture they've been working on.
Meanwhile, Apple is going to smoke every competitor with their A16 family, all the way from the silicon in their Watch to the Mini Phone, Large Phone, Tablet, Ultrabook, Laptop, and Desktop offerings. And possibly even an Apple cloud service against Intel's Xeon, AMD's EPYC, Amazon's Graviton, and the Ampere Altra lineup.
Slash3 - Sunday, May 29, 2022 - link
Die area isn't as much of a concern for AVX-512 as most people think. It's on the order of 1% of the total die area for a 12900K. It does add about 10-15% to an FPU/SIMD block, but you can see from annotated die shots that it's still a comparatively small portion of the 12900K die.
https://pbs.twimg.com/media/FCvOcnYXIAInJhh?format...
And here, a comparison between Skylake and Rocket Lake (both on 14nm, the latter of which added AVX512)
https://pbs.twimg.com/media/E2GW2NWXIAUbdBK?format...
Slash3 - Sunday, May 29, 2022 - link
(~37% increase for each FPU/SIMD unit, ~10-15 for the total die cluster area - I phrased it a bit awkwardly above)gruffi - Tuesday, May 24, 2022 - link
Nonsense. They didn't cherry-pick anything. They used CB R23 ST, which is, if anything, something like Intel's ST best-case scenario. And they didn't say 15%, they said >15%. Depending on the clock speed, it could mean a ~10% IPC improvement. Though that's still underwhelming after two years; people expected more like 15-20%. OTOH, 46% faster than the 12900K in Blender is quite good.
whatthe123 - Tuesday, May 24, 2022 - link
It's not the best case for Intel ST. It's the best case for both companies in ST, since it's lean on everything but FP demands.
The best case for Intel would be CB MT, but for MT testing they used Blender, where performance varies a lot depending on the scene rendered.
deepblue08 - Wednesday, May 25, 2022 - link
Yeah... and the 5% IPC improvement is probably the increase in L2 cache and maybe a bit from DDR5.
peevee - Friday, May 27, 2022 - link
There is nothing 5 nm about the transistors. They are about the same size as the old "32nm" transistors...
BillBear - Monday, May 23, 2022 - link
>Zen 4 marks the first use of 5 nm for desktop systems.
The M1 Mac Mini is built on TSMC 5nm and has been shipping since 2020.
Trackster11230 - Monday, May 23, 2022 - link
The article may have been corrected after your comment, but it now states "for x86 systems."
Hifihedgehog - Monday, May 23, 2022 - link
The official slides state "first 5nm PC processor cores." I really wish Ian were back. He was much better at seeing nuance like this.
RSAUser - Monday, May 23, 2022 - link
Technically, an M1 Mac is not a PC, as it's not x86 (remember the IBM PC).
Oxford Guy - Tuesday, May 24, 2022 - link
PC = personal computer.
name99 - Monday, May 23, 2022 - link
Let them have their fun for a few months.
M2 will be out soon and there will be lots of sad faces.
mode_13h - Tuesday, May 24, 2022 - link
M2 won't kill the PC market, just like the iPhone didn't kill the Android market.
scottrichardson - Wednesday, May 25, 2022 - link
iPhone birthed the Android market. It didn't kill it ;)
Sailor23M - Wednesday, May 25, 2022 - link
LOL
Zoolook - Thursday, May 26, 2022 - link
Android development started in 2003, the original iPhone with iOS 1.0 launched in 2007, and Android launched in 2008; I'd say they are very much contemporary.
Zoolook - Thursday, May 26, 2022 - link
The latter half of my comment fell away: "PalmOS and Symbian birthed both iOS and Android" would be more correct.
BillBear - Saturday, May 28, 2022 - link
Google was in Vegas showing Android to carriers (as a clone of BlackBerry) when Jobs demoed the iPhone. They immediately dropped plans to have Android be a BlackBerry clone and copied the iPhone instead.
>The Day Google Had to 'Start Over' on Android
https://www.theatlantic.com/technology/archive/201...
mode_13h - Tuesday, May 24, 2022 - link
I'd add that the main place AMD needs to worry about ARM is in servers. Everywhere else, Intel is still far and away their #1 concern.
Kangal - Wednesday, May 25, 2022 - link
That's true.
Amazon's Graviton3 (64 cores, roughly Cortex-X1 class) is launching, and there's the Ampere Altra Max (128 cores, roughly Cortex-A76 class). The bigger concern is ARMv9 and the coming Cortex-A730 family of processors, combined with TSMC's ~6nm node.
They're serious competitors in the cloud and server market, maybe even in "supercomputers" in the future. Not just on cost, but on energy and multithreaded performance, and they're still not bad on latency/single-core performance either. The biggest hurdle for ARM seems to be features and software, but that gap is shrinking every year.
lemurbutton - Monday, May 23, 2022 - link
This would make ST slower than Alder Lake and likely well behind Raptor Lake.
In addition, M2 is coming and will almost certainly be based on the A16 rather than the iPhone 13's A15. There should be a very sizable increase in CPU performance from the M2 over the M1.
Once again, AMD is 3rd.
Bik - Monday, May 23, 2022 - link
M1 is based on the A14 of the iPhone 12, so the M2 will be two generations ahead of the M1 if they use the A16 as its base. The bet is indeed on Intel and Apple now.
lemurbutton - Monday, May 23, 2022 - link
Of course it will be based on the A16. Apple isn't going to design and manufacture a new M series every year; imagine having to design an M Ultra every year. It'll be every two years, which means Apple will always use cores from the next A series.
M2 = A16
M3 = A18
M4 = A20
And so on...
In addition, M1 and A14 were released within a month together. If we haven't seen M2 by now, it's not going to be based on A15.
Zoolook - Thursday, May 26, 2022 - link
Performance between the A14 CPU and A15 CPU, where it matters, is almost flat, only a few percent gain, so for Apple's sake they'd better base it on the A16; otherwise it won't improve by much.
Silver5urfer - Tuesday, May 24, 2022 - link
What does even that garbage mobile SoC processor have to do with AMD's Zen 4 or Intel's RPL platforms?
Apple is BGA trash-ware, a use-and-throw mobile toy. Do not compare it to the powerhouses called AMD and Intel. M-series laptops have non-modular, soldered, proprietary garbage designs, while these have M.2 and PCIe expansion slots.
Second, Alder Lake trashed Apple in IPC and ST flat out. Then Nvidia destroyed it, and all this with Apple getting first dibs on TSMC 5N months earlier, versus Intel 7 and Nvidia's Samsung 8N.
M2 will be using TSMC's new node again; comparing these is apples to oranges, and on top of that, the transistor count of Apple's CPUs is very high versus these. That marks how Apple's scaling is.
mode_13h - Tuesday, May 24, 2022 - link
> What does even that garbage mobile SoC processor
LOL. Too obvious, troll. I'll bet nobody even read past that point.
Silver5urfer - Wednesday, May 25, 2022 - link
Yeah, sure. I have a long history of comments here, but since I mocked that BGA dumpster design, which costs north of $4,000 for high performance in the Max and Ultra tiers, with fully soldered BS designs and proprietary, non-standard hardware, it's suddenly bad.
First, you can go and buy it, mate; nobody is stopping you. Second, you don't have to comment on every single post here as if we need your validation.
A decommissioned Xeon has more use than a BGA, proprietary, ecosystem-locked, overpriced silicon trash competing in the ranks of AMD and Intel.
lilkwarrior - Wednesday, May 25, 2022 - link
It's overpriced because it doesn't fit your value calculus. The world doesn't revolve around you or any particular person's tastes. Apple makes computers for a particular demographic that they have fitted just fine. Same with AMD and Intel.
Qasar - Wednesday, May 25, 2022 - link
"The world doesn't revolve around you or any particular person's tastes"
The same thing can be said about your praise of Apple, and you. Point is?
t.s - Wednesday, May 25, 2022 - link
Agreed. Apple's is a trash ecosystem. If only they had at least non-soldered storage, then I could recommend them to a new user who wants to buy a PC. As it is now, since they're break-and-bin machines, I won't recommend them, except to Apple fanatics.
lilkwarrior - Wednesday, May 25, 2022 - link
They have non-soldered storage on their desktop platforms, just not the all-in-ones or portables.
My Mac Pro has 192 GB of RAM and a 4 TB RAID 0 array of SSDs, in addition to the 4 TB storage module in a non-PCIe form factor that's swappable with a number of options, from Apple or outside Apple, anytime.
supdawgwtfd - Thursday, May 26, 2022 - link
Your Mac Pro is obsolete.
Technetium. - Monday, May 23, 2022 - link
I'm thinking it could be 170 W of package power tracking (PPT), as Ryzen is currently at a 142 W PPT and power usage currently sits at about 140 W. The CPU AI acceleration is also an interesting approach, as it's on the CPU rather than the GPU.
Bik - Monday, May 23, 2022 - link
This looks, if nothing else, like a messed-up release by AMD. They aimed for a 10% clock-speed increase that comes at the expense of increased TDP, when the competition has a 19% IPC uplift. Maybe Zen 3 is already a dead end for their engineers?
alufan - Monday, May 23, 2022 - link
LOL, the trolls are quick to show up here. So the 5.5 GHz running during gameplay on all cores was not an advance? Because trust me, that was all-core, not just one or two cores peaking. The Blender demo was a fail? The power budget sticking below 200 W for the CPU while achieving all this is also a fail, while the competition can easily eat its way to over 270 W doing the same? And as we know, watts equal heat.
I'm reserving judgement until launch, because Intel won't stand still, just as AMD hasn't, and we consumers are living in a golden age right now.
lmcd - Monday, May 23, 2022 - link
I'm an Intel fan, but this is a poor interpretation.
This will be the Bristol Ridge of AM5: a pleasant platform intro that brings the platform into availability but really doesn't reinvent much else.
The new I/O die, cache ratios, and memory bandwidth should be able to feed more interesting CCD designs, but AMD siphoned resources into Zen 3+ for back-to-school timing.
I applaud the smart roadmap strategy, since Zen 4 is probably doomed to land post-crash and Zen 3+ probably isn't.
Oxford Guy - Tuesday, May 24, 2022 - link
Bristol Ridge was anything but pleasant. It was a cut-down version of the construction core. Although IPC had improved a bit over Piledriver, the lack of cores and cache did not help.
This release is offering competitive parts. Bristol Ridge was anything but.
Oxford Guy - Tuesday, May 24, 2022 - link
BR also had a serious clock-speed deficit, defying the paradigm of the construction-core gambit (long pipeline, high clocks).
Low clocks, few cores (especially FPU elements), little cache, and IPC that had mostly stagnated.
BR was an embarrassment for the AM4 platform.
GeoffreyA - Tuesday, May 24, 2022 - link
But at least it was quickly forgotten once Ryzen came out.
CrystalCowboy - Monday, May 23, 2022 - link
You are assuming that the max TDP for the socket is the TDP for the chips. Also, if you are concerned about dead ends, you might ask yourself why Alder Lake gets no uplift from the use of DDR5.
Silver5urfer - Tuesday, May 24, 2022 - link
AMD clearly knows Intel is capped at 8C max for gaming, so they will have the 7800X to top out RPL or ADL processors. IPC uplift doesn't mean JACK; look at Rocket Lake: it had a double-digit IPC boost over Comet Lake, but it lost 2 cores / 4 threads, and what happened? Massacred by their own 10900K. AMD's Zen 4 packs full-fat x86 cores, all with more cache than Zen 3, on top of TSMC 5N with 5GHz+ boost clocks. This will hemorrhage Intel's Raptor Lake in MT workloads; AMD's HT/MT performance is always very high. Alder Lake was barely able to edge out Zen 3 processors, which were from 2020, in 2022.
Also, Intel's Core is the oldest microarchitecture around in CPU designs; it has been the same thing since the first Nehalem came out. Intel kept on improving it and hit big walls with Comet Lake and Raptor Lake on both nodes and uArch. They cannot add more than 8 P cores, so they widened it and gave it ultra-high clocks, like 14nm++, on 10nm ESF / Intel 7. And you think AMD's Zen is already a dead end? lol.
AMD doesn't have to push X3D on this; there's no need, as MT will completely eat RPL for breakfast, and in gaming they will be on par with, if not able to beat, Intel RPL. Intel knows it very well; that's why they have 8 extra E cores, to offset AMD's MT lead by a few percentage points. So it's always like this: a major CPU uArch gets improved over time, and you do not need to pump out double-digit growth when there's no need to.
All in all, it's not a major release but a very welcome change: improved Zen 3 on all fronts, from the IOD to uArch and clocks, plus new chipsets.
kwohlt - Tuesday, May 24, 2022 - link
This reads as very emotionally invested in AMD... Rocket Lake being downgraded to 8 cores was specifically the result of the backport process. Obviously the backport was due to 10nm failing to hit high enough clock speeds for desktop, and this was a band-aid measure.
As for "they cannot add more than 8P cores" - that's just totally false. Every cluster of 4 E cores is the same size/thermals as a P core. 8+8 was chosen instead of 10+0. For 13900K, 8+16 instead of 12+0.
As for your assumption that Zen4 will completely dominate RpL in MT, there's just no reason to believe that at this point. You think 7600X with 6 cores will dominate 13600K with 6+8 in MT? You think 7800X with 8 cores will dominate 13700K with 8+8 in MT? RpL will almost certainly beat Zen4 consumer in MT across the whole product line, with the only uncertainty being 7950X vs 13900K.
And in addition, the uArch changes from Skylake -> Alder Lake are greater than the uArch changes from Zen 3 -> Zen 4. You can call heterogeneous compute a "band-aid", but I'm more convinced that disaggregated, heterogeneous design is the future of CPUs.
Silver5urfer - Tuesday, May 24, 2022 - link
I do not have any AMD processor, lol. I hate Zen 3 because of its stupid IOD issues and the awful firmware, plus the IMC being worse than the one RKL incorporated later on. Intel's 8 P cores is the MAX. You cannot say that's false by adding the junk E-core nonsense to the equation. Go look at Ian's coverage, and also Intel's engineers' points (on InfoWorld, I guess): they cannot add more than 8 to the mainstream socket, it will blow the roof off the TDP. Period. E cores were added because there was no other way for Intel, which is why SPR Xeon gets more P cores and LGA1700 does not. The E cores are just the garbage Intel used to add MT and Intel Thread Director to compete vs AMD with their aging SKL-derived designs.
10+0 doesn't exist, so you cannot talk about it, but it would shred the 8+8 garbage; ADL's P core is a high-performance design, and no amount of garbage E cores can make up for that. Across the whole product line? That's too far-fetched, mate. AMD's Zen 4 will bludgeon Intel; there's no way Intel's big.LITTLE junk can keep up with high-frequency, big, fat x86 cores.
Heterogeneous garbage was copied by Intel for BGA; that's where their money is for mainstream, because of the TDP and power budget of their aging Core uArch.
Xeon and EPYC E-core-only parts are made to compete vs upcoming ARM parts, which have high thread density without HT/SMT. There's not a single server CPU which mixes core types, which is why ARM server SKUs also have all uniform cores.
Notmyusualid - Wednesday, May 25, 2022 - link
"This reads very emotionally invested in AMD..."
And I cannot see why 'Intel is 8P max', with no choice from Intel on the matter. I thought the other poster explained it to you rather well, so I guess a fanboi, with or without an AMD purchase, hath been exposed...
Qasar - Wednesday, May 25, 2022 - link
" And I cannot see why 'Intel is 8P max "thats simple, any more then 8 p cores, and the power and thermals would be through the roof. thats why for the desktop, intel is maxed out at 8 performance cores, and makes up for the " thread parity, with the Efficiency cores
Kvaern1 - Thursday, May 26, 2022 - link
All that's needed to keep thermals in check is to lower the all-core boost, and seeing how 99.9% of all games can't even put 6 cores to good use, the 8P max claim is remarkably silly.
Zoolook - Thursday, May 26, 2022 - link
If it were possible, why wouldn't Intel launch a 10+8 instead of going all the way to 8+16? There should be room in between, and it's not like Intel to go easy on the diversification.
Qasar - Thursday, May 26, 2022 - link
" All that's needed to keep thermals in check is to lower the all core boost and seeing how 99.9% of all games can't even put 6 cores to good use the 8P max claim is remarkably silly. "you are forgetting one thing, intel pretty much NEEDS the high clocks in order to compete with AMD.
Kvaern1 - Sunday, May 29, 2022 - link
I'm not forgetting anything. You, OTOH, seem to think you live in an age where cores can't be individually clock-gated.
Qasar - Sunday, May 29, 2022 - link
" You OTOH seem to think you live an age where cores can't be individually clockgated. "still doesnt change the fact that intel needs its higher clocks in order to compete with amd. intel lowers its clocks, it looses performance.
mode_13h - Tuesday, May 24, 2022 - link
> Intel's Core is the oldest uArchitecture around CPU designs
> it is the same thing since the first Nehalem came
A few years ago, I went back and read the coverage on this site about Nehalem & Sandy Bridge. From the sound of it, Nehalem was a smaller change vs. Core 2 than Sandy Bridge was vs. Nehalem.
lemurbutton - Monday, May 23, 2022 - link
Now I'm laughing at all the AMD fanboys who thought 5nm would make Zen 4 competitive with the M1 in terms of perf/watt. Not even close. AM5 goes up to 170W and has to boost to 5.5GHz just to get a 15% increase in ST.
Technetium. - Monday, May 23, 2022 - link
I’m laughing at Intel even more: bloated core architecture, 7nm++++, E-cores that aren’t that efficient (only space efficient), and a single P-core that uses more power than an entire M1 Pro CPU. That means they are even further away from Apple in terms of performance per watt.
Just Benching - Monday, May 23, 2022 - link
I beg your pardon? My 12900K gets a 15k+ CBR23 score @ 35 watts. That handily beats the M1 that was benched on this very website in terms of performance per watt.
https://cdn-bb-eu1.insomnia.gr/file/insomnia-s3/mo...
Khanan - Monday, May 23, 2022 - link
Too bad that non-stock configs aren't relevant for the broad mass of people, and hence not really relevant at all. No, Intel is nowhere near AMD or Apple in terms of efficiency; on top of that, AMD can be optimized as well. Apple probably can't, but it doesn't need to be.
Just Benching - Tuesday, May 24, 2022 - link
And a 12900K isn't relevant for the broad mass of people either. Those that are gonna buy it are gonna tweak it, at least the biggest percentage.
Khanan - Tuesday, May 24, 2022 - link
I don't think so. A lot of people buy it, not just enthusiasts, and most people don't OC or touch settings at all. A lot of them are also delivered via prebuilt systems, and those users are even less likely to touch settings.
alufan - Monday, May 23, 2022 - link
But you misunderstand: the M1 is as relevant to the PC world as a raindrop in the ocean. The biggest advantage the M1 has is its software; it's designed from the ground up as a walled system for Apple, along with all the software limitations that come along with that requirement. If Macs are so great, why are they not the dominant force in computing?
Blastdoor - Monday, May 23, 2022 - link
The number one reason Macs don't dominate is lack of AAA games. Apple can fix that if they want to.
Fulljack - Monday, May 23, 2022 - link
Starting by officially supporting Vulkan on its hardware and software stack.
Blastdoor - Tuesday, May 24, 2022 - link
No -- that's not at all necessary, and probably counterproductive. By standardizing hardware (Apple silicon) and software (Metal, Swift) across iPhone, iPad, and Mac, Apple has the technological foundation for games, as well as economies of scale. What they need is to jumpstart development. They can do that by buying a studio, building one internally, and/or subsidizing the porting of lots of AAA games. The main issue is just laying down the cash to, one way or another, get games developed for and/or ported to all of Apple's platforms.
Apple has tried to take an 'if we build it, they will come' approach, and that just hasn't worked. It also didn't work for video content, which is why we now have Apple TV+. Apple needs to throw a similar amount of $$ at video games as they've thrown at video content.
Oxford Guy - Tuesday, May 24, 2022 - link
A high-performance handheld 'console' version would be an essential part of that strategy. Buying studios is only part of it.
Kangal - Monday, May 30, 2022 - link
Apple does NOT want to release a console, and it doesn't want PC gaming. The biggest GAMING COMPANY in the world is actually Apple. That's a fact. Their gaming division makes more money than Nintendo, Sony, Xbox, and Epic combined, and it dwarfs Valve and Google. Their gaming division is very successful, and they achieve this by taking a 30% cut from every in-app purchase on the iPhone (the iPod touch, iPad, and iMac are nice too).
If Apple wanted to, they could definitely dominate the gaming industry directly. But they choose to approach it indirectly. Even though the iPhone only accounts for 20% of phones on the market, they command around 80% of the profits. There is every incentive for them to keep the status quo.
Remember, they have: Apple Silicon, iOS, Metal, Swift, and a closed ecosystem.
They would absolutely destroy "consoles" the likes of the PS Vita, Xbox Series X, and Google Stadia (pocketable, home, cloud). But that would be foolish. I am not a fan of them, but even I can admit to this. If they wanted to, they could easily pay to license iconic IP or outright buy a game development studio (anywhere from Pokemon, GTA, NFS, FIFA, Battlefield, Elder Scrolls, Fallout, Final Fantasy, Metal Gear, Witcher, etc.). They have the cash.
evilpaul666 - Monday, May 23, 2022 - link
It doesn't matter if the M1 is more or less efficient or has better or worse performance or anything else. It's like comparing the motor in a sports car with the motor in a speed boat. I mean, I guess it's good for Mac users the M1 isn't a turd as it's their only option.
Technetium. - Monday, May 23, 2022 - link
Well, it does matter. You see, it shows Intel and AMD that bloating up the architecture will not be that effective. It also has a more significant impact on mobile. People are being drawn away from x86 laptops, which have bad battery life, and are being led to Apple M1 laptops, which have far better battery life. This is why Intel or AMD have never tried x86 on phones. Apple is simply scaling up a super-efficient architecture, and it works. Despite the fact that it's within Apple's own walled garden, it is impacting things outside too.
iphonebestgamephone - Monday, May 23, 2022 - link
Intel has tried x86 on phones. Lenovo, Asus, Acer, and Xolo all had them.
Which failed. Intel heavily subsidized the Atom for them, and as Intel was bleeding money, a lot, they then gave up on the mobile market. They couldn't compete with ARM on perf/watt and price.
iphonebestgamephone - Thursday, May 26, 2022 - link
Ok?
Zoolook - Thursday, May 26, 2022 - link
It's more like Intel's inferior production lines couldn't compete with TSMC on a perf/watt basis, and half-assing a medium-size core down in size didn't make it better. It was a lame effort; they didn't really put their best into it.
Zoolook - Thursday, May 26, 2022 - link
Besides, much of the perf/watt advantage of SoCs is all the specialized functions outside of the core. To reach the efficiency of mobile SoCs you need to add all the fixed-function encode/decode, image DSPs, etc.; without those, the mobile SoCs would lose half their efficiency.
You will learn just how much Apple stinks...;) Hang in there a bit longer... I'm too polite to laugh at you...;)
AshlayW - Monday, May 23, 2022 - link
Show me where you tested M1 and Zen4 at their efficiency sweet spots again?
RSAUser - Monday, May 23, 2022 - link
Not sure as to the point of your comment? ARM will be more efficient than x86 at lower power envelopes (below roughly 30W), since there's a minimum cost for x86's architectural overhead. Over that power envelope, AMD is quite competitive with these scores, and so is Intel with their energy-efficient cores (I don't know if they're ahead, since AMD hasn't actually released these yet; these are engineering samples).
Pretty much everything I do on a PC requires x86, since I code, and most games I play are x86, so the M1 is pretty useless to me. Anything I would do in that power envelope could technically be done on my phone or NUC at its tiny power envelope, where most of it is handled by hardware AV1 decode or is just powering the screen for a book. The M1 isn't that interesting for most of the market, its marketing is a bit overblown, and repairability is near non-existent.
demian_thorne - Friday, May 27, 2022 - link
That! A well-thought-out and impartial comment. Let me add some more empirical data for those who are fanboys (on either side). If you use your MacBook Pro in an average way (a little bit of productivity/browsing/remote terminal/etc.), the battery life is phenomenal. Once you do a Zoom or Teams meeting, battery life drops back to "Intel" levels.
I second
1. ARM is super efficient at lower power envelopes
2. A walled garden allows for optimizations that are not otherwise possible when you have to address so many more use cases
3. Engineers on both ISA sides are equally capable but they are focusing on different markets
See it in the server space: ARM derivatives got some 10-15% of the hyperscalers' share because they perfectly address the low-usage cases. And then the growth "stalled".
Again, I am not predicting the future; things do change (from both sides), and it is a constantly evolving field, but the low-hanging fruit has been harvested already.
Khanan - Monday, May 23, 2022 - link
Too bad that Zen 4 competes across all markets, including server, and Apple doesn't; meaning Apple already lost the biggest fight. Your comment is childish and ill-informed.
Silver5urfer - Tuesday, May 24, 2022 - link
Another post on Apple's BGA garbage vs x86 desktop parts. Why is the M series in the picture? That piece of junk got slaughtered by Alder Lake, didn't it? Also, Apple's garbage claim about the 3090, rofl. Wait for the RTX 4090 and RDNA3 with MCM; it's going to be shredded, yeah, the M2 too. As for the TDP boost: yeah, higher performance and more work done in less time, with new CPUs drinking more power, is a bad thing, right? Just use the M series' low-power trash processors, which cannot run any OS other than Apple's garbage, with no Vulkan, on top of all-BGA hardware.
Zen 4 vs M2 will be a bloodbath, because these are not some big.LITTLE junk: full-fat x86 cores with a brand-new DDR5 IMC, and from here on DDR5 will only improve. Yeah, it's not as big as the M series' unified memory, but that thing costs north of $3000, while you can max out an x86 PC with the same cash. This thing will also beat Intel's MT performance because it packs proper cores, not those small toy cores for laptops.
Zoolook - Thursday, May 26, 2022 - link
170W is the max of the socket design; Zen 4 desktop will top out at a 145W TDP.
Kushan - Monday, May 23, 2022 - link
The article claims DDR5 quad-channel support, but I don't think that's correct? I'm sure the keynote said dual-channel?
TheThiefMaster - Monday, May 23, 2022 - link
DDR5 has two channels per DIMM, so it's probably dual sticks = four channels.
CrystalCowboy - Monday, May 23, 2022 - link
Thanks, I was going to ask about this too.
Slash3 - Tuesday, May 24, 2022 - link
The subchannels on a DDR5 DIMM are half width (32 bit for non-ECC), so it's still a bit disingenuous to call it quad channel, given the traditional usage of the term. It's still one pair of memory traces being wired to the IMC and DIMM slots.Oxford Guy - Tuesday, May 24, 2022 - link
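To put rough numbers on that (my own arithmetic, using the DDR5-6000 rate from AMD's demo, not anything stated in the article): the two 32-bit subchannels on a DIMM together move exactly what one traditional 64-bit channel would, so counting subchannels as "quad channel" adds no bandwidth.

$$2 \times \frac{32\,\text{bit} \times 6000\,\text{MT/s}}{8\,\text{bit/byte}} = 2 \times 24\,\text{GB/s} = 48\,\text{GB/s} = \frac{64\,\text{bit} \times 6000\,\text{MT/s}}{8\,\text{bit/byte}}$$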
AMD would never be misleading.
Zoolook - Thursday, May 26, 2022 - link
Yeah true. That's why Lisa said dual channel.
aritex - Monday, May 23, 2022 - link
Something is rotten in the state of Denmark. 10% more clock, 100% more L2 cache, and ~30% more memory bandwidth would result in only 15% more performance? Has AMD not made any improvement to the Zen 3 core in two years? I think they just didn't want to show everything that is coming.
ZoZo - Monday, May 23, 2022 - link
It says ">15%". That could be anywhere between 15.00..01% and ∞.aritex - Monday, May 23, 2022 - link
That is absolutely right. But in the past, AMD has always made correct and precise statements: Zen 2 -> Zen 3, +19% IPC, or 5800X3D, +15% in gaming.
EasyListening - Wednesday, May 25, 2022 - link
They managed to shrink it and reduce the power budget while adding PCIe 5 and RDNA2. This may not wow people (it's a node shrink, not a new architecture; get over it), but what they are doing is safe and careful when moving over to the new process, so they don't mess things up.
WTF are you people talking about? Ryzen 7000 is going to be the most popular Zen EVER.
jospoortvliet - Saturday, May 28, 2022 - link
> They managed to shrink it and reduce the power budget
Ehm, did you read the same article as the rest of us? A 25-60% higher TDP is quite a bit higher power budget, I would say…
Sunrise089 - Monday, May 23, 2022 - link
I completely agree. I'm an AMD fan (or certainly at least not an Intel fanboy), but this seems very much Zen 3+, and very akin to the Zen+ launch with the 2000 series. I suspect there are basically zero core improvements here, just maybe some changes oriented around the higher power envelope. Whether that means they're poised for a big jump in Zen 5, or whether it means their roadmap has stagnated due to leadership changes, I don't know. The fact that their release cadence is slowing doesn't make me especially optimistic either.
Someone below says they're sandbagging and not announcing a 32-core chip this early, and that at least seems a credible reason for optimism for heavily threaded loads.
Khanan - Monday, May 23, 2022 - link
They invested a lot of resources into making DDR5, PCIe 5.0, and other IO happen, is my take. It's a platform-changing architecture and hence a lower IPC gain; I bet Zen 5 will bring more IPC and could be huge. But 15% is a lot; IPC or not doesn't matter if you get 15% more single-threaded performance and over 30% more multi-core. Look at performance, not IPC.
> They invested a lot of resources into making ddr5 and PCI-E 5.0 and other IO happen
That's on a separate die. The engineers working on memory controllers and PCIe aren't the same ones who do the core microarchitecture.
Zoolook - Thursday, May 26, 2022 - link
Semiconductor engineering wisdom 101 is to never combine a new production process with a new architecture. There have been exceptions in the past, but it can go either way, and there are unprecedentedly many new technologies in this release.
TLDR: It would be stupid to add a new architecture into the mix while introducing everything else.
mode_13h - Thursday, May 26, 2022 - link
> It would be stupid to add a new architecture into the mix while introducing everything else.
Zen 2 was a pretty major change & new process node (plus, added PCIe 4.0 and chiplets)! Intel's new Golden Cove & Gracemont were joined on new "Intel 7", for Alder Lake.
Like it or not, AMD can't afford to change only process node or micro-architecture, along the lines of Intel's old tick-tock model. At least, not when they've been sitting on Zen 3 for 2 years.
EasyListening - Friday, May 27, 2022 - link
COVID blues
mode_13h - Tuesday, May 24, 2022 - link
> But 15% is a lot, IPC or not doesn’t matter if you get 15%
I agree that it doesn't *really* matter where it comes from, for the end user. I'm most interested in how it compares on perf/W.
However, for judging AMD's execution, microarchitecture efficiency and sophistication are indeed relevant. They suggest how competitive it'll be in perf/W, and that will be especially important for their competitiveness in laptops and servers.
EasyListening - Wednesday, May 25, 2022 - link
Xilinx was a much better target for AMD than ARM was for Nvidia. NV corporate culture is uncouth at best. ARM + NV would never have worked.
The future of mobile is AMD. I predict that future Ryzens will have both x86 CISC cores and ARM/Xilinx RISC cores, kind of like big.LITTLE but way more advanced. Intel has been playing the same game for years. Sooo boring to talk about them. 128-bit RISC cores sound interesting to anyone?
mode_13h - Thursday, May 26, 2022 - link
> NV corporate culture is uncouth at best.
Interesting statement. I'm genuinely curious why you think that. I'm neither a current nor former employee.
> future Ryzens will have both x86 CISC cores and ARM/Xilinx RISC cores
I don't see a mixing of x86 + ARM making much sense. Sometimes, people talk about a core with both front-ends, but it's hard to have a core that's efficient at both ISAs, because certain ISA details permeate far deeper than the front end.
Xilinx is a different matter. I can see an embedded FPGA being useful for certain datacenter tasks (mostly some variation on software-defined networking), but I don't see consumer use cases for it.
> 128-bit RISC cores sound interesting to anyone?
No, why? 128-bit integer arithmetic is niche and I think probably addressed in various SIMD extensions, to the extent it's useful (e.g. crypto). And 128-bit addressing seems a bit nuts.
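For what it's worth, you can already do 128-bit integer math on a 64-bit core without any wider registers. A minimal C sketch (using GCC/Clang's non-standard __int128 extension; my own illustration, not anything from the thread):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    // 128-bit product of two 64-bit values, computed with ordinary
    // 64-bit registers (a mul plus carry handling under the hood);
    // no 128-bit general-purpose registers required.
    uint64_t a = 0xDEADBEEFCAFEBABEull;
    uint64_t b = 0x123456789ABCDEF0ull;
    unsigned __int128 p = (unsigned __int128)a * b;

    printf("high: %016llx low: %016llx\n",
           (unsigned long long)(p >> 64),
           (unsigned long long)p);
    return 0;
}
```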
Zoolook - Thursday, May 26, 2022 - link
It might be that there is Xilinx tech in the new IO chip; it will be interesting to see what they have added in there. Even at 6nm it's much bigger than the Zen 4 cores, which is interesting even considering the added RDNA cores.
mode_13h - Sunday, May 29, 2022 - link
> It might be there is Xilinx tech in the new IO-chip
Besides possible security mitigations, I don't really know what for. And there aren't too many security mitigations you can do in the I/O die. Motherboards are going to tie down whatever I/O capabilities the CPU offers, so there's not much benefit in making it reconfigurable.
mdriftmeyer - Thursday, May 26, 2022 - link
Apple's Neural Engine is an FPGA solution borrowed from the Afterburner project. It is used extensively in macOS. The same will happen to Zen 5 and beyond, with mature software in ROCm via HIP processing compute calculations most won't understand happening under the hood, accelerating system-wide performance.
mode_13h - Thursday, May 26, 2022 - link
> Apple’s Neural Engine is an FPGA solution
I bet it'll switch to a hard-wired block, in their next iteration.
EasyListening - Friday, May 27, 2022 - link
> NV can be a-holes, YMMV
I'll say this about NV instead: they are the best at what they do. They can be annoying to work with because they act like they deserve preferential treatment. I follow the gaming scene closely, and I've read reports about how they treat channel partners, competitors, vendors. Sometimes they are quite bad, like locking people into proprietary technology or treating channel partners roughly, sometimes trying to force partners into marketing programs that punish them for working with AMD. They threatened the website Hardware Unboxed because they felt that their reviews weren't positive enough. Other tech sites have noted problematic behavior of NV reps.
> Hybrid cores
There is a tiny ARM core in Ryzen Pro that handles some security functions. In the future this might be expanded to handle certain instructions on a RISC core like AVX-512 or something like that, for efficiency reasons. AVX-512 causes a lot of heat on x86, maybe on ARM it might not be so challenging to integrate.
> big bits
Well, at the very least you can run 2 64-bit ops in parallel, which might be useful when transcoding video or something.
mode_13h - Sunday, May 29, 2022 - link
> I'll say this about NV instead
Okay, I get your point about their external behavior. That's not news to me. What surprised me was the idea that their corporate culture was rotten. I don't see how they could continue to execute on such a high technical level, if that were true. Maybe I'm just naive.
> AVX-512 causes a lot of heat on x86
It causes a lot of heat because they introduced 512-bit wide vector arithmetic @ 14 nm with CPUs that clocked pretty high. The energy footprint would be similar, if you did that with any other ISA.
> well at the very least you can run 2 64-bit ops in parallel
You can already run 8 64-bit ops in parallel, using AVX-512. I really don't see a case for CPUs widening their addressing beyond 64-bit, and that means their general-purpose registers are going to stay 64-bit.
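To illustrate (my own sketch, not from the thread; it assumes an AVX-512F-capable CPU and compiling with -mavx512f): one AVX-512 instruction performs eight 64-bit additions at once.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Eight 64-bit lanes fit in one 512-bit register.
    uint64_t a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
    uint64_t b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
    uint64_t sum[8];

    __m512i va = _mm512_loadu_si512(a);    // unaligned load of 8x uint64
    __m512i vb = _mm512_loadu_si512(b);
    __m512i vs = _mm512_add_epi64(va, vb); // one vpaddq: 8 additions at once
    _mm512_storeu_si512(sum, vs);

    for (int i = 0; i < 8; i++)
        printf("%llu ", (unsigned long long)sum[i]);
    printf("\n");
    return 0;
}
```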
EasyListening - Friday, May 27, 2022 - link
When Intel had the IPC lead, it was relevant. When AMD took the IPC crown, suddenly it wasn't about IPC anymore. Then 12th-gen Core arrived, and Intel started crowing about IPC again, and meanwhile AMD is putting their resources into engineering instead of marketing like Intel. Single-threaded performance on the CPU might seem important, but actually the main use case relevant to single-threaded is gaming, where the CPU is not nearly as important as the GPU anyway.
mode_13h - Tuesday, May 24, 2022 - link
> 10% more clock, 100% more L2 cache, and ~30% more memory bandwidth
> would result in 15% more performance.
Doubling cache typically adds just a couple percent. It's a reliable but expensive way to pad the performance margins.
The additional memory bandwidth is mainly useful for highly-threaded workloads, and that shouldn't be where their 15% figure is coming from. 15% should be the median or average performance increase, across a mix of lightly- & heavily- threaded workloads.
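To make that concrete with purely illustrative numbers (mine, not AMD's or the article's): if the clock bump gives ~10% and the doubled L2 a further ~2-3%, the multiplicative total is already close to the claimed figure, leaving only a couple of percent that has to come from genuine core IPC work.

$$1.10_{\text{clock}} \times 1.025_{\text{L2}} \approx 1.13 \quad\Rightarrow\quad \frac{1.15}{1.13} \approx 1.02 \;\text{(residual core IPC)}$$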
James5mith - Monday, May 23, 2022 - link
So was the I/O block 14nm, like the article says, or 12nm, like the table says 2 lines below the sentence in the article?CrystalCowboy - Monday, May 23, 2022 - link
In which generation? The first chiplet architecture came with a 14 nm IO die, then they upgraded to 12 nm later. https://en.wikipedia.org/wiki/Zen_2Khanan - Monday, May 23, 2022 - link
The IOD is in fact 6nm this time around.mode_13h - Tuesday, May 24, 2022 - link
The question seems to be about Zen2 and Zen3. The article repeatedly calls it 14 nm, but the chart says 12 nm. It's hard to tell if this is a true inconsistency, or where the error lies.TeXWiller - Monday, May 23, 2022 - link
The next iteration of DDR5 platforms should contain more support for the 5600 JEDEC speed rate. At least that should be the JEDEC speed targeted to be supported by both AMD and Intel with their upcoming chips, in my opinion. If all goes well, maybe we'll get a tested support for the more interesting capacities and even for the 6400 rate.LuckyWhale - Monday, May 23, 2022 - link
More like a Zen3++ than a Zen4.
Makaveli - Tuesday, May 24, 2022 - link
You have decided this from a couple of marketing slides? lol, how about waiting until the product is out.
mode_13h - Tuesday, May 24, 2022 - link
Well, Zen+ didn't add any new instructions and Zen4 does. So, that's one difference.
WaltC - Monday, May 23, 2022 - link
"In particular, X670 does not require PCIe 5.0 support for the PCIe x16 slots – while many boards will offer it, an X670 board would also be allowed to implement PCIe 4.0 instead. Do note, however, the PCIe 5.0 is still required for at least one M.2 slot for NVMe SSDs."I'm not sure where you are getting your PCIe 5 "required" info from, but I think you have misinterpreted what you've read. The way it works now on x570 is PCie4 or PCIe3 devices are supported universally, including .M2. What happens is that the slots autoconfigure themselves to run at the PCIe level of the device. My first .M2 was a PCIe 3 960 Evo, running in the first slot, my second .M2 was a 980 Pro, a PCIe4 .M2, which runs in the first .M2 slot now, alongside my PCIe3 .M2. 980 runs in PCIe4, my 960 Evo runs at PCIe3 in the second slot, but I could just as well be running two PCie4 .M2s, as well.
I think you misinterpreted the slide here...the top chipset will support PCIe5 universally, exactly like the x570 supports PCIe4. But that doesn't mean that PCIe5 devices are in any way required, imo. The second motherboard supports up to PCIe5 for the GPU and storage, but I gather the rest of the bus is up to PCIe4, only. The second chipset does PCIe5 like Intel does it--supporting PCIe 5 partially, whereas the top chipset will support PCIe5 universally, just like the x570 supports up to PCIe 4, across all buses.
I mean, there are no "PCIe5" GPUs for sale atm, so I kind of doubt that the user will have to install PCIe5 devices or the motherboards won't function...That would be a huge step backwards for the current x570 standards. That's what that slide says to me.
Kushan - Monday, May 23, 2022 - link
By "required" it means that the motherboard will have to offer at least one Gen-5 m.2 slot. The requirement is on the slot, not the drive.Ryan Smith - Monday, May 23, 2022 - link
Bingo. When it comes to PCIe slots, motherboard vendors are only required to make their X670E boards PCIe 5.0 compliant. For X670 it's optional (so they can just do PCIe 4.0 instead), and B650 will be PCIe 4.0 out of the gate.
Lion's Share - Monday, May 23, 2022 - link
I have a suspicion. The gains in performance/power from TSMC N7 to N5 were not as much as they were hyped up to be (for AMD). Going by this, Zen 4 will not even match Raptor Lake in performance, which is fabricated on that much-ridiculed Intel 7, or 10nm ESF, whatever you call it.
Yojimbo - Monday, May 23, 2022 - link
Intel 7 is good. It was just very late and seems to be expensive. This is just another data point that it's good. Intel messed up one node and tried to mend it on the cheap while executing stock buybacks, and everyone decided they had forgotten how to engineer transistors, while suddenly TSMC were, by default I guess, the gods of transistors. It wasn't all that long ago that TSMC messed up at 20 nm (2014). Leading-edge semiconductor manufacturing is hard, and even the best mess up. Others, such as GlobalFoundries, dropped out of the game completely.
Zoolook - Thursday, May 26, 2022 - link
By all accounts TSMC messed up their 10 nm; it's just that 7nm became such a huge success, with high initial yields, that no one looked back, and pairing that with Intel's floundering just made it even more of a hype.
Makaveli - Monday, May 23, 2022 - link
What review?
Khanan - Monday, May 23, 2022 - link
Your take is a bit dumb. The power savings were invested into higher clocks, of course; do you think 5.5GHz is free? Intel needed 250W to make it happen. And we will see how fast Raptor Lake will be; you're in for a surprise if you think Raptor Lake is great. It's more or less just Skylake with 8 cores instead of 6, not that great, a typical second-platform architecture.
Yojimbo - Monday, May 23, 2022 - link
Not sure what you are trying to say. Are you under the impression that Alder Lake needs 5.5GHz to beat Zen 3 in single threading? Skylake? You're a delusional fanatic. So if Intel's architecture is Skylake and their process is shit, then why is Golden Cove faster than Zen 3? Intel doesn't need 250 W to be faster in ST. Alder Lake is faster in games at lower power usage. Missed that somehow? Or just ignored it?
Khanan - Tuesday, May 24, 2022 - link
It’s really funny if someone who extremely toxically sounds like a fanboy calls me “fanatic” and I’m not gonna debate things with such a weird person. Bye
Yojimbo - Tuesday, May 24, 2022 - link
"It’s really funny if someone who extremely toxically sounds like a fanboy calls me “fanatic”"That's what's known as word salad.
"and I’m not gonna debate things with such a weird person."
And that's called not having a rational response to my points. If you truly do get so upset by being called a delusional fanatic, I suggest you don't start your first response by telling the other person his "take is a bit dumb". Battleship mouth and rowboat rump?
mode_13h - Tuesday, May 24, 2022 - link
> Skylake? You're a delusional fanatic.
Yeah, likening Golden Cove to "Skylake" is enough to render that post not worth responding to. The point about 8 cores instead of 6 is even weirder. It's like they read something about Coffee Lake-R and are confusing it with Raptor Lake.
Silver5urfer - Wednesday, May 25, 2022 - link
Only the E cores are Skylake-class, not the P cores; the latter are very powerful cores and do not exhibit SKL's flaws with cache errors, because they maxed out at 8 and the ring is locked to a low count even if they run a dual ring due to the E cores. Raptor Lake will have a lot of gaming potency due to the higher L2 cache added to the ring bus, and the E cores added on top will allow them to slightly approach Zen 4 territory, but not beat it. Also, they will get DLVR, which will optimize the voltage on Raptor Lake so it doesn't guzzle a lot and end up in flames like Rocket Lake. Still, it's not worth it, because of the big.LITTLE BS. It all depends on how fast the Zen 4 7000 series is, plus whether Z790 motherboards fix the ILM this time, as Z690's is a disaster and must be avoided since it bends the CPUs and even the mobos.
Bik - Monday, May 23, 2022 - link
If you're wondering what AMD used that 80% increase in logic density on TSMC 5nm for, here's a plausible answer. We know AMD is able to push the next-gen server part (Genoa) to 96 and 128 cores; they may very well also double the core count on desktop. I believe today's announcement is AMD teasing Intel and us. They never said the chip they're showing is the highest-end part; it may be the successor to the 5800X. The real surprise will come later in the form of a 32-core monster for mainstream.
Wereweeb - Monday, May 23, 2022 - link
They won't do 32 cores on desktop, at least not on Zen 4. It might come later though.
romrunning - Monday, May 23, 2022 - link
Do you count future Threadripper CPUs as "desktop"? I can easily see 32 cores in a Zen 4 Threadripper.
nandnandnand - Monday, May 23, 2022 - link
We do not count that!
mode_13h - Tuesday, May 24, 2022 - link
Yeah, especially when you look at ThreadRipper pricing. It's very much outside of the desktop market.
EasyListening - Friday, May 27, 2022 - link
yea but it's still desktop, just expensive desktop
mode_13h - Sunday, May 29, 2022 - link
> yea but it's still desktop
That really stretches the definition of a desktop, then. If a server board is placed in a tower (sometimes called a "pedestal") case, does that *also* make it a desktop?
I think there's a pretty clear delineation of the workstation market, if you look at price and capabilities. In the past, I'd have added power to that list, but that particular line is now getting very blurry.
Yojimbo - Monday, May 23, 2022 - link
Who wants a 32-core desktop part? Only a minority of users with workloads for which it's useful. AMD knows they need to keep ST performance up to compete in the market, but their core is clock-for-clock slower than Intel's. So they give up a little power efficiency to push up clock rates, and expand the cache of the processor so it performs better in certain workloads, including games. The cache density improvement from TSMC's 7nm to 5nm is about 1.25 times, not 1.8x. The 5 nm process is likely more expensive than the 7 nm process at this point, so perhaps the Zen 4 desktop parts will have a smaller area of compute dies, allowing AMD to compete on price and still make profits now that they no longer have a performance advantage over Intel.
32 cores is what I need :D It may sound like overkill, but thinking about it, when AMD introduced 8-core and 16-core parts, the majority of people still depended on 4-core Intel chips. But times change; Raptor Lake may have 8+16 small cores. But your argument is sound; the economics, however, may play a significant part in preventing that from happening.
Kvaern1 - Thursday, May 26, 2022 - link
Out of curiosity, what do you 'need' 32 cores for? I can't think of any consumer software which needs 32, or even 8+, cores of current MT processing power.
Sure, you can put more cores to good use when transcoding, but if it's imperative to you that such a task takes 10 mins instead of 20, you're in professional use-case territory.
EasyListening - Friday, May 27, 2022 - link
well the extra cores are there if you need them in case you want to try your hand at making Youtube vids or something and want good performance editing video. Also, it doesn't make any sense to criticize AMD for not optimizing Ryzen on 5nm when it's the first time they've used it. After they get Zen4 to market, they will introduce Zen4+ and there you will see the big IPC gains.Kvaern1 - Sunday, May 29, 2022 - link
Need != Nice to have.
Zoolook - Thursday, May 26, 2022 - link
I beg to differ: the 5800X3D is faster than Intel at a lower clock speed, and certainly faster clock vs clock.
Khanan - Monday, May 23, 2022 - link
Of course it's not the highest end; it doesn't have 3D V-Cache. But 16 cores will be high end; the single dies top out at 8 cores, and there is no room for more than 2 chiplets. Raptor Lake could easily be countered by another "5800X3D", named 7800X3D or higher; it depends on whether it's needed.
Bik - Tuesday, May 24, 2022 - link
I remember when they showed off Zen 2, it had room for one more chiplet. Not this time around, but AMD may not necessarily have only one design to begin with.
Khanan - Tuesday, May 24, 2022 - link
The whole package just seems too small for a 3rd chiplet, aside from the news I've read that it's 2 chiplets max. I don't think AMD cares for more cores on the normal desktop platform; anyone who needs more can buy Threadripper.
EasyListening - Friday, May 27, 2022 - link
On an important die shrink, you probably want to make sure that your current architecture at the very least isn't slower on 5nm vs 7nm. I mean, how embarrassing would that be. Are we forgetting that we can just get more cores on the same size die as before with die shrinks? If the chiplet gets expanded to 12 or 16 CPU cores and however many RDNA3 cores, then AMD will be able to offer the same number of cores as Ryzen 9 but with only 1 chiplet instead of two. Why not do this?
If people want 24 or 32 cores they can buy the 2 CPU chiplet Ryzen 6900X or 6950X or whatever they are going to name them. If you don't need that many cores, buy the 6600/6700/r6800 with just one CPU chiplet with 12 or 16 cores each.
You may think you will never need that many cores, but high-res VR is coming. I'm talking something like a Quest 2 but with 8K for each eye, power-efficient enough that it doesn't scorch your scalp and sips power like an old lady. If you've ever played VR games before, you know what I'm talking about. Remember when AMD bought a small company doing high-bandwidth, short-range wireless networking? That was part of a tech portfolio that will become an AMD-designed VR headset made custom for Sony or Microsoft.
TouchdownTom - Monday, May 23, 2022 - link
I'm hoping that they are being very conservative with their performance claims here--a 5-10% IPC increase 2 years after Zen 3 was released (most of the uplift coming from the 5nm switch and associated clock speeds) would be disappointing. I was optimistic that they could get a 15-20% IPC-level improvement, but that seems unlikely at this point. The real bummer here is that it will result in very small increases in laptop performance, since those TDPs will not have the wiggle room to grow like desktop does.
Khanan - Monday, May 23, 2022 - link
It didn't happen because they invested time into the new platform and new IOD design, is my take. Users should look at total performance gain and not IPC, though; it's 15% ST and over 30% MT, nothing to scoff at.
EasyListening - Friday, May 27, 2022 - link
Are we so spoiled that a 5-10% IPC increase is disappointing? And why are people ignoring all the stuff they are adding to the CPU chiplets to focus solely on IPC? IPC is only important when Intel has the IPC crown. When AMD has faster IPC, Intel says it's not about IPC, it's about something else, preferably something in which AMD is inferior to Intel.
RSAUser - Monday, May 23, 2022 - link
"What this means for the future of AMD's monolithic desktop APUs is uncertain, but at a minimum, it means that all (or virtually all) of AMD's CPUs will be suitable for use in systems without discrete graphics, which although not a huge deal for consumer systems, is very much a big deal for corporate/commercial systems."Still worth a lot in consumer systems, helps resale quite a bit, also means later on if e.g. use same system as a NAS or something I can run the system without a dedicated graphics card, can also help diagnose if issue with graphics card.
msroadkill612 - Monday, May 23, 2022 - link
Egocentricity is only human. We consumers forget that ours is a side market, like grease is to an oil refiner.
The main game is the premium tier datacenter/workstation/...
Our market is nice, but it is a way of unloading "harvested", lower-binned product.
This is fundamental to the whole AMD architecture & incredible cost structure: the same lowly, mass-produced core unit is teamed in progressively greater multiples from bottom to top.
Whatever Zen 4 may be, it is that because it is what the big guys wanted.
They continue to have the odd problem of seeming overpowered: 6 cores is about as low as they go these days, & many users could make do with less ATM.
Silver5urfer - Wednesday, May 25, 2022 - link
You are only partially correct. Previously, before the Ryzen era, Intel used lower bins from the Xeon-cut HCC dies, and then, once AMD started their HEDT-level core generations, Intel had to improve a lot by adding STIM (like Sandy Bridge had) and also change their entire Xeon lineup to mesh, and they needed even more highly binned cores for mainstream. That was true up to Comet Lake; you see, those bins on CML needed to hit 5.2GHz on all cores, so a high bin ratio was needed. Granted, it's nothing vs actual Xeon money, but still, that's what it is.
With AMD, your point is even stronger, because AMD heavily relies on EPYC binning, and their chiplets cannot simply prioritize mainstream over that. That held up to the X3D processor, the 5800X3D, too; it was an experiment they didn't want to commit to fully by refreshing the whole stack, as it would eat into their Threadripper Pro (dead now, I think, as sTRX-series mobos are basically abandoned; X299 was even better, lol, had good PCIe and had something vs this orphaned junk) and Milan EPYC cut. So they just dumped a simple gaming-tagged product for the masses, with restricted voltage controls and all; it's castrated silicon, but for the masses it might work.
But if you look at this product, Zen 4, they are hitting 5.5GHz on many cores of that beast 7950X (assuming that's the SKU, since Lisa Su said it's a 16C part, and we do not know whether a 24C part even exists for mainstream; I doubt it). So that clock rate is exactly like Intel's Comet Lake era: you need ultimate binning to get there, and normal crap won't reach that high a speed unless TSMC 5N has a super-high yield rate, which I doubt.
Also, it's a great time: we have had a solid PC ecosystem for 30 years, and all games and programs from that era, and even before it, from the era of 4th-6th gen consoles, can be preserved. And we can preserve the WW2 content too; this is a boon to mankind. Even a PC from the past decade is superb today and can do what it does because it's a PC, not toy garbage social media junk a la smartphone.
So I welcome with open hands that we are getting a superb socketed ecosystem and an upgradeable, modular ATX standard still to date, plus, moreover, a ton of HW options to choose from, rather than Apple-type castrated, crippled, anti-consumer soldered garbage hardware and locked-down software. Oh, you can even buy a decommissioned Xeon and run Proxmox, Unraid, FreeNAS, Arch, Mint, vSphere ESXi, or tons of others, and even Windows 7 SP1 with the Simplix pack for the latest updates without telemetry, or Windows 10 LTSC 2019 or LTSC 21H2, both de-bloated with DISM-based tools.
Oxford Guy - Wednesday, May 25, 2022 - link
‘for latest updates without telemetry’
This is a fantasy, unfortunately.
Only with open-source hardware, software, and networks — all of which are constantly vetted by rigorous trustworthy independent researchers — can there be the possibility of privacy.
What we have is an ‘all telemetry all the time’ situation unless you’re in government, in which case the spying is there but more known/controlled.
Silver5urfer - Friday, May 27, 2022 - link
Honestly, if you want to speak about 100% facts: once you are on the grid, you cannot simply evade it; only super-talented folks can do that, and even then it's impossible, thanks to the CIA and Mossad.
Windows 7 with the Simplix pack and the latest updates is better than Windows 10 Home / Pro / Enterprise, even LTSC, because there's a lot left in 10 which you need to remove manually; for 7, they added telemetry via cumulative updates after Windows 10 debuted. So my point still stands. Also, with enough knowledge of DISM tools, you can remove the bloatware and telemetry from Windows 10 LTSC, both 21H2 and 1809. It's possible, but it may break a few things; e.g., if you completely axe Xbox, it will break the OS so it doesn't work with the Game Pass GaaS cancer or the Xbox services used by the latest games like Flight Sim 2020 and Forza 4 and up.
Especially if you wanna talk Linux, I've heard a lot about systemd too. But yea, I agree that Microsoft's OS is not 100% private.
But let's go a step further: did you see that Pluton processor? It's literally Mossad/CIA-level silicon, same as that NSO Group state-sponsored Mossad tool from the latest mobile spyware news, which even bypassed that so-called "Apple is built with privacy in mind" as well. And finally, AMD's BGA Zen 3+ has that Pluton on die. AMD silicon has the PSP, which is an ARM unit that runs its own OS, like Intel's ME and Apple's A-series Secure Enclave processors.
So yeah, there you go: you cannot escape no matter what, which is where conscious purchasing and knowledge of tech are needed, as I mentioned above.
EasyListening - Friday, May 27, 2022 - link
Threadripper isn't dying. A lot of your points don't make sense. You're talking about binning when dealing with chiplets? These dies are tiny because you get better yields that way. Threadripper is more than a bigger version of Ryzen. Workstations have different needs: multiple Instinct cards, for example. Tons of RAM. ECC would be nice. Multi-display on an iGPU. Lots of NVMe drives means lots of PCIe 5 lanes. Threadripper is not simply a relabeled, low-binned EPYC.
Silver5urfer - Friday, May 27, 2022 - link
You are dead wrong. AMD themselves literally showed how the top 5% bins go to Threadrippers. Here's a URL for that:
reddit dot com/r/Amd/comments/6svkdg/threadripper_is_binned_for_top_5_of_dies/
Binning is a key point here; the 5800X3D is also a reject EPYC. Binning is all there is to silicon manufacturing. You think AMD is running something different and making special chiplets for mainstream Ryzen? lol.
Threadripper is already dead: they axed the socket after a single generation and left it hanging. On top of that, except for Threadripper PRO, the entire TR line is not being made anymore. AMD killed HEDT entirely because there's less money in it vs EPYC, especially once Intel's own HEDT died, X299 being the last and better one, since AMD's RAID is garbage (Level1Techs, go watch his video) and also insanely overpriced.
Silver5urfer - Friday, May 27, 2022 - link
I forgot to add: high-leakage chips go into desktop K SKUs because they can operate at much higher voltage (more clocks) than low-leakage ones, which often go into laptops (but those are garbage silicon anyway, since they cannot really be binned high, as they run at high voltages despite their binning; Intel used to do that up to Mobile Extreme, until Haswell's last i7-4940MX rPGA CPU. They made one more special one, the i7-4980HQ, with L4 eDRAM, specially made for Apple). The latest mobile BGA HQ parts and the like are not that great bins; some could be, but most are the worst. Binning is a core aspect of lithography. On top of that, your point about the dies being tiny is funny. The 5900X literally has a reject die with 4 cores disabled, else it would have been a 5950X; and the 10850K is the same as the 10900K, but Intel didn't label it the same. Why? Because it won't hit more than 5.2GHz no matter what you do. That's what binning does. And of course the 5800X3D operates at 1.3V vs the 5800X's 1.4V; it's lower voltage because it's a higher bin quality. If AMD used the original die, it wouldn't work, as the heat would be too much. It's proof that X3D would cut into Milan profits, especially when there are no Threadrippers with X3D in existence.
evilspoons - Monday, May 23, 2022 - link
What does it mean for the chipset to have Wifi 6E support? Surely you can just plug any Wifi 6E NIC into any PCIe slot the NIC fits in and it will work?
Khanan - Monday, May 23, 2022 - link
It means it has WiFi 6E itself; no extra card needed. Support means it's optional, not a must. "WiFi" boards will have it; more basic boards will not.
DeathArrow - Tuesday, May 24, 2022 - link
So these 15% theoretical gains in single-threaded performance mean it will equal Intel's 12th-gen Core architecture? I hope that the price will be better than Intel's; otherwise, I don't think it will have much success.
nandnandnand - Tuesday, May 24, 2022 - link
If it matches Alder Lake on ST, it beats it on MT, since it's 16 big cores. It should also beat it on efficiency. But Zen 4 is being released around the same time as Raptor Lake, so that's not enough.
rojer_31 - Tuesday, May 24, 2022 - link
https://twitter.com/PaulyAlcorn/status/15287574538...
AMD has confirmed that the 170W figure for AM5 is PPT, not TDP. That means this is peak power for the socket, so, *assuming* that AMD sticks with the standard PPT = 1.35X TDP, the maximum TDP for AM5 should be 125W.
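Spelling out the arithmetic behind that assumption (note that the next reply corrects the underlying premise):

$$\text{TDP} = \frac{\text{PPT}}{1.35} = \frac{170\,\text{W}}{1.35} \approx 126\,\text{W}$$

which rounds to the familiar 125W class.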
jospoortvliet - Saturday, May 28, 2022 - link
… another update: the 170W is indeed TDP; total power is up to 230 watts. https://www.tomshardware.com/news/amd-corrects-soc...
back2future - Tuesday, May 24, 2022 - link
What will be the next bottleneck on 786 (Intel) or 770 (AMD) mainboards: memory bandwidth (latency), storage bandwidth, heat reduction/cooling, power envelopes for peak demand, cost?
(PCIe 5.x will be a Q3/2022 reality for the consumer market, https://www.anandtech.com/comments/17203/pcie-60-s... and hardware seems to outpace software iterations at the moment, including compatibility between x64/arm64/RISC-V systems?)
tygrus - Wednesday, May 25, 2022 - link
Current major bottlenecks for modern x86-based CPUs are interrupts & security features/mitigations.
back2future - Thursday, May 26, 2022 - link
Does that mean there are statistics on external and/or internal (e.g. CPU, discrete/integrated GPU (even TPU/NPU?) related clocks, memory controller?) interrupt latencies?
GeoffreyA - Tuesday, May 24, 2022 - link
It's surprising, because if it isn't a substantial improvement at the architectural level, this is the first time they've iterated the number with scarcely any changes. It leaves an uneasy feeling when marketing outstrips engineering.
I like what I saw: a proper x86 successor for Zen 3, with a good chunk of the most important updates. The IOD is on TSMC 6N, which means the horrendous USB and other I/O-related stability issues should be fixed with this. A new DDR5 IMC, not that old memory; here it's even running at 6000MT/s, a much-welcome change. They boosted the clocks to 5.5GHz even; that's a great achievement for AMD. The thing is, X670E is segmented, which is really unfortunate, as HEDT-level costs will come, I fear. And AMD's OC on that mobo is questionable; I hope their Zen 4 CPUs have a full TDP unlock and go to maximum boost, but I fear TSMC 5N won't let all those damn 16 cores boost to that high a frequency. Intel knows 8P is the max too; higher clocks and high core counts are not possible anymore on these Core and Zen CPU designs.
The 15% ST growth is the hottest topic now among the tech people who follow this. I mean, yeah, sure, the IPC gain is lower, because they are just barely beating Alder Lake on CBR23 ST scores (1800s vs 2000s), but if you think logically, AMD knows Intel is maxed out at 8P with high clocks and more cache for gaming, and E cores are where they try to fight vs AMD. So their approach is simple: make the 7800X boost to high clocks and be the gaming SKU, while the 7950X will be the top MT champion, beating out the 13900K. 8 more E cores ain't gonna cut into Ryzen 7000's MT performance, as you can clearly see in how the 12900K is barely able to beat the 5950X from 2020.
Also, AMD doesn't need to put out a brand-new design again, because they simply do not need to right now. Intel is on LGA1700 for RPL and *maybe* Meteor Lake; AMD can build a new design which drastically improves things, and in the meantime let the DDR5 and PCIe 5.0 ecosystem saturate a bit, and optimize the R&D costs. Same for Intel: they are also riding the same bandwagon, but the shame is the E-core garbage. Well, that's how their Core uArch is: it cannot scale past 8C because it runs too hot, so they improve IPC and try to get all sorts of power improvements on top; DLVR is coming to RPL for better voltage regulation and clock speed balance.
All in all, any good x86 CPU is welcome, even that big.LITTLE junk by Intel, because ARM is pure use-and-throw garbage running on garbage platforms and pathetic OSes like iOS and Android (because of scoped storage and post-Oreo versions mimicking Apple very hard).
Grayswean - Tuesday, May 24, 2022 - link
Looks like AMD is running Intel's old Tick-Tock strategy. Zen3 was tock (architecture change), Zen4 is tick (process/die shrink).
Khanan - Tuesday, May 24, 2022 - link
If it's just a "tick", why is it then so much faster and more efficient than Zen 3? No, Intel doesn't copy Intel's terrible strategy, they don't want to lose.
Khanan - Tuesday, May 24, 2022 - link
*AMD oof
Grayswean - Wednesday, May 25, 2022 - link
The clock and L2 increases could explain most of the performance uplift, suggesting the architectural changes to the cores are minimal. Or maybe AMD is sandbagging Zen4 to keep Intel asleep.
What was so bad about the tick/tock strategy? From a risk mitigation perspective it makes a lot of sense and risk mitigation is a great engineering practice. They only stopped talking about it because they couldn't get a new process to work so the clock stopped.Raven437 - Tuesday, May 24, 2022 - link
More Lanes!!!!
RealBeast - Tuesday, May 24, 2022 - link
Looks like late 2022 will be upgrade time for me. With my pre-existing servers, I'll need the time to upgrade my office power. Fortunately, I tore out my pool and have a spare 100A bus on the wall outside the office; now for an extra 30A circuit through the office walls. 10-gauge wire is such a pain in the ass to work with, though.
mode_13h - Tuesday, May 24, 2022 - link
PCIe 5.0 will already double bandwidth. The chipset could offer plenty of lanes, as it'll have the equivalent of PCIe 3.0 x16 bandwidth.mode_13h - Tuesday, May 24, 2022 - link
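Rough per-lane math behind that equivalence (my own arithmetic, using approximate rates after 128b/130b encoding overhead):

$$4 \times 3.94\,\text{GB/s} \approx 15.8\,\text{GB/s} \approx 16 \times 0.985\,\text{GB/s}$$

i.e., a hypothetical PCIe 5.0 x4 chipset uplink carries about the same bandwidth as a full PCIe 3.0 x16 slot.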
Wow, no mention of AVX-512? Maybe AMD is saving something for later. Or maybe it still has issues and they're using Intel's withdrawal of the feature from consumer CPUs to iterate on their own implementation.
GeoffreyA - Wednesday, May 25, 2022 - link
Holding their cards close to their chest, waiting for Intel's move.
mode_13h - Thursday, May 26, 2022 - link
Given the rumors of Raptor Lake using the same Gracemont cores as Alder Lake, I doubt it'll come back in that generation. So, I think AMD is just holding back on it to have something else to announce.
GeoffreyA - Thursday, May 26, 2022 - link
Yes, instead of using up all their announcements in one go.
Zoolook - Thursday, May 26, 2022 - link
I really hope they aren't wasting chip area on AVX-512; they can do much better things with it.
mode_13h - Tuesday, May 24, 2022 - link
I'm a little surprised not to see the GPU as a separate chiplet. I had a hunch the RX 6500 XT would make an appearance here. For an iGPU, that would actually be pretty impressive.
peevee - Friday, June 3, 2022 - link
Without a GPU, that 6 nm I/O die doesn't make much sense. Well, instead of a GPU they could have put a really large LLC on it, but the consumer value would be very low compared to a built-in GPU, unless they eliminated the L3 caches from the CPU chiplets and either made those smaller/cheaper or added a couple more cores to each. They may well do just that for server parts, but here I doubt it.
Whiteknight2020 - Wednesday, May 25, 2022 - link
Why keep bringing Apple into it? Windows and Linux users aren't switching OS or paying the Cupertino tax. Corporates still rely on AD and Group Policy to manage their fleets, and that isn't there with macOS. Plus, of course, all those who need to run x86-64 VMs.
Notmyusualid - Wednesday, May 25, 2022 - link
+1
Silver5urfer - Wednesday, May 25, 2022 - link
It's just the usual Mac crowd; most of them love Apple, so this is a side effect of the Macboys finally having their own silicon (please do not talk about price, rofl), never mind its inability to run the majority of software, or how "open" it is on all fronts; we even got a notch on a laptop this time. The funny part is that AnandTech's own M1 benchmarks are up, and they show how even lowly BGA processors, and even pathetic low-quality mobile BGA GPUs from Nvidia, eat it alive. Then they point to the M1 Max and Ultra, which are a joke: the cost is too high ($3K+) for such a locked-down hardware and software ecosystem (please do not talk about how Apple claimed RTX 3090-class performance and ended up getting slammed by even The Verge). Oh yeah, let's talk about DaVinci Resolve; why not, since ARM has specialized silicon cores there, so it excels. I'll just wait for AMD's Xilinx FPGA integration into x86 parts; it will beat the specialized-ARM joke.
The PC ecosystem is a boon to computing for many people; sadly it's now glorified mainly for gaming, yet there are still so many who use the PC for far more than anything these Macs can do. And do not forget the AM4 socket, or any socket for that matter, with a great PCIe ecosystem. None of that matters to these people.
GeoffreyA - Wednesday, May 25, 2022 - link
The funny thing is, these Apple proponents had been using Intel x86 for about 15 years. Now that they've got their own blazing CPU, they can turn tail and run down Intel, AMD, and x86. It doesn't smack of good sportsmanship. On a more general note, the hate we sometimes see directed at x86 is due, I would say, to its being "old-fashioned," and ARM being (apparently) the shiny "new" thing.
Oxford Guy - Wednesday, May 25, 2022 - link
What is more important is the UI degradation. For instance, the original Mac operating system from 1984 gave people control over how many times menus flash when an item is selected. Multi-GHz multi-core CPUs, mountains of blazingly fast storage, and OS RAM requirements that would be incomprehensible to a 1984 consumer later, we have a situation where people are forced to endure three flashes in a row for each menu selection.
Apparently, Apple thinks making its operating systems more difficult for epileptics to use is progress. This is the price paid for cutting corners on the quality of the macOS development team.
The quality of the UI has continued to decline since the rather disastrous initial OS X release. The quality of the bundled software is so atrocious in key cases (e.g., the Music program) as to leave one speechless.
Apple products, beginning with the Lisa, were supposed to offer a more efficient experience than Microsoft’s offerings. That was supposed to be Apple’s angle. The company has replaced that goal with margin-chasing, realizing that people, when given two bad UI options, will continue to patronize their option out of the inertia of familiarity.
GeoffreyA - Thursday, May 26, 2022 - link
If you can believe it, I've never used a Mac, so I can't comment from first-hand experience. But having been brought up on Windows, I find Apple's UI often seems nonsensical. At least with iTunes on Windows, I battled to make sense of it. I could be wrong, but I often felt Apple's philosophy was about removing unnecessary UI features (the best design is about removing not adding). As for Windows, it just gets worse and worse; Microsoft seems even more incompetent than Apple. Vista's copying of the Mac's gloss, 8's preposterous tiles, and now the heaven-knows-what-to-call-it of 11.
GeoffreyA - Thursday, May 26, 2022 - link
In any case, the long and short of it is that user control has been and is going out of the window.
Oxford Guy - Thursday, May 26, 2022 - link
‘In any case, the long and short of it is that user control has been and is going out of the window.’
Yes. The experience of being a computing consumer is steadily transforming into the old Soviet Russia joke: computer uses you.
The same is true of televisions, where Vizio is more of an ad company than a TV producer.
Going back to MS vs. Apple: the Lisa was a quantum leap over what Microsoft was offering. The Mac UI had some good points and some bad ones (versus the Lisa) but was vastly better than Windows up to W95, and it was still superior until OS X, which, although it had good points, had serious defects.
OS X has been degraded in recent years to the point where I am actually almost ambivalent about which platform to use.
GeoffreyA - Saturday, May 28, 2022 - link
"OS X has been degraded in recent years"From a superficial glance, I've noticed it has taken on a lot of mobile design motifs, and on the Windows side of the coin, the same thing has been happening, much to the dislike of many of us.
Oxford Guy - Thursday, May 26, 2022 - link
‘the best design is about removing not adding’
That is not true in general; it is only true some of the time. That ‘laziness is efficiency’ design philosophy is a very large part of the problem when it comes to Apple.
Personal computers thrive via increasing, not decreasing, sophistication. Individuality, such as the epileptic users I mentioned, warrants a feature-rich design, one where people have control over the machine. Lazy design substitutes a particular design team’s conception of Joe Average, and/or the team’s arbitrary ideas, for individual agency.
Good design is efficient, with efficiency taking full account of the needs of the individual.
PCs have the resources for an extremely fine-grained level of user control over the OS, without that being a burden for those who are content with defaults. That a machine whose OS fit into a 64K ROM gave users control over features like menu flashing, while a machine with a 2 TB SSD and 32 GB of RAM does not, says one thing only: failure.
GeoffreyA - Saturday, May 28, 2022 - link
Oxford Guy, I agree with your comment about design, but I was careless in my phrasing. I think the best design gives the artefact full functionality, but no more, and as simply as possible: generally, adding functionality and then stripping it away to its essentials. Indeed, this principle carries through the whole length and breadth of life (Occam's razor). While I like extensive functionality myself, there's a hazy line where it becomes feature creep. In games, simpler interfaces, such as those of Half-Life or StarCraft, have appealed to me, whereas today I am repelled by the information plastered all over the screen. And I use Notepad more than any word processor.
Oxford Guy - Sunday, May 29, 2022 - link
There is a big difference between feature richness and poor design that impedes efficiency despite having many features; implementation quality is just as crucial as having enough feature richness. Occam’s heuristic is, like any heuristic, capable of being very misleading. It works better for math than for people, as individuality is important for the latter and not the former. Humans like to give short shrift to individuality, probably due to egocentrism and to overestimating the accuracy of focus groups and similar research.
The simplest design that manages to encompass individuality is much different from simplistic design that only works well for a subset of the population.
GeoffreyA - Friday, June 3, 2022 - link
Well said. At the end of the day, implementation quality carries the design and shows the skill, or want thereof, of those who put it together. To take an example, I would say Firefox is excellent: plain and simple on the surface but powerful beneath the bonnet, and it can take a great deal of customisation.
peevee - Friday, June 3, 2022 - link
Agreed. Just as with laws, designing for the lowest-IQ users is a failure.
flyingpants265 - Tuesday, May 31, 2022 - link
We should have Minority Report UIs by now. By that I mean everything should have hotkeys, and windows should be flying around the screen like lightning. Menus and dialog boxes should be fully featured and organized. Also, you should be able to enable or disable any UI feature you want. Every time you're forced to slow down, awkwardly click around, manually type something you've already typed a million times, or fight with a feature you don't really want, that's a massive waste of time and effort. No thought has gone into improving the UI since it was first released.
scottrichardson - Wednesday, May 25, 2022 - link
Except Windows users are, in fact, switching to Apple. Broadly. Heck, even here in my local town, I personally know dozens of folk who have switched to Mac over the last couple of years. People want simplicity. They don't want to switch storage. And most of them don't know what a GPU is. They want to just get their work done, and do it without worrying about IT, virus protection, and the myriad specs that come with computers. I appreciate this may not be the thinking of the AnandTech reader base, but let's face it: we AnandTech readers are tech enthusiasts. Like cars: most people are not car enthusiasts; they just drive cars because they serve a purpose. Just my $0.02.
Qasar - Wednesday, May 25, 2022 - link
" Except windows users are, in fact, switching to Apple. Broadly. " i know no one that has switch to apple from windows, most cite the reason as ? cost. apple is just too expensive for what you get.scottrichardson - Thursday, May 26, 2022 - link
Could be location-based, but among my friends, colleagues, family, etc., practically everyone I know has switched to Apple 100% in recent years. A few creative/dev folk, but the rest average joes. Some acknowledged the cost, but many didn't mind. Many just purchased MacBook Airs, which really are quite good value given the performance, reliability, Retina screen, etc.
GeoffreyA - Thursday, May 26, 2022 - link
I reckon prestige and status play a role in that.
Qasar - Thursday, May 26, 2022 - link
Maybe, but still, claiming everyone is or isn't switching to Apple based on location is moot. The people I know who do use Apple only use the iPhone, plus one with a Mac for his music work; the rest are on Windows machines.
ingwe - Friday, May 27, 2022 - link
I think the status take is kind of lazy. Apple has passed the point of status and is very mainstream. And you can get an Air for what is pretty cheap as new ultralight laptops go.
ingwe - Friday, May 27, 2022 - link
I don't know how much switching is going on, but I definitely think people have been gravitating more and more toward simplicity, and that this is being reflected in tech design. There is so much complexity in the world that people (those who aren't power users who need and/or enjoy the complexity) don't care if features are locked out, as long as "it just works," etc. Corporations are happy to give this to them, because they can also market it and make more money. Perhaps I am wrong, though.
Oxford Guy - Sunday, May 29, 2022 - link
Feature richness is simplicity when implemented well. Feature poverty increases complexity.
Sivar - Wednesday, May 25, 2022 - link
Thank you for the highly anticipated Ryzen 7000 coverage. While I haven't finished reading the article, I'd like to report a typo in the word "discuss":
"While AMD isn't going into great detail on the Zen 4 architecture today – they have to save something to disucss for later in the year"
Mike Bruzzone - Wednesday, May 25, 2022 - link
What AMD means by +15% Zen 4 Raphael performance is a +15% price increase, regardless of the actual performance delta relative to V5x. TSMC is raising prices into the second half of 2022.
75xx quad = $330
76xx_ hexa = $458
78xx_ octa = $516
79xx_ dodeca = $823
795x_ hexadeca = $919
Intel Alder (Raptor top bin) missing price points are $749 and $999. Here's a second take at AMD Zen 4 pricing, mirroring the forgotten [?] Intel Extreme Edition price ladder:
hexadeca = $999
dodeca = $749
octa = $583
hexa = $389
quad = $323
Mike Bruzzone, Camp Marketing
Bruzzone - Monday, June 6, 2022 - link
I've spent more time estimating what the Zen 4 Raphael price 'is'. Framework to consider in the equation: 1) TSMC and AMD, a +20% combined price increase into the second half of 2022; 2) from the 7 nm to 5 nm shrink, +40% more good die per wafer. Aha, the 20:40 rule. I disqualified TSMC's potential 81% density increase from 7 to 5 nm on manufacturability grounds for a high-frequency part.
What is good silicon per wafer? On a CCX + I/O normalized basis across the product stack, V5x + I/O = 245 mm^2 normalized, and yes, on the napkin math (the R5K I/O die is fabricated on 12 nm+), that goes from 216 good complete components to 302 good complete components per wafer. Recall that "normalized" is a full-product-stack calculation on percent-grade SKU weight, on die count (1 or 2) plus I/O composition.
So what's R7X 1K?
The Ryzen performance-desktop average price to AMD from TSMC, across the whole product stack, is $163. At plus 20% it's now $195-ish, to which AMD adds its traditional 1.5x margin for high-volume OEM procurement; the range is 1.5x to 2x for low-volume procurement.
So what's R7X 1K?
Does +40% good components offset the +20% price increase from TSMC to AMD?
Please chime in; observations appreciated.
mb
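For what it's worth, the core arithmetic in that question can be checked on its own; a minimal sketch with made-up wafer numbers (only the ratios matter, and both percentages are the poster's estimates, not confirmed figures):

    # Does +40% good components per wafer offset a +20% wafer price rise?
    # Absolute numbers below are hypothetical; only the ratios matter.
    wafer_cost_usd = 10_000.0   # hypothetical 7 nm wafer price
    good_components = 300       # hypothetical good components per wafer

    old_unit_cost = wafer_cost_usd / good_components
    new_unit_cost = (wafer_cost_usd * 1.2) / (good_components * 1.4)

    print(f"unit cost factor: {new_unit_cost / old_unit_cost:.3f}")
    # 1.2 / 1.4 = ~0.857, i.e. roughly 14% lower cost per good component,
    # so on these assumptions the +40% more than offsets the +20%.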
Qasar - Saturday, June 11, 2022 - link
Just more personal-opinion BS and garbage from the pretend couch analyst known as mike brazzone, who posts NO sources other than his own BS page, which also has no direct links to sources. I'd like to see where he gets his numbers from; my guess, thin air.
As said before: nothing but fluff posts, from a fluff "analyst," if he can even be called that.
Bruzzone - Sunday, June 12, 2022 - link
Qasar,
As I said before, my numbers are from primary research, and the base data for the WW channel is eBay offers. There are other electronic seller sites, like Amazon, that can also be relied on; AMD, Intel, and Nvidia have a segment sales manager for every seller site. eBay, however, is what the industry, the channel itself, relies on for WW inventory supply management, as the Intel supply-signal cipher was relied on until 2012, as Price Watch was relied on in the 1990s, and as Storeboard was relied on in the 1980s.
Pursuant to my specific post here: TSMC's price to AMD and AMD's price to OEMs are calculated on the stakeholder/3 rule (TSMC gross, AMD gross, OEM NRE and margin), validated against AMD financial reconciliation.
The specific Raphael component-count increase is calculated from good wafer area, at 53,000 mm^2 divided by die size, on the 7 nm to 5 nm shrink, subject to the V5x normalized full-stack CCX area plus I/O, and subject to eBay percent-grade SKU data, to determine the normalized silicon-area requirement across the full product stack.
The supply-wave slides at my Seeking Alpha blog are the eBay primary research and proof of work:
https://seekingalpha.com/user/5030701/instablogs
Thanks for asking. mb
Qasar - Sunday, June 12, 2022 - link
myke brazone, blah blah blah blah blah blah.
Then post direct links to your sources. Your own page is NOT a source; it's self-promotion, nothing more.
I think you don't post direct links to your sources because you don't have any, period.
The more you post this BS, the more it looks like you are a fluff analyst, posting fluff, and a fraud.
Start posting direct links, or don't post at all.
Bruzzone - Monday, June 13, 2022 - link
Qasar, I am the source.
Mike Bruzzone: enlisted by attorneys on the Federal Trade Commission v. Intel docket in May 1998 at FTC HQ, Washington DC. Lettered to work reporting to the California Department of Justice Deputy State AG in March 2000, inputting to the antitrust division primarily on Intel vertical tying and horizontal contracts in combine, beginning March/April 1998. Informal work for the EUCC began in 2001, and by 2009 an honor from the EU Commissioner by letter. Referred by Congress and the House subcommittee on commerce, and by the FTC's Harvard production economist from Docket 9288 (we were aware of each other's work), back to attorneys who enlisted my aid for FTC v. Intel Docket 9341. The same year I was retained by the Congress of the United States on a USDOJ contract, and in 2010 I was made Docket 9341 consent-order monitor by an FTC attorney, monitoring AMD, Intel, Nvidia, and VIA. I said that if you need a work assignment, I'd give you one, so you too can contribute productively and know what primary research is and how primary research makes a data source; regards.
Mike Bruzzone: former Orchid Technology, Arche Technology, Cyrix, ARM, NexGen, and AMD employee; Radius, C-Cube, Samsung Alpha, Intel, and IDT Centaur consultant. With the FTC since May 1998. First through eleventh non-Intel x86 and ARM introductions, over 30 PC introductions, first symmetrical PC processor, first PC MPEG encoder, first 3D PC graphics card (Orchid IBM AT Turbo PGA).
Qasar - Monday, June 13, 2022 - link
Mike, you are the source? Yeah, OK, sure. And you pull these numbers out of thin air? And the whole post above?
"I said if you need a work assignment..." So I can post made-up numbers and data? No thanks.
"and know what primary research is and how primary research makes a data source" Without LINKS to where you get this "data source," it's meaningless fluff and drivel.
IF I wanted to, I could post a bunch of text like you do, claiming the same things. Point is?
It doesn't prove ONE thing. LINKS would, and you STILL refuse to post them. So again, all of your posts are just spam, fluff, and BS from a fraud couch-potato "analyst."
Bruzzone - Monday, June 13, 2022 - link
Qasar, then all primary auditors and analysts inferring from data, whether quantitative, such as the eBay supply-chain data I examine, or qualitative, from, say, a cross-matrix organization, production, supply-chain, or channel audit, are doing nothing other than comparing, reflecting, and inferring in their search for thing is concert. Confirmation or contradiction on all the fuzzy ambiguities that exist in between? On which, perhaps, you'll make a new discovery?
mb
Bruzzone - Monday, June 13, 2022 - link
Whoops, typo: search for 'things in concert'. mb
Qasar - Wednesday, June 15, 2022 - link
myke brazone, blah blah blah blah. Are you drunk??? That looks like the ramblings of someone who drinks too much.
The only discovery I would like to make is where you come up with this BS. No links means NO ONE can prove you right OR wrong; how convenient.
WHY won't you post direct links to your sources? Is it because you don't want to? Or is it because you have NO sources, so you can't? I guess it's difficult to show your sources when it's all made up.
TekCheck - Wednesday, May 25, 2022 - link
Ian Cutress asked Robert Hallock about the PCIe bandwidth on the chipset link too. It is also PCIe 5.0, so all 28 lanes from the CPU are 5.0; it looks like the first generation of Prom21 chipsets is 4.0-capable only. This means that Alder Lake has double the bandwidth on the chipset link; DMI stands at PCIe 4.0 x8.
AMD either needs to enable a 5.0 chipset link in the second generation of chipsets, or sacrifice one M.2 on the chipset's second chip and allocate another x4 link from the CPU (on motherboards without the ASM4242 USB4 chip) to create a double x4 chipset link.
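To make the lane accounting concrete, here is a minimal sketch of the trade-off being described; the slot allocation is an assumption pieced together from this thread and AMD's 28-lane figure, not a confirmed board design:

    # Hypothetical AM5 CPU lane budget (28 usable PCIe 5.0 lanes).
    lane_budget = {
        "x16 graphics slot":      16,
        "M.2 slot #1 (CPU)":       4,
        "M.2 slot #2 (or USB4)":   4,
        "chipset downlink":        4,
    }
    assert sum(lane_budget.values()) == 28

    # Rerouting "M.2 slot #2" to the chipset, as suggested above, would
    # yield an x8-equivalent downlink at the cost of one CPU-attached M.2.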
Silver5urfer - Wednesday, May 25, 2022 - link
Alder Lake's DMI is PCIe 4.0 x8 and Ryzen 7000's link is PCIe 5.0 x4; it's the same bandwidth. It was the case with Z590 as well: on 11th gen, the DMI is 3.0 x8, which is the same as the PCIe 4.0 x4 link on X570. The CPU lanes are 24; it's the same with Zen 3. Intel had 16 lanes until 10th gen, upgraded to 20 lanes on 11th, and it's the same for 12th. They might upgrade to 24 on Raptor Lake; we do not know yet.
Kvaern1 - Sunday, May 29, 2022 - link
I'm not sure that the DMI link speed is a measure of anything useful. My PCIe 4.0 SSDs, which individually can do 6-7 GB/s, max out at 3.3 GB/s in SSD-to-SSD transfers over DMI on a Z690 board.
shabby - Monday, May 30, 2022 - link
What's with the message count? It was 450 in the morning and 300 now? Did I miss a massive purge? 😂
Qasar - Monday, May 30, 2022 - link
Huge spammer, twice: one on Saturday, one today. Saturday's was copy-and-paste from other forums; today's was just useless jibber-jabber.
back2future - Tuesday, May 31, 2022 - link
The first comments (and, after a name change, the second comment series) were related to some extent to the initial comments, as far as I looked into it, and some content (one paragraph) was from reddit.com. Possibilities:
- a bot with context-duplication ability (keyed to catch-words in the initial comments)
- a spamming script with web-search-generated comment texts
- AI testing (statistical weighing of content already on the site into answering comments)
(If it had been less intrusive, it might have been interesting to see the secondary or multiple-iteration effects of generated comments on comments; probably some kind of summary of the most-used lemmas.)
GeoffreyA - Tuesday, May 31, 2022 - link
Yes, I thought it was some sort of AI, because the comments did seem vaguely related.
GeoffreyA - Tuesday, May 31, 2022 - link
(Then again, perhaps we are all some sort of hyper-advanced Turing-test-passing AI!)
dicobalt - Monday, May 30, 2022 - link
Meanwhile over in DRAM land... https://imgur.com/UESHF0L
peevee - Tuesday, May 31, 2022 - link
I have so many questions. The built-in graphics: it's on the I/O die, right? Will it work together with AMD cards in the slots, or is it a choice of one or the other?
Will a GPU in a slot be able to output video through the motherboard outputs, or the other way around?
How well do AMD cards scale when used as a dual setup? I think PCIe 5.0 x8 is already fast enough for dual cards, but will it make any sense given driver quality, etc.?
qwertymac93 - Wednesday, June 1, 2022 - link
CrossFire hasn't been a thing for years, and Hybrid CrossFire is even less likely to make a comeback; I wouldn't hold my breath. Switchable graphics might make it, but the utility on desktop is extremely limited. Even PCIe 5.0 x16 isn't fast enough for full memory coherency, so for it to work it'd need to be a CrossFire/SLI-like technology, which is really not in vogue right now.
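Rough numbers make the coherency point concrete: even a full PCIe 5.0 x16 link delivers less per-direction bandwidth than ordinary dual-channel DDR5, let alone a card's local GDDR6. A sketch, with DDR5-4800 assumed as the baseline:

    # Compare PCIe 5.0 x16 (per direction) with dual-channel DDR5-4800.
    pcie5_x16_gbs = 32 * (128 / 130) / 8 * 16   # ~63.0 GB/s
    ddr5_dual_gbs = 4.8 * 8 * 2                 # GT/s x 8 B x 2 ch = 76.8

    print(f"PCIe 5.0 x16:       {pcie5_x16_gbs:.1f} GB/s")
    print(f"DDR5-4800 dual-ch.: {ddr5_dual_gbs:.1f} GB/s")
    # Local GDDR6 on a discrete card runs in the hundreds of GB/s, so a
    # coherent link over the slot would still be a severe bottleneck.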
peevee - Friday, June 3, 2022 - link
Hybrid CrossFire was available on the A8/A10 APUs, with MUCH, MUCH slower main memory and PCIe. The code is probably still in the drivers. And it seems like an incentive for AMD CPU users to buy an AMD card instead of an Nvidia one.