Integrated GPUs live or die by memory bandwidth, and using DDR5 would make a huge difference. On the other hand, DDR5 is more expensive and might not "fit" a low-budget computer. Basically, there aren't many people who buy a cheap i3 with expensive RAM and no dedicated GPU.
As its iGPU has only 24 Execution Units, I doubt DDR5 will have such a momentous impact. Even if it does, there's still no way its iGPU performance is going to be terribly interesting or even competitive with 8 CU (512 shader) competition from AMD.
By comparison, consider that Tiger Lake needed up to 96 EU of a roughly comparable iGPU architecture to eke out a win over AMD's 8 CU iGPUs. So, that should set a very modest expectation for the iGPU performance of these chips.
They don't want to say how bad it is...;) Just another one of those inexplicable omissions AT seems to enjoy making in product reviews, I guess. Even if it's terrible, it should be demonstrated. Anyway, I'll say that Intel has a long and somewhat stagnant history of 2-4 core CPUs...;)
"Intel has a long and somewhat stagnant history of 2-4 core CPUs" Must be quite frustrating for AMD fans to see the 'stagnant' i3 occasionally defeat the 5600X in gaming benchmarks, too...
Really? I've checked the archives in the 'CPU' section of anandtech. The last extensive review of an AMD Desktop CPU seems to be the 5300G/5600G/5700G... And it featured two pages of iGPU tests... And the first page of the same review was 'Power Consumption'. So your 'shortcuts' seem to be a bit... Selective, I'd say.
What's the status of ECC RAM support for Alder Lake CPUs? Does DDR5 support imply ECC support? Are there different levels of ECC for DDR5? Intel Ark is no help. Thanks for any hints.
According to ark.intel.com, ECC support is *not* listed for this or its sibling. They *do* list ECC support for the "E" and "TE" variants, but I don't know if you'll be able to source those easily, or how you'll find motherboard compatibility.
Basically, plan on using an E-series Xeon, if you want ECC on Alder Lake.
No. All DDR5 has on-die ECC, but that's just a band-aid to cover other shortcuts DDR5 makes (increased density, longer refresh intervals), and it probably won't deliver a net reliability improvement.
The two variations of ECC currently supported by Intel are traditional out-of-band ECC, which requires special DIMMs and motherboards, and in-band ECC, which should work with any DIMMs but at a slight performance penalty. From what I've seen, only certain Elkhart Lake CPUs support in-band ECC so far.
When it comes to excelling in single-threaded applications, which applications are we really talking about in 2022? MS-DOS emulators and CPU tests limited to 1 thread?
I'm not asking where single-threaded software exists, because that's just about everywhere. I'm wondering in which single-threaded applications it is possible to excel, because they actually exist and their performance actually matters.
Games. Multithreaded games still have a single primary rendering thread, or other tasks such as AI, that cannot be made parallel easily or at all; hence why Alder Lake wins at gaming benchmarks, even multithreaded ones.
OCR of a long document in Adobe Acrobat. It’s infuriating that it’s still single threaded in 2022 even in the pro version, when it’s such an obviously parallelizable task. But it is what it is.
I wonder if you are serious or just trolling... If you know that "single threaded software exists, because that's just about everywhere. I'm wondering which singlethreaded applications is it possible to excel in, because they actually exist and their performance actually matter," then the answer should be obvious: you excel in all those single-threaded applications that take real time to complete. I do not think it is that difficult to understand, so I just think you are acting as a troll.
Today's games are nowhere near single-threaded, even though there is a main thread. The reason Alder Lake does well is that it has several/many high-clocked, well-performing cores/threads. Both Xbox and PlayStation have been multi-core for well over a decade (since 2006 for PlayStation, and with multi-core AMD chips since 2013 for Xbox), which forced developers to focus on weaker threads rather than the old-fashioned monolithic design.
Most games have some sort of single-thread bottleneck. Web browser performance is also highly affected by single-thread performance, and browsing is what most people do 99% of the time on their computers; almost nobody is "rendering". More cores help up to a certain point, then they become useless; anything above 6C/12T is usually completely useless. Single-thread performance matters much more than a 16-core/32-thread benchmark.
Office applications. Legacy software. Interpreted code (such as Visual Basic for Applications). Compilation (C or C++) of a single file. There are many places where "single-core" performance counts: if your typical operation lasts only a few seconds you might not really care to optimize (and going parallel might not be easy, and might not even be possible for some problems).
You may be surprised by how many applications still use a single thread, or, even when multi-threaded, are bottlenecked on one thread.
All office suites, for example, use just a main thread. Use a slow 32-thread-capable CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but surely not to the point of using 32 cores even for complex tables. Compilers are not multi-threaded: the build just spawns many instances to compile more files in parallel, and if you have many cores it can end up I/O limited. At the end of the compiling process, however, you'll have the linker, which is a single-threaded task. Run it on a slow 64-core CPU, and you'll wait much longer for the final binary than on a fast Celeron.
All graphics retouching applications are single-threaded. What is multi-threaded is just some of the effects you can apply, but the interface and the general data management run on a single thread. That's why Photoshop layer management can be so slow even on a Threadripper.
Printing apps and format converters are single-threaded. CAD programs are too, and browsers as well, though they mask it as much as possible. To my surprise, I found that JavaScript runs on a single thread for all opened windows: if I hit a problem on a heavy JavaScript page, other pages are slowed down as well despite spare cores.
In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than having a myriad of slower cores. There are some (and only some) applications that take advantage of a swarm of small cores, like 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market. BTW, video conversion can be done easily and more efficiently using HW converters like those present in GPUs, so you are left with 3D renderers as the only workload able to saturate however many cores you have.
There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.
> if you have many cores it just ends up being I/O limited.
I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.
> At the end of the compiling process, however, you'll have the linker, which is a single-threaded task.
There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.
> At the end, there are many many task that cannot be parallelized.
There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
You may be aware that there are lots of compiler toolchains that are not "Google based" and are not based on experimental code.
"You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes." Try compiling something that is not "Hello world" and you'll see that there's no way to keep the files in RAM unless you have put your entire project in a RAM disk.
"There are more that could. They just aren't because... reasons." Yes, the reason being that making them multi-threaded costs a lot of work for a marginal benefit. Most algorithms ARE NOT PARALLELIZABLE: they run as a contiguous stream of code where the next datum is the result of the previous instruction.
Parallelizable algorithms are a minority, and most of them require a lot of work to beat a single-threaded implementation. You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a minor number of applications, mostly renderers and video transcoders, really take advantage of many cores. The others do not, and mostly rely on single-threaded performance (either improved IPC or faster clocks).
> Try compiling something that is not "Hello world" and you'll see
My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with a HDD RAID, the build is probably > 90% CPU-bound.
As for the toolchain, we're using vanilla gcc and ld. Oh and ccache, if you know what that is. It *should* make the build even more I/O bound, but I've not seen evidence of that.
I get that nobody likes to be contradicted, but you could try fact-checking yourself instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and still found it to be mostly compute-bound.
> You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years and still only a minor number of applications, mostly renderers and video transcoders, really take advantage of many cores.
Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.
Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.
Oh, and don't forget games. Most games are pretty well-threaded.
"analyzing web browser performance and revealing they're quite heavily multi-threaded"
I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
As for compilers being heavy on CPU: amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more (obviously in line with the growing language specification) and take a surprising amount of time to compile. Until recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just spawns multiple compiler instances, made it cut through the project like butter again.
Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.
Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6. In VS2022, round about 6.4-7.8 MB.
So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.
Also worth a look is the CppCoreGuidelines on isocpp's github. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.
Finally, here you'll find some fantastic C++ infographics:
Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!
China's Institute of Software Chinese Academy of Sciences also ran one, last year. Presumably, they'll do it again, this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students, as well.
I doubt I'll participate in any of those programs (the lazy bone in me talking), but many, many thanks for pointing out those programmes, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful. It's been a big step forward.
That cppreference is excellent; I looked at it when I was trying to find a lock that would replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, I think it was std::mutex, just dropped in and worked. But I left the old version in, because there's other Win32 code in that module, and using std::mutex would have meant no more compiling on the older VS, which, surprisingly, still works on the project.
I know about laziness. I probably should be working on programming puzzles, since I hear job interviews tend to be big on those. The last time I did anything like that was Google's "foobar", which was pretty fun. I did well enough to get an interview, but I didn't pursue it.
I really wish I had known about these things, or that they had existed, 10-15 years ago. I had a thirst for programming back then and, if I may say so, would've done well. I've let go of thinking of myself as a programmer any more, but still hope to keep up an acquaintance with code.
Especially learning *new* things. The first time I tried to learn another spoken language, as an adult, I could feel the blood rushing to my brain as I struggled to remember and pronounce new words and phrases.
I am learning French, the beautiful language itself, these days (and incidentally, it makes me so disappointed in English, as always). There was a great article on Quanta last week that really got my brain working too.
OMG, I stay away from any cutting-edge theoretical physics. I'm not investing that much effort into trying to understand something that will likely turn out to be wrong. It'd be different if I had a stake in the matter, but there's more than enough more practical stuff I should be learning.
But, if you enjoy it, and trying to wrap your head around it, then it's far from the worst way you could spend your time!
I enjoy it, and wish I was a physicist, but it does drain out the mind considerably, and afterwards one feels it's all rather meaningless, and it's the practical business of life that really counts: love, family, work, etc.
High respect to AT team, for mentioning the fundamental design flaw of this 12th generation. Intel really screwed up hard. Nobody cares, esp those Youtubers.
They messed up the ILM badly. And the AVX-512 fuse-off is another gigantic kick in the face; per the latest update, Intel will fuse it off at the factory. If you look at the silicon area of the ADL P-cores, AVX-512 occupies a good chunk of space, and enabling it allows the P-cores to fully unleash their performance (no more E-cores hampering the uncore, the power plane, and overall CPU performance).
Prime reason to skip this entire 12th gen, especially with the new rumors saying the RPL LGA1700 Z700 chipset might be DDR5-only. So you get this haphazardly designed ILM, which requires the end user to perform a socket mod for DDR4 (I would not do it, despite loving to tinker, because the torque and fitting specs are not public; Igor clearly mentioned this), or buy uber-expensive DDR5 kits, which have two flaws of their own: price-to-performance, and dual-rank kits being mandatory for the ADL IMC to maximize speed and performance, i.e. needing an APEX, Unify-X, or Tachyon, all top-end HWBot-grade boards.
Intel really messed this up: a solid CPU arch, but not from the point of view of an enthusiast who wants to use a PC for decades going forth. AMD too; AM4 has its own share of issues. AGESA 1.2.0.5 is busted, and they still did not fix the IOD USB issues through firmware; it has been plaguing the platform. And now there's no X3D CPU refresh, which would have perfectly fixed all the firmware problems, the IOD and DRAM issues, plus the WHEA errors. Sad. Still, Ryzen is best if you do not want to OC or tweak: just enable PBO2 and that's it, let it do its job, and do not push DRAM past 3600MHz. For Intel LGA1200, 10th gen is best for SMT performance and all workloads; for those who want PCIe 4.0 (still not a big deal, because there's not much use for it), an 11900K is fine but has the absolute worst class of binning, and a poor IMC is a strict no-no. The biggest loss going with LGA1200 Z590 is that the DMI speed is very low. But since the majority will not saturate the NVMe on a constant load, it is okay. A shame: if the 10900K had the DMI of 11th gen it would have been solid, since X570 and Z590 are practically the same in PCH link lanes. Only on the CPU side does Ryzen have extra USB, but with it crapping out, that doesn't make sense, at least to me.
AM5 needs maturity as well, first customers will always end up being guinea pigs. Still I would like to see how AMD plays their game, and I'm looking forward since it won't have BS E-Core crap. Full fast fat cores.
Then maybe it's pure luck. 5900X / 980 Pro / external drives through USB 3.0 + C. I can hit the USB drop problem at will with this setup, even on stock settings. I bought the system for the throughput so I just deal with it, but it has absolutely never been fixed, even with the latest AGESA.
Alder 6P and 4P are from area-optimized mask sets and contain no E-cores; I believe you articulated the no-thread-direction firmware performance and power hit, @Silver5urfer. "Who wants a PC for decades going forth": I do, and I always buy top bin at prior-gen run-end clearance prices, with the objective of a 10-year system life. 5800X here I come, April/May. My 4M-point databases will do fine, and my extra budget will go to a fast NIC, memory, and a 1080, and I'll be in performance heaven. Mike Bruzzone, Camp Marketing
Bandwidth-wise, I think I am using it all: 5900X on an X570 board, RX 6900 @ PCIe 4, 980 Pro 2 TB @ PCIe 4, 3x SATA in Storage Spaces RAID 0, 2x SATA as normal drives, USB audio recorders (Roland), and USB audio playback (also Roland).
I think you just made all this up to find some weak point in the Intel architecture, while compensating with a subtle (and, to you, probably secondary) I/O problem for AMD.
I would like to quote this post for the future, when AMD will be limited to DDR5 only with Zen4 and re-post this statement again: "Prime reason to skip this entire 12th gen, esp with the new rumors saying RPL LGA1700 Z700 chipset might be DDR5 only, so you get this haphazardly designed ILM which requires end user to perform a Socket mod [...] for DDR4 or buy the uber expensive DDR5 kits which have 2 flaws on their own - Price to performance,"
Apart from the fact that RPL won't be DDR5-only, but most probably just more DDR5-oriented (meaning you'll find fewer DDR4 offerings for it, the same way you find fewer DDR5 offerings for ADL), you can buy a 500 chipset with all the features you want supporting DDR4. When Zen4 comes out, you will have to buy "uber expensive DDR5 kits which have 2 flaws on their own - Price to performance". Let's see what you'll say about the corner AMD has painted itself into with that choice. I would just expect AMD to delay Zen4 as much as possible until DDR5 becomes available at an affordable price.
In the end, with more time and knowledge, you'll see that the E-cores are not that much of a hindrance to the P-cores, but rather a clever way to support extensively multi-threaded jobs, much better than beefed-up cores with low-efficiency SMT. AMD will arrive at that as well. With Zen5 they have already announced their usual mock-up copy (stand-alone efficiency cores) of the competition's then two-year-old solution. With Zen6 they will probably integrate it as Intel has done today. In 4 years (possibly) you'll have AMD with the same architectural big.LITTLE solution Intel has now. And I bet my cat that when this happens, we will all hear from you and the AMD fanboys how revolutionary and game-changing that choice is. It was already done in the past; it will be done again.
And BTW, I have yet to see an AMD motherboard supporting all the technology present on Intel motherboards without a single issue, as Intel ones do. For me it makes a big difference to have even slightly slower but rock-steady performance, rather than something that is sometimes faster but more often doesn't work as expected.
Intel 12th gen is garbage. The ILM issue on page 2 seals the fate of this trash LGA1700; it's over. As for AMD, their Zen 3 is flawed. That's why I suggest either Intel 10th gen or AMD Ryzen 5000 (only with PBO and 3600MHz, nothing more).
As for smearing fanboy crap on me: do you think you have any ounce of credibility left? I literally gave links showing where the AVX-512 and P-core design is much superior, and yet you are the clown who comes in and says big.LITTLE is good; you have no technological knowledge at all. What is your stupid point about Zen 5 copying Intel based on? AMD never copied Intel. Intel is the one copying AMD's chiplet design and MCM, in EMIB format, for Xeon SPR. And Intel is the one that copied ARM's design. AMD is not going to make this junk for desktop; they want leadership and they will get it. Intel is going this way because their P-cores cannot handle more than 8P, as their thermal ceiling is low; the 12900K is the proof of that.
Leaks point to AMD putting 8x Zen 5 cores and 16x Zen 4C cores on desktop (Granite Ridge), as well as 8x Zen 5 and 4x Zen 4C in mobile/desktop APUs (Strix Point).
AGESA or AGESA v2 for the problems you are fighting with? If you have USB ports that are not controlled by the CPU, AGESA isn't necessarily the source of those problems.
Last page: "Intel has rated the i3-12300 at base frequencies with a TDP of 60 W and a 69 W TDP when at turbo clock speeds." The turbo TDP was mentioned on the first page as 89 W (basically a tie with the 88 W of AMD).
"Due to AMD's Zen architectures, Intel has been on the ropes in both performance and value for a while." Intel suffered a lot from their inability to improve their lithography; they could have been competitive in cost, performance and power use with better lithography.
"One of AMD's most cost-effective processors remains the Ryzen 5 5600X, with six cores, eight threads" - it has 12 threads.
All in all, AMD still make sense in an "upgrade only the processor" scenario - though that could be a niche within a niche. And, apparently, the greatest competition the i3-12300 with DDR5 has is from the i3-12300 with DDR4 (or maybe an i5 with DDR4).
Intel's dies are massive and entirely fabbed in Intel 7. They're only competing in cost because Intel is deliberately choosing to sacrifice their famous "Intel margins" to get back into the market.
Failing to do that might've meant inactive fabs, and if fabs aren't making money they're losing money (Since new fabs are so expensive and there's a definite time frame where they can recuperate the investments into cutting edge tooling, after which wafer prices will tend to fall)
This is Intel at its most desperate yet, and I'm loving it.
Intel still reports very high "Gross Margins". How much other activities (cough OEM bribes cough) eat into this might not be truly evident. As for "Failing to do that might've meant inactive fabs, and if fabs aren't making money they're losing money"... AMD simply can not produce enough - so if Intel stopped fabrication of those inferior processors (of the last at least couple of years), prices would have exploded. While I don't condone the US government saving banks involved in the sub-prime mortgage crisis, at this moment at least Intel truly is too big to fail.
In my opinion, running both of these CPUs with JEDEC-standard memory is an incredibly stupid idea that makes this review significantly less useful. DDR4-3600 CL18 or 3200 CL16 kits are super affordable. DDR5 is not affordable right now, but purchasing DDR5 with similar latencies (timings in terms of ns, not just CAS latency) would have resulted in a much more effective review.
Imho, running both of these CPUs with XMP profile is an incredibly stupid idea for review. CPU, MB and stick makers give guarantee only for JEDEC profiles.
Integrated circuit fabrication technology changes and no doubt DRAM chip design, along with it. Maybe memory errors were relatively more common, in the memory available at the time, and Lisa surely needed a lot of it.
The Ryzen 3 5300G APU has literally HALF the L3 cache of other Zen 3 CPUs, so trying to use that part to claim AMD couldn't make a competitive quad-core CPU right now if they needed to, based on that data alone, is pretty god-tier idiotic. I expect better of this site.
AMD went from 4 core per CCX with the Zen2 generation to 8 core per CCD in the Zen3 generation. As a result, AMD doesn't have non-APU chips with only 4 cores. Monolithic design means AMD isn't using chiplets for that 5300G. If you are limited by fab capacity, do you divert a lot of capacity for low-margin and low end products?
Zen4 may switch things up a bit, or, AMD could potentially put low end Zen4 on 7nm since having the best efficiency and performance won't be needed for the low end products.
AMD couldn't make a competitive 4-core Zen3 CPU at the price Intel is selling their latest generation. Comparisons with cheaper or more expensive processors is useful only to a point... And making a true comparison (i.e. platform costs) is a quagmire of "if this, if that, if ...". Not to mention that - at least for a while - the prices will be volatile, so no comparison based on "price bracket" will be long-lived.
When Zen3 chiplets are based around 8 cores per CCD, the only quad-core chips will be monolithic APUs. With any luck, AMD will relegate Ryzen 3 CPUs to 7nm while Ryzen 5, 7, and 9 will be on 5nm.
7 and 5 don't necessarily share the same libraries and quirks. That's a lot of engineering resources for questionable gain. I don't disagree that it'd be nice but I don't think it's likely.
CL16 3200 RAM was cheap many many years ago and I have not heard of a single stability problem with any platform other than Zen 1, which was quite special.
It’s preposterous to run 3200-speed RAM at anything slower than CL16.
* Sad to see DDR5 used for remainder of benchmarks, given current price & availability. People buying a sub-$150 CPU won't be using DDR5, making these benchmarks unrealistic.
* Sad to see minimal analysis of power consumption. I believe much of their advantage over the Ryzen R3 5300G comes from burning more power and DDR5, but without power measurements on individual benchmarks, we can't compute perf/W or make other conclusions about this.
* Glad to see the 5300G showing up, where it did.
* Glad to see the i7-6700K (and i7-2600K) sometimes making an appearance. So very interesting that a couple benchmarks showed the i7-6700K with roughly equal performance!
* On the last page, Turbo power is mistakenly stated as 69 W, although the first page chart correctly lists it as 89 W.
* Please ask Ian to open source his 3D Particle Movement benchmark, or stop using it. As the rest of your benchmarks are publicly available & independently verifiable, this is only fair.
Regarding the CPU:
* Definitely a performance bargain, if you can get it near list price!
* Sad to see ark.intel.com doesn't specify ECC support (which IMO means probably not... but check the docs of any LGA 1700 ECC-capable motherboard to be sure).
"Sad to see DDR5 used for remainder of benchmarks," As a lower performance processor, DDR5 wouldn't bring too much to the table. They specify a 5-10% increase in performance with DDR5, with an average of some 6%. So, basically nothing would change in the benchmarks - a 10% performance difference could easily be ignored for many other factors (price, availability, necessary power/cooling, ...)
3% used to be Anand's AnandTech "noise". I wouldn't care for 10%: 25 seconds of compile time down to 22, or 50 images edited in 54 seconds instead of one minute. That's the reason Intel used to compare new processors to 3-generations-old ones (3-5 years old): the improvement over multiple generations grew to a nice 25% or more (at least in some benchmarks). But if all you do takes seconds or minutes, that 10% reduction in time (or 10% increase in throughput) is almost never truly useful.
> But, if all you do takes seconds or minutes, that 10% reduction in time (or 10% increase in throughput) is almost never truly useful.
I'm not talking about upgrading for an absolute increase of 10%. However, 10% is a lot of error to stack with whatever else you're comparing against.
Either the accuracy of the benchmarks matters or it doesn't. If not, then obviously we don't need to bother about 10%. If it does, then 10% is too much to ignore.
"Sad to see DDR5 used for remainder of benchmarks, given current price & availability. People buying a sub-$150 CPU won't be using DDR5, making these benchmarks unrealistic."
Including the DDR4 vs. DDR5 numbers was our compromise here. We're going to be using this dataset for a long time going forward; it didn't make much sense to base everything around DDR4 and thus unnecessarily kneecap the CPU in current and future comparisons.
I would like to point out that it has been five months already and you still can't buy a single quad-core CPU. Intel is teasing us with a good cheap product, but it doesn't actually exist. If and when it finally shows up, it will probably be overpriced (over $200 CAD?), so this product might as well not exist.
It's been 2 months since non-K Alder Lake CPUs launched (including all quad core SKUs) and they are definitely available in the UK from reputable independent retailers that specialise in computer equipment. The 12100 is available & in stock for £135 - around £30 more than the 10100 and £5 less than the 10300. Not great prices but from experience fairly typical for the market, at least in the UK.
The US isn't great though the usual suspects do have some options. Considering that everyone seems to have the 12xxxK SKUs in stock it will likely be easy to get whatever you want in a month or 2.
The fact that your i3 beats your i5 in Speedometer 2 implies a problem with the platform. Windows should be keeping a user-interactive process on the P cores all the time, but it seems from these results that Chrome's threads are wandering between P and E cores. There's really no other explanation for why the i5-12600K with 11% higher clocks gets a lower score.
Personally I find the E cores more of a hazard than a benefit, and I have them disabled.
The scores are all too low. Prior reviews used Chrome 92, which is missing a significant V8 update for Windows.
Mobile CPUs also exhibit this behavior, due to more hardware managed power states. It is a royal PITA to find the secret handshake that maintains their litany of peak clocks per CPU core, ring bus, memory controller, RAM, etc.
My Tiger Lake i5 goes from 160 to 200 in Speedometer after adding "processor energy performance preference policy" to the advanced settings of the Windows power plan. Reducing it to 0% raises the minimum clock speed to about 3 GHz.
Alder Lake CPUs with E cores introduce another variable with the hardware thread scheduler. There is also a larger ring bus on the i5 12600K, which increases latency.
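For reference, the hidden "energy performance preference" setting described above can be exposed and adjusted from an elevated prompt with `powercfg`. A sketch assuming the documented `PERFEPP` alias (Windows-only config commands; verify against your active power plan before relying on them):

```shell
:: Unhide "Processor energy performance preference policy" in advanced power settings
powercfg -attributes SUB_PROCESSOR PERFEPP -ATTRIB_HIDE
:: Set EPP to 0 (maximum performance bias) on AC power, then re-apply the plan
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PERFEPP 0
powercfg -setactive SCHEME_CURRENT
```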
Yes the scores are extremely low. With current Chrome/Linux on an i7-12700K with the E-cores disabled I get 305.
Considering that the browser is a compiler for Javascript, it doesn't make a lot of sense to freeze its version. It would be nice if we could get a rolling picture, based on a few key reference systems, of what a buyer *today* would experience with *today's* software.
Anyone else find the DDR4 vs DDR5 graphs annoying? I realize they are sorted by performance but with only 2 entries in each one I think it would make a lot more sense to just have a consistent order.
That would just introduce the problem of having to check whether a lower bar is better than a longer one, or vice versa. Ordering by performance lets you see at a glance which is faster and by how much.
Beyond industrial embedded uses for control-plane processing (including coprocessing acceleration for point of sale, for example), I'm not sure why quads remain a subject of interest; with so many recent back-generation options, why even produce them today for the mass market? Ryzen quads that fail out of the sort are reused as the 1x8 CCX + 1x4 CCX in dodeca-core parts, so they're still utilized, but for a standalone 4C/8T there are so many good used options, and used hexa-core (Coffee Lake) is gaining volume in the secondary market as systems are upgraded and hexa-cores reclaimed.
All i3-12300s available in the WW channel today equal 0.00047% of the full Alder Lake line available. The top Alder Lake SKUs are the 12900K at 40.8% and the 12700K at 46.7%, together 87.6% of the full line; number 3 by volume is the 12400 at 2.5%, and number 4 is the 12600K/KF at 2.4% combined.
On finished-component yield and silicon performance, all Alder Lake dice are essentially 12900K/12700K; that is similar to Coffee Lake Refresh octa-cores, where before disablement most 9th-gen parts started out as a 9900-something.
On the AMD side you can purchase a used 2600 hexa-core for less than $100, and on WW channel availability (both used and new) there are 126x more R5 2600s than i3-12300s. As for AMD quads, the 5300G currently has 37x more channel availability, and the R3 3300X 25x.
2) An Alder Lake quad-core is equal to or better than a Ryzen 5 2600. All benchmarks also show substantial improvements in 1% lows. What matters most is overall performance, not simply the number of cores.
3) The vast majority of people are just fine running a modern quad-core.
Actually, a quad-core is great for 360 Hz gaming too; the problem is the locked clock speed.
If Intel would release an unlocked quad-core that can run at 5 GHz+, it would be a dream chip.
That's why they don't release it: they want gamers to buy useless 16-core CPUs for gaming, since game FPS comes from cache and clock speed, not core count.
I definitely agree that you shouldn't have to buy more cores just to get higher peak clock speeds.
With Intel's Xeon CPUs, it would typically be the case that models with fewer cores had higher base & peak clock speeds. I think that started to change when AMD set up their product stack so that each step enabled more cores and/or higher clock speeds. As Intel moved to 6- and 8-core mainstream CPUs, they did the same thing.
Where AMD sort of bucked the trend was with the 3300X. That little screamer was an absolute performance bargain. I almost bought one, a couple times - first, when it launched, and then I passed on it because it was selling above list price when it came back in stock in late 2020 or early 2021.
Anyway, I wish AMD would do something like that with a Zen3 or Zen3+, though it's looking unlikely.
Agreed, modern quads work great for Office essentials and home essentials including facility management and security.
I'm speaking English; are we communicating? I think so. Which dialect, in which practice area, can however lead to interpretive learning, conferring in another practice area for comprehension, cross-practice and cross-function, achieving dialogue. I think so.
"An Alder Lake quad-core is equal or better than a Ryzen 5 2600. All benchmarks also show substantial improvements in 1% lows. What matters most is overall performance, not simply the number of cores."
Encoding, transcoding and compiling favor octa-centric advantages.
AMD with the 3300X went after the Ivy Bridge-E quads and won, and there are plenty of priced-right E5-1600 v2 quads and a bunch of v2 hexa-cores; plus, Haswell-E in all core counts just entered the used market. Plenty of good choices, especially if you have a board that can be upgraded.
Channel data this last week:
Core Haswell desktop returns to secondary market + 46.6%, and mobile + 7.5% in prior eight weeks and the replacement trend is from Haswell forward in time.
Ivy Bridge EE + 161.11% octa/hexa return to used market prior eight weeks presents a telling indicator.
Haswell Extremes all SKUs + 180% in the prior eight weeks is a strong desktop upgrade indicator.
i7 Refresh + 14%, 4790 comes back to secondary + 17.8% and 4790K + 14% that is 10% of 90_
i5 Refresh: 4590 comes back to secondary + 403% and 4690 sells down < 69%, at 19% of 4590.
i3 Refresh + 81% and 4150 comes back + 161%
Pentium Refresh + 5.5% Celeron Refresh + 14.5%
i7 Original 4770 + 131% and K + 217% that is 24.1% of 70_
i5 Original + 169% and 4570 + 98%, 4570S + 952%, 4570T + 49.7%; 4570T is 26.3% of all 4570_ variants
i3 Original + 4% and 4130T + 13.7%
Pentium Original + 18.8% and G3220 comes back to secondary + 21.8% followed by 3420 + 19.2%
More in the comment line; several comments actually. Keep scrolling down until you find last week's Intel channel data and sales trend.
"As it turns out, GIMP does optimizations for every CPU thread in the system, which requires that higher thread-count processors take a lot longer to run."
Holy cow. I don't believe that. There's something else going on there, like maybe code using a stupid spinlock or something... which could actually be the case if some plugins or the core app used libgomp.
At the time that article was written, the only Big.Little CPUs were in phones (okay, let's forget Lakemont - nobody was running GIMP on a Lakemont). There was absolutely no reason for it to do per-thread optimizations!
Lakefield, you mean! Although Intel does appear to have had a Lakemont, Google "Intel Lakemont" to find another deceased product.
I have used GIMP on RPi4 (which can be rough but usable) so I can imagine Lakefield would be better. Lakefield was too expensive for relatively bad performance (couldn't run all 5 cores at once apparently). Intel gets another swing at it with the Pentium 8500 and other Alder Lake chips.
Yeah, I get the feeling Lakefield was testing out a few too many new technologies to be executed well. At least it served as a test vehicle for Big+Little and their die-stacking tech.
For $150 or £140 this is a really nice product from Intel. Good to see some good value/budget options. Normally I would scoff at a quad-core, but the Golden Cove cores here are strong enough that it does really well for itself. AMD is in a spot of trouble if they don't lower the 5600X's price.
Did Cinebench R23 change behaviour compared to earlier versions? That's quite a difference in DDR4 vs. DDR5 scaling in multi-thread. Up to R20 it seemed insignificantly affected by RAM. I did quickly test R23 on a 6700K at 2133 vs. 3200, and saw no significant difference there. So I'd question that specific result, unless DDR5 does something with R23?
R23 seems to be the same on the surface, just with the addition of an adjustable looping timer. Perhaps running the test for 10 or 30 minutes shows the RAM differences much better than 1 run, of several seconds to several minutes, depending on core count.
Good point, fixed power limits can cause what you described. If that is the reason, would it not apply to R20 also? Unless R23 behaves very differently from R20.
Thanks Gavin! While I agree with much of what you wrote, I have one question: why test a decidedly budget CPU only in a clearly premium-level board, with an also not-so-cheap AIO cooler? Both cost a lot more than the i3 itself. Yes, I assume you're doing so to minimize differences against tests of better and pricier CPUs, but I really doubt a $130 CPU would find itself in a high-end board with that AIO cooler attached. Wouldn't it make sense to test a CPU in its "natural habitat", so in a budget socket 1700 board with the stock cooler on it? Just wondering.
AMD doesn't want to be mentioned anywhere near the term, "budget" going forward. AMD's goal is to assume the premium/luxury class role (selling $15k EPYC 3D stacked Genoa). Chasing the low-end makes it difficult to attain high margins. This is why we have not seen the low-end Zen 3 updates to this point.
With Zen 4, we will see the 1M L2 + 64M 3D stacked-fed L3, full-fat cores. The raw performance in single and multithread will render those Intel E cores worthless. The 7950X3D will give Threadripper-class multithread, along with fantastical single-core performance.
It's not that AMD doesn't care about the low-end anymore... they just don't care about the low-end anymore.
The 8-core chiplet with high yields makes 4-cores pointless for AMD to produce, and that won't change anytime soon since Zen 4 and probably Zen 5 will use 8-core chiplets.
The real "problem" is that AMD hiked prices during Intel's stumbles. A good strategy that made them lots of cash during a chip shortage/supply crisis. But if that leak is correct, they will launch a 6-core near $100-120 to counter budget Alder Lake chips like the i3-12100F.
AMD could put Van Gogh on AM5 for the DIY market. That would use 7nm while other products move down to 5nm. They will also have basic graphics on Zen 4 Raphael which would allow for office-type builds without discrete GPUs. Finally, there is the Monet on GloFo 12LP+ rumor. Even if that was laptop only, it could be an impulse buy (use display output).
AM5 will only be a good budget option when the DDR5 prices come down, but the Zen 3 price cuts and new rumored CPUs keep AM4 in the running.
"The real "problem" is that AMD hiked prices during Intel's stumbles" - and it's the same "problem" Intel had pre-Zen, yet very few seemed to complain about it then. What's your point? Some really need to let this go. It's like some think AMD should have kept their prices low because that's what they did before, back when they didn't have the performance to go with those prices, like, you know, Intel did all those years?
In Poland there are pretty much no compatible Intel boards under $100, while there are plenty of B450 and a few B550 boards available, the cheapest of them at $50.
Intel's up to their usual stuff, misleading the market? Why am I not surprised.
DDR5-4800 costs 250% of DDR4-3200 but only gives a 10% performance improvement. I keep telling people the DDR5 launch is premature; that will hold until Q4 2024 to Q1 2025, when all the major memory manufacturers finally have new fabs online.
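The price/performance gap above can be put in rough numbers; a sketch with hypothetical prices (the 250% ratio from the comment, applied to an illustrative $60 DDR4 kit baseline):

```python
def perf_per_dollar(score: float, price: float) -> float:
    """Benchmark points per dollar spent on RAM."""
    return score / price

# Illustrative numbers: DDR4-3200 kit at $60, DDR5-4800 at 250% of that,
# and a 10% composite performance gain for DDR5 (figures per the comment).
ddr4_price, ddr5_price = 60.0, 60.0 * 2.5
ddr4_score, ddr5_score = 100.0, 110.0

ratio = perf_per_dollar(ddr5_score, ddr5_price) / perf_per_dollar(ddr4_score, ddr4_price)
print(f"DDR5 delivers {ratio:.0%} of DDR4's performance per RAM dollar")  # -> 44%
```

On these assumed prices, DDR5 returns well under half the per-dollar value of DDR4, which is the crux of the "premature launch" argument.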
29a - Thursday, March 3, 2022 - link
No iGPU benchmarks, why?
Calin - Thursday, March 3, 2022 - link
Integrated GPUs live or die by memory bandwidth, and using DDR5 would bring a huge difference. On the other hand, DDR5 is more expensive and might not "fit" a low-budget computer.
Basically, not many people buy a cheap i3 with expensive RAM and no dedicated GPU.
mode_13h - Thursday, March 3, 2022 - link
As its iGPU has only 24 Execution Units, I doubt DDR5 will have such a momentous impact. Even if it does, there's still no way its iGPU performance is going to be terribly interesting or even competitive with 8 CU (512 shader) competition from AMD.
mode_13h - Thursday, March 3, 2022 - link
By comparison, consider that Tiger Lake needed up to 96 EU of a roughly comparable iGPU architecture to eke out a win over AMD's 8 CU iGPUs. So, that should set a very modest expectation for the iGPU performance of these chips.
29a - Thursday, March 3, 2022 - link
If they had benchmarked the iGPU, we would know how it performs. For the last two years now, I've heard how awesome Xe is going to be.
mode_13h - Thursday, March 3, 2022 - link
Then you need only look for a review of Tiger Lake's iGPU. Alder Lake is only a minor refresh of it.
https://www.anandtech.com/show/16084/intel-tiger-l...
WaltC - Friday, March 4, 2022 - link
They don't want to say how bad it is...;) Just another one of those inexplicable omissions AT seems to enjoy making in product reviews, I guess. Even if it's terrible, it should be demonstrated. Anyway, I'll say that Intel has a long and somewhat stagnant history of 2-4 core CPUs...;)
mode_13h - Saturday, March 5, 2022 - link
Yeah, I think it could've done with a single page of 720p or 1080p benchmarks for a selection of relevant titles.
MDD1963 - Monday, April 25, 2022 - link
"Intel has a long and somewhat stagnant history of 2-4 core CPUs" Must be quite frustrating for AMD fans seeing the 'stagnant' i3 occasionally defeat the 5600X in gaming benchmarks, too...
DannyH246 - Thursday, March 3, 2022 - link
LOL - We all know why.
Ryan Smith - Friday, March 4, 2022 - link
"No iGPU benchmarks, why?"
Frankly, we haven't been doing iGPU benchmarks for desktop processors for a while. It's been a shortcut to get CPU reviews done on time.
That needs to change (and will be changing). But right now, if we benchmarked the i3-12300's iGPU, we wouldn't have sufficient data to compare it against anyhow.
nandnandnand - Sunday, March 6, 2022 - link
Comparing it to the Vega 6 in the 5300G should get the point across: UHD 730 is weak.
kkilobyte - Sunday, March 6, 2022 - link
Really? I've checked the archives in the 'CPU' section of anandtech. The last extensive review of an AMD Desktop CPU seems to be the 5300G/5600G/5700G... And it featured two pages of iGPU tests... And the first page of the same review was 'Power Consumption'.
So your 'shortcuts' seem to be a bit... Selective, I'd say.
DannyH246 - Sunday, March 6, 2022 - link
Intel cannot be shown in ANY bad light.
Spunjji - Monday, March 7, 2022 - link
It's bad. How bad? Take Vega 6, divide by two (or three, depending on application).
MDD1963 - Monday, April 25, 2022 - link
Suspect the 12300's iGPU performance figures would still best those of the 5900X and 5950X...? :)
fmyhr - Thursday, March 3, 2022 - link
What's the status of ECC RAM support for Alder Lake CPUs? Does DDR5 support imply ECC support? Are there different levels of ECC for DDR5? Intel Ark is no help. Thanks for any hints.mode_13h - Thursday, March 3, 2022 - link
Yes, according to ark.intel.com, ECC support is *not* listed for this or its sibling. They *do* list ECC support for the "E" and "TE" variants, but I don't know if you'll be able to source those easily, or how you'll find motherboard compatibility.
Basically, plan on using an E-series Xeon, if you want ECC on Alder Lake.
Slash3 - Thursday, March 3, 2022 - link
Slash3 - Thursday, March 3, 2022 - link
The -E/TE are BGA variants for embedded applications. It will need to be a Xeon.
mode_13h - Thursday, March 3, 2022 - link
> The -E/TE are BGA variants
According to Intel, they're the same FCLGA1700 package as the CPU reviewed in this article.
https://ark.intel.com/content/www/us/en/ark/compar...
However, whether you can get them in Qty. 1 and whether any retail motherboards have validated ECC support for them is another matter.
mode_13h - Thursday, March 3, 2022 - link
> Are there different levels of ECC for DDR5?
No. All DDR5 has on-die ECC, but that's just a band-aid to cover other shortcuts made by DDR5 (increased density, longer refresh intervals) and probably won't deliver a net reliability improvement.
The two variations of ECC currently supported by Intel are traditional out-of-band ECC, which requires special DIMMs and motherboards, and in-band ECC, which should work with any DIMMs but at a slight performance penalty. From what I've seen, only certain Elkhart Lake CPUs support in-band ECC so far.
fmyhr - Thursday, March 3, 2022 - link
Thank you! Very much appreciate the info. First I'd heard of in-band ECC.
SunMaster - Thursday, March 3, 2022 - link
When it comes to excelling in single thread applications, which applications are we really talking about in 2022? MS-DOS emulators and CPU tests limited to 1 thread?
badger2k - Thursday, March 3, 2022 - link
Comments like these are a really easy way to show that you have no knowledge of how computer programs work.
SunMaster - Thursday, March 3, 2022 - link
Really. So how about an example where it matters?
SunMaster - Thursday, March 3, 2022 - link
I'm not talking about where single-threaded software exists, because that's just about everywhere. I'm wondering which single-threaded applications it's possible to excel in, ones that actually exist and whose performance actually matters.
TheinsanegamerN - Thursday, March 3, 2022 - link
Games. Multithreaded games still have a single primary rendering thread, or other tasks such as AI, that cannot be parallelized easily or at all, hence why Alder Lake wins at gaming benchmarks, even multithreaded ones.
magreen - Thursday, March 3, 2022 - link
OCR of a long document in Adobe Acrobat. It’s infuriating that it’s still single threaded in 2022 even in the pro version, when it’s such an obviously parallelizable task. But it is what it is.
CiccioB - Friday, March 4, 2022 - link
I wonder if you are serious or just trolling...
If you know that "single threaded software exists, because that's just about everywhere" and are "wondering which singlethreaded applications is it possible to excel in, because they actually exist and their performance actually matter",
then you may just think of all those single-tasking applications that require time to complete.
I do not think it is that difficult to understand, so I just think you are acting as a troll.
TheinsanegamerN - Thursday, March 3, 2022 - link
You could read the review and look at the benchmarks; that may help.
SunMaster - Thursday, March 3, 2022 - link
Today's games are nowhere near single-threaded, even though there is a main thread. The reason Alder Lake does well is that it has several/many cores/threads clocked high, performing well. Both Xbox and PlayStation have had multicore AMD chips for a decade (since 2006 for PS, 2013 for Xbox), which forced developers to target many weaker threads rather than the old-fashioned monolithic design.
SunMaster - Thursday, March 3, 2022 - link
Can't edit my post, it seems, but by multicore AMDs I meant 8-core.
GeoffreyA - Tuesday, March 8, 2022 - link
I would say, ST is really the building block of multi-threading. Get that single brick strong, and the entire wall will be strong.
mode_13h - Wednesday, March 9, 2022 - link
Ah, but it's not that simple. You need a good interconnect, cache, memory system, and clock/power-management.
For instance, just look at Ampere Altra. Even though its single-thread performance is somewhat lacking, it shines at MT.
GeoffreyA - Wednesday, March 9, 2022 - link
Indeed, the mortar and bond style are just as important as the brick.
mirancar - Thursday, March 3, 2022 - link
Most games have some sort of single-thread bottleneck. Also, web browser performance is highly affected by single-thread performance.
This is what most people do 99% of the time on their computers; almost nobody is "rendering".
More cores help up to a certain point, then it becomes useless. Anything above 6C/12T is usually completely useless. Single-thread perf matters much more than a 16-core/32-thread benchmark.
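The diminishing returns described above are exactly what Amdahl's law (also cited further down this thread) predicts. A minimal sketch, where the parallel fraction is an illustrative assumption, not a measured figure:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume a game where 60% of frame time parallelizes (hypothetical figure):
for n in (2, 6, 12, 32):
    print(f"{n:2d} cores -> {amdahl_speedup(0.60, n):.2f}x")
```

With a 60% parallel fraction, 6 cores already deliver a 2.0x speedup against a hard ceiling of 2.5x, which matches the "6C/12T is usually enough" intuition: beyond that, extra cores chase a shrinking remainder while single-thread speed lifts the whole frame.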
Calin - Thursday, March 3, 2022 - link
Office applications. Legacy software. Interpreted code (such as Visual Basic for Applications). Compilation (C or C++) of a single file.
There are many places where "single-core" performance counts, since, if your typical operation lasts only a few seconds, you might not really care to optimize (and going parallel might not be easy, and might not even be possible for some problems).
jcb2121 - Thursday, March 3, 2022 - link
Zwift
Wereweeb - Thursday, March 3, 2022 - link
Virtually every application. Google "Amdahl's Law".
CiccioB - Friday, March 4, 2022 - link
You may be surprised by how many applications still use a single thread, or, even if multi-threaded, are bottlenecked on one thread.
All office suites, for example, use just a main thread. Use a slow 32-thread-capable CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but surely not to the level of using 32 cores, even for complex tables.
Compilers are not multi-threaded. They just spawn many instances to compile more files in parallel, and if you have many cores it just ends up being I/O limited. At the end of the compiling process, however, you'll have the linker, which is a single-threaded task. Run it on a slow 64-core CPU, and you'll wait much longer for the final binary than on a fast Celeron CPU.
All graphics retouching applications are single-threaded. What is multi-threaded is just some of the effects you can apply. But the interface and the general data management run on a single task. That's why Photoshop layer management can be so slow even on a Threadripper.
Printing apps and format converters are single-threaded. CADs are also.
And browsers as well, though they mask it as much as possible. To my surprise, I found that Javascript runs on a single thread for all opened windows: if I encounter problems on a heavy Javascript page, other pages are slowed down as well, despite having spare cores.
In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than having a myriad of slower cores.
Yet there are some (and only some) applications that take advantage of a swarm of small cores, like 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market.
BTW, video conversion can be done easily and more efficiently using a HW converter like those present in GPUs, so you are left with 3D renderers as the only way to saturate whatever number of cores you have.
mode_13h - Saturday, March 5, 2022 - link
> Compilers are not multi-thread.
There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.
> if you have many cores it just ends up being I/O limited.
I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.
> At the end of the compiling process, however,
> you'll have the linked which is a single threaded task.
There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.
> At the end, there are many many task that cannot be parallelized.
There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
CiccioB - Saturday, March 5, 2022 - link
You may be aware that there are lots of toolchains that are not "Google based" nor based on experimental code.
"You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes."
Try compiling something that is not "Hello world" and you'll see that there's no way to keep the files in RAM, unless you have put your entire project on a RAM disk.
"There are more that could. They just aren't because... reasons."
Yes, the fact that making them multi-threaded costs a lot of work for a marginal benefit.
Most algorithms ARE NOT PARALLELIZABLE; they run as a contiguous stream of code where the next piece of data is the result of the previous instruction.
Parallelizable algorithms are a minority, and most of them require a lot of work to perform better than a single-threaded version.
You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a small number of applications, mostly renderers and video transcoders, really take advantage of many cores. Others do not, and mostly like single-threaded performance (either through improved IPC or faster clocks).
mode_13h - Tuesday, March 8, 2022 - link
> Try compiling something that is not "Hello world" and you'll see
My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with a HDD RAID, the build is probably >90% CPU-bound.
As for the toolchain, we're using vanilla gcc and ld. Oh and ccache, if you know what that is. It *should* make the build even more I/O bound, but I've not seen evidence of that.
I get that nobody likes to be contradicted, but you could try fact-checking yourself, instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and found it still to be mostly compute-bound.
> You can easily see this in the fact that multi core CPU in consumer market has been
> existed for more than 15 years and still only a minor number of applications, mostly
> rendered and video transcoders, do really take advantage of many cores.
Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.
Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.
Oh, and don't forget games. Most games are pretty well-threaded.
GeoffreyA - Tuesday, March 8, 2022 - link
"analyzing web browser performance and revealing they're quite heavily multi-threaded"
I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
GeoffreyA - Tuesday, March 8, 2022 - link
As for compilers being heavy on CPU, amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more---obviously in line with the growing language specification---and take a surprising amount of time to compile. Till recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just duplicates compiler instances, caused it to cut through the project like butter again.
mode_13h - Wednesday, March 9, 2022 - link
Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.
Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
GeoffreyA - Wednesday, March 9, 2022 - link
On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6. In VS2022, round about 6.4-7.8 MB.
mode_13h - Thursday, March 10, 2022 - link
So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.
Also worth a look is the CppCoreGuidelines on isocpp's GitHub. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.
Finally, here you'll find some fantastic C++ infographics:
https://hackingcpp.com/cpp/cheat_sheets.html
Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!
China's Institute of Software, Chinese Academy of Sciences, also ran one last year. Presumably, they'll do it again this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students as well.
https://summer.iscas.ac.cn/#/org/projectlist?lang=...
GeoffreyA - Thursday, March 10, 2022 - link
I doubt I'll participate in any of those programmes (the lazy bone in me talking), but many, many thanks for pointing them out, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful. It's been a big step forward.
That cppreference is excellent: I looked at it when I was trying to find a lock that would replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, I think it was std::mutex, just dropped in and worked. But I left the old version in because there's other Win32 code in that module, and using std::mutex meant no more compiling on the older VS, which, surprisingly, still works on the project.
Again, much obliged for the leads and references.
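For anyone making a similar migration, a minimal sketch of the swap described above (class and member names hypothetical). Note that since C++11, a function-local static is itself initialized thread-safely ("magic statics"), so the explicit std::mutex mainly guards any mutable state accessed afterwards:

```cpp
#include <mutex>

// Hypothetical singleton previously guarded by a Win32 CRITICAL_SECTION.
class Settings {
public:
    static Settings& instance() {
        static Settings s;  // C++11 "magic static": thread-safe construction
        return s;
    }
    void set(int v) { std::lock_guard<std::mutex> lk(m_); value_ = v; }
    int get() const { std::lock_guard<std::mutex> lk(m_); return value_; }
private:
    Settings() = default;
    mutable std::mutex m_;  // drop-in replacement for the CRITICAL_SECTION
    int value_ = 0;
};
```

Being standard C++, this compiles anywhere, whereas the CRITICAL_SECTION version ties the module to Win32, which is exactly the trade-off described above.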
mode_13h - Friday, March 11, 2022 - link
Always glad to share!
I know about laziness. I probably should be working on programming puzzles, since I hear job interviews tend to be big on those. The last time I did anything like that was Google's "foobar", which was pretty fun. I did well enough to get an interview, but I didn't pursue it.
GeoffreyA - Monday, March 14, 2022 - link
I really wish I had known about these things, or that they had existed, 10-15 years ago. I had a thirst for programming back then and, if I may say so, would've done well. I've let go of thinking of myself as a programmer any more, but still hope to keep up an acquaintance with code.
mode_13h - Monday, March 14, 2022 - link
Anything to keep your mind active is good!
Especially learning *new* things. The first time I tried to learn another spoken language, as an adult, I could feel the blood rushing to my brain as I struggled to remember and pronounce new words and phrases.
GeoffreyA - Monday, March 14, 2022 - link
Absolutely!
I am learning français, the beautiful language itself, these days (and incidentally, it makes me so disappointed in English, as always). There was a great article on Quanta last week that really got my brain working, too.
https://www.quantamagazine.org/crisis-in-particle-...
mode_13h - Tuesday, March 15, 2022 - link
OMG, I stay away from any cutting-edge theoretical physics. I'm not investing that much effort into trying to understand something that will likely turn out to be wrong.
It'd be different if I had a stake in the matter, but there's more than enough more practical stuff I should be learning.
But, if you enjoy it, and trying to wrap your head around it, then it's far from the worst way you could spend your time!
GeoffreyA - Friday, March 18, 2022 - link
I enjoy it, and wish I were a physicist, but it does drain the mind considerably, and afterwards one feels it's all rather meaningless, and that it's the practical business of life that really counts: love, family, work, etc.
A lot of physicists don't have a career doing physics! It's not a bad field of study, but it doesn't pay the bills for many.
That said, I'm sure all the big quantum computing projects are staffed by some of the best physicists.
GeoffreyA - Saturday, March 19, 2022 - link
Yep, there's big money to be made in this field right now. Intel should jump on the quantum bandwagon any day now, if they haven't already done so.
Silver5urfer - Thursday, March 3, 2022 - link
High respect to the AT team for mentioning the fundamental design flaw of this 12th generation. Intel really screwed up hard. Nobody cares, especially those YouTubers. They messed up the ILM hard. And the AVX-512 fuse-off is another gigantic kick in the face; per the latest update, Intel will be fusing it off at the factory. If we look at the silicon area on the ADL processors, the P-cores occupy a good chunk of space, plus AVX-512 allows the P-core to fully unleash its performance (no more E-cores hampering the uncore, power plane, and overall CPU performance).
Prime reason to skip this entire 12th gen, especially with the new rumors saying RPL's LGA1700 700-series chipset might be DDR5-only. So you either get this haphazardly designed ILM, which requires the end user to perform a socket mod for DDR4 (I would not do it, despite loving to tinker, because the torque and fitting specs are not public; Igor clearly mentioned this), or buy the uber-expensive DDR5 kits, which have two flaws of their own: price-to-performance, and dual-rank kits being mandatory for the ADL IMC to maximize speed and performance, a.k.a. needing an APEX or Unify-X or Tachyon, all top-end HWBot-grade boards.
Intel really messed this up: a solid CPU arch, but not from the point of view of an enthusiast who wants to use a PC for decades going forth. AMD also; AM4 has its own share of issues. AGESA 1.2.0.5 is busted, and they still did not fix the IOD USB issues through firmware; it has been plaguing the platform. And now no more X3D CPU refresh, which would have perfectly fixed all the firmware problems, plus the IOD, DRAM, and WHEA issues. Sad. Still, Ryzen is best if you do not want to OC or tweak: just enable PBO2 and that's it, let it do its job, and do not touch DRAM past 3600MHz. For Intel, LGA1200 10th gen is best for SMT performance and all workloads; for those who want PCIe 4.0 (still not a big deal, because there's not much use for it), an 11900K is fine, but it's the absolute worst class of binning, and a poor IMC is a strict no-no. The biggest loss going with LGA1200 Z590 is that the DMI speed is very low. But since the majority will not saturate the NVMe on a constant load, it is okay; a shame, since if the 10900K had the 11th gen's DMI it would have been solid, as both X570 and Z590 are practically the same in PCH link lanes. Only on the CPU side does Ryzen have extra USB, but with the crapping out it doesn't make sense, at least to me.
AM5 needs maturity as well, first customers will always end up being guinea pigs. Still I would like to see how AMD plays their game, and I'm looking forward since it won't have BS E-Core crap. Full fast fat cores.
Makaveli - Thursday, March 3, 2022 - link
No USB issues here on AGESA 1.2.0.3 Patch C, or WHEA.
whatthe123 - Thursday, March 3, 2022 - link
Probably because you're not actually making use of the bandwidth. I didn't run into the problem either until adding in a PCIe 4 drive and a USB HDD.
Makaveli - Thursday, March 3, 2022 - link
I have a 6800XT in PCIe 4.0 mode
A Corsair MP600 1TB in 4.0 mode
multiple USB devices including a Brio 4K webcam, an external USB microphone, and a Western Digital Passport drive that I use occasionally for temp backups.
No USB issues!
whatthe123 - Thursday, March 3, 2022 - link
Then maybe pure luck. 5900X/980 Pro/external drives through USB 3.0 + C. I can arbitrarily hit the USB drop problem with this setup even on stock settings. I bought the system for the throughput so I just deal with it, but it has absolutely never been fixed, even with the latest AGESA.
SunMaster - Thursday, March 3, 2022 - link
No USB issues on 1.2.0.6, nor WHEA. Fabric on 1900.Mike Bruzzone - Thursday, March 3, 2022 - link
Alder 6P and 4P are from area-optimized mask sets and contain no E-cores; I believe you articulated the thread-direction firmware performance and power hit. @Silver5urfer, on "who wants a PC for decades going forth": I do, and I always buy top bin at prior-gen run-end clearance prices, with the objective of a 10-year system life. 5800X, here I come, April/May. My 4M-point databases will do fine, my extra budget will go to a fast NIC, memory, and a 1080, and I'll be in performance heaven. Mike Bruzzone, Camp Marketing
Leeea - Thursday, March 3, 2022 - link
No USB issues here, or other issues.
bandwidth wise, think I am using it all
5900x
x570 board
rx6900 @ PCIe 4
980 pro 2 TB @ PCIe 4
3x SATA in storage spaces raid 0
2x SATA in normal
USB audio recorders (roland), USB audio playback (also roland)
it all works fine
CiccioB - Friday, March 4, 2022 - link
I think you made all this up just to find some weak point in Intel's architecture, while compensating with a subtle (and what you probably consider secondary) I/O problem for AMD. I would like to quote this post in the future, when AMD is limited to DDR5 only with Zen4, and re-post this statement: "Prime reason to skip this entire 12th gen, esp with the new rumors saying RPL LGA1700 Z700 chipset might be DDR5 only, so you get this haphazardly designed ILM which requires end user to perform a Socket mod [...] for DDR4 or buy the uber expensive DDR5 kits which have 2 flaws on their own - Price to performance,"
Apart from the fact that RPL won't be DDR5-only, but most probably more DDR5-oriented (meaning you'll find fewer DDR4 boards for it, just as you find fewer DDR5 boards for ADL), you can buy a 600-series chipset board with all the features you want that supports DDR4.
When Zen4 is out, you will have to buy "uber expensive DDR5 kits which have 2 flaws on their own - Price to performance". Let's see what you'll say about the corner AMD has painted itself into with that choice. I would just think that AMD will delay Zen4 as much as possible, until DDR5 becomes available at an affordable price.
In the end, with more time and knowledge, you'll see that the E-cores are not that much of a hindrance to the P-cores, but rather a genuinely clever way to support heavily multi-threaded jobs, much better than beefed-up cores relying on low-efficiency SMT.
AMD will arrive at that as well. With Zen5 they have already announced their usual mock-up copy (a stand-alone efficient core) of what will by then be the competition's two-year-old solution. With Zen6 they will probably integrate it as Intel has done today.
In 4 years (possibly) you'll have AMD with the same architectural big.LITTLE solution Intel has now. And I bet my cat that when this happens, we will all hear from you and the AMD fanboys about how revolutionary and game-changing that choice is.
It was already done in the past, it will be done again.
And BTW, I have yet to see an AMD motherboard supporting all the technology present on Intel motherboards without a single issue, as Intel ones do. For me, slightly slower but rock-steady performance beats something that is sometimes faster but more often doesn't work as expected.
Silver5urfer - Saturday, March 5, 2022 - link
Intel 12th gen is garbage; the page-2 ILM coverage seals the fate of this trash LGA1700, it's over. As for AMD, their Zen 3 is flawed. That's why I suggest either Intel 10th gen or AMD Ryzen 5000 (*only with PBO and 3600MHz, nothing more).
As for smearing fanboy crap on me: do you think you have any ounce of credibility left? I literally gave links on where the AVX-512 and P-core design is much superior, and yet you are the clown who comes in and says big.LITTLE is good; you do not have technological knowledge at all. And on what basis do you make your stupid point that AMD's Zen 5 will copy Intel? AMD never copied Intel. Intel is the one copying AMD's chiplet design and MCM, in EMIB format, for Xeon SPR. And Intel is the one that copied ARM's design. AMD is not going to make this junk for desktop; they want leadership, and they will get it. And Intel is going this way because their P-cores cannot handle more than 8P, as their thermal ceiling is low. The 12900K is the proof of that.
What a damn clown.
nandnandnand - Sunday, March 6, 2022 - link
Leaks point to AMD putting 8x Zen 5 cores and 16x Zen 4C cores on desktop (Granite Ridge), as well as 8x Zen 5 and 4x Zen 4C in mobile/desktop APUs (Strix Point).
Targon - Friday, March 4, 2022 - link
AGESA or AGESA v2 for the problems you are fighting with? If you have USB ports that are not controlled by the CPU, AGESA isn't necessarily the source of those problems.Calin - Thursday, March 3, 2022 - link
Last page: "Intel has rated the i3-12300 at base frequencies with a TDP of 60 W and a 69 W TDP when at turbo clock speeds." - the TDP was mentioned on the first page as 89 W (basically a tie with AMD's 88 W).
"Due to AMD's Zen architectures, Intel has been on the ropes in both performance and value for a while." - Intel suffered a lot from their inability to improve their lithography, they could have been competitive in cost, performance and power use with better lithography
" One of AMD's most cost-effective processors remains the Ryzen 5 5600X, with six cores, eight threads" - it has 12 threads.
All in all, AMD still makes sense in an "upgrade only the processor" scenario - though that could be a niche within a niche. And, apparently, the greatest competition the i3-12300 with DDR5 has is from the i3-12300 with DDR4 (or maybe an i5 with DDR4).
Wereweeb - Thursday, March 3, 2022 - link
Intel's dies are massive and entirely fabbed in Intel 7. They're only competing in cost because Intel is deliberately choosing to sacrifice their famous "Intel margins" to get back into the market.
Failing to do that might've meant inactive fabs, and if fabs aren't making money they're losing money (since new fabs are so expensive and there's a definite time frame in which they can recuperate the investments into cutting-edge tooling, after which wafer prices will tend to fall).
This is Intel at its most desperate yet, and I'm loving it.
Calin - Friday, March 4, 2022 - link
Intel still reports very high "Gross Margins". How much other activities (cough OEM bribes cough) eat into this might not be truly evident.
As for "Failing to do that might've meant inactive fabs, and if fabs aren't making money they're losing money"...
AMD simply can not produce enough - so if Intel stopped fabrication of those inferior processors (of the last at least couple of years), prices would have exploded.
While I don't condone the US government saving banks involved in the sub-prime mortgage crisis, at this moment at least Intel truly is too big to fail.
Lbibass - Thursday, March 3, 2022 - link
In my opinion, running both of these CPUs with JEDEC-standard memory is an incredibly stupid idea that makes this review significantly less useful. DDR4-3600 CL18 or 3200 CL16 are super affordable. DDR5 is not affordable right now, but purchasing DDR5 with similar latencies (timing in terms of ns, not just CAS latency) would result in a much more effective review.
TheinsanegamerN - Thursday, March 3, 2022 - link
So where is this DDR5 that has equivalent latency? Also, no matter which DDR4 kit is used, the peanut gallery will complain it isn't the RIGHT kit.
AlB80 - Thursday, March 3, 2022 - link
Imho, running both of these CPUs with an XMP profile is an incredibly stupid idea for a review. CPU, motherboard, and stick makers guarantee only JEDEC profiles.
Wereweeb - Thursday, March 3, 2022 - link
If you mean warranty, how are they going to prove that you were running it with XMP on?
AlB80 - Saturday, March 5, 2022 - link
Guarantee of stable operation. XMP is always OC. Also, all XMP sticks have very low JEDEC profiles.
Oxford Guy - Sunday, March 6, 2022 - link
If JEDEC’s actual mission is stability, ECC should have been required for many, many years now.
Oxford Guy - Sunday, March 6, 2022 - link
The Apple Lisa had ECC. That was 1983 tech. I am less than impressed with JEDEC and its alleged concern with stability.
mode_13h - Tuesday, March 8, 2022 - link
> The Apple Lisa had ECC. That was 1983 tech.
Integrated circuit fabrication technology changes, and no doubt DRAM chip design along with it. Maybe memory errors were relatively more common in the memory available at the time, and the Lisa surely needed a lot of it.
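To illustrate the principle behind that ECC, here is a toy single-error-correcting Hamming(7,4) sketch in Python. Real DRAM ECC uses a wider SECDED code over 64-bit words; this is only the idea, not any particular chip's implementation.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct any single flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if clean
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                      # simulate a single-bit memory error
print(hamming74_decode(code))     # -> [1, 0, 1, 1], error corrected
```

Any single flipped bit, data or parity, yields a non-zero syndrome that points straight at the offending position.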
mode_13h - Tuesday, March 8, 2022 - link
The problem with ECC is that it adds cost. So, if merely adequate stability can be delivered without it, then they're not going to require it.
Oxford Guy - Thursday, March 3, 2022 - link
JEDEC has been baffling for quite some time now. No ECC mandate, yet ultra-high latency with low clocks in the name of stability.
Cooe - Thursday, March 3, 2022 - link
The Ryzen 3 5300G APU has literally HALF the L3 cache of other Zen 3 CPUs, so trying to use that part to claim AMD couldn't make a competitive quad-core CPU right now if they needed to, based on just that data alone, is pretty god-tier idiotic. I expect better of this site.
TheinsanegamerN - Thursday, March 3, 2022 - link
Ok, so where is the Ryzen quad core with all that cache?
lmcd - Thursday, March 3, 2022 - link
Disabled with the cores that had the cache. Oh wait!
Targon - Friday, March 4, 2022 - link
AMD went from 4 cores per CCX in the Zen2 generation to 8 cores per CCD in the Zen3 generation. As a result, AMD doesn't have non-APU chips with only 4 cores. The monolithic design means AMD isn't using chiplets for that 5300G. If you are limited by fab capacity, do you divert a lot of capacity to low-margin, low-end products?
Zen4 may switch things up a bit, or AMD could potentially put low-end Zen4 on 7nm, since having the best efficiency and performance won't be needed for low-end products.
Calin - Friday, March 4, 2022 - link
AMD couldn't make a competitive 4-core Zen3 CPU at the price Intel is selling its latest generation for. Comparisons with cheaper or more expensive processors are useful only to a point... And making a true comparison (i.e. platform costs) is a quagmire of "if this, if that, if ...".
Not to mention that - at least for a while - the prices will be volatile, so no comparison based on "price bracket" will be long-lived.
Targon - Friday, March 4, 2022 - link
When Zen3 chiplets are based around 8 cores per CCD, the only quad-core chips will be monolithic APUs. With any luck, AMD will relegate Ryzen 3 CPUs to 7nm while Ryzen 5, 7, and 9 will be on 5nm.
lmcd - Friday, March 4, 2022 - link
7 and 5 don't necessarily share the same libraries and quirks. That's a lot of engineering resources for questionable gain. I don't disagree that it'd be nice, but I don't think it's likely.
Makaveli - Thursday, March 3, 2022 - link
DDR4-3200 CL22
Don't know anyone using DDR4 with that high a CAS latency.
Going to CL14 memory will most likely remove the gap in gaming.
Oxford Guy - Thursday, March 3, 2022 - link
CL16 3200 RAM was cheap many, many years ago and I have not heard of a single stability problem with any platform other than Zen 1, which was quite special.
It’s preposterous to run 3200-speed RAM at anything slower than CL16.
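For anyone comparing kits, the first-word latency math behind these CL arguments is simple: nanoseconds = 2000 x CL / (MT/s), since CAS latency is counted in memory clock cycles at half the transfer rate. A quick sketch (the kits listed are just illustrative examples):

```python
def first_word_latency_ns(cas_latency, transfer_rate_mts):
    """First-word latency in ns: CL cycles at the memory clock (half the MT/s)."""
    return 2000.0 * cas_latency / transfer_rate_mts

# The JEDEC DDR4-3200 CL22 profile vs. a common XMP kit vs. early DDR5
for name, cl, mts in [("DDR4-3200 CL16", 16, 3200),
                      ("DDR4-3200 CL22", 22, 3200),
                      ("DDR5-4800 CL40", 40, 4800)]:
    print(f"{name}: {first_word_latency_ns(cl, mts):.2f} ns")
```

This prints 10.00, 13.75, and 16.67 ns respectively, which is why CL22 at 3200 looks so slow next to a cheap CL16 kit.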
mode_13h - Thursday, March 3, 2022 - link
Regarding the review:
* Glad to see the DDR4 vs. DDR5 comparison.
* Sad to see DDR5 used for remainder of benchmarks, given current price & availability. People buying a sub-$150 CPU won't be using DDR5, making these benchmarks unrealistic.
* Sad to see minimal analysis of power consumption. I believe much of their advantage over the Ryzen R3 5300G comes from burning more power and DDR5, but without power measurements on individual benchmarks, we can't compute perf/W or make other conclusions about this.
* Glad to see the 5300G showing up, where it did.
* Glad to see the i7-6700K (and i7-2600K) sometimes making an appearance. So very interesting that a couple benchmarks showed the i7-6700K with roughly equal performance!
* On the last page, Turbo power is mistakenly stated as 69 W, although the first page chart correctly lists it as 89 W.
* Please ask Ian to open source his 3D Particle Movement benchmark, or stop using it. As the rest of your benchmarks are publicly available & independently verifiable, this is only fair.
Regarding the CPU:
* Definitely a performance bargain, if you can get it near list price!
* Sad to see ark.intel.com doesn't specify ECC support (which IMO means probably not... but check the docs of any LGA 1700 ECC-capable motherboard to be sure).
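On the perf/W point, the arithmetic is trivial once per-benchmark power is published; a toy calculation (all the numbers below are hypothetical placeholders, not measurements from the review):

```python
def perf_per_watt(score, package_watts):
    """Efficiency metric: benchmark score per watt of package power."""
    return score / package_watts

# Hypothetical example: a part scoring 1900 at its 89 W turbo limit
# vs. a part scoring 1700 at 65 W. The slower chip wins on efficiency.
a = perf_per_watt(1900, 89)
b = perf_per_watt(1700, 65)
print(f"{a:.1f} vs {b:.1f} points/W")
```

Without per-test power numbers, none of these ratios can be computed from the review's data, which was the complaint.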
lmcd - Thursday, March 3, 2022 - link
Ironically, DDR5 benchmarks can be used to get a sense of what using higher-speed DDR4 can unlock.
Calin - Friday, March 4, 2022 - link
"Sad to see DDR5 used for remainder of benchmarks,"
As this is a lower-performance processor, DDR5 wouldn't bring too much to the table. They report a 5-10% increase in performance with DDR5, with an average of some 6%.
So, basically nothing would change in the benchmarks - a 10% performance difference could easily be ignored for many other factors (price, availability, necessary power/cooling, ...)
mode_13h - Saturday, March 5, 2022 - link
> So, basically nothing would change in the benchmarks -
> a 10% performance difference could easily be ignored
That's ridiculous. 10% is certainly significant. I would consider <= 1% to be down in the noise.
Calin - Monday, March 7, 2022 - link
3% used to be Anand's Anandtech "noise". I wouldn't care for a 10%: 25 seconds of compile time down to 22... or, editing images, 50 images taking 54 seconds instead of one minute.
That's the reason Intel used to compare new processors to 3 generations old ones (3-5 years old). The improvement over multiple generations grew to a nice 25% or more (at least in some benchmarks). But, if all you do takes seconds or minutes, that 10% reduction in time (or 10% increase in throughput) is almost never truly useful.
mode_13h - Tuesday, March 8, 2022 - link
> But, if all you do takes seconds or minutes, that 10% reduction in time
> (or 10% increase in throughput) is almost never truly useful.
I'm not talking about upgrading for an absolute increase of 10%. However, 10% is a lot of error to stack with whatever else you're comparing against.
Either the accuracy of the benchmarks matters or it doesn't. If not, then obviously we don't need to bother about 10%. If it does, then 10% is too much to ignore.
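A quick sketch of the "error to stack" point, assuming independent relative errors combine in quadrature (the usual propagation rule):

```python
import math

def stacked_error(*relative_errors):
    """Combine independent relative errors in quadrature."""
    return math.sqrt(sum(e * e for e in relative_errors))

# Two comparisons each carrying 10% uncertainty stack to ~14%,
# while two 1% "noise-level" errors stay down around 1.4%.
print(f"{stacked_error(0.10, 0.10):.3f}")   # ~0.141
print(f"{stacked_error(0.01, 0.01):.3f}")   # ~0.014
```

So treating 10% as noise means any two-way comparison can silently be off by roughly 14%, while genuine 1%-level noise stays negligible.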
Ryan Smith - Friday, March 4, 2022 - link
"Sad to see DDR5 used for remainder of benchmarks, given current price & availability. People buying a sub-$150 CPU won't be using DDR5, making these benchmarks unrealistic."
Including the DDR4 vs. DDR5 numbers was our compromise here. We're going to be using this dataset for a long time going forward; it didn't make much sense to base everything around DDR4 and thus unnecessarily kneecap the CPU in current and future comparisons.
Alistair - Thursday, March 3, 2022 - link
I would like to point out that it has been five months already, and I still cannot buy a single quad-core CPU. Intel is teasing us with a good cheap product, but it doesn't actually exist. If and when it finally shows up, it will probably be overpriced (over $200 CAD?), so this product might as well not exist.
skydiverian - Saturday, March 5, 2022 - link
It's been 2 months since non-K Alder Lake CPUs launched (including all quad-core SKUs) and they are definitely available in the UK from reputable independent retailers that specialise in computer equipment. The 12100 is available & in stock for £135, around £30 more than the 10100 and £5 less than the 10300. Not great prices, but from experience fairly typical for the market, at least in the UK.
The US isn't great, though the usual suspects do have some options. Considering that everyone seems to have the 12xxxK SKUs in stock, it will likely be easy to get whatever you want in a month or 2.
bwj - Thursday, March 3, 2022 - link
The fact that your i3 beats your i5 in Speedometer 2 implies a problem with the platform. Windows should be keeping a user-interactive process on the P cores all the time, but it seems from these results that Chrome's threads are wandering between P and E cores. There's really no other explanation for why the i5-12600K with 11% higher clocks gets a lower score.
Personally I find the E cores more of a hazard than a benefit, and I have them disabled.
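For what it's worth, rather than disabling the E-cores outright, you can also pin a specific process to chosen cores; a minimal sketch using Python's Linux-only `os.sched_setaffinity` (on Windows you'd use Task Manager's affinity setting or `start /affinity` instead). Which logical CPU numbers map to P-cores varies by board/firmware, so treat the core set here as an assumption:

```python
import os

def pin_to_cores(pid, cores):
    """Restrict a process to the given CPU cores and return the new mask (Linux only)."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0) to core 0; on Alder Lake the P-core
# logical CPUs are typically the lower-numbered ones.
print(pin_to_cores(0, {0}))
```

That keeps a latency-sensitive process from wandering onto E-cores without giving up the E-cores for everything else.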
brantron - Thursday, March 3, 2022 - link
The scores are all too low. Prior reviews used Chrome 92, which is missing a significant V8 update for Windows.
Mobile CPUs also exhibit this behavior, due to more hardware-managed power states. It is a royal PITA to find the secret handshake that maintains their litany of peak clocks per CPU core, ring bus, memory controller, RAM, etc.
My Tiger Lake i5 goes from 160 to 200 in Speedometer by adding "processor energy performance preference policy" to the Windows power advanced power settings menu. Reducing it to 0% raises the minimum clock speed to about 3 GHz.
Alder Lake CPUs with E cores introduce another variable with the hardware thread scheduler. There is also a larger ring bus on the i5 12600K, which increases latency.
bwj - Thursday, March 3, 2022 - link
Yes, the scores are extremely low. With current Chrome/Linux on an i7-12700K with the E-cores disabled, I get 305.
Considering that the browser is a compiler for JavaScript, it doesn't make a lot of sense to freeze its version. It would be nice if we could get a rolling picture, based on a few key reference systems, of what a buyer *today* would experience with *today's* software.
Makaveli - Thursday, March 3, 2022 - link
Those Speedometer scores have always been off. Take a look at the thread in the forum with user-posted scores.
https://forums.anandtech.com/threads/how-fast-is-y...
kpb321 - Thursday, March 3, 2022 - link
Anyone else find the DDR4 vs DDR5 graphs annoying? I realize they are sorted by performance, but with only 2 entries in each one I think it would make a lot more sense to just have a consistent order.
CiccioB - Friday, March 4, 2022 - link
That would just introduce the problem that you must check whether a lower bar is better than a longer one, or vice versa. Ordering by performance lets you easily see which is faster and by how much.
Mike Bruzzone - Thursday, March 3, 2022 - link
Beyond industrial embedded for control plane / processing [including coprocessing acceleration, point of sale for example], I'm not sure why quads remain a subject of interest; there are sooo many recent back-generation options, why even produce them today for the mass market? Ryzen quads failing out of the sort are relied on in 1x8 CCX + 1x4 CCX twelve-core configurations, so they're still utilized, but a standalone 4C/8T? There are so many good used options, and used hexa [Coffee Lake] is gaining volume in the secondary market on system upgrades and hexa reclaim. All Alder i3-12300 available in the WW channel today equals 0.00047% of the full line available. The top two ADL SKUs are the 12900K at 40.8% and 12700K at 46.7%, together 87.6% of the full line; the number 3 SKU volume-wise is the 12400 at 2.5%, and number 4 is the 12600K/KF at 2.4% combined.
On finished-component yield and silicon performance, all Alder Lake are essentially 12900K/12700K; that is similar to Coffee Lake Refresh octa before disablement, where most 9th gen started out as a 9900_.
On the AMD side, you can purchase a used 2600 hexa for less than $100, and on WW channel availability (both used and new) there are 126x more R5 2600 than i3-12300.
Among AMD quads, the 5300G currently has 37x more channel availability, and the R3 3300X 25x.
Mike Bruzzone, Camp Marketing
Wereweeb - Thursday, March 3, 2022 - link
1) Speak English, please
2) An Alder Lake quad-core is equal to or better than a Ryzen 5 2600. All benchmarks also show substantial improvements in 1% lows. What matters most is overall performance, not simply the number of cores.
3) The vast majority of people are just fine running a modern quad-core.
Wereweeb - Thursday, March 3, 2022 - link
And yes, by "vast majority of people" I mean people, not "hurr durr 360hz monitor" g*mers.
Alistair - Friday, March 4, 2022 - link
Actually, a quad core is great for 360 Hz gaming also; the problem is the locked clock speed.
If Intel would release an unlocked quad core that can run at 5 GHz+, it would be a dream chip.
that's why they don't release it, they want gamers to buy useless 16 core CPUs for gaming, as the game FPS is higher from cache and clock speed, not core count
mode_13h - Saturday, March 5, 2022 - link
I definitely agree that you shouldn't have to buy more cores just to get higher peak clock speeds.
With Intel's Xeon CPUs, it would typically be the case that models with fewer cores had higher base & peak clock speeds. I think that started to change when AMD set up their product stack so that each step enabled more cores and/or higher clock speeds. As Intel moved to 6- and 8-core mainstream CPUs, they did the same thing.
Where AMD sort of bucked the trend was with the 3300X. That little screamer was an absolute performance bargain. I almost bought one, a couple times - first, when it launched, and then I passed on it because it was selling above list price when it came back in stock in late 2020 or early 2021.
Anyway, I wish AMD would do something like that with a Zen3 or Zen3+, though it's looking unlikely.
Mike Bruzzone - Friday, March 4, 2022 - link
Hi Wereweeb,
Agreed, modern quads work great for Office essentials and home essentials, including facility management and security.
I'm speaking English; are we communicating? I think so. Differences of dialect across practice areas can, however, require interpretive learning, conferring with another practice area for comprehension across practices and functions to achieve dialogue, and I think we're doing so.
"An Alder Lake quad-core is equal or better than a Ryzen 5 2600. All benchmarks also show substantial improvements in 1% lows. What matters most is overall performance, not simply the number of cores."
Encoding, transcoding, and compiling favor octa-centric advantages.
AMD with the 3300X went after the Ivy Bridge EE quad and won, and there are plenty of priced-right E5 1600 v2 quads and a bunch of v2 hexas; plus Haswell EE, all core counts, just entered the used market. Plenty of good choices, especially if you have a board that can be upgraded.
Channel data this last week:
Core Haswell desktop returns to secondary market + 46.6%, and mobile + 7.5% in prior eight weeks and the replacement trend is from Haswell forward in time.
Ivy Bridge EE + 161.11% octa/hexa return to used market prior eight weeks presents a telling indicator.
Haswell Extremes all SKUs + 180% in the prior eight weeks is a strong desktop upgrade indicator.
i7 Refresh + 14%, 4790 comes back to secondary + 17.8% and 4790K + 14% that is 10% of 90_
i5 Refresh 4590 comes back to secondary + 403% and 4690 sells down < 69% at 19% of 4590
i3 Refresh + 81% and 4150 comes back + 161%
Pentium Refresh + 5.5%
Celeron Refresh + 14.5%
i7 Original 4770 + 131% and K + 217% that is 24.1% of 70_
i5 Original + 169% and 4570 + 98%, 4570S + 952%, 4570T + 49.7% and 4570T is 26.3% of 4570_ all varients
i3 Original + 4% and 4130T + 13.7%
Pentium Original + 18.8% and G3220 comes back to secondary + 21.8% followed by 3420 + 19.2%
More in the comment line (several comments, actually); keep scrolling down until you find last week's Intel channel data and sales trend:
https://seekingalpha.com/instablog/5030701-mike-br...
mb
nandnandnand - Thursday, March 3, 2022 - link
The explanation from here should be mentioned for AppTimer: GIMP, since the results are so weird:
https://www.anandtech.com/show/16214/amd-zen-3-ryz...
Maybe the test should be dropped entirely.
Slash3 - Thursday, March 3, 2022 - link
It is, in fact, a deeply stupid test with no value.
mode_13h - Thursday, March 3, 2022 - link
"As it turns out, GIMP does optimizations for every CPU thread in the system, which requires that higher thread-count processors take a lot longer to run."
Holy cow. I don't believe that. There's something else going on there, like maybe code using a stupid spinlock or something... which could actually be the case if some plugins or the core app used libgomp.
At the time that article was written, the only Big.Little CPUs were in phones (okay, let's forget Lakemont - nobody was running GIMP on a Lakemont). There was absolutely no reason for it to do per-thread optimizations!
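An AppTimer-style measurement is easy to reproduce yourself; a rough sketch that times a trivial interpreter launch as a stand-in for launching GIMP (the command being timed is purely illustrative):

```python
import subprocess
import sys
import time

def time_startup(cmd, runs=3):
    """Launch a command repeatedly and return the fastest wall-clock time in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        best = min(best, time.perf_counter() - start)
    return best

# Time a trivial Python launch; swap in something like ["gimp", "-i", ...]
# to reproduce the thread-count-dependent startup behavior discussed above.
print(f"{time_startup([sys.executable, '-c', 'pass']):.3f} s")
```

Running this on machines with different core counts is the quickest way to confirm or refute the per-thread-optimization theory.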
lmcd - Friday, March 4, 2022 - link
No one ran anything on a Lakemont, as no one ran a Lakemont.
mode_13h - Saturday, March 5, 2022 - link
Right. I was just noting that for completeness.
nandnandnand - Sunday, March 6, 2022 - link
Lakefield, you mean! Although Intel does appear to have had a Lakemont; Google "Intel Lakemont" to find another deceased product.
I have used GIMP on an RPi4 (which can be rough but usable), so I can imagine Lakefield would be better. Lakefield was too expensive for relatively bad performance (couldn't run all 5 cores at once, apparently). Intel gets another swing at it with the Pentium 8500 and other Alder Lake chips.
mode_13h - Tuesday, March 8, 2022 - link
Thanks for the correction.
Yeah, I get the feeling Lakefield was testing out a few too many new technologies to be executed well. At least it served as a test vehicle for Big+Little and their die-stacking tech.
Kyrie - Friday, March 4, 2022 - link
The main problem with the 12300 is the existence of the 12100(F).
Alistair - Friday, March 4, 2022 - link
The quad cores are garbage because they are either not available or incorrectly priced.
Right now in Canada I can buy the 12400F for $199, and the i3, on the other hand, is $220. ... Pointless.
yetanotherhuman - Friday, March 4, 2022 - link
Those fans look to be using turbulent flow, not laminar. They blow down, it seems. What a stupid name.
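The physics agrees, for what it's worth: a rough Reynolds-number estimate shows a case fan's airflow is comfortably turbulent (the velocity and length-scale numbers below are ballpark assumptions, not measurements):

```python
def reynolds_number(velocity_ms, length_m, kinematic_viscosity=1.5e-5):
    """Re = v * L / nu; air at room temperature has nu ~ 1.5e-5 m^2/s."""
    return velocity_ms * length_m / kinematic_viscosity

# Rough numbers for a 120 mm case fan pushing air at ~2 m/s:
re = reynolds_number(2.0, 0.12)
print(f"Re ~ {re:.0f} -> {'turbulent' if re > 4000 else 'laminar'}")
```

Re on the order of 16,000 is far above the few-thousand transition range, so "laminar flow" fan marketing is indeed just a name.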
AshlayW - Friday, March 4, 2022 - link
For 150 USD or £140 this is a really nice product from Intel. Good to see some good value/budget options. Normally I would scoff at a quad core, but the Golden Cove cores here are strong enough that it does really well for itself. AMD is in a spot of trouble if they don't lower the 5600X's price.
porina - Friday, March 4, 2022 - link
Did Cinebench R23 change behaviour compared to earlier versions? That's quite a difference in DDR4 vs. DDR5 scaling in multi-thread. Up to R20, it seemed insignificantly affected by RAM. I quickly tested R23 on a 6700K at 2133 vs. 3200 and saw no significant difference there. So I'd question that specific result, unless DDR5 does something with R23?
erotomania - Friday, March 4, 2022 - link
R23 seems to be the same on the surface, just with the addition of an adjustable looping timer. Perhaps running the test for 10 or 30 minutes shows the RAM differences much better than 1 run of several seconds to several minutes, depending on core count.
brantron - Friday, March 4, 2022 - link
Gear 1 adds a few watts, which may exceed a power limit. Peak power of 68 watts was with DDR5, which requires gear 2.Some 65w Rocket Lake CPUs do the same thing. It can be overridden.
porina - Saturday, March 5, 2022 - link
Good point, fixed power limits can cause what you described. If that is the reason, would it not apply to R20 also? Unless R23 does behave very differently to R20.
eastcoast_pete - Friday, March 4, 2022 - link
Thanks Gavin! While I agree with much of what you wrote, I have one question: why test a decidedly budget CPU only in a clearly premium-level board, with also-not-so-cheap AIO cooling? Both cost a lot more than the i3 itself. Yes, I assume you're doing so to minimize differences from tests of better and pricier CPUs, but I really doubt a $130 CPU would find itself in a high-end board with that AIO cooling attached. Wouldn't it make sense to test a CPU in its "natural habitat", so in a budget socket 1700 board with the stock cooler on it? Just wondering.
cowymtber - Saturday, March 5, 2022 - link
AMD doesn't want to be mentioned anywhere near the term "budget" going forward. AMD's goal is to assume the premium/luxury-class role (selling $15k EPYC 3D-stacked Genoa). Chasing the low end makes it difficult to attain high margins. This is why we have not seen the low-end Zen 3 updates to this point.
With Zen 4, we will see the 1M L2 + 64M 3D stacked-fed L3, full-fat cores. The raw performance in single- and multi-thread will render those Intel E cores worthless. The 7950X3D will give Threadripper-class multithread, along with fantastical single-core performance.
It's not that AMD doesn't care about the low end anymore... they just don't care about the low end anymore.
nandnandnand - Monday, March 7, 2022 - link
The 8-core chiplet with high yields makes 4-core parts pointless for AMD to produce, and that won't change anytime soon, since Zen 4 and probably Zen 5 will use 8-core chiplets.
https://videocardz.com/newz/amd-rumored-to-launch-...
The real "problem" is that AMD hiked prices during Intel's stumbles. A good strategy that made them lots of cash during a chip shortage/supply crisis. But if that leak is correct, they will launch a 6-core near $100-120 to counter budget Alder Lake chips like the i3-12100F.
AMD could put Van Gogh on AM5 for the DIY market. That would use 7nm while other products move down to 5nm. They will also have basic graphics on Zen 4 Raphael, which would allow office-type builds without discrete GPUs. Finally, there is the Monet on GloFo 12LP+ rumor. Even if that were laptop-only, it could be an impulse buy (it has display output).
AM5 will only be a good budget option once DDR5 prices come down, but the Zen 3 price cuts and newly rumored CPUs keep AM4 in the running.
Qasar - Tuesday, March 8, 2022 - link
"The real 'problem' is that AMD hiked prices during Intel's stumbles." And it's the same "problem" Intel had pre-Zen, yet very few seemed to complain about it then. What's your point?
Some really need to let this go. It's like some think AMD should have kept their prices low, as that's what they did before they had the performance to go with those prices, like, you know, Intel did all those years?
mode_13h - Tuesday, March 8, 2022 - link
> office-type builds without discrete GPUs.
Office-type PCs haven't had dGPUs for more than a decade!
Heck, I used a Sandy Bridge i7-2600K without a dGPU, and it was entirely fine at 1440p.
eloyard - Sunday, March 6, 2022 - link
In Poland there are pretty much no compatible Intel boards under $100, while there are plenty of B450 and a few B550 boards available, the cheapest of them at $50.
Intel up to their usual market-misleading stuff? Why am I not surprised.
529th - Sunday, March 6, 2022 - link
Why have AnandTech's benches excluded CS:GO?
MDD1963 - Monday, March 7, 2022 - link
Wonder just how many purchasing the 12300 are also getting Z690... and DDR5? :)
mode_13h - Tuesday, March 8, 2022 - link
Exactly.
dicobalt - Friday, March 11, 2022 - link
DDR5-4800 costs 250% of DDR4-3200 but only gives a 10% performance improvement. I keep telling people the DDR5 launch is premature; it won't make sense until Q4 2024 to Q1 2025, when all the major memory manufacturers finally have new fabs online.
kath1mack - Thursday, April 14, 2022 - link
Looks great
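To put rough numbers on the DDR5-4800 vs DDR4-3200 value point raised above, here is a quick sketch. The 2.5x price ratio and ~10% performance gain are taken from the comment, not measured data; the bandwidth figures are theoretical peaks for a dual-channel (2x64-bit equivalent) setup:

```python
# Sketch: theoretical peak bandwidth and rough value comparison,
# DDR4-3200 vs DDR5-4800. Peak bandwidth per 64-bit channel equivalent
# = data rate (MT/s) * 8 bytes per transfer.

def peak_bw_gbs(data_rate_mts: int, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s for the given channel count."""
    return data_rate_mts * 8 * channels / 1000

bw_ddr4 = peak_bw_gbs(3200)  # 51.2 GB/s
bw_ddr5 = peak_bw_gbs(4800)  # 76.8 GB/s

# 50% more theoretical bandwidth, yet (per the comment) only ~10% more
# application performance at ~2.5x the price:
perf_gain = 0.10
price_ratio = 2.5
print(f"Bandwidth ratio: {bw_ddr5 / bw_ddr4:.2f}x")
print(f"Performance per dollar vs DDR4: {(1 + perf_gain) / price_ratio:.2f}x")
```

The gap between the 1.5x bandwidth ratio and the ~10% real-world gain is consistent with the thread's earlier point: at 4800 MT/s, DDR5's latency penalty and gear 2 operation eat into its raw bandwidth advantage for most desktop workloads.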