Intel said QuickSync will activate automatically whether or not you're using a discrete graphics card, so if we're using a Radeon/GeForce we just need to install the Intel driver, right?
If you're talking encoding, it's not completely automatic. The application itself has to specifically support QuickSync, and support for it is less common than accelerated decoding.
I run BlueIris DVR software for security cameras and it is virtually unusable WITHOUT QuickSync unless you have a ridiculously powerful CPU and don't mind the heat and power consumption of running said ridiculously powerful CPU at its TDP limit 24/7.
QuickSync is so important to the application that it unfortunately takes all AMD CPUs out of consideration. There is Nvidia encoding support, but its power consumption is around 4x that of QuickSync for the same work, though BlueIris v5 definitely improved this... but who's going to put a performance GPU in a DVR when QuickSync is supported by even the cheapest Intel CPUs going back almost a decade (though it didn't get very good until Haswell)?
For QuickSync to work, the GPU has to be active (drivers installed) which can be tricky when you have a PCIe GPU installed as the primary display GPU. Some BIOS don't support it, and Windows is clunky about which GPU is used for what. But as long as Intel GPU drivers are installed and the GPU shows up in 'Display adapters' any program that has QuickSync hooks will use the GPU and it is pretty amazing - keep in mind the GPU is a tiny part of the die area, thus a tiny part of the overall TDP. Utilizing the GPU to 100% does impact the CPU core turbo, but that doesn't matter for encoding with QuickSync because it doesn't even use the CPU cores.
It's worth pointing out there is a real quality difference when comparing two identically encoded (or exported) videos side by side, one done with QSV and one without hardware acceleration: using the same H.264 profile, the QSV file is blockier. NVENC doesn't seem to produce a noticeably different-quality file.
Obviously you need to take into consideration how important encoding "accuracy" is for your application, but in most scenarios QuickSync is amazing, and it has only gotten better over the years with improved quality and more efficiency. It used to be that an i5-3570 would encode around 150 FPS at 37 W (that seems to be the power consumption of the 77 W TDP CPU with just the GT2 graphics loaded for QSV). Now an i5-8600 can encode around 200 FPS at 25 W (of the total 65 W package), with better accuracy.
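If anyone wants to reproduce that kind of side-by-side comparison, here's a minimal sketch (it assumes an ffmpeg build that includes both the libx264 and h264_qsv encoders; the file names are placeholders):

```python
import subprocess

SRC = "input.mp4"  # placeholder source clip

# Encode the same clip twice at the same bitrate and H.264 profile:
# once in software (libx264) and once through QuickSync (h264_qsv),
# so the two outputs can be compared frame by frame.
common = ["-b:v", "8M", "-profile:v", "high", "-an"]
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", *common, "sw.mp4"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "h264_qsv", *common, "qsv.mp4"], check=True)
```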
@Samus is right, though. It might be a niche market, but there are cases like this where a decent iGPU makes all the difference.
I've seen cases where I can get neural network inferencing performance on an Intel desktop iGPU comparable to what I get with AVX2 code on its CPU cores, but the GPU uses only 20 watts. And that's just a vanilla 24 EU Gen9 GPU.
Sadly, AMD has not seen fit to equip even their dGPUs with much hardware encoding muscle. It's as if the only use case they care about is a person doing live-streaming.
Unimpressive performance, to say the least. The best place for these iGPUs would be the Pentium and i3 class, which are stuck with the old design.
And I have to wonder what the Tiger Lake 1185G7 would do with a desktop power limit. There is no way having 3x as many EUs can result in such terrible performance unless it is constantly power throttling. It should be obliterating the 11900K.
You're forgetting that it's also somewhat memory-bottlenecked. Performance was never going to scale linearly, but I agree that it should generally look better than it does.
Based on the 39% average advantage of fully-enabled RKL at actually playable settings, the 33% reduction in execution resources for the i5-11400, plus the clock speed deficit on top, it's safe to assume that it's not really going to be any better than 10th gen.
With DDR3-1600, the i7-2600K is fine for desktop graphics at 1440p. I could swear I noticed the difference between that and DDR3-1333, although that was nearly as good.
It was a real breakthrough GPU too. Sandy Bridge as a whole was a monumental achievement for Intel: though the microarchitecture of the CPU cores wasn't a huge improvement over Nehalem, the simple package, clock headroom, graphics, power efficiency, and the platform (other than the SATA bug debacle on the P67) were all giant evolutionary steps in the right direction.
Around Haswell things started falling apart, because people, myself included, expected another Sandy Bridge.
Yes, the notable thing about Sandy was the microarchitecture, which they reworked considerably, switching to physical register files, adding AVX in a way that was efficient, and the celebrated micro-op cache.
Yup, I think sometimes people forget that the architecture does have a lot to do with how high a processor can be clocked (alongside the manufacturing node of course).
Yes, this CPU looks like it could be the sleeper CPU of 2021. Early benchmarks are showing it beating almost every other 8-core (or less) chip, usually by a landslide, just like the 5800X... and yet you can buy a whole prebuilt system with one of these right now for $550. I have a 3600 right now and I can see buying one of these systems, finding a cheap APU and swapping them out and reselling the prebuilt so I can get a very cheap processor upgrade.
The downside which is stated on the motherboard’s spec sheet, and I can confirm as one who has bought one, is HP locks their B550A motherboard’s firmware to just 5000G APUs. I tried a 4000 series APU—no dice. And another user on Reddit tried a 3200G—no joy.
What a joke! HP and Dell prebuilts these days are the worst in decades, IMO. The towers these companies sold from the Athlon 64 through Haswell eras (10+ years) were mainly standard micro-ATX systems with very little proprietary stuff or artificial limitations (aside from chipset and microcode limitations brought on by Intel). I'm not familiar with all of the models made since then, but over the last 5 years I've seen so many desktops/towers with artificial limitations it is sickening.
I kind of doubt it will happen, but it'd be nice if someone figured out how to add other CPUs to the firmware. I wonder if they used an exceedingly small ROM so that it simply can't hold the microcode needed to recognize them (remember all the news about boards getting BIOS updates to add Zen 2 support but requiring that they drop several Zen 1 chips due to space limitations).
This is a very confusing time for PC enthusiasts. We have so many interesting things going on with affordable insanely powerful multi-core CPUs (and software that uses them!), the biggest generational leap in GPU performance in years (RTX 3000 series), hardware ray tracing, extremely fast SSDs (with DirectStorage coming soon!), Intel entering the GPU market, AMD making the best CPUs for the first time since 2005... and yet here we are with massively overpriced hardware that is hard to find, and OEMs like Dell and HP deliberately going out of their way to make their products undesirable to people who actually STILL LIKE DESKTOPS. You'd think all of these companies would be embracing all of the things that make computers (desktops in particular) useful when consoles and mobile devices are such a huge threat to their business.
> You'd think all of these companies would be embracing all of the things that make computers (desktops in particular) useful when consoles and mobile devices are such a huge threat to their business.
Greed is a mighty powerful drug. Just look at the quad-core shovelware Intel kept giving us until AMD finally got their act together.
The Dell tower I had at work was Haswell and had a ton of proprietary stuff. The PSU and motherboard were not compatible with the 24 pin ATX standard. The motherboard powered the SATA drives.
Likewise for HP boxes with Sandy Bridge. We inherited several of those. I think that in addition to the PSU-MB connector, the case is non-standard, so you cannot use it with a standard MB. If the PSU or the MB fails, you pretty much can only throw the whole thing away. CPU and RAM are still standard, but if you have to replace the rest, the remaining value of these parts is so small that you probably decide not to reuse them and instead go for a completely new system.
That's the new spec, ATX12VO (https://en.wikipedia.org/wiki/ATX#ATX12VO). It's not proprietary stuff; it's just increasing the efficiency of the PSU because of regulations around climate change: "It was motivated by stricter power efficiency requirements by the California Energy Commission going into effect in 2021" (see the link above). Ultimately, they are just transferring the electrical inefficiencies to the MB. Thus demonstrating once again that the solution really doesn't solve anything -- just like solar panels and battery powered cars.
@ballsystemlord They were talking about a Haswell system, so it's not the new ATX12VO spec as that wasn't finalised until 2019. Dell have had proprietary pin-outs on their PSUs for decades now.
That 12VO description is a little bit of an oversimplification, too. Aside from simplifying connector design, it saves on materials costs for power supplies - no need to build circuits for conversion to every voltage a PC might need when many won't even use those circuits (e.g. an ITX gaming PC that only uses NVMe storage). Having drives powered from the board also simplifies cabling - you just run one power+storage connector from the board, instead of threading from both the motherboard and the PSU.
Dell have had proprietary components on their motherboards for decades - I remember trying to repair a Pentium 4-based Dell desktop and discovering that I had to *re-wire the 20-pin PSU connector* because they used a nonstandard pin-out. And don't get me started on their front panel connectors... they used to leave the AGP ports off the boards so they couldn't be upgraded with decent graphics adaptors, too.
Even Compaq too. I've got a Coppermine which I'd like to fire up for the fun of it, but the PSU is blown, and if I remember rightly, it's got a nonstandard connector.
@GeoffreyA - if you ever feel like getting creative, it looks like people have documented the pin-outs and the changes needed to replicate them with a standard PSU: https://www.ecoustics.com/electronics/forum/comput...
Contrary to Nvidia's marketing materials, it's not that big of an uplift. The big GPU uplift has been from Navi -- and just like Ryzen's uplift, this is due to AMD being so far behind.
Appreciate the heads up on the OfficeDepot link. I managed to snag one (the only one they had locally) Saturday. One day later and they've already raised the price $70 and are probably OOS all over anyway.
I think there are a number of errors at the bottom of the first page. In the chart and in below paragraph, all of the Rocket Lake processors should say 32 EUs (not 64), and the Core i9 10900K I thought is still Gen9 graphics, not Gen11 (which should only be on 10-nm Ice Lake). Also, no shout-out to the Gen10 Cannon Lake GPU that was too beautiful for this world lol?
haha, Yeah no doubt! I was just thinking it would be nice to mention it, since that initial chart goes sooo far back all the way to Sandy Bridge, so you see the evolution from Gen6 all the way thru to Gen11, then Xe - but if new readers look at it, wouldn't they wonder what happened to Gen10? I was thinking throw it in there with a strikethrough lol
I definitely think this should be done - not just for historical accuracy, but also because Intel are so keen to memory-hole that distressing little period of their recent history!
It would have been nice to see some GPU uses other than gaming. For example, I use the NLE Vegas Pro with integrated graphics. Not for hardware rendering, but for accelerating playback. I realize I am in a very small minority though.
Hardware decode benchmarking is pretty straightforward. Just run a test video through ffmpeg to null output, with hwaccel on, and time how long it takes.
I think our minority isn't so small, and I would love for Anand to bench decode performance more regularly.
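As a concrete sketch of that kind of timing run (assuming an H.264 test clip and an ffmpeg build with the qsv hwaccel; substitute vaapi, dxva2, etc. for other setups):

```python
import subprocess
import time

CLIP = "test_clip.mp4"  # placeholder test video

def time_decode(extra_args):
    # Decode to the null muxer so only decode speed is measured,
    # with no encode or file writing in the way.
    start = time.perf_counter()
    subprocess.run(["ffmpeg", "-v", "error", *extra_args, "-i", CLIP,
                    "-f", "null", "-"], check=True)
    return time.perf_counter() - start

print(f"software decode: {time_decode([]):.1f} s")
print(f"QSV decode:      {time_decode(['-hwaccel', 'qsv']):.1f} s")
```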
On Twitter, he posts a "Happy New Year" and retweets some Apple things, but other than that, nothing much. I do recall one message saying, he was happy and couldn't complain.
Beyond hardware video decode I was referring to the OpenCL accelerated video filters Vegas Pro uses to make previewing smooth. Rendering isn't a big deal as I can frameserve a project to Handbrake and transcode overnight. But smooth previews during the editing process are essential for creative on-the-fly work.
Ahhh, yeah, that's very different. Some sites like Puget bench the GPU filters, but I'm not sure how well that translates to preview performance, and they don't do IGPs.
It'd have been nice if you'd bothered to look up the EU-counts for the earlier CPUs. Here they are:
Sandy Bridge i7-2600K Jan 2011 4 Gen6 12 EU 11%
Ivy Bridge i7-3770K April 2012 4 Gen7 16 EU 29%
Haswell i7-4770K June 2013 4 Gen7.5 20 EU 29%
Broadwell i7-5775C June 2015 4 Gen8 48 EU 48%
Skylake i7-6700K Aug 2015 4 Gen9 24 EU 36%
Kaby Lake i7-7700K Jan 2017 4 Gen9 24 EU 36%
Coffee Lake i7-8700K Sept 2017 6 Gen9 24 EU 30%
Coffee Lake i9-9900K Oct 2018 8 Gen9 24 EU 26%
I knew the formatting wouldn't look great, but that's a mess. It kept only one space between words!
Here's a simplified table, with the full list:
i7-2600K 12 EUs
i7-3770K 16 EUs
i7-4770K 20 EUs
i7-5775C 48 EUs
i7-6700K 24 EUs
i7-7700K 24 EUs
i7-8700K 24 EUs
i9-9900K 24 EUs
i9-10900K 24 EUs
i9-11900K 32 EUs
I'm sort of perplexed--I didn't see that the title question was answered in the article...unless Dr. Potato Head is asking whether Intel's current IGPs are "competitive" with older Intel IGPs...which would seem to be the case. I mean, we should hope that Intel's latest should best its previous efforts. But is that really being "competitive"...? Intel seems very confused of late as to who its chief competitor is, imo--at least publicly...;)
> I didn't see that the title question was answered in the article
I think they presume that piece of meat behind your eyes is doing more than keeping your head from floating away. Look at the graphs, and see the answer for yourself.
However, the article does in fact sort of answer it, in the title of the final page: "Conclusions: The Bare Minimum".
> unless Dr. Ian Cutress is asking whether Intel's current IGPs are "competitive"
> with older Intel IGPs...which would seem to be the case.
As is often the case, they're comparing it with previous generations that readers might be familiar with, in order to get a sense of whether/how much better it is.
And it's not as if that's *all* they compared it against!
My favorite part of the Intel CPU + Intel GPU history is Atom, where serious hype was created over how fabulously efficient the chip was, while it was sold with a GPU+chipset that used — what was it? — three times the power — negating the ostensible benefit from paying for the pain of an in-order CPU (a time-inefficient design sensibly abandoned after the Pentium 1). The amazing ideological purity of the engineering team’s design goal (maximizing the power efficiency of the CPU) was touted heavily. Netbooks were touted heavily. I said they’re a mistake, even before I learned (which wasn’t so easy) that the chipset+GPU solution Intel chose to pair with Atom (purely to save the company money) made the whole thing seem like a massive bait and switch.
> fabulously efficient the chip was, while it was sold with a GPU+chipset that used
> — what was it? — three times the power
Well, if they want to preserve battery life, maybe users could simply avoid running graphically-intensive apps on it? I think that's a better approach than constraining its graphics even further, which would just extend the pain.
I'm also confused which Atoms you mean. I'm not sure, but I think they didn't have iGPUs until Silvermont, which was already an out-of-order core. And those SoC's only had 4 EUs, which I doubt consumed 3x the power of the CPU cores & certainly not 3x the power of the rest of the chip.
What I liked best about Intel's use of their iGPUs in their low-power SoCs is that the drivers just work. Even in Linux, these chips were well-supported, pretty much right out of the gate.
I still don't follow the logic of the Oxford dude. Would it really have been a good solution to put in even worse graphics, further impinging on the user experience, just to eke out a little more battery life? I'm not defending the overall result, but that strikes me as an odd angle on the issue.
Indeed, if explorer and web browser were as much as their GPU could handle, then it seems the GPU was well-matched to the task.
You should learn about the Atom nonsense before posting opinions about it.
The power consumption of the chipset + GPU completely negated the entire point of the Atom CPU, from its design philosophy to the huge hype placed behind it by Intel, tech media, and companies peddling netbooks.
It is illustrative of large-scale bait and switch in the tech world. It happened purely because Intel wanted to save a few pennies, not because of technological restriction. The chipset + GPU could have been much more power-efficient.
You don't follow because you're trying to assess what he said by your own (apparently incomplete) knowledge, whereas what would make sense here would be to pay more attention to what he said - because, in this case, it's entirely accurate.
Intel paired the first 45nm Atom chips with one of two chipsets - either the recycled 180nm 945 chipset, designed for Pentium 4 and Core 2 processors, or the 130nm Poulsbo chipset. The latter had an Imagination Technologies mobile-class GPU attached, but Intel never got around to sorting out working Windows drivers for it. In either case, it meant that they'd built an extremely efficient CPU on a cutting-edge manufacturing process and then paired it with a hot, thirsty chipset. It was not a good look; this was back when they were absolutely clobbering TSMC on manufacturing, too, so it was a supreme own-goal.
Thanks for the details. I was confused about which generation he meant. If he'd have supplied even half the specifics you did, that could've been avoided.
Also, dragging up examples from the 2000's just seems like an egregious stretch to engage in Intel-bashing that's basically irrelevant to the topic at hand.
There's another thing that bugs me about his post, and I figured out what it is. Everything he doesn't like seems to be the result of greed or collusion. Whatever happened to plain old incompetence?
And even competent organizations and teams sometimes build a chip with a fatal bug or run behind schedule and miss their market window. Maybe they *planned* on having a suitable GPU, but the plan fell through and they misjudged the suitability of their fallback solution? Intel has certainly shown itself to be fallible, time and again.
Incompetence and folly have worked against these companies over and over again. The engineering has almost always been brilliant but the decisions have often been wrong.
Silvermont was quite good in terms of power and performance. It actually competed very well with the Snapdragon 835 at the time, and damn, the process was efficient. The vcore was 0.35V! And on the iGPU you could play Rocket League at low settings, 720p, with 20 fps and 3 W total chip power. If that isn't amazing for a 2014 chip then I don't know what is. Intel actually was very good between the 2006 Core 2 Duo and 2015 with Skylake. Their process was superior to the competition and the designs were quite good as well.
The only thing that's really noteworthy about the first generation is how much of a bait and switch the combination of the CPU and chipset + GPU was — and how the netbook hype succeeded
It was literally paying for pain (the CPU's ideological purist design — pursuing power efficiency too much at the expense of time efficiency) and getting much more pain without (often) knowing it (the disgustingly inefficient chipset + GPU).
As for the hyperthreading... my very hazy recollection is that it didn't do much to enhance the chip's performance. As to why it was dumped for later iterations — probably corporate greed (i.e. segmentation). Greed is the only reason why Atom was such a disgusting product. Had the CPU been paired with a chipset + GPU designed according to the same purist ideological goal it would have been vastly more acceptable.
As it was, the CPU was the tech world's equivalent of the 'active ingredient' scam used in pesticides. For example, one fungicide's ostensible active ingredient is 27,000 times less toxic to a species of bee than one of the 'inert' ingredients used in a formulation.
The first ever Atom platform did indeed use a chipset that used way more power than the CPU itself. 965G I think. I built a lab of those as tiny desktop platform for my daughter's school at the time. They were good enough for basic Google Earth at full screen, 1024x768.
The Next Atom I had was a Something Trail tablet, the HP Stream 7 of 2014. These were crippled by using only one of the two available memory channels, which was devastating to the overall platform performance. Low end chips can push pixels at low spec 7-inch displays, if they don't have to block waiting for DRAM. The low power, small CPU cache Atom pretty much requires a decent pipe to RAM, otherwise you blow the power budget just sitting there.
The most recent Atoms I have used are HP Stream 11, N2000 series little Windows laptops. Perfect for little kids, especially low income families who were caught short this past year, trying to provide one laptop per child as the schools went to remote-only last year.
Currently the Atom N4000 series makes for a decent Chromebook platform for remote learning.
So I can get stuff done on Atom laptops. Not competitive to ARM ones, performance or power efficiency, but my MacBook Pro M1 cost the same as that 10-seat school lab. Both the Mac and the lab are very good value for the money. Neither choice will get you Crysis or Tomb Raider.
"the CPU's ideological purist design — pursuing power efficiency too much at the expense of time efficiency"
Atom may have been terrible, but I respect its design philosophy and Intel's willingness to go back and revive in-order, in the hope it would cut down power considerably. The good designer takes any principle off his shelf to solve a problem and doesn't ban a design because it's old or finished. (Does the good director abandon telling a story through picture because "talkies" took over?) So they decided, why not visit in-order again and see what it can do on a modern process, along with SMT. Out of order, after all, has a lot of overhead.
Agreed on respecting the philosophy. With consistent application and a full, coherent vision for the products using it, it could have been quite a neat product.
Unfortunately MIDs were always a half-baked idea, and Netbooks appeared to be an excuse for clearing out sub-par components.
I think most ARM cores were still in-order, by that point. And that's who Intel was targeting.
I hope people aren't too put off by dredging up other ideas that had a couple poor implementations, such as some of the concepts underlying EPIC. I think there's a lot you can do with some of those ideas, other than where Intel went.
Pentium Pro was brilliant, as long as you weren't running 16-bit code! That's why they branded it as Pro, since I think they figured more business users and prosumers would be using 32-bit apps. Also, it was more expensive, due to its large L2 cache (which was on a separate die).
I remember fragging a bunch of noobs in quake, running on a PPro 200 with a T1 line virtually to myself. This was before quake supported 3D cards, and my framerates were definitely above what you'd hit with a regular Pentium. That's pretty much the last time I played games at work, although it was during evenings/weekends.
I tell you, PPro 200 + Matrox Millennium was a brilliant way to run WinNT 3.51 and 4.0.
Some might laugh at this sentiment, but don't be surprised when the desktop microarchitecture starts borrowing tricks from, and is eventually replaced by, the Atom lineage, reminiscent of Dothan/Yonah becoming Core.
An interesting side effect of the in order design is that these early atom chips are immune to speculative execution vulnerabilities - so they're actually the newest, fastest x86 chips that aren't vulnerable to spectre type issues. I still use an atom D525 based system as my internet router at home, it's fast enough for that for now.
Intel rated Broadwell at 1.5V for RAM as I recall, most likely due to it being 14nm (and the first generation of that node).
I've read more claims about what's fine and stable than I can count but one thing I am wary of is electromigration from exceeding the safe voltage zone.
Voltage doesn't kill processors, amperage does. P = C·f·V², so an increase in voltage results in an approximately proportional increase in amperage. It's why processors with better cooling can be overvolted higher, since better cooling reduces leakage current. On Broadwell, 1.65V should be pretty safe since that memory controller was built for DDR3.
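Spelled out with the simplified dynamic-power model (ignoring leakage and any accompanying frequency change):

$$ P = C f V^2, \qquad I = \frac{P}{V} = C f V $$

so at a fixed frequency, current grows roughly linearly with voltage while power grows with its square.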
No, I want results that will accurately influence people, and this isn't how it should have been represented.
Here you go, benchmark results from DDR3-2400, just XMP on, no other tuning. And DDR3-2400 is still available off the shelf, brand new, from Newegg for around $100.
It's as level a playing field as you can get. I think it's pretty justified, given that the target market for integrated graphics plus overclocking is even smaller than the one for overclocking in general.
Well, considering you can't run many games on the list it would be a pretty short comparison - M1 is faster for the stuff it can run yet still much worse for gaming.
True that it's not so easy to compare, but cross-platform benchmarks certainly exist.
On paper, I think the Apple GPU is definitely faster. They claim 2.6 TFLOPS, whereas I estimate Tiger Lake's G7 has a peak of 2.07. Of course, raw compute is far from the whole story.
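For what it's worth, that Tiger Lake figure is presumably just the usual peak-FLOPS arithmetic (assuming Xe-LP's 8 FP32 lanes per EU and a maximum graphics clock of about 1.35 GHz on the 1185G7):

$$ 96\ \text{EUs} \times 8\ \text{lanes} \times 2\ \tfrac{\text{FLOPs}}{\text{lane}\cdot\text{cycle}} \times 1.35\,\text{GHz} \approx 2.07\,\text{TFLOPS} $$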
Anandtech actually compared it with an i7-1065G7 (Gen11 @ 64 EUs), as implemented in a MS Surface (not fair, when the M1 was housed in a Mac Mini), but you could try to sort of extrapolate the results.
But running in Rosetta should cover all your caveats, while putting the M1 at a disadvantage, and it *still* stomps all the other iGPUs they tested against!
Also, they ran a couple tests running native software vs. an Intel Mac doing the same.
Way behind, is the answer - even 96EU Xe isn't a patch on Apple's custom unit. A lot of that is down to Apple using 5nm to pack in an even larger and wider GPU, but I'd wager Apple's Imagination Technology-derived (lol, sorry, "not"-derived) design is more sophisticated, too.
Perhaps Intel should've dumped desktop IGPs this time around and set their engineers on something more useful, while saving money/time on mobo design as well. Consider all the use cases:
Need power efficiency? You want Tiger Lake, not Rocket Lake.
Cheap gaming? Corporate desktop? Bundle the CPU with DG1.
Bottom barrel systems? Nothing wrong with Comet Lake.
> Need power efficiency? You want Tiger Lake, not Rocket Lake.
Rocket Lake is 14 nm. They don't have enough 10+ nm fab capacity to sell desktop Tiger Lakes, but you can find them in SFF PCs and there are rumored to be NUCs with them, also.
Someone wanting more compute power should still prefer Rocket Lake, due to its higher core-count. We'll have to see what availability of the 8-core Tiger Lakes looks like, but I'm expecting it to be scarce & expensive.
> Cheap gaming? Corporate desktop? Bundle the CPU with DG1.
If someone needs more graphics horsepower, I'm sure that sometimes happens. It's not going to be cheaper than having an iGPU, however. So, if you're suggesting Intel should've left out the iGPU from Rocket Lake, that's going to be a deal-breaker for some customers.
> Bottom barrel systems? Nothing wrong with Comet Lake.
And you can still buy them. However, Rocket Lake offers more PCIe lanes and PCIe Gen 4. Its Xe GPU offers hardware-accelerated AV1 decode. There are some other benefits of Rocket Lake, but I agree that someone on a budget should probably be looking at Comet Lake.
That's rather impressive performance from the 1185g7. Should we be factoring in the power envelope afforded to the mobile chips when comparing performance?
M1 shows what you get if you go all-in on providing a large GPU on a cutting-edge process. Vega 8 is impressive for the performance it provides in a relatively tiny area of the chip. Xe is impressive for being so much better than Intel's previous efforts, and for currently being the best-performing iGPU on an x86 platform.
Well, obviously "making in larger" should be "making it larger".
They mean changing the ratio of ALU pipelines capable of handling different sorts of operations, while also increasing their total number by 25% (i.e. from 4+4 to 8+2, referring to fp/int + complex operations). And by "complex", presumably they mean things like sqrt, sin, and atan.
The results are pretty bad if you ask me. Considering all the hype on the Xe graphics, the improvements over last gen UHD graphics aren't great. When comparing say i9 11900 and 10900, it's a comparison between a 32 vs 24 EU part. So whatever the results, we need to account for that 30% increase in EUs. At the end of the day, it's just good for light games and mostly non-3D-intensive use.
This type of small implementation isn't what the hype was about.
> the improvements over last gen UHD graphics aren't great.
Yeah, but you're comparing 14 nm to 14 nm and not much more die area. So, I don't know why you were expecting any miracles, here.
> When comparing say i9 11900 and 10900, it's a comparison between a 32 vs 24 EU part.
> So whatever the results, we need to account for that 30% increase in EUs.
Just eyeballing it, the results well *exceed* that bar, in most cases!
I'm not saying Rocket Lake's iGPU is *good*, just that it performs at least as well as one would expect, considering it's still 14 nm and has only 32 EUs.
> Yeah, but you're comparing 14 nm to 14 nm and not much more die area. So, I don't know why you were expecting any miracles, here.
According to AT's percentages compared to the actual die sizes, Intel spent roughly an extra 13mm^2 of die area, or about a 28% increase. Performance-per-area has definitely gone up, but it's not a giant leap.
> From Intel, its integrated graphics is in almost everything that Intel sells for consumers. AMD used to be that way in the mid-2010s, until it launched Ryzen
Eh? Bulldozer -- no IGP. Phenom II -- no IGP. The various Athlons -- no IGP as far as I recall.
Conclusion: Let's hope Intel gives Alder Lake at least 96 EU Xe graphics as iGPU; as things look right now, dGPUs will remain unaffordable or downright unobtainable. Otherwise, it's back to 10 fps as standard.
I doubt it. For the desktop market, I'd expect more in the range of 48-64. I think die area currently commands too much of a premium.
As Intel has a history of offering bigger iGPUs in some of their laptop chips, I don't see Tiger Lake's 96 EUs as a precedent for what we should expect of their 10 nm desktop processors.
Agreed here. For a vast chunk of their market, it's just not relevant. I doubt we'll see this change much until they start using chiplets, and then we might get the *option* to buy something with a bigger iGPU.
Why is the 4750g even in the list if you can't BUILD with it? They are about to launch a new one and you can't even get it until xmas maybe (they say Q3-Q4, but that means Q4) for an actual DIY build.
Stop making console socs AMD so you'll have more wafers for stuff that makes NET INCOME. I think there are a few people who'd like some 57xx apu's before xmas and surely they'd make more than single digit or mid-teens margins (meaning <15% right? or you'd say 15!). ~1000mm^2 wasted on every soc gone to consoles and HALF of that is the GOOD wafers that could go to chips that make THOUSANDS not $10-15 each. Just saying...500mil NET again...
Do you realize they have pre-made orders from Sony/Microsoft they have to fulfil??? Any silicon that goes to anything is great; they are selling everything they have and can't make enough of ANYTHING! More TSMC capacity would help, but still! I understand you want to build a PC right now, but you're going to have to wait. If you want to play games, either buy a pre-built and get a 3000 series GPU or buy a console from scalpers; either one is a decent option, though I would tend to go the way of prebuilts, as funding the fucktardistans of scalpers is not a great idea.
Linustechtips12#6900xt, ignore the jian; the only thing he does on here is find ways to bash AMD in one way or another. Most of what he posts is just rants: no facts, just FUD.
> Stop making console socs AMD so you'll have more wafers for stuff that makes NET INCOME.
I'm not convinced they're actually buying these wafers, rather than Sony and MS. Even if they are, I'm sure they're contractually obligated by Sony and MS and don't have the option to simply divert console-designated wafers to be used for other AMD products.
I don't understand why people keep saying otherwise. It's like they don't understand how business works.
In the big picture, AMD is actively working with Sony and MS against PC gamers. PC gamers need to wise up.
This also means it's helping Nvidia to keep prices high, which means there is a (presumably) legal form of collusion. Letting Nvidia set prices keeps AMD happy because it gets to raise prices, too. The high prices for PC gaming make consoles seem more attractive, even though the high prices are due to the existence of the consoles.
Nvidia is also, to a smaller but still significant extent, actively undermining PC gaming via its effort to prop up the Switch (a third artificial parasitic x86 walled garden).
Bottom line is that all of this is due to inadequate competition. Both AMD and Nvidia are setting record profits/earnings/whatever by not selling GPUs to gamers.
And none of this matters unless you post some links to show proof of your BS console scam conspiracy theory. As I can't remember you posting anything proving this in all of your posts about it, it is therefore just your opinion, nothing more.
As I said before, you sure come across as one angry person in most of your posts.
"AMD is actively working with Sony and MS against PC gamers. PC gamers need to wise up." 🙄
"The high prices for PC gaming make consoles seem more attractive, even though the high prices are due to the existence of the consoles." ---citations desperately needed---
"Nvidia is also, to a smaller but still significant extent, actively undermining PC gaming via its effort to prop up the Switch (a third artificial parasitic x86 walled garden)." Amazing. Switch isn't x86, consoles have always been walled gardens BY DEFINITION (it's the entire point, and it's a good thing in that context) and the Switch is in *no conceivable way* a direct competitor to PC gaming.
"Both AMD and Nvidia are setting record profits/earnings/whatever by not selling GPUs to gamers." They'd make even more money if they could also sell GPUs to gamers. They can't, because they can't make enough of them.
"Why is the 4750g even in the list if you can't BUILD with it?" Because you can buy it, duh.
"blah blah blah, rabble rabble, I know better than Lisa Su, whine piss moan" Nobody cares what you think. AMD had excellent financial results last quarter -especially for Q1 - and you were conspicuously absent from the comments there.
"Conclusion: Let's hope Intel gives Alder Lake at least 96 EU Xe graphics as iGPU; as things look right now, dGPUs will remain unaffordable or downright unobtainable. Otherwise, it's back to 10 fps as standard"
Don't want to burst your bubble, but the odds of seeing 96 EU Xe graphics on desktop Alder Lake are quite slim. The fact that Intel has adamantly stuck to 8 performance cores and not more is likely due to constraints in die size and/or power. Moreover, you can tell that Intel's strategy with the iGPU is unlike AMD's. At least from what we've observed, AMD's APUs strive to provide a cheap gaming experience, unless you choose to go with the Athlon series. Intel, on the other hand, just provides their users with an iGPU for display purposes, with almost no focus on making it gaming-worthy. In my opinion, both approaches make sense, i.e. there's no right or wrong. I do feel Intel's approach sounds more logical to me, because it doesn't make sense for a gamer to spend so much on an APU that can only game at 1080p at low frame rates.
Because AMD APUs are mainly aimed at esports titles, where games don't need the greatest and best chip to run at 1080p@60fps using low settings. This is where internet cafes in Asia thrive, which is exactly where AMD APUs will end up: in OEM pre-built PCs.
It's a matter of perspective: it doesn't make sense for a PC builder, but it makes sense if you have a tight budget.
The issue isn't 20 FPS vs 60 FPS (for me), it's the minimum frame rate. If it dips below 15, or it swings more than 15 FPS at anything lower than 60 FPS, I feel sick. We should be paying attention to whether minimum FPS stays above 15, and measuring frame swing.
So one thing I never get is why Intel don't do high end SKUs without the IGPU. I mean Ryzen has the node advantage and all that, but another thing people don't talk about is they don't need to burn nearly a quarter of the die area on graphics. That would actually give Intel space for a material increase in core-count which is exactly the area where they are massively lagging.
I guess it is something to do with the tape-out cost - but how hard can it be to fuse off the GPU I/O and lay down some more CPU cores? Especially given how much they are losing in HEDT sales + halo effect by being uncompetitive.
Mystery to me anyhow. Seems something of a no-brainer, tape-out costs aside.. ¯\_(ツ)_/¯
The article says that the IGPU occupies about 20% of the die area, so a chip with 10 cores and no IGPU would be about the same size as their current 8 core chip. I don't think Intel has done variants that are that close in core counts in the past. We've seen similar behavior from AMD, which (for example) decided to do a chip with 2 Zen CCXs (8 cores), and a chip with one Zen CCX (4 cores) and an IGPU, but not a chip with 1 Zen CCX and no IGPU. I suspect the explanation is the one you suggest: tape-out costs must be really high.
It was a weird question to pose. I think we could reasonably assume it wouldn't be competitive with anything on 7 nm.
> DG2 will be equally shit.
Depends on how it's priced. As long as its perf/W isn't terrible, it should be possible for them to price it competitively. Especially in today's market.
That's very interesting. I never knew that Intel advertised hardware AV1 encoding. If true (and not just decoding), this would give them quite a feature to boast about. In practice, though, even if it is there, I suspect it'll be no better than SVT-AV1, which is hopeless.
This has a clear answer for me: 16.666 recurring FPS, the cap of The Legend of Zelda: Ocarina of Time, PAL version. We played it back in the day, and we enjoyed it.
I've been revisiting Perfect Dark on the N64 in 16:9, high-res mode, but honestly it often dips into the totally unplayable FPS range. I still think it's a riot like that, but when you're into single digits, the performance isn't really acceptable.
Unfortunately, the second part of the headline should almost read "Does it have to compete?". Because right now and for the foreseeable future, almost anything is better than the empty socket in the system that is the alternative. And, unfortunately, unless you have a dGPU sitting around, standard Ryzen desktop CPUs aren't an alternative as they don't have iGPUs.
Spelling and grammar errors: "F1 2019 is certainly a step up grom generation to generation on the desktop,..." "grom" to "from": "F1 2019 is certainly a step up from generation to generation on the desktop,..."
"However, this is relatively little use for gaming, the topic of today." Sentance makes no sense as is. Try: "However, this has seen relatively little use for gaming, the topic of today."
I don't get why you'd test AAA games on an IGP. Who the f is going to play RDR2 on Intel HD graphics? :)) Why not test mainstream F2P games, which are capable of running on some ancient builds...
sutamatamasu - Friday, May 7, 2021 - link
Intel said QuickSync will active automatically whatever you're using graphics card or not, so if we're using Radeon/GeForce we need just install Intel driver right?brucethemoose - Friday, May 7, 2021 - link
If you're talking encoding, its not completely automatic. The application itself has to specifically support quicksync, and support for it is less common than accelerated decoding.Samus - Sunday, May 9, 2021 - link
I run BlueIris DVR software for security cameras and it is virtually unusable WITHOUT QuickSync unless you have a ridiculously powerful CPU and don't mind the heat and power consumption of running said ridiculously powerful CPU at it's TDP limit 24/7.QuickSync is so important to the application that it unfortunately removes all AMD CPU's out of consideration. There is nVidia encoding support but the performance per watt is around 4x the power consumption compared to QuickSync though BlueIris v5 definitely improved this...but who's going to put a performance GPU in a DVR when QuickSync is supported by even the cheapest Intel CPU's going back almost a decade (though it didn't get very good until Haswell.)
For QuickSync to work, the GPU has to be active (drivers installed) which can be tricky when you have a PCIe GPU installed as the primary display GPU. Some BIOS don't support it, and Windows is clunky about which GPU is used for what. But as long as Intel GPU drivers are installed and the GPU shows up in 'Display adapters' any program that has QuickSync hooks will use the GPU and it is pretty amazing - keep in mind the GPU is a tiny part of the die area, thus a tiny part of the overall TDP. Utilizing the GPU to 100% does impact the CPU core turbo, but that doesn't matter for encoding with QuickSync because it doesn't even use the CPU cores.
It's worth pointing out there is a technical quality difference when comparing two identically encoded (or exported) videos side by side, one with QSV, another without hardware acceleration, and the QSV file is blockier using the same H264 profile. NVENC doesn't seem to produce a different quality file.
Obviously you need to take into consideration the importance of encoding "accuracy" for your application but in most scenarios, QuickSync is amazing and it has only gotten better over the years with improved quality and more efficiency. It used to be an i5-3570 would encode around 150FPS at 37w (that seems to be the power consumption of the total 77w TDP CPU with just GT2 being loaded for QSV.) Now an i5-8600 can encode around 200FPS at 25w (of the total 65w package) with better accuracy.
Oxford Guy - Sunday, May 9, 2021 - link
You use one piece of software to claim things about QuickSync's market competitiveness in general. I presume there are alternatives to BlueIris.mode_13h - Monday, May 10, 2021 - link
@Samus is right, though. It might be a niche market, but there are cases like this where a decent iGPU makes all the differenceI've seen cases where I can get comparable neural network inferencing performance on an intel desktop iGPU as I get with AVX2 code on its CPU cores, but the GPU uses only 20 Watts. And that's just a vanilla 24 EU Gen9 GPU.
Sadly, AMD has not seen fit to equip even their dGPUs with much hardware encoding muscle. It's as if the only use case they care about a person doing live-streaming.
Smell This - Tuesday, May 11, 2021 - link
AMD runs BlueIris just dandy ...
mode_13h - Tuesday, May 11, 2021 - link
What about video transcoding, though? I don't know this software, but Intel's iGPUs have a lot more transcoding muscle than AMD APUs or even dGPUs!ayunatsume - Monday, May 10, 2021 - link
Shouldn't you be able to boot with the iGPU as the primary display GPU with a PCIe GPU as a render device?TheinsanegamerN - Friday, May 7, 2021 - link
Unimpressive performance to say the least. The best place for these iGPUs would be on the pentium and i3 class, whom are stuck with the old design.And I have to wonder what the tiger lake 1185G7 would do with a desktop power limit. There is no way having 3x as many cores can result in such terrible performance unless it is constantly power throttling. It should be obliterating the 11900k.
mode_13h - Friday, May 7, 2021 - link
You're forgetting that it's also somewhat memory-bottlenecked. Performance was never going to scale linearly, but I agree that it should generally look better than it does.Smell This - Friday, May 7, 2021 - link
Huge diminishing point of return at 96EUs. Increase from 64 nets a 60% return?
Same with AMD years ago but they flipped the return __ 10CUs begat 8CUs in Vega, with significant upside in terms of performance and efficiency.
SarahKerrigan - Friday, May 7, 2021 - link
TGL looks pretty good against AMD. RKL looks... better than the legacy graphics, I guess?KaarlisK - Friday, May 7, 2021 - link
Would be interesting to see the results of the i5-11400, with its weaker GPU. To understand whether it's better than 10th gen or not.Spunjji - Monday, May 10, 2021 - link
Based on the 39% average advantage of fully-enabled RKL at actually playable settings, the 33% reduction in execution resources for the i5-11400, plus the clock speed deficit on top, it's safe to assume that it's not really going to be any better than 10th gen.haakon_k - Friday, May 7, 2021 - link
"Mobile Go-getter" -- chuckles.. why not Road warrior ;-)GeoffreyA - Saturday, May 8, 2021 - link
Marketing's taste is impeccable.shabby - Friday, May 7, 2021 - link
My old sandybridge had a gpu? Who knew...mode_13h - Friday, May 7, 2021 - link
Heh, I'm using mine right now!With DDR3-1600, the i7-2600K is fine for desktop graphics at 1440p. I could swear I noticed the difference between that and DDR3-1333, although that was nearly as good.
Samus - Sunday, May 9, 2021 - link
It was a real breakthrough GPU too. Sandy Bridge as a whole was a monumental achievement for Intel, though the microarchitecture of the CPU cores weren't a huge improvement over Nehalem, the simple package, clock headroom, graphics package, power efficiency and the platform (other than the SATA bug debacle on the P67) were all giant evolutionary steps in the right direction.Around Haswell things started falling apart because, at least myself, expected another Sandy Bridge.
mode_13h - Monday, May 10, 2021 - link
Sandybridge delivered a 20% IPC improvement over Nehalem. In addition to that, it overclocked very well.GeoffreyA - Monday, May 10, 2021 - link
Yes, the notable thing about Sandy was the microarchitecture, which they reworked considerably, switching to physical register files, adding AVX in a way that was efficient, and the celebrated micro-op cache.Spunjji - Monday, May 10, 2021 - link
Yup, I think sometimes people forget that the architecture does have a lot to do with how high a processor can be clocked (alongside the manufacturing node of course).GeoffreyA - Monday, May 10, 2021 - link
A good example is how the Pentium 4 reached 3.8 GHz in 2004/5.Hifihedgehog - Friday, May 7, 2021 - link
Now just to buy a 5700G system and harvest one for the benchmarks, Ian!https://www.officedepot.com/a/products/5448005/HP-...
ozzuneoj86 - Friday, May 7, 2021 - link
Yes, this CPU looks like it could be the sleeper CPU of 2021. Early benchmarks are showing it beating almost every other 8-core (or less) chip, usually by a landslide, just like the 5800X... and yet you can buy a whole prebuilt system with one of these right now for $550. I have a 3600 right now and I can see buying one of these systems, finding a cheap APU and swapping them out and reselling the prebuilt so I can get a very cheap processor upgrade.Hifihedgehog - Friday, May 7, 2021 - link
The downside which is stated on the motherboard’s spec sheet, and I can confirm as one who has bought one, is HP locks their B550A motherboard’s firmware to just 5000G APUs. I tried a 4000 series APU—no dice. And another user on Reddit tried a 3200G—no joy.ozzuneoj86 - Friday, May 7, 2021 - link
Wow! Thank you for the information.What a joke! HP and Dell prebuilts these days are the worst in decades IMO. Towers these companies sold from the Athlon 64 through Haswell eras (10+ years) were mainly standard micro-ATX systems with very little proprietary stuff or artificial limitations (aside from chipset and microcode limitations brought on by Intel). I'm not familiar with all of the models made since then, but over the last 5 years I've seen so many desktops\towers with artificial limitations it is sickening.
I kind of doubt it will happen, but it'd be nice if someone figured out how to add other CPUs to the firmware. I wonder if they used an exceedingly small ROM so that it simply can't hold the microcode needed to recognize them (remember all the news about boards getting BIOS updates to add Zen 2 support but requiring that they drop several Zen 1 chips due to space limitations).
This is a very confusing time for PC enthusiasts. We have so many interesting things going on with affordable insanely powerful multi-core CPUs (and software that uses them!), the biggest generational leap in GPU performance in years (RTX 3000 series), hardware ray tracing, extremely fast SSDs (with DirectStorage coming soon!), Intel entering the GPU market, AMD making the best CPUs for the first time since 2005... and yet here we are with massively overpriced hardware that is hard to find, and OEMs like Dell and HP deliberately going out of their way to make their products undesirable to people who actually STILL LIKE DESKTOPS. You'd think all of these companies would be embracing all of the things that make computers (desktops in particular) useful when consoles and mobile devices are such a huge threat to their business.
Hifihedgehog - Friday, May 7, 2021 - link
> You'd think all of these companies would be embracing all of the things that make computers (desktops in particular) useful when consoles and mobile devices are such a huge threat to their business.Greed is a mighty powerful drug. Just look at the quad-core shovelware Intel kept giving us until AMD finally got their act together.
Gigaplex - Saturday, May 8, 2021 - link
The Dell tower I had at work was Haswell and had a ton of proprietary stuff. The PSU and motherboard were not compatible with the 24 pin ATX standard. The motherboard powered the SATA drives.AntonErtl - Saturday, May 8, 2021 - link
Likewise for HP boxes with Sandy Bridge. We inherited several of those. I think that in addition to the PSU-MB connector, the case is non-standard, so you cannot use it with a standard MB. If the PSU or the MB fails, you pretty much can only throw the whole thing away. CPU and RAM are still standard, but if your have to replace the rest, the remaining value of these parts is so small that you probably decide not to reuse them, and instead go for a completely new system.ballsystemlord - Monday, May 10, 2021 - link
That's the new spec. ATX 12v only https://en.wikipedia.org/wiki/ATX#ATX12VO. It's not proprietary stuff. It's just increasing the efficiency of the PSU because of regulations around climate change "It was motivated by stricter power efficiency requirements by the California Energy Commission going into effect in 2021." -- see above link.Ultimately, they are just transferring the electrical inefficiencies to the MB. Thus demonstrating once again that the solution really doesn't solve anything -- just like solar panels and battery powered cars.
Spunjji - Tuesday, May 11, 2021 - link
@ballsystemlord They were talking about a Haswell system, so it's not the new ATX12VO spec as that wasn't finalised until 2019. Dell have had proprietary pin-outs on their PSUs for decades now.That 12VO description is a little bit of an oversimplification, too. Aside from simplifying connector design, it saves on materials costs for power supplies - no need to build circuits for conversion to every voltage a PC might need when many won't even use those circuits (e.g. an ITX gaming PC that only uses NVMe storage). Having drives powered from the board also simplifies cabling - you just run one power+storage connector from the board, instead of threading from both the motherboard and the PSU.
ballsystemlord - Thursday, May 13, 2021 - link
@Spunjji Thanks! I wouldn't have thought to check the release year of the products compared to the spec if you hadn't said anything!Oxford Guy - Sunday, May 9, 2021 - link
Corporations do their best to create the products they want to sell rather than the products people want to buy.Spunjji - Monday, May 10, 2021 - link
Dell have had proprietary components on their motherboards for decades - I remember trying to repair a Pentium 4-based Dell desktop and discovering that I had to *re-wire the 20-pin PSU connector* because they used a nonstandard pin-out. And don't get me started on their front panel connectors... they used to leave the AGP ports off the boards so they couldn't be upgraded with decent graphics adaptors, too.GeoffreyA - Monday, May 10, 2021 - link
Even Compaq too. I've got a Coppermine which I'd like to fire up for the fun of it, but the PSU is blown, and if I remember rightly, it's got a nonstandard connector.Spunjji - Tuesday, May 11, 2021 - link
@GeoffreyA - if you ever feel like getting creative, it looks like people have documented the pin-outs and the changes needed to replicate them with a standard PSU:https://www.ecoustics.com/electronics/forum/comput...
GeoffreyA - Wednesday, May 12, 2021 - link
Thanks for the link.ballsystemlord - Monday, May 10, 2021 - link
Contrary to Nvidia's marketing materials, it's not that big of an uplift. The big GPU uplift has been from Navi -- and just like ryzen's uplift, this is due to AMD being so far behind.Gigaplex - Saturday, May 8, 2021 - link
B550 in general officially doesn't support the 3200G. That's not really an HP thing. The 4000 series though... Ouch.Ashinjuka - Sunday, May 9, 2021 - link
Appreciate the heads up on the OfficeDepot link. I managed to snag one (the only one they had locally) Saturday. One day later and they've already raised the price $70 and are probably OOS all over anyway.NextGen_Gamer - Friday, May 7, 2021 - link
I think there are a number of errors at the bottom of the first page. In the chart and in below paragraph, all of the Rocket Lake processors should say 32 EUs (not 64), and the Core i9 10900K I thought is still Gen9 graphics, not Gen11 (which should only be on 10-nm Ice Lake). Also, no shout-out to the Gen10 Cannon Lake GPU that was too beautiful for this world lol?mode_13h - Friday, May 7, 2021 - link
Good points, especially about Cannon Lake!(one could also say that particular cannon backfired)
NextGen_Gamer - Friday, May 7, 2021 - link
haha, Yeah no doubt! I was just thinking it would be nice to mention it, since that initial chart goes sooo far back all the way to Sandy Bridge, so you see the evolution from Gen6 all the way thru to Gen11, then Xe - but if new readers look at it, wouldn't they wonder what happened to Gen10? I was thinking throw it in there with a strikethrough lolSpunjji - Monday, May 10, 2021 - link
I definitely think this should be done - not just for historical accuracy, but also because Intel are so keen to memory-hole that distressing little period of their recent history!Hulk - Friday, May 7, 2021 - link
It would have been nice to see some GPU uses other than gaming. For example, I use the NLE Vegas Pro with integrated graphics. Not for hardware rendering, but for accelerating playback. I realize I am in a very small minority though.brucethemoose - Friday, May 7, 2021 - link
Hardware decode benchmarking is pretty straightforward. Just run a test video through ffmpeg to null output, with hwaccel on, and time how long it takes.I think our minority isn't so small, and I would love for Anand to bench decode performance more regularly.
29a - Friday, May 7, 2021 - link
It would be awesome if Anand did anything around here anymore.
mode_13h - Friday, May 7, 2021 - link
He moved on, right? Does anyone know what he's been up to?
Otritus - Friday, May 7, 2021 - link
According to Dr. Ian, Anand works at Apple now doing unknown things.
GeoffreyA - Saturday, May 8, 2021 - link
On Twitter, he posts a "Happy New Year" and retweets some Apple things, but other than that, nothing much. I do recall one message saying he was happy and couldn't complain.
Hulk - Friday, May 7, 2021 - link
Beyond hardware video decode, I was referring to the OpenCL-accelerated video filters Vegas Pro uses to make previewing smooth. Rendering isn't a big deal, as I can frameserve a project to Handbrake and transcode overnight. But smooth previews during the editing process are essential for creative on-the-fly work.
brucethemoose - Saturday, May 8, 2021 - link
Ahhh, yeah, that's very different. Some sites like Puget bench the GPU filters, but I'm not sure how well that translates to preview performance, and they don't do IGPs.
mode_13h - Friday, May 7, 2021 - link
It'd have been nice if you'd bothered to look up the EU-counts for the earlier CPUs. Here they are:
Sandy Bridge i7-2600K Jan 2011 4 Gen6 12 EU 11%
Ivy Bridge i7-3770K April 2012 4 Gen7 16 EU 29%
Haswell i7-4770K June 2013 4 Gen7.5 20 EU 29%
Broadwell i7-5775C June 2015 4 Gen8 48 EU 48%
Skylake i7-6700K Aug 2015 4 Gen9 24 EU 36%
Kaby Lake i7-7700K Jan 2017 4 Gen9 24 EU 36%
Coffee Lake i7-8700K Sept 2017 6 Gen9 24 EU 30%
Coffee Lake i9-9900K Oct 2018 8 Gen9 24 EU 26%
mode_13h - Friday, May 7, 2021 - link
I knew the formatting wouldn't look great, but that's a mess. It kept only one space between words! Here's a simplified table, with the full list:
i7-2600K 12 EUs
i7-3770K 16 EUs
i7-4770K 20 EUs
i7-5775C 48 EUs
i7-6700K 24 EUs
i7-7700K 24 EUs
i7-8700K 24 EUs
i9-9900K 24 EUs
i9-10900K 24 EUs
i9-11900K 32 EUs
Mobile CPUs:
i7-1065G7 64 EUs
i7-1185G7 96 EUs
mode_13h - Friday, May 7, 2021 - link
Note that Broadwell's GT2s mostly had 24 EUs, but this is one of the models with GT3e graphics.
mode_13h - Friday, May 7, 2021 - link
Of course, the EU-count doesn't tell the full story. One of the more notable changes was in Gen7, when they went from 1x 128-bit SIMD to 2x.
WaltC - Friday, May 7, 2021 - link
I'm sort of perplexed--I didn't see that the title question was answered in the article...unless Dr. Potato Head is asking whether Intel's current IGPs are "competitive" with older Intel IGPs...which would seem to be the case. I mean, we should hope that Intel's latest would best its previous efforts. But is that really being "competitive"...? Intel seems very confused of late as to who its chief competitor is, imo--at least publicly...;)
Oxford Guy - Friday, May 7, 2021 - link
I’m confused by your reference to a child’s game.
29a - Friday, May 7, 2021 - link
I'm guessing it's because the Dr has a YouTube channel called Tech Potato or something close to that, and he calls low-end computers potatoes.
mode_13h - Friday, May 7, 2021 - link
> I didn't see that the title question was answered in the article
I think they presume that piece of meat behind your eyes is doing more than keeping your head from floating away. Look at the graphs, and see the answer for yourself.
However, the article does in fact sort of answer it, in the title of the final page:
"Conclusions: The Bare Minimum"
mode_13h - Friday, May 7, 2021 - link
> unless Dr. Ian Cutress is asking whether Intel's current IGPs are "competitive"
> with older Intel IGPs...which would seem to be the case.
As is often the case, they're comparing it with previous generations that readers might be familiar with, in order to get a sense of whether/how much better it is.
And it's not as if that's *all* they compared it against!
dwillmore - Friday, May 7, 2021 - link
So your choices are postage stamp or slide show? No thank you.
Oxford Guy - Friday, May 7, 2021 - link
My favorite part of the Intel CPU + Intel GPU history is Atom, where serious hype was created over how fabulously efficient the chip was, while it was sold with a GPU+chipset that used — what was it? — three times the power — negating the ostensible benefit from paying for the pain of an in-order CPU (a time-inefficient design sensibly abandoned after the Pentium 1). The amazing ideological purity of the engineering team's design goal (maximizing the power efficiency of the CPU) was touted heavily. Netbooks were touted heavily. I said they were a mistake, even before I learned (which wasn't so easy) that the chipset+GPU solution Intel chose to pair with Atom (purely to save the company money) made the whole thing seem like a massive bait and switch.
mode_13h - Friday, May 7, 2021 - link
> fabulously efficient the chip was, while it was sold with a GPU+chipset that used
> — what was it? — three times the power
Well, if they want to preserve battery life, maybe users could simply avoid running graphically-intensive apps on it? I think that's a better approach than constraining its graphics even further, which would just extend the pain.
I'm also confused which Atoms you mean. I'm not sure, but I think they didn't have iGPUs until Silvermont, which was already an out-of-order core. And those SoC's only had 4 EUs, which I doubt consumed 3x the power of the CPU cores & certainly not 3x the power of the rest of the chip.
What I liked best about Intel's use of their iGPUs in their low-power SoCs is that the drivers just work. Even in Linux, these chips were well-supported, pretty much right out of the gate.
TheinsanegamerN - Friday, May 7, 2021 - link
Graphically intensive apps, you mean like Windows Explorer and a web browser? Because that was enough to obliterate battery life. The original Atom platform was awful. Plain and simple.
29a - Friday, May 7, 2021 - link
This^ Atoms were awful; turning the computer on would be considered graphically intensive.
mode_13h - Friday, May 7, 2021 - link
I still don't follow the logic of the Oxford dude. Would it really have been a good solution to put in even worse graphics, further impinging on the user experience, just to eke out a little more battery life? I'm not defending the overall result, but that strikes me as an odd angle on the issue. Indeed, if Explorer and a web browser were as much as the GPU could handle, then it seems the GPU was well-matched to the task.
Oxford Guy - Sunday, May 9, 2021 - link
You should learn about the Atom nonsense before posting opinions about it. The power consumption of the chipset + GPU completely negated the entire point of the Atom CPU, from its design philosophy to the huge hype placed behind it by Intel, tech media, and companies peddling netbooks.
It is illustrative of large-scale bait and switch in the tech world. It happened purely because Intel wanted to save a few pennies, not because of technological restriction. The chipset + GPU could have been much more power-efficient.
Spunjji - Monday, May 10, 2021 - link
You don't follow because you're trying to assess what he said by your own (apparently incomplete) knowledge, whereas what would make sense here would be to pay more attention to what he said - because, in this case, it's entirely accurate. Intel paired the first 45nm Atom chips with one of two chipsets - either the recycled 180nm 945 chipset, designed for Pentium 4 and Core 2 processors, or the 130nm Poulsbo chipset. The latter had an Imagination Technologies mobile-class GPU attached, but Intel never got around to sorting out working Windows drivers for it. In either case, it meant that they'd built an extremely efficient CPU on a cutting-edge manufacturing process and then paired it with a hot, thirsty chipset. It was not a good look; this was back when they were absolutely clobbering TSMC on manufacturing, too, so it was a supreme own-goal.
GeoffreyA - Monday, May 10, 2021 - link
180 or 130 nm. Yikes. No wonder.
mode_13h - Tuesday, May 11, 2021 - link
Thanks for the details. I was confused about which generation he meant. If he'd supplied even half the specifics you did, that could've been avoided. Also, dragging up examples from the 2000s just seems like an egregious stretch to engage in Intel-bashing that's basically irrelevant to the topic at hand.
mode_13h - Wednesday, May 12, 2021 - link
There's another thing that bugs me about his post, and I figured out what it is. Everything he doesn't like seems to be the result of greed or collusion. Whatever happened to plain old incompetence? And even competent organizations and teams sometimes build a chip with a fatal bug or run behind schedule and miss their market window. Maybe they *planned* on having a suitable GPU, but the plan fell through and they misjudged the suitability of their fallback solution? Intel has certainly shown itself to be fallible, time and again.
GeoffreyA - Wednesday, May 12, 2021 - link
Incompetence and folly have worked against these companies over and over again. The engineering has almost always been brilliant, but the decisions have often been wrong.
yeeeeman - Friday, May 7, 2021 - link
Silvermont was quite good in terms of power and performance. It actually competed very well with the Snapdragon 835 at the time, and damn, the process was efficient. The vcore was 0.35v! And on the iGPU you can play Rocket League at low settings, 720p, with 20 fps and 3w total chip power. If that isn't amazing for a 2014 chip then I don't know what is. Intel actually was very good between Core 2 Duo in 2006 and Skylake in 2015. Their process was superior to the competition and the designs were quite good also.
mode_13h - Friday, May 7, 2021 - link
Thanks for the details.
SarahKerrigan - Saturday, May 8, 2021 - link
Silvermont was fine, though not great. The original Atom core family was utterly godawful - dual-issue in-order, and clocked like butt.
mode_13h - Sunday, May 9, 2021 - link
The 1st gen had hyperthreading, which Intel left out of all subsequent generations. Gracemont is supposed to be pretty good, but then it's a lot more complex, as well.
Oxford Guy - Sunday, May 9, 2021 - link
The only thing that's really noteworthy about the first generation is how much of a bait and switch the combination of the CPU and chipset + GPU was — and how the netbook hype succeeded. It was literally paying for pain (the CPU's ideological purist design — pursuing power efficiency too much at the expense of time efficiency) and getting much more pain without (often) knowing it (the disgustingly inefficient chipset + GPU).
As for the hyperthreading... my very hazy recollection is that it didn't do much to enhance the chip's performance. As to why it was dumped for later iterations — probably corporate greed (i.e. segmentation). Greed is the only reason why Atom was such a disgusting product. Had the CPU been paired with a chipset + GPU designed according to the same purist ideological goal it would have been vastly more acceptable.
As it was, the CPU was the tech world's equivalent of the 'active ingredient' scam used in pesticides. For example, one fungicide's ostensible active ingredient is 27,000 times less toxic to a species of bee than one of the 'inert' ingredients used in a formulation.
watersb - Sunday, May 9, 2021 - link
The first-ever Atom platform did indeed use a chipset that used way more power than the CPU itself. 965G, I think. I built a lab of those as a tiny desktop platform for my daughter's school at the time. They were good enough for basic Google Earth at full screen, 1024x768.
The next Atom I had was a Something Trail tablet, the HP Stream 7 of 2014. These were crippled by using only one of the two available memory channels, which was devastating to the overall platform performance. Low-end chips can push pixels at low-spec 7-inch displays if they don't have to block waiting for DRAM. The low-power, small-CPU-cache Atom pretty much requires a decent pipe to RAM, otherwise you blow the power budget just sitting there.
The most recent Atoms I have used are HP Stream 11, N2000-series little Windows laptops. Perfect for little kids, especially for low-income families who were caught short this past year, trying to provide one laptop per child as the schools went remote-only.
Currently the Atom N4000 series makes for a decent Chromebook platform for remote learning.
So I can get stuff done on Atom laptops. Not competitive with ARM ones in performance or power efficiency, but my MacBook Pro M1 cost the same as that 10-seat school lab. Both the Mac and the lab are very good value for the money. Neither choice will get you Crysis or Tomb Raider.
GeoffreyA - Monday, May 10, 2021 - link
"the CPU's ideological purist design — pursuing power efficiency too much at the expense of timeefficiency"
Atom may have been terrible, but I respect its design philosophy and Intel's willingness to go back and revive in-order, in the hope it would cut down power considerably. The good designer takes any principle off his shelf to solve a problem and doesn't ban a design because it's old or finished. (Does the good director abandon telling a story through picture because "talkies" took over?) So they decided, why not visit in-order again and see what it can do on a modern process, along with SMT. Out of order, after all, has a lot of overhead.
Spunjji - Tuesday, May 11, 2021 - link
Agreed on respecting the philosophy. With consistent application and a full, coherent vision for the products using it, it could have been quite a neat product. Unfortunately, MIDs were always a half-baked idea, and Netbooks appeared to be an excuse for clearing out sub-par components.
GeoffreyA - Wednesday, May 12, 2021 - link
"Netbooks appeared to be an excuse for clearing out sub-par components."
Truly. I remember my aunt had one back in 2011, and boy, was that thing junk. Had Windows 7 Basic on it. Not a good impression at all.
mode_13h - Tuesday, May 11, 2021 - link
I think most ARM cores were still in-order by that point, and that's who Intel was targeting. I hope people aren't too put off by dredging up other ideas that had a couple of poor implementations, such as some of the concepts underlying EPIC. I think there's a lot you can do with some of those ideas, other than where Intel went.
GeoffreyA - Wednesday, May 12, 2021 - link
First iterations tend to disappoint. Pentium Pro is another example. I believe we even see the same principle at work in games, software, and the arts. As for EPIC, it did have some interesting ideas. Let's hope they can look past the failure of Itanium and salvage some golden pieces.
mode_13h - Thursday, May 13, 2021 - link
Pentium Pro was brilliant, as long as you weren't running 16-bit code! That's why they branded it as Pro, since I think they figured more business users and prosumers would be using 32-bit apps. Also, it was more expensive, due to its large L2 cache (which was on a separate die). I remember fragging a bunch of noobs in Quake, running on a PPro 200 with a T1 line virtually to myself. This was before Quake supported 3D cards, and my framerates were definitely above what you'd hit with a regular Pentium. That's pretty much the last time I played games at work, although it was during evenings/weekends.
I tell you, PPro 200 + Matrox Millennium was a brilliant way to run WinNT 3.51 and 4.0.
GeoffreyA - Sunday, May 9, 2021 - link
Some might laugh at this sentiment, but don't be surprised when the desktop microarchitecture starts borrowing tricks from, and is eventually replaced by, the Atom lineage, reminiscent of Dothan/Yonah becoming Core.
kepstin - Monday, May 10, 2021 - link
An interesting side effect of the in-order design is that these early Atom chips are immune to speculative execution vulnerabilities - so they're actually the newest, fastest x86 chips that aren't vulnerable to Spectre-type issues. I still use an Atom D525-based system as my internet router at home; it's fast enough for that for now.
GeoffreyA - Saturday, May 8, 2021 - link
Well, Intel == common sense, does not compute.
CrispySilicon - Friday, May 7, 2021 - link
This is just embarrassing. "In all situations, we will be testing with JEDEC memory"
I run my 5775C on 1866 DDR3L in XMP, let alone the 2400CL9 1.65v I USED to run in it, and this is a travesty. Justice for Iris Pro 6200!
Oxford Guy - Friday, May 7, 2021 - link
Excessive voltage is unwise. It's even worse than bottlenecking parts unnecessarily via JEDEC rather than XMP using motherboard-certified RAM.
CrispySilicon - Friday, May 7, 2021 - link
It's not excessive; Broadwell is perfectly capable of 1.65V on the memory.
Oxford Guy - Sunday, May 9, 2021 - link
Intel rated Broadwell at 1.5V for RAM as I recall, most likely due to it being 14nm (and the first generation of that, as I recall). I've read more claims about what's fine and stable than I can count, but one thing I am wary of is electromigration from exceeding the safe voltage zone.
TheinsanegamerN - Friday, May 7, 2021 - link
If the memory is designed for it, there shouldn't be any issues.
Otritus - Friday, May 7, 2021 - link
Voltage doesn't kill processors, amperage does. P = C·f·V^2, so an increase in voltage will result in approximately an equivalent increase in amperage. It's why processors with better cooling can be overvolted higher, since better cooling reduces current. On Broadwell, 1.65V should be pretty safe since that memory controller was built for DDR3.
FunBunny2 - Saturday, May 8, 2021 - link
"Voltage doesn't kill processors, amperage does."
People too :)
GeoffreyA - Monday, May 10, 2021 - link
Good one.
Oxford Guy - Sunday, May 9, 2021 - link
'so an increase in voltage will result in approximately an equivalent increase in amperage'
So... you've managed to point out something nearly 100% pedantic, unless one is studying to be an electrical engineer and is preparing for an exam.
'On Broadwell 1.65V should be pretty safe since that memory controller was built for DDR3.'
Intel rated it for 1.5.
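To put rough numbers on the point above, here's a quick back-of-the-envelope sketch (assuming dynamic power follows P ≈ C·f·V^2 with capacitance and frequency held constant, and using the 1.5V and 1.65V DRAM voltages being discussed):

    # Relative current and dynamic power when raising DRAM voltage, all else equal.
    v_rated, v_xmp = 1.50, 1.65
    print("current scales by       ~%.2fx" % (v_xmp / v_rated))           # I ~ C*f*V   -> ~1.10x
    print("dynamic power scales by ~%.2fx" % ((v_xmp / v_rated) ** 2))    # P ~ C*f*V^2 -> ~1.21x

So going from 1.5V to 1.65V means roughly 10% more current and about 21% more dynamic power on that rail, all else being equal.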
DigitalFreak - Friday, May 7, 2021 - link
You want a cookie?
CrispySilicon - Monday, May 10, 2021 - link
No, I want results that will accurately influence people, and this isn't how it should have been represented. Here you go: benchmark results from DDR3-2400, just XMP on, no other tuning. And DDR3-2400 is still available off the shelf, brand new, from Newegg for around $100.
5775C $130 @ AliExpress
Z97 $118 @ Newegg
DDR3-2400 $56 @ Newegg
Left is GTA:V, right is DX:MD, only two games I had from the bench tests. Both @ 1080.
DX is just plain unplayable, but @ 12.3/9.7 vs 9.9/9.2 (anand), that's a ~20% difference.
GTA:V is great, and touches 60fps.
https://ibb.co/XCjSZWn
The point is, it's very performance-competitive, and could be well worth the cost in the GPU-limited market.
Spunjji - Monday, May 10, 2021 - link
It's as level a playing field as you can get. I think it's pretty justified, given the target market for integrated graphics and overclocking is even smaller than the one for overclocking in general.
Torrijos - Friday, May 7, 2021 - link
Would be interesting to compare with Apple M1... Just to see where Intel is, against another iGPU maker...
Zizy - Friday, May 7, 2021 - link
Well, considering you can't run many games on the list, it would be a pretty short comparison - M1 is faster for the stuff it can run, yet still much worse for gaming.
brucethemoose - Friday, May 7, 2021 - link
Even if one benches the same app, there's so much that's Apples-to-oranges.
-Different OS
-Potentially different Graphics API.
-Different CPU ISA (Unless the M1 is running Rosetta).
At that point, it's less of a GPU tech comparison and more of a specific platform/product comparison.
mode_13h - Friday, May 7, 2021 - link
True that it's not so easy to compare, but cross-platform benchmarks certainly exist. On paper, I think the Apple GPU is definitely faster. They claim 2.6 TFLOPS, whereas I estimate Tiger Lake's G7 has a peak of 2.07. Of course, raw compute is far from the whole story.
Anandtech actually compared it with an i7-1065G7 (Gen11 @ 64 EUs), as implemented in a MS Surface (not fair, when the M1 was housed in a Mac Mini), but you could try to sort of extrapolate the results:
https://www.anandtech.com/show/16252/mac-mini-appl...
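For what it's worth, that 2.07 TFLOPS estimate for the 96-EU Xe-LP part presumably comes from something like the sketch below; the 1.35 GHz figure is the i7-1185G7's rated maximum graphics clock, and sustained clocks will be lower, so treat it as a peak-on-paper number:

    # Rough peak-FP32 estimate for a 96-EU Xe-LP iGPU (e.g. i7-1185G7).
    eus = 96            # execution units
    lanes_per_eu = 8    # FP32 ALU lanes per EU on Xe-LP
    flops_per_fma = 2   # multiply + add
    clock_ghz = 1.35    # rated max graphics clock
    print(eus * lanes_per_eu * flops_per_fma * clock_ghz / 1000, "TFLOPS")  # ~2.07

The same kind of arithmetic with the M1 GPU's wider ALU array and its clock lands roughly where Apple's 2.6 TFLOPS claim does.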
mode_13h - Friday, May 7, 2021 - link
> there's so much that's Apples-to-oranges.
But running in Rosetta should cover all your caveats, while putting the M1 at a disadvantage, and it *still* stomps all the other iGPUs they tested against!
Also, they ran a couple tests running native software vs. an Intel Mac doing the same.
Oxford Guy - Sunday, May 9, 2021 - link
The thing Apple's hardware really stomps is the speed with which it runs Apple's planned-obsolescence-via-withheld-security-patches program. Good thing it can run fast, because Apple's program is the quickest in the industry.
GeoffreyA - Monday, May 10, 2021 - link
Well, they're forward-thinking folk, so out with the old, in with the new, right?
Spunjji - Monday, May 10, 2021 - link
Way behind is the answer - even 96-EU Xe isn't a patch on Apple's custom unit. A lot of that is down to Apple using 5nm to pack in an even larger and wider GPU, but I'd wager Apple's Imagination Technologies-derived (lol, sorry, "not"-derived) design is more sophisticated, too.
brucethemoose - Friday, May 7, 2021 - link
Perhaps Intel should've dumped desktop IGPs this time around and set their engineers on something more useful, while saving money/time on mobo design as well. Consider all the use cases:
Need power efficiency? You want Tiger Lake, not Rocket Lake.
Cheap gaming? Corporate desktop? Bundle the CPU with DG1.
Bottom barrel systems? Nothing wrong with Comet Lake.
mode_13h - Friday, May 7, 2021 - link
> Need power efficiency? You want Tiger Lake, not Rocket Lake.
Rocket Lake is 14 nm. They don't have enough 10+ nm fab capacity to sell desktop Tiger Lakes, but you can find them in SFF PCs and there are rumored to be NUCs with them, also.
Someone wanting more compute power should still prefer Rocket Lake, due to its higher core-count. We'll have to see what availability of the 8-core Tiger Lakes looks like, but I'm expecting it to be scarce & expensive.
> Cheap gaming? Corporate desktop? Bundle the CPU with DG1.
If someone needs more graphics horsepower, I'm sure that sometimes happens. It's not going to be cheaper than having an iGPU, however. So, if you're suggesting Intel should've left out the iGPU from Rocket Lake, that's going to be a deal-breaker for some customers.
> Bottom barrel systems? Nothing wrong with Comet Lake.
And you can still buy them. However, Rocket Lake offers more PCIe lanes and PCIe Gen 4. Its Xe GPU offers hardware-accelerated AV1 decode. There are some other benefits of Rocket Lake, but I agree that someone on a budget should probably be looking at Comet Lake.
MikeMurphy - Friday, May 7, 2021 - link
That's rather impressive performance from the 1185G7. Should we be factoring in the power envelope afforded to the mobile chips when comparing performance?
Alistair - Friday, May 7, 2021 - link
That's not impressive; Apple's M1 is impressive. One of the reasons Apple abandoned Intel is their terrible IGP.
GeoffreyA - Saturday, May 8, 2021 - link
Well, you know what they say about rats ;)
Spunjji - Monday, May 10, 2021 - link
They're all impressive in different ways.
M1 shows what you get if you go all-in on providing a large GPU on a cutting-edge process.
Vega 8 is impressive for the performance it provides in a relatively tiny area of the chip.
Xe is impressive for being so much better than Intel's previous efforts, and for currently being the best-performing iGPU on an x86 platform.
Alistair - Friday, May 7, 2021 - link
So AMD is ahead with the 4750G and 5700G, and Apple's M1 is WAY ahead. More crap from Intel.
pman6 - Friday, May 7, 2021 - link
This is terrible. The IGP is so important now that graphics cards are so damn expensive.
Looks like Alder Lake won't change much.
How long do I have to wait for the next AMD APU that has AV1 decode?
GeoffreyA - Saturday, May 8, 2021 - link
At most, till RDNA2 finds its way into the APUs, but possibly before that, because the video block is separate from the GPU.
dsplover - Friday, May 7, 2021 - link
They're so slow to respond to AMD. Maybe next year, when they innovate something instead of more 14 nm cores.
supdawgwtfd - Friday, May 7, 2021 - link
"so Intel has rebalanced the math engine while also making in larger per unit."
Say what now???
mode_13h - Sunday, May 9, 2021 - link
Well, obviously "making in larger" should be "making it larger". They mean changing the ratio of ALU pipelines capable of handling different sorts of operations, while also increasing their total number by 25% (i.e. from 4+4 to 8+2, referring to fp/int + complex operations). And by "complex", presumably they mean things like sqrt, sin, and atan.
watzupken - Friday, May 7, 2021 - link
The results are pretty bad if you ask me. Considering all the hype on the Xe graphics, the improvement over last-gen UHD graphics isn't great. When comparing, say, the i9-11900 and 10900, it's a comparison between a 32 EU and a 24 EU part. So whatever the results, we need to account for that 30% increase in EUs. At the end of the day, it's just good for light games and mostly non-3D-intensive use.
mode_13h - Sunday, May 9, 2021 - link
> Considering all the hype on the Xe graphics
This type of small implementation isn't what the hype was about.
> the improvements over last gen UHD graphics isn’t great.
Yeah, but you're comparing 14 nm to 14 nm and not much more die area. So, I don't know why you were expecting any miracles, here.
> When comparing say i9 11900 and 10900, it’s a comparison between a 32 vs 24 EU part.
> So whatever the results, we need to account for that 30% increase in EUs.
Just eyeballing it, the results well *exceed* that bar, in most cases!
I'm not saying Rocket Lake's iGPU is *good*, just that it performs at least as well as one would expect, considering it's still 14 nm and has only 32 EUs.
Spunjji - Monday, May 10, 2021 - link
> Yeah, but you're comparing 14 nm to 14 nm and not much more die area. So, I don't know why you were expecting any miracles, here.
According to AT's percentages compared to the actual die sizes, Intel spent roughly an extra 13mm^2 of die area, or about a 28% increase. Performance-per-area has definitely gone up, but it's not a giant leap.
vegemeister - Friday, May 7, 2021 - link
> From Intel, its integrated graphics is in almost everything that Intel sells for consumers. AMD used to be that way in the mid-2010s, until it launched Ryzen
Eh? Bulldozer -- no IGP. Phenom II -- no IGP. The various Athlons -- no IGP as far as I recall.
mode_13h - Sunday, May 9, 2021 - link
Yeah, that was a weird position for them to take. APUs were always a lower-tier product for AMD.
eastcoast_pete - Friday, May 7, 2021 - link
Conclusion: Let's hope Intel gives Alder Lake at least 96 EU Xe graphics as iGPU; as things look right now, dGPUs will remain unaffordable or downright unobtainable. Otherwise, it's back to 10 fps as standard.
mode_13h - Sunday, May 9, 2021 - link
I doubt it. For the desktop market, I'd expect more in the range of 48-64. I think die area currently commands too much of a premium. As Intel has a history of offering bigger iGPUs in some of their laptop chips, I don't see Tiger Lake's 96 EUs as a precedent for what we should expect of their 10 nm desktop processors.
Spunjji - Monday, May 10, 2021 - link
Agreed here. For a vast chunk of their market, it's just not relevant. I doubt we'll see this change much until they start using chiplets, and then we might get the *option* to buy something with a bigger iGPU.
TheJian - Saturday, May 8, 2021 - link
Why is the 4750G even in the list if you can't BUILD with it? They are about to launch a new one, and you can't even get it until Xmas maybe (they say Q3-Q4, but that means Q4) for an actual DIY build.
Stop making console SoCs, AMD, so you'll have more wafers for stuff that makes NET INCOME. I think there are a few people who'd like some 57xx APUs before Xmas, and surely they'd make more than single-digit or mid-teens margins (meaning <15% right? or you'd say 15!). ~1000mm^2 wasted on every SoC gone to consoles, and HALF of that is the GOOD wafers that could go to chips that make THOUSANDS, not $10-15 each. Just saying... 500mil NET again...
Linustechtips12#6900xt - Saturday, May 8, 2021 - link
Do you realize they have pre-made orders from Sony/Microsoft they have to fulfil??? Any silicon that goes to anything is great; they are selling everything they have and can't make enough of ANYTHING! More TSMC capacity would help, but still! I understand you want to build a PC right now, but you're going to have to wait. If you want to play games, either buy a pre-built and get a 3000-series GPU, or buy a console from scalpers; either one is a decent option, though I would tend to go the way of prebuilts, as funding the fucktardistans of scalpers is not a great idea.
Qasar - Saturday, May 8, 2021 - link
Linustechtips12#6900xt, ignore the jian. The only thing he does on here is find ways to bash AMD in one way or another. Most of what he posts are just rants, no facts, just FUD.
mode_13h - Sunday, May 9, 2021 - link
> Stop making console SoCs, AMD, so you'll have more wafers for stuff that makes NET INCOME.
I'm not convinced they're actually buying these wafers, rather than Sony and MS. Even if they are, I'm sure they're contractually obligated to Sony and MS and don't have the option to simply divert console-designated wafers to be used for other AMD products.
I don't understand why people keep saying otherwise. It's like they don't understand how business works.
Oxford Guy - Sunday, May 9, 2021 - link
None of that matters in the big picture.
In the big picture, AMD is actively working with Sony and MS against PC gamers. PC gamers need to wise up.
This also means it's helping Nvidia to keep prices high, which means there is a (presumably) legal form of collusion. Letting Nvidia set prices keeps AMD happy because it gets to raise prices, too. The high prices for PC gaming make consoles seem more attractive, even though the high prices are due to the existence of the consoles.
Nvidia is also, to a smaller but still significant extent, actively undermining PC gaming via its effort to prop up the Switch (a third artificial parasitic x86 walled garden).
Bottom line is that all of this is due to inadequate competition. Both AMD and Nvidia are setting record profits/earnings/whatever by not selling GPUs to gamers.
Qasar - Monday, May 10, 2021 - link
And none of this matters unless you post some links to show proof of your BS console scam conspiracy theory. As I can't remember you posting anything proving this in all of your posts about it, it is therefore just your opinion, nothing more. As I said before, you sure come across as one angry person in most of your posts.
mode_13h - Monday, May 10, 2021 - link
mode_13h - Monday, May 10, 2021 - link
Unless he was a liberal arts major, none of these arguments would pass muster at a place like Oxford.
mode_13h - Monday, May 10, 2021 - link
> In the big picture, AMD is actively working with Sony and MS against PC gamers.
How? Because they're collaborating in probably not more than a couple % of TSMC's fab capacity being used for console chips?
And what do you think would happen if AMD *didn't* design console chips? It'd just be someone else.
> The high prices for PC gaming make consoles seem more attractive, even though the high prices are due to the existence of the consoles.
In the same way a nearby farmer has less water for crop irrigation because I'm brushing my teeth and flushing my toilet.
> Nvidia is also, to a smaller but still significant extent, actively undermining PC gaming via its effort to prop up the Switch
The Switch's SoC is manufactured on TSMC 16 nm. So, that's not competing with any current CPU or GPU for fab capacity. Also, it's a pretty small SoC.
> Bottom line is that all of this is due to inadequate competition.
Bottom line is that there's a fab capacity crunch, and what you should *really* be raging at is crypto mining and the pandemic.
Spunjji - Monday, May 10, 2021 - link
"In the same way a nearby farmer has less water for crop irrigation because I'm brushing my teeth and flushing my toilet."Nailed it.
Spunjji - Monday, May 10, 2021 - link
"AMD is actively working with Sony and MS against PC gamers. PC gamers need to wise up."🙄
"The high prices for PC gaming make consoles seem more attractive, even though the high prices are due to the existence of the consoles."
---citations desperately needed---
"Nvidia is also, to a smaller but still significant extent, actively undermining PC gaming via its effort to prop up the Switch (a third artificial parasitic x86 walled garden)."
Amazing. Switch isn't x86, consoles have always been walled gardens BY DEFINITION (it's the entire point, and it's a good thing in that context) and the Switch is in *no conceivable way* a direct competitor to PC gaming.
"Both AMD and Nvidia are setting record profits/earnings/whatever by not selling GPUs to gamers."
They'd make even more money if they could also sell GPUs to gamers. They can't, because they can't make enough of them.
Fulljack - Tuesday, May 11, 2021 - link
Switch is using Arm64, not x86. Do you even know what you're talking about?
Spunjji - Monday, May 10, 2021 - link
"Why is the 4750G even in the list if you can't BUILD with it?"
Because you can buy it, duh.
"blah blah blah, rabble rabble, I know better than Lisa Su, whine piss moan"
Nobody cares what you think. AMD had excellent financial results last quarter - especially for Q1 - and you were conspicuously absent from the comments there.
https://www.anandtech.com/show/16645/amd-reports-q...
mode_13h - Tuesday, May 11, 2021 - link
> and you were conspicuously absent from the comments there.
SHHHHhhhhh!!
watzupken - Saturday, May 8, 2021 - link
"Conclusion: Let's hope Intel gives Alder Lake at least 96 EU Xe graphics as iGPU; as things look right now, dGPUs will remain unaffordable or downright unobtainable. Otherwise, it's back to 10 fps as standard"Don't want to burst your bubble, but the odds of seeing a 96 EU XE graphics on desktop Alder Lake is quite slim. The fact that Intel have adamently stick to 8 performance cores and not more is likely due to constrains in die size and/or power. Moreover, you can tell that Intel's strategy with iGPU is unlike AMD. At least from what we observed, AMD's APU strives to provide a cheap gaming experience, unless you choose to go with the Athlon series. Intel on the other hand is just provide their users with an iGPU for display purpose and almost no focus on making it gaming worthy. In my opinion, both approach makes sense, i.e. no right or wrong. I do feel Intel's approach sound more logical to me because it doesn't make sense for a gamer to spend so much on an APU that can only game at 1080p at low frame rates.
Fulljack - Tuesday, May 11, 2021 - link
Because AMD APUs are mainly aimed at esports titles, where games don't need the greatest and best chip to run at 1080p@60fps using low settings. This is where internet cafes in Asia thrive, which is exactly where AMD APUs will end up: in OEM pre-built PCs. It's a matter of perspective—it doesn't make sense for a PC builder, but it makes sense if you have a tight budget.
GeoffreyA - Wednesday, May 12, 2021 - link
"it makes sense if you have a tight budget"
Fantastic for those on a budget, at least when Raven Ridge and Picasso were priced within the realms of reason. It's a graphics card for free.
scineram - Saturday, May 8, 2021 - link
Rembrandt will begin a new era for sure.
vol.2 - Saturday, May 8, 2021 - link
The issue isn't 20 FPS vs 60 FPS (for me), it's the minimum frame rate. If it dips below 15, or it swings more than 15 FPS in anything lower than 60 FPS, I feel sick. We should be paying attention to minimum FPS over 15 FPS, and measuring frame swing.
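Something like the sketch below is all that takes, given a per-frame frame-time log (the kind most benchmark tools can dump); the numbers and values here are just placeholders:

    # From a list of frame times (ms), report average FPS, the worst single frame,
    # and the largest frame-to-frame swing expressed in FPS.
    frame_times_ms = [16.7, 17.1, 16.9, 40.0, 18.2, 16.8]   # placeholder data
    fps = [1000.0 / t for t in frame_times_ms]
    avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)
    min_fps = min(fps)
    max_swing = max(abs(a - b) for a, b in zip(fps, fps[1:]))
    print(f"avg {avg_fps:.1f} FPS, worst frame {min_fps:.1f} FPS, biggest swing {max_swing:.1f} FPS")

Jon Tseng - Saturday, May 8, 2021 - link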
So one thing I never get is why Intel don't do high-end SKUs without the IGPU. I mean, Ryzen has the node advantage and all that, but another thing people don't talk about is that they don't need to burn nearly a quarter of the die area on graphics. That would actually give Intel space for a material increase in core count, which is exactly the area where they are massively lagging.
I guess it is something to do with the tape-out cost - but how hard can it be to fuse off the GPU I/O and lay down some more CPU cores? Especially given how much they are losing in HEDT sales + halo effect by being uncompetitive.
Mystery to me anyhow. Seems something of a no-brainer, tape-out costs aside.. ¯\_(ツ)_/¯
KAlmquist - Saturday, May 8, 2021 - link
The article says that the IGPU occupies about 20% of the die area, so a chip with 10 cores and no IGPU would be about the same size as their current 8-core chip. I don't think Intel has done variants that are that close in core counts in the past. We've seen similar behavior from AMD, which (for example) decided to do a chip with 2 Zen CCXs (8 cores), and a chip with one Zen CCX (4 cores) and an IGPU, but not a chip with 1 Zen CCX and no IGPU. I suspect the explanation is the one you suggest: tape-out costs must be really high.
mode_13h - Sunday, May 9, 2021 - link
> why Intel don't do high-end SKUs without the IGPU
They do. They have a whole line of HEDT and workstation CPUs with up to 18 cores and no iGPU.
The_Assimilator - Saturday, May 8, 2021 - link
Short answer: no, and DG2 will be equally shit.
mode_13h - Sunday, May 9, 2021 - link
>> Is Rocket Lake Core 11th Gen Competitive?
> Short answer: no
It was a weird question to pose. I think we could reasonably assume it wouldn't be competitive with anything on 7 nm.
> DG2 will be equally shit.
Depends on how it's priced. As long as its perf/W isn't terrible, it should be possible for them to price it competitively. Especially in today's market.
csell - Sunday, May 9, 2021 - link
I wonder what happened with the new MEDIA Encoders ("Up to 4K60 10b 4:2:0 AV1") shown in the previously published Key Platform Feature slide, at this link: https://www.anandtech.com/show/16535/intel-core-i7...
mode_13h - Sunday, May 9, 2021 - link
Good question. One possibility is that a hardware bug kept them from enabling it.
GeoffreyA - Sunday, May 9, 2021 - link
That's very interesting. I never knew that Intel advertised hardware AV1 encoding. If true (and not just decoding), this would give them quite a feature to boast about. In practice, though, even if it is there, I suspect it'll be no better than SVT-AV1, which is hopeless.
Serraxer - Sunday, May 9, 2021 - link
384p in RDR2 in 2021? Is it a joke?
mode_13h - Monday, May 10, 2021 - link
Wow, you're right. I noted some at low resolution, but I missed that one!
bill44 - Sunday, May 9, 2021 - link
When can we expect native HDMI 2.1 (and maybe even DisplayPort 2.0) output? Not for gaming, in a NUC form factor.
yetanotherhuman - Monday, May 10, 2021 - link
This has a clear answer for me: 16.666 recurring FPS, the cap of The Legend of Zelda: Ocarina of Time, PAL version. We played it back in the day, and we enjoyed it.
yetanotherhuman - Monday, May 10, 2021 - link
I've been revisiting Perfect Dark on the N64 in 16:9, high-res mode, but honestly it often dips into the totally unplayable FPS range. I still think it's a riot like that, but when you're into single digits, the performance isn't really acceptable.
Spunjji - Monday, May 10, 2021 - link
I do recall it often being even worse than GoldenEye in that regard, and that was a jank-fest whenever an explosion went off...
eastcoast_pete - Monday, May 10, 2021 - link
Unfortunately, the second part of the headline should almost read "Does it have to compete?" Because right now, and for the foreseeable future, almost anything is better than the empty socket in the system that is the alternative. And, unfortunately, unless you have a dGPU sitting around, standard Ryzen desktop CPUs aren't an alternative, as they don't have iGPUs.
Spelling and grammar errors:
"F1 2019 is certainly a step up grom generation to generation on the desktop,..."
"grom" to "from":
"F1 2019 is certainly a step up from generation to generation on the desktop,..."
"However, this is relatively little use for gaming, the topic of today."
Sentence makes no sense as is. Try:
"However, this has seen relatively little use for gaming, the topic of today."
bez5dva - Monday, May 24, 2021 - link
I don't get why they test AAA games with the IGP. Who the f is going to play RDR2 on Intel HD graphics? :)) Why not test mainstream F2P games, which are capable of running on some ancient builds...