Ummmmmm, pretty interesting. That being said, Intel, go ahead and start a triopoly in the GPU space. Competition is always good for the consumer, I always say.
The Xe, in any form, is not targeted at "most people"; the basic Intel iGPU already exists. Doesn't change the fact that competition is a good thing for the dGPU space.
I fail to see who it is targeting. If I do any gaming or creative/compute tasks, I will buy an Nvidia/AMD card. If I don't, I'll go with an iGPU. Competition is good, but this just doesn't make much sense to me.
You're separating the GPU and CPU thermals. If the CPU can dedicate its entire thermal budget to CPU tasks and not use the iGPU, it can run at turbo levels longer than if it ALSO had to provide the graphics power. The Xe can dedicate its entire thermal budget to GPU processing, so it can run faster for longer than the iGPU could. Just removing the sharing within the die should be a decent net overall improvement in the performance of the machine, all else being equal. This should overall boost things like rendering speed, without the expense of other discrete GPU chipsets - I also suspect OEM pricing for Xe will be VERY competitive.

And you mention 'card' - this isn't for an add-in card, this is for on-mainboard laptop installation. You don't just "plug a card" into a laptop (there are some oddball exceptions, nothing's ever absolute). You have laptops with no discrete GPU, laptops with Xe, and laptops with NVidia/AMD GPUs. Xe is superior to the iGPU, not as fast as the better NV/AMD options - but cheaper.
This isn't "removing sharing", it's doubling the TDP. Similar result could be achieved by slapping better cooling solution (capable of disposing of double the waste heat) onto existing Tiger Lake SoC.
Competition would be wonderful, but the way they avoid showing their product in any sort of valid competitive light (except against a cut-down 1050, be still my beating heart) makes me suspect that this can't really compete in any sense. It looks to be a way for Intel to purchase more market share.
Problem is, if Intel is trying to sell this as NOT an entry-level part that integrated graphics (even from years ago) can already handle... it simply isn't good enough. Nobody is going to buy this as a gaming part, and I very much doubt anyone buying it for ML/AI/Compute is going to cheap out on this when they are clearly using it for productivity where time is money - they'll get something far better from AMD or NVIDIA instead.
That's just it: From a "good enough" angle, it's handsomely out-competed by Intel's own iGPUs. From a gaming angle, it's hopelessly outclassed. It loses to Renoir on perf/dollar and the execrable MX450 on both outright performance *and* perf/watt. From a Machine Learning angle, Nvidia really seem to be where it's at.
So they've gone with "it makes video encoding faster". I guess we'll see how compelling that is. They'll sell loads, but only because they kind-of /make/ OEMs buy this stuff, and if the OEMs fail to shift the products with these in then they won't be making any friends in the process.
" Integrated Intel GPU have been powerful enough for almost any use case except gaming and 3D CAD."
Exactly (adding graphics pros, Photoshop, etc.). If you aren't gaming or doing something specific, what is the benefit of a low end GPU? We do need more competition, but it needs to start in the mid range, where there is a tangible difference between built-in graphics and discrete.
It's a bit of a headscratcher to me why Intel continued with an iGPU for Tiger Lake when they also developed a separate GPU die with pretty much the same design.
I'd have thought the better solution would be replacing the iGPU on the CPU die with an extra four PCIe 4.0 lanes, and offering both GPU-less and 'integrated' packages.
Throw in a 4GB HBM die and things get even more interesting.
I guess the one interesting part is Deep Link with software that can use engines on both GPUs. Other than that, it's definitely not going to be too interesting for games, or software that isn't able to use both GPUs as it's just a bit above the IGP anyways.
Very interesting. Obviously it will take a while to actually compete with Nvidia and AMD, even if they poached most of the ATi team. I think targeting encoding/decoding, Adobe work, and 3D CAD is a very good idea, as there really is no one serving that niche right now, and it's probably a pretty big niche. All people do is buy the most expensive gaming GPU and hope for the best, which mostly doesn't work that well; the best bet is throwing a Threadripper at it. But if Intel concentrates on those things, I can see them being recommended for PC users that are non-gamers.
DG1 was supposed to be paired with Rocket Lake-U which has since been cancelled. Because it wasn’t practical for Intel to backport more than a GT1 Xe LP IGP to 14nm they essentially made a discrete version of the TGL IGP. It was a hedge they had to make once they went down the path of keeping two processes on their roadmap for multiple generations.
I’m surprised they’re bothering to launch it as a companion to Tiger Lake, unless of course they’ve already binned out a pile of TGL-U GT0 dies that they’re hoping to salvage.
4GB HBM is still like $60/unit. These are entry level designs, HBM didn't even really make sense in the ultra-enthusiast space. What makes you think adding it would be sensible?
What I enjoyed is that we were spared Act 2 of the 'Larrabee' hype train! This isn't going to offer AMD or nVidia any competition in the discrete GPU space, and it's always better to have low expectations that are justified than to see tons of crazy hype that isn't, imo.
This sure feels like Larrabee all over again. A piece of garbage that does not compete except if you squint just right on a monday with a hangover.
The real question is how soon do they kill it off. Or will Intel suddenly grow the ovaries needed to birth a proper GPU. You know, like what they said they were working on. We all remember that right? nVidia is making the monie$ at GPU, and so will we!
Intel hasn't made a discrete graphics card before. This is a low-risk design (even the same memory controller) and imo is basically practice + a workaround for Tiger Lake 28W max TDP and memory bandwidth. AMD's first Ryzen APU had less marketplace value than this but look where they are now!
Yes, they have built a dGPU before, and it was rather nice, although a bit late to the game and short lived. It was called the Intel 740; I had one over 20 years ago.
Yeah, the i740 was actually solidly competitive with the RivaTNT and other such cards (which oddly, because of the Voodoo 1 & 2, were considered unified solutions, as they contained 2D display hardware).
I think Intel has struggled in GPU because they suffer from “not-invented-here” syndrome and buying their own BS. I don’t think they lack the know-how, but they lack the commitment to GPU design principles. They kept trying to repurpose elements of general purpose ISA’s and uarch’s, rather than simply applying themselves to the task of a purpose built high-performance GPU architecture. Meanwhile the iGPU stuff is so constrained by transistor or TDP budgets, those teams can’t stretch their legs (but I bet, if given a green light, they could produce something very impressive).
Intel is a weird company; I worked for many years at Microsoft, and I remember getting into arguments with visiting Intel engineers (specifically around the very real efficiency issues with Netburst). I’d come from the semiconductor business, and I felt that Intel had simply made very poor decisions with Netburst (overly deep pipelining that was prone to bubbles and stalls, questionable branch prediction logic, heavy reliance on compiler optimizations/hints, too many transistors dedicated to depth of pipelines, not enough to number of pipelines, counterproductive obsession with clockspeed). Everyone I spoke to at Intel seemed to buy their own BS. Don’t even get me started on the IA64 folks; hell, I wanted to like Itanium, I wrote a whole compsci research paper on VLIW back in the early 90’s, but man those first sets of test machines we got were really disappointing. I know a couple of the guys who worked on MSVC for IA64, they sure weren’t having a good go of it back then.
Not to bag on Intel, they’ve been home to some of the finest engineers in the industry for longer than I’ve been alive. When they deliver, they deliver big, and from Conroe up to when AMD dropped Zen, they pretty much ruled the roost. Frankly, when Intel does finally stop messing around with bad ideas, and commits their extensive knowhow to building a proper scalable (to the high-end) GPU architecture, I’m willing to bet it’ll be a monster.
If there’s one thing I’ve learned, watching ISA’s come and go, while Intel and the x86 chug along, never underestimate them.
I guess Intel is no longer the engineering company it used to be. It is the for-profit cash cow and it needs to make money now and it just happens to produce silicon along the way. And I guess it is a PC place with plenty of PC external contractors.
I'd ask how you figured that AMD's first Ryzen APU had less marketplace value than this, but it's such a prima-facie absurd statement that I genuinely have very little interest in whatever pretzel logic is behind it. Here's your comment, expanded to a full argument:
"The first x86 mobile CPU with powerful graphics to seriously compete against an Intel CPU for nearly ten years had less value in the marketplace than a dGPU with terrible gaming drivers that barely competes with two-generation-old technology and has no killer apps outside of gaming to properly justify it".
Didn't see your mediocre responses at the time. Ryzen 1 cores had some serious bugs -- it didn't compete that well. It competed in desktop at higher TDPs with higher core counts. Ryzen 1 died quickly though, in favor of Ryzen 1+. Great set of fixes! Much better performance. Fixed a devastating power bug that I'm still dealing with in my Ryzen 1 laptop.
Wait, hold up, devastating power bug? Oh hold up, maybe that means it doesn't seriously compete against an Intel CPU.
I got 4 hours tops from a laptop which has an Intel variant that gets 8 on the same workloads. The 2700U competed in the same way that Steamroller and Excavator competed -- not bad, but not the same tier of performance, and certainly not performance per watt.
That performance gap is probably about the same between DG1 and the GT 1030. So it's actually pretty comparable, and AMD has a whole lot of experience that should've prevented the power bug.
It may be a low risk design from a technical design perspective, but it is entirely irrelevant from a market perspective. Approximately zero people will buy into an Intel discrete GPU that does not perform better than AMD integrated graphics or low-end Nvidia/AMD cards.
All the Nvidia MX series cards use PCIe 3.0 x4 lanes and have 64-bit GDDR5 controllers. Considering the positioning of this card, I'd say it's perfectly adequate.
It's an interesting idea, but I don't see a great demand to accelerate these niche workloads. I'd much rather have more traditional CPU or GPU performance.
Agreed. I feel like, as has become common with Intel lately, their entire rationale has been created ad-hoc to cope with the calibre of product they were capable of producing.
Your best CPU has worse IPC, fewer cores and higher power than the competition? Clock the nuts off it and pitch it as a gaming CPU.
Your first Big/Little style design has subpar performance? Pitch it as being for "premium always-on" devices and then... Never sell any.
dGPU has crap gaming performance? Make some stuff up about transcoding, surely this is what people do with ultraportables where they paid extra for a second GPU!
Bad first iterations are common across all three big laptop/desktop silicon companies. First gen RTX was pretty awful. Ryzen 1 was mediocre and Ryzen 2 (Zen 1) APUs were awful in laptops. First gen HBM was basically worthless with its 4GB memory cap.
Intel's high-end CPU woes are completely separate from the combined CPU core approach. Lakefield is a tech demo for a niche product segment and a way to validate an advanced manufacturing technique at scale, subsidizing the effort with a few sales.
Intel's dGPU is a first go at it. Intel's GPU team has no experience making a PCIe card, and now they do. Their only other experience would be from networking (completely different team) or the Knights Landing/Corner/etc team (different team and also in servers, so very different requirements).
I'm not saying they aren't. I was talking about how they keep pitching products that they talk up, launch, then mysteriously go quiet on.
Then boom, another launch, quickly talk about this one and forget the last one!
I don't buy this "they don't have experience making a PCIe card" thing. Aside from this not actually being a PCIe card, per se, they do already have lots of experience both with building actual cards and connecting accelerators over PCIe. You just admitted it, and then fumbled on why it's not relevant.
Many of those other technologies at least showed promise in the first generation. This is just a duplicate iGPU that cannot be used in tandem with the existing iGPU in 90% of workloads.
Thing is, this isn't their first iteration of a GPU. They've been doing integrated graphics for 2+ decades at this point and they had a crack at discrete cards back in the 90s.
It isn't even their first discrete GPU in a while. They literally just chopped the thing out of an integrated part and stuck it on a card. Sorry, but that's garbage. Unless they stick a real GPU memory controller on it with a decent amount of bandwidth and significantly increase the EU count... it is entirely pointless.
This is one of those instances that demonstrates how Intel can bring itself back up to par with AMD, because AMD has had opportunities to do the same thing but just kept rehashing Vega in their APUs. Now that AMD is on par with Intel in single-core IPC, they need to work on the extra features of their CPUs and GPUs.
Well, Cezanne could have been Navi, but it's using Vega yet again. It would have been interesting to see an RDNA2 based APU and discrete RDNA be able to function like a big.LITTLE Navi, especially considering RDNA2 introduced Smart Access Memory.
A good followup discussion would be the state of drivers and software support among the 3 dGPU camps. It seems at the entry level we have hardware parity now.
Yup. Because Intel has the horrible policy of dropping drivers for the current-2 generation from the current branch and moving them to legacy. Broadwell, once touted as "you can finally game on it", is already on bugfix-only support. It seems they will even be dropping Skylake, simply because it is too old, even though it is almost the same as the still-supported Kaby Lake.
Could it, though? Maybe RDNA; probably not RDNA 2 based on the time-frames involved, though. Given the performance they get from Vega already - and how absolutely tiny it is - I can see why they didn't go for the interim design that appeared to be fairly inefficient in terms of transistors vs. performance.
Yeah . . . The AMD APU graphics engine over the last 5 years has reduced the CU count, increased efficiency by 20%, and increased performance by 30%. Vega 8 on the Ryzen 4xxxG is the 'rope-a-dope' for the Cezanne 5xxxG. The new APU will slobber-knock whatever Chipzillah puts forward.
To be honest this does allow a 28W TDP Tiger Lake to boost even higher, which does add value. Switchable graphics with a single driver solution from a more reliable driver vendor than AMD does have some value.
Yet their performance figures showing that it doesn't always outperform the iGPU make me wonder what the point of it all is... Not to mention that you just paid twice over for 96EUs.
It could be pitched as an upgrade for the iGPU-impoverished lower-end models, but then it would be cheaper to get the highest-end model alone.
Honestly I'm baffled as to why this exists, save to say that recently Intel seem to like "launching" experimental products that aren't yet fit for retail as an excuse to beta-test the software stack.
Imo the problem here is that Intel has fought hard to improve OEM designs with good reference designs, EVO certification, and more. Despite all of this, some OEMs still botch cooling. A dGPU is a way to distribute heat more when the OEM can't figure out how to cool one higher-wattage package sanely. Intel's Dynamic PowerShare slide is all but labeled "HP and Acer fix your damn cooling" imo.
I hate the "AMD drivers suck" mantra more than most, but it's a fact that they repeatedly screwed things up with their mobile APUs. My understanding is that the situation has improved but is still not ideal.
Intel CPU drivers suck, though, for sure. They rarely crash the system, but they're bug ridden.
How many people own a laptop with this level of graphics? Maybe it's just because I search for "deals" rather than specific models, but I hardly ever see a laptop with low end discrete graphics that I would consider a good deal. Generally I see good solid systems with the latest AMD or Intel CPUs with integrated graphics, or affordable gaming systems with 1650-level graphics for surprisingly low prices.
Who actually needs just a little bit more performance than integrated, but doesn't want too much performance? Seems oddly specific... but Intel clearly knows what is worth investing in, so there must be more of a market for these than I think?
The boost is entirely via memory bandwidth and thermal separation. A 1650 exceeds the thermal budget of even a 35W CPU. An MX450 most likely does not, and obviously an Xe MAX does not.
This means two small, thin thermal solutions can be allocated: one for the GPU, one for the CPU.
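(For a rough sense of that argument, here's a minimal sketch in Python - all wattages are illustrative assumptions, not measured figures:)

```python
# Toy illustration of the thermal-separation argument.
# All wattages are illustrative assumptions, not measured figures.
shared_package_budget = 28.0   # W, assumed limit for a single Tiger Lake-style package
igpu_share_under_load = 10.0   # W, assumed iGPU draw inside that shared budget
dgpu_budget = 25.0             # W, assumed budget for a separately cooled Xe MAX

cpu_budget_shared = shared_package_budget - igpu_share_under_load
print(f"CPU budget while sharing the package with the iGPU: {cpu_budget_shared:.0f} W")
print(f"CPU budget with graphics offloaded to the dGPU:     {shared_package_budget:.0f} W")
print(f"Total silicon budget with two separate heatsinks:   {shared_package_budget + dgpu_budget:.0f} W")
```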
Prior to Ice Lake and Renoir, this was a moderately popular segment well-served by Nvidia. Pretty much "how much GPU can we fit into this space where once there could have been none".
But Intel just went and made sure pretty much all upcoming ultrabooks will already have a competent iGPU thanks to Tiger (drivers notwithstanding), rendering this product nearly pointless.
Not sure why Intel are going into discrete graphics.
Their Optane tech had real advantages - all Intel had to do was actually implement it as properly tiered storage (a well-understood technology) for almost instant restarting / wake / hibernate / app startup. A few gig in every chip or mobo would be cheap at scale and Intel would be onto a winner. Instead they half-assed it.
Perhaps with this ‘content creation’ dGPU they’re making a play for phones, car sensors, smart device sensors etc, but it’s ferociously competitive and price sensitive. Otherwise, well, I don’t know. dGPU market is tiny for a company like Intel and desktops / laptops are a mature / shrinking market, especially if your name isn’t Apple.
Very meh on this. It is an important stepping stone for Intel releasing a discrete graphics card but feels late to market without anything really going for it.
At ~72 mm^2, why not scale it upward to, say, 144 mm^2 or 288 mm^2, which would be more performant? This really does feel like a Tiger Lake die with the x86 cores hacked off. Ditto for the idea of using the same memory controller as Tiger Lake. While there are some OEM benefits to sticking with LPDDR4X-4266 (same memory part for CPU and GPU), the lack of GDDR6 support is very surprising. Granted there can be a handful of benefits when coupled with Tiger Lake (see Adaptix comments below), but the big question is why an OEM would spend time/money on developing this for the advantages it gets, on top of a higher bill of materials for the laptops. Intel would practically have to give these away for free.
The ability to leverage multi-GPU with Tiger Lake is interesting but likely not enough to be competitive with the next step up in nVidia's or AMD's mobile lineups. While everyone loves (and in many cases deservingly so) to make fun of AMD for poor drivers, Intel historically has been even worse. I'm hyper skeptical that Intel has a good multi-GPU solution working. I can see the marketing slides indicating solid performance gains in raw FPS, but I'd predict that it'll be a microstuttering mess. Doubly so if Intel were to release a dual Iris Xe MAX discrete gaming card for desktops down the road.
In defense of Intel a little bit, the Adaptix technology can be beneficial in terms of power/cooling in some scenarios, depending on the laptop design. Thermally you have more silicon area generating heat and thus lower thermal density, i.e. it is easier to move the generated heat to the cooling solution, which would result in lower aggregate system temperatures. Allowing the OEMs control over this is critical, as the advantages of using Adaptix will depend on the Tiger Lake + Iris XE Max Hyper Turbo Black Legendary Collector's Edition implementation. Intel does need to work with OEMs here so they don't screw it up and fall into sub-optimal traps.
I will say that this chip does look promising for the encoding/transcoding niche. Intel has the parts to build an impressive replacement for their VCA lineup around this chip. While the usage of LPDDR4X-4266 isn't that great for gaming performance, expected support for DDR4L to enable SO-DIMM slots does make it interesting as a data center part: you can work on huge data sets if you put in larger SO-DIMMs. Slap four of these Iris Xe MAX Delux Gold Ti Special Limited Editions on a card with eight SO-DIMM slots and 256 GB and you'd have a transcoding beast to chew through the most demanding 8K footage. Or take one of those chips and some SO-DIMM slots and pair them with a high end Intel NIC and/or FPGA on the card - that makes things interesting for some niche applications. Intel really needs to execute here, as these markets are niche but lucrative.
Oh, and Intel's marketing team needs to figure out a better naming scheme.
AMD's laptop iGPU drivers are substantially worse than their dGPU drivers. Intel took power from OEMs a while ago but AMD has not yet to the same degree. As someone who has had their AMD driver settings panel broken 3+ times by Windows update by an older driver than I previously had installed, I can promise you there is upside.
Didn't get around to refuting your utter garbage comment. The drivers might be the same, but OEMs can push custom drivers. I get custom drivers pushed over my manually-installed ones basically every 4-6mo, and they break the control panel.
Tiger Lake has a different reported die size than Iris Xe MAX. However, there is no reason why Intel couldn't cripple Tiger Lake SoCs with the CPU portion disabled. It is about harvesting the maximum number of semi-functional dies, and if there are enough Tiger Lakes without functional CPUs (or with only, say, 1 or 2 working cores), or enough where the CPU side fails power consumption binning, it could happen as a limited OEM-only SKU. Such OEM-specific SKUs are rarely available at launch, so it'll be several months before such a part appears, and it'll do so with little fanfare.
Intel do have prior for selling "coprocessors" that are an entire CPU, so it wouldn't be the first time. They seem to be claiming it's a real dedicated GPU, though, albeit one that appears to reuse the CPU's memory controller. It does seem a bit odd for them to use precious 10SF capacity for additional products when they can't get out their 8-core TGL, too. Curious stuff.
So it appears that Intel's "discrete" GPUs are just going to be derivatives of their same old tired rubbish iGPUs.
This product, in particular, will only be chosen by OEMs until NVIDIA decides to adjust the price of the MX450 to undercut it. Intel's sole accomplishment with this GPU, therefore, is to force the retirement of the MX350... almost certainly not what they were hoping for.
This and possible future dGPUs for laptops make a lot of sense for Intel. Laptops, especially light and ultralight ones, are a key profit center for Intel, so trying to own a large share of the dGPU space there makes sense. The biggest question that only a test can answer is how Intel's iGPU/dGPU solution compares to AMD's big iGPU in Renoir, especially the 4800/4900 models. If Intel's Xe can't beat the graphics performance of those, this is pretty much useless. If they can beat the 7 or 8 Vega cores in Renoir AND pull even with NVIDIA's MX350, Intel has a leg to stand on. So, test please!
And by beating built-in Vega, I mean by more than 1-3 fps. It has to be pretty decisively, or Renoir's better CPU performance will always wipe the floor with Tiger Lake.
Tiger Lake is confirmed to pass Renoir, which sort of sandbagged on iGPU performance this generation.
Renoir vs Tiger Lake favors Renoir due to core counts, in part aided by how tiny Renoir's GPU is. It'll be interesting how AMD manages a larger iGPU alongside an 8-core laptop part. I hope they can do it but I doubt it'll be cheap, or widely available for purchase. After all, it'll be the biggest die they produce (that requires low leakage, anyway).
I was talking in pure GPU terms, and I'm pretty sure TGL still has a small margin over Renoir in those situations even when they're constrained to comparable power levels (15W constant, ~30W boost).
But like I said, it doesn't really bear out in practice anyway. 🤷♂️
Why is Intel so bad at naming products? "Xe MAX" makes it sound like this is the absolute best the Xe lineup can do. I wish they would hire someone that will fix their names soon.
Honestly I think this makes more sense in a server as an expansion card, but I'm sure there will be some cases where it's the perfect solution. Just not that many.
Even though they're pairing this with ICL-derivative parts, it seems like it would be a better story to pair the Xe MAX dGPU with 14-nm parts, especially Rocket Lake with its weaker, hotter Xe graphics.
That’s exactly why Intel made this. It was intended to be a companion to the cancelled 14nm RKL-U. Although it could also serve alongside a hypothetical RKL-H in the event that TGL-H doesn’t pan out.
So, with it having a PCIe 4.0 x4 interface, and being quite small, I wonder if it would be at all possible for this to be made into an M.2 card as a sort of expansion adapter for add-on video encoding.
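(For what it's worth, the interface math seems to fit: an M.2 slot exposes up to four PCIe lanes. A minimal sketch of the bandwidth involved - the 128-bit LPDDR4X bus width is an assumption, not a figure from the article:)

```python
# Back-of-the-envelope bandwidth check for running this chip from an M.2 slot.
# PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding.
lanes = 4
per_lane_gbps = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s per lane
link_gbps = lanes * per_lane_gbps              # ~7.9 GB/s for an x4 link

# Assumed local memory: LPDDR4X-4266 on a 128-bit bus.
lpddr4x_gbps = 4266e6 * 128 / 8 / 1e9          # ~68 GB/s

print(f"PCIe 4.0 x4 host link:             {link_gbps:.1f} GB/s")
print(f"Assumed LPDDR4X-4266 local memory: {lpddr4x_gbps:.1f} GB/s")
# The host link is roughly 9x slower than local memory - fine for feeding an
# encode job, less so for anything streaming large frames in both directions.
```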
Signed up just to compliment someone on a hilarious comment and already know from casual reading that this site doesn’t have an edit button and it will be the bane of my existence here.
Intel doesn't have enough available working 10nm wafer capacity for mobile + a bigger Xe in big volume, so they are stuck with a modified Tiger Lake for now. If Intel 10nm had made it to desktop 12 months ago, then they could have focused on 2x 96 EU (192 EU total, 4 video engines on the dGPU, etc.) and then bigger. The Intel 740 was partly a failure because by the time it came to market it was being beaten by existing competitors. The 740 did become the basis for integrated graphics (a slightly updated version in the 810 & 815 chipsets). Intel later updated their iGPU tech for entry-level performance with CPUs until recent attempts to really improve it. https://www.tomshardware.com/reviews/graphic-chips... https://web.archive.org/web/20140329071342/http://... https://en.wikipedia.org/wiki/Intel740
Well, you could say that. They move/work quite slowly considering the capabilities of this chip. By the time this comes out, AMD could have an APU/iGPU performing close to this discrete GPU.
Considering that a 580 is basically a rebadged 480 that takes us to mid 2016. So very very old — and not all that impressive when it debuted since it was designed to be cheap to make more than high in performance.
"We've made this wonderful kettle; but forget boiling water, which it can *also* do, it works beautifully as a jug to pour water from. Pour everyone's water simultaneously."---Intel Xe Kettle Press Release, 2020
Hi Ryan, if and when you or Ian test one of these, please do a side-by-side comparison of the video encode/transcode with NVENC in Turing or Ampere (same ASIC from what I read, so a 1660 should already have the recent, much better version of NVENC). If possible, do a 4K clip from an action cam and maybe one from the new iPhone 12 in HDR or Dolby (self-serving request, likely use scenario for me). At least up to now, Intel's Quick Sync was, yes, fast, but with at-best mediocre quality output. If Intel is positioning these Xe iGPU/dGPU combos as good for video, they might have something if (!) they can match or beat NVENC in both speed and quality. Of course, strictly software-based encoding remains the gold standard, but dedicated ASICs beat the pants off even a big Threadripper on encoding speed, and the newest ASICs aren't half bad.
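(Anyone wanting a rough version of that comparison at home could try something like the sketch below - a minimal QSV-vs-NVENC pass using ffmpeg, assuming a build with the hardware encoders and libvmaf compiled in; the clip name and bitrate are placeholders:)

```python
# Rough QSV-vs-NVENC transcode comparison sketch.
# Assumes an ffmpeg build with hevc_qsv, hevc_nvenc and libvmaf available;
# "clip.mp4" is a placeholder for a real 4K source clip.
import subprocess, time

SOURCE = "clip.mp4"

def encode(codec: str, out: str) -> float:
    """Encode SOURCE with the given hardware encoder and return elapsed seconds."""
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-b:v", "10M", out],
        check=True,
    )
    return time.time() - start

def vmaf(distorted: str) -> None:
    """Report a VMAF quality score for the encode against the source clip."""
    subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", SOURCE, "-lavfi", "libvmaf", "-f", "null", "-"],
        check=True,
    )

for codec, out in [("hevc_qsv", "out_qsv.mp4"), ("hevc_nvenc", "out_nvenc.mp4")]:
    print(f"{codec}: encoded in {encode(codec, out):.1f} s")
    vmaf(out)  # VMAF score appears in ffmpeg's log output
```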
"Meanwhile the part offers a full display controller block as well, meaning it can drive 4 displays over HDMI 2.0b and DisplayPort 1.4a at up to 8K resolutions."
If I can get three 4K60 displays running simultaneously (no gaming) through a port replicator then they'll have a sale from me.
Thanks for the article, good info, but hard to read.
Can I please slip in a request - could we get a decoder table with each of the lakes/coves and on which platforms/sockets/processes/etc. they run?
Feels to me Intel is FOWing with basically a brand for every core revision, however minor, to make it look like they do something. Lousy marketing; not saying the company or engineering is bad, but gwd is it hard to understand which is what. Maybe you can help Intel with communicating what they released.
How does Intel screw their own products so badly like this???
Xe should be something exciting, yet they introduce it in the lamest way possible. "Here, now you can buy our new dGPU in the one platform that basically takes the "discrete" out - better yet, it doesn't even add value to the platform - its entire functionality is already 100% duplicated by the integrated graphics that is present in every processor we currently make, when we don't decide to fuse it off for marketing."
Oh, and lastly, the one application you might expect Xe to be used for, you know, graphics - no, we can't use that to work-share with its identical workmate the iGPU.
arrrrghhh
Honestly I wish Intel would take the chains off their GPU folks and let them YOLO that 4-die Xe part. I'd love to see what 96x4 = 384 Intel EUs can do.
"How does Intel screw their own products so badly like this???"
Arrogance.
Why else would Intel sell the Broadwell-C, which showed the power of eDRAM for gaming — and then coast on Skylake for forever without deigning to provide that benefit (an estimated whopping $10 worth of parts).
Also, a little thing called inadequate competition in an allegedly (but not so much) capitalist environment. People complained about how poorly AMD was competing with its terrible Bulldozer family, and yet even if AMD had been competing well, that's still only a duopoly.
In reality this is integrated "free" graphics. The level will keep creeping higher but will never be what you want for your solution. Accept that this is nothing more than that - it's been that way for 15 years plus.... If you care about your GPU you'd look elsewhere. Integrated graphics keep pace, but it's only that, and it will not outpace the user workload.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
118 Comments
Back to Article
JfromImaginstuff - Saturday, October 31, 2020 - link
Ummmmmm, pretty interesting.That being said Intel go ahead and start a triopoly in the gpu space.
Competition is always good for the consumer, I always say.
danbob999 - Saturday, October 31, 2020 - link
Most people don't need a powerful GPU. Integrated Intel GPU have been powerful enough for almost any use case except gaming and 3D CAD.TheinsanegamerN - Saturday, October 31, 2020 - link
The XE, in any form, is not targeted at "most people", the basic intel iGPU already exists. Doesnt change the fact that competition is a good thing for the dGPU space.p1esk - Sunday, November 1, 2020 - link
I fail to see who is it targeting. If I do any gaming or creative/compute tasks, I will buy an Nvidia/AMD card. If I don't, I'll go with an iGPU. Competition is good, but this just doesn't make much sense to me.JfromImaginstuff - Sunday, November 1, 2020 - link
This is DG1, so it may be good indicator of their true dGPU DG2rrinker - Tuesday, November 3, 2020 - link
You're separating the GPU and CPU thermals. If the CPU can dedicate its entire thermal budget to CPU tasks and not use the iGPU, it can run at turbo levels longer than if it ALSO had to provide the graphics power. The Xe can dedicate its entire thermal budget to GPU processing, so it can run faster for longer than the iGPU could. Just removing the sharing within the die should be a decent net overall improvement in the performance of the machine, all else being equal. This should overall boost things like rendering speed, without the expense of other discrete GPU chipsets - I also suspect OEM pricing for Xe will be VERY competitive.ANd you mention 'card' - this isn't for an add-in card, this is for on-mainboard laptop installation. You don't just "plug a car" into a laptop (there are some oddball exceptions, nothing's ever absolute). You have laptops with no discrete GPU, laptops with Xe, and laptops with NVidia/AMD GPUs. Xe is superior to the iGPU, not as fast as the better NV/AMD options - but cheaper.
Arnulf - Thursday, November 5, 2020 - link
This isn't "removing sharing", it's doubling the TDP. Similar result could be achieved by slapping better cooling solution (capable of disposing of double the waste heat) onto existing Tiger Lake SoC.Spunjji - Sunday, November 1, 2020 - link
Competition would be wonderful, but the way they avoid showing their product in any sort of valid competitive light (except against a cut-down 1050, be still my beating heart) makes me suspect that this can't really compete in any sense. It looks to be a way for Intel to purchase more market share.throAU - Monday, November 2, 2020 - link
Problem is, if intel are trying to sell this as NOT an entry level part that integrated graphics (even from years ago) can already handle.... it simply isn't good enough. Nobody is going to buy this as a gaming part, and I very much doubt anyone buying it for ML/AI/Compute is going to cheap out not his when they are clearly using it for productivity where time is money - and get something far better from AMD or NVIDIA.Spunjji - Monday, November 2, 2020 - link
That's just it:From a "good enough" angle, it's handsomely out-competed by Intel's own iGPUs.
From a gaming angle, it's hopelessly outclassed. It loses to Renoir on perf/dollar and the execrable MX450 on both outright performance *and* perf/watt.
From a Machine Learning angle, Nvidia really seem to be where it's at.
So they've gone with "it makes video encoding faster". I guess we'll see how compelling that is. They'll sell loads, but only because they kind-of /make/ OEMs buy this stuff, and if the OEMs fail to shift the products with these in then they won't be making any friends in the process.
goatfajitas - Sunday, November 1, 2020 - link
" Integrated Intel GPU have been powerful enough for almost any use case except gaming and 3D CAD."Exactly (adding graphic pros, Photoshop, etc). If you aren't gaming or doing something specific, what is the benefit of a low end GPU? We do need more competition, but it needs to be starting in the mid range section where there is a tangible difference between built in graphics and discreet.
WaltC - Friday, November 6, 2020 - link
Calling this a "discrete GPU" is really pushing the boundaries of semantics, imo...;)Samus - Sunday, November 1, 2020 - link
It's pretty interesting how power efficient it is for being 2.5TFLOPS thats for sure.onewingedangel - Saturday, October 31, 2020 - link
It's a bit of a headscratcher to me why Intel continued with an igpu for tigerlake when they also developed a separate GPU die with pretty much the same design.I'd have thought the better solution to be replacing the iGPU on the CPU die with an extra x4 pcie 4.0 lanes, and offer both gpu-less or 'integrated' package.
Throw in a 4GB HMB die and things get even more interesting.
lmcd - Saturday, October 31, 2020 - link
As best as I can tell it's a nice bit of extra memory bandwidth, experience with packaging, and extra heat dissipation.tipoo - Saturday, October 31, 2020 - link
I guess the one interesting part is Deep Link with software that can use engines on both GPUs. Other than that, it's definitely not going to be too interesting for games, or software that isn't able to use both GPUs as it's just a bit above the IGP anyways.Byte - Sunday, November 1, 2020 - link
Very interesting, obviously it will take a while to actually compete with Nvidia and AMD even if they pouched most of the ATi team. I think targeting Encoding/Decoding, Adobe work, 3D CAD is a very good idea as there really is no one to serve that niche right now, and its probably a pretty big niche. All people do is buy the most expensive gaming GPU and hope for the best, which mostly doesn't work that well, best bet is throwing a threadripper. But if intel concentrates on those things, i can see them being recommended for PC users that are nongamers.Spunjji - Sunday, November 1, 2020 - link
But all those things are done better by other GPUs. The niche is, indeed, already served.repoman27 - Sunday, November 1, 2020 - link
DG1 was supposed to be paired with Rocket Lake-U which has since been cancelled. Because it wasn’t practical for Intel to backport more than a GT1 Xe LP IGP to 14nm they essentially made a discrete version of the TGL IGP. It was a hedge they had to make once they went down the path of keeping two processes on their roadmap for multiple generations.I’m surprised they’re bothering to launch it as a companion to Tiger Lake, unless of course they’ve already binned out a pile of TGL-U GT0 dies that they’re hoping to salvage.
stadisticado - Sunday, November 1, 2020 - link
4GB HBM is still like $60/unit. These are entry level designs, HBM didn't even really make sense in the ultra-enthusiast space. What makes you think adding it would be sensible?WaltC - Saturday, October 31, 2020 - link
What I enjoyed is that we were spared Act 2 of the 'Larrabee' hype train! This isn't going to offer AMD or nVidia any competition in the discrete GPU space, and it's always better to have low expectations that are justified than to see tons of crazy hype that isn't, imo.Azethoth - Saturday, October 31, 2020 - link
This sure feels like Larrabee all over again. A piece of garbage that does not compete except if you squint just right on a monday with a hangover.The real question is how soon do they kill it off. Or will Intel suddenly grow the ovaries needed to birth a proper GPU. You know, like what they said they were working on. We all remember that right? nVidia is making the monie$ at GPU, and so will we!
lmcd - Saturday, October 31, 2020 - link
Intel hasn't made a discrete graphics card before. This is a low-risk design (even the same memory controller) and imo is basically practice + a workaround for Tiger Lake 28W max TDP and memory bandwidth. AMD's first Ryzen APU had less marketplace value than this but look where they are now!velanapontinha - Saturday, October 31, 2020 - link
Yes, they have built a dGPU before, and it was rather nice, although a bit late to the game and short lived. It was called intel 740, I had one over 20 years ago.lmcd - Saturday, October 31, 2020 - link
How many people who worked on that project do you think still work in Intel's GPU division?Buck Turgidson - Saturday, October 31, 2020 - link
Yeah, the i740 was actually solidly competitive with the RivaTNT and other such cards (which oddly, because of the Voodoo 1&2, considered unified solutions, as they contained 2D display hardware).I think Intel has struggled in GPU because they suffer from “not-invented-here” syndrome and buying their own BS. I don’t think they lack the know-how, but they lack the commitment to GPU design principles. They kept trying to repurpose elements of general purpose ISA’s and uarch’s, rather than simply applying themselves to the task of a purpose built high-performance GPU architecture. Meanwhile the iGPU stuff is so constrained by transistor or TDP budgets, those teams can’t stretch their legs (but I bet, if given a green light, the could produce something very impressive).
Intel is a weird company; I worked for many years at Microsoft, and I remember getting into arguments with visiting Intel engineers (specifically around the very real efficiency issues with Netburst). I’d come from the semiconductor business, and I felt that Intel had simply made very poor decisions with Netburst (overly deep pipelining that was prone to bubbles and stalls, questionable branch prediction logic, heavy reliance on compiler optimizations/hints, too many transistors dedicated to depth of pipelines, not enough to number of pipelines, counterproductive obsession with clockspeed). Everyone I spoke to at Intel seemed to buy their own BS. Don’t even get me started on the IA64 folks; hell, I wanted to like Itanium, I wrote a whole compsci research paper on VLIW back in the early 90’s, but man those first sets of test machines we got were really disappointing. I know a couple of the guys who worked on MSVC for IA64, they sure weren’t having a good go of it back then.
Not to bag on Intel, they’ve been home to some of the finest engineers in the industry for longer than I’ve been alive. When they deliver, they deliver big, and from Conroe up to when AMD dropped Zen, they pretty much ruled the roost. Frankly, when Intel does finally stop messing around with bad ideas, and commits their extensive knowhow to building a proper scalable (to the high-end) GPU architecture, I’m willing to bet it’ll be a monster.
If there’s one thing I’ve learned, watching ISA’s come and go, while Intel and the x86 chug along, never underestimate them.
Sivar - Sunday, November 1, 2020 - link
Wisely spoken.Zingam - Monday, November 2, 2020 - link
I guess Intel is no longer the engineering company it used to be. It is the for-profit cash cow and it needs to make money now and it just happens to produce silicon along the way. And I guess it is a PC place with plenty of PC external contractors.Smell This - Monday, November 2, 2020 - link
The i740 sucked. The only good thing about it, the discreet evolved into the i810e GMA IGP chipset (I think I got that right).
Spunjji - Sunday, November 1, 2020 - link
I'd ask how you figured that AMD's first Ryzen APU had less marketplace value than this, but it's such a prima-facie absurd statement that I genuinely have very little interest in whatever pretzel logic is behind it. Here's your comment, expanded to a full argument:"The first x86 mobile CPU with powerful graphics to seriously compete against an Intel CPU for nearly ten years had less value in the marketplace than a dGPU with terrible gaming drivers that barely competes with two-generation-old technology and has no killer apps outside of gaming to properly justify it".
The mind boggles.
lmcd - Thursday, January 28, 2021 - link
Didn't see your mediocre responses at the time. Ryzen 1 cores had some serious bugs -- it didn't compete that well. It competed in desktop at higher TDPs with higher core counts. Ryzen 1 died quickly though, in favor of Ryzen 1+. Great set of fixes! Much better performance. Fixed a devastating power bug that I'm still dealing with in my Ryzen 1 laptop.Wait, hold up, devastating power bug? Oh hold up, maybe that means it doesn't seriously compete against an Intel CPU.
I got 4 hours tops from a laptop which has an Intel variant that gets 8 on the same workloads. The 2700U competed in the same way that Steamroller and Excavator competed -- not bad, but not the same tier of performance, and certainly not performance per watt.
That performance gap is probably about the same between DG1 and the GT 1030. So it's actually pretty comparable, and AMD has a whole lot of experience that should've prevented the power bug.
throAU - Monday, November 2, 2020 - link
It may be a low risk design from a technical design perspective, but it is entirely irrelevant from a market perspective. Approximately zero people will buy into an intel discrete GPU that does not perform better than AMD integrated graphics or low end nvidia/amd cards.It just makes no sense.
Spunjji - Monday, November 2, 2020 - link
This 👆throAU - Monday, November 2, 2020 - link
how soon will they kill it off? I'd suggest after the first round of discrete cards go out and do not sell, and are out-performed by AMD's next APUs.The integrated graphics are an improvement, but the discrete cards are DOA.
powerarmour - Saturday, October 31, 2020 - link
Hardly any competition for Nvidia or AMD this, only 4x PCIe lanes and LPDDR4X-4266 will barely pull the skin off a rice pudding.erinadreno - Sunday, November 1, 2020 - link
All the nvidia mx series are using pcie3.0x4 lanes. And have 64bit gddr5 controllers. Consider the positioning of this card, I'd say it's perfectly adequate.Spunjji - Sunday, November 1, 2020 - link
What exactly is it's positioning?"The dGPU for people who don't want a dGPU"
"It performs like its main competitor's two-gen-old recycled product that has half its resources disabled"
PCIe lanes are neither here nor there, it's inadequate.
SolarBear28 - Saturday, October 31, 2020 - link
Its an interesting idea but I don't see a great demand to accelerate these niche workloads. I'd much rather have more traditional CPU or GPU performance.Spunjji - Saturday, October 31, 2020 - link
Agreed. I feel like, as has become common with Intel lately, their entire rationale has been created ad-hoc to cope with the calibre of product they were capable of producing.Your best CPU has worse IPC, fewer cores and higher power than the competition? Clock the nuts off it and pitch it as a gaming CPU.
Your first Big/Little style design has subpar performance? Pitch it as being for "premium always-on' devices and then... Never sell any.
dGPU has crap gaming performance? Make some stuff up about transcoding, surely this is what people do with ultraportables where they paid extra for a second GPU!
lmcd - Saturday, October 31, 2020 - link
Bad first iterations are common across all three big laptop/desktop silicon companies. First gen RTX was pretty awful. Ryzen 1 was mediocre and Ryzen 2 (Zen 1) APUs were awful in laptops. First gen HBM was basically worthless with its 4GB memory cap.Intel's high-end CPU woes are completely separate from the combined CPU core approach. Lakefield is a tech demo for a niche product segment and a way to validate an advanced manufacturing technique at scale, subsidizing the effort with a few sales.
Intel's dGPU is a first go at it. Intel's GPU team has no experience making a PCIe card, and now they do. Their only other experience would be from networking (completely different team) or the Knight's Landing/Corner/etc team (different team and also in servers, so very different requirements).
Spunjji - Sunday, November 1, 2020 - link
I'm not saying they aren't. I was talking about how they keep pitching products that they talk up, launch, then mysteriously go quiet on.Then boom, another launch, quickly talk about this one and forget the last one!
I don't buy this "they don't have experience making a PCIe card" thing. Aside from this not actually being a PCIe card, per se, they do already have lots of experience both with building actual cards and connecting accelerators over PCIe. You just admitted it, and then fumbled on why it's not relevant.
SolarBear28 - Sunday, November 1, 2020 - link
Many of those other technologies at least showed promise in the first generation. This is just a duplicate iGPU that cannot used in tandem with the existing iGPU in 90% of workloads.throAU - Monday, November 2, 2020 - link
Thing is, this isn't their first iteration of a GPU. They've been doing integrated graphics for 2+ decades at this point and they had a crack at discrete cards back in the 90s.It isn't even their first discrete GPU in a while. They literally just chopped the thing out of an integrated part and stuck it on a card. Sorry, but that's garbage. Unless they stick a real GPU memory controller on it with a decent amount of bandwidth and significantly increate the EU count .... it is entirely pointless.
JustMe21 - Saturday, October 31, 2020 - link
This is one of the instances where it demonstrates how Intel can bring itself back up to par with AMD because AMD has had opportunities to do the same thing, but just kept rehashing Vega in their APUs. Now that AMD is on par with Intel in single core IPC, they need to work on the extra features of their CPUs and GPUs.ballsystemlord - Saturday, October 31, 2020 - link
Navi 1 is also based on Vega. Does that make it a "rehash"?JustMe21 - Saturday, October 31, 2020 - link
Well, Cezanne could have been Navi, but it's using Vega yet again. It would have been interesting to see an RDNA2 based APU and discreet RDNA be able to function like a big.LITTLE Navi, especially considering RDNA2 introduced Smart Access Memory.fmcjw - Saturday, October 31, 2020 - link
A good followup discussion would be the state of drivers and software support among the 3 dGPU camps. It seems at the entry level we have hardware parity now.KaarlisK - Sunday, November 1, 2020 - link
Yup. Because Intel has the horrible policy of dropping drivers of current-2 generation from the current branch and moving them to legacy. Broadwell, once touted as "you can finally game on it", is already on bugfix support only. It seems they will even be dropping Skylake, simply because it is too old, even if it is almost the same as the still supported Kaby lake.Spunjji - Sunday, November 1, 2020 - link
Yup, and Intel's drivers are miserable from a gaming perspective. Several of the games they quote frame rates for have known bugs on their hardware.Spunjji - Monday, November 2, 2020 - link
"Cezanne could have been Navi"Could it, though? Maybe RDNA; probably not RDNA 2 based on the time-frames involved, though. Given the performance they get from Vega already - and how absolutely tiny it is - I can see why they didn't go for the interim design that appeared to be fairly inefficient in terms of transistors vs. performance.
Smell This - Tuesday, November 3, 2020 - link
Yeah . . .The AMD - APU graphic engine over the last 5 years has reduced the CUs, increased efficiency by 20% and increased performance 30%. Vega8 on the Ryzen 4xxxGs is the 'rope-a-dope' for the Cezanne 5xxxG. The new APU will slobber-knock whatever Chipzillah puts forward
Abadonna302b - Saturday, October 31, 2020 - link
They are going the right way. Gaming will seek it's end in future. it is a stupid time wasting activity.Spunjji - Saturday, October 31, 2020 - link
While encoding media slightly faster with an extra GPU is sUpEr ImPoRtAnTantonkochubey - Saturday, October 31, 2020 - link
tl;dr Intel made a dedicated video encoder.powerarmour - Saturday, October 31, 2020 - link
It decodes as well tbflmcd - Saturday, October 31, 2020 - link
To be honest this does allow a 28W TDP Tiger Lake to boost even higher, which does add value. Switchable graphics with a single driver solution from a more reliable driver vendor than AMD does have some value.Spunjji - Saturday, October 31, 2020 - link
Yet their performance figures showing that it doesn't always outperform the iGPU make me wonder what the point of it all is... Not to mention that you just paid twice over for 96EUs.It could be pitched as an upgrade for the iGPU-impoverished lower-end models, but then it would be cheaper to get the highest-end model alone.
Honestly I'm baffled as to why this exists, save to say that recently Intel seem to like "launching" experimental products that aren't yet fit for retail as an excuse to beta-test the software stack.
lmcd - Saturday, October 31, 2020 - link
Imo the problem here is that Intel has fought hard to improve OEM designs with good reference designs, EVO certification, and more. Despite all of this, some OEMs still botch cooling. A dGPU is a way to distribute heat more when the OEM can't figure out how to cool one higher-wattage package sanely. Intel's Dynamic PowerShare slide is all but labeled "HP and Acer fix your damn cooling" imo.Spunjji - Sunday, November 1, 2020 - link
I'm not going to disagree with you on this one. Both of those companies suck at system cooling, and have for decades.One thing I'd say, though - Intel have been fighting for a long time, and still haven't fixed it. So are they actually fighting that hard?
supdawgwtfd - Saturday, October 31, 2020 - link
Intel supplying better drivers that AMD?Put the crack pipe down mate.
Intel Graphics drivers are junk and have been for decades.
Spunjji - Sunday, November 1, 2020 - link
I hate the "AMD drivers suck" mantra more than most, but it's a fact that they repeatedly screwed things up with their mobile APUs. My understanding is that the situation has improved but is still not ideal.Intel CPU drivers suck, though, for sure. They rarely crash the system, but they're bug ridden.
ozzuneoj86 - Saturday, October 31, 2020 - link
Serious question:How many people own a laptop with this level of graphics? Maybe it's just because I search for "deals" rather than specific models, but I hardly ever see a laptop with low end discreet graphics that I would consider a good deal. Generally I see good solid systems with the latest AMD or Intel CPUs with integrated graphics or affordable gaming systems with 1650-level graphics for surprisingly low prices.
Who actually needs just a little bit more performance than integrated, but doesn't want too much performance? Seems oddly specific... but Intel clearly knows what is worth investing in, so there must be more of a market for these than I think?
lmcd - Saturday, October 31, 2020 - link
The boost is entirely via memory bandwidth and thermal separation. A 1650 exceeds the thermal usage of even a 35W CPU. An MX450 does not most likely, and obviously an Xe max does not.This means that 2x a small, thin thermal solution can be allocated to 1 for the GPU, 1 for the CPU.
Spunjji - Saturday, October 31, 2020 - link
Prior to Ice Lake and Renoir, this was a moderately popular segment well-served by Nvidia. Pretty much "how much GPU can we fit into this space where once there could have been none".But Intel just went and made sure pretty much all upcoming ultrabooks will already have a competent iGPU thanks to Tiger (drivers notwithstanding), rendering this product nearly pointless.
Tomatotech - Saturday, October 31, 2020 - link
Not sure why Intel are going into discrete graphics.Their Optane tech had real advantages - all Intel had to do was actually implement it as properly tiered storage (a well-understood technology) for almost instant restarting / wake / hibernate / app startup. A few gig in every chip or mobo would be cheap at scale and Intel would be onto a winner. Instead they half-assed it.
Perhaps with this ‘content creation’ dGPU they’re making a play for phones, car sensors, smart device sensors etc, but it’s ferociously competitive and price sensitive. Otherwise, well, I don’t know. dGPU market is tiny for a company like Intel and desktops / laptops are a mature / shrinking market, especially if your name isn’t Apple.
Spunjji - Saturday, October 31, 2020 - link
They'll find plenty of OEMs to take these off then, and plenty of customers will end up with them - but IMHO it's all utterly pointless.JayNor - Saturday, October 31, 2020 - link
Intel's Xe gpus can use Optane... read their patent applications.Kevin G - Saturday, October 31, 2020 - link
Very meh on this. It is an important stepping stone for Intel releasing a discrete graphics card but feels late to market without anything really going for it.At ~72 mm^2, why not scale it upward to say 144 mm^2 or 288 mm^2 that'd be more performant? This really does feel like a TigerLake die with the x86 cores hacked off. Ditto for the idea of using the same memory controller as TigerLake. While there are some OEM benefits to sticking with LPDDR4X-4266 (same memory part for CPU and GPU), the lack of GDDR6 support is very surprising. Granted there can be a handful of benefits coupled with Tiger Lake (see Adaptix comments below), the big question is why would an OEM spend time/money on developing this for the advantages it gets on top of a higher bill of materials for the laptops? Intel would practically have to give these away for free
The ability to leverage multi-GPU with Tiger Lake is interesting but likely not enough to be competitive a step up in the nVidia or AMD's mobile line up. While everyone loves (and in many cases deservingly so) to make fun of AMD for poor drivers, Intel historically has been even worse. I'm hyper skeptical that Intel has a good multi-GPU solution working. I can see the marketing slides indicating a solid performance gains in raw FPS but I'd predict that it'll be a microstuttering mess. Doubly so if Intel were to release a dual Iris Xe MAX discrete gaming card for desktops down the road.
In defense of Intel a little bit, the Adaptix technology can be beneficial in terms of power/cooling in some scenarios, depending on the laptop design. Thermally you have more silicon area that generates heat and thus less thermal density, i.e. it is easier to move the generated heat to the cooling solution, which would result in lower aggregate system temperatures. Allowing the OEMs control over this is critical, as the advantages of using Adaptix will depend on the TigerLake + Iris XE Max Hyper Turbo Black Legendary Collector's Edition implementation. Intel does need to work with OEMs on this to avoid sub-optimal implementations.
I will say that this chip does look promising for the encoding/transcoding niche. Intel has the parts to build an impressive replacement for their VCA lineup around this chip. While the usage of LPDDR4X-4266 isn't that great for gaming performance, expected support for DDR4L to enable SO-DIMM slots does make it interesting as a data center part: you can work on huge data sets if you put in larger SO-DIMMs. Slapping four of these Iris Xe MAX Deluxe Gold Ti Special Limited Editions on a card with eight SO-DIMM slots and 256 GB would make a transcoding beast to chew through the most demanding 8K footage. Or take one of those chips and some SO-DIMM slots and pair them with a high-end Intel NIC and/or FPGA on the card for some interesting niche applications. Intel really needs to execute here, as these markets are niche but lucrative.
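To put the transcode-card idea in concrete terms, here is a minimal sketch of how a host might drive such a hypothetical multi-chip board: fan ffmpeg QSV transcode jobs out across several render nodes, one per GPU. The device paths, the -qsv_device selector, and the four-chip layout are illustrative assumptions, not anything Intel has specified for this part.

```python
# Hypothetical sketch: spread transcode jobs across several DG1-class devices
# using ffmpeg's QSV encoder. Device paths, the -qsv_device option, and the
# four-card layout are assumptions about the host setup, not Intel specs.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle
from pathlib import Path

# One DRM render node per GPU on the hypothetical four-chip board.
DEVICES = ["/dev/dri/renderD128", "/dev/dri/renderD129",
           "/dev/dri/renderD130", "/dev/dri/renderD131"]

def transcode(job):
    src, device = job
    dst = src.with_name(src.stem + "_hevc.mp4")
    cmd = [
        "ffmpeg", "-y",
        "-qsv_device", device,   # pin this job to one GPU (assumed option)
        "-hwaccel", "qsv",       # hardware decode where the codec allows it
        "-i", str(src),
        "-c:v", "hevc_qsv",      # QSV HEVC encoder
        "-b:v", "20M",
        "-c:a", "copy",
        str(dst),
    ]
    subprocess.run(cmd, check=True)
    return dst

sources = sorted(Path("footage").glob("*.mp4"))
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    for finished in pool.map(transcode, zip(sources, cycle(DEVICES))):
        print("done:", finished)
```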
Oh, and Intel's marketing team needs to figure out a better naming scheme.
lmcd - Saturday, October 31, 2020 - link
AMD's laptop iGPU drivers are substantially worse than their dGPU drivers. Intel took power from OEMs a while ago, but AMD has not yet done so to the same degree. As someone who has had their AMD driver settings panel broken 3+ times by Windows Update pushing an older driver than the one I previously had installed, I can promise you there is upside.
supdawgwtfd - Saturday, October 31, 2020 - link
Uhhhh.... The drivers are the same.
Has been for a while.
AMD took back control of their drivers from OEMs because the OEMs were NOT updating them ever.
I think you need to do some research before spouting more nonsense.
lmcd - Thursday, January 28, 2021 - link
Didn't get around to refuting your utter garbage comment. The drivers might be the same, but OEMs can push custom drivers. I get custom drivers pushed over my manually-installed ones basically every 4-6 months, and they break the control panel.
HarryVoyager - Saturday, October 31, 2020 - link
That's an interesting thought. Could these actually be failed Tiger Lake chips that they are simply harvesting for the GPUs, to sell rather than scrap?
Kevin G - Sunday, November 1, 2020 - link
Tiger Lake has a different reported die size than Iris XE Max. However, there is no reason why Intel couldn't ship crippled TigerLake SoCs with the CPU portion disabled. It is about harvesting the maximum number of semi-functional dies, and if there are enough Tiger Lakes without functional CPUs (or with, say, 1 or 2 bad cores), or enough where the CPU side fails power consumption binning, it could happen as a limited OEM-only SKU. Such OEM-specific SKUs are rarely available at launch, so it'll be several months before such a part appears, and it'll do so with little fanfare.
Spunjji - Monday, November 2, 2020 - link
Intel do have prior form for selling "coprocessors" that are an entire CPU, so it wouldn't be the first time. They seem to be claiming it's a real dedicated GPU, though, albeit one that appears to reuse the CPU's memory controller. It does seem a bit odd for them to use precious 10SF capacity for additional products when they can't get out their 8-core TGL, too. Curious stuff.
The_Assimilator - Saturday, October 31, 2020 - link
So it appears that Intel's "discrete" GPUs are just going to be derivatives of their same old tired rubbish iGPUs. This product, in particular, will only be chosen by OEMs until NVIDIA decides to adjust the price of the MX450 to undercut it. Intel's sole accomplishment with this GPU, therefore, is to force the retirement of the MX350... almost certainly not what they were hoping for.
Spunjji - Sunday, November 1, 2020 - link
100%
vladx - Saturday, October 31, 2020 - link
So it's a GPU solely for video encoding; that's pretty neat, especially if they make it available to NUCs.
eastcoast_pete - Saturday, October 31, 2020 - link
This and possible future dGPUs for laptops make a lot of sense for Intel. Laptops, especially light and ultralight ones, are a key profit center for Intel, so trying to own a large share of the dGPU space there makes sense. The biggest question that only a test can answer is how Intel's iGPU/dGPU solution compares to AMD's big iGPU in Renoir, especially the 4800/4900 models. If Intel's Xe can't beat the graphics performance of those, this is pretty much useless. If they can beat the 7 or 8 Vega cores in Renoir AND pull even with NVIDIA's MX350, Intel has a leg to stand on. So, test please!
eastcoast_pete - Saturday, October 31, 2020 - link
And by beating built-in Vega, I mean by more than 1-3 fps. It has to be pretty decisive, or Renoir's better CPU performance will always wipe the floor with Tiger Lake.
lmcd - Saturday, October 31, 2020 - link
Tiger Lake is confirmed to pass Renoir, which sort of sandbagged on iGPU performance this generation. Renoir vs Tiger Lake favors Renoir due to core counts, in part aided by how tiny Renoir's GPU is. It'll be interesting to see how AMD manages a larger iGPU alongside an 8-core laptop part. I hope they can do it, but I doubt it'll be cheap, or widely available for purchase. After all, it'll be the biggest die they produce (that requires low leakage, anyway).
supdawgwtfd - Saturday, October 31, 2020 - link
Confirmed by who? Intel?
Did they have a chiller under the table while running the test?
Can't believe a word from Intel ATM.
They have been far too dodgy with their benchmarking as of late.
Spunjji - Sunday, November 1, 2020 - link
Independent tests show that Tiger passes Renoir, albeit mostly in canned benchmarks. In real games, Tiger is frequently let down by the shitty Intel drivers. Bugs, stuttering, glitches, crashes - not universal, but fairly common.
Haawser - Sunday, November 1, 2020 - link
Hmm. It 'beats' Renoir when the Renoir is at 15W and the TL at 28W...
Spunjji - Monday, November 2, 2020 - link
I was talking in pure GPU terms, and I'm pretty sure TGL still has a small margin over Renoir in those situations even when they're constrained to comparable power levels (15W constant, ~30W boost). But like I said, it doesn't really bear out in practice anyway. 🤷♂️
Spunjji - Sunday, November 1, 2020 - link
Well, they already fail that standard with the iGPU, and the dGPU isn't always faster... Go figure!
descendency - Saturday, October 31, 2020 - link
Why is Intel so bad at naming products? "Xe MAX" makes it sound like this is the absolute best the Xe lineup can do. I wish they would hire someone who will fix their names soon.
Smell This - Monday, November 2, 2020 - link
well . . . ZMax was already taken?
(that is an auto product, I think)
cbm80 - Saturday, October 31, 2020 - link
Would be a great card for an AMD workstation.
GreenReaper - Saturday, October 31, 2020 - link
Honestly I think this makes more sense in a server as an expansion card, but I'm sure there will be some cases where it's the perfect solution. Just not that many.
davidefreeman - Saturday, October 31, 2020 - link
Even though they're pairing this with ICL-derivative parts, it seems like it would be a better story to pair the Xe MAX dGPU with 14nm parts, especially Rocket Lake with its weaker, hotter Xe graphics.
repoman27 - Sunday, November 1, 2020 - link
That’s exactly why Intel made this. It was intended to be a companion to the cancelled 14nm RKL-U. Although it could also serve alongside a hypothetical RKL-H in the event that TGL-H doesn’t pan out.
mrvco - Saturday, October 31, 2020 - link
Interesting or not, Intel sure is good at keeping their PR at the top of the news feeds.
lightningz71 - Saturday, October 31, 2020 - link
So, with it having a PCIe 4.0 x4 interface, and being quite small, I wonder if it would be at all possible for this to be made into an M.2 card as a sort of expansion adapter for add-on video encoding.
brucethemoose - Sunday, November 1, 2020 - link
That's actually a neat idea. 4x DG1 on a 16x PCIe riser could transcode a whole lot of video, or accelerate small compute instances in dense servers.
brucethemoose - Sunday, November 1, 2020 - link
The lack of an edit button on Anandtech is so painful...
Cubesis - Sunday, November 1, 2020 - link
Signed up just to compliment someone on a hilarious comment and already know from casual reading that this site doesn’t have an edit button and it will be the bane of my existence here.
Cubesis - Sunday, November 1, 2020 - link
*proof reads obsessively*
brucethemoose - Sunday, November 1, 2020 - link
And note that Intel have already made cards like this with laptop IGPs, though IDK if they were ever stuck on M.2 sticks.
tygrus - Sunday, November 1, 2020 - link
Intel doesn't have enough available working 10nm wafer capacity for mobile + a bigger Xe in big volume, so they are stuck with an edited Tiger Lake for now. If Intel 10nm had made it to desktop 12 months ago, then they could have focused on 2x 96 EU (192 EU total, 4 video on dGPU, etc.) and then bigger.
The Intel 740 was partly a failure because by the time it came to market it was being beaten by existing competitors. The 740 did become the basis for integrated graphics (a slightly updated version in the 810 & 815 chipsets). Intel later updated their iGPU tech for entry-level performance with CPUs, until recent attempts to really improve it.
https://www.tomshardware.com/reviews/graphic-chips...
https://web.archive.org/web/20140329071342/http://...
https://en.wikipedia.org/wiki/Intel740
JfromImaginstuff - Sunday, November 1, 2020 - link
For capacity for the chips, they may go with an external fab, as they've been hinting at... (maybe).
zodiacfml - Sunday, November 1, 2020 - link
Such a low performance target. I hope they get this up to around the $200 budget where they can trash an RX 570 or 580 or equivalent.
Spunjji - Sunday, November 1, 2020 - link
By then, the 580 will be a distant memory at the $200 price range.
zodiacfml - Sunday, November 1, 2020 - link
Well, you could say that. They move/work quite slowly considering the capabilities of this chip. By the time this comes out, AMD could have an APU/iGPU performing close to this discrete GPU.
Oxford Guy - Monday, November 2, 2020 - link
Considering that a 580 is basically a rebadged 480, that takes us back to mid-2016. So very, very old - and not all that impressive when it debuted, since it was designed to be cheap to make more than high in performance.
zodiacfml - Sunday, November 1, 2020 - link
Crazy - nothing in here except for decoding AV1.
GeoffreyA - Sunday, November 1, 2020 - link
"We've made this wonderful kettle; but forget boiling water, which it can *also* do, it works beautifully as a jug to pour water from. Pour everyone's water simultaneously."---Intel Xe Kettle Press Release, 2020eastcoast_pete - Sunday, November 1, 2020 - link
Hi Ryan, if and when you or Ian test one of these, please do a side-by-side comparison of the video encode/transcode with NVENC in Turing or Ampere (same ASIC from what I read, so a 1660 should already have the recent, much better version of NVENC). If possible, do a 4K clip from an action cam and maybe one from the new iPhone 12 in HDR or Dolby (self-serving request, likely use scenario for me). At least up to now, Intel's Quicksync was, yes, fast, but with at-best mediocre quality output. If Intel is positioning these Xe iGPU/dGPU combos as good for video, they might have something if (!) they can match or beat NVENC in both speed and quality. Of course, strictly software-based encoding remains the gold standard, but dedicated ASICs beat the pants off even a big Threadripper on encoding speed, and the newest ASICs aren't half bad.
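A test along those lines is straightforward to sketch: encode the same clip with each hardware encoder at a matched bitrate, then score the outputs against the source with VMAF. The snippet below assumes an ffmpeg build with libvmaf enabled and with both the QSV and NVENC encoders present; the clip name and bitrate are placeholders.

```python
# Rough sketch of the suggested QSV-vs-NVENC quality test: encode the same
# source with each hardware encoder at a matched bitrate, then score each
# result against the original with VMAF. Assumes an ffmpeg build that has
# libvmaf plus both encoders; clip name and bitrate are placeholders.
import subprocess

SOURCE = "clip_4k.mp4"                      # hypothetical test clip
ENCODERS = {"qsv": "hevc_qsv", "nvenc": "hevc_nvenc"}
BITRATE = "15M"                             # arbitrary matched target

for name, codec in ENCODERS.items():
    out = f"out_{name}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                    "-c:v", codec, "-b:v", BITRATE, "-an", out], check=True)
    # libvmaf takes the distorted video as the first input and the reference
    # as the second; the aggregate score lands in the JSON log written here.
    subprocess.run(["ffmpeg", "-i", out, "-i", SOURCE,
                    "-lavfi", f"libvmaf=log_fmt=json:log_path=vmaf_{name}.json",
                    "-f", "null", "-"], check=True)
```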
yeeeeman - Sunday, November 1, 2020 - link
This looks pretty competitive with the MX450 from the leaks. At the same 25W TDP, Xe MAX is a bit better.
Spunjji - Monday, November 2, 2020 - link
MX450 is about 33% faster than MX350 according to those leaks. So no, this isn't "a bit better", it's at least 25% worse on average at the same TDP. Seriously, sometimes I wonder why commenters like you even bother. It took me ~20 seconds on Google to find out you were wrong.
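For reference, the arithmetic behind that figure, assuming (per the leaks discussed above) that Xe MAX lands roughly at MX350 level:

```python
# Back-of-envelope check of the claim above, assuming (per the leaks) that
# Xe MAX lands roughly at MX350 level and the MX450 is ~33% faster than MX350.
mx350 = 1.0
mx450 = 1.33 * mx350
xe_max = 1.0 * mx350   # assumption drawn from the discussion, not a measurement
print(f"Xe MAX relative to MX450: {xe_max / mx450:.0%}")  # ~75%, i.e. roughly 25% slower
```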
Danvelopment - Sunday, November 1, 2020 - link
"Meanwhile the part offers a full display controller block as well, meaning it can drive 4 displays over HDMI 2.0b and DisplayPort 1.4a at up to 8K resolutions."If I can get three 4K60 displays running simultaneously (no gaming) through a port replicator then they'll have a sale from me.
Oxford Guy - Sunday, November 1, 2020 - link
Iris Xe MAX, eh? It clearly suffers from NEN syndrome (not enough names).
Oxford Guy - Sunday, November 1, 2020 - link
Pardon... Intel Iris Xe MAX.
(Whew... that's better.)
Revv233 - Monday, November 2, 2020 - link
I can see value in laptops where you might want to play some games on the road but don't want a gaming laptop.
dragosmp - Monday, November 2, 2020 - link
Thanks for the article, good info, but hard to read. Can I please slip in a request - could we get a decoder table with each of the lakes/coves and on which platforms/sockets/processes/etc they run?
Feels to me Intel is FOWing with basically a brand for every core revision, however minor, to make it look like they do something. Lousy marketing; not saying the company or engineering is bad, but gawd is it hard to understand which is what. Maybe you can help Intel with communicating what they released.
xrror - Monday, November 2, 2020 - link
How does Intel screw their own products so badly like this??? Xe should be something exciting, yet they introduce it in the lamest way possible. "Here, now you can buy our new dGPU in the one platform that basically takes the "discrete" out - better yet, it doesn't even add value to the platform - its entire functionality is already 100% duplicated by the integrated graphics that is present in every processor we currently make, when we don't decide to fuse it off for marketing."
Oh, and lastly, the one application you might expect Xe to be used for - you know, graphics - no, we can't use that to share work with its identical workmate, the iGPU.
arrrrghhh
Honestly I wish Intel would let the chains off their GPU folks and let them YOLO that 4-die Xe part. I'd love to see what 96x4 = 384 Intel EUs can do.
Oxford Guy - Monday, November 2, 2020 - link
"How does Intel screw their own products so badly like this???"Arrogance.
Why else would Intel sell the Broadwell-C, which showed the power of eDRAM for gaming, and then coast on Skylake forever without deigning to provide that benefit (an estimated whopping $10 worth of parts)?
Also, a little thing called inadequate competition in an allegedly (but not so much) capitalist environment. People complained about how poorly AMD was competing with its terrible Bulldozer family, and yet even if AMD had been competing well, that's still only a duopoly.
Mrb2k3 - Thursday, November 5, 2020 - link
In reality this is integrated "free" graphics. The level will keep creeping higher but will never be what you want for your solution. Accept that this is nothing more than that; it's been this way for 15-plus years. If you care about your GPU, you'd look elsewhere. Integrated graphics keep pace, but that's all they do; they will not outpace the user workload.
Botts - Thursday, November 5, 2020 - link
This is pretty interesting. We know Xe has AV1 decode support from the intel/media-driver code update, right? This could be a big boost for media consumption.
If the encode performance is as strong as the slide suggests, this could have a decent market for security NVRs.
It would also make a pretty badass Plex transcode box. That's definitely what I'll be waiting for.
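For anyone wanting to check whether that AV1 path is actually exposed on a given box, a quick probe of the local ffmpeg build is enough; whether a hardware decoder such as av1_qsv shows up depends on the ffmpeg build and the installed intel/media-driver, so the decoder name here is an assumption rather than a given.

```python
# Probe the local ffmpeg build for AV1 decoders. Whether a hardware path such
# as av1_qsv appears depends on the ffmpeg build and the installed
# intel/media-driver, so that name is an assumption, not a guarantee.
import subprocess

decoders = subprocess.run(["ffmpeg", "-hide_banner", "-decoders"],
                          capture_output=True, text=True, check=True).stdout
av1_lines = [line for line in decoders.splitlines() if "av1" in line]
print("\n".join(av1_lines) or "no AV1 decoders found")
```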