45 Comments
thestryker - Thursday, August 19, 2021 - link
When I see what a big swing they're making on the software side I can't help but wonder if a big reason for them shipping 22q1 rather than 21q4 is software related. No matter what I hope they're going to have some sort of controls on pricing so that if/when they sell direct to consumers we don't see a used markups.
thestryker - Thursday, August 19, 2021 - link
*huge markup
Looking forward to the day edit exists!
Kamen Rider Blade - Thursday, August 19, 2021 - link
This launch will prove whether Raja Koduri is a real GPU genius or a joke, and whether AMD was better off once he left.
mode_13h - Thursday, August 19, 2021 - link
He's so high-level that I wonder... If he was bad enough, no doubt he could throw several wrenches into the works. However, if the people under him are good enough, they can surely carry the project without his help.
Having an incompetent boss just means you have to "manage up". It's taxing, but it can be done.
Kurosaki - Thursday, August 19, 2021 - link
It's so sad to see every GPU manufacturer wasting precious die space for the image quality degenerative scaling. I bet in 10 years' time we will have to live with this crap whether we choose to or not. DLSS and the like are not image-improving, so why go to such lengths to compete in the image tearing techniques?
mikeztm - Thursday, August 19, 2021 - link
DLSS is an image-improving technique. Its output has a smaller error (i.e. a better PSNR) when compared to the original high-resolution image, so it is mathematically improving the image.
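For reference, PSNR is the usual metric behind claims like this; a minimal sketch of how it is computed, assuming 8-bit single-channel frames held in numpy arrays (the variable names are illustrative, not from the article or this commenter):
```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: compare an upscaled frame and a native render against a
# supersampled "ground truth" frame of the same size.
# psnr(ground_truth, upscaled) > psnr(ground_truth, native) would support the claim above.
```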
Kurosaki - Friday, August 20, 2021 - link
It will never compare to native resolutions. It's like interpolation is the new black. I'll never turn that on as long as it's not forced. Use the die space for more shaders instead. Or lower the costs by making smaller chips; hell, lower-class cards like the 3060 and 6700 cost what high-tier cards did a couple of years ago.
jordanclock - Friday, August 20, 2021 - link
The 6700 does not contain anything comparable to the tensor cores found on Nvidia GPUs, so your comparison doesn't make sense. That is an example of exactly what you want: A GPU with more shaders and no dedicated ML hardware. But somehow the 6700 isn't magically cheaper. Weird, huh?
Using die space for tensor cores like Nvidia has done has been nothing but an improvement for users. It means gamers can play games at higher resolutions than would be possible by throwing more shader cores at the GPU and it means that professionals with ML workloads get vastly improved local performance.
mode_13h - Saturday, August 21, 2021 - link
I agree with you 99%, although the RDNA cards do burn a bit of die space on "rapid packed math" instructions. Not on par with tensor cores, either in terms of performance or die space.
Sushisamurai - Thursday, August 26, 2021 - link
lol, that comment on the 6700 being exactly what he wants and it's not cheaper. I died. So true, yet so sad.
Mat3 - Wednesday, September 1, 2021 - link
His comment doesn't imply that he believes the 6700 has tensor cores. He is saying that low and midrange cards are more expensive than they used to be (from current supply not meeting demand, and ever more complex manufacturing), and making the cores bigger to accommodate tensor cores is not going to help.
The extra die space to have the ALUs do packed math is trivial compared to tensor cores.
mikeztm - Friday, August 20, 2021 - link
It is mathematically better compared to native resolutions. It's a proven fact and you are just ignoring it.
There's no magic here: it just renders 4 jittered frames and combines them into 1.
For a 1440p render, this is literally rendering one 5120x2880 frame across 4 frames, using motion vectors to compensate for the moving parts.
If you stand still it is literally a 5120x2880 (5K) resolution, and no doubt this will be better than 4K native.
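For the stand-still case described above, the sample counts do line up; a minimal sketch, with a made-up scene() function standing in for the renderer and small grid sizes for speed, of how 4 half-pixel-jittered low-res renders can fill every pixel of a 2x-resolution grid (real upscalers never get this ideal alignment once anything moves):
```python
import numpy as np

def scene(x, y):
    # Stand-in for the renderer: any function of continuous screen coordinates.
    return np.sin(3.0 * x) * np.cos(2.0 * y)

LW, LH = 8, 8            # low-res grid (think 2560x1440)
HW, HH = 2 * LW, 2 * LH  # target grid (think 5120x2880): exactly 4x the samples

# Four half-pixel jitter offsets (in low-res pixel units), one per frame.
jitters = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]

ys, xs = np.mgrid[0:LH, 0:LW].astype(float)
accum = np.zeros((HH, HW))
for jx, jy in jitters:
    samples = scene(xs + jx, ys + jy)                # one jittered low-res "render"
    accum[int(2 * jy)::2, int(2 * jx)::2] = samples  # static camera: each jitter fills its own high-res sites

# Every high-res pixel received exactly one sample: 4 * (LW * LH) == HW * HH.
hx, hy = np.meshgrid(np.arange(HW) / 2.0, np.arange(HH) / 2.0)
assert np.allclose(accum, scene(hx, hy))
```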
mode_13h - Saturday, August 21, 2021 - link
> It is mathematically better compared to native resolutions.
This can only be true, in general, if the native 4k render was done at a lower quality setting, like no-AA.
> There's no magic here: it just renders 4 jittered frames and combines them into 1.
If they rendered the same frame 4 times, that would be *slower* than native rendering. So, I gather you mean using TAA-like techniques to combine a new frame with the previous 3.
> For a 1440p render, this is literally rendering one 5120x2880 frame
> across 4 frames, using motion vectors to compensate for the moving parts.
Not exactly, because the samples don't line up quite that neatly.
> If you stand still it is literally a 5120x2880
Well, that is the simplest case. The real test is going to be how it handles something like walking through a field of swaying grass.
Flashing and rapidly-moving light sources are going to be another challenge.
Bik - Saturday, August 21, 2021 - link
Yep, it won't surpass native resolution. But in our case, if a model is trained on 16k frames on servers, the inference can produce 4k images that go beyond 4k native.
flyingpants265 - Wednesday, September 15, 2021 - link
I think you're right, these things might be here to stay. Higher resolutions are pretty taxing: 4k@60Hz, and especially 4k@144Hz, which is 9.6 TIMES the pixels per second versus 1080p@60Hz. 9.6 TIMES, that's not gonna work unless you want to bring back quad SLI or something (no thanks). Plus we have consoles to deal with.
All this stuff will look good on screens. It's just a way to achieve better performance per dollar (which won't necessarily benefit us because they'll just raise prices on cards that can upscale), or per watt. And a way to render on 4k screens. I like 1440p screens but TVs use 4k, and anyway for PC the extra resolution is nice for a lot of things besides gaming.
The scaling hardware might end up useless in 5 years though, as more of these newer scaling things come out.
I would like to see a demo that switches a game, while playing, between native 4k and each resolution (from 240p native, to 4k, with every single resolution/upscaling option in between), and sorted by framerate. And then allow the user to switch on-the-fly with F1 and F2 or something. Kind of like the GamersNexus demos of DLSS, except instead of zooming in on images you'd just do it in game.
I'm mostly concerned with the input lag from the scaling.
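The 9.6x figure quoted above checks out; a quick arithmetic sketch, with the resolutions and refresh rates as given in the comment:
```python
# Pixels pushed per second at each mode.
rate_4k_144 = 3840 * 2160 * 144   # ~1.19 billion pixels/s
rate_1080_60 = 1920 * 1080 * 60   # ~124 million pixels/s

print(rate_4k_144 / rate_1080_60)  # 9.6
```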
mode_13h - Friday, August 20, 2021 - link
> quality degenerative scaling
As far as scaling goes, it's a lot better than you'd get with naive methods. So, I reject the construction "quality degenerative scaling", unless you mean that any scaled image is degenerative by comparison with native rendering @ the same quality level.
> image tearing techniques?
Image-tearing has a precise definition, and it's not what you mean. I think you mean "image-degrading".
And the answer is simple. As the article states, 4k monitors and TVs are pretty cheap and kinda hard to resist. Games are using ever more sophisticated rendering techniques to improve realism and graphical sophistication. So, it's difficult for GPUs ever to get far enough ahead of developers that a mid/low-end GPU could be used to render 4k. For those folks, high-quality scaling is simply the least bad option.
edzieba - Friday, August 20, 2021 - link
Once you dip your toes into even the basics of sampling theory and 3D image rendering (or 2D, with the exception of unscaled sprites), it becomes very clear that pixels are a very poor measure of image fidelity. The demand for 'integer scaling' in drivers is a demonstration that this misapprehension is widespread, despite the confusion, among those aware of how rendering works, as to why people are demanding the worst possible scaling algorithm.
The idea that the pixels that pop out of the end of the rendering pipeline are somehow sacrosanct, compared to pixels that pass through postprocess filtering, is absurd, given how much filtering and transformation they went through to reach the end of that pipeline in the first place. Anything other than a fixed-scale billboard with no anisotropic filtering will mean every single texture pixel on your screen is filtered before reaching any postprocessor. But even if you ignore textures altogether (and demand running in some mythical engine that only applies lighting to solid-colour primitives, I guess?) you still have MSAA and SSAA causing an output pixel to be the result of combining multiple non-aligned (i.e. not on the 'pixel grid') samples, temporal back- and forward-propagation meaning frames are dependent on previous frames (and occasionally predictive of future frames), screen-space shadows and reflections being at vastly different sample rates than the 'main' render (and being resamples of that render anyway), etc.
AI-based scalers are not mincing up your precious pixels. Those were ground down much earlier in the sausage-making render pipeline.
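To make the filtering point concrete: even an ordinary bilinear texture fetch already makes each shaded pixel a weighted blend of four texels, long before any post-process scaler sees it. A minimal single-channel sketch, with a toy texture array (names are illustrative):
```python
import numpy as np

def bilinear_sample(texture, u, v):
    """Sample a 2D single-channel texture at continuous coords (u, v) in texel units."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    fx, fy = u - x0, v - y0
    x1 = min(x0 + 1, texture.shape[1] - 1)
    y1 = min(y0 + 1, texture.shape[0] - 1)
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

tex = np.arange(16.0).reshape(4, 4)     # toy 4x4 texture
print(bilinear_sample(tex, 1.25, 2.5))  # 11.25: a blend of texels (1,2), (2,2), (1,3), (2,3)
```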
mode_13h - Saturday, August 21, 2021 - link
> The demand for 'integer scaling' in drivers is a demonstration
> that this misapprehension is widespread
I don't think there's any misapprehension. Integer scaling is mostly for old games that run at very low, fixed resolutions. It's just easier to look at big, chunky pixels than the blurry mess you get from using even a high-quality conventional scaler, at such magnification.
However, some people have gone one better. There are both heuristic-based and neural-based techniques in use, that are far superior to simple integer scaling of old games.
> AI-based scalers are not mincing up your precious pixels.
Um, you mean @Kurosaki's precious pixels. I always thought the idea of DLSS had potential, even during the era of its problematic 1.0 version.
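For anyone following along, the integer scaling being debated here is plain pixel replication; a minimal illustrative sketch with a toy single-channel image (not any driver's actual implementation):
```python
import numpy as np

def integer_scale(image, factor):
    """Nearest-neighbour upscale by an integer factor: each source pixel becomes a factor x factor block."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

frame = np.array([[1, 2],
                  [3, 4]])
print(integer_scale(frame, 3))  # 6x6 output made of chunky 3x3 blocks
```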
edzieba - Sunday, August 22, 2021 - link
>Integer scaling is mostly for old games that run at very low, fixed resolutions.
For those games, where display on a CRT was the design intent, nearest-neighbour scaling is simply making the image worse than it needs to be for no good reason. Given that every game made in that period (anachronistic 'retro games' aside) intentionally took advantage of the intra-line, inter-line, and temporal blending effects of the CRTs they were viewed on, it is preferable to use one of the many CRT-emulation scalers available (not just 'scanline emulation', that's another anachronism that misses the point through misunderstanding the problem) rather than generating massive square 'pixels' that were intended by nobody who was actually creating them.
>Um, you mean @Kurosaki's precious pixels. I always thought the idea of DLSS had potential, even during the era of its problematic 1.0 version.
Yep, hit reply in the wrong nested comment
mode_13h - Monday, August 23, 2021 - link
> where display on a CRT was the design intent
Well, more like "design reality", rather than some kind of self-imposed artistic limitation. And don't forget that the CRTs of that era were also smaller than the displays we use today.
I'm no retro gamer, however. I'm just making the case as I understand it. I'm just not so quick to write off what some enthusiasts say they prefer.
Oxford Guy - Tuesday, August 24, 2021 - link
‘Given that every game made in that period (anachronistic 'retro games' aside) intentionally took advantage of the intra-line, inter-line, and temporal blending effects of the CRTs they were viewed on, it is preferable to use one of the many CRT-emulation scalers available (not just 'scanline emulation, that's another anachronism that misses the point through misunderstanding’
Yes. NES games, for instance, look vastly better for certain colors due to the influence of composite cabling — vs. the massively oversaturated colors outputted by most emulators. Browns are brown, not red. Etc.
The blurring makes things look more realistic, too.
Some software CRT emulators go too far with the blurring and some go too far with tube roundness distortion (considering the rather low distortion of a quality Trinitron and that even an ancient Zenith was available as a flat CRT). CRT quality varied and so did the calibration. I used to get guff for fixing oversaturated color from a man who was clearly partially colorblind. Neon TV colors aren’t just a symptom of the showroom.
The best CRT emulators should be adjustable.
Kurosaki - Sunday, August 22, 2021 - link
DLSS 2.x and the like produce a blurry, choppy and artifacted image; we will never escape that. All for the purpose of getting to say "it runs in 4k" except it doesn't. It runs in a lower res and is upscaled to a higher res; that's where we find the performance boost, a boost they could have managed to find via more RT units, for example. AI upscaling is hogwash. But it seems to stick, just because you can claim it's an improving fairy-dust mumbo-jumbo thingy that makes the sausages not only prettier, but also run faster. AMAZING!
mode_13h - Monday, August 23, 2021 - link
> All for the purpose of getting to say "it runs in 4k" except it doesn't.
> It runs in a lower res and is upscaled to a higher res
The way I look at it is like this: do you want 1440p on a 1440p display, or 1440p being nicely upscaled on 4k, so that you can gain the benefits of having a 4k monitor for things like web browsing and productivity tasks? And if you're going to buy a 4k monitor no matter what, do you want naive upscaling or something higher quality?
I recently upgraded from 1440p to 4k myself, and I was surprised at the sheer amount of screen real estate I now have. Just for office and productivity purposes, it's like night and day. Since this was my work PC, I haven't run any games on it.
In an ideal world, everyone would just buy a couple of RTX 3090 cards and could use native rendering. However, that's far from the reality. So the question is how to find the best compromise.
whatthe123 - Friday, August 20, 2021 - link
These techniques are already superior to TAA for smoothing out aliasing without destroying the entire image and hallucinating some sharpness. What exactly is the problem? There's nothing indicating that replacing the die space with shaders would give much of an uplift, especially with how memory-limited modern GPUs are. AMD didn't ship RDNA2 with tensor cores and built it on a superior fab process, yet they perform similarly to Nvidia's Ampere, which "wastes" space on tensor cores. So the chips are less versatile and not even appreciably faster; what's the point?
Silver5urfer - Saturday, August 21, 2021 - link
The only advantage of DLSS is that it's better than the TAA disaster BS, which is muddied garbage. It can be seen in RDR2, DLSS vs. native. Only MSAA is superior, but sadly it taxes GPUs to a halt; only powerful GPUs can handle that level of AA load. That said, yes, it's unfortunate to see DLSS, FSR and this new XeSS BS take more space on the silicon.
Intel is in marketing-slides hyper mode, just like CDPR. Especially 4K image-quality upscaling with AI, lol, like what. Intel doesn't even do good GPU technology, and it has been like that for a decade+, yet magically they conjure Raja and his hype, create a GPU (talking about that Ponte Vecchio) and beat Nvidia's top A100 and AMD's MI compute cards.
I'm not going to believe a single thing said by Intel these past couple of days until the 3rd-party reviews come. Also, this BS GPU has to maintain proper FPS and stability on the past 2 decades of computer games AND emulators. Intel doing that level of solid development? Nope. Not a chance.
flyingpants265 - Wednesday, September 15, 2021 - link
The whole point is better performance per die space. If you had a 1440p screen and couldn't run the game, until now your only option was to run 1080p or at 85% resolution or something like that. Now you can run at "1440p" with supposedly twice the FPS, and still at better quality than your previous options.
In this sense it actually does improve image quality, and seems to be a pro-midrange user feature. The visible image quality per clock cycle is increased.
mode_13h - Thursday, August 19, 2021 - link
> Ian quickly picked up on, the clarity of the “ventilation” text in the above nearly
> rivals the native 4K renderer, making it massively clearer than the illegible mess
> on the original 1080p frame.
Yup. My eyes went right to that, as well.
> This is solid evidence that as part of XeSS Intel is also doing something outside
> the scope of image upscaling to improve texture clarity, possibly by enforcing a
> negative LOD bias on the game engine.
LOL wut? No. Pay close attention to the flow diagram in slide 92. See on the left side, where it says "Jitter"? That's the key. Super-resolution techniques *require* camera motion. By shifting the camera subtly, you can collect sub-pixel resolution, which is then usually projected into a high-resolution grid and interpolated. Their AI network can assist with the interpolation, to improve handling of tricky corner-cases.
Where this breaks down is if you have rapidly changing details that vary from one frame to the next, especially in ways that TAA-style motion vectors can't model. The frame-grab included is basically a best-case scenario, since it involves fixed geometry and presumably fixed lights.
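A heavily simplified sketch of the accumulate-and-reject idea described above: jittered low-res samples are scattered into a high-res history buffer, history is reprojected by motion vectors each frame, and history that disagrees too much with fresh samples (the case where it "breaks down") is discarded. Every name and the rejection rule here are assumptions for illustration, not Intel's actual XeSS algorithm:
```python
import numpy as np

def accumulate(history, hist_valid, new_samples, sample_coords, motion, reject_threshold=0.1):
    """One temporal-upscaling step on a single-channel high-res history buffer.

    history:       (H, W) accumulated high-res image from previous frames
    hist_valid:    (H, W) bool, whether each history pixel holds trusted data
    new_samples:   (N,) this frame's jittered low-res sample values
    sample_coords: (N, 2) integer (row, col) positions of those samples on the high-res grid
    motion:        (H, W, 2) per-pixel (row, col) offsets pointing back to last frame
    """
    h, w = history.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Reproject: fetch where each high-res pixel was last frame (nearest-neighbour for brevity).
    src_r = np.clip(rows + motion[..., 0].round().astype(int), 0, h - 1)
    src_c = np.clip(cols + motion[..., 1].round().astype(int), 0, w - 1)
    reproj = history[src_r, src_c]
    valid = hist_valid[src_r, src_c]

    out, out_valid = reproj.copy(), valid.copy()
    r, c = sample_coords[:, 0], sample_coords[:, 1]
    # Reject history that disagrees too much with this frame's fresh samples.
    disagree = np.abs(reproj[r, c] - new_samples) > reject_threshold
    keep = valid[r, c] & ~disagree
    out[r, c] = np.where(keep, 0.9 * reproj[r, c] + 0.1 * new_samples, new_samples)
    out_valid[r, c] = True
    return out, out_valid
```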
SeannyB - Friday, August 20, 2021 - link
Maybe what was meant was a negative mipmap bias (which Nvidia calls "LOD bias" in their control panel IIRC).
mode_13h - Saturday, August 21, 2021 - link
Okay, I take the point. Because even supersampling a low-res texture is still going to result in a blur. So, you'd have to force the render path to use a higher-res version of the texture than it normally would, for that display resolution at that distance.
Of course, we're presuming that it's even MIP-mapped. It's a little hard to tell, since they seem not to have used nearest-neighbor scaling of the 1080p version.
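For context on the LOD-bias idea: mip selection is roughly the log2 of the screen-space texel footprint plus a bias, so a negative bias forces a sharper (higher-resolution) mip. A rough sketch of that selection rule, simplified from what GPUs actually do:
```python
import math

def mip_level(texels_per_pixel, lod_bias=0.0, num_levels=12):
    """Pick a mip level from the screen-space texture footprint; a negative bias sharpens."""
    lod = math.log2(max(texels_per_pixel, 1e-6)) + lod_bias
    return min(max(lod, 0.0), num_levels - 1)

print(mip_level(4.0))                 # 2.0: footprint of 4 texels per pixel -> quarter-res mip
print(mip_level(4.0, lod_bias=-1.0))  # 1.0: one level sharper with a negative bias
```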
Zoolook - Monday, August 23, 2021 - link
Strange that nobody mentions that the 1080p version looks like 640p. I don't think I've ever seen 1080p of that low a quality; the textures must be of extraordinarily low res.
mode_13h - Tuesday, August 24, 2021 - link
> nobody mentions that the 1080p version looks like 640p
We're not seeing the whole frame. They cut a portion of the 4k, which is displayed at native resolution, and then the 1080p is scaled up (i.e. with conventional, blurry scaling).
Oxford Guy - Tuesday, August 24, 2021 - link
A lot of talk about video game tech from someone who chided me for posting about PC gaming here, via the claim that this isn’t the forum for it.
mode_13h - Wednesday, August 25, 2021 - link
I don't know to what you're referring, but it sounds like a misunderstanding. I complain about off-topic posts. Since this thread is about Intel's game-oriented upscaling technology, discussion of upscaling and games is obviously on-topic.
mode_13h - Wednesday, August 25, 2021 - link
BTW, it's not even so much off-topic posts that concern me, since discussions often meander in various directions. My chief concern, in this regard, is about posts that seek to derail the discussion, seemingly out of nowhere.
Oxford Guy - Wednesday, August 25, 2021 - link
‘I don't know to what you're referring, but it sounds like a misunderstanding. I complain about off-topic posts.’
Not a misunderstanding.
You referred to me in the third person to preen by claiming I’m wasting my words here since the place doesn’t attract many PC gamers.
mode_13h - Thursday, August 26, 2021 - link
Sorry, I still don't know what you're talking about.
I'm not against gaming-related posts. I'm against posts that try to hijack the thread by someone trying to air unrelated grievances. That, and conspiracy theories. And spam.
PaulHoule - Friday, August 20, 2021 - link
I am all for spatial filtering, but I'll let the birds take temporal filtering.
I know they tell people to save "game mode" on a TV for twitchy multiplayer games but I can feel even a frame of latency in a game like "Sword Art Online: Fatal Bullet" and it drains out the fun for me. I couldn't win at all in League of Legends using the monitor built into my Alienware laptop (somehow I would get hit by the other player before I had a chance to do anything) but when I hooked up an external monitor that was 30ms faster I could participate meaningfully.
mode_13h - Saturday, August 21, 2021 - link
> I can feel even a frame of latency
You're misunderstanding TAA, then. It uses analytical motion vectors (in other words, precisely-computed, rather than the approximations produced by video compression engines) to map backwards to the same texture position, in multiple previous frames, and integrates these. It doesn't add a frame of latency. Try it.
DLSS 2.0 is something of a glorified postprocessing step applied to TAA. Similarly, I'm sure Intel's XeSS isn't going to add any frames of latency. It's not necessary and they're smart enough to understand that's a deal-breaker for most gamers.
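The no-added-latency point is easy to see in sketch form: the freshly rendered frame is blended with reprojected history and presented in the same iteration, so nothing ever waits for a future frame. A toy single-channel loop, with the render/reproject stubs as placeholder assumptions rather than any engine's API:
```python
import numpy as np

def taa_loop(render_frame, reproject, num_frames, alpha=0.1):
    """Toy TAA loop: exponential blend of the current frame with motion-reprojected history."""
    history = None
    for i in range(num_frames):
        current = render_frame(i)            # this frame's (H, W) image
        if history is None:
            output = current
        else:
            warped = reproject(history, i)   # warp history by this frame's motion vectors
            output = (1.0 - alpha) * warped + alpha * current
        history = output
        yield output  # presented immediately: only past frames are reused, none are waited on

# Hypothetical usage with trivial stubs (static scene, so reprojection is the identity):
for frame in taa_loop(render_frame=lambda i: np.full((4, 4), float(i)),
                      reproject=lambda hist, i: hist,
                      num_frames=3):
    print(frame[0, 0])  # 0.0, 0.1, ~0.29
```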
RSAUser - Saturday, September 18, 2021 - link
I have not heard of a ~50ms latency monitor since probably 2005-2010 at the latest, and that would have been a bad monitor.
Most older 60Hz monitors (post-2010, pre-2018 or so) have about 10-16ms latency.
Personally I do notice that: anything above ~6-7ms feels odd to me, but that's mostly because I am used to 120Hz on most monitors, and that's with proper FPS or third-person games, not something top-down like LoL. The display techs you're talking about add a millisecond of latency or so; you're not going to have an issue.
GreenReaper - Friday, August 20, 2021 - link
I'm just waiting for them to go the other way and whip up ClearFrag for my low-resolution monitor that's hung around for a decade. Can't justify buying anything new until it breaks! (Even then, it'd probably be better for the environment to try fixing it.)
mode_13h - Saturday, August 21, 2021 - link
If your old monitor uses a CFL backlight, then replacing it with an LED-lit model should at least save energy.
I also dislike replacing stuff that still works. However, if a higher-res screen would improve your productivity, then you've got to consider the value of that in terms of the resources that *you* consume.
Oxford Guy - Tuesday, August 24, 2021 - link
Good thing Intel chose TSMC for its enthusiast cards, or there could have been an opportunity to reduce pricing by avoiding TSMC's output constriction.
phoenix_rizzen - Monday, August 30, 2021 - link
So, is it pronounced "Intel Excess"? :)
mode_13h - Tuesday, August 31, 2021 - link
Yeah, I had pretty much the same reaction. Whether they like it or not, many people are probably going to pronounce it like that.
It just underscores how bad Intel is at naming. Xe was a dumb name to start with. If someone says "zee", the listener wouldn't guess the spelling. If they say "xenon", it's too easily confused with Xeon. And nobody says "X to the power of e".
If I were Intel's CEO for a day, I'd pretty much fire the whole marketing division.
RSAUser - Saturday, September 18, 2021 - link
In my head it's just "zess"