Very curious about these "new low-precision modes". Does that mean FP16? INT8? INT4? How will the throughput compare to the competition in those modes? Also, despite the modest/lack of increase in FP32 perf, DP/FP64 perf is up massively: from 1/16 of SP rate in Vega 10 to 1/2 in Vega 20. Which means unless AMD gimps double precision compared to the MI50 (something they're not known to do), this card will be good for a whopping ~6.5 TFLOPS at FP64, which if you need it is hecking massive.
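Napkin math on that FP64 figure, assuming the rumored 60 CU / 3840 stream processor config, a ~1.7 GHz sustained clock, and a 1/2-rate FP64 path like the MI50 (all of those are assumptions until AMD confirms the specs):

```python
# Back-of-envelope peak throughput, assuming 60 CUs (3840 stream processors)
# at ~1.7 GHz sustained and a 1/2-rate FP64 path -- both assumptions.
shaders = 3840
clock_hz = 1.7e9
flops_per_shader_per_clock = 2      # one fused multiply-add = 2 FLOPs

fp32_tflops = shaders * clock_hz * flops_per_shader_per_clock / 1e12
fp64_tflops = fp32_tflops / 2       # 1/2-rate DP, as on the MI50
print(f"peak FP32 ~{fp32_tflops:.1f} TFLOPS, FP64 ~{fp64_tflops:.1f} TFLOPS")
# -> roughly 13 TFLOPS FP32 and ~6.5 TFLOPS FP64
```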
But really though, I am primarily curious just what the low-precision modes implemented on the shader cores will look like, and how they will compare to the competition's dedicated hardware for the same thing (tensor cores).
Yeah, it should be interesting to see - it looks like AMD is taking a pretty different ML approach (though part of that may just be a legacy effect: GCN turned out to be really pretty good at it without anyone having planned it that way).
Are there common tasks where memory bandwidth is an issue? If so, I can see the Radeon VII being great, since it's got more than double the bandwidth of an RTX 2080 (not to mention twice the memory on board).
I couldn't name specific applications that are memory-bandwidth-bound (cinematographer/content creator and gamer here; as an aside, those DaVinci Resolve numbers are quite enticing) but they're out there in the HPC sector. Also worth noting is that the big 16GB VRAM pool could do wonders for latency-bound tasks, where the ability to fit a larger dataset into VRAM without having to go back and forth to system memory can produce nonlinear performance increases.
You are not wrong there, this is what I've been praying for! 16GB means my entire data will fit into GPU memory, which will allow me to come significantly closer to the theoretical maximum FLOPS and use double precision where needed. (mostly just in accumulating results, but still that happens quite often along the way.)
And where I couldn't convince my boss to shell out for a $3000 Titan V or a $9000 Radeon Instinct or Tesla card (small company and product is not shipping yet), I'm certain I can get him to pay $700 for this card!
That's exactly what I was wondering. Being able to move 16GB back and forth at 1TB/s without leaving the card has to offer performance advantages to some people. I've done some GPU compute work but really nothing that major, and it's been small datasets, so I've just been happy that it's way faster than a CPU could be.
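To put rough numbers on it (nominal peak rates, not anything measured), a single pass over a 16GB working set out of on-card HBM2 versus re-fetching it over PCIe 3.0 x16 looks something like:

```python
# Rough comparison: streaming a 16 GB working set once from on-card HBM2
# vs. pulling it over PCIe 3.0 x16 (numbers are nominal peaks, not measured).
dataset_gb = 16
hbm2_gbps = 1024     # ~1 TB/s on-card
pcie3_x16_gbps = 16  # ~16 GB/s theoretical host-to-device

print(f"on-card pass:  {dataset_gb / hbm2_gbps * 1000:.0f} ms")
print(f"over PCIe 3.0: {dataset_gb / pcie3_x16_gbps:.1f} s")
# ~16 ms vs ~1 s per full pass -- which is why fitting the whole
# dataset in VRAM can give far-more-than-linear speedups.
```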
Yeah, that's exactly it. The 16GB of much higher-bandwidth memory will be of more interest to ML researchers, massive CAD modelling, video rendering, etc., more so than is needed for games as of yet. Seems like another Titan or Frontier play, more for professionals wanting a cheaper pro card than for gamers.
Wonder if the iMac Pro will be updated with it soon. Never know with them now.
They don't have anything equivalent to Nvidia's tensor cores, or else the performance numbers touted in the earlier Vega 7nm (Radeon Instinct) launch announcements would have reflected it. It's not surprising, either, since the tensor cores probably caught them off-guard and modern GPU design cycles are too long for us to have seen their response yet.
The Radeon Instinct MI60 supports both INT8 and INT4 (apart from the standard FP16). According to the article I linked below, it supports 118 TOPS of INT4 and 59 TOPS of INT8. The MI50 is a bit slower. I have no idea if these features will be preserved in the Radeon VII, but assuming they are, I don't think there is a functional difference between them and Nvidia's tensor cores, so I don't see why they couldn't be used for stuff like DLSS and AI denoising. Volta can also do ray tracing without ray-tracing cores, though it is a (beastly) exception.
The only difference might be that they cannot be used concurrently with the FP32 shaders, since they are apparently not dedicated units. I mean they cannot *all* be used alongside the shaders (since they appear to stem from them), but a subset of them certainly could. However, without software support from AMD (at least for DLSS or some analogue of it), I don't see these modes being used in gaming at all.
The "new low-precision" modes is just a couple new dot product instructions (2,4 and 8 bit IIRC), that's all. DP perf could well be gimped, AMD have usually done this in the past too. But it could still be more than 1/16 (Hawaii chip was 1/2 natively, and consumer cards hat 1/8). 1/8 rate would actually be pretty decent, and blow the RTX 2080Ti out of the water there if you need that for cheap (as nvidia on recent consumer chips (since maxwell) has 1/32 rate).
Yeah, but they don't make GPU compute cards that would justify product segmentation. Gen 11 won't change that. Once Intel Xe is out, though... it depends on how much encroachment they're worried about from integrated offerings.
Going from 24 -> 64 EU in their mainstream desktop chips, and you say they're going to hold at 1/2 fp64 compute? I'm not saying it won't happen, but I don't think we can take it as a given. They could cut back to 1/4th and still deliver more fp64 performance than Gen10.
Navi in an APU would be nice, but Renoir is probably 2020. :-/ On the plus side, they should have AV1 support sorted by then, and hopefully bugs ironed out.
I don't think any of us were expecting to see AMD come out swinging at the RTX 2080 in 2019, let alone in the first half of 2019, so that's a plus and bodes well for people wanting a high-end GPU at something close to MSRP. It's interesting that the branding doesn't readily allow for variations (like 590/580/570, or 64/56), which makes me think AMD may have rushed this to market as a stop-gap product (possibly because yields on the Radeon Instinct chips were better than expected, and because most of us assumed this was going to be another lost year for Radeon).
Having said that, I'm surprised that AMD's doubling down on HBM2 - I wonder if prices have come down more than I've heard, since I'd thought that the high cost of that memory (both the chips themselves and the expensive interposer and board design the tech requires) has limited AMD's ability to lower the cost of Vega boards to the point where they're $/perf competitive with Nvidia.
Either way, it puts AMD in the unique situation of offering three generations of products (Vega 20, Vega 10 and Polaris) on three different processes (7nm for the Radeon VII, 14nm for Vega and Polaris, and 12nm for Polaris) to target different parts of the dGPU market.
I hadn't thought about how much more expensive GDDR6 is compared to 5. When AMD didn't release any HBM-based cards in the 400/500 series, I'd thought the tech might be going the way of RAMBUS; we'll see how the Radeon VII performs but it now seems like it may actually offer an interesting way of increasing performance.
I agree that this is not what was expected. Rumors were pointing to a Navi-based GTX 1080/RTX 2060 competitor at a similar price point to the 2060, not a GTX 1080 Ti/RTX 2080 competitor at the same price point as the 2080. Which leaves the question: what is AMD going to do in the midrange space? Right now they have nothing to go against the 2060. AMD painted themselves into a corner in the midrange with the RX 590 - too hot, too expensive, too slow. At $149-$199 and tweaked to bring the power down to 175W or less, Polaris might still make sense in 2019, but as-is they have nothing unless they rebrand Vega 56 and drop it to $249-$299, which isn't going to happen.
Totally agree- they're in a weird space at the moment. The "Radeon VII" branding with no other suffix makes me think that this is the only chip that AMD will release on VIInm for a while, so other releases are likely to be on the 14/12nm.
The 590 move is really confusing to me - why bother porting a design from 2015 (since the 580 is just a 480 that's factory overclocked) to a slightly smaller process, particularly when you don't gain much in headroom (since if Intel were naming things, GloFo's 12nm is more like a '14nm++')? The 580 was the best deal below $200 until the 2060 dropped, pushing 1060 6GB prices down (though it's still not a *bad* deal). Maybe an OEM was willing to do a bulk purchase or something.
There are two moves I can see. One, a Radeon VII with less memory - following Nvidia's lead with, say, the 1060 3/6GB. The other, they do for Vega what they did with the 590: shrink it to 12nm (and possibly pair it with GDDR, though it looks like Vega performance is VERY bandwidth dependent), or possibly pair it with less HBM2. They could release it under the Vega name and use a number that indicated performance instead of CU count (so, say, a 56-CU part clocked faster after the shrink might be called Vega 60 or something).
Still, any way you cut it, it's way more confusing than Nvidia, which is pushing Turing and letting Pascal stocks dry up.
You have to consider when the RX 590 would've originated. That would be back when cryptomining was still hot.
Given what they've said about their 12 nm Ryzen die, it sounds like the process transition to 12 nm was very low effort. So, my guess is that it was seen as low-hanging fruit that made them more competitive vs. GTX 1060 (6 GB). Perhaps they were (typically) optimistic about the power savings or clock headroom of the 12 nm process. If it were a bit more efficient, then I think it would look much more compelling.
What is NVIDIA doing in the midrange space? Looking at prices, the 2060 is not midrange. And the 2060 is not at 1080 performance; it's more like a 1070 Ti. People are also saying the 2070 already has issues with ray tracing performance being useful, so the 2060 completely negates the RTX part of the name anyway.
AMD is in a really weird place now, having just re-released shrunk versions of its architectures from 2015 (the RX 590, a 12nm shrink of Polaris, which debuted in the RX 480 and was overclocked to become the 580) and 2017 (Vega, now on 7nm), but not really a new unified architecture to target high/mid (low?).
My bet is that Radeon VII (and the 590) are stopgap products until Navi is ready. I think most of us were surprised about the VII, which may mean that Radeon Instinct MI yields were better than expected, or demand was lower than expected, or just that AMD realized they could make money on them at $700.
"which may mean that Radeon Instinct MI yields were better than expected"
By dropping CUs from 64 to 60, you allow for up to 4 dead CUs on a 64-CU chip, effectively increasing yields. The same binning probably also allowed them to increase frequency, since they can pick the 60-of-64-CU dies that can sustain the higher clocks.
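A toy Poisson defect model shows how big a difference tolerating a few dead CUs makes; the defect density below is a made-up number purely for illustration:

```python
import math

# Toy Poisson yield model: chance a ~331 mm^2 die has at most k defects,
# assuming each defect kills one CU. The defect density is a made-up
# number purely for illustration.
die_area_cm2 = 3.31
defects_per_cm2 = 0.5
lam = die_area_cm2 * defects_per_cm2     # expected defects per die

def yield_with_tolerance(k):
    # P(defects <= k), i.e. dies usable if up to k CUs can be fused off
    return sum(math.exp(-lam) * lam**n / math.factorial(n) for n in range(k + 1))

print(f"perfect 64-CU dies: {yield_with_tolerance(0):.0%}")
print(f"usable as 60-CU SKU (<=4 dead): {yield_with_tolerance(4):.0%}")
```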
I'd venture to guess third parties will offer lower-memory (8 or 12 GB) card options at a lower price. Either way, RTX just isn't worth the cost of admission.
My guess is that AMD were never expecting to launch this as a gaming GPU until they got wind of Nvidia's cash grab with the RTX 2080. AMD thinks, "if Nvidia can charge $700 for 1080ti performance in 2019, then why can't we!?" So they rush a mainstream release. If Nvidia launched the 2080 at a more reasonable $550 then I'm willing to bet this Vega VII card wouldn't even have been contemplated.
Re: the 16GB of HBM2 - the card has (and presumably needs) a 4096-bit memory bus. Can AMD actually keep that with 8GB of RAM, or do they need the full 16GB?
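For reference, the bandwidth math: the 4096-bit figure is just four 1024-bit HBM2 stacks, and the published 1 TB/s falls out of the per-pin rate:

```python
# HBM2 bandwidth is bus width x per-pin data rate. Radeon VII's ~1 TB/s
# comes from four 1024-bit stacks at ~2.0 Gb/s per pin.
stacks = 4
bits_per_stack = 1024
gbps_per_pin = 2.0

bandwidth_gbs = stacks * bits_per_stack * gbps_per_pin / 8
print(f"{bandwidth_gbs:.0f} GB/s")   # ~1024 GB/s

# An 8 GB variant would need four lower-capacity stacks to keep the full
# 4096-bit bus; dropping to two stacks would also halve the bandwidth.
```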
AMD needed HBM2 for their compute-intensive applications, and Vega was first and foremost designed for compute. Which means it is a little too late to fit GDDR6 into Vega.
HBM2 cost has also come down a lot, maybe by more than 50%, but it is still a lot more expensive than GDDR6, and even GDDR5.
I'd usually say that's true, except so much of AMD's graphics business is semi-custom, and they've shown an ability to mix Vega CUs with standard DDR (in the Ryzen APUs), and even lop off Vega's HBM2 controller and add it to Polaris (for the GPU it made for Intel's Kaby Lake-G processor).
Having said all that, I agree its late in the cycle and there isn't an obvious reason AMD would do this - though it wouldn't surprise me if Navi included both HBM2 and GDDR parts at different points of the product stack (though it also wouldn't surprise me if HBM became Radeon Instinct-only and they stuck with GDDR for all the consumer parts, as was the case with Polaris).
I had thought about that; reject chips with only a small portion disabled. If they had some with, say, 56 or 52 CUs enabled, and HBM2 somehow miraculously became cheaper to manufacture, they could bring out a Vega 56 replacement which would equal or beat the 2070... just a thought.
My guess is that AMD saw the raster performance of the 2080 wasn't so spectacular and realized that their 7nm Vega could get reasonably close at around the same price. I suspect that AMD is still losing money on these even at $699 due to the high cost of HBM2. I also suspect that either Navi isn't quite as ready as they expected or they realized they needed to allocate more resources towards console chips. Either way, it's clearly a stop-gap, but if performance lives up to the claims, I would say it's not a very bad one. Had they stuck to the original plan, I imagine we would have had to wait significantly longer for a new card but that it would probably be a Navi-based one.
$699? I thought the AMD fanboys said it'd be $399 and they'd never rip them off, unlike the other guys. What happened to the comments about competition bringing down prices? Trolls haven't woken up yet?
AMD fanatics said it would be Navi at $399 with 1080/2070-level performance (not a fangirl here, but I spend too much time reading tech rumors). This is a bit of a curveball.
I can't see it selling at this price; it's just too high. It doesn't matter how good it is - AMD needs something far cheaper to gain both market share and mindshare. Tech sites will also bitch about power/heat/noise even though most gamers really don't care at this level. People simply don't buy AMD often enough, even when it makes perfect sense to do so. AMD isn't going to get back into the GPU game by targeting what now passes for high end these days. Roughly the same as a 2080 at about the same cost won't work. It needs to be waaaay cheaper, enough to make tech sites yell "yes, buy this thing", and with a gap so big that NVIDIA can't simply lower its prices to counter the difference without making it very clear their own products never had to be so expensive in the first place. Priced like this, NVIDIA can respond merely by making the 2080 slightly cheaper, and it doesn't have to be by much to have the desired effect. Worse, there are people who openly say they wanted AMD to release something decent just so they could get a cheaper NVIDIA card (these are the NPCs who help maintain market stagnation; if people don't buy AMD because of such a deluded mindset, then AMD just isn't going to bother with this market at all - why should they?).
Paul ("not an apple fan" on youtube) has talked about these issues at length, worth checking out.
A product like this needed to be $500 or below. Until the NPC zoids stop defaulting to NVIDIA whenever things are more or less equal, this situation won't change, and NVIDIA can continue to screw the consumer with overpriced tech passed off as advances which are just the table scraps from the compute market (what we're getting now are not gaming cards, they're not designed for *gaming*, as The Good Old Gamer explained so well some months ago).
I suppose "AMD fanatics" is just your shorthand for "folks trying to predict what graphics cards AMD would be able to release in 2019", and somehow not meant to be offensive.
AMD and Nvidia have really different architectures - we'll see if the GCN successor (next year?) starts looking more like Nvidia but I doubt it.
AMD DOES support GPU-enhanced ray tracing on Vulkan, and there's nothing "magical" about the cores in Nvidia's RTX series that enable it - they're just optimized. We've seen that GCN is actually surprisingly good at doing a lot of compute tasks, and so it wouldn't surprise me if AMD could support hardware DirectX raytracing on existing GPUs in the future.
We *still* don't know how DLSS works, but it seems like it's an AI algorithm that allows most of a scene to be rendered below the advertised resolution while upscaling select parts (and applying TAA on others?). So again, it's not "nothing", but it's software that Nvidia may have optimized parts of its hardware around. There is a chance AMD could announce something similar, again on existing hardware, though I'd kind of doubt it.
DLSS uses deep learning to infer what the upsampled output would look like. While I'm not aware of published information about the specific model + pre/post processing they're using, it's less magical than you make it sound.
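Whatever the exact model, the economics come down to shading fewer pixels and inferring the rest. Rough numbers, assuming an internal render resolution of around 1440p for a 4K output (that internal resolution is an assumption, not a published figure):

```python
# Rough pixel budget if a frame is rendered at ~1440p and upscaled to 4K
# (the exact internal resolution DLSS uses is an assumption here).
rendered = 2560 * 1440
output   = 3840 * 2160
print(f"shaded pixels: {rendered / output:.0%} of the output frame")
# ~44% -- the rest is inferred, which is where the speedup comes from
```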
Aside from adding integer support to their Tensor cores (which they were probably going to do anyhow), I think DLSS is more a case of them developing new software tech to utilize existing hardware engines (i.e. their Tensor cores, which they introduced in Volta).
I was surprised at how grud awful DLSS actually looks. GN's review of FFXV showed absolutely dreadful texture blurring, distance popping and even aliasing artefacts in some cases - the very thing AA tech is supposed to get rid of. It did help with hair rendering, though the effect still looked kinda dumb with the hair spikes clipping through the character's collar (gotta love that busted 4th wall).
Maybe an edge case here, but I would. I do content creation as well as gaming; if a Radeon 7 can hit 2080 performance in DX11 and Vulkan /and/ render my videos faster, I'd choose the Radeon. I downgraded from Win10 back to Win 8.1 (flame away, I am unbothered lol) and have no plans of going back up for at least a year or two, so Win10-only DXR does nothing for me.
That will actually be interesting to see - how much support these new cards can get on a system that is a year past end of mainstream support already... in the absolute worst case you might not be able to run a new card on it at all.
Yeah :-/ I've a GTX 1070 in my gaming system right now (bought second-hand autumn 2018) and there's a distinct possibility that my only reasonable upgrade option will be a used 1080 Ti. My notion is to stick with the 1070 till this winter or so, when used 1080 Ti prices hopefully hit ~$300, and then stick with that until DXR/Vulkan ray tracing becomes mainstream; by that time (late 2020?) I should be able to snag a used 2080 Ti or whatever midrange cards are out at the time, whichever has better perf/$$.
Or I might just keep using Windows 8.1 forever like people are doing with Win7. I'll get security updates till 2023 IIRC, and I don't care about bugfixes or feature upgrades, as the OS has been completely stable for me for years now.
Hey KateH, I signed up just to reply to you :) I completely agree with you. As a 1070 owner that bought it last year for $300, I was thinking the same thing. For us 1070 owners the only reasonable upgrade option is the 1080 Ti, or if you're feeling fancy, for twice the price, an RTX 2080 or Radeon 7. I wouldn't go for a Titan RTX or 2080 Ti even if I had the money for it; it's just not worth it, I'd rather donate to charity... And a 1080 Ti should last for at least a couple more years, since it would be around RTX 3070/3060 or RTX 4050 (at worst, judging by current trends) performance. Which means it's going to be equal to a midrange GPU somewhere around 2021-2024, with performance between 1/2 and 1/3 of an RTX 4080 Ti. But by that time there's a possibility that new fancy tech will arrive and RTX will mature... so an RTX 4080/4070 would be a worthwhile upgrade, for twice the performance of a 1080 Ti plus the new tech.
Can I ask you why you switched to Win 8.1 though? I don't get it...
do you mean why i downgraded at all, or why i chose to downgrade to Win 8.1 specifically?
I got tired of the feeling that Windows 10 is perpetually in beta, or at least perpetually a "pre-Service Pack 1" version of Windows, from release up to 1703 when I gave up. Too many difficulties getting it to install on a wide variety of hardware. Too many driver issues (despite the fantastically wide range of drivers available automatically via Windows Update), too many instances of bizarrely high CPU usage at idle, BSODs that I couldn't troubleshoot, etc., again across a lot of hardware. And then if I got an install that worked and performed as expected, I had to deal with the weekly barrage of updates, the litany of crapware that MS bundles (and reinstalls without asking!), application incompatibility, the utterly titanic install footprint (that just grows and grows with each update) and so on. It's a shame, because I for the most part love the Win10 UX and appreciate its fast boot times and responsible RAM usage.
I went back to Windows 8.1 specifically because I unabashedly actually like it. It has many of the UX improvements that 10 has. I like the flat, clean GUI that feels responsive even on low-end hardware, a far cry from the bogged-down garishness of Aero on Vista and 7. I like the fast boot times- my gaming PC has a single 1TB HDD for OS and 2x 1TB RAID stripe for gamez and stripped-down 8.1 boots from pressing power button to ready at desktop in ~10 seconds, as fast as loaded-up 8.1 on my laptop's SSD (remember booting Win7 from spinning rust? Better bring a good book to read while you wait!). I like that Win 8.1 uses a tiny amount of memory- 800MB on my 8GB equipped gaming system, 200MB of which is for the Nvidia driver alone, and ~1GB on my 16GB laptop that I've loaded up with a silly amount of startup apps and utilities. The weird Start screen doesn't bother me- on my laptop I just hit the windows key and type the name of the application and on my gaming system I actually like the tile-based UI because it makes it feel more console-like.
Overall Win 8.1 just feels like a stable, mature OS, which is what I want. I know its features and I know they won't change. I download my weekly Windows Defender definitions and otherwise don't have to worry about updating other than once a month or so, and I have full control over said updates out of the box. 8.1 hasn't crashed for me in literally years, aside from overclocking follies. It's fast. It just works. To me (and I know this is an unpopular opinion) Windows 8.1 is the ultimate Windows: most of the efficiency and UX advantages of 10, with the feeling of being in control of one's own computer that 7 had.
Windows 8.1 was my favorite version of windows and I've used all of them (server versions included) since 3.0. It was the fastest and most stable version of windows ever (for desktops). Windows 10 is less stable, slower, and more bloated.
But I use Windows 10 on all my desktops now because it's the current version, works reasonably well, and seems like it'll be supported forever unless they eventually release a new version.
Content creation is actually one of the better use-cases for RTX. It doesn't require DXR - Nvidia has supported ray-tracing acceleration for content creation via a separate API (OptiX), which has been around for quite a while.
The joke being that actual pro users aren't buying cards like the 2080 Ti and RTX Titan; they don't have enough VRAM. I had a chat with a guy at a major movie company (can't say which, but bigger than ILM): they're already approaching the 24GB RAM of the Quadro M6000s in their Flame systems, so they're moving up to either the RTX 8000 (48GB RAM) or the 32GB GV100 (think that's the right name). This means hundreds of upgrades in offices all over the world - mega money. This is why modern GPUs are not GPUs at all; they are compute engines, and gamers get the table scraps. I long for a new genuine gamer card designed for games, but that doesn't look like happening any time soon.
In case you're wondering, the workload in Flame that's gobbling the VRAM is 8K editing, which of course in a pro studio is uncompressed. They have PCIe SSD arrays doing almost 10GB/sec to handle the data, and a SAN that can do 10GB/sec as well. The building I'm familiar with has its own 1MW power generator (last I asked, a 9000-core renderfarm).
Prosumers and solo pros on a budget might find RTX attractive, but all that Tensor/AI stuff won't matter a hoot unless apps are recoded to exploit it. Studios are already talking to Autodesk about the possibilities, so who knows. Don't expect anything anytime soon though, Flame still has weird stuff in it from 20 years ago.
In the past, "realtime raytracing" would've been my answer to any question involving a 9000 core render farm. Now, I'll have to think of something else...
If it actually performs the same as the RTX 2080, then it could be a good buy depending on what is cheaper in your country, especially because AMD is bundling 3 games and Nvidia only 1 (you have the choice between Anthem or Battlefield V, but you don't get both). So you can save some money there if you want those games, or sell them and get a bit of the price back and make it a better deal overall.
But I could see this being a good GPU for GPGPU compute or other non-gaming applications. 16 GB of HBM2 is enough space for huge data sets and is fast enough to keep fast algorithms fed with data. Double precision is also higher than on other consumer cards, and they support lower precision at higher speed (although only via special instructions). And $700 is not much if you compare it to professional cards. Also, I don't think AMD has restricted consumer cards from commercial use, like Nvidia did?
So where did the ~700 million extra transistors versus Vega 64 go, especially since it has fewer compute units? They increased the frequency and increased the ROP count (a lot), so it will look more like an Nvidia architecture: better graphics/gaming performance, less compute/mining power, which is fair enough for consumers.
That's pretty clear, but are you saying they're going to get much from significantly increasing bandwidth without touching ROPs? I assume they're pretty well balanced.
They increased shader throughput by increasing the clock speed. And as silverblue points out, the benchmarks suggest that vega was either starved for fillrate or overall bandwidth.
Note that not all rasterized pixels get shaded. Shading can often be deferred until after stencil and depth testing.
It doesn't - but then the Vega 64 doesn't have 64 discrete ROPs either; instead it has 16 render backends that are each capable of writing 4 independent pixels per clock. The Vega 20 architecture has simply doubled that count from 16 to 32, along with the number of memory controllers, and now has 4 x 32 = 128 "ROPs".
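If the doubled render backend count holds up, the peak fill rate math looks roughly like this (clocks are approximate boost figures):

```python
# Peak pixel fill rate is just ROPs x clock. Comparing the claimed 128-ROP
# figure against Vega 64's 64 ROPs (clocks are approximate boost values).
def fillrate_gpix(rops, clock_ghz):
    return rops * clock_ghz

print(f"Vega 64:    {fillrate_gpix(64, 1.55):.0f} Gpix/s")
print(f"Radeon VII: {fillrate_gpix(128, 1.80):.0f} Gpix/s  (if 128 ROPs holds up)")
```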
I suspect I misinterpreted your original; I think you're questioning the proof over whether the number of render back ends have been truly doubled or not. Nosing around the web has flung up this image:
That's still 4 discrete ROP units per graphics pipeline; can't quite find a similar image for Vega 10, so unable to tell if AMD have done something with the ROPS, e.g. double write rate from 4 pix/clk to 8.
$700? No thanks. If it only matches the 2080 in performance, then unless you need the VRAM I would get a 2080: there's the possibility of ray tracing in the future, and you can use both FreeSync and G-Sync.
By the time games support it well, Nvidia will be onto RTX 3000 and their 2000-series cards will lag badly.
Usually, when a new tech is introduced, there are big improvements made between the first and second generation implementations.
I wouldn't buy into ray tracing before there's compelling content that I know will run on the hardware at satisfactory speeds and resolutions. Anything else is just a gamble.
this, yes. Reminds me of when Geforce 8 came out with DirectX 10... that was an even bigger performance uplift compared to the previous generation than Geforce RTX and it still wasn't enough to really do DX10 games well. Same with Geforce 400/Radeon 5000 and DirectX 11- sure it's there, but only as a preview of what is to come later.
GPU history is filled with examples showing that buying solely for the sake of new features is a bad idea. You needed a DX10 GPU like the 8800GT to run the later DX9 games like TES4 fast enough anyway. The HD 5xxx DX11 tessellation hype also became "who cares" in short order. Ray tracing for games NEEDS next-gen console support for mainstream adoption, and that ball is firmly in AMD's court.
Guess where graphics go from here is pretty much a given:
Lisa Su: I think that ray tracing is an important capability and we view that as important. We are continuing to work on ray tracing both on the hardware side and the software side, and you’ll hear more about our plans as we go through the year.
Yeah, but whether/when to buy into DXR acceleration is not an abstract question about the future of graphics, but rather a specific question about how well said hardware will perform on real apps/games.
One trace per pixel at 60fps at 1080p on a 2080 Ti is unacceptable in terms of ray tracing quality, fps, and resolution. How Nvidia didn't realize this was a joke, I don't know. If AMD is smart, they will hold off on ray tracing for at least one more generation beyond the Radeon VII.
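The ray budget math makes the point; the sample and bounce counts below are illustrative, not tied to any particular game:

```python
# Ray budget for 1 primary ray per pixel at 1080p / 60 fps, vs. what a
# modestly path-traced frame (several samples, a few bounces) would need.
pixels = 1920 * 1080
fps = 60
rays_per_sec = pixels * fps
print(f"1 ray/pixel @1080p60: {rays_per_sec / 1e9:.2f} Grays/s")

samples, bounces = 4, 3   # illustrative numbers, not any particular game
print(f"4 spp, 3 bounces:     {rays_per_sec * samples * bounces / 1e9:.1f} Grays/s")
```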
They will hold off, whether they like it or not. Nvidia caught them by surprise and the hardware development cycle is too long for them to respond any sooner than that.
Wonder if crypto lunacy is at least partially responsible for costly features like accelerated ray tracing arriving now instead of... 2-3 generations later. Insane margins and permanent 'sold out's could have hastened things a bit (allowing more $$$ into research).
Now I just wish graphics card price ranges returned to Pascal levels at the highest. They are way too high now, and with AMD setting up on Nvidia's price ladder it probably won't improve anytime soon. (At least with the Radeon VII, but it's still just PR-ware until Ryan reviews it and shops start selling it.)
No, since the 1080 Ti could manage 4K60. The market for 60+ Hz or higher resolutions is 0.00001%. So instead of just adding performance that 99.999999% of users could not use, why not put in new graphics features that use the performance? The average gamer uses 2560x1440 today (still a niche market), a resolution that pros like me used in 2004. It takes decades to bump resolution for the masses. I bought 4K/60 in 2013, and maybe in 2020-2021 that will be mainstream in gaming/consoles. (And 8K is just stupid. Having screens over "retina" density, beyond what your eye can resolve, is just marketing for the IT clickers who rate stuff by big numbers.)
I'm disappointed about the price mostly. 25% more performance than a (I assume reference) Vega 64 at 75% higher price than you can currently buy a Vega 64 reference card for isn't exactly a good deal from a price/performance perspective. I bet it's the 16GB HBM2 that is a main culprit in that price hike.
The transistor count got increased by the new deep learning instructions and the fp64 compute.
Like the GV100 that ended up in Titan V, this die was not originally intended for consumer usage. And like the Titan cards, you have the option of paying a *lot* more for a *little* more performance. For some people, it's an acceptable tradeoff.
12-14nm vs 7nm: it costs almost 3 times as much per mm². Even if the chip is half the size, it actually costs more to produce. People have this strange idea that smaller nodes are cheaper. No - that has been gone since 28nm. That is one of the reasons why the majority of chips are on 65nm and companies invest in 22nm SOI. FinFET designs cost 10-40 times more to tape out than planar. The only reason we have seen the rapid shrinking of the last 6 years is that Apple prepays for fab lines and Samsung competes with them; that's why only TSMC/Samsung have 7nm lines. Intel could eat the fab cost because of their 95%+ margin on x86, but that only works when they are nodes ahead; on the same node, RISC chips will always win, which is why Intel left the 8+ billion chip/year "mobile" market. The point is that AMD's margin didn't go up 75%. It's probably the same margin.

(And this is the classic capitalism problem, and why for example ARM hasn't killed off x86. People think it's normal to pay $300+ for an x86 CPU and motherboard. 30% margin = $90. Say they used ARM with the same performance and it cost $100; with a 50% margin, they only make $50. That's why they want high-priced stuff: it's not about the % margin, but that the same % is worth more at a higher price. That's why investors loved Apple in 2017 when they sold fewer iPhones but had a higher ASP. Same reason we have games as a service: it didn't matter to EA that they sold 5 million copies of Dead Space 3 and made a profit. Profit means nothing today; today it's about profit margin. So it's better to kill off something that makes a profit if you have other stuff with higher profit margins. Yes, it's that stupid. That's why Apple killed the AirPort/WiFi line: it might only have had a 10-20% profit margin, so it dragged down the average margin of their other products. It doesn't matter that it made money. In their minds, it's better to have a higher profit margin on fewer products, even though mathematically you get less profit. Anything you sell at a profit: sell it. In the real world, why care about margins if you actually make money?)
Chip cost is also only one part of total GPU cost. If 7nm were as expensive as you are claiming then the RTX 2060 would have to sell for $1000+ to break even.
In competitive gaming, speed is more important than visual quality, so if AMD can become the FPS king, then their cards will sell despite not having RTX or DLSS.
I am by no means an engineer; I'm just someone who likes to keep in touch with technology and who has been reading this website for probably more than 8 years. But I've always felt that AMD has been lacking in the back end, since the X800/X1950 era. They have focused a lot on the shaders, but it has always seemed to me that their ROPs were behind. So it's good to see them improving their ROP count, and I think they might have something special here.
Anyways, they really need an architectural change. GCN is limiting their shader count to 4096, and it is awful from an energy-efficiency point of view. This might give NVIDIA some competition, but it is very far from something as genuinely disruptive as Ryzen; it just gives AMD time to breathe. Once NVIDIA hits 7nm, AMD will be slaughtered if they can't deliver a new architecture.
I agree that GCN is in need of replacement. I also suspected there was something about it that was limiting the shader count to 4096, but an AMD engineer I asked said that's not true. So, I guess there are probably other (scalability-related) reasons they're limiting it to 4096.
So two years after Nvidia released the 1080 Ti, AMD finally has a competitor that matches the launch 1080 Ti at the same price, while using more power even on a more modern process? I'm kind of disappointed. I wish they had an 8 GB version for $550; that would have been a good deal. Also, if this is the same price as the RTX 2080, why would anyone choose it over the 2080? The 2080 comes with DLSS and ray tracing. Unless that double amount of memory is really very important.
Looking only at gaming use cases, I think you're right. Also, this doesn't catch Nvidia on deep learning.
I look at this almost like a Titan card - something that doesn't really make a lot of sense for the mass market, but it's a really good deal for a few significant niches.
RT is a fair point but of limited use, and let's be real, RT will improve greatly next gen; 1st gen is always meh. DLSS, however, I don't see it. Not that DLSS isn't useful - it's far more useful than RT at the moment. However, and this is a big however, this card is targeted at 4K, where DLSS isn't really useful. That being said, for VR, would/could DLSS be useful? In the case of VR and 4K gaming, I can see that doubling of the frame buffer being extremely useful.
zaza, DLSS looks terrible (GN's FFXV review), very little support; raytracing is a gimmick that's too slow; but NPCs lap it up.
You should reverse the question: if the cards are about the same, why is your choice NVIDIA? This is the NPC mindset, and it's why AMD's new card won't sell; it needs to be at a much lower price to really make people switch, no matter how fast it is. I've seen tech sites on numerous occasions simply default to NVIDIA in their casual conversation, even when referring to performance/spec situations where AMD made way more sense (e.g. the RX 580 for 1080p).
RT implementations have gotten better. There wasn't a lot of lead time for game developers to work with RT. That being said, it's gotten better, but how much more will it improve on current hardware? It may end up being decent. Again, odds are the next gen will have more tangible benefits if developers implement it. DLSS, as with any post-processing, will vary per game. FFXV is just one game; I've seen a few games where the results are pretty good. The issue is that this card is being aimed at 4K, and at 4K DLSS is pointless.
What can VEGA 20 do? It can run 4k, it can run VR quite well supposedly. It will boil down to your use case and what benefits you more. RT/DLSS or double the frame buffer? No need to reverse anything.
As far as sites being stuck up Nvidia's or AMD's ass: I don't care. I tend to ignore sites like that. Plenty of sites provide unbiased information and data to form an opinion and decide which card provides the value to you. I'm happy AMD has a card they're pitting against the 2080. If they're that confident, then maybe when Anandtech reviews it we will have one hell of a face-off. Competition is great. Choices are great.
A few of us did query about this happening, glad to see we weren't wrong. I'm not overly excited about it, though - if it's functionally the same as Vega 10, we won't be getting driver-based automatic primitive shaders, meaning we'll get a mostly-unshackled Vega performing close to 2080 levels and doing well at higher resolutions, but at higher power. Also, will they be losing money on having 16GB HBM2? Will there be a fully enabled Radeon VII in future?
It's not as interesting as Navi is bound to be, whenever we get to hear more about it.
They are OK pricing it in a way that not many people want to buy it, because they will only make four of them. If they tried to make more and tried to price them competitively, they would lose money.
@Ryan: Thanks! Mainly good news, I guess. One question I've had for a long time is why AMD is not coming out with a beefy GPU like this one but combining it with a good helping (12 GB or so) of significantly cheaper GDDR5X or GDDR6? I've read many times that a key reason why Vega 56 and 64 were pricey was the cost of HBM2 vs. GDDR5 or 6. I know that HBM has higher throughput, but is it really that decisive? Appreciate any comments you may have!
Vega's price floor was also due to the die size, which had 12.5 B transistors. Compare that to the GTX 1080 Ti's 12 B transistors and you get the idea.
Yes, the Vega 64 die was/is big, but I remember reading a number of times how the extra cost of HBM-type memory kept the build costs for Vega cards high. Just wondering how much performance would be sacrificed vs. how much cheaper an 8 or 12 GB card would be with GDDR5X or 6.
I didn't mean to contradict the HBM2 effect. Just saying that die size was also clearly a factor.
I was somewhat surprised to see Vega 64 dip around the $400 mark, recently. I wonder if they were taking a loss on those, selling off a bit of cryptomining overstock.
AMD stopped using Doom and Wolfenstein II for their Vulkan tests. Instead they use Strange Brigade, a game that nobody plays (it is even less popular than Dragon Quest XI on Steam).
1 fps faster in two AAA games that are known to run better on AMD GPUs?! This does not look good for AMD. If AMD's marketing slides have trouble showing it winning by more than 1 fps, then imagine what happens when independent reviewers test it. LOL
No need to imagine. I fully expect Anandtech to have a full review placeholder up as soon as the GPU is in retail this February - and for that review to be incrementally filled in over the ensuing several weeks thereafter :P
OR... perhaps Navi is delayed, and AMD created the Radeon VII to fill the space temporarily. It might also be a case where ray tracing became a viable option and they are making last-minute additions.
Vega on 7nm before Navi has been on the roadmap for a while. If it were Navi now that would have been great, but this seems opportunistic - getting onto a new node earlier than Nvidia.
I know you're trying to be cute, but that's not true. They did significant engineering work to add in the new deep learning instructions and move from GloFo's process to TSMC. So, it stands to reason there might've been other fixes or tweaks in there.
Believe it or not, chips have bugs. Working around them usually has a performance impact. Those are often fixed in the next generation, meaning the performance-robbing workarounds are no longer needed.
But this was expected. Turing was made as a consumer product, while Vega 20 was made for enterprise/cloud.
It's a little like why the Titan V is bigger, more expensive, less functional (no RT cores) and yet still a bit slower than the RTX 2080 Ti, even though they're on the same process node.
Good lord, are you serious?? :D Turing is not in any way a consumer product. It's just the table scraps from the compute market packaged up with some flimflam & b.s. marketing to sell to gamers who are too NPC minded to realise they're being duped up the wazoo (we've seen how the PR was fake, the months with no RTX, the terrible performance, the speedup only achieved by reducing amount of raytracing, and who cares in such a grud awful game as BF5 anyway). These are not *gaming* cards, they're compute dies that didn't make the pass for Enterprise.
What would be interesting would be if card makers release these Radeon 7 cards in an 8GB configuration to drop the price. I'm not sure how much of an improvement in performance we would see with that jump from 8GB to 16GB, so it might just increase the cost without much of an improvement in performance for MOST people.
Well, if you look at the gaming performance increase, it's a lot more than the ratio of fp32 compute. That strongly suggests most of the benefit is from more than doubling the memory bandwidth.
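Putting rough numbers on that, using the published specs and AMD's ~25% average gaming claim:

```python
# Spec-sheet ratios, Radeon VII vs. Vega 64 (numbers as published/claimed).
fp32 = 13.8 / 12.7        # peak FP32 TFLOPS
bw   = 1024 / 484         # memory bandwidth, GB/s
perf = 1.25               # AMD's ~25% average gaming claim

print(f"FP32 throughput:     +{(fp32 - 1) * 100:.0f}%")
print(f"Memory bandwidth:    +{(bw - 1) * 100:.0f}%")
print(f"Claimed gaming perf: +{(perf - 1) * 100:.0f}%")
# The gaming gain far exceeds the compute gain, so much of it likely
# comes from the extra bandwidth (and higher clocks/fill rate).
```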
If you cut down the memory back to 8 GB, then you'd probably be left with something that performs almost the same as Vega 64, but a bit more power-efficient. Probably still not as efficient as a GTX 1080, however. And the large die will probably still impose a fairly high price floor. Using cloud/HPC-oriented GPUs for consumers is never going to be a cost-effective solution.
The better solution would be if AMD has some kind of scalable, multi-die solution with Navi.
100 comments and not a single one on why Vega 20 is coming to gamers at all.
I doubt AMD made the move to counter the RTX 2080; it's simply that AMD had the choice. TSMC will likely not be fully utilising their 7nm fab capacity now and in the months to come - none of the other 7nm players around have the demand. So AMD is likely taking advantage of this and might as well launch their 7nm GPU to consumers.
And the real reason is: these chips are harvested from the fully working die, so let's sell them instead of scrapping them. It's not about TSMC having excess 7nm wafer starts; it does not work like that. AMD bought X amount of wafer starts, and since other companies have priority, the only thing that happens to AMD is that their ordered wafer starts get filled quicker.
Only if people actually buy the AMD product. If the response is to simply buy the NVIDIA card, as so many want to do, then this market will die a death of innovation for years, just like the CPU market did. People who take advantage of new AMD products as a route to a cheaper NVIDIA product are sustaining the market stagnation in GPU tech we've already seen result in the overpriced RTX line. This is why AMD's new card won't sell well; it needs to be far cheaper to attain both market share and mind share. It has to break past the consumer and tech media NPC mindset, and that isn't going to happen with price parity, especially not if there's any kind of power/heat/noise element people can yell about (even though they never moan when NVIDIA acts the same way, and don't actually promote AMD when it has a product without such issues).
AMD needs something that's a huge amount cheaper than NVIDIA to really get some talk going on forums and tech sites, and that isn't going to happen with anything targeting upper RTX; NVIDIA can simply lower its prices in response, it has plenty of existing overpriced margin with which to do so. AMD needs a pricing gap large enough so that NVIDIA dropping its price to match would make people instantly understand how much they were previously being ripped off. This is a battle that needs to play out first in the mid range, what was the 1060 market (for that there's Navi). AMD can't win this struggle at the high end. I really don't understand why they're trying to do this, it isn't going to work.
You want to avoid the death of innovation by encouraging people to buy the rehash of old tech that wasn't good to begin with over the product that is being lamented for having too much innovation?
I get that all of the higher-end parts are priced outside of where we would like them, but trying to champion AMD support in the name of innovation seems comically out of touch with reality in the GPU sector.
If you want absolutely zero innovation, you just hate it with every fibre of your being, this is your graphics card. Without question.
It is a huge difference. Intel did NOTHING between 2006 and 2017. AMD gained 1% market share in 2017 and somehow this changed the whole x86 industry? Not the 8+ billion ARM chips that were sold? Nvidia makes their money on Tesla cards and workstations; gaming cards are not huge profit items, especially since high-end cards are not a huge market. Nvidia at least approximately doubled performance every 24 months. Most normal gamers think 4K/60Hz is OK. When you have cards that manage that, it's stupid to add more performance without putting in new features, like ray tracing. DLSS is genius. In the future, we can have small local graphics cards/cloud gaming and let the supercomputer render it. It's such a fanboy comment that we should buy AMD just to have competition. Make good products and we will buy them. If you want a 1080p card, I would recommend AMD. If you want a high-end card, it's Nvidia.
While this GPU may be able to at least compete with the RTX 2080 in performance and price (or serve as a niche product for those looking for FP64 compute), this is achieved by a combination of factors which do not shine a good light on Vega as an architecture.
At roughly the same number of transistors, it matches the performance while using more power, even though it is manufactured on a superior process. So the performance metrics aren't just worse than Nvidia's but actually terrible. Additionally, Nvidia dedicates lots of its transistor budget to tensor cores and ray tracing units; that means AMD needs more transistors for the same functionality. And then there is the fact that the RTX 2080 achieves its performance level with lower theoretical throughput, hinting that Vega simply isn't able to actually make use of all its compute resources.
As much as I'd like AMD to catch up or even pass Nvidia for many reasons (such as open source support etc.), I cannot shake the feeling that this is more of a bandaid to keep the graphics division from bleeding out this year. It is cleverly positioned considering AMD's options (7nm being the only thing they have going for them), but it is by no means technologically on par. Let's hope that Navi can do better.
None of that lovely tech analysis is a rationale for not buying the AMD product, but it's this kind of banter which dissuades people from doing so. The NPC mindset marches on.
How much cost can AMD save if they use 8GB VRAM instead of 16? HBM2 gotta be pretty expensive. A $100-150 savings would make the VII very compelling vs. 2080.
So maybe a touch faster than 2080 performance for about the same 700 dollars, but having to get there with a full fabrication node advantage, without any of the silicon Nvidia is spending on RTX, and five months later. I guess that's an OK, slightly-better-value play, but it's not exactly awe-inspiring, and even a little worrying.
The benchmarks are no doubt picked to be favorable, so the primary draw would seem to be the extra HBM2 memory, might be an interesting card for researchers but few titles are hurting on 8GB for gamers yet. Seems like the Frontier again, or what some Titans have been.
Then again this is only die shrunk Vega, Navi will be the interesting one to see as the new architecture.
Footnote 3. It's a pain to read, so probably easier to watch the following link from TechEpiphany which also includes percentages (there is a slide somewhere showing the same information but I can't find it, unfortunately):
Wow, really thought AMD would stick it to Nvidia in performance per dollar. Disappointed they're essentially just matching Nvidia here. Seems like a missed opportunity for AMD to take the lead on value. And a big loss for consumers who were hoping for some good competition in the GPU space to drag prices down. With AMD cards sucking up more watts and outputting more heat, it's a tough sell.
I think the card has some merit on the 16GB of HBM2 and FP64 performance (both double the nvidia 2080). I think it's a stop gap until their new products late this year or next year, but it is a compelling value for some users.
Nvidia is not overcharging for their cards. Why don't people understand that it costs money to produce huge dies with fast memory? Both AMD and Nvidia charge $3K+ for the same graphics cards but with "pro drivers". I hope people remember this when 3rd-gen Ryzen is released: doing 7nm chips at least doubles the manufacturing costs for AMD, so people who believe we will get a magic 16-core mainstream Ryzen for 300 dollars: forget it. It's also funny that the A12X is larger than a Ryzen die, and still all the "experts" put the value/BOM of the A12X at 25 dollars. Somehow if a chip is x86 it's worth 10 times more.
I think the Limited Edition Vega 64 with the cool, brushed aluminum cover had an msrp of $599, so maybe it's more appropriate to compare that to the Vega VII, since that's what's being shown by AMD. Or perhaps the "base" Vega VII will be $699 and the "limited edition" will be $799. ;)
The Radeon VII must be doing something right: Nvidia's head honcho Huang made some quite derogatory comments about it at CES. I don't believe that Nvidia is upset over the Radeon VII being a lousy card - that could only help Nvidia move more 2070 and 2080 cards. So, why, Jensen Huang: Worried much? Is the Radeon VII really that good?
It performs close to a 2080, using more power even though being at a smaller process node, and offers far fewer features, and those that it does offer are essentially useless for the gaming market it is being released for (16GB VRAM, double precision).
It is what it is: a rebadged Radeon Instinct MI60 cut down from 64 CUs to 60 CUs, released last November as an enterprise accelerator, and now being released as a gaming GPU to tide them over in the consumer market until Navi comes out next year.
Don't disagree completely, but the question remains: why did Nvidia's Jensen Huang go on a rant about the Radeon VII? If it is indeed such an inferior card, the smart thing to do would be to wait for the first tests to be published, and then just say how pleased Nvidia is with the good showing of their 2080. Huang's trash-talking the Radeon seems to suggest he is worried. His unprofessional behavior just made the Radeon VII a contender to watch.
I mean, this represents a massive change. Something this large should have probably been leaked at some point; at least a couple of benchmark sites with unrecognisable device ID's and unusual configs, but nothing.
The only things we got on consumer Vega 7nm were two leaks: the first saying that consumer Vega was binned (cancelled), then more recently that consumer Vega was coming.
If both of these leaks are accurate, and there's no reason why they can't both be, it points to a last-minute change of plans. That, and the lack of spec/performance leaks, point to the Radeon VII being a reheated MI50.
Unfortunately (for AMD and fans), the Radeon VII has (only?) 64 ROPs, just like the Vega 64. That changes the value proposition from exciting to so-so, at least at the launch price; it's now at least $100-150 too high. Still great for certain compute tasks, but as it stands, not an RTX 2080 slayer anymore. Bummer! Nvidia could use a hard kick in the pants so they come down in price.
Here's another thought on why AMD is coming out with this 16 GB HBM2 card now. I suspect AMD and several of its partner companies either have a lot of HBM2 silicon in stock or firmly on order, something that happened before the crypto craze collapsed. Now that HBM memory has to be shifted so it doesn't hang around their necks like a millstone, and one good way to do so is to be generous with HBM2 on a higher-end consumer card. They might even lose money on it, but nowhere near as much as if it just sits around. Cynical, I know.
I'm not a huge graphics fanboy for either camp. That being said, this is embarrassing for $600 given the hype around 7nm and all that jazz. AMD should take a loss, undercut Nvidia @ $450 and wreck them. Lame move, AMD.
I will have to buy this GPU, no other choice. I can perhaps find a used 1080Ti but I have a FreeSync monitor and I am not sure how well it will work with NVIDIA. However, so far with Vega 64, it's good, but I need more performance for UWQHD+. I want to at least be in my FreeSync range (48-75FPS). I usually need to lower down some settings a notch to get around 50-60FPS at my resolution (3840x1600). I am guessing the Radeon VII can do the job with a Ryzen 9 @ 5.1 (if rumors hold true) and a fast B-die memory. I know it's made for compute and RTX 2080 is an equivalent with the RT & DLSS features, which BTW I don't plan to use even if the games that come out support it. I don't need RT at all...And besides, the RTX 2080 has some very aggressive power limit. I want a card that allows as much power so it can stretch its legs. I know that Radeon VII will consume a lot, but power consumption does NOT matter to me at all.
Use Vega 64 + Radeon VII CrossFire. I'm running a CrossFire setup and have been for some years; when software supports CrossFire it really flies and has never been unstable. It's only when you force CF on software that isn't meant for it that there are problems. I don't game much, but I know applications have a lot better support for CrossFire.
"If AMD manages to reach RTX 2080 performance, then I expect this to be another case of where the two cards are tied on average but are anything but equal; there will be games where AMD falls behind, games where they do tie the RTX 2080, and then even some games they pull ahead in."
Here is the problem with mind-share bias. Instead of wording this with a positive first, followed by the equal and ending with negative, we get what we see written here. Whether intended or not, it is giving an affirmation to Nvidia and placing AMD as an also ran wannabe. Even if they are, it is not unbiased journalism's job to tell us this. It is their job to present the facts and let the reader form his or her own opinion about these facts.
Anyone who believes AMD Radeon is across-the-board equal to Nvidia at this point in time is deluding themselves. That still doesn't excuse journalistic mistreatment, whether intended or not, of products that do compete in whatever area and at whatever level, be it price, performance, features, efficiency, or any combination of them.
A better way to have written this: if the Radeon VII reaches RTX 2080 performance, then, as testing has shown in the past, the competing cards will trade performance wins depending on the game title. There will be games where the Radeon VII takes the win, games where they tie, and games where the RTX 2080 wins.
I want competition. This will never happen if the tech press is always downplaying some brands and giving the nod to others, again, even if accidentally/unintentionally. It's indicative of the space Nvidia's owns (rent free I might add) in the mind of tech writers.
Reporting it as a journalist would: this is a shockingly weak part. Being nothing but a die-shrunk Vega at 7nm, with no ray tracing cores and no tensor cores, AMD should be *crushing* the 2080 Ti. This shouldn't be close: they should be using less power, with a smaller die, delivering better performance than anything team green has, and clobbering them on pricing.
As tech journalism, this reporting comes across as borderline AMD marketing slides, but to be fair, that is what they were covering. Your attempt at reading implied bias wasn't very convincing. I'm curious whether we've ever seen a new process node produce a GPU that is both more power-hungry and considerably slower than a competitor on the older node; I can't think of one before now.
I don't think you comprehend what "unbiased" means. As if tech journos don't handle AMD with kid gloves already. The sad truth of the matter is, this is a pretty awful product that only the most misinformed gamer will buy, period.
KOneJ - Friday, January 11, 2019 - link
Vega Frontier Edition often snuck down into a price range your boss might've gone for. e.g. https://www.reddit.com/r/Amd/comments/7tz0gq/those...
mode_13h - Wednesday, January 9, 2019 - link
First, read this: https://gpuopen.com/first-steps-implementing-fp16/
According to this, the new low-precision integer instructions look analogous to Vega 10's fp16 support (i.e. standard packed-arithmetic operations): https://www.phoronix.com/scan.php?page=news_item&a...
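To make the idea concrete, here is a minimal numpy sketch of the arithmetic such a packed low-precision instruction performs: four int8 pairs multiplied and summed into a 32-bit accumulator per lane. This is plain CPU code for illustration only; it is not AMD's ISA, and the function name is made up.

```python
import numpy as np

# Illustrative only: a 4-wide int8 dot product with int32 accumulate,
# one result per lane, emulated on the CPU.
def dot4_i8_i32(a: np.ndarray, b: np.ndarray, acc: np.ndarray) -> np.ndarray:
    """a, b: shape (lanes, 4) int8; acc: shape (lanes,) int32."""
    products = a.astype(np.int32) * b.astype(np.int32)   # widen before multiplying
    return acc + products.sum(axis=1)                    # accumulate into int32

a = np.array([[1, -2, 3, 4]], dtype=np.int8)
b = np.array([[5, 6, 7, -8]], dtype=np.int8)
print(dot4_i8_i32(a, b, np.zeros(1, dtype=np.int32)))    # [-18]
```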
Santoval - Friday, January 11, 2019 - link
The Radeon Instinct MI60 supports both INT8 and INT4 (apart from the standard FP16). According to the article I linked below it supports 118 TOPS of INT4 and 59 TOPS of INT8. The MI50 is a bit less performing. I have no idea if these features will be preserved in Radeon VII, but assuming they will I don't think there is a functional difference between them and Nvidia's tensor cores, so I don't see why they couldn't be used for stuff like DLSS and AI denoising. Volta can also do ray-tracing without ray-tracing cores, though it is a (beastly) exception).The only difference might be that they cannot be used concurrently with the FP32 shaders, since they are apparently not dedicated. I mean they cannot *all* be used along with the shaders (since they appear to stem from them) but a subset of them certainly could. However lacking support from AMD -at least for DLSS or some analogue of it- I don't see these modes being used at all in gaming.
Santoval - Friday, January 11, 2019 - link
https://www.anandtech.com/show/13562/amd-announces...
mczak - Wednesday, January 9, 2019 - link
The "new low-precision" modes are just a couple of new dot product instructions (2-, 4- and 8-bit, IIRC), that's all.
DP perf could well be gimped; AMD have usually done this in the past too. But it could still be more than 1/16 (the Hawaii chip was 1/2 natively, and consumer cards had 1/8). A 1/8 rate would actually be pretty decent, and would blow the RTX 2080 Ti out of the water there if you need cheap FP64 (Nvidia's recent consumer chips, since Maxwell, run at a 1/32 rate).
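For a sense of scale, a quick back-of-the-envelope sketch of what those rate fractions would mean, assuming Radeon VII's 3840 stream processors and an approximate 1.75 GHz boost clock (illustrative arithmetic, not measured numbers):

```python
# Illustrative arithmetic: theoretical FP64 throughput at the rate fractions
# discussed above, assuming 3840 stream processors and a ~1.75 GHz boost clock.
shaders = 3840
clock_ghz = 1.75
fp32_tflops = shaders * 2 * clock_ghz / 1000   # 2 FLOPs per FMA -> ~13.4 TFLOPS

for ratio in (1/2, 1/8, 1/16, 1/32):
    print(f"FP64 at 1/{int(round(1/ratio))} rate: {fp32_tflops * ratio:.2f} TFLOPS")
# 1/2 -> ~6.7, 1/8 -> ~1.7, 1/16 -> ~0.8, 1/32 -> ~0.4 TFLOPS
```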
mode_13h - Wednesday, January 9, 2019 - link
Intel has actually been good, on this front - all their iGPUs have had 1/2. Let's see if they change course, with Gen11.KOneJ - Friday, January 11, 2019 - link
Yeah, but they don't make GPU compute cards justifying product segmentation. Gen. 11 won't change that. Once Intel Xe is out though... it depends on how much encroachment they're worried about from integrated offerings.mode_13h - Monday, January 14, 2019 - link
Going from 24 -> 64 EU in their mainstream desktop chips, and you say they're going to hold at 1/2 fp64 compute? I'm not saying it won't happen, but I don't think we can take it as a given. They could cut back to 1/4th and still deliver more fp64 performance than Gen10.mode_13h - Wednesday, January 9, 2019 - link
Yes, this will be a fantastic bargain for fp64 performance. Performance will be in the ballpark of Titan V, at only a quarter of the price.jabber - Wednesday, January 9, 2019 - link
And for us mere mortals that fancy something new and not warmed over....bupkis I guess?mode_13h - Wednesday, January 9, 2019 - link
Still waiting for Navi...GreenReaper - Wednesday, January 9, 2019 - link
Navi in an APU would be nice, but Renoir is probably 2020. :-/On the plus side, they should have AV1 support sorted by then, and hopefully bugs ironed out.
Alexvrb - Wednesday, January 9, 2019 - link
We weren't even expecting anything new in the consumer market until Navi, so this is fairly interesting even if it is a stopgap.KOneJ - Friday, January 11, 2019 - link
I think they had some MI50/60 dies that they couldn't use and figured they throw this out there.KOneJ - Friday, January 11, 2019 - link
Was kind of wondering about this when Instinct cards launched. forgot about it until I saw the VII trademark logo.sing_electric - Wednesday, January 9, 2019 - link
I don't think any of us were expecting to see AMD come out swinging at the RTX 2080 in 2019, let alone in the first half of 2019, so that's a plus and bodes well for people wanting a high-end GPU at something close to the MSRP. It's interesting that the branding doesn't readily allow for variations (like 590/580/570, or 64/56), making me think that AMD may have rushed this to market as a stop-gap product (possibly due to a combination of yields being better on the chips for Radeon Instinct than expected and how most of us thought that this was going to be another lost year for Radeon).Having said that, I'm surprised that AMD's doubling down on HBM2 - I wonder if prices have come down more than I've heard, since I've thought that the high cost of that memory (both in terms of chips, but also the expensive interposer and board design that the tech requires) has limited AMD's ability to lower the cost of Vega boards to the point where they're $/perf competitive with Nvidia.
In either case, it puts AMD in the unique situation of offering 3 generations of products (Vega 20, Vega 10 and Polaris) at 4 different processes (7nm Radeon VII, 14nm Vega and Polaris, and 12nm Polaris) to target different parts of the dGPU market.
Meaker10 - Wednesday, January 9, 2019 - link
My guess is yields have done a fair bit better than expected and this is helping eat the price penalty of HBM along with GDDR6 being closer in price.sing_electric - Wednesday, January 9, 2019 - link
I hadn't thought about how much more expensive GDDR6 is compared to 5. When AMD didn't release any HBM-based cards in the 400/500 series, I'd thought the tech might be going the way of RAMBUS; we'll see how the Radeon VII performs but it now seems like it may actually offer an interesting way of increasing performance.KateH - Wednesday, January 9, 2019 - link
i agree that this is not what was expected. rumors were pointing to a Navi-based GTX1080/RTX2060 competitor at similar price point to 2060, not a GTX1080Ti/RTX2080 competitor at the same price point as the 2080. Which leaves the question, what is AMD going to do in the midrange space? Right now they have nothing to go against the 2060. AMD painted themselves into a corner in the midrange with the RX590- too hot, too expensive, too slow. At $149-$199 and tweaked to bring the power down to =<175W Polaris might still make sense in 2019 but as-is they have nothing unless they rebrand Vega56 and drop it to $249-$299, which isn't going to happen.sing_electric - Wednesday, January 9, 2019 - link
Totally agree- they're in a weird space at the moment. The "Radeon VII" branding with no other suffix makes me think that this is the only chip that AMD will release on VIInm for a while, so other releases are likely to be on the 14/12nm.The 590 move is really confusing to me - why bother porting a design from 2015 (since the 580 is just a 480 that's factory overclocked) to a slightly smaller process, particularly when you don't gain much in headroom (since if Intel was naming things, GloFo's 12nm is more like a '14nm++'). The 580 was the best deal below $200 until the 2060 dropped pushing 1060 6B prices down (though it's still not a *bad* deal). Maybe an OEM was willing to do a bulk purchase or something.
There's two moves I can see: One, a Radeon VII with less memory - following Nvidia's lead with say, the 1060 3/6GB; the other, they do for Vega what they did with the 590, shrink it to 12nm (and possibly pair it with GDDR, though it looks like Vega performance is VERY bandwidth dependent), or possibly pair it with less HBM2. They could release it under the Vega name and use a number that indicated performance instead of ROPs (so like, a 56 ROP clocked faster after the shrink might be the Vega 60 or something).
Still, any way you cut it, it's way more confusing than Nvidia, which is pushing Turing and letting Pascal stocks dry up.
mode_13h - Wednesday, January 9, 2019 - link
You have to consider when the RX 590 would've originated. That would be back when cryptomining was still hot.Given what they've said about their 12 nm Ryzen die, it sounds like the process transition to 12 nm was very low effort. So, my guess is that it was seen as low-hanging fruit that made them more competitive vs. GTX 1060 (6 GB). Perhaps they were (typically) optimistic about the power savings or clock headroom of the 12 nm process. If it were a bit more efficient, then I think it would look much more compelling.
Karmena - Thursday, January 10, 2019 - link
What is NVIDIA doing in the midrange space? Looking at prices, the 2060 is not midrange. The 2060 is not 1080 performance either; it is more like a 1070 Ti. And people are saying the 2070 already has trouble making ray tracing performance useful, so the 2060 completely negates the RTX part of the name anyway.
sing_electric - Thursday, January 10, 2019 - link
AMD is in a really weird place now, having just re-released shrunk versions of its architectures from 2015 (RX 590, a 12nm shrink since its still Polaris from the RX 480 (which was overclocked to become the 580)) and 2017 (Vega 7nm), but not really a new unified architecture to target high/mid (low?).My bet is that Radeon VII (and the 590) are stopgap products until Navi is ready. I think most of us were surprised about the VII, which may mean that Radeon Instinct MI yields were better than expected, or demand was lower than expected, or just that AMD realized they could make money on them at $700.
peevee - Thursday, January 10, 2019 - link
"which may mean that Radeon Instinct MI yields were better than expected"
By dropping CUs from 64 to 60, you allow for up to 4 dead CUs on a 64-CU chip, effectively increasing yields. The same move probably also allowed them to raise the frequency, as they can pick whichever 60 of the 64 CUs can sustain the higher clock.
Replicant007 - Thursday, January 10, 2019 - link
I'd venture to guess third party will offer lower memory (8 or 12) card options at a lower price. Either way, RTX just isn't worth the cost of admission.KOneJ - Friday, January 11, 2019 - link
2060 without RT or TC at 580/590 pricing could happen and would be a real problem without Navi for AMD.rhysiam - Wednesday, January 9, 2019 - link
My guess is that AMD were never expecting to launch this as a gaming GPU until they got wind of Nvidia's cash grab with the RTX 2080. AMD thinks, "if Nvidia can charge $700 for 1080 Ti performance in 2019, then why can't we!?" So they rush a mainstream release. If Nvidia had launched the 2080 at a more reasonable $550, I'm willing to bet this Vega VII card wouldn't even have been contemplated.
RE the 16GB of HBM2: the card has (and apparently needs) a 4096-bit memory bus. Can AMD actually do that with 8GB of RAM, or do they need the full 16GB?
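For reference, the bus-width arithmetic behind that question, as a rough sketch (the ~2 Gbps per-pin rate is the commonly quoted figure for this class of HBM2; treat the numbers as illustrative):

```python
# Illustrative: aggregate bandwidth of a 4096-bit HBM2 interface.
stacks = 4
bits_per_stack = 1024
pin_speed_gbps = 2.0                        # assumed ~2 Gb/s per pin
bus_bits = stacks * bits_per_stack          # 4096-bit bus with four stacks
print(f"{bus_bits}-bit bus -> {bus_bits * pin_speed_gbps / 8:.0f} GB/s")    # 1024 GB/s

# Dropping to two 4GB stacks for an 8GB card would also halve the bus width:
half_bus = 2 * bits_per_stack
print(f"{half_bus}-bit bus -> {half_bus * pin_speed_gbps / 8:.0f} GB/s")    # 512 GB/s
```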
KateH - Wednesday, January 9, 2019 - link
I believe that the smallest HBM2 stacks are 4GB, so no 8GB option with four stacks as used here.mode_13h - Wednesday, January 9, 2019 - link
I wonder why they didn't copy Titan V and just go with 3 stacks. Maybe Vega's architecture wouldn't cope well with the imbalance.iwod - Thursday, January 10, 2019 - link
AMD needed HBM2 for their compute-intensive applications, and Vega was first and foremost designed for compute. Which means it is a little too late to fit GDDR6 into Vega.
Memory cost has also come down a lot, maybe even by more than 50%. But HBM2 is still a lot more expensive than GDDR6, and even GDDR5.
sing_electric - Thursday, January 10, 2019 - link
I'd usually say that's true, except so much of AMD's graphics business is semi-custom, and they've shown an ability to mix Vega CUs with standard DDR (in the Ryzen APUs), and even lop off Vega's HBM2 controller and add it to Polaris (for the GPU it made for Intel's Kaby Lake-G processor).Having said all that, I agree its late in the cycle and there isn't an obvious reason AMD would do this - though it wouldn't surprise me if Navi included both HBM2 and GDDR parts at different points of the product stack (though it also wouldn't surprise me if HBM became Radeon Instinct-only and they stuck with GDDR for all the consumer parts, as was the case with Polaris).
KOneJ - Friday, January 11, 2019 - link
The best explanation for this I can see is as a way to use more silicon and reduce waste from the 7nm Instinct cards.silverblue - Friday, January 11, 2019 - link
I had thought about that; reject chips with only a small portion disabled. If they had some with, say, 56 or 52 CUs enabled, and HBM2 somehow miraculously became cheaper to manufacture, they could bring out a Vega 56 replacement which would equal or beat the 2070... just a thought.rarson - Monday, January 21, 2019 - link
My guess is that AMD saw the raster performance of the 2080 wasn't so spectacular and realized that their 7nm Vega could get reasonably close at around the same price. I suspect that AMD is still losing money on these even at $699 due to the high cost of HBM2. I also suspect that either Navi isn't quite as ready as they expected or they realized they needed to allocate more resources towards console chips. Either way, it's clearly a stop-gap, but if performance lives up to the claims, I would say it's not a very bad one. Had they stuck to the original plan, I imagine we would have had to wait significantly longer for a new card but that it would probably be a Navi-based one.Cellar Door - Wednesday, January 9, 2019 - link
To be honest - this is NOT gonna fly at $699. Why would anyone pick this over the 2080, which has the ray tracing tech and DLSS?
AMD seems to have missed their opportunity here.
marees - Wednesday, January 9, 2019 - link
Ray tracing isn't efficient at 4K, and DLSS is equivalent to 1800p supersampling with TAA,
so people who have 4k and/or high frequency monitors have a choice now
FMinus - Wednesday, January 9, 2019 - link
Because ray tracing is a buzzword, nothing more, nothing less. Maybe in 2025 it will mean something; right now it doesn't.
webdoctors - Wednesday, January 9, 2019 - link
$699? I thought the AMD fanboys said it'd be $399 and that AMD would never rip them off, unlike the other guys. What happened to the comments about how competition would bring down prices? Trolls haven't woken up yet?
But wow, 16 GB of HBM2...
KateH - Wednesday, January 9, 2019 - link
AMD fanatics said it would be Navi at $399 with 1080/2070-level performance (not a fangirl here, but I spend too much time reading tech rumors). This is a bit of a curveball.
GreenReaper - Wednesday, January 9, 2019 - link
There probably will be one like that... just not yet.mapesdhs - Thursday, January 10, 2019 - link
I can't see it selling at this price; it's just too high. It doesn't matter how good it is, AMD needs something far cheaper to gain both market share and mindshare. Tech sites will also bitch about power/heat/noise even though most gamers really don't care at this level. People simply don't buy AMD often enough, even when it makes perfect sense to do so. AMD isn't going to get back into the GPU game by targeting what now passes for high end these days. Roughly the same as a 2080 at about the same cost won't work. It needs to be waaaay cheaper to really make tech sites yell "yes, buy this thing", and the gap has to be big enough that NVIDIA can't simply lower its prices to counter the difference without making it very clear their own products never had to be so expensive in the first place. Done like this, NVIDIA can respond merely by making the 2080 slightly cheaper, and it doesn't have to be by much of a margin to have the desired effect. Worse, there are people who openly say they wanted AMD to release something decent just so they could get a cheaper NVIDIA card (these are the NPCs who help maintain market stagnation; if people don't buy AMD because of such a deluded mindset, then AMD just isn't going to bother with this market at all - why should they?).
Paul ("not an apple fan" on youtube) has talked about these issues at length, worth checking out.
A product like this needed to be $500 or below. Until the NPC zoids stop defaulting to NVIDIA whenever things are more or less equal, this situation won't change, and NVIDIA can continue to screw the consumer with overpriced tech passed off as advances which are just the table scraps from the compute market (what we're getting now are not gaming cards, they're not designed for *gaming*, as The Good Old Gamer explained so well some months ago).
Arbie - Friday, January 11, 2019 - link
I suppose "AMD fanatics" is just your shorthand for "folks trying to predict what graphics cards AMD would be able to release in 2019", and somehow not meant to be offensive.sing_electric - Wednesday, January 9, 2019 - link
AMD and Nvidia have really different architectures - we'll see if the GCN successor (next year?) starts looking more like Nvidia but I doubt it.AMD DOES support GPU-enhanced ray tracing on Vulkan, and there's nothing "magical" about the cores in Nvidia's RTX series that enable it - they're just optimized. We've seen that GCN is actually surprisingly good at doing a lot of compute tasks, and so it wouldn't surprise me if AMD could support hardware DirectX raytracing on existing GPUs in the future.
We *still* don't know how DLSS works, but it seems like it's just an AI algorithm that allows most of a scene to be rendered at below the advertised resolution while upscaling select parts (and applying TAA on others?), so again, it's not "nothing" but it's just software around which Nvidia may have optimized parts of its hardware. There is a chance AMD could announce something similar, again, on existing hardware, though I'd kind of doubt it.
mode_13h - Wednesday, January 9, 2019 - link
> there's nothing "magical" about the cores in Nvidia's RTX series that enable it - they're just optimized.You got a source on that? I'm not aware of any public information about what's inside of their RT cores.
mode_13h - Wednesday, January 9, 2019 - link
DLSS uses deep learning to infer what the upsampled output would look like. While I'm not aware of published information about the specific model + pre/post processing they're using, it's less magical than you make it sound.Aside from adding integer support to their Tensor cores (which they were probably going to do anyhow), I think DLSS is more a case of them developing new software tech to utilize existing hardware engines (i.e. their Tensor cores, which they introduced in Volta).
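For readers trying to picture what is being described, here is a deliberately crude sketch of the general data flow (render below the target resolution, then reconstruct it with a learned upscaler). The `upscale_model` below is just pixel repetition standing in for a neural network; nothing here is Nvidia's actual pipeline.

```python
import numpy as np

def render_frame(height: int, width: int) -> np.ndarray:
    """Stand-in for the game renderer: produce an RGB frame at the given size."""
    return np.random.rand(height, width, 3).astype(np.float32)

def upscale_model(frame: np.ndarray, factor: int) -> np.ndarray:
    """Placeholder for a learned upscaler; here it is just pixel repetition."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

low_res = render_frame(1080, 1920)          # render below the display resolution
output = upscale_model(low_res, 2)          # reconstruct something near 4K
print(low_res.shape, "->", output.shape)    # (1080, 1920, 3) -> (2160, 3840, 3)
```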
mapesdhs - Thursday, January 10, 2019 - link
I was surprised at how grud awful DLSS actually looks. GN's review of FFXV showed absolutely dreadful texture blurring, distance popping and even aliasing artefacts in some cases - the very thing AA tech is supposed to get rid of. It did help with hair rendering, though the effect still looked kinda dumb with the hair spikes clipping through the character's collar (gotta love that busted 4th wall).
KateH - Wednesday, January 9, 2019 - link
maybe an edge case here, but i would. I do content creation as well as gaming; if a Radeon 7 can hit 2080 performance in DX11 and Vulkan /and/ render my videos faster I'd choose the Radeon. I downgraded from Win10 back to Win 8.1 (flame away, i am unbothered lol) and have no plans of going back up for at least a year or two so Win10-only DXR does nothing for me.HollyDOL - Wednesday, January 9, 2019 - link
That will be actually interesting to see - how much support these new cards can get on system that is year past end of support already... at absolutely worst case you might not be able to run a new card on that.KateH - Wednesday, January 9, 2019 - link
Yeah :-/ I've a GTX 1070 in my gaming system right now (bought second-hand in autumn 2018) and there's a distinct possibility that my only reasonable upgrade option will be a used 1080 Ti. My notion is to stick with the 1070 till this winter or so, when used 1080 Ti prices hopefully hit ~$300, and then stick with that until DXR/Vulkan ray tracing becomes mainstream. By that time (late 2020?) I should be able to snag a used 2080 Ti or whatever midrange cards are out at the time, whichever has better perf/$$.
Or I might just keep using Windows 8.1 forever, like people are doing with Win7. I'll get security updates till 2023 IIRC, and I don't care about bugfixes or feature upgrades as the OS has been completely stable for me for years now.
Don't matter - Wednesday, January 9, 2019 - link
Hey KateH, I signed up just to reply to you :)
I completely agree with you. As a 1070 owner who bought it last year for $300, I was thinking the same thing. For us 1070 owners the only reasonable upgrade option is the 1080 Ti, or, if you're feeling fancy, the RTX 2080 or Radeon VII for twice the price. I wouldn't go for a Titan RTX or 2080 Ti even if I had the money; it's just not worth it, I'd rather donate to charity...
And a 1080 Ti should last for at least a couple more years, since it will probably sit around RTX 3070/3060 performance, and RTX 4050 at worst (judging by current trends). That means it's going to be equal to a mid-range GPU somewhere around 2021-2024, with performance between 1/2 and 1/3 of an RTX 4080 Ti. But by then there's a possibility that some fancy new tech will have arrived and RTX will have matured... so an upgrade to an RTX 4080/4070 would be worthwhile for twice the performance of the 1080 Ti plus the new tech...
Can I ask you why you switched to Win 8.1 though? I don't get it...
KateH - Wednesday, January 9, 2019 - link
do you mean why i downgraded at all, or why i chose to downgrade to Win 8.1 specifically?I got tired of the feeling that Windows 10 is perpetually in Beta, or at least that it's perpetually a "Pre-Service Pack 1" version of Windows, from release up to 1703 when I gave up. Too many difficulties getting it to install on a wide variety of hardware. Too many driver issues (despite the fantastically wide range of drivers available automatically via Windows Update), too many instances of bizarrely high CPU usage at idle, BSODs that I couldn't troubleshoot, etc, again across a lot of hardware. And then if I got an install that worked and performed as expected, I had to deal with the weekly barrage of updates, the litany of crapware that MS bundles (and reinstalls without asking!), application incompatibility, utterly titanic install footprint (that just grows and grows and grows with each update) and so on. It's a shame, because i for the most part love the Win10 UX and appreciate it's fast boot times and responsible RAM usage.
I went back to Windows 8.1 specifically because I unabashedly actually like it. It has many of the UX improvements that 10 has. I like the flat, clean GUI that feels responsive even on low-end hardware, a far cry from the bogged-down garishness of Aero on Vista and 7. I like the fast boot times- my gaming PC has a single 1TB HDD for OS and 2x 1TB RAID stripe for gamez and stripped-down 8.1 boots from pressing power button to ready at desktop in ~10 seconds, as fast as loaded-up 8.1 on my laptop's SSD (remember booting Win7 from spinning rust? Better bring a good book to read while you wait!). I like that Win 8.1 uses a tiny amount of memory- 800MB on my 8GB equipped gaming system, 200MB of which is for the Nvidia driver alone, and ~1GB on my 16GB laptop that I've loaded up with a silly amount of startup apps and utilities. The weird Start screen doesn't bother me- on my laptop I just hit the windows key and type the name of the application and on my gaming system I actually like the tile-based UI because it makes it feel more console-like.
Overall Win8.1 just feels like a stable, mature OS which is what I want. I know its features and I know they won't change. I download my weekly Windows Defender definitions and otherwise don't have to worry about updating other than once a month or so and I have full control over said updates out-of-box. 8.1 hasn't crashed for me in literally years aside from overclocking follies. It's fast. It just works. To me (and i know this is an unpopular opinion) Windows 8.1 is the ultimate Windows- most of the efficiency and UX advantages of 10 with the feeling of being in control of one's own computer that 7 had.
andrewaggb - Thursday, January 10, 2019 - link
Windows 8.1 was my favorite version of windows and I've used all of them (server versions included) since 3.0. It was the fastest and most stable version of windows ever (for desktops). Windows 10 is less stable, slower, and more bloated.But I use windows 10 on all my desktops now because it's the current version and works reasonably well and seems like it'll be supported forever if they don't eventually release a new version.
mode_13h - Wednesday, January 9, 2019 - link
Content creation is actually one of the better use-cases for RTX. It doesn't require DXR - Nvidia has supported ray-tracing acceleration for content creation via a separate API (OptiX), which has been around for quite a while.mapesdhs - Thursday, January 10, 2019 - link
The joke being that actual pro users aren't buying cards like the 2080 Ti and RTX Titan; they don't have enough VRAM. I had a chat with a guy at a major movie company (can't say which, but bigger than ILM): they're already approaching the 24GB of RAM of the Quadro M6000s in their Flame systems, so they're moving up to either the RTX 8000 (48GB RAM) or the 32GB GV100 (think that's the right name). That means hundreds of upgrades in offices all over the world - mega money. This is why modern GPUs are not GPUs at all, they are compute engines; gamers get the table scraps. I long for a new genuine gamer card designed for games, but that doesn't look like happening any time soon.
In case you're wondering, the workload in Flame that's gobbling the VRAM is 8K editing, which of course in a pro studio is uncompressed. They have PCIe SSD arrays doing almost 10GB/sec to handle the data, and a SAN that can do 10GB/sec as well. The building I'm familiar with has its own 1MW power generator (last I asked, a 9000-core render farm).
Prosumers and solo pros on a budget might find RTX attractive, but all that Tensor/AI stuff won't matter a hoot unless apps are recoded to exploit it. Studios are already talking to Autodesk about the possibilities, so who knows. Don't expect anything anytime soon though, Flame still has weird stuff in it from 20 years ago.
mode_13h - Thursday, January 10, 2019 - link
Sounds fun.In the past, "realtime raytracing" would've been my answer to any question involving a 9000 core render farm. Now, I'll have to think of something else...
AlexDaum - Wednesday, January 9, 2019 - link
If it actually performs the same as the RTX 2080, then it could be a good buy depending on what is cheaper in your country, especially because AMD is bundling 3 games and NVidia only 1 (you have the choice between Anthem or Battlefield 5, but you don't get both). So you can save some money there if you want those games, or sell them to get a bit of the price back and make it a better deal overall.
But I could see this being a good GPU for GPGPU compute or other non-gaming applications. 16 GB of HBM2 is enough space for huge data sets and is fast enough to keep fast algorithms fed with data. Double precision is also higher than on other consumer cards, and they support lower precision at higher speed (although only via special instructions). And $700 is not much if you compare it to professional cards. Also, I don't think AMD has restricted consumer cards from commercial use, like NVidia did?
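A quick way to sanity-check the "whole data set fits in VRAM" argument is to size a double-precision working set against 8 GB vs 16 GB; the array shape below is an arbitrary example.

```python
import numpy as np

def fits_in_vram(shape: tuple, dtype, vram_gb: float, headroom: float = 0.9) -> bool:
    """True if an array of this shape/dtype fits in vram_gb, with some headroom."""
    size_gb = np.prod(shape) * np.dtype(dtype).itemsize / 1e9
    return size_gb <= vram_gb * headroom

grid = (40_000, 40_000)                        # a dense FP64 matrix, ~12.8 GB
print(fits_in_vram(grid, np.float64, 8.0))     # False - would spill to system RAM
print(fits_in_vram(grid, np.float64, 16.0))    # True  - stays resident on the card
```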
wr3zzz - Thursday, January 10, 2019 - link
Like the article said, VII has 16gb VRAM vs. 8 of the 2080.bananaforscale - Saturday, January 12, 2019 - link
That would depend on what the performance is actually like. Raytracing is kinda useless at the moment.del42sa - Wednesday, January 9, 2019 - link
I don´t think it has 128 ROPs ....SalemF - Wednesday, January 9, 2019 - link
So where does the ~700 million transistor difference against Vega 64 go, especially since it has fewer compute units? They increased the frequency and increased the ROP count (a lot), so it will look more like the nVidia architecture: better graphics/gaming performance and less compute/mining power, which is fair enough for consumers.
mode_13h - Wednesday, January 9, 2019 - link
Doubling the memory bandwidth would only make sense if they also increase the ROPs.As for the increased transistor count, don't forget about the new, low-precision arithmetic ops + high-rate fp64 compute.
del42sa - Thursday, January 10, 2019 - link
ROP´s are not tied to memory channels. I guess Fiji has 128 ROPs too...mode_13h - Thursday, January 10, 2019 - link
That's pretty clear, but are you saying they're going to get much from significantly increasing bandwidth without touching ROPs? I assume they're pretty well balanced.del42sa - Friday, January 11, 2019 - link
whats the point of increasing ROP´s without increasing Shader Engine count and Geometry/triangle rate ? Pointless...silverblue - Friday, January 11, 2019 - link
Not if Vega was starved for fill rate to begin with.mode_13h - Monday, January 14, 2019 - link
They increased shader throughput by increasing the clock speed. And as silverblue points out, the benchmarks suggest that vega was either starved for fillrate or overall bandwidth.Note that not all rasterized pixels get shaded. Shading can often be deferred until after stencil and depth testing.
del42sa - Thursday, January 10, 2019 - link
are you serious ?nick.evanson - Friday, January 11, 2019 - link
It doesn't - but in the same way that the Vega 64 doesn't have 64 discrete ROPs; instead it has 16 units that are each capable of writing 4 independent pixels per clock. The Vega 20 architecture has simply doubled that count from 16 to 32, along with the number of memory controllers, and now has 4 x 32 = 128 "ROPs".
nick.evanson - Friday, January 11, 2019 - link
I suspect I misinterpreted your original; I think you're questioning the proof over whether the number of render back ends have been truly doubled or not. Nosing around the web has flung up this image:https://heise.cloudimg.io/width/1920/q50.png-lossy...
That's still 4 discrete ROP units per graphics pipeline; can't quite find a similar image for Vega 10, so unable to tell if AMD have done something with the ROPS, e.g. double write rate from 4 pix/clk to 8.
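For context, the peak fill-rate arithmetic implied by those ROP counts, assuming a ~1.75 GHz clock purely for illustration:

```python
# Illustrative peak pixel fill rate: ROPs (pixels written per clock) x clock.
clock_ghz = 1.75   # assumed boost clock, for the arithmetic only
for label, rops in (("64 ROPs (Vega 10-style)", 64), ("128 ROPs (doubled back end)", 128)):
    print(f"{label}: {rops * clock_ghz:.0f} Gpixels/s")   # ~112 vs ~224 Gpixels/s
```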
del42sa - Sunday, January 13, 2019 - link
yes, it was obvious from the begining ( at least to some of us )Thernn - Wednesday, January 9, 2019 - link
I'm confused. You list AMD Radeon VII as pci-e 4.0 capable in the tech specs but then state it won't have 4.0 support?boeush - Wednesday, January 9, 2019 - link
I second that...mode_13h - Wednesday, January 9, 2019 - link
Perhaps those specs were blindly taken from the MI150.Ryan Smith - Wednesday, January 9, 2019 - link
Someone jumped the gun on the tables. It is confirmed to be PCIe 3 only.Dr. Swag - Wednesday, January 9, 2019 - link
$700? No thanks. If it only matches the 2080 in performance then, unless you need the VRAM, I would get a 2080: there's the possibility of ray tracing in the future, and you can use both FreeSync and G-Sync.
cheshirster - Wednesday, January 9, 2019 - link
You are right. Ray tracing IS the future.mode_13h - Wednesday, January 9, 2019 - link
...the future.mode_13h - Wednesday, January 9, 2019 - link
By the time games support it well, Nvidia will be onto RTX 3000 and their 2000-series cards will lag badly.Usually, when a new tech is introduced, there are big improvements made between the first and second generation implementations.
I wouldn't buy into ray tracing before there's compelling content that I know will run on the hardware at satisfactory speeds and resolutions. Anything else is just a gamble.
KateH - Wednesday, January 9, 2019 - link
this, yes. Reminds me of when Geforce 8 came out with DirectX 10... that was an even bigger performance uplift compared to the previous generation than Geforce RTX and it still wasn't enough to really do DX10 games well. Same with Geforce 400/Radeon 5000 and DirectX 11- sure it's there, but only as a preview of what is to come later.StrangerGuy - Wednesday, January 9, 2019 - link
GPU history is filled with examples showing that buying solely for the sake of new features is a bad idea. You needed a DX10 GPU like the 8800GT just to run the later DX9 games like TES4 fast enough anyway, and the HD5xxx DX11 tessellation hype also became "who cares" in short order. Ray tracing for games NEEDS next-gen console support for mainstream adoption, and that ball is firmly in AMD's court.
HollyDOL - Thursday, January 10, 2019 - link
Guess where graphics go from here is pretty much a given:
Lisa Su: "I think that ray tracing is an important capability and we view that as important. We are continuing to work on ray tracing both on the hardware side and the software side, and you'll hear more about our plans as we go through the year."
mode_13h - Thursday, January 10, 2019 - link
Yeah, but whether/when to buy into DXR acceleration is not an abstract question about the future of graphics, but rather a specific question about how well said hardware will perform on real apps/games.Opencg - Thursday, January 10, 2019 - link
One trace per pixel at 60 fps at 1080p on a 2080 Ti is unacceptable in terms of ray tracing quality, fps, and resolution. How Nvidia didn't realize this was a joke, I don't know. If AMD is smart they will hold off on ray tracing for at least one more generation beyond Radeon VII.
mode_13h - Thursday, January 10, 2019 - link
They will hold off, whether they like it or not. Nvidia caught them by surprise and the hardware development cycle is too long for them to respond any sooner than that.mapesdhs - Thursday, January 10, 2019 - link
The NPC mindset marches on. :Dedzieba - Wednesday, January 9, 2019 - link
"produce enough of them to meet market demand" As we saw from the Vega (and Fury) launches, this is far from a reasonable assumption.KateH - Wednesday, January 9, 2019 - link
The difference now is that the GPU cryptomining craze has died down. That should help tremendously with availability.
HollyDOL - Thursday, January 10, 2019 - link
Wonder if the crypto lunacy is at least partially responsible for costly features like accelerated ray tracing arriving now instead of... 2-3 generations later. Insane margins and permanent 'sold out's could have hastened things a bit (allowing more $$$ into research).
Now I just wish graphics card price ranges returned to Pascal levels at the highest. They are way too high now, and with AMD setting up on nV's price ladder it probably won't improve anytime soon. (At least with Radeon VII; but it's still just PR-ware until Ryan reviews it and shops start selling it.)
shompa - Sunday, January 13, 2019 - link
No, since the 1080 Ti could already manage 4K60. The market for 60+ Hz or higher resolutions is 0.00001%. So instead of just adding performance that 99.999999% of users could not use, why not put in new graphics features that use that performance? The average gamer - and that's a niche market in itself - uses 2560x1440 today, a resolution that pros like me used in 2004. It takes decades to bump resolution for the masses. I bought 4K/60 in 2013, and maybe in 2020-2021 that will be mainstream in gaming/consoles. (And 8K is just stupid. Having screens OVER retina - past the point where your eye can see the pixels - is just marketing for buyers who rate things by big numbers.)
TheinsanegamerN - Sunday, January 13, 2019 - link
If the market is 0.00001%, why are there so many high-refresh-rate monitors on the market these days? Do you have a source for your 0.00001% number?
SaturnusDK - Wednesday, January 9, 2019 - link
I'm disappointed about the price mostly. 25% more performance than a (I assume reference) Vega 64 at 75% higher price than you can currently buy a Vega 64 reference card for isn't exactly a good deal from a price/performance perspective. I bet it's the 16GB HBM2 that is a main culprit in that price hike.mode_13h - Wednesday, January 9, 2019 - link
The transistor count got increased by the new deep learning instructions and the fp64 compute.Like the GV100 that ended up in Titan V, this die was not originally intended for consumer usage. And like the Titan cards, you have the option of paying a *lot* more for a *little* more performance. For some people, it's an acceptable tradeoff.
shompa - Sunday, January 13, 2019 - link
12-14nm vs 7nm: It cost almost 3 times as much per mm2. Even if the chip is half the size, it actually cost more to produce. People have this strange idea that smaller nodes are cheaper. No.. That is gone since 28nm. That is one of the reasons why majority of chips are on 65nm and companies invest in 22nmSOI. FinFet design, tapeout cost 10-40 times more than plane. Only reason why we have seen the rapid shrinking last 6 years is that Apple prepay fab lines and Samsung competes with them. That's why only TSMC/Samsung have 7nm lines. Intel could eat the fab cost because of their 95%+ margin on X86, but that only works when they are nodes ahead. On the same node, RISC/real chips will always win = why Intel left the 8+ billion chip/year "mobile" market. The point is that AMDs margin didn't go up 75%. It's probably the same margin. (and this is the classic capitalism problem and why for example ARM haven't killed off X86. People think its normal to pay 300+ dollars for X86/motherboard. 30% margin = 90 dollars. Let's say they use ARM with the same performance and it cost 100 dollars. With 50% margin, they only make 50 dollars. That's why they want high price stuff. It's not about % margin, but that the same % point is more worth at higher price. That's why stupid investors loved Apple 2017 when they sold fewer iPhones but had higher ASP. Stupitidy of today's "experts" (same reason why we have Games as service. It didnt matter for EA if they sold 5 million copies of DeadSpace3 and made a profit. Profit means nothing today. Today its about profit margin. So it's better to kill off something that makes a profit, and has other stuff with higher profit margins. Yes. It's that stupid. That's why Apple killed Airport/WiFi line. It could just have a 10-20% profit margin so it takes down the average profit of their other stuff. It does not matter that they make money. In their sick minds, it's better to have a higher profit margin on fewer products. But mathematically: you get less profit. Anything you sell with profit: sell it. In the real world: Why care about margins if you actually make money.TheinsanegamerN - Sunday, January 13, 2019 - link
Chip cost is also only one part of total GPU cost. If 7nm were as expensive as you are claiming then the RTX 2060 would have to sell for $1000+ to break even.Znaak - Wednesday, January 9, 2019 - link
Did I see that correctly - it retains the MI50's 1:2 double precision rate?
Where can I send my money? This is epic for $699; nVidia has nothing that can compete with that anywhere near that price!
richough3 - Wednesday, January 9, 2019 - link
In competitive gaming, speed is more important than visual quality, so if AMD can become the FPS king, then their cards will sell despite not having RTX or DLSS.Jredubr - Wednesday, January 9, 2019 - link
I am by no means an engineer; I'm just someone who likes to keep in touch with technology and who has been reading this website for probably more than 8 years. But I've always felt that AMD has been lacking in the back end since the X800 and X1950 era. They have focused a lot on the shaders, but it has always seemed to me that their ROPs were behind.
So it's good to see them improving their ROP count, and I think they might have something special here.
Anyway, they really need an architectural change. GCN is limiting their shader/processor count to 4096 and it is awful from an energy-efficiency point of view. This might give NVIDIA some competition, but it is very far from something as unique as Ryzen; it just gives AMD time to breathe. Once NVIDIA hits 7nm they'll be slaughtered if they can't deliver another architecture.
mode_13h - Wednesday, January 9, 2019 - link
I agree that GCN is in need of replacement. I also suspected there was something about it that was limiting the shader count to 4096, but an AMD engineer I asked said that's not true. So, I guess there are probably other (scalability-related) reasons they're limiting it to 4096.tipoo - Thursday, January 10, 2019 - link
That's right, it's the number of shader engines, every GCN card has been limited to 4.I think we will not see this removed until "Next Gen", after Navi.
Manch - Wednesday, January 9, 2019 - link
I was wondering if I would regret my VEGA 64 purchase in Nov. Picked it up for $420 with the three game bundle. Nope, Im good!zaza - Wednesday, January 9, 2019 - link
So two years after Nvidia released the 1080 Ti, AMD finally has a competitor that matches the launch 1080 Ti at the same price, while using more power even on a more modern process? I'm kind of disappointed. I wish they had an 8 GB version for $550; that would have been a good deal.
Also, if this is the same price as the RTX 2080, why would anyone choose this over the 2080? The 2080 comes with DLSS and ray tracing. Unless that doubled amount of memory is really very important.
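To put the price/performance argument in this sub-thread into numbers: a small sketch using the ~25% uplift over a reference Vega 64 claimed by AMD and the ~$400 street price for Vega 64 mentioned elsewhere in these comments (all figures illustrative):

```python
# Relative performance per dollar, using figures quoted in this thread.
cards = {
    "Vega 64 (street)": {"price": 400.0, "perf": 1.00},
    "Radeon VII":       {"price": 700.0, "perf": 1.25},   # ~25% claimed uplift
}
for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['price'] * 1000:.2f} perf per $1000")
# Vega 64:    ~2.50
# Radeon VII: ~1.79  -> worse perf/$ at launch pricing
```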
mode_13h - Wednesday, January 9, 2019 - link
Looking only at gaming use cases, I think you're right. Also, this doesn't catch Nvidia on deep learning.I look at this almost like a Titan card - something that doesn't really make a lot of sense for the mass market, but it's a really good deal for a few significant niches.
tipoo - Thursday, January 10, 2019 - link
Should be a shoe-in for an iMac Pro refresh.mode_13h - Thursday, January 10, 2019 - link
I think you're onto something...shompa - Sunday, January 13, 2019 - link
Apple uses pro-cards, not home cards.Manch - Thursday, January 10, 2019 - link
RT is a fair point but of limited use and lets be real, RT will improve greatly next gen. 1st gen is always meh. DLSS however I don't see it. Not that DLSS isn't useful. Its far more useful than RT ATM. However, and this is a big however, this card is targeted @ 4k where DLSS isn't really useful. TBS, for VR, would/could DLSS be useful? In the case of VR & 4k gaming, I can see that doubling of the frame buffer being extremely useful.mapesdhs - Thursday, January 10, 2019 - link
zaza, DLSS looks terrible (GN's FFXV review) and has very little support; ray tracing is a gimmick that's too slow; but NPCs lap it up.
You should reverse the question: if the cards are about the same, why is your choice NVIDIA? This is the NPC mindset, and it's why AMD's new card won't sell; it needs to be at a much lower price to really make people switch, no matter how fast it is. I've seen tech sites on numerous occasions simply default to NVIDIA in their casual conversation, even when referring to performance/spec situations where AMD made way more sense (e.g. RX 580 for 1080p).
Manch - Thursday, January 10, 2019 - link
RT implementation has gotten better. There wasn't a lot of lead time for the GD for RT. TBS it's gotten better, but how much more will it on current HW? It may end up being decent. Again, odds are, the next gen will have more than tangible benefits if GD implement it. DLSS as with any post processing will vary per game. FFXV is just one game. I've seen a few games where the results are pretty good. The issue is this card is being used for 4K. At 4K DLSS is pointless.What can VEGA 20 do? It can run 4k, it can run VR quite well supposedly. It will boil down to your use case and what benefits you more. RT/DLSS or double the frame buffer? No need to reverse anything.
As far as sites being stuck up NVidias or AMD's ass. I don't care. I tend to ignore sites like that. Plenty of sites that provide unbiased information & data to form an opinion and decide what cards provides the value to you. Im happy AMD has a card theyre pitting against the 2080. If theyre that confident then maybe when anand reviews it, we will have one hell of a face off. Competition is great. Choices are great.
Glanton - Sunday, January 13, 2019 - link
If there's a real "NPC mindset" it's fascism-lite nerds repeating moody teen sociopath philosphy from 4chansilverblue - Wednesday, January 9, 2019 - link
A few of us did query about this happening, glad to see we weren't wrong. I'm not overly excited about it, though - if it's functionally the same as Vega 10, we won't be getting driver-based automatic primitive shaders, meaning we'll get a mostly-unshackled Vega performing close to 2080 levels and doing well at higher resolutions, but at higher power. Also, will they be losing money on having 16GB HBM2? Will there be a fully enabled Radeon VII in future?It's not as interesting as Navi is bound to be, whenever we get to hear more about it.
Thernn - Wednesday, January 9, 2019 - link
Paul Alcorn says it IS 4.0 capable.Ryan Smith - Wednesday, January 9, 2019 - link
Paul hadn't heard the whole story at first. We've since confirmed it's PCIe 3 only. AMD is restricting PCIe 4 to the MI50 & MI60.
Yojimbo - Wednesday, January 9, 2019 - link
They are ok to price it in a way that not many people want to buy it because they will only make 4 of them.If they tried to make more and tried to price them competively they would lose money.eastcoast_pete - Wednesday, January 9, 2019 - link
@Ryan: Thanks! Mainly good news, I guess. One question I had for a long time is why AMD is not coming out with a beefy GPU like this one, but combine it with a good helping (12 GB or so) of significantly cheaper GDDR 5x or GDDR 6?I read many times that a key reason why Vega 56 and 64 were pricey was the cost for HBM2 vs. GDDR 5 or 6. I know that HBM has higher throughput, but is it really that decisive? Appreciate any comments you may have!
mode_13h - Wednesday, January 9, 2019 - link
Vega's price floor was also due to the die size, which had 12.5 B transistors. Compare that to the GTX 1080 Ti's 12 B transistors and you get the idea.eastcoast_pete - Wednesday, January 9, 2019 - link
Yes, the 64 die was/is big, but I remember reading a number of times how the extra costs for HBM-type memory kept the build costs for Vega cards high. Just wondering how much performance would be sacrificed vs. how much cheaper a 8 or 12 GB card would be with GDDR 5x or 6.mode_13h - Thursday, January 10, 2019 - link
I didn't mean to contradict the HBM2 effect. Just saying that die size was also clearly a factor.I was somewhat surprised to see Vega 64 dip around the $400 mark, recently. I wonder if they were taking a loss on those, selling off a bit of cryptomining overstock.
maroon1 - Wednesday, January 9, 2019 - link
AMD stopped using Doom and Wolfenstein II for the Vulkan test. Instead they use Strange Brigade, a game that nobody plays (it is even less popular than Dragon Quest XII on Steam).
1 fps faster in two AAA games that are known to run better on AMD GPUs?! This does not look good for AMD. If AMD's marketing slides have trouble showing it winning by more than 1 fps, then imagine what happens when independent reviewers test it. LOL
boeush - Wednesday, January 9, 2019 - link
No need to imagine. I fully expect Anandtech to have a full review placeholder up as soon as the GPU is in retail this February - and for that review to be incrementally filled in over the ensuing several weeks thereafter :Pjabbadap - Wednesday, January 9, 2019 - link
There is a reason for that. Just look hardware unboxed game review on youtube(hint DX12 vs Vulkan).gsalkin - Wednesday, January 9, 2019 - link
So based on the raw TFLOPS, we're looking at a Vega 64 Liquid with the TDP of Vega 64 air-cooled? That's not too bad but tbh I expected more from 7nm.Bet the ROPs situation will help a lot
tommo1982 - Wednesday, January 9, 2019 - link
OR... perhaps Navi is delayed. AMD created Radeon VII to fill the space temporarily. There might be a case where RayTracing became a viable option and they are doing last minute additions.mode_13h - Wednesday, January 9, 2019 - link
I think Navi is targeted at a lower market segment. As others have said, this was pretty clearly an opportunistic move by AMD.tipoo - Thursday, January 10, 2019 - link
Vega on 7nm before Navi has been in the roadmap for a while. If it was Navi now that would have been great, but this seems opportunistic for getting a new fab earlier than Nvidia.mode_13h - Thursday, January 10, 2019 - link
Yeah, but they've always said Vega 7 nm was primarily about GPU compute & deep learning. They never committed to a consumer version.mode_13h - Wednesday, January 9, 2019 - link
"it’s the Vega we all know and love. There are no new graphical features here"Any chance they've fixed or improved recent additions, such as the DSBR?
webdoctors - Wednesday, January 9, 2019 - link
They took the Vega64 you loved at $400, and re-released it for $700 so you can have 1.75X the love :-Pmode_13h - Wednesday, January 9, 2019 - link
I know you're trying to be cute, but that's not true. They did significant engineering work to add in the new deep learning instructions and move from GloFo's process to TSMC. So, it stands to reason there might've been other fixes or tweaks in there.Believe it or not, chips have bugs. Working around them usually has a performance impact.
Those are often fixed in the next generation, meaning the performance-robbing workarounds are no longer needed.
Ryan Smith - Wednesday, January 9, 2019 - link
"Any chance they've fixed or improved recent additions, such as the DSBR?"
Nothing to that effect has been discussed.
mode_13h - Thursday, January 10, 2019 - link
Thank you for confirming.And thanks for the CES coverage.
Digidi - Wednesday, January 9, 2019 - link
@AnandtechAny news if NGG and primitive shaders are now working?
randycool279 - Wednesday, January 9, 2019 - link
so they aren't competing with the 2080ti? That sucks...mode_13h - Wednesday, January 9, 2019 - link
But this was expected. Turing was made as a consumer product, while Vega 20 was made for enterprise/cloud.It's a little like why the Titan V is bigger, more expensive, less function (no RT cores) and yet still a bit slower than the RTX 2080 Ti, even though they're on the same process node.
mapesdhs - Thursday, January 10, 2019 - link
Good lord, are you serious?? :D Turing is not in any way a consumer product. It's just the table scraps from the compute market packaged up with some flimflam & b.s. marketing to sell to gamers who are too NPC minded to realise they're being duped up the wazoo (we've seen how the PR was fake, the months with no RTX, the terrible performance, the speedup only achieved by reducing amount of raytracing, and who cares in such a grud awful game as BF5 anyway). These are not *gaming* cards, they're compute dies that didn't make the pass for Enterprise.mode_13h - Thursday, January 10, 2019 - link
Yes, I'm serious. Turing doesn't have the raw fp64 throughput or HBM2 of their big HPC chips.Targon - Wednesday, January 9, 2019 - link
What would be interesting would be if card makers release these Radeon 7 cards in an 8GB configuration to drop the price. I'm not sure how much of an improvement in performance we would see with that jump from 8GB to 16GB, so it might just increase the cost without much of an improvement in performance for MOST people.mode_13h - Thursday, January 10, 2019 - link
Well, if you look at the gaming performance increase, it's a lot more than the ratio of fp32 compute. That strongly suggests most of the benefit is from more than doubling the memory bandwidth.If you cut down the memory back to 8 GB, then you'd probably be left with something that performs almost the same as Vega 64, but a bit more power-efficient. Probably still not as efficient as a GTX 1080, however. And the large die will probably still impose a fairly high price floor. Using cloud/HPC-oriented GPUs for consumers is never going to be a cost-effective solution.
The better solution would be if AMD has some kind of scalable, multi-die solution with Navi.
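Spelling that ratio argument out with the published board figures (peak TFLOPS and bandwidth; the claimed gaming uplift is the ~25% average discussed in this thread - treat it all as a sketch):

```python
# Rough ratios behind the "most of the gain is bandwidth" argument.
vega64 = {"fp32_tflops": 12.7, "bandwidth_gbs": 484}     # reference Vega 64
radeon7 = {"fp32_tflops": 13.8, "bandwidth_gbs": 1024}   # Radeon VII peak figures

compute_gain = radeon7["fp32_tflops"] / vega64["fp32_tflops"] - 1
bandwidth_gain = radeon7["bandwidth_gbs"] / vega64["bandwidth_gbs"] - 1
claimed_gaming_gain = 0.25   # the ~25% average uplift discussed in this thread

print(f"FP32 compute:     +{compute_gain:.0%}")          # ~+9%
print(f"Memory bandwidth: +{bandwidth_gain:.0%}")        # ~+112%
print(f"Claimed gaming:   +{claimed_gaming_gain:.0%}")   # ~+25%, well above the compute gain
```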
iwod - Thursday, January 10, 2019 - link
100 comments and not a single one on why Vega 20 is coming to gamers.
I doubt AMD made this move just to counter the RTX 2080; it's simply that AMD had the option. TSMC will likely not be fully utilising their 7nm fab capacity now or in the months to come, since none of the other 7nm players have the demand. So AMD is likely taking advantage of that and might as well launch their 7nm GPU to consumers.
shompa - Sunday, January 13, 2019 - link
And the real reason is: these chips are harvested from dies that can't be sold as the fully enabled part, so let's sell them instead of scrapping them. It's not about TSMC having excess 7nm wafer starts; it doesn't work like that. AMD bought X amount of wafer starts (since other companies have priority), and the only thing that changes for AMD is that their ordered wafer starts get filled quicker.
crotach - Thursday, January 10, 2019 - link
If this results in a price drop of 2080 RTX then it's already an achievement. The GPU market badly needs competition.mapesdhs - Thursday, January 10, 2019 - link
Only if people actually buy the AMD product. If the response is to simply buy the NVIDIA card, as so many want to do, then this market will die a death of innovation for years just like the CPU market did. People who take advantage of new AMD products as a route to a cheaper NVIDIA product are sustaining the market stagnation in GPU tech we've already seen result in the overpriced RTX line. This is why AMD's new card won't sell well, it needs to be far cheaper to attain both market share and mind share; it has to break past the consumer and tech media NPC mindset, and that isn't going to happen with price parity, especially not if there's any kind of power/heat/noise element people can yell about (even if they never moan when NVIDIA acts the same way, or actually promote AMD when they have a product that doesn't have such issues).AMD needs something that's a huge amount cheaper than NVIDIA to really get some talk going on forums and tech sites, and that isn't going to happen with anything targeting upper RTX; NVIDIA can simply lower its prices in response, it has plenty of existing overpriced margin with which to do so. AMD needs a pricing gap large enough so that NVIDIA dropping its price to match would make people instantly understand how much they were previously being ripped off. This is a battle that needs to play out first in the mid range, what was the 1060 market (for that there's Navi). AMD can't win this struggle at the high end. I really don't understand why they're trying to do this, it isn't going to work.
BenSkywalker - Thursday, January 10, 2019 - link
You want to avoid the death of innovation by encouraging people to buy a rehash of old tech that wasn't good to begin with, over the product that is being lamented for having too much innovation?
All of the higher-end parts are priced outside of where we would like them, I get that, but trying to champion AMD support in the name of innovation seems comically out of touch with reality in the GPU sector.
If you want absolutely zero innovation, if you just hate it with every fibre of your being, then this is your graphics card. Without question.
shompa - Sunday, January 13, 2019 - link
It is a huge difference. Intel did NOTHING between 2006-2017. AMD gained 1% market share in 2017 and somehow this changed the whole x86 industry? Not the 8+ billion ARM chips that were sold? Nvidia makes their money on Tesla cards and workstations; gaming cards are not huge profit items, especially since high-end cards are not a huge market. Nvidia at least approximately doubled performance every 24 months. Most normal gamers think 4K/60Hz is OK. When you have cards that manage that, it's stupid to add more performance without adding new features, like ray tracing. DLSS is genius. In the future, we can have small local graphics cards/cloud gaming and let the supercomputer render it. It's such a fanboy comment that we should buy AMD just to have competition. Make good products and we will buy them. If you want a 1080p card, I would recommend AMD. If you want a high-end card, it's Nvidia.
GruenSein - Thursday, January 10, 2019 - link
While this GPU may be able to at least compete with the RTX 2080 in performance and price (or serve as a niche product for those looking for FP64 compute), this is achieved by a combination of factors which do not shine a good light on Vega as an architecture.
At roughly the same number of transistors, it matches the performance while using more power, even though it is manufactured on a superior process. So the efficiency metrics aren't just worse than Nvidia's but actually terrible. Additionally, Nvidia dedicates a lot of its transistor budget to tensor cores and ray-tracing units, which means AMD needs more transistors for the same functionality. And then there is the fact that the RTX 2080 reaches its performance level with lower theoretical throughput, hinting that Vega simply isn't able to make use of all its compute resources (the rough numbers sketched below illustrate the gap).
As much as I'd like AMD to catch up or even pass Nvidia for many reasons (such as open source support), I cannot shake the feeling that this is more of a bandaid to keep the graphics division from bleeding out this year. It is cleverly positioned considering AMD's options (7nm being the only thing they have going for them), but it is by no means technologically on par. Let's hope that Navi can do better.
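A minimal sketch of that comparison, using rough public figures for the two chips (transistor counts, die sizes, board power, and peak FP32 rates are approximate and vary by SKU and boost behaviour):

```python
# Rough public figures -- approximate, and boost clocks vary by SKU.
chips = {
    "Radeon VII (Vega 20, 7 nm)": {
        "transistors_bn": 13.2, "die_mm2": 331,
        "fp32_tflops": 13.8, "board_power_w": 300,
    },
    "RTX 2080 (TU104, 12 nm)": {
        "transistors_bn": 13.6, "die_mm2": 545,
        "fp32_tflops": 10.1, "board_power_w": 215,
    },
}

for name, c in chips.items():
    gflops_per_watt = c["fp32_tflops"] * 1000 / c["board_power_w"]
    tflops_per_bn   = c["fp32_tflops"] / c["transistors_bn"]
    print(f"{name}: {gflops_per_watt:.0f} GFLOPS/W, "
          f"{tflops_per_bn:.2f} TFLOPS per billion transistors")

# If the two cards end up at similar gaming performance, the 2080 is
# getting there with ~27% less peak FP32 throughput, less board power,
# an older node, and transistors also spent on tensor/RT units -- the
# utilization argument made in the comment above.
```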
mapesdhs - Thursday, January 10, 2019 - link
None of that lovely tech analysis is a rationale for not buying the AMD product, but it's this kind of banter which dissuades people from doing so. The NPC mindset marches on.
D. Lister - Friday, January 11, 2019 - link
Going through this comment section, it is easy to see that your tenacity in shilling rises far above the merely average. Keep at it... it amuses.
wr3zzz - Thursday, January 10, 2019 - link
How much cost could AMD save if they used 8GB of VRAM instead of 16? HBM2 has gotta be pretty expensive. A $100-150 saving would make the VII very compelling vs. the 2080.
Veradun - Thursday, January 10, 2019 - link
I can't see those 128 ROPs. They should still be 64.
Veradun - Thursday, January 10, 2019 - link
https://i.postimg.cc/vByxh61z/Vega10.png
https://i.postimg.cc/Kz5kHvBy/Vega20.png
del42sa - Sunday, January 13, 2019 - link
Yes, you were right.
tipoo - Thursday, January 10, 2019 - link
So, maybe a touch faster than 2080 performance for about the same 700 dollars, but having to get there a fabrication node ahead, without any of the silicon Nvidia is spending on RTX, and all of it five months later. I guess that's an OK, slightly better value play, but not exactly awe-inspiring, and even a little worrying.
The benchmarks are no doubt picked to be favorable, so the primary draw would seem to be the extra HBM2 memory. It might be an interesting card for researchers, but few titles are hurting on 8GB for gamers yet. Seems like the Frontier again, or what some Titans have been.
Then again, this is only a die-shrunk Vega; Navi will be the interesting one to see as the new architecture.
silverblue - Thursday, January 10, 2019 - link
https://www.amd.com/en/press-releases/2019-01-09-a...
Footnote 3. It's a pain to read, so probably easier to watch the following link from TechEpiphany, which also includes percentages (there is a slide somewhere showing the same information but I can't find it, unfortunately):
https://www.youtube.com/watch?v=9Ir8WJ8ctWM
What I find weird is that AMD conducted the benchmarking suite on a 7700K with 16GB DDR4 3000. Perhaps the 7700K is not the limiting factor.
PEJUman - Thursday, January 10, 2019 - link
Is it pronounced seven? Or Vee Two (Vega two)?
tipoo - Thursday, January 10, 2019 - link
Seven
PEJUman - Thursday, January 10, 2019 - link
It's a joke...
mikato - Thursday, January 24, 2019 - link
Vee Eye Eye
Hixbot - Thursday, January 10, 2019 - link
Wow, really thought AMD would stick it to Nvidia in performance per dollar. Disappointed they're essentially just matching Nvidia here. Seems like a missed opportunity for AMD to take the lead on value. And a big loss for consumers who were hoping for some good competition in the GPU space to drag prices down.
With AMD cards sucking up more watts and outputting more heat, it's a tough sell.
andrewaggb - Thursday, January 10, 2019 - link
I think the card has some merit on the 16GB of HBM2 and FP64 performance (both double the Nvidia 2080). I think it's a stopgap until their new products late this year or next year, but it is a compelling value for some users.
Znaak - Thursday, January 10, 2019 - link
Double the FP64 performance? Try about 7,000 GFLOPS vs 300 GFLOPS for an RTX 2080, if it retains the 1:2 ratio.
Combined with a massive memory bandwidth advantage to keep the cores fed, it's not just compelling :)
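For what it's worth, the arithmetic behind those figures is just peak FP32 times each card's FP64 ratio; a minimal sketch with approximate peak numbers (a later comment in this thread notes the shipping Radeon VII ended up with its FP64 rate capped below 1:2):

```python
# Peak FP64 = peak FP32 x the card's FP64:FP32 ratio (approximate figures).
def fp64_gflops(fp32_tflops, fp64_ratio):
    """Derive peak FP64 throughput in GFLOPS from peak FP32 in TFLOPS."""
    return fp32_tflops * fp64_ratio * 1000

radeon_vii_full_rate = fp64_gflops(13.8, 1 / 2)   # if the 1:2 ratio were kept
rtx_2080             = fp64_gflops(10.1, 1 / 32)  # consumer Turing runs FP64 at 1:32

print(f"Radeon VII at 1:2 -> ~{radeon_vii_full_rate:,.0f} GFLOPS FP64")  # ~6,900
print(f"RTX 2080 at 1:32  -> ~{rtx_2080:,.0f} GFLOPS FP64")              # ~315
```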
shompa - Sunday, January 13, 2019 - link
Nvidia is not overcharging for their cards. Why don't people understand that it costs money to produce huge dies with fast memory? Both AMD and Nvidia charge $3K+ for the same graphics cards but with "pro drivers". I hope people remember this when Ryzen 3 is released. Doing 7nm chips at least doubles the manufacturing costs for AMD, so people who believe we will get a magic 16-core mainstream Ryzen for 300 dollars: forget it. It's also funny that the A12X is larger than Ryzen 3, and still all the "experts" put the value/BOM of the A12X at 25 dollars. Somehow if a chip is x86 it's worth 10 times more.
mattkiss - Thursday, January 10, 2019 - link
I thought the MSRP for the base Vega 64 was $499, not $599?
Ryan Smith - Thursday, January 10, 2019 - link
You're right. Thanks! I pulled the wrong value from the original review, since AMD had their weird bundle program going on.
mattkiss - Thursday, January 10, 2019 - link
I think the Limited Edition Vega 64 with the cool brushed-aluminum cover had an MSRP of $599, so maybe it's more appropriate to compare that to the Vega VII, since that's what's being shown by AMD. Or perhaps the "base" Vega VII will be $699 and the "limited edition" will be $799. ;)
eastcoast_pete - Thursday, January 10, 2019 - link
The Radeon VII must be doing something right: Nvidia's head honcho Huang made some quite derogatory comments about it at CES. I don't believe that Nvidia is upset over the Radeon VII being a lousy card - that could only help Nvidia move more 2070 and 2080 cards. So, why, Jensen Huang: Worried much? Is the Radeon VII really that good?
D. Lister - Friday, January 11, 2019 - link
"Is the Radeon VII really that good?"It performs close to a 2080, using more power even though being at a smaller process node, and offers far fewer features, and those that it does offer are essentially useless for the gaming market it is being released for (16GB VRAM, double precision).
It is what it is: a rebadged Radeon Instinct M160 cut down from 64CU to 60CU that was released last November as an enterprise accelerator, which they are releasing now as a gaming GPU to coast them over in the consumer market until Navi comes out next year.
eastcoast_pete - Friday, January 11, 2019 - link
Don't disagree completely, but the question remains: why did Nvidia's Jensen Huang go on a rant about the Radeon VII? If it is indeed such an inferior card, the smart thing to do would be to wait for the first tests to be published, and then just say how pleased Nvidia is with the good showing of their 2080. Huang's trash-talking the Radeon seems to suggest he is worried. His unprofessional behavior just made the Radeon VII a contender to watch.
lilkwarrior - Saturday, January 12, 2019 - link
Because at the end of the day, it slows innovation in the mainstream GPU market. They really had one job: finally ship a GPU with dedicated TPUs like Tensor cores, to handle compute more efficiently for the vast majority of heavy pro GPU users.
As for DXR, AMD should be adopting it sooner (they've confirmed they are, but apparently not until Navi).
mikato - Thursday, January 24, 2019 - link
I think it just suggests, nay confirms, he is a dbag.
peevee - Thursday, January 10, 2019 - link
"Instead, the biggest difference between the two cards is on the memory/ROP backend"So it is specifically for higher resolution, not for more complex scenes. Makes sense for beyond-vision-res-obsessed kiddies.
Storris - Friday, January 11, 2019 - link
128 ROPs. Discuss.
Storris - Friday, January 11, 2019 - link
I mean, this represents a massive change. Something this large should probably have been leaked at some point - at least a couple of benchmark sites with unrecognisable device IDs and unusual configs - but there was nothing.
The only thing we got on consumer Vega 7nm is two leaks: the first saying that consumer Vega was binned, then more recently that consumer Vega is coming.
If both of these leaks are accurate, and there's no reason why they can't both be, it points to a last-minute change of plans. That, and the lack of spec/performance leaks, points to the Radeon 7 being a reheated MI50.
del42sa - Saturday, January 12, 2019 - link
No 128 ROPs. Period! End of story.
eastcoast_pete - Saturday, January 12, 2019 - link
Unfortunately (for AMD and fans), the Radeon VII has (only?) 64 ROPs, just like the Vega 64. That changes the value proposition from exciting to so-so, at least at the launch price; it's now at least $100-150 too high. Still great for certain compute tasks, but, as it stands, not an RTX 2080 slayer anymore. Bummer! Nvidia could use a hard kick in the pants so they come down in price.
eastcoast_pete - Friday, January 11, 2019 - link
Here's another thought on why AMD is coming out with this 16 GB HBM2 card now. I suspect AMD and several of its partner companies either have a lot of HBM2 silicon in stock or firmly on order, something that happened before the crypto craze collapsed. Now, that HBM memory has to be shifted so it won't hang around their necks like a millstone. One good way to do so is to be generous with HBM2 on a higher-end consumer card. They might even lose money on it, but nowhere near as much as if it just sits around. Cynical, I know.
DillholeMcRib - Friday, January 11, 2019 - link
I'm not a huge graphics fanboy for either camp. That being said, this is embarrassing for $600 given the hype around 7nm and all that jazz. AMD should take a loss, undercut Nvidia @ $450 and wreck them. Lame move, AMD.
del42sa - Saturday, January 12, 2019 - link
We know the base clock, so why is it not in the chart? Huh?
del42sa - Saturday, January 12, 2019 - link
Your info about the Vega ROPs was edited (in silence), but no apology for the misleading information?
WhiteSkyMage - Saturday, January 12, 2019 - link
I will have to buy this GPU, no other choice. I could perhaps find a used 1080 Ti, but I have a FreeSync monitor and I am not sure how well it would work with NVIDIA. So far Vega 64 has been good, but I need more performance for UWQHD+. I want to at least stay in my FreeSync range (48-75 FPS); I usually need to lower some settings a notch to get around 50-60 FPS at my resolution (3840x1600). I am guessing the Radeon VII can do the job with a Ryzen 9 @ 5.1 (if rumors hold true) and fast B-die memory. I know it's made for compute and the RTX 2080 is its equivalent with the RT & DLSS features, which BTW I don't plan to use even if the games that come out support them. I don't need RT at all...
And besides, the RTX 2080 has a very aggressive power limit. I want a card that allows as much power as it needs so it can stretch its legs. I know the Radeon VII will consume a lot, but power consumption does NOT matter to me at all.
noxlar - Sunday, January 13, 2019 - link
Use Vega 64 + Radeon 7 in CrossFire. I'm running a CrossFire setup and have been doing so for some years; when software supports CrossFire it really flies and has never been unstable. It's only a problem when you force CF on software that isn't meant to CrossFire. I don't game much, but I know applications have a lot better support for CrossFire.
silverblue - Monday, January 14, 2019 - link
For those who haven't noticed, the Radeon VII actually comes with 64 ROPs. There's an article on ExtremeTech about it, here:
https://www.extremetech.com/gaming/283649-the-amd-...
BigMamaInHouse - Monday, January 14, 2019 - link
What about the FP64? It turns out it is indeed capped on the Radeon VII:
https://techgage.com/news/radeon-vii-caps-fp64-per...
Questor - Wednesday, January 16, 2019 - link
"If AMD manages to reach RTX 2080 performance, then I expect this to be another case of where the two cards are tied on average but are anything but equal; there will be games where AMD falls behind, games where they do tie the RTX 2080, and then even some games they pull ahead in."Here is the problem with mind-share bias. Instead of wording this with a positive first, followed by the equal and ending with negative, we get what we see written here. Whether intended or not, it is giving an affirmation to Nvidia and placing AMD as an also ran wannabe. Even if they are, it is not unbiased journalism's job to tell us this. It is their job to present the facts and let the reader form his or her own opinion about these facts.
Anyone who believes AMD Radeon is across-the-board equal to Nvidia at this point in time is deluding themselves. That still doesn't excuse journalistic mistreatment, whether intended or not, of products that do compete in whatever area or at whatever level, be it price, performance, features, efficiency, or any combination of them.
A better way to have written this: if the Radeon VII reaches RTX 2080 performance, then, as testing has shown in the past, the competing cards will trade performance wins depending on the game title. There will be games where the Radeon VII takes the win, games where they tie, and games where the RTX 2080 wins.
I want competition. That will never happen if the tech press keeps downplaying some brands and giving the nod to others, again, even if accidentally or unintentionally. It's indicative of the space Nvidia owns (rent-free, I might add) in the minds of tech writers.
BenSkywalker - Wednesday, January 16, 2019 - link
As journalism goes, this is a shockingly weak part: nothing but a die-shrunk Vega at 7 nm with no ray-tracing cores and no tensor cores. AMD should be *crushing* the 2080 Ti. This shouldn't be close; they should be using less power, with a smaller die, delivering better performance than anything team green has, and clobbering them on pricing.
As tech journalism, this reporting comes across as borderline AMD marketing slides, though to be fair that is what they were covering. Your attempt at reading implied bias wasn't very strong. I'm curious whether we've ever seen a new process node produce a GPU that was both more power hungry and considerably slower than the competition on the older node; I can't think of one before now.
D. Lister - Thursday, January 17, 2019 - link
@Questor
I don't think you comprehend what "unbiased" means. As if tech journos don't handle AMD with kid gloves already. The sad truth of the matter is, this is a pretty awful product that only the most misinformed gamer will buy, period.
BigMamaInHouse - Wednesday, January 23, 2019 - link
http://www.sapphiretech.com/productdetial.asp?pid=...
Radeon 7 listed!