BigMamaInHouse - Tuesday, April 23, 2019 - link
Saw links to the driver listed already: Version: 430.39 WHQL.
https://www.nvidia.com/Download/driverResults.aspx...
jeremyshaw - Tuesday, April 23, 2019 - link
Reading the release notes, this is the new driver stack (430). No more 3D Vision & no more Mobile Kepler support. 356MB, a bit smaller than the 550MB+ drivers previously offered.
LiquidSilverZ - Tuesday, April 23, 2019 - link
"356MB, a bit smaller than the 550MB+ drivers previously offered"The driver download page lists the file (430.39) as 356MB, however once you click the download link, it comes up as 537MB actual download. Very frustrating.
I have to pay for bandwidth and sure miss the old style releases that had English (which were small file size downloads) and International as two separate choices. These international only downloads are bloated and a waste of bandwidth.
ballsystemlord - Saturday, April 27, 2019 - link
Maybe you don't know, but tools like curl can issue a HEAD request (curl -I URL, with -I being the letter i, not the letter L) that will tell you the file's length at the cost of only a few bytes sent each way.
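For instance, here is a minimal Python sketch of the same HEAD-request idea (the URL is a placeholder, not the actual NVIDIA download link):

```python
import urllib.request

# Ask the server for the headers only; the response body is never downloaded.
url = "https://example.com/driver-package.exe"  # placeholder URL, not the real download link
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    size = resp.headers.get("Content-Length")
    if size is not None:
        print(f"Reported size: {int(size) / 2**20:.1f} MiB")
    else:
        print("Server did not report a Content-Length header.")
```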
jeremyshaw - Tuesday, April 23, 2019 - link
I suppose this brings up a few questions: is this the end of fabbing at Samsung for Nvidia? How close to 75W is this card? I remember the GTX 1050 Ti having a fair bit of headroom under that 75W claim. Is there even a possibility of a passive card? They seem to be getting fewer in number every generation.
PeachNCream - Tuesday, April 23, 2019 - link
Some of those OEM boards are huge and feature obnoxious (possibly unnecessary) multi-slot coolers. Given the TDP, I'm betting the card could operate very well with far less fan and radiator surface, but OEM perception of gamer expectations is probably the impetus for the huge coolers. Maybe there are cost savings in using relatively standardized, giant double-slot solutions that were already developed for more demanding GPUs.
jeremyshaw - Tuesday, April 23, 2019 - link
LOL, I just saw EVGA's late April Fools' joke: a triple-slot, dual-fan GTX 1650 card with a PCIe 6-pin connector and stock clocks.
marsdeat - Tuesday, April 23, 2019 - link
I thought you were JOKING, but had to go check it out. They even list the power at 75W! What on earth are they DOING?!
BenSkywalker - Tuesday, April 23, 2019 - link
Overclocking.
If the board draws 75 watts at default clocks and lacks an external power connector, then you are pretty much capped when it comes to overclocking (within a fairly small variance). If you are an enthusiast considering this after reviews hit, an external power connector should be at the top of your list (unless you are looking for a quiet SFF-type setup).
PeachNCream - Tuesday, April 23, 2019 - link
I won't argue whether a 75W card should get an external power connector for overclocking, but the triple-slot cooler with two fans is overkill even if the OC pushes the card up another 25-30W.
flyingpants265 - Wednesday, April 24, 2019 - link
Jeez, first time I've seen someone complaining about high-end GPU cooling.
jharrison1185 - Tuesday, April 23, 2019 - link
While these are only rumours, Navi 12 is supposed to be a 75W-or-less solution. Let's hope it comes sooner rather than later, as a new AMD 7nm solution should blow this P.O.S. out of the water.
malukkhel - Tuesday, April 23, 2019 - link
Please let Navi be better; I'm tired of the Nvidia/Intel monopoly.
Opencg - Tuesday, April 23, 2019 - link
Navi will likely be a lot better. Its main design target is improving the performance/production-cost ratio, and Navi 20 will target the high end, according to AMD. This is an indication that their multi-die GPU tech is a success. They will likely do more than break even against current offerings in the value market this year with Navi 10.
thetelkingman - Tuesday, April 23, 2019 - link
AMD needs to step up their game in driver support and software. Nvidia has some amazing software that comes with their graphics cards, and Intel has pretty much all the features in terms of new technologies.
jharrison1185 - Tuesday, April 23, 2019 - link
One can hope.
ballsystemlord - Saturday, April 27, 2019 - link
The current thinking is that Navi will not be very performant, but it will be cheaper -- much cheaper.
Navi is, after all, just another refinement of GCN. Ignore Arcturus and the other myths.
AMD's GCN was designed to be a general-purpose compute and graphics arch, and it scaled in clock frequency in line with the cards of its time. Nvidia's next generation was Maxwell, which was so much faster because it clocked so much higher.
AMD's original clocks were 0.8GHz, Maxwell's were 1.2GHz. That's 50% higher clocks. And that's before overclocking, which AMD's cards were terrible at and Maxwell was very good at.
AMD later clocked their cards up to 1050MHz but were still having a hard time of it, while Nvidia's Pascal arch (an improvement on Maxwell, mostly video codecs and RAM speed) was clocked at over 1.7GHz. AMD then released Polaris, which runs at ~1.3GHz.
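A quick back-of-the-envelope check of those clock gaps, using the round numbers above (illustrative figures, not exact boost clocks):

```python
# Rough check of the clock gaps cited above (round numbers, not exact boost clocks).
gcn_ghz, maxwell_ghz = 0.8, 1.2
print(f"Maxwell vs. early GCN: {(maxwell_ghz / gcn_ghz - 1) * 100:.0f}% higher clocks")     # 50%
polaris_ghz, pascal_ghz = 1.3, 1.7
print(f"Pascal vs. Polaris:    {(pascal_ghz / polaris_ghz - 1) * 100:.0f}% higher clocks")  # ~31%
```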
Even if you clocked GCN up, it's still just a little slower. It's not an amazing arch; it was their first try at something different.
It's also been rather memory-bandwidth starved. What works better, overclocking Vega 64's HBM or its cores?
I've heard, and believe, that AMD management was not so good either, which would leave GCN underfunded and possibly mismanaged.
There's something holding them back, I don't know what. Not the clocks now, with Radeon VII; not the memory, with Radeon VII. The ROPs aren't a likely candidate either, with Vega 64 not benefiting much from overclocking -- even after you speed up the HBM.
I suspect that there is a small piece of Vega -- the warp scheduler, from memory, seems most likely -- that is particularly bad at what it does, hence the ~20% performance increase when doing parallel GPU submission (think Mantle), and the resulting poor performance. Perhaps overclocking the cores does not even touch that piece.
I currently own only AMD HW (OK, some ARM stuff too, but that does not count), so I'm not writing this to shill. This is their story, albeit I don't know all of it.
https://www.anandtech.com/show/7728/battlefield-4-...
https://www.anandtech.com/show/10446/the-amd-radeo...
https://www.anandtech.com/show/11180/the-nvidia-ge...
https://www.anandtech.com/show/8526/nvidia-geforce...
https://www.anandtech.com/show/5881/amd-announces-...
psychobriggsy - Tuesday, April 23, 2019 - link
So 1/3rd more theoretical performance than the 1050 Ti or 1050, yet only 10% more memory bandwidth (possibly better bandwidth compression in Turing?). And at a higher cost (10-35%), despite being launched 2.5 years later.
This would be a good generational release if the price weren't this high. This pricing (no doubt because of the 200mm^2 die) really allows AMD to position the 4GB RX 570 as a viable competitor until they launch Navi 10 Lite, or Navi 12, against it.
Yojimbo - Tuesday, April 23, 2019 - link
Yojimbo - Tuesday, April 23, 2019 - link
The theoretical performance doesn't matter for price/performance arguments. You really have to wait for reviews to decide how it stacks up.
PeachNCream - Tuesday, April 23, 2019 - link
Although it's good to see NV responding to the market by offering closer-to-reasonable prices on Turing-based GPUs, it would have been really interesting to see what the 20x0 GPUs could have been without the flubbed bet on ray tracing and tensor cores. Power consumption and performance could both have been better, and the generational improvement over the 10x0 cards might have been significant at the same number of CUDA cores. The 16-series shows us what could have been in the price-performance balance had RT not been mistakenly pushed on a disinterested, cost-sensitive set of game publishers that were never going to pursue the capability, given the sub-par performance even the highest-end 20x0 cards deliver using said feature.
A5 - Tuesday, April 23, 2019 - link
You're crazy if you actually expected significant RT adoption at this point - high-end GPUs don't sell in enough volume for anyone to move beyond token support.
We will see a lot more of it once the next-gen consoles are out with similar hardware.
PeachNCream - Tuesday, April 23, 2019 - link
That is a good point. If RT takes off in consoles, we might see ports on the PC side taking advantage of the feature. Interestingly, with AMD in the lead in the console market, I wonder how the difference in implementations between AMD RT and NV RT will impact performance. Of course, by the time RT has a chance at relevance, NV will have released a new GPU. That second-gen RT will have to do a lot better than the 2080's barely acceptable performance at 1080p in the RT titles we're seeing today.
jharrison1185 - Tuesday, April 23, 2019 - link
That is a good point. I wonder how well Nvidia's dedicated RTX cores will work with what is likely to be the console standard (AMD). I would also guess that if Intel is smart, they will support the same AMD standard. As was mentioned in an earlier comment, game developers would likely want to use the same standard for PC gaming as for consoles. I am quite sure Navi will in some way support ray tracing for next-gen consoles. While Nvidia's solution is probably better, it's unlikely game developers will support both separately in the same game. I can't help but feel Nvidia's ray tracing solution, while likely better, will eventually fade away the same way G-Sync is destined to. And this is all due to the crazy-high Nvidia prices for both their RTX cards and the "G-Sync tax". We live in a free market economy. I recognize that companies are in business to make money, but if you want your implementation of ray tracing to be the standard (most widely adopted), you need to bring your pricing in line with what buyers can afford or watch all your R&D money go down the toilet.
PeachNCream - Wednesday, April 24, 2019 - link
Not sure where we'll end up. Nvidia has a much larger share of the market in PC gaming GPUs, whereas AMD is the only game in town on consoles. I'd almost think that would force developers to spend the time to code RT to work on Nvidia graphics for games that will use ray tracing and exist on both sides of the fence... one would hope so, anyway. We'll have to see how it turns out.
Opencg - Wednesday, April 24, 2019 - link
Don't worry guys, RT in realtime games is gonna take off in about 40 years.
Korguz - Tuesday, April 23, 2019 - link
" we might see ports on the PC side taking advantage of the feature " too bad ports from consoles to the comp.. usually have the same limitations as the console had... if they went from comp to consoles.. then maybe..PeachNCream - Wednesday, April 24, 2019 - link
The typical development route for multi-platform games is to start on the consoles and then port to the PC. There is more profit for individual titles on the console and less hassle to develop for a standardized hardware set. The PC tends to be an afterthought for a good number of large studios, which is disappointing, but those studios have bills to pay and whatnot.
Korguz - Wednesday, April 24, 2019 - link
Yep, now. But before the Xbox 360/PS3 era it was PC first, then console. A good example: Supreme Commander vs. Supreme Commander 2. Sup Com 2 was VERY limited compared to Sup Com.
Just goes to show how that market has changed.
Dragonstongue - Tuesday, April 23, 2019 - link
IMO the only thing that should be below a 256-bit bus is an IGP, such as those found on APUs and the majority of Intel CPUs for the better part of a decade now; for them, 128-bit with fast memory is "good enough". But the performance difference (for gaming) going from 128-bit to 256-bit (with no other changes) helps keep things that much smoother, and the power draw, wiring, etc. are not all that much more complicated until you go above 256-bit. A 256-bit bus has to be "chopped" to run at 192-bit, 128-bit, etc., whereas if it runs at the "native" 256-bit it does not need to waste draw calls and such.
I hear "Turing" and I think "ray trace", but alas nothing below the RTX 2060, to my knowledge, will be able to do so (Nvidia's choice), so 1660/1650/1640 or whatever makes little difference there. Being hamstrung by a 128-bit bus absolutely does, however. The cost to Nvidia is maybe $4, so ~$14 to the consumer, with at least this much of a performance increase to be had as well. "Smart money" says leave the dinky 128-bit bus to the true budget, non-gaming-focused products...
PeachNCream - Tuesday, April 23, 2019 - link
NV enabled software-based ray tracing in their drivers for some non-20x0 cards. It's obviously a lot slower, but it's there. I do agree that we need a bigger RAM pipe. I've been on 64-bit DDR3 dGPUs before (about 14 GB/s) and it's not a fun place to be, which is one of the reasons my Windows 10 laptop's GT 720M is only a hair or two faster than the HD 4400 iGPU on the CPU package, to the point where I've just told the Nvidia driver to globally send everything to the iGPU. The dGPU's presence is only an advantage in keeping fan noise down, since the OEM had to use a bigger heatpipe than the CPU alone needs. GDDR5 at 64-bit helps, but you're still in the neighborhood of about 40 GB/s of bandwidth, which is just not enough to push 1080p, and forget about anything above that.
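Those figures line up with the usual back-of-the-envelope formula, sketched below (the data rates are illustrative assumptions rather than measurements of specific cards):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate in GT/s).
# The data rates below are illustrative assumptions, not measurements of specific cards.
def bandwidth_gb_s(bus_bits: int, effective_gtps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_bits / 8) * effective_gtps

print(bandwidth_gb_s(64, 1.8))   # ~14.4 GB/s: 64-bit DDR3-class laptop dGPU
print(bandwidth_gb_s(64, 5.0))   # ~40 GB/s:   64-bit GDDR5 at 5 GT/s
print(bandwidth_gb_s(128, 8.0))  # ~128 GB/s:  128-bit GDDR5 at 8 GT/s (GTX 1650 class)
```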
jharrison1185 - Tuesday, April 23, 2019 - link
I agree. Nvidia has to keep a very significant performance gap in favor of the RTX cards to justify the crazy-high pricing. They didn't want to make this card "too good", LoL.
isthisavailable - Tuesday, April 23, 2019 - link
I hope Navi is a slap in the face to Nvidia like Ryzen was to Intel. Have had enough of this monopoly.
BenSkywalker - Tuesday, April 23, 2019 - link
AMD has been forced to reduce prices on their existing parts, significantly in some cases, in response to the Turing lineup. A truly monopolistic move from nVidia would have been to price Turing straight across from Pascal. AMD would have been destroyed at every price point, with no hope of competing and without the resources to get into a price war.
You may not like nVidia's current pricing, but it is allowing AMD to survive in this segment. Anyone who thinks we'd be better off long term if Turing was using old price points must truly hate AMD and competition.
jharrison1185 - Tuesday, April 23, 2019 - link
I also think the price reductions are due to large amounts of stock sitting on shelves thanks to contracts signed with chip fabs during the crypto craze. With all these chips out there so close to the Navi launch, they had no choice but to discount them significantly.
jharrison1185 - Tuesday, April 23, 2019 - link
Well said!
pixelstuff - Tuesday, April 23, 2019 - link
Perhaps we'll see a 1630 or 1640 card. Maybe 2030 and 2040.
Nvidia used to have product lines filling the entire bracket, such as the 220, 230, 240, 250, 260, 270, and 280. With all the jacked-up pricing in this 2000 series, perhaps they are planning to release new low-numbered models.
domboy - Tuesday, April 23, 2019 - link
I was thinking the same thing, and I hope so, because at $150 this is not a replacement for the 1050. I would think that many buyers of the xx50 cards (450 through 1050) probably bought them because they're on a budget, and another $50 is a significant jump in price! Not cool.
Great_Scott - Tuesday, April 23, 2019 - link
My suspicion is that the GTX 1050 Ti is either faster or the exact same speed.
There was obviously an attempt to hide the card from initial reviews, and that's my best guess as to why.
mczak - Tuesday, April 23, 2019 - link
No, it is definitely faster than the GTX 1050 Ti (it is also a chip that is 50% larger, after all...). By quite a bit, going by some early review numbers so far (maybe 30% or so on average).
So, compared to that card, it's not bad really: a bit more expensive but quite a bit faster.
The problem here is entirely the RX 570, which is very cheap nowadays. The GTX 1650 is more expensive (hence AMD saying the competition is the RX 570 8GB rather than the 4GB version), yet still quite a bit slower (at least 10% on average) than the RX 570 4GB. And unfortunately for Nvidia, their biggest advantage over the RX 570 (only half the power consumption) isn't really enough of a selling point in the desktop graphics card market.
(Of course, the 1050 Ti looks even sillier against the RX 570 nowadays, as does the GTX 1650, but that wasn't the case when these products launched, since they were in different price brackets.)
jharrison1185 - Tuesday, April 23, 2019 - link
That's Nvidia for you. Remember the "Partner Program"?
Wulfom - Tuesday, April 23, 2019 - link
OK, so will pricing on the 1050/1050 Ti finally normalize post crypto rush? They are currently way overpriced if Nvidia is even close in their claim that the 1650 offers a 70% performance boost over the 1050. So, the 1050 is what, $80?? The 1050 Ti is $100-$110?
Wulfom - Tuesday, April 23, 2019 - link
*1050/1050ti pricing
peevee - Tuesday, April 23, 2019 - link
Do I understand correctly that the 1650 does not enable resolution or texture quality any higher than the 1050 Ti (or even the 1050, for that matter), just a bit more triangles in the scene?
Sounds awfully underwhelming.
hiroo - Tuesday, April 23, 2019 - link
I've been patiently waiting for the GTX 1650 release because I wanted a card for video encoding with the new Turing NVENC support for B-frames. The previous-generation 1050, 1060, 1070, and 1080 all had the same encoder, just more of them as you moved up the lineup. I tried the 1050 for encoding and it was great in terms of speed, but, due to the lack of B-frame support on that generation, the HEVC files produced were 30% larger than x265 software encoding, so I returned it. I was hoping the 1650 would give us the Turing B-frame support at the $150 price level.
Sadly, today is the release, and according to the specs on Nvidia's site, the GTX 1650 under Full Specs says "NVIDIA Encoder (NVENC): Yes (Volta)". Compare that to the GTX 1660, which says "Yes (Turing)". So strange that they would make a new Turing chip with an old encoder. This seems to imply that the 1650 doesn't have B-frame support for NVENC HEVC encoding.
This is a huge disappointment because I don't game much; since the card would pretty much exclusively be for encoding, it's hard to justify the +$120 to get to the 1660. I wish these release articles would point this oversight out.
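For anyone who wants to check B-frame behavior on their own card, here is a rough sketch of driving the hardware encoder from a script. It assumes an ffmpeg build with hevc_nvenc available, and the b_ref_mode option should be treated as an assumption to verify against your build; it only takes effect on NVENC generations that actually support B-frames (i.e. Turing-class and later):

```python
import subprocess

# Hypothetical filenames; assumes an ffmpeg build with NVENC support is on the PATH.
cmd = [
    "ffmpeg", "-i", "input.mp4",   # placeholder source clip
    "-c:v", "hevc_nvenc",          # NVIDIA hardware HEVC encoder
    "-preset", "slow",
    "-b_ref_mode", "middle",       # request B-frames as references (honored only on Turing-class NVENC)
    "output_nvenc.mp4",
]
subprocess.run(cmd, check=True)
```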
ifThenError - Tuesday, April 30, 2019 - link
A very interesting point. Let's hope this will soon get covered in the corresponding review!
Up until now, hardware encoding for HEVC has been a major disappointment, at least for the average user. Intel's QuickSync is conveniently fast but at best reaches the efficiency of libx264 software encoding at medium settings. Old NVENC wasn't any better, and with power consumption in mind it was even worse. I didn't get a chance to try AMD's VCE, but since there aren't any reviews claiming it is way superior to Intel and Nvidia, I don't expect any major surprises there either.
A decently efficient HEVC hardware encoder below $200 would make it really tempting to invest in a new card.
ajp_anton - Tuesday, April 23, 2019 - link
"NVIDIA has held their GTX xx50 cards at 75W (or less) for a few generations now, and the GTX 1650 continues this trend."I wouldn't go as far as saying 1 is "a few"... The GTX 950 was 90W.
jharrison1185 - Tuesday, April 23, 2019 - link
I could not see buying a low-end Nvidia card at this time with AMD Navi on the horizon. While these are only rumors, the idea of GTX 1080 performance for approx. $250 is too good to waste money on this P.O.S., in my humble opinion. 100-120 FPS at 1080p with ultra settings is the only way to play in this day and age; I easily achieve this on my Asus ROG Strix 1070 Ti with a very mild overclock. Let's hope AMD can achieve some level of parity with Navi and bring about real competition in the GPU space. I couldn't care less about FPS-sapping ray tracing right now, as so few games support it. Kudos to Nvidia for pushing the envelope, but I'm not willing to pay to be their test subject for a technology not widely adopted by game developers yet, and frankly the current generation of hardware is not powerful enough to implement it well. I do expect it will be a must-have a couple of generations from now.
webdoctors - Wednesday, April 24, 2019 - link
So a GTX 1080 is somewhere around a Vega 64, which goes for ~$400. You really think AMD is gonna release a product right now at $250 that completely undercuts their existing lineup? They're already selling their Radeon VII at cost, so I guess sure, they could sell Navi for just the BOM cost.
How would they pay for future R&D at that rate?
OTG - Friday, April 26, 2019 - link
Same reason Nvidia launched a $250 card to compete with the $400 1070/1070 Ti: progress, and they've gotta offer something worth buying over Nvidia.
As you point out, Vega is notoriously expensive to produce, so they may even make more money from a $250 Navi.
The Radeon VII was never more than a stopgap until they could get Navi out the door.
Korguz - Saturday, April 27, 2019 - link
webdoctors... where does it say that AMD sells the Radeon VII at cost??
Dizoja86 - Wednesday, April 24, 2019 - link
I love the people saying "wait for Navi". Just how many generations have we been waiting for new AMD cards and been disappointed every time since the Hawaii chips? AMD is running at half the power efficiency of Nvidia's chips right now, and it really will take an absolute miracle, with their much smaller R&D budget, to make decent headway towards catching up at this point. I would really, really love to see it happen, but I honestly have a lot more hope for Intel becoming the new competition for Nvidia.
Yojimbo - Thursday, April 25, 2019 - link
I agree, there may be more hope for Intel than AMD. Though maybe once AMD escapes GCN they will do better; that won't be for another couple of years, though. AMD has been in a better financial position for the last year or so because of the success of Ryzen and because of the crypto bubble, and their R&D expenditure has been increasing. So a couple of years from now they may be in a better position as far as their GPU hardware goes (their compute software ecosystem is probably an entirely different story).
lmcd - Friday, April 26, 2019 - link
Hope there end up being single-slot versions of this card.
clegmir - Tuesday, April 30, 2019 - link
It should be noted that the 1650 runs with the Volta NVENC, not the Turing variant.
https://www.tomshardware.com/news/nvidia-geforce-g...
https://twitter.com/NVIDIAStreaming/status/1120854...