
  • SquattingDog - Tuesday, April 5, 2011 - link

    "it's LEANING from the past..." should be it's LEARNING from the past. Otherwise a great article, very speedily put out. Looks to be an interesting card, depending on what pricing it is available at here. Of course the 6850/6870 are much better and the 6850 looks to be better bang for buck, but when people cannot squeeze a few extra bucks out of the wallet, it should be pretty reasonable. Especially once the GTX460s are out of circulation, which is bound to happen soon enough.

    Glad to see more competition, and finally some products from AMD reaching further down in the retail sector, not just OEM.
  • ZL1Corvette - Tuesday, April 5, 2011 - link

    2nd to last page possible typo:
    "Last but not least as always is our look at the power consumption, temperatures, and acoustics of the GTX 550 Ti."

    Did you mean 6970?
  • ZL1Corvette - Tuesday, April 5, 2011 - link

    See, we all make mistakes. I meant 6790.
  • stm1185 - Tuesday, April 5, 2011 - link

    800 Stream Procs, 40 Texture Units, 16 ROPs, 256-bit memory bus. Only the 6790 has an 840MHz clock compared to 750MHz, and 1050MHz on the memory compared to 900MHz on the 4870.

    My card has held up well it seems.
  • james.jwb - Tuesday, April 5, 2011 - link

    I have a 4890 with a 2560x1440 screen, and to my surprise, I can play quite a few games at 30fps, and most older ones at 60+fps. F1 2010, for example, runs at around 50fps.

    What a card, eh?
  • WhatsTheDifference - Tuesday, April 5, 2011 - link

    Hello. Nice card. I play everything at 19x12 with my 4890 (in this case, an MSI Cyclone SOC) with never a problem of any kind. It bloodies the 285's face.

    kind of a wondrous thing, then, that anandtech.com has banned the 4890 from all benchmarks...ain't it? XD
  • B3an - Wednesday, April 6, 2011 - link

    *rolls eyes* ... They don't have any 2xx series here either. The ATI 4xxx and NV 2xx series do not belong here.

    If you want to see how your 4890 stacks up, simply go to the GPU Bench page to compare.
  • WhatsTheDifference - Saturday, April 9, 2011 - link

    what would you say these are, at least at first sight?

    NVIDIA GeForce GTX 295
    NVIDIA GeForce GTX 285
    NVIDIA GeForce GTX 260 Core 216

    and

    AMD Radeon HD 4870X2
    AMD Radeon HD 4870

    ......?
    Please make your way to 'Index' and 'The Test'. Or maybe I'm confusing articles.

    now adding the always-juvenile *rolls eyes*
  • pandemonium - Tuesday, April 5, 2011 - link

    Gotta take the bad with that good though.

    Both the 4870 and 4870x2 run much hotter and take a butt load more power to produce lower framerates. And the 4870 was never a lower tier card by any means. Plus, no DX11 support on the 4870. Not that it's incredibly important, but just another note.

    I'd say this is a good example of hardware evolution.
  • edpierce - Tuesday, April 5, 2011 - link

    I disagree. Recent video card innovations have been really unimpressive. It really does seem like we are not much further along than three years ago in terms of visual performance. Are we hitting a massive roadblock here?
  • silverblue - Tuesday, April 5, 2011 - link

    Until we shift from 40nm, probably.
  • mgl888 - Tuesday, April 5, 2011 - link

    Not much they can do without a process shrink. Architectural improvements can only go so far.
  • Taft12 - Tuesday, April 5, 2011 - link

    No, there is no wall to speak of. The 4870 was a "first tier" part and this 6790 is "third tier". Compare the performance of the 4870 with a 6970 instead (and indeed the launch price of the 4870 with the launch price of the 6970) and you'll see we are doing just fine thank you very much.
  • tno - Tuesday, April 5, 2011 - link

    But not long after release the 4890 was retailing, after discounts, for $150-160. That's what I bought mine at and I have yet to find a compelling card to take its place. Part of that does have to do with a decrease in my gaming, but if I were a budget gamer, I would look long and hard at a used 4890.
  • tno - Tuesday, April 5, 2011 - link

    +1
  • pandemonium - Wednesday, April 6, 2011 - link

    Only if you don't consider the broad picture and are looking at performance individually.
  • marc1000 - Tuesday, April 5, 2011 - link

    I have a 5770, and even with the desire to upgrade to a 6950/GTX 570, I try to remain honest with myself and say: "I don't need it."

    This level of performance is perfectly fine for playing all current games, because we are all stuck with console ports...
  • jordanclock - Tuesday, April 5, 2011 - link

    I have a 5770 as well, and at 1680x1050, almost everything runs flawlessly, even with AA at 2x or 4x. The situation with multiplatform development is starting to really agitate PC gamers, I think. Crysis 2 looks infinitely better on PC (and ultra high end setups run tri-monitor quite well!), Dragon Age 2 has a texture pack that consoles wouldn't even have the memory to use, and yet so many games look identical on PC and console, meaning that while they run at high framerates on modest hardware, there is no option to increase visual fidelity to make use of the extra horsepower.
  • tno - Tuesday, April 5, 2011 - link

    I concur. I have a 4890 that I picked up for $160, after discounts, not long after its release. This thing was top tier, and it was a pretty unique product, coming out when ATI started treating their multi-GPU single-card solutions as their true halo products. And despite its flaws (noisy, power hungry and no DX11) it still competes on performance at its original price.

    This is mirrored, frankly, in the PC market where the effective performance increase, that is performance that the average PC user (not us) will notice, has remained fairly flat since Conroe. What has improved is features. For that same budget dual-core Conroe price you get an integrated GPU worth its salt, improved efficiency, improved encoding/decoding performance (the thing users might notice most) and, possibly, more cores.
  • kmmatney - Tuesday, April 5, 2011 - link

    I also have an HD 4890, bought for $170, with an Accelero S1 cooler attached, so it's virtually silent (only a very slow spinning fan blowing across the heatsink that I can't hear). I'm still amazed after so much time that I cannot get a better card for the same price - things just haven't progressed much in the bang-for-buck department.
  • deputc26 - Tuesday, April 5, 2011 - link

    On the last page.
    "and it would make NVIDIA think long and hard about what to do with what to do with the GTX 460 768MB"

    Oh, and this comments section never remembers me despite always ticking the "remember me" box (W7, Chrome)... annoying
  • Belard - Tuesday, April 5, 2011 - link

    Why not just reduce the price of the 6850 a bit more... And calling it the 6830 wouldn't have hurt that much - since AMD totally screwed up the model names of the entire 6000 series.

    Pretty much everyone knows the 6800s are cheaper and slightly updated 5800s.

    Think I'll wait until the 7000s come out... but that may be in 2012.
  • enterco - Tuesday, April 5, 2011 - link

    So, the 6790 looks like a 5770 with a small performance improvement. For a 5770 / 450 owner it doesn't make any sense. Maybe for a new computer build on a specific tight budget.
  • Arnulf - Tuesday, April 5, 2011 - link

    And 30+% the price. Why not go for the best price-performance at a low overall price (which the 6790 clearly isn't) when on an extremely tight budget?
  • medi01 - Tuesday, April 5, 2011 - link

    Yet another round of "uhm", "oh", "but", "duh" about an AMD product.
    In other news, we compare three $350 AMD cards vs three $500 nVidia cards.

    Way to go, Anand.
  • Mecavity - Tuesday, April 5, 2011 - link

    Oh, yay. Wouldn't be a proper article without someone complaining about an nVidia bias.

    A) The most expensive card included is an AMD...?
    B) The article is about a $150 AMD card...? The case is being argued fairly, and the actual FOCUS is on comparing cards at the same level of pricing...?
    C) Critique works better if you state what you'd like to see included...?
    D) Derp.
  • medi01 - Tuesday, April 5, 2011 - link

    The article is about the 6790, and most of the conclusion page is about how the 5830 sucks? (And how much did the 5830 suck? Oh, they dared to add more features while sacrificing a bit of performance and charge a bit more for it, how shameless... And this made it into the title of the product review. Pathetic.)
  • strikeback03 - Tuesday, April 5, 2011 - link

    I noticed they gave this the exact same title they did for the 550Ti - "Coming up short at $150". And really the criticisms are the same - it is overpriced compared to both internal and external competition.

    How would you suggest they get excited over this? And how do you claim bias when one of the products they keep pointing to is the 6850?
  • medi01 - Tuesday, April 5, 2011 - link

    Jesus Christ, three $350 AMD cards vs three $500 nVidia cards, and where is the bashing of the latter? (expensive, underperforming and power hungry)

    How about: if bashing, then bash both (who was "duking it out for the performance crown", eh?); if using softer words, then for all?

    Don't have the balls to bash both anymore (stinky nVidia stories, *cough*)? DON'T BASH ANY!!!

    Oh, and last time I checked, the 460 was a $160-200 card (with an MSRP of $190). And that was today.
  • cknobman - Tuesday, April 5, 2011 - link

    I agree with ya.

    This article is focusing on the wrong things. The Nvidia GTX 460 768MB is on the way out - Nvidia has publicly stated as much - so that is going to leave a huge hole at the $150 price point, which is where the 6790 fits in. If you check out some of the other review sites the card performs pretty well (Anand, your game library for benches sucks - HAWX, really? Get with the times already!!!), plus the 6790 overclocks like a champ.

    Sure, this is not the perfect $150 card, but it's most likely going to be the best there is in the immediate future.

    I'm disappointed in this article.
  • Ryan Smith - Tuesday, April 5, 2011 - link

    As a matter of editorial policy I don't like to base my conclusions around future card availability; the only thing for sure about the future is that it's not what I expect it to be.

    "Discontinued" cards are normally available for quite a long time after they're launched, and for the time being the GTX 460 768MB is readily available online and at retail for very good prices.
  • silverblue - Tuesday, April 5, 2011 - link

    You only have to think of other "discontinued" products to remember how popular they were for a good time after they were no longer produced - the Radeon HD 4850, NVIDIA GeForce GTS 250 and the AMD Phenom II X3 720 BE spring to mind.
  • jabber - Tuesday, April 5, 2011 - link

    ...simply install another 5770.

    Well, it's what I did.

    Wonder how the 5770 stacks up to the 6790 with 900/5000+ OC.
  • marc1000 - Tuesday, April 5, 2011 - link

    I would love to see some RECENT tests with the 5770 CF... all reviews available are from the time of launch, comparing it with the 5870 only and in old games.

    it would be nice to have a recent comparison of CF/SLI from previously mainstream cards (5770/460)

    :-(
  • fingerbob69 - Tuesday, April 5, 2011 - link

    ...this card plays most games with all the effects on full. I want to change it but can't see the point.

    I think the HD7xxx are gonna be game changers though, with a bump in performance that's quite huge compared to HD4xxx and HD5xxx levels.

    As to whether any games are about to test the "*nm cards is another question.
  • fingerbob69 - Tuesday, April 5, 2011 - link

    "*nm should read 28nm.
  • jabber - Tuesday, April 5, 2011 - link

    Yeah, I feel the 5XXX series was a major improvement that probably did far better than AMD expected, especially as the cards are still largely an attractive buy, two years later.

    So I saw the 6XXX series as merely a refresh of the 5XXX series. I was in no hurry to buy.

    Hopefully the 7XXX series will be the one to watch out for.
  • Hrel - Tuesday, April 5, 2011 - link

    Prices have already "crept up". The GTX 460 768MB was available for $130 after rebate for quite a while, or at least $150 for the more expensive brands. Now $150 is the cheapest I can find it. Also, every time you mention the GTX 460 768MB or the HD 6850: if you're getting a 6850, you can get a 1GB GTX 460 for the same price. Just sayin', seems like you're unfairly giving more attention to the AMD product.

    In general though it seems like the GPU manufacturers had a secret meeting where they all got together and decided to start raising prices (profit margins) on all their GPUs. Because based on performance and past prices the 550 Ti should be real close to $100. The 560 Ti should be basically $200; $200 is where I'd start recommending that card to people. The GTX 460 768MB should be $130 and it should stay there; the 1GB should be about $150. And the same can be said for the AMD variants; the 6850 would be about $150 and so on.
  • Hrel - Tuesday, April 5, 2011 - link

    Hmm, just checked prices. The cheapest 1GB 460 I'd buy is $170 (Gigabyte), and the cheapest 6850 I'd buy is $155 (XFX). So I kinda sorta rescind my statement; but not really. You guys still seem like you root for AMD regardless of who they're compared to. I'm not saying you skew your results and test unfairly; that's why I come here. It just seems like you're all kinda rooting for AMD to get on top in every market. Of course if they did, maybe you'd start rooting for Intel and Nvidia... we'll probably never know.
  • H8ff0000 - Tuesday, April 5, 2011 - link

    I know this is unrelated to the article, but does anyone know when AnandTech is going to do some P67 reviews? I'd like to see the Sabertooth P67 Rev 3 reviewed, possibly with some other boards for comparison.
  • geniekid - Tuesday, April 5, 2011 - link

    Tom's used a higher end card paired with the 6790 to test the Crossfire performance of this thing. The results suggest there might be some value to this card if used in Xfire configuration compared to single cards around the same price. It would be nice to explore that possibility!
  • BPB - Tuesday, April 5, 2011 - link

    I have been happily running two HD 4850s for a few years now, and want to upgrade. It seems to me that if I stick to a 24" 1920x1200 monitor practically any card will do. Still, I don't like the idea of getting a 6800 series card since it's practically 2+ year old technology. Wondering if I should go 6900 series, or wait till the 7000 series. Come on AMD, man up, put some real upgrades out there that make it easy for me to decide.
  • richardginn - Tuesday, April 5, 2011 - link

    It is nice to see a review of the AMD 6790 video card, but how about a review of the AMD 6450, AMD 6550, and AMD 6670 OEM video cards???
  • jabber - Tuesday, April 5, 2011 - link

    Aren't they just the 5XXX cards re-branded with a 6? If so, a waste of time.

    Plus, as they are OEM, we won't be buying them.
  • richardginn - Wednesday, April 6, 2011 - link

    Actually no. These OEM cards are very different.

    The 6450 video card, which I have only seen sold as an option at HP.com, is supposed to be about twice as fast as the 5450 based on the specs listed on the AMD website.

    If you are talking about just rebranded video cards, you have to be talking about the OEM 6770 and 6750, which have no performance boost in the FPS area.

    Will these cards come off OEM status when Bulldozer CPUs are released, or just move on to something like a 7450 video card????
  • Ryan Smith - Tuesday, April 5, 2011 - link

    Funny you should mention that...
  • BoFox - Tuesday, April 5, 2011 - link

    Is the memory bus really 128-bit instead of 256 bits wide? I'm wondering why Anandtech put a lot of effort into checking up on GTX 550 Ti's 192-bit bandwidth with an odd number of chips, but not on either the 5830 or 6790 that claims 256-bit bus while the ROPs are cut in half.

    We all know that the number of ROPs is tied with the memory bus for a given architecture design. This is why NV's GTX 550 Ti seems much more valid, as it is linear with 24 ROPs.

    If we look at the 5830 here:
    http://techreport.com/articles.x/18521/5
    The 3DMark Vantage color fill test is strongly correlated with memory bandwidth. If the 5830 were 256-bit, it would have had identical bandwidth to the 5850. However, the performance shows that this is not the case. It is barely half of the 5850's performance, and also much slower than the HD 4870, which has only 16 ROPs at a lower clock than that of the 5830.

    Next, if we look at BeHardware's ultimate scrutiny (just as respectable as Anandtech's examination of GTX 550 Ti): http://www.behardware.com/articles/783-3/preview-r...
    We see that the 5830 has far lower FP16 and FP32 GPixel/s writes than not only the 5770 that has a slightly higher fill rate, but also the 4890 to a far greater degree. The test is directly linear to the available bandwidth as the 4890 is so much faster than the 5770, let alone 5830 in that respect.

    One more thing: as with the Barts architecture, we should all know that it is based on the VLIW4 architecture, not the traditional VLIW5 one. It seems that AMD wanted to save the "thunder" for Cayman's launch by reserving the announcement for what desperately needed as much thunder as possible. Just look at how close the 6870's performance is to the 5870 while comparing the 6870's 1120sp and 4.2Gbps bandwidth to the 5870's 1600sp and 4.8Gbps bandwidth.

    Hope you guys enjoyed a little bit of exposure! Is AMD deliberately giving us wrong information? That's not my problem, but if I were the one reviewing the product, I would definitely point these things out in my article.
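
    A back-of-the-envelope sketch of the fill-rate argument (Python, assuming the published HD 5830 figures - 16 ROPs at 800MHz and 4Gbps GDDR5 - and treating FP16/FP32 render targets as 8/16 bytes per pixel; BeHardware's actual test conditions may differ):

        # Theoretical fill-rate ceilings for the HD 5830 under two bus-width
        # assumptions: the specified 256-bit bus vs. the 128-bit behaviour
        # the fill tests are said to suggest.
        CORE_CLOCK_GHZ = 0.800
        ROPS = 16
        MEM_GBPS_PER_PIN = 4.0   # 1GHz GDDR5 = 4Gbps effective per pin

        def bandwidth_gbs(bus_bits):
            return bus_bits / 8 * MEM_GBPS_PER_PIN

        def fill_limit_gpix(bus_bits, bytes_per_pixel):
            rop_limit = ROPS * CORE_CLOCK_GHZ                      # ROP-bound GPix/s
            bw_limit = bandwidth_gbs(bus_bits) / bytes_per_pixel   # write-bandwidth-bound GPix/s
            return min(rop_limit, bw_limit)

        for bus in (256, 128):
            print(bus, "bit:",
                  "FP16 <=", fill_limit_gpix(bus, 8), "GPix/s,",
                  "FP32 <=", fill_limit_gpix(bus, 16), "GPix/s")

    With a full 256-bit bus the FP16 fill would be ROP-bound at 12.8 GPix/s; with a 128-bit bus it would be bandwidth-bound at 8 GPix/s, which is the kind of gap a color fill test could expose.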
  • Ryan Smith - Tuesday, April 5, 2011 - link

    From a graphics point of view it's not possible to separate the performance of the ROPs from memory bandwidth. Color fill, etc. are equally impacted by both. To analyze bandwidth you'd have to work from a compute point of view. However, with that said, I don't have any reason to believe AMD doesn't have a 256-bit bus here; achieving identical performance with half the L2 cache would be harder, though.

    And Barts is VLIW5, not VLIW4. Only Cayman is VLIW4.
  • BoFox - Tuesday, April 5, 2011 - link

    Barts is also Northern Islands--the keynote of the architecture being VLIW4.

    See, the 6790 wouldn't come within 2-3% of 5830 according to all of the benchmarks at the review here.

    If it were VLIW5 like the 5830, the 6790 would've been MUCH slower (or the 5830 much faster).

    This is because HD 5830 has 1120sp, which is 40% more than 6790's 800sp. It also has 56 TMU's, which is 40% more than 6790's 40 TMU's.

    All of the other specs are shockingly similar, with only 5% difference in core and memory clock speeds. Both the 5830 and 6790 have the same "alleged 256-bit bus".

    In spite of the whopping 40% shader and TMU difference, the 6790 comes SO close to the 5830--close enough that it's only possible if the 800sp were VLIW4 (multiply it by 5/4 and you get performance like 1000sp). It would only make sense there, as 1000sp is about 10% less than 1120sp, but with 5% higher clock, it comes to within 2-3% of 5830's performance.

    If not for VLIW4, what would it be? Tell me.
  • BoFox - Tuesday, April 5, 2011 - link

    Typo: I forgot to add "if it weren't for VLIW4" before the second sentence above. Sorry if it's confusing.

    Shouldn't we all already know that Barts XT was of the Northern Islands VLIW4 architecture from how close it was to the 5870 (1120sp vs 1600sp which is 43% higher)? Even after adjusting for the clock differences, the shader/TMU operations per second is still 35% higher for the 5870, yet the 5870 turned out to be only 9% faster overall. It would've made perfect sense if Barts XT had 1400 VLIW5 shaders (using the 5:4 ratio over 1120sp).

    Using the math: 1600sp x 850MHz is ... 8% faster than 1400sp x 900MHz. The memory bandwidth does not affect the performance too much since the proportion of bandwidth to GPU muscle is not changed by much. So, it's really close to the overall 9% actual performance difference between the 5870 and 6870. Is that coincidental? The reason the difference is actually greater than the calculation is because 1120sp VLIW4 does not exactly translate to 1400sp VLIW5. VLIW4 is not perfectly efficient enough to scale 100% at a 5/4 ratio, but it's pretty close.

    What else would it be, sincerely?
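
    The arithmetic above, written out as a small Python sketch (the 5/4 "VLIW4-equivalent" factor is this comment's assumption; as Ryan points out in this thread, Barts is actually VLIW5):

        # Relative shader throughput = stream processors x core clock.
        def throughput(sps, clock_mhz):
            return sps * clock_mhz

        # 5870 vs. a hypothetical 1400sp VLIW5 part at Barts XT clocks
        print(throughput(1600, 850) / throughput(1400, 900))  # ~1.08 -> ~8% faster

        # 5830 (1120sp) vs. a 6790 treated as ~1000 "VLIW5-equivalent" sp
        print(throughput(1120, 800) / throughput(800 * 5 / 4, 840))  # ~1.07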
  • Amoro - Tuesday, April 5, 2011 - link

    I'm pretty sure that only Cayman is VLIW4.
  • Ryan Smith - Tuesday, April 5, 2011 - link

    Correct. NI is a very broad family; it doesn't define a single architecture. Cayman is VLIW4; Barts, Turks, and Caicos are VLIW5 and are basically optimized versions of Evergreen (5000 series) hardware.

    http://www.anandtech.com/show/3987/amds-radeon-687...
  • Amoro - Tuesday, April 5, 2011 - link

    Looking at some of the raw performance specifications for the two cards, it seems that texture fillrate and raw processing power don't have as much of an impact on AnandTech's testing suite.

    Radeon HD 5830: 12.8 GP/s pixel fill, 44.8 GT/s texture fill, 128 GB/s memory bandwidth, 1792 GFLOPS

    Radeon HD 6790: 13.4 GP/s pixel fill, 33.6 GT/s texture fill, 134.4 GB/s memory bandwidth, 1344 GFLOPS

    The 6790 wins in pixel fillrate and memory bandwidth but loses horribly in raw processing power and texture fillrate, yet it still manages to keep within 10% and even manages to beat the 5830 in some cases.
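
    For reference, those figures fall straight out of the unit counts and clocks (a quick sketch; the GFLOPS number assumes 2 FLOPs per stream processor per clock, i.e. one multiply-add):

        # Derive the headline specs from each card's configuration.
        def specs(rops, tmus, sps, core_mhz, bus_bits, mem_gbps):
            core_ghz = core_mhz / 1000
            return {
                "pixel fill (GP/s)": round(rops * core_ghz, 2),
                "texture fill (GT/s)": round(tmus * core_ghz, 2),
                "GFLOPS": round(sps * 2 * core_ghz, 2),
                "bandwidth (GB/s)": round(bus_bits / 8 * mem_gbps, 2),
            }

        print("HD 5830:", specs(rops=16, tmus=56, sps=1120, core_mhz=800, bus_bits=256, mem_gbps=4.0))
        print("HD 6790:", specs(rops=16, tmus=40, sps=800, core_mhz=840, bus_bits=256, mem_gbps=4.2))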
  • BoFox - Wednesday, April 6, 2011 - link

    Thanks for some more of those numbers!
    We can see that the 5830 has far higher numbers in these areas:
    44.8 GT/s
    1792 GFLOPS

    And the 6790 has only
    33.6 GT/s
    1344 GFLOPS

    While the 6790 has greater pixel fillrate and memory bandwidth than the 5830.

    If it were not for VLIW4, why is the 5830 only 2-3% faster than the 6790 in this review, across all of the benchmarks? Why?

    Another way we could find out is to see how much it affects DP performance in applications like Milkyway@home. Cards with VLIW4 should have a 1/4 FP64-to-FP32 output ratio, so I wouldn't be surprised if we see 6790s being 20% faster than the similarly spec'ed 4890.
  • BoFox - Thursday, April 7, 2011 - link

    Ahh, your article reminded me that FP64 was disabled for Barts GPUs... I must've forgotten about it and wanted to test it to prove that it's VLIW4.

    But the numbers in the replies below strongly point to the 6790 being boosted by VLIW4 in order to basically match up to a 5830 with 40% more shaders and TMU's.

    Any explanation for this, sir Ryan?
  • BoFox - Friday, April 8, 2011 - link

    RE: "From a graphics point of view it's not possible to separate the performance of the ROPs from memory bandwidth. Color fill, etc are equally impacted by both. To analyze bandwidth you'd have to work from a compute point of view. However with that said I don't have any reason to believe AMD doesn't have a 256-bit; achieving identical performance with half the L2 cache will be harder though."

    1) If it's not possible to separate the performance from a "graphics" rather than a "compute" point of view, then should not the performance be linked for all "graphics" points of view (as it is a "graphics" card to begin with)? Even the "compute" applications (FP16 and FP32 analysis at http://www.behardware.com/articles/783- ... -5830.html ) show the card behaving as if it's 128-bit.
    2) Why does Ryan not have any reason to believe... because AMD said so? If a manufacturer of an LCD panel advertises a 1ms G2G response time, but it looks like 16ms, does he still believe it's 1ms just because the manufacturer said so?
    3) If the L2 cache is cut down in proportion with the castrated shaders/TMUs/ROPs, then it should not affect performance, let alone make it "harder".
  • Soldier1969 - Tuesday, April 5, 2011 - link

    2 x 6970s FTW at 2560 x 1600 res.
  • JimmiG - Tuesday, April 5, 2011 - link

    Is it just me or is all this talk about price difference of $10 or less getting a little ridiculous? I mean, if you're prepared to spend $150 (or $160...) on something that is completely non-essential, what difference is $10 going to make? If you're so poor that $10 is a big deal, you're probably not spending your money on gaming products anyway since you need everything for stuff like food and rent.

    It seems the video card companies are the guilty ones, constantly trying to outmaneuver each other with new pricing schemes. I miss the old days when there was one $100 card, one $200 card, one $300 card etc. Now there can easily be a dozen different models in the range of $100 - $300.
  • liveonc - Tuesday, April 5, 2011 - link

    This looks like a prime candidate for a mini-ITX build for those who'd want a desktop replacement, but don't want to pay so damn much for something that has 30 minutes of battery life, doesn't have a chance to outperform a desktop, and costs too much.
  • lorribot - Tuesday, April 5, 2011 - link

    Might be just me, but since the 4000 series I don't actually understand AMD's numbering scheme anymore.
    There seems to be a great variety of 6000 cards all with very similar performance and different prices.

    There is the 6990 at the top, then a couple more 69xx cards, then some 68xx and some 67xx, all well and good, but it seems the 5870 is a faster card than the 6870, which is odd and not what I would have expected; indeed it has similar performance to the 5850.

    The 5xxx series came in 53, 54, 55, 56, 57, 58 and 59 flavours with one, two or three sub versions in each band giving something like 15 or 16 different cards.

    It seems to me that with so many variations, and a numbering scheme that seems to change from generation to generation, AMD actually want to confuse the buying public.

    They really need to get a handle on this, less is more in some cases.

    Nvidia's numbering scheme on the whole has seemed much more sensible in recent times, apart from the odd hiccup with the 460 and 465.
  • Belard - Tuesday, April 5, 2011 - link

    Actually - AMD's 3000~5000 series have been rather consistent. The 6000s are kind of like the 2000s. They should have stuck with the 3-5 series scheme; everything would have made more sense.

    So the 6850 should have been the 6750, since it's not really better than an actual 5850. They said they did this to not cause confusion with the 5700 series, which was not to be replaced - but which INSTEAD ended up being relabeled into the 6600~6700s for the OEM market. The first step with their new "AMD" GPU branding was stupid and confusing. Not impressed.

    Nvidia? Those nut-jobs are masters of confusion.
    GTX / GT / GTS are meaningless, especially in front of a model number. Seriously, to say "GeForce GTX 550 Ti" is plain stupid. "GeForce 550" is all that is needed, or "GeForce 5 GTX, GeForce 5 GT, etc."

    Are there a GF GTX 560 and a GF GT 560 and a GF GTS 560? Uh, no.

    What happened to the GeForce 300 series? Oh yeah - OEM relabeled bottom-end GF 200 series cards. They skipped straight to the 400 series. Then there is the GF 460 and the GF 460 768MB... which is more than JUST a memory difference. They should have called them the 465 (with 1GB) and 460 (with 768MB), since there is ALWAYS a performance hit with the 768MB version of the card.
    Then Nvidia brings back the "Ti" tag to remind us OLD TIMERS of the days of the excellent GeForce4 Ti series... which is idiotic as hell.

    What?! Is there a GF 550 and a GF 550 Ti? Screw that, I want the NON-Ti version because it's faster. Oh yeah, it's just a few letters stuck on the end that are meaningless.

    The GF 500 series is the exact same tech as the GF 400 series, but fully functional, i.e. fixed. But calling them the 500 series makes them look better / newer.

    I expect Nvidia to have the "GeForce GTX 785 TI Ultra" on the market around March 2012.
  • kedesh - Tuesday, April 5, 2011 - link

    My question is, can I CrossFire this card with my current one (a 5750), considering I have the correct motherboard? Nowhere on the internet can I find an answer.
  • WhatsTheDifference - Tuesday, April 5, 2011 - link

    Is the 4890 excluded from ALL benchies? The problem is...? Ban the 285 from just one article and we'll witness just exactly what?

    thanks.
  • Lex Luger - Thursday, April 7, 2011 - link

    There hasn't been much improvement with video cards since the 90nm 8800 GTX and GTS.

    Those were probably the greatest cards ever in terms of performance boost over the previous generation.
  • slickr - Thursday, April 7, 2011 - link

    What is the point of testing a mediocre graphics card at 1600 and higher resolutions?

    Most people are going to be playing at lower resolutions with this card, and their PCs are certainly not a Core i7 950 with 12GB of RAM, so they would be playing at resolutions like 1280x1024 or 1400x1050.

    I mean, we need a more realistic representation of these cards and not some scenario that would never happen.
  • shady28 - Friday, April 8, 2011 - link


    Seems GPUs are dead on the PC until and unless something that can actually use them comes along.

    I agree with a previous poster - the 8800 GT / GTS series was 'good enough'. There are only a handful of games that really need anything more, so now all these cards are relegated to a niche market.

    Now tablet GPUs, that's a different story, but people are still mostly developing for the lowest common denominator there too.
  • IloveCharleneChoi - Friday, April 8, 2011 - link

    I don't care much about the HD 6790 and 550 Ti; newly released cards are usually expensive. The HD 6850 here is less than $150 in Nanjing, China. The HD 6850 can do better than the GTX 460 in many games, and of course it can beat the 6790 or 550 Ti. SO WHY CHOOSE THE 6790?
  • iamezza - Sunday, April 10, 2011 - link

    Thanks Ryan for another great article :)

    I can't believe all the fruit loops posting in the comments here!
  • thenemesis2 - Monday, October 3, 2011 - link

    Is this the best card for mild gaming on a Shuttle Sandy Bridge box with only one 6-pin PCIe connector and a 300W PSU?
