Shouldn't there be a little more... hoopla? Anything? That is, anything other than some bogus data sheet? Is the lack of information or promotion outside of channel partners a play to avoid dropping prices (or sales) of the current 5xxx?
Yeah, apart from fanboy kids, I think most people with more than a handful of braincells realise these cards can't be much better than the 5xxx series, being as they're on 40nm. They can only go so much faster before power and heat become too much of a problem. Can't change the laws of physics.
Well, isn't this cute - the nVidia fans are rallying together.
Fact of the matter is that we are stuck at 40nm, however AMD has a ton of wiggle room between what they had with Cypress and where nVidia set the bar with Fermi GF100. The superior performance per watt/chip area means AMD can go brute force and produce a larger and hotter chip to get more performance because GF100 set that bar VERY high.
I just wish I could use a new 6000 series card for games but use my GTX470 for CUDA. 28nm Kepler can't get here fast enough :(
Nvidia fans? I have AMD cards myself. Just because someone says something against AMD does not make them an NV fanboy. Not all of us are immature morons with some ridiculous, bizarre love for -insert company name- that cares nothing for them. Some of us actually have lives.
As for the performance of this series, see my other comment below. If you think the 6xxx series are going to be massively faster then you're dreaming.
Hmmm... I think being more efficient and overall quicker is actually the key thing most PEOPLE would be looking at, all fanboyism (new word? sure) aside. The logic behind your comment eludes people due to the lack thereof, and instead of trying to explain, the way one would if they had any decency or morality at all, you hurl insults. Bravo. Saying that you'd have to have more than a handful of braincells, that most fanboys don't have lives, etc. borders on flaming and, to be honest, lowers your credibility on anything, considering you're being so rude on nothing but speculation alone.
That being said, I'd love to see a GOOD card come out in the $200-300 range that really stands apart. The 5850 is now in that area due to price drops, but seeing 5870 or 480 performance in that range, even at $300, would be quite incredible, especially if it had the power consumption of something like a 460.
The GTX 470 gives 5870 performance at $300 today, although it's about 30W more power hungry at load.
The 5850 and 5870 may be great cards, but value for money they are not. If the 6870 launches at $250 with sub-5870 performance, it's not going to be much progress for 12 months of development.
Factory overclocked 460s are also available in the $200 range that match and exceed 5850s.
nvidia may be behind on efficiency, and they're certainly not gaining much love among AIBs and consumers these days, but they're certainly being very competitive on price and price/performance.
You're forgetting, though, that AMD is changing the naming of the cards, so the 6800 is going to be the 5700 replacement, and they double the performance of the 5700 for not much more power usage. I think they shouldn't have been quite so ambitious in their naming, however. It would have been more appropriate to name them 6830 and 6850, since those are the only two cards in the 5800 lineup that they actually outperform. It will be interesting to see what nVidia does with 460 and 470 pricing to stay competitive. The next few months are going to be very good for anyone looking to upgrade their video card.
Nope. Definitely not forgetting that. And they're wholesale rebadging Juniper (5700) into the 6700 cards.
GTX 460s already compete with 5850 cards today. If the 6800 cards are about the same speed or slower as predicted, nvidia and AMD will actually hit price parity.
I think $250 for a mainstream card (call a rose a rose) is a bit rich. I'll reserve judgement until I see final pricing and performance, but I'm not too thrilled at the prospect of the 6800 cards at the moment.
I think most people with a handful of braincells realise that chip design and card architecture, not to mention transistor count can play a very large part in the performance of a chip.
Your statement is equivalent to saying that chip architecture is a non-issue, which kind of makes me wonder why companies like Intel, AMD, Nvidia, ARM and others have spent so many millions researching that particular subject.
I didn't mention architecture because this should have been obvious, but clearly not to you lot... do you really think AMD is going to release a completely new architecture just 12 months after the 5xxx series? Especially when there's no need for it to gain the performance crown, as they can just increase the clocks and maybe the transistor count of the 5870.
AMD WAS planning a new architecture on the 32nm process TSMC killed, whose design was finalized BEFORE AMD knew Nvidia's Fermi was a massive faceplant.
AMD was preparing for fierce competition. That didn't happen, but AMD didn't know that in advance.
AMD can (and did) utilize design improvements, architectural improvements, process improvements and copious TDP headroom in their 6000 series.
Barts (6850, 6870) and Cayman (6970?) are entirely new architectures.
The shader and SIMD units have been completely refurbished, and are now comprised of 4 complex shaders in a SIMD array instead of 4 simple + 1 complex shader as in their older designs. In addition, the "uncore" of the chip - the scheduling, branching and cache - has been completely rearchitected.
This is the most significant change that AMD has done to their GPUs since R600 (HD 2000).
Unfortunately, it's only happening at the high end. The 6700 series will just be rebadged Juniper chips (5750, 5770).
Yeah, just turn the gain knob on the transistors and BAM, faster card. You sound like a lot of the managers at my work - I mean, my engineering degree is all about turning up arbitrary transistor counts... clearly.
A 20 percent boost in performance at the same price isn't a horrible thing. The top-end cards are what, 5 percent of the overall profit margin in the GPU industry? If AMD brings higher performance to the mid-range ($200 price point), that means much more for profits than a top-end card that beats the competition by only 5 percent.
If architectural advances on the same node don't yield any performance advantage, why has intel been successfully executing their 'tick-tock' strategy for the past 4 years?
Conroe was on the same node as Presler (65 nm).
Bloomfield and Lynnfield were on the same node as Wolfdale (45 nm).
Sandy Bridge will be on the same node as Gulftown (32 nm).
You can't change the laws of physics, but I guess you also can't cut down on the number of idiots who continue to misquote and misattribute them either. -_-
Presler was essentially a 65nm die shrink of a 90nm design. Conroe was designed for 65nm.
Wolfdale was essentially a die shrink of Conroe to 45nm. Nehalem/Bloomfield/Lynnfield were designed for 45nm.
Gulftown is essentially a die shrink of Nehalem to 32nm. Sandy Bridge was designed for 32nm.
Sandy Bridge wouldn't work nearly as well at 45nm as 32nm. Nehalem wouldn't work nearly as well at 65nm as 45nm. Conroe wouldn't work nearly as well at 90nm as at 65nm.
Similarly, Northern Islands won't work nearly as well as AMD intended at 40nm.
That is the very definition of the Intel tick-tock strategy. New design on old process, then new process with the same design, then new design on the last process, back and forth.
Of course, there is a limit to how far this can go since there is an increasing degree of difficulty in getting a new fab process to work.
Northern Islands/Southern Islands was initially set as a 32nm part, but TSMC issues changed that into a two phase design rollout.
Now, due to the use of a 40nm process, AMD has had to cut out a bit of the design for the 6800 series due to thermal constraints. That doesn't mean that the new design is bad, but it does limit how fast it will be on this "next generation" part.
We will see in a week how well the new design works.
Given it's been a couple of months, I hope the 6800s are a good chunk faster than the 5800s and still within a good thermal envelope.
The 5800s are already at 334mm2 on 40nm, and that's already pretty efficient compared to nVidia's GPUs. Though nVidia's GPU does have a lot of extra "stuff" that isn't needed in the consumer market.
I just hope ATi can make things interesting again.
R600 (2/3xxx) was designed for 80/65nm and got shrunk to 55nm (the 80nm 2900XT was about the same fps-performance wise as a 55nm 3870.) R700 (4xxx) was designed for 55nm.
Evergreen (5xxx) was designed for 40nm. Northern Islands (6xxx) was also forced to be designed for 40nm.
Kind of a big difference. You can't make two good designs at the same node and expect revolutionary performance differences (as with the 4 and 5 series) unless you add heat, power, and cost for significantly more transistors in one design. 40nm is pretty mature now, so AMD will be able to push the boundaries more with the 6-series - just not to the extent we've grown accustomed to.
It's definitely possible sometimes to significantly improve the performance without a die shrink. Take NV20 vs NV25 vs R300 for example. All built on the same 150nm process, but with a significant performance improvement each time. G80 to G92 was far less exciting in spite of a die shrink.
According to Anand's RV870 article, the chip was originally meant to be much bigger, but it was changed in 2008 due to concerns about cost, yields etc. This last-minute change must mean that some parts of the chip are not as optimized as they could be since they had to throw out stuff in panic to stop the ship from sinking. It also means the design for a much larger, higher performing chip is already there. Now, with a more mature 40nm process, I don't see why they wouldn't resurrect that.
R300 had almost 2x as many transistors as NV25 (~110M vs ~63M).
NV20 had ~57M, but IIRC at the end of the day the GeForce3 Ti500 wasn't really much slower than a GeForce4 Ti4600 in most games - after a quick search I found ~139FPS in Quake III vs ~159FPS @1280x1024. That's ~114% of the FPS for ~111% of the transistors.
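For anyone who wants to sanity-check that ratio, here's a quick back-of-envelope calc in Python (the transistor counts and FPS figures are just the numbers quoted above, not fresh measurements):

```python
# GeForce3 Ti500 vs GeForce4 Ti4600, figures as quoted in the comment above.
ti500_fps, ti4600_fps = 139, 159          # Quake III @ 1280x1024
ti500_trans, ti4600_trans = 57, 63        # transistor counts, millions

fps_ratio = ti4600_fps / ti500_fps        # how much faster the Ti4600 is
trans_ratio = ti4600_trans / ti500_trans  # how many more transistors it has
print(f"~{fps_ratio:.0%} of the FPS for ~{trans_ratio:.0%} of the transistors")
```

Roughly 114% of the performance for 111% of the transistors - close to linear scaling with transistor budget on the same process.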
If you need X bits flipped and have Y transistors, a new architecture isn't going to make your task much faster unless your current architecture isn't well suited for flipping.
If the 5xxx parts are much better at adding bits than flipping them, the 6xxx parts might be significantly faster - but I really don't think that's the case. That means any dramatic FPS-performance increase from NI will have to come with a similarly dramatic heat, power, and/or cost increase, minus the small benefits AMD can derive from a more mature 40nm manufacturing process.
Can you make parts on the same process that are faster than the old ones? Sure... but at some expense.
RV670 and RV770 are a good example of that... ~2x the performance for only a 25% or so increase in die area. I think that is more of a one-time thing than the norm, though.
RV870 to these Northern Islands parts may achieve the unlikely.
I expect Northern Islands to be more than NV20 -> NV25, but not like RV670 -> RV770 either... somewhere in between the two.
If AMD increases the die size to 400mm2 or thereabouts, they may be able to make a decent improvement - more than the die size increase alone would suggest.
This comment system won't let me post a link, but at the German site computerbase (add a .de after that) you have an article that shows a massive die-shrink which should lead to massive power savings.
A good product sells itself. NVidia is so far behind right now and with all this time that AMD has had I think this will be a case of "look at how awesome our shiz iz".
That's generally untrue, unfortunately. That's the classic mistake that an engineering firm selling a product makes.
You still have to have a solid marketing, manufacturing, supply chain and business division to help get the buzz out, then work out the logistics of selling a physical product. Which is the problem that marketeers also face - good marketing ultimately won't sell products if the product isn't particularly good. Even if the marketing is solid and the product is good, poor supply chain means you can't actually ship those products, and you fail.
Many many many many failed businesses have relied on the notion that "our product will sell itself". For the vast majority of cases, that simply isn't the case.
Now, given that AMD isn't going to just sit around on their butts while 6xxx ships, then given a reasonably equivalent marketing campaign, with a solid supply channel, they'll do well (as they eventually did with the 5xxx series).
"Shouldn't there be a little more... hoopla? Anything? That is, anything other than some bogus data sheet?"
The very first sentence of this story tells you that it's an earnings call, not a product launch. They just mentioned it because it affects the next quarter's earnings.
Hell, I wasn't really criticizing AMD or nVidia for that matter. I just thought it was kinda weird that AMD was doing the low key launch. I reserve the right to wildly speculate as to the reason and nature of the decisions. I don't believe wondering "what's up with that?" constitutes any sort of partisanship.
Launch date is the 22nd, the big presentation under NDA was already on the 14th and Anand and the rest are probably busy testing cards right now. They just can't talk about it or even talk about what they can't talk about. What's the surprise here? They don't want leaks and rumors, they want it to hit the media in one big boom.
It's now official, AMD is launching their 6xxx series next week. Good to know, rumors were set on Oct. 19th for a while. Speaking of rumors, I've read a few reports of Nvidia lowering their price on the GTX 460 further down. Any info on that Ryan?
Now, we're left to wonder about performance, power consumption, price and, of course, whether the cards will be available as they launch and if yes, whether it will be in limited quantities or not.
More unanswered questions...I can't wait for next week, I'm waiting on reviews to figure what video card I'll be buying. $170 GTX 460 768MB is highly tempting but I told myself that I would wait for the Radeon HD 6xxx series reviews first. Let's hope that they don't disappoint!
I'm actually worried about the value aspect of the new series, in the mainstream ($150-$250) market anyway.
Remember when the new 5xxx series was introduced, the Radeon HD 5750/5770 replaced the 4850/4870?
It did so while offering slightly faster performance (4850 -> 5750) or actually slower performance (4870 -> 5770) while costing more: the 4870 1GB was down to $150 when the 5770 was $170 or $180 at introduction, if I remember correctly. You bought a 5770 because you wanted DirectX 11 and/or lower power consumption.
Mind you, two of them (5770) in Crossfire are still a great option today, for ~$300 or so, if you get cards with decent coolers. With now-mature drivers, scaling and early issues have mostly disappeared.
The Radeon HD 5830 was a letdown as well, just like the GTX 465 was for Nvidia; however, Nvidia replaced the GTX 465 with the GTX 460, which is a huge hit.
I'm hoping that AMD releases a card that will have the appeal that the GTX 460 has right now: Great 1080p performance at a sub-$200 cost, with SLI available, which scales very well.
While I have no doubt that the high-end/performance segment performance for the $ will improve, I have my doubts regarding the mainstream ($150-$250) market, which is exactly the range that I'm aiming for my next upgrade.
There won't be any DirectX 12 (AFAIK), there is improved Eyefinity (which looks interesting according to leaks/rumors), and power consumption is definitely going up judging by the pictures leaked so far on the web, with two 6-pin power connectors. Not much of a problem for me, as long as the stock cooler (the only one that's usually available at launch) is alright (temps and noise under control, that is).
However, we have absolutely no idea what ballpark the performance of the Radeon 6xxx will be in, because of the following major change, according to Voldenuit's comment above:
"The shader and SIMD units have been completely refurbished, and are now comprised of 4 complex shaders in a SIMD array instead of 4 simple + 1 complex shader as in their older designs. In addition, the "uncore" of the chip - the scheduling, branching and cache have been completely rearchitected.
This is the most significant change that AMD has done to their GPUs since R600 (HD 2000)."
Because of that, we can't really predict performance based on stream processor/frequency numbers vs the current generation of video cards.
Until that is confirmed, I'm holding my breath... In any case, I know that there will be a lot going on in the next few weeks in the video card market, which makes me smile: competition is good for us consumers =D
"Because of that, we can't really predict performance based on stream processor/frequency numbers vs the current generation of video cards."
We never really could anyway. RV870 had twice as many stream processors as RV770 at a higher clock speed, but was only ~30% faster.
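To put numbers on that (a rough sketch - the reference shader counts and clocks for the HD 4870 and HD 5870 are from memory, so treat them as assumptions):

```python
# Naive throughput scaling RV770 -> RV870 vs the ~30% real-world gain quoted above.
rv770_sps, rv770_mhz = 800, 750    # HD 4870: stream processors, core clock (assumed)
rv870_sps, rv870_mhz = 1600, 850   # HD 5870 (assumed)

theoretical = (rv870_sps * rv870_mhz) / (rv770_sps * rv770_mhz)
actual = 1.30                      # the ~30% gain quoted above
print(f"theoretical {theoretical:.2f}x, actual ~{actual:.2f}x")
```

On paper RV870 has over 2.2x the shader throughput, yet it delivers ~1.3x in games - which is exactly why raw stream processor counts tell you so little.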
The standing speculation is that Barts will be about the same speed or a bit slower than RV870, and there are leaked 3DMark scores that corroborate this.
It's disappointing that AMD chose to rebrand Barts into the 6800 series and rebadge the old Juniper chips (the already underperforming 5700s) as the 6700 series. nvidia has them soundly beat on performance and value right now, but AMD seems oblivious to the competition. Fermi may not have been the killer blow nvidia wanted it to be, but the GF100 and GF104 cards are taking no prisoners in the pricing department.
We don't know... but should they? The current 5850 runs every DX9 game with all settings on high. There is nothing to gain in that area. It is in DX11 that they should increase their performance.
I'm really stoked for next week! I haven't been this excited about a GPU launch since the 8800GT. I happen to be in the market for a new GPU, either high-end single or a CF setup of midrange cards. Shame that the high-end 6xxx part (Cayman 6970?) won't be released until next month. I'm as curious about what will happen to the price of current cards as I am about the new ATI cards. Hoping to see a price war of decent magnitude.
The only thing I saw that may stand out, especially in the mobile sector, is better power consumption over the 5xxx series. Other than that I don't expect anything big. This seems more like an update to the current 5xxx design than anything new, much like what Nvidia's 9xxx series was to the 8xxx series.
As an AMD-only kinda guy, I have been planning a CPU/GPU upgrade in the late first quarter of 2011 when the new Zambezi core CPUs are released, so that means a 6xxx series GPU also. I am expecting the Barts core 6xxx GPUs to be just an update on the 5xxx series with a one-tier step up in performance, i.e. a 6770 will be very much the equivalent of a 5870 in muscle and power usage. Barts will be more about improving AMD's performance per watt even a little further. When the Cayman core GPUs are released, those cards will be targeted at giving AMD the fastest single-GPU cards and increasing their lead in the dual-GPU card segment.
I don't expect big performance increases until GF and TSMC roll out 28nm cards. I also don't think either company can bring these next-gen parts in quantity till the beginning of 2012, as 28nm has shown itself to be very difficult to implement.
This means you guys have a test sample already. ;) I hope you are hard at work, and I expect to see a high-quality review (as usual) next week once the NDA is lifted. Please rerun some of the old cards with the new driver; this way we can see the actual performance increase between the two generations. The Catalyst 10.10 seems to give the HD 58xx some 5-10% performance increase, and it won't be fair to compare the HD 68xx with Cat. 10.10 to the HD 58xx with Cat. 10.3 (or whatever version you tested last time). Thanks in advance!
The 460 1GB is a great card (I assume you have the 1GB?), however the vast majority of the 5xxx line beats it on price/performance, and this is nVidia's best in terms of price/performance.
The 6xxx series has, for one thing, a revised and more extensive shader system, and should address some of the 5xxx series' shortcomings. If the price is right, even the 460 won't be enough for nVidia, at least until Fermi's next revision. Sad, but true.
What vast majority of the 5xxx line beats two 460 1GB overclocked in SLI in price/performance?
Furmark benchmarks we have going at OverclockersClub @ 1024 x 768 full screen for 1 minute:
Two GTX 460 1GB in SLI at 895/1070/1790 = $430; 16,681 points
Dual HD 5850 1GB in Crossfire at 920/1200 = $540; 16,220 points
Dual HD 5870 1GB in Crossfire at 900/1300 = $740; 16,668 points
Not to mention I play at 1920 x 1200 with AAx8 when I do play games.
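If you normalize those Furmark numbers by price, the gap is even starker. A quick points-per-dollar calc (scores and prices as quoted above - overclocked cards, not independently verified):

```python
# Furmark points per dollar, using the figures from the comment above.
setups = {
    "2x GTX 460 1GB SLI": (16681, 430),   # (points, price in $)
    "2x HD 5850 1GB CF":  (16220, 540),
    "2x HD 5870 1GB CF":  (16668, 740),
}
for name, (points, price) in setups.items():
    print(f"{name}: {points / price:.1f} points/$")
```

That works out to roughly 38.8, 30.0 and 22.5 points per dollar respectively.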
Assuming you are not a troll (I may be wrong), then you are the kind of person who uses $420 worth of video cards to play at 1024x768 - hardly a smart idea.
And the price comparison is not only outdated, but misleadingly oversimplified. To use the 460s in SLI, one must include an extra $50 for an SLI-capable mobo.
Now back to the subject: the battle is DX11 (which nobody plays/uses) and price vs performance.
It is good for the future of DX game developers that a larger base of DX11-capable cards gets installed, so expect better DX11 numbers and prices. This is good for every user and owner of DX11 cards, regardless of green/red issues.
And it is better for everyone that newer/faster/cheaper cards come out; more competition on the market is a win/win situation.
I am still playing with a 4870X2, and yes, if I was in the market for a card by the time the 460 was launched I could give it a passing look, but the 460 came out almost a year and two whole buying seasons after the 5870. Most people are using video cards for gaming, not for bragging rights and fanboyism, and won't wait months for a card that is not magical - something the 460 cannot claim to be.
The card that defines the current generation is the 5750, which introduced 1080p gaming to low-profile and fanless systems.
The king-of-the-hill battle will be fought on triple-monitor setups and 2560x1600. Anything below that is a matter of price, since most cards achieve greater than 30fps minimum at 1080p in most games.
And yes, I am curious to see 6970 numbers at 2560x1600, especially idle power.
I'm the troll for pointing out real user data on overclocked benchmark comparisons, with prices, when you obviously didn't read the second-to-last line saying, "Not to mention I play at 1920 x 1200 with AAx8 when I do play games"?
To use SLI, you don't need an extra $50? Are you insane? The Asus Sabertooth X58 is $180, and a decent AMD or Nvidia chipset AM3 motherboard is $60-$100.
If I had to buy right now, I'd get a 460. However, it'd probably make more sense to wait and see what comes out next week. Anyway, I've been enjoying my 5750 for about a year now :)
btw, I've bought 3 nvidia cards and 0 ATI, and I'll tell you that ATI is far ahead of nvidia right now. The 5850 and 5870 consume 50-70 Watts less than the GTX465 and GTX470, while beating their performance most of the time. That's 5 Amps! But the GTX460 only consumes 5% more power than the 5850, while costing $170 (768MB version) instead of $270, and being 0-30% slower, so it turns out to be a good value, due to pricing and not technical merit.
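For what it's worth, that "5 Amps" figure checks out if you assume the savings come off the 12V rail, which is where GPUs draw most of their power:

```python
# 50-70W saving at load, taking ~60W as the midpoint (figures as quoted above).
watts_saved = 60
rail_voltage = 12.0   # ATX 12V rail
amps = watts_saved / rail_voltage
print(f"~{amps:.0f} A less draw on the 12V rail")
```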
Do you overclock? Overclocked, the GTX 460 1GB at $215 comes close to par with an overclocked 5850 1GB at $270 in lower-resolution games, and does better at higher resolutions. Not to take anything away from AMD, but the claim that AMD owns the price/performance card is ridiculous.
It seems like when you're discussing price/performance comparisons between ATI and NVidia, you're only interested in talking about one very specific price point. What about every other card that NVidia offers that isn't a GTX 460?
Depends on the discussion. If it's the best bang for the buck, then the GTX 460 1GB in SLI wins hands down. Before that, I'd give credit to the 5770 1GB in Crossfire.
As for people wanting to spend more than $450 for card(s), I'll admit, I don't know.
I really don't understand why the Dailytech and Twitter sections (which are laid out the same way) can't be integrated into one frame along with another tab for Press Releases. Seems like it would be a more efficient use of space and could even let you put in another ad. I just don't like clicking on an article in Anandtech and finding out that the blurb on the front was actually the article...I was hoping for something a bit more concrete that we wouldn't find elsewhere.
As mentioned earlier, if they are releasing next week you for sure have at least one card of this new series, if not more. And probably have for a couple of weeks.
There's a ton of information about these cards on dozens of sites right now but virtually nothing here - are you guys under an NDA or something?
October 22 - 6870 and 6850 launch (Barts XT and Pro) ... the early performance numbers from different sites don't agree but a couple sites have them faster than the 5850/5870 which would be quite a jump considering the ~$300 price on the 6870.
November 19 - 6970 and 6950 launch (Cayman XT and Pro)... faster than the GTX 480 by as much as 30-40%
Mid-December - 6990? - dual-GPU card - >30% faster than the 5970... It's unclear which GPUs will be used, as the power consumption of the 6970 is expected to be GTX 480-like. It's possible this could be a dual 6950 or dual 6870 card.
One of the most exciting elements of these cards is the upgrade to DisplayPort 1.2, which increases the bandwidth limit considerably. This will allow the introduction of high-res monitors operating at 120Hz and even 240Hz - even 1600p at 120Hz.
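A rough sanity check on that claim (the 17.28 Gbps effective payload for DP 1.2 - 4 lanes x 5.4 Gbps minus 8b/10b coding overhead - is from the spec; the ~20% blanking overhead is my assumption):

```python
# Does a given video mode fit in DisplayPort 1.2's effective bandwidth?
def video_gbps(width, height, hz, bpp=24, blanking=1.20):
    """Approximate link bandwidth needed, including an assumed blanking overhead."""
    return width * height * hz * bpp * blanking / 1e9

dp12_effective = 17.28  # Gbps: 4 lanes x 5.4 Gbps, minus 8b/10b coding
for w, h, hz in [(2560, 1600, 120), (1920, 1080, 240)]:
    need = video_gbps(w, h, hz)
    print(f"{w}x{h}@{hz}Hz: ~{need:.1f} Gbps, fits: {need < dp12_effective}")
```

Both modes land around 14 Gbps - comfortably inside DP 1.2's limit, but well beyond DP 1.1's ~8.64 Gbps effective.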
You know, I used to think that whole "console fanboy" phenomenon was pathetic. But even that doesn't hold a candle to the threads that accompany every new GPU launch.
I mean, it took *some* time and effort for both ATI and Nvidia to launch their next line of cards after their 4xxx and 2xx series cards. How could they be launching their next line of cards so soon?
I've read all the posts here, so I can comment on them. A few points:
1) It's true, one cannot break the laws of physics or the limitations of the specification... However, human ingenuity is also very surprising. ATI is all about gaming cards, and they have released good products in the past. I owned a lot of them back then ^_^
2) The great divide
Nvidia's "The way it's meant to be played" is a quote that comes from the fact that the industry relies a lot on Nvidia cards. The Fermi cards were created as multi-functional cards. ATI needs to start integrating technologies together as well...
I own both Nvidia and ATI cards. A single GTX 480, as far as OpenCL and GPGPU performance goes, actually beats out a quad-Crossfire setup. I've tested it and seen the results, but as far as gaming goes, the performance is there on both ATI and Nvidia cards...
3) The release of OpenGL 4.1 was important because currently only Fermi (Nvidia 400 series) cards support it. I am hoping the 6xxx series ATI releases supports it as well, as OpenGL is actually used by the heavy professional graphics industry, and 4.1 is the first OpenGL version that blew DirectX 11 out of the water in many ways through mass integration of technologies.
4) The entire idea of "power saving" features.
I don't mind power saving features when I am on the desktop doing simple things. However, I want nothing to save power when I am at a new game full screen. Nothing disrupts stability more than voltages being changed and power being scaled by processors and video cards.
Each time there is a change in power, there is a chance of losing stability and even a chance of spiking or voltage loss.
OK, now that I'm done with these four points... I can state what Nvidia did do for me.
While ATI took me through many gaming tournaments into the winners' circle, Nvidia really helped me with mod entries and development plus graphics testing later on. It just offers tons of things... but in these last 10 years, the greatest milestone in computers in my life goes to Nvidia.
Nvidia set me free as a gamer. ^_^.
I used to always run Windows for everything, but then I got into Linux many years ago. Windows used to be my main OS because of games. However, Nvidia drivers have more features on Linux... while most ATI drivers are horrible on Linux (but great on Apple).
Nvidia set me free to the point that any PC game that breaks 60 frames per second on Linux + Wine + Winetricks and other measures... I bring the game over to it. With Guild Wars, I get max performance on Windows and Linux, but Linux is more secure, and even going through Wine I record LOWER ping in shooters AND better response, because Linux is built around servers and networking...
Multiplayer game performance is better while framerates are lower on Linux on average vs Windows, but once one achieves a constant 60 FPS in any video game under the same settings, it makes no sense to run it on Windows anymore.
Thanks to Nvidia SLI technologies and drivers... on practically every major game compatible with Linux and Wine - many from 2009 and even 2010 - I break 60 FPS, to the point that for the first time I've been able to say "I don't need Windows anymore".
Now I only boot Windows 7 when I want to play a newly released game with max settings, max compatibility and worry-free gaming. However, the majority of my time - even when playing Civilization V, Runes of Magic, Guild Wars, Modern Warfare II, etc. - I am on Linux with over 60 FPS at 1920x1080 on my hardware. :)
The best choice I ever made in computers was to try 10+ Linux builds and then get into Linux modification, game optimization and server modding, and although it's a pain in the ass, the results have really made it so that Nvidia + Linux has set me free...
I get more performance on Linux using its multimedia programs than those found on Mac OS X, and as long as I break 60 FPS in a game on Linux, I don't need Windows for that title... just Windows for the nProtect titles. Most Linux equivalents of Windows software are so efficiently programmed that they will run on ONE core rather than being forced to use 2-4 cores for the very same thing on Windows.
$250 AND beat a 4870X2? Well, you can wait for 28nm cards, but don't count on it.
The first 6xxx cards will be 5750/5770 replacements, aimed directly at beating a 260 on price AND performance, but the cascade effect will drive prices down on all cards.
Yesterday I found a GeForce 270 for $280 on Newegg; imagine a pair of those for $450 in 3 weeks!
The ones they are releasing next week are targeting the mainstream folks that still have a 4000 series or lower-end 5700 series card from last year. The one I'm waiting on is the 6970 XT, which truly is a boost over the current 5870, which I use on a 30" 2560x1600 panel. Don't let the new naming scheme fool you: the 6850/6870 are NOT the upgrade you're looking for if you currently have a 5850/5870. Wait till November.
Well... I guess I should have waited 3 weeks before I got my 480s. SLI 480s, water cooled. If I'd gone with SI I could probably have saved $20 per month on power consumption, and also maybe saved a few hundred by not going with water cooling. Either way, this should be an interesting release; can't wait for the reviews :)
Still need an update for my other PCs - mid-range 6xxx, here we go :D
Now would be a good time for AMD to sort out all their Linux driver issues. Stability and usability are still major concerns. Take lead in that regard as well, don't drop support for cards, and support them throughout all new releases. Especially of Ubuntu.
(Currently I don't like the fact that a new release of Ubuntu can come along and AMD will just decide not to support it, while I can still get hardware acceleration for a GeForce MX4000...)
If they do that, not only will I buy AMD cards for a while to come, but I'll also buy AMD CPUs and motherboards for a little synergy between the parts.
Take advantage of this lead and use it as an opportunity!
Speaking solely from a personal computing viewpoint: unnecessary, expensive, insecure. The whole PC revolution was a move AWAY from central computers with dumb terminals.
Would it be possible for you to provide some benchmarks where the CPU cores are forced to 1.0GHz? I'd like to see how an Ontario system performs versus Atom.
Basically, I'm in the market for a new 10" netbook. I'd like to know if Ontario is worth waiting for (I don't care much about GPU performance, just CPU performance).
ckryan - Friday, October 15, 2010 - link
Shouldn't there be a little more... hoopla? Anything? That is, anything other than some bogus data sheet? Is the lack of information or promotion outside of channel partners a play to avoid dropping prices (or sales) of the current 5xxx?
Hmmm....
softdrinkviking - Friday, October 15, 2010 - link
I think you called it. That, and they don't want to stir things up, because things are going well for them right now.
GeorgeH - Friday, October 15, 2010 - link
I wouldn't expect much - AMD is really too constrained by 40nm for the 6-series to be much more than evolutionary parts.
B3an - Friday, October 15, 2010 - link
Yeah, apart from fanboy kids, I think most people with more than a handful of braincells realise these cards can't be much better than the 5xxx series, being as they're on 40nm. They can only go so much faster before power and heat become too much of a problem. Can't change the laws of physics.
xype - Friday, October 15, 2010 - link
You mean like how Intel was always unable to deliver updates to their cores that were faster, clock-for-clock, even though they used the same process?
anactoraaron - Friday, October 15, 2010 - link
+1
That's also like saying there wasn't much difference between Presler and Conroe...
Logic Fail
bunnyfubbles - Friday, October 15, 2010 - link
well isn't this cute, the nVidia fans are rallying together.
Fact of the matter is that we are stuck at 40nm; however, AMD has a ton of wiggle room between what they had with Cypress and where nVidia set the bar with Fermi GF100. The superior performance per watt/chip area means AMD can go brute force and produce a larger and hotter chip to get more performance, because GF100 set that bar VERY high.
I just wish I could use a new 6000 series card for games but use my GTX470 for CUDA. 28nm Kepler can't get here fast enough :(
B3an - Friday, October 15, 2010 - link
Nvidia fans? I have AMD cards myself. Just because someone says something against AMD does not make them an NV fanboy. Not all of us are immature morons that have some ridiculous, bizarre love for -insert company name- that cares nothing for them. Some of us actually have lives.
As for the performance of this series, see my other comment below. If you think the 6xxx series is going to be massively faster then you're dreaming.
AMD_Pitbull - Friday, October 15, 2010 - link
Hmmm... I think being more efficient and overall quicker is actually the key thing most PEOPLE would be looking at, all fanboyism (new word? sure) aside. The logic behind your comment eludes people due to the lack thereof, and instead of trying to explain, the way one would if they had any decency or morality at all, you hurl insults. Bravo. Saying that you'd have to have more than a handful of braincells, that most fanboys don't have lives, etc. borders on flaming and, to be honest, lowers your credibility on anything, considering you're being so rude on nothing but speculation alone.
That being said, I'd love to see a GOOD card come out in the $200-300 range that really stands apart. The 5850 is now in that area due to price drops, but seeing 5870 or 480 performance for that range, even at $300, would be quite incredible, especially if it had even the power consumption of a 460-ish card.
Voldenuit - Saturday, October 16, 2010 - link
The GTX 470 gives 5870 performance at $300 today, although it's about 30W more power-hungry at load.
The 5850 and 5870 may be great cards, but value for money they are not. If the 6870 launches at $250 with sub-5870 performance, it's not going to be much progress for 12 months of development.
Factory-overclocked 460s that match and exceed 5850s are also available in the $200 range.
nVidia may be behind on efficiency, and they're certainly not gaining much love among AIBs and consumers these days, but they're being very competitive on price and price/performance.
animekenji - Sunday, October 17, 2010 - link
You're forgetting, though, that AMD is changing the naming of the cards, so the 6800 is going to be the 5700 replacement, and it doubles the performance of the 5700 for not much more power usage. I think they shouldn't have been quite so ambitious in their naming, however. It would have been more appropriate to name them 6830 and 6850, since those are the only two cards in the 5800 lineup that they actually outperform. It will be interesting to see what nVidia does with 460 and 470 pricing to stay competitive. The next few months are going to be very good for anyone looking to upgrade their video.
Voldenuit - Sunday, October 17, 2010 - link
Nope. Definitely not forgetting that. And they're wholesale rebadging Juniper (5700) into the 6700 cards.
GTX 460s already compete with 5850 cards today. If the 6800 cards are about the same speed or slower, as predicted, nVidia and AMD will actually hit price parity.
I think $250 for a mainstream card (call a rose a rose) is a bit rich. I'll reserve judgement until I see final pricing and performance, but I'm not too thrilled at the prospect of the 6800 cards at the moment.
baba264 - Friday, October 15, 2010 - link
I think most people with a handful of braincells realise that chip design and card architecture, not to mention transistor count, can play a very large part in the performance of a chip.
Your statement is equivalent to saying that chip architecture is a non-issue, which kind of makes me wonder why companies like Intel, AMD, Nvidia, ARM and others have spent so many millions on researching that particular subject.
B3an - Friday, October 15, 2010 - link
I didn't mention architecture because this should have been obvious, but clearly not to you lot... do you really think AMD is going to release a completely new architecture just 12 months after the 5xxx series? Especially when there's no need for it to gain the performance crown, as they can just increase the clocks and maybe the transistor count of the 5870.
spigzone - Friday, October 15, 2010 - link
AMD WAS planning on a new architecture on the 32nm process TSMC killed, which was design-finalized BEFORE AMD knew Nvidia's Fermi was a massive faceplant.
AMD was preparing for fierce competition. That didn't happen, but AMD didn't know that in advance.
AMD can (and did) utilize design improvements, architectural improvements, process improvements and copious TDP headroom in their 6000 series.
Voldenuit - Friday, October 15, 2010 - link
Barts (6850, 6870) and Cayman (6970?) are entirely new architectures.
The shader and SIMD units have been completely refurbished, and are now comprised of 4 complex shaders in a SIMD array instead of 4 simple + 1 complex shader as in their older designs. In addition, the "uncore" of the chip - the scheduling, branching and cache - has been completely rearchitected.
This is the most significant change that AMD has done to their GPUs since R600 (HD 2000).
Unfortunately, it's only happening at the high end. The 6700 series will just be rebadged Juniper chips (5750, 5770).
pcfxer - Friday, October 15, 2010 - link
Yeah, just turn the gain knob on the transistors and BAM, faster card. You sound like a lot of the managers at my work - I mean, my engineering degree is all about turning up arbitrary transistor counts... clearly.
Zoomer - Friday, October 15, 2010 - link
Architectural improvements count for a good bit. I'm going to say ~20% improvement for a given area.
Targon - Sunday, October 17, 2010 - link
A 20 percent boost in performance at the same price isn't a horrible thing. The top end cards are what, 5 percent of the overall profit margin in the GPU industry? If AMD brings higher performance to the mid-range ($200 price point), that means much more for profits than a top end card that beats the competition by only 5 percent.
OneArmedScissorB - Friday, October 15, 2010 - link
Radeon 3870 - 55nm
Radeon 4870 - 55nm
You fail.
animekenji - Sunday, October 17, 2010 - link
The 4870 was a huge jump in performance over the 3870, ran cooler, and was more power efficient. I don't see what point you're trying to make here.
Voldenuit - Saturday, October 16, 2010 - link
If architectural advances on the same node don't yield any performance advantage, why has Intel been successfully executing their 'tick-tock' strategy for the past 4 years?
Conroe was on the same node as Presler (65 nm).
Bloomfield and Lynnfield were on the same node as Wolfdale (45 nm).
Sandy Bridge will be on the same node as Gulftown (32 nm).
You can't change the laws of physics, but I guess you also can't cut down on the number of idiots who continue to misquote and misattribute them either. -_-
GeorgeH - Saturday, October 16, 2010 - link
Presler was essentially a 65nm die shrink of a 90nm design.
Conroe was designed for 65nm.
Wolfdale was essentially a die shrink of Conroe to 45nm.
Nehalem/Bloomfield/Lynnfield were designed for 45nm.
Gulftown is essentially a die shrink of Nehalem to 32nm.
Sandy Bridge was designed for 32nm.
Sandy Bridge wouldn't work nearly as well at 45nm as 32nm.
Nehalem wouldn't work nearly as well at 65nm as 45nm.
Conroe wouldn't work nearly as well at 90nm as at 65nm.
Similarly, Northern Islands won't work nearly as well as AMD intended at 40nm.
Targon - Sunday, October 17, 2010 - link
That is the very definition of the Intel tick-tock strategy: new design on the old process, then a new process with the same design, then a new design on that process, back and forth.
Of course, there is a limit to how far this can go, since there is an increasing degree of difficulty in getting a new fab process to work.
Northern Islands/Southern Islands was initially set as a 32nm part, but TSMC's issues changed that into a two-phase design rollout.
Now, due to the use of a 40nm process, AMD has had to cut a bit out of the design for the 6800 series due to thermal constraints. That doesn't mean the new design is bad, but it does limit how fast this "next generation" part will be.
We will see in a week how well the new design works.
coldpower27 - Monday, October 18, 2010 - link
Given it's been a couple of months, I hope the 6800s are a good chunk faster than the 5800s and still within a good thermal envelope.
The 5800 is already at 334mm² on 40nm, and that's already pretty efficient compared to nVidia's GPUs, though nVidia's GPUs do have a lot of extra "stuff" that isn't needed in the consumer market.
I just hope ATi can make things interesting again.
Fergy - Friday, October 15, 2010 - link
You might want to look up the Radeon 3800 and 4800 series. Both 55nm.
GeorgeH - Friday, October 15, 2010 - link
R600 (2/3xxx) was designed for 80/65nm and got shrunk to 55nm (the 80nm 2900XT was about the same, fps-performance wise, as a 55nm 3870.)
R700 (4xxx) was designed for 55nm.
Evergreen (5xxx) was designed for 40nm.
Northern Islands (6xxx) was also forced to be designed for 40nm.
Kind of a big difference. You can't make two good designs at the same node and expect revolutionary performance differences (as with the 4 and 5 series) unless you add heat, power, and cost for significantly more transistors in one design. 40nm is pretty mature now, so AMD will be able to push the boundaries more with the 6-series - just not to the extent we've grown accustomed to.
JimmiG - Friday, October 15, 2010 - link
It's definitely possible sometimes to significantly improve performance without a die shrink. Take NV20 vs NV25 vs R300, for example: all built on the same 150nm process, but with a significant performance improvement each time. G80 to G92 was far less exciting in spite of a die shrink.
According to Anand's RV870 article, the chip was originally meant to be much bigger, but it was changed in 2008 due to concerns about cost, yields, etc. This last-minute change must mean that some parts of the chip are not as optimized as they could be, since they had to throw out stuff in a panic to stop the ship from sinking. It also means the design for a much larger, higher-performing chip is already there. Now, with a more mature 40nm process, I don't see why they wouldn't resurrect that.
GeorgeH - Friday, October 15, 2010 - link
R300 had almost 2x as many transistors as NV25 (~110M vs ~63M.)
NV20 had ~57M, but IIRC at the end of the day the GeForce3 Ti500 wasn't really much slower than a GeForce4 Ti4600 in most games - after a quick search I found ~139FPS in Quake III vs ~159FPS @1280x1024. That's ~114% of the FPS for ~111% of the transistors.
If you need X bits flipped and have Y transistors, a new architecture isn't going to make your task much faster unless your current architecture isn't well suited for flipping.
If the 5xxx parts are much better at adding bits than flipping them, the 6xxx parts might be significantly faster - but I really don't think that's the case. That means any dramatic FPS-performance increase from NI will have to come with a similarly dramatic heat, power, and/or cost increase, minus the small benefits AMD can derive from a more mature 40nm manufacturing process.
coldpower27 - Monday, October 18, 2010 - link
That's a very poor comparison for the GeForce3 Ti500 and GeForce4 Ti4600... The difference is more significant if you use other games besides Quake 3, which even then was showing its age, as a GeForce4 MX can play it just fine.
http://www.anandtech.com/show/875/12
http://www.anandtech.com/show/875/11
Can you make parts on the same process that are faster than old ones? Sure... but at some expense.
RV670 to RV770 is a good example of that: ~2x the performance for only a 25% or so increase in die area. I think that is more a one-time thing than the norm.
RV870 to these Northern Islands parts may achieve the unlikely.
I expect Northern Islands to be more than NV20 -> NV25,
but not like RV670 -> RV770 either... somewhere in between the two.
If AMD increases the die size to 400mm² or thereabouts, they may be able to make a decent improvement; more than just the die size would show.
Finally - Friday, October 15, 2010 - link
This comment system won't let me post a link, but at the German site computerbase (add a .de after that) there's an article showing a massive die shrink, which should lead to massive power savings.
JonnyDough - Friday, October 15, 2010 - link
A good product sells itself. NVidia is so far behind right now, and with all this time that AMD has had, I think this will be a case of "look at how awesome our shiz iz".
erple2 - Friday, October 15, 2010 - link
That's generally untrue, unfortunately. That's the classic mistake that an engineering firm selling a product makes.
You still have to have solid marketing, manufacturing, supply chain and business divisions to help get the buzz out, then work out the logistics of selling a physical product. Which is the problem that marketeers also face - good marketing ultimately won't sell products if the product isn't particularly good. And even if the marketing is solid and the product is good, a poor supply chain means you can't actually ship those products, and you fail.
Many many many many failed businesses have relied on the notion that "our product will sell itself". For the vast majority of cases, that simply isn't the case.
Now, given that AMD isn't going to just sit around on their butts while 6xxx ships, then given a reasonably equivalent marketing campaign, with a solid supply channel, they'll do well (as they eventually did with the 5xxx series).
Taft12 - Friday, October 15, 2010 - link
You got that right; take one look at where the Palm Pre ended up. Hell, OS/2 for that matter.
retrospooty - Friday, October 15, 2010 - link
"Shouldn't there be a little more... hoopla? Anything? That is, anything other than some bogus data sheet?"
The very first sentence of this story tells you that it's an earnings call, not a product launch. They just mentioned it because it affects the next quarter's earnings.
ckryan - Friday, October 15, 2010 - link
Hell, I wasn't really criticizing AMD, or nVidia for that matter. I just thought it was kinda weird that AMD was doing a low-key launch. I reserve the right to wildly speculate as to the reason and nature of the decisions. I don't believe wondering "what's up with that?" constitutes any sort of partisanship.
Kjella - Sunday, October 17, 2010 - link
Launch date is the 22nd, the big presentation under NDA was already on the 14th, and Anand and the rest are probably busy testing cards right now. They just can't talk about it, or even talk about what they can't talk about. What's the surprise here? They don't want leaks and rumors; they want it to hit the media in one big boom.
Mathieu Bourgie - Friday, October 15, 2010 - link
It's now official: AMD is launching their 6xxx series next week. Good to know; rumors were set on Oct. 19th for a while. Speaking of rumors, I've read a few reports of Nvidia lowering their price on the GTX 460 further. Any info on that, Ryan?
Now we're left to wonder about performance, power consumption, price and, of course, whether the cards will be available as they launch, and if yes, whether it will be in limited quantities or not.
More unanswered questions... I can't wait for next week; I'm waiting on reviews to figure out what video card I'll be buying. A $170 GTX 460 768MB is highly tempting, but I told myself that I would wait for the Radeon HD 6xxx series reviews first. Let's hope that they don't disappoint!
AstroGuardian - Friday, October 15, 2010 - link
They shouldn't disappoint. The 6xxx should be better than Fermi. Otherwise, why would AMD release it in the first place?
Mathieu Bourgie - Friday, October 15, 2010 - link
I'm actually worried about the value aspect of the new series, in the mainstream ($150-$250) market anyway.
Remember when the new 5xxx series was introduced, and the Radeon HD 5750/5770 replaced the 4850/4870?
It did so while offering slightly faster performance (4850 -> 5750) or actually slower performance (4870 -> 5770) and costing more: the 4870 1GB was down to $150 when the 5770 was $170 or $180 when it was introduced, if I remember well. You bought a 5770 because you wanted DirectX 11 and/or lower power consumption.
Mind you, two of them (5770) in Crossfire are still a great option today, for ~$300 or so, if you get cards with decent coolers. With now-mature drivers, scaling and early issues have mostly disappeared.
The Radeon HD 5830 was a letdown as well, just like the GTX 465 was for Nvidia; however, Nvidia replaced the GTX 465 with the GTX 460, which is a huge hit.
I'm hoping that AMD releases a card that will have the appeal that the GTX 460 has right now: Great 1080p performance at a sub-$200 cost, with SLI available, which scales very well.
While I have no doubt that the high-end/performance segment performance for the $ will improve, I have my doubts regarding the mainstream ($150-$250) market, which is exactly the range that I'm aiming for my next upgrade.
There won't be any DirectX 12 (AFAIK), there will be improved Eyefinity (looks interesting according to leaks/rumors), and power consumption is definitely going up, going by the pictures leaked so far on the web, with two 6-pin power connectors. Not much of a problem for me, as long as the stock cooler (the only one that's usually available at launch) is alright (temps and noise under control, that is).
However, we have absolutely no idea what ballpark the performance of the Radeon 6xxx will be in, because of the following major change, according to Voldenuit's comment above:
"The shader and SIMD units have been completely refurbished, and are now comprised of 4 complex shaders in a SIMD array instead of 4 simple + 1 complex shader as in their older designs. In addition, the "uncore" of the chip - the scheduling, branching and cache - has been completely rearchitected.
This is the most significant change that AMD has done to their GPUs since R600 (HD 2000)."
Because of that, we can't really predict performance based off stream processors/frequencies numbers vs the current generation of video card.
Until that is confirmed, I'm holding my breath... In any case, I know that there will be a lot going on in the next few weeks in the video card market, which makes me smile: competition is good for us consumers =D
Voldenuit - Saturday, October 16, 2010 - link
"Because of that, we can't really predict performance based off stream processors/frequencies numbers vs the current generation of video card."
We never really could anyway. RV870 had twice as many stream processors as RV770 at a higher clock speed, but was only ~30% faster.
The standing speculation is that Barts will be about the same speed or a bit slower than RV870, and there are leaked 3DMark scores that corroborate this.
It's disappointing that AMD chose to rebrand Barts into the 6800 series and rebadge the old Juniper chips (the already underperforming 5700s) as the 6700 series. nvidia has them soundly beat on performance and value right now, but they seem oblivious to the competition. Fermi may not have been the killer blow nvidia wanted it to be, but the GF100 and GF104 cards are taking no prisoners in the pricing department.
CrystalBay - Friday, October 15, 2010 - link
Yeah, what's up with price cuts? Is it true that this 2nd-gen part is not faster in DX9?
flyck - Friday, October 15, 2010 - link
We don't know... but should they be? The current 5850 runs every DX9 game with all settings on high. There is nothing to gain in that area. It is in DX11 that they should increase their performance.
MichaelD - Friday, October 15, 2010 - link
I'm really stoked for next week! I haven't been this excited about a GPU launch since the 8800GT. I happen to be in the market for a new GPU, either a high-end single card or a CF setup of midrange cards. Shame that the high-end 6xxx part (Cayman 6970?) won't be released until next month. I'm as curious about what will happen to the prices of current cards as I am about the new ATI cards. Hoping to see a price war of decent magnitude.
SteelCity1981 - Friday, October 15, 2010 - link
The only thing I saw that may stand out, especially in the mobile sector, is better power consumption than the 5xxx series. Other than that, I don't expect anything big. This seems more like an update to the current 5xxx design than anything new, much like what Nvidia's 9xxx series was to the 8xxx series.
Kamen75 - Friday, October 15, 2010 - link
As an AMD-only kinda guy, I have been planning a CPU/GPU upgrade in the late first quarter of 2011, when the new Zambezi-core CPUs are released, so that means a 6xxx series GPU also. I am expecting the Barts-core 6xxx GPUs to be just an update on the 5xxx series with a one-tier step-up in performance, i.e. a 6770 will be very much the equivalent of a 5870 in muscle and power usage. Barts will be more about bringing AMD's power per unit of performance down even a little further. When the Cayman-core GPUs are released, those cards will be targeted at giving AMD the fastest single-GPU cards and increasing their lead in the dual-GPU card segment.
I don't expect big performance increases until GF and TSMC roll out 28nm cards. I also don't think either company can bring these next-gen parts in quantity until the beginning of 2012, as 28nm has shown itself to be very difficult to implement.
jonup - Friday, October 15, 2010 - link
This means you guys have a test sample already. ;) I hope you are hard at work, and I expect to see a high-quality review (as usual) next week once the NDA is lifted. Please rerun some of the old cards with the new driver; this way we can see the actual performance increase between the two generations. Catalyst 10.10 seems to give the HD 58XX some 5-10% performance increase, and it won't be fair to compare the HD 68XX with Cat. 10.10 to the HD 58XX with Cat. 10.3 (or whatever version you tested it with last time).
Thanks in advance!
samspqr - Friday, October 15, 2010 - link
+1
killerclick - Friday, October 15, 2010 - link
I got a 460, your stupid Radeons can suck itsilverblue - Friday, October 15, 2010 - link
The 460 1GB is a great card (I assume you have the 1GB?), however the vast majority of the 5xxx line beats it on price/performance, and this is nVidia's best card in terms of price/performance.
The 6xxx series has, for one thing, a revised and more extensive shader system, and should address some of the 5xxx series' shortcomings. If the price is right, even the 460 won't be enough for nVidia, at least until Fermi's next revision. Sad, but true.
El_Capitan - Friday, October 15, 2010 - link
What vast majority of the 5xxx line beats two overclocked 460 1GBs in SLI in price/performance?
Furmark benchmarks we have going at OverclockersClub @ 1024 x 768, full screen, for 1 minute:
Two GTX 460 1GB in SLI at 895/1070/1790 = $430; 16,681 points
Dual HD 5850 1GB in Crossfire at 920/1200 = $540; 16,220 points
Dual HD 5870 1GB in Crossfire at 900/1300 = $740; 16,668 points
Not to mention I play at 1920 x 1200 with AAx8 when I do play games.
What Kool-Aid are you drinking?
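A quick way to sanity-check the price/performance argument above is to divide each setup's FurMark score by its quoted price. This is only a sketch using the commenter's own prices and scores (figures from the thread, not independently verified):

```python
# Points-per-dollar for the three setups quoted above.
# Prices (USD) and FurMark scores are the commenter's figures, taken as-is.
setups = {
    "2x GTX 460 1GB SLI": (430, 16681),
    "2x HD 5850 1GB CF":  (540, 16220),
    "2x HD 5870 1GB CF":  (740, 16668),
}

for name, (price_usd, score) in setups.items():
    print(f"{name}: {score / price_usd:.1f} points/$")
```

By that metric the 460 SLI pair comes out around 39 points per dollar, versus roughly 30 and 23 for the 5850 and 5870 pairs, which is the point being argued (whether low-res FurMark is a good proxy for gaming performance is a separate question).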
Parhel - Friday, October 15, 2010 - link
Seriously, what is the relevance of low-res Furmark benches to someone looking to buy a video card? Also, your prices are way off. Try:
2x GTX 460 1GB = $420
2x HD 5850 = $480
2x HD 5870 1GB = $600
El_Capitan - Friday, October 15, 2010 - link
There's some relevance, but if you want to try me with some 1920 x 1200, AAx8 benchmarks on a couple of games, I'd be happy to oblige.
geok1ng - Friday, October 15, 2010 - link
Assuming you are not a troll (I may be wrong), then you are the kind of person who uses $420 worth of video cards to play at 1024x768 - hardly a smart idea.
And the price comparison is not only outdated, but misleadingly oversimplified. To use the 460s in SLI one must include an extra $50 for an SLI-capable mobo.
Now back to the subject: the battle is DX11 (which nobody plays/uses) and price vs. performance.
It is good for the future of DX game developers that a larger base of DX11-capable cards gets installed, so expect better DX11 numbers and prices. This is good for every user and owner of DX11 cards, regardless of green/red issues.
And it is better for everyone that newer/faster/cheaper cards come out; more competition in the market is a win/win situation.
I am still playing with a 4870X2, and yes, if I had been in the market for a card by the time the 460 was launched I could have given it a passing look, but the 460 came out almost a year and 2 whole buying seasons after the 5870. Most people are using video cards for gaming, not for bragging rights and fanboyism, and won't wait months for a card that is not magical, something the 460 cannot claim to be.
The card that defines the current generation is the 5750, which introduced 1080p gaming to low-profile and fanless systems.
The king-of-the-hill battle will be fought on triple-monitor setups and 2560x1600. Anything below that is a matter of price, since most cards achieve greater than 30fps minimum at 1080p in most games.
And yes, I am curious to see 6970 numbers at 2560x1600, especially idle power.
El_Capitan - Friday, October 15, 2010 - link
I'm the troll for pointing out real user data for overclocked benchmark comparisons, with prices, when you obviously didn't read the 2nd-to-last line saying, "Not to mention I play at 1920 x 1200 with AAx8 when I do play games"?
To use SLI you don't need an extra $50? Are you insane? The Asus Sabertooth X58 is $180, and a decent AMD/Nvidia-chipset AM3 motherboard is $60 - $100.
ggathagan - Saturday, October 16, 2010 - link
Just thought I'd point out that silverblue said nothing about SLI, which would imply a single card.
nafhan - Friday, October 15, 2010 - link
If I had to buy right now, I'd get a 460. However, it'd probably make more sense to wait and see what comes out next week.
Anyway, I've been enjoying my 5750 for about a year now :)
AnnonymousCoward - Friday, October 15, 2010 - link
"Poor AMD fanboys"? What are you talking about?
BTW, I've bought 3 nVidia cards and 0 ATI, and I'll tell you that ATI is far ahead of nVidia right now. The 5850 and 5870 consume 50-70 watts less than the GTX 465 and GTX 470, while beating their performance most of the time. That's 5 amps! But the GTX 460 only consumes 5% more power than the 5850, while costing $170 (768MB version) instead of $270 and being 0-30% slower, so it turns out to be a good value, due to pricing and not technical merit.
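The "5 amps" aside is just watts-to-current arithmetic, assuming the 12 V rails that feed a graphics card; a minimal sketch of the conversion (the 60 W figure is the midpoint of the 50-70 W range quoted above):

```python
# Convert a power-draw difference to current, assuming a 12 V supply rail.
def watts_to_amps(watts: float, volts: float = 12.0) -> float:
    return watts / volts

# A 60 W delta (midpoint of the 50-70 W range above) on 12 V:
print(watts_to_amps(60.0))  # -> 5.0
```

At 50 W the delta is about 4.2 A, and at 70 W about 5.8 A, so "5 amps" is a fair round number for the range.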
El_Capitan - Friday, October 15, 2010 - link
Do you overclock? Overclocked, the GTX 460 1GB at $215 comes close to par with an overclocked 5850 1GB at $270 in lower-resolution games, and does better at higher resolutions. Not to take anything away from AMD, but the claim that AMD owns the price/performance card is ridiculous.
Parhel - Friday, October 15, 2010 - link
It seems like when you're discussing price/performance comparisons between ATI and NVidia, you're only interested in talking about one very specific price point. What about every other card that NVidia offers that isn't a GTX 460?
AMD_Pitbull - Friday, October 15, 2010 - link
"What about every other card that NVidia offers that isn't a GTX 460 1GB overclock?"
There, I fixed it for ya :)
El_Capitan - Friday, October 15, 2010 - link
Depends on the discussion. If it's the best bang for the buck, then the GTX 460 1GB in SLI wins hands down. Before that, I'd give credit to the 5770 1GB in Crossfire.
As for people wanting to spend more than $450 for card(s), I'll admit, I don't know.
El_Capitan - Friday, October 15, 2010 - link
Don't even get me started about the drivers for AMD. I haven't had to revert back to any previous Nvidia drivers for a long while now.
Taft12 - Friday, October 15, 2010 - link
I guess that's technically true if the new driver cooks your GPU for good - not much point reverting after that happens!
http://news.softpedia.com/news/Latest-NVIDIA-Drive...
El_Capitan - Friday, October 15, 2010 - link
Touché... though I didn't experience any problems. I probably didn't update to that driver before they removed it.
sinPiEqualsZero - Friday, October 15, 2010 - link
I really don't understand why the DailyTech and Twitter sections (which are laid out the same way) can't be integrated into one frame, along with another tab for press releases. It seems like it would be a more efficient use of space and could even let you put in another ad. I just don't like clicking on an article on AnandTech and finding out that the blurb on the front was actually the article... I was hoping for something a bit more concrete that we wouldn't find elsewhere.
Nice to know they're coming, though.
7Enigma - Friday, October 15, 2010 - link
As mentioned earlier, if they are releasing next week you for sure have at least 1 card, if not more, of this new series. And probably have for a couple weeks.
No sleep for you until the review is out. :)
vozmem - Friday, October 15, 2010 - link
It ticked; now it will tock. No need for a headache.
mac2j - Friday, October 15, 2010 - link
There's a ton of information about these cards on dozens of sites right now but virtually nothing here - are you guys under an NDA or something?
October 22 - 6870 and 6850 launch (Barts XT and Pro) ... the early performance numbers from different sites don't agree, but a couple of sites have them faster than the 5850/5870, which would be quite a jump considering the ~$300 price on the 6870.
November 19 - 6970 and 6950 launch (Cayman XT and Pro) ... faster than the GTX 480 by as much as 30-40%.
Mid-December - 6990? - dual-GPU card - >30% faster than the 5970 ... it's unclear which GPUs will be used, as the power consumption of the 6970 is expected to be GTX 480-like. It's possible this could be a dual 6950 or dual 6870 card.
One of the most exciting elements of these cards is the upgrade to DisplayPort 1.2, which increases the bandwidth limit considerably. This will allow the introduction of high-res monitors operating at 120Hz and even 240Hz ... even 1600p at 120Hz.
SteelCity1981 - Friday, October 15, 2010 - link
You mean a 2560 × 1600 × 30-bit @ 120 Hz resolution, not 1600p...
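For what it's worth, here's a rough sanity check on that bandwidth claim (just a sketch: the HBR2 lane rate and 8b/10b overhead are from the DP 1.2 spec, but the ~10% blanking overhead is my assumption):

```python
# Rough check: can DisplayPort 1.2 (HBR2) drive 2560x1600 @ 120 Hz, 30-bit color?
# DP 1.2 raw rate: 4 lanes x 5.4 Gbit/s; 8b/10b encoding leaves 80% for payload.

def required_gbps(width, height, refresh_hz, bits_per_pixel, blanking=1.10):
    """Approximate link bandwidth needed; ~10% blanking overhead is an assumed figure."""
    return width * height * refresh_hz * bits_per_pixel * blanking / 1e9

DP12_PAYLOAD_GBPS = 4 * 5.4 * 0.8  # 17.28 Gbit/s usable after 8b/10b encoding

need = required_gbps(2560, 1600, 120, 30)
print(f"needed ~{need:.2f} Gbit/s vs {DP12_PAYLOAD_GBPS:.2f} Gbit/s available")
```

So 2560x1600 @ 120 Hz with 30-bit color needs roughly 16.2 Gbit/s and squeaks under the 17.28 Gbit/s payload; 240 Hz at that resolution would not fit on a single DP 1.2 link.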
Taft12 - Friday, October 15, 2010 - link
OF COURSE THERE IS AN NDA INVOLVED! How do you think Ryan is able to post comprehensive benchmarks in his GPU reviews the very day the product is released?
Chlorus - Friday, October 15, 2010 - link
You know, I used to think that whole "console fanboy" phenomenon was pathetic. But even that doesn't hold a candle to the threads that accompany every new GPU launch.
ittoong - Saturday, October 16, 2010 - link
I do think AMD makes really good stuff for us. I like gaming, and I already have an AMD Phenom now :D
hehe
Azfar - Sunday, October 17, 2010 - link
I mean, it took *some* time and effort for both ATI and Nvidia to launch their next line of cards after the 4xxx and 2xx series. How could they be launching their next line of cards so soon?
Setsunayaki - Sunday, October 17, 2010 - link
Hi everyone *waves* I've read all the posts here, so I can comment on them. A few points:
1) It's true, one cannot break the laws of physics or the limitations of the specification... However, human ingenuity is also very surprising. ATI is all about gaming cards... and they have released good products in the past. I owned a lot of them back then ^_^
2) The great divide
Nvidia's "The way it's meant to be played" is a slogan that comes from the fact that the industry relies a lot on Nvidia cards. The Fermi cards were created as multi-functional cards. ATI needs to start integrating technologies together as well...
I own both Nvidia and ATI cards. As far as OpenCL and GPGPU performance goes, a single GTX 480 actually beats out a quad-Crossfire setup. I've tested it and seen the results, but as far as gaming goes, the performance is there on both ATI and Nvidia cards...
3) The release of OpenGL 4.1 was important because currently only Fermi cards - the Nvidia 400 series - support it. I am hoping the 6xxx series ATI releases supports it as well, since OpenGL is actually used heavily by the professional graphics industry, and 4.1 is the first OpenGL implementation that blew DirectX 11 out of the water in many ways through mass integration of technologies.
4) The entire idea of "power saving" features.
I don't mind power-saving features when I am on the desktop doing simple things. However, I want nothing saving power when I am running a new game full screen. Nothing disrupts stability more than voltages being changed and power being scaled by processors and video cards.
Each time there is a change in power, there is a chance of losing stability and even a chance of spiking or voltage loss.
OK, now that I'm done with these four points... I can say what Nvidia did for me.
While ATI took me through many gaming tournaments into the winners circle, Nvidia really helped me with mod entries and development + graphics testing later on. It just offers tons of things...but in these last 10 years, the greatest milestone in computers in my life goes to Nvidia.
Nvidia set me free as a gamer. ^_^.
I used to always run windows for everything, but then I got into Linux many years ago. Windows used to be my main OS because of games. However, Nvidia Drivers have more features on Linux...while most ATI drivers are horrible on Linux (but great on Apple)
Nvidia set me free to the point that any PC game that breaks 60 frames per second on Linux + Wine + Winetricks (and other measures), I bring over to it. In Guild Wars, I get max performance on both Windows and Linux, but Linux is more secure, and even going through Wine I record LOWER ping in shooters AND better response, because Linux is built around servers and networking...
Multiplayer game performance is better while framerates are lower on Linux on average vs. Windows, but once one achieves a constant 60 FPS in any video game under the same settings, it makes no sense to run it on Windows anymore.
Thanks to Nvidia SLI technologies and drivers, on practically every major game compatible with Linux and Wine - many from 2009 and even 2010 - I break 60 FPS, to the point that for the first time I've been able to say "I don't need Windows anymore."
Now I only boot Windows 7 when I want to play a newly released game with max settings, max compatibility, and worry-free gaming. However, the majority of my time (even when playing Civilization V, Runes of Magic, Guild Wars, Modern Warfare 2, etc.) I am on Linux with over 60 FPS at 1920x1080 on my hardware. :)
The best choice I ever made in computers was to try 10+ Linux builds and then get into Linux modification, game optimization, and server modding. Although it's a pain in the ass, the results have really made it so that Nvidia + Linux has set me free...
I get more performance on Linux using its multimedia programs than those found on Mac OS X, and as long as I break 60 FPS in a game on Linux, I don't need Windows for that title... just Windows for the nProtect titles. Most Linux equivalents of Windows software are so efficiently programmed that they will run on ONE core, versus being forced to run on 2 - 4 cores for the very same thing on Windows.
hclarkjr - Sunday, October 17, 2010 - link
If the pricing is what I am seeing, at around $250, and it beats my 4870X2, I am getting one. Waiting for the reviews before I decide, though.
geok1ng - Sunday, October 17, 2010 - link
$250 AND beat a 4870X2? Well, you can wait for 28nm cards, but don't count on it. The first 6xxx cards will be 5750/5770 replacements, aimed directly at beating the GTX 260 on price AND performance, but the cascade effect will drive down prices on all cards.
Yesterday I found a GeForce 270 for $280 on Newegg - imagine a pair of those for $450 in 3 weeks!
Soldier1969 - Sunday, October 17, 2010 - link
The ones they are releasing next week are targeting the mainstream folks that still have 4000 series or lower-end 5700 series cards from last year. The one I'm waiting on is the 6970 XT, which truly is a boost over the current 5870, which I use on a 30" 2560 x 1600 panel. Don't let the new naming scheme fool you: the 6850/6870 are NOT the upgrade you're looking for if you currently have a 5850/5870. Wait till November.
El_Capitan - Monday, October 18, 2010 - link
I'll wait for the expensively priced GTX 580. http://www.hexus.net/content/item.php?item=26976
I'm guessing $620.
cookiesowns - Monday, October 18, 2010 - link
Well... I guess I should have waited 3 weeks before I got my 480s. SLI 480s, water cooled. If I went with SI I could have probably saved $20 per month due to power consumption, and also maybe saved a few hundred by not going with water cooling. Either way, this should be an interesting release; can't wait for the reviews :)
Still need an update for my other PCs - mid-range 6xxx, here we go :D
SniperSlap - Tuesday, October 19, 2010 - link
Now would be a good time for AMD to sort out all their Linux driver issues. Stability and usability are still major concerns. Take the lead in that regard as well: don't drop support for cards, and support them throughout all new releases, especially of Ubuntu. (Currently I don't like the fact that a new release of Ubuntu can come along and AMD will just decide not to support it, while I can still get hardware acceleration for a GeForce MX4000...)
If they do that, not only will I buy AMD cards for a while to come, but I'll also buy AMD CPUs and motherboards for a little synergy between the parts.
Take advantage of this lead and use it as an opportunity!
billt9 - Thursday, October 21, 2010 - link
Speaking solely from a personal computing viewpoint - unnecessary; expensive; insecure. The whole PC revolution was a move AWAY from central computers with dumb terminals.
Dan Fay - Tuesday, November 9, 2010 - link
Hi Anand,
Would it be possible for you to provide some benchmarks where the CPU cores are forced to 1.0GHz? I'd like to see how an Ontario system performs versus Atom.
Basically, I'm in the market for a new 10" netbook. I'd like to know if Ontario is worth waiting for (I don't care much about GPU performance, just CPU performance).
Thanks!