If they manage to make their GPUs do x86-64 flawlessly (well, as flawlessly as anyone can, but at least on par with Intel/AMD) without a dedicated x86 core to do the logic, I'll be gawking foolishly... Anyway, it's going to be at least a year or two.
I think it is the other way around: make x86 cores do CUDA calculations, so that everyone will easily adopt CUDA and then turn to NVIDIA afterwards for a significant performance increase in their new CUDA programs.
x86 CUDA is mainly for programmers debugging code, and for cases where a compiled application ends up running on a non-CUDA system. Today such applications need a separate native x86 code path; with x86 CUDA they will simply run more slowly, behaving as if they were still running on a CUDA-capable NVIDIA card.
No no no - CUDA for x86, basically you can run CUDA programs on x86. It was announced at GTC and widely reported on other sites. Just wondering why Anand did not mention it.
Doesn't that defeat the purpose of nVidia's GPU computing efforts? After all, nVidia has been touting GPU computing as offering a lot more performance than conventional x86 CPUs. It will be hilarious if we see one piece of CUDA code run faster on Sandy Bridge or Bulldozer than on Kepler :D
As another said, if they make it available for x86, more programs will be written that use it. Then it is easier for people to transition over to Nvidia GPUs for a large speed increase.
It won't. The kinds of things it is good for are generally tens to thousands of times faster on the GPU. If new CPUs double the cores, and increase IPC by 20% v. last gen, that's not even a 3x speedup. Even if the CPU could compete in raw performance, the GPU would have it in performance/watt by miles--which is part of why that is what they stress on their graph. CPUs are scaling up better w/o hitting power walls, but when those walls get reached, the CPU is at a disadvantage.
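To make that concrete, here is a minimal sketch of the kind of embarrassingly parallel code CUDA is built for - a SAXPY kernel where each array element gets its own thread. It is a generic illustration (file name, sizes and values are arbitrary assumptions, not anything from NVIDIA's announcement), but it is also the sort of source a CUDA-on-x86 toolchain such as the Ocelot project mentioned below would recompile for CPU cores, trading raw speed for portability rather than trying to beat the GPU:

```cuda
// saxpy.cu - minimal SAXPY: y = a*x + y, one array element per GPU thread.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];                 // independent per-element work
}

int main() {
    const int n = 1 << 20;                      // ~1M elements
    const size_t bytes = n * sizeof(float);

    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   // 4096 blocks x 256 threads
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f\n", hy[0]);             // expect 4.0
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

Compiled with nvcc this runs on the GPU; the argument above is really about whether the same launch, mapped onto a handful of CPU cores instead of hundreds of CUDA cores, can ever be more than a debugging convenience.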
Quite so; apparently it's not difficult to convert x87 to SSE, yet nVidia deliberately didn't in order to make a better case for hardware physics processing. If they're going to be giving CUDA a helping hand, then they should put some effort into releasing version 3.0 of the PhysX SDK too.
CUDA-x86 is a commercial product by the Portland Group. It's not even the first x86 runtime for CUDA; Ocelot offered this last year (although I suspect Portland's implementation will be much better). In any case since it's a commercial product, don't expect to see it used in consumer products; it really doesn't sound like it's intended for that.
--- NVIDIA took some criticism for introducing Fermi months before it shipped, but it seems to have worked out well for the company anyhow. ---
It did? Fermi is a disaster for Nvidia, at least up until now. AMD is about to move on to a new(er) architecture, Nvidia has not even put Fermi into all market segments. Your statement is highly suspect.
Keep in mind we're at GTC, so we're talking about the compute side of things. The guys here weren't any happier than anyone else about GF100's delay, but the fact of the matter is that there are halls full of developers happily clobbering Tesla products. For them, having Fermi announced ahead of its actual launch was a big deal. And that's why the early announcement worked out well for the company - it was for the developers, not the consumers.
Dude, you need to grow up. When ATI's next generation, the 2900XT, came out it was "hot, loud and power hungry", and underperformed against the then brand-new 8800 series. It has taken this long for the architecture to mature. It has matured; you haven't. As far as rebranding and media whores go, both sides do it man..., whatever.
The only thing suspect here is the motives of some posters.
Fermi is loud, hot, and power hungry? Really? I guess you've been living under a rock, and never heard of the GTX 460 or the GTS 450. Neither of them are loud, hot, or that power-hungry.
You want to know what's loud, hot, and power-hungry? An ATI 5970 card.
The above posters did NOT specify what products they were talking about, yet I don't see you criticizing them? I wonder why?
The above posters claimed Fermi in general has been a failure, and that Fermi cards in general are loud, hot, and power-hungry. Both claims as I pointed out are incorrect.
No he isn't. When people say "Fermi" most users understand that you are talking about the GTX480 or 470. He didn't say "Half a Fermi", which is a more apt description of the GTX460. So "Fermi" does indeed carry a heat/power/noise penalty and hasn't been a market success along the lines of AMD's Cypress. The GTX460 is their only popular product and is an admirable part.
"Half" a Fermi? First, Fermi is NOT one specific card or chipset layout. Fermi is a GENERAL architecture. GTX 480 and GTX 470 were the first Fermi cards.
So no, most informed users understand Fermi as being the 4xx series of Nvidia cards, NOT as only the GTX 480 and GTX 470.
The 480 is the full-blown Fermi. Calling the 460 half a Fermi makes sense; it is a reduced version of the full monty, which itself happens to be loud, hot and power hungry. If you cut it down enough it does get better, but the 460 isn't fast enough. For my uses I wouldn't call it an admirable part, just the only NV part worth purchasing at low/mid range if you're unwilling to buy AMD.
460 is not a reduced version of the 480, it's a different chip. The 460's built from a GF104 while the 480/470 are built from GF100. The common thread is that they're all based on the Fermi architecture.
Maybe so, but the 5970 is a dual-GPU solution, and as such it's taken as a given that it will be hot and hungry. It would've been more correct had muhahaaha referred directly to the 480, 470 and 465 instead of generalising about Fermi, but there's no escaping the fact that even the 460 loses out, albeit barely, in a performance/watt comparison with Evergreen (though I suppose the 5830 is an exception in the 58xx line).
The 465 upwards use far too much power and are a little too warm when compared to the direct competition that is the 5850 and 5870. The 460 and below should help nVidia recoup some of the money lost on developing the higher spec parts, and the 460 1GB is a fantastic card.
Fermi in general is a failure. The GTX460 is only slightly less so; being 11 months younger and larger, yet still quite a bit slower than the 5870, does make it a failure.
Except 5870 is more expensive than GTX 460 and the two cards do not compete directly against each other. As has already been stated by many websites on here before, AMD/ATI currently has nothing at the GTX 460 price-range that competes strongly performance-wise.
The only reason ATI has nothing competitive in the 460 price range is because they're making too much money selling the 5850s and 5870s for larger margins.
They could lower the price of the 5850 and 5870 to be more competitive, but why? Cypress and GF104 cost roughly the same to produce (based on die sizes), but Cypress sells for significantly more and sells well at those prices. AMD makes more money by not "competing" with the 460 and just letting it sweep up the leftovers of the high-end.
Wait, so now people are talking about how ATI doesn't need to be the price/performance leader? I recall seeing in various forums people yelling from the rooftops about how the ATI 5xxx series is so great because of the price/performance aspect.
So now ATI can act like Nvidia or Intel and keep prices high just because? That's fine, but ATI/AMD supporters should then not use price/performance as a main argument of why the 5xxx cards are so good.
Getting back to the point, the GTX 460 *currently* offers the best performance at its price point. Period.
ATI wanting high margins and keeping their prices high is irrelevant; the GTX 460 remains the best mid-range card you can get right now for the price.
You have to look at it from both perspectives. The GTX460 is a great card for customers, but it's terrible for Nvidia. They hardly make any money from it, if at all. The whole Fermi fiasco is terrible for everyone except ATI. Nvidia is in trouble, and we don't get cheaper cards.
And yes, ATI can, is and will act just like Nvidia or Intel or any other company out there when there is no competition. If they sell everything they produce anyway, why lower their margins? They wouldn't sell more because there's nothing to sell, only earn less.
ATI could sell the 5870 for way less than the GTX460 and still make a nice profit. It's sad that the Fermi line is not making them do that.
Sorry, AMD can't sell things cheaper. You apparently don't understand ATI doesn't exist any more. It seems you're unaware of the fact AMD has been losing money for over 4 years (more?...I can't even remember when they made money and I've owned the stock and follow their cpu/gpu's...LOL). NVDA has a few billion in the bank while AMD has a few billion in DEBT. I'm guessing AMD isn't spending 2Billion on their next GPU. They just can't afford it.
The problem isn't AMD charging more. I'm an AMD lover...so want them to succeed, and would rather have an Intel/AMD 50/50 split for obvious reasons. It's fanboys freaking out when nvidia charges for their product, but complete silence when AMD does the same. Note I recently got my Radeon 5830 (sorry COOL/QUIET Fermi was a little late, Amazon beat them and finally shipped a card 7 months later...LOL).
If you actually think a company with a few billion in the bank, no debt, making money (not much this year or last, but still a profit - especially if you remove the TSMC chips and the payouts for them crapping out), with the current top product, investing heavily in the future, diversifying into 2 brand-new markets, and about to complete its shrink of Fermi across the line-up (more RED for AMD for the next 3 quarters - until they do the same...and probably still don't make money) is in trouble, you really need to read more about business. This business looks VERY strong for the next 18 months at least. Sandy Bridge will do nothing more than replace motherboard GPUs; its performance is too bad to do anything else. AMD just gets weaker by the day (you can't keep paying higher and higher interest to put off your debt every few years, you have to PAY IT OFF).
I really only see Intel as a longer-term threat. With ARM tops in phone CPUs, and NVDA married to them, they don't need a CPU (for now?) to add a LOT to the bottom line with little market share. Tegra2 is what Tegra1 should have been, but the competition had good products so the first effort was just like Larrabee (DOA). Tegra3 however looks to make NVDA jump like Google has in phones (from like 5% to 17% in about 9 months?). It should make most makers take notice (Tegra2 looks to get some, signing LG and looking like the Droid's GPU of choice) and NVDA's bottom line finally rise significantly. Intel will need another rev of Atom (or two) or Sandy Bridge's next rev to change anything at NVDA. By then of course those chips will be facing Tegra3 and another rev of the desktop chips from NVDA.
We won't have another TSMC chip fiasco with other fabs competing for that business (AMD's fabs...OUCH), I think all will avoid bumpgate this time. If the process does go bad at least they can launch with 1/2 chips from another fab and minimize damage. Also, I believe NVDA will eventually either sue TSMC (they said they would in their first statement released about bumpgate) for all money related to bumpgate as Jen Hsun said they would or they've got a pretty sweet manufacturing deal for ages coming because of it (maybe already got one). Whichever way NVDA goes with TSMC, either will boost bottom line. They've already paid for the damage over the last 2-3 years (note huge write-offs in statements, 200mil here, 180 mil there etc).
I haven't heard TSMC paid anything yet, though you could blame badly made chips on the actual CHIP MAKER easily I think. It's their silicon that failed not NVDA's. Arguments can be made about designer fault but it's really hard to ignore TSMC made them all (and even caused problems for AMD). It seems that all cost about 500-800mil. Even if NVDA gets half back that's a lot of money. HP ponied up 100mil adding to Nvidia's argument that notebook makers ignored heat recommendations (I don't pay that much unless I think I can lose in court). Something tells me it's being paid in production price reduction over years..
Again, they just started up HPC stuff and it looks promising as another market for NVDA to step into and it appears the foundation work is almost laid (vendors, guests etc tell the story at GTC 2010).
The writing has been on the wall for the chipset business for a few years. Apple's profits should show what entering a new large market can do for a company. So I am hugely confident 2 new markets will win over the old dead one. High margins in HPC, and a billion+ phone market, are great places to enter with 0% share. You can only gain with a good product, and it looks like a GREAT GPU is becoming very important on phones. Just $10/chip at 10% share would be adding a billion or so to revenue for NVDA. Interesting. AMD can dream if they want (I hope, but...), but I hope NVDA gets back into the 20's and just buys AMD for the CPUs/engineers. AMD vs. Intel = AMD death. AMD+NVDA vs. Intel = hmm...great race for consumer dollars :) AMD will likely go to $2-3 again by next xmas and NVDA could probably buy them at $20 pretty easily. AMD would have a market cap of about 2 bil or so, which is just NVDA's cash account.
Moral of the story is AMD is weak and losing money. They don't dictate pricing these days and certainly can't charge whatever they want. NVDA can bleed AMD to death just like Intel has. AMD couldn't have recovered from a product snafu that nvidia just rode out easily. Intel did the same for 3 years as AMD slaughtered them with Athlon64. AMD still couldn't capitalize on OWNING the crown for 3 YEARS! But hey, buy some AMD. I'll keep my NVDA. We'll compare notes next xmas and see who's right :)
If you actually think AMD will be stronger than NVDA in the next 18 months I think you're high on crack ;) I don't see 2 Bill in profits from them over the next 2 years to pay that debt off, not to mention left over profits to pay employees, R&D etc.
With Larrabee cancelled (delayed etc...whatever), NVDA has smooth sailing for another 18 months. Atom is a weak CPU, Sandy Bridge a weak GPU. The only problem would be TSMC, solved by splitting supply with Global Foundries etc. GF has the money to put behind fabs, so NV/AMD shouldn't have process problems for a while. GF is investing Intel-style while TSMC looks like AMD. They already put in 10 Bil! Chartered wasn't good enough (not enough money), but GF is. GF is making NV's job a bit easier while it all just hurt AMD. GF will also make TSMC better. Chartered couldn't kill TSMC over a failed process, but GF will have a few 15in fabs that would make bumpgate a huge problem if TSMC does it again. I think their employees will be told to perform a bit better for the next few years :) Fear does that.
As Goty said, it's more expensive because ATI sells everything they make regardless. That's what no competition brings to us customers - no price drops.
GTX460 is more expensive for Nvidia to make than 5870 is for ATI, yet ATI sells theirs for more. That is extreme failure from Nvidia's perspective.
The 6xxx series, or a price reduction on the 5xxx series after the 6xxx launches, will fill that 5770-5850 gap for ATI.
Yet AMD hasn't made money for YEARS (was 2004 the last year they made money?). The stock has also diluted from 300 mil shares outstanding to 671 mil outstanding. NVDA has gone from 400 to 561 mil shares outstanding. AMD had to dump assets to stay afloat (while dominating graphics for a year...LOL). Also, NVDA is still in the middle of a $2 Bil buyback. Not the sign of a weak company. If you're selling everything you make, you need to make MORE.
The GTX 460 (and other shrinks on low end) will just get back whatever market share AMD took (and still couldn't profit from). Thus more debt for AMD soon. I think it's extreme success that a company can withstand a complete fiasco with production over 3 years and still keep the enemy from making money, keep market share from dwindling too much, still have a few billion in the bank, and be looking to capitalize in more than one way in the next year or two. Intel did the same for 3 years while AMD dominated the cpu's. Note Intel has taken back everything AMD gained over that 3 years. Here we go again.
GTX 460's direct competition is 5830 which is $10-20 more than GTX460 and SLOWER. That means, that while AMD sells out, NVDA can sell more than enough to profit handily while AMD is probably going to continue to lose money for the year (heck I think next year too). I just hope NVDA doesn't pull an AMD and buy way too expensive thus killing them. AMD should have paid 2Bil or less for ATI (they never made more than $60mil in a year! why 4.5bil? Smoking crack...LOL).
Will that gap be filled after xmas is totally over? They already missed back to school. How much can they get out the door with an Oct 25th release date? At least they aimed at the GTX 460 first (that was smart). Don't forget this card is going to be 40nm also (400mm2 range? vs 528mm2 for the GTX 460). So not a lot to play with in margin for AMD vs the last year. At 32nm this would be a win, but at 40nm NV will just cut the price to whatever keeps you from gaining market share. I see no way for AMD to win this xmas. AMD pulled a product forward (supposedly) to get this done. That can't be good for the bottom line when you're already losing money. You're already talking price reductions for the 5xxx series, which means competition is back :) Also NVDA looks like a pretty stinking healthy company, so competition isn't leaving any time soon.
I actually have a GTX 460 and I hate it. It's louder and more power hungry than the GTX 260 it replaced, and as a bonus it seems to be unstable (crashes with screen corruption) at stock clocks.
It's unfortunate that you have had that experience with your 460, but most 460 owners are happy with their cards. Look at any reviews of the 460 you want, or any gaming/3D websites that have 460 owners and you'll find the cards for the most part run cool, are not overly power-hungry, and are fairly quiet, all in the context of the performance the cards offer.
I wouldn't say Fermi is an utter failure, even if it's almost true. IMO only the GTX 460 is a valuable GPU, and it's ITX friendly. It fits nicely in a Sugo SG05 and it's really quiet. Though overclocking is very unstable compared to what I experienced with a HD 5770 or HD 5850. Don't know why exactly, poor Nvidia memory controller I guess. I often had Nvidia cards which were whining too, pretty annoying.
My own GTX 460 runs faster and also cooler than the non-reference 4870 it replaced. Granted, it's a different generation, but it's still a far cry from the jokes that the 480 and 470 are, not to mention the super huge EPIC FAIL that the GTX 465 is.
Have you considered getting rid of it? And what one is it (Zotac, Palit, eVGA, by some chance?)? Crashing to the desktop w/ corruption sounds bad. Like bitching at the maker and RMAing it bad.
<-- GB "Windforce" GTX 460 1GB; no problems in Win7 64-bit or Arch Linux 64-bit.
First make sure that you are running the card with factory-default clocks and using the factory-default auto fan-control!! Next, PLEASE CHECK THAT THE HEAT-SINK IS FIRMLY SCREWED DOWN TO THE CIRCUIT-BOARD AND THAT NO SCREWS ARE MISSING. A very early batch of GTX460s from an unnamed manufacturer had a little (??) problem in this area....
Otherwise, desperate erratic problems need desperate resolution........
DISCLAIMER: You run these tests at your own risk !! Test #2 and Test #3 below are NOT recommended for problem-free situations !!
Run Furmark1.8.2 in the following sequence, gradually increasing the stress. 1. Benchmarking (only) 2. Stability Test (only) 3. Stability Test plus Xtreme Burning Mode (only).
Allow >3minutes of cooldown between the tests. If the card is physically A-OK, then you will be able to complete all 3 tests without experiencing crashes or glitches.
For each of the above tests:-
Run full-screen at your desired resolution and watch the on-screen Furmark temperature plot (GPU Core Temperature). It should very rapidly go up and either (a) gradually flatten out at a temperature less than 100 degrees C, or (b) reach ~100 degrees C and suddenly flatten as the GF104 goes into thermal throttling... you will then also see a corresponding frame-rate drop-off in Furmark's on-screen parameter readout. (Unlike the GTX260, many of the current GTX460 air-coolers do not have sufficient thermal mass/surface area/airflow to prevent the GF104 reaching its built-in thermal throttle point in Furmark, particularly in the Stability-plus-Extreme Burning Mode... this is potentially true of the factory-overclocked cards, especially the "external exhaust" variety.)
DO NOT leave the card running at the maximum temperature in either TEST#2 or TEST#3 for any length of time!! Seriously stresses the voltage regulators.
==================================
If the display crashes while the card is still cold, first carefully check for a problem elsewhere... power supply (unlikely, since you are replacing the ~equal-wattage GTX260), driver corruption, etc. If the display is OK while the card is still cold but shows glitches or crashes during the heat-cycles, I suggest that you RMA the card.
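If the problem does look heat-related, it also helps to have a log of what the core temperature actually does during those Furmark runs rather than trying to watch the overlay. Below is a rough sketch of such a logger, separate from Furmark itself; it assumes NVIDIA's NVML monitoring library is installed with the driver, and the device index, sample count and build line are assumptions for a single-GPU Linux box:

```cuda
// gputemp.cu - log the GPU core temperature once a second (run alongside Furmark).
// Assumed build line: nvcc gputemp.cu -lnvidia-ml -o gputemp  (paths/flags vary per system)
#include <cstdio>
#include <unistd.h>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed - is the NVIDIA driver installed?\n");
        return 1;
    }

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);        // GPU 0 - adjust on multi-GPU systems

    char name[64];
    nvmlDeviceGetName(dev, name, sizeof(name));
    printf("Monitoring %s\n", name);

    for (int s = 0; s < 600; ++s) {             // ~10 minutes of samples
        unsigned int tempC = 0;
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
        printf("%4d s  %3u C\n", s, tempC);     // a plateau near ~100 C suggests thermal throttling
        sleep(1);
    }

    nvmlShutdown();
    return 0;
}
```

Run it in a second terminal while the stress tests above are going: a flat line near the throttle point together with a frame-rate drop matches case (b) above, while crashes at modest temperatures point back at the card or the PSU.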
They made two GTX 260's: the Core 216 and the old one. I still think this guy doesn't own one, but great troubleshooting for anyone that just replaced any graphics card. If that's all he changed, he's only looking at the card or the PSU in almost all cases, if the failures started immediately after the change.
Too much missing info to nail it down (same app or game crashing, crashing in everything, when did they start, etc.), but your procedure should tell him more. But for most people a quick trip to Fry's buying the same thing can avoid most testing :) 30 days to check if it's the card, with nothing to do other than USE it to test. If it's fine after the switch, return the bad card (assuming under 30 days on the bad card), or return the new card and RMA now that you know it's bad. Costs you nothing (costs Fry's...oops), solves the problem. If both crash, I'd probably replace my PSU (everything dies) if I didn't have any other quick replacements handy. Of course I have a PHD PCI and QuickTech Pro, so I have other options :) The Fry's idea is quick and easy for most people ;) But totally unscrupulous. :)
Same idea for PSU...LOL. Test and return. "I just couldn't get it to run, must be incompatible with my board". I even gave him an excuse for the return in both cases...LOL. Is my solution desperate? :)
I've just realised that actually the card I replaced in my home PC was a GTS 250 not a GTX 260 at all. In my defence, I've been using the GTX 260 a lot at work and actually did buy a 192 core version for my home PC only to find there wasn't room for it. That probably explains the "hotter and louder" and also means I might be looking at a power supply problem. Or it might be a dodgy card - its one of the first and it is of the external exhaust variety. Thanks for all the advice - It'll probably come in handy when I get around to looking into the problem.
Smells like your PC doesn't have an adequate PSU or you got a bad card. They overclock like mad. According to AnandTech, it should use about 10 fewer watts than the GTX 260 at LOAD or IDLE. Again, you must have a bad card. It's 10-20 degrees cooler than the GTX 260 at LOAD or IDLE, or AnandTech lied in their recent article on the 460 (doubtful).
Same article... Go ahead... Scroll down a bit more and you see your card is completely out of touch with reality, apparently. Because from the charts it sure looks like all the GTX 460's are a few dB less noisy than a GTX 260, again either at LOAD or IDLE (except for Zotac's card, which seems to have a bad fan or something, it's so far from all the others).
Nvidia and AMD both have drivers that don't crash anywhere near what you're saying so again, bad card or not enough PSU. Get a new card or PSU, quit hating a great product (best at $200, or Anandtech and everyone else writes BS reviews...). I have doubts about you owning this card. Why would you keep something and not RMA with reviews showing it makes a GTX 260 look like crap. You weren't skeptical? Currently I'd bet money you don't own one.
Too lazy to correct my previous post's comment about Amazon finally shipping my 5830. It was supposed to say 5850 (for $260, which is why I waited so long). Still a good deal a few months later. I complained for months, which you can check on the XFX 5850 complaint page at Amazon if needed :)
Are these figures for the £250 GeForces or the £2500 Teslas? It makes quite a big difference since there's a factor of about 8 difference in double precision floating point performance between the two.
To be clear, Tesla parts. GeForce cards have their DP performance artificially restricted (although if they keep the same restriction ratio, then these improvements will carry over by the same factors).
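For anyone curious to see that restriction first-hand, a rough double-precision throughput probe is sketched below: a kernel that does nothing but dependent double-precision FMAs, timed with CUDA events. Run on a GeForce and on a Tesla of the same generation, the gap in measured GFLOPS reflects the artificial cap; the launch dimensions and iteration count here are arbitrary assumptions, not a calibrated benchmark.

```cuda
// dpprobe.cu - crude double-precision FMA throughput estimate.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dp_fma(double *out, int iters) {
    double a = 1.000001, b = 0.999999;
    double c = threadIdx.x * 1e-6;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;                                 // one DP fused multiply-add per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;    // store so the loop isn't optimised away
}

int main() {
    const int blocks = 1024, threads = 256, iters = 100000;
    double *out;
    cudaMalloc(&out, (size_t)blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    dp_fma<<<blocks, threads>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * iters * blocks * (double)threads;   // FMA counts as 2 flops
    printf("~%.0f DP GFLOPS\n", flops / (ms * 1e6));

    cudaFree(out);
    return 0;
}
```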
At heise.de they asked Huang about 3D performance for the 2013 part. Looks like Huang has lost all of his newly gained humility and is letting his head swell once again.
The answer was: at least 10 times the 3D performance of the GF100. Right...
When Intel does their IDF; they have something to show for it. (Sandy Bridge, Light Peak, etc.)
When AMD books a hotel room nearby; they have something to demonstrate. (The Bobcat-based Zacate compared to a typical Intel based notebook.)
Even a Chinese company called Nufront, was able to demonstrate a dual-core 2GHz Cortex A9 based system prototype in the same week! => http://www.youtube.com/watch?v=0Gfs5ujSw1Q
...And what does Nvidia bring on their first day at GTC? Codenames and vague performance promises of future products on a graph. (Yes, I can mark points on a graph to represent a half-parabola too!) => http://en.wikipedia.org/wiki/Parabola
You know what would be awesome? Actually demonstrating a working prototype of "Kepler"!
Methinks you ought to look a bit more closely at those marketing slides.
Fermi is listed as 2009. Since this is the year it was paper launched, one should assume that implies 2011 is the scheduled date for paper launching Kepler. How do you show prototypes of powerpoint slides anyway?
Keep in mind that Fermi was supposed to make retail in 2009. It got delayed, but that didn't significantly impact the rest of the GPUs in development. Kepler has been intended to be a 2011 part for a long time now. If it gets held up like Fermi, then it will be to new issues, such as with the 28nm process.
Fermi was obviously delayed and horribly late. So then why does Jensen stand up there with a slide that still says it was a 2009 product? Makes it very difficult to believe the slide in general doesn't it.
As far as Kepler being "intended" as a 2011 part for "a long time" (source?), good intentions are not going to help it get to market. If Nvidia decides to put all their eggs again in TSMC's basket and hope for the best (again), they are setting themselves up for another failure.
You mean like how ATI didn't need TSMC for their 5xxx cards ... oh wait, they used the same TSMC process just like Nvidia did. Furthermore, they're going to use the same 28nm TSMC process that Nvidia is going to use in 2011.
The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm issues is what ended up delaying the Fermi cards.
Fermi was late to market, but the completed design was on-time internally within Nvidia.
Also its foolish to think that the exact same problems that affected Fermi will affect Kepler.
So then, if Nvidia's "internal schedule" does not reflect actual availability, we can add months onto the timelines of that slide. And what does it mean to say "technically" Nvidia had Fermi cards in 2009? That makes no sense at all.
And so what if a design was completed, that helps Nvidia how? Designs don't generate revenue, actual products do. I don't think it's foolish to anticipate Kepler will be delayed, unless you mean the "design" will be on time, internally.
Again, it is foolish, because you are making a BIG assumption that Kepler will have the same flaws or weaknesses as Fermi does. You are also assuming TSMC will have as much trouble with its 28nm process as they did with their 40nm process.
In other words, far too many assumptions you are making.
As for Fermi being ready in late 2009, that is exactly what it means. The design was complete and ready, but like I said several issues led to the cards coming late to market.
Good point! Getting the chip into manufacturing will have its own issues to resolve, i.e. heat and power consumption issues. Just hope that the delays are not going to be past a quarter or two.
They need to milk Fermi for what it's worth in the market, and it seems like they are already doing that with shipping products. I hope they can keep power levels under control.
"The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm issues is what ended up delaying the Fermi cards."
This is just...just...a giant LOL! If it can't be manufactured - it isn't ready, at all! Intel, AMD, Nvidia...they have designs ready for products that are due out in several years. Those products are as "ready to go" now as much as Fermi was in 2009.
That Nvidia slide was not designed for the average gamer or consumer. That slide was shown at the GPU Technology Conference, and someone simply took a pic of it. That slide doesn't even focus on performance directly, but efficiency and double precision instead.
That slide is simply an illustration on Nvidia's internal timeline/schedule. So yes, Fermi being listed under 2009 makes sense under Nvidia's *internal* timeline, which is what that slide is.
Yes the slide does not make sense for anyone else, like the rest of us. That slide is not meant for us though, so it's rather pointless to point and laugh at a slide of Nvidia's internal timeline.
You're right, other companies have designs completed far in advance of being on sale to consumers, yet how come nobody here is making fun of those companies and their future timelines? We've seen slides from other companies as well.
With so much negative criticism focused specifically at Nvidia and nobody else one really has to wonder about the motives of certain posters.
Hi Anand, Have you heard anything from AMD about when they plan to move their graphics chipsets to a newer process? Also, who are they planning to use for graphics chipsets considering the yield issues they had at the beginning for 5xxx series against 40nm? Also, with monitors continuing to be stagnant at 2560 & 1080p what kind of improvements are these designers planning to insert? I mean Nvidia is planning to double or triple their FP calculation abilities but what about AMD? Also, with Dx12 most likely coming up if Microsoft sticks to their schedule of releasing Win8 in 2011 what features are those folks planning to be available in next gen APIs.
Wikipedia has some entries about the new AMD HD6xxx series. I don't know how true it is, but some of the dates are not too far away. This may or may not help you. http://en.wikipedia.org/wiki/Comparison_of_AMD_gra...
If the estimates are mostly true, the HD6xxx series looks like it will be really good. The GFLOPS of the HD5870 is 2070 according to the Wikipedia page, and 4080 for the HD6870. That is almost double the performance for 40W more. It looks like no DirectX 12 either.
It was way too early for anyone to be releasing DirectX 12 cards. Such cards won't be out for a long while, and Microsoft won't even be releasing DirectX 12 for a while either. The current information shows that Microsoft won't release DirectX 12 until mid-2012 at the earliest.
NVidia had big problems with the big GF100 chip, so I think they will now make a smaller chip, like the GF106 or AMD's RV870; in that case it will not be 2x as large in transistor count, but something like 1.5x as large. So the high-end Kepler card on 28nm will have ~768 CUDA cores and raised frequencies. They don't need to make a new architecture, because GTX 480 SLI with 960 CUDA cores works very well. I think everybody would want to have that as a single card.
Nvidia could rework the current Fermi cards to get better efficiency and performance, just like they did when they reworked the GTX 480 design into the GTX 460 design.
Nvidia could potentially add more shader and texture units for the same number of CUDA cores, and improve their memory controller to achieve better performance and efficiency without increasing die size that much. Nvidia could also add an additional warp scheduler to the GF104 design to improve efficiency and performance without much die size increase.
Fermi is a modular architecture which gives Nvidia some flexibility in making variations or improved versions of Fermi cards.
I would argue that Nvidia has a different approach to the graphics industry, one that is more beneficial to professionals in the field. AMD does not position their graphics division to cater to professionals the way Nvidia does. If nVidia's revenue did not rely on consumer graphics cards so much, they could leave that market and still be a big player in the graphics industry. However, many gamers who do not read extensively in the tech industry form their opinions based only on consumer-level graphics cards; that's not to say AMD's current line-up is better in all measurable ways. IMO, AMD's focus for their graphics division, and formerly ATI's position, is engineering graphics cards specifically for the consumer-level market. Nvidia may need to completely separate their teams for consumer graphics cards and professional graphics cards instead of leveraging Tesla to subsidize the rest of the architecture. That way we would have cards engineered specifically for consumers and professionals.
I would not think the Tesla is subsidizing the consumer cards. The NV approach makes sense in leveraging the maximum resources (people) to create multiple product of similar capabilities but doing it from 'Big to small" rather than ground up. The high risk is the Big Product needs to be done quickly so the variations can be harvested/tweaked allowing for the needed diversity of the market. Having separate teams will reduce the software throughput of the drivers and associated platforms it targets.
I'm sure the Tesla line leads in margin compared to the rest of their products. This allows some freedom in pricing with regards to GeForce. The focus on different markets is getting to the point where it's affecting the efficiency of mainly their high-end consumer cards. If they separated things more, the basic architecture could still be the same and thus the software not much different. Furthermore, die space is allocated for things that the consumer space is not using at the moment. There are different approaches, and Nvidia's approach is not really allowing them to be as competitive as they could be with GeForce.
Well the more of these comments I see, the more I begin to wonder how many posters here work for ATI's PR department.
Nvidia, "2nd tier" graphics maker? You sir have no idea what you are talking about.
I would highly encourage you to stop posting such nonsense and look up the new Quadro cards and how they obliterate comparable FirePro cards in the professional graphics market.
There is more to the graphics market than just gaming.
Pacific Crest analyst Michael McConnell this morning noted that due to conservative ordering practices for graphics chips since the end of July, checks find that channel inventory for Nvidia parts has dropped to 3-4 weeks, down from 10-12 weeks last quarter. McConnell says he thinks Nvidia is tracking in line guidance for 3%-5% sequential revenue growth for the fiscal third quarter ending in October.
There have been failures in the past from both camps: 5800 series, X1800XT, HD2900. Both firms have less than successful products once in a while and both firms trade market share. Unlike 5800 or HD2900 series though, Fermi is more like the X1800XT, where it's hotter and louder, but also faster (and this is especially important since it's faster in DX11 games at every price level).
Yes, a global conspiracy exists against Nvidia, luckily some people are brave enough to risk everything and post "the truth" because it is indeed out there.
You want truth? In a month AMD will begin refreshing their line-up. Nvidia will have nothing but slides and promises to counter. We all would like nothing more than for Nvidia and AMD to be trading blows and driving prices down. Sadly, it looks like the 6xxx cards will go uncontested at least through the holiday season.
Like I said to another user, go read TheJian's posts on page 4 and then come back and talk.
You can gloat and boast about how great the 6xxx cards are going to be (which has yet to be proven), but the fact remains that AMD is still in debt big-time and they simply cannot afford to spend as much as Nvidia on future GPUs.
There are many ways to reduce "channel inventory", one of them is: producing less. Anyway, whether they are selling stuff or not, fact remains that they still control at least 2/3 of the market. Which is a shame, considering how bad their offerings were.
iwodo - Wednesday, September 22, 2010 - link
No mention of CUDA -X86?? i thought it is the most significant announcementMjello - Wednesday, September 22, 2010 - link
If they manage to make their gpu's do x86-64 flawlessly (well as flawlessly as anyone can, but at least on par with intel/amd) without a dedicated x86 core to do the logic I'll be gawking foolishly... Anyway its gonna be at least a year or two.GrowMyHair - Wednesday, September 22, 2010 - link
I think it is the other way around: make x86 cores do cuda calculations. So that everyone will easily adopt cuda and turns to NVIDIA afterwards for a significant performance increase of their new cuda programs.mcnabney - Wednesday, September 22, 2010 - link
Nvidia doesn't have a license to produce hardware based on the x86 architecture. They can emulate it, but can't produce the actual cores.Lanskuat - Wednesday, September 22, 2010 - link
x86 CUDA only for programmers to debug code and for some cases then complied application was run on non CUDA system, so now such applications need to have other branch of code in native x86, but with x86 CUDA they will work slower thinking, that still running on CUDA NVidia card.iwodo - Wednesday, September 22, 2010 - link
No no no - CUDA for X86, basically you can run CUDA programs on x86, it was announced in GTC and widely reported in other sits. Just wondering why anand did not menton it.aegisofrime - Wednesday, September 22, 2010 - link
Doesn't that defeat the purpose of nVidia's GPU computing efforts ? Afterall nVidia has been touting GPU computing as offering a lot more performance than conventional x86 CPUs. It will be hilarious if we see one piece of CUDA code run faster on Sandy Bridge or Bulldozer than on Kepler :DFITCamaro - Wednesday, September 22, 2010 - link
As another said, if they make it available for x86, more programs will be written that use it. Then it is easier for people to transition over to Nvidia GPUs for a large speed increase.Cerb - Wednesday, September 22, 2010 - link
It won't. The kinds of things it is good for are generally tens to thousands of times faster on the GPU. If new CPUs double the cores, and increase IPC by 20% v. last gen, that's not even a 3x speedup. Even if the CPU could compete in raw performance, the GPU would have it in performance/watt by miles--which is part of why that is what they stress on their graph. CPUs are scaling up better w/o hitting power walls, but when those walls get reached, the CPU is at a disadvantage.Natfly - Wednesday, September 22, 2010 - link
Most significant as in their latest attempt at vendor lock-in. It'd be nice if they un-crippled their cpu physx libraries.silverblue - Wednesday, September 22, 2010 - link
Quite so; apparently it's not difficult to convert x87 to SSE, yet nVidia deliberately didn't in order to make a better case for hardware physics processing. If they're going to be giving CUDA a helping hand, then they should put some effort into releasing version 3.0 of the PhysX SDK too.iwodo - Thursday, September 23, 2010 - link
While personally hate it, This is exactly what Intel has done so on their Compiler against AMD.nbjknk - Thursday, November 25, 2010 - link
Dear customers, thank you for your support of our company.
Here, there's good news to tell you: The company recently
launched a number of new fashion items! ! Fashionable
and welcome everyone to come buy. If necessary, please$$$$$$$$$$__$$$_$$$$$$$$$$$
http://www.vipshops.org
$$_____$$$_$$$_________$$$
$$$_____$$$_$$$______ $$$
$$$ ____$$$_ $$$_____ $$$
$$$$$$$$$$__$$$____$$$
$$$_____$$$_$$$___$$$
$$$_____$$$_$$$__$$$
$$$$$$$$$$$_$$$_$$$
$$$$$$$$$$__$$$_$$$$$$$$$$$$ !::!
http://www.vipshops.org
Thursday, 21 October 2010 at 9:48 PM
Ryan Smith - Wednesday, September 22, 2010 - link
CUDA-x86 is a commercial product by the Portland Group. It's not even the first x86 runtime for CUDA; Ocelot offered this last year (although I suspect Portland's implementation will be much better). In any case since it's a commercial product, don't expect to see it used in consumer products; it really doesn't sound like it's intended for that.iwodo - Thursday, September 23, 2010 - link
Arh, Thanks for the clarifying . I thought it came with the SDK as well.AnandThenMan - Wednesday, September 22, 2010 - link
--- NVIDIA took some criticism for introducing Fermi months before it shipped, but it seems to have worked out well for the company anyhow. ---It did? Fermi is a disaster for Nvidia, at least up until now. AMD is about to move on to a new(er) architecture, Nvidia has not even put Fermi into all market segments. Your statement is highly suspect.
Ryan Smith - Wednesday, September 22, 2010 - link
Keep in mind we're at GTC, so we're talking about the compute side of things. The guys here weren't any happier than anyone else about GF100's delay, but the fact of the matter is that there are halls full of developers happily clobbering Tesla products. For them, having Fermi announced ahead of its actual launch was a big deal. And that's why the early announcement worked out well for the company - it for the developers, not the consumers.muhahaaha - Wednesday, September 22, 2010 - link
After your recent failure with Fermi (hot, loud, and power hungry), you're gonna need to lie and hype your next gen stuff.If anything like last time happens, you'll be on your ass, you media whores, re-branding liars, and FUD pushers.
That CEO gotta go. What a foo.
aguilpa1 - Wednesday, September 22, 2010 - link
Dude you need to grow up. When the next generation of ATI the 2900XT came it was "hot, loud and power hungry...., and under performing to the at that time brand new 8800 series. It has taken this long for the architecture to mature. It has matured, you haven't. As far as re branding amd media whores, both sides do it man..., whatever.Dark_Archonis - Wednesday, September 22, 2010 - link
The only thing suspect here is the motives of some posters.Fermi is loud, hot, and power hungry? Really? I guess you've been living under a rock, and never heard of the GTX 460 or the GTS 450. Neither of them are loud, hot, or that power-hungry.
You want to know what's loud, hot, and power-hungry? An ATI 5970 card.
Griswold - Wednesday, September 22, 2010 - link
Ah clownboy compares mid-range to absolute top-end dual GPU card? I see how it is.Dark_Archonis - Wednesday, September 22, 2010 - link
Real mature, you must be proud of that comment.The above posters did NOT specify what products they were talking about, yet I don't see you criticizing them? I wonder why?
The above posters claimed Fermi in general has been a failure, and that Fermi cards in general are loud, hot, and power-hungry. Both claims as I pointed out are incorrect.
B3an - Wednesday, September 22, 2010 - link
You're 100% right.And Griswold is simply a prick.
Will Robinson - Wednesday, September 22, 2010 - link
No he isn't.When people say "Fermi" most users understand that you are talking about GTX480 or 470.He didn't say "Half a Fermi" which is a more apt description of GTX460.
So "Fermi" does indeed carry an H/P/N penalty and hasn't been a market success along the lines of AMD Cypress.
GTX460 is their only popular product and is an admirable part.
Dark_Archonis - Wednesday, September 22, 2010 - link
"Half" a Fermi? First, Fermi is NOT one specific card or chipset layout. Fermi is a GENERAL architecture. GTX 480 and GTX 470 were the first Fermi cards.So no, most informed users understand Fermi as being the 4xx series of Nvidia cards, NOT as only the GTX 480 and GTX 470.
Obsoleet - Monday, September 27, 2010 - link
The 480 is the full blown Fermi. Calling the 460 half a Fermi makes sense, it is a reduced version of the full monte. Which itself happens to be loud, hot and power hungry. If you cut it down enough it does get better but the 460 isn't fast enough. For my uses I wouldn't call it an admirable part, just the only NV part worth purchasing at low/mid range if you're unwilling to buy AMD.habibo - Friday, October 1, 2010 - link
460 is not a reduced version of the 480, it's a different chip. The 460's built from a GF104 while the 480/470 are built from GF100. The common thread is that they're all based on the Fermi architecture.silverblue - Wednesday, September 22, 2010 - link
Maybe so, but the 5970 is a dual-GPU solution, and as such, taken as given that it will be hot and hungry. It would've been more correct had muhahaaha referred directly to the 480, 470 and 465 instead of generalising Fermi, but there's no escaping the fact that even the 460 loses out, albeit barely, in a performance/watt comparison (though I suppose the 5830 is an exception in the 58xx line) with Evergreen.The 465 upwards use far too much power and are a little too warm when compared to the direct competition that is the 5850 and 5870. The 460 and below should help nVidia recoup some of the money lost on developing the higher spec parts, and the 460 1GB is a fantastic card.
Touche - Wednesday, September 22, 2010 - link
Fermi in general is a failure. GTX460 is only slightly less so, but being 11 months younger and larger, but still quite slower than 5870 does make it a failure.Dark_Archonis - Wednesday, September 22, 2010 - link
Except 5870 is more expensive than GTX 460 and the two cards do not compete directly against each other. As has already been stated by many websites on here before, AMD/ATI currently has nothing at the GTX 460 price-range that competes strongly performance-wise.Goty - Wednesday, September 22, 2010 - link
The only reason ATI has nothing competitive in the 460 price range is because they're making too much money selling the 5850s and 5870s for larger margins.They could lower the price of the 5850 and 5870 to be more competitive, but why? Cypress and GF104 cost roughly the same to produce (based on die sizes), but Cypress sells for significantly more and sells well at those prices. AMD makes more money by not "competing" with the 460 and just letting it sweep up the leftovers of the high-end.
Dark_Archonis - Wednesday, September 22, 2010 - link
Wait, so now people are talking about how ATI doesn't need to be the price/performance leader? I recall seeing in various forums people yelling from the rooftops about how the ATI 5xxx series is so great because of the price/performance aspect.So now ATI can act like Nvidia or Intel and keep prices high just because? That's fine, but ATI/AMD supporters should then not use price/performance as a main argument of why the 5xxx cards are so good.
Getting back to the point, the GTX 460 *currently* offers the best performance at its price point. Period.
ATI wanting high margins and keeping their prices high is irrelevant; the GTX 460 remains the best mid-range card you can get right now for the price.
Touche - Wednesday, September 22, 2010 - link
You have to look at it from both perspectives. GTX460 is a great card for customers, but it's terrible for Nvidia. They hardly make any money from it, if at all. Whole Fermi fiasco is terrible for everyone except for ATI. Nvidia is in trouble, and we don't get cheaper cards.And yes, ATI can, is and will act just like Nvidia or Intel or any other company out there when there is no competition. If they sell everything they produce anyway, why lower their margins? They wouldn't sell more because there's nothing to sell, only earn less.
ATI could sell 5870 for way less than GTX460 and still make a nice profit. It's sad that Fermi line is not making them do that.
TheJian - Thursday, September 23, 2010 - link
Sorry, AMD can't sell things cheaper. You apparently don't understand ATI doesn't exist any more. It seems you're unaware of the fact AMD has been losing money for over 4 years (more?...I can't even remember when they made money and I've owned the stock and follow their cpu/gpu's...LOL). NVDA has a few billion in the bank while AMD has a few billion in DEBT. I'm guessing AMD isn't spending 2Billion on their next GPU. They just can't afford it.The problem isn't AMD charging more. I'm an AMD lover...so want them to succeed, and would rather have an Intel/AMD 50/50 split for obvious reasons. It's fanboys freaking out when nvidia charges for their product, but complete silence when AMD does the same. Note I recently got my Radeon 5830 (sorry COOL/QUIET Fermi was a little late, Amazon beat them and finally shipped a card 7 months later...LOL).
If you actually think a company with a few billion in the bank, no debt, making money (not much for this year or last, but still profit - especially if you remove TSMC chips and payouts for them crapping out), currently the top product, investing in future heavily, diversifying into 2 brand new markets for them, about to complete their shrink of Fermi across the lines (more RED for AMD next 3 quarters - Until they do the same...and probably still not make money) you really need to read more about business. This business looks VERY strong for the next 18 months at least. Sandy Bridge will do nothing more than replace motherboard gpus. Performance sucks to bad to do anything else. AMD just gets weaker by the day (can't continue paying higher and higher interest to put off your debt each few years, you have to PAY IT OFF).
I really only see Intel as a longer term threat. With ARM tops in phone cpu's, and NVDA married to them they don't need a cpu (for now?) to add to the bottom line a LOT with little market share. Tegra2 is what Tegra1 should have been but the competition had good products so the first effort was just like larabee (DOA). Tegra3 however looks to make NVDA jump like google has in phones (from like 5% to 17% in about 9 months?). It should make most makers take notice (Tegra2 looks to get some, signing LG and looking like droids gpu of choice) and NVDA's bottom line finally rise significantly. Intel will need another rev of Atom (or two) or Sandy Bridge's next rev to change anything at NVDA. By then of course those chips will be facing Tegra3 and another rev on the desktop chips for nvda.
We won't have another TSMC chip fiasco with other fabs competing for that business (AMD's fabs...OUCH), I think all will avoid bumpgate this time. If the process does go bad at least they can launch with 1/2 chips from another fab and minimize damage. Also, I believe NVDA will eventually either sue TSMC (they said they would in their first statement released about bumpgate) for all money related to bumpgate as Jen Hsun said they would or they've got a pretty sweet manufacturing deal for ages coming because of it (maybe already got one). Whichever way NVDA goes with TSMC, either will boost bottom line. They've already paid for the damage over the last 2-3 years (note huge write-offs in statements, 200mil here, 180 mil there etc).
I haven't heard TSMC paid anything yet, though you could blame badly made chips on the actual CHIP MAKER easily I think. It's their silicon that failed not NVDA's. Arguments can be made about designer fault but it's really hard to ignore TSMC made them all (and even caused problems for AMD). It seems that all cost about 500-800mil. Even if NVDA gets half back that's a lot of money. HP ponied up 100mil adding to Nvidia's argument that notebook makers ignored heat recommendations (I don't pay that much unless I think I can lose in court). Something tells me it's being paid in production price reduction over years..
Again, they just started up HPC stuff and it looks promising as another market for NVDA to step into and it appears the foundation work is almost laid (vendors, guests etc tell the story at GTC 2010).
Chipset business death has been writing on the wall for a few years. Apple's profits should show what entering a new large market can do for a company. So I am hugely confident 2 new markets will win over the old dead one. High margins in HPC, and a billion+ phone market are great places to enter with 0%. You can only gain with a good product and it looks like a GREAT GPU is becoming very important on phones. Just $10/chip at 10% share would be adding a billion or so to revenue for NVDA. Interesting. AMD can dream if they want (I hope, but...), but I hope NVDA gets back into the 20's and just buys AMD for the cpu's/engineers. AMD vs. Intel=AMD death. AMD+NVDA vs. Intel=hmm...great race for consumer dollars :) AMD will likely go to $2-3 again by next xmas and NVDA could buy them at $20 pretty easily probably. AMD would have a market cap of about 2bil or so which is just NVDA's cash account.
Moral of the story is AMD is weak and losing money. They don't dictate pricing these days and certainly can't charge whatever they want. NVDA can bleed AMD to death just like Intel has. AMD couldn't have recovered from a product snafu that nvidia just rode out easily. Intel did the same for 3 years as AMD slaughtered them with Athlon64. AMD still couldn't capitalize on OWNING the crown for 3 YEARS! But hey, buy some AMD. I'll keep my NVDA. We'll compare notes next xmas and see who's right :)
If you actually think AMD will be stronger than NVDA in the next 18 months I think you're high on crack ;) I don't see 2 Bill in profits from them over the next 2 years to pay that debt off, not to mention left over profits to pay employees, R&D etc.
With a cancelled larabee (delayed etc...whatever), NVDA has smooth sailing for another 18 months. Atom is weak cpu, Sandy weak gpu. Only problem would be TSMC, solved by 1/2 supply over Global Foundries etc. GF has the money to put behind fabs so NV/AMD shouldn't have process problems for a while. GF is investing Intel style while TSMC looks like AMD. They already put in 10Bil! Charter wasn't good enough (not enough money), but GF is. GF is making NV's job easier a bit while it all just hurt AMD. GF will also make TSMC better. Charter couldn't kill TSMC over a failed process, but GF will have a few 15in fabs that would make bumpgate a huge problem if TSMC does it again. I think their employees will be told to perform a bit better for the next few years :) Fear does that.
Touche - Wednesday, September 22, 2010 - link
As Goty said, it's more expensive because ATI sells everything they make regardless. That's what no competition brings to us customers - no price drops.GTX460 is more expensive for Nvidia to make than 5870 is for ATI, yet ATI sells theirs for more. That is extreme failure from Nvidia's perspective.
6xxx series or price reduction for 5xxxx after 6 is launched will fill that 5770-5850 gap for ATI.
TheJian - Thursday, September 23, 2010 - link
Yet AMD hasn't made money for YEARS (was 2004 the last year they made money?). The stock has also diluted from 300mil shares outstanding to 671mil outstanding. NVDA has gone from 400 to 561mil shares outstanding. AMD had to dump assets to stay afloat (while dominating graphics for a year...LOL). Also, NVDA is still in the middle of a $2Bil buyback. Not the sign of a weak company. If you're selling everything you make, you need to make MORE. The GTX 460 (and other shrinks on the low end) will just take back whatever market share AMD gained (and still couldn't profit from). Thus more debt for AMD soon. I think it's extreme success that a company can withstand a complete production fiasco over 3 years and still keep the enemy from making money, keep market share from dwindling too much, still have a few billion in the bank, and be looking to capitalize in more than one way in the next year or two. Intel did the same for 3 years while AMD dominated in CPUs. Note Intel has taken back everything AMD gained over those 3 years. Here we go again.
The GTX 460's direct competition is the 5830, which is $10-20 more than the GTX 460 and SLOWER. That means that while AMD sells out, NVDA can sell more than enough to profit handily, while AMD is probably going to continue to lose money for the year (heck, I think next year too). I just hope NVDA doesn't pull an AMD and overpay for an acquisition, killing themselves in the process. AMD should have paid 2Bil or less for ATI (ATI never made more than $60mil in a year! Why 4.5bil? Smoking crack...LOL).
Will that gap only be filled after xmas is totally over? They've already missed back to school. How much can they get out the door with an Oct 25th release date? At least they aimed at the GTX 460 first (that was smart). Don't forget this card is going to be 40nm also (400mm² range? vs. 528mm² for the GTX 460). So not a lot to play with in margin for AMD vs. the last year. At 32nm this would be a win, but at 40nm NV will just cut the price to whatever keeps you from gaining market share. I see no way for AMD to win this xmas. AMD pulled a product forward (supposedly) to get this done. That can't be good for the bottom line when you're already losing money. You're already talking price reductions for the 5xxx series, which means competition is back :) Also, NVDA looks like a pretty healthy company, so competition isn't leaving any time soon.
shawkie - Wednesday, September 22, 2010 - link
I actually have a GTX 460 and I hate it. It's louder and more power hungry than the GTX 260 it replaced and, as a bonus, it seems to be unstable (crashes with screen corruption) at stock clocks.
Dark_Archonis - Wednesday, September 22, 2010 - link
It's unfortunate that you have had that experience with your 460, but most 460 owners are happy with their cards. Look at any reviews of the 460 you want, or any gaming/3D websites that have 460 owners, and you'll find the cards for the most part run cool, are not overly power-hungry, and are fairly quiet, all in the context of the performance the cards offer.
kallogan - Wednesday, September 22, 2010 - link
I wouldn't say Fermi is an utter failure, even if it's almost true. IMO only the GTX 460 is a worthwhile GPU, and it's ITX friendly. It fits nicely in a Sugo SG05 and it's really quiet. Though overclocking is very unstable compared to what I experienced with a HD 5770 or HD 5850. Don't know why exactly, a poor Nvidia memory controller I guess. I've often had Nvidia cards which were whining too, pretty annoying.
aegisofrime - Wednesday, September 22, 2010 - link
My own GTX 460 runs faster and also cooler than the non-reference 4870 it replaced. Granted it's different generations, but it's still a far cry from the jokes that the 480 and 470 are, not to mention the super huge EPIC FAIL that the GTX 465 is.
Cerb - Wednesday, September 22, 2010 - link
Have you considered getting rid of it? And which one is it (Zotac, Palit, eVGA, by some chance)? Crashing to the desktop w/ corruption sounds bad. Like bitching at the maker and RMAing it bad.
<-- GB "Windforce" GTX 460 1GB; no problems in Win7 64-bit or Arch Linux 64-bit.
kilkennycat - Wednesday, September 22, 2010 - link
Re: Your suspect GTX460. First make sure that you are running the card with factory-default clocks and using the factory-default auto fan control!! Next, PLEASE CHECK THAT THE HEAT-SINK IS FIRMLY SCREWED DOWN TO THE CIRCUIT-BOARD AND THAT NO SCREWS ARE MISSING. A very early batch of GTX460s from an unnamed manufacturer had a little (??) problem in this area....
Otherwise, desperate erratic problems need desperate resolution........
DISCLAIMER: You run these tests at your own risk !! Test #2 and Test #3 below are NOT recommended for problem-free situations !!
Run FurMark 1.8.2 in the following sequence, gradually increasing the stress.
1. Benchmarking (only)
2. Stability Test (only)
3. Stability Test plus Xtreme Burning Mode (only).
Allow >3 minutes of cooldown between the tests. If the card is physically A-OK, then you will be able to complete all 3 tests without experiencing crashes or glitches.
For each of the above tests:
Run full-screen at your desired resolution and watch the on-screen FurMark temperature plot (GPU Core Temperature); a small logging sketch is included below for anyone who prefers hard numbers to the on-screen plot. The temperature should very rapidly go up and either (a) gradually flatten out at less than 100degreesC, or (b) reach ~100degreesC and suddenly flatten as the GF104 goes into thermal throttling... you will then also see a corresponding frame-rate drop-off in FurMark's on-screen parameter readout. (Unlike the GTX260, many of the current GTX460 air coolers do not have sufficient thermal mass/surface area/airflow to prevent the GF104 from reaching its built-in thermal throttle point in FurMark, particularly in the Stability-plus-Xtreme-Burning mode... this is potentially true of the factory-overclocked cards, especially the "external exhaust" variety.)
DO NOT leave the card running at the maximum temperature in either TEST#2 or TEST#3 for any length of time!! It seriously stresses the voltage regulators.
==================================
If the display crashes while the card is still cold, first carefully check for a problem elsewhere: power supply (unlikely, since you are replacing the ~equal-wattage GTX260), driver corruption, etc. If the display is OK while the card is still cold but shows glitches or crashes during the heat cycles, I suggest that you RMA the card.
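For anyone who'd rather watch hard numbers than FurMark's on-screen plot, here is a minimal logging sketch. It assumes an nvidia-smi new enough to support the --query-gpu interface (drivers from the era of this thread may not expose it), and it simply polls temperature and shader clock so a sudden clock drop near ~100degreesC shows the throttle kicking in:

```python
# Minimal GPU temperature/clock logger (assumes a reasonably recent
# nvidia-smi that supports --query-gpu; older drivers may not).
# Run it in a second window while FurMark is stressing the card.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,clocks.sm",
         "--format=csv,noheader,nounits"]

def sample():
    """Return (temperature_C, sm_clock_MHz) for the first GPU."""
    line = subprocess.check_output(QUERY, text=True).splitlines()[0]
    temp, clock = (int(v.strip()) for v in line.split(","))
    return temp, clock

if __name__ == "__main__":
    start = time.time()
    while True:
        temp, clock = sample()
        # A sudden clock drop while the temperature sits near ~100 C is
        # the built-in thermal throttle kicking in.
        print(f"{time.time() - start:6.0f}s  {temp:3d} C  {clock:4d} MHz")
        time.sleep(5)
```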
TheJian - Thursday, September 23, 2010 - link
They made two GTX 260's: the Core 216 and the old one. I still think this guy doesn't own one, but that's great troubleshooting for anyone who just replaced a graphics card. If that's all he changed, he's only looking at the card or the PSU in almost all cases, if the failures started immediately after the change. Too much missing info to nail it down (same app or game crashing, crashing in everything, when did they start, etc.), but your procedure should tell him more. But for most people a quick trip to Fry's to buy the same thing can avoid most testing :) 30 days to check if it's the card, with nothing to do other than USE it to test. If it's fine after the switch, return the bad card (assuming under 30 days on the bad card), or return the new card and RMA the old one now that you know it's bad. Costs you nothing (costs Fry's...oops), solves the problem. If both crash, I'd probably replace my PSU (everything dies) if I didn't have any other quick replacements handy. Of course I have a PHD PCI and QuickTech Pro, so I have other options :) The Fry's idea is quick and easy for most people ;) But totally unscrupulous. :)
Same idea for PSU...LOL. Test and return. "I just couldn't get it to run, must be incompatible with my board". I even gave him an excuse for the return in both cases...LOL. Is my solution desperate? :)
shawkie - Friday, September 24, 2010 - link
I've just realised that actually the card I replaced in my home PC was a GTS 250, not a GTX 260 at all. In my defence, I've been using the GTX 260 a lot at work and actually did buy a 192-core version for my home PC, only to find there wasn't room for it. That probably explains the "hotter and louder" and also means I might be looking at a power supply problem. Or it might be a dodgy card - it's one of the first and it is of the external exhaust variety. Thanks for all the advice - it'll probably come in handy when I get around to looking into the problem.
TheJian - Thursday, September 23, 2010 - link
Smells like your PC doesn't have an adequate PSU, or you got a bad card. They overclock like mad. According to AnandTech it should use about 10 fewer watts than the GTX 260 at LOAD or IDLE. Again, you must have a bad card. It's 10-20 degrees cooler than a GTX 260 at LOAD or IDLE, or Anandtech lied in their recent article on the 460 (doubtful).
http://www.anandtech.com/show/3809/nvidias-geforce...
Same article... go ahead... scroll down a bit more and you'll see your card is completely out of touch with reality, apparently. Because from the charts it sure looks like all the GTX 460's are a few dB less noisy than a GTX 260, at LOAD or IDLE again (except for Zotac's card, which seems to have a bad fan or something, it's so far from all the others).
Nvidia and AMD both have drivers that don't crash anywhere near as often as you're saying, so again: bad card or not enough PSU. Get a new card or PSU, and quit hating a great product (the best at $200, or else Anandtech and everyone else writes BS reviews...). I have doubts about you owning this card. Why would you keep something and not RMA it, when reviews show it makes a GTX 260 look like crap? Weren't you skeptical? Currently I'd bet money you don't own one.
Too lazy to correct my previous comment about Amazon finally shipping my 5830. It was supposed to say 5850 (for $260, which is why the wait was so long). Still a good deal a few months later. I complained for months, which you can check on the XFX 5850 complaint page at Amazon if needed :)
AnnonymousCoward - Wednesday, September 22, 2010 - link
Dark_Archonis, AMD's latest architecture blows away Nvidia's when you consider power consumption relative to performance (gaming).
shawkie - Wednesday, September 22, 2010 - link
Are these figures for the £250 GeForces or the £2500 Teslas? It makes quite a big difference since there's a factor of about 8 difference in double precision floating point performance between the two.
Ryan Smith - Wednesday, September 22, 2010 - link
To be clear, Tesla parts. GeForce cards have their DP performance artificially restricted (although if they keep the same restriction ratio, then these improvements will carry over by the same factors).
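To make the "same restriction ratio carries over" point concrete, here is a rough back-of-the-envelope sketch. Every figure in it is an illustrative placeholder rather than a confirmed spec for any shipping card, and the 1/8 cap is just an example of an artificial restriction:

```python
# Back-of-the-envelope peak-FLOPS sketch; all figures are illustrative
# placeholders, not confirmed specifications for any real card.
def peak_gflops(cuda_cores, shader_clock_ghz, dp_ratio):
    """Peak single- and double-precision GFLOPS for a hypothetical part.

    Assumes 2 FLOPs per core per clock (one fused multiply-add);
    dp_ratio is the fraction of SP throughput available in DP
    (1/2 for a full-rate compute part, much less where it is capped).
    """
    sp = cuda_cores * shader_clock_ghz * 2.0
    return sp, sp * dp_ratio

# Same hypothetical silicon, full DP rate vs. a 1/8 artificial cap:
sp, dp_full   = peak_gflops(448, 1.15, dp_ratio=1 / 2)
_,  dp_capped = peak_gflops(448, 1.15, dp_ratio=1 / 8)
print(f"SP {sp:.0f} GFLOPS, DP full {dp_full:.0f}, DP capped {dp_capped:.0f}")
# If a future chip is N times faster and the cap stays the same, both DP
# numbers scale by N, so the gap between the product lines is unchanged.
```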
Griswold - Wednesday, September 22, 2010 - link
At heise.de they asked Huang about 3D performance for the 2013 part. Looks like Huang lost all of his newly gained humility and is starting to get a swelled head once again. The answer was: at least 10 times the 3D performance of the GF100. Right...
stmok - Wednesday, September 22, 2010 - link
When Intel does their IDF, they have something to show for it (Sandy Bridge, Light Peak, etc.).
When AMD books a hotel room nearby; they have something to demonstrate.
(The Bobcat-based Zacate compared to a typical Intel based notebook.)
Even a Chinese company called Nufront was able to demonstrate a dual-core 2GHz Cortex A9 based system prototype in the same week!
=> http://www.youtube.com/watch?v=0Gfs5ujSw1Q
...And what does Nvidia bring on their first day at GTC? Codenames and vague performance promises of future products on a graph. (Yes, I can mark points on a graph to represent a half-parabola too!)
=> http://en.wikipedia.org/wiki/Parabola
You know what would be awesome? Actually demonstrating a working prototype of "Kepler"!
Even when questioned about Tegra-based solutions not being in consumer products: Talk. Talk. Talk.
=> http://www.youtube.com/watch?v=00wUypuruKE
=> http://www.youtube.com/watch#!v=tAHqmNmbN8U
wumpus - Wednesday, September 22, 2010 - link
Methinks you ought to look a bit more closely at those marketing slides. Fermi is listed as 2009. Since this is the year it was paper launched, one should assume that implies 2011 is the scheduled date for paper launching Kepler. How do you show prototypes of PowerPoint slides anyway?
Ryan Smith - Wednesday, September 22, 2010 - link
Keep in mind that Fermi was supposed to make retail in 2009. It got delayed, but that didn't significantly impact the rest of the GPUs in development. Kepler has been intended to be a 2011 part for a long time now. If it gets held up like Fermi, then it will be due to new issues, such as with the 28nm process.
AnandThenMan - Wednesday, September 22, 2010 - link
Fermi was obviously delayed and horribly late. So then why does Jensen stand up there with a slide that still says it was a 2009 product? Makes it very difficult to believe the slide in general, doesn't it? As far as Kepler being "intended" as a 2011 part for "a long time" (source?), good intentions are not going to help it get to market. If Nvidia decides to put all their eggs in TSMC's basket again and hope for the best (again), they are setting themselves up for another failure.
Dark_Archonis - Wednesday, September 22, 2010 - link
You mean like how ATI didn't need TSMC for their 5xxx cards ... oh wait, they used the same TSMC process just like Nvidia did. Furthermore, they're going to use the same 28nm TSMC process that Nvidia is going to use in 2011. The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm issues, are what ended up delaying the Fermi cards.
Fermi was late to market, but the completed design was on-time internally within Nvidia.
Also, it's foolish to think that the exact same problems that affected Fermi will affect Kepler.
AnandThenMan - Wednesday, September 22, 2010 - link
So then, if Nvidia's "internal schedule" does not reflect actual availability, we can add months onto the timelines of that slide. And what does it mean to say "technically" Nvidia had Fermi cards in 2009? That makes no sense at all. And so what if a design was completed; how does that help Nvidia? Designs don't generate revenue, actual products do. I don't think it's foolish to anticipate Kepler will be delayed, unless you mean the "design" will be on time, internally.
Dark_Archonis - Wednesday, September 22, 2010 - link
Again, it is foolish, because you are making a BIG assumption that Kepler will have the same flaws or weaknesses as Fermi does. You are also assuming TSMC will have as much trouble with its 28nm process as they did with their 40nm process. In other words, you are making far too many assumptions.
As for Fermi being ready in late 2009, that is exactly what it means. The design was complete and ready, but like I said several issues led to the cards coming late to market.
fteoath64 - Thursday, September 23, 2010 - link
Good point! Getting the chip into manufacturing will have its own issues to resolve, i.e. heat and power consumption issues. Just hope that the delays are not going to be past a quarter or two. They need to milk Fermi for all it's worth in the market, and it seems they're already doing that with shipping products. I hope they can keep power levels under control.
Touche - Wednesday, September 22, 2010 - link
"The reason the slide shows Fermi as a 2009 product is because it IS a 2009 product based on Nvidia's internal schedule. The design being finished internally at a company and the product being released to consumers are two different things. Technically, Nvidia DID have the Fermi cards ready to go in late 2009. Unexpected heat and power issues, along with TSMC's 40nm issues is what ended up delaying the Fermi cards."This is just...just...a giant LOL! If it can't be manufactured - it isn't ready, at all! Intel, AMD, Nvidia...they have designs ready for products that are due out in several years. Those products are as "ready to go" now as much as Fermi was in 2009.
Dark_Archonis - Thursday, September 23, 2010 - link
I don't see why this is a giant lol. That Nvidia slide was not designed for the average gamer or consumer. That slide was shown at the GPU Technology Conference, and someone simply took a pic of it. That slide doesn't even focus on performance directly, but efficiency and double precision instead.
That slide is simply an illustration on Nvidia's internal timeline/schedule. So yes, Fermi being listed under 2009 makes sense under Nvidia's *internal* timeline, which is what that slide is.
Yes the slide does not make sense for anyone else, like the rest of us. That slide is not meant for us though, so it's rather pointless to point and laugh at a slide of Nvidia's internal timeline.
You're right, other companies have designs completed far in advance of being on sale to consumers, yet how come nobody here is making fun of those companies and their future timelines? We've seen slides from other companies as well.
With so much negative criticism focused specifically at Nvidia and nobody else, one really has to wonder about the motives of certain posters.
arnavvdesai - Wednesday, September 22, 2010 - link
Hi Anand,
Have you heard anything from AMD about when they plan to move their graphics chips to a newer process? Also, who are they planning to use to fab their graphics chips, considering the yield issues they had at the beginning of the 5xxx series on 40nm?
Also, with monitors continuing to be stagnant at 2560 and 1080p, what kind of improvements are these designers planning to add? I mean, Nvidia is planning to double or triple their FP calculation abilities, but what about AMD? Also, with DX12 most likely coming up if Microsoft sticks to their schedule of releasing Win8 in 2011, what features are those folks planning to make available in next-gen APIs?
AlexWade - Wednesday, September 22, 2010 - link
Wikipedia has some entries about the new AMD HD6xxx series. I don't know how true it is, but some of the dates are not too far away. This may or may not help you.
http://en.wikipedia.org/wiki/Comparison_of_AMD_gra...
If the estimates are mostly true, the HD6xxx series looks like it will be really good. The GFLOPS of the HD5870 is 2070 according to the Wikipedia page, and 4080 for the HD6870. That is almost double the throughput for 40W more. It looks like no DirectX 12 either.
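Just to sanity-check the arithmetic in that comment, here is a quick sketch using the GFLOPS figures quoted above, plus an assumed ~188W board power for the HD 5870 (a commonly cited number, not taken from the Wikipedia page) and the "+40W" from the post:

```python
# Quick sanity check of the ratios above. The GFLOPS numbers are the ones
# quoted from Wikipedia; the 188 W HD 5870 board power is an assumption
# (a commonly cited figure), and the "+40 W" comes from the post.
hd5870_gflops, hd6870_gflops = 2070.0, 4080.0
hd5870_watts = 188.0                 # assumed
hd6870_watts = hd5870_watts + 40.0

throughput_gain = hd6870_gflops / hd5870_gflops
efficiency_gain = (hd6870_gflops / hd6870_watts) / (hd5870_gflops / hd5870_watts)

print(f"raw throughput: {throughput_gain:.2f}x")   # ~1.97x -- "almost double"
print(f"GFLOPS per watt: {efficiency_gain:.2f}x")  # ~1.63x on these assumptions
```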
Dark_Archonis - Wednesday, September 22, 2010 - link
It's way too early for anyone to be releasing DirectX 12 cards. Such cards won't be out for a long while, and Microsoft won't even be releasing DirectX 12 for a while either. The current information shows that Microsoft won't release DirectX 12 until mid-2012 at the earliest.
Lanskuat - Wednesday, September 22, 2010 - link
Nvidia had big problems with the big GF100 chip, so I think they'll now make a smaller chip, like GF106 or AMD's RV870; in that case it would not be 2x the transistor count, but something like 1.5x. So the high-end Kepler card on 28nm would have ~768 CUDA cores and raised frequencies. They don't need to make a new architecture, because GTX 480 SLI with 960 CUDA cores works very well. I think everybody would want that as a single card.
Dark_Archonis - Wednesday, September 22, 2010 - link
Nvidia could rework the current Fermi cards to get better efficiency and performance, just like they did when they reworked the GTX 480 design into the GTX 460 design. Nvidia could potentially add more shader and texture units for the same number of CUDA cores, and improve their memory controller, to achieve better performance and efficiency without increasing die size that much. Nvidia could also add an additional warp scheduler to the GF104 design to improve efficiency and performance without much die size increase.
Fermi is a modular architecture which gives Nvidia some flexibility in making variations or improved versions of Fermi cards.
Touche - Wednesday, September 22, 2010 - link
Lots of coulds in that one.
Will Robinson - Wednesday, September 22, 2010 - link
It's still trying to put lipstick on a pig. No wonder NVDA has fallen to the level of a 2nd-tier graphics chip maker now.
JeffC. - Wednesday, September 22, 2010 - link
I would argue that Nvidia has a different approach to the graphics industry, one that is more beneficial to professionals in the field. AMD does not position their graphics division to cater to professionals the same way Nvidia does. If nV's revenue did not rely on consumer graphics cards so much, they could leave that market and still be a big player in the graphics industry. However, many gamers who do not read extensively about the tech industry form their opinions based only on consumer-level graphics cards; that's not saying AMD's current line-up is better in all measurable ways. IMO, AMD's focus for their graphics division, and formerly ATI's position, is engineering graphics cards specifically for the consumer-level market. Nvidia may need to completely separate their teams for consumer graphics cards and professional graphics cards instead of leveraging Tesla to subsidize the rest of the architecture. That way we would have cards engineered specifically for consumers and for professionals.
fteoath64 - Thursday, September 23, 2010 - link
I would not think Tesla is subsidizing the consumer cards. The NV approach makes sense in leveraging the maximum resources (people) to create multiple products of similar capabilities, but doing it from "big to small" rather than from the ground up. The high risk is that the big product needs to be done quickly so the variations can be harvested/tweaked, allowing for the needed diversity in the market. Having separate teams would reduce the software throughput of the drivers and the associated platforms they target.
JeffC. - Thursday, September 23, 2010 - link
I'm sure the Tesla line leads in margin compared to the rest of their products. This allows some freedom in pricing with regard to GeForce. The focus on different markets is getting to the point where it's affecting the efficiency of mainly their high-end consumer cards. If they separated things more, the basic architecture could still be the same and thus the software not much different. Furthermore, die space is allocated for things that the consumer space is not using at the moment. There are different approaches. Nvidia's approach is not really allowing them to be as competitive as they could be with GeForce.
Dark_Archonis - Thursday, September 23, 2010 - link
Well, the more of these comments I see, the more I begin to wonder how many posters here work for ATI's PR department. Nvidia, "2nd tier" graphics maker? You sir have no idea what you are talking about.
I would highly encourage you to stop posting such nonsense and look up the new Quadro cards and how they obliterate comparable FirePro cards in the professional graphics market.
There is more to the graphics market than just gaming.
AnandThenMan - Thursday, September 23, 2010 - link
Says the person that defends Nvidia at every turn, no matter how illogical. RussianSensation is probably your sidekick.
Dark_Archonis - Thursday, September 23, 2010 - link
Do you have anything meaningful to add?
medi01 - Thursday, September 23, 2010 - link
So you post BS and expect "meaningful" comments, eh?
Dark_Archonis - Friday, September 24, 2010 - link
Another poster with something meaningful to add ... or not. Go read TheJian's posts on page 4 and then come back and talk. I won't repost what he said.
The only BS being posted here is by people who have suspect motives, motives to make Nvidia look bad and make AMD look great.
RussianSensation - Wednesday, September 22, 2010 - link
Will, are we supposed to take your post seriously??
http://blogs.barrons.com/techtraderdaily/2010/09/2...
Pacific Crest analyst Michael McConnell this morning noted that due to conservative ordering practices for graphics chips since the end of July, checks find that channel inventory for Nvidia parts has dropped to 3-4 weeks, down from 10-12 weeks last quarter. McConnell says he thinks Nvidia is tracking in line guidance for 3%-5% sequential revenue growth for the fiscal third quarter ending in October.
There have been failures in the past from both camps: 5800 series, X1800XT, HD2900. Both firms have less than successful products once in a while and both firms trade market share. Unlike 5800 or HD2900 series though, Fermi is more like the X1800XT, where it's hotter and louder, but also faster (and this is especially important since it's faster in DX11 games at every price level).
Dark_Archonis - Thursday, September 23, 2010 - link
The truth is out there; not everyone is willing to admit or accept it. Good for posting that.
AnandThenMan - Thursday, September 23, 2010 - link
Yes, a global conspiracy exists against Nvidia, luckily some people are brave enough to risk everything and post "the truth" because it is indeed out there. You want truth? In a month AMD will begin refreshing their line-up. Nvidia will have nothing but slides and promises to counter. We all would like nothing more than for Nvidia and AMD to be trading blows and driving prices down. Sadly, it looks like the 6xxx cards will go uncontested at least through the holiday season.
Dark_Archonis - Friday, September 24, 2010 - link
Like I said to another user, go read TheJian's posts on page 4 and then come back and talk. You can gloat and boast about how great the 6xxx cards are going to be (which has yet to be proven), but the fact remains that AMD is still in debt big-time and they simply cannot afford to spend as much as Nvidia on future GPUs.
medi01 - Thursday, September 23, 2010 - link
There are many ways to reduce "channel inventory"; one of them is producing less. Anyway, whether they are selling stuff or not, the fact remains that they still control at least 2/3 of the market. Which is a shame, considering how bad their offerings were.
silverblue - Thursday, September 23, 2010 - link
It's important to note that the 5800 in question is NV30 and not Evergreen.
silverblue - Thursday, September 23, 2010 - link
...or Cypress. Just thought I'd post that before someone jumped on it.
ABR - Saturday, September 25, 2010 - link
You gotta love that exponential increase!