73 Comments

  • kwrzesien - Monday, November 12, 2012 - link

    My first First! Okay, now back to work.
  • DigitalFreak - Monday, November 12, 2012 - link

    I wouldn't call riding the short-bus work...
  • kwrzesien - Monday, November 12, 2012 - link

    Hey, I wouldn't call reading news work either!
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Thank you insane amd fanboys, for months on end, you've been screaming that nVidia yields are horrible and they're late to the party, while nVidia itself has said yields are great, especially in the GPU gaming card space.
    Now the big amd fanboy lie is exposed.
    " Interestingly NVIDIA tells us that their yields are terrific – a statement backed up in their latest financial statement – so the problem NVIDIA is facing appears to be demand and allocation rather than manufacturing."
    (that's in the article above amd fanboys, the one you fainted...after raging... trying to read)
    Wow.
    I'm so glad this site is so fair, and as we see, as usual, what nVidia has been telling them is considered a lie for a very, very long time, until the proof that it was and is actually the exact truth and has been all along is slammed hard into the obstinate amd fan brain.
    So nVidia NEVER had an ongoing yield issue on the 600 series.
    That's what they said all along, and the liars, known as amd fanboys, just lied instead, even after they were informed over and over again that nVidia did not buy up a bunch of manufacturing time early.
    Thanks amd fanboys, months and months of your idiot lies make supporting amd that much harder, and now they are truly dying.
    Thank you for destroying competition.
  • mayankleoboy1 - Monday, November 12, 2012 - link

    Anand, I am a Nvidia fanboi.
    But still I was surprised by your AMD S10000 coverage. That merited a page in the _pipeline_ section.
    And a product from Nvidia gets a front seat, _3 page_ article ?

    Bias, or page hits ?
  • Ryan Smith - Monday, November 12, 2012 - link

    I had more to write about the K20, it's as simple as that. This is the first chance I've had to write in-depth about GK110, whereas S10000 is a dual-chip board using an existing GPU.
  • lx686x - Monday, November 12, 2012 - link

    Ohhh the W9000/8000 review that never got a promised part 2? And the S9000 and S7000 that were also thrown in the pipeline?
  • tviceman - Monday, November 12, 2012 - link

    Just like the GTX 650 that never got its own review. Get over it.
  • lx686x - Monday, November 12, 2012 - link

    It wasn't promised, get over it.
  • The Von Matrices - Tuesday, November 13, 2012 - link

    It was promised, but it was never published.

    http://www.anandtech.com/show/6289/nvidia-launches...

    "We’ll be looking at the GTX 650 in the coming week, at which point we should have an answer to that question."
  • CeriseCogburn - Thursday, November 29, 2012 - link

    LOL - wrong again amd fanboy
  • Bullwinkle J Moose - Tuesday, November 13, 2012 - link

    K20X
    384 bit bus
    6 GB VRAM
    7.1 Billion Transistors
    3.95 TFLOPS Single Precision
    1.31 TFLOPS Double Precision
    $3200

    Sounds impressive but can it play Crisis?
  • Bullwinkle J Moose - Wednesday, November 14, 2012 - link

    CRYSIS
  • CeriseCogburn - Thursday, November 29, 2012 - link

    You both meant Crysis Warhead, frost bench, an amd advantage favorite for a single amd win of late, not Crysis 2.
    LOL
    the bias is screaming out
  • eddman - Monday, November 12, 2012 - link

    "I am a Nvidia fanboi."

    You do know that fanboy means "A stupid and highly biased fan", right?
    The term you'd want to use is simply "Fan".
  • Denithor - Monday, November 12, 2012 - link

    I think that was his point exactly - he's a RABID nVidia fan but still finds the differential treatment of the two companies off balance.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    The truth is probably he's a rabid amd fanboy in disguise
  • Sabresiberian - Tuesday, November 13, 2012 - link

    The "stupid" part is YOUR interpretation, it's not what it means to most people.

    Biased, yes, stupid, no.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Biased because, while being so stupid, so obsessed, so doused in tampon and motrin lacking estrogenic emoting for the team, facts simply do not matter, and spin that Barney and Nancy and Harry would be proud of becomes all that spews forth.
    Stupid is definitely part of it, coupled with a huge liar factor.
    It may be that the washing of the brain coupled with the excessive female hormone problems is the base cause, but in every case except the flat out devious lying troll for amd, paid or unpaid, stupidity is a very large component.
  • dragonsqrrl - Monday, November 12, 2012 - link

    'An article about an AMD product got only 1 page of coverage while an article about an Nvidia product got 3, BIAS, FAVORITISM, FANBOI'

    Dude really, grow up. You're just about the last person who should be throwing around accusations of bias and fanboism. Do you really have nothing better to do than to troll and whine on Tom's and Anand, about how the whole world is conspiring against your benevolent AMD?
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Maybe he's the hacker that gives the -20 to every post on every gpu article at Tom's that is not 100% amd fanboy lie plus up based, or even hints at liking any nVidia card, ever.
    I'm so SICK of having to hit the show post crap at Tom's in order to read any comments that aren't radeon rager amd favor boys.
  • nutgirdle - Monday, November 12, 2012 - link

    I've heard through back channels that nVidia may be moving away from supporting OpenCL. Can you confirm any of this?
  • Ryan Smith - Monday, November 12, 2012 - link

    There's always going to be that nagging concern since NVIDIA has CUDA, but I haven't heard anything to substantiate that rumor.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    You mean you're worried and still sore over how pathetic AMD has been in its severely long-lacking support for OpenCL compared to nVidia's far-ahead, long-standing great job in actually supporting it and breaking all the new ground, while amd fanboys whine OpenCL is the way and amd PR propaganda liar freaks paid by amd squeal OpenCL should be the only way forward?
    Yeah, that's what you really meant, or rather should have said.
    Hey, cheer up, amd finally got the checkmark in GPU-Z for OpenCL support. Like YEARS after nVidia.
    Thanks, I love the attacks on nVidia, while amd is crap.
    It's one of the major reasons why amd is now nearly dead. The amd fanboys focus on hating the rich, prosperous, and profitable competition 100%, instead of directing their efforts at kicking the loser, amd, in the head or groin, or at all, let alone hard enough, for their failures to become apparent and self evident, so that they actually do something about them, fix them, and perform.
    Most famous Catalyst Maker quote: "I didn't know we had a problem."
    That's amd professionalism for you.
  • Casper42 - Wednesday, May 15, 2013 - link

    CeriseCogburn, how are you not banned from here?

    I'm an Nvidia fan, but 90% of what comes out of your keyboard is just hate and vitriol.

    Tone it down a little man!
  • Jorange - Monday, November 12, 2012 - link

    So the GK110 will form the basis of the GTX 680's replacement?
  • thebluephoenix - Monday, November 12, 2012 - link

    Yes.
  • suryad - Friday, November 16, 2012 - link

    That's pretty much what I needed to hear. My GeForce GTX 285 OC editions in SLI are getting a bit long in the tooth!
  • Ryan Smith - Monday, November 12, 2012 - link

    Frankly we have no idea. It is a GPU, so NVIDIA could absolutely do that if they wanted to. But right now the entire allocation is going to Tesla. And after that I would expect to see a Quadro GK110 card before we ever saw a consumer card.
  • mayankleoboy1 - Monday, November 12, 2012 - link

    Probably not. Why should it? With HPC, they can sell it at $4000, making at least $2000 in profit.

    With a consumer gaming card, they would have to sell it at $600 max, making $150-200 max.
  • Assimilator87 - Monday, November 12, 2012 - link

    nVidia already sells a $1k consumer graphics card, aka the GTX 690, so why can't they introduce one more?
  • HisDivineOrder - Monday, November 12, 2012 - link

    More to the point, they don't need to. The performance of the GK104 is more or less on par with AMD's best. If you don't need to lose money keeping up with the best your opponent has, then why should you lose money?

    Keep in mind, they're charging $500 (and have been charging $500) for a GPU clearly built to be in the $200-$300 segment when their chief opponent in the discrete GPU space can't go a month without either dropping the prices of their lines or offering up a new, even larger bundle. This is in spite of the fact that AMD has released not one but two spectacular performance driver updates and nVidia disappeared on the driver front for about six months.

    Yet even still nVidia charges more for less and makes money hand over fist. Yeah, I don't think nVidia even needs to release anything based on Big Daddy Kepler when Little Sister Kepler is easily handing AMD its butt.
  • RussianSensation - Monday, November 12, 2012 - link

    "Big Daddy Kepler when Little Sister Kepler is easily handing AMD its butt."

    Only in sales. Almost all major professional reviewers have handed the win to the HD 7970 GHz as of June 2012. With recent drivers, the HD 7970 GHz is beating the GTX 680 rather easily:

    http://www.legionhardware.com/articles_pages/his_7...

    Your statement that Little Kepler is handing AMD its butt is absurd when it's slower and costs more. If NV's loyal consumers want a slower and more expensive card, more power to them.

    Also, it's evident, based on how long it took NV to get volume production on K20/K20X, that they used GK104 because GK100/110 wasn't ready. It worked out well for them and hopefully we will get a very powerful GTX 780 card next generation based on GK110 (or perhaps some other variant).

    Still, your comment flies in the face of facts since GK104 was never built to be a $200-300 GPU, because NV couldn't possibly have launched a volume 7B chip when they are only now shipping thousands of them. Why would NV open pre-orders for K20 parts in Spring 2012 and let its key corporate customers wait until November 2012 to start getting their orders filled? This clearly doesn't add up with what you are saying.

    Secondly, you make it sound like price drops on AMD's part are a sign of desperation but you don't acknowledge that NV's cards have been overpriced since June 2012. That's a double standard alright. As a consumer, I welcome price drops from both camps. If NV drops prices, I like that. Funny how some people view price drops as some negative outcome for us consumers...
  • CeriseCogburn - Thursday, November 29, 2012 - link

    So legion has the 7970 vanilla winning nearly every benchmark.
    LOL
    I guess amd fanboys can pull out all the stops, or, as we know, they are as clueless as you are.
    http://www.hardocp.com/article/2012/10/08/his_rade...

    Oh look at that, the super expensive amd radeon ICE Q X2 GIGAHERTZ EDITION overclocked can't even beat a vanilla MSI 680.

    LOL

    Reality sucks for amd fanboys.
  • Gastec - Tuesday, November 13, 2012 - link

    Right now, in the middle of the night, an idea sprang into my abused brain. nVidia is like Apple. And their graphics cards are like the iPhones. There are always a few million people willing to buy their products no matter what, no matter what price they put up. Even if the rest of the world stopped buying nVidia and iPhones, there would always be some millions of Americans who will buy them, and their sons and their sons' sons and so on and so forth until the end of days. Heck, even one of my friends, when we were chatting about computer components, uttered the words: "So you are not a fan of nVidia? You know it has PhysX." In my mind I was like: "FAN? What the...I bought my ATI card because it was cheaper and consumed less power so I pay less money when the bloo...electricity bill comes." And after reading all your comments I understand now what you mean by "fanboy" or "fanboi" or whatever. Typical American BS.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    LOL - another amd fanboy idiot who needs help looking in the mirror.
  • Kevin G - Monday, November 12, 2012 - link

    A consumer card would make sense if yields are relatively poor. A die this massive has to have very few fully functional chips (in fact, K20X only has 14 of 15 SMX clusters enabled). I can see a consumer card with 10 or 12 SMX clusters being active, depending on the yields of successful K20 and K20X dies.
  • RussianSensation - Monday, November 12, 2012 - link

    It would also make sense if the yields are very good. If your yields are exceptional, you can manufacture enough GK110 die to satisfy both the corporate and consumer needs. Right now the demand for GK110 is outstripping supply. Based on what NV has said, their yields are very good. The main issue is wafer supply. I think we could reasonably see a GK110 consumer card next year. Maybe they will make a lean gaming card though as a lot of features in GK110 won't be used by gamers.
  • Dribble - Tuesday, November 13, 2012 - link

    Hope not - much better to give us another GK104-style architecture, just with a higher core count.
  • wiyosaya - Monday, November 12, 2012 - link

    IMHO, at these prices, I won't be buying one, nor do I think that the average enthusiast is going to be interested in paying perhaps one and a half to three times the price of a good performance PC for a single Tesla card. Though nVidia will probably make hoards of money from supercomputing centers, I think they are doing this while forsaking the enthusiast market.

    The 600 series seriously cripples double-precision floating point capabilities, making a Tesla imperative for anyone needing real DP performance; however, I won't be buying one. Now if one of the 600 series had DP performance on par with or better than the 500 series, I would have bought one rather than buying a 580.

    I don't game much, however, I do run several BOINC projects, and at least one of those projects requires DP support. For that reason, I chose a 580 rather than a 680.
  • DanNeely - Monday, November 12, 2012 - link

    The Tesla (and Quadro) cards have always been much more expensive than their consumer equivalents. The Fermi-generation M2090 and M2070Q were priced at the same several-thousand-dollar price point as the K20 family, but the gaming-oriented 570/580 were at the normal several-hundred-dollar prices you'd expect for a high-end GPU.
  • wiyosaya - Tuesday, November 13, 2012 - link

    Yes, I understand that; however, IMHO, the performance differences are not significant enough to justify the huge price difference unless you work in very high end modeling or simulation.

    To me, with this generation of chips, this changes. I paid close attention to 680 reviews, and DP performance on 680-based cards is below that of the 580 - not, of course, that it matters to the average gamer. However, I highly doubt that the chips in these Teslas could not easily be adapted for use as graphics cards.

    While it is nVidia's right to sell these into any market they want, as I see it, the only market for these cards is the HPC market, and that is my point. It will be interesting to see if nVidia continues to be able to make a profit on these cards now that they are targeted only at the high-end market. With the extreme margins on these cards, I would be surprised if they are unable to make a good profit on them.

    In other words, do they sell X amount at consumer prices, or do they sell Y amount at professional prices, and which target market would be the better one for them in terms of profits? IMHO, X is likely the market where they will sell many times the number of chips that they do in the Y market, but, for example, they can only charge 5X for the Y card. If they sell ten times the chips in the X market, they will have lost profits by targeting the Y market with these chips.

    Also, nVidia is writing their own ticket on these. They are making the market. They know that they have a product that every supercomputing center will have on its must buy list. I doubt that they are dumb.

    What I am saying here is that nVidia could sell these for almost any price they choose to any market. If nVidia wanted to, they could sell this into the home market at any price. It is nVidia that is making the choice of the price point. By selling the 680 at high-end enthusiast prices, they artificially push the price points of the market.

    Each time a new card comes out, we expect it to be more expensive than the last generation, and, therefore, consumers perceive that as good reason to pay more for the card. This happens in the gaming market, too. It does not matter to the average gamer that the 580 outperforms the 680 in DP operations; what matters is that games run faster. Thus, the 680 becomes worth it to the gamer and the price of the hardware gets artificially pushed higher - as I see it.

    IMHO, the problem with this is that nVidia may paint themselves into an elite market. Many companies have tried this, notably Compaq and currently Apple. Compaq failed, and Apple, depending on what analysts you listen to, is losing its creative edge - and with that may come the loss of its ability to charge high prices for its products. While nVidia may not fall into the "niche" market trap, as I see it, it is a pattern that looms on the horizon, and nVidia may fall into that trap if they are not careful.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Yep, amd is dying, rumors are it's going to be bought up after a chapter bankruptcy, restructured, saved from permadeath, and of course, it's nVidia that is in danger of killing itself... LOL
    Boinc is that insane sound in your head.
    NVidia professionals do not hear that sound, they are not insane.
  • shompa - Monday, November 12, 2012 - link

    These are not "home computer" cards. These are cards for high-performance "supercomputer" calculations, and the prices are low for this market.

    The unique thing about this year's launch is that Nvidia has always sold consumer cards first and supercomputer cards later. This time it's the other way around.

    Nvidia uses the supercomputer cards to more or less subsidise its "home PC" graphics cards. Usually it's the same card but with different drivers.

    Home 500 dollars
    Workstation 1000-1500 dollars
    Supercomputing 3000+ dollars

    Three different prices for the same card.

    But 7 billion transistors on 28nm will be expensive for home computing. It costs over 100% more to manufacture these GPUs than the Nvidia 680.

    7 BILLION. Remember that the first Pentium was on the order of 1 MILLION transistors? This is roughly 7,000 times that.
  • kwrzesien - Monday, November 12, 2012 - link

    All true.

    But I think what has people complaining is that this time around Nvidia isn't going to release this "big" chip to the Home market at all. They signaled this pretty clearly by putting their "middle" chip into the 680. Unless they add a new top-level part name like a 695 or something they have excluded this part from the home graphics naming scheme. Plus since it is heavily FP64 biased it may not perform well for a card that would have to be sold for ~$1000. (Remember they are already getting $500 for their middle-size chip!)

    Record profits - that pretty much sums it up.
  • DanNeely - Monday, November 12, 2012 - link

    AFAIK that was necessity speaking. The GK100 had some (unspecified) problems, forcing them to put the GK104 in both the mid and upper range of their product line. When the rest of the GK11x-series chips show up and nVidia launches the 7xx series, I expect to see GK110s at the top as usual. Having seen nVidia's midrange chip trade blows with their top-end one, AMD is unlikely to be resting on its laurels for their 8xxx series.
  • RussianSensation - Monday, November 12, 2012 - link

    Great to see someone who understood the situation NV was in. Also, people think NV is a charity or something. When they were selling the 2x 294mm^2 GTX 690 for $1000, we can approximate on a per-wafer cost basis that it would have been too expensive to launch a 550-600mm^2 GK100/110 early in the year and maintain NV's expected profit margins. They also faced wafer shortages, which explains why they re-allocated mobile Kepler GPUs and had to delay the under-$300 desktop Kepler allocation by 6+ months to fulfill 300+ notebook design wins. Sure, it's still the mid-range chip in the Kepler family, but NV had to use GK104 as the flagship.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    kwrzesien, another amd fanboy idiot loser with a tinfoil brain and a rumor-mongered, brainwashed gourd.
    Everything you said is exactly wrong.
    Perhaps an OWS gathering will help your emotional turmoil; maybe you can protest in front of the nVidia campus.
    Good luck, wear red.
  • bebimbap - Monday, November 12, 2012 - link

    Each "part" being made with the "same" chip is more expensive for a reason.

    For example Hard drives made by the same manufacturer have different price points for enterprise, small business, and home user. I remember an Intel server rep said to use parts that are designed for their workload so enterprise "should" use an enterprise drive and so forth because of costs. And he added further that with extensive testing the bearings used in home user drives will force out their lubricant fluid causing the drive to spin slower and give read/write errors if used in certain enterprise scenarios, but if you let the drive sit on a shelf after it has "failed" it starts working perfectly again because the fluids returned to where they need to be. Enterprise drives also tend to have 1 or 2 orders of magnitude better bit read error rate than consumer drives too.

    In the same way i'm sure the tesla, quadro, and gtx all have different firmwares, different accepted error rates, different loads they are tested for, and different binning. So though you say "the same card" they are different.

    And home computing has changed and gone in a different direction. No longer are we gaming in a room that needs a separate AC unit because of the 1500W of heat coming from the computer. We have moved from using 130W CPUs to only 78W. Single-GPU cards are no longer using 350W but only 170W. So we went from 600-1500W systems using ~80% efficient PSUs to only about 300-600W with 90%+ efficient PSUs, and that is just under high load. If we were to compare idle power, instead of using 1/2 we are only using 1/10. We no longer need a GK110-based GPU, and it might be said that it will not make economic sense for the home user.

    GK104 is good enough.
  • EJ257 - Monday, November 12, 2012 - link

    The consumer model of this with the fully operational die will be in the $1000 range. 7 billion transistors is a really big chip even on a 28nm process.
  • CeriseCogburn - Thursday, November 29, 2012 - link

    We can look forward to a thousand morons screaming it's not twice as fast as the last gen, again.
    The pea brained proletariat never disappoints.
    They always have memorized 15 or 20 lies, and have hold of half a fact to really hammer their points home.
    I can hardly wait.
  • RussianSensation - Monday, November 12, 2012 - link

    For double-precision BOINC projects, AMD cards have been dominating NV since at least the HD 4870. If you want a cheap card for distributed computing projects that use DP, the HD 7970 series offers very good value. Of course it depends what project you are passionate about/want to help with. :)
  • Denithor - Monday, November 12, 2012 - link

    I'm guessing that should be 1.17 TFLOPS, not the 1.17 GFLOPS stated.

    :)
  • Ryan Smith - Monday, November 12, 2012 - link

    What's a few orders of magnitude between friends?

    Thanks for the heads up on that. Fixed.
  • mayankleoboy1 - Monday, November 12, 2012 - link

    Is a GK110 derivative without the DP and ECC, but with 13/14 SMXes, possible?
  • N4g4rok - Monday, November 12, 2012 - link

    What do you do with one of these monsters?
  • menting - Monday, November 12, 2012 - link

    attain world peace (or destruction)
  • HighTech4US - Monday, November 12, 2012 - link

    > What do you do with one of these monsters?

    Read this article: Inside the Titan Supercomputer: 299K AMD x86 Cores and 18.6K NVIDIA GPUs

    http://www.anandtech.com/show/6421/inside-the-tita...

    And watch the videos in it (the applications are impressive):

    If you want to get time on Titan you write a proposal through a program called Incite. In the proposal you ask to use either Titan or the supercomputer at Argonne National Lab (or both). You also outline the problem you're trying to solve and why it's important. Researchers have to describe their process and algorithms as well as their readiness to use such a monster machine.

    http://www.anandtech.com/show/6421/inside-the-tita...
  • RussianSensation - Monday, November 12, 2012 - link

    Seismic processing
    CFD, CAE
    Financial computing
    Computational chemistry and Physics
    Data analytics
    Satellite imaging
    Weather modeling
  • Holly - Tuesday, November 13, 2012 - link

    I could very much imagine using it to calculate distances and collisions between massive numbers of players in MMO games; the brute force, data uniformity and parallel nature of the task would make it an ideal candidate - something like the sketch below.
    But well, you wouldn't need too many of those on your average MMO cluster.
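    Purely as an illustration of that brute-force idea, a minimal CUDA sketch: one thread per pair of positions, with the float4 packing (x, y, z, radius) and the array size assumed only for the example, not taken from any real MMO server.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Brute-force pairwise collision test: one thread per (i, j) pair.
        // Positions are packed as float4 (x, y, z, radius).
        __global__ void collide(const float4* pos, int n, int* hits)
        {
            int i = blockIdx.y * blockDim.y + threadIdx.y;
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n || j >= n || i >= j) return;   // visit each unordered pair once

            float4 a = pos[i], b = pos[j];
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            float r  = a.w + b.w;
            if (dx * dx + dy * dy + dz * dz < r * r)
                atomicAdd(hits, 1);                   // count overlapping pairs
        }

        int main()
        {
            const int n = 4096;                       // assumed player count
            float4* d_pos; int* d_hits;
            cudaMalloc(&d_pos, n * sizeof(float4));
            cudaMalloc(&d_hits, sizeof(int));
            cudaMemset(d_pos, 0, n * sizeof(float4)); // placeholder positions
            cudaMemset(d_hits, 0, sizeof(int));

            dim3 block(16, 16);
            dim3 grid((n + 15) / 16, (n + 15) / 16);
            collide<<<grid, block>>>(d_pos, n, d_hits);

            int hits = 0;
            cudaMemcpy(&hits, d_hits, sizeof(int), cudaMemcpyDeviceToHost);
            printf("overlapping pairs: %d\n", hits);
            cudaFree(d_pos); cudaFree(d_hits);
            return 0;
        }

    The O(n^2) pair grid is exactly the kind of uniform, data-parallel work a GPU eats up, though a real server would cull with spatial partitioning first.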
  • dcollins - Monday, November 12, 2012 - link

    It should be noted that recursive algorithms are not always more difficult to understand than their iterative counterparts. For example, the quicksort algorithm used in NVIDIA's demos is extremely simple to implement recursively but somewhat tricky to get right with loops.

    The ability to directly spawn sub-kernels has applications beyond supporting recursive GPU programming. I could see how the ability to create your own workers would simplify some problems and leave the CPU free to do other work. Imagine an image processing problem where a GPU kernel could do the work of sharding an image and distributing it to local workers instead of relying on a (comparatively) distant CPU to perform that task (roughly the sketch at the end of this comment).

    In the end, this gives more flexibility to GPU compute programs which will eventually allow them to solve more problems, more efficiently.
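    A hedged sketch of that image-sharding idea using CUDA dynamic parallelism (the GK110 feature discussed in the article); the kernel names, tile size, and the trivial per-pixel work are all invented for illustration.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Hypothetical child kernel: process one horizontal tile of the image.
        __global__ void process_tile(float* img, int width, int y0, int rows)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = y0 + blockIdx.y * blockDim.y + threadIdx.y;
            if (x < width && y < y0 + rows)
                img[y * width + x] *= 2.0f;           // stand-in for real filter work
        }

        // Parent kernel: one thread per tile; each thread launches its own child
        // grid directly from the device, no round trip to the CPU.
        __global__ void shard_image(float* img, int width, int height, int tile_rows)
        {
            int tile = blockIdx.x * blockDim.x + threadIdx.x;
            int y0 = tile * tile_rows;
            if (y0 >= height) return;

            int rows = min(tile_rows, height - y0);
            dim3 block(16, 16);
            dim3 grid((width + 15) / 16, (rows + 15) / 16);
            process_tile<<<grid, block>>>(img, width, y0, rows);
        }

        int main()
        {
            const int w = 1024, h = 1024, tile_rows = 128, tiles = h / tile_rows;
            float* d_img;
            cudaMalloc(&d_img, w * h * sizeof(float));
            cudaMemset(d_img, 0, w * h * sizeof(float));

            shard_image<<<1, tiles>>>(d_img, w, h, tile_rows);
            cudaDeviceSynchronize();
            printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
            cudaFree(d_img);
            return 0;
        }

    Device-side launches need compute capability 3.5 and relocatable device code, i.e. something like nvcc -arch=sm_35 -rdc=true with the cudadevrt library, which is precisely what GK110 brings over GK104.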
  • mayankleoboy1 - Monday, November 12, 2012 - link

    We need compilers that can run on GPGPUs to massively speed up compilation times.
  • Loki726 - Tuesday, November 13, 2012 - link

    I'm working on this. It actually isn't as hard as it might seem at first glance.

    The amount of parallelism in many compiler optimizations scales with program size, and the simplest algorithms basically boil down to for(all instructions/functions/etc) { do something; } (a toy sketch of that shape is at the end of this comment). Not everything is so simple though, and it still isn't clear whether there are parallel versions of some algorithms that are as efficient as their sequential implementations (value-numbering is a good example).

    So far the following work very well on a massively parallel processor:
    - instruction selection
    - dataflow analysis (live sets, reaching defs)
    - control flow analysis
    - dominance analysis
    - static single assignment conversion
    - linear scan register allocation
    - strength reduction, instruction simplification
    - constant propagation (local)
    - control flow simplification

    These are a bit harder and need more work:
    - register allocation (general)
    - instruction scheduling
    - instruction subgraph isomorphism (more general instruction selection)
    - subexpression elimination/value numbering
    - loop analysis
    - alias analysis
    - constant propagation (global)
    - others

    Some of these might end up being easy, but I just haven't gotten to them yet.

    The language frontend would also require a lot of work. It has been shown that it is possible to parallelize parsing, but writing a parallel parser for a language like C++ would be a very challenging software design project. It would probably make more sense to build a parallel parser generator for a framework like Bison or ANTLR than to do it by hand.
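    For the "for(all instructions) { do something; }" shape mentioned above, here is a toy CUDA sketch of a purely local strength-reduction pass over a flattened instruction array; the Instr struct and opcodes are invented for the example and are not from any real compiler IR.

        #include <cstdio>
        #include <cuda_runtime.h>

        enum Op { ADD, MUL, SHL };

        // Toy three-address instruction: dst = a <op> b, where b may be a constant.
        struct Instr { Op op; int dst, a, b; bool b_is_const; };

        // One thread per instruction: the rewrites are purely local, with no
        // cross-instruction dependences, so the pass is embarrassingly parallel.
        __global__ void strength_reduce(Instr* code, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            Instr ins = code[i];
            // x * 2^k  ->  x << k
            if (ins.op == MUL && ins.b_is_const && ins.b > 0 &&
                (ins.b & (ins.b - 1)) == 0) {
                ins.op = SHL;
                ins.b  = __ffs(ins.b) - 1;    // log2 of the power-of-two constant
                code[i] = ins;
            }
        }

        int main()
        {
            const int n = 3;
            Instr host[n] = { {MUL, 0, 1, 8, true},   // r0 = r1 * 8  ->  r0 = r1 << 3
                              {ADD, 2, 0, 3, false},
                              {MUL, 4, 2, 3, false} };
            Instr* dev;
            cudaMalloc(&dev, n * sizeof(Instr));
            cudaMemcpy(dev, host, n * sizeof(Instr), cudaMemcpyHostToDevice);

            strength_reduce<<<1, 64>>>(dev, n);
            cudaMemcpy(host, dev, n * sizeof(Instr), cudaMemcpyDeviceToHost);
            printf("instr 0 op after pass: %d (2 = SHL)\n", host[0].op);
            cudaFree(dev);
            return 0;
        }

    Passes from the second list above, with cross-instruction or cross-block dependences, need reductions, scans, or fixed-point iteration rather than this one-thread-per-instruction shape.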
  • eachus - Wednesday, November 14, 2012 - link

    I've always assumed that the way to do compiles on a GPU or other heavily parallel CPU is to do the parsing in a single sequential process, then spread the semantic analysis and code generation over as many threads as you can.

    I may be biased in this since I've done a lot of work with Ada, where adding (or changing) a 10 line file can cause hundreds of re-analysis/code-generation tasks. The same thing can happen in any object-oriented language. A change to a class library, even just adding another entry point, can cause all units that depend on the class to be recompiled to some extent. In Ada you can often bypass the parser, but there are gotchas when the new function has the same (simple) name as an existing function, but a different profile.

    Anyway, most Ada compilers, including the GNAT front-end for GCC will use as many CPU cores as are available. However, I don't know of any compiler yet that uses GPUs.
  • Loki726 - Thursday, November 15, 2012 - link

    The language frontend (semantic analysis and IR generation, not just parsing) for C++ is generally harder than for languages that have concepts of imports/modules or interfaces, because you typically need to parse all included files for every object file. This is especially true for big code bases (with template libraries).

    GPUs need thousands of threads' worth of parallelism rather than just one dimension for each file/object, so it is necessary to extract parallelism at a much finer granularity (e.g. for each instruction/value).

    A major part of the reason why GPU compilers don't exist is that compilers are typically large/complex codebases that don't map well onto parallel models like OpenMP/OpenACC, etc. The compilers for many languages like OpenCL are also immature enough that writing and debugging a large codebase like this would be intractable.

    CUDA is about the only language right now that is stable enough and has enough language features (dynamic memory allocation, object-oriented programming, templates) to try. I'm writing all of the code in C++ right now and hoping that CUDA will eventually cease to be a restricted subset of C++ and just become C++ (all it is missing is exceptions, the standard library, and some minor features that other compilers are also lax about supporting).
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Don't let the AMD fans see that about OpenCL sucking so badly and being immature.
    It's their holy grail of hatred against nVidia/Cuda.
    You might want to hire some protection.
    I can only say it's no surprise to me, as the amd fanboys are idiots 100% of the time.
    Now as amd crashes completely, gets stuffed in a bankruptcy, gets dismantled and bought up as its engineers are even now being pillaged and fired, the sorry amd fanboy has "no drivers" to look forward to.
    I sure hope their 3GB ram 79xx "futureproof investment" they wailed and moaned about being the way to go for months on end will work with the new games...no new drivers... 3rd tier loser engineers, sparse crew, no donuts and coffee...
    *snickering madly*
    The last laugh is coming, justice will be served !
    I'd just like to thank all the radeon amd ragers here for all the years of lies and spinning and amd tokus kissing, the giant suction cup that is your pieholes writ large will soon be able to draw a fresh breath of air, you'll need it to feed all those delicious tears.
    ROFL
    I think I'll write the second edition of "The Joys of Living".
  • inaphasia - Tuesday, November 13, 2012 - link

    Everybody seems to be fixated on the fact that the K20 doesn't have ALL its SMXes enabled and assuming this is the result of binning/poor yields, whatever...

    AFAICT the question everybody should be asking and the one I'd love to know the answer to is:
    Why does the TFLOP/W ratio actually IMPROVE when nVidia does that?

    Watt for watt, the 660 Ti is slightly better at compute than the 680 and far better than the 670, and we all know they are based on the "same" GK104 chip. Why? How?

    My theory is that even if TSMC's output of the GK110 was golden, we'd still be looking at disabled SMXes. Of course, since it's just a theory, it could very well be wrong.
  • frenchy_2001 - Tuesday, November 13, 2012 - link

    No, you are probably right.

    Products are more than their raw capabilities. When GF100 came out, Nvidia placed a 480-core version (out of 512) in the consumer market (at 700MHz+) and a 448-core version at 575MHz in the Quadro 6000. Power consumption, reliability and longevity were all part of that decision.

    This is part of what was highlighted in the article as a difference between K20X and K20: the 235W vs 225W makes a big difference if your chassis is designed for the latter.
  • Harry Lloyd - Tuesday, November 13, 2012 - link

    Can you actually play games with these cards (drivers)?

    I reckon some enthusiasts would pick this up.
  • Ryan Smith - Wednesday, November 14, 2012 - link

    Unfortunately not. If nothing else, because there aren't any display outputs.
  • CyberAngel - Thursday, November 22, 2012 - link

    some could still pay 999 USD for a consumer product
    naturally a dual-GPU card would then cost 1999 USD
    SLI a couple of those... it WILL run Crysis!!
  • wilicc - Wednesday, December 12, 2012 - link

    I have some benchmarks on the Tesla K20, if someone is interested. Nothing surprising on the single precision front.
    http://wili.cc/blog/gpgpu-faceoff.html
  • Anitha.kale - Tuesday, March 12, 2013 - link

    Please provide the dealer's complete address and mobile number for NVIDIA TESLA K20X in PUNE (INDIA). Regards
