24 Comments

  • close - Tuesday, November 10, 2015 - link

    FP64 performance makes me cry...
  • tipoo - Tuesday, November 10, 2015 - link

    Didn't they make FP16 largely for deep learning? I'm no expert in this field, but it seems they were saying that not only do you not need double precision for it, you may need even less than single precision and can use half-precision FP16 for deep learning. That's what these cards are for, so complaining about FP64 being so cut down may be moot.
  • Loki726 - Tuesday, November 10, 2015 - link

    Yes, high precision is not a requirement for deep learning. Many models actually run in 8-bit fixed point.
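    [Editor's note] As a rough illustration of what "8-bit fixed point" means for a trained model, here is a minimal sketch of symmetric per-tensor weight quantization. The function names and the single-scale scheme are illustrative only; real frameworks often use per-channel scales and zero points.

    ```python
    import numpy as np

    def quantize_int8(weights):
        """Map float32 weights onto signed 8-bit values with one scale factor."""
        scale = np.max(np.abs(weights)) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover an approximation of the original float32 weights."""
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    w_hat = dequantize(q, s)
    # Reconstruction error is bounded by half a quantization step.
    assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
    ```

    The point of the exercise: each weight is stored in 1 byte instead of 4, and the worst-case rounding error is half a step of the scale, which many trained networks tolerate with little accuracy loss.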
  • p1esk - Tuesday, November 10, 2015 - link

    Which models run in 8-bit?
  • extide - Tuesday, November 10, 2015 - link

    Well, that's what the GK110 is for -- if you need FP64, then these cards (or any Maxwell cards) are not for you.
  • HighTech4US - Tuesday, November 10, 2015 - link

    Like a baby, you do cry a lot, often about nothing. This is the case again.

    As for FP64, you do not need it for deep learning (and it would actually hurt). In fact, FP16 is coming to accelerate it even more.
  • close - Tuesday, November 10, 2015 - link

    Do we know each other? Or do you just roam the interwebs throwing trash with one hand and copying what others say with the other?

    A quick search for your (very inaccurate) username revealed that both of the above are true.
  • 303rob - Thursday, November 12, 2015 - link

    @HighTech4US, for the record, babies never ever cry about nothing, you fucking imbecile. I joined purely to say that to you... such is the gargantuan size of your error, one can only assume you are as thoughtless as your comment.
  • nico_mach - Tuesday, November 10, 2015 - link

    Kind of irrelevant, but they have the best branding in the tech world: Titan, Tesla, Shield, CUDA. GeForce, G-SYNC and Grid aren't imaginative, but there are no stinkers in that list, which is shocking. It's as good as the best car makers, which is kind of amazing.
  • tipoo - Tuesday, November 10, 2015 - link

    G-Sync came the closest to bad. Imagine if they went N-Sync instead :P
  • tipoo - Tuesday, November 10, 2015 - link

    I like their internal codenames too. Scientists is a good theme. Kepler, Fermi, Maxwell, etc.
  • mapesdhs - Tuesday, November 10, 2015 - link

    Blimey, good point, I hadn't noticed that before... Not seeing the wood for the trees as it were. :D
  • Flunk - Tuesday, November 10, 2015 - link

    Branding and success aren't always connected. Some of the most successful companies have terrible branding. Marketing people have a hugely inflated opinion of their relevance.
  • chlamchowder - Tuesday, November 10, 2015 - link

    The K40 has a 1/3 FP64 ratio. So Maxwell can do a decent job at FP64. I wonder why that's not the case for any of the lower end Tesla cards though.

    I also wonder how FP64 works on these cards. Are there distinct FP64 units? Or are several FP32 units combined to handle FP64?
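    [Editor's note] For context on what a rate ratio means in throughput terms, here is a back-of-the-envelope sketch. It assumes one FMA (2 FLOPs) per CUDA core per clock, with core counts and clocks taken from the public spec sheets; treat the figures as approximate. (As replies below note, the K40 is a Kepler part.)

    ```python
    def peak_gflops(cuda_cores, clock_mhz, fp64_ratio):
        """Peak FP32 and FP64 throughput in GFLOPS, assuming one FMA
        (2 FLOPs) per core per clock, scaled by the FP64:FP32 rate ratio."""
        fp32 = 2 * cuda_cores * clock_mhz / 1000.0
        return fp32, fp32 * fp64_ratio

    # Tesla K40 (Kepler GK110, base clock): 2880 cores, 745 MHz, 1/3 FP64 rate
    k_fp32, k_fp64 = peak_gflops(2880, 745, 1 / 3)
    # -> roughly 4291 GFLOPS FP32, 1430 GFLOPS FP64

    # Tesla M40 (Maxwell GM200, boost clock): 3072 cores, 1114 MHz, 1/32 rate
    m_fp32, m_fp64 = peak_gflops(3072, 1114, 1 / 32)
    # -> roughly 6844 GFLOPS FP32, only ~214 GFLOPS FP64
    ```

    The sketch shows why the 1/32 ratio stings: despite the M40's much higher FP32 peak, its FP64 peak is under a sixth of the K40's.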
  • Vatharian - Tuesday, November 10, 2015 - link

    That's Kepler, not Maxwell.
  • Dusk_Star - Tuesday, November 10, 2015 - link

    The K40 is a Kepler card, though. As far as I know, all of the Maxwell cards, consumer or professional, are limited to 1/32 FP64 performance.
  • extide - Tuesday, November 10, 2015 - link

    That's a Kepler card.
  • ajp_anton - Tuesday, November 10, 2015 - link

    As others have pointed out, but I'll clarify by adding that the K in K40 stands for Kepler. Maxwell products start with M, Fermi started with F, etc. The chips also have the same letter codes, GM200 is Maxwell, GFxxx was Fermi.
  • frenchy_2001 - Tuesday, November 10, 2015 - link

    Actually, the letter denomination in the Quadro/Tesla products started with the Kepler K6000.
    Before that, they used plain model numbers:
    Quadro X000 cards were Fermi (the last generation with plain numbers), with the 6000 as the top model, the 5000 below it, then 4000, 2000... The corresponding consumer cards were the GTX 4xx and 5xx.
    Quadro X800 cards were the Tesla architecture, with the 5800 on top, 4800 below, 3800... The corresponding consumer cards were the GTX 2xx.
    Quadro X600 came before that, with the Quadro 5600/4600...

    When Kepler arrived, they added the letter and kept the round numbers:
    K6000/K5000... for Kepler, then M6000 for Maxwell, and so on. We can guess the next release should be P6000 for Pascal.
  • evilspoons - Tuesday, November 10, 2015 - link

    I know MSRP on these is in the "if you have to ask" territory, but that M4 sorta makes me want to use it as a PhysX card in a gaming PC... if it would actually do that (driver support).

    I wonder what kind of insane rig you could build for BOINC with some of these running Seti@Home or similar. Again, not cost-effective, but it'd be fun.
  • MrSpadge - Tuesday, November 10, 2015 - link

    M4 as a PhysX card: it doesn't provide any benefit over a regular $200/€200 GTX 960 4GB with its power target manually lowered to 50-75 W (easy to do in software).

    Regarding BOINC farms: people keep asking that question, but again, Teslas don't provide any benefit over regular GPUs. And people are already building rigs with 4 high-end cards, sometimes with up to 8 GPUs. That's really pushing the boundaries and can require a beta BIOS, because the manufacturer didn't expect anyone to actually do this.
  • CaffeineFreak - Tuesday, November 10, 2015 - link

    Not sure I see the difference here between an M40 and a Titan X, except the branding and, I guess, the warranty. The hardware specs are the same. But it looks like the Titan X has been considered the de facto standard for machine learning for a long time now. See for example:

    https://developer.nvidia.com/devbox

    and

    http://exxactcorp.com/index.php/solution/solu_list...

    The latter includes servers etc. with the Titan X. So what's the market for the M40?
  • frenchy_2001 - Tuesday, November 10, 2015 - link

    Servers.
    The Titan X is a good solution for workstations, but if you plan on stuffing 4 of them in a 1U rack and then using 8 such racks as accelerators for your CPU rack, a Tesla product is recommended.
    The chip is the same; the rest of the board, the tolerances, and the cooling are designed for a different purpose.
  • iwod - Wednesday, November 11, 2015 - link

    Still on 28nm? When the slide states transcoding, does it do software (GPU-assisted) transcoding or hardware-based? There will be a quality difference.
