33 Comments

  • TristanSDX - Tuesday, January 5, 2016 - link

    Using a GPU for driving cars is probably a bad idea. They should build a custom ASIC for this task if they want to be serious.
  • widL - Tuesday, January 5, 2016 - link

    If a GPU has enough performance, why spend enormous amounts of money and time to design an ASIC, when you don't have critical power constraints?
  • jjj - Tuesday, January 5, 2016 - link

    For now everybody uses GPUs for deep learning, more or less. There are alternatives, but nothing has gained enough traction. In 10 years we'll see, but for now this is it.
  • Mondozai - Tuesday, January 5, 2016 - link

    You are confused. He's talking about GPUs in cars, not deep learning in general.

    Right now Mobileye owns the market. Tesla uses them, for instance, and they use fixed-function hardware.
  • nathanddrews - Tuesday, January 5, 2016 - link

    NVIDIA made a pretty strong case for GPUs in cars at last year's CES, specifically their ability not only to identify known objects via cloud-based deep learning and visual computing, but also to use that power to identify unknown objects on the fly... er, drive. jjj is correct that after several years of use IRL, a single low-power ASIC could probably do everything necessary. For now, sloughing off GPU silicon is a much cheaper option.
  • tuxRoller - Wednesday, September 28, 2016 - link

    Everybody? Yes, except those who can both afford to do otherwise and actually need the added efficiency.
    For instance, Baidu, Microsoft and Google.
    For now they're using FPGAs, but eventually, as you've said, they'll move to an ASIC, or at least a "memory-centric" processing platform.
  • psychobriggsy - Tuesday, January 5, 2016 - link

    Once the market has grown, they will do dedicated ASICs.

    Right now we have two 3 TFLOPS Pascal GPUs and two 1 TFLOPS ARM SoCs acting as controllers and post-processors.

    Why 3 TFLOPS? Because of the 24 DLTOPs figure. Each 32-bit Pascal shader can also run as two 16-bit FP ops, and each 16-bit FP op, with a little bit of extra logic, can perform 4 DLTOPs (I suspect). However, given that the 16-bit FP op is an FMA, to achieve this we need a couple of additional operations to get those other 2 DLTOPs. Probably some saturation/trigger/decay/thingy for neural network simulation.
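    Spelling that guess out as arithmetic (a sketch only; the per-chip TFLOPS split and the ops-per-FLOP factor are the speculation above, not NVIDIA's published breakdown):

    ```python
    # Napkin math behind the Drive PX 2's quoted 8 TFLOPS / 24 DLTOPs,
    # following the speculative breakdown in the comment above.

    pascal_gpus, tflops_per_gpu = 2, 3.0   # assumed FP32 rate per discrete Pascal GPU
    tegra_socs, tflops_per_soc = 2, 1.0    # assumed FP32 rate per Tegra SoC

    total_fp32 = pascal_gpus * tflops_per_gpu + tegra_socs * tflops_per_soc
    print(total_fp32)   # 8.0 -> matches the quoted 8 TFLOPS

    # If only the GPUs' 6 TFLOPS count toward deep learning ops, then
    # 24 DLTOPs / 6 TFLOPS = 4 DL ops per FP32 FLOP: 2x from running each
    # 32-bit shader as two FP16 ops, times 2x from the extra per-FMA
    # operations speculated above.
    dl_ops_per_fp32_flop = 24 / (pascal_gpus * tflops_per_gpu)
    print(dl_ops_per_fp32_flop)   # 4.0
    ```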
  • jptech7 - Tuesday, January 5, 2016 - link

    Actually, there are people out there designing specific ASICs for neural networks, so that may eventually happen. See IBM's TrueNorth chip as a glimpse of where things are heading.

    IBM TrueNorth: http://www.research.ibm.com/articles/brain-chip.sh...
  • MrSpadge - Tuesday, January 5, 2016 - link

    It's good that those huge SUVs are getting the drive assist first, as there are so many people out there who can't handle them yet drive them anyway for some strange reason. [only being half-serious here]
  • jjj - Tuesday, January 5, 2016 - link

    That GPU seems too big not to have HBM; it looks similar enough in size to GM204, and if it is 400mm2 (+/- 50mm2) it's not gonna be cheap, so why GDDR? Guess clocks are low here too, since perf would be too weak otherwise.
    Early access in Q2 and general availability in Q4 should mean that we see some Pascal at Computex, hopefully.
  • the1mike1man - Tuesday, January 5, 2016 - link

    With a bit of image manipulation and napkin mathematics, it looks like the GPU is about 490mm2. This assumes those GDDR5 chips are the usual 12x14mm size, and I see no reason why they wouldn't be, unless of course they're GDDR5X chips (napkin math sketched below).
    I think that 490 number seems pretty big, but then again Maxwell had some pretty big GPUs, with GM200 being 600mm2. At the moment, however, the floating point performance just doesn't line up. If the 8 TFLOPS number quoted by NVIDIA refers to the whole board, that's pretty poor: in the best case it would put each Pascal GPU at 4 TFLOPS, which seems way too low for a die that size. On the other hand, if 8 TFLOPS refers to one Pascal GPU, it seems way too high!
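    For anyone who wants to redo it, the napkin math goes roughly like this (the pixel values are made-up placeholders standing in for whatever you measure off the board photo):

    ```python
    # Estimate die area from a photo, using the GDDR5 package
    # (assumed 12mm x 14mm) as the scale reference. Pixel measurements
    # below are hypothetical, not taken from the actual image.

    GDDR5_W_MM, GDDR5_H_MM = 12.0, 14.0

    gddr5_w_px, gddr5_h_px = 60, 70   # measured GDDR5 package size, in pixels
    die_w_px, die_h_px = 110, 111     # measured GPU die size, in pixels

    # Average the horizontal and vertical scale factors to reduce error.
    mm_per_px = (GDDR5_W_MM / gddr5_w_px + GDDR5_H_MM / gddr5_h_px) / 2
    die_area_mm2 = (die_w_px * mm_per_px) * (die_h_px * mm_per_px)
    print(f"{die_area_mm2:.0f} mm^2")   # ~488 with these placeholder numbers
    ```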
  • jjj - Tuesday, January 5, 2016 - link

    Pretty sure it's not that big, but other than that, yes, it is a bit weird.
    The thing is that a 16FF wafer costs a lot more than 28nm, so a 400mm2 chip wouldn't be just $500 (like the 980, which is about 400mm2 on 28nm), so why the hell no HBM? It doesn't make sense to do 2 designs.
    Sure, it could be that the actual chip is much smaller than the cover, but they don't usually do that.
    Perf wise, they likely keep clocks low to increase efficiency, and in desktop the chip will have up to 50% higher TDP - some 140-150W vs some 100W here. If this chip ends up being a lot smaller and cheaper than it appears here, it would be nice enough.
  • scottjames_12 - Tuesday, January 5, 2016 - link

    Those MXM modules look completely identical to existing GTX980M modules. I wonder if they are just using them in place of actual Pascal modules to keep everything secret, or perhaps they don't have any actual Pascal MXM modules yet?
  • scottjames_12 - Tuesday, January 5, 2016 - link

    In fact, if I'm reading the etching on the GPU die correctly, 1503A1 would suggest those GPUs were built in the 3rd week of 2015, which I would have thought makes it pretty unlikely they are Pascal GPUs.
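    (That reading assumes the common YYWW date-code convention, i.e. two-digit year then two-digit week, which semiconductor packages often use; whether NVIDIA's die etching follows it is a guess.)

    ```python
    # Decode a "YYWW"-style date code such as the "1503" in "1503A1".
    # Assumes the common two-digit-year / two-digit-week convention;
    # that NVIDIA's die markings follow it is an assumption.

    def decode_date_code(code: str) -> str:
        year = 2000 + int(code[:2])   # "15" -> 2015
        week = int(code[2:4])         # "03" -> week 3
        return f"week {week} of {year}"

    print(decode_date_code("1503A1"))   # week 3 of 2015
    ```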
  • psychobriggsy - Tuesday, January 5, 2016 - link

    Frickin' woodscrews again, isn't it. :p

    But yeah, looks like it's a placeholder for the presentation.
  • superunknown98 - Tuesday, January 5, 2016 - link

    Anyone else really wary of having this liquid cooled? The thought of adding another radiator, tucked deep under the dashboard, makes me shudder. Just wait until it leaks, corrodes or clogs with dust.
  • xthetenth - Tuesday, January 5, 2016 - link

    I'd be surprised if most implementations weren't hooked into the already existing loop.
  • speely - Monday, January 11, 2016 - link

    I would be surprised. You'd essentially be using ~100°C engine coolant to try to cool GPUs and CPUs. That just seems like a bad idea to me.
  • tipoo - Thursday, August 4, 2016 - link

    Most good electric cars are already liquid cooled, so no.
  • Adski - Tuesday, January 5, 2016 - link

    "the company is targeting electric vehicles with this"
    Really? From a quick search online, it seems electric cars such as the Model S and Leaf use around 300Wh/mile, and if this uses 250W then you're nearly going to halve your electric car's range. If integrated well, then in winter at least that heat can be put to good use, but otherwise I think electric car manufacturers will stick with Mobileye, which (from another quick Google) has a new version that uses multiple MIPS CPUs and draws 3W.
    There's no reference for what the expected power utilisation of this PX 2 will be, but even if the vehicle is stationary I imagine it will still be doing a lot of processing.
    Long-time reader, but just signed up to post this as no one else seems to have picked up on this point.
  • Yojimbo - Tuesday, January 5, 2016 - link

    The Drive PX 2 is an electronic device that fits in a lunchbox. An electric car is a device weighing thousands of pounds doing mechanical work. These facts should have tipped you off that the presumption that they have about the same average power draw over normal operation just doesn't make sense. Adding a 250 W draw to something with 300 Wh/mile energy usage is not going to halve the range of the car unless 1) one is driving the car at less than 1 MPH and 2) the 300 Wh/mile number would still be relevant at that speed. If the car is using 300 Wh/mile traveling at 30 MPH, then total energy usage over one hour is 9000 Wh. The Drive PX 2 will add a further 250 Wh of energy demand over that hour, so the total energy usage will be 9250 Wh. Dividing this by 30, we get that the car with Drive PX 2 uses 308.3 Wh/mile. As you can see, it's a very small decrease in energy efficiency (sketched in code below).

    Besides, why do you think electric car manufacturers are supremely concerned about the driving range of their training vehicles?
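    The same arithmetic as a quick sketch (30 MPH and the 300 Wh/mile baseline are the example figures from the comment above):

    ```python
    # Effect of a constant 250 W accessory load on EV efficiency,
    # using the example figures from the comment above.

    wh_per_mile = 300.0    # baseline consumption (Model S / Leaf ballpark)
    speed_mph = 30.0       # assumed average speed
    extra_w = 250.0        # Drive PX 2's assumed draw

    wh_per_hour = wh_per_mile * speed_mph + extra_w    # 9000 + 250 = 9250 Wh
    new_wh_per_mile = wh_per_hour / speed_mph          # ~308.3 Wh/mile
    range_penalty = 1 - wh_per_mile / new_wh_per_mile  # ~2.7%, nowhere near half
    print(f"{new_wh_per_mile:.1f} Wh/mile, {range_penalty:.1%} range penalty")
    ```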
  • Adski - Tuesday, January 5, 2016 - link

    Yes indeed, it's a bit embarrassing really, considering I run an off-grid solar system at home for routers, a server and other stuff. Thanks for the informative response; I clearly should have thought a bit more before posting! Losing a mile of range per hour of driving isn't so bad. :-)
  • JubilantOstrich - Tuesday, January 5, 2016 - link

    The Leaf's battery capacity is 30 kWh. 250 W over four hours of driving (which puts you around the 100-mile range limit) costs 1 kWh. I think mixing up Wh/mile and Wh confused your units.
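    In battery terms (a sketch using the figures above):

    ```python
    # The same 250 W load viewed as battery drain, per the Leaf figures above.

    battery_kwh = 30.0          # Leaf pack capacity
    load_kw, hours = 0.25, 4.0  # 250 W over a full ~100-mile drive

    drain_kwh = load_kw * hours                          # 1.0 kWh
    print(f"{drain_kwh / battery_kwh:.1%} of the pack")  # ~3.3%
    ```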
  • Adski - Tuesday, January 5, 2016 - link

    Thanks for the response; as above, I wasn't quite thinking clearly before registering and posting. I was too busy thinking "wow, I've thought of something no other commenters have, I must make a contribution!"
    Oops :-)
  • hans_ober - Saturday, January 9, 2016 - link

    The Mobileye chip Tesla uses has a reported power draw of less than 4W.
  • nismotigerwvu - Tuesday, January 5, 2016 - link

    Oh look, there's a silhouette of a 300ZX Twin Turbo in that last figure. Props to the guy who snuck it in there.
  • MANOL VOJKA - Thursday, January 7, 2016 - link

    Is Pascal based on 3D stacked memory? Why is it on standard GPU memory?
  • Yojimbo - Friday, January 8, 2016 - link

    Pascal is not based on 3D stacked memory. 3D stacked memory is a technology that NVIDIA plans to use with some Pascal implementations. Both AMD and NVIDIA have recently said that both GDDR and HBM will be used for their upcoming generation of GPUs.
  • extide - Tuesday, January 12, 2016 - link

    Yeah, HBM will only be used on the top 1, MAYBE 2, chips. I highly doubt we will see two HBM chips from AMD (probably 2 SKUs, but one chip, perhaps cut down), and then 1 or 2 from NVIDIA -- GP100 will be HBM, but the question is whether or not GP104 will be HBM as well...
  • effingterrible - Tuesday, January 12, 2016 - link

    Those aren't Pascal GPUs...https://semiaccurate.com/2016/01/11/nvidia-pascal-...
  • extide - Tuesday, January 12, 2016 - link

    That's an odd statement to make while linking that article. Charlie is of the opinion that those ARE 16nm Pascal chips. I am thinking that they are not, and that they are just GTX 980 MXM boards put in place for show. It's not like NVIDIA hasn't done similar things before...

    Also, the funny thing is that AMD DID indeed have 14nm stuff in Jan of 2015 -- the "G91" chip, which was probably the one they ran the demo against the GTX 950 on -- appears to have come from the fab in Jan 2015. So either Charlie is being entirely facetious and making fun of NVIDIA with that article, OR he is completely wrong.

    REF AMD GPU 14nm in Jan 2015: http://wccftech.com/amd-polaris-gpus-spotted/