Original Link: https://www.anandtech.com/show/6873/nvidias-geforce-700m-family-full-details-and-specs



Introducing the NVIDIA GeForce 700M Family

With spring now well under way and the pending launch of Intel’s Haswell chips, OEMs always like to have “new” parts across the board, and so once more we’re getting a new series of chips from NVIDIA, the 700M parts. We’ve already seen a few laptops shipping with the 710M and GT 730M; today NVIDIA is filling out the rest of 700M family. Last year saw NVIDIA’s very successful launch of mobile Kepler; since that time, the number of laptops shipping with NVIDIA dGPUs compared to AMD dGPUs appears to have shifted even more in NVIDIA’s favor.

Not surprisingly, with TSMC still on 28nm NVIDIA isn’t launching a new architecture, but they’ll be tweaking Kepler to keep it going through 2013. Today's launch of the various 700M GPUs is thus very similar to what we saw with the 500M launch: everything in general gets a bit faster than the previous generation. To improve Kepler NVIDIA is taking the existing architecture and making a few moderate tweaks, improving their drivers (which will also apply to existing GPUs), and as usual they’re continuing to proclaim the advantages of Optimus Technology.

Starting on the software side of things, we don’t really have anything new to add on the Optimus front, other than to say that in our experience it continues to work well on Windows platforms—Linux users may feel otherwise, naturally. On the bright side, things like the Bumblebee Project appear to be helping the situation, so now it's at least possible to utilize the dGPU and iGPU under Linux. As far as OEMs go, Optimus has now matured to the point where I can't immediately come up with any new laptop that has an NVIDIA GPU and doesn't support Optimus; we're now at the point where an NVIDIA equipped laptop inherently implies Optimus support.

The second software aspect is NVIDIA’s GeForce Experience software, which allows for automatic game configuration based on your hardware. You can see the full slide deck in the gallery at the end with a few additional details, but GeForce Experience is a new software tool that’s designed to automatically adjust supported games to the “best quality for your hardware” setting. This may not seem like a big deal for enthusiasts, but for your average Joe that doesn’t know what all the technical names mean (e.g. antialiasing, anisotropic filtering, specular highlighting, etc.) it’s a step towards making PCs more gamer friendly—more like a console experience, only with faster hardware. ;-) GeForce Experience is already in open beta, with over 1.5 million downloads and counting, so it’s definitely something people are using.
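
Conceptually, what the tool does boils down to matching your detected hardware against a database of pre-tested settings profiles for each supported game. Here’s a toy sketch of the idea; the profile table below is invented purely for illustration and isn’t NVIDIA’s actual data or API.

```python
# Toy illustration of "optimal settings" lookup: detected GPU + game -> a
# pre-tested settings profile. The profiles here are invented examples, not
# anything shipped by GeForce Experience.

PROFILES = {
    ("GT 740M", "Battlefield 3"): {"resolution": "1366x768", "preset": "Medium",
                                   "antialiasing": "FXAA", "anisotropic": "8x"},
    ("GTX 680M", "Battlefield 3"): {"resolution": "1920x1080", "preset": "Ultra",
                                    "antialiasing": "4x MSAA", "anisotropic": "16x"},
}

def recommended_settings(gpu, game):
    # Fall back to conservative defaults when no profile exists for the combo.
    return PROFILES.get((gpu, game), {"resolution": "1366x768", "preset": "Low"})

print(recommended_settings("GT 740M", "Battlefield 3"))
```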

Finally, NVIDIA has added GPU Boost 2.0 to the 700M family. This is basically the same as what’s found in GeForce Titan, though with some tuning specific to mobile platforms as opposed to desktops. We’re told GPU Boost 2.0 is the same core hardware as GPU Boost 1.0, with software refinements allowing for more fine-grained control of the clocks. Ryan has already covered GPU Boost 2.0 extensively, so we won’t spend much more time on it other than to say that over a range of titles, NVIDIA is getting a 10-15% performance improvement relative to GPU Boost 1.0.
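
For a mental model of what the boost logic is doing, here’s a rough sketch—purely illustrative, not NVIDIA’s firmware. The clock step, power target, and temperature target below are made-up numbers; the point is simply that the clock ratchets up while the GPU stays under its targets and backs off once it hits one of them.

```python
# Illustrative sketch of a boost control loop (not NVIDIA's implementation).
# The core clock steps up in small bins while power and temperature stay
# under their targets, and steps back down otherwise. All constants are
# assumptions for the example.

def next_boost_clock(clock_mhz, power_w, temp_c,
                     base=967, max_boost=1100, step=13,
                     power_target_w=45, temp_target_c=80):
    """Return the core clock to use for the next sampling interval."""
    if power_w < power_target_w and temp_c < temp_target_c:
        return min(clock_mhz + step, max_boost)   # headroom left: step up a bin
    return max(clock_mhz - step, base)            # hit a target: step back down

# Example run: plenty of headroom at first, then the temperature target bites.
clock = 967
for power_w, temp_c in [(30, 62), (32, 66), (34, 71), (41, 81)]:
    clock = next_boost_clock(clock, power_w, temp_c)
    print(clock)   # 980, 993, 1006, 993
```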

Moving to the hardware side of things, the actual hardware changes only apply to one of the chips. GK104 will continue as the highest performing option in the GTX 675MX and GTX 680M (as well as the GTX 680MX in the iMac 27), and GK106 will likewise continue in the GTX 670MX (though it appears some 670MX chips also use GK104). In fact, for now NVIDIA isn’t announcing any new high-end mobile GPUs, so the GTX 600M parts will continue to fill that niche. The changes come for everything in the GT family, with some of the chips apparently continuing to use GK107 while a couple options will utilize a new GK208 part.

While NVIDIA won’t confirm which parts use GK208, the latest drivers do refer to that part number so we know it exists. GK208 looks to be largely the same as GK107, and we’re not sure if there are any real differences other than the fact that GK208 will be available as a 64-bit part. Given the similarity in appearance, it may serve as a 128-bit part as well. Basically, GK107 was never available in a 64-bit configuration, and GK208 remedies that (which actually makes it a lower end chip relative to GK107).



GeForce 700M Models and Specifications

With that brief introduction out of the way, here are the specs of the now announced 700M family. If I had to guess, I expect we’ll see revised high-end 700M parts sometime later this year based on tweaked GK106 and GK104 chips—like maybe a GTX 780M that has the performance of the GTX 680MX but in the power envelope of the GTX 680M—but we’ll have to wait and see what happens.

                   GeForce GT 750M           GeForce GT 745M           GeForce GT 740M
GPU and Process    28nm GK107 or GK106       28nm GK107                28nm GK107
CUDA Cores         384                       384                       384
GPU Clock          Up to 967MHz plus Boost   Up to 837MHz plus Boost   Up to 980MHz plus Boost
Memory Eff. Clock  Up to 5.0GHz              Up to 5.0GHz              Up to 5.0GHz
Memory Bus         Up to 128-bit             Up to 128-bit             Up to 128-bit
Memory Bandwidth   Up to 80GB/s              Up to 80GB/s              Up to 80GB/s
Memory             Up to 2GB GDDR5 or DDR3   Up to 2GB GDDR5 or DDR3   Up to 2GB GDDR5 or DDR3

Compared to the previous generation GTX 660M, GT 650M, GT 645M, and GT 640M (not to mention the GT 640M LE), the new chips all have the same core set of features but now with GPU Boost 2.0 and higher memory clocks. I wish NVIDIA would just drop support for DDR3 on their higher end chips, and likewise the “up to” clauses aren’t really helpful, but they’re both necessary evils thanks to working with OEMs that sometimes have slightly different requirements. Overall, performance of these new 700M parts should be up 15-25% relative to the previous models, thanks to higher GPU and memory clock speeds.

You’ll note that the core clocks appear to be a little crazy, but this is based largely on how the OEMs choose to configure a specific laptop. With both GDDR5 and DDR3 variants available, NVIDIA wants to keep performance of chips with the same name within 10% of each other. Thus, we could see a GT 740M with 2.5GHz GDDR5 and a moderate core clock, another GT 740M with 2.0GHz GDDR5 and a slightly higher core clock, and a third variant with 1800MHz DDR3 but matched to a 980MHz core clock. Presumably, most (all?) currently planned GT 750M and GT 745M laptops are using GDDR5 memory, and thus we don’t see the higher core clocks there. As for the Boost clocks, in practice Boost can increase the GPU core speed 15% or more over the normal value, with most games realizing a 10-15% performance increase as a result.
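
As a sanity check on the numbers above, peak memory bandwidth is simply the bus width (in bytes) multiplied by the effective memory clock. The quick calculation below runs through the table’s 80GB/s figure plus the hypothetical GT 740M configurations from the previous paragraph (illustrations, not announced SKUs):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective memory clock).

def peak_bandwidth_gbps(bus_bits, eff_clock_ghz):
    return bus_bits / 8 * eff_clock_ghz   # GB/s

print(peak_bandwidth_gbps(128, 5.0))   # 80.0 -> the "up to 80GB/s" table entry
print(peak_bandwidth_gbps(128, 2.5))   # 40.0 -> GT 740M with 2.5GHz GDDR5
print(peak_bandwidth_gbps(128, 2.0))   # 32.0 -> GT 740M with 2.0GHz GDDR5
print(peak_bandwidth_gbps(128, 1.8))   # 28.8 -> GT 740M with 1800MHz DDR3
print(peak_bandwidth_gbps(64, 2.0))    # 16.0 -> the 64-bit parts further down
```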

One final item of interest is that while the GT 750M appears to have a similar configuration to the other GPUs—384 cores, 128-bit memory interface—the chip shots NVIDIA provided show the GT 750M using a different GPU core. Based on those shots, the GT 750M uses GK106, only it’s what would be called a “floor sweeper” model: any GK106 chip with too many defective cores to be used elsewhere can end up configured basically the same as GK107. Presumably, there will also be variants that use GK107 (or potentially GK208, just like the other parts), but NVIDIA wouldn’t confirm or deny this.

                   GeForce GT 735M           GeForce GT 730M           GeForce GT 720M           GeForce 710M
GPU and Process    28nm GK208                28nm GK208                28nm Fermi                28nm Fermi
CUDA Cores         384                       384                       96                        96
GPU Clock          Up to 889MHz plus Boost   Up to 719MHz plus Boost   Up to 938MHz with Boost   Up to 800MHz with Boost
Memory Eff. Clock  Up to 2.0GHz              Up to 2.0GHz              Up to 2.0GHz              Up to 1.8GHz
Memory Bus         Up to 64-bit              Up to 64-bit              Up to 64-bit              Up to 64-bit
Memory Bandwidth   Up to 16GB/s              Up to 16GB/s              Up to 16GB/s              Up to 14.4GB/s
Memory             Up to 2GB DDR3            Up to 2GB DDR3            Up to 2GB DDR3            Up to 2GB DDR3

Moving on to the lower end of the 700M range, we have the GT 730M and 710M that have already shown up in a few laptops. Joining them are GT 735M and GT 720M, which are similar chips with higher clocks. All of these chips have 64-bit memory interfaces and that will obviously curtail performance a bit, but NVIDIA is targeting Ultrabooks and other thin form factors here so performance and thermals need to be kept in balance; more on this in a moment.

The GT 735M and 730M at least are “new” parts that we haven’t seen previously in the Kepler family. The word is that some OEMs were after more economical alternatives than even the GT 640M LE, and the option to go with a 64-bit interface opens up some new markets. It’s basically penny pinching on the part of the OEMs, but we’ve complained about BoM price saving measures plenty so we won’t get into it here. NVIDIA did mention that they’ve spent some additional time tuning the drivers for performance over a 64-bit bus on these chips, and their primary competition in the iGPU market is going to be HD 4000 running on a ULV chip—and in the near future, HD 4600 with Haswell. They'll also compete with AMD APUs and dGPUs, obviously, but NVIDIA is more interested in trying to show laptop vendors and users what they gain by adding an NVIDIA dGPU to an Intel platform.



Performance Expectations

In their presentation and FAQ, NVIDIA provided estimates of performance relative to Ivy Bridge Core i5 ULV with HD 4000. Before we get to those numbers, we want to quickly set the stage for what NVIDIA is showing. Despite the similarity in name and features, as we discovered last year the ULV chips tend to run into TDP limits when you try to load both the CPU and the iGPU. The CPU cores for instance can use around 12-15W, and a full load on the iGPU can add another 10-15W; stuff both into a 17W TDP and you’re going to get throttling, which is exactly what happens.
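
To put rough numbers on that, here’s a back-of-the-envelope illustration using the wattage ranges quoted above. The even proportional scaling is our simplification for the example, not how the actual power management firmware divides the budget:

```python
# Back-of-the-envelope look at why a 17W ULV chip throttles under a combined
# CPU + iGPU load. Component demands are the approximate mid-points of the
# ranges quoted in the text; the proportional split is a simplification.

TDP_W = 17
cpu_demand_w = 14    # CPU cores can pull roughly 12-15W under load
igpu_demand_w = 12   # HD 4000 can pull roughly another 10-15W

total_demand_w = cpu_demand_w + igpu_demand_w     # 26W requested
scale = min(1.0, TDP_W / total_demand_w)          # fraction that actually fits

print(f"Combined demand: {total_demand_w}W against a {TDP_W}W TDP")
print(f"Roughly {scale:.0%} of the requested power fits, so both sides throttle")
```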

Looking at HD 4000 performance with Core i5 ULV and Core i7 quad-core, you can see that the quad-core part is anywhere from 0% to around 60% faster. On average it’s 35% faster at our 2012 “Value” settings and 26% faster at our 2012 “Mainstream” settings. As for 700M performance relative to Core i5 ULV, NVIDIA provides estimates based on benchmarks at moderate detail and 1366x768 in Battlefield 3, Crysis 2, Just Cause 2, DiRT 3, and F1 2011.

Besides those game benchmarks, NVIDIA provided some performance estimates using results from the 3DMark 11 Performance benchmark, and the results are even more heavily in favor of NVIDIA. In their FAQ, NVIDIA states that even the lowly GeForce 710M is three times faster than ULV HD 4000, while the GT 720M is 3.3x faster, GT 730M and 735M are 4.8x faster (hmmm…do we really need GT 735M?), GT 740M is 5.3x faster, GT 745M is 5.8x faster, and GT 750M is 6.3x faster. Of course, those numbers are from NVIDIA, going up against the much slower ULV variant of Ivy Bridge, and using 3DMark 11—which isn’t quite as important as actual gaming performance.

I suspect the GT3 and GT3e configurations of Haswell will be substantially faster than IVB’s HD 4000 and may come close to the lower end of NVIDIA’s range…at least on the standard voltage Haswell chips. For ULV, I’ve heard performance estimates that GT3 Haswell will be 30-50% faster than GT2 IVB, and GT3e could be roughly twice as fast, but that should still leave NVIDIA with a healthy lead. Anyway, we’d suggest taking all of these numbers with a grain of salt for now. The real comparison for most is going to be Haswell and 700M, and while we have a pretty good idea where 700M and HD 4000 performance fall (since the 700M parts are Kepler and Fermi updates), Haswell’s iGPU is likely to be a different beast.

Closing Thoughts

On the whole, Kepler has been amazingly successful for NVIDIA, particularly in the mobile world. The bar for midrange mobile dGPUs was raised significantly, with the GT 640M LE and above parts typically offering anywhere from 25% to 75% better performance than the previous generation, and that was accomplished while also reducing power use. It was NVIDIA’s version of Intel’s Core 2 launch, and the vast majority of notebooks with dGPUs seem to be using NVIDIA hardware these days. Much of that can also be attributed to NVIDIA’s driver team, where Optimus support and usability still trump AMD’s Enduro alternative. AMD continues to work on improving their drivers, but they’re not yet at the same level as NVIDIA’s mobile drivers.

Not surprisingly, it looks like every laptop with an NVIDIA dGPU these days also comes with Optimus support, and NVIDIA says they’ll be in three times as many Ultrabooks and ultraportables in 2013 compared to 2012—which isn’t too hard, since off the top of my head the only two Ultrabooks with NVIDIA dGPUs I can name are the Acer M5 and the ASUS UX32VD. NVIDIA also says they have over 30 design wins for touchscreen laptops, but considering Windows 8 almost requires a touchscreen to really be useful, that’s expected. We will likely see a limited number of laptops launching with Ivy Bridge CPUs and 700M dGPUs over the coming weeks; ASUS is specifically listed in NVIDIA’s 700M FAQ with their X450 (GT 740M) and N46 (also GT 740M), and Lenovo is a launch day partner with several options: the Y400 with GT 750M, and the Z400/Z500 with GT 740M.

The real launch is likely to coincide with Intel’s Haswell update later in Q2 2013. When that comes along, we're likely to see some additional 700M updates from NVIDIA on the high end (again, echoing what happened with the 600M and 680M launches). Just don't count on seeing a mobile variant of Titan/GK110 for a while yet; I'd peg that level of performance as something we won't see in laptops until we have two more process shrinks under our belts (i.e. when TSMC is at 16nm).
