  • IntelUser2000 - Monday, August 30, 2010 - link

    Holy crap. It doesn't make sense, but 650/1300MHz with 12 EUs seems amazing on mobile. My wild guess is that the 650/1300MHz IGP is roughly equivalent to a GPU running at a sustained 1.1GHz, while the 850/1350MHz desktop part is closer to 1.25GHz.

    Still, 12 EUs is a big deal. So the question is, what does 1 vs. 2 "cores" really mean: just the EU count, or more than the EUs? If it's just the EUs, the difference might be much smaller than 50%.
  • CharonPDX - Monday, August 30, 2010 - link

    The difference is that the mobile space has a lower power threshold to hit, and Intel would like to muscle discrete GPUs out of the equation completely in both the low end and the midrange.

    So by offering such aggressive turbo (plus more EUs on all SKUs), they make it more likely that midrange systems will forgo discrete GPUs. For example, we may very well see Intel integrated graphics as the standard, even on the highest-end Apple products, with a discrete GPU solely as an additional-cost option.

    In the mobile space, you can trade off CPU power usage for GPU power usage more readily. On the desktop you don't need to do that nearly as much, given the greater power envelope and the reasonably common acceptance of a low-end discrete GPU in mid-range systems.

    For the majority of games, even two CPU cores aren't running at full load, so the CPU half could run at stock speeds, possibly even idling one or two cores on the quad-core parts, while the GPU takes up all the power headroom to boost itself all the way. Once you're back at your desktop doing some video compression, though, the GPU can drop to idle and all (potentially four) CPU cores can take all the power they can muster to get it over with (a rough sketch of this budget sharing follows below).

    Now, if you play a game that fully stresses all your CPU cores *AND* your GPU... then you end up with a mess.

    I can't wait to see benchmarks of CPU-heavy games, tweaked with different settings. You'll probably find that settings that should be purely CPU-intensive will still drop your framerate noticeably, as the GPU doesn't have the headroom to clock up while the CPU is drawing the power.
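
    Below is a rough, purely illustrative sketch of that shared power budget; it is not Intel's actual turbo algorithm, and the 45 W package budget and the per-domain power demands are made-up assumptions.

        /* Illustrative only: CPU and GPU share one package power budget, and
         * whichever side is idle effectively donates headroom to the other. */
        #include <stdio.h>

        #define PACKAGE_BUDGET_W 45.0   /* assumed quad-core mobile TDP */

        struct domain { double demand_w; double grant_w; };

        static void arbitrate(struct domain *cpu, struct domain *gpu)
        {
            double total = cpu->demand_w + gpu->demand_w;
            if (total <= PACKAGE_BUDGET_W) {
                /* Under budget: both sides get what they ask for. */
                cpu->grant_w = cpu->demand_w;
                gpu->grant_w = gpu->demand_w;
            } else {
                /* Over budget: scale both demands down proportionally. */
                cpu->grant_w = cpu->demand_w * PACKAGE_BUDGET_W / total;
                gpu->grant_w = gpu->demand_w * PACKAGE_BUDGET_W / total;
            }
        }

        int main(void)
        {
            /* A game that loads the CPU lightly but wants maximum GPU turbo. */
            struct domain cpu = { 15.0, 0.0 };
            struct domain gpu = { 38.0, 0.0 };
            arbitrate(&cpu, &gpu);
            printf("CPU gets %.1f W, GPU gets %.1f W of a %.0f W budget\n",
                   cpu.grant_w, gpu.grant_w, PACKAGE_BUDGET_W);
            return 0;
        }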
  • IntelUser2000 - Monday, August 30, 2010 - link

    You don't see a full-fledged GeForce GTX 480 in a laptop because the thermals don't allow it. If the mobile variant gets the faster configuration, it just means your desktop part is crippled and not showing its full potential.

    They can't take such a chance when Llano's GPU is rumored to be substantial. They need all the advantages they can get.

    About Turbo: I think it's too early to judge how it'll do in CPU-intensive games. If implemented well, it can work pretty well. The graphics turbo driver on Arrandale has a frame rate counter; if Intel can see that boosting the GPU helps more than boosting the CPU, the driver can be programmed so GPU turbo kicks in more often.

    It's not "oh, we'll let the software decide what to do," but rather "hmm, most games need GPU power, so we'll have the software favor GPU turbo."
  • IntelUser2000 - Monday, August 30, 2010 - link

    Sorry for the double post.

    Everything else about AnandTech is good, except for the terrible posting system.
  • iwod - Tuesday, August 31, 2010 - link

    Unless Apple and Intel develop an OpenCL driver for this new Intel IGP, Apple using it as the sole GPU in their laptops will be out of the question.
  • Calin - Tuesday, August 31, 2010 - link

    A game that fully stresses both the CPU and the GPU belongs on a gaming rig (or a gaming laptop), not on a regular laptop.
    This "powerful integrated graphics" will also greatly simplify the cooling (one heat sink, one fan) and the power delivery (probably cutting the number of components in half). Along with genuinely "fast enough" graphics, it looks like NVIDIA is out of the midrange laptop graphics market (and AMD/ATI is out of midrange laptop graphics too, for everything outside its own processors/chipsets).
  • bennyg - Thursday, September 2, 2010 - link

    lol. My G51J has only one fan for 100W+ worth of i7-720QM CPU and GTX 260M GPU, and it manages to keep things below boiling point. Not by much, mind you...

    Midrange graphics won't be touched. We're only talking about 2-3x the power of current integrated graphics, which only puts the low-end dedicated GPUs under threat, where arguably the feature set is more of a selling point than grunt anyway. If these IGPs can match on feature set, then we'll see the "6400"/"6500"/"410M" et al. have a tough time justifying their existence.
  • SteelCity1981 - Monday, August 30, 2010 - link

    I'm surprised Intel didn't name the 2720QM or 2820QM something along the lines of 2750QM or 2850QM, considering the clock speeds are a bit faster than the original 720QM's and 820QM's, which sounds like a more evolutionary step beyond the 740QM and 840QM. Also, I take it that Q4 2011 will probably see a refresh of the 2nd-gen mobile Core i-series lineup, much like the current mobile Core i-series saw this year.
  • bennyg - Thursday, September 2, 2010 - link

    The 740/840/940 wasn't a refresh. It was a rebadging of the kind we hate NVIDIA for.

    It was just a redrawing of the lines between speed bins. They are EXACTLY the same chips as were used in the 720/820/920, just with a multiplier one step higher across the board.

    I also see the confusion everywhere... go onto any forum where people ask "what notebook should I buy" and you'll see an "i5 vs i7" or "dual vs quad" thread on a daily basis with a totally confused OP, and sometimes the replies are even more wrong.
  • SteelCity1981 - Saturday, September 11, 2010 - link

    Um, it was a refresh of the current lineup. Sure, it may have been the same core logic, but it was still an update; even Intel states that. Idiot.
  • Mike1111 - Monday, August 30, 2010 - link

    I'm really interested to see what Apple will put into the 13-inch MacBook Pro. If Apple won't abandon NVIDIA because of the missing OpenCL support in Sandy Bridge, are they going to have to keep the years-old Core 2 Duo around? There doesn't seem to be an obvious solution for Apple, apart from adding a power-hungry dedicated graphics card like in the 15-inch.
  • Roland00 - Monday, August 30, 2010 - link

    The reason is that Llano is a two-chip solution:
    Chip 1 is the chipset
    Chip 2 is the CPU+GPU

    Sandy Bridge with a discrete graphics card would be a three-chip solution:
    Chip 1 is the chipset
    Chip 2 is the Sandy Bridge CPU
    Chip 3 is an NVIDIA or ATI GPU with Apple's version of Optimus

    Besides space and cost considerations, differentiating the 13-inch from the 15-inch may be an incentive for you to spend another $500 to $600 (the average selling-price premium the 15-inch carries over the 13-inch MacBook Pro), since a Sandy Bridge CPU plus a separate graphics card will probably be faster in both CPU and GPU tasks (though not necessarily better in battery life).
  • pcfxer - Monday, August 30, 2010 - link

    They are going with AMD/ATi. You just watch and see.
  • Roland00 - Monday, August 30, 2010 - link

    Though it is too far in the future to truly know (instead of just rumors) which models will use AMD and which will use Intel. For all we know they may use Bobcat for the Apple TV, Llano for the Mac mini, and a combination of Llano and SB for the MacBook Pros. Who knows what they will use for the standard MacBook and iMac.
  • TEAMSWITCHER - Monday, August 30, 2010 - link

    When Apple was getting the G5 chip from IBM, there were huge supply problems. This and the performance/watt efficiency of Intel processors forced Apple to the x86 architecture.

    Apple is not about to go back to the supply-problem days of the G5, and AMD has had lots of supply issues. AMD should be the discrete graphics provider, though.
  • erple2 - Monday, August 30, 2010 - link

    Honestly, the primary reason Apple went with Intel over the mobile G5 and G6 chips was supply: Intel was willing to put Apple in its "preferred" category, something IBM and company weren't willing to do with the G5+.

    IBM was in the process of developing low-power G5 chips for use in Apple's mobile machines, but that was *not* IBM's focus. Intel already had low-power CPUs available and was willing to treat Apple as a first-rate reseller. I think that's why Apple went with Intel.
  • MonkeyPaw - Monday, August 30, 2010 - link

    What supply issues? Apple can use CPUs from both companies, just like it uses GPUs from both. That means you are LESS likely to have supply issues, as long as Intel doesn't threaten Apple with limited product lines. I doubt that will happen after the AMD settlement.
  • iwodo - Monday, August 30, 2010 - link

    Again, if the dual-"core" GPU works like current SLI and CrossFire, then we will need constant driver updates, which is not a good thing.

    With this amount of GPU power, maybe we can finally move all of the desktop display rendering to the GPU.

    The mention of NVIDIA is interesting. Apple, which wants every system to be OpenCL capable, would need another GPU. However, in systems with space for only two chips, that means Apple either has to write an OpenCL driver for the Intel GPU or use an NVIDIA chipset (via the PCI Express interface).

    Now the only thing missing from my next setup is a super-fast and affordable SSD.
  • JarredWalton - Monday, August 30, 2010 - link

    I seriously doubt we're looking at anything like SLI/CF. This is merely a doubling of the number of GPU execution units (EUs), similar to the way NVIDIA offers parts with 16 and 32 CUDA cores, or AMD with 40 and 80 stream processors. What we don't know is whether the number of shader processors is the only thing that doubles; i.e., is there an increase in the number of ROPs as well?

    Bandwidth is the one thing that almost certainly won't increase between the two variants, so there will be games that run into bandwidth limitations, but with dual-channel DDR3-1600 in a laptop we're still talking about far more bandwidth than most entry GPUs get. There, a 64-bit DDR2/DDR3 interface at up to ~1600 MT/s is common, so we're looking at sharing roughly twice that much bandwidth with the CPU, if we're comparing against something like the 310M and HD 5470 (back-of-envelope numbers below).
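
    To put back-of-envelope numbers on that (the bus widths and transfer rates here are the commonly quoted figures, not measurements):

        /* Peak theoretical memory bandwidth: dual-channel DDR3-1600 shared by
         * CPU and IGP, versus a 64-bit DDR3-1600 bus on an entry-level GPU. */
        #include <stdio.h>

        static double peak_gb_per_s(double mtransfers_per_s, int bus_bits)
        {
            return mtransfers_per_s * 1e6 * (bus_bits / 8.0) / 1e9;
        }

        int main(void)
        {
            printf("Dual-channel (128-bit) DDR3-1600: %.1f GB/s, shared with the CPU\n",
                   peak_gb_per_s(1600, 128));
            printf("64-bit DDR3-1600 entry GPU:       %.1f GB/s, dedicated\n",
                   peak_gb_per_s(1600, 64));
            return 0;
        }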
  • Doormat - Monday, August 30, 2010 - link

    Can I get more info on why SB doesn't support OpenCL? Is it Intel's drivers, or can the hardware just not do it? That's incredibly disappointing. I was hoping for a 13" MBP, but it doesn't look like it'll happen.
  • Mike1111 - Monday, August 30, 2010 - link

    Even if it's only a driver problem, will Sandy Bridge's GPU be able to offer at least the same OpenCL performance as the current NVIDIA solution? Because IMHO Apple won't tolerate a slower GPU (with regard to OpenCL) in a newer MacBook Pro.
  • B3an - Monday, August 30, 2010 - link

    It's for the best. Now you'll get a real and fully capable laptop/OS instead, one that's probably faster too, for the same money that would have gone into that MacBook.
  • Roland00 - Monday, August 30, 2010 - link

    Where they differ, though, is how they switch. NVIDIA Optimus (unlike NVIDIA Switchable and ATI Switchable) does not require you to log out to switch. Furthermore, Optimus is a software switch that switches automatically based on the program in use, while NVIDIA Switchable (still used on some older designs like Sony's Z series) and ATI Switchable require the user to tell the computer to switch graphics.

    Now ATI is working on a solution similar to NVIDIA Optimus, and Apple has created its own version (with its own IP) of Optimus.

    So yes, Intel will let you switch graphics with Sandy Bridge just like you can with Nehalem, but how it is implemented is up to other companies such as ATI and NVIDIA.
  • Thermogenic - Monday, August 30, 2010 - link

    Small correction: switchable graphics do not force you to log out to switch; that's a limitation artificially placed on the user by Apple. I switch my Alienware m11x all the time and technically prefer that to the Optimus method, although I'm sure most non-techie users prefer Optimus.

    As another user posted, the biggest gain from Optimus is nVidia driver support.
  • Roland00 - Monday, August 30, 2010 - link

    That was an error on my part. Prior switchable graphics, before Optimus, either required a logout/reboot or had an interposer that was a "combined" driver of "Intel and NVIDIA" or "ATI and ATI." The interposer forced you to use specific drivers developed by your OEM that were rarely updated, and sometimes it simply didn't work. Jarred talked about it when he introduced the Optimus review.

    http://www.anandtech.com/show/2934/nvidia-optimus-...
  • Stuka87 - Monday, August 30, 2010 - link

    So just what do you do for a living that allows you to put yourself on a pedestal above AMD's engineers? Have you worked with them, and therefore have first-hand knowledge?

    Your comments seem very narrow-minded regardless of your background, though.
  • StevoLincolnite - Monday, August 30, 2010 - link

    "Because, since all of AMD's engineers are fools, AMD's products will not work appropriately. Only AMDiots will buy those crappy products."

    I must say... "Wow".

    And here I thought the fanboys moved from Intel/AMD/nVidia/ATI and onto the PC vs Mac debate, seems I was mistaken.

    sans2212, you are an idiot. I would like to see you create a multi-billion-dollar company if you think you can do better than AMD. Seriously.

    Until then... grow up, get a clue, and stop being an incompetent, ignorant, moronic douche.
  • gfody - Monday, August 30, 2010 - link

    YHBTYHLHAND
    best ignore them
  • bennyg - Thursday, September 2, 2010 - link

    Steve Irwin made a career of hitting stupid animals with a stick and look what happened to him.
  • Belard - Monday, August 30, 2010 - link

    With these Intel model numbers, it seems they have hired some people from NVIDIA?

    Thank AMD for making intel CPUs affordable.

    I own both brands of CHIPS... my current main desktop is an Intel Core 2 Quad, but I've been building AMD systems for most people, and since the Phenom II X2-X4 CPUs, I've not had to go with Intel again. I like my system.

    Reliability and compatibility are not an issue with AMD or their soon-to-be-ex-ATI graphics products.
  • freeman70 - Monday, August 30, 2010 - link

    Very interesting. I have the same opinion. I have an older overclocked Core 2 Quad rig (Q6600 overclocked to 3.2GHz), but I have chosen to build cheap AMD systems for friends and family. They honestly can't tell the difference, and with the pricing of AMD motherboards with pretty decent IGPs and cheap quad-core CPUs like the Athlon X4, it's hard to justify the extra $100 for me to build them an Intel rig with a crappy Intel IGP.

    However, it seems that with Sandy Bridge, Intel will finally provide relatively decent graphics performance. Now, if only they don't price their bloody motherboard components outrageously. I thought that since they were integrating the GPU on package or on die, motherboards would get cheaper because they would only need a single chip instead of the old northbridge/southbridge, but motherboards with Intel chipsets just seem to keep increasing in price with no real added value. It will definitely be interesting to see what kind of performance the new series of desktop and mobile CPUs and chipsets from both companies will provide.
  • Thermogenic - Monday, August 30, 2010 - link

    I'm with you. I just built a rig for my dad using an Athlon II X2 with an older 785G DDR2 chipset. With the money I saved by not going Intel (mostly on the motherboard), I got him a 4850 graphics card. Not state of the art, but the entire build, including an Antec 300 case, was around $400, plus $100 for the Microsoft tax.

    On the low end, AMD is still the best value. For a web surfer, you can build a solid AMD rig running a free operating system for around $325; not bad at all. An equivalent Intel system would be around $400.
  • Jamahl - Monday, August 30, 2010 - link

    Maybe you want to check the new Mac OS drivers, which already have support for the full range of 6000-series GPUs waiting.

    Retard.
  • danielkza - Monday, August 30, 2010 - link

    "the only indication that .. .isn't is the letter Q."

    First page, 4th paragraph.
  • LyCannon - Monday, August 30, 2010 - link

    I think many are overlooking the possibility of writing parallel code on GPUs. I know NVIDIA has CUDA and AMD has their "own" type of GPU code, but what about Intel?

    Even better, what about Intel, AMD, and NVIDIA getting together with Microsoft and writing a new DirectX API for general GPU processing?
  • DesktopMan - Monday, August 30, 2010 - link

    "OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Intel, and Nvidia."

    They already got together and made OpenCL, which, in contrast to DirectX, is quite platform-independent (see the kernel sketch below).

    Why Intel hasn't implemented it on their own GPUs yet is anyone's guess.
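
    As a concrete illustration of that platform independence, a trivial OpenCL C kernel looks like ordinary C and is compiled at run time by whichever vendor's driver happens to be installed; this is a minimal, vendor-neutral sketch, not code tied to any of the hardware discussed here.

        /* Minimal OpenCL C kernel: element-wise vector addition. The same
         * source runs on any device whose vendor ships an OpenCL driver. */
        __kernel void vec_add(__global const float *a,
                              __global const float *b,
                              __global float *out)
        {
            size_t i = get_global_id(0);
            out[i] = a[i] + b[i];
        }

    The host enqueues a kernel like this through the standard clEnqueueNDRangeKernel call, regardless of whether the device underneath is an NVIDIA, AMD, or (hypothetically, given driver support) Intel GPU.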
  • chizow - Monday, August 30, 2010 - link

    The "Why hasn't Intel jumped onboard with OpenCL" should be glaringly obvious, its a threat to their x86 stranglehold on the desktop and server markets and the emerging HPC market everyone wants a piece of.

    A platform/architecture-agnostic programming language marginalizes the importance of, and reliance on, x86, and while Intel is a fully paid-in contributing member of the Khronos Group that oversees the OpenCL spec, they're really just there to monitor its progress, IMO.

    They're instead promoting their own parallel programming language, Ct, which will of course leverage and promote Intel's own compilers and x86 architecture.

    Similarly, Microsoft is absent from the Khronos Group/OpenCL, as it poses a threat to their own proprietary APIs like DirectX/DirectCompute.
  • rootheday - Monday, August 30, 2010 - link

    I believe Intel must have some sort of parallel GPU support - see
    http://laptops.toshiba.com/laptop-finder?EXTRAS=Re...

    Toshiba's Resolution+ is a proprietary video processing upscaling technology. It apparently works on Intel HD graphics in Core i3/i5/i7 today.
  • synaesthetic - Monday, August 30, 2010 - link

    The primary reason the current MBP13 uses Penryn instead of Arrandale is Intel HD Graphics' lack of OpenCL support.

    Apple is definitely gearing up to use AMD processors and GPUs.

    And AMD products aren't rubbish. Their new quad-core mobile processors are absolutely on par with the high-end Mobile i3 and Mobile i5 chips from Intel, consume the same amount of power, have better integrated graphics... and are cheaper.

    If I hadn't bought my laptop when I did, I'd probably be using a lappy with one of those mobile quad Phenoms.
  • synaesthetic - Monday, August 30, 2010 - link

    Optimus is a totally different animal than normal hybrid graphics switching.

    In normal switchable graphics, you have to have extra hardware components--multiplexers--to switch the data moving from each GPU to the display.

    With Optimus, nvidia skipped this step by having the data spat out by the discrete GPU copied over to the IGP, which is always connected to the display. This *does* mean that the IGP is always on (and the discrete GPU is only on when it's used) but it means that switching can be totally seamless.

    Driver support is the thing that bothers me the most. With Optimus you don't have to worry, because Optimus drivers are baked right into NVIDIA's Verde package. With other switchable graphics, driver compatibility is an issue.

    I hope AMD nails down a switchable graphics standard soon so that we don't have to worry about driver issues with AMD switchable graphics.
  • GTVic - Monday, August 30, 2010 - link

    The 2820 and 2620 would seem to be good choices for a mobile workstation, but the current i7-620M and i7-720QM are also 35W and 45W respectively, so there isn't much power saving; maybe there is some saving from the integrated graphics? It would be nice to get more battery life and less weight from a mobile workstation.
  • erple2 - Monday, August 30, 2010 - link

    TDP only gives a rough indication of power consumption, not of how much power the CPU uses in practice. The desktop parts share the same TDP as the previous socket 1156 CPUs, but pull less power under load...
  • Roland00 - Monday, August 30, 2010 - link

    One reason I can see for Intel putting a dedicated transcoding block in the CPU is that Intel will eventually make a big push for Intel Wireless Display. The first generation of the technology has been out for a little over six months. It is limited to 1280x800 or lower resolutions at 30fps, at a range of 20ft (line of sight).

    With dedicated transcoding, Intel can compress the video stream on the fly and send it to the proprietary receiver hooked up to the TV via component or HDMI. With the right compression it is possible, in theory, to get 1080p at 30fps at a decent bit rate if everything works correctly (rough numbers below). The problem is that many companies have tried, and to my knowledge no one's solution works correctly 98% of the time. Hopefully Intel can get it up to 98%.
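
    As a rough sanity check on that claim (illustrative arithmetic only; the ~100:1 figure is an assumed compression ratio in the ballpark of what H.264 achieves at decent quality):

        /* Raw 1080p30 bit rate versus an assumed ~100:1 compressed stream. */
        #include <stdio.h>

        int main(void)
        {
            double raw_mbit = 1920.0 * 1080.0 * 24.0 * 30.0 / 1e6; /* ~1493 Mbit/s */
            double ratio    = 100.0;                               /* assumed      */
            printf("Raw 1080p30 (24 bpp): ~%.0f Mbit/s\n", raw_mbit);
            printf("Compressed at ~100:1: ~%.0f Mbit/s\n", raw_mbit / ratio);
            return 0;
        }

    Roughly 15 Mbit/s is well within 802.11n throughput, which is why on-the-fly hardware encoding makes wireless 1080p plausible at all.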
  • Overmind - Tuesday, August 31, 2010 - link

    Apple will probably go for Radeons.
  • silverblue - Tuesday, August 31, 2010 - link

    There's always one, isn't there?
  • ClagMaster - Tuesday, August 31, 2010 - link

    How am I going to compare the performance of these new processors with the Q6600 on a G965 chipset since all of the benchmarks have changed?

    I upgrade after the following preconditions apply:

    1) I have owned my current PC for 3 years and have recouped my investment
    2) I can upgrade to new equipment that has double the capacity for the same cost.
    3) The power draw is the same or less.

    You need to compare some popular CPUs, such as the socket 775 Q6600 and Q9650, to give upgraders a clearer picture of what the REAL performance gains are over legacy hardware.
  • 7Enigma - Wednesday, September 1, 2010 - link

    I believe AnandTech does this at every new RELEASE of major hardware. I seem to remember comparison charts of old-school P4 and single/dual-core Athlon systems put in to give a good idea of how much the WHOLE PLATFORM has improved. Since these last two Sandy Bridge articles are basically previews, it makes sense that there isn't a more detailed comparison.

    But I do agree it needs to happen. I think we can safely ditch the P4 numbers and have the bottom tier be single/dual Athlon XP systems (and even that is pretty darn old), then have a Q6600-based system, and finally a Nehalem-based system. That should give a good idea of improvements in both performance and power consumption.
  • ClagMaster - Wednesday, September 1, 2010 - link

    Thank you.

    Adding some popular processors such as the Q6600 would be beneficial.
  • name99 - Tuesday, September 14, 2010 - link

    FWIW, CNET claims in this article that SB does have OpenCL 1.1 support: http://news.cnet.com/8301-13924_3-20016302-64.html
  • 8steve8 - Monday, September 27, 2010 - link

    Yes, I also want to know: who is right?
  • #1techman - Sunday, December 12, 2010 - link

    I don't think the QM for quad cores or M for dual cores has anything to do with how many cores Bulldozer will have, seeing as there will be no mobile Bulldozer chips.
