66 Comments
tipoo - Monday, January 3, 2011 - link
Sorry if I missed this somewhere in the review, but does the graphics component support OpenCL?
RyuDeshi - Monday, January 3, 2011 - link
From the second-to-last paragraph of "Extended compatibility and performance results":
"Ultimately, Sandy Bridge’s IGP is far more capable than many would have expected. Sure, it doesn’t even try to support DX11 or OpenCL, but at least for gaming DX11 is typically too much for even midrange GPUs."
CharonPDX - Monday, January 3, 2011 - link
An Intel rep has said that Sandy Bridge will support OpenCL (http://news.cnet.com/8301-13924_3-20024079-64.html). The trick is that it may take a combination of CPU and GPU to do it, so it may not be the GPU-only OpenCL you are thinking of, but OpenCL code should be able to run.
And in the end, what does it matter, really, as long as it runs? As the desktop Sandy Bridge review points out, video encoding is just as fast using solely the x86 code paths as using nVidia's CUDA or ATI's Stream.
Voldenuit - Monday, January 3, 2011 - link
OpenCL was designed from the outset to run on heterogeneous resources, including the CPU.
So Intel claiming that they "support" OpenCL is nothing special - they just needed the right drivers/API.
However, don't expect OpenCL code running solely on the CPU (my guess as to how SB will handle it) to be any faster than the x86 codepath running on the same CPU.
Checkbox feature.
jameskatt - Monday, January 3, 2011 - link
What Intel wants to do is to have the CPU run OpenCL code.
This totally defeats the purpose of OpenCL.
OpenCL is supposed to allow both the GPU and the CPU to run code simultaneously. This is to allow significant acceleration in running OpenCL code compared to using just the CPU.
Sure. OpenCL code will run. But it will run MORE SLOWLY than with a discrete GPU. And the 16 GPUs in Sandy Bridge will be wasted.
Intel's Sandy Bridge has non-programmable GPUs. This is a serious limitation and deal killer when it comes to running OpenCL code.
I expect Apple to continue to use nVidia's or AMD's discrete GPUs in the MacBooks and MacBook Pros.
This is very disappointing. It shows that Intel still doesn't have the talent to produce decent GPUs.
PlasmaBomb - Monday, January 3, 2011 - link
*cough* I think you mean 12 EU *cough*
Guspaz - Monday, January 3, 2011 - link
<i>What Intel wants to do is to have the CPU run OpenCL code.
This totally defeats the purpose of OpenCL.
OpenCL is supposed to allow both the GPU and the CPU to run code simultaneously. This is to allow significant acceleration in running OpenCL code compared to using just the CPU.</i>
No - running OpenCL code on the CPU is part of the *primary* purpose of OpenCL. The goal of OpenCL is not to "allow the GPU and CPU to run code simultaneously", but to provide a single unified code path that can be used with any hardware, be it CPU or GPU. There are/were already code paths specific to each vendor/type (CUDA for nVIDIA GPUs, Stream for AMD/ATI GPUs, x86 for Intel/AMD CPUs). The problem is that fully supporting all three platforms requires three separate code paths.
OpenCL unifies this, and allows a single codepath to be used regardless of the GPU's type or existence. You've completely misunderstood the purpose of OpenCL.
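To make that concrete, here is a minimal sketch (in C, assuming the standard CL/cl.h header and an installed OpenCL runtime; the structure is illustrative, not taken from the article) of host code that uses one code path whether the device happens to be a GPU or a CPU:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;

        /* Grab the first available OpenCL platform. */
        err = clGetPlatformIDs(1, &platform, NULL);
        if (err != CL_SUCCESS) { printf("No OpenCL platform found\n"); return 1; }

        /* Prefer a GPU device, but fall back to the CPU. The kernel source
           and the rest of the host code are identical in either case. */
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        if (err != CL_SUCCESS)
            err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
        if (err != CL_SUCCESS) { printf("No OpenCL device found\n"); return 1; }

        char name[128];
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("Running OpenCL kernels on: %s\n", name);

        /* ...create the context and queue, build the program, and enqueue
           kernels exactly as you would for a discrete GPU... */
        return 0;
    }

Whether that single path is fast on a given device is a separate question, which is what the comments above are really arguing about.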
Wiggy McShades - Tuesday, January 4, 2011 - link
You need to ask which applications on a desktop actually use OpenCL in a meaningful way. Intel added dedicated hardware for media transcoding, which makes transcoding on anything besides the CPU pointless, and transcoding was roughly all OpenCL was good for on the desktop, laptop, or cellphone.
OpenCL is for vector calculations, and AVX is for vector calculations. All four cores running AVX instructions would simply be a faster choice than OpenCL on a low-end GPU. Intel most likely could get Sandy Bridge's GPU running OpenCL, but it would be pointless. OpenCL just is not a desktop feature.
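For reference, the "AVX for vector calculations" point looks like this in practice - a minimal sketch in C using the AVX intrinsics (the function name and the assumption that n is a multiple of 8 are mine, purely for illustration):

    #include <immintrin.h>

    /* Add two float arrays on the CPU, eight elements per AVX instruction.
       Assumes n is a multiple of 8 to keep the example short. */
    void add_avx(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   /* load 8 floats   */
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_add_ps(va, vb);    /* 8 adds at once  */
            _mm256_storeu_ps(out + i, vc);        /* store 8 results */
        }
    }

Run across four cores, that kind of loop is the "faster choice than OpenCL on a low-end GPU" being described.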
strikeback03 - Wednesday, January 5, 2011 - link
Given how much money they have, I doubt Intel is lacking the "talent" to do anything they want. OpenCL execution on the GPU portion of the SNB chips was probably just not that big a deal to them, and given the number of other things (such as speed and battery life) SNB brings to the table they probably won't have trouble selling lots of these to the average consumer.
8steve8 - Monday, January 3, 2011 - link
Which mobile CPUs on page 1 support TXT, VT-d, AES-NI, VT-x, or Quick Sync?
JarredWalton - Monday, January 3, 2011 - link
All of the mobile chips list AES/TXT/vPro support, unlike the desktop chips. They also all support Quick Sync and have 12 EUs.
DesktopMan - Monday, January 3, 2011 - link
What about virtualization? Not sure why you are mentioning vPro; the requirement for vPro is usually the chipset, in this case the QM67.
JarredWalton - Monday, January 3, 2011 - link
They don't specifically break out VT-d and VT-x on the mobile products; all the slides state is that the mobile products support virtualization. On the desktop slide, they have a line saying "vPro/TXT/VT-d/SIPP" but on mobile slides the line says "AES/TXT/vPro". There's a second line for both desktop and mobile chips that just says "Intel Virtualization Technology" but it's not too useful since it just says "Yes" on every single Sandy Bridge CPU listed. :-\
Hrel - Monday, January 3, 2011 - link
Finally, gaming on integrated graphics. Sooo, when do new Nvidia GPUs come out for laptops?
JarredWalton - Monday, January 3, 2011 - link
Check back on Jan 6. :-p
mobomonster - Monday, January 3, 2011 - link
AMD is toast. Those are blistering performance numbers that even I did not expect. Incredible that it manages near 30 fps in several games at medium detail settings.
The lower power dual core Sandy Bridge models will really put the squeeze on AMD. Even a regular 2520M will give AMD's Brazos a lot of trouble.
tipoo - Monday, January 3, 2011 - link
Bah, AMD has been toast for years now; if they really were, they would be buttered and eaten already.
Yes, horrible metaphor is horrible.
Kangal - Monday, January 3, 2011 - link
I'm a tech enthusiast, especially in the portable device scene, and I always nit-pick things, which is why I own the Acer 4810TG.
The Core i7-640UM would have been my favourite processor, until I saw this.
The successor, the 2657M, seems to offer a (theoretical) performance improvement of 19% and a battery saving of 6%, which is amazing.
From pure guesstimation, that is ~200% (or slightly more) of the SU7300's performance at the same battery life. Whoa!
This would mean new ultra-portable devices (less than 14" and over 6 hours of battery life).
If this gets partnered with the ATI 5650 (or its successor), it will bring serious gaming potential to ultra-portable devices *drools*
------
BUT, I wish they could add another chip to that (ULV) list.
The exact same thing as the i7-2657M, but tossing the dual-core setup for a single core if it meant they could increase the battery life by 70%. (Name it the 1357M?)
I mean, how about a real 10 hours of battery life (6-cell) on something about as fast as the SU7300?
Something like that (Core i7 1357M?) could make Windows 7 tablets a more viable option.
davepermen - Monday, January 3, 2011 - link
I'd prefer a dual-core at 1GHz, or even 800MHz. Since it could still clock up to 2GHz or so, it would be fast when needed but very battery-saving otherwise.
If Intel went down further, it would most likely have killed Atom in the netbook and tablet area by now. And in the phone area, Atom isn't there yet.
Personally, I hate Atom for being in the way. An ultra-low-power Core i1 would be AWESOME.
JarredWalton - Monday, January 3, 2011 - link
SpeedStep lets all the SNB processors (mobile versions at least) run at 800MHz when they're not doing anything else. So you've already got what you're asking for, more or less.
mtoma - Monday, January 3, 2011 - link
Something like a Core i7 1357M could make Win 7 tablets temporarily viable. Remember that in the ultra-portable space the big words are multitasking and dual-core processors (like the Cortex A9). So, realistically, we need ULV dual-core Sandy Bridge.
JarredWalton - Monday, January 3, 2011 - link
The i7-640UM runs at 1.2GHz minimum and 2.26GHz maximum. The i7-2657M runs at 1.6GHz minimum and 2.7GHz maximum. (Actually, the minimum on all the 2nd Gen Core parts is 800MHz when you aren't doing anything that needs more speed.) That would be 33% faster base speed and up to 19% higher max speed, just on clock speeds alone. However, you forgot to factor in around a 20-25% performance increase just from the Sandy Bridge architecture, so you're really looking at anywhere from 19% (bare minimum) to as much as 66% faster for normal usage, and things like Quick Sync would make certain things even faster.
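As a rough sanity check on those figures (a sketch only - the 20-25% architectural gain is the estimate quoted above, not a measured number), the arithmetic works out like this:

    #include <stdio.h>

    int main(void)
    {
        /* Clock speeds quoted above, in GHz. */
        double old_base = 1.2, old_turbo = 2.26;   /* i7-640UM */
        double new_base = 1.6, new_turbo = 2.7;    /* i7-2657M */

        /* Assumed per-clock (architecture) gain: 0% worst case, 25% best case. */
        double ipc_low = 1.00, ipc_high = 1.25;

        double worst = new_turbo / old_turbo * ipc_low;   /* ~1.19x */
        double best  = new_base  / old_base  * ipc_high;  /* ~1.67x */

        printf("Expected range: %.0f%% to %.0f%% faster\n",
               (worst - 1.0) * 100.0, (best - 1.0) * 100.0);
        return 0;
    }

That prints roughly the 19% to 66% range given above (the upper bound rounds to 67% here because 1.6/1.2 is exactly 4/3).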
DanNeely - Monday, January 3, 2011 - link
You've got a limited range of TDP that any given architecture will be good in. According to Intel (at the time of the Atom launch), things start getting rather ragged when the range gets to 10x. Until Core 2 this wasn't really an issue for Intel, because the P3 and earlier top-end parts had sufficiently low TDPs that fitting the entire product line into a single architecture wasn't a problem. It didn't matter much in the P4 era either, because the Pentium M and Core 1 were separate architectures and could be tuned so their sweet spot was significantly lower than the desktop P4's. Beginning with Core 2, however, Intel had only a single architecture. The bottom tier of ULV chips suffered because of this, and on the high end overclocking (especially voltage OCing) delivered very poor performance gains relative to the increase in power consumption.
The Atom is weak as you approach 10W because it was designed not as a low-end laptop part (although Intel is more than willing to take your money for a netbook), but to invade ARM's stronghold in smartphones, tablets, and other low-power embedded systems. Doing that requires good performance at <1W TDP. By using a low-power process (instead of the performance process of every prior Intel-fabbed CPU), Moorestown should finally be able to do so. The catch is that it leaves Intel without anything well optimized for the 10-15W range. In theory AMD's Bobcat should be well placed for this market, but the much larger chunk of TDP given to graphics, combined with AMD's historic liability in idle power, makes it something of a dark horse. I wouldn't be surprised if the 17W Sandy Bridge ends up getting better battery life than the 10W Bobcat because of this.
Kenny_ - Monday, January 3, 2011 - link
I have seen in the past that when Mac OS X and Win 7 are run on the same machine, Mac OS X can have significantly better battery life. Is there any chance we could see what Sandy Bridge does for battery life under Mac OS X?
QChronoD - Monday, January 3, 2011 - link
This was a test machine that Intel cobbled together. Give it a few weeks or months after some retail machines come out, and then I'm sure that someone in the community will have somehow shoehorned OS X onto one of the machines. (Although I don't know how well it would perform, since they'd probably have to write new drivers for the chipset and the graphics.)
cgeorgescu - Monday, January 3, 2011 - link
I think that in the past we've seen MacOS and Win7 battery life comparisons while running on the same Mac, not on the same Acer/Asus/whatever machine (because MacOS doesn't run on such machines without hacks). And I suspect Apple manages better power management only because they have to support only a few hardware configurations (so they can optimize especially for that hardware); it's a major advantage of their business model.
It's like the performance of games on the Xbox and the like... The hardware isn't that impressive, but you write and compile only for that configuration and nothing else: you're sure that every other machine is the same, not depending on AMD code paths, smaller or larger cache, slower or faster RAM, this or that video card, and so on...
Power management in Macs aside, seeing what Sandy Bridge can do under MacOS would be frustrating... You know how long it takes until Jobs fits new stuff into those MBPs. Hell, he still sells the Core 2 Duo.
Penti - Monday, January 3, 2011 - link
Having fewer configurations doesn't mean better-optimized graphics drivers; theirs are worse. Having only Intel doesn't mean the GCC compiler only outputs optimized code. It's a compiler AMD contributes to, among others, and there's no such thing as AMD code paths; there are some minor differences in how it handles SSE, but that's it. Most of it is exactly the same, and the compiler just optimizes for x86, not for a brand. If the CPU supports the same features, the code is just as optimized. The machine code is the same. It's not like having a Cell processor in there.
Power management is handled by the kernel/drivers. You can expect SB MacBooks around this summer - not too long off. And you might even see people accepting Flash on their Macs again, as Adobe is starting to move away from their archaic non-video-player workflow with 10.2 and forward. Battery/power management won't really work without Apple's firmware, though. But you are simply not going to optimize code on an OS X machine like a console; you're going to leave it in a worse state than the Windows counterpart. Apple will also keep using C2D as long as Intel doesn't provide them with proper optimized drivers. It's a better fit for the smaller models as is.
mcdill the pig - Monday, January 3, 2011 - link
Perhaps the issue is more Compal's cooling system, but those max CPU temps (91 degrees Celsius) seem high. It may also be that the non-Extreme CPUs will have lower temps when stressed.
My Envy 17 already has high temps - I was looking forward to SB notebooks having better thermal characteristics than the i7 QM chips (i.e. no more hot palmrests or ball-burning undersides)....
JarredWalton - Monday, January 3, 2011 - link
This is a "works as designed" thing. Intel runs the CPU at the maximum speed allowed (3.1GHz on heavily threaded code in this case) until the CPU gets too warm. Actually, funny thing is that when the fan stopped working at one point (a cold reboot fixed it), CPU temps maxed out at 99C. Even with no fan running, the system remained fully stable; it just ran at 800MHz most of the time (particularly if you put a load on the CPU for more than 5 seconds), possibly with other throttling going on. Cinebench 11.5 for instance ran about 1/4 as fast as normal.DanNeely - Monday, January 3, 2011 - link
Throttling down to maintain TDP at safe levels has been an Intel feature since the P4 era. Back in 2001(?) Tom's Hardware demoed this dramatically by running Quake on a P4 and removing the cooler entirely. Quake dropped into slideshow mode but remained stable, and recovered as soon as the heatsink was set back on top.
The P3 they tested did a hard crash. The Athlon XP/MP chips reached several hundred degrees and self-destructed (taking the mobos with them). Later AMD CPUs added thermal protection circuitry to avoid this failure mode as well.
JarredWalton - Monday, January 3, 2011 - link
True, it's been around a while, but I found it interesting that while performance dropped, it wasn't the "slideshow effect". If the system sat idle, the CPU would start to cool down, so when I fired up a benchmark it would run fast for a little bit. It was very perplexing until I figured out what was happening. The first run in MediaEspresso gave me 11s with Quick Sync. Then I ran it again and it was 17s. The next time it had slowed to 33s.
QChronoD - Monday, January 3, 2011 - link
I'm hoping that someone will announce something like ASUS's new U36JC with an i5-2410 at CES. I'd love to be able to go a full day at school without needing to recharge in almost every class (and actually be able to play Minecraft between classes).
PlasmaBomb - Monday, January 3, 2011 - link
That should read the GTX465 comes next...
PlasmaBomb - Monday, January 3, 2011 - link
To correct the correction (I was going by the graphs), the graphs for the G73J should read GTX460M (I noticed the reference to the GTX460M in the text later and checked the G73J article).
God help us all when it comes to talking/writing about the Sandy Bridge chips themselves, "the i7-2539"...
JarredWalton - Monday, January 3, 2011 - link
Fixed, thanks. I had some good ones in those graphs... G73Jw with 260M and 456M, but no 460M! LOL
iwodo - Monday, January 3, 2011 - link
Now all that is left is the graphics drivers; I hope Intel puts 10x more resources into their current graphics driver team.
Other than that, I am waiting for Ivy Bridge........
ET - Monday, January 3, 2011 - link
I imagine that a single resolution is the best way to compare different machines, but it would have been nice to see some gaming benchmarks at the native res. 1600x900 is not a whole lot higher than 1366x768 (37% more pixels), so I imagine it's possible to game with low details at that resolution. Many Anandtech articles add such figures into the benchmark tables, and I was really missing them here.
JarredWalton - Monday, January 3, 2011 - link
I ran out of time, but I did test 1600x900 at our "High" defaults. Umm... not really what you'd want, as everything is completely unplayable. Perhaps post-CES I'll get a chance to do additional testing, but my feeling is most actual notebooks using SNB will likely ship with a 768p display. Some might do 1080p as well, but they'll be more likely to include Optimus GPUs for gaming.
therealnickdanger - Tuesday, January 4, 2011 - link
Good idea testing at 1366x768. Not only does it fall in line with most notebook screen resolutions, but it also gives a good indication of 720p performance. Given that many, many gamers play on the PS3 and 360 (most games being 720p@30fps), it's very good to see that most games are completely playable at low to medium settings. Some games could probably even get away with higher settings and still stay around 30fps.
It's awesome that Intel is putting the "HD 3000" GPU in all its mobile chips, but I'm very curious how the different clock speeds of the GPU and CPUs will affect performance.
ULV Sandy Bridge numbers soon?
therealnickdanger - Tuesday, January 4, 2011 - link
Oh yeah, I forgot to add: What's with Dark Athena? Is it really that stressful to run or is there a driver issue?
JarredWalton - Tuesday, January 4, 2011 - link
Definitely a driver bug, and I've passed it along to Intel. The HD 4250 manages 7.7FPS, so SNB ought to be able to get at least 15FPS or so. The game is still a beast, though... some would say poorly written, probably, but I just call it "demanding". LOL
semo - Monday, January 3, 2011 - link
Thanks for mentioning USB 3.0, Jarred. It is a much too overlooked but essential feature these days. I simply will not pay money for a new laptop in 2011 that lacks at least one USB 3.0 port.
dmbfeg2 - Monday, January 3, 2011 - link
Which tool do you use to check the turbo frequencies under load?
JarredWalton - Monday, January 3, 2011 - link
I had both CPU-Z and the Intel Turbo Monitoring tool up, but neither one supports logging, so I had to just eyeball it. The clocks in CPU-Z were generally steady, though it's possible that they would bump up for a few milliseconds and then back down and it simply didn't show up.
Shadowmaster625 - Monday, January 3, 2011 - link
On the other Sandy Bridge article by Anand, right on the front page, it is mentioned that the 6EU GT1 (HD 2000) die has 504M transistors, while the 12EU GT2 (HD 3000) die has 624M transistors. Yet here you are saying HD Graphics 3000 has 114M. If the 12EU version has 120M more transistors than the 6EU version, then does that not imply a total GPU transistor count well north of 200M?
JarredWalton - Monday, January 3, 2011 - link
AFAIK, the 114M figure is for the 12EU core. All of the currently shipping SNB chips are quad-core with the full 12EU on the die, but on certain desktop models Intel disables half the EUs. However, if memory serves there are actually three SNB die coming out. At the top is the full quad-core chip. Whether you have 6EU or 12EU, the die is the same. For the dual-core parts, however, there are two chips. One is a dual-core with 4MB L3 cache and 12EUs, which will also ship in chips where the L3 only shows 3MB. This is the GT1 variant. The other dual-core version is for the ultra-low-cost Pentium brand, which will ship with 6EUs (there will only be 6EU on the die) and no L3 cache, as well as some other missing features (Quick Sync for sure). That's the GT2, and so the missing 120M includes a lot of items.
Note: I might not be 100% correct on this, so I'm going to email Anand and our Intel contact for verification.
mino - Monday, January 3, 2011 - link
Nice summary (why was this not in the article?).
Anyway, those 114M do not include the memory controller, encoding, display output, etc., so the comparison with Redwood/Cedar is not really meaningful.
If you actually insist on comparing transistor counts, something like (Cedar-Redwood)/3 should give you a reasonable value of AMD's SPU efficiency from a transistors/performance POV.
mino - Monday, January 3, 2011 - link
"After all, being able to run a game at all is the first consideration; making it look good is merely the icing on the cake."If making it look good is merely icing on the cake, why bother with GPUs ? Lets just play 2D Mines!
(While for the poor souls stuck with Intel IGPs it certainly is just the icing, for Christ's sake, that is a major _problem_, not a feature !!!)
After a few pages I have decided to forgo the "best-thing-since-sliced-bread" attitude, but, what is too much is too much...
mino - Monday, January 3, 2011 - link
Regardless of the attitude, HUGE thanks for listening to comments and including the older games roundup.
While I'd love to see more games like Far Cry or HL2 (read: even older ones) that actually provide playable frame rates on SNB-class IGPs, even this mini-roundup is a really big plus.
As for a suggestion for future game-playability roundups on IGPs, it is really simple:
1) Take a look at your 2006-2007 GPU benchmarking suites
2) Add in a few current MMORPGs
JarredWalton - Monday, January 3, 2011 - link
Anand covered several other titles, and most of the pre-2007 stuff should run fine (outside of blacklisting problems or bugs). Time constraints limit how much we can test, obviously, but your "reviewer on crack" comment is appreciated. 2D and 3D are completely different, and while you might feel graphical quality is of paramount importance, the fact of the matter is that SNB graphics are basically at the same level as PS3/Xbox 360 -- something millions of users are "okay" with.
NVIDIA and AMD like to show performance at settings where they're barely playable and SNB fails, but that's no better. If "High + 1680x1050" runs at 20FPS with Sandy Bridge vs. 40FPS on discrete mobile GPUs, wouldn't you consider turning down the detail to get performance up? I know I would, and it's the same reason I almost never enable anti-aliasing on laptops: they can't handle it. But if that's what you require, by all means go out and buy more expensive laptops; we certainly don't recommend SNB graphics as the solution for everyone.
Honestly, until AMD gets the Radeon equivalent of Optimus for their GPUs (meaning, AMD GPU + Intel CPU with IGP and automatic switching, plus the ability to update your Radeon and Intel drivers independently), Sandy Bridge + GeForce 400M/500M Optimus is going to be the way to go.
skywalker9952 - Monday, January 3, 2011 - link
For your CPU-specific benchmarks you annotate the CPU and GPU. I believe the HDD or SSD plays a much larger role in those benchmarks than the GPU. Would it not be more appropriate to annotate the storage device used? Were all of the CPUs in the comparison paired with SSDs? If they weren't, how much would that affect the benchmarks?
JarredWalton - Monday, January 3, 2011 - link
The SSD is a huge benefit to PCMark, and since this is laptop testing I can't just use the same image on each system. Anand covers the desktop side of things, but I include PCMark mostly for the curious. I could try to put which SSD/HDD each notebook used, but then the text gets to be too long and the graph looks silly. Heh.
For the record, the SNB notebook has a 160GB Intel G2 SSD. The desktop uses a 120GB Vertex 2 (SF-1200). The W870CU has an 80GB Intel G1 SSD. The remaining laptops all use HDDs, mostly Seagate Momentus 7200.4 I think.
Macpod - Tuesday, January 4, 2011 - link
The synthetic benchmarks are all run at turbo frequencies. The scores from the 2.3GHz 2820QM are almost the same as the 3.4GHz i7-2600K. This is because the 2820QM is running at 3.1GHz under Cinebench.
No one knows how long this turbo frequency lasts. Maybe just enough to finish Cinebench!
This review should be redone.
Althernai - Tuesday, January 4, 2011 - link
It probably lasts forever given decent cooling, so the review is accurate, but there is something funny going on here: the score for the 2820QM is 20393, while the score in the 2600K review is 22875. This would be consistent with the difference between CPUs running at 3.4GHz and 3.1GHz, but why doesn't the 2600K Turbo up to 3.8GHz? The claim is that it can be effortlessly overclocked to 4.4GHz, so we know the thermal headroom is there.
JarredWalton - Tuesday, January 4, 2011 - link
If you do continual heavy-duty CPU stuff on the 2820QM, the overall score drops about 10% on later runs in Cinebench and x264 encoding. I mentioned this in the text: the CPU starts at 3.1GHz for about 10 seconds, then drops to 3.0GHz for another 20s or so, then 2.9 for a bit and eventually settles in at 2.7GHz after 55 seconds (give or take). If you're in a hotter testing environment, things would get worse; conversely, if you have a notebook with better cooling, it should run closer to the maximum Turbo speeds more often.
Macpod, disabling Turbo is the last thing I would do for this sort of chip. What would be the point, other than to show that if you limit clock speeds, performance will go down (along with power use)? But you're right, the whole review should be redone because I didn't mention enough that heavy loads will eventually drop performance about 10%. (Or did you miss page 10: "Performance and Power Investigated"?)
lucinski - Tuesday, January 4, 2011 - link
Just like with any other low-end GPU (integrated or otherwise), I believe most users would rely on the HD 3000 just for undemanding games, in which category I would mention Civilization IV and V or FIFA / PES 11. That is to say, I would very much like to see how the new Intel graphics fares in these games, should they be available in the test lab of course.
I am not necessarily worried about the raw performance; clearly the HD 3000 has the capacity to deliver. Instead, driver maturity may turn out to be an obstacle. Firstly, one has to consider the fact that Intel traditionally has problems with GPU driver design (relative to their competitors). Secondly, even if at some point Intel manages to fix (some of) the rendering issues mentioned in this article or elsewhere, notebook producers still take their sweet time before supplying users with new driver versions.
In this context I am genuinely concerned about the HD 3000 goodness. The old GMA HD + Radeon 5470 combination still seems tempting. Strictly on the gaming side, I honestly prefer reliability with a few FPS missing over the aforementioned risks.
NestoJR - Tuesday, January 4, 2011 - link
So, when Apple starts putting these in MacBooks, I'd assume the battery life will easily eclipse 10 hours under light usage, maybe 6 hours under medium usage??? I'm no fanboy, but I'll be in line for that! My Dell XPS M1530's 9-cell battery just died; I can wait a few months =]
JarredWalton - Tuesday, January 4, 2011 - link
I'm definitely interested in seeing what Apple can do with Sandy Bridge! Of course, they might not use the quad-core chips in anything smaller than the MBP 17, if history holds true. And maybe the MBP 13 will finally make the jump to Arrandale? ;-)
heffeque - Wednesday, January 5, 2011 - link
Yeah... Saying that the nVidia 320M is consistently slower than the HD3000 when comparing a CPU from 2008 and a CPU from 2011...
Great job comparing GPUs! (sic)
A more intelligent thing to say would have been: a 2008 CPU (P8600) with an nVidia 320M is consistently slightly slower than a 2011 CPU (i7-2820QM) with HD3000, don't you think?
That would make more sense.
Wolfpup - Wednesday, January 5, 2011 - link
That's the only thing I care about with these - and as far as I'm aware, the jump isn't anything special. It's FAR from the "tock" it supposedly is, going by earlier Anandtech data. (In fact the "tick/tock" thing seems to have broken down after just one set of products...)
This sounds like it is a big advantage for me... but only because Intel refused to produce quad-core CPUs at 32nm, so these by default run quite a bit faster than the last-gen chips.
Otherwise it sounds like they're wasting 114 million transistors that I want spent on the CPU - whether it's more cache, more functional units, another core (if that's possible in 114 million transistors), etc.
I absolutely do NOT want Intel's garbage, incompatible graphics. I do NOT want the additional complexity, performance hit, and software complexity of Optimus or the like. I want a real GPU, functioning as a real GPU, with Intel's garbage completely shut off at all times.
I hope we'll see that in mid range and high end notebooks, or I'm going to be very disappointed.
seamusmc - Friday, January 7, 2011 - link
I'm a notebook noob; up till now I've avoided them as much as I could. I have evaluated them over the years and have a pretty good Dell Precision 4500 at work; however, I had to build a desktop because the laptop, as provided, just doesn't cut it. With an SSD and 8GB of RAM it would probably suffice, but I run a lot of virtual machines for testing.
Anyhoo, enough of my background: I am very interested in the Sandy Bridge line, specifically the retail chips, the 2720 and 2820.
However, all I'm seeing announced from the OEMs are 2630-based solutions. Are the OEMs going to have an option to upgrade to the retail chips? Are the 2720 and 2820 going to be available any time soon, or is it just the 2630 that will have broad availability?
Who will have the retail chips available?
GullLars - Saturday, January 8, 2011 - link
This review unit came with an Intel SSD, which probably made a huge impact on general usage, but can we expect SSD boot drives for most Sandy Bridge laptops?
If I were Intel, I'd make a branding program where Sandy Bridge + an Intel SSD (310, G2, or newer) gave a fancy sticker for marketers to drool over, guaranteeing smooth and snappy operation without hiccups from spinning-platter I/O.
IntoGraphics - Monday, January 17, 2011 - link
"We might get some of the above in OEM systems sent for review, and if so it will be interesting to see how much of an impact the trimmed clock speeds have on overall performance."Looking forward for this to happen. Very important to know for me. Because I will be using Adobe Illustrator CS4, Cinema 4D R12 Prime, and Unity 3D.
I hope that the performance impact between an i7-2720QM and a i7-2820QM, is as minimal as it was between the i7-740QM and i7-840QM.
It's going to be a toss up between the SB Dell XPS 17 and the SB HP Envy 17 for me, combined with a Dell or HP 30" monitor. Just too bad that both notebooks will not offer 1920x1200 resolution.
psiboy - Wednesday, January 19, 2011 - link
Your gaming benchmark is a joke! Anyone who has a Radeon 5650M in their laptop isn't going to set game settings to "Ultra Low"; a good mid-range setting would have been more realistic and probably still playable... but the Intel HD Graphics on Sandy Bridge would not have looked so good then.... "Lies, damned lies and statistics!" - all manipulated so the uneducated are taken in to think they can game on Intel IGPs....
BTW: Dirt 2 looks like crap on Ultra Low...
katleo123 - Tuesday, February 1, 2011 - link
It works on new motherboards based on Intel's forthcoming 6-series chipsets.
Visit http://www.techreign.com/2010/12/intels-sandy-brid...
welcomesorrow - Friday, June 10, 2011 - link
Hi, I would greatly appreciate your suggestions regarding the bottleneck of overclocked QSV.
I have a Core i5-2400S on Intel's DH67BL (H67) motherboard and have been using Media Espresso 6.5 to transcode TS files (MPEG-2) into H.264 with QSV. The DH67BL allows me to overclock the graphics core from its default 1.1GHz to 2GHz. I observe a linear shortening of transcoding time from 43 seconds/GB (1.1GHz) to 35 seconds/GB (1.6GHz), but beyond that there is no further improvement. I would thus expect transcoding in about 30 seconds/GB at 2GHz, but in reality it still takes 35 seconds/GB.
QSV encoding in Media Espresso 6.5 is already ultrafast, and at first I thought it might be hitting the I/O bandwidth of the HDD, but that was not the case, because an SSD or even a RAM disk did not improve the situation.
Any idea what the bottleneck of overclocked QSV is? My guess is that it has something to do with either Sandy Bridge's internal hardware (such as data transfer), Media Espresso's logic, or both.
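One rough way to reason about it (a sketch only, not an analysis of QSV internals, and the model is an assumption on my part): treat the transcode time as a part that scales with the graphics clock plus a fixed part that does not, t(f) = a + b/f, and fit that to the two measured points quoted above.

    #include <stdio.h>

    int main(void)
    {
        /* Measured points quoted above: 43 s/GB at 1.1GHz, 35 s/GB at 1.6GHz. */
        double f1 = 1.1, t1 = 43.0;
        double f2 = 1.6, t2 = 35.0;

        /* Fit t(f) = a + b/f, where 'a' is work that does not scale with the
           graphics clock (data transfer, CPU-side work, memory bandwidth). */
        double b = (t1 - t2) / (1.0 / f1 - 1.0 / f2);
        double a = t1 - b / f1;

        printf("clock-independent part a = %.1f s/GB, scaling part b = %.1f\n", a, b);
        printf("prediction at 2.0GHz: %.1f s/GB (measured: 35)\n", a + b / 2.0);
        return 0;
    }

Even this simple model predicts roughly 31-32 s/GB at 2GHz, yet 35 s/GB is observed, which points the same way as the guess above: past roughly 1.6GHz something other than the graphics clock (data transfer, memory bandwidth, or the application's own pipeline) appears to set the floor.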