Original Link: https://www.anandtech.com/show/5871/intel-core-i5-3470-review-hd-2500-graphics-tested
Intel Core i5 3470 Review: HD 2500 Graphics Tested
by Anand Lal Shimpi on May 31, 2012 12:00 AM EST
Posted in: CPUs, Intel, Ivy Bridge, GPUs
Intel's first 22nm CPU, codenamed Ivy Bridge, is off to an odd start. Intel unveiled many of the quad-core desktop and mobile parts last month, but only sampled a single chip to reviewers. Dual-core mobile parts are announced today, as are their ultra-low-voltage counterparts for use in Ultrabooks. One dual-core desktop part gets announced today as well, but the bulk of the dual-core lineup won't surface until later this year. Furthermore, Intel only revealed the die size and transistor count of a single configuration: a quad-core with GT2 graphics.
Compare this to the Sandy Bridge launch a year prior, where Intel sampled four different CPUs and gave us a detailed breakdown of die sizes and transistor counts for quad-core, dual-core and GT1/GT2 configurations. Why the change? Different camps within Intel management have different feelings about how much information should be shared. It's also true that at the highest levels there's a bit of paranoia about the threat ARM poses to Intel in the long run. Combine the two and you can see how some folks at Intel might feel it's better to be a bit more guarded. I don't agree, but this is the hand we've been dealt.
Intel also introduced a new part into the Ivy Bridge lineup while we weren't looking: the Core i5-3470. At the Ivy Bridge launch we were told about a Core i5-3450, a quad-core CPU clocked at 3.1GHz with Intel's HD 2500 graphics. The 3470 is nearly identical but runs 100MHz faster. We're often hard on AMD for introducing SKUs separated by only 100MHz and a handful of dollars, so it's worth pointing out that Intel is doing the exact same thing here. It's possible that 22nm yields are better than expected and the 3470 will simply take the place of the 3450 in short order. The two are priced identically, so I can see this happening.
Intel 2012 CPU Lineup (Standard Power)

| Processor | Core Clock | Cores / Threads | L3 Cache | Max Turbo | Intel HD Graphics | TDP | Price |
|---|---|---|---|---|---|---|---|
| Intel Core i7-3960X | 3.3GHz | 6 / 12 | 15MB | 3.9GHz | N/A | 130W | $999 |
| Intel Core i7-3930K | 3.2GHz | 6 / 12 | 12MB | 3.8GHz | N/A | 130W | $583 |
| Intel Core i7-3820 | 3.6GHz | 4 / 8 | 10MB | 3.9GHz | N/A | 130W | $294 |
| Intel Core i7-3770K | 3.5GHz | 4 / 8 | 8MB | 3.9GHz | 4000 | 77W | $332 |
| Intel Core i7-3770 | 3.4GHz | 4 / 8 | 8MB | 3.9GHz | 4000 | 77W | $294 |
| Intel Core i5-3570K | 3.4GHz | 4 / 4 | 6MB | 3.8GHz | 4000 | 77W | $225 |
| Intel Core i5-3550 | 3.3GHz | 4 / 4 | 6MB | 3.7GHz | 2500 | 77W | $205 |
| Intel Core i5-3470 | 3.2GHz | 4 / 4 | 6MB | 3.6GHz | 2500 | 77W | $184 |
| Intel Core i5-3450 | 3.1GHz | 4 / 4 | 6MB | 3.5GHz | 2500 | 77W | $184 |
| Intel Core i7-2700K | 3.5GHz | 4 / 8 | 8MB | 3.9GHz | 3000 | 95W | $332 |
| Intel Core i5-2550K | 3.4GHz | 4 / 4 | 6MB | 3.8GHz | 3000 | 95W | $225 |
| Intel Core i5-2500 | 3.3GHz | 4 / 4 | 6MB | 3.7GHz | 2000 | 95W | $205 |
| Intel Core i5-2400 | 3.1GHz | 4 / 4 | 6MB | 3.4GHz | 2000 | 95W | $195 |
| Intel Core i5-2320 | 3.0GHz | 4 / 4 | 6MB | 3.3GHz | 2000 | 95W | $177 |
The 3470 supports Intel's vPro, SIPP, VT-x, VT-d, AES-NI and Intel TXT, so you're getting a fairly full-featured SKU with this part. It isn't fully unlocked, however: the maximum overclock is only four bins (400MHz) above the per-core max turbo frequencies. The table below summarizes what you can get out of a 3470:
Intel Core i5-3470

| Number of Cores Active | 1C | 2C | 3C | 4C |
|---|---|---|---|---|
| Default Max Turbo | 3.6GHz | 3.6GHz | 3.5GHz | 3.4GHz |
| Max Overclock | 4.0GHz | 4.0GHz | 3.9GHz | 3.8GHz |
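If it helps to see the rule spelled out, here's a minimal sketch (not anything Intel publishes as code) of the limited-unlock arithmetic: each per-core turbo point can be raised by at most four 100MHz bins.

```python
# Minimal sketch of the limited-unlock rule for partially unlocked Ivy Bridge
# parts like the i5-3470: the ceiling is four 100MHz bins above each
# per-core default turbo point (values from the table above).
BIN_MHZ = 100       # one multiplier bin on the 100MHz base clock
EXTRA_BINS = 4      # allowance for partially unlocked SKUs

default_turbo_mhz = {1: 3600, 2: 3600, 3: 3500, 4: 3400}  # active cores -> MHz

max_overclock_mhz = {cores: mhz + EXTRA_BINS * BIN_MHZ
                     for cores, mhz in default_turbo_mhz.items()}

print(max_overclock_mhz)  # {1: 4000, 2: 4000, 3: 3900, 4: 3800}
```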
In practice I had no issues running at the max overclock, even without touching the voltage settings on my testbed's Intel DZ77GA-70K board.
It's really an effortless overclock, but you have to be okay with the knowledge that your chip could likely go even faster were it not for the artificial multiplier limitation. Performance and power consumption at the overclocked frequency are both reasonable:
Power Consumption Comparison

| Intel DZ77GA-70K | Idle | Load (x264 2nd pass) |
|---|---|---|
| Intel Core i7-3770K | 60.9W | 121.2W |
| Intel Core i5-3470 | 54.4W | 96.6W |
| Intel Core i5-3470 @ Max OC | 54.4W | 110.1W |
Power consumption doesn't go up by all that much because we aren't scaling the voltage up significantly to reach these higher frequencies. Performance isn't as good as a stock 3770K in this well-threaded test simply because the 3470 lacks Hyper-Threading support.
Overall we see a 10% increase in performance for a 13% increase in power consumption. Power-efficient frequency scaling is difficult to attain at higher frequencies: although I didn't increase the default voltage settings for the 3470, at 3.8GHz (the max 4C overclock) the chip selects much higher voltages than it would at its stock 3.4GHz turbo frequency.
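As a rough sanity check, here's the perf-per-watt arithmetic using the wall-power figures from the table above and the roughly 10% x264 speedup quoted here; the exact percentages shift by a point or so depending on rounding and run-to-run variance.

```python
# Back-of-the-envelope efficiency check for the 4C overclock, using the wall
# power figures from the table above and the ~10% x264 speedup quoted in the
# text (exact percentages vary slightly with rounding).
stock_power_w, oc_power_w = 96.6, 110.1   # load power, x264 2nd pass
perf_gain = 1.10                          # ~10% faster at the 3.8GHz overclock

power_increase = oc_power_w / stock_power_w - 1              # ~0.14 at the wall
perf_per_watt_change = perf_gain / (oc_power_w / stock_power_w) - 1

print(f"power: +{power_increase:.0%}, perf/W: {perf_per_watt_change:+.1%}")
# power: +14%, perf/W: -3.5%
```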
Intel's HD 2500 & Quick Sync Performance
What makes the 3470 particularly interesting to look at is the fact that it features Intel's HD 2500 processor graphics. The main difference between the HD 2500 and HD 4000 is the number of execution units (EUs) on the die:
Intel Processor Graphics Comparison

|  | Intel HD 2500 | Intel HD 4000 |
|---|---|---|
| EUs | 6 | 16 |
| Base Clock | 650MHz | 650MHz |
| Max Turbo | 1150MHz | 1150MHz |
At 6 EUs, Intel's HD 2500 has the same number of compute resources as the previous-generation HD 2000. In fact, Intel claims performance should be around 10-20% faster than the HD 2000 in 3D games. Given that Intel's HD 4000 is only getting close to the minimum level of 3D performance we'd like to see from Intel, chances are the 2500 will not impress. We'll get to quantifying that shortly, but the good news is that Quick Sync performance is retained.
The HD 2500 does a little better than our HD 4000 here, but that's just normal run-to-run variance. Quick Sync does rely heavily on the EU array for transcode work, but it looks like the workload itself isn't heavy enough to distinguish between the 6 EU HD 2500 and the 16 EU HD 4000. If your only need for Intel's processor graphics is transcoding, the HD 2500 appears indistinguishable from the HD 4000.
The bad news is I can't say the same about its 3D graphics performance.
Crysis: Warhead
Our first graphics test is Crysis: Warhead, which in spite of its relatively high system requirements is the oldest game in our test suite. Crysis was the first game to really make use of DX10, and set a very high bar for modern games that still hasn't been completely cleared. And while its age means it's not heavily played these days, it's a great reference for how far GPU performance has come since 2008. For an iGPU to even run Crysis at a playable framerate is a significant accomplishment, and even more so if it can do so at better than performance (low) quality settings.
While Crysis on the HD 4000 was downright impressive, the HD 2500 is significantly slower.
Metro 2033
Our next graphics test is Metro 2033, another graphically challenging game. Since IVB is the first Intel GPU to feature DX11 capabilities, this is the first time an Intel GPU has been able to run Metro in DX11 mode. Like Crysis this is a game that is traditionally unplayable on Intel iGPUs, even in DX9 mode.
DiRT 3
DiRT 3 is our next DX11 game. Developer Codemasters Southam added DX11 functionality to their EGO 2.0 engine back in 2009 with DiRT 2, and while it doesn't make extensive use of DX11 it does use it to good effect in order to apply tessellation to certain environmental models along with utilizing a better ambient occlusion lighting model. As a result DX11 functionality is very cheap from a performance standpoint, meaning it doesn't require a GPU that excels at DX11 feature performance.
Portal 2
Portal 2 continues to be the latest and greatest Source engine game to come out of Valve's offices. While Source continues to be a DX9 engine, and hence is designed to allow games to be playable on a wide range of hardware, Valve has continued to upgrade it over the years to improve its quality, and combined with their choice of style you’d have a hard time telling it’s over 7 years old at this point. From a rendering standpoint Portal 2 isn't particularly geometry heavy, but it does make plenty of use of shaders.
It's worth noting however that this is the one game where we encountered something that may be a rendering error with Ivy Bridge. Based on our image quality screenshots Ivy Bridge renders a distinctly "busier" image than Llano or NVIDIA's GPUs. It's not clear whether this is causing an increased workload on Ivy Bridge, but it's worth considering.
Ivy Bridge's processor graphics struggles with Portal 2. A move to fewer EUs doesn't help things at all.
Battlefield 3
Its popularity aside, Battlefield 3 may be the most interesting game in our benchmark suite for a single reason: it was the first AAA title to require DX10 or later. Consequently it makes no attempt to shy away from pushing the graphics envelope, pushing GPUs to their limits in the process. Even at low settings Battlefield 3 is a handful, and being able to run it on an iGPU would no doubt make quite a few traveling gamers happy.
The HD 4000 delivered a nearly acceptable experience in single player Battlefield 3, but the HD 2500 falls well below that. At just under 20 fps, this isn't very good performance. It's clear the HD 2500 is not made for modern day gaming, never mind multiplayer Battlefield 3.
Starcraft 2
Our next game is Starcraft II, Blizzard’s 2010 RTS megahit. Starcraft II is a DX9 game that is designed to run on a wide range of hardware, and given the growth in GPU performance over the years it's often CPU limited before it's GPU limited on higher-end cards.
Starcraft 2 performance is borderline at best on the HD 2500. At low enough settings the HD 2500 can deliver a tolerable experience, but it's simply not fast enough for anything more.
Skyrim
Bethesda's epic sword & magic game The Elder Scrolls V: Skyrim is our RPG of choice for benchmarking. It's altogether a good CPU benchmark thanks to its complex scripting and AI, but it also can end up pushing a large number of fairly complex models and effects at once. This is a DX9 game so it isn't utilizing any of IVB's new DX11 functionality, but it can still be a demanding game.
At lower quality settings, Intel's HD 4000 definitely passed the threshold for playable in Skyrim on average. The HD 2500 is definitely not in the same league however. At 21.5 fps performance is marginal at best, and when you crank up the resolution to 1680 x 1050 the HD 2500 simply falls apart.
Minecraft
Switching gears for the moment we have Minecraft, our OpenGL title. It's no secret that OpenGL usage on the PC has fallen by the wayside in recent years, and as far as major games go, Minecraft is one of but a few recently released major titles using OpenGL. Minecraft is incredibly simple (it doesn't even use pixel shaders, let alone more advanced hardware), but this doesn't mean it's easy to render. Its use of massive amounts of blocks (and the overdraw that creates) means you need solid hardware and an efficient OpenGL implementation if you want to hit playable framerates with a far render distance. Consequently, as the most successful OpenGL game in quite some number of years (at over 5.5 million copies sold), it's a good reminder for GPU manufacturers that OpenGL is not to be ignored.
Our test here is pretty simple: we're looking at a lush forest after the world finishes loading. Ivy Bridge's processor graphics maintains a significant performance advantage over the Sandy Bridge generation, making this one of the only situations where the HD 2500 is able to significantly outperform Intel's HD 3000. Minecraft is definitely the exception, however, as whatever advantage we see here is purely architectural.
Civilization V
Our final game, Civilization V, gives us an interesting look at things other RTSes cannot match, with a much weaker focus on shading the game world and a much greater focus on creating the geometry needed to bring such a world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists for reducing CPU overhead, and compute shaders for on-the-fly texture decompression. There are other games that are more stressful overall, but this is likely the game that stresses DX11 performance the most.
Civilization V was an extremely weak showing for the HD 4000 when we looked at it last month, and it's even worse on the HD 2500. Civ players need not bother with Intel's processor graphics; go with AMD's APUs or a discrete GPU.
HD 2500: Compute & Synthetics
While compute functionality could technically be shoehorned into DirectX 10 GPUs such as Sandy Bridge through DirectCompute 4.x, neither Intel nor AMD's DX10 GPUs were really meant for the task, and even NVIDIA's DX10 GPUs paled in comparison to what they've achieved with their DX11 generation GPUs. As a result Ivy Bridge is the first true compute capable GPU from Intel. This marks an interesting step in the evolution of Intel's GPUs, as originally projects such as Larrabee Prime were supposed to help Intel bring together CPU and GPU computing by creating an x86 based GPU. With Larrabee Prime canceled however, that task falls to the latest rendition of Intel's GPU architecture.
With Ivy Bridge Intel supports both DirectCompute 5, which is dictated by DX11, and the more general, compute-focused OpenCL 1.1. Intel has backed OpenCL development for some time and currently offers an OpenCL 1.1 runtime that runs across multiple generations of CPUs, and now Ivy Bridge GPUs.
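For readers who want to verify what their own system exposes, a short sketch like the following will enumerate the available OpenCL platforms and devices; it uses the third-party pyopencl package, which is our choice for illustration and not part of Intel's tools or this review's test suite. On an Ivy Bridge machine with Intel's runtime installed, the HD Graphics GPU should report an OpenCL 1.1 device version.

```python
# Enumerate OpenCL platforms and devices visible to the system.
# Requires the third-party pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        # Device type (CPU/GPU), marketing name, and supported OpenCL version
        print(f"  {cl.device_type.to_string(device.type):<4} "
              f"{device.name} - {device.version}")
```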
Our first compute benchmark comes from Civilization V, which uses DirectCompute 5 to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. And while games that use GPU compute functionality for texture decompression are still rare, it's becoming increasingly common as it's a practical way to pack textures in the most suitable manner for shipping rather than being limited to DX texture compression.
These compute results are mostly academic as I don't expect anyone to really rely on the HD 2500 for a lot of GPU compute work. With under 40% of the EUs of the HD 4000, we get under 30% of the performance from the HD 2500.
Our second compute test is the Fluid Simulation Sample in the DirectX 11 SDK. This program simulates the motion and interactions of a 16k-particle fluid using a compute shader, with a choice of several different algorithms. In this case we're using an O(n^2) nearest neighbor method that is optimized by caching data in shared memory.
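To illustrate why the O(n^2) variant is so demanding, here's a deliberately naive CPU-side sketch of the same brute-force neighbor search; it is not the SDK sample's shader code, just the shape of the algorithm. The actual compute shader gets its speedup by staging tiles of particle data in groupshared memory so each thread group only reads them from memory once.

```python
# Brute-force O(n^2) neighbor search: every particle checks every other
# particle, so the work grows quadratically with particle count.
import random

def nearest_neighbors(positions, radius):
    """Return, for each particle, the indices of particles within 'radius'."""
    r2 = radius * radius
    neighbors = []
    for i, (xi, yi) in enumerate(positions):
        close = [j for j, (xj, yj) in enumerate(positions)
                 if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= r2]
        neighbors.append(close)
    return neighbors

# Small particle count for the CPU sketch; the SDK sample runs 16k on the GPU.
particles = [(random.random(), random.random()) for _ in range(1000)]
print(sum(len(n) for n in nearest_neighbors(particles, 0.05)))
```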
Thanks to Ivy Bridge's large shared L3 cache, Intel's HD 4000 did exceptionally well here. With significantly fewer EUs, Intel's HD 2500 does much worse by comparison.
Our last compute test and first OpenCL benchmark, SmallLuxGPU, is the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.
Intel's HD 4000 does well here for processor graphics, delivering over 70% of the performance of NVIDIA's GeForce GTX 285. The HD 2500 takes a big step backwards though, with less than half the performance of the HD 4000.
Synthetic Performance
Moving on, we'll take a few moments to look at synthetic performance. Synthetic performance is a poor tool to rank GPUs—what really matters is the games—but by breaking down workloads into discrete tasks it can sometimes tell us things that we don't see in games.
Our first synthetic test is 3DMark Vantage’s pixel fill test. Typically this test is memory bandwidth bound as the nature of the test has the ROPs pushing as many pixels as possible with as little overhead as possible, which in turn shifts the bottleneck to memory bandwidth so long as there's enough ROP throughput in the first place.
It's interesting to note here that as DDR3 clockspeeds have crept up over time, IVB now has as much memory bandwidth as most entry-to-mainstream video cards, where 128-bit DDR3 is equally common. On a historical basis, that's about half the bandwidth of powerhouse video cards of yesteryear, such as the 256-bit GDDR3-based GeForce 8800 GT.
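The underlying arithmetic is simple enough to spell out. Assuming dual-channel DDR3-1600 on the Ivy Bridge testbed and the GeForce 8800 GT's reference 900MHz (1800MT/s effective) 256-bit GDDR3, neither of which is stated explicitly here, peak theoretical bandwidth works out roughly as follows.

```python
# Peak theoretical bandwidth = (bus width in bytes) x (transfer rate).
# Assumed configurations: dual-channel DDR3-1600 (2 x 64-bit) for Ivy Bridge
# and the 8800 GT's reference 256-bit GDDR3 at 1800MT/s effective.
def bandwidth_gbps(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9   # GB/s

ivb_dual_channel_ddr3 = bandwidth_gbps(128, 1600e6)   # 25.6 GB/s
geforce_8800gt_gddr3  = bandwidth_gbps(256, 1800e6)   # 57.6 GB/s

print(ivb_dual_channel_ddr3, geforce_8800gt_gddr3)    # roughly half
```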
Moving on, our second synthetic test is 3DMark Vantage’s texture fill test, which provides a simple FP16 texture throughput test. FP16 textures are still fairly rare, but it's a good look at worst case scenario texturing performance.
Our final synthetic test is the set of settings we use with Microsoft’s Detail Tessellation sample program out of the DX11 SDK. Since IVB is the first Intel iGPU with tessellation capabilities, it will be interesting to see how well IVB does here, as IVB is going to be the de facto baseline for DX11+ games in the future. Ideally we want to have enough tessellation performance here so that tessellation can be used on a global level, allowing developers to efficiently simulate their worlds with fewer polygons while still using many polygons on the final render.
The results here are as expected. With far fewer EUs, the HD 2500 falls behind even some of the cheapest discrete GPUs.
GPU Power Consumption
As you'd expect, power consumption with the HD 2500 is tangibly lower than HD 4000 equipped parts:
GPU Power Consumption Comparison under Load (Metro 2033)

|  | Intel HD 2500 (i5-3470) | Intel HD 4000 (i7-3770K) |
|---|---|---|
| Intel DZ77GA-70K | 76.2W | 98.9W |
Running our Metro 2033 test, the HD 4000 based Core i7 drew nearly 30% more power at the wall compared to the HD 2500.
General Performance
The general performance of the Core i5-3470 is nothing too unusual. We know from our original Ivy Bridge review that the advantage over Sandy Bridge is typically in the single digits. In other words, if Sandy Bridge was a good upgrade for your current system, Ivy Bridge won't change things. Idle power doesn't really improve over Sandy Bridge, but load power is a bit better.
Compared to the 3770K, you will lose out on heavily threaded performance due to the lack of Hyper Threading. But for many client workloads, including gaming, you can expect the 3470 to perform quite similarly to the 3770K.
Power Consumption Comparison

| Intel DZ77GA-70K | Idle | Load (x264 2nd pass) |
|---|---|---|
| Intel Core i7-3770K | 60.9W | 121.2W |
| Intel Core i5-3470 | 54.4W | 96.6W |
| Intel Core i5-3470 @ 4GHz | 54.4W | 110.1W |
Final Words
Intel's Core i5-3470 is a good base for a system equipped with a discrete GPU. You don't get the heavily threaded performance of the quad-core, eight-thread Core i7, but you're also saving over $100. For a gaming machine or anything else that's not going to be doing a lot of thread-heavy work (non-QuickSync video transcoding, offline 3D rendering, etc.), the 3470 is definitely good enough. Your overclocking options are significantly limited as the 3470 is only a partially unlocked CPU, but you can pretty much count on getting an extra 400MHz across the board, regardless of the number of active cores.
Intel's HD 2500, however, is less exciting. This is clearly the processor graphics option for users who don't care about processor graphics performance. The 2500's performance is tangibly worse than last year's HD 3000 (which makes sense given its 6 EU configuration), and it isn't good enough to deliver playable framerates in any of the games we tested. The good news is that Quick Sync performance remains unaffected, making the HD 2500 just as good as the HD 4000 for video transcoding. In short, if you're going to rely on processor graphics for gaming, you need the HD 4000 at a minimum. Otherwise, the HD 2500 is just fine.