Original Link: https://www.anandtech.com/show/6124/the-intel-ssd-910-review
The Intel SSD 910 Review
by Anand Lal Shimpi on August 9, 2012 1:00 PM EST - Posted in
- Storage
- SSDs
- Intel
- Intel SSD 910
The increase in compute density in servers over the past several years has significantly impacted form factors in the enterprise. Whereas you used to have to move to a 4U or 5U chassis if you wanted an 8-core machine, these days you can get there with just a single socket in a 1U or 2U chassis (or smaller if you go the blade route). The transition from 3.5" to 2.5" hard drives helped maintain IO performance as server chassis shrunk, but even then there's a limit to how many drives you can fit into a single enclosure. In network architectures that don't use a beefy SAN or still demand high-speed local storage, PCI Express SSDs are very attractive. As SSDs just need lots of PCB real estate, a 2.5" enclosure can be quite limiting. A PCIe card on the other hand can accommodate a good number of controllers, DRAM and NAND devices. Furthermore, unlike a single 2.5" SAS/SATA SSD, PCIe offers enough bandwidth headroom to scale performance with capacity. Instead of just adding more NAND to reach higher capacities you can add more controllers with the NAND, effectively increasing performance as you add capacity.
The first widely available PCIe SSDs implemented this simple scaling approach. Take the hardware that you'd find on a 2.5" SSD, duplicate it multiple times and put it behind a RAID controller all on a PCIe card. The end user would see the, admittedly rough, illusion of a single SSD without much additional development work on the part of the SSD vendor. There are no new controllers to build and firmwares aren't substantially different from the standalone 2.5" drives. PCIe SSDs with on-board RAID became a quick way of getting your consumer SSDs into a soon to be huge enterprise SSD market. Eventually we'll see native PCIe SSD controllers that won't need the pesky SATA/SAS to PCIe bridge to be present on the card, and there's even a spec (NVMe) to help move things along. For now we're stuck with a bunch of controllers on a PCIe card.
It took surprisingly long for Intel to dip its toe in the PCIe SSD waters. In fact, Intel's SSD behavior post-2008 has been a bit odd. To date Intel still hasn't released a 6Gbps SATA controller based on its own IP. Despite the lack of any modern Intel controllers, its SSDs based on third party controllers with Intel firmware continue to be some of the most dependable and compatible on the market today. Intel hasn't been the fastest for quite a while, but it's still among the best choices. It shouldn't be a surprise that the market eagerly anticipated Intel's SSD move into PCI Express.
When Intel first announced the 910, its first PCIe SSD, some viewed it as a disappointment. After all, Intel's SSD 910 isn't bootable and is just a collection of Intel/Hitachi SAS SSD controllers behind an LSI SAS to PCIe bridge, just like most other PCIe SSDs on the market today. To make matters worse, it doesn't even have hardware RAID support - the 910 presents itself as multiple independent SSDs, so you have to rely on software RAID if you want a single drive volume.
For its target market, however, neither of these omissions is a deal breaker. It's quite common for servers to have a dedicated boot drive; physically decoupling data and boot drives remains good practice in a server. For a virtualized environment, having a single PCIe SSD act as multiple drives can actually be a convenience. And if you're only running a single environment on your box, the lower software RAID levels (0/1) perform just as well as HBA RAID and remove the added point of hardware failure (and cost).
The 910 could certainly be more flexible if it added these two missing features, but I don't believe their absence is a huge issue for most who would be interested in the drive.
The Controller
A few years back Intel announced a partnership with Hitachi to build SAS enterprise SSDs. Intel would contribute its own IP on the controller and firmware side, while Hitachi would help with the SAS interface and build/sell the drives themselves. The resulting controller looks a lot like Intel's X25-M G2/310/320 controller, but with some changes. The big architectural change is obviously support for the SAS interface. Intel also moved from a single core design to a dual-core architecture with the Hitachi controller. One core is responsible for all host side transactions while the other manages the NAND/FTL side of the equation. The Intel/Hitachi controller is still a 10-channel design like its consumer counterpart. Like the earlier Intel controllers, the SAS version does not support hardware accelerated encryption.
Hitachi uses this controller on its Ultrastar SSD400M, but it's also found on the Intel SSD 910. Each controller manages a 200GB partition (more on the actual amount of NAND later). In other words, the 400GB 910 features two controllers while the 800GB 910 has four. As a result there's roughly a doubling of performance between the two drives.
As I mentioned earlier, all of the controllers on the 910 are behind a single LSI SAS to PCIe bridge with drivers built into all modern versions of Windows. Linux and VMware support is also guaranteed. By choosing a widely used SAS to PCIe bridge, Intel can deliver the illusion of a plug and play SSD even though it's on a PCIe card with a third party SAS controller.
Price and Specs
One benefit of Intel's relatively simple board design is the 910's remarkably competitive cost structure:
Intel SSD 910 Pricing

| Drive | Capacity | Price | $ per GB |
|---|---|---|---|
| Intel SSD 710 | 200GB | $790 | $3.950 |
| Intel SSD 910 | 400GB | $2000 | $5.000 |
| Intel SSD 910 | 800GB | $4000 | $5.000 |
| OCZ Z-Drive R4 CM84 | 600GB | $3500 | $5.833 |
While you do pay a premium over Intel's SSD 710, the 910 is actually cheaper than the SandForce based OCZ Z-Drive R4. At $5/GB in etail, Intel's 910 is fairly reasonably priced for an enterprise drive - particularly when you take into account the amount of NAND you're getting on board (1792GB for the 800GB drive).
Intel Enterprise SSD Comparison

| | Intel SSD 910 | Intel SSD 710 | Intel X25-E | Intel SSD 320 |
|---|---|---|---|---|
| Interface | PCIe 2.0 x8 | SATA 3Gbps | SATA 3Gbps | SATA 3Gbps |
| Capacities | 400 / 800GB | 100 / 200 / 300GB | 32 / 64GB | 80 / 120 / 160 / 300 / 600GB |
| NAND | 25nm MLC-HET | 25nm MLC-HET | 50nm SLC | 25nm MLC |
| Max Sequential Performance (Reads/Writes) | 2000 / 1000 MBps | 270 / 210 MBps | 250 / 170 MBps | 270 / 220 MBps |
| Max Random Performance (Reads/Writes) | 180K / 75K IOPS | 38.5K / 2.7K IOPS | 35K / 3.3K IOPS | 39.5K / 600 IOPS |
| Endurance (Max Data Written) | 7 - 14 PB | 500TB - 1.5PB | 1 - 2PB | 5 - 60TB |
| Encryption | - | AES-128 | - | AES-128 |
By default the 910 is rated for a 25W max TDP, regardless of capacity. At 25W the 910 requires 200 linear feet per minute of cooling to keep its temperatures below 55C. The 800GB drive has the ability to run in a special performance mode that will cause the drive to dissipate up to 28/38W (average/peak). In its performance mode you get increased sequential write performance, but the drive needs added cooling (300 LFM) and obviously draws more power. The 400GB drive effectively always runs in its performance mode but power consumption and cooling requirements are kept at 25W and 200 LFM, respectively.
The Drive and The Teardown
The 910 starts as a half height PCIe 2.0 x8 card, although a full height bracket comes in the box as well:
Intel sent us the 800GB 910, which features a total of three PCB layers sandwiched together; the 400GB model only has two boards. The top one/two PCBs (400GB/800GB) are home exclusively to NAND packages, while the final PCB is where all of the controllers and DRAM reside. Each NAND PCB is home to a total of 28 NAND packages, for a total of 56 NAND devices on an 800GB Intel SSD 910. Here's a shot of the back of the topmost PCB:
Each PCB has 17 NAND packages on the front and 11 on the back. If you look closely (and remember Intel's NAND nomenclature) you'll realize that these are quad-die 25nm MLC-HET NAND packages with a total capacity of 32GB per package. Do the math and that works out to be 1792GB of NAND on a 800GB drive (I originally underestimated how much NAND Intel was putting on these things). Intel uses copious amounts of NAND as spare area in all of its enterprise class SSDs (the 2.5" 200GB Intel SSD 710 used 320GB of NAND). Having tons of spare area helps ensure write amplification remains low and keeps endurance high, allowing the 910 to hit Intel's aggressive 7 - 14 Petabyte endurance target.
Intel SSD 910 Endurance Ratings

| | 400GB | 800GB |
|---|---|---|
| 4KB Random Write | Up to 5PB | Up to 7PB |
| 8KB Random Write | Up to 10PB | Up to 14PB |
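If you want to sanity check the spare area math yourself, the quick sketch below just redoes the arithmetic from the package counts in the teardown; all of the inputs come straight from the figures above.

```python
# Quick sanity check of the 910's NAND math, using the package counts from the teardown.
PACKAGE_CAPACITY_GB = 32   # quad-die 25nm MLC-HET NAND, 32GB per package
PACKAGES_PER_PCB = 28      # 17 packages on the front of each NAND PCB + 11 on the back
NAND_PCBS_800GB = 2        # the 800GB card stacks two NAND-only PCBs on the controller PCB

total_nand_gb = PACKAGE_CAPACITY_GB * PACKAGES_PER_PCB * NAND_PCBS_800GB
user_capacity_gb = 800
spare_gb = total_nand_gb - user_capacity_gb

print(f"Total NAND on the 800GB 910: {total_nand_gb}GB")                         # 1792GB
print(f"Spare area: {spare_gb}GB ({spare_gb / total_nand_gb:.0%} of all NAND)")  # 992GB, ~55%
```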
Remove the topmost PCB on the 800GB drive and you'll see the middle layer with another 28 NAND packages totalling 896GB. The NAND is organized in the same 17 + 11 configuration as the top PCB:
This next shot is the middle PCB again, just removed from the stack completely:
and here's the back of the second PCB:
The final PCB in the stack is home to the four Intel/Hitachi controllers and half of the 2GB of DDR2-800:
Under the heatsink is LSI's 2008 SAS to PCIe bridge, responsible for connecting all of the Intel/Hitachi controllers to the outside world. Finally we have the back of the Intel SSD 910, which is home to the other half of the 2GB of DDR2-800 on the card:
The 910 is a very compact design and is well assembled. The whole thing, even in its half height form factor, only occupies a single PCIe slot. Cooling the card isn't a problem for a conventional server; Intel claims you need 200 LFM of air to keep the 910 within comfortable temperatures.
Intel's SSD Data Center Tool
Rather than Intel's wonderful desktop SSD Toolbox, the 910 gets a more powerful but less user friendly option. It's called the Intel SSD Data Center Tool (isdct for short) and it's driven entirely by the command line. The tool is available for both Windows and Linux. You can use the isdct to secure erase the 910:
Each controller/partition must be secure erased independently, or in tandem by running four copies of isdct:
Since each partition is effectively an independent drive, you can run multiple isdct commands and just target a different drive with each instance. The isdct is also how you monitor temperatures on the individual drives; once again you have to execute a command per drive to get the temperature of that drive. Under Windows you need to execute the following command:
isdct.exe -log 0x2F -drive <drivenum> -device <devicenum> -verbose
Then look at the value of byte 10, which will tell you the current temperature...in hex. Convert back to decimal and you'll have the temperature of the specified NAND partition in degrees C. I have to admit I found all of this a bit endearing (I never get to read temperatures in hex), but your system administrator may be less impressed. Thankfully it shouldn't be all that difficult to script the isdct to quickly give you the data you want, even at regular intervals. The tool exposes quite a bit and since it's entirely command line driven it's pretty easy to automate, but I can't help feel like Intel should at least do some of this for you. I appreciate the flexibility, but others may want something a bit simpler.
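To give you an idea of the sort of scripting I mean, here's a minimal Python sketch that runs the log command above against each NAND partition and converts the hex value to degrees C. Note that the parse_temperature function is an assumption on my part - how you actually pull byte 10 out of isdct's verbose dump depends on how the tool formats its output, so treat it as a placeholder you'd adapt to what you see on screen.

```python
import re
import subprocess

def read_log_page(drive: int, device: int = 0) -> str:
    """Run the isdct log command from above against a single NAND partition."""
    cmd = ["isdct.exe", "-log", "0x2F", "-drive", str(drive),
           "-device", str(device), "-verbose"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def parse_temperature(log_output: str) -> int:
    """Placeholder parser: assumes the verbose dump prints the log page as a series
    of two-character hex bytes and that byte 10 holds the temperature. Adapt the
    extraction to isdct's real output format."""
    hex_bytes = re.findall(r"\b[0-9A-Fa-f]{2}\b", log_output)
    return int(hex_bytes[10], 16)   # hex -> degrees C

if __name__ == "__main__":
    # One invocation per partition, e.g. the four 200GB partitions of an 800GB 910.
    for drive in range(4):
        temp_c = parse_temperature(read_log_page(drive))
        print(f"Partition {drive}: {temp_c}C")
```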
Other than obsessively monitoring temperatures (I never saw a temperature higher than 0x1F, or 31C; the operational limit is 0x37, or 55C) and secure erasing the drives, I used the isdct to switch between performance modes on the 910:
Remember the 400GB drive always runs in this max performance mode, but the 800GB drive must be forced into it. You get a warning about the increased cooling and power requirements (up from 25W max TDP and 200 LFM cooling requirement) but otherwise the process is painless.
The Test
For our tests we looked at the combined performance of all two/four NAND partitions on the Intel SSD 910. We created a two/four drive RAID-0 array from all of the active controllers on the card to show aggregate performance. If you're going to dedicate one partition to each VM in a virtualized environment, you can expect per-partition performance to be roughly half (400GB) or a quarter (800GB) of what you see here.
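Our testbed runs Windows, but for readers planning a Linux deployment the same aggregation is straightforward with mdadm. The sketch below is only an illustration of the software RAID-0 approach; the device names are hypothetical, so check how the 910's partitions actually enumerate on your system before trying anything like it.

```python
import subprocess

# Hypothetical device names - the two or four 200GB partitions of a 910 show up as
# ordinary SCSI block devices, so confirm the names (e.g. with lsblk) first.
PARTITIONS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# Stripe all of the partitions into a single RAID-0 volume, mirroring the aggregate
# configuration we benchmark in this review. This is destructive to existing data.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",
     f"--raid-devices={len(PARTITIONS)}",
     *PARTITIONS],
    check=True,
)
```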
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) |
| Motherboard | Intel H67 Motherboard |
| Chipset | Intel H67 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | Qimonda DDR3-1333 4 x 1GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews. For our enterprise suite we make a few changes to our usual tests.
Our first test writes 4KB in a completely random pattern over all LBAs on the drive (compared to an 8GB address space in our desktop reviews). We perform 32 concurrent IOs (compared to 3) and run the test until the drive being tested reaches its steady state. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
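These numbers come from Iometer, but if you'd like to approximate the workload on your own hardware, a rough stand-in is an fio job like the one sketched below: 4KB random writes at a queue depth of 32 across the whole device. The target device and runtime are assumptions; this is not our exact Iometer configuration.

```python
import subprocess

# Rough fio stand-in for our enterprise 4KB random write test:
# 4KB random writes, queue depth 32, spanning all LBAs on the target device.
# /dev/sdb is a placeholder - point this at the partition under test, and be
# aware that it will destroy whatever data is on it.
TARGET = "/dev/sdb"

subprocess.run([
    "fio",
    "--name=4k-randwrite",
    f"--filename={TARGET}",
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=32",
    "--ioengine=libaio",
    "--direct=1",
    "--norandommap",        # keep hitting random LBAs rather than tracking a single pass
    "--time_based",
    "--runtime=1800",       # run long enough for the drive to reach steady state
], check=True)
```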
Intel was one of the first mainstream SSD vendors to prioritize random read performance, and that focus applies just as much to its enterprise offerings. The 800GB Intel SSD 910 delivers gobs of performance in our random read test. The 400GB version is also good, but it's interesting to note that Toshiba's 400GB 2.5" SAS drive does just as well here.
Random write performance is good, although still far away from what the SandForce based solutions from OCZ are able to deliver with compressible data. Throw anything other than pure text into your database however and Intel's drives become the fastest offerings once again.
Sequential Read/Write Speed
Similar to our other Enterprise Iometer tests, queue depths are much higher in our sequential benchmarks. To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 32. The results reported are in average MB/s over the entire test length.
Sequential read performance of the 800GB 910 is nearly 2GB/s and competitive with OCZ's 600GB Z-Drive R4. This is really where PCIe based SSDs shine; there's simply no way to push this much bandwidth over a single SATA/SAS port. The 400GB model cuts performance in half, but we're still talking about nearly 1GB/s. Of the single drives, Toshiba's 400GB SAS drive does the best at 521MB/s. Micron's P400e is a close second among 2.5" drives.
Here we finally see a difference between running the 800GB 910 in standard and high performance modes. In its max performance state the 800GB 910 is good for 1.5GB/s. OCZ's Z-Drive R4 is still a bit quicker with compressible data, but if you throw any incompressible (or encrypted) data at that drive its performance is cut in half. With the standard TDP cap in place, we can still write sequentially to the 910 at 1GB/s.
Enterprise Storage Bench - Oracle Swingbench
We begin with a popular benchmark from our server reviews: the Oracle Swingbench. This is a pretty typical OLTP workload aimed at servers handling a light to medium load of 100 - 150 concurrent users. The database size is fairly small at 10GB; the workload itself, however, is absolutely brutal.
Swingbench consists of over 1.28 million read IOs and 3.55 million writes. The read/write ratio in GB is nearly 1:1 (the reads are larger than the writes on average). Parallelism in this workload comes through aggregating IOs, as 88% of the operations in this benchmark are 8KB or smaller. This test is actually something we use in our CPU reviews, so its queue depth averages only 1.33. We will be following up with a version that features a much higher queue depth in the future.
The 910 absolutely suffers through our Swingbench test, performing worse than a single Intel SSD 710. The explanation is pretty simple: small file, low queue depth IO is very slow on the 910. Running ATTO on a single 910 partition tells us everything we need to know:
Look at the small file write performance here. At 0.5KB a single 910 partition can only write at 413KB/s. Performance doesn't actually scale up until we hit 4KB transfers on the write side. I suspect this has a lot to do with why we're seeing poor performance here.
Enterprise Storage Bench - Microsoft SQL UpdateDailyStats
Our next two tests are taken from our own internal infrastructure. We do a lot of statistics tracking at AnandTech - we record traffic data to all articles as well as aggregate traffic for the entire site (including forums) on a daily basis. We also keep track of a running total of traffic for the month. Our first benchmark is a trace of the MS SQL process that does all of the daily and monthly stats processing for the site. We run this process once a day as it puts a fairly high load on our DB server. Then again, we don't have a beefy SSD array in there yet :)
The UpdateDailyStats procedure is mostly reads (3:1 ratio of GB reads to writes) with 431K read operations and 179K write ops. Average queue depth is 4.2 and only 34% of all IOs are issued at a queue depth of 1. The transfer size breakdown is as follows:
AnandTech Enterprise Storage Bench MS SQL UpdateDailyStats IO Breakdown

| IO Size | % of Total |
|---|---|
| 8KB | 21% |
| 64KB | 35% |
| 128KB | 35% |
Our SQL benchmarks are far more compatible with the 910 in terms of IO size, and as a result Intel posts the absolute best scores we've seen yet here. The 400GB drive cuts performance in half compared to the 800GB model, but you're still looking at 3x the performance of a 200GB Intel SSD 710. Toshiba's 400GB drive does very well, but Intel's SSD 520 ends up being the fastest 2.5" drive here.
Enterprise Storage Bench - Microsoft SQL WeeklyMaintenance
Our final enterprise storage bench test once again comes from our own internal databases. We're looking at the stats DB again; however, this time we're running a trace of our Weekly Maintenance procedure. This procedure runs a consistency check on the 30GB database followed by a rebuild index on all tables to eliminate fragmentation. As its name implies, we run this procedure weekly against our stats DB.
The read:write ratio here remains around 3:1 but we're dealing with far more operations: approximately 1.8M reads and 1M writes. Average queue depth is up to 5.43.
Once again we see great performance out of the 910 here. The 800GB 910 is significantly faster than the SandForce based drives from OCZ, but at 400GB performance is cut in half once again. In the 2.5" form factor Intel's SSD 520 is in the lead, followed by Toshiba's 400GB SAS drive.
Final Words
From a performance perspective, the Intel SSD 910 is an absolute beast. If you take into account encrypted or otherwise incompressible data performance, the 800GB 910 is easily the fastest SSD we've ever tested. The performance loss for the 400GB drive makes it a bit more normal, but it's still among the best. The only thing to be concerned about with the 910 is its poor low queue depth, small file write performance. If your workload is dominated by 2KB (or smaller) writes then the 910 isn't going to be a great performer and you'd be much better off with a standalone 2.5" drive. For all other workloads however, the 910 is great.
Pricing is also extremely competitive with other high-end enterprise PCIe offerings. Intel comes in at $5/GB for its top of the line enterprise SSD, whereas the 710 was introduced at over $6 per GB. If you really want to get nostalgic, the old X25-E launched at over $15/GB. The cost per GB is much lower if you take into account how much NAND Intel is actually putting on board the 910. With 56 x 32GB 25nm MLC-HET NAND packages on a single 800GB 910, you're talking about around $2.23 per GB of NAND. I'd even be interested in seeing Intel offer a higher capacity version of the 910 with less endurance for those applications that need the performance but aren't tremendously write heavy.
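For reference, here's the per-gigabyte math using the list price and NAND totals quoted above:

```python
# Per-gigabyte math for the 800GB 910, using the list price and NAND totals quoted above.
price_usd = 4000
user_capacity_gb = 800
raw_nand_gb = 1792   # 56 x 32GB MLC-HET NAND packages

print(f"Per GB of user capacity: ${price_usd / user_capacity_gb:.2f}")  # $5.00
print(f"Per GB of NAND on board: ${price_usd / raw_nand_gb:.2f}")       # $2.23
```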
Of course there's Intel's famed reliability to take into account. All of the components on the 910 are either widely used already or derived from SSDs that have been shipping for years. There's bound to be some additional firmware complexity but it's nothing compared to doing a completely new drive/controller. Most of the server shops I've worked with lately tend to prefer Intel's 2.5" SSDs, even though there are higher performing alternatives on the market today. The 910 simply gives these folks an even higher end option should their customers or workloads demand it.
My only real complaint is about the inflexibility on the volume side. It would be nice to be able to present two larger volumes (or maybe even a single full capacity volume) to the OS rather than four independent volumes on an 800GB 910. Some VM platforms don't support software RAID and, at only 200GB per volume, capacity could become an issue. You really need to make sure that your needs are properly lined up with the 910 before pulling the trigger.
As a secondary issue, although I appreciate the power of Intel's SSD Data Center Tool, I would like to see something a bit easier to use. Not everyone wants to grok hexadecimal temperature values (although doing so wins you cool points).
Overall I'm pleased with the 910. It's (for the most part) a solid performer, it's competitively priced and it should last for a good while. If you're space constrained and need to get a lot of local IO performance in your server, Intel's SSD 910 is worth considering.