Original Link: https://www.anandtech.com/show/6725/the-full-intel-ssd-525-review-30gb-60gb-120gb-180gb-240gb-tested
The Full Intel SSD 525 Review: 30GB, 60GB, 120GB, 180GB & 240GB Tested
by Anand Lal Shimpi on February 3, 2013 2:30 AM EST
Posted in: Storage, SSDs, Intel, SandForce, Intel SSD 520, Intel SSD 525
Last week we published a teaser look at Intel's latest mSATA SSD: the Intel SSD 525. At the time we only presented performance for a single 240GB drive; however, Intel decided to break the mold and send us nearly every capacity in the 525 lineup. We've finally completed testing of the remaining capacities and can now bring you a full look at the Intel SSD 525 lineup.
Typically manufacturers send along their best performing SSD, and what follows after launch is a round of begging for, or outright purchasing of, additional capacities in order to present the most complete picture. Intel was one of the first companies to send both large and small capacity SSDs in for review, so it's no surprise that they're one of the first to send nearly every member of a new SSD family for review. I can't stress enough how important it is that other manufacturers follow in Intel's footsteps here. Given the direct relationship between the number of NAND die/packages on an SSD and the performance of the drive, being able to demonstrate the performance of the entire family is very important to those looking to make an actual buying decision.
In the case of the 525, Intel did a good job of keeping things pretty simple. Since the 525 is exclusively an mSATA drive, there's only room for a maximum of four NAND packages on board. The 525 still uses 25nm 2bpc MLC NAND, which is limited to 8GB of NAND per die and 8 die per package (64GB max per NAND package). This is where the 240GB max capacity comes from (256GB of actual NAND).
With the exception of the 90GB and 180GB drives, all of the 525s populate all four NAND packages; the 90GB and 180GB models feature only three NAND packages on board:
Intel SSD 525 180GB (top) vs. Intel SSD 525 240GB (bottom)
The full breakdown of capacities and NAND configurations is listed below:
Intel SSD 525
Advertised Capacity | User Addressable Space | Total NAND On-board | % Spare Area | NAND Packages (Number / Die per Package / Package Capacity) | MSRP
30GB | 27.95 GiB | 32 GiB | 12.6% | 4 / 1 / 8 GiB | $54
60GB | 55.89 GiB | 64 GiB | 12.6% | 4 / 2 / 16 GiB | $104
90GB | 83.82 GiB | 96 GiB | 12.6% | 3 / 4 / 32 GiB | $129
120GB | 111.79 GiB | 128 GiB | 12.6% | 4 / 4 / 32 GiB | $149
180GB | 167.68 GiB | 192 GiB | 12.6% | 3 / 8 / 64 GiB | $214
240GB | 223.57 GiB | 256 GiB | 12.6% | 4 / 8 / 64 GiB | $279
All of the 525 members feature the same ~12.6% spare area (~14% OP). Despite the reduction in the number of NAND packages, the 90 and 180GB models aren't actually any slower than the 60GB and 120GB versions respectively. The reason performance doesn't suffer at these odd sizes is that Intel/SandForce do a good job of parallelizing requests across all NAND die within a package, and there are physically more die per package in the 90/180GB configurations than in the 60/120GB models.
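To make the relationship between NAND configuration, advertised capacity and spare area concrete, here's a quick back-of-the-envelope check of the table above. This is just arithmetic on the table's own numbers; the user capacity is approximated as the decimal advertised capacity, so the output differs very slightly from the exact LBA-derived figures.

```python
# Sanity-check the 525 capacity table: raw NAND vs. advertised capacity.
GIB = 2**30

lineup = {
    # advertised GB: (NAND packages, die per package, GiB per package)
    30:  (4, 1, 8),
    60:  (4, 2, 16),
    90:  (3, 4, 32),
    120: (4, 4, 32),
    180: (3, 8, 64),
    240: (4, 8, 64),
}

for adv_gb, (pkgs, die, pkg_gib) in lineup.items():
    assert die * 8 == pkg_gib                  # 8 GiB per 25nm MLC die
    total_nand_gib = pkgs * pkg_gib            # raw NAND on board
    user_gib = adv_gb * 1_000_000_000 / GIB    # decimal GB expressed in GiB (approximate)
    spare = 1 - user_gib / total_nand_gib      # spare area as a fraction of raw NAND
    op = total_nand_gib / user_gib - 1         # over-provisioning relative to user space
    print(f"{adv_gb:3d}GB: {user_gib:6.2f} GiB user / {total_nand_gib:3d} GiB NAND "
          f"-> {spare:.1%} spare, {op:.1%} OP")
```

Running this reproduces the ~12.6% spare area (~14% OP) figure for every capacity in the lineup.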
I went over the rest of the details of the 525 in the initial review. In short, the SF-2281 controller is still in use, but paired with firmware that's directed/validated by Intel. Performance, compatibility and stability can be different on the 525 compared to other drives that use SandForce's SF-2281 controller. The 525 in particular uses a newer version of the Intel branch of the SF-2281 firmware with additional stability/compatibility enhancements and power optimizations. The 525's LLKi firmware revision hasn't been backported to the 520/330/335, and as of now there aren't any public plans to do so. The 525 also comes with a 5-year warranty from Intel.
Performance of the 525 is very similar to the 2.5" Intel SSD 520 that came before it; the big difference here is the physical size of the drive, as the 525 is mSATA only. For all testing we used an mSATA to SATA adapter:
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
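For those who want to approximate this test without Iometer, the sketch below captures the idea: 4KB writes at random, aligned offsets within an 8GB span, with a handful of worker threads standing in for queue depth. The device path, the threading model and the use of os.urandom for incompressible data are all my assumptions rather than our actual test setup; it's POSIX-only (os.pwrite) and goes through the page cache unless you add O_DIRECT handling, so treat the numbers as illustrative.

```python
import os
import random
import threading
import time

def random_write_test(path, span_gb=8, qd=3, seconds=180, incompressible=True):
    """Rough stand-in for the 4KB random write test: qd worker threads issue
    4KB-aligned writes at random offsets within a span_gb-GB region of path."""
    span, block = span_gb * 2**30, 4096
    stop = time.time() + seconds
    written = [0] * qd

    def worker(i):
        fd = os.open(path, os.O_RDWR)          # a raw device or large scratch file
        fill = b"\x00" * block                 # trivially compressible payload
        while time.time() < stop:
            data = os.urandom(block) if incompressible else fill
            off = random.randrange(span // block) * block
            os.pwrite(fd, data, off)
            written[i] += block
        os.close(fd)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(qd)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{sum(written) / seconds / 1e6:.1f} MB/s average over {seconds}s at QD{qd}")

# e.g. random_write_test("/dev/sdX")  # hypothetical scratch device, QD3 by default
```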
The 60GB drive is nearly enough to saturate the 525's 4KB random read performance. We don't see a ton of scaling here beyond the 60GB capacity.
With easily compressible data, nearly all of the 525 capacities perform alike since very little data is actually being written - we're just effectively updating the indirection table in addition to a small number of NAND writes. Flip the switch and look at incompressible data and you'll see a clear relationship between the number of active die on board and random IO performance. None of these drives are bad by any means, but they don't all perform at the level of the 240GB model.
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
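In terms of the hypothetical sketch above, moving to a higher queue depth is simply a matter of raising the number of outstanding IOs:

```python
# Same hypothetical test as before, but with 32 outstanding IOs instead of 3.
random_write_test("/dev/sdX", qd=32)
```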
There's some scaling with queue depth, but when you're this constrained by the number of die you can write to there's just not that much more parallelism to extract.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
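A similarly hedged approximation of the write half of this test is below, again assuming a dedicated scratch device and incompressible data; swapping os.pwrite for os.pread (and dropping the data generation) gives you the read side.

```python
import os
import time

def sequential_write_test(path, seconds=60, block=128 * 1024):
    """Rough stand-in for a QD1 128KB sequential write pass: one outstanding
    write at a time, advancing linearly and wrapping at the end of the target."""
    fd = os.open(path, os.O_RDWR)
    size = os.lseek(fd, 0, os.SEEK_END)   # total size of the device or file
    off = written = 0
    stop = time.time() + seconds
    while time.time() < stop:
        data = os.urandom(block)          # fresh random data defeats SandForce compression
        os.pwrite(fd, data, off)
        written += block
        off += block
        if off + block > size:            # wrap around once we reach the end
            off = 0
    os.close(fd)
    print(f"{written / seconds / 1e6:.1f} MB/s average")
```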
Even the smallest 30GB 525 can read at almost 300MB/s in low queue depth sequential workloads, which is very quick. To put that in perspective, the 30GB 525 is 30% faster in sequential reads than Intel's 2nd generation, high-end 160GB SSD. The old Intel SSD 310 also takes a sound beating from the new 525.
Sequential write performance once again comes down to the nature of the data you're writing to the drive. Incompressible data, which will likely make up a good portion of what you write sequentially to a decent sized 525, sees performance that's almost directly proportional to capacity. Looking at Intel's performance historically though, only the 30GB model gets really slow here. The 60GB drive is at least as fast as the old X25-M G2, while the 120GB and larger capacities deliver downright solid performance.
AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
The AS-SSD data mimics what we've seen already in Iometer, although we do get the benefit of seeing what higher queue depths do to incompressible performance on the 525.
Performance vs. Transfer Size
ATTO does a good job of showing us how sequential performance varies with transfer size. Most controllers optimize for commonly seen transfer sizes and neglect the rest. The optimization around 4KB, 8KB and 128KB transfers makes sense given that's what most workloads are bound by, but it's always important to understand how a drive performs across the entire gamut.
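In spirit, a transfer-size sweep like ATTO's is just the same sequential test repeated with the block size as the variable. The loop below reuses the hypothetical sequential routine sketched earlier; note that ATTO actually uses highly compressible data, so a faithful reproduction would swap os.urandom for a repeating pattern.

```python
# Hypothetical ATTO-style sweep: rerun a short sequential pass at each transfer
# size, from 0.5KB up to 2MB, and watch where throughput levels off.
for size_kb in (0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048):
    print(f"{size_kb:6} KB: ", end="")
    sequential_write_test("/dev/sdX", seconds=10, block=int(size_kb * 1024))
```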
In the initial 525 review I compared the SSD to other drives including the 520:
As you can see there's little difference between the 525 and the 520. SandForce also does a good job of performing well across all transfer sizes, which seems to be less common in some of the newer controllers.
ATTO uses easily compressible data so if we toss all of the 525 capacities into the mix we end up with a bunch of curves that nearly overlap one another:
The 30GB model's peak write speed is capped noticeably lower than the rest as it only has a total of four NAND die to stripe across, but the rest are bound by the speed at which the DuraWrite engine can do its job.
Performance Consistency
In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.
To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests, but enough to give me a good look at drive behavior once all spare area had filled up.
I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
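The per-second sampling itself is straightforward; below is a hedged, single-threaded outline of the idea (the real run uses Iometer at a queue depth of 32, as described above, and the device path is a placeholder).

```python
import os
import random
import threading
import time

def consistency_trace(path, seconds=2000, block=4096):
    """Hammer the target with 4KB random writes of incompressible data and
    record instantaneous IOPS once per second (single-threaded sketch; the
    real test runs at QD32)."""
    fd = os.open(path, os.O_RDWR)
    span = os.lseek(fd, 0, os.SEEK_END)    # write across all user LBAs
    completed = [0]
    samples = []

    def sampler():
        prev = 0
        while len(samples) < seconds:
            time.sleep(1)
            now = completed[0]
            samples.append(now - prev)     # IOs completed in the last second
            prev = now

    threading.Thread(target=sampler, daemon=True).start()
    stop = time.time() + seconds
    while time.time() < stop:
        off = random.randrange(span // block) * block
        os.pwrite(fd, os.urandom(block), off)
        completed[0] += 1
    os.close(fd)
    return samples   # plot index vs. value to reproduce the IOPS-over-time scatter

# e.g. iops_over_time = consistency_trace("/dev/sdX")  # hypothetical device, destroys data
```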
The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I've tested here but not all controllers may behave the same way.
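For the 240GB 525 used here, the arithmetic behind the 25% spare area toggle works out as follows. This is just a worked example of the partitioning approach described above, not a required step:

```python
GIB = 2**30

raw_nand_gib  = 256                        # total NAND on the 240GB 525
user_gib      = 240 * 1_000_000_000 / GIB  # ~223.5 GiB exposed to the OS
target_spare  = 0.25                       # desired spare area as a fraction of raw NAND

partition_gib = raw_nand_gib * (1 - target_spare)  # 192 GiB of usable space
leave_gib     = user_gib - partition_gib           # space to leave unpartitioned (and TRIMed)

print(f"Create a {partition_gib:.0f} GiB partition and leave {leave_gib:.1f} GiB "
      f"untouched to simulate {target_spare:.0%} total spare area")
```

In other words, partitioning the 240GB 525 down to 192 GiB and leaving the remaining ~31.5 GiB untouched gives the drive 25% of its raw NAND as spare area.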
The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.
[Interactive chart: Impact of Spare Area - default vs. 25% spare area for the Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB; 25% spare area data is not available for every drive]
As promised I re-ran our consistency tests on the 525 and came up with somewhat different but still slightly odd results, at least compared to the 335. There's a clear degradation in consistency over time, however both the pre-fill and 4KB random writes are using incompressible data which could be a bit unrealistic here. Between your OS and installed applications, there's bound to be more "free" space on any full SF-2281 drive thanks to the inherently compressible nature of a lot of software. The 25% spare area (192GB) toggle shows us what happens to IO consistency if you either only use 192GB of the 256GB of NAND or if you use the entire drive but have some data on it that's fully compressible. The result isn't anywhere near as impactful as what we see on other drives. The SF-2281 controller is reasonably well behaved to begin with, but the fact remains that with incompressible data the controller has to do a lot more work than it was banking on - which causes large variance in IO latency. Minimum performance is still quite good though, especially if you compare the 525 in its default configuration to Samsung's SSD 840 Pro for example. The 525 just doesn't respond as well to additional spare area as conventional SSDs.
The next set of charts looks at the steady state (for most drives) portion of the curve. Here we'll get better visibility into how each drive will perform over the long run.
[Interactive chart: Impact of Spare Area (steady state, log scale) - default vs. 25% spare area for the Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB]
The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 50K IOPS. We're also only looking at steady state (or close to it) performance here:
[Interactive chart: Impact of Spare Area (steady state, linear scale) - default vs. 25% spare area for the Intel SSD DC S3700 200GB, Intel SSD 335 240GB, Intel SSD 525 240GB, Corsair Neutron 240GB, OCZ Vector 256GB and Samsung SSD 840 Pro 256GB]
IO consistency isn't that great for the SF-2281 controller, although minimum performance remains very good despite the wide distribution of IO latencies. Throwing more spare area at the problem (or just having some compressible data on your drive) does help get rid of the really unusual dips in performance, but the overall distribution remains loosely clustered.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB | 28%
16KB | 10%
32KB | 10%
64KB | 4%
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:
Overall performance of the 60GB 525 is similar to the old X25-M G2, which is pretty impressive when you consider the size of the 60GB mSATA drive. Other interesting comparisons include the 120GB 525 narrowly beating Micron's 128GB C400 mSATA drive, and the 180GB model outperforming the two MyDigitalSSD solutions.
The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:
AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB | 27%
16KB | 8%
32KB | 6%
64KB | 5%
The 525s do a bit better vs. the competition in our light workload, but the gap between capacities doesn't change all that much.
TRIM Functionality
SandForce has always exhibited strange behavior when it comes to TRIM. Even Intel's custom firmware in the SSD 520 wasn't able to fix SandForce's TRIM problem. The issue happens when the SSD is completely filled with incompressible data (both user LBAs and spare area). Any performance degradation after that point won't be restored with a TRIM pass and will instead require a secure erase to return the drive to like-new performance. None of the Intel SF SSDs have been able to fix this issue and the 525 is no exception. I ran a slightly modified version of our usual test here: I filled the drive with incompressible data, ran our consistency workload (also with incompressible data), then measured performance using a 128KB (incompressible) sequential pass in Iometer. I then TRIMed the entire drive and re-ran the Iometer test.
Intel SSD 525 Resiliency - Iometer 128KB Incompressible Sequential Write
Drive | Clean | After Torture (30 mins) | After TRIM
Intel SSD 525 240GB | 293.5 MB/s | 59.8 MB/s | 153.3 MB/s
And the issue persists. This is a real problem with SandForce drives if you're going to store lots of incompressible data (such as MP3s, H.264 videos and other highly compressed formats), because sequential speeds may suffer even more in the long run. As an OS drive the SSD 525 will do just fine since it won't be full of incompressible data, but I would recommend buying something non-SandForce if the main use will be storage of incompressible data.
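As a side note, if you want to force a full TRIM pass between runs the way we do, the usual Linux-side equivalents are fstrim on a mounted filesystem or blkdiscard on a raw device; our testbed actually does this from Windows, so treat the following as a hedged approximation with placeholder paths.

```python
import subprocess

# TRIM all free space on a mounted filesystem (the drive and filesystem must
# both support and pass through TRIM).
subprocess.run(["fstrim", "-v", "/mnt/ssd"], check=True)   # hypothetical mount point

# Alternatively, discard every LBA on an unmounted device. This destroys all
# data on the drive, so only use it on a dedicated test device.
# subprocess.run(["blkdiscard", "/dev/sdX"], check=True)
```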
Power Consumption
In our original 525 review I noticed lower idle power consumption compared to the Intel SSD 520, however higher active power consumption. It turns out that both active and idle power consumption should be lower than the 520. As is the case with all of our SSD reviews, we measure power consumption on the 5V rail going into the drive as most drives don't use 3.3V or 12V power. In the case of the 525, it will draw power on the 3.3V rail by default but when used in our mSATA to SATA adapter it is only fed power on the 5V rail. Measuring power at the mSATA to SATA adapter rather than at the mSATA board itself is likely the cause for this discrepancy. Update: We've measured 525 power consumption using the 3.3V rail supplied directly to the drive.
The numbers below are taken from drives using the same mSATA adapter, although actual power consumption in a system with a native mSATA connector should be lower for the 525 at least. Note that the 180GB 525 only has three NAND packages, which seems to positively influence power consumption in the easily compressible data access tests. Under incompressible data load the extra NAND die outweigh the power savings of only having three NAND packages.
Power consumption remains a strength of the 525, although I'd be curious to see how an mSATA Samsung 840 Pro would do here.
Final Words
There aren't really any new conclusions to be made here now that we've gone through almost all of the capacities of Intel's SSD 525. While I'd still like to see Intel bring its own 6Gbps controller technology down to the client space, the SF-2281 based Intel SSD 525 should be a good solution for any mSATA client machine facing a typical workload. I do appreciate Intel taking the mSATA space seriously, as it hasn't seen a ton of attention from tier 1 vendors or companies with good validation track records.
The 525 offers users looking for an mSATA SSD a wide variety of capacities and a level of performance that's almost equal to the best from the 2.5" SATA SSD space. The icing on the cake is that you get quite possibly the best-validated SF-2281 SSD on the market, even more so than Intel's SSD 520, thanks to the 525's newer firmware. Once again I'd still prefer a controller of Intel's own design (or perhaps even the upcoming 3rd generation SandForce/LSI controller), but one (or both?) of those things will take several more quarters to come to fruition. Until then, the 525 is a good option.