Original Link: https://www.anandtech.com/show/7682/the-wd-black2-review
The WD Black2 Review: World's First 2.5" Dual-Drive
by Kristian Vättö on January 30, 2014 7:00 AM EST

If you had asked me a few years ago, I would've said that hybrid drives would be the next big thing in the storage industry. Seagate's Momentus XT got my hopes up that there was manufacturer interest in the concept of hybrid drives, and I expected the Momentus XT to be just the beginning, with more announcements following shortly after. A hybrid drive made so much sense -- it combined the performance of an SSD with the capacity of a hard drive in an affordable and easy-to-use package.
Seagate's Momentus XT showed that even 4GB/8GB of NAND can make a tremendous impact on the user experience, although it couldn't compete with a standalone SSD. The reason was the very limited amount of NAND, since the speed of an SSD relies on parallelism: a single NAND die isn't fast (though even one die is still far better than a hard drive for random IO), but when you combine a dozen or more dies and read/write to them simultaneously, the performance adds up. I knew the Momentus XT was a first generation product and I accepted its limitations, but I truly expected that there would be a high-end drive with enough NAND to substitute for an SSD.
It turns out I was wrong... dead wrong. Sure, Seagate doubled the NAND capacity to 8GB in the second generation Momentus XT, but other than that the hybrid drive market was pretty much non-existent. Western Digital showed its hybrid drives more than a year ago but limited them to OEMs due to a unique connector. To be honest, I've not seen WD's hybrid drives used in any systems, so I'm guessing OEMs weren't the biggest fans of the connector either.
As the hard drive companies weren't able to come up with decent hybrid offerings, the PC OEMs had to look elsewhere. Intel's Ultrabook concept was a big push for SSDs because Intel required at least 20GB of flash storage or the OEM wouldn't be able to use Intel's Ultrabook branding. Of course Intel had no weapons to stop OEMs from making ultraportables without flash, but given the millions Intel has spent on Ultrabook marketing, it was worthwhile for OEMs to follow Intel's guidelines. The PC market had pushed itself into a corner with its price war, so it wasn't possible for PC OEMs to do what Apple did and go SSD-only, but on the other hand Ultrabooks had no space for two 2.5" drives. The solution? mSATA.
mSATA is barely the size of a credit card
Unlike hard drives, SSDs never had to be 2.5"; the form factor was simply a matter of compatibility with existing systems. What mSATA did was allow PC OEMs to build hybrid storage systems while keeping within the Ultrabook spec and form factor. In my opinion this was a huge but missed opportunity for hard drive OEMs. All they would have needed to do was build a hybrid drive with at least 20GB of NAND in order to meet the Ultrabook spec. I bet many PC OEMs would have chosen an all-in-one hybrid drive instead of two separate drives because managing a single supplier is easier, and assuming sufficient volume the pricing should have been competitive as well.
When SSDs first appeared in the consumer space, the hard drive companies didn't feel threatened. The pricing was absurd (the first generation 80GB Intel X-25M cost $595) and performance wasn't much better than what hard drives offered. Back then SSDs were only interesting to some enthusiasts and enterprises, although many were unconvinced about the long-term benefits since the technology was very new. The hard drive companies had no reason to even think about hybrid drives as traditional hard drives were selling like hot cakes.
Today the situation is very different. Let's take the 80GB Intel X-25M G1 and 240GB Intel SSD 530 as examples: the price per gigabyte has dropped from around $7.50 to $0.83, a massive 89% decrease. Drops this big are impossible to predict as they usually aren't intentional, and neither was this one. The reason NAND prices dropped so rapidly was the oversupply caused by large investments made around 2010. The sudden increase in NAND demand due to the popularity of smartphones and tablets made the NAND business look like a good investment, which is why many companies invested heavily in it. While smartphone and tablet shipments continued to increase, what the NAND fabricators didn't take into account was that the NAND capacities of those devices didn't (at least not very quickly). In other words, the NAND fabricators expected demand for NAND to keep growing rapidly and increased their production capacities accordingly, but in reality demand growth was much smaller, which led to oversupply. Just like with other goods, NAND prices are controlled by supply and demand: when there's more supply than demand, prices have to come down for the two to meet.
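To spell out the arithmetic (the ~$199 price for the 240GB SSD 530 is implied by the stated $0.83/GB):

$$
\frac{\$595}{80\,\mathrm{GB}} \approx \$7.44/\mathrm{GB}, \qquad \frac{\$199}{240\,\mathrm{GB}} \approx \$0.83/\mathrm{GB}, \qquad \frac{7.50 - 0.83}{7.50} \approx 89\%
$$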
SSDs are no longer luxury items. Plenty of systems are already shipping with some sort of SSD storage and the number will continue to grow. The hard drive companies can no longer neglect SSDs, and in fact WD, Seagate, and Toshiba have all made substantial investments in SSDs. Last year WD acquired STEC, Virident, and VeloBit; Seagate introduced its first consumer SSD lineup; and Toshiba has been in the SSD game for years. However, there still hasn't been a product that combines an SSD and hard drive into one compact package. The WD Black2, the world's first SSD+HDD dual-drive, changes that.
The Drive
The Black2 consists of a 120GB SSD and a 1TB dual-platter 5400rpm hard drive. By definition it's not a hybrid drive (or SSHD) like the Momentus XT, because there's no caching involved. The SSD and hard drive appear as separate partitions, giving the end-user the power to decide what data goes to the SSD and what doesn't. WD calls the Black2 a dual-drive, which is a logical name because it's fundamentally two drives in one.
(Sorry for the poor quality photos -- I no longer have access to the DSLR I used before)
WD Black2 Specifications
Interface | SATA 6Gbps
Sequential Read | 350MB/s
Sequential Write | 140MB/s
Power Consumption | 0.9W (idle/standby) / 1.9W (read/write)
Noise | 20dBA (idle) / 21dBA (seek)
Warranty | 5 years
Price | $299
Included in the retail package are a USB 3.0 to SATA adapter and Acronis True Image WD Edition (via download) for easy data migration. To my surprise there is no driver disc; instead there's a small USB drive that, when plugged in, runs a command that sends you to WD's download page (i.e. the actual drivers have to be downloaded).
Internally the drive is rather unique. The hard drive itself is the same as WD's Blue Slim model (7mm dual-platter 5400rpm drive) but in addition to the hard drive, there are two PCBs. The bigger PCB contains the SSD components (controller, NAND, DRAM) and the smaller one is home to Marvell's bridge chip, which allows the SSD and hard drive to utilize the same partition table.
WD went with a rather rare JMicron JMF667H controller in the Black2. It's a 4-channel controller built around an ARM9 core, but as usual JMicron doesn't provide much in the way of public details.
JMicron used to be a fairly big player in the consumer SSD space back in ~2009, but the lack of a SATA 6Gbps controller pushed SSD OEMs to other manufacturers. The JMF667H isn't JMicron's first SATA 6Gbps controller, although it seems that all the members of the JMF66x family are mostly the same design with a few tweaks. I've seen JMF66x controllers used in some Asian brand SSDs (e.g. the Transcend SSD740), but the biggest demand for the JMF66x has been on the industrial SSD side.
As for the NAND, WD has only disclosed that the NAND is 20nm MLC, suggesting that we're dealing with IMFT NAND (Micron or Intel). I tried googling the part numbers but it appears that the NAND is custom packaged as there was no data to be found. However, I'm guessing we're dealing with 64Gb dies, meaning eight dies (64GB) per package. There's also a 128MB DDR3-1600 chip from Nanya, which acts as a cache for the JMF667H controller.
Setting Up the Black2
When the Black2 is first connected, it appears as a 120GB drive; gaining access to the 1TB hard drive portion requires a driver installation. The driver is needed because of the limits of the SATA protocol. Connecting two drives to a single SATA port would require a port multiplier, which isn't supported by all SATA controllers as support isn't an official requirement. Most modern SATA controllers do support port multipliers, but older Intel and NVIDIA chipsets, for instance, don't. It's always better to play it safe and not have any specific hardware requirements, especially as most people have no idea what chipset is in their system.
Once the drivers have been installed, the Black2 shows up as a single drive with two partitions. The way this works is pretty simple. Operating systems use Logical Block Addresses (LBAs) for read/write commands, which keep the data seen in the OS and the data in the drive in sync. As OSes have been designed with hard drives in mind, they use linear addressing, meaning that the LBAs start from 0 (i.e. the outermost track of the hard drive) and increase linearly as more data is written. Partitions are based on LBA ranges, and as some of you might remember (and some may still do this), splitting a hard drive into two partitions was a way to increase performance because the first partition would get the earliest LBAs with the highest performance.
In the Black2, the earliest LBAs (i.e. the first 120GB) are assigned to the SSD, whereas the rest belong to the hard drive. The Marvell chip keeps track of all the LBAs and routes data to the SSD or hard drive based on the LBA. Conceptually the routing is just a threshold comparison, as sketched below.
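Here's a minimal illustration of that split in Python (the 512-byte sector size and exact split point are assumptions on my part; WD's actual bridge firmware is of course not public):

```python
SECTOR = 512                      # bytes per LBA (assumed)
SSD_CAPACITY = 120 * 1000**3      # 120GB SSD portion (decimal gigabytes)
SSD_LBA_LIMIT = SSD_CAPACITY // SECTOR

def route(lba: int) -> str:
    """Decide whether a given LBA lands on the SSD or the hard drive,
    as the Marvell bridge chip conceptually does."""
    return "SSD" if lba < SSD_LBA_LIMIT else "HDD"

# The OS just sees one contiguous LBA space:
print(route(0))                   # SSD -- start of the SSD partition
print(route(SSD_LBA_LIMIT))       # HDD -- first LBA past the 120GB mark
```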
Out of interest, I also tried creating a single 1120GB volume to see how the drive reacts. It certainly works and I was able to read and write data normally, but the issue is that you lose control over what goes to the SSD and what doesn't. As the earliest LBAs are assigned to the SSD, the SSD will be filled first, and once 120GB has been written the drive moves on to the hard drive, meaning that you are pretty much left with the hard drive for anything write related. If you delete something that's on the SSD, the next writes will go to those SSD LBAs, so in theory you could use the Black2 as a single-volume drive, but it wouldn't be efficient in any way.
Unofficial Mac Support
The drivers WD provides are Windows only, but there is a way, at least in theory, to use the drive in OS X. You need Windows access for this: set up the partitions in Windows, then use OS X's Disk Utility to reformat the partitions from NTFS to HFS+.
Unfortunately I don't have a Mac with a USB 3.0 port to thoroughly test the Black2 in OS X, so this is merely a heads up that it may work. I was able to read and write to the drive normally, but without a faster interface I can't verify that the writes are indeed going to the SSD when they should be. In theory they are, but it's possible that the drivers do more than just automate the partition setup.
Test System
CPU | Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard | ASRock Z68 Pro3
Chipset | Intel Z68
Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2
Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers | NVIDIA GeForce 332.21 WHQL
Desktop Resolution | 1920 x 1080
OS | Windows 7 x64
Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit
Performance Consistency
Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
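For reference, the workload looks roughly like this in Python (a deliberately simplified, single-threaded sketch against a pre-created test file; the real test runs Iometer at QD=32 against the raw device with OS caching out of the picture):

```python
import os, random, time

PATH = "testfile.bin"             # hypothetical target, pre-created at SPAN bytes
SPAN = 8 * 1024**3                # range of offsets to hit (illustrative)
BLOCK = 4096                      # 4KB random writes
DURATION = 30 * 60                # just over half an hour in the real test

buf = os.urandom(BLOCK)           # incompressible data

iops_log = []
with open(PATH, "r+b") as f:
    ios, tick = 0, time.monotonic()
    deadline = tick + DURATION
    while time.monotonic() < deadline:
        off = random.randrange(SPAN // BLOCK) * BLOCK   # 4K-aligned random offset
        f.seek(off)
        f.write(buf)
        ios += 1
        now = time.monotonic()
        if now - tick >= 1.0:     # record instantaneous IOPS once per second
            iops_log.append(ios)
            ios, tick = 0, now
```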
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.
[Graph: IO consistency over the full test run, log scale -- WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M; default and 25% OP data sets]
The area where low-cost designs usually fall behind is performance consistency, and the JMF667H in the Black2 is no exception. I was actually expecting far worse results, although the JMF667H is certainly one of the worst SATA 6Gbps controllers we've tested lately. The biggest issue is the inability to sustain performance: while the bulk of the IOs land at ~5,000 IOPS, performance is constantly dropping below 1,000 IOPS and even to zero on occasion. Increasing the over-provisioning helps a bit, although no amount of over-provisioning can fix a design issue this deep.
[Graph: IO consistency at the start of steady-state, log scale -- WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M; default and 25% OP data sets]
[Graph: IO consistency at the start of steady-state, linear scale -- WD Black2 120GB, Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M; default and 25% OP data sets]
TRIM Validation
To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.
Based on our sequential Iometer write test, the write performance should be around 150MB/s after a secure erase. TRIM doesn't appear to restore performance fully, but performance would likely recover further after some idle time.
AnandTech Storage Bench 2013
Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based -- we record all IO requests made to a test system and play them back on the drive we're testing and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes.
As some of you have asked, I'm not including the full description of the test for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.
AnandTech Storage Bench 2013 - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012
We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
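Both metrics fall out of the trace data directly. A minimal sketch of the calculation (the per-IO records here are purely illustrative):

```python
# Each record: (bytes transferred, service time in microseconds) -- made-up values
trace = [(4096, 85.0), (131072, 950.0), (4096, 60.0), (16384, 2100.0)]

total_bytes = sum(size for size, _ in trace)
busy_seconds = sum(t for _, t in trace) / 1e6

avg_data_rate = (total_bytes / 1e6) / busy_seconds         # MB/s while servicing IOs
avg_service_time = sum(t for _, t in trace) / len(trace)   # in µs; heavily weighs queued IOs

print(f"{avg_data_rate:.1f} MB/s, {avg_service_time:.0f} µs")
```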
Our Storage Bench 2013 favors 480GB and bigger drives due to its focus on steady-state performance. Having more NAND helps with worst-case performance, as ultimately steady-state performance is dictated by the speed of the read-modify-write cycle, which depends on the program and erase times of the NAND. The more NAND the drive has, the higher the probability that there are at least some empty blocks available.
When taking the lower capacity into account, the Black2 isn't terrible but it's not great either. There are some 256GB drives that perform similarly, although it should be noted that the Black2 has 12% over-provisioning instead of 7%, giving it a slight advantage there (the drives are filled with sequential data before the test after all).
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
Random IO performance is relatively low by today's standards but not truly horrible. I was expecting something worse, but the JMF667H turns out to be rather competitive with popular big-brand drives like the Samsung 840 EVO and Crucial M500.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
The same goes for sequential performance. It's not bad but there are far better options at 120/128GB.
AS-SSD Incompressible Sequential Read/Write Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
Random & Sequential Performance - HDD
The HDD performance is average, though that shouldn't surprise anyone as there hasn't been any major breakthrough in hard drive technology. Performance is still dictated by platter density and spindle speed, which puts the Black2 at the upper end with its two 500GB 5400rpm platters. I should note that as there is no way to test the hard drive partition of the Black2 as a raw disk the way we usually test drives, I had to create an NTFS volume for testing. It's possible that there is some OS prefetching/caching going on, giving the Black2 an advantage especially in the random IO tests.
Performance vs. Transfer Size
ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. ATTO paints a rather bleak picture of the Black2: other SSDs dominate it, and regardless of the IO size the Black2 is almost always the slowest. Fortunately ATTO isn't that reliable a benchmark of real-world performance, but it's clear that the JMicron controller isn't exactly competitive with the best offerings on the market.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). We've included a large amount of email downloading, document creation and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB | 28%
16KB | 10%
32KB | 10%
64KB | 4%
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1. The full description of the test can be found here.
AnandTech Storage Bench 2011 - Heavy Workload
AnandTech Storage Bench 2011 - Light Workload
Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric). There's lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming.
The IO breakdown is similar to the heavy workload at small IO sizes; however, you'll notice that there are far fewer large IO transfers.
AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB | 27%
16KB | 8%
32KB | 6%
64KB | 5%
Power Consumption
The idle power consumption comes out fairly high. Given that there are more components than in a regular SSD, I was expecting elevated power draw, but 1.32W is more than I would've thought. Note that this is the SSD only -- the hard drive isn't even spinning, although its controller still draws some power. Since notebooks are the target market of the Black2, I would have liked to see some sort of low-power state support (HIPM+DIPM and DevSleep) or at least lower idle power consumption in general.
Load power consumption isn't nearly as bad. If the SSD and HDD are accessed simultaneously, the power draw is obviously higher than that of a single drive but not critically. If you ran a normal dual-drive setup, the power draw could easily be more than 4.5W anyway.
Final Words
There are two conditions under which the Black2 makes sense:
1) You have a laptop with a single 2.5" hard drive bay and no mSATA slot.
2) You need more capacity than 480/512GB and/or aren't willing to pay for a 500GB-class SSD.
If both conditions apply to you, the Black2 is likely the best option on the market right now. However, if either one doesn't, there are far better and cheaper options available.
If your laptop can take two 2.5" drives, or a 2.5" drive and an mSATA SSD, it's much cheaper to go that route. As the table below shows, a 120GB SSD and a 1TB 2.5" hard drive cost about half of what the Black2 does.
NewEgg Price Comparison (1/28/2014)
Configuration | Cost
WD Black2 (120GB SSD + 1TB HDD) | $290
Kingston SSDNow V300 120GB + HGST Travelstar 1TB 5400rpm | $85 + $65 = $150
Crucial M500 120GB mSATA + HGST Travelstar 1TB 5400rpm | $80 + $65 = $145
Buying separate drives also gives you the option to choose the SSD and HDD yourself, in case you want a higher-performance SSD or prefer a certain brand of hard drive. Even if you went with a high-end SSD like the 120GB SanDisk Extreme II, you would end up saving over $50. In fact, you could buy a 240GB SSD and still easily beat the Black2 in price.
If you don't need more than 240/256GB of SSD storage, the solution is simple: buy an SSD and use it as primary storage. As a matter of fact, Crucial is currently having a sale on the M500 and the 480GB model is retailing for $270 at Newegg, $20 less than what the Black2 costs. This might be just a short-term sale, but lately I've seen many 480/512GB drives selling at ~$300, so the Black2 really only makes sense if you need more than 480/512GB.
Don't get me wrong; I like the concept of the Black2, but the execution and timing are not the best. Had the Black2 been released two years ago, I would've been all over it. Back then mSATA was still rather new and most OEMs hadn't adopted it yet, but nowadays nearly all decent laptops have an mSATA slot, which makes a dual-drive or hybrid drive redundant.
Furthermore, the SSD in the Black2 is only mediocre, although I must say I wasn't expecting much in the first place. There must be a reason why none of the big OEMs have adopted JMicron's controllers and I think performance is one of the top reasons. If the Black2 sees another generation, I certainly hope WD focuses more on the SSD performance (maybe Marvell silicon?) because the truth is that there are far better SSDs in the market. Couple that with the pricing and high power draw and we're really looking at a very niche product.
If there is one thing WD should have done with the Black2, it's caching (or tiered storage). The reason people usually have negative feelings about caching is that the solutions are always crippled by small, low-performance SSDs. The Black2 has enough NAND to make the caching experience smooth, and with the right software the Black2 could have been similar to Apple's Fusion Drive. I'm currently testing a development version of software that will bring Fusion Drive like tiered storage to Windows and it works with the Black2 as well, but until it becomes available (and hopefully WD bundles it with the Black2), the biggest potential of the Black2 is missed.
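For what it's worth, even crude file-level tiering can be scripted on top of the two partitions. A minimal sketch (the mount points are hypothetical, access time is a very rough 'heat' metric, and real tiering like Fusion Drive works at the block level and migrates data in both directions):

```python
import heapq, os, shutil

SSD_ROOT = "S:/hot"               # hypothetical SSD mount point
HDD_ROOT = "D:/data"              # hypothetical HDD mount point
SSD_BUDGET = 100 * 1024**3        # keep some headroom below the 120GB SSD

def hottest_files(root, n=100):
    """Rank files under root by most recent access time (crude heat metric)."""
    entries = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
    return heapq.nlargest(n, entries)

used = 0
for atime, size, path in hottest_files(HDD_ROOT):
    if used + size > SSD_BUDGET:  # stop once the SSD budget is exhausted
        break
    shutil.move(path, os.path.join(SSD_ROOT, os.path.basename(path)))
    used += size
```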
At $199 and with proper caching software, the Black2 would be a totally different product. Right now it's an expensive niche product that only serves a small user base. If you meet the two conditions at the top, then I have no problem recommending the Black2 but otherwise you should look elsewhere.