The OCZ Vector 180 (240GB, 480GB & 960GB) SSD Review
by Kristian Vättö on March 24, 2015 2:00 PM EST
Posted in: Storage, SSDs, OCZ, Barefoot 3, Vector 180
OCZ has been teasing the Vector 180 for quite some time now. The first hint of the drive came over nine months ago at Computex 2014, where OCZ displayed a Vector SSD with power loss protection, but the concept of 'full power loss protection for the enterprise segment' as it existed back then never made it to the market. Instead, OCZ decided to apply the concept partially to its new flagship client drive, the Vector 180.
OCZ calls the power loss protection feature in the Vector 180 'Power Failure Management Plus', or PFM+ for short. For cost reasons, OCZ didn't go with the full power loss protection found in enterprise SSDs, so PFM+ is limited to protecting data-at-rest. In other words, PFM+ will protect data that has already been written to the NAND, but any and all user data that still sits in the DRAM buffer waiting to be written will be lost in case of a sudden power loss.
The purpose of PFM+ is to protect the mapping table and reduce the risk of bricking due to a sudden power loss. Since the mapping table is stored in the DRAM for faster access, all SSDs without some sort of power loss protection are inherently vulnerable to mapping table corruption in case of a sudden power loss. In its other SSDs OCZ tries to protect the mapping table by frequently flushing the table from DRAM to NAND, but with higher capacities (like the 960GB) there's more metadata involved and thus more data at risk, which is why OCZ is introducing PFM+ to the Vector 180.
That said, while drive bricking due to mapping table corruption has always been a concern, I don't think it has been significant enough to warrant physical power loss protection in all client SSDs. It makes sense for the Vector 180 given its high-end focus, as professional users are less tolerant of downtime, and it also grants OCZ some differentiation in the highly competitive client market.
Aside from PFM+, the other new thing OCZ is bringing to the market with the Vector 180 is a 960GB model. The higher capacity is enabled by the use of 128Gbit NAND, whereas in the past OCZ has only used a 64Gbit die in its products. It seems that Toshiba's switch to the 128Gbit die has been rather slow, as I have not seen many products with 128Gbit Toshiba NAND - perhaps there have been yield issues, or maybe Toshiba's partners are simply more willing to use the 64Gbit die for performance reasons (you always lose some performance with a higher capacity die due to reduced parallelism).
OCZ Vector 180 Specifications

| | 120GB | 240GB | 480GB | 960GB |
|---|---|---|---|---|
| Controller | OCZ Barefoot 3 M00 | OCZ Barefoot 3 M00 | OCZ Barefoot 3 M00 | OCZ Barefoot 3 M00 |
| NAND | Toshiba A19nm MLC | Toshiba A19nm MLC | Toshiba A19nm MLC | Toshiba A19nm MLC |
| NAND Density | 64Gbit per die | 64Gbit per die | 64Gbit per die | 128Gbit per die |
| DRAM Cache | 512MB | 512MB | 512MB | 1GB |
| Sequential Read | 550MB/s | 550MB/s | 550MB/s | 550MB/s |
| Sequential Write | 450MB/s | 530MB/s | 530MB/s | 530MB/s |
| 4KB Random Read | 85K IOPS | 95K IOPS | 100K IOPS | 100K IOPS |
| 4KB Random Write | 90K IOPS | 90K IOPS | 95K IOPS | 95K IOPS |
| Steady-State 4KB Random Write | 12K IOPS | 20K IOPS | 23K IOPS | 20K IOPS |
| Idle Power | 0.85W | 0.85W | 0.85W | 0.85W |
| Max Power | 3.7W | 3.7W | 3.7W | 3.7W |
| Encryption | AES-256 | AES-256 | AES-256 | AES-256 |
| Endurance | 50GB/day for 5 years | 50GB/day for 5 years | 50GB/day for 5 years | 50GB/day for 5 years |
| Warranty | Five years | Five years | Five years | Five years |
| MSRP | $90 | $150 | $275 | $500 |
The retail package includes a 3.5" desktop adapter and a license for Acronis True Image HD 2013 cloning software. Like some of OCZ's recent SSDs, the Vector 180 includes a 5-year ShieldPlus Warranty.
OCZ has two flavors of the Barefoot 3 controller and obviously the Vector 180 uses the faster M00 bin, which runs at 397MHz (whereas the M10, as used in the ARC 100 and Vertex 460A, is clocked at 352MHz).
OCZ's other SSDs have already made the switch to Toshiba's latest A19nm MLC, and with the Vector 180 the Vector series is the last to make that jump. Given that the Vector lineup is OCZ's SATA 6Gbps flagship, the late transition makes sense since NAND endurance and performance tend to increase as the process matures.
The Vector 180 review is the second one based on our new 2015 SSD Suite, and I suggest reading the introduction article (i.e. the Samsung SM951 review) for the full details. Due to several NDAs and travel, I unfortunately don't have many comparison drives yet, but I'm running tests non-stop to add more drives for more accurate conclusions.
AnandTech 2015 SSD Test System

| Component | Details |
|---|---|
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z97 Deluxe (BIOS 2205) |
| Chipset | Intel Z97 |
| Chipset Drivers | Intel 10.0.24 + Intel RST 13.2.4.1000 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Graphics Drivers | 15.33.8.64.3345 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 8.1 x64 |
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z97 Deluxe motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Hydro H60 CPU cooler and Carbide 330R case
SSD Guru: The New OCZ Toolbox
During the past couple of years we've seen a big push for better toolbox-like software from nearly every major SSD vendor. The reason is differentiation: SATA 6Gbps has been saturated for so long that standing out in the performance department has become practically impossible (although that will soon change with PCIe and NVMe). As a result, SSD manufacturers have had to seek other ways to increase value for the customer, and software has lately become one of the key aspects of doing so. (It is worth noting that the motherboard industry went through the same process, whereby most motherboards in a price bracket had a flat feature set and software became a differentiating factor. -Ian)
The old OCZ Toolbox
OCZ has had a toolbox for as long as I can remember, but to be honest it looked more like an engineering tool than something aimed at the end user. It did have the critical functionality (firmware update, secure erase, SMART data), but given what competitors have put on the market, it was certainly lacking in both features and usability.
I guess my original Vector is in need of a firmware update
Today, along with the Vector 180 release, OCZ is launching its fully redesigned toolbox, called SSD Guru. The overall design of the SSD Guru is much more user friendly and, as we've seen in other toolboxes, the welcome screen already includes all the essential information about the drive, so the user doesn't have to dig through different tabs to find the important data.
The SSD Guru is available as both Windows and Linux installers as well as a separate bootable tool for Mac users. All Barefoot 3 based drives are supported along with the RevoDrive 350, but the older Indilinx and SandForce based drives are not (although you can still use the old toolbox if you wish).
The 'Tuner' tab includes two separate functions: SSD and OS Tuner. The SSD Tuner allows the user to issue a TRIM command to the drive to erase unused blocks to improve performance (although this should be unnecessary if you are running an OS with TRIM support) and it also includes a tool for increasing the over-provisioning for further performance gains.
The OS Tuner includes a few basic OS features that can be disabled for higher performance and/or capacity. By default the SSD Guru does nothing, but there are three preset options (reliability, performance and capacity) that you can choose from to optimize the OS. Different settings will be disabled based on what you choose (e.g. the capacity option only disables hibernation, whereas reliability disables all four listed in the image above), although you can also customize the settings and disable what you see fit.
The maintenance tab has the common firmware update and secure erase functions that were also present in the old OCZ toolbox. The SSD Guru will also show a notification on the desktop if there's a newer firmware available.
The SSD Guru also supports logging, which can be a useful feature if you ever have issues with the drive and need to contact OCZ's support.
One feature OCZ emphasized is the ability to save a 'support package' that can then be sent to OCZ support if the drive isn't operating properly. The file includes a brief overview of the system with the necessary information that may be needed by the support staff for troubleshooting.
The one last cool feature of the SSD Guru is its SMART data monitor. Instead of just listing all the values like toolboxes usually do, OCZ has included three key icons that help the user to understand the purpose of each SMART value. While enthusiasts will understand the data without the keys, I still think it's a nice addition and something that at least slightly differentiates the SSD Guru from what is already out on the market.
The version that is being released today has all the core features that you would expect from a toolbox, but none of them are truly unique. Obviously, being a 1.0 release, OCZ only decided to include the most critical features to build the foundation for SSD Guru and the company already has a list of features that are under consideration for future updates (e.g. benchmarking tool). That said, I think the SSD Guru was a necessary move from OCZ in order to be considered a tier one OEM because it's an area where the company has certainly been lacking compared to the competition. I can't say the SSD Guru is special, but in the end the purpose of a toolbox is to provide easy access to the most needed SSD tools and the SSD Guru certainly does that.
Performance Consistency
We've been looking at performance consistency since the Intel SSD DC S3700 review in late 2012 and it has become one of the cornerstones of our SSD reviews. Back then, many SSD vendors were focusing only on high peak performance, which unfortunately came at the cost of sustained performance. In other words, the drives would push high IOPS in certain synthetic scenarios to provide nice marketing numbers, but as soon as you pushed the drive for more than a few minutes you could easily run into hiccups caused by poor performance consistency.
Once we started exploring IO consistency, nearly all SSD manufacturers made moves to improve it. For the 2015 suite, I haven't made any significant changes to the methodology we use to test IO consistency. The biggest change is the move from VDBench to Iometer 1.1.0 as the benchmarking software, and I've also extended the test from 2000 seconds to a full hour to ensure that all drives hit steady-state during the test.
For better readability, I now provide bar graphs, with the first one showing the average IOPS over the last 400 seconds of the test and the second one the standard deviation over the same period. Average IOPS provides a quick look at overall performance, but it can easily hide bad consistency, so looking at standard deviation is necessary for the complete picture.
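For clarity, here is a minimal sketch of how those two bar-graph values can be derived from a per-second IOPS log; the log format and file name are illustrative assumptions rather than our actual tooling:

```python
import statistics

def consistency_metrics(iops_log, window=400):
    """Return (average IOPS, standard deviation) over the last `window` seconds."""
    tail = iops_log[-window:]  # steady-state tail of the one-hour run
    return statistics.mean(tail), statistics.stdev(tail)

# Hypothetical log: one "second,IOPS" line per second for the full hour.
with open("vector180_4k_random_write.csv") as f:
    iops_log = [float(line.split(",")[1]) for line in f if line[0].isdigit()]

avg, stdev = consistency_metrics(iops_log)
print(f"Average IOPS (last 400s): {avg:.0f}, standard deviation: {stdev:.0f}")
```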
I'm still providing the same scatter graphs too, of course. However, I decided to drop the logarithmic graphs and go linear-only, since logarithmic graphs aren't as precise and can be hard to interpret for those who aren't familiar with them. I provide two graphs: one that covers the whole duration of the test and another that focuses on the last 400 seconds for a better look at steady-state performance.
Barefoot 3 has always done well in steady-state performance and the Vector 180 is no exception. It provides the highest average IOPS by far and the advantage is rather significant at ~2x compared to other drives.
On the downside, the Vector 180 also has the highest variance in performance. While the 850 Pro, MX100 and Extreme Pro are all slower in terms of average IOPS, they are a lot more consistent, and what's notable about the Vector 180 is how consistency decreases as capacity goes up.
Default / 25% Over-Provisioning
Looking at the scatter graph reveals the source of the poor consistency: the IOPS drops to zero or near zero even before the drive reaches any kind of steady state. This is known behavior of the Barefoot 3 platform, but what's alarming is how frequently the 480GB and 960GB drives drop to zero IOPS. I don't find that acceptable for a modern high-end SSD, no matter how good the average IOPS is. Increasing the over-provisioning helps a bit by shifting the dots up, but it's still clear that 240GB is the optimal capacity for Barefoot 3, because beyond that the platform starts to run into consistency issues due to metadata handling.
Default / 25% Over-Provisioning
AnandTech Storage Bench - The Destroyer
The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.
AnandTech Storage Bench - The Destroyer

| Workload | Description | Applications Used |
|---|---|---|
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, back up system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |
The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads are run independently in the trace, but obviously there are various operations (such as backups) running in the background.
AnandTech Storage Bench - The Destroyer - Specs

| Metric | Value |
|---|---|
| Reads | 38.83 million |
| Writes | 10.98 million |
| Total IO Operations | 49.8 million |
| Total GB Read | 1583.02 GB |
| Total GB Written | 875.62 GB |
| Average Queue Depth | ~5.5 |
| Focus | Worst case multitasking, IO consistency |
The name Destroyer comes from the sheer fact that the trace contains nearly 50 million IO operations. That's enough IO operations to effectively put the drive into steady-state and give an idea of the performance in worst case multitasking scenarios. About 67% of the IOs are sequential in nature with the rest ranging from pseudo-random to fully random.
AnandTech Storage Bench - The Destroyer - IO Breakdown

| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---|---|---|---|---|---|---|---|
| % of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0% |
I've included a breakdown of the IOs in the table above, which accounts for 95.8% of the total IOs in the trace. The remaining IOs are relatively rare in-between sizes that don't have a significant (>1%) share on their own. Over half of the transfers are large (64KB or 128KB) IOs, with about a quarter being 4KB in size.
AnandTech Storage Bench - The Destroyer - QD Breakdown

| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|---|---|---|---|---|---|---|---|---|
| % of Total | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0% | 2.1% | 1.4% |
Despite the average queue depth of 5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.
The two key metrics I'm reporting haven't changed, and I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores the IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is the more critical one. As a result, I'm also reporting two new stats that give us good insight into high-latency IOs: the shares of >10ms and >100ms IOs as a percentage of the total.
I'm also reporting the total power consumed during the trace, which gives us good insight into the drive's power consumption under different workloads. It's better than average power consumption in the sense that it also takes performance into account, because a faster completion time results in fewer watt-hours consumed. Since the idle times of the trace have been truncated for faster playback, the number doesn't fully address the impact of idle power consumption, but the metric is nevertheless valuable when it comes to active power consumption.
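To make the four metrics concrete, here is a minimal sketch of how they could be computed from a parsed trace log; the per-IO record format and the 1Hz power log are assumptions for illustration, not our actual playback tooling:

```python
def trace_metrics(ios, power_samples, duration_s):
    """ios: list of (size_bytes, latency_ms) tuples, one per IO;
    power_samples: watts sampled at 1Hz; duration_s: trace playback time."""
    n = len(ios)
    data_rate_mbs = sum(size for size, _ in ios) / duration_s / 1e6  # favors large IOs
    avg_latency_ms = sum(lat for _, lat in ios) / n                  # every IO weighed equally
    pct_over_10ms = 100 * sum(lat > 10 for _, lat in ios) / n
    pct_over_100ms = 100 * sum(lat > 100 for _, lat in ios) / n
    watt_hours = sum(power_samples) / 3600                           # 1s samples -> Wh
    return data_rate_mbs, avg_latency_ms, pct_over_10ms, pct_over_100ms, watt_hours

# Toy example: two IOs (128KB fast, 4KB slow) and a ten-second power log.
print(trace_metrics([(131072, 0.4), (4096, 12.0)], [3.1] * 10, duration_s=10))
```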
For a high-end drive, the Vector 180 delivers an average data rate in our heaviest 'The Destroyer' trace. At 480GB and 960GB it's able to keep up with the Extreme Pro, but the 240GB model doesn't fare that well against the competition.
The same story continues when looking at average latency, although I have to say that the differences between drives are quite marginal. What's notable is how consistent the Vector 180 is regardless of the capacity.
On the positive side, the Vector 180 has very few high-latency IOs and actually leads the pack across all capacities.
The Vector 180 also appears to be very power efficient under load and manages to beat every other SSD I've run through the test so far. It's too bad there is no support for slumber power modes, because otherwise the Barefoot 3 seems to excel when it comes to power.
AnandTech Storage Bench - Heavy
While The Destroyer focuses on sustained and worst-case performance by hammering the drive with nearly 1TB worth of writes, the Heavy trace provides a more typical enthusiast and power user workload. By writing less to the drive, the Heavy trace doesn't drive the SSD into steady-state and thus the trace gives us a good idea of peak performance combined with some basic garbage collection routines.
AnandTech Storage Bench - Heavy

| Workload | Description | Applications Used |
|---|---|---|
| Photo Editing | Import images, edit, export | Adobe Photoshop |
| Gaming | Play games, load levels | Starcraft II, World of Warcraft |
| Content Creation | HTML editing | Dreamweaver |
| General Productivity | Browse the web, manage local email, document creation, application install, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Application Development | Compile Chromium | Visual Studio 2008 |
The Heavy trace drops virtualization from the equation and goes a bit lighter on photo editing and gaming, making it more relevant to the majority of end-users.
AnandTech Storage Bench - Heavy - Specs

| Metric | Value |
|---|---|
| Reads | 2.17 million |
| Writes | 1.78 million |
| Total IO Operations | 3.99 million |
| Total GB Read | 48.63 GB |
| Total GB Written | 106.32 GB |
| Average Queue Depth | ~4.6 |
| Focus | Peak IO, basic GC routines |
The Heavy trace is actually more write-centric than The Destroyer. Part of that is explained by the lack of virtualization, because operating systems tend to be read-intensive, be it a local or virtual system. The total number of IOs is less than 10% of The Destroyer's, so the Heavy trace is much easier on the drive and doesn't even overwrite it once.
AnandTech Storage Bench - Heavy - IO Breakdown

| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---|---|---|---|---|---|---|---|
| % of Total | 7.8% | 29.2% | 3.5% | 10.3% | 10.8% | 4.1% | 21.7% |
The Heavy trace has more focus on 16KB and 32KB IO sizes, but more than half of the IOs are still either 4KB or 128KB. About 43% of the IOs are sequential, with the rest leaning slightly more toward fully random than pseudo-random.
AnandTech Storage Bench - Heavy - QD Breakdown

| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|---|---|---|---|---|---|---|---|---|
| % of Total | 63.5% | 10.4% | 5.1% | 5.0% | 6.4% | 6.0% | 3.2% | 0.3% |
In terms of queue depths, the Heavy trace is even more focused on very low queue depths, with three fourths of the IOs happening at a queue depth of one or two.
I'm reporting the same performance metrics as in The Destroyer benchmark, but I'm now running the drive in both empty and full states. Some manufacturers tend to focus intensively on peak performance with an empty drive, but in reality the drive will always contain some data. Testing the drive in a full state tells us whether it loses performance once it's filled with data.
It sure seems like Samsung is the only manufacturer that has figured out a secret recipe to boost throughput with SATA 6Gbps because all the other drives are hitting a wall at ~290MB/s.
In terms of latency the difference between all drives is much more marginal. The Vector 180 has a small advantage over the Extreme Pro at larger capacities, although once again the 850 Pro tops the charts.
The Vector 180 is also very consistent, with only a small number of >10ms IOs. Oddly enough, the 240GB model does better when it's full, although that might just be an anomaly since it makes practically no sense.
Similar to what we saw in The Destroyer benchmark, the 240GB and 480GB Vector 180 have wonderful load power characteristics, and the difference compared to other drives is actually fairly significant.
AnandTech Storage Bench - Light
The Light trace is designed to be an accurate illustration of basic usage. It's basically a subset of the Heavy trace, but we've left out some workloads to reduce the writes and make it more read intensive in general.
AnandTech Storage Bench - Light - Specs

| Metric | Value |
|---|---|
| Reads | 372,630 |
| Writes | 459,709 |
| Total IO Operations | 832,339 |
| Total GB Read | 17.97 GB |
| Total GB Written | 23.25 GB |
| Average Queue Depth | ~4.6 |
| Focus | Basic, light IO usage |
The Light trace still has more writes than reads, but a very light workload would be even more read-centric (think web browsing, document editing, etc). It has about 23GB of writes, which would account for roughly two or three days of average usage (i.e. 7-11GB per day).
AnandTech Storage Bench - Light - IO Breakdown

| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---|---|---|---|---|---|---|---|
| % of Total | 6.2% | 27.6% | 2.4% | 8.0% | 6.5% | 4.8% | 26.4% |
The IO distribution of the Light trace is very similar to the Heavy trace, with slightly more IOs being 128KB. About 70% of the IOs are sequential, though, which is a major difference compared to the Heavy trace.
AnandTech Storage Bench - Light - QD Breakdown

| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|---|---|---|---|---|---|---|---|---|
| % of Total | 73.4% | 16.8% | 2.6% | 2.3% | 3.1% | 1.5% | 0.2% | 0.2% |
Over 90% of the IOs have a queue depth of one or two, which further underlines the importance of low queue depth performance.
The Barefoot 3's focus has always been sustained rather than peak performance, and that's visible in our Light trace. The Vector 180 isn't slow by any means, though, since most drives (excluding the 850 Pro and Neutron XT) are within a 10% margin of each other.
The same applies to latency where most drives are essentially on par with each other.
While the 850 Pro does very well when it comes to performance, it's also the most power hungry, whereas the Vector 180 at smaller capacities is again easily the most power efficient drive.
Random Read Performance
One of the major changes in our 2015 test suite is the synthetic Iometer tests we run. In the past we tested just one or two queue depths, but real world workloads always contain a mix of different queue depths, as shown by our Storage Bench traces. To get the full scope of performance, I'm now testing various queue depths starting from one and going all the way up to 32. I'm not testing every single queue depth, but merely how the throughput scales with the queue depth. I'm using exponential scaling, meaning that the tested queue depths increase in powers of two (i.e. 1, 2, 4, 8...).
Read tests are conducted on a full drive because that is the only way to ensure that the results are valid (testing with an empty drive can substantially inflate the results and in reality the data you are reading is always valid rather than full of zeros). Each queue depth is tested for three minutes and there is no idle time between the tests.
I'm also reporting two metrics now. For the bar graph, I've taken the average of QD1, QD2 and QD4 data rates, which are the most relevant queue depths for client workloads. This allows for easy and quick comparison between drives. In addition to the bar graph, I'm including a line graph, which shows the performance scaling across all queue depths. To keep the line graphs readable, each drive has its own graph, which can be selected from the drop-down menu.
I'm also plotting power for SATA drives and will be doing the same for PCIe drives as soon as I have the system set up properly. Our datalogging multimeter logs power consumption every second, so I report the average for every queue depth to see how the power scales with the queue depth and performance.
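Putting the methodology together, the sweep-and-report logic looks roughly like the sketch below; run_iometer is a hypothetical stand-in for an actual three-minute Iometer pass and returns dummy numbers:

```python
def run_iometer(qd):
    """Placeholder for a three-minute Iometer pass at queue depth `qd`;
    returns (data rate in MB/s, average power in W). Dummy numbers only."""
    return 300.0 + 20 * qd, 2.0 + 0.1 * qd

queue_depths = [2 ** i for i in range(6)]              # 1, 2, 4, 8, 16, 32
results = {qd: run_iometer(qd) for qd in queue_depths}  # no idle time in between

# Bar graph value: mean of the QD1/QD2/QD4 data rates (the most relevant
# client queue depths); the line graphs plot every point in `results`.
bar_value = sum(results[qd][0] for qd in (1, 2, 4)) / 3
print(f"Bar graph data rate: {bar_value:.1f} MB/s")
```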
Random read performance at small queue depths has never been a strength of the Vector 180's platform. Given that small-queue-depth random reads are among the most common IOs, it's an area where I would like to see improvement on OCZ's behalf.
Power consumption, on the other hand, is excellent, which is partially explained by the lower performance.
A closer look at the performance data across all queue depths reveals the reason for the Vector 180's poor random read performance: for some reason, performance only starts to scale properly after a queue depth of 4, and even then the scaling isn't as aggressive as on some other drives.
Random Write Performance
Write performance is tested in the same way as read performance, except that the drive is in a secure erased state and the LBA span is limited to 16GB. We already test performance consistency separately, so a secure erased drive and limited LBA span ensures that the results here represent peak performance rather than sustained performance.
In random write performance the Vector 180 does considerably better, although it's still not the fastest drive around.
Even though the random write performance doesn't scale at all with capacity, the power consumption does. Still, the Vector 180 is quite power efficient compared to other drives.
The Vector 180 scales smoothly across all queue depths, but it could scale a bit more aggressively, as especially the QD4 score is a bit low. On the positive side, the Vector 180 does very well at QD1.
Sequential Read Performance
Our sequential tests are conducted in the same manner as our random IO tests. Each queue depth is tested for three minutes without any idle time in between the tests and the IOs are 4K aligned similar to what you would experience in a typical desktop OS.
Sequential read performance is decent, but it leaves a bit to be desired compared to the other high-end SSDs.
Fortunately the power characteristics are still very good despite the slight lack of performance.
The performance at queue depths of 1 and 2 (i.e. the most common ones) leaves room for improvement, but practically every drive is maxing out SATA 6Gbps at QD4 and higher.
Sequential Write Performance
Sequential write testing differs from random testing in the sense that the LBA span is not limited. That's because sequential IOs don't fragment the drive, so the performance will be at its peak regardless.
The Vector 180 doesn't do any better in sequential writes, and especially the 960GB model is surprisingly slow. It's quite evident that the Barefoot 3 was never designed with such a large capacity in mind, as there is clearly some performance loss due to the additional LBA tracking that the extra NAND requires.
This time the power consumption isn't too good either.
While the performance scales pretty nicely, the Vector 180 seems to hit a wall at 500MB/s (450MB/s for the 960GB model). That's pretty far from the 530MB/s that OCZ rates the sequential write at.
Mixed Random Read/Write Performance
Mixed read/write tests are also a new addition to our test suite. In real world applications a significant portion of workloads are mixed, meaning that there are both read and write IOs. Our Storage Bench benchmarks already illustrate mixed workloads by being based on actual real world IO traces, but until now we haven't had a proper synthetic way to measure mixed performance.
The benchmark is divided into two tests. The first one tests mixed performance with 4KB random IOs at six different read/write distributions starting at 100% reads and adding 20% of writes in each phase. Because we are dealing with a mixed workload that contains reads, the drive is first filled with 128KB sequential data to ensure valid results. Similarly, because the IO pattern is random, I've limited the LBA span to 16GB to ensure that the results aren't affected by IO consistency. The queue depth of the 4KB random test is three.
Again, for the sake of readability, I provide both an average based bar graph as well as a line graph with the full data on it. The bar graph represents an average of all six read/write distribution data rates for quick comparison, whereas the line graph includes a separate data point for each tested distribution.
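For reference, here is a minimal sketch of the six test phases and the bar-graph average; run_mixed is a hypothetical stand-in for an actual Iometer pass at a given read/write mix and returns dummy numbers:

```python
def run_mixed(read_pct):
    """Placeholder for a 4KB random test (QD3, 16GB LBA span) at the given
    read percentage; returns a dummy data rate in MB/s."""
    return 150.0 + read_pct

mixes = [100, 80, 60, 40, 20, 0]                 # % reads; writes make up the rest
rates = {pct: run_mixed(pct) for pct in mixes}   # line graph: one point per mix
bar_value = sum(rates.values()) / len(rates)     # bar graph: average of all six
print(f"Mixed 4KB random average: {bar_value:.1f} MB/s")
```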
The Vector 180 does better in mixed 4KB random IO than the 850 Pro, but it's a bit slower than the rest of the drives.
Fortunately the power consumption is still excellent.
The Vector 180's problem is its low random read performance, which shows in how the results improve as more writes are thrown into the mix.
Mixed Sequential Read/Write Performance
The sequential mixed workloads are also tested with a full drive, but I've not limited the LBA range, as that's not needed with sequential data patterns. The queue depth for these tests is one.
In the mixed sequential tests the Vector 180 does slightly better, in the sense that the difference between drives is on the order of 10% when excluding the 850 Pro.
Similar to what we saw in the sequential tests, the write power consumption is fairly high, which also increases the average power consumption, so the Vector 180 no longer enjoys an advantage over the other drives.
Vector 180's "bathtub" curve is pretty average, but as we can see here the power scales as soon as the portion of writes is increased, which isn't unique but for instance the 850 Pro and Extreme Pro don't exhibit such behavior.
ATTO - Transfer Size vs Performance
I'm keeping our ATTO test around because it's a tool that can easily be run by anyone and it provides a quick look into performance scaling across multiple transfer sizes. I'm providing the results in a slightly different format because the line graphs didn't work well with multiple drives, and creating the graphs was rather painful since the results had to be manually inserted cell by cell as ATTO doesn't provide a 'save as CSV' functionality.
The Vector 180's performance across different IO sizes doesn't scale as smoothly as on some other drives, and especially the read performance has room for improvement.
AS-SSD Incompressible Sequential Performance
I'm also keeping AS-SSD around as it's freeware like ATTO and can be used by our readers to confirm that their drives operate properly. AS-SSD uses incompressible data for all of its transfers, so it's also a valuable tool when testing SandForce based drives that perform worse with incompressible data.
Idle Power Consumption
Since we truncate idle times to 25µs in our Storage Bench traces, they don't give a fully accurate picture of real world power consumption, as idle power consumption is not properly taken into account. Hence I'm still reporting idle power consumption as a separate benchmark, because it's one of the most critical metrics when it comes to evaluating an SSD for mobile use.
Unfortunately I still don't have a way to test DevSleep power consumption due to lack of platform support, but my testbed supports HIPM+DIPM power commands (also referred to as Slumber power), so the results give a rather accurate picture of real-world idle power consumption.
As we've known for quite some time, OCZ's Barefoot 3 platform doesn't support slumber power modes at all, which results in rather high idle power consumption at nearly one watt for the highest capacity 960GB model. The problem lies in the silicon itself, so instead of spending resources on another silicon iteration OCZ has decided to focus on its next generation JetExpress platform that will carry support for slumber power states as well as DevSleep.
TRIM Validation
The move from Windows 7 to 8.1 introduced some problems with the methodology we previously used to test TRIM functionality, so I had to come up with a new way to test it. I tried a couple of different methods, but ultimately I decided to go with the easiest one, which can be used by anyone. The software is simply called trimcheck and it was made by a developer who goes by the name CyberShadow on GitHub.
Trimcheck tests TRIM by creating a small, unique file and then deleting it. Next the program will check whether the data is still accessible by reading the raw LBA locations. If the data that is returned by the drive is all zeros, it has received the TRIM command and TRIM is functional.
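trimcheck itself is a Windows tool, but the underlying idea is simple enough to sketch. Below is a rough Linux-only approximation of the same logic, not CyberShadow's actual implementation; everything here is an assumption for illustration (it needs root, a filesystem created directly on the raw device so the FIBMAP offsets line up, and a TRIM-capable drive and kernel):

```python
import fcntl, os, struct, subprocess, time

MOUNT, DEV = "/mnt/ssd", "/dev/sdb"            # hypothetical test drive

path = os.path.join(MOUNT, "trimcheck.bin")
marker = os.urandom(16).hex().encode() * 128   # 4KB of unique, recognizable data

# 1. Write the marker file and flush it all the way to the drive.
with open(path, "wb") as f:
    f.write(marker)
    f.flush(); os.fsync(f.fileno())

# 2. Ask the filesystem where the file physically lives (FIBMAP ioctl = 1);
#    the result is in filesystem-block units relative to the filesystem start.
with open(path, "rb") as f:
    block = struct.unpack("i", fcntl.ioctl(f.fileno(), 1, struct.pack("i", 0)))[0]
offset = block * os.statvfs(MOUNT).f_bsize     # byte offset on the device

# 3. Delete the file, force a TRIM of the freed blocks and give the drive time.
os.remove(path)
subprocess.run(["fstrim", MOUNT], check=True)  # needed unless mounted with -o discard
time.sleep(30)

# 4. Read the raw LBAs back: all zeros means the drive honored the TRIM.
with open(DEV, "rb") as raw:
    raw.seek(offset)
    data = raw.read(len(marker))
print("TRIM functional" if data == b"\x00" * len(marker) else "marker still readable")
```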
And as expected TRIM works like it should.
Final Words
From a technological standpoint, the Vector 180 is the most interesting Barefoot 3 based SSD that we have seen in a while. With partial power loss protection (PFM+) and a new 960GB capacity, it's able to bring something new to the now two and a half year old Barefoot 3 platform and most importantly it offers some long desired differentiation to OCZ's client SSD lineup.
I'm not sure what to think about PFM+ because drive bricking due to sudden power losses is fortunately quite rare, and if it were a critical issue then all client-grade SSDs should incorporate some level of physical power loss protection -- not just the high-end drives. In my mind it mainly offers an extra layer of protection and peace of mind, but OCZ would have had to go with full power loss protection to add real value for the end-user (although in that case OCZ would have jeopardized its own enterprise SSD sales). I think PFM+ is a nice addition and at least brings something slightly new to the market, but I wouldn't consider it a decisive selling point because any user who really needs power loss protection will still have to look at enterprise drives.
One point I want to bring up is the performance consistency at different capacities. It really looks like the Barefoot 3 was designed with 240/256GB in mind, as going above that results in some issues with performance consistency. It's normal for companies to have an optimal capacity in mind when designing a controller, because optimizing for higher capacities always requires more processing power due to the additional metadata handling, which in turn results in higher cost. Back in 2012 when the Barefoot 3 launched, the price per GB was nearly double what it is today, so it made sense to focus on the 120GB and 240GB capacities since 480GB and higher were a small niche due to the high price. Fortunately the IO consistency issues didn't translate to our Storage Benches, but there are still better optimized high capacity SSDs available that don't have any consistency issues.
It's also too bad that the Barefoot 3 lacks support for slumber power states because its active power consumption is simply the best we have tested so far (excluding the 960GB model) and the difference in favor of the Vector 180 is in fact quite substantial. The Vector 180 would be a killer for mobile use if it had proper slumber power management, but since the idle power consumption is ~700mW at its best whereas other drives are able to achieve 20mW, I just can't recommend the Vector 180 or any Barefoot 3 SSD for a laptop/tablet. OCZ's next generation controller, the JetExpress, will support DevSleep and slumber power states and I certainly hope it will share Barefoot 3's excellent active power consumption behavior.
Amazon Price Comparison (3/24/2015)

| | 120/128GB | 240/250/256GB | 480/500/512GB | 960GB/1TB |
|---|---|---|---|---|
| OCZ Vector 180 (MSRP) | $90 | $150 | $275 | $500 |
| OCZ Vertex 460A | $65 | $106 | $200 | - |
| OCZ ARC 100 | $60 | $85 | $157 | - |
| Corsair Neutron XT | - | $170 | $260 | $540 |
| Crucial MX100 | $72 | $110 | $209 | - |
| Intel SSD 730 | - | $144 | $240 | - |
| Samsung SSD 850 EVO | $70 | $117 | $210 | $370 |
| Samsung SSD 850 Pro | $100 | $155 | $290 | $500 |
| SanDisk Extreme Pro | - | $146 | $285 | $475 |
| Transcend SSD370 | $58 | $90 | $175 | $360 |
Since the Vector 180 is OCZ's flagship, it's also priced accordingly. The MSRPs are very close to what the 850 Pro and Extreme Pro currently retail for, and it's clear that OCZ considers the two direct competitors to the Vector 180. The problem, however, is that the 850 Pro is the better drive: it's faster, more durable, and has a longer warranty and better hardware encryption support (Opal & eDrive). The only area where the Vector 180 can compete is price, which isn't happening at these MSRPs (of course, actual street pricing may end up being different).
All in all, despite PFM+ and the 960GB capacity, the Vector 180 is ultimately the same Barefoot 3 that we have seen numerous times already, with a natural transition to Toshiba's more cost-effective A19nm NAND. The performance is good and roughly on par with the Extreme Pro, but it's not high enough for the Vector 180 to truly have an advantage over other high-end drives. To be frank, there's no arguing that the 850 Pro is the clear leader when it comes to SATA 6Gbps performance. On the other hand, given that client PCIe SSDs are only a quarter or two away, I think anyone who is considering a high-end SSD should hold off on their purchase for now. There is no point in upgrading from one SATA SSD to another at this point because the performance benefit will be marginal compared to what PCIe will bring to the table, so you will simply get far more value for your money if you wait a bit. That's also where OCZ's focus is right now, and the JetExpress definitely looks promising.