A 5 year warranty is a pretty solid commitment on the part of a manufacturer. I don't think they would have done that if they didn't trust the stability of the hardware, so they really put their money where their mouth is.
Other thing: is the Indilinx co-processor 'Argon' or 'Aragon'? Pic differs from your text description.
Nah, you've got it all wrong unfortunately - they've bet the farm on this drive and if it fails they won't be around in five years to honor those warranties.
When you've got nothing, you've got nothing to lose.
Well that, but I'm glad to see OCZ committing more to their drives... on my local price check there's Agility, Colossus, Enyo, Ibis, Lightfoot, Octane, Onyx, Petrol, RevoDrive, Synapse, Vertex and Z-Drive, not counting numbering or variations like Vertex EX, Vertex Limited Edition and Vertex Turbo, and using a zillion different controllers and stuff. The warranty is also an indication that this is the technology they'll continue working on and fixing bugs for, which is good because their attention span has been too short and spread too thin. It's better to have a few models that kick ass than dozens of models that are all shoddy.
alacard may be right, OCZ is sliding closer to the cliff as we speak. There's so much competition in the SSD market that someone's got to go sooner or later, and it will probably be the less diversified companies that go first. I recently bought a Vertex 4 128 for my boot drive, and it lasted only 15 days before it disappeared and refused to be recognized in the BIOS. The Crucial M4 128 that replaced it has the problem of disappearing every time the power is shut off suddenly (or with the power button after Windows hangs), but it comes back after a couple of reboots and a resetting of your boot priorities. And it's regarded as one of the most reliable drives out there. So in order for OCZ to remain solvent, the Vector must be super reliable and stable, and absolutely must stay visible in the BIOS at all times. If it's plagued by the same problems as the Vertex 4, it's time to cash out and disappear before the bankruptcy court has its way.
Actually that connection is indeed a physically identically sized/compatible m-SATA connection. The problem is its inability to actually plug in, due to the SSD's general size, and whether it's able to communicate with the typical m-SATA ports on mobos. http://www.pclaunches.com/entry_images/1210/22/tra... should give a decent example.
Might be a sign of something else in the works from OCZ, like an mSATA cable to plug into it or something, maybe something even more awesome like doubling the bandwidth by connecting it to an OCZ PCI break-off board. I guess we will see.
If you've got a motherboard with SATA 6Gb/s you would probably notice a difference. Whether it's worth it is up to you - do you do a lot of disk-intensive work to the point where you wish it were faster? While I'm sure the difference would be noticeable, it might not be huge or worth spending $200+ on.
It's going to take more than a nice type written letter to resolve the many product and service issues at OCZ - if they stay in business over the next six to 12 months.
FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists. In addition a five year warranty does not mean that a particular product is any better than a product with a one year warranty. For each extended year of warranty, the product price increases. So you're paying for something you may or may not ever use.
In addition it's useful to read the fine print on warranties. Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect. If you've ever seen some of the "reconditioned" or "refurbished" mobos from Asus or similar products from other companies, you'd never install them in your PC.
People reach many untrue conclusions about product quality based on the warranty.
So, a longer warranty is only good if you use it? Otherwise you're paying for something you don't need?
And, you're paying extra for a 5-year warranty here? What, so all these top end SSDs, whose prices are lower than ever, are in fact over-priced with fake expensive warranties, so should come out with 1-year warranties and lower prices?
a refurbished SSD? I'm not even sure what that means. That's like going to McDonald's and getting a refurbished McFlurry. It doesn't even make sense.
This isn't a laptop, where worn parts can be replaced. This is a limited lifespan, consumable product, where replacing any parts is equivalent to throwing the old one away and pulling out a brand new one. If the warranty actually says this, then please, point me to it, but otherwise, I'm gonna have to call this bluff and say it's not practical.
The point that some of you seem to not understand is that the 5 year warranty does NOT mean that an SSD or other product is any better quality than a product with a one year warranty. And yes, you are paying for the extended warranty no matter what the current price. SSD prices are dropping as the cost to produce them is dropping. This particular OCZ model is not a high-end model by any stretch, it's just the SSD-of-the-week to be superseded by a new model in a month or two.
Refurbished can mean hand soldered chip replacement or other poorly executed repairs that would not be acceptable to most technically knowledgeable consumers. Reconditioned can mean it's been laying in the warehouse collecting dust for six months and nothing was actually done to repair it when it was returned defective. You would not believe some of the crap that ships as replacement warranty products.
^^^ I'm with Beenthere. A 5 year warranty means a 5 year warranty; nothing more nothing less. The notion that '5 year warranty = great product!' is asinine.
I think if you want to assume anything based off a 5 year warranty in this case, it's because the product is new, the controller is relatively new, and it's an OCZ SSD product.
I'm not likely to buy an OCZ SSD anytime soon, but I'd definitely rather buy one with a 5 year warranty than a 1 or 3 year warranty....if I have to buy an OCZ branded SSD because every other brand is sold out.
I owned a 30GB Vertex. For 9 months, it was great. Then it turned into a big POS. Constant chkdsk errors. I did a sanitary erase/firmware flash and sold it for what I could get for it.
I certainly would not want a refurbished SSD. It would NOT mean new NAND chips, which are the parts most likely to be a problem. Or a new controller. I would never buy a refurbished HDD either. These devices do have lifetimes. Since you have no idea how these drives have been used, or abused, you are taking a very big chance for the dubious opportunity of saving a few bucks.
I can't help but wonder how many replacement SSDs it will take to get to the end of that 5 year warranty. If you go by the track record of Vertex 3 & 4, you can expect a failure about every 90 days, so that's 20 drives, less shipping time to and from, so call it 15 drives with a total downtime of 1.25 years. Wow!.. where can I get one? My Vertex 4 lasted 15 days, but I'm sure that was just a fluke...
I basically agree. From anecdotal reports, OCZ is one of the least reliable vendors, with their drives less reliable than the average HDD. And with average SSD reliability so far being about the same as the average HDD, despite people's expectations, that isn't good.
Most people don't need the really high speeds a few of these drives support, higher reliability would be a much better spec to have. Unfortunately, these reviews can't indicate how reliable these drives will be in the longer term.
While I see that OCZ seems to be thought of as failing, this is the first I've heard of it. Have their sales collapsed of late? I was surprised to find that their long time CEO whom Anand had communicated so often with in the past is gone.
"FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists." <- Depends on how you purchase it. Credit card companies will often honour warranties on products purchased from defunct companies. YMMV.
"Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect." <- Happily now everyone in the thread after you has used this conjecture to knock OCZ warranties. That's not really your fault, but I don't think anyone here has read the terms of OCZ's warranty on this product yet?
The point being made here is that OCZ would not offer a 5 year warranty on the product if they thought the cost of honouring that warranty would eclipse their income from sales. This is why 1-year warranties are a red flag. So *something* can be inferred from it; just only about the manufacturer's confidence in their product. You can read into that whatever you want, but I don't generally find that companies plan to be out of business within their warranty period.
Your comment about it increasing the price of the product is odd, because this product is the same price and specification as models with shorter warranties. So either a) you're wrong, or b) you're trivially correct.
Hear, hear. I second that. I am so tired of getting worn refurbished parts for things I just bought BRAND NEW. CoolerMaster just did this for a higher end power supply I bought. Why would I want to spend a hundred dollars for a used PSU? Seriously. Now all the components in it aren't new. Once the warranty expires it'll die right away. Where is the support behind products these days?
It used to be that buying American meant you got quality and customer service. Gone are those days I guess, since all the corporations out there are about to start actually paying taxes.
The GiB/GB bug in Windows accounts for almost all of the difference. It is not worth mentioning that partitioning usually leaves 1MiB of space at the beginning of the drive. 256GB = 238.4186GiB. If you subtract 1MiB from that, it is 238.4176GiB. So why bother to split hairs?
This is correct. I changed the wording to usable vs. formatted space, I was using the two interchangeably. The GiB/GB conversion is what gives us the spare area.
When people see their 1TB-labelled drive displays only 931GB in Windows, they assume it's because formatting a drive with NTFS magically causes it to lose 8% of space, which is totally false. Here's a short explanation for newbie readers. A gigabyte (GB) as displayed in Windows is actually a gibibyte (GiB).
SSDs and HDDs are labelled differently in terms of space. Let's say they made a spinning hard disk with exactly 256GB (238GiB) of space. It would appear as 238GB in Windows, even after formatting. You didn't lose anything, because the other 18 gigs was never there in the first place.
Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.
If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows.
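To put rough numbers on the explanation above (a throwaway Python sketch of the arithmetic, not anything from the review):

GB, GIB = 10**9, 2**30

labelled = 256 * GB                # what the box says, and the usable space
raw_nand = 256 * GIB               # flash physically on the drive, per the review

print(labelled / GIB)              # ~238.4 -> the "238GB" Windows reports
print(1 - labelled / raw_nand)     # ~0.069 -> the ~7% that ends up as spare area

The second number is the whole trick: the gap between a decimal 256GB and a binary 256GiB is about 7%, which is roughly the spare area these consumer drives keep.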
Tbh imho using base 10 units in a binary environment is just asking for a facepalm. Everything underneath runs on 2^n anyway and this new "GB" vs "GiB" is just commercial bullshit so storage devices can be sold with flashier stickers. Your average RAID controller BIOS will show a 1TB drive as a 931GB one as well (at least the few ICHxR and the one server Adaptec I have access to right now all do).
What does that mean, usable space? Every OS leaves a different amount after formatting, so whether the drive is rated in GB or GiB, the end result would be different. Normally, SSDs are rated by the amount seen by the OS, not by that plus the amount overprovisioned. So it isn't really a problem.
Actually, the differences we're talking about aren't all that much, and are more a geeky thing to concern oneself with than anything else. Drives are big enough, even SSDs, that a few GB more or less isn't such a big deal.
An SSD can't operate without any over-provisioning. If you filled the whole drive, you would end up in a situation where the controller couldn't do garbage collection or any other internal tasks because every block would be full.
Drive manufacturers are not the issue here, Microsoft is (in my opinion). They are using GB while they should be using GiB, which causes this whole confusion. Or just make GB what it really is, a billion bytes.
Sorry to say so, but I am afraid you look at this from the wrong perspective. Unless you are an IT specialist, you go buy a drive that says 256GB and expect it to have 256GB capacity. You don't care how much additional space is there for replacement of bad blocks or how much is there for internal drive usage... so you will get pretty annoyed by the fact that your 256GB drive has, let's say, 180GB of usable capacity.
And now this GB vs GiB nonsense. From one point of view it's obvious that the k, M, G, T prefixes are by default 10^3, 10^6, 10^9, 10^12... But in computer capacity units they used to be based on 2^10, 2^20 etc. to allow some reasonable recalculation between capacity, sectors and clusters of the drive. No matter which way you prefer, the fact is that Windows, as well as many IDE/SATA/SAS/SCSI controllers, counts a GB as equal to 2^30 bytes.
Also, if you say the Windows measurement is wrong, why is RAM capacity shown in 'GB' when your 16GB, as shown in EVERY BIOS in the world, is in fact 16384MiB?
Tbh there is a big mess in these units, and pointing at one thing to blame is a very hasty decision.
Also, up to some point HDD capacity used to be in 2^k prefixes a long time ago as well... I've still got an old 40MB Seagate that is actually 40MiB and a 205MB WD that is actually 205MiB. CD-Rs claiming 650/700MB are in fact 650/700MiB of usable capacity. But then something changed and your 4.7GB DVD-R is in fact 4.37GiB of usable capacity. And the same with hard discs...
Try to explain to angry customers in your computer shop that the 1TB drive you sold them shows up as 931GB unformatted, both in the controller and in Windows.
Imho nobody would care the slightest bit that k,M,G in computers are base 2 as long as some marketing twat didn't figure out that his drive could be a bit "bigger" than competition by sneaking in different meaning for the prefixes.
It is absurd to claim that "some marketing twat didn't figure out that his drive could be a bit "bigger" than competition by sneaking in different meaning for the prefixes".
The S.I. system of units prefixes for K, M, G, etc. has been in use since before computers were invented. They have always been powers of 10. In fact, those same prefixes were used as powers of ten for about 200 years, starting with the introduction of the metric system.
So those "marketing twats" you refer to are actually using the correct meaning of the units, with a 200 year historical precedent behind them.
It is the johnny-come-latelys that began misusing the K, M, G, ... unit prefixes.
Fortunately, careful people have come up with a solution for the people incorrectly using the metric prefixes -- it is the Ki, Mi, Gi prefixes.
Unfortunately, Microsoft persists in misusing the metric prefixes, rather than correctly using the Ki, Mi, Gi prefixes. That is clearly a bug in Microsoft Windows. Kristian is absolutely correct about that.
No, he is right. Everything was fine until HDD guys decided they could start screwing customers for bigger profits. Microsoft and everyone else uses GB as they should with computers. It was HDD manufacturers that caused this whole GB/GiB confusion regarding capacity.
Well, the 2^10k prefixes marked with 'i' were introduced by the IEC in 1998 and by the IEEE in 2005; alas, history shows frequent usage of both the 10^3k and 2^10k meanings. Even with the IEEE standard passed in 2005, it took another 4 years for Apple (who were the first with an OS running with 2^10k) to turn to the 'i' units, and a year later for Ubuntu with the 10.10 version.
For me it will always make more sense to use 2^10k since I can easily tell the size in kiB, MiB, GiB etc. just by bit masking and shifting, e.g. (size & 11111111110000000000[2]) >> 10 (for kiB). And I am way too used to k, M, G with a byte being counted in 2^10k.
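The same idea in a quick Python sketch (plain right shifts are enough; the masking above is only needed if you want to peel out one "digit" at a time):

size = 123_456_789        # bytes, arbitrary example value

print(size >> 10)         # 120563 KiB (i.e. size // 2**10)
print(size >> 20)         # 117 MiB
print(size >> 30)         # 0 GiB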
"Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.
If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows. "
I'm pretty sure he's referring to the amount of NAND on the drive minus the 6.8% set aside as spare area, not the old mechanical meaning where you "lost" disk space when a drive was formatted because of base 10 to base 2 conversion.
How long does the heavy test take? The longest recorded busy time was 967 seconds, from the Crucial M4. This is only 16 minutes of activity. Does the trace replay in real time, or does it run compressed? 16 minutes surely doesn't seem like that much of a long test.
Yes, I took note of that :). That is the reason for the question though; if we had an idea of how long the idle periods were, we could take into account the amount of time the GC on each drive functions, and how well.
Wouldn't this compress the QD during the test period? If the SSD's recorded activity is QD2 for an hour and the trace is replayed quickly, this creates a high-QD situation. QD2 for an hour compressed to 5 minutes is going to play back at a much higher QD.
I would love to have seen results using the 1.5 firmware for the 256GB Vertex 4. Going from 1.4 to 1.5 is non destructive. The inconsistency of graphs in other SSD reviews that included the 512GB Vertex 4 drive with 1.5 firmware and the 256GB Vertex 4 drive with 1.4 firmware drove me nuts.
When I saw the Barefoot 3 press release on Yahoo Finance, I immediately went to your site hoping to see the review. I was happy to see the article up, but when I saw your review sample was 256GB I feared you would not have updated the firmware on the Vertex 4 yet. Unfortunately, my fears were confirmed. I love your site, that's why I'm sharing my $.02 as a loyal reader.
Some of the results are actually using the 1.5 firmware (IO consistency, steady state 4KB random write performance). We didn't notice a big performance difference between 1.4 and 1.5 which is why I didn't rerun on 1.5 for everything.
Isn't this similar? SandForce came in and reached the top speed of SATA 6Gbps, then the other controllers, Marvell and Barefoot, managed to catch up. That is exactly what happened before with the SATA 3Gbps port. So in 2013 we will have controllers and SSDs all offering similar performance, bottlenecked by the port speed.
When are we going to see SATA Express that give us 20Gbps? We need those ASAP.
SATA Express (on PCIe 3.0) will top out at 16 Gbps until PCIe 4.0 is out. This is the same bandwidth as single-channel DDR3-2133, by the way, so 16 Gbps should be plenty of performance for the next several years.
It is good to see AnandTech including results of performance consistency tests under a heavy write workload. However, there is a small addition you should make for these results to be much more useful.
You fill the SSDs up to 100% with sequential writes and, I assume (I did not see a specification in your article), do 100% full-span 4K QD32 random writes. I agree that will give a good idea of worst-case performance, but unfortunately it does not give a good idea of how someone with that heavy a write load would use these consumer SSDs.
Note that the consumer SSDs only have about 7% spare area reserved. However, if you overprovision them, some (all?) of them may make good use of the extra reserved space. The Intel S3700 only makes available 200GB / 264GiB of flash, which comes to 70.6% available, or 29.4% of the on-board flash is reserved as spare area.
What happens if you overprovision the Vector a similar amount? Or to take a round number, only use 80% of the available capacity of 256GB, which comes to just under 205GB.
I don't know how well the Vector uses the extra reserved space, but I do know that it makes a HUGE improvement on the 256GB Samsung 840 Pro. Below are some graphs of my own tests on the 840 Pro. I included graphs of throughput vs. GB written, as well as latency vs. time. On the 80% graphs, I first wrote to all the sectors up to the 80% mark, then I did an 80% span 4K QD32 random write. On the 100% graphs, I did basically the same as AnandTech did, filling up 100% of the LBAs then doing a 100% full-span 4K QD32 random write. Note that when the 840 Pro is only used up to 80%, it improves by a factor of about 4 in throughput, and about 15 in average latency (more than a 100 times improvement in max latency). It is approaching the performance of the Intel S3700. If I used 70% instead of 80% (to match the S3700), perhaps it would be even better.
Excellent testing, very relevant, and thanks for sharing. How do you feel that the lack of TRIM in this type of testing affects the results? Do you feel that testing without a partition and TRIM would not provide an accurate depiction of real world performance?
I just re-read your comment, and I thought perhaps you were asking about sequence of events instead of what I just answered you. The sequence is pretty much irrelevant since I did a secure erase before starting to write to the SSD.
1) Secure erase SSD 2) Write to all LBAs up to 80% 3) 80% span 4KQD32 random write
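For what it's worth, those steps map onto a fio job file roughly like this (a hedged sketch, not the exact script used for the graphs; /dev/sdX is a placeholder for the SSD under test, the whole run is destructive, and the secure erase from step 1 is done separately, e.g. with hdparm):

[global]
filename=/dev/sdX
direct=1
ioengine=libaio

[fill-80pct]
; step 2: sequentially write the LBAs up to the 80% mark
rw=write
bs=128k
size=80%

[randwrite-80pct]
; step 3: 4K QD32 random writes over the same 80% span
stonewall
rw=randwrite
bs=4k
iodepth=32
size=80%
runtime=1800
time_based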
You are correct, I ran a 100% span of the 4KB/QD32 random write test. The right way to do this test is actually to gather all IO latency data until you hit steady state, which you can usually do on most consumer drives after just a couple of hours of testing. The problem is the resulting dataset ends up being a pain to process and present.
There is definitely a correlation between spare area and IO consistency, particularly on drives that delay their defragmentation routines quite a bit. If you look at the Intel SSD 710 results you'll notice that despite having much more spare area than the S3700, consistency is clearly worse.
As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up). I think there's definitely value in looking at exactly what you're presenting here. The interesting aspect to me is this tells us quite a bit about how well drives make use of empty LBA ranges.
I tend to focus on the worst case here simply because that ends up being what people notice the most. Given that consumers are often forced into a smaller capacity drive than they'd like, I'd love to encourage manufacturers to pursue architectures that can deliver consistent IO even with limited spare area available.
Anand wrote: "As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up)."
Actually, all of my tests did use up all the spare area, and had reached steady state during the graph shown. Perhaps you have misunderstood how I did my tests. I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests.
The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough.
It reaches steady state somewhere between 80 and 120GB. The spare area is used up at about 62GB and the speed drops precipitously, but then there is a span where the speed actually increases slightly, and then levels out somewhere around 80-120GB.
Note that steady state is about 110MB/sec. That is about 28K IOPS. Not as good as the Intel S3700, but certainly approaching it.
Hey J, thanks for taking the time to reply to me in the other comment. I think my question is even more noobish than you have assumed.
"I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests."
I am confused because I thought the only way to "over-provision" was to create a partition that didn't use all the available space??? If you are merely writing raw data up to the 80% full level, what exactly does over provisioning mean? Does the term "over provisioning" just mean you didn't fill the entire drive, or you did something to the drive?
No, overprovisioning generally just means that you avoid writing to a certain range of LBAs (aka sectors) on the SSD. Certainly one way to do that is to create a partition smaller than the capacity of the SSD. But that is completely equivalent to writing to the raw device but NOT writing to a certain range of LBAs. The key is that if you don't write to certain LBAs, however that is accomplished, then the SSD's flash translation layer (FTL) will not have any mapping for those LBAs, and some or all SSDs will be smart enough to use those unmapped LBAs as spare area to improve performance and wear-leveling.
So no, I did not "do something to the drive". All I did was make sure that fio did not write to any LBAs past the 80% mark.
"The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough."
WOW - this is an interesting discussion which concludes that by simply over-provisioning a consumer SSD by 20-30% those units can approach the vetted S3700! I had to re-read those posts 2x to be sure I read that correctly.
It seems some later posts state that if the workload is not sustained (drive can recover) and the drive is not full, that the OP has little to no benefit.
So is the best bang for the buck really just to not fill the drives past 75% of the available area and call it a day?
The conclusion I draw from the data is that if you have a Samsung 840 Pro (or similar SSD, I believe several consumer SSDs behave similarly with respect to OP), and the big one -- IF you have a very heavy, continuous write workload, then you can achieve large improvements in throughput and huge improvements in maximum latency if you overprovision at 80% (i.e., leave 20% unwritten or unpartitioned)
Note that such OP is not needed for most desktop users, for two reasons. First, most desktop users will not fill the drive 100% and as long as they have TRIM working, and if the drive is only filled to 80% (even if the filesystem covers all 100%), then it should behave as if it were actually overprovisioned at 80%. Second, most desktop users do not continuously write tens of Gigabytes of data without pause.
By the way, I am not sure why you say the data sets are "a pain to process and present". I have written some test scripts to take the data automatically and to produce the graphs automatically. I just hot-swap the SSD in, run the script, and then come back when it is done to look at the graphs.
Also, the best way to present latency data is in a cumulative distribution function (CDF) plot with a normal probability scale on the y-axis, like this:
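As a rough illustration of that kind of plot (a hypothetical Python/matplotlib sketch with placeholder data, not the plot linked above):

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Per-IO completion latencies in microseconds; placeholder data here,
# in practice parsed from something like fio's latency log output.
latencies_us = np.random.lognormal(mean=5.0, sigma=0.8, size=100_000)

lat = np.sort(latencies_us)
p = (np.arange(1, len(lat) + 1) - 0.5) / len(lat)   # empirical CDF, kept strictly inside (0, 1)

plt.plot(lat, norm.ppf(p))                          # probit transform = normal-probability scale
plt.xscale("log")
ticks = [0.01, 0.1, 0.5, 0.9, 0.99, 0.999, 0.9999]
plt.yticks(norm.ppf(ticks), [f"{t:.2%}" for t in ticks])
plt.xlabel("latency (us)")
plt.ylabel("cumulative probability")
plt.grid(True, which="both")
plt.show()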
One other tip is that it does not take hours to reach steady state if you use a random map. This means that you do a random write to all the LBAs, but instead of sampling with replacement, you keep a map of the LBAs you have already written to and don't randomly select the same ones again. In other words, write each 4K-aligned LBA on a tile, put all the tiles in a bag, and randomly draw the tiles out, but do not put the drawn tile back in before you select the next tile. I use the 'fio' program to do this. With an SSD like the Samsung 840 Pro (or any SSD that can do 300+ MB/s 4K QD32 random writes), you only have to write a little more than the capacity of the SSD (e.g., 256GB + 7% of 256GB) to reach steady state. This can be done in 10 or 20 minutes on fast SSDs.
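In fio terms, that random-map pass looks something like this (again a sketch under assumptions, not the exact job used; fio's default behaviour, i.e. not setting norandommap, is what keeps the "bag of tiles" so every 4K LBA is hit exactly once per pass; /dev/sdX is a placeholder and the run is destructive):

[randmap-precondition]
filename=/dev/sdX
rw=randwrite
bs=4k
iodepth=32
ioengine=libaio
direct=1
loops=2
; one pass covers the whole capacity; the second pass is overkill but
; guarantees the extra ~7% built-in spare area gets consumed as well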
I consistently over-provision every single SSD I use by at least 20%. I have had stellar performance doing this with 50-60+ SSDs over the years.
I do this on friends'/family's builds and tell anybody I know to do this with theirs. So, with my tiny sample here, OP'ing SSDs is a big deal, and it works. I know many others do this as well. I base my purchase decisions with OP in mind. If I need 60GB of space, I'll buy a 120GB. If I need 120GB of usable space, I'll buy a 250GB drive, etc.
I think it would be a valuable addition to the Anand suite of tests to account for this option that many of us use. Maybe a 90% OP write test and maybe an 80% OP write test, assuming there's a consistent difference between the two.
I should note that this works best after a secure erase and during the Windows install; don't let Windows take up the entire drive. Create a smaller partition from the get-go. Don't shrink it later in Windows, once the OS has been installed. I believe the SSD controller knows that it can't do its work in the same way if there is/was an empty partition taking up those cells. I could be wrong - this was the case with the older SSDs - maybe the newer controllers treat any free space as fair game to do their garbage collection/wear leveling.
If the SSD has a good TRIM implementation, you should be able to reap the same OP benefits (as a secure erase followed by creating a smaller-than-SSD partition) by shrinking a full-SSD partition and then TRIMming the freed LBAs. I don't know for a fact the Windows 7 Disk Management does a TRIM on the freed LBAs after a shrink, but I expect that it does.
I tend to use Linux more than Windows with SSDs, and when I am doing tests I often use Linux hdparm to TRIM whichever sectors I want to TRIM, so I do not have to wonder whether Windows TRIM did what I wanted or not. But I agree that the safest way to OP in Windows is to secure erase and then create a partition smaller than the SSD -- then you can be absolutely sure that your SSD has erased LBAs, that are never written to, for the SSD to use as spare area.
Wouldn't it be better if you just paid half price and bought the 60GB drive (or 80GB if you actually *need* 60GB) for the amount of space you needed at the present, and then in a year or two when SSD's are half as expensive, more reliable, and twice as fast you upgrade to the amount of space your needs have grown to?
Your new drive without overprovisioning would destroy your old overprovisioned drive in performance, have more space (because we're double the size and not 30% OP'ed), you'd have spent the same amount of money, AND you now have an 80GB drive for free.
Of course, you should never go over 80-90% usage on an SSD anyway, so if that's what you're talking about then never mind...
Nice results and great pictures. They really show the importance of free space/OP for random write performance. Even more amazing is that the results you got seem to fit quite well with a simplified model of SSD internal workings.

Let's assume we have an SSD with only the usual ~7% of OP which was nearly 100% filled (one could say trashed) by purely random 4KB writes (should we now write KiB just to make a few strange guys happy?), and let's assume also that the drive operates on 4KB pages and 1MB blocks (today's drives seem to use more like 8KB/2MB, but 4KB makes things simpler to think about), so 256 pages per block. If the trashing was good enough to achieve perfect randomization, we can expect that each block contains about 18-19 free pages (out of 256). Under heavy load (QD32, using NCQ etc.) decent firmware should be able to make use of all the free pages in a given block before it (the firmware) decides to write the block back to NAND. Thus under heavy load and with the above assumptions (7% OP), we can expect in the worst case (SSD totally trashed by random writes and thus free space fully randomized) a write amplification of about 256:18 ~= 14:1.

Now when we allow for 20% of free space (in addition to the implicit ~7% OP) we should see on average about 71-72 out of 256 pages free in each and every block. This translates to WA ~= 3.6:1 (again assuming that the firmware is able to consume all the free space in a block before writing it back to NAND. That is maybe not so obvious, as there are limits on the max number of I/Os bundled in a single NCQ request, but it should not be impossible for the firmware to delay the block write a few msecs until the next request comes, to see if there are more writes to be merged into the block).

Differences in WA translate directly to differences in performance (as long as there is no other bottleneck, of course), so with 14:3.6 ~= 3.9 we may expect random 4KB write performance nearly 4x higher for a drive with 20% free space compared to a drive working with only the bare 7% of implicit OP. Maybe it's just an accident, but that seems to fit pretty closely with the results you achieved. :)
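The arithmetic behind that estimate, as a quick Python sketch (same simplifying assumptions: 4KB pages, 256 pages per block, free space perfectly randomized across blocks, firmware that fills a block's free pages before rewriting it):

PAGES_PER_BLOCK = 256

def worst_case_wa(free_fraction):
    free_pages = PAGES_PER_BLOCK * free_fraction
    return PAGES_PER_BLOCK / free_pages          # pages rewritten per useful page

wa_7pct  = worst_case_wa(0.07)                   # ~14.3, i.e. the ~14:1 above
wa_27pct = worst_case_wa(0.27)                   # ~3.7,  i.e. the ~3.6:1 above
print(wa_7pct, wa_27pct, wa_7pct / wa_27pct)     # ratio ~3.9 -> the ~4x speedup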
Interesting analysis. Without knowing exactly how the 840 Pro firmware works I cannot be certain, but it does sound like you have a reasonable explanation for the data I observed.
Yeah, there are a lot of assumptions and simplifications in the above... I surely wouldn't call it an analysis; perhaps even hypothesis would be a bit too much. Modern SSDs have lots of bells and whistles - as well as quirks - and most of them - particularly the quirks - aren't documented well. All that means that the conformity of the estimation with your results may very well be nothing more than just a coincidence. The one thing I'm reasonably sure of, however - and it is the reason I thought it was worth writing the post above - is that for random 4K writes on a heavily (ab)used SSD, the factor which limits performance the most is write amplification (at least as long as we talk about decent SSDs with decent controllers - Phisons do not qualify here I guess :)
In addition to the obvious simplifications, there was another shortcut I took above: I based my reasoning on the "perfectly trashed" state of the SSD - that is, one where the SSD's free space is spread in pages equally over all NAND blocks. In theory the purpose of GC algorithms is to prevent drives reaching such a state. Still, I think there are workloads which may bring SSDs close enough to that worst possible state that it is still meaningful as a worst-case scenario.

In your case, however, the starting point was different. AFAIU you first used sequential I/O to fill 100 / 80 % of the drive capacity, so we can safely assume that before the random write session started the drive contained about 7% (or ~27% in the second case) of clean blocks (with all pages free) and the rest of the blocks were completely filled with data with no free pages (thanks to the NAND-friendly nature of sequential I/O). Now when the random 4K writes start to fly... looking from the LBA space perspective these are by definition overwrites of random chunks of LBA space, but from the SSD's perspective, at first we have writes filling the pool of clean blocks, coupled with deletions of randomly selected pages within blocks which were until now fully filled with data. Surely such a deletion is actually reduced to just marking those pages as free in the firmware's FTL tables (any GC at that moment seems highly unlikely imho).

At last comes the moment when the clean block pool is exhausted (or when the size of the clean pool falls below the threshold), which wakes up the GC algorithms to do their Sisyphean work. At that moment the situation looks like this (assuming that there was no active GC until now and that the firmware was capable enough to fill clean blocks fully before writing them to NAND): 7% (or 27% in the second case) of blocks are fully filled with (random) data, whereas 93/73 % of blocks are now pretty much "trashed" - they contain (virtually - just marked in the FTL) randomly distributed holes of free pages. The net effect is that - compared to the starting point - the free space condensed at first in the pool of clean blocks is now evenly spread (with page granularity) over most of the drive's NAND blocks. I think that state does not look all that different from the state of complete, random trashing I assumed in the post above... From that point onward till the end of the random session there is an ongoing epic struggle against entropy: on one side the stream of incoming 4K writes keeps punching more free-page holes in SSD blocks, effectively trying to randomize the distribution of the drive's available free space, and on the other side the GC algorithms are doing their best to reduce the chaos and consolidate free space as much as possible.

As a side note, I think it is really a pity that there is so little transparency among vendors in terms of communicating to customers the internal workings of their drives. I understand the commercial issues and all, but depriving users of the information they need to use their SSDs efficiently leads to lots of confused and sometimes simply disappointed customers, and that is not good - in the long run - for the vendors either. Anyway, maybe it is time to think about an open source SSD firmware for the community?! ;-)))
ps. Thanks for the reference to fio. Looks like a very flexible tool, maybe also easier to use than Iometer. Surely worth a try at least.
So based on 36TB ending the warranty, basically you can only fill up your 512GB drive 72 times before the warranty expires? That doesn't seem like a whole lot of durability. Reinstalling a few large games several times could wear this out pretty quickly... or am I misunderstanding something? According to my calculations, assuming a gigabit network connection running at 125MB per second storing data, that is .12GB per second, 7.3242GB per minute, 439GB per hour, or 10.299TB per day... Assuming this heavy write usage, that 36TB could potentially be worn out in as little as 3.5 days using a conservative gigabit network speed as the baseline.
I assume warranties such as this assume an unrealistically high write amplification (e.g. 10x) (to save the SSD maker some skin, probably). Your sequential write example (google "rogue data recorder") most likely has a write amplification very close to 1. Hence, you can probably push much more data (still though the warranty remains conservative).
Even WA=10 is not enough to account for the 512GB warranty of 36TB. That only comes to about 720 erase cycles if WA=10.
I think the problem is that the warrantied write amount should really scale with the capacity of the SSD.
Assuming WA=10, then 128GB @ 3000 erase cycles should allow about 3000 *128GB/10 = 38.4TB. The 256GB should allow 76.8TB. And the 512GB should get 153.6TB.
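Spelled out as a tiny sketch (Python; the 3000-cycle and WA=10 figures are assumptions from this thread, not OCZ's published numbers):

def warrantied_host_writes_tb(capacity_gb, erase_cycles=3000, write_amp=10):
    # Host writes the NAND can absorb before reaching its rated erase cycles.
    return capacity_gb * erase_cycles / write_amp / 1000

for cap in (128, 256, 512):
    print(cap, warrantied_host_writes_tb(cap))   # 38.4, 76.8, 153.6 TB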
The write amount does actually scale with capacity, OCZ just tried to simplify things with how they presented the data here. In actuality, even the smallest capacity Vector should be good for more than 20GB of host writes per day x 5 years.
Wait, what? I thought OCZ claimed the warranty was the same for all capacities, 5 years of 36TB, whichever comes first.
Are you saying that the 36TB number is only for the 128GB Vector, and the other two have double and quadruple that amount allowed before the warranty runs out?
These endurance tests that they use to generate the predicted life of the SSD are done with 100% fill and full-span random writes. This prevents the SSD from efficiently doing many of the internal tasks that reduce write amplification. You would need to be doing full-span random writes to see these types of endurance numbers. Free capacity on the drive, and different types of data other than 4K random, will result in much higher endurance. These numbers are intentionally worst-case scenarios.
If your usage case is saturating a Gigabit connection 24/7, you need to be buying SLC Enterprise drives (and get a better network connection :P).
36TB doesn't sound like much if you're making up crazy scenarios, but that is probably near a decade of use for a normal power-user. Another way to put it is that you'd have to re-install a 12GB game 3,000 times to get that number..
But if you reinstall a 12GB game four times per day, and eight times on a Saturday, then your drive could be worn out after just three months!
It's a reasonable use case for someone who only wants to spring for the budget 40GB SSD, but still wants to oscillate between playing four large games on a daily basis.
Vertex, Octane, Agility, Synapse, Revodrive, Z-Drive, Velodrive and now Vector, plus an array of generation numbers and suffixes. Could OCZ's flash product naming system be any more complicated?
Numerical product names may not be sexy, but they sure are easy to understand.
So, this drive costs as much as a 840 Pro (or a little less for the 512GB version) and has slightly worse performance in most cases. But if I use more than 50% of its capacity, I get much worse performance? That's something that bugged me in the Vertex 4 reviews: you test with the performance mode enabled in pretty much all graphs, but I will use it without it, because if I buy an SSD, I intend to use more than 50% of the drive. I don't get it.
You ONLY see the slow down when you write to the whole of the drive in 1 go..so you will only ever see it if you sit running HDtach or a similar bench to the whole of the drive. The drive is actually intelligent, say you write a 4.7GB file for instance, it writes the data in a special way, more like an enhanced burst mode. Once writes have finished it then moves that written data to free up this fast write nand so its available again.
It does this continually as you use the drive, if you are an average user writing say 15GB a day you will NEVER see a slow down.
The way it works is that in the STEADY STATE, performance mode is faster than storage mode. This should be obvious, because why would they even bother having two modes if the steady state performance were not different between the modes?
Now, there is a temporary (but severe) slowdown when the drive switches from performance mode to storage mode, but I don't think that is what Death666Angel was talking about.
By the way, if you want a simple demonstration of the STEADY STATE speed difference between the modes, then secure erase the SSD, then use something like HD Tune to write to every LBA on the SSD. It will start out writing at speed S1, then around 50% or higher it will write at a lower speed, call it Sx. But that is only temporary. Give it a few minutes to complete the mode switch, then run the full drive write again. It will write at a constant speed over the drive, call it S2. But the key thing to notice is that S2 is LESS THAN S1. That is a demonstration that the steady-state performance is lower once the drive has been filled past 50% (or whatever percentage triggers the mode switch).
Yes, I know who you are. You have posted incorrect information in the past. I have noticed on the OCZ forums that whatever makes OCZ or OCZ products look good is posted, whether true or not.
I am talking about facts. Do you dispute what I wrote? Because hardocp measured exactly what I wrote here:
In the end you do NOT fully understand how the drives work; you think you do, you do not. If a 100% write to all LBAs test is run on the 128s and 256s you get what Anand shows, and the reason for this is that the drive is unable to move data around during the test. So... if you like running 100% LBA write tests on your drive all day then knock yourself out... buy the 512 and as you see it delivers right through the LBA range. However, if you just want to run the drive as an OS drive and you average a few GB of writes per day, with coffee breaks and time away from the PC, then the drive will continually recover and deliver full speed with low write access for every write you make to the drive, right up till it's full... the difference is you are not writing to 100% LBA in one go.
So what I said about it being a benchmark quirk is 100% correct. Yes, when you run that benchmark the 256s and 128s do slow up. However, if you install an OS, and then load all your MP3s to the drive and it hits 70% of a 128, it may slow if it runs out of burst-speed NAND to write to, BUT as soon as you finish writing it will recover... in fact, if you wrote the MP3s in 10GB chunks with a 1 min pause between each write it would never slow down.
The drives are built to deliver with normal write usage patterns...you fail to see this though.
Maybe we need to give the option to turn the burst mode off and on; maybe then you will see the benefits.
BTW the test was run on an MSI 890FX with SB850, so an old SATA3 AMD based platform... this is my workstation. The drive is much faster on an Intel platform due to the AMD SATA controller not being as fast.
I show you a Vector with no slow down, the same write access latency for 100% LBA, and explain why the two other capacity drives work the way they do, and it's still not good enough.
Come to my forum, ask what you want and we will do everything we can to answer every question within the realms of not disclosing any IP we have to protect.
In fact, jwilliams, email me at tony_@_ocztechnology.com without the _ and I will forward an NDA to you; sign it and get it back to me and I will call you and explain exactly how the drives work... you will then know.
No NDA. All we are talking about is basic operation of the device. If you cannot explain that to everyone, then you are not worth much as a support person.
I asked two questions, which you still have not answered:
1) How do you explain the HD Tune results on HardOCP that I linked to?
2) If storage mode and performance mode are the same speed, then why bother having two modes? Why call one "performance"?
By the way, it seems like your explanation is, if you do this, and only this, and do not do that, and do this other thing, but do not do that other thing, then the performance of OCZ SSDs will be good.
So I have another question for you. Why should anyone bother with all that rigamarole, when they can buy a Samsung 840 Pro for the same price, and you can use it however you want and get good performance?
That is basically the original question from Death666Angel.
Heh, didn't think I'd kick off such a discussion. jwilliams is right about what my question would be. And just showing a graph that does not have the slow down in the 2nd 50% is not proof that the issue of a slow down in the 2nd 50% does not exist (as it has been shown by other sites and you cannot tell us why they saw that). I also don't care about the rearranging of the NAND that takes place between the 2 operation modes; that slow down is irrelevant to me. What I do care about is that there are 2 different modes, one operating when the disc is less than 50% full, the other operating over that threshold, and that I will only use the slower one because I won't buy a 512GB drive just to have 256GB usable space. And if the two modes have exactly the same speed, why have them at all? NDA information about something as vital as that is bullshit btw. :)
...these drives idle a lot more of the time than they work at full speed. A considerably higher idle is just bad all around.
I don't think OCZ's part warrants the price they're asking. Its performance is less most of the time, it's a power hog, it's obviously hotter, it has the downsides of their 50% scheme, and it has OCZ's (less than) "stellar" track record of firmware blitzkrieg to go along with it.
I wonder, how many times will I lose all my data while constantly updating its firmware? 10? 20 times?
That was beta firmware that Samsung has admitted had a problem. They said all retail drives shipped with the newer, fixed firmware. There have been ZERO reported failures of retail 840 Pro drives.
Anand(tech) did lots of good testing, but seems to have left out copy performance.
Copy performance can be less than one tenth of the read or write performance, even after taking into account that copying a file takes 2 times the interface bandwidth of moving the file in one direction over a single-directional interface. (Seeing that one drive is only able to copy at less than 10MB/second, compared to 200MB/s for another drive, when each can read or write faster than 400MB/s over a 6Gb/s interface, is much more important than seeing that one can read at 500MB/s and the other only at 400MB/s.)
I use actual copy commands (for single files and trees) and the same on TrueCrypt volumes, as well as HD Tune Pro File Benchmark for these tests. (For HD Tune Pro the difference between 4 KB random single and multi is often the telling point.)
I'd also like to see the performance of the OCZ Vector at 1/2 capacity.
I'd also like to see how the OCZ Vector 512GB performs on the Oracle Swingbench benchmark. It would be interesting to see how the Vector at 1/2 capacity compares to the Intel SSD DC S3700.
Copy performance is tied to the block size you use when reading and writing. I.e., if you read 4K at a time, then write 4K at a time, you will get different performance than reading 4MB at a time and then writing 4MB. So it largely depends on the specific app you are using. Copy isn't anything special, just reads and writes.
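A minimal illustration of that block-size dependence (a hypothetical Python sketch, not a rigorous benchmark; OS caching will smear the numbers unless the file is much larger than RAM):

import os, time

def copy_with_block_size(src, dst, block_size):
    """Copy src to dst, reading and writing block_size bytes at a time."""
    start = time.time()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(block_size):
            fout.write(chunk)
        fout.flush()
        os.fsync(fout.fileno())     # make sure the writes actually reach the drive
    return time.time() - start

# e.g. compare 4 KiB chunks against 4 MiB chunks on the same large file:
# copy_with_block_size("big.bin", "copy_small.bin", 4 * 1024)
# copy_with_block_size("big.bin", "copy_large.bin", 4 * 1024 * 1024)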
Maybe I should have explained more: I have found that most USB keys and many SATA SSDs perform MUCH worse (a factor of 10 and even up to more than 300 decrease in performance) when reads and writes are mixed, rather than being a bunch of reads followed by a bunch of writes.
The reads and writes can be to random locations and there can still be a big performance hit.
I feel that a simple operating-system copy of a large sequential file and of a tree of a bunch of smaller files should be done, since those two tests have shown me large performance differences between two devices that have about the same sequential read rate, sequential write rate, reads/second, and writes/second when the reads and writes aren't mixed.
I also found that HD Tune Pro File Benchmark sometimes shows significant (factor of 10 or more) differences between the Sequential 4 KB random single and 4 KB random multi tests.
For my own personal use, the best benchmark seems to be copying a tree of my own data that has about 6GB in about 25000 files, and copying from one 8GB TrueCrypt virtual disk to another on the same device. I see differences of about 15 to one between devices I have tested in the last year that all show speeds limited by my 7-year-old motherboards in sequential tests, yet all perform much slower in the tree copy tests.
Since the tree is my ad-hoc data and my hardware is so old, I don't expect anyone to be able to duplicate the tests, but I have given results in USENET groups that show there are large performance differences that are not obviously related to bottlenecks or slowness of my hardware.
There could be something complicated happening that is due, for instance, to a problem with intermixing read and write operations on the USB 3 or SATA interface that depends on the device under test but is not an inherent problem with the device under test. But I think that the low performance for interleaved reads and writes is at least 90% due to the device under test and less than 10% due to problems with mixing operations on my hardware, since some devices take no hit in performance when read and write operations are mixed, and they have sequential uni-directional performance much higher than 200MB/s on SATA and up to 134MB/s on USB 3.
There could be some timing issues caused by having a small number of buffers (much less than 1000), only 2 CPUs, having to wait for encryption, etc., but I don't think these add up to a factor of 4, and, as I have said, I see performance hits of much more than 15:1 for the same device, where all I did was switch from copying from another flash device to the flash device under test, to copying from one location on the flash device under test to another location on the same device. Similarly, HD Tune Pro File Benchmark's 4 KB random single compared to 4 KB random multi (with multi 4 or more) takes a hit of up to 100x for some USB 3 flash memory keys, whereas other flash memory keys may run about the same speed for random single and multi, as well as about the same speed as the poorly performing device does for 4 KB random single.
Anand, I just want to know what you think of the difference between the new CEO sending a formal, official letter compared to the hand-written notes by Ryan. To me (an outsider), official letters bore me, as they are just a carbon copy of the same letter sent to many others. A handwritten note would mean more to me. Now, given that the handwritten note was more of a nudge, I can understand that perhaps a less "nudging" note would be more appreciated, but I digress. Just curious. -March
Do you have more confidence this time that OCZ is actually being honest about the contents of their controller chip? Clearly last time you were concerned about OCZ's behaviour when you reviewed the Octane (both in terms of reviewing their drives and allowing them to advertise), and they outright lied to you about the contents of the chip; they lied to everyone until they got caught.
This time do you think the leopard has changed its spots or is this just business as usual for a company that cheats so frequently?
If these are priced to compete with Samsung's 840 Pro, only a die-hard OCZ fanboy would buy one, since the 840 Pro beats it in almost every benchmark, and is considered the most reliable brand, while OCZ has a long, rich history of failed drives, controllers, and firmware. Even if they were priced $50 below the Samsung I wouldn't buy one, at least not until they had 6 months under their belt without major issues. It gets old rebuilding your system every time your SSD has issues.
I noticed that in the consistency testing, the Intel 330 seemed to outperform just about everything except the Intel S3700. That seems like a story worth exploring! Is the 330 a sleeper user-experience bargain?
For one thing, it did not look to me like the 330 had yet reached steady state in the graphs provided. Maybe it had, but at the point where the graph cut off, things were still looking interesting.
In one of the podcasts (E10?) Anand talks about how SF controllers have fewer issues with these IO latency worst-case scenarios. So it's not necessarily an Intel feature, but a SF feature, and the graph might look the same with a Vertex 3 etc. Also, it may behave differently if it were filled with different sequential data at the start of the test and if the test were to run longer. I wouldn't draw such a positive conclusion from the test Anand has done there. :)
Did they have to name their 2 drives Vector and Vertex? They couldn't have picked 2 names that looked more alike if they tried. I have to imagine this was done on purpose for some reason that I can't think of. Now that OCZ has its own controller, are they retiring the Vertex, or will they just use Barefoot controllers in Vertex SSDs going forward?
It is great that the OCZ Vector is able to compete with the Samsung SSDs in terms of performance, but OCZ's past reliability record has been iffy at best; their drives fail prematurely and RMA rates have been quite high. I've known countless people suffering issues with OCZ drives.
I'll wait a bit before recommending OCZ drives to anyone again, due to those reliability issues, to see if the OCZ Vector can meet the reliability of Corsair, Intel or Samsung drives. Until then, I'll keep recommending Samsung drives, as they exceed most manufacturers in performance and reliability.
If you check the review of the Vector on the hardwarecanucks website, page 11 you will see the Vector AND Vertex crush every other drive listed when filled with over 50% capacity. This is probably the most important bench to judge SSD performance by.
"While the Vector 256GB may not have topped our charts when empty, it actually blasted ahead of every other drive available when there was actual data housed on it. To us, that’s even more important than initial performance since no one keeps their brand new drive completely empty. "
You will note that the Samsung 840 Pro is conspicuously absent from the list, so we do not know how the Vector fares against its most difficult competitor.
You are right, and I wish they had included the 840 Pro, but they didn't. The point is, compared to all those other SSDs/controllers, the BF3 clearly outperforms everything in the real world with actual data on the drive. The 840 Pro uses faster NAND than the Vector, yet both drives are pretty much equal. The Toshiba Toggle NAND version of the Vector can't come soon enough!
The Samsung 840 Pro is significantly faster (about 33%) than the Vector for 4KiB QD1 random reads. This is an important metric, since small random reads are the slowest operation on a drive, and if you are going to take just one figure of merit for an SSD, that is a good one.
Well, according to most sites, the Vector beats it on writes and in mixed read/write environments, especially with heavy use. Not to mention the 840 takes a long time to get its performance back after getting hammered hard, whereas the Vector recovers very quickly.
First of all, the type of heavy writes I believe you are referring to are a very uncommon workload for most home users, even enthusiasts. Reads are much more common.
Second, I have seen only one credible study of Vector use with heavy writes, and the Samsung 840 Pro does a better than the Vector with steady-state, heavy writes:
I have seen nothing to suggest the Vector recovers more quickly. If anything, there is circumstantial evidence that the Vector has delayed recovery after heavy writes (assuming the Vector is similar to the Vertex 4) due to the Vector's quirky "storage mode" type behavior:
I certainly would not call it "terrible" -- it actually looks pretty good to me. And if you want even better performance under sustained heavy workloads, just overprovision the SSD.
You don't. Most SSDs will come in 128, 256 or 512 GB sizes. If you have an SSD and you see a decrease in size, usually to 120, 240 or 480 GB, it means the controller has already over-provisioned the SSD for you.
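To put rough numbers on that, here is a minimal Python sketch of the arithmetic (it assumes the common case of 256 GiB or 512 GiB of raw NAND behind those labels; exact figures vary by model):

```python
# Hedged sketch: estimate factory over-provisioning from advertised vs. raw capacity.
GIB, GB = 1024**3, 1000**3

def factory_op(raw_gib, usable_gb):
    """Fraction of raw flash held back as spare area."""
    return 1 - (usable_gb * GB) / (raw_gib * GIB)

print(f"{factory_op(256, 256):.1%}")   # ~6.9%  -- a '256 GB' drive on 256 GiB of flash
print(f"{factory_op(256, 240):.1%}")   # ~12.7% -- a '240 GB' drive on the same flash
print(f"{factory_op(512, 480):.1%}")   # ~12.7% -- likewise for a '480 GB' drive
```

The drop from 256/512 GB down to 240/480 GB is exactly that jump from roughly 7% to roughly 13% of the flash being reserved.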
I have WAY too much scar tissue from this vendor to ever buy their products again. I bought five of their SSDs, and went five for five RMAing them back. I have the replacements, but don't trust them enough to use them in anything other than evaluation work because they are just not dependable. I would avoid them like the plague.
I have had multiple SSDs from OCZ, and none of them have failed up till today. I boot Mac OS X from my OCZ Vector, and from every OCZ SSD before that. In my experience, it's not the OCZ SSDs that have terrible reliability, it's Windows. Besides, have any of you guys complaining about OCZ SSDs ever tried turning off automatic disk defragmentation in Windows? Windows has an automated disk defragmenting tool to defragment HDDs, but when you plug in an SSD, the tool is supposed to be automatically disabled. Chances are, those of you with SSD problems have a PC with Windows that did not successfully disable automated disk defragmentation, and have had your SSDs killed due to that. Mac OS X does not have an automated disk defragmenting tool, as it generally tries not to write in fragments. Without the automated defragmentation tool, my OCZ SSDs have never failed.
My Vector 256 drive completely failed in just under 4 months. OCZ is going to replace it, but if the replacement fails in less than 48 months I will look for alternatives.
My OCZ VTR1-25SAT3-512G failed after just 33 days. This was 3 days after the vendor's replacement agreement expired, so it has to go to OCZ. OCZ is replacing the drive, but they are following a delayed time frame to get the new drive in my hands.
wsaenotsock - Tuesday, November 27, 2012 - link
A 5 year warranty is a pretty solid commitment on the part of a manufacturer. I don't think they would have done that if they didn't trust the stability of the hardware, so they really put their money where their mouth is.Other thing: is the Indilinx co-processor 'Argon' or 'Aragon'? Pic differs from your text description.
alacard - Wednesday, November 28, 2012 - link
Nah, you've got it all wrong unfortunately - they've bet the farm on this drive and if it fails they won't be around in five years to honor those warranties.When you've got nothing, you've got nothing to lose.
Kjella - Wednesday, November 28, 2012 - link
Well that, but I'm glad to see OCZ committing more to their drives... on my local price price check there's Agility, Colossus, Enyo, Ibis, Lightfoot, Octane, Onyx, Petrol, RevoDrive, Synapse, Vertex and Z-drive not counting numbering or variations like Vertex EX, Vertex Limitied Edition, Vertex Turbo and using a zillion different controllers and stuff. The warranty is also an indication this is the technology they'll continue working on and fixing bugs for, which is good because their attention span has been too short and spread too thin. It's better with a few models that kick ass than dozens of models that are all shoddy.MrSpadge - Wednesday, November 28, 2012 - link
Some of the drives you list are several years old.
JonnyDough - Friday, November 30, 2012 - link
Furthermore, the Intel 520 (which I just purchased) got dropped off the enterprise iometer 4KB random write chart. That don't make a lick-a sense!
Hood6558 - Wednesday, November 28, 2012 - link
alacard may be right, OCZ is sliding closer to the cliff as we speak. There's so much competition in the SSD market, someone's got to go sooner or later, and it will probably be the less diversified companies that will go first. I recently bought a Vertex 4 128 for my boot drive, and it lasted only 15 days before it disappeared and refused to be recognized in BIOS. The Crucial M4 128 that replaced it has the problem of disappearing every time the power is shut off suddenly (or with the power button after Windows hangs), but comes back after a couple of reboots and a resetting of your boot priorities. And it's regarded as one of the most reliable drives out there.So in order for OCZ to remain solvent, the Vector must be super reliable and stable, and absolutely must stay visible in BIOS at all times. If it's plagued by the same problems as the Vertex 4, it's time to cash out and disappear before the bankruptcy court has it's way.
Sufo - Tuesday, December 4, 2012 - link
Windows hanging? I smell problems with the user...
djy2000 - Wednesday, July 31, 2013 - link
That warranty doesn't cover the most important thing: the data on the drive.
Bull Dog - Tuesday, November 27, 2012 - link
Is that an m-SATA connector on the other side of the PCB?
philipma1957 - Tuesday, November 27, 2012 - link
Good question. If I open the case, can I use this as an mSATA drive? A 512GB mSATA is very hard to find.
SodaAnt - Wednesday, November 28, 2012 - link
It's way too large for an mSATA drive, so it wouldn't fit anyway.
somebody997 - Thursday, April 11, 2013 - link
Why is there an mSATA connector on the PCB anyway?
somebody997 - Thursday, April 11, 2013 - link
The PCB is far too large for an mSATA drive anyway, so why have the mSATA connector?
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
That's likely a custom debug port, not mSATA.
Take care,
Anand
Heavensrevenge - Wednesday, November 28, 2012 - link
Actually that connection is indeed a physically identically sized/compatible m-SATA connection. The problem is it's inability to actually plug in due to the SSD's general size or whether it's able to communicate with the typical m-SATA ports on mobos.http://www.pclaunches.com/entry_images/1210/22/tra... should give a decent example.
vanwazltoff - Thursday, December 20, 2012 - link
might be a sign of something else in the works from ocz like an msata cable to plug into it or something, maybe something even more awesome like double the band width by connected it to a ocz pci break off board. i guess we will seemayankleoboy1 - Tuesday, November 27, 2012 - link
I have a Vertex 2 256GB SSD. Is it worth upgrading to a Vector or a Samsung 840 Pro SSD?
MadMan007 - Tuesday, November 27, 2012 - link
If you've got a motherboard with SATA 6Gb/s you would probably notice a difference. Whether it's worth it is up to you - do you do a lot of disk-intensive work to the point where you wish it were faster? While I'm the difference would be noticable, it might not be huge or worth spending $200+ on.MrSpadge - Wednesday, November 28, 2012 - link
Are you often waiting for your disk to finish tasks? If not, it's not going to be worth it.
Beenthere - Tuesday, November 27, 2012 - link
It's going to take more than a nice type written letter to resolve the many product and service issues at OCZ - if they stay in business over the next six to 12 months.FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists. In addition a five year warranty does not mean that a particular product is any better than a product with a one year warranty. For each extended year of warranty, the product price increases. So you're paying for something you may or may not ever use.
In addition it's useful to read the fine print on warranties. Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect. If you've ever seen some of the "reconditioned" or "refurbished" mobos from Asus or similar products from other companies, you'd never install them in your PC.
People reach many untrue conclusions about product quality based on the warranty.
Sabresiberian - Tuesday, November 27, 2012 - link
So, a longer warranty is only good if you use it? Otherwise you're paying for something you don't need?
And, you're paying extra for a 5-year warranty here? What, so all these top end SSDs, whose prices are lower than ever, are in fact over-priced with fake expensive warranties, so should come out with 1-year warranties and lower prices?
coder543 - Tuesday, November 27, 2012 - link
A refurbished SSD? I'm not even sure what that means. That's like going to McDonald's and getting a refurbished McFlurry. It doesn't even make sense.
This isn't a laptop, where worn parts can be replaced. This is a limited-lifespan, consumable product, where replacing any parts is equivalent to throwing the old one away and pulling out a brand new one. If the warranty actually says this, then please, point me to it, but otherwise, I'm gonna have to call this bluff and say it's not practical.
Beenthere - Wednesday, November 28, 2012 - link
The point that some of you seem to not understand is that the 5 year warranty does NOT mean that an SSD or other product is any better quality than a product with a one year warranty. And yes, you are paying for the extended warranty no matter what the current price. SSD prices are dropping as the cost to produce them is dropping. This particular OCZ model is not a high-end model by any stretch, it's just the SSD-of-the-week, to be superseded by a new model in a month or two.
Refurbished can mean hand-soldered chip replacement or other poorly executed repairs that would not be acceptable to most technically knowledgeable consumers. Reconditioned can mean it's been lying in the warehouse collecting dust for six months and nothing was actually done to repair it when it was returned defective. You would not believe some of the crap that ships as replacement warranty products.
zero2dash - Wednesday, November 28, 2012 - link
^^^ I'm with Beenthere.
A 5 year warranty means a 5 year warranty; nothing more, nothing less. The notion that '5 year warranty = great product!' is asinine.
I think if you want to assume anything based off a 5 year warranty in this case, it's because the product is new, the controller is relatively new, and it's an OCZ SSD product.
I'm not likely to buy an OCZ SSD anytime soon, but I'd definitely rather buy one with a 5 year warranty than a 1 or 3 year warranty....if I have to buy an OCZ branded SSD because every other brand is sold out.
I owned a 30GB Vertex. For 9 months, it was great. Then it turned into a big POS. Constant chkdsk errors. I did a sanitary erase/firmware flash and sold it for what I could get for it.
melgross - Wednesday, November 28, 2012 - link
I certainly would not want a refurbished SSD. It would NOT mean new NAND chips, which are the parts most likely to be a problem. Or a new controller. I would never buy a refurbished HDD either. These devices do have lifetimes. Since you have no idea how these drives have been used, or abused, you are taking a very big chance for the dubious opportunity of saving a few bucks.
Hood6558 - Wednesday, November 28, 2012 - link
I can't help but wonder how many replacement SSDs it will take to get to the end of that 5 year warranty. If you go by the track record of the Vertex 3 & 4, you can expect a failure about every 90 days, so that's 20 drives, less shipping time to and from, so call it 15 drives with a total downtime of 1.25 years. Wow!... where can I get one? My Vertex 4 lasted 15 days, but I'm sure that was just a fluke...
melgross - Wednesday, November 28, 2012 - link
I basically agree. From anecdotal reports, OCZ is one of the least reliable vendors, with their drives less reliable than the average HDD. And while average SSD reliability has so far been about the same as that of the average HDD, despite people's expectations, this isn't good.
Most people don't need the really high speeds a few of these drives support; higher reliability would be a much better spec to have. Unfortunately, these reviews can't indicate how reliable these drives will be in the longer term.
While I see that OCZ seems to be thought of as failing, this is the first I've heard of it. Have their sales collapsed of late? I was surprised to find that their longtime CEO, with whom Anand had communicated so often in the past, is gone.
Spunjji - Wednesday, November 28, 2012 - link
"FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists." <- Depends on how you purchase it. Credit card companies will often honour warranties on products purchased from defunct companies. YMMV."Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect." <- Happily now everyone in the thread after you has used this conjecture to knock OCZ warranties. That's not really your fault, but I don't think anyone here has read the terms of OCZ's warranty on this product yet?
The point being made here is that OCZ would not offer a 5 year warranty on the product if they thought the cost of honouring that warranty would eclipse their income from sales. This is why 1-year warranties are a red flag. So *something* can be inferred from it, though only about the manufacturer's confidence in their product. You can read into that whatever you want, but I don't generally find that companies plan to be out of business within their warranty period.
Your comment about it increasing the price of the product is odd, because this product is the same price and specification as models with shorter warranties. So either a) you're wrong, or b) you're trivially correct.
JonnyDough - Friday, November 30, 2012 - link
Hear, hear. I second that. I am so tired of getting worn refurbished parts for things I just bought BRAND NEW. CoolerMaster just did this for a higher-end power supply I bought. Why would I want to spend a hundred dollars for a used PSU? Seriously. Now all the components in it aren't new. Once the warranty expires it'll die right away. Where is the support behind products these days?
It used to be that buying American meant you got quality and customer service. Gone are those days I guess, since all the corporations out there are about to start actually paying taxes.
smalM - Tuesday, November 27, 2012 - link
(e.g. a 256GB Vector appears formatted as a 238GB drive in Windows).
Oh please Anand, the old "formatted" nonsense, and of all people from you?
You really should drop this sentence from your phrase list....
kmmatney - Tuesday, November 27, 2012 - link
I don't see anything wrong with stating that. My 256GB Samsung 830 also appears as a 238GB drive in Windows...
jwilliams4200 - Tuesday, November 27, 2012 - link
The problem is that "formatting" a drive does not change the capacity.
Windows is displaying the capacity in GiB, not GB. It is just a Windows bug that they label their units incorrectly.
Gigaplex - Tuesday, November 27, 2012 - link
Yes and no. There is some overhead in formatting which reduces usable capacity, but the GiB/GB distinction is a much larger factor in the discrepancy.
jwilliams4200 - Wednesday, November 28, 2012 - link
The GiB/GB bug in Windows accounts for almost all of the difference. It is not worth mentioning that partitioning usually leaves 1MiB of space at the beginning of the drive. 256GB = 238.4186GiB. If you subtract 1MiB from that, it is 238.4176GiB. So why bother to split hairs?
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
This is correct. I changed the wording to usable vs. formatted space, I was using the two interchangeably. The GiB/GB conversion is what gives us the spare area.
Take care,
Anand
suprem1ty - Thursday, November 29, 2012 - link
It's not a bug. Just a different way of looking at digital capacity.
suprem1ty - Thursday, November 29, 2012 - link
Oh wait, sorry, I see what you mean now. Disregard previous post.
flyingpants1 - Wednesday, November 28, 2012 - link
I think I might know what his problem is.
When people see that their 1TB-labelled drive displays only 931GB in Windows, they assume it's because formatting a drive with NTFS magically causes it to lose 8% of space, which is totally false. Here's a short explanation for newbie readers. A gigabyte (GB) as displayed in Windows is actually a gibibyte (GiB).
1 gibibyte = 1073741824 bytes = 1024 mebibytes
1 gigabyte = 1000000000 bytes = 1000 megabytes = 0.931 gibibytes
1000 gigabytes = 931 gibibytes
Windows says GB but actually means GiB.
SSDs and HDDs are labelled differently in terms of space. Let's say they made a spinning hard disk with exactly 256GB (238GiB) of space. It would appear as 238GB in Windows, even after formatting. You didn't lose anything,
because the other 18 gigs was never there in the first place.
Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.
If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows.
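For readers who want to check those numbers themselves, here is a minimal Python sketch of the conversion (the "256 GiB of raw flash" figure is the one quoted above from the article; the ~7% spare area simply falls out of the GB/GiB gap):

```python
GIB = 1024**3   # what Windows reports as a "GB"
GB  = 1000**3   # the SI gigabyte used on the drive's label

print(256 * GB / GIB)        # ~238.4 -> the "238GB" Windows shows for a 256GB drive

raw = 256 * GIB              # a "256 GB" consumer SSD carrying 256 GiB of raw NAND
print(raw / GB)              # ~274.9 GB of physical flash
print(1 - (256 * GB) / raw)  # ~0.069 -> the roughly 7% kept back as spare area
```

Nothing is "lost" to formatting; the roughly 7% is just the difference between 256 GB and 256 GiB.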
flyingpants1 - Wednesday, November 28, 2012 - link
IMO drive manufacturers should stop messing around, put 256GiB of USABLE space on each 256GiB drive, and start marking them as such.
Holly - Wednesday, November 28, 2012 - link
Tbh imho using base 10 units in a binary environment is just asking for a facepalm. Everything underneath runs on 2^n anyway, and this new "GB" vs "GiB" thing is just commercial bullshit so storage devices can be sold with flashier stickers. Your average RAID controller BIOS will show a 1TB drive as a 931GB one as well (at least a few ICHxR and one server Adaptec I have access to right now all do).
melgross - Wednesday, November 28, 2012 - link
What does that mean, usable space? Every OS leaves a different amount after formatting, so whether the drive is rated by GB or GiB, the end result would be different. Normally, SSDs are rated by the amount seen by the OS, not by that plus the amount overprovisioned. So it isn't really a problem.
Actually, the differences we're talking about aren't all that much, and this is more a geeky thing to concern oneself with than anything else. Drives are big enough, even SSDs, that a few GB more or less isn't such a big deal.
Kristian Vättö - Wednesday, November 28, 2012 - link
An SSD can't operate without any over-provisioning. If you filled the whole drive, you would end up in a situation where the controller couldn't do garbage collection or any other internal tasks because every block would be full.
Drive manufacturers are not the issue here, Microsoft is (in my opinion). They are using GB while they should be using GiB, which causes this whole confusion. Or just make GB what it really is, a billion bytes.
Holly - Thursday, November 29, 2012 - link
Sorry to say so, but I am afraid you are looking at this from the wrong perspective. Unless you are an IT specialist, you go buy a drive that says 256GB and expect it to have 256GB capacity. You don't care how much additional space is there for replacement of bad blocks or how much is there for internal drive usage... so you will get pretty annoyed by the fact that your 256GB drive would have, let's say, 180GB of usable capacity.
And now this GB vs GiB nonsense. From one point of view it's obvious that the k, M, G, T prefixes are by default 10^3, 10^6, 10^9, 10^12... But in computing, capacity units used to be based on 2^10, 2^20 etc. to allow some reasonable recalculation between the capacity, sectors and clusters of the drive. No matter which way you prefer, the fact is that Windows as well as many IDE/SATA/SAS/SCSI controllers count a GB as equal to 2^30 bytes.
Random controller screenshots from the internet:
http://www.cisco.com/en/US/i/100001-200000/190001-...
http://www.cdrinfo.com/Sections/Reviews/Specific.a...
http://i.imgur.com/XzVTg.jpg
Also, if you say Windows' measurement is wrong, why is RAM capacity shown in 'GB' when your 16GB, as shown in EVERY BIOS in the world, is in fact 16384MiB?
Tbh there is a big mess in these units, and pointing at one thing to blame is a very hasty decision.
Also, up to some point HDD capacity used to be given in 2^k prefixes long ago as well... I've still got an old 40MB Seagate that is actually 40MiB and a 205MB WD that is actually 205MiB. CD-Rs claiming 650/700MB are in fact 650/700MiB of usable capacity. But then something changed, and your 4.7GB DVD-R is in fact 4.37GiB of usable capacity. And the same with hard discs...
Try explaining to angry customers in your computer shop that the 1TB drive you sold them is 931GB unformatted, as shown both by the controller and Windows.
Imho nobody would care the slightest bit that k, M, G in computers are base 2 if some marketing twat hadn't figured out that his drive could be a bit "bigger" than the competition by sneaking in a different meaning for the prefixes.
jwilliams4200 - Thursday, November 29, 2012 - link
It is absurd to claim that "some marketing twat didn't figure out that his drive could be a bit "bigger" than competition by sneaking in different meaning for the prefixes".
The S.I. system of units prefixes for K, M, G, etc. has been in use since before computers were invented. They have always been powers of 10. In fact, those same prefixes were used as powers of ten for about 200 years, starting with the introduction of the metric system.
So those "marketing twats" you refer to are actually using the correct meaning of the units, with a 200 year historical precedent behind them.
It is the johnny-come-latelys that began misusing the K, M, G, ... unit prefixes.
Fortunately, careful people have come up with a solution for the people incorrectly using the metric prefixes -- it is the Ki, Mi, Gi prefixes.
Unfortunately, Microsoft persists in misusing the metric prefixes, rather than correctly using the Ki, Mi, Gi prefixes. That is clearly a bug in Microsoft Windows. Kristian is absolutely correct about that.
Holly - Friday, November 30, 2012 - link
How much RAM does your BIOS report you have?
Was the BIOS of your motherboard made by Microsoft?
jwilliams4200 - Friday, November 30, 2012 - link
Would you make that argument in front of a judge?
"But judge, lots of other guys stole cars also, it is not just me, so surely you can let me off the hook on these grand-theft-auto charges!"
Touche - Saturday, December 1, 2012 - link
No, he is right. Everything was fine until HDD guys decided they could start screwing customers for bigger profits. Microsoft and everyone else uses GB as they should with computers. It was HDD manufacturers that caused this whole GB/GiB confusion regarding capacity.
jwilliams4200 - Saturday, December 1, 2012 - link
I see that you are a person who never lets the facts get in the way of a conspiracy theory.
Touche - Monday, December 3, 2012 - link
http://betanews.com/2006/06/28/western-digital-set...
Holly - Monday, December 3, 2012 - link
Well, the 2^10k prefixes marked with 'i' were standardized by the IEC in 1998 and by the IEEE in 2005; alas, history shows frequent usage of both the 10^3k and 2^10k meanings. Even with the IEEE standard passed in 2005 it took another 4 years for Apple (who were the first with an OS running with 2^10k) to change units, and a year later for Ubuntu with version 10.10.
For me it will always make more sense to use 2^10k since I can easily tell the size in kiB, MiB, GiB etc. just by bitmasking (size & 11111111110000000000[2]) >> 10 (for kiB). And I am way too used to k, M, G with a byte being counted in 2^10k.
Some good history reading about Byte prefixes can be found at http://en.wikipedia.org/wiki/Timeline_of_binary_pr... ...
Ofc, trying to reason with people who read a several-paragraph post and start jumping around over the one sentence they feel offended by is useless.
But honestly, even if a kB were counted as 3^7 bytes it wouldn't matter... as long as everyone uses the same conversion ratio.
dj christian - Thursday, November 29, 2012 - link
"Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows. "
Uuh what?
sully213 - Wednesday, November 28, 2012 - link
I'm pretty sure he's referring to the amount of NAND on the drive minus the 6.8% set aside as spare area, not the old mechanical meaning where you "lost" disk space when a drive was formatted because of base 10 to base 2 conversion.
JellyRoll - Tuesday, November 27, 2012 - link
How long does the heavy test take? The longest recorded busy time was 967 seconds, from the Crucial M4. This is only 16 minutes of activity. Does the trace replay in real time, or does it run compressed? 16 minutes surely doesn't seem to be that much of a long test.
DerPuppy - Tuesday, November 27, 2012 - link
Quote from text: "Note that disk busy time excludes any and all idles, this is just how long the SSD was busy doing something:"
JellyRoll - Tuesday, November 27, 2012 - link
Yes, I took note of that :). That is the reason for the question though; if we had an idea of how long the idle periods were, we could take into account the amount of time the GC for each drive functions, and how well.
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
I truncate idles longer than 25 seconds during playback. The total runtime on the fastest drives ends up being around 1.5 hours.
Take care,
Anand
Kristian Vättö - Wednesday, November 28, 2012 - link
And on the Crucial v4 it took 7 hours...
JellyRoll - Wednesday, November 28, 2012 - link
Wouldn't this compress the QD during the test period? If the SSD's recorded activity is QD2 for an hour and the trace is replayed quickly, this creates a high-QD situation. QD2 for an hour compressed to 5 minutes is going to play back at a much higher QD.
dj christian - Thursday, November 29, 2012 - link
What is QD?
doylecc - Tuesday, December 4, 2012 - link
Queue depth.
jeffrey - Tuesday, November 27, 2012 - link
Anand,
I would love to have seen results using the 1.5 firmware for the 256GB Vertex 4. Going from 1.4 to 1.5 is non-destructive. The inconsistency of graphs in other SSD reviews that included the 512GB Vertex 4 drive with 1.5 firmware and the 256GB Vertex 4 drive with 1.4 firmware drove me nuts.
When I saw the Barefoot 3 press release on Yahoo Finance, I immediately went to your site hoping to see the review. I was happy to see the article up, but when I saw your review sample was 256GB I feared you would not have updated the firmware on the Vertex 4 yet. Unfortunately, my fears were confirmed. I love your site, that's why I'm sharing my $.02 as a loyal reader.
Take care,
Jeffrey
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
Some of the results are actually using the 1.5 firmware (IO consistency, steady state 4KB random write performance). We didn't notice a big performance difference between 1.4 and 1.5 which is why I didn't rerun on 1.5 for everything.
Take care,
Anand
iwod - Tuesday, November 27, 2012 - link
Isn't this similar to before? SandForce came in and reached the top speed of SATA 6Gbps, then the other controllers, Marvell and Barefoot, managed to catch up. That is exactly what happened before with the SATA 3Gbps port. So in 2013 we will have controllers and SSDs all offering similar performance, bottlenecked by the port speed.
When are we going to see SATA Express to give us 20Gbps? We need that ASAP.
A5 - Wednesday, November 28, 2012 - link
SATA Express (on PCIe 3.0) will top out at 16 Gbps until PCIe 4.0 is out. This is the same bandwidth as single-channel DDR3-2133, by the way, so 16 Gbps should be plenty of performance for the next several years.
extide - Wednesday, November 28, 2012 - link
Actually I believe single-channel DDR3-2133 is 16GiB a sec, not Gb a sec, so SATA Express is only ~1/8th the speed of single-channel DDR3-2133.
jwilliams4200 - Wednesday, November 28, 2012 - link
It is good to see anandtech including results of performance consistency tests under a heavy write workload. However, there is a small change or addition you should make for these results to be much more useful.
You fill the SSDs up to 100% with sequential writes and I assume (I did not see a specification in your article) do 100% full-span 4KQD32 random writes. I agree that will give a good idea of worst-case performance, but unfortunately it does not give a good idea of how someone with that heavy a write load would use these consumer SSDs.
Note that the consumer SSDs only have about 7% spare area reserved. However, if you overprovision them, some (all?) of them may make good use of the extra reserved space. The Intel S3700 only makes available 200GB / 264GiB of flash, which comes to 70.6% available, or 29.4% of the on-board flash is reserved as spare area.
What happens if you overprovision the Vector a similar amount? Or to take a round number, only use 80% of the available capacity of 256GB, which comes to just under 205GB.
I don't know how well the Vector uses the extra reserved space, but I do know that it makes a HUGE improvement on the 256GB Samsung 840 Pro. Below are some graphs of my own tests on the 840 Pro. I included graphs of throughput vs. GB written, as well as latency vs. time. On the 80% graphs, I first wrote to all the sectors up to the 80% mark, then I did an 80% span 4KQD32 random write. On the 100% graphs, I did basically the same as anandtech did, filling up 100% of the LBAs then doing a 100% full-span 4KQD32 random write. Note that when the 840 Pro is only used up to 80%, it improves by a factor of about 4 in throughput, and about 15 in average latency (more than a 100 times improvement in max latency). It is approaching the performance of the Intel S3700. If I used 70% instead of 80% (to match the S3700), perhaps it would be even better.
Here are some links to my test data graphs:
http://i.imgur.com/MRZAM.png
http://i.imgur.com/Vvo1H.png
http://i.imgur.com/eYj7w.png
http://i.imgur.com/AMYoe.png
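As a rough check on the spare-area comparison in the comment above, here is a small Python sketch (the 264 GiB raw-flash figure for the S3700 is the one quoted in the comment; treat the exact numbers as assumptions):

```python
# Hedged sketch: effective spare area = fraction of raw flash that never holds user data.
GIB, GB = 1024**3, 1000**3

def spare_fraction(raw_flash_bytes, user_data_bytes):
    return 1 - user_data_bytes / raw_flash_bytes

vector_raw = 256 * GIB                                      # 256 GiB of NAND behind a "256 GB" drive
print(f"{spare_fraction(vector_raw, 256 * GB):.1%}")        # ~6.9%  -- stock, drive filled 100%
print(f"{spare_fraction(vector_raw, 0.8 * 256 * GB):.1%}")  # ~25.5% -- only 80% of LBAs ever written
print(f"{spare_fraction(264 * GIB, 200 * GB):.1%}")         # ~29.4% -- Intel S3700, per the comment
```

So writing to only 80% of a consumer drive puts its effective spare area in the same neighborhood as the S3700's factory reservation, which is the point of the 80% test runs discussed here.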
Ictus - Wednesday, November 28, 2012 - link
Just so I am clear, did you actually re-create the partition utilizing 80% of the space, or just keep the used space at 80%?
jwilliams4200 - Wednesday, November 28, 2012 - link
No partitions, no filesystems. I'm just writing to the raw device. In the 80% case, I just avoided writing to any LBAs higher than 80%.
JellyRoll - Wednesday, November 28, 2012 - link
Excellent testing, very relevant, and thanks for sharing. How do you feel that the lack of TRIM in this type of testing affects the results? Do you feel that testing without a partition and TRIM would not provide an accurate depiction of real-world performance?
jwilliams4200 - Wednesday, November 28, 2012 - link
I just re-read your comment, and I thought perhaps you were asking about the sequence of events instead of what I just answered. The sequence is pretty much irrelevant since I did a secure erase before starting to write to the SSD.
1) Secure erase SSD
2) Write to all LBAs up to 80%
3) 80% span 4KQD32 random write
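For anyone who wants to reproduce something like step 3, here is a minimal Python sketch of the idea (the device path is a placeholder, the write count is arbitrary, and a real run would use fio with direct, synchronous I/O rather than buffered pwrite):

```python
import os, random

DEV = "/dev/sdX"     # hypothetical test device -- never one holding data you care about
BLOCK = 4096         # 4 KiB writes
SPAN = 0.80          # confine all writes to the lowest 80% of LBAs

fd = os.open(DEV, os.O_WRONLY)
dev_bytes = os.lseek(fd, 0, os.SEEK_END)      # block devices report their size via seek-to-end
max_block = int(dev_bytes * SPAN) // BLOCK    # highest 4 KiB block we are allowed to touch

buf = os.urandom(BLOCK)
for _ in range(1_000_000):                    # a short burst; steady-state runs write far more
    os.pwrite(fd, buf, random.randrange(max_block) * BLOCK)
os.close(fd)
```

Because the top 20% of LBAs are never written, the controller never has to map them and can treat that flash as extra spare area.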
dj christian - Thursday, November 29, 2012 - link
What is SZ80/100 in the graphs, what do they stand for?
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
You are correct, I ran a 100% span of the 4KB/QD32 random write test. The right way to do this test is actually to gather all IO latency data until you hit steady state, which you can usually do on most consumer drives after just a couple of hours of testing. The problem is the resulting dataset ends up being a pain to process and present.
There is definitely a correlation between spare area and IO consistency, particularly on drives that delay their defragmentation routines quite a bit. If you look at the Intel SSD 710 results you'll notice that despite having much more spare area than the S3700, consistency is clearly worse.
As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up). I think there's definitely value in looking at exactly what you're presenting here. The interesting aspect to me is this tells us quite a bit about how well drives make use of empty LBA ranges.
I tend to focus on the worst case here simply because that ends up being what people notice the most. Given that consumers are often forced into a smaller capacity drive than they'd like, I'd love to encourage manufacturers to pursue architectures that can deliver consistent IO even with limited spare area available.
Take care,
Anand
jwilliams4200 - Wednesday, November 28, 2012 - link
Anand wrote:
"As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up)."
Actually, all of my tests did use up all the spare area, and had reached steady state during the graph shown. Perhaps you have misunderstood how I did my tests. I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests.
The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough.
Look at this one again:
http://i.imgur.com/Vvo1H.png
It reaches steady state somewhere between 80 and 120GB. The spare area is used up at about 62GB and the speed drops precipitously, but then there is a span where the speed actually increases slightly, and then levels out somewhere around 80-120GB.
Note that steady state is about 110MB/sec. That is about 28K IOPS. Not as good as the Intel S3700, but certainly approaching it.
Ictus - Wednesday, November 28, 2012 - link
Hey J, thanks for taking the time to reply to me in the other comment.
I think my question is even more noobish than you have assumed.
"I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests."
I am confused because I thought the only way to "over-provision" was to create a partition that didn't use all the available space??? If you are merely writing raw data up to the 80% full level, what exactly does over-provisioning mean? Does the term "over-provisioning" just mean you didn't fill the entire drive, or did you do something to the drive?
jwilliams4200 - Wednesday, November 28, 2012 - link
No, overprovisioning generally just means that you avoid writing to a certain range of LBAs (aka sectors) on the SSD. Certainly one way to do that is to create a partition smaller than the capacity of the SSD. But that is completely equivalent to writing to the raw device but NOT writing to a certain range of LBAs. The key is that if you don't write to certain LBAs, however that is accomplished, then the SSD's flash translation layer (FTL) will not have any mapping for those LBAs, and some or all SSDs will be smart enough to use those unmapped LBAs as spare area to improve performance and wear-leveling.
So no, I did not "do something to the drive". All I did was make sure that fio did not write to any LBAs past the 80% mark.
gattacaDNA - Sunday, December 2, 2012 - link
"The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough."WOW - this is an interesting discussion which concludes that by simply over-provisioning a consumer SSD by 20-30% those units can approach the vetted S3700! I had to re-read those posts 2x to be sure I read that correctly.
It seems some later posts state that if the workload is not sustained (drive can recover) and the drive is not full, that the OP has little to no benefit.
So is an best bang really just not fill the drives past 75% of the available area and call it a day?
jwilliams4200 - Sunday, December 2, 2012 - link
The conclusion I draw from the data is that if you have a Samsung 840 Pro (or similar SSD; I believe several consumer SSDs behave similarly with respect to OP), and -- the big one -- IF you have a very heavy, continuous write workload, then you can achieve large improvements in throughput and huge improvements in maximum latency if you overprovision at 80% (i.e., leave 20% unwritten or unpartitioned).
Note that such OP is not needed for most desktop users, for two reasons. First, most desktop users will not fill the drive 100%, and as long as they have TRIM working and the drive is only filled to 80% (even if the filesystem covers all 100%), it should behave as if it were actually overprovisioned at 80%. Second, most desktop users do not continuously write tens of gigabytes of data without pause.
gattacaDNA - Sunday, December 2, 2012 - link
Thank you. That's what my take-away is as well.
jwilliams4200 - Wednesday, November 28, 2012 - link
By the way, I am not sure why you say the data sets are "a pain to process and present". I have written some test scripts to take the data automatically and to produce the graphs automatically. I just hot-swap the SSD in, run the script, and then come back when it is done to look at the graphs.
Also, the best way to present latency data is in a cumulative distribution function (CDF) plot with a normal probability scale on the y-axis, like this:
http://i.imgur.com/RcWmn.png
http://i.imgur.com/arAwR.png
One other tip is that it does not take hours to reach steady state if you use a random map. This means that you do a random write to all the LBAs, but instead of sampling with replacement, you keep a map of the LBAs you have already written to and don't randomly select the same ones again. In other words, write each 4K-aligned LBA on a tile, put all the tiles in a bag, and randomly draw the tiles out but do not put the drawn tile back in before you select the next tile. I use the 'fio' program to do this. With an SSD like the Samsung 840 Pro (or any SSD that can do 300+ MB/s 4KQD32 random writes), you only have to write a little more than the capacity of the SSD (e.g., 256GB + 7% of 256GB) to reach steady state. This can be done in 10 or 20 minutes on fast SSDs.
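The "tiles in a bag" idea is just sampling without replacement. A toy Python sketch of the same random-map approach (the block count is shrunk to keep the example small; fio maintains this map at full drive scale internally):

```python
import random

BLOCK = 4096
num_blocks = 1_000_000            # toy figure; a 256 GB drive has ~62.5 million 4 KiB blocks

# The "random map": every block index exactly once, in random order,
# i.e. draw tiles from the bag without putting them back.
order = list(range(num_blocks))
random.shuffle(order)

for blk in order:
    offset = blk * BLOCK
    # A test harness would issue one 4 KiB write at `offset` here. After a single
    # pass (plus ~7% extra to churn the spare area) the drive is at steady state,
    # instead of the hours that sampling *with* replacement needs to hit every LBA.
    pass
```

One full pass guarantees every LBA is written exactly once, which is why steady state arrives after roughly one drive capacity of writes.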
Brahmzy - Wednesday, November 28, 2012 - link
I consistently over-provision every single SSD I use by at least 20%. I have had stellar performance doing this with 50-60+ SSDs over the years.
I do this on friends'/family's builds and tell anybody I know to do this with theirs. So, with my tiny sample here, OP'ing SSDs is a big deal, and it works. I know many others do this as well. I base my purchase decisions with OP in mind. If I need 60GB of space, I'll buy a 120GB. If I need 120GB of usable space, I'll buy a 250GB drive, etc.
I think it would be a valuable addition to the Anand suite of tests to account for this option that many of us use. Maybe a 90% OP write test and maybe an 80% OP write test. Assuming there's a consistent difference between the two.
Brahmzy - Wednesday, November 28, 2012 - link
I should note that this works best after a secure erase, and during the Windows install don't let Windows take up the entire drive. Create a smaller partition from the get-go. Don't shrink it later in Windows, once the OS has been installed. I believe the SSD controller knows that it can't do its work in the same way if there is/was an empty partition taking up those cells. I could be wrong - this was the case with the older SSDs - maybe the newer controllers treat any free space as fair game to do their garbage collection/wear leveling.
jwilliams4200 - Wednesday, November 28, 2012 - link
If the SSD has a good TRIM implementation, you should be able to reap the same OP benefits (as a secure erase followed by creating a smaller-than-SSD partition) by shrinking a full-SSD partition and then TRIMming the freed LBAs. I don't know for a fact that Windows 7 Disk Management does a TRIM on the freed LBAs after a shrink, but I expect that it does.
I tend to use Linux more than Windows with SSDs, and when I am doing tests I often use Linux hdparm to TRIM whichever sectors I want to TRIM, so I do not have to wonder whether Windows TRIM did what I wanted or not. But I agree that the safest way to OP in Windows is to secure erase and then create a partition smaller than the SSD -- then you can be absolutely sure that your SSD has erased LBAs, never written to, for the SSD to use as spare area.
seapeople - Sunday, December 2, 2012 - link
Wouldn't it be better if you just paid half price and bought the 60GB drive (or 80GB if you actually *need* 60GB) for the amount of space you need at present, and then in a year or two, when SSDs are half as expensive, more reliable, and twice as fast, you upgrade to the amount of space your needs have grown to?
Your new drive without overprovisioning would destroy your old overprovisioned drive in performance, have more space (because we're double the size and not 30% OP'ed), you'd have spent the same amount of money, AND you now have an 80GB drive for free.
Of course, you should never go over 80-90% usage on an SSD anyway, so if that's what you're talking about then never mind...
kozietulski - Wednesday, November 28, 2012 - link
Nice results and great pictures. This really shows the importance of free space/OP for random write performance. Even more amazing is that the results you got seem to fit quite well with a simplified model of SSD internal workings:
Let's assume we have an SSD with only the usual ~7% of OP which was nearly 100% filled (one could say trashed) by purely random 4KB writes (should we now write KiB just to make a few strange guys happy?), and assume also that the drive operates on 4KB pages and 1MB blocks (today's drives seem to use more like 8KB/2MB, but 4KB makes things simpler to think about), so having 256 pages per block. If the trashing was good enough to achieve perfect randomisation, we can expect that each block contains about 18-19 free pages (out of 256). Under heavy load (QD32, using NCQ etc.) decent firmware should be able to make use of all those free pages in a given block before it (the firmware) decides to write the block back to NAND. Thus under heavy load and with the above assumptions (7% OP) we can expect in the worst case (SSD totally trashed by random writes and thus free space fully randomized) a Write Amplification of about 256:18 ~= 14:1.
Now when we allow for 20% of free space (in addition to the implicit ~7% OP) we should see on average about 71-72 out of 256 pages free in each and every block. This translates to WA ~= 3.6:1 (again assuming that the firmware is able to consume all the free space in a block before writing it back to NAND. That is maybe not so obvious, as there are limits on the max number of I/Os bundled in a single NCQ request, but it should not be impossible for the firmware to delay the block write a few msecs till the next request comes, to see if there are more writes to be merged into the block).
Differences in WA translate directly to differences in performance (as long as there is no other bottleneck, of course), so with 14:3.6 ~= 3.9 we may expect random 4KB write performance nearly 4x higher for a drive with 20% free space compared to a drive working with only the bare 7% of implicit OP.
It may be just an accident, but that seems to fit pretty closely with the results you achieved. :)
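A minimal Python rendering of that simplified model, under the same assumptions the comment makes (256 pages per block, free pages spread evenly, firmware that fills a block's free pages before rewriting it); it is a back-of-the-envelope sketch, not how any real FTL is implemented:

```python
PAGES_PER_BLOCK = 256                     # 4 KiB pages in a 1 MiB block

def worst_case_wa(free_fraction):
    free_pages = free_fraction * PAGES_PER_BLOCK      # avg free pages per "trashed" block
    return PAGES_PER_BLOCK / free_pages               # one whole-block rewrite per that much host data

wa_stock = worst_case_wa(0.07)            # ~7% implicit OP       -> WA ~ 14
wa_op    = worst_case_wa(0.07 + 0.20)     # plus 20% free space   -> WA ~ 3.7
print(wa_stock, wa_op, wa_stock / wa_op)  # ratio ~3.9, matching the roughly 4x throughput gap observed
```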
jwilliams4200 - Wednesday, November 28, 2012 - link
Interesting analysis. Without knowing exactly how the 840 Pro firmware works I cannot be certain, but it does sound like you have a reasonable explanation for the data I observed.
kozietulski - Wednesday, November 28, 2012 - link
Yeah, there are a lot of assumptions and simplifications in the above... I surely wouldn't call it analysis; perhaps even hypothesis would be a bit too much. Modern SSDs have lots of bells and whistles - as well as quirks - and most of them - particularly the quirks - aren't documented well. All that means that the conformity of the estimation with your results may very well be nothing more than just a coincidence.
The one thing I'm reasonably sure of, however - and it is the reason I thought it was worth writing the post above - is that for random 4K writes on a heavily (ab)used SSD the factor which limits performance the most is Write Amplification. (At least as long as we talk about decent SSDs with decent controllers - Phisons do not qualify here I guess :)
In addition to the obvious simplifications there was another shortcut I took above: I based my reasoning on the "perfectly trashed" state of the SSD - that is, one where the SSD's free space is spread in pages equally over all NAND blocks. In theory the purpose of GC algorithms is to prevent drives from reaching such a state. Still, I think there are workloads which may bring SSDs close enough to that worst possible state, so it is still meaningful as a worst-case scenario.
In your case however the starting point was different. AFAIU you first used sequential I/O to fill 100/80% of the drive capacity, so we can safely assume that before the random write session started the drive contained about 7% (or ~27% in the second case) of clean blocks (with all pages free) and the rest of the blocks were completely filled with data with no free pages (thanks to the NAND-friendly nature of sequential I/O).
Now when the random 4K writes start to fly... looking from the LBA space perspective these are by definition overwrites of random chunks of LBA space, but from the SSD's perspective at first we have writes filling the pool of clean blocks, coupled with deletions of randomly selected pages within blocks which were until now fully filled with data. Of course such deletion is actually reduced to just marking those pages as free in the firmware's FTL tables (any GC at that moment seems highly unlikely imho).
At last comes the moment when the clean block pool is exhausted (or when the size of the clean pool falls below the threshold), which wakes up the GC algorithms to do their sisyphean work. At that moment the situation looks like this (assuming that there was no active GC until now and that the firmware was capable enough to fill clean blocks fully before writing to NAND): 7% (or 27% in the second case) of blocks are fully filled with (random) data whereas 93/73% of blocks are now pretty much "trashed" - they contain (virtually - just marked in the FTL) randomly distributed holes of free pages. The net effect is that - compared to the starting point - the free space condensed at first in the pool of clean blocks is now evenly spread (with page granularity) over most of the drive's NAND blocks. I think that state does not look that much different from the state of complete, random trashing I assumed in the post above...
From that point onward till the end of the random session there is an ongoing epic struggle against entropy: on one side the stream of incoming 4K writes is punching more free-page holes in the SSD's blocks, effectively trying to randomize the distribution of the drive's available free space, while on the other side the GC algorithms are doing their best to reduce the chaos and consolidate free space as much as possible.
As a side note, I think it is really a pity that there is so little transparency amongst vendors in terms of communicating to customers the internal workings of their drives. I understand commercial issues and all, but depriving users of information they need to efficiently use their SSDs leads to lots of confused and sometimes simply disappointed consumers, and that is not good - in the long run - for the vendors either. Anyway, maybe it is time to think about open source SSD firmware for the free community?! ;-)))
PS: Thanks for the reference to fio. Looks like a very flexible tool, maybe also easier to use than Iometer. Surely worth a try at least.
cbutters - Wednesday, November 28, 2012 - link
So based on 36TB ending the warranty, basically you can only fill up your 512GB drive about 72 times before the warranty expires? That doesn't seem like a whole lot of durability. Reinstalling a few large games several times could wear this out pretty quickly... or am I understanding something incorrectly?
According to my calculations, assuming a gigabit network connection running at 125MB per second storing data, that is .12GB per second, 7.3242GB per minute, 439GB per hour, or 10.299TB per day... Assuming this heavy write usage, that 36TB could potentially be worn out in as little as 3.5 days, using a conservative gigabit network speed as the baseline.
Makaveli - Wednesday, November 28, 2012 - link
I've had an SSD for 3 years and it currently has 3TB of host writes!
jimhsu - Wednesday, November 28, 2012 - link
I assume warranties such as this assume an unrealistically high write amplification (e.g. 10x), probably to save the SSD maker some skin. Your sequential write example (google "rogue data recorder") most likely has a write amplification very close to 1. Hence, you can probably push much more data (though the warranty still remains conservative).
jwilliams4200 - Wednesday, November 28, 2012 - link
Even WA=10 is not enough to account for the 512GB warranty of 36TB. That only comes to about 720 erase cycles if WA=10.
I think the problem is that the warrantied write amount should really scale with the capacity of the SSD.
Assuming WA=10, then 128GB @ 3000 erase cycles should allow about 3000 *128GB/10 = 38.4TB. The 256GB should allow 76.8TB. And the 512GB should get 153.6TB.
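For concreteness, here is a small Python sketch of that scaling argument (the 3,000 P/E cycles and WA=10 are the commenter's assumptions, not OCZ's published figures):

```python
# Hedged endurance math: host writes allowed = erase cycles * capacity / write amplification.
def warrantied_host_writes_tb(capacity_gb, pe_cycles=3000, write_amp=10):
    return capacity_gb * pe_cycles / write_amp / 1000    # in TB

for cap in (128, 256, 512):
    print(cap, warrantied_host_writes_tb(cap))           # 38.4, 76.8, 153.6 TB

# OCZ's blanket figure for comparison: 20 GB/day over 5 years.
print(20 * 365 * 5 / 1000)                               # ~36.5 TB, i.e. the quoted ~36 TB
```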
Anand Lal Shimpi - Wednesday, November 28, 2012 - link
The write amount does actually scale with capacity, OCZ just tried to simplify things with how they presented the data here. In actuality, even the smallest capacity Vector should be good for more than 20GB of host writes per day x 5 years.
Take care,
Anand
jwilliams4200 - Wednesday, November 28, 2012 - link
Wait, what? I thought OCZ claimed the warranty was the same for all capacities: 5 years or 36TB, whichever comes first.
Are you saying that the 36TB number is only for the 128GB Vector, and the other two have double and quadruple that amount allowed before the warranty runs out?
Kristian Vättö - Wednesday, November 28, 2012 - link
OCZ only says 20GB of writes a day for 5 years in the Vector datasheet, no capacity differentiation:
http://www.ocztechnology.com/res/manuals/OCZ_Vecto...
JellyRoll - Wednesday, November 28, 2012 - link
These endurance tests that they use to generate the predicted life of the SSD are run with a 100% fill and full-span random writes. This prevents the SSD from performing, as efficiently, many of the internal tasks that reduce write amplification. You would need to be doing full-span random writes to see these types of endurance numbers.
Free capacity on the drive, and data other than 4K random writes, will result in much higher endurance.
These numbers are intentionally worst case scenarios.
A5 - Wednesday, November 28, 2012 - link
If your usage case is saturating a Gigabit connection 24/7, you need to be buying SLC Enterprise drives (and get a better network connection :P).
36TB doesn't sound like much if you're making up crazy scenarios, but that is probably near a decade of use for a normal power-user. Another way to put it is that you'd have to re-install a 12GB game 3,000 times to get that number.
seapeople - Sunday, December 2, 2012 - link
But if you reinstall a 12GB game four times per day, and eight times on a Saturday, then your drive could be worn out after just three months!
It's a reasonable use case for someone who only wants to spring for the budget 40GB SSD, but still wants to oscillate between playing four large games on a daily basis.
jwilliams4200 - Monday, December 3, 2012 - link
Your math is off. That is only 713GB.
jwilliams4200 - Monday, December 3, 2012 - link
Oops, my math is off, too. But yours is still off.
3 months is 13 weeks, so 13 Saturdays and 78 non-Saturdays.
12*(4*78 + 8*13) = 4992GB
So you have to do that about 7.2 times to get to 36TB, which is about 1.8 years.
jeff3206 - Wednesday, November 28, 2012 - link
Vertex, Octane, Agility, Synapse, Revodrive, Z-Drive, Velodrive and now Vector, plus an array of generation numbers and suffixes. Could OCZ's flash product naming system be any more complicated?
Numerical product names may not be sexy, but they sure are easy to understand.
wpcoe - Wednesday, November 28, 2012 - link
The chart on the first page of the review shows the sequential write speed for the 128GB model as 530MB/s, when the OCZ site (http://www.ocztechnology.com/vector-series-sata-ii... shows it as 400MB/s.
Kristian Vättö - Wednesday, November 28, 2012 - link
Thanks for the heads up, I fixed the table.
Death666Angel - Wednesday, November 28, 2012 - link
So, this drive costs as much as an 840 Pro (or a little less for the 512GB version) and has slightly worse performance in most cases. But if I use more than 50% of its capacity, I get much worse performance?
That's something that bugged me in the Vertex 4 reviews: you test with the performance mode enabled in pretty much all graphs, but I will use it without it, because if I buy an SSD, I intend to use more than 50% of the drive.
I don't get it.
ocztony - Wednesday, November 28, 2012 - link
You ONLY see the slowdown when you write to the whole of the drive in one go... so you will only ever see it if you sit running HDTach or a similar bench across the whole of the drive. The drive is actually intelligent: say you write a 4.7GB file, for instance, it writes the data in a special way, more like an enhanced burst mode. Once the writes have finished it then moves that written data to free up this fast-write NAND so it's available again.
It does this continually as you use the drive; if you are an average user writing say 15GB a day you will NEVER see a slowdown.
It's a benchmark quirk and nothing more.
jwilliams4200 - Wednesday, November 28, 2012 - link
That is incorrect.
The way it works is that in the STEADY STATE, performance mode is faster than storage mode. This should be obvious, because why would they even bother having two modes if the steady-state performance were not different between the modes?
Now, there is a temporary (but severe) slowdown when the drive switches from performance mode to storage mode, but I don't think that is what Death666Angel was talking about.
By the way, if you want a simple demonstration of the STEADY STATE speed difference between the modes, then secure erase the SSD, then use something like HD Tune to write to every LBA on the SSD. It will start out writing at speed S1, then around 50% or higher it will write at a lower speed, call it Sx. But that is only temporary. Give it a few minutes to complete the mode switch, then run the full drive write again. It will write at a constant speed over the drive, call it S2. But the key thing to notice is that S2 is LESS THAN S1. That is a demonstration that the steady-state performance is lower once the drive has been filled past 50% (or whatever percentage triggers the mode switch).
ocztony - Wednesday, November 28, 2012 - link
You do know who I am?
I know how the drives work... you do not, fully. I'm working on getting you info so you will know more.
jwilliams4200 - Wednesday, November 28, 2012 - link
Yes, I know who you are. You have posted incorrect information in the past. I have noticed on the OCZ forums that whatever makes OCZ or OCZ products look good is posted, whether true or not.
I am talking about facts. Do you dispute what I wrote? Because hardocp measured exactly what I wrote here:
http://www.hardocp.com/article/2012/11/27/ocz_vect...
Besides, you did not address the elephant in the room. If there is no difference in speed between the two modes, then why do you have two modes, hmmm?
ocztony - Thursday, November 29, 2012 - link
Just for you...
http://dl.dropbox.com/u/920660/vector/linear%20wri...
http://dl.dropbox.com/u/920660/vector/average%20wr...
In the end you do NOT fully understand how the drives work, you think you do, you do not. If a 100% write to all LBA test is run on the 128 and 256's you get what Anand shows, the reason for this is the drive is unable to move data around during the test. So...if you like running 100% LBA write tests to your drive all day them knock yourself out...buy the 512 and as you see it delivers right thru the LBA range. However if you just want to run the drive as an OS drive and you average a few GB writes per day, with coffee breaks and time away from the PC then the drive will continually recover and deliver full speed with low write access for every write you make to the drive right up till its full..the difference is you are not writing to 100% LBA in 1 go.
So what I said about it being a benchmark quirk is 100% correct. Yes, when you run that benchmark the 256 and 128 models do slow down; however, if you install an OS and then load all your MP3s onto the drive, and it hits 70% of a 128, it may slow if it runs out of burst-speed NAND to write to, BUT as soon as you finish writing it will recover. In fact, if you wrote the MP3s in 10GB chunks with a one-minute pause between each write it would never slow down.
The drives are built to deliver with normal write usage patterns...you fail to see this though.
Maybe we need to give the option to turn the burst mode off and on; maybe then you will see the benefits.
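To make the "burst NAND plus background recovery" description above concrete, here is a toy model with entirely invented parameters (a 24GB fast region that drains at 1GB/s while the drive is idle). It is not OCZ's firmware, only an illustration of why chunked writes with pauses never hit the slow path while a single full-drive write does:

```python
# Toy model only: the region size and drain rate are invented, not OCZ firmware values.
FAST_REGION_MB = 24 * 1024     # assumed size of the fast "burst" write region
DRAIN_MB_PER_S = 1000          # assumed rate at which idle time frees that region

def slow_path_gb(chunks_gb, pause_s):
    """Return how many GB of a write pattern land on the slow path."""
    fast_free = FAST_REGION_MB
    slow = 0
    for chunk in chunks_gb:
        remaining_mb = chunk * 1024
        burst = min(remaining_mb, fast_free)   # absorbed by the fast region
        fast_free -= burst
        slow += remaining_mb - burst           # overflow is forced onto the slow path
        fast_free = min(FAST_REGION_MB, fast_free + pause_s * DRAIN_MB_PER_S)
    return slow / 1024

print(slow_path_gb([10] * 12, pause_s=60))  # 120 GB in 10 GB chunks with pauses -> 0.0
print(slow_path_gb([120], pause_s=0))       # the same 120 GB in one go -> 96.0
```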
ocztony - Thursday, November 29, 2012 - link
BTW, the test was run on an MSI 890FX with SB850, so an old SATA 3 AMD-based platform; this is my workstation. The drive is much faster on an Intel platform, due to the AMD SATA controller not being as fast.
jwilliams4200 - Thursday, November 29, 2012 - link
And you completely failed to answer either of the questions I posed. It seems to me that you do not understand how your own SSDs work, and you are unable to explain it.
ocztony - Thursday, November 29, 2012 - link
I show you a Vector with no slowdown and the same write access latency across 100% of the LBA range, and explain why the two other capacities work the way they do, and it's still not good enough.
Come to my forum, ask what you want and we will do everything we can to answer every question within the realms of not disclosing any IP we have to protect.
In fact, jwilliams, email me at tony_@_ocztechnology.com (without the underscores) and I will forward an NDA to you; sign it and get it back to me, and I will call you and explain exactly how the drives work. You will then know.
jwilliams4200 - Thursday, November 29, 2012 - link
No NDA. All we are talking about is the basic operation of the device. If you cannot explain that to everyone, then you are not worth much as a support person. I asked two questions, which you still have not answered:
1) How do you explain the HD Tune results on HardOCP that I linked to?
2) If storage mode and performance mode are the same speed, then why bother having two modes? Why call one "performance"?
jwilliams4200 - Thursday, November 29, 2012 - link
By the way, it seems like your explanation is: if you do this, and only this, and do not do that, and do this other thing, but do not do that other thing, then the performance of OCZ SSDs will be good. So I have another question for you. Why should anyone bother with all that rigamarole when they can buy a Samsung 840 Pro for the same price and use it however they want and get good performance?
That is basically the original question from Death666Angel.
Death666Angel - Friday, November 30, 2012 - link
Heh, didn't think I'd set off such a discussion. jwilliams is right about what my question would be. And just showing a graph that does not have the slowdown in the second 50% is not proof that the issue of a slowdown in the second 50% does not exist (as it has been shown by other sites, and you cannot tell us why they saw it). I also don't care about the rearranging of the NAND that takes place between the two operation modes; that slowdown is irrelevant to me. What I do care about is that there are two different modes, one operating when the disk is less than 50% full, the other operating over that threshold, and that I will only use the slower one, because I won't buy a 512GB drive just to have 256GB of usable space. And if the two modes have exactly the same speed, why have them at all? NDA information about something as vital as that is bullshit, btw. :)
HisDivineOrder - Wednesday, November 28, 2012 - link
...these drives idle a lot more of the time than they work at full speed. A considerably higher idle is just bad all around. I don't think OCZ's part warrants the price they're asking. Its performance is lower most of the time, it's a power hog, it's obviously hotter, it has the downsides of their 50% scheme, and it has OCZ's (Less Than) "Stellar" track record of firmware blitzkrieg to go along with it.
I wonder, how many times will I lose all my data while constantly updating its firmware? 10? 20 times?
fwip - Wednesday, November 28, 2012 - link
If the drive is idle 24/7 for an entire year, it will cost you less than a dollar compared to running no drive at all. "Power hog" is pretty relative.
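That claim is easy to sanity-check. A back-of-the-envelope sketch, using assumed numbers (roughly 1 W of extra idle draw and $0.12/kWh, neither taken from the review), puts the figure on the order of a dollar a year:

```python
# Back-of-the-envelope idle power cost; both inputs are assumptions, not review data.
extra_idle_watts = 1.0      # assumed extra idle draw versus a more frugal SSD
price_per_kwh = 0.12        # assumed electricity price in $/kWh
kwh_per_year = extra_idle_watts * 24 * 365 / 1000
print(f"~{kwh_per_year:.1f} kWh/year, roughly ${kwh_per_year * price_per_kwh:.2f}")
```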
Shadowmaster625 - Wednesday, November 28, 2012 - link
Be careful what you wish for. I've been reading horror stories about the 840!
A5 - Wednesday, November 28, 2012 - link
From where? I haven't seen any.
Brahmzy - Wednesday, November 28, 2012 - link
That was beta firmware that Samsung has admitted had a problem. They said all retail drives shipped with the newer, fixed firmware. There have been ZERO reported failures of retail 840 Pro drives.
designerfx - Wednesday, November 28, 2012 - link
I keep wanting to buy an SSD, but every time I wait, the prices drop and the performance increases substantially. How long are we going to wait for 1-2TB SSDs? Bring them on already at reasonable prices ($250)!
mark53916 - Wednesday, November 28, 2012 - link
Anand(tech) did lots of good testing, but seems to have left out copy performance. Copy performance can be less than one tenth of the read or write performance, even after taking into account that copying a file takes twice the interface bandwidth of moving the file in one direction over a unidirectional interface. (Seeing that one drive can only copy at less than 10MB/s, compared to 200MB/s for another drive, when both can read or write faster than 400MB/s over a 6Gb/s interface, is much more important than seeing that one can read at 500MB/s and the other only at 400MB/s.)
I use actual copy commands (for single files and trees), the same on TrueCrypt volumes, as well as the HD Tune Pro File Benchmark for these tests. (For HD Tune Pro, the difference between 4 KB random single and multi is often the telling point.)
I'd also like to see the performance of the OCZ Vector at 1/2 capacity.
I'd also like to see how the OCZ Vector 512GB performs on the Oracle Swingbench benchmark. It would be interesting to see how the Vector at 1/2 capacity compares to the Intel SSD DC S3700.
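A bare-bones version of the copy test being requested here could be as simple as timing an OS-level copy of one large file and of a directory tree on the drive under test. The paths and sizes below are placeholders, and the OS page cache can hide the mixed read/write behaviour unless you flush it (or use far more data than you have RAM):

```python
# Bare-bones copy benchmark; the paths under testdata/ are placeholder assumptions.
import os
import shutil
import time

BIG_FILE = "testdata/big_file.bin"    # e.g. a single multi-GB file on the drive under test
TREE = "testdata/small_files"         # e.g. ~6GB spread across thousands of files

def timed_copy(label, copy_fn, src, dst):
    start = time.time()
    copy_fn(src, dst)
    os.sync()                         # flush the page cache so timing reflects the drive
    elapsed = time.time() - start
    if os.path.isdir(dst):
        size = sum(os.path.getsize(os.path.join(root, name))
                   for root, _, names in os.walk(dst) for name in names)
    else:
        size = os.path.getsize(dst)
    print(f"{label}: {size / elapsed / 1e6:.1f} MB/s")

timed_copy("large file copy", shutil.copyfile, BIG_FILE, BIG_FILE + ".copy")
timed_copy("tree copy", shutil.copytree, TREE, TREE + "_copy")
```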
extide - Wednesday, November 28, 2012 - link
Copy performance is tied to the block size you use when reading and writing. I.e., if you read 4k at a time, then write 4k at a time, you will get different performance than reading 4MB at a time and then writing 4MB. So it largely depends on the specific app you are using. Copy isn't anything special, just reads and writes.
mark53916 - Thursday, November 29, 2012 - link
Maybe I should have explained more: I have found that most USB keys and many SATA SSDs perform MUCH worse (a factor of 10, and even up to more than 300, decrease in performance) when reads and writes are mixed, rather than being a bunch of reads followed by a bunch of writes. The reads and writes can be to random locations and there can still be a big performance hit.
I feel that a simple operating system copy of a large sequential file and of a tree of smaller files should be done, since the two tests have shown me large performance differences between two devices that have about the same:
- sequential read rate
- sequential write rate
- reads/second
- writes/second
when the reads and writes aren't mixed.
I also found that the HD Tune Pro File Benchmark sometimes shows significant (factor of 10 or more) differences between the sequential 4 KB random single and 4 KB random multi tests.
(For my own personal use, the best benchmark seems to be copying a tree of my own data that has about 6GB in about 25000 files, and copying from one 8GB TrueCrypt virtual disk to another on the same device. I see differences of about 15 to one between devices I have tested in the last year that all show speeds limited by my 7-year-old motherboards in sequential tests, and that all perform much slower in the tree copy tests.)
Since the tree is my ad-hoc data and my hardware is so old, I don't expect anyone to be able to duplicate the tests, but I have given results in USENET groups that show there are large performance differences that are not obviously related to bottlenecks or slowness of my hardware.
There could be something complicated happening that is due, for instance, to a problem with intermixing read and write operations on the USB 3 or SATA interface that depends on the device under test but is not an inherent problem with the device under test. But I think the low performance for interleaved reads and writes is at least 90% due to the device under test and less than 10% due to problems with mixing operations on my hardware, since some devices don't take a hit in performance when read and write operations are mixed, and those have sequential unidirectional performance much higher than 200MB/s on SATA and up to 134MB/s on USB 3.
There could be some timing issues caused by having a small number of buffers (much less than 1000), only 2 CPUs, having to wait for encryption, etc., but I don't think these add up to a factor of 4, and, as I have said, I see performance hits of much more than 15:1 for the same device when all I did was switch from copying from another flash device to the flash device under test, to copying from one location on the flash device under test to another location on the same device. Similarly, the HD Tune Pro File Benchmark sequential 4 KB random single test, compared to 4 KB random multi with a multi of 4 or more, takes a hit of up to 100x for some USB 3 flash memory keys, whereas other flash memory keys run about the same speed for random single and multi, and about the same speed as the poorly performing devices do for 4 KB random single.
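The interleaving effect described above can be probed with a simple experiment: move the same amount of data either as separate read and write passes or as alternating block-by-block reads and writes, and compare the throughput. The sketch below is only a rough illustration; the file path, block size and file size are assumptions, and the file must be much larger than RAM for the numbers to reflect the drive rather than the cache:

```python
# Rough probe of mixed vs. unmixed I/O; TARGET, BLOCK and BLOCKS are placeholder assumptions.
import os
import time

TARGET = "testdata/scratch.bin"   # scratch file on the drive under test
BLOCK = 4 * 1024 * 1024           # 4 MiB per I/O
BLOCKS = 2048                     # 8 GiB file -- should exceed RAM to defeat caching
BUF = os.urandom(BLOCK)

def prepare():
    with open(TARGET, "wb") as f:
        for _ in range(BLOCKS):
            f.write(BUF)

def separated():
    """All reads first, then all writes (the unmixed pattern)."""
    with open(TARGET, "rb") as f:
        for _ in range(BLOCKS):
            f.read(BLOCK)
    with open(TARGET, "r+b") as f:
        for _ in range(BLOCKS):
            f.write(BUF)
        os.fsync(f.fileno())

def interleaved():
    """Read a block, immediately rewrite it -- the mixed pattern a copy produces."""
    with open(TARGET, "r+b") as f:
        for i in range(BLOCKS):
            f.seek(i * BLOCK)
            f.read(BLOCK)
            f.seek(i * BLOCK)
            f.write(BUF)
        os.fsync(f.fileno())

for name, fn in (("separated", separated), ("interleaved", interleaved)):
    prepare()
    start = time.time()
    fn()
    mb_moved = 2 * BLOCKS * BLOCK / 1e6
    print(f"{name}: {mb_moved / (time.time() - start):.1f} MB/s")
```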
MarchTheMonth - Wednesday, November 28, 2012 - link
Anand, I just want to know what you think of the difference between the new CEO sending a formal, official letter and the hand-written notes by Ryan. To me (an outsider), official letters are boring, as they are just a carbon copy of the same letter sent to many others. A handwritten note would mean more to me. Now, given that the handwritten note was more of a nudge, I can understand that perhaps a less "nudging" note would be more appreciated, but I digress.
Just curious.
-March
BrightCandle - Wednesday, November 28, 2012 - link
Do you have more confidence this time that OCZ is actually being honest about the contents of their controller chip? Clearly, last time you were concerned about OCZ's behaviour when you reviewed the Octane (both in terms of reviewing their drives and allowing them to advertise), and they outright lied to you about the contents of the chip; they lied to everyone until they got caught. This time, do you think the leopard has changed its spots, or is this just business as usual for a company that cheats so frequently?
gammaray - Wednesday, November 28, 2012 - link
The real question is: why pay for an OCZ Vector when you can get a Samsung 840 Pro for the same price?
jwilliams4200 - Thursday, November 29, 2012 - link
Very good question.
Hood6558 - Wednesday, November 28, 2012 - link
If these are priced to compete with Samsung's 840 Pro, only a die-hard OCZ fanboy would buy one, since the 840 Pro beats it in almost every benchmark and is considered the most reliable brand, while OCZ has a long, rich history of failed drives, controllers, and firmware. Even if they were priced $50 below the Samsung I wouldn't buy one, at least not until they had 6 months under their belt without major issues. It gets old re-inventing your system every time your SSD has issues.
SanX - Thursday, November 29, 2012 - link
Remember that, excluding the typed vs. handwritten letter to Anand, this is still 100% Ryan Petersen in each SSD.
skroh - Thursday, November 29, 2012 - link
I noticed that in the consistency testing, the Intel 330 seemed to outperform just about everything except the Intel S3700. That seems like a story worth exploring! Is the 330 a sleeper user-experience bargain?
jwilliams4200 - Thursday, November 29, 2012 - link
For one thing, it did not look to me like the 330 had yet reached steady state in the graphs provided. Maybe it had, but at the point where the graph cut off, things were still looking interesting.
Death666Angel - Friday, November 30, 2012 - link
In one of the podcasts (E10?) Anand talks about how SF controllers have fewer issues with these worst-case IO latency scenarios. So it's not necessarily an Intel feature but an SF feature, and the graph might look the same with a Vertex 3, etc. Also, it may behave differently if it were filled with different sequential data at the start of the test and if the test were to run longer. I wouldn't draw such a positive conclusion from the test Anand has done there. :)
jonjonjonj - Thursday, November 29, 2012 - link
Did they have to name their two drives Vector and Vertex? They couldn't have picked two names that looked more alike if they tried. I have to imagine this was done on purpose for some reason that I can't think of. Now that OCZ has its own controller, are they retiring the Vertex, or will they just use Barefoot controllers in Vertex SSDs going forward?
deltatux - Sunday, December 2, 2012 - link
While it is great that the OCZ Vector is able to compete with the Samsung SSDs in terms of performance, OCZ's past reliability record has been iffy at best; their drives fail prematurely and RMA rates have been quite high. I've known countless people suffering issues with OCZ drives. I'll wait a bit before recommending OCZ drives to anyone again, to see if the OCZ Vector can match the reliability of Corsair, Intel or Samsung drives. Until then, I'll keep recommending Samsung drives, as they exceed most manufacturers in both performance and reliability.
rob.laur - Sunday, December 2, 2012 - link
If you check the review of the Vector on the Hardware Canucks website, page 11, you will see the Vector AND Vertex crush every other drive listed when filled to over 50% capacity. This is probably the most important benchmark to judge SSD performance by. "While the Vector 256GB may not have topped our charts when empty, it actually blasted ahead of every other drive available when there was actual data housed on it. To us, that’s even more important than initial performance since no one keeps their brand new drive completely empty."
jwilliams4200 - Sunday, December 2, 2012 - link
If you are talking about this: http://www.hardwarecanucks.com/forum/hardware-canu...
you will note that the Samsung 840 Pro is conspicuously absent from the list, so we do not know how the Vector fares against its most difficult competitor.
rob.laur - Sunday, December 2, 2012 - link
You are right, and I wish they had included the 840 Pro, but they didn't. The point is, compared to all those other SSDs/controllers, the BF3 clearly outperforms everything in the real world with actual data on the drive. The 840 Pro uses faster NAND than the Vector, yet both drives are pretty much equal. The Toshiba Toggle NAND version of the Vector can't come soon enough!
jwilliams4200 - Monday, December 3, 2012 - link
The Samsung 840 Pro is significantly faster (about 33%) than the Vector for 4KiB QD1 random reads. This is an important metric, since small random reads are the slowest operation on a drive, and if you are going to take just one figure of merit for an SSD, that is a good one.
rob.laur - Monday, December 3, 2012 - link
Well, according to most sites, the Vector beats it on writes and in mixed read/write environments, especially with heavy use. Not to mention the 840 takes a long time to get its performance back after getting hammered hard, whereas the Vector recovers very quickly.
jwilliams4200 - Monday, December 3, 2012 - link
First of all, the type of heavy writes I believe you are referring to is a very uncommon workload for most home users, even enthusiasts. Reads are much more common. Second, I have seen only one credible study of Vector use with heavy writes, and the Samsung 840 Pro does better than the Vector with steady-state, heavy writes:
http://www.anandtech.com/show/6363/ocz-vector-revi...
I have seen nothing to suggest the Vector recovers more quickly. If anything, there is circumstantial evidence that the Vector has delayed recovery after heavy writes (assuming the Vector is similar to the Vertex 4) due to the Vector's quirky "storage mode" type behavior:
http://www.tomshardware.com/reviews/vertex-4-firmw...
jwilliams4200 - Monday, December 3, 2012 - link
My first link was meant to go to storagereview: http://www.storagereview.com/ocz_vector_ssd_review
rob.laur - Monday, December 3, 2012 - link
The 840 Pro has terrible recovery when pushed hard: http://www.tomshardware.com/reviews/vector-ssd-rev...
jwilliams4200 - Tuesday, December 4, 2012 - link
I certainly would not call it "terrible" -- it actually looks pretty good to me. And if you want even better performance under sustained heavy workloads, just overprovision the SSD.
dj christian - Monday, February 4, 2013 - link
How do you overprovision the SSD?
somebody997 - Thursday, April 11, 2013 - link
You don't. Most SSDs will come in 128, 256 or 512 GB sizes. If you have an SSD and you see a decrease in size, usually to 120, 240 or 480 GB, it means the controller has already over-provisioned the SSD for you.
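For readers wondering about the manual route hinted at a few comments up: beyond the factory spare area described here, a common do-it-yourself approach is simply to leave part of a freshly secure-erased drive unpartitioned. The percentages in this rough calculator are illustrative rules of thumb, not vendor guidance:

```python
# Rough calculator for manual over-provisioning by leaving space unpartitioned.
# The 7% and 28% figures are common rules of thumb, not vendor guidance.
def partition_size_gib(drive_gb, extra_op_percent):
    """Partition size (GiB) that leaves extra_op_percent of the drive as spare area."""
    usable_gib = drive_gb * 1e9 / 2**30          # marketed GB -> GiB
    return usable_gib * (1 - extra_op_percent / 100)

for op in (0, 7, 28):
    print(f"256GB drive, +{op:2d}% spare -> partition about {partition_size_gib(256, op):.0f} GiB")
```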
jdtwoseven - Monday, December 10, 2012 - link
I have WAY too much scar tissue from this vendor to ever buy their products again. I bought five of their SSDs, and was five for five RMAing them back. I have the replacements, but don't trust them enough to use them in anything other than evaluation work, because they are just not dependable. I would avoid them like the plague.
somebody997 - Thursday, April 11, 2013 - link
I have had multiple SSDs from OCZ, and none of them have failed to this day. I boot Mac OS X from my OCZ Vector, and from every OCZ SSD before that. In my experience, it's not the OCZ SSDs that have terrible reliability, it's Windows. Besides, have any of you guys complaining about OCZ SSDs ever tried turning off automatic disk defragmentation in Windows? Windows has an automated disk defragmenting tool for HDDs, but when you plug in an SSD, the tool is supposed to be automatically disabled.
Chances are, those of you with SSD problems have a PC where Windows did not successfully disable automated disk defragmentation, and your SSDs were killed because of that.
Mac OS X does not have an automated disk defragmenting tool as it generally tries not to write in fragments. Without the automated defragmentation tool, my OCZ SSDs have never failed.
ewh - Tuesday, April 30, 2013 - link
My Vector 256 drive completely failed in just under 4 months. OCZ is going to replace it, but if the replacement fails in less than 48 months I will look for alternatives.
jhboston - Wednesday, May 8, 2013 - link
My OCZ VTR1-25SAT3-512G failed after just 33 days. This was 3 days after the vendor's replacement agreement expired, so it had to go to OCZ. OCZ is replacing the drive, but they are following a delayed time frame to get the new drive into my hands.
djy2000 - Wednesday, July 31, 2013 - link
OK, my OCZ Vector catastrophically failed within 3 months :-( Think I'll be going with Intel or Samsung next.
NetGod666 - Monday, December 23, 2013 - link
My first Vector 256GB died after 6 months. Now the RMA replacement they sent has died after 2 months. Going for the third. No wonder they went bankrupt!