63 Comments
Carleh - Wednesday, April 14, 2010 - link
I don't want to be a PIA, I've asked this before, but there was no reply, so here it goes: Would it be possible to add some left and right margins to the print page layout? I know it's meant to be printed, but I guess a lot of readers use it to read the whole article at once (me included), and it is slightly inconvenient to read without margins.
Thank you.
taltamir - Wednesday, April 14, 2010 - link
Interesting use of the print option. I haven't considered it before, but I will definitely use it now...
The problem with giving it margins is that it will not print correctly.
But it would be great if there was another option to display the article all at once with margins, like the print command does.
taltamir - Wednesday, April 14, 2010 - link
Actually, you just need to NOT maximize; instead, stretch to a comfortable size that doesn't touch the edge of the monitor.
vol7ron - Wednesday, April 14, 2010 - link
That's wrong, CSS has the capability of having a few different stylesheets. Most notably, there is one for "screen" and one for "print", which would apply here. All AT has to do is create a margin for that page in the screen CSS and set that margin to 0 in the print CSS.
deputc26 - Wednesday, April 14, 2010 - link
Anand's SSD reviews create deeper understanding than anything else I've found on the web; your investigative approach is awesome. Thanks!
teohhanhui - Friday, April 16, 2010 - link
Try to read what the original poster has said. The discussion is about the print stylesheet.
Visual - Thursday, November 18, 2010 - link
No, it is not. It is about the printer-friendly version of the page, which can still have both screen and print stylesheets, as vol7ron suggested.
Try to have a clue about web development before engaging in a conversation about it.
To the OP: you can use Greasemonkey or some equivalent (or even just make a javascript bookmarklet) and "fix" such minor things by yourself on any sites that you want.
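As an illustration, a margin-adding bookmarklet along these lines should do it (a sketch only; the 40px value is arbitrary and it assumes the print page keeps its content directly under the body element):
javascript:(function(){document.body.style.margin='0 40px';})();
Save that as the URL of a bookmark and click it once the print view is open.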
donjuancarlos - Wednesday, April 14, 2010 - link
Why not just change your settings in Page setup on the File menu to get the margins you want when you print an article?
donjuancarlos - Wednesday, April 14, 2010 - link
$#!7 ! Ignore that last comment.
romansky - Wednesday, April 14, 2010 - link
If you are using Firefox, you can open Firebug and change the style of the DIV under the BODY tag to the following:
Original: div style="width: 100%; overflow: hidden;"
To: div style="overflow: hidden; margin: 0px 40px;"
Should work :)
Carleh - Wednesday, April 14, 2010 - link
Thanks for the tip, I'll try it.
bstowe94 - Wednesday, April 14, 2010 - link
Yes, the idea of using the print view to read the whole article at once is quite nice. I will be doing this from now on.
therealnickdanger - Wednesday, April 14, 2010 - link
It's also very nice for bathroom breaks. Some men read the paper. I read AT!
dgz - Wednesday, April 14, 2010 - link
That's what I do. You can do this in Chrome and Opera, too.
Spivonious - Wednesday, April 14, 2010 - link
Remember that the Print view doesn't have banner ads. That's the major source of revenue for this site, so I doubt Anand is going to make the Print view more attractive for on-screen reading.
nsiboro - Wednesday, April 14, 2010 - link
... then insert a few adverts between pages, I personally don't mind. AT can feed me Ads and Tech, anytime!
chrnochime - Wednesday, April 14, 2010 - link
Hate to break it to you, but not everything in life can be bent to your pleasing.
Carleh - Wednesday, April 14, 2010 - link
Before the redesign, the print layout used to have margins, so it's not really "bending to my pleasing"; I'm not asking for something new.
BTW, what's the point of your comment?
mariush - Wednesday, April 14, 2010 - link
File > Print Preview > Page Setup > Margins & Header/Footer
Set the values you're comfortable with and print.
No need to mess with the site for such things.
nilepez - Wednesday, April 14, 2010 - link
The site makes money based on either ad views or ad clicks. Clearly, they'll get less of both if everyone reads the text on a single page that has no ads.
I was going to suggest Page Zipper (FF plugin), but it doesn't work with this site, and even if it did, since they have feedback directly below each page, you'd have to scroll through every single post to get to the next page (rinse/repeat for each page of text).
I think it'd be smarter for Anand to put the feedback after the last page and set up pages to work with Page Zipper.....we get a single page with all the text, but we also see all of the ads.
With that said, I don't really mind the current setup.
JohnQ118 - Thursday, April 15, 2010 - link
Just in case you are using IE8 - open the Print view; then simply from the View menu select Style - No Style. You will get some small margins. Then adjust the window size so it's comfortable for reading.
remosito - Wednesday, April 14, 2010 - link
Hi there, thanks for the great review. I couldn't tell from the article what kind of data you are writing for the random 4K read/write tests. Those random write numbers look stellar.
Which might have to do with the data being written not being very random at all, allowing for big gains from the SandForce voodoo/magic-sauce/compression???
Mr Alpha - Wednesday, April 14, 2010 - link
I believe the build of IOMeter he uses writes randomized data.
shawkie - Wednesday, April 14, 2010 - link
This is a very important question - nobody is interested in how quickly they can write zeroes to their drive. If these benchmarks are really writing completely random data (which by definition cannot be compressed at all) then where does all this performance come from? It seems to me that we have a serious problem benchmarking this drive. If the bandwidth of the NAND were the only limiting factor (rather than the SATA interface or the processing power of the controller) then the speed of this drive should be anything from roughly the same as a similar competitor (for completely random data) to maybe 100x faster (for zeroes). So to get any kind of useful number you have to decide exactly what type of data you are going to use (which makes it all a bit subjective).
In fact, there's another consideration. Note that the spare NAND capacity made available by the compression is not available to the user. That means the controller is probably using it to augment the reserved NAND. This means that a drive that has been "dirtied" with lots of nice compressible data will perform as though it has a massive amount of reserved NAND, whereas a drive that has been "dirtied" with lots of random data will perform much worse.
nafhan - Wednesday, April 14, 2010 - link
My understanding is that completely random and uncompressible are not the same thing. An uncompressible data set would need to be small and carefully constructed to avoid repetition. A random data set by definition is random, and therefore almost certain to contain repetitions over a large enough data set.
jagerman42 - Wednesday, April 14, 2010 - link
No; given a random sequence of 0/1 bits with equal probability of each, the expected number of bits needed to encode the stream (i.e. on average--you could, through an extremely unlikely outcome, have a compressible random sequence: e.g. a stream of 1 million 0's is highly compressible, but also extremely unlikely, at 2^(-1,000,000) probability of occurrence) is equal to the length of the stream itself.
So onwards to the entropy bits required calculation: H = -0.5*log2(0.5) - 0.5*log2(0.5) = -0.5*(-1) - 0.5*(-1) = 1.
In other words, a random, equal-probability stream of bits can't be compressed at a rate better than 1 bit per bit.
Of course, this only holds for an infinite, continuous stream; as you shorten the length of the data, the probability of the data being compressible increases, at least slightly--but even 1KB is 8192 bits, so compressibility is *hard*.
Just for example's sake, I generated a few (10 bytes to 10MB) random data files, and compressed using gzip and bzip2: in every case (I repeated several times) the compressed version ended up larger than the original.
For more info on this (it's called the Shannon theory, I believe, or also "Shannon entropy" according to the following), see: http://en.wikipedia.org/wiki/Entropy_(information_...
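A minimal way to reproduce that gzip experiment, assuming a Node.js environment (the sizes and output here are illustrative only):
const crypto = require('crypto');
const zlib = require('zlib');
for (const size of [10, 1024, 1024 * 1024]) {   // 10 B, 1 KB, 1 MB of random data
  const data = crypto.randomBytes(size);        // effectively incompressible input
  const gz = zlib.gzipSync(data);               // gzip adds header/trailer overhead
  console.log(size, 'bytes ->', gz.length, 'bytes compressed');
}
On random input the "compressed" output comes out slightly larger than the original, which is exactly the behaviour described above.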
shawkie - Wednesday, April 14, 2010 - link
I'm also not convinced by the way Anand has arrived at a compression factor of 2:1 based on the power consumption. The specification for the controller and Anand's own measurements show that about 0.57W of power is being used just by the controller. That only leaves 0.68W for writing data to NAND. Compare that with 2.49W for the Intel drive and you end up with a compression factor of more like 4:1. But actually this calculation is still a long way out, because 2MB sequential writes are 250MB/s on the SandForce and only 100MB/s on the Intel. So we've written 2.5x as much (uncompressed) data using 1/4 as much NAND power consumption. So the compression factor is actually more like 10:1. I think that pretty much proves we're dealing with very highly compressible data.
HammerDB - Wednesday, April 14, 2010 - link
That should definitely be checked, as this is the first drive where different kinds of data will perform differently. Due to the extremely high aligned random write performance, I suspect that the data written is either compressible or repeated, so the drive manages to either compress or deduplicate to a large degree.
One other point regarding the IOMeter tests: the random reads perform almost identically to the unaligned random writes. Would it be possible to test both unaligned and aligned random reads, in order to find out if the drive is also capable of faster random reads under specific circumstances?
Anand Lal Shimpi - Wednesday, April 14, 2010 - link
Correct. The June 08 RC build of Iometer uses randomized data. Older versions used 0s.
Take care,
Anand
shawkie - Wednesday, April 14, 2010 - link
Anand, do you therefore have any explanation for why the SandForce controller is apparently about 10x more efficient than the Intel one even on random (incompressible) data? Or can you see a mistake in my analysis?
Anand Lal Shimpi - Wednesday, April 14, 2010 - link
That I'm not sure of; the 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel helped develop the random algorithm, apparently) and the performance appears to be reflected in our real world tests (both PCMark Vantage and our Storage Bench).
That being said, SandForce is apparently working on their own build of Iometer that lets you select from all different types of source data to really stress the engine.
Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.
Take care,
Anand
keemik - Wednesday, April 14, 2010 - link
Call me anal, but I am still not happy with the response ;)
Maybe the first 4k block is filled with random data, but then that block is used over and over again.
That random read/write performance is too good to be true.
Per Hansson - Wednesday, April 14, 2010 - link
Just curious about the missing capacitor: will there not be a big risk of data loss in case of a power outage? Do you know what design changes were done to get rid of the capacitor? Were any additional components other than the capacitor removed?
Because it can be bought in low quantities for quite an OK retail price of £16.50 here:
http://www.tecategroup.com/ultracapacitors/product...
bluorb - Wednesday, April 14, 2010 - link
A question: if the controller is using lossless compression in order to write less data, is it not possible to say that the drive's working volume is determined by the type of information written to it?
Example: if user X's data can be routinely compressed at a 2-to-1 ratio, then it can be said that for this user the drive's working volume is 186GB and the cost per GB is $2.20.
Am I on to something or completely off track?
semo - Wednesday, April 14, 2010 - link
This compression isn't detectable by the OS. As the name suggests (DuraWrite), it is there to reduce the wear on the drive, which can also give better performance but not extra capacity.
ptmixer - Wednesday, April 14, 2010 - link
I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity is variable depending on the type of data stored. If the drive has 128 GB of flash, 93.1 GB usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more... thereby helping the price/GB of the drive.
For example, if the drive is partly used and your OS says it has 80 GB available, then you store 10 GB of compressible data on it, won't it then report that it perhaps still has 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!
PS - thanks for all the great SSD articles! Could you also continue to speculate how well a drive will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?
JarredWalton - Wednesday, April 14, 2010 - link
I commented on this in the "This Just In" article, but to recap:
In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity, and if the data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll get less spare area remaining.
So even at 0% compression you'd still have at least 35GiB of spare and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression you'd have 66GiB of spare area (51% of total capacity), etc.
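A small sketch of that arithmetic, using the figures quoted above (128GiB of raw flash, 93GiB exposed to the user; the function name is just for illustration):
// compression = fraction of the user data the controller saves, e.g. 0.25
const totalNAND = 128;      // GiB of raw flash on the drive
const userCapacity = 93;    // GiB guaranteed to the user
function spareArea(compression) {
  const physicallyUsed = userCapacity * (1 - compression); // drive completely full of user data
  return totalNAND - physicallyUsed;                       // GiB left over as dynamic spare area
}
console.log(spareArea(0));    // 35 GiB - incompressible data, worst case
console.log(spareArea(0.25)); // ~58 GiB
console.log(spareArea(0.33)); // ~66 GiB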
KaarlisK - Wednesday, April 14, 2010 - link
Just resize the browser window.
Margins won't help if you have a 1920x1080 screen anyway.
RaistlinZ - Wednesday, April 14, 2010 - link
I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.
If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.
Chloiber - Wednesday, April 14, 2010 - link
I did test it. If you create the test file, it is compressible to 0 percent of its original size.
But if you write sequential or random data to the file you can't compress it at all. So I think that IOMeter uses random data for the tests. Of course this is a critical point when testing such drives and I am sure that Anand did test it too before doing the tests. I hope so at least ;)
Chloiber - Wednesday, April 14, 2010 - link
Just to show you what I mean: I created a 512KB sequential write IOMeter test pattern which writes to a space of 1GB. When you use IOMeter for the first time, it creates that 1GB file to reserve the space. I then stopped the test as soon as the 1GB file was written and before the actual test even began. I then used 7-Zip to compress the file, and that's the result:
http://www.abload.de/image.php?img=compr7uv5.png
It's in German and it says to the right that the uncompressed size is 336MB (I paused at that point) and the compressed file size is 404KB. So the compressed size is nearly 0% of the original.
I then aborted and did the above test again. This time, I let the hard disk write data for about 11 seconds (the HD does about 100MB/s) so the complete 1GB file had been used.
I used 7-Zip again and this is the result:
http://www.abload.de/image.php?img=compr2165l.png
Uncompressed size: 138MB (paused earlier here), compressed size: 138MB. So it cannot be compressed at all.
That leads to the conclusion that IOMeter does indeed use a completely random pattern for its tests.
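A quick way to double-check that sort of thing (a sketch, assuming a Node.js environment and IOMeter's usual iobw.tst test file; adjust the path as needed) is to compress the file chunk by chunk and see which regions stay incompressible:
const fs = require('fs');
const zlib = require('zlib');
const fd = fs.openSync('iobw.tst', 'r');      // the IOMeter test file
const chunk = Buffer.alloc(1024 * 1024);      // scan in 1MB chunks
let offset = 0, bytesRead;
while ((bytesRead = fs.readSync(fd, chunk, 0, chunk.length, offset)) > 0) {
  const ratio = zlib.gzipSync(chunk.subarray(0, bytesRead)).length / bytesRead;
  console.log((offset / (1024 * 1024)) + ' MB: ' + (ratio * 100).toFixed(1) + '% of original size');
  offset += bytesRead;
}
fs.closeSync(fd);
Regions that were only pre-allocated should compress to almost nothing, while regions the test has actually written to should stay at roughly 100%.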
darckhart - Wednesday, April 14, 2010 - link
I didn't see physical dimensions listed. Is this 7.5mm or 9mm height? Also SATA 3Gb/s, I assume?
GDM - Wednesday, April 14, 2010 - link
Do you feel a noticeable difference between this new Corsair drive and, say, the Intel X25-M 160GB? Benchmarks are nice, but I wonder how much of the speed difference you can really feel.
shin0bi272 - Wednesday, April 14, 2010 - link
I've noticed this quite a bit on reviews here. When a benchmark is low and the text of the result doesn't fit in the bar, the text gets squished into the text of the name of the item being tested. Could you please move those results to the outside of the bar, off to the right? Since the bar is so small you will have plenty of room out there to put the result and it will be legible. Other than that, thanks for a great review and thanks for still including a spindle disk or two as well (though I do question the decision to use a 5400rpm drive unless you were trying to throw that in for laptop users or something).
Kary - Wednesday, April 14, 2010 - link
Any chance of comparing the drives using compressed NTFS? I tend to do this to my SSD drive anyway (and probably wouldn't see a difference on a drive that was trying to internally compress data that was already compressed).
Oh, my drive is only 30GB and I needed the space... quad-core CPU, figured I wouldn't notice a difference speed-wise.
Lazlo Panaflex - Wednesday, April 14, 2010 - link
The Force is strong with this one.
Exodite - Wednesday, April 14, 2010 - link
Ouch. I'm old enough to appreciate bad puns, well done Sir!
Lazlo Panaflex - Wednesday, April 14, 2010 - link
hehe..thanks. I couldn't resist ;-)
Exodite - Wednesday, April 14, 2010 - link
Very nice review, it soothes some of my fears regarding these drives.
I'm curious as to whether you know at this point if there are going to be reviews of the OCZ Vertex 2 and Agility 2 as well?
Seeing as these drives are based on the SF1500 and SF1200 as well it'd be interesting to see the performance difference from drives from the same vendor, using the different chips. There's the Vertex LE of course but it seems it's more or less the bastard child in this comparison.
Thanks in advance.
vol7ron - Wednesday, April 14, 2010 - link
I was curious if it's possible to overclock these SSDs, either through the SF1200 or the RAM, in some way or another?
I know pencil modding has been dead in recent years. This is due in part to smaller components, but also to the fact that manufacturers have both (1) implemented in-place safeguards to reduce problems from overclocking and (2) started encouraging overclocking by providing more options through the BIOS.
I'm just curious if anyone's figured out how to do it on these SSDs. I know the average shouldn't - you typically don't want to fiddle with the sole thing that stores your data - but I suspect some tweaking could take place by enthusiasts to really up performance, if wanted. Anyone know a place to look into this?
vol7ron
hybrid2d4x4 - Wednesday, April 14, 2010 - link
Thanks for the power consumption charts, Anand! Any chance you can throw in a typical 2.5" 5400RPM HDD that usually comes stock in most laptops as a reference point for those of us who are thinking of upgrading? Also, could keeping Device Initiated Power Management disabled account for the significant discrepancies between your numbers and the recent article on Tom's HW? (i.e. Tom's got an idle of 0.1W for the Intel drive - a lot better than the competition)
http://www.tomshardware.com/reviews/6gb-s-ssd-hdd,...
Impulses - Wednesday, April 14, 2010 - link
The Nova seems to do surprisingly well under Anand's heavy workload test, compared to other Indilinx-based drives... although its performance is just average (and similar to other Indilinx drives) in most other tests. Isn't the Nova essentially the same thing as the OCZ Solid 2 (and the G.Skill Falcon II)? That drive has been priced VERY competitively from what I've seen; I'm surprised there isn't more buzz around it. Looking forward to Anand's review of the Nova.
I might be buying soon as a gift for my sister, she really needs more than 80GB for her laptop (the 80GB X25-M is still the best bang for the buck out there imo), so a $300 120GB is right up her alley.
_Q_ - Wednesday, April 14, 2010 - link
Sorry for being a little off-topic (still SSD though)...
Is there any news on when the Indilinx JetStream is going to be released in some drive from any vendor?
'Cause as far as I know, this was initially going to be out at the end of 2009, then there was a delay... but no further info that I could find more recently.
Thanks for any help.
Hauraki - Wednesday, April 14, 2010 - link
I'm thinking of buying an SSD as a system drive, and the V+ 2nd gen appeared to be a decent choice for home use. It has only got reviews on X-bit and Hexus so far; I would like to see an Anand review.
Thanks.
IvanChess - Wednesday, April 14, 2010 - link
"The Mean Time To Failure numbers are absurd. We’re talking about the difference between 228 years and over 1100 years. I’d say any number that outlasts the potential mean time to failure of our current society is pretty worthless."
Well said!
Rindis - Thursday, April 15, 2010 - link
But think of the future!
In a couple thousand years, when very little is remembered of this time, entire dissertation papers on early 21st century culture will be written based on the contents of your hard drive!
(And if that isn't a scary thought, I don't know what is....)
stalker27 - Thursday, April 15, 2010 - link
Was wondering if the charts could be fixed to correctly display the HDD values for random ops?
Something like: if the value doesn't fit in the bar, it should be displayed after the bar, to its right... and not to its left as it does now, colliding with the name of the HDD.
BTW, that's actually a factor I'm looking to see in SSD reviews... how good they are at the really useful operations; not that sequential doesn't matter, it just doesn't matter as much.
zzing123 - Thursday, April 15, 2010 - link
Given that the SandForce controllers effectively compress data on the drive, I'm just curious about precisely how much data you can write to the drive...
For example, if the data is, say, text, and you create a 1MB text file, how many of those 1MB text files can you physically store on the drive versus another non-SandForce drive? Just a curiosity.
xilb - Thursday, April 15, 2010 - link
The OCZ LE isn't using the SF-1500; only the Vertex 2 Pro is using the SF-1500, and the Vertex 2 is using the SF-1200.
The LE has a chip that's sort of in the middle of the two.........look it up on the OCZ forums, you will see.
paulpod - Thursday, April 15, 2010 - link
Needless to say, the author is completely wrong about what an MTBF number means. That number has nothing to do with the infant mortality rate or with life expectancy. It is the statistical failure rate over a large number of units operating in their prime life period.
marc_soad - Thursday, April 15, 2010 - link
If I'm not wrong, what the author means is 2,000,000 hours = 83,333 days = 228 years.
So the drive is meant to fail after 228 years if it's powered on all the time, or more if the computer is powered on only for a portion of the day.
Thanks for the review by the way! :)
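As a rough illustration of the fleet interpretation paulpod describes (the 1,000-drive population is just a made-up example):
const mtbfHours = 2000000;                // the quoted MTBF
const drives = 1000;                      // hypothetical fleet size
const hoursPerYear = 24 * 365;
const failuresPerYear = drives * hoursPerYear / mtbfHours;
console.log(failuresPerYear.toFixed(1));  // ~4.4 expected failures per year across the fleet
In other words, the figure describes how often drives in a large population fail during their service life, not how long any single drive will last.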
p05esto - Friday, April 16, 2010 - link
Hey, I COMPLETELY agree with the printing thing....I hate articles that have "pages"; I'd rather view a 10-page-long single document....yes, I'm that frickin lazy. You shouldn't remove the pages, but why not offer a FULL view option? I use the print option sometimes as well, just to read an article all at once.
poee - Friday, July 16, 2010 - link
"Performance is down, as you'd expect, but not to unbearable levels and it's also pretty consistent."
Why is performance down? Why should we "expect" this? Do I have to read every SSD article you've written previously to understand new articles? Or is there one big article that has all the info that you obliquely allude to in subsequent articles?
Where can I find the current recommendations for SSDs (SandForce vs Indilinx vs Intel vs Micron vs Samsung, latest firmware updates, etc.)? Is there a central repository of SSD information that is assimilated and arranged categorically (for easy research), or must all this info be followed like a blog?