51 Comments
Quizzical - Monday, June 3, 2013 - link
Good stuff, as usual. But at what point do SSD performance numbers cease to matter because they're all so fast that the difference doesn't matter?
Back when there were awful JMicron SSDs that struggled along at 2 IOPS in some cases, the difference was extremely important. More recently, your performance consistency numbers offered a finer-grained way to say that some SSDs were flawed.
But are we heading toward a future in which any test you can come up with shows all of the SSDs performing well? Does the difference between 10,000 IOPS and 20,000 really matter for any consumer use? How about the difference between 300 MB/s and 400 MB/s in sequential transfers? If so, do we declare victory and cease caring about SSD reviews?
If so, then you could claim some part in creating that future, at least if you believe that vendors react to flaws that reviews point out, even if only because they want to avoid negative reviews of their own products.
Or maybe it will be like power supply reviews, where mostly only good ones get sent in for review, while bad ones just show up on Newegg and hope that some sucker will buy them, or occasionally get reviewed when a tech site buys one rather than getting a review sample from the manufacturer?
Tukano - Monday, June 3, 2013 - link
I feel the same way. Almost need an order of magnitude improvement to notice anything different.
My question now is, where are the bottlenecks?
What causes my PC to boot in 30 seconds as opposed to 10?
I don't think I ever use as much throughput as these SSDs offer.
My 2500K @ 4.5GHz never seems to get stressed (I didn't notice a huge difference between stock vs. OC).
Is it now limited to the connections between devices? i.e. transferring from SSD to RAM to CPU and vice versa?
talldude2 - Monday, June 3, 2013 - link
Storage is still the bottleneck for performance in most cases. Bandwidth between the CPU and DDR3-1600 is 12.8GB/s. The fastest consumer SSDs are still ~25 times slower than that in a best-case scenario. Also, you have to take into account all the different latencies associated with any given process (i.e. fetch this from the disk, fetch that from the RAM, do an operation on them, etc.). The reduced latency is really what makes the SSD so much faster than an HDD.
As for the tests - I think that the new 2013 test looks good in that it will show you real-world heavy-usage data. At this point it looks like the differentiator really is worst-case performance - i.e. the drive not getting bogged down under a heavy load.
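A quick back-of-the-envelope check on those numbers in Python (this assumes a 64-bit memory channel and a ~500MB/s best-case SATA SSD; illustrative figures, not measurements from the review):

```python
# DDR3-1600 moves 1600 million transfers/s over a 64-bit (8-byte) channel.
ram_bw = 1600e6 * 8     # bytes/s -> 12.8 GB/s per channel
ssd_bw = 500e6          # ~500 MB/s best-case sequential SATA SSD

print(ram_bw / 1e9)      # 12.8 (GB/s, single channel)
print(2 * ram_bw / 1e9)  # 25.6 (GB/s with two sticks, i.e. dual channel)
print(ram_bw / ssd_bw)   # ~25.6 -- the "~25 times slower" above
```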
whyso - Monday, June 3, 2013 - link
It's twice that if you have two RAM sticks.
Chapbass - Monday, June 3, 2013 - link
I came in to post that same thing, talldude2. Remember why RAM is around in the first place: storage is too slow. Even with SSDs, the latency is too high, and the performance isn't fast enough.
Hell, I'm not a programmer, but perhaps more and more things could be coded differently if developers knew for certain that 90-95% of customers have a high-performance SSD. That changes a lot of the ways that things can be accessed, and perhaps frees up RAM for more important things. I don't know this for a fact, but if the possibility is there you never know.
Either way, back to my original point: until RAM becomes redundant, we're not fast enough, IMO.
FunBunny2 - Monday, June 3, 2013 - link
-- Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD.
It's called an organic normal form relational schema. Lots fewer bytes, lots more performance. But the coder types hate it because it requires so much less coding and so much more thinking (to build it, not use it).
crimson117 - Tuesday, June 4, 2013 - link
> It's called an organic normal form relational schema
I'm pretty sure you just made that up... or you read "Dr. Codd Was Right" :P
FunBunny2 - Tuesday, June 4, 2013 - link
When I was an undergraduate, a freshman actually, whenever a professor (English, -ology, and such) would assign us to write a paper, we'd all cry out, "how long does it have to be????" One such professor replied, "organic length, as long as it has to be." Not very satisfying, but absolutely correct.
When I was in grad school, a professor mentioned that he'd known one guy who's Ph.D. dissertation (economics, mathy variety) was one page long. An equation and its derivation. Not sure I believe that one, but it makes the point.
santiagoanders - Tuesday, June 4, 2013 - link
I'm guessing you didn't get a graduate degree in English. "Whose" is possessive while "who's" is a contraction that means "who is."
FunBunny2 - Tuesday, June 4, 2013 - link
Econometrics. But, whose counting?
klmccaughey - Wednesday, June 5, 2013 - link
Hey, as one of these here "Coders" I can tell you my bread and butter is a ratio of 10:1 on thinking to coding ;) I suspect most programmers are similar.
tipoo - Monday, June 3, 2013 - link
But in a sense Tukano is right: the SATA 3 standard can already be saturated by the fastest SSDs, so the connections between components are indeed the bottleneck. Most SSDs are still getting there, but the standard was saturated by the best almost as soon as it became widespread. They need a much bigger jump next time to leave some headroom.
A5 - Monday, June 3, 2013 - link
The first round of SATA Express will give 16 Gbps for standard drives and up to 32 Gbps for mPCIe-style cards (used to be known as NGFF). I think we'll see a cool round of enthusiast drives once NGFF is finalized.
althaz - Tuesday, June 4, 2013 - link
Storage is almost always the bottleneck. Faster storage = faster data moving around your PC's various subsystems. It's always better. You are certainly not likely to actually notice the incremental improvements from one drive to the next, but it's important that these improvements are made, because you sure as hell WILL notice upgrading from something 5-6 generations different.
What causes your PC to boot in 30 seconds is a combination of a lot of things, but seeing as mine boots in much closer to 5 seconds, I suspect you must be running Windows 7 without a really fast SSD (I'm running 8 with an Intel 240GB 520 series drive).
sna1970 - Tuesday, June 4, 2013 - link
Not really. Storage is never a bottleneck. If you have enough memory, things load into memory once and that's it.
You need to eliminate the need to read the same data again, that's all.
Try maxing your memory out to 32GB or 64GB, make a 24GB RAM disk, and install the applications you want there. You will have instantly-running programs. There are no real bottlenecks.
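A rough way to see the gap being described, sketched in Python (Linux only; it assumes /dev/shm is the usual RAM-backed tmpfs and that /tmp sits on the disk you want to compare):

```python
import os
import time

def write_speed(path, mb=256):
    """Write `mb` megabytes to `path` and return throughput in MB/s."""
    buf = os.urandom(1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # push it past the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return mb / elapsed

print("RAM disk:", round(write_speed("/dev/shm/bench.tmp")), "MB/s")
print("disk    :", round(write_speed("/tmp/bench.tmp")), "MB/s")
```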
kevith - Wednesday, June 5, 2013 - link
"Closer to 5 seconds".... From what point do you start counting...?seapeople - Wednesday, June 5, 2013 - link
Probably after he logs in.
compvter - Friday, July 19, 2013 - link
5 seconds would be very fast; I get to the Windows desktop in W8 in 11 seconds, measured from pressing the power button on my laptop until I get to the real desktop (not Metro). I have an older Samsung 830, a first-generation i7 CPU, and 16GB of memory.
ickibar1234 - Friday, December 20, 2013 - link
After getting an SSD in a SATA 3 computer, it's most likely driver initialization, timers, and stuff like that that is the bottleneck during bootup.
Occas - Tuesday, June 4, 2013 - link
Regarding PC boot time, for me it was easily my motherboard's POST time.
My old Asus took a minimum of 20 seconds to POST! When I bought my new system I researched POST times and ended up with an ASRock which POSTs in about 5 seconds. Boom, now I can barely sit down before I'm ready to log in. :)
jhh - Monday, June 3, 2013 - link
I wish there were more latency measurements. The only latency measurements were during the Destroyer benchmark. Latency under a lower load would be a useful metric. We are using NFS on top of ZFS, and latency is the biggest driver of performance.
jmke - Tuesday, June 4, 2013 - link
There is still a lot of headroom left; storage is still the bottleneck of any computer; even with 24 SSDs in RAID 0 you still don't get lightning speed.
Try a RAM drive, which allows for 7700MB/s write speed and 1000+MB/s at 4K random write:
http://www.madshrimps.be/vbulletin/f22/12-software...
Put some data on there, and you can now start stressing your CPU again :)
iwod - Tuesday, June 4, 2013 - link
The biggest bottleneck is software. And even with my SSD running on SATA 2, my Core2Duo running Windows 8 can boot in just 10 seconds, and Windows 7 within 15 seconds, with my Ivy Bridge + SATA 3 SSD booting 1-2 seconds faster.
In terms of consumer usage, 99% of us will probably need much faster sequential read/write speed. We are near the end of random write improvement, whereas random read could do with a lot more increase.
Once we move to SATA Express with 16Gbps, we could be moving the bottleneck back to the CPU. And since we are not going to get much more in IPC and GHz improvements, we are going to need software written with multi-core in mind to see further gains. So quite possibly the next generation of chipset and CPU will be the last of this generation before software moves to a multi-core paradigm. Which, looking at it now, is going to take a long time.
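As a rough sanity check on those interface numbers (assuming 8b/10b encoding for SATA 3 and 128b/130b for a PCIe 3.0 x2-based SATA Express link; a sketch, not official throughput figures):

```python
def effective_mb_s(line_rate_gbps, payload_bits, total_bits):
    """Line rate minus encoding overhead, in MB/s."""
    return line_rate_gbps * 1e9 * payload_bits / total_bits / 8 / 1e6

print(effective_mb_s(6, 8, 10))      # SATA 3: ~600 MB/s ceiling
print(effective_mb_s(16, 128, 130))  # 16Gbps SATA Express: ~1970 MB/s
```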
glugglug - Tuesday, June 4, 2013 - link
Outside of enterprise server workloads, I don't think you will notice a difference between current SSDs.
However, higher random write IOPS is an indicator of lower write amplification, so it could be a useful signal to guess at how long before the drive wears out.
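To put a hypothetical number on that intuition (the capacity, P/E rating, and write rate below are illustrative assumptions, not specs for any drive in this review):

```python
# NAND write budget = capacity x rated P/E cycles; write amplification
# (WA) multiplies every gigabyte the host actually writes.
capacity_gb = 240
pe_cycles = 3000              # common rating for 2013-era MLC NAND
host_writes_gb_per_day = 20   # heavy consumer workload

budget_gb = capacity_gb * pe_cycles
for wa in (1.2, 3.0, 10.0):   # lower WA -> the drive lasts longer
    days = budget_gb / (host_writes_gb_per_day * wa)
    print(f"WA {wa:4.1f}: ~{days / 365:.0f} years of writes")
```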
vinuneuro - Tuesday, December 3, 2013 - link
For most users, it stopped mattering a long time ago. In a machine used for Word/Excel/PowerPoint, Internet, email, and movies, I stopped being able to perceive a difference day to day after the Intel X25-M/320. I tried Samsung 470s and 830s and got rid of both for cheaper Intel 320s.
whyso - Monday, June 3, 2013 - link
Honestly, for the average person and most enthusiasts SSDs are plenty fast and a difference isn't noticeable unless you are benchmarking (unless the drive is flat-out horrible; but the difference between an M400 and an 840 Pro is unnoticeable unless you are looking for it). The most important parts of an SSD then become performance consistency (though really few people actually apply a workload where that is a problem), power use (mainly for mobile), and RELIABILITY.
TrackSmart - Monday, June 3, 2013 - link
I agree 100%. I can't tell the difference between the fast SSDs from the last generation and those of the current generation in day-to-day usage. The fact that Anand had to work so hard to create a testing environment that would show significant differences between modern SSDs is very telling. Given that reality, I choose drives that are likely to be highly reliable (Crucial, Intel, Samsung) over those that have better benchmark scores.
jabber - Tuesday, June 4, 2013 - link
Indeed, considering the mechanical HDDs the average user out there is using are lucky to push more than 50MBps, with typical double-figure access times.
When they experience just 150MBps with single-digit access times, they nearly wet their pants.
MBps isn't the key for average use (as in folks that are not pushing gigabytes of data around all day); it's the access times.
SSD reviews for many are getting like graphics card reviews that go "Well, this new card pushed the framerate from 205FPS to an amazing 235FPS!"
Erm, great... I guess.
old-style - Monday, June 3, 2013 - link
Great review.
I think Anand's penchant for on-drive encryption ignores an important aspect of firmware: it's software like everything else. Correctness trumps speed in encryption, and I would rather trust kernel hackers to encrypt my data than an OEM software team responsible for an SSD's closed-source firmware.
I'm not trying to malign OEM programmers, but encryption is notoriously difficult to get right, and I think it would be foolish to assume that an SSD's onboard encryption is as safe as the mature and widely used dm-crypt and Bitlocker implementations in Linux and Windows.
In my mind the lack of firmware encryption is a plus: the team at SanDisk either had the wisdom to avoid home-brewing an encryption routine from scratch, or they had more time to concentrate on the actual operation of the drive.
thestryker - Monday, June 3, 2013 - link
http://www.anandtech.com/show/6891/hardware-accele...
I'm not sure you are aware of what he is referring to.
HardwareDufus - Monday, June 3, 2013 - link
Amazing. I am using an OCZ Vertex 4 256GB drive. Bought it last November for about $224. Very happy with it.
This SanDisk drive is the same price ($229), same capacity (240GB), same format. However, it is performing a full 5% to almost 100% better, depending on block size, random/sequential, and read/write activity. Amazing what 7 to 12 months has brought to the SSD market!
Vincent - Monday, June 3, 2013 - link
You wrote: "In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time"
In fact this is not what your test does. Your test records IOPS in one-second periods, but does not measure the latency of individual IOs. It would in fact be interesting to see the latency distribution for these drives.
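The distinction, sketched in Python - do_io below is a hypothetical stand-in for issuing one I/O, not anything from the article's actual test harness:

```python
import time

def run_test(do_io, seconds=3):
    iops_per_second, latencies = [], []
    test_end = time.perf_counter() + seconds
    while time.perf_counter() < test_end:
        count, bucket_end = 0, time.perf_counter() + 1.0
        while time.perf_counter() < bucket_end:
            t0 = time.perf_counter()
            do_io()
            latencies.append(time.perf_counter() - t0)  # per-IO latency
            count += 1
        iops_per_second.append(count)  # one number per second: an IOPS-over-time plot
    return iops_per_second, latencies

# A one-second bucket of thousands of IOs can hide a single long stall;
# only the per-IO latencies support a distribution or a 99th percentile.
iops, lats = run_test(lambda: None)  # substitute a real read/write call
print(sorted(lats)[int(0.99 * len(lats))])  # p99 latency in seconds
```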
Tjalve - Tuesday, June 4, 2013 - link
I've done some IO latency tests based on my own trace-based benchmark, if you're interested:
http://www.nordichardware.se/SSD-Recensioner/svens...
http://www.nordichardware.se/SSD-Recensioner/svens...
http://www.nordichardware.se/SSD-Recensioner/svens...
The text is in Swedish, but you should be able to understand the graphs. I could make a plot diagram of individual IO latencies if anyone is interested.
kallogan - Tuesday, June 4, 2013 - link
I still have an Indilinx 64GB.
dishayu - Tuesday, June 4, 2013 - link
Is it just me or have SSD prices stagnated over the past year or so? I bought a 120GB Plextor M5S for $85 in July 2012 and 128GB SSDs still seem to hover in the $100-120 range.
sna1970 - Tuesday, June 4, 2013 - link
Hey Anand, can you please test 6 SSDs in RAID 0 with the new Haswell Z87 motherboards?
We need to make sure we can hit 3GB/s. What is the maximum bandwidth of the new chipset?
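A quick estimate of what to expect (assuming ~550MB/s per SATA 3 SSD and the commonly cited ~2GB/s effective DMI 2.0 link between the Z87 chipset and the CPU; both are ballpark assumptions):

```python
drives = 6
per_drive_mb_s = 550    # best-case SATA 3 SSD
dmi2_mb_s = 2000        # approx. effective chipset uplink bandwidth

raid0_mb_s = drives * per_drive_mb_s
print(raid0_mb_s)                  # 3300 MB/s from the drives...
print(min(raid0_mb_s, dmi2_mb_s))  # ...but capped near 2000 MB/s by DMI
```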
cbk - Tuesday, June 4, 2013 - link
This looks awesome; it's almost neck and neck with the 840 Pro, at a lower price.
jeffrey - Tuesday, June 4, 2013 - link
Hi Anand,
Do you plan on covering the OCZ Vertex 450?
jeffrey - Tuesday, June 4, 2013 - link
Press release:
http://ocz.com/consumer/company/newsroom/press/ocz...
Kristian Vättö - Tuesday, June 4, 2013 - link
All tests have been run, but I guess Haswell and other Computex stuff got in the way.
dsumanik - Tuesday, June 4, 2013 - link
The benches on this drive are good... not great, and I don't think the opening bias is necessary. Who runs any disk at capacity 24/7? Perhaps some people temporarily... but a drive full 24/7???
Only a fool.
Kudos to SanDisk for making a competitive offering, but please, AnandTech, keep the bias out of the reviews... especially when it's not warranted.
Storage bench is great, but it's not the only metric.
Haswell is good, not great. But if you're rocking a 2600K from 2 years ago? Meh.
Where are the legendary power savings? Why don't we have 4GHz+ SKUs? 8 cores? 64GB RAM support? Quick Sync degraded, lol!! Good job on Iris Pro. Why can't I buy it and slap it into an enthusiast board?
Yet you read this review and the Haswell review and come away feeling positive.
Real life:
Intel: a mild upgrade in IPC, higher in-use TDP, 2-year-old CPUs are still competitive.
SanDisk: a mixed bag of results, on unproven firmware.
Death666Angel - Tuesday, June 4, 2013 - link
Why do you keep ignoring the Samsung 840 Pro with spare area increased when it comes to consistency? It seems to me to be the best drive around. And if you value and know about consistency, it seems pretty straightforward to increase the spare area, and you should have the ability to do so as well.
seapeople - Wednesday, June 5, 2013 - link
Agreed, it looks like a Samsung 840 Pro that's not completely full would be the performance king in every aspect - most consistent (check the 25% spare area graphs!), fastest in every test, good reliability history, and the best all-around power consumption numbers, especially in the idle state, which is presumably the most important.
Yet this drive is virtually ignored in the review, other than the ancillary mention in all the performance benchmarks it still wins: "The SanDisk did great here! Only a little behind all the Samsung drives... and as long as the Samsung drives are completely full, then the SanDisk gets better consistency, too! The SanDisk is my FAVORITE!"
The prevailing theme of this review should probably be "The SanDisk gives you performance nearly as good as a Samsung at a lower price." Not, "OMG I HAVE A NEW FAV0RIT3 DRIVE! Look at the contrived benchmark I came up with to punish all the other drives being used in ways that nobody would actually use them in..."
Seriously, anybody doing all that junk with their SSD would know to partition 25% of spare area into it, which then makes the Samsung Pro the clear winner, albeit at a higher cost per usable GB.
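For reference, the spare-area arithmetic behind that suggestion, sketched in Python (it assumes a 256GB-class drive carries 256GiB of raw NAND, which is typical but not a figure from this review):

```python
GIB = 2**30

raw_nand = 256 * GIB   # physical flash on a 256GB-class drive
advertised = 256e9     # decimal gigabytes exposed to the user

def op_percent(partitioned_bytes):
    """Effective over-provisioning given how much of the drive you partition."""
    return 100 * (raw_nand - partitioned_bytes) / partitioned_bytes

print(op_percent(advertised))         # ~7% built-in (the GiB-vs-GB gap)
print(op_percent(0.75 * advertised))  # ~43% if you partition only 75%
```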
FunBunny2 - Tuesday, June 4, 2013 - link
To the extent that "cloud" (re-)creates server-dense/client-thin computing, how well an SSD behaves in today's "client" doesn't matter much. Server workloads, with lots of random operations, will be where storage happens. Anand is correct to test SSDs under more server-like loads. As many have figured out, HDDs in the enterprise are little different from consumer parts. "Cloud" vendors, in order to make money, will segue to "consumer" SSDs. Thus, we do need to know how well they behave doing "server" loads; they will be doing them in any case. Clients will come with some amount of flash (not necessarily even on current file system protocols).
joel4565 - Tuesday, June 4, 2013 - link
Any word on whether this drive will be offered in a 960GB capacity for a reasonable price in the near future?
This looks like the best-performing drive yet reviewed, but I doubt I will see that big of a difference from my 120GB Crucial M4 in day-to-day usage. I really don't think most of us will see a large difference until we go to a faster interface.
So unless this drastically changes in the next few months, I think my next drive will be the Crucial M500 960GB. Yes, it will not be as consistent or quite as fast as the SanDisk Extreme II, but I won't have to worry about splitting my files, or moving Steam games from my 7200rpm drive to the SSD if they have long load times.
clepsydrae - Wednesday, June 5, 2013 - link
Question for those more knowledgeable: I'm building a new DAW (4770K, Win 8) which will also be used for development (Eclipse in Linux). Based on earlier AnandTech reviews I ordered a 128GB 840 Pro for use as the OS drive, Eclipse workspace directory, and the like. Reading this article, I'm not sure if I should return the 840 Pro for the SanDisk... the 840 Pro leads it in almost all the metrics except the one that is the most "real-world" and which seems to mimic what I'll be using it for (i.e. Eclipse).
Opinions?
bmgoodman - Wednesday, June 5, 2013 - link
I gave up on SanDisk after they totally botched TRIM on their previous generation drive. They did such a poor job admitting it and finally fixing it that it left a bad taste in my mouth. They'd have to *give* me a drive for me to try their products again.
samster712 - Friday, June 7, 2013 - link
So would anyone recommend this drive over the 840 Pro 256GB? I'm very indecisive about buying a new drive.
Rumboogy - Thursday, July 11, 2013 - link
Quick question. You mentioned a method to create an unused block of storage that could be used by the controller: creating a new partition (I assume fully formatting it) and then deleting it. This assumes TRIM marks the whole set of LBAs that covered the partition as being available. What is the comparable procedure on a Mac, particularly if you don't get TRIM by default? And if you do turn it on, would it work in this case? Is there a way to guarantee you are allocating a block of LBAs to non-use on the Mac?
pcmax - Monday, August 12, 2013 - link
Would have been really nice to compare it to their previous gen, the Extreme I.
qqqww314 - Monday, December 16, 2013 - link
Great review. You made me read all of it!! And learn everything about SSDs. Really great work!!
Do you know if the SanDisk Extreme II is compatible with the white MacBook 2009? (the last version of the white)
I read about its 512MB 1600MHz DRAM, and my MacBook has 8GB of DDR3 1067MHz RAM. It is not compatible with higher-MHz memory; it crashes and doesn't work properly.
So... I don't know if this is a problem. I play live keyboards with Logic Pro and don't want any crashes that wouldn't matter on other occasions. But I also need the speed of an SSD running through my projects.
I know friends that have serious monitor freezes of 10 to 20 seconds with older MacBooks (2006, 2007) and older SSDs.
Thanks