semo - Tuesday, April 13, 2010 - link
I think for this drive you need to do a test with completely random data, which will give you an idea of the drive's real speed without the SF sauce. If this thing is as quick with random data as the Indilinx drives, then it'll be impressive.

vol7ron - Tuesday, April 13, 2010 - link
How does this differ from the other random tests that Anand performs? Anand's tests come pretty close to "general" real-world situations.

rmlarsen - Tuesday, April 13, 2010 - link
You don't understand. The point of using random DATA is that it doesn't compress. It would be interesting to find out if the SF controller can maintain its high performance once deprived of the ability to compress the data. The "random" in Anand's usual tests is in the pattern of access, not the contents of the data.

vol7ron - Tuesday, April 13, 2010 - link
I guess I'm still missing what you're saying about the random data. I thought Anand's tests are about both random access and random data (hence read and write) with different programs.

If you're talking about data generation, then wouldn't that put the work on the CPU, not the storage - compressed or not?
CommandoCATS - Tuesday, April 13, 2010 - link
I think the OP meant randomly generated (e.g. noise) data. That data is in general impossible to compress in a lossless manner because it sits at the entropy limit (http://en.wikipedia.org/wiki/Entropy_%28informatio...). I think the OP was just saying that this allows a test which bypasses any lossless compression tricks and shows what the underlying hardware is capable of.
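To make that concrete, here's a rough sketch of how you could build an incompressible payload versus a trivially compressible one and sanity-check the difference with zlib. This isn't anyone's official methodology; the file name and sample size are arbitrary.

    import os, zlib

    SIZE = 16 * 1024 * 1024  # 16MiB sample; size is arbitrary

    random_data = os.urandom(SIZE)   # at the entropy limit, won't compress
    repeated_data = b"\x00" * SIZE   # trivially compressible, for contrast

    for name, data in (("random", random_data), ("zeros", repeated_data)):
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{name}: compresses to {ratio:.1%} of original")

    with open("incompressible.bin", "wb") as f:  # hypothetical file name
        f.write(random_data)                     # feed this to the benchmark tool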
semo - Wednesday, April 14, 2010 - link

You've described it in a much better way than I could, and yes, this is exactly what I meant.

I'll go and bitch and moan about this in the review, since the editors are probably not even reading these comments anymore.
Rob94hawk - Tuesday, April 13, 2010 - link
I'm holding out, waiting patiently for improvements with SSDs, even if I have to wait another year. I just don't think the maturity or reliability is there yet. Maybe by the time the X68 chipset arrives.

Slaimus - Tuesday, April 13, 2010 - link
Assuming those 8 flash chips add up to 128GB, along with the compression, there must be a lot of reserved blocks. That is probably the key to the SF controller's high performance.

semo - Tuesday, April 13, 2010 - link
Yeah, I read somewhere that they have an unusually large reserve.

BTW, anyone know of anyone releasing a RAM drive? Those ACard models seem too expensive to me.
JarredWalton - Tuesday, April 13, 2010 - link
Looking at the images, those memory chips are 8GB each (Micron 29F64G08CFABA) and there are 16 of them (eight on each side of the PCB). That makes for 128GB total with 28GB of spare area. And of course, if Corsair is like most companies, that's 128GiB (Gibibytes) with an accessible capacity of 100GB (Gigabytes), which means formatted capacity is actually 93GiB (Gibibytes). So they're using 37% of the total capacity as spare area.

SandmanWN - Tuesday, April 13, 2010 - link
Not following you.

28 out of 100 is 28%.
28 out of 93 is 30%.
The only way you get 37% is if you count the 7 lost from formatting as spare area, but...
JarredWalton - Tuesday, April 13, 2010 - link
To clarify, 128 is 37% more capacity than 93.

SandmanWN - Tuesday, April 13, 2010 - link
You lost 7 for formatting, which isn't spare, so it's still 28 spare / 93 usable, thus 30%.

Voo - Tuesday, April 13, 2010 - link
Ahm, guys, sorry to destroy your enthusiasm for correcting an AT writer, but flash capacities are usually powers of two, which means we're talking about 128 GiB, as Jarred correctly said. And since they sell them as 100GB (notice the missing i), that means the drives have ~93.1 GiB (notice the i - yes, I know SI vs. powers of two is annoying). So we're either computing:

128/93.13 - 1 = 37.4%
or
137.4/100 - 1 = 37.4%
Both the same and both correct =)
JarredWalton - Tuesday, April 13, 2010 - link
He's actually correct, though. You're losing 37% of the capacity, but you're using 35GiB of capacity as spare area, hence using 27.3%. The joys of percentages.... :-)

Voo - Wednesday, April 14, 2010 - link
Ah, OK, so it's just a dispute about which value we want to use. And yeah, it's probably more useful to use the percentage used as spare than the other way round. Though you've still got to be careful with GB/GiB.
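For anyone following along, here's a quick sketch of the arithmetic both ways. The only inputs are the 128 GiB of raw NAND and the 100 GB advertised capacity quoted above; nothing else is assumed.

    GIB = 2**30

    raw = 128 * GIB             # 16 x 64Gbit Micron chips
    advertised = 100 * 10**9    # sold as 100GB (decimal)
    spare = raw - advertised

    print(f"usable capacity: {advertised / GIB:.2f} GiB")        # ~93.13 GiB
    print(f"raw vs. usable:  {raw / advertised - 1:.1%} extra")  # ~37.4%
    print(f"spare vs. raw:   {spare / raw:.1%}")                 # ~27.2%, or 27.3% if you round the spare to 35GiB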
SandmanWN - Tuesday, April 13, 2010 - link

The point I'm trying to make is not to pull a gotcha on the writer; it's to determine the optimal spare-area setup for these drives. Just as the amount of cache can lead to better performance for HDDs, the same needs to be known about spare area and SSDs.

Will changing the amount of spare area boost performance enough to justify the loss of usable space? Different drive manufacturers are bound to play with these ratios. Which will come out ahead? At what point do diminishing returns push one to go for usable space versus a larger spare area? Can the spare area be adjusted in the drive firmware?
These numbers will become important if spare areas start showing up for more drives. Just wanted to make sure the numbers were on the up and up for future reference.

JarredWalton - Wednesday, April 14, 2010 - link
JarredWalton - Wednesday, April 14, 2010 - link
Actually, here's another thought to consider:

In terms of pure area used, yes, they set aside 27.3% of the available capacity. However, with their DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. I wonder if you're guaranteed 93GiB of storage capacity, so that if the data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll get less spare area remaining? Of course, even at 0% compression you'd still have at least 35GiB of spare, but with a reasonable 25% compression average you might have as much as ~58GiB of spare area. Hmmm.....
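To put rough numbers on that guess - this is purely back-of-the-envelope arithmetic on the figures above, since SandForce hasn't published how DuraWrite accounting actually works:

    RAW_NAND = 128.0   # GiB of physical flash
    USER_CAP = 93.13   # GiB guaranteed to the OS

    def effective_spare(savings):
        # Spare left if the full user capacity is written and
        # DuraWrite stores it 'savings' fraction smaller.
        return RAW_NAND - USER_CAP * (1 - savings)

    for savings in (0.0, 0.25, 0.50):
        print(f"{savings:.0%} compression savings -> ~{effective_spare(savings):.0f} GiB spare")
    # 0% -> ~35 GiB, 25% -> ~58 GiB, 50% -> ~81 GiB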
Dark Legion - Tuesday, April 13, 2010 - link
So they're using ~27% of the total capacity as spare area.

Dark Legion - Tuesday, April 13, 2010 - link
Sorry, forgot the 7 GB that's not spare... 28GB spare / 128GB total = 22%.

SandmanWN - Tuesday, April 13, 2010 - link
Yeah, that's what I'm getting at. Just trying to point it out now before the full article comes out. I don't want the numbers to change in a later SSD comparison and end up with incongruent charts for spare-area-to-drive-space performance.

Anand Lal Shimpi - Tuesday, April 13, 2010 - link
Yep, these are still very enterprise-oriented drives. SandForce plans on delivering a version with less spare area, but that's behind these SF-1500/1200 derivatives.

I talked a bit about spare area on the SF drives here: http://anandtech.com/show/2899/5
Luke212 - Tuesday, April 13, 2010 - link
Do you think having a large reserve is cheating for marketing purposes? They sustain the performance long enough to get it past reviewers, but long-term users will encounter a slowdown as the spare area is eventually used up.

My X25 G1 is grinding to a halt these days. I freed up 26GB, but of course it makes no difference now until I image and secure erase.
jimhsu - Tuesday, April 13, 2010 - link
That's only if TRIM doesn't work or is not supported (as with the G1 drives). For any modern drive with TRIM, there should be no "progressive slowdown" or anything of that sort. Performance is solely dependent on the percentage of free blocks.
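If you want to verify that TRIM is actually active on Windows 7, the stock check is "fsutil behavior query DisableDeleteNotify" - a result of 0 means delete notifications (TRIM) are being sent to the drive. A trivial wrapper, just for convenience:

    import subprocess

    # "DisableDeleteNotify = 0" means Windows is issuing TRIM commands.
    result = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip() or result.stderr.strip())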
jimhsu - Tuesday, April 13, 2010 - link

In contrast, I think the 80GB Intel drives are lacking too much in spare area - 7.4% if I'm correct ... Just from empirical testing, I noticed that anything below 20GB of free space (in Windows) creates a noticeable slowdown, especially in sequential write scenarios (i.e. the drive bursts at the maximum transfer rate followed by intermittent pauses).

Exodite - Tuesday, April 13, 2010 - link
While the SandForce drives are impressive, as are most new SSDs really, I find myself unwilling to part with serious money for these drives until they've migrated to SATA 6.0 Gbps.

The SandForce drives especially, since both their sequential read and write speeds are pretty much at the limit of what SATA 3.0 Gbps can do once you take overhead into account. Frankly, I don't understand why the developers didn't consider this in the first place.
Oh well, waiting won't cost me anything.
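For reference, the ceiling in question: SATA 3Gbps uses 8b/10b encoding, so the raw payload limit works out to about 300 MB/s, and protocol/framing overhead brings usable throughput down to roughly 270-285 MB/s. A rough sketch - the overhead percentages are just ballpark assumptions:

    line_rate = 3.0e9                 # SATA "3G" signalling rate, bits/s
    payload = line_rate * 8 / 10 / 8  # 8b/10b encoding -> payload bytes/s
    print(f"encoding-limited ceiling: {payload / 1e6:.0f} MB/s")  # 300 MB/s

    for overhead in (0.05, 0.10):     # assumed protocol/framing overhead
        print(f"~{overhead:.0%} overhead -> ~{payload * (1 - overhead) / 1e6:.0f} MB/s usable")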
DigitalFreak - Tuesday, April 13, 2010 - link
I'm assuming that by the time the SATA 3 spec was finalized, they were too far into development of their controller to switch.

JarredWalton - Tuesday, April 13, 2010 - link
First we need good SATA 6G implementations... then we can worry about 6G drives. :-) Seriously, though, the current 6G chipsets often seem to reduce performance relative to 3G. I wouldn't be surprised if it takes the integration of 6G into the Northbridge before we get proper performance across the board.

vol7ron - Tuesday, April 13, 2010 - link
That only applies to drives that don't exceed the 3G threshold.

A drive that exceeds the 3G theoretical limit (on 6G) still makes 6G worth it, despite the fact that the 6G controller is not yet mature, or as efficient as it could be. For those SATA2 drives that can't exceed the 3G limit, yes: stick to 3G.
JarredWalton - Tuesday, April 13, 2010 - link
No, there are plenty of cases where 6G SATA implementations aren't doing as well as they should: http://www.anandtech.com/show/2973/6gbps-sata-perf...
Sequential read has Marvell's 6G in the lead, provided it's paired with PCIe 2.0. AMD's 890GX is slightly behind, but 3G off native X58 is better than 6G off PCIe 1.x.
On random read, the X58 solution is within spitting distance of the best 6G scores, and oddly enough AMD's 890GX does quite poorly (BIOS updates may have fixed this by now).
Random write has AMD's 890GX in the lead, but the Intel X58 beats all the Marvell results.
So I stand by my statement: we need better implementations of 6G before it makes a huge difference. The sequential read/write performance is nice for benchmark charts, but random access is far more common in practice, and it's what really makes SSDs shine compared to HDDs. X58 has a very robust 3G implementation, it seems, and if that's what you're running you'd only lose a very small amount of performance in the worst case, while in other cases you'd end up quite a bit faster.
SandmanWN - Tuesday, April 13, 2010 - link
"3G off native X58 is better than 6G off PCIe 1.x."Well duh, but who would actually put a 6G controller in a much older PCIe 1.x slot???
You seem to be basing your decision on your own desire for random-access performance in a desktop environment, but not every application has the same needs, and these drives are not intended for the desktop level. Any setup that depends on raw throughput would certainly benefit greatly from 6G. File servers and streaming media servers are obvious ones to take advantage of 6G.
Not sure what you're getting at with the implementation line. It's a standard; it's only implemented one way. Maybe you mean drive manufacturers should increase random performance, but I don't know what that has to do with the 6G implementation.
Voo - Tuesday, April 13, 2010 - link
You really think that any drive today will be bottlenecked by PCIe 1.x speeds? That may bottleneck the theoretical performance (e.g. the SATA3 protocol could transport more), but there's no drive that comes close to those speeds... at least not a single, non-RAIDed drive.

JarredWalton - Tuesday, April 13, 2010 - link
Just because you support a standard doesn't mean you support it *well*. The Intel X58 SATA 3G implementation is done very well; the P55 implementation is actually inferior to X58. Likewise, AMD's implementation of SATA 6G in the 890GX is not the same as the Marvell implementation. Sure, they conform to the same standard, but they don't perform the same, because the devil is in the details. Right now, 6G is a brand new technology, and as with any bleeding-edge technology there are some bugs and optimizations to work out.

What I'm suggesting is that down the road we'll see 6G implementations in the Northbridge (or even ones off a PCIe 2.0 connection) that will outperform the current 6G controllers. It's happened with every technology in the past, and it will happen again here as well. But of course, we need 6G SSDs before 6G controllers will get better -- one pushes the other, and vice versa.
SandmanWN - Tuesday, April 13, 2010 - link
I see what you're getting at. There is always a controller that can squeeze out a little more performance to edge closer to the theoretical limit.

Aren't there supposed to be drop-in SSDs with SAS connectors coming out this year as SAS HDD replacements? SAS has had 6G speeds for some time. I don't have any enterprise-class servers with SATA; it's all SCSI and SAS. Any word on SAS SSD drives in the pipeline?
vol7ron - Tuesday, April 13, 2010 - link
I think you read, but didn't comprehend, what I said.

If you're going to buy a motherboard and your SSD is SATA3 and capable of breaking 3Gbps speeds, get the SATA3 mobo. From the mere facts of what's stated, you will NEVER have a drive that's been clocked at speeds greater than 3Gbps perform better on a SATA2 interface. There might be some parts of the test that perform better, but it had to exceed 3Gbps at some point.
While random write speeds are important, the drives that are capable of exceeding the 3Gbps barrier do not have bad write performance. For those motherboards that have some issues with the SATA3 controller, the platform will mature and BIOS updates will fix them -- and as Anand has pointed out, there are other alternatives using PCIe, etc.
ergo98 - Tuesday, April 13, 2010 - link
In real-world use you won't see any difference between 3G and 6G with the current crop of high-end drives. You just won't. The difference barely even appears in entirely artificial benchmarks.

I/O is where it's at, and these drives deliver that in spades.
Luke212 - Tuesday, April 13, 2010 - link
None of these drives perform at 250-300MB/s in real-world small-block or random operations. Only large-block sequential ops go that fast. So I don't think you will see any benefit outside the benchmarks.
This actually piques my curiosity. I thought the SF1200 was going to be an extreme under-performer compared to the SF1500.

If that's not the case and the cost is lower, then I am also baffled by the price. Perhaps they are going to be using lower-performing and/or cheaper RAM?
vol7ron
MonkeyPaw - Tuesday, April 13, 2010 - link
Not related to the article, but is anyone else having issues getting RSS working for AnandTech or DailyTech? Since the site change, none of my readers pick up the updates. I can't even pull up the feed info.

vol7ron - Tuesday, April 13, 2010 - link
I've got an RSS feed for my Win7 desktop gadget - it's working fine. I had to subscribe through IE8, though, to get the feed to the gadget.

SolMiester - Tuesday, April 13, 2010 - link
For an ESX 4.0 server hosting an x64 SQL instance! They're not in the country yet (NZ)... hanging out, so the review will help keep me happy for a couple of hours.....