
  • IvanAndreevich - Tuesday, March 30, 2010 - link

    I am sure they would blow these away, and at $80 AR they are cheaper per GB. They also support GC and TRIM with 1.5 firmware.
  • machzero - Tuesday, March 30, 2010 - link

    Ya missed an important point on the first page.

    "1) There is currently no way to pass the TRIM instruction to a drive that is a member of a RAID array."

    The drives may support TRIM but no RAID controller on the market will pass the instruction to the drive.
  • Minion4Hire - Tuesday, March 30, 2010 - link

    And OCZ's garbage collection runs entirely on the drive itself, independent of the host and RAID controller, and doesn't need instructions passed to it like TRIM. It will work regardless of your configuration.

    And on that note, I have two 60GB OCZ Agility SSDs in RAID 0 and am very happy with them. My sequential read performance sits at over 400MB/s and sequential writes are over 200MB/s. Although my random read and write performance isn't quite as nice as the Intel drives RAIDed here, I didn't pay any more for my drives ($140 ea. Canadian on sale after rebate) and have an extra 40GB of storage over this Intel RAID. The extra capacity is what sold me on the OCZ drives over Intel, but I'd be willing to bet that the Intel X25-V drives would offer a better overall RAID experience than a set of Agilities, even with their lack of garbage collection as Anand noted.
  • funkyd99 - Wednesday, March 31, 2010 - link

    Is the lack of TRIM a limitation of RAID controllers or a limitation in the drivers? I.e. could an update to the Intel storage drivers remedy the problem on Intel chipsets?
  • plamengv - Friday, April 2, 2010 - link

    There is no lack of TRIM for RAIDs anymore thanks to Intel.

    http://guru3d.com/news/intel-brings-trim-to-ssds-i...
  • nwrigley - Friday, April 2, 2010 - link

    Nope, it only supports single drives running next to a RAID. RAID still lacks TRIM support.

    http://techreport.com/discussions.x/18653
  • Makaveli - Tuesday, March 30, 2010 - link

    Performance is great. Since I already have one 160GB G2 drive, I would love to see a second in RAID 0... hint hint :)

    Should be able to hit 200MB/sec writes!!
  • TheHolyLancer - Tuesday, March 30, 2010 - link

    I'm wondering, is there a way (be it a special RAID card or something else) to let me put a RAID 0+1 array in there with standard HDDs?

    Something like 2 SSDs + 2 HDDs, or hell, 2 SSDs + 1 large HDD with 2 partitions. This way, you can get data protection on the cheap.

    Or, for something like this to work, does the hard drive performance have to be high enough, or else do you need a huge data cache on the controller so the HDDs can catch up to the SSDs?

    Or is this just completely outside the scope of what current controllers can do?
  • Calin - Tuesday, March 30, 2010 - link

    You don't have that option (and anyway, it will slow your RAID to the speed of the slowest writing disk, even if reads will always take place from the SSD drives).
  • therealnickdanger - Tuesday, March 30, 2010 - link

    You're better off creating a SSD RAID-0 as your boot/app/game drive and then back up that partition every night (or twice per day?) to a HDD RAID-1/5/6. It's not as protected as a real-time RAID-1/5/6, but it's the best and cheapest of both worlds. Also, if you've ever tried to restore a RAID-1/5/6, it takes much, MUCH longer than restoring a partition from a backup. I use Windows Home Server to do this for my ~60GB SSD partition and it is bloody quick (the one time I had to do it).
  • rhvarona - Tuesday, March 30, 2010 - link

    Some Adaptec Series 2, Series 5 and Series 5Z RAID controller cards allow you to add one or more SSD drives as a cache for your array.

    So, for example, you can have 4x1TB SATA disks in RAID 10, and 1 32GB Intel SLC SSD as a transparent cache for frequently accessed data.

    The feature is called MaxIQ. One card that has it is the Adaptec 2405 which retails for about $250 shipped.

    The kit is the Adaptec MaxIQ SSD Cache Performance Kit, but it ain't cheap! Retails for about $1,200. Works great for database and web servers though.
  • GDM - Tuesday, March 30, 2010 - link

    Hi, I was under the impression that Intel has new RAID drivers that can pass through the TRIM command. Can you please rerun the test if that is true? Also, can you test the 160GB drives in RAID?

    And although benchmarks are nice, do you really notice it during normal use?

    Regards,
  • Makaveli - Tuesday, March 30, 2010 - link

    You cannot pass TRIM to an SSD RAID even with the new Intel drivers.

    The drivers will allow you to pass TRIM to a single SSD alongside an HDD RAID setup.

  • Roomraider - Wednesday, March 31, 2010 - link

    Wrong, Wrong, Wrong!!!!!!!
    The new drivers do in fact pass TRIM to RAID-0 in Windows 7. My two 160GB G2s striped in RAID 0 now have TRIM running on the array (verified via the Windows 7 TRIM command). According to Intel, this works with any TRIM-enabled SSD. No RAID 5 support yet.
  • jed22281 - Friday, April 2, 2010 - link

    What, so Anand is wrong when he speaks to Intel engineers directly?
    I've seen several other threads where this claim has since been quashed.
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Well this is definitely a test I was looking for. I just bought 3 of the Kingston drives off Amazon cheap and was trying to decide whether to RAID them or use them separately for OS/apps and games. Would a partition of 97.5GB (so about 14GB unpartitioned) be good enough for a wear-leveling buffer?
  • GullLars - Tuesday, March 30, 2010 - link

    Yes, it should be. You can consider making it 90GiB (gibibytes, 90*2^30 bytes) if you anticipate a lot of random writes and not a lot of larger files going in and out regularly.

    You will likely get about 550MB/s sequential read, and enough IOPS for anything you may do (unless you start doing databases, VMware and stuff). 120MB/s sustained and consistent write should also keep you content.

    Tip: use a small stripe size; even a 16KB stripe will work without fuss on these controllers.
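    As a rough sanity check on those partition sizes, a quick sketch (my own back-of-envelope math, assuming the figures above are GiB and that three 40GB drives expose about 111.8GiB of addressable space):

    ```python
    # Back-of-envelope spare-area math for 3x 40GB drives in RAID-0.
    raw_gib = 3 * 40e9 / 2**30              # ~111.8 GiB of addressable space
    for part_gib in (97.5, 90):
        spare = raw_gib - part_gib
        print(f"{part_gib} GiB partition -> {spare:.1f} GiB "
              f"(~{spare / raw_gib:.0%}) left unpartitioned as extra spare area")
    ```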
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Main reason I want to go with a 97.5GB partition is because that's the size of my current OS/apps/games partition. It's got about 21GB free, which I wanted to keep in case I wanted to install more games.

    In regards to stripe size, most of the posts I've seen suggest 64KB or 128KB are the best choices. What difference does this make? Why do you suggest smaller stripe sizes?

    Plans are for the SSDs to be OS/apps/games, with general data going on a pair of 1.5TB hard drives. Usage is mainly gaming, browsing, and watching videos, with some programming and the occasional fiddling with DVDs and video editing
  • GullLars - Tuesday, March 30, 2010 - link

    Then you should be fine with a 97.5GB partition.
    The reason smaller is better when it comes to stripe size on SSD RAIDs has to do with the nature of the storage medium combined with the mechanisms of RAID. I will explain in short here, and you can read up more for yourself if you are curious.

    Intel SSDs can do 90-100% of their sequential bandwidth with 16-32KB blocks @ QD 1, and at higher queue depths they can reach it at 8KB blocks. Hard disks, on the other hand, reach their maximum bandwidth around 64-128KB sequential blocks, and do not benefit noticeably from increasing the queue depth.

    When you RAID-0, files that are larger than the stripe size get split up into chunks equal in size to the stripe size and distributed among the units in the RAID. Say you have a 128KB file (or want to read a 128KB chunk of a larger file): this will get divided into 8 pieces when the stripe size is 16KB, and with 3 SSDs in the RAID this means 3 chunks for 2 of the SSDs and 2 chunks for the third. When you read this file, you will read 16KB blocks from all 3 SSDs at Queue Depth 2 and 3. If you check out ATTO, you will see 2x 16KB @ QD 3 + 1x 16KB @ QD 2 sum to higher bandwidth than 1x 128KB @ QD 1.

    The bandwidth when reading or writing files equal to or smaller than the stripe size will not be affected by the RAID. The sequential bandwidth of blocks of 1MB or larger will also be the same with any stripe size, since the data is striped across all drives in blocks that are either large enough, or numerous enough, for each SSD to reach its maximum bandwidth.

    So to summarize, benefits and drawbacks of using a small stripe size:
    + Higher performance for files/blocks above the stripe size while still relatively small (<1MB)
    - Additional computational overhead from managing more blocks in flight, although this is negligible for RAID-0.
    The added performance for small-to-medium files/blocks from a small stripe size can make a difference for OS/apps, and can be measured in PCMark Vantage.
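    A rough sketch of that chunk math (hypothetical Python, just to illustrate the round-robin striping; the function is not from any real RAID driver):

    ```python
    # How a read of `size_kb` is split across a RAID-0 array (round-robin striping).
    def split_request(size_kb, stripe_kb, drives):
        chunks_per_drive = [0] * drives
        for chunk in range(-(-size_kb // stripe_kb)):   # ceiling division
            chunks_per_drive[chunk % drives] += 1
        return chunks_per_drive

    # 128KB read, 16KB stripe, 3 SSDs -> [3, 3, 2]:
    # two drives see 3 chunks (QD 3), the third sees 2 chunks (QD 2).
    print(split_request(128, 16, 3))

    # The same 128KB read with a 128KB stripe -> [1, 0, 0]:
    # only one drive does any work, so there is nothing to gain from the RAID.
    print(split_request(128, 128, 3))
    ```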
  • WC Annihilus - Tuesday, March 30, 2010 - link

    Many thanks for the explanation. I may just go ahead and fiddle with various configurations and choose which feels best to me.
  • BarneyBadass - Tuesday, March 30, 2010 - link

    Anand,

    Thank you so much for such a well thought out, executed and documented report. I was so surprised, but then, perhaps I shouldn't have been.

    So now the question has to be..... what happens with a pair of Crucial's RealSSD C300 in RAID 0?

    It would be interesting to see if the same kind of scaling observed using the 2 Intel X25-Vs in RAID-0 is observed with 2 of Crucial's RealSSD C300s in RAID 0.

    Overkill? Perhaps. Heck, I could almost build a complete system for the cost of the 2 Crucial RealSSD C300 SSDs alone, but it's the whole performance thing I'm curious about.

    Do you have any numbers on how Crucial's RealSSD C300s in RAID 0 work out on SATA 3 controllers, or did I miss the whole thing and all the comparisons are of SSDs in RAID 0?

    TIA
  • GullLars - Tuesday, March 30, 2010 - link

    The C300 will scale almost perfectly from ICH10R, but 4KB random read IOPS will scale as a function of Queue Depth and won't be noticeably higher than a single drive until QD 7-8. Anything above the stripe size will receive a noticeable boost in random performance. This is true for all SSDs (the QD at which the difference stands out will differ by controller type and number of channels).

    I have not yet seen RAID numbers for any SATA 3 motherboard controllers, so I cannot comment there, but from a RAID card or HBA like the LSI 9000-series you should get perfect scaling. In larger arrays IOPS may be bound by the RAID logic chip. For the LSI 9000-series, IOPS regardless of block size, read or write, random or sequential, tops out around 80,000 for integrated RAID-0 (which will likely not be an issue outside servers).
  • GullLars - Tuesday, March 30, 2010 - link

    This was a great test, and one that I've been nagging about for a few months now. I'm a bit disappointed you stopped at 2 X25-Vs and didn't do 3 and 4 also; the scaling would have blown your chart away, while still coming in at a price below a lot of the other entries.
    I would also love it if you could do a test of 1-4 Kingston V+ G2 64GB in RAID-0, as it seems like the sweet spot for sequentially oriented "value RAID".

    I feel the need to comment on your remark under the random write IOmeter screenshot:
    "Random read performance didn't improve all that much for some reason. We're bottlenecked somewhere else obviously."
    YES, you are bottlenecked, and ALL of the drives you have tested to date that support NCQ have also been so. You test at Queue Depth = 3, which for random reads will utilize AT MOST 3 FLASH CHANNELS. I almost feel the need to write this section in ALL CAPS, but will refrain from doing so in order not to be ignored.
    The SSDs in your chart have anywhere between 4 and 16 flash channels. The Indilinx Barefoot has 4, the X25-V has 5, the C300 has 8, the X25-M has 10, and the SF-1500 has 16. And I repeat: YOU ONLY TEST A FRACTION OF THESE, so there is no need to be confused when there is no scaling from adding more channels through RAID. 2 X25-Vs in RAID-0 have the same number of channels as an X25-M, but twice the (small-block parallel, large-block sequential) controller power.

    For your benefit and clarification, I will now list the 4KB random read bandwidth the controllers I've mentioned above are capable of if you test them at the SATA NCQ spec (32 outstanding commands).
    Indilinx Barefoot (MLC): 16K IOPS = 64MB/s
    Intel X25-V: 30K IOPS = 120MB/s
    C300: 50K IOPS = 200MB/s
    Intel X25-M: 40K IOPS = 160MB/s
    SF-1500: 35K IOPS = 140MB/s

    As a side note, most of these SSDs scale about 20MB/s per outstanding IO with a flattening curve at the end. The Indilinx Barefoot is the exception, which scales linearly to 60MB/s at QD 5 and then flattens completely.
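    To see how that per-IO scaling explains the flat results at QD 3, here is a toy model (my own simplification, not AnandTech's methodology, using the rough 20MB/s-per-outstanding-IO figure above):

    ```python
    # Simplified model: each outstanding 4KB random read keeps roughly one flash
    # channel busy, at about 20MB/s per channel. Channels beyond the queue depth
    # sit idle, so at QD 3 every configuration looks the same.
    PER_CHANNEL_MBPS = 20

    def random_read_mbps(queue_depth, channels):
        return min(queue_depth, channels) * PER_CHANNEL_MBPS

    for name, channels in [("Indilinx Barefoot", 4), ("X25-V", 5),
                           ("2x X25-V RAID-0", 10), ("X25-M", 10)]:
        print(f"{name}: QD3 ~{random_read_mbps(3, channels)}MB/s, "
              f"QD32 ~{random_read_mbps(32, channels)}MB/s")
    # The real high-QD ceilings in the list above differ a bit from this model;
    # the point is the cap at QD 3, which no amount of extra channels can lift.
    ```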

    A RAID-0 of 2 X25-Vs will do 60K 4KB random read IOPS @ QD 32 = 240MB/s. I have IOmeter raw data of 4 X25-Vs in RAID-0 from ICH10R performing 120K 4KB random read IOPS @ QD 32 = 480MB/s.

    4 Kingston V+ G2 in RAID-0 are anticipated to come in at 15-20K 4KB random read IOPS = 60-80MB/s (@ QD 4-5), 6K 4KB random write IOPS = 24MB/s (acceptable), 600+ MB/s sequential read (will max out ICH10R), and about 400-450MB/s sequential write (possibly a bit lower after degrading).

    With this in mind, I again ask for a listing of IOPS @ QD 32 beside or below the test at QD 3 to show the _AVAILABLE PERFORMANCE_ and not only the "anticipated realistic IOPS performance".

    Also, it would be appreciated if you listed which stripe size you used. Generally on Intel SSDs, the smaller the stripe the better the real-life result. This is also reflected in PCMark Vantage.
  • rundll - Tuesday, March 30, 2010 - link

    Lars, could you kindly elaborate this:
    "if you test them at the SATA NCQ spec (32 outstanding commands)."
    How one can tweak this queue depth?

    And then one comment on the fact that TRIM doesn't work through RAID.
    Maybe a good choice for RAID is the Kingston SSDNow V Series (2nd Gen) 128GB SSD (or perhaps the 64GB would work similarly?). That's because it appears to recover from a heavy pounding in no time without TRIM. Allyn M. at PCPer put it like this after testing the Kingston:

    "the JMicron controller behaved like all other drives, but where it differed is what happened after the test was stopped. While most other drives will stick at the lower IOPS value until either sequentially written, TRIMmed, or Secure Erased, the JMicron controller would take the soonest available idle time to quickly and aggressively perform internal garbage collection. I could stop my tool, give the drive a minute or so to catch its breath. Upon restarting the tool, this drive would start right back up at it's pre-fragmented IOPS value.

    Because of this super-fast IOPS restoring action, and along with the negligible drop in sequential transfer speeds from a 'clean' to 'dirty' drive, it was impossible to evaluate if this drive properly implemented ATA TRIM. Don't take this as a bad thing, as any drive that can bring itself back to full speed without TRIM is fine by me, even if that 'full speed performance' is not the greatest."
  • GullLars - Tuesday, March 30, 2010 - link

    "Lars, could you kindly elaborate this:
    "if you test them at the SATA NCQ spec (32 outstanding commands)."
    How one can tweak this queue depth?"

    Anand does his testing in IOmeter. IOmeter has a parameter called # of outstanding IOs. This is the queue depth (QD). You tweak it by changing that one number from 3 (as Anand has it) to 32.

    The Kingston V+ G2 (JMF618) 64GB is IMO a drive worth considering RAIDing if you care a great deal about sequential performance. It has the highest bandwidth/$ of any SSD (except the 32GB version). 200/110 MB/s read/write per SATA port makes it fairly easy to scale bandwidth cheaply, while still getting sufficient IOPS. I say 64GB and not 128GB, since you don't get any scaling of read bandwidth from 64GB to 128GB while the price almost doubles, and write bandwidth only scales moderately. If you have a higher budget and need high sequential write bandwidth, the 128GB is worth looking at if you have few ports available.

    The JMF618 does about 4,000-5,000 4KB random read IOPS and roughly 1,500 random write.
    That this controller seems to be resilient to degradation matters little when you only have 1,500 random write IOPS in the first place. The X25-V has 10,000; the C300 and SF-1500 have roughly 40,000. None of these will degrade below 1,500 IOPS no matter how hard you abuse them, and will typically only degrade 10-25% in realistic scenarios. If you increase the spare area, this is lowered further.
    I have RAIDed SSDs since August 2008, and I know people who have RAIDed larger setups since early 2009, and there are seldom problems with degradation, almost never with normal usage patterns. The degradation happens when you benchmark, or have intensive periods of random writes (VMware, database updates, etc.).
  • galvelan - Wednesday, March 31, 2010 - link

    @GullLars

    Have a question for you regarding advice you gave another reader about stripe size for a RAID setup. I frequent the other forum you do as well, but would like to know what your take on write amplification is. On the other forum, Ourasi wrote that he found 16-32KB best, just as you mentioned. But others said that is not good for performance and would wear the SSDs out quicker due to write amplification, since the write block is larger than the selected stripe size and has to be done as multiple writes. I believe the sense of what you and Ourasi said in the other forum, but how does that affect write amplification? Can you explain more about that?

    P.S. I replied to another comment you made but Anand's comment system didn't take it, don't know why, so trying again. ;-)
  • GullLars - Thursday, April 1, 2010 - link

    There is no impact on write amplification from the striping. It can be explained pretty simply.
    SSDs can only write a page at a time (though some can do a partial page write), and the page size is 4KB on current SSDs. As long as the stripe size is above 4KB, you don't have writes that leave space unusable.
    With a 16KB stripe size, you will write 4 pages on each SSD, alternating and in sequential LBAs, so it's like writing sequential 16KB blocks on all SSDs, and as the file size becomes larger than {stripe size}*{# of SSDs} you will start increasing the Queue Depth, but it's still sequential.

    Since all newer SSDs use dynamic wear leveling with LBA->physical block abstraction, you won't run into problems with trying to overwrite valid data as long as there are free pages.

    The positive side of using a small stripe is a good boost for files/blocks between the stripe size and about 1MB. You will see this very clearly in ATTO, as the read and write speeds double (or more) when you pass the stripe size. For example, 2R0 X25-M with a 16KB stripe jumps from ~230MB/s at 16KB to ~520MB/s at 32KB (QD 4). This has a tangible effect on OS and app performance.
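    A quick sketch of that page math (hypothetical, assuming 4KB pages and round-robin striping as described above):

    ```python
    # Pages programmed per drive for one host write in RAID-0, assuming 4KB NAND
    # pages. As long as the stripe is a multiple of the page size, every drive
    # writes whole pages, so the total NAND programmed is the same for any stripe.
    PAGE_KB = 4

    def pages_per_drive(write_kb, stripe_kb, drives):
        pages = [0] * drives
        for chunk in range(write_kb // stripe_kb):
            pages[chunk % drives] += stripe_kb // PAGE_KB
        return pages

    # A 1024KB write over 3 drives:
    print(pages_per_drive(1024, 16, 3))    # 16KB stripe  -> [88, 84, 84] pages
    print(pages_per_drive(1024, 128, 3))   # 128KB stripe -> [96, 96, 64] pages
    # 256 pages (1024KB) total either way: the stripe size only changes how the
    # work is spread across drives, not how much gets written.
    ```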
  • galvelan - Friday, April 2, 2010 - link

    Thanks GullLars!!!

    I will try out the 16K stripe. I have 3 40GB Intel X25-V SSDs on an ICH10R. Would 16K be okay for this setup?

    By the way, could you explain to me why they see write amplification as a problem with these kinds of stripe sizes? You and Ourasi's explanations make perfect sense to me. Some actually even believed Ourasi and had started questioning themselves. Maybe it's a problem of these synthetic benchmarks compared to real-world usage, as Ourasi mentioned?

    Here is from the forum..

    http://www.xtremesystems.org/forums/showthread.php...

    Something Ourasi said

    "At the moment I'm on 16kb stripe, and have been for a while now, and it is blisteringly fast in real world, not only in benches. There is not much performance difference between 16kb - 32kb - 64kb, but the smaller sizes edges ahead. As long as there is no measurable negative impact using these small stripes, I will continue using them. The X25-M is a perfect companion for those small stripes, some SSD's might not be. Intel was right about this in my opinion. But I must add: This is with my ICH9R, and I fully understand that these stripes might perform pretty bad on some controllers, and that's why I encourage people to test for them selves..."

    Response to his comment

    "There are like 10 times more people saying that their testing showed 128 or more to be the best. Who do you think tested properly? Things like this will never be 100% one sided. "

    What bothered me was this comment later which had me ask you the question earlier..

    "Most SSD's have a native block size of 32KB when erasing. Some have 64KB. This is the space that has to be trimmed or "re-zeroed" before a write. If you write 1,024KB to a RAID-0 pair with 64KB blocks, with a 32KB stripe size, it will be 32 writes, requiring 64 erases. With 128KB stripes, it will be 32 writes, 32 erases. You'll effectively see a 30-50% difference in writes. This does not affect reads quite as much, but it's still usually double-digits. Also, with double the erase cycles, you will cut the lifespan of the drive in half."

    I am interested in testing some stripe sizes now, but I think I will use something more real-world. Do you think Vantage HDD would be a good test for stripe sizes, since it uses real-world applications? I don't like the synthetic benches.
  • GullLars - Friday, April 2, 2010 - link

    Regarding the "Most SSD's have a native block size of 32KB when erasing......" quote, this is purely false.
    Most SSDs have 4KB pages and 512KB erase blocks. In any case, as long as you have LBA->physical block abstraction, dynamic wear leveling, and garbage collection, you can forget about erase blocks and only think of pages.
    This is true for Intel's SSDs, and most newer SSDs (2009 and newer).

    These SSDs have "pools" of pre-erased blocks which are written to, so you don't have to erase every time you write. The garbage collection is responsible for cleaning dirty or partially dirty erase blocks and combining them into fully valid blocks in new locations, and the old blocks then enter the "clean" pool.

    Most SSDs are capable of writing faster than their garbage collection can clean, and therefore you get a lower "sustained" write speed than the maximum speed; it will however return to maximum when the GC has had some time to replenish the clean pool. Some SSDs will sacrifice write amplification (by using more aggressive GC) to increase sustained sequential write.

    Intel, on the other hand, has focused on maximizing random write performance in a way that also minimizes write amplification, and this means either a high temporary and really low sustained write speed, or, as Intel has done, a fairly low sequential write speed that does not degrade much. (This has to do with write placement, wear leveling, and garbage collection.)

    This technique is what allows the X25-V to have random write performance equal to (or close to) its sequential write: 40MB/s random write, 45MB/s sequential write. The X25-M could probably also get a random:sequential write ratio close to 1:1, but the controller doesn't have enough computational power to deliver that high a random write using Intel's technique.
  • WC Annihilus - Friday, April 2, 2010 - link

    I've done some testing on stripe sizes with the 3x X25-V's found in this thread:
    http://www.xtremesystems.org/forums/showthread.php...

    I have done 128k, 32k, and 16k so far. By the looks of it, 16k and 32k are neck and neck. 64k results will be going up in a couple hours.
  • galvelan - Friday, April 2, 2010 - link

    Looks forward to the info Annihilus.
  • GullLars - Saturday, April 3, 2010 - link

    Damn, that's a lot of RE:'s

    Anyways, I thought I'd post it here so everyone could see:
    The numbers he's referring to show the 16KB stripe as superior performance-wise.
    Here are the PCMark Vantage HDD scores of 3 X25-Vs in RAID-0 by stripe size:
    16KB: 74,164
    32KB: 70,364
    64KB: 63,710
    128KB: 55,045
    For those wondering, 16KB shows 540MB/s read and 131MB/s write in CrystalDiskMark 3.0 while 128KB shows 520MB/s read and 131MB/s write (1000MB length, 5 runs).

    Also, here are the AS SSD total scores by stripe size for 3 x25-V's in RAID-0:
    16KB: 809
    32KB: 797
    64KB: 795
    128KB: 774

    Scaling these PCMark Vantage numbers by 2/3, I guess Anand used a 128KB stripe.
    If he'd used a 16KB stripe, the numbers would likely be around 48,000-49,000.
    This is supported by benchmarking done by the user Anvil, who got 47,980 points in the Vantage HDD test with 2 X25-Vs in RAID-0 off ICH10R with a 16KB stripe size (IRST 9.6 driver, write-back cache disabled).
  • galvelan - Friday, April 2, 2010 - link

    Excellent info GullLars... Think others are just thinking that 128k is best for all SSD's.. But they obviously are not all the same.. Thanx alot!!
  • mschira - Tuesday, March 30, 2010 - link

    Hi, I'd like to get two 160GB X25-Ms for RAID. Linux software RAID, to be precise.
    Can I use TRIM then?
    Best,
    M.
  • yacoub - Tuesday, March 30, 2010 - link

    "earlier this month Intel launched its first value SSD: the X25-V."

    Last month.
    The drive was definitely available in early February. Maybe you started writing this article in February? :)
  • buzznut - Thursday, April 1, 2010 - link

    bought mine in January.
  • Lithium - Tuesday, March 30, 2010 - link


    Great test, Mr. Anand.
    A few weeks ago I purchased two Kingston 40GB drives to do just that, RAID-0.
    Can you please explain which program you use for Secure Erase and in which environment, DOS or Windows?
    Next, when you create the smaller 60GB partition, is that from DOS or from the Win7 setup? Should I use quick format from the Win7 setup...?

    All the best
    Thanks
  • 7Enigma - Tuesday, March 30, 2010 - link

    Hi Anand,

    Is it really as simple as copying a large file (or files) to the free space of these SSDs in RAID to bring performance back to something similar to a secure erase?

    If so, why doesn't Intel or some other 3rd party release a small program that simply takes My Computer's free space measurement, copies a file of the same size to the SSD, and then deletes it? Seems like it could be done very easily.
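    Something along those lines would only be a few lines of code; a rough sketch (entirely hypothetical, not an Intel or AnandTech tool, and the file path and chunk size are made up):

    ```python
    # Hypothetical "free space cleaner": fill the free space with one big file of
    # sequential writes, then delete it so the filesystem gets the space back.
    import os, shutil

    def clean_free_space(path="C:\\wipe.tmp", chunk_mb=64, keep_free_mb=512):
        free = shutil.disk_usage(os.path.dirname(path) or ".").free
        to_write = max(free - keep_free_mb * 1024 * 1024, 0)
        chunk = b"\x00" * (chunk_mb * 1024 * 1024)
        written = 0
        with open(path, "wb") as f:
            while written + len(chunk) <= to_write:
                f.write(chunk)      # sequential writes over the free LBAs
                written += len(chunk)
        os.remove(path)             # hand the space back to the filesystem

    # clean_free_space()  # leaves some headroom so the OS doesn't run out of space
    ```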

    Thanks for the mini-review....makes me want to get another 80gig G2 to RAID with my current one!
  • mervincm - Tuesday, March 30, 2010 - link

    I think this app does exactly that. You can even pick if it writes 0's or 1's.
  • mervincm - Tuesday, March 30, 2010 - link

    Freespacecleaner AS-Clean
  • 7Enigma - Tuesday, March 30, 2010 - link

    So then why don't the major players offer this type of tool that works in the background? You'd have a little background task that, once a week @ 2am, does this when it detects no disk activity, and TRIM wouldn't be important except in very rare circumstances where the drive gets no downtime (say, a server). Just like an unobtrusive virus scan, or more aptly, an SSD defrag!?

    I have a G2 Intel 80GB and even though I have the latest firmware that supposedly TRIMs on its own, once a month I use the little Toolbox utility to manually TRIM. It would be very nice if they offered a program like this to the G1 owners that they basically slapped in the face....

    This is along the same lines as GPUTool. For my 4870 which runs hot at idle for no reason I can drop down 40-80 watts just by downclocking and yet it takes a 3rd party to offer this?
  • Bolas - Tuesday, March 30, 2010 - link

    I thought TRIM was enabled for RAID now, due to a new firmware patch to the Intel chipset?!? Wouldn't that entirely change the outcome of this article?
  • WC Annihilus - Tuesday, March 30, 2010 - link

    No. As per Makaveli's reply to GDM, the new Intel drivers do NOT enable TRIM for RAID. They only allow pass-through to SSDs not in an array; i.e., with a single SSD + RAID hard drives, TRIM now makes it to the SSD.
  • Roomraider - Wednesday, March 31, 2010 - link

    New Intel chipset drivers do in fact enable TRIM, but only with a RAID-0 array in Windows 7. Read the documentation properly. It says any TRIM-enabled SSD, in RAID-0 only, with Windows 7.
  • jed22281 - Friday, April 2, 2010 - link

    "New Intel chipset drivers do in fact enable trim only with raid-o array in windows 7. Read the documentation properly. It says any trim enabled SSD in Raid -0 only with Windows 7. "

    Where's this doco?

    Thank-you very much.
  • Roomraider - Wednesday, March 31, 2010 - link

    You, my friend, are absolutely correct. My reads tripled and my writes more than doubled with 2x 160GB X25-M G2s, with TRIM verified as enabled & working.
  • AnalyticalGuy2 - Tuesday, March 30, 2010 - link


    Great points and questions GullLars!! Don't give up!!

    What about the possible merits of keeping the drives independent? At the very least, you get to keep TRIM. And maybe there is an advantage to having 2 independent data paths leading to the drives? Or to having 2 independent controllers?

    My goal is to minimize the time it takes to: boot Windows 7, allow the anti-virus to scan startup files and download updates, and load several applications into RAM.

    Question: Would this be faster with:
    A) One Intel 160GB G2 SSD (by the way, a 10-channel controller, right?)
    B) Two independent (i.e., no RAID involved) Intel 80GB G2 SSDs -- One for Win 7 and the other for anti-virus and applications? (by the way, each SSD has a 10-channel controller, for a total of 20 channels?)
  • GullLars - Tuesday, March 30, 2010 - link

    B would be faster, since each of the drives can then work independently in parallel. If you do something that hits both drives with sequential reads at the same time, you get 400+ MB/s read.

    A guy on a forum I frequent tried using W7 software RAID in the hope of getting TRIM on Intel SSDs; it didn't work. Performance was comparable to ICH10R RAID.

    I would still recommend using two X25-M 80GB in RAID over your option B, since then you always get the doubling of sequential performance. Write degradation without TRIM is not a problem in most realistic cases when you're RAIDing, and you likely won't notice anything. If you should notice anything, writing a large file to all free space until you have written 2-3x the capacity will likely restore performance to levels similar to fresh. You can also increase the spare area a bit and greatly reduce the impact of random writes on performance. I would consider partitioning 2x X25-M 80GB to 140GiB if you anticipate a lot of random writes and few sequential, or are just afraid of performance degradation.
  • Beanwar - Tuesday, March 30, 2010 - link

    Anand, have you considered using Windows software RAID on two of these drives? If you're not using the hardware RAID, you should be able to pass TRIM commands to the drives, and given that even the newest applications out there aren't going to be sending all CPU cores to 100%, it's not like the overhead is going to be all that bad. Just a thought.
  • xyxer - Tuesday, March 30, 2010 - link

    Doesn't Intel Rapid Storage 9.6.0.1014 support TRIM? http://www.hardwarecanucks.com/news/cpu/intel-chip...
  • Makaveli - Tuesday, March 30, 2010 - link

    Why are so many of you having difficulty understanding this? YOU DO NOT GET TRIM SUPPORT WITH THE NEW INTEL DRIVERS IF YOU HAVE A RAID ARRAY BUILT OF JUST SSDs!

    Wherever you guys are reading that, stop: it's wrong!

  • vol7ron - Tuesday, March 30, 2010 - link

    Finally a RAID! Thank you, thank you, thank you. Just a few days too late, since I already bought the 80GB, but this still makes your review a little more meaningful. It is essentially the equivalent of showing overclocks for CPUs, and more meaningful, since hard drives are a bottleneck.

    Advice:
    RAID-0 sees a greater impact with 3 or more drives. I think the impact is exponential with the number of drives in the array, not just seemingly double. I know TRIM is not supported, but if you could get one more 40GB drive and also include the impact, that would be nice. I would consider anything more than 3 drives in the array as purely academic; 3 or fewer drives is a practical (and realistic) system setup.


    Notes to others:
    I saw the $75 discount on Newegg for 80GB X25MG2 (@ $225) and decided to grab it, since one of my 74GB Raptors finally failed in RAID. This discount (or price drop) is most likely due to the $125 40GB version. I also picked up Win7 Ultimate x64, to give it a try.
  • cliffa3 - Tuesday, March 30, 2010 - link

    Anand,

    On an install of Win7, I'm guessing a good bit of random writes occur.

    How much longer would you stave off the performance penalty due to having no TRIM with RAID if you took an image of the drive after installation, secure erased, and restored the image?

    Please correct me if I'm wrong in assuming restoring an image would be entirely sequential.

    I would probably image it anyway, but just trying to get a guess on what you think the impact would be in the above scenario to see if I should immediately secure erase and restore.

    I also would be interested in how much improvement you get by adding another drive to the array in RAID 0...is it linear?
  • GullLars - Tuesday, March 30, 2010 - link

    Some of the power users I know have used the method of secure erase + image to restore performance if/when it degrades. Mostly they do it after heavy benchmarking or once every few months on their workstations (VMware, databases and the like).

    RAID scales linearly as long as the controller can keep up. Those are the raw performance numbers; the real-life impact is not linear and will have diminishing returns, because storage performance divides into two major categories: throughput and access time. Throughput scales linearly; access time stays unchanged. Though the average access time for larger blocks, and under heavy load, takes less of a hit in RAIDs.

    Intel's southbridges scale to roughly 600-650MB/s, and I've seen 400+ MB/s done at 4KB random.

    As for random read scaling in RAID, you have the formula IOPS = {Queue Depth} / {average access time}.
    Average access time climbs more gently with Queue Depth the more units you put in the RAID, but at low QD (1-4) there is little to gain for blocks smaller than the stripe size. No matter how many SSDs you add to the RAID, you will never get scaling beyond QD * (IOPS @ QD 1).
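    Put as numbers, a back-of-envelope check (the 0.1ms access time is just an illustrative figure, not a measured spec for any of these drives):

    ```python
    # Little's law for storage: queue depth = IOPS * average access time,
    # so IOPS = queue depth / average access time.
    def iops(queue_depth, avg_access_time_ms):
        return queue_depth * 1000.0 / avg_access_time_ms

    # With an assumed ~0.1ms average 4KB read access time:
    print(iops(1, 0.1))   # ~10,000 IOPS at QD 1
    print(iops(3, 0.1))   # ~30,000 IOPS at QD 3 -- the ceiling at that queue
                          # depth no matter how many SSDs are striped together
    ```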
  • ThePooBurner - Tuesday, March 30, 2010 - link

    Check out this video of 24 SSDs in a raid 0 array. Mind blowing.

    http://www.youtube.com/watch?v=96dWOEa4Djs
  • GullLars - Tuesday, March 30, 2010 - link

    Actually, that RAID has BAD performance relative to the number of SSDs.
    You are blinded by the sequential read numbers. Those Samsung SSDs have horrible IOPS performance, and the cost of the setup in the video compared to its performance is just outright awful.

    You can get the same read bandwidth with 12 X25-Vs, and at the same time 10x the IOPS performance.
    Or if you choose to go for the C300, 8 C300s will beat that setup in every test and performance metric you can think of.

    Here is a YouTube video of the performance of a Kingston V 40GB launching 50 apps, for you to compare to the Samsung setup:
    http://www.youtube.com/watch?v=sax5wk300u4&fea...

    I will also point out that my 2 Mtron 7025 SSDs, produced in December 2007, can open the entire MS Office 2007 suite in 1 second, from an SB650 with prefetch/superfetch deactivated.
  • Slash3 - Tuesday, March 30, 2010 - link

    Speaking of which, is there a "report abuse" button for comments on this new site design system? I didn't notice one, just in fumbling around a bit.
  • waynethepain - Tuesday, March 30, 2010 - link

    Would defragging the SSDs mitigate some of the build up of garbage?
  • 7Enigma - Tuesday, March 30, 2010 - link

    You do not defrag an SSD.
  • GullLars - Tuesday, March 30, 2010 - link

    You don't need to defrag an SSD, but you can. This does not affect the physical placement of the files, but will clean up their LBA fragmentation in the file tables. Since most SSDs can reach full bandwidth (or close to it) at 32-64KB random reads, you need a seriously fragmented system before you will notice anything. There are almost no files that will get fragmented into pieces that small, and even if you get 50 files each in 1000 fragments of 4KB, the SSD will read each one, when needed, in a fraction of a second.

    It doesn't hurt to defrag if you notice a few files in hundreds or thousands of fragments; the lifespan of the SSD will be unaffected by one defrag a week, but it will cause spikes of random writes, which may cause a _temporary_ performance degradation if you don't have TRIM.
  • jed22281 - Friday, April 2, 2010 - link

    @GullLars

    Could you please explain to RoomRaider (down the bottom of pg 6) that there is no
    TRIM support for RAID-0?

    He keeps insisting there is throughout these comments....
    He needs someone to explain why what he's citing as proof, is wrong.

    Thanks if you can!
  • jed22281 - Friday, April 2, 2010 - link

    Then again maybe he's right...
    http://www.tweaktown.com/articles/3116/tweaktown_s...
    But the numbers he's cited as proof of this haven't sounded right to me yet.
  • Boofster - Tuesday, March 30, 2010 - link

    It would be nice to see the X25-M G2 in RAID0 as well. Yes it is much more $ but still great value if you look at the performance. The 160gb will cost you ~$400 and possibly beat anything in this review. Of course you can say the same for the faster single drives in RAID0 as well but the value is lost.

    I can say for sure the Intel RAID tools do not let you TRIM the drives in RAID0 (X58 board). I am not sure if temporarily dropping a drive from the RAID, TRIMing it, then putting it back would work. Probably not, because it would mess up the file system. I really hope Intel works this into their drivers as it is a very attractive option.

    Can you also elaborate on the cleaning process? How do I accomplish this "secure data wipe"?
  • Hauraki - Tuesday, March 30, 2010 - link

    There was a review on X-bit Labs that praised the V+ 2nd gen for home usage, and I'd like a second opinion from AnandTech. Thanks.
  • nobita1168 - Tuesday, March 30, 2010 - link



    Hi, I miss the printed version. I like to save to disk and read later. Can AnandTech make a print version again? Thanks.
  • Ramon Zarat - Wednesday, March 31, 2010 - link

    I don't know if the question has been asked before, but I'm wondering if TRIM will eventually be implemented for RAID or if it's technically impossible. If it's possible, any clue as of when it will happen?
  • jed22281 - Saturday, April 3, 2010 - link

    I would love to know this too! :(
  • Roomraider - Wednesday, March 31, 2010 - link

    After reading this review, I had to post: I'm running 2x 160GB X25-M G2s in RAID-0 with full TRIM support from Intel's latest chipset drivers, designed just for TRIM-to-RAID support.

    So what gives here? Did someone not get the news about the new chipset drivers?
  • jed22281 - Saturday, April 3, 2010 - link

    You are mistaken; there is no support for drives combined into RAID volumes.
    There is for individual drives connected to the controller while it's in RAID mode.
    http://www.intel.com//support/chipsets/imsm/sb/CS-...

    Also see
    http://communities.intel.com/community/tech/solids...
    Look for the gold star at the top of the page, select show details and then go to announcement 2.

    "Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0, 1, 5, 10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release."
  • buzznut - Thursday, April 1, 2010 - link

    Thank you Anand!
    This is exactly what I've been looking for. I thought the performance would be better, but I had no idea it would be this good! I still see very little being written about these cost-effective little drives. It's good to know where to go for my SSD advice.

    A quick question for anyone: my motherboard was running the drive (Intel X25-V) in AHCI mode, but the mobo (TForce TA790GX A2+) has never liked running in that mode and I started getting BSODs after about two weeks. I had to switch back to IDE mode; the board has always been quirky this way. Runs very smooth in IDE mode, no problems.

    I had heard someone mention that TRIM doesn't work in IDE mode, only while running the drive in AHCI. If this is the case, then I will truly not miss TRIM when I get another drive and RAID!

    Can anyone confirm the loss of the TRIM command in IDE mode? The Intel Toolbox seems to work just fine when I run the weekly optimizer.
  • GullLars - Thursday, April 1, 2010 - link

    I think TRIM works in IDE mode also; I remember reading that both the PCIIDE and MSAHCI drivers support TRIM. However, this is not the big problem with using IDE mode; the problem is the loss of NCQ, so your performance doesn't scale with load. Your SSD will essentially only be able to do about 20MB/s at 4KB random read, while it can do 120MB/s random read with NCQ enabled (at fairly high load, like launching multiple apps simultaneously).
  • buzznut - Thursday, April 1, 2010 - link

    Thanks for the reply. I have been considering getting a new mobo anyway; I think I'd like to get an 890GX with the new interfaces.

    Only problem there is I have DDR2 RAM and AM2 CPUs. Doh.

    Guess I'll wait till some money comes in to do any upgrading...
  • Elganja - Friday, April 2, 2010 - link

    "Update (03/29/2010): Intel has recently released a new driver that allows Windows 7’s TRIM instructions to be passed through the Southbridge. The new driver is labeled "Rapid Storage Technology 9.6" and it can be found here. These drivers are also able to pass TRIM commands to RAID 0 and RAID 1 arrays. "
  • jed22281 - Saturday, April 3, 2010 - link

    TT is mistaken, there is no support for drives combined into RAID volumes.
    There is for individual drives connected to the controller while it's in RAID mode.
    http://www.intel.com//support/chipsets/imsm/sb/CS-...

    Also see
    http://communities.intel.com/community/tech/solids...
    Look for the gold star at the top of the page, select show details and then go to announcement 2.

    "Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0, 1, 5, 10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release."
  • Chloiber - Sunday, April 4, 2010 - link

    A colleague from another HW site asked Intel directly. It's definitely NOT SUPPORTED (just to point that out again). It's clearly a mistake by Intel, as they didn't make themselves clear in the change logs/readmes, and even in the Rapid Storage Manager itself it's not clear.

    Here is what they said btw:

    "It will support TRIM with SSDs in an AHCI configuration, or with the RAID controller enabled and the SSD is used as a pass through device. An example of this use case is for users that want to use the SSD as a boot drive but still be able to RAID multiple HDDs together to allow for large protect data storage – a great use for the home theater PC."

    No RAID0 or anything. "Just" simple TRIM as we're used to from the MSAHCI drivers.
  • Elganja - Friday, April 2, 2010 - link

    The quote was from this article: http://www.tweaktown.com/articles/3116/tweaktown_s...
  • morphin1 - Friday, April 2, 2010 - link

    What do you mean by doing a sequential write?
    How would you do that?
    Also, will this apply to the new Sony Vaio Z series laptop SSDs that come in a RAID 0 config?
    Is it recommended to buy a laptop with SSDs in a RAID 0 config, as over time they might become snails?
    Do you see any chance of TRIM being supported on current SSDs in the near future?
    Thanks a lot in advance.
  • GullLars - Saturday, April 3, 2010 - link

    Sequential writes are the type of writes that typically occur when you save or copy large files (1MB or larger).

    As for RAID in the Sony Vaio: there is no way of telling if it will become snail-like without knowing which SSD is in the laptop in question. If it's Intel, SandForce, or C300, it should be just fine; if it's some low-quality cheap SSD from the last generation of drives, or the crap they put in the netbooks last year, it will go really bad.
  • jed22281 - Saturday, April 3, 2010 - link

    You mean if it's:
    Postville (Intel), SF-1200/1500 (SandForce), C300 (Marvell), or Barefoot (Indilinx)
  • GullLars - Saturday, April 3, 2010 - link

    I forgot barefoot, my bad.
  • morphin1 - Saturday, April 3, 2010 - link

    Thank you a lot, Anand, for the reply and clearing that up for me.
    The Sony Vaio Z has Samsung MMCRE28G drives (2x 64GB).
    I am holding my buying decision on your recommendation. What say you? Will the above drives be rendered useless over time with TRIM?
    Thank you a lot for your time.
    I have been reading AT for over 2 years now and love the in-depth reviews you guys do as opposed to other sites.
    Love this site.
    Cheers
  • morphin1 - Saturday, April 3, 2010 - link

    How will the Samsung drives fare without TRIM?
    I wrote incorrectly when I said "with TRIM".
  • Chloiber - Sunday, April 4, 2010 - link

    What's the QD of the random read/write tests (did I miss it?)?
  • AnalyticalGuy3 - Wednesday, April 7, 2010 - link


    Suppose I have four X25V's in RAID 0 using the Intel ICH10. Two questions:

    1) Just to confirm my understanding, a random read request smaller than the stripe size should only access one member of the array, correct?

    2) Suppose the queue contains four random read requests smaller than the stripe size. And suppose I'm really lucky -- each random read request happens to tap a different member of the array. Is the ICH10 smart enough to dispatch all four random read requests in parallel?
  • boostcraver - Tuesday, April 13, 2010 - link

    Anand,

    I'm sure I read this article thoroughly, but I didn't see exactly what the RAID configuration was: whether onboard RAID, dynamic disks in Windows, or a dedicated RAID add-in card.

    Once that is explained, it'd be great if we could get a benchmark comparison of these different RAID technologies: something to justify the simplicity of using Windows dynamic disks for a RAID 0 / RAID 1 configuration, or to spend the extra bucks on a SAS/SATA RAID card. Since everyone has been recommending RAID 0 for the best performance, we should understand exactly the options at hand.

    Thanks for the great articles.

    - boostcraver
  • Shin0chan - Friday, June 18, 2010 - link

    "A standard 80GB X25-M wouldn't be this bad off, the X25-V gets extra penalized by having such a limited capacity to begin with. You can see that the drive is attempting to write at full speed but gets brought down to nearly 0MB/s as it has to constantly clean dirty blocks. Constant TRIMing would never let the drive get into this state. It's worth mentioning that a desktop usage pattern shouldn't get this happen either. Another set of sequential writes will clean up most of this though"

    Hi guys, I understood where this topic was going until I read the last sentence of this paragraph and this lost me. I'm trying to understand this from a normal day-to-day usage train of thought; the performance degradation is mind blowing.

    So how would I do a bunch of sequential writes to the drive if my OS and Apps would be sitting on this? (They normally take up about 60-65GB for me.) From my understanding, this means I would have to use Secure Erase and restore from say a complete backup.

    Please enlighten me lol. Thanks.
  • chaosfox97 - Tuesday, August 3, 2010 - link

    If it got overloaded and you were regretting putting it into RAID-0, couldn't you just change it to normal, use TRIM, and change it back to RAID-0?
