30 Comments

  • Spoelie - Monday, May 3, 2010 - link

    One area I think that might still be affected is reliability. SandForce stated that (1) smaller geometries introduce more defects and (2) manufacturers could use cheaper, less reliable flash in drives with their controllers.

    Does the reduction of spare area also reduce lifetime/reliability in the above scenarios, or does spare area serve performance alone? I reckon it's not something one would be able to measure, though.
  • GeorgeH - Monday, May 3, 2010 - link

    Reliability will go down. 28% wasn't a random choice; it was selected to deliver a certain MTBF. AFAIK the "enterprise" drives use the same 28%, though, so "consumer" usage models should be able to get by with less.

    The real question is how they arrived at 13% - is it Bean Counter Bob's number or Engineer Eric's number? Until they answer that question and release their methodology for arriving at 13%, I wouldn't touch one of these with a thousand-foot pole. The chance that 13% was the misguided result of some accountant waddling over to the R&D department for 5 minutes is just too great relative to the small benefit of 10-20 "free" GBs.
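
    For what it's worth, the arithmetic behind those percentages is easy to check. A back-of-the-envelope sketch in Python (assuming 128 GiB of raw NAND behind 100 GB and 120 GB of user space, which matches the capacities under discussion):

    ```python
    # Spare area as a fraction of raw NAND, before and after the firmware change.
    GIB = 2**30   # raw NAND comes in binary gigabytes
    GB = 10**9    # advertised user capacity is decimal

    raw = 128 * GIB
    for user_gb in (100, 120):
        spare = raw - user_gb * GB
        print(f"{user_gb} GB user -> {spare / raw:.1%} spare")

    # 100 GB user -> 27.2% spare  (the quoted ~28%)
    # 120 GB user -> 12.7% spare  (the quoted ~13%)
    ```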
  • softdrinkviking - Monday, May 3, 2010 - link

    I wonder how much of a role the spare area plays in supporting the compression algorithms of the SandForce controller.

    It seems like, with such a complex controller, it would be wise to have plenty of "hash or index" space to work with, or is that all stored somewhere else?
  • jleach1 - Monday, May 3, 2010 - link

    IDK about you... but I don't plan on keeping a drive this small for that long. A few years is reasonable. Right now, what people want is a cheap drive that performs well. I'll gladly trade 6 months of the life of my drive for some badly needed space. In 6 months, they'll likely have a set of firmware options that increase the amount of usable space, and improved algorithms that offset the usual reliability problems.

    Good job OCZ. Less $/GB = a happier public.
  • softdrinkviking - Wednesday, May 5, 2010 - link

    My question was about how the amount of spare area would affect the short-term reliability of the drive. Assuming that these drives are relatively unproven, who's to say that they won't start losing data because of the complex compression used by the controller?

    I want to know if lessening the spare area could contribute to controller errors, leading to the loss of data.
  • Belard - Monday, May 3, 2010 - link

    Looking at your benchmarks, SATA 6Gb/s systems aside, the Intel X25-M (G2) is still consistently the fastest and most reliable drive on the market. Personally, I can't wait for the SSD market to have SATA 3 drives as standard.

    Seq. Read
    OCZ = 264 MB/s * (okay a bit faster)
    X25 = 256 MB/s

    Seq. Write
    OCZ = 252 MB/s * (Destroys the Intel)
    X25 = 102 MB/s

    But most operations are random... So if you're doing video encode/decode or file copies, the OCZ kills.

    Random Read
    OCZ = 52 MB/s
    X25 = 64 MB/s * Intel wins easily. Even the top 6Gb/s drive is barely faster.

    Random Write
    OCZ = 44 MB/s
    X25 = 46 MB/s * (not bad for an OLD drive)
    Half the drives are much slower, but some of the best are easily faster.

    It will be interesting to see what happens to the SSD market in 12 months.
  • 7Enigma - Monday, May 3, 2010 - link

    I kinda feel the same way. Since we have not yet reached the point where a large portion of our data is stored on these (for most of us, at least), these sequential writes just don't blow me away the way the X25 changed the HD scene. After the initial setup (OS, programs, a couple of games), the drive is basically going to be a random read/write drive with the occasional install, and for that I can wait the extra time a faster drive would have saved if the end result (gaming/bootup/etc.) is nearly the same.

    What I want to see is the same game-changing leap over the traditional HD that the X25 delivered, but in the random read/write metric. Get those into the 200-300MB/sec range and THEN I'll get excited again.
  • The0ne - Monday, May 3, 2010 - link

    Reading all the latest AnandTech SSD reviews feels like I'm reading someone's hobby work :) So many changes. Can't wait until it stabilizes A LOT more.
  • sgilmore1962 - Monday, May 3, 2010 - link

    Random Write
    OCZ = 44 MB/s
    X25 = 46 MB/s * (not bad for an OLD drive)
    Half the drives are much slower, but some of the best are easily faster.

    Conveniently omitting the part where, if you are using Windows 7, 4K random writes are aligned on 4K boundaries. The SandForce random 4K writes become 162MB/s, a whopping margin over the Intel G2.
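
    For the curious, "aligned" just means every write starts on a 4096-byte boundary, so it maps cleanly onto one flash page instead of straddling two. A minimal sketch (the drive size is a placeholder; real benchmark tools handle alignment internally):

    ```python
    # Aligned vs. unaligned 4K random write offsets.
    import random

    DRIVE_BYTES = 100 * 10**9   # hypothetical 100 GB target
    BLOCK = 4096

    def aligned_offset():
        # pick a random 4 KiB slot, then convert to a byte offset
        return random.randrange(DRIVE_BYTES // BLOCK) * BLOCK

    def unaligned_offset():
        # any byte offset at all; may straddle two 4 KiB pages
        return random.randrange(DRIVE_BYTES - BLOCK)

    print(aligned_offset() % BLOCK)    # always 0
    print(unaligned_offset() % BLOCK)  # almost never 0
    ```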
  • Belard - Monday, May 3, 2010 - link

    Do you know that there is a REPLY button? That way your COMMENT would be attached to the post, rather than starting a whole new disconnected thread.

    So look to the left, where my name is, and you'll see the word REPLY. Give it a shot.

    - - - - -
    Man, wish there was a QUOTE function as well as the ability to save my LOG-IN on this revised site.

    "Conveniently omitting the part where if you are using Windows 7 4k random writes are aligned on 4k boundries. The Sandforce random 4k writes become 162mb/s a whopping margin over Intel G2."

    Er... no. I *DID* go with the Win7 performance test. I was comparing the REVIEWED drive to the Intel X25-M. And I ALSO said "but some of the best are easily faster"... so I was NOT disregarding the SF drives.
    I was expecting people to be able to figure this out.

    And when it comes to RANDOM reads... all those SF drives you're so concerned with are easily SLOWER than the X25-M.

    Intel X25-M G2 160/80 = 64.3~64.5 MB/s
    Intel X25-M G1 160/80 = 57.9 MB/s

    SF 1200~1500s = 49.4~52.1 MB/s... Ouch, SF is slower than the year-old G2 and even the older G1!! It loses by up to 15MB/s, about 25% slower than Intel's!

    The Intel drives were the most expensive... now they are generally cheaper (cost per GB).

    I will continue to buy G2 drives (even those without the Intel label) for my clients until something that is better across the board comes out. As far as I am concerned, random reads are somewhat more important than random writes... and both matter more than sequential. This is why Windows 7 boots up in about 10~12 seconds vs 35~50 sec for a HD on the same desktop.

    And I am not even a big fan of Intel. I usually build AMD systems. But I'll buy what is good.
    Intel X25-M G2 wins in:
    A - price
    B - Availability (many of the OCZs are not even available; some stores only carry older models)
    C - Performance Random
    D - Performance Sequential (okay, at 256 vs 265... Intel is a bit slower)
    E - Reliability
    F - TRIM support (it's unclear if all the other drives support TRIM, depending on the firmware)

    From the looks of things, the G2 will lose its position when the G3 comes out.

    I plan to get a G3 for my next build... Hopefully it'll be $150~200 for 80GB with SATA 3.0 delivering 375+MB/s Seq Read/Write and 200MB/s for random R/W. That, I would really drool over!
  • StormyParis - Monday, May 3, 2010 - link

    It seems SSD perf is the new dick size?
  • Belard - Tuesday, May 4, 2010 - link

    No, it has nothing to do with "mine is bigger than yours"...

    They are still somewhat expensive. But when you use one in a notebook or desktop, it makes your computer SO much faster. It's about SPEED.

    Applications like Word or Photoshop are ready to go by the time your finger leaves the mouse button. Windows 7, fully configured (not just some clean/bare install), boots in about 10 seconds (7~14 sec depending on the CPU/mobo & software used)... compared to 20~55 seconds for a 7200RPM drive.

    And that is with today's X25-M G2 drives using SATA 2.0. Imagine: in about 2 years, $100 should get you 128GB that can READ upwards of 450+ MB/s.

    Back in the OLD days, my Amiga 1000 would boot the OS off a floppy in about 45 seconds; with the HD installed, closer to 10 seconds.

    I'll admit that Win7's sleep mode works very well, with a wake-up time on a HDD of about 3~5 seconds.
  • neoflux - Friday, June 11, 2010 - link

    Wow, someone's an internet douche.

    Maybe you need to look at the overall system benchmarks, like PCMark Vantage or the AnandTech Storage Bench, where your precious Intels are destroyed by the SF-based drives.

    And even if you look at the random reads/writes, the Intels are again destroyed on the writes and the same speed for the reads. I'm of course looking at the aligned benchmarks (aligned read not in this article, but shown here: http://www.anandtech.com/bench/SSD/83 ), because I'm actually being objective and realistic about how SSDs are used rather than trying to justify my purchases/recommendations.
  • buggyfunbunny - Monday, May 3, 2010 - link

    What I'd like to see added to the test bench, for all drive types (not just SSDs), is a real-world, highly (or fully) normalized relational database: selects, inserts, updates, deletes. The main use case for SSDs will turn out to be such join-intensive databases; massive file systems, awash in redundant data, will always outpace silicon storage in sheer size. Make it real-world scale: tens of gigabytes, running transactions. The TPC suite would be nice, but AT may not want to spend the money to join. Any consistent test is fine, as long as it exercises joins rather than flat-file structures.
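
    A minimal sketch of the kind of join-intensive test being suggested, using SQLite in Python (the schema, table names, and row counts are all invented for illustration; a real test would run tens of gigabytes):

    ```python
    # Join-intensive disk workload sketch: populate two related tables on the
    # drive under test, then time a join + aggregate over them.
    import random
    import sqlite3
    import time

    con = sqlite3.connect("join_bench.db")   # file on the drive under test (fresh run)
    cur = con.cursor()
    cur.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(id),
            amount REAL);
    """)
    cur.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(i, f"cust{i}") for i in range(10_000)])
    cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(i, random.randrange(10_000), random.random() * 100)
                     for i in range(100_000)])
    con.commit()

    start = time.perf_counter()
    cur.execute("""SELECT c.name, SUM(o.amount)
                   FROM customers c JOIN orders o ON o.customer_id = c.id
                   GROUP BY c.id""")
    cur.fetchall()
    print(f"join + aggregate: {time.perf_counter() - start:.3f}s")
    con.close()
    ```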
  • Zan Lynx - Monday, May 3, 2010 - link

    Big databases don't bother with SATA SSDs. They go straight to PCI-e direct-connected SSD, like Fusion-IO and others.

    If you want fast, you have to get rid of the overhead associated with SATA. SATA protocol overhead is really a big waste of time.
  • FunBunny - Monday, May 3, 2010 - link

    Of course they do. PCIe "drives" aren't drives, and they're limited by the number of slots. This approach works OK if you're taking the Google way: one server, one drive. That's not the point. One wants a centralized, controlled datastore; that's what BCNF databases are all about. (Massive flat files aren't databases, even if they're stored in a crippled engine like MySQL.) Such databases talk to some sort of storage array; an SSD array running BCNF data will be faster than some kiddie koders' Java talking to files (kiddie koders don't know that what they're building is just like their grandfathers' COBOL messes; but that's another episode).

    In any case, the point is to subject all drives to join-intensive datastores, to see which ones do best. PCIe will likely be faster in pure I/O (though not necessarily in retrieving the joined data) than SATA SSDs or HDDs, but that's OK; some databases can work that way, most won't. Last I checked, the Fusion parts (which, if you've looked, they now expressly say AREN'T SSDs) are substantially more expensive than the "consumer" parts AT has been looking at. That said, storage vendors have been using commodity HDDs (suitably QA'd) for years. In due time, the same will be true for SSD arrays.
  • Zan Lynx - Monday, May 3, 2010 - link

    It is solid state. It stores data and looks like a drive to the OS. That makes it an SSD by my definition.

    I wonder why Fusion-IO wants to claim their devices aren't SSDs. My guess is that they just don't want people thinking their devices are the same as SATA SSDs.
  • jimhsu - Monday, May 3, 2010 - link

    Hey Anand,

    I'm not doubting your Intel performance figures, but I wonder why your benchmarks only show about a 20% performance decrease, when threads like this (http://forums.anandtech.com/showthread.php?t=20699... show it is possibly quite a bit more? (20% is just on the edge of human perception, and I can tell you that a completely full X25-M really does feel much slower than that.) Is the decrease in performance different for read vs. write, sequential vs. random? Are you using the drive in an OS context (where the drive is constantly being hit with small reads/writes while the benchmark is running)?
  • Anand Lal Shimpi - Monday, May 3, 2010 - link

    20% is a pretty decent-sized hit. Also note that it's not really a matter of how full your drive is, but how you fill the drive. If you look back at the SSD Relapse article I did, you'll see that SSDs like to be written to in a nice sequential manner. Hit a drive with a lot of small random writes all over the place and you run into performance problems much faster. This is one reason I've been very interested in looking at how resilient the SF drives are; they seem to be much better than Intel at this point.

    Take care,
    Anand
  • FunBunnyBuggyBuggy - Tuesday, May 4, 2010 - link

    (First, two issues: the login refuses to persist, so I've had to create a new account each time I comment, which is a pain; and I wanted to re-read the Relapse to refresh my memory, but it gets flagged as an Attack Site on FF 3.0.x.)

    -- If you look back at the SSD Relapse article I did you'll see that SSDs like to be written to in a nice sequential manner.

    I'm still not convinced that this is strictly true. Controllers are wont to maximize performance and wear leveling. The assumption is that what is a sequential operation from the point of view of the application/OS is also sequential on the "disc", which is strictly true for a HDD, and measurements bear this out. For SSDs, the reality is murkier. As you've pointed out, spare area is not a sequestered group of NAND cells, but NAND cells with a flag; thus the spare area moves around, presumably in erase-block-sized chunks. A sequential write may end up not only in non-contiguous blocks, but also in non-pure blocks, that is, blocks holding data unrelated to the sequential write request from the application/OS. Whichever approach maximizes the controller writer's notion of the trade-off between the performance requirement and the wear-leveling requirement will determine the physical write.
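
    To make that concrete, here's a toy flash translation layer in Python. The allocation policy and page counts are pure invention, but it shows how a write that is sequential to the OS can scatter across the flash:

    ```python
    # Toy FTL: logical pages map to whatever physical page the wear leveler
    # hands out, so an OS-sequential write lands on scattered physical pages.
    import random

    NUM_PHYS_PAGES = 16

    class ToyFTL:
        def __init__(self):
            self.free = list(range(NUM_PHYS_PAGES))
            random.shuffle(self.free)      # stand-in for a wear-leveling policy
            self.map = {}                  # logical page -> physical page

        def write(self, logical_page):
            phys = self.free.pop()         # out-of-place write to any free page
            self.map[logical_page] = phys
            return phys

    ftl = ToyFTL()
    for lp in range(8):                    # a "sequential" 8-page write
        print(f"logical {lp} -> physical {ftl.write(lp)}")
    # Logical pages 0..7 are contiguous; the physical pages almost never are.
    ```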
  • DigitalFreak - Monday, May 3, 2010 - link

    Apparently IBM trusts SandForce's technology.

    http://www.engadget.com/2010/05/03/sandforce-makes...
  • MrSpadge - Monday, May 3, 2010 - link

    A 60 GB Vertex 2 for the price of the current 50 GB one would make me finally buy an SSD. Actually, even a 60 GB Agility 2 would do the trick!
  • Impulses - Monday, May 3, 2010 - link

    Interesting, Newegg's got the Agility 2 in stock for $399... Vertex 2 is OOS but has an ETA. That makes my choice of what drive to give my sister a lil' harder (I promised her a SSD as a birthday gift last month, gonna install it on her laptop when I visit her soon). The old Vertex/Agility drives are 20GB more for $80 less... I dunno whether the performance bump and capacity loss would be worth it.

    Do the SandForce and Crucial drives feel noticeably faster than an X25-M or Indilinx Barefoot drive in everyday tasks, or are they all so fast that the difference isn't really appreciable outside of heavy multi-tasking or certain heavy tasks? I own an X25-M and an X25-V and I'm ecstatic with both...
  • MadMan007 - Monday, May 3, 2010 - link

    Hello Anand, thanks for the review. I am posting the same comment regarding capacity that I've posted before - I hope it doesn't get ignored this time :) While it's nice to say "formatted capacity", it is not 100% clear whether that is in HD-style gigabytes (10^9 bytes) or gibibytes (base 1024, what OSes actually report). This is very important information, IMO, because people want to know "How much space am I really getting?", or they have a specific space target they need to hit.

    Please clarify this in future reviews! (If not this one too :)) Thanks.
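
    For concreteness, here is the difference for a nominal "100 GB" of formatted capacity (a quick sketch; 100 GB is just an example figure):

    ```python
    # The same capacity reported in the two competing units.
    user_bytes = 100 * 10**9                  # "100 GB" as drive makers count it

    print(f"{user_bytes / 10**9:.1f} GB")     # 100.0 -> decimal (10^9) gigabytes
    print(f"{user_bytes / 2**30:.1f} GiB")    # 93.1  -> base-1024, what the OS shows
    ```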
  • anurax - Tuesday, May 4, 2010 - link

    I've had 2 brand new OCZ Vertex Limited Edition drives die on me in the span of 2 weeks, so you guys should really take reliability into consideration when buying a new SSD. Like Anand says, WE are the guinea pigs here, and the manufacturers don't really care about us or the inconvenience we experience when we have to re-install and reload our systems.

    My Vertex Limited Edition drive just died all of a sudden, without any prompt or S.M.A.R.T. notification; it simply cannot be detected anymore. It's so damn frustrating to have such poor reliability standards.

    One thing is 100% sure: OCZ and SandForce are a NO NO NO. They have played me out enough, and forking out hard-earned $$$ to be their test pig is simply not acceptable.

    To all you folks out there, seriously, be careful about reliability, and be even more careful about doing things that hamper reliability, because in the end it's your data, time and effort that are at stake here (unless you're Anand, whose job is to fully stress and review these new toys every day).
  • mattmc61 - Wednesday, May 5, 2010 - link

    Sorry to hear you lost two drives; that must be pretty rare. I lost a 120GB Vertex Turbo myself. No warning, just "poof", and it was gone. I think that's the nature of the beast: there are no moving parts to let S.M.A.R.T. technology know when an SSD is slowly dying. One thing is for sure, you are right: we are guinea pigs when it comes to a technology in its infancy such as SSDs, which are experiencing growing pains. Anand did warn us a while back that we should proceed at our own risk when it comes to these drives. He had a few SSDs go poof on him as well.

    It just surprises me when guys buy bleeding-edge technology, which usually costs a premium and has a high risk of failure, then proceed to trash-mouth the manufacturer or the technology itself when it fails them. I think some people want the latest and greatest so badly that they have an "aah, that won't happen to me" attitude and go ahead and buy the product. Then, when it fails, they are shocked and take it personally, like someone deliberately sabotaged them.

    If you had done your homework on that OCZ drive like you should have, you would know that the manufacturer really does care about how their SSDs perform out in the wild. I can tell you from personal experience that when my drive died, they quickly replaced it. OCZ also has a great support forum. I'm sure you won't lose all the money you spent if you just send back the drives for replacement. The bottom line: if you want reliability, go back to mechanical hard drives. If you want bleeding edge, then accept the risks and stop whining.

    MMc
  • thebeastie - Tuesday, May 4, 2010 - link

    There is no point letting sequential performance have any bearing on your choice of SSD; if you like sequential speed, just buy a mechanical hard drive. But you have been there and know how crap it makes your end-user experience.

    That's why Intel is still great value for an SSD: despite all the latest random read and write benchmarks AnandTech has come up with, the Intel drives are still killer speed, while the Indilinx controllers are running at 0.5MB/s in aligned, Windows 7-type performance.

    In other words, anyone buying on sequential performance alone is really failing a basic sanity test.
  • Chloiber - Wednesday, May 5, 2010 - link

    Actually, Indilinx is faster on 4K random reads at a queue depth of 1.
  • stoutbeard - Tuesday, May 11, 2010 - link

    So what about when you get the Agility 2? How do you get the newest SF-1200 firmware (1.01)? It's not on OCZ's site.
  • hartmut555 - Tuesday, May 25, 2010 - link

    I guess it might be a little late to comment here and expect a response, but I have been reading a few posts on forums suggesting leaving a portion of a mainstream SSD unpartitioned, so that the drive has a little more spare area to work with. Basically, it is the opposite of what this article is about - instead of recovering some of the spare area capacity for normal use, you are setting aside some of the normal use capacity for spare area. (And yes, they are talking about SSDs, not short-stroking a HDD.)

    This article states that both the Intel and SandForce controllers appear to be dynamic in that they use any unused sectors as spare area. However, the tests show that the SandForce controller delivers pretty much equivalent performance even when the spare area is decreased. This makes me think that there is some point at which more spare area ceases to provide a performance advantage after the drive has been filled (both user area and spare area) - the inevitable case if you are using SSDs in a RAID setup, since there is no TRIM support.

    The spare area acts as a sort of "buffer", but the controller implementation would make a big difference as to how much advantage a larger buffer might provide. The workload used for testing might also make a big difference in benchmarks, depending on the GC implementation. For instance, if the SSD controller is "lazy" and only shuffles stuff around when a write command is issued, and only enough to make room for the current write, then spare area size will have virtually no impact on performance. However, if the controller is "active" and lines up a large pool of pre-erased blocks, then having a larger spare area would increase the amount of rapid-fire writes that could happen before the pre-erased blocks were all used up and it had to resort to shuffling data around to free up more erase blocks. Finally, real world workloads almost always include a certain amount of idle time, even on servers. If the GC for the SSD is scheduled for drive idle time, then benchmarks that consist of recorded disk activity which are played back all at once would not allow time for the GC to occur.
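
    As a toy model of that "active" strategy (the block counts and policy here are invented purely for illustration): a burst of writes stays fast only while the pool of pre-erased blocks lasts, so a larger spare area sustains a longer fast burst.

    ```python
    # Pre-erased block pool: writes are cheap until the pool runs dry, then
    # every write pays the erase-before-write (shuffling) penalty.
    class ToyDrive:
        def __init__(self, spare_blocks):
            self.pre_erased = spare_blocks   # pool built up by idle-time GC
            self.fast = 0
            self.slow = 0

        def write_block(self):
            if self.pre_erased > 0:
                self.pre_erased -= 1         # write straight into an erased block
                self.fast += 1
            else:
                self.slow += 1               # must erase (and shuffle data) first

    for spare in (13, 28):                   # spare blocks per 100-block burst
        d = ToyDrive(spare)
        for _ in range(100):
            d.write_block()
        print(f"spare={spare}: {d.fast} fast writes, {d.slow} slow writes")
    ```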

    Having a complex controller between the port and the flash cells really complicates the evaluation of these drives. It would be nice if we had at least a little info from the manufacturers about stuff like GC scheduling and dynamic spare area usage. Also, it would be interesting to see a benchmark test that is run over a constant time with real-world idle periods (like actually reading the web page that is viewed), and measures wait times for disk activity.

    Has anyone tested the effects of increasing spare area (by leaving part of the drive unpartitioned) for drives like the X25-M that have a small spare area, when TRIM is not available and the drive has reached its "used" (degraded) state?
