30 Comments

  • RealNinja - Tuesday, November 6, 2012 - link

    Looks like a nice enterprise drive. Will be interesting to see how reliable the new controller is in the "real world."

    For my consumer money...still gotta go with Samsung right now.
  • twtech - Tuesday, November 6, 2012 - link

    Looks like a nice workstation drive as well. With that kind of write endurance, it should be able to handle daily multi-gigabyte content syncs.
  • futrtrubl - Saturday, November 10, 2012 - link

    Umm, with that write endurance it should be able to handle daily multi-TERAbyte syncs, seeing as it is rated at 10x capacity/day for 5 years.
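    The arithmetic behind that rating is easy to sketch; the 800GB capacity below is an assumed example (the figure is quoted as full drive writes per day):

```python
# Endurance arithmetic behind "10x capacity/day for 5 years".
# The 800GB capacity is a hypothetical example, not from the comment.
capacity_gb = 800
dwpd = 10    # full drive writes per day
years = 5

daily_writes_tb = capacity_gb * dwpd / 1000                 # TB written per day
lifetime_writes_pb = daily_writes_tb * 365 * years / 1000   # PB over the rating period

print(daily_writes_tb)               # 8.0 TB/day -- multi-terabyte daily syncs fit easily
print(round(lifetime_writes_pb, 2))  # ~14.6 PB total over five years
```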
  • CeriseCogburn - Wednesday, January 2, 2013 - link

    I watched the interview, and saw all 3 of the braggarts spew their personal fantasies and pride talk, then came here to take a look, and I'm not impressed.
    I do wonder how people do that.
  • DanNeely - Tuesday, November 6, 2012 - link

    "I had to pull Micron's P400e out of this graph because it's worst case latency was too high to be used without a logarithmic scale. "

    Could you add the value to the text then?
  • crimson117 - Tuesday, November 6, 2012 - link

    Move away from NAND - to what?
  • stmok - Tuesday, November 6, 2012 - link

    ...To Phase Change Memory (PCM).
  • DanNeely - Tuesday, November 6, 2012 - link

    Everything old (CDRW) is new again!
  • martixy - Friday, November 9, 2012 - link

    Right... so we got that covered. :)
    Now we're eagerly awaiting the next milestone towards the tech singularity.
  • Memristor - Wednesday, November 7, 2012 - link

    To Memristor
  • JonnyDough - Thursday, November 15, 2012 - link

    There are a ton of new technologies that could replace NAND. There might even be a "Betamax" or "HD DVD" in there that misses the mark and loses out to some better or cheaper tech. We'll just have to wait and see what comes to market and catches on. It won't be mere enthusiasts or gamers who decide, it will be the IT industry. It usually is.
  • mckirkus - Tuesday, November 6, 2012 - link

    One interesting point to note: if you run benchmarks on a RAMDisk, you get random 4K write throughput in the neighborhood of 600MB/s. So in that regard, flash has a long way to go before the 6Gbit/s limitation of SATA 3.0 really hurts enterprise performance.
  • extide - Tuesday, November 6, 2012 - link

    I am not sure I understand this. First of all, random 4K against a RAMDisk will be HIGHLY dependent on the hardware, and I am sure you could see way better numbers than 600MB/sec. Also, 600MB/sec is pretty close to 6Gbit/sec anyway.
  • jwilliams4200 - Friday, November 9, 2012 - link

    I think mckirkus is trying to say that there is a lot of headroom before sustained 4KiB random I/O SSD throughput will saturate a SATA 6Gbps link.

    For example, the sustained QD32 4KiB random write speed for the S3700 is apparently less than 150MB/s (35K IOPS). It will need to double, and double again, before it saturates a 6Gbps SATA link.
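    A rough sanity check of that headroom claim (the ~600MB/s usable SATA figure below is an approximation after encoding overhead, not from the comment):

```python
# 4KiB random writes at 35K IOPS vs a SATA 6Gbps link.
iops = 35_000
block_bytes = 4096

throughput_mb_s = iops * block_bytes / 1e6   # ~143 MB/s sustained
sata_limit_mb_s = 600                        # approximate usable SATA 6Gbps bandwidth

print(round(throughput_mb_s))                 # ~143
print(sata_limit_mb_s / throughput_mb_s)      # ~4.2x headroom before saturation
```

    Doubling twice (143 -> 286 -> 572 MB/s) indeed lands just under the link limit, consistent with the comment.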
  • mayankleoboy1 - Saturday, November 10, 2012 - link

    How long do we have to wait before SATA Express drives and the interface become commercially available?
  • justaviking - Saturday, November 10, 2012 - link

    If I read the "Update" section correctly, Oracle recommends modifying their settings to change the way the log files are written.

    Would it be possible to re-run the Swingbench tests using the modified settings? I'd love to see how performance changes, especially on THIS drive, and then on some others for comparison purposes.
  • blackbrrd - Saturday, November 10, 2012 - link

    I am guessing most people will run their Oracle database behind a RAID card with some NVRAM cache, which would remove the problem if the RAID controller combined the writes. It would be interesting to see the performance behind a typical RAID controller card with an NVRAM cache.
  • iwod - Sunday, November 11, 2012 - link

    I am a regular AnandTech reader (it's in my RSS feeds, so I read it every day), and I don't recall Anand ever reviewing a Toshiba SSD. So when I saw the performance of the MK4001, I had to look it up on Google to learn that it is a SAS SLC enterprise SSD.

    The article does eventually mention its specs briefly, but I thought that came very late. It would have helped if the specs had been listed up front.

    It seems to me the magic is actually in the software and not the hardware. A 1:1 mapping of the NAND address table, making random read and write behaviour consistent, seems more like software magic and could easily be done on any other SSD controller with a sufficient amount of RAM. The only hardware requirement for this tweak is ECC memory.

    And again we are fundamentally limited by port speed.
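    The flat-mapping idea speculated about above can be put in toy form; the class and numbers below are purely illustrative, not Intel's actual FTL design:

```python
# Toy sketch of a flat (1:1) logical-to-physical mapping table.
# Real controllers use far more complex flash translation layers;
# everything here is illustrative only.
class FlatMapSSD:
    """Every 4KiB logical page has a fixed-size entry in one big table."""
    PAGE = 4096

    def __init__(self, capacity_bytes):
        pages = capacity_bytes // self.PAGE
        # One 4-byte physical-page index per logical page gives a
        # predictable, single-lookup translation for reads and writes.
        self.table = list(range(pages))

    def translate(self, logical_byte_addr):
        return self.table[logical_byte_addr // self.PAGE]

# DRAM needed for the table scales linearly with capacity:
gb = 200
entries = gb * 1_000_000_000 // 4096
print(entries * 4 / 1e6)  # ~195 MB of (ECC) DRAM for a 200GB drive
```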
  • mmrezaie - Monday, November 12, 2012 - link

    I agree!
  • alamundo - Monday, November 12, 2012 - link

    Given the enterprise focus, this drive seems to be competitive with the Intel 910 PCIe card. It would be interesting to see the S3700 benchmarked against the 910.
  • Hans Hagberg - Monday, November 12, 2012 - link

    An enterprise storage review today is not really complete without an array of 15K mechanical disks for comparison. That is still what is used for performance in most cases, and it is what we are up against when trying to justify SSDs in existing configurations.

    And for completeness, please throw in PCI-based SSD storage as well. Such storage always comes up in discussions around SSDs, but there is too little independent test data available to base decisions on.

    Another question when reading the review concerns the test system being used; I couldn't find this information.

    Also - enterprise storage is most often fronted by high-end controllers with lots of cache. It would be interesting to see an analysis of how that impacts the different drives and their consistency. Will the consistency be equalized by a big controller and cache in front of them?

    The Swingbench anomaly is unfortunate, because database servers are probably the primary application for massive deployment of SSD storage. It would be nice if the anomaly could be sorted out so we could see what the units can do. Normally, anyone who cares about enterprise performance is careful with alignment and separation of storage (data, logs, etc.), so I agree with the Intel statement on this. Changing the benchmark would tear up the old test data, so I'm not sure how to fix it without starting over.

    The review format and test case selection are excellent. Just give us some more data points.
    I would go so far as to say I would pay good money to read the review if the above were included.
  • Sb1 - Tuesday, November 13, 2012 - link

    "An enterprise storage review today is not really complete without an array of 15K mechanical disks for comparison."
    ... "And for completeness, please throw in PCI-based SSD storage as well."

    I __fully__ agree with Hans Hagberg

    I thought this was a good article, but it would be an excellent one with both of these.

    Still keep up the good work.
  • Troff - Wednesday, November 14, 2012 - link

    I agree as far as PCI-based SSDs go, but I see no point in including the 15K mechanical drive array for the same reason you don't see velocipedes in car reviews.
  • ilkhan - Tuesday, November 13, 2012 - link

    So what I see here is: for an enterprise server drive, go with this Intel. For a desktop drive, this Intel or a Samsung 840 Pro; for a laptop drive, the Samsung 840 Pro is best.

    That about sums it up?
  • korbendallas - Friday, November 16, 2012 - link

    Instead of average and max latency figures, I would love to see percentiles: 50%, 90%, 99%, and 99.9%, for instance. If you look at Intel's claims for these drives, they're given as percentiles too.

    If your distribution does not follow a bell curve, which is the case for many of the SSDs you are testing, the average is useless. And as you already know (which is why you didn't include it before now), the max is useless too.
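    For illustration, percentile reporting of the kind suggested here could be computed like this (the latency samples are invented, with one outlier to show why average and max mislead):

```python
# Nearest-rank latency percentiles from raw samples (microseconds).
# The sample values are invented for illustration.
latencies_us = sorted([120, 130, 125, 140, 9000, 135, 128, 122, 131, 138])

def percentile(sorted_samples, pct):
    # Nearest-rank: smallest value covering at least pct% of the samples.
    k = max(0, int(round(pct / 100 * len(sorted_samples))) - 1)
    return sorted_samples[k]

for p in (50, 90, 99):
    print(p, percentile(latencies_us, p))   # 50 -> 130, 90 -> 140, 99 -> 9000

# The single 9000us outlier dominates both the max and the average,
# while the 50th/90th percentiles stay representative of typical latency.
print(sum(latencies_us) / len(latencies_us))  # ~1016.9us average -- misleading
```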
  • dananski - Saturday, November 17, 2012 - link

    I'd really like to see more graphs like the ones on "Consistent Performance: A Reality" showing how much variation drives can have in instantaneous IOPS. These really do a great job of showing exactly what Intel has fixed and I can see the benefit in some enterprise situations. A millisecond hiccup is an eternity for the CPU waiting for that data.

    Personally I'd now like to know:
    * How much of a problem can this be on consumer drives, where sustained random IO is less common?
    * Is this test a good way to characterise the microstutter problem for a particular drive?
    * How badly are drives with uneven IOPS distributions affected by RAID? (I know this was touched on briefly in the webcast with Intel)
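    One hypothetical way to put a number on that kind of variation is a worst-second-to-average IOPS ratio; the metric and the per-second sample data below are invented for illustration:

```python
# A simple consistency metric over per-second IOPS samples:
# ratio of the worst second to the window average.
# Both traces below are invented for illustration.
steady   = [36000, 35500, 35800, 36200, 35900]   # tight band, S3700-style
stuttery = [40000, 52000, 300, 48000, 150]       # bursty drive with stalls

def consistency(iops_per_sec):
    return min(iops_per_sec) / (sum(iops_per_sec) / len(iops_per_sec))

print(round(consistency(steady), 2))     # close to 1.0 -> consistent
print(round(consistency(stuttery), 3))   # near 0 -> millisecond-scale stalls dominate
```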
  • junky77 - Sunday, November 18, 2012 - link

    What about the consistency of current consumer SSDs?
  • virtualstorage - Tuesday, March 12, 2013 - link

    I see the test results go up to 2000 seconds. With an enterprise array, there will be continuous I/O in a 24/7 production environment. What is the performance behavior of the Intel SSD DC S3700 under continuous I/O over many hours?
  • rayoflight - Sunday, October 6, 2013 - link

    Got two of these. Both of them failed after approx. 30 boot-ups. They aren't recognised anymore by the BIOS, or as external hard drives on a different system; it's as if they are completely dead. Faulty batch? Or do they "lock up"? Anyone had this problem?
