24 Comments

  • nicolapeluchetti - Friday, June 27, 2014 - link

    Has anyone any idea why the Samsung SSD 840 Pro is so bad in AnandTech Bench 2013 and so good in 2011? Here is the link: it did 142 in 2013 http://www.anandtech.com/show/8170/sandisk-extreme... But in 2011 it's number 1 http://www.anandtech.com/show/8170/sandisk-extreme...

    How is this possible? I mean, are the workloads so different? Did Samsung optimize the controller for the test?
  • WithoutWeakness - Friday, June 27, 2014 - link

    The 2013 Bench is definitely different enough to have different results for a given drive. More detailed info on the differences between the 2011 and 2013 benches can be found here: http://www.anandtech.com/show/6884/crucial-micron-...
  • Muyoso - Friday, June 27, 2014 - link

    Yea, I bought the 840 Pro on the basis of that 2011 test bench, and now every time I see an SSD review I am sad to see how ravaged it gets vs the competition.
  • CrystalBay - Friday, June 27, 2014 - link

    I wouldn't worry about the 840 Pro; it's still a top drive with excellent support. Come this September, Samsung is going to bring out some new drives. I'm very curious about what's next from them.
  • Kristian Vättö - Friday, June 27, 2014 - link

    Maybe September is coming sooner than you think ;-)
  • CrystalBay - Friday, June 27, 2014 - link

    Oh, what a nice surprise! Can't wait....
  • Galatian - Saturday, June 28, 2014 - link

    Which answers my question whether I should get the XP941 now for my ASRock Extreme9 or wait ;-)
  • Khenglish - Friday, June 27, 2014 - link

    It has to do with how the 840 Pro handles garbage collection. Basically, the way the 2013 test is structured, the 840 Pro delays far longer than it should before reorganizing itself, while the 2011 test is less stressful in this regard. This means that the 840 Pro is a very fast drive if you don't have it running at 100% at all times, but if you do, other drives are likely preferable.
  • althaz - Sunday, June 29, 2014 - link

    The 2013 test is more enterprisey. The 2011 test is a better indicator of performance if you half-fill your SSD and use it for your OS plus a few core apps. If you fill it up and use it for everything, the 2013 test is more useful.
  • nitro912gr - Friday, June 27, 2014 - link

    I can find the 840 EVO 250GB at the same price as that ADATA SP610; should I go with the latter since it is bundled with the 3.5" case?
    I can't see much more difference aside from that.
  • mapesdhs - Friday, June 27, 2014 - link

    I'd say go with the EVO; Samsung drives have excellent long-term consistency, at
    least that's what I've found from the range of models I've obtained.

    Looking at the initial spec summary, the SP610 just seems like a slower MX100,
    which puts it below the EVO or any other drive in that class, so unless it's priced
    like the MX100 I wouldn't bother with it.

    Kristian, may I ask, why are there so many models missing from the tables? E.g. the
    Vector/150, Neutron GTX, M500, V300, Force Series 3, M550, M5Pro Extreme,
    etc. I'm glad the Extreme II is there though; that's quite a good model atm.

    Would be interesting to include a few older ones too, i.e. to see how performance
    has moved on from the likes of the Vertex 3/4 and others from bygone days. I still
    bag Vertex 4s and original Vectors if I can, as they hold up very well against current
    models, though this week I snapped up four 128GB Extreme IIs (45 UKP each) as the
    IOPS rating for a 128GB drive seems ideal for tasks like a big Windows paging drive
    in a system with 64GB RAM.

    Ian.

    PS. Obvious point btw, perhaps ADATA can improve the consistency issue with a fw update?
  • stickmansam - Friday, June 27, 2014 - link

    The SP610 actually seems about the same as the EVO and MX100 based on overall results.

    The firmware and controller actually seem pretty competitive.

    I do agree that more drives should be compared if possible. Even the Bench tool seems to be missing drives that were in reviews in the past.
  • dj_aris - Friday, June 27, 2014 - link

    Why are we still testing SATA 3 drives anyway?
  • mapesdhs - Friday, June 27, 2014 - link

    Because the vast majority of people still want to know how they perform. Remember
    there will be many with older SSDs who are perhaps considering an upgrade by now,
    from the likes of the venerable Crucial M4/V4, Vertex2/3, older Intels, Samsung 830, etc.
    For newer reviews, it's less about the sequential rates and more about the random
    behaviour, consistency, and other features like encryption that people want to know
    about now, especially with so many drives being used in laptops, notebooks, etc. I
    also like to know how what's being offered anew compares wrt pricing, i.e. are things
    really getting better? It's great that 1TB models are finally available, but I still yearn
    for the day when SSDs can exceed HDDs in offered capacities. I read that SanDisk
    seem determined to push this forward as quickly as possible, moving to 2TB+ next
    year. I certainly hope so. Nothing wrong with having 4TB+ rust-spinners, but backing
    them up is a total pain (and quite frankly anyone who uses a 4TB non-enterprise SATA
    HDD to hold their precious data is nuts). By contrast, having 4TB+ SSDs would at
    least mean doing backups wouldn't be slow. When I use Macrium to create a backup
    image of a 256GB C-drive SSD onto some other SSD, the speeds achieved really are
    impressive.

    I guess the downside will be that, inevitably at first, high-capacity SSDs will be
    expensive purely because it'll be possible to sell them at high prices without trouble,
    whatever they actually cost to make. I just hope at least one vendor will break away
    from the price gouging for a change and really move this forward; if nothing else,
    they'll grab some hefty market share if they do.

    Ian.
  • name99 - Friday, June 27, 2014 - link

    "The benefit of ARC is that it is configurable and the client can design the CPU to fit the task, for example by adding extra instructions and registers. Generally the result is a more efficient design because the CPU has been designed specifically for the task at hand instead of being an all around solution like the most ARM cores are."

    This is marketing speak. In the future, rather than just repeating the claims about why "CPU you've never heard of is more awesome than anything you've actually heard of", please provide numbers to back up the claim, or ditch the PR speak.
    If this CPU is "more efficient" than, e.g., an ARM (or MIPS or PPC) competitor, let's have some power numbers.

    My complaint is not that they are using ARC; they can use whatever CPU they like. My complaint is that the two sentences I quoted are absolutely no different from simply telling us, e.g., "this SSD is more efficient than its competitors" with no data to back that up. Tech claims require data. If SMI aren't willing to provide data to back up a tech claim, you shouldn't be printing their advertising in a tech story.
  • Kristian Vättö - Saturday, June 28, 2014 - link

    Nothing regarding the controller's architecture came from ADATA or SMI. In fact, I got the ARC part from Tom's Hardware, although I added the parts about ARC's benefits. If I just put ARC there and leave out the explanation, what is the usefulness of that? Yay, yet another acronym that means absolutely nothing to the reader unless it is explained.

    To be clear, I did not mean that an ARC CPU is always more efficient in every task. However, for a specific task with a limited set of operations (like in an SSD), it usually is, because the design can be customized to remove unnecessary features or add ones that are needed. It's not an "ARM killer"; it is simply an alternative option that can suit the task better by removing some of the limitations that off-the-shelf CPU designs have. Ultimately the controller is just a piece of silicon and everything it does is operated by the firmware.
  • epobirs - Saturday, June 28, 2014 - link

    After all of these years, when I see the Argonaut name I find myself wondering when a new Starglider will be published. (Star Fox doesn't count, other than spiritually.)
  • s44 - Saturday, June 28, 2014 - link

    How do we know this is even going to be the controller in future units of this model? The bait-and-switch with the Optima deserves more than passing mention, I think.
  • hojnikb - Saturday, June 28, 2014 - link

    Well, to be fair, the SandForce version of the Optima is faster, so really, you're getting a better drive.
    Still not okay, but not nearly as bad as Kingston's bait and switch.
  • smadhu - Sunday, June 29, 2014 - link

    To muddy the controller waters further, we are planning to launch a fully open source NVM Express controller this year. This is from IIT-Madras, an Indian university, in conjunction with the IT Univ. of Copenhagen. The development itself is happening in public; the main source is at bitbucket.org/casl/ssd-controller. This is part of a larger open storage stack project called LightStor; see lightstor.org. LightStor is an extremely ambitious effort to reinvent storage from the controller up to the application stack. The SW stack, called LightNVM, is up and running on a Linux branch; Google LightNVM. There is also an emulator to run it now.
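
    For anyone who hasn't looked inside NVMe before, the basic unit our controller consumes from the host is the 64-byte submission queue entry. Here is a rough C sketch of its layout (simplified, with abbreviated field names; see the NVMe spec for the normative definition):

        #include <stdint.h>

        /* Simplified sketch of one 64-byte NVMe submission queue entry. */
        struct nvme_sqe {
            uint32_t cdw0;   /* [7:0] opcode, [9:8] fused op, [31:16] command ID */
            uint32_t nsid;   /* namespace identifier */
            uint32_t cdw2;   /* reserved */
            uint32_t cdw3;   /* reserved */
            uint64_t mptr;   /* metadata pointer */
            uint64_t prp1;   /* PRP entry 1: first data page */
            uint64_t prp2;   /* PRP entry 2: second page, or pointer to a PRP list */
            uint32_t cdw10;  /* command-specific dwords 10-15, */
            uint32_t cdw11;  /* e.g. starting LBA and block count */
            uint32_t cdw12;  /* for read/write commands */
            uint32_t cdw13;
            uint32_t cdw14;
            uint32_t cdw15;
        };

    Fetching these from host memory, parsing them and turning them into flash operations is essentially the controller's whole job.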

    We will be launching a Xilinx-based PCIe card with our controller IP and an open source CPU (1-4 cores). The CPU is based on the RISC-V ISA from UCB, another partner of ours. The PCIe EP and ONFI will have to be proprietary initially, but we will be completing our open source ONFI 4.0 early next year. All PHYs will still have to be 3rd party since we do not want to get into analog PHY development.

    Add a SATA controller instead of PCIe and you have a SATA SSD.

    All HW source is BSD licensed, so anyone can download it and tape it out with no copyleft hassles. The final version should be the fastest controller out there; the idea is to beat every controller out there, not just launch a university test bed. The core currently runs at 700 MHz on a 32-bit datapath.

    If anybody wants to help with testing, benchmarking or coding, drop us a note. If nothing else, have fun going through the source of an SSD controller. It is a lot of fun. The language is Bluespec, a very high level HDL which is easy for SW geeks to understand; it hides a lot of the HW plumbing. It comes from MIT, another collaborator of ours, so we are partial to the language.

    Hopefully we will do to the storage world what Linux did to the OS world!

    The core is being benchmarked right now; we hope to publish something by early winter.

    I apologize for what is technically an ad for our project, but I figure a BSD-licensed open source SSD controller qualifies for free ads!
  • skiboysteve - Sunday, June 29, 2014 - link

    Very very cool. Thanks for sharing
  • shodanshok - Sunday, June 29, 2014 - link

    Really interesting. How can we help in benchmarking?
  • smadhu - Sunday, June 29, 2014 - link

    We are trying to get a benchmarking setup on a Zynq ZedBoard card first. It is a partially simulated environment: the PCIe and the NAND flash are simulated using RAM, but the controller and the CPU are the actual IP. Most universities want this setup first since it assumes an infinite source and sink and lets you tune the protocol and the controller.

    We will also simultaneously release using the Xilinx AC701 card. This is a PCIe card but has no built-in NAND modules. We are working with Xilinx to get a NAND module done ASAP, but even without it, at least the environment gets more realistic in the sense that the controller and PCIe are actual IP and only the NAND is simulated.

    Once proven on this card, we are creating a dedicated PCIe SSD card that will also be open sourced. That will be a full-fledged card with user-replaceable NAND modules and will also be cost optimized. Hopefully Asian vendors will clone those in large quantities to bring down cost. We neither charge any royalty nor do we apply for patents on any of our IP. Since the NAND modules are standard, we hope to create a 3rd-party ecosystem for NAND modules, so you can upgrade your PCIe card when you run out of storage space or when new NAND tech is available.

    This effort is actually kind of a Trojan horse for our larger project, the SHAKTI open source CPU. We have about 6 classes/families of CPU in development, ranging from Cortex-M3-level microcontrollers to Xeon-class 16-24 core server parts. HPC variants will have 512-bit SIMD with 64-100 cores (NoC fabric). All BSD-licensed open source, of course. We are running GCC on the cores now and wrapping up SoC integration for the lower-end cores. We hope to get Linux running by Christmas. The low-end target is the Digilent Nexys 4 FPGA board.

    The cores are important for storage since they allow us to do the following:
    - modify the ISA for storage-specific operations and remove instructions that are not needed for storage
    - allow user-defined code to run on the storage controllers
    - add functional units for database acceleration

    All SoC integration is via the AXI framework, so vendors can easily use this IP without retraining their engineers. We are not alone in building such cores; Cambridge just released their MIPS-compatible secure CPU (see beri-cpu.org).

    UCB will also shortly release its full-blown cores.

    Somebody asked me why we did such a massive open source HW IP effort without expecting monetary returns. My answer was simple: I could either build a billion-dollar startup or remove a few billion dollars from the IP market! I chose the latter!
  • Beagus - Monday, June 30, 2014 - link

    The table on page one mixes up its MB/GB/TB units.

    As always - Good work
