  • Jeff7181 - Sunday, June 8, 2014 - link

    "In the test Plextor used a Word file that they modified and unplugged the drive after saving the file. With RAPID the modifications were lost, whereas thanks to Plextor's use of write-through cache the modifications remained intact."
    Well, duh. That is the difference between write-back and write-through cache. Write-back sends an ack as soon as the write is committed in the cache. Write-through doesn't send an ack until the write is committed on the back end. The result is improved data integrity with write-through, but lower write performance. The opposite is true for write-back. One can't say that one approach is better than the other without defining a use case.
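
    In rough Python terms (a made-up ToyCache class, not anyone's actual driver code), the two policies differ only in when the acknowledgement is sent:

        class ToyCache:
            def __init__(self, backing, write_through):
                self.backing = backing         # dict standing in for the SSD's flash
                self.dirty = {}                # RAM buffer; lost on power failure
                self.write_through = write_through

            def write(self, key, data):
                if self.write_through:
                    self.backing[key] = data   # commit to the device first...
                    return "ack"               # ...then acknowledge: slower, but safe
                self.dirty[key] = data         # stage in RAM only
                return "ack"                   # acknowledge immediately: fast, but at risk

            def flush(self):
                self.backing.update(self.dirty)  # write-back only catches up here
                self.dirty.clear()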
  • extide - Sunday, June 8, 2014 - link

    It appears to be some sort of hybrid approach, because they are getting 5 GB/s write speeds in that benchmark screenshot. You would need a write-back cache to get writes that fast on a SATA 6 Gbps SSD, so it is a little confusing to me. Maybe they are flushing the buffer really fast; if the buffer is <= 1 GB, it should be able to flush pretty quickly.
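
    Something like this toy sketch, perhaps (a hypothetical HybridCache, pure speculation rather than anything from PlexTurbo's actual code): acknowledge the write in RAM immediately, but keep a background thread draining the buffer every few milliseconds so that very little is ever at risk:

        import threading, time

        class HybridCache:
            def __init__(self, backing, interval=0.05):
                self.backing = backing             # stands in for the SSD
                self.dirty = {}
                self.lock = threading.Lock()
                threading.Thread(target=self._flusher, args=(interval,),
                                 daemon=True).start()

            def write(self, key, data):
                with self.lock:
                    self.dirty[key] = data         # RAM-speed "write"
                return "ack"                       # acknowledged right away

            def _flusher(self, interval):
                while True:                        # drain the buffer constantly
                    time.sleep(interval)
                    with self.lock:
                        self.backing.update(self.dirty)
                        self.dirty.clear()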
  • TheWrongChristian - Monday, June 9, 2014 - link

    According to the RAPID white paper (http://www.samsung.com/pl/business-images/resource... RAPID makes the same cache write guarantees as the Windows built-in cache, so it is only write-back up to the point that a flush is issued. Applications should be flushing their files when saving them, so if Word in the demo is not flushing when saving, that is the fault of Word, not of RAPID caching. This seems more an indictment of Word than of the cache.
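
    In Python terms, a save that honours those guarantees would look roughly like this (save_durably is just an illustrative name, not anything Word actually calls):

        import os

        def save_durably(path, text):
            with open(path, "w") as f:
                f.write(text)
                f.flush()             # application buffer -> OS (or RAPID) cache
                os.fsync(f.fileno())  # ask the OS to push the cache to the drive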
  • sheh - Sunday, June 8, 2014 - link

    I fail to see why software caching is anything worth mentioning. This is the OS's job, and it does it anyway. Maybe this or that third-party software can slightly improve performance in very specific cases, but either way it has little to do with the hardware.
  • Alexvrb - Sunday, June 8, 2014 - link

    I doubt the OS does a whole ton of write caching on its own. Either way, I don't like solutions that have potential data integrity issues, but for mobile devices with reliable battery power, either approach should be fine as long as the hardware/OS combination is stable.
  • ShieTar - Sunday, June 8, 2014 - link

    Windows definitely does a significant amount of write caching, and it warns you in its settings about the potential loss of data. Which means anyone who has never had a problem with the normal Windows cache (enabled by default) should have no real problem with any other write-back implementation either.

    As far as slight improvements in specific cases go, AS-SSD shows me very tangible improvements for random writes. And disabling the Windows cache really cripples the drive:

    https://www.dropbox.com/s/8q4u75hndsdb5cl/Windows_...

    I don't know how much of the difference remains in a real-world scenario, and I don't know why the Samsung implementation seems so much better than the OS one. Maybe the Windows cache is still optimized for HDDs.
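
    For what it's worth, the gap is easy to reproduce with a toy microbenchmark (nowhere near as rigorous as AS-SSD, purely an illustration): the same random 4 KB writes, once letting the cache absorb them and once forcing every write out to the device, the way a cache-disabled or write-through setup would:

        import os, random, time

        def random_write_mb_per_s(path, sync_every_write, block=4096, count=256):
            size = 64 * 1024 * 1024                       # 64 MB test file
            with open(path, "wb") as f:
                f.truncate(size)
                start = time.perf_counter()
                for _ in range(count):
                    f.seek(random.randrange(size // block) * block)
                    f.write(os.urandom(block))
                    if sync_every_write:
                        f.flush()
                        os.fsync(f.fileno())              # no cache to hide behind
                f.flush()
                os.fsync(f.fileno())
            return count * block / (time.perf_counter() - start) / 1e6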
  • Alexvrb - Sunday, June 8, 2014 - link

    The built-in write caching in Windows doesn't do nearly as much as RAPID and PlexTurbo. It's really night and day; otherwise Plextor and Samsung wouldn't have bothered. 1 GB is a pretty big cache, for one thing, and their algorithms might be more aggressive too. That's why there's such a large difference in scores versus the default caching.

    Anyway, the built-in caching operates more like RAPID, because they're both write-back. So if you lose power or otherwise fail to shut down gracefully, you lose any data that hasn't been flushed yet. So yeah, if you have "never had a problem", that just means you haven't had a system go down with a bunch of important data still in the cache. So in that regard PlexTurbo, being a write-through cache, is the best choice. Hard to say how its real-world performance will compare to RAPID's, however.
  • ShieTar - Monday, June 9, 2014 - link

    I haven't had a system go down since I switched from Win98 to NT 4.0, period. And I have never experienced a power loss. And I assume my situation is fairly typical for people in industrialised countries who don't stress their systems unusually, e.g. who don't write low-level software or fill their memory far beyond its capacity.

    I understand why write-through is the safer choice, but I think for the majority of computer users, including many professional users and enthusiasts, the risk of write-back is already extremely close to zero.

    That being said, my personal preference would always be to offer both options to the user and let him make his own decision.
