49 Comments

  • chrysrobyn - Friday, February 27, 2015 - link

    Is there one of these COTS boxes that runs any flavor of ZFS?
  • SirGCal - Friday, February 27, 2015 - link

    They run Synology's own format...

    But I still don't understand why one would use RAID 5 on an 8-drive setup. To me the point is all about data protection on site (with off-site being the most secure), and that still screams for RAID 6 or RAIDZ2 at the very least for 8-drive configurations. Using SSDs for performance is fine, but if that were the requirement, there are M.2 drives out now doing 2M/sec transfers... These boxes fall to storage duty, where I want performance from 4, 6, or 8 TB drives in double-parity protection formats.
  • Kevin G - Friday, February 27, 2015 - link

    I think you mean 2 GB/s transfers. Though the M.2 cards capable of doing so are currently OEM only with retail availability set for around May.

    Though I'll second your ideas about RAID6 or RAIDZ2: rebuild times can take days and that is a significant amount of time to be running without any redundancy with so many drives.
  • SirGCal - Friday, February 27, 2015 - link

    Yes, I did mean 2G, thanks for the correction. It was early.
  • JKJK - Monday, March 2, 2015 - link

    My Areca 1882ix-16 RAID controller takes ~12 hours to rebuild a 15x4TB RAID with WD RE4 drives. I'm quite disappointed with the performance of most "prosumer" NAS boxes. Even enterprise QNAPs can't compete with a decent Areca controller.

    It's time someone built some real NAS boxes, not this crap we're seeing today.
  • JKJK - Monday, March 2, 2015 - link

    Forgot to mention it's a RAID 6.
  • vol7ron - Friday, February 27, 2015 - link

    From what I've read (not what I've seen), I can confirm that RAID-6 is the best option for large drives these days.

    If I recall correctly, during a rebuild after a drive failure (new drive added) there have been reports of bad reads from another "good" drive. In that case a single level of parity isn't deep enough to recover the lost data. Adding more redundancy permits more failures and lets you recover when an unexpected one appears.

    I think the finding was also that as drives increase in size (more terabytes), the chance of errors and bad sectors on "good" drives increases significantly. So even if a drive hasn't failed, its data may no longer be readable and the benefit of the redundancy is lost.

    Lesson learned: increase the parity depth and replace drives when experiencing bad sectors/reads, not just when drives "fail".
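
    (To put rough numbers on that point, the sketch below estimates the chance of hitting at least one unrecoverable read error while reading the surviving drives during a RAID 5 rebuild. It assumes the commonly quoted consumer spec of one URE per 10^14 bits read and independent errors; both are simplifications, so treat it as an illustration rather than a field-failure prediction.)

```python
import math

# Simplified model: probability of hitting at least one unrecoverable read
# error (URE) while reading all surviving drives during a RAID 5 rebuild.
# Assumes 1 URE per 1e14 bits read (a common consumer-drive spec) and
# independent errors -- an illustration, not a field-failure prediction.

URE_RATE = 1e-14          # assumed probability of a URE per bit read
BITS_PER_TB = 8e12        # 1 TB = 8e12 bits (decimal terabytes)

def rebuild_ure_probability(drive_tb, surviving_drives):
    bits_read = drive_tb * BITS_PER_TB * surviving_drives
    p_clean = math.exp(-URE_RATE * bits_read)   # Poisson approximation
    return 1 - p_clean

for size_tb in (1, 4, 8):
    # An 8-drive RAID 5 rebuild has to read the 7 surviving drives in full.
    p = rebuild_ure_probability(size_tb, surviving_drives=7)
    print(f"{size_tb} TB drives, 8-drive RAID 5 rebuild: ~{p:.0%} chance of a URE")
```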
  • Romulous - Sunday, March 1, 2015 - link

    Another benefit of RAID 6, besides being able to lose two drives, is protection against bit rot. In RAID 5, if I have a corrupt block and one block of parity data, the array won't know which one is correct. But since RAID 6 has two parity blocks for the same data block, it has a better chance of figuring it out.
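
    (A toy illustration of that ambiguity, using plain XOR parity: a single parity block can flag a mismatch but cannot say whether a data block or the parity itself went bad. Real RAID 6 adds a second, independent Reed-Solomon "Q" syndrome rather than a second XOR copy, which is what lets it locate as well as detect a single corrupted block.)

```python
from functools import reduce

def xor_parity(blocks):
    # RAID 5 style parity: XOR of all data blocks.
    return reduce(lambda a, b: a ^ b, blocks)

data = [0b1010, 0b1100, 0b0110]   # three small "data blocks"
parity = xor_parity(data)         # stored P parity

# Silent corruption: flip a bit in one data block after parity was written.
corrupted = data.copy()
corrupted[1] ^= 0b0001

# The check only reports a mismatch; by itself it cannot tell whether a data
# block or the parity block rotted, let alone which data block it was.
print("parity still consistent:", xor_parity(corrupted) == parity)   # False
```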
  • 802.11at - Friday, February 27, 2015 - link

    RAID5 is evil. RAID10 is where it's at. ;-)
  • seanleeforever - Friday, February 27, 2015 - link

    802.11at:
    I can't tell whether you're serious or not, but:
    RAID 10 is only guaranteed to survive a single disk failure, while RAID 6 can survive the failure of any two member disks. Personally I would never use RAID 10, because your chance of losing data is much greater than with any RAID level that doesn't involve 0 (RAID 0 was an afterthought; it was never intended for protection, hence the 0).
    RAID 6 or RAID-DP are the only ones used in the datacenter by EMC or NetApp.
  • Dug - Saturday, February 28, 2015 - link

    Actually, RAID 10 is used far more than RAID 5 or 6, and RAID 5 isn't even listed as an option with Dell anymore.
    The random-write IOPS loss from RAID 6 is not worth it versus RAID 10.
    Rebuild times are 300% faster with RAID 10.

    The marginal cost of adding another pair of drives to grow a RAID 10 array is easier to swallow than trying to increase I/O performance later on a RAID 6 array.

    But then again, this is mostly for combining OS, apps, and storage (VMs). For pure storage it may not make any difference, depending on how many users there are and the application type.
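
    (The IOPS point above comes down to the usual write-penalty rule of thumb: roughly 2 back-end I/Os per random host write for RAID 10, 4 for RAID 5, and 6 for RAID 6. The sketch below is a rough model only, assuming identical drives at an illustrative 150 random-write IOPS each and ignoring controller caches and full-stripe writes.)

```python
# Rule-of-thumb random-write comparison for the same 8 drives.
# Write penalties: RAID 10 = 2 back-end I/Os per host write,
# RAID 5 = 4 (read data + read parity + write data + write parity),
# RAID 6 = 6 (adds a second parity read/write).
DRIVE_IOPS = 150   # assumed random-write IOPS of a single 7200 rpm HDD
DRIVES = 8

def host_write_iops(write_penalty):
    return DRIVES * DRIVE_IOPS / write_penalty

for level, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    print(f"{level}: ~{host_write_iops(penalty):.0f} random-write IOPS")
```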
  • SirGCal - Sunday, March 1, 2015 - link

    That's missing the point entirely. If you lose a drive from each subset of RAID 10, you're done. It's basically a RAID 0 array mirrored to another one (RAID 1). You could lose one entire array and be fine, but lose one disk out of the working array and you're finished. The point of RAID 6 is that you can lose any two disks and still operate. The most likely scenario is that you lose one, replace it, and another fails while the rebuild is running.

    RAID 0 is pure performance, RAID 1 is drive-for-drive mirroring, RAID 10 is a combination of the two, and RAID 5 offers one drive (any) of redundancy, which isn't as useful anymore. RAID 6 offers two. The other factor is that you lose less storage room with RAID 6 than with RAID 10: more drive security, less storage loss. There's more overhead, sure, but that's still nothing for a small business or a home user's media storage. So, assuming 4TB drives x 8 drives... RAID 6 = 24TB of usable storage space (well, more like 22, but we're doing simple math here) while RAID 10 = 16TB (sketched below). And I'm all about huge storage with as much security as reasonably possible.

    And who gives a crap what Dell thinks anyhow? We never had more trouble with our hardware than during the few years the company switched to them, and we promptly switched away a few years later.
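
    (The usable-capacity arithmetic from the comment above, as a minimal sketch; raw numbers only, before filesystem overhead, TB/TiB conversion, or hot spares.)

```python
# Usable capacity for 8 x 4 TB drives under the levels discussed above
# (simple arithmetic only; filesystem overhead and hot spares not counted).
DRIVES, SIZE_TB = 8, 4

layouts = {
    "RAID 5":  (DRIVES - 1) * SIZE_TB,   # one drive's worth of parity
    "RAID 6":  (DRIVES - 2) * SIZE_TB,   # two drives' worth of parity
    "RAID 10": (DRIVES // 2) * SIZE_TB,  # every block mirrored once
}

for level, usable_tb in layouts.items():
    print(f"{level}: {usable_tb} TB usable out of {DRIVES * SIZE_TB} TB raw")
```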
  • DigitalFreak - Monday, March 2, 2015 - link

    You are confusing RAID 0+1 with RAID 10 (or 1+0). http://www.thegeekstuff.com/2011/10/raid10-vs-raid...
    0+1 = Striped then mirrored
    1+0 = Mirrored then striped
  • Jaybus - Monday, March 2, 2015 - link

    RAID 10 is not exactly 1+0, at least not in the Linux kernel implementation. In any case, RAID 10 can have more than 2 copies of every chunk, depending on the number of available drives. It is a tradeoff between redundancy and disk usage. With 2 copies, every chunk is safe from a single disk failure and the array size is half of the total drive capacity. With 3, every chunk is safe from two-disk failure, but the array size is down to 1/3 of the total capacity. It is not correct to state that RAID 10 cannot withstand two-drive failures. Also, since not all chunks are on all disks, it is also possible that a RAID 10 survives a multi-disk failure. It is just not guaranteed that it will unless copies > 2. A positive for RAID 10 is that a degraded RAID 10 generally has no corresponding performance degradation.
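
    (A quick sketch of the copies-versus-capacity trade-off described above, assuming a simplified model of the Linux md RAID 10 "near" layout. The "guaranteed" figure deliberately ignores the lucky cases where additional failures happen to miss every copy of a chunk.)

```python
# Trade-off for Linux-style RAID 10 with a configurable number of copies:
# with c copies of every chunk, any (c - 1) drive failures are guaranteed
# survivable and usable capacity is total capacity / c. More simultaneous
# failures *may* be survivable if they miss all copies of every chunk, but
# that is not guaranteed -- which is the point made above.
def raid10_tradeoff(drives, size_tb, copies):
    usable_tb = drives * size_tb / copies
    guaranteed_failures = copies - 1
    return usable_tb, guaranteed_failures

for copies in (2, 3):
    usable, safe = raid10_tradeoff(drives=8, size_tb=4, copies=copies)
    print(f"{copies} copies: {usable:.0f} TB usable, "
          f"any {safe} drive failure(s) guaranteed survivable")
```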
  • questionlp - Friday, February 27, 2015 - link

    There's the FreeNAS Mini that can be ordered via Amazon. I think you can order it sans drives or pre-populated with four drives. I've been considering getting one, but I don't know how well they perform vs. a Synology or other COTS NAS boxen.
  • usernametaken76 - Friday, February 27, 2015 - link

    iXsystems sells a few different lines of ZFS-capable hardware. The FreeNAS Mini, which was mentioned, wouldn't compete with this unit, as it is geared more towards the home user; I see this product as more SOHO-oriented than consumer-level kit. The TrueNAS products sold by iXsystems are much more expensive than the consumer-level gear, but you get what you pay for (backed by expert FreeBSD developers, FreeNAS developers, and quality support).
  • zata404 - Sunday, March 1, 2015 - link

    The short answer is no.
  • bleppard - Monday, March 2, 2015 - link

    Infortrend has a line of NAS units that use ZFS. The EonNAS Pro 850 most closely lines up with the NAS under review in this article. Infortrend's NAS boxes seem to have some pretty advanced features. I would love to have AnandTech review them.
  • DanNeely - Monday, March 2, 2015 - link

    I'd be more interested in seeing a review of the 210/510, because they more closely approximate mainstream SOHO NASes in specifications, although at $500/$700 they're still a major step up in price over midrange QNAP/Synology units.

    It's not immediately clear from their documentation, so I'm also curious whether they're running a stock version of OpenSolaris that allows easy patching from Oracle's repositories, or have customized it enough to make customers dependent on them for major OS updates.
  • DanNeely - Monday, March 2, 2015 - link

    Also of interest in those models would be how performance scales down to more modest hardware; the x10 units only have Bay Trail-based processors.
  • Essence_of_War - Friday, February 27, 2015 - link

    Ganesh,

    I understand why you used 120 GB SSDs to try to capture the maximum throughput of the 10GbE link, but I was confused to see that you stuck with those for things like RAID expansion/rebuild, etc.

    Was it a time constraint, or is this a change to the review platform in general? Are eight small-capacity SSDs in a RAID-5 an effective benchmark of RAID-5 rebuild times?
  • DanNeely - Friday, February 27, 2015 - link

    When Ganesh reviewed the DS1815 using HDDs for the rebuild, it took almost 200 hours to do all the rebuild tests (probably longer, due to delays between when one finished and the next was started). That sort of test is prohibitively time-consuming.

    http://www.anandtech.com/show/8693/synology-ds1815...
  • Essence_of_War - Friday, February 27, 2015 - link

    Yikes, when you put it in that context, it makes a lot more sense. I think we can reasonably extrapolate to larger capacities from the best-case-scenario SSDs.
  • DanNeely - Friday, February 27, 2015 - link

    Repeating a request from a few months back: can you put together something on how long/well COTS NAS vendors provide software/OS updates for their products?
  • DigitalFreak - Friday, February 27, 2015 - link

    Synology is still releasing new OS updates for the 1812+, which is three years old.
  • DanNeely - Saturday, February 28, 2015 - link

    I poked around on their site, since only three years surprised me; it looks like they're pushing full OS updates (at least by major version number; I can't tell about feature/app limits) as far back as the 2010 models, with occasional security updates landing a few years farther back.

    That's long enough to make it to the upslope of the HDD-failure bathtub curve, although I'd appreciate a bit longer, because with consumer turnkey devices I know a lot of the market won't be interested in a replacement before the old one dies.
  • M4stakilla - Friday, February 27, 2015 - link

    Currently I have a desktop with an LSI MegaRAID and 8 desktop HDDs.
    This is nice and comfortably fast (500MB/sec+) for storing all my media.

    Soon I will move from my apartment to a house, and I will need to "split up" this desktop into a media center, a desktop PC, and preferably some external storage system (my desktop uses quite a bit of power being on 24/7).

    I'd like this data to remain available at a similar speed.

    I've been looking into a NAS, but either it is too expensive (like the $1400 NAS above) or it is horribly slow (1 Gbit).

    Does anyone know any alternatives that can handle at least 500MB/sec with (less important, but still...) a reasonable access time?
    A small i3/Celeron desktop connected to the main desktop with something other than Ethernet? USB 3.1 (what's the max cable length?)? Some version of eSATA? Something else? It would be nice if I could re-use my LSI MegaRAID.

    Anyone have ideas?
  • SirGCal - Friday, February 27, 2015 - link

    Honestly, for playing media, you don't need speed. I have two 8-drive rigs myself, one with an LSI card and RAID 6 and one using ZFS RAIDZ2. Even hooked up to just a 1G network, it's still plenty fast to feed multiple computers live-streaming Blu-ray content. Use a 10G network if you have the money, or just team multiple 1G links within the system to enhance performance if you really need to. I haven't needed to yet, and I host the entire house right now with multiple units. I can hit about seven going full tilt before the network would become an issue.

    If you're doing something more direct that needs performance, you might consider something other than a standard network connection. But for most people, a 4-port 1G PCIe NIC with teaming would be beyond overkill for the server.
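
    (A back-of-the-envelope check on the "gigabit is plenty for streaming" point above. Both numbers in the sketch are assumptions: roughly 940 Mbps of usable throughput on a single 1 GbE link and about 40 Mbps for a high-bitrate Blu-ray remux.)

```python
# How many full-bitrate Blu-ray streams can one gigabit link carry?
# Both figures below are assumptions for illustration.
USABLE_LINK_MBPS = 940      # assumed usable TCP throughput on 1 GbE
BLURAY_STREAM_MBPS = 40     # assumed high-bitrate Blu-ray remux

streams = USABLE_LINK_MBPS // BLURAY_STREAM_MBPS
print(f"~{streams} concurrent full-bitrate Blu-ray streams per 1 GbE link")
```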
  • M4stakilla - Sunday, March 1, 2015 - link

    I do not need this speed for playing media, of course ;)
    I need it for working with my media...

    And yes, I do need it for that... it is definitely not a luxury...
  • SirGCal - Sunday, March 1, 2015 - link

    Ahh, I work on my media on my regular gaming rig and then just move the files over to the server when I'm finished with them. However, without using something specific like Thunderbolt, your cheapest option (though not really CHEAP) might still be using two of the 4-port teamed 1G connections. That should give you ~400 MB/s of throughput, since I get about 110 MB/s with a single 1G line; teaming loses a tiny bit, and you'd need it at both ends. Or get a small 10G network going. I got a switch from a company sale for < $200, but the cards are still expensive, so realistically that's your most expensive option. A single connection gets you ~1100 MB/s of throughput, though.

    I plan on getting the new Samsung SM951. Given 2 GB/s reads and something like 1.5 GB/s writes, that might be your cheapest option, even if you need a capable M.2 PCIe riser card to use it. Then you just have transfer delays to/from the server. Unless 512 GB isn't enough working space (good lord). Again, something like that might be your cheapest option if it's plausible for you.
  • M4stakilla - Monday, March 2, 2015 - link

    I currently have a 1TB M550 and 6x 4TB desktop HDDs (will expand to 8x) in RAID 5, plus an offline backup (5x 4TB).

    So nothing exceeds 500MB/sec, and I have no real upgrade plans for that either,

    but it would be a shame to waste 400MB/sec of the 500MB/sec on stupid network limitations.

    4x 1Gbit teamed might be worth a look though, thanks.
  • usernametaken76 - Friday, February 27, 2015 - link

    Yes, a Mac Mini with Thunderbolt and, to name just one example, a LaCie 5big Thunderbolt (in sizes from 10 to 30 TB) offers exactly this, almost twice over. The Thunderbolt 2 models offer even more. These are geared more towards video editing but provide every bit of the bandwidth you crave.
  • M4stakilla - Sunday, March 1, 2015 - link

    Thanks for the advice!

    Looking further into Thunderbolt... cabling seems quite expensive though: 300+ euros for 10 m, 500+ euros for 20 m :(

    For ethical reasons I'm trying to avoid Apple at all costs, so no Mac Mini for me...
    Also, the LaCie 5big is a bit silly for me, as I already have the HDDs and the LaCie comes with HDDs included.
  • usernametaken76 - Tuesday, March 3, 2015 - link

    You can get empty four-drive Thunderbolt cases from OWC. And of course Thunderbolt is available for PC motherboards via add-in card; Asus makes a good Z97 board for about $400 with everything but the kitchen sink. Not sure why you're seeing such high prices for a 10 m cable; they shouldn't be more than $50 for a 6 m cable. They were working on optical extensions to the original copper cabling (with Mini-DP connectors)... perhaps that's what you're seeing?
  • usernametaken76 - Tuesday, March 3, 2015 - link

    Make that $39 for a 2 m cable. I believe you are looking at active optical cables, which you wouldn't need unless you have to make a very long run for some reason. Is there a reason the storage has to be so far away from the workstation?
  • DCide - Friday, February 27, 2015 - link

    I'm unclear about the DAS tests. It appears you were testing throughput to a single Windows Server 2012 client. I would expect the ATTO read throughput to top out at around 1200 MB/s, and the real-world read performance to top out around 900-950 MB/s, as it did.

    I thought teaming didn't usually increase throughput to a single client from the same source. I imagine Synology's claim of around 1900 MB/s of throughput will pan out if two clients are involved, perfectly in line with your real-world throughput of 950 MB/s to a single client.
  • usernametaken76 - Friday, February 27, 2015 - link

    A single client with multiple transfers would be treated as such.
  • usernametaken76 - Friday, February 27, 2015 - link

    That is, provided the single client also has teaming configured.
  • DCide - Friday, February 27, 2015 - link

    I think teaming was configured - that was the point of using Windows Server 2012 for the client, if I understood correctly.

    So it would appear that both tests (ATTO & real world) only consisted of a single transfer. I don't see any evidence that two Blu-ray folders were transferred concurrently, for example.
  • ganeshts - Friday, February 27, 2015 - link

    Our robocopy tests (real-world) were run with the /MT:32 option. The two Emulex SFP+ ports on the Windows Server 2012 machine were also teamed. In one of the screenshots, you can actually see them treated separately (no teaming), with iPerf reporting around 2.8 Gbps each. In the teamed case, iPerf was reporting around 5 Gbps. iPerf was run with 16 simultaneous transfers.

    I will continue to do more experiments with other NAS units to put things in perspective in future reviews. As of now, this is a single data point for the Synology DS2015xs.
  • DCide - Friday, February 27, 2015 - link

    Ganesh, thanks for the response. Unless you really know the iperf code (I sure don't!), I don't believe you can draw many conclusions from the iperf numbers, considering you hit a CPU bottleneck. There's no telling how much of that CPU went to other operations (such as test data creation/reading) rather than getting data across the pipe. Because of the bottleneck, the iperf results could easily have no relationship whatsoever to SSD RAID read/write performance across the network, which might not be bottlenecking at all (other than at the 10GbE limits themselves, which is what we want).

    Could you please run a test with a couple of concurrent robocopies (assuming you can run multiple instances of robocopy)? I'm not sure the number of threads necessarily affects whether both teamed network interfaces are utilized. Please correct me if I'm wrong, but I think it's worth a try. In fact, if concurrent robocopies don't work, it might be worth concurrently running any other machine you have available with a 10GbE interface, to see if this ~1 GB/s barrier can be broken.
  • usernametaken76 - Friday, February 27, 2015 - link

    Unless we're purchasing agents for the government, can we avoid terms like "COTS"? It has an odor of bureaucracy associated with it.
  • FriendlyUser - Saturday, February 28, 2015 - link

    I am curious to find out how it compares with the AMD-based QNAP 10G NAS (http://www.anandtech.com/show/8863/amd-enters-nas-... ). I suppose the AMD cores, at 2.4GHz, are much more powerful.
  • Haravikk - Saturday, February 28, 2015 - link

    I really don't know what to make of Synology; the hardware is usually pretty good, but the DSM OS keeps me puzzled. On the one hand it seems flexible, which is great, but the Linux underneath is a mess, as most tools are implemented via a version of BusyBox that they seem unwilling to update, even though that version has multiple bugs in many of its tools.

    Granted, you can install others, for example a full set of GNU tools, but there really shouldn't be any need to do this if they just kept it up to date. The lack of ZFS, or even access to Btrfs, is disappointing too, as it simply isn't possible to set these up yourself unless you're willing to waste a disk (since you HAVE to set up at least one volume before you can install anything yourself).

    I dunno; if all I'm looking for is storage, then I'm still inclined to go with a Drobo for an off-the-shelf solution; otherwise I'd look at a ReadyNAS system if I wanted more flexibility.
  • thewishy - Wednesday, March 4, 2015 - link

    I think the point you're missing is that people buying this sort of kit are doing so because they want to "opt out" of managing this stuff themselves.
    I'm an IT professional, but this isn't my area. I want it to work out of the box without much fiddling. The implementation under the hood may be ugly, but I'm not looking under the hood. For me it stores my files with a decent level of data security (no substitute for backup), allows me to add extra/larger drives as I need more space, and provides a decent range of supported protocols (SMB, iSCSI, HTTP, etc.).
    ZFS and Btrfs are all well and good, but I'm not sure what practical advantage they would bring me.
  • edward1987 - Monday, February 22, 2016 - link

    You can get the 1815+ a bit cheaper if you don't really need enterprise class:
    http://www.span.com/compare/DS1815+-vs-DS2015xs/46...
  • Asreenu - Thursday, September 14, 2017 - link

    We bought a few of these a year ago. All of them had component failures, and support is notorious for running you through hoops until you give up, because you don't want to be without access to your data for so long. They have ridiculous requirements to prove your purchase before they even reply to your question. In all three cases we ended up buying replacements and figuring out how to restore the data ourselves. I would stick with Netgear for the support alone, because that's a major selling point. AnandTech shouldn't give ratings to things they don't have experience with; just announcing that a vendor has support doesn't mean a thing.
