24 Comments
Guspaz - Wednesday, January 1, 2014 - link
Yikes, that's a highly questionable decision to go with btrfs instead of ZFS as the default file system. ZFS has been in production use for seven years now, proven through widespread deployments and available on every *nix platform you can think of, while btrfs is still beta quality (without even an official stable release) and nowhere near feature-competitive with ZFS...
JDG1980 - Wednesday, January 1, 2014 - link
Agreed. This is a full-fledged Xeon PC with ECC RAM, so why not go with ZFS? It would seem to be the obvious choice for a high-quality, time-tested software RAID system.
By the way, it would really be better if you listed the suggested retail price on the first page of reviews along with the other specs. (A quick Google search seems to indicate that the street price is $2500-$3000.)
Runiteshark - Wednesday, January 1, 2014 - link
Probably because it takes a bit more effort to get ZFS running in Linux than btrfs, but not that much. It recently went stable and is working just fine on a 72-bay Supermicro chassis I have had in test for the past 3 months. All this being said, why didn't they just go with a BSD solution?
nafhan - Thursday, January 2, 2014 - link
While BTRFS has been supported as a root file system in SLE and Oracle Linux since 2012, ZFS is not available from the vendor on either (even though Solaris is owned by Oracle). That's probably it right there.
shodanshok - Friday, January 3, 2014 - link
I agree. While BTRFS is quite stable now, considering the critical role assigned to a filesystem I would go with an FS with a proven track record (and fsck). Moreover, being a CoW filesystem, BTRFS tends to be extremely fragmentation-prone in some circumstances, basically every time a file is rewritten in place, for example by a database or a virtual machine (but I think a NAS unit like this is primarily assigned an archiving role).
SirGCal - Wednesday, January 1, 2014 - link
Yup, I have two 8-disk systems myself. One running a hardware LSI controller for RAID 6 and one using ZFS for the same effective protection. Sure, the hardware controller is actually a tiny bit faster at hard reads, but for the $600 price tag, so what. All of my current systems are going to be ZFS. These arrays-in-a-box are interesting until they decide to go with some other pooling system... If there is a real, comparable reason and argument for BTRFS instead of ZFS, I'd like to see it.
Runiteshark - Wednesday, January 1, 2014 - link
I tested btrfs recently with a large disk array (read: 45 4TB drives) and the performance was very poor. Ended up going with JFS, and shunned XFS because it's not stable in the event of power issues.
shodanshok - Friday, January 3, 2014 - link
Hi, from my understanding JFS and JFS2 have been more or less unsupported for some time now.
What problem did you have with XFS? It is designed to manage the exact case you describe: a lot of space spread over a lot of spinning disks. When using XFS, the only two things that can lead to data loss are:
1. no barrier/FUA support in the disk/controller combo
2. an application that rewrites files with truncate and does _not_ use fsync
Case no. 1 is common to all filesystems: if your disk lies about cache flushes, then no filesystem can save you. The only thing that can somewhat lessen the risk is journal checksumming, which is implemented in XFS, EXT4 and BTRFS, but I don't know about JFS.
Case no. 2 is really an application shortcoming, but the EXT4 and BTRFS choice here is the more sensible one: detect such corner cases and apply a workaround. Anyway, with applications that properly use fsync, XFS is rock solid.
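For reference, a minimal sketch of the safe rewrite pattern (write a temp file, fsync it, then atomically rename it over the original), as opposed to the truncate-without-fsync anti-pattern in case no. 2. The path and helper name here are just placeholders:

```python
import os

def safe_rewrite(path: str, data: bytes) -> None:
    """Rewrite 'path' so a crash leaves either the old or the new content."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)              # force the new contents to stable storage
    finally:
        os.close(fd)
    os.rename(tmp, path)          # atomic replace on POSIX filesystems
    # fsync the directory as well so the rename itself is durable
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

# The risky pattern: open(path, "wb") truncates in place, and without fsync a
# crash at the wrong moment can leave a zero-length file on any filesystem.
```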
Regards.
Runiteshark - Wednesday, January 1, 2014 - link
On one hand, I'm happy that 10G is slowly becoming more prevalent in the con/prosumer-grade market; however, products like this make my head hurt. The performance that you were able to get out of this host was nothing short of embarrassing, and could have easily been handled by a single gigabit link. I think this primarily stems from vendors still using software RAID without good quality HBAs. You can most certainly have a fantastic software solution that is high performance without a real RAID controller or even a high-end HBA, however it requires you use Ceph or ZFS.
The performance you are seeing out of this is actually very similar to an HP Microserver that I have running FreeNAS with 2GB of RAM, LAGG'd gigabit ports, 4x4TB 7200rpm Seagates + a 32GB USB3 OS drive; granted, the entire unit cost no more than $1800, and only has 4 slots instead of 6. Without a doubt, if I was going to build something bigger, I'd use a Supermicro X9DR7-TF+ for $800, same as what I use in production, get a decent chassis and the LSI BBU, and have support for up to 16 drives with 2 10G ports on an Intel X540 chipset, which all together would still be significantly less than this solution, and obviously blow the performance of this out of the water.
hpglow - Wednesday, January 1, 2014 - link
hpglow - Wednesday, January 1, 2014 - link
Runiteshark not good at reading or converting bits to bytes? With some of the tests pushing over 600 MB/sec, a 1G Ethernet port would be saturated more than 4 times over, not including packet overhead. A 1Gb Ethernet port is good for only 125 MB/sec.
Runiteshark - Wednesday, January 1, 2014 - link
Some tests being multi-client CIFS. Look at the throughput he's getting on a single client. I'm pushing 180MB/s over CIFS and 200MB/s through NFS, LAGGing dual 1G links to a single client. The host pushing this data is a 72-bay Supermicro chassis w/ dual E5-2697v2's, 256GB of RAM, 72 Seagate 5900rpm NAS drives, 4x Samsung 840 Pro 512GB SSDs, 3 LSI 2308 controllers, and a single Intel X520-T2 dual 10G NIC hooked up to an Extreme X670V over twinax with a frame size of 9216. Typical files are medium-sized at roughly 150MB each, copying with 48 threads of rsync (a rough sketch of that fan-out pattern appears at the end of this comment).
One thing that I didn't see in the test bed was the configuration of jumbo frames, which definitely changes the characteristics of single-client throughput. I'm not sure if you can run large jumbo frames on the Netgear switch.
If I need 10G, which I don't because the disks/proc in the Microserver couldn't push much more, I could toss in a dual 10G Intel adapter for roughly $450.
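Purely as an illustration of the "48 threads of rsync" approach mentioned above, here is a rough sketch that fans out one rsync process per top-level directory. It assumes GNU rsync is on the PATH, and /mnt/src and /mnt/nas/backup/ are hypothetical mount points:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("/mnt/src")       # hypothetical source mount
DST = "/mnt/nas/backup/"     # hypothetical destination share on the NAS
WORKERS = 48                 # matches the 48-way copy described above

def copy(subdir: Path) -> int:
    # -a preserves permissions/ownership/times; one rsync process per subtree
    return subprocess.run(["rsync", "-a", str(subdir), DST]).returncode

if __name__ == "__main__":
    subdirs = [p for p in SRC.iterdir() if p.is_dir()]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(copy, subdirs))
    print(f"{results.count(0)}/{len(results)} rsync jobs succeeded")
```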
imsabbel - Thursday, January 2, 2014 - link
That's because his single-client tests only use a single 1 GBit connection on the client side. I know, it's stupid, but the fact that ALL transfer tests are literally limited to something like 995Mbit/s should have given you a clue that Anandtech does strange things with their testing.
Runiteshark - Friday, January 3, 2014 - link
I didn't even see that! What the hell was the point of the test then?
Gigaplex - Wednesday, January 1, 2014 - link
Am I reading this correctly? You used 1GbE, not 10GbE adapters on the test bed? I'd like to see single-client speeds using 10GbE.
ZeDestructor - Wednesday, January 1, 2014 - link
6 quad-port NICs + 1 on-board NIC, so 25 gigabit ports split over 25 VMs.
As for single-client speeds, it should be possible to get there using LAGs, and it's a worthy point to mention: easily possible even with the current setup, although I would like to see some Intel X540 cards in use myself...
BMNify - Thursday, January 2, 2014 - link
Hmm, am I missing something here? You only use 6 x Intel ESA I-340 Quad-GbE Port Network Adapters,
as in only using 4 "1GbE" ports and NO actual "10GbE" card to max out the end to end connection ?
Don't get me wrong, it's nice to finally get a commercial SOHO-type unit that's actually got 10GbE as standard, after decades of nothing but antiquated 1GbE cards, at reasonable prices, but you also NEED that new extra 10GbE card to put in your PC alongside a 10GbE router/switch, so this 3K NAS is way too expensive for the SOHO masses today, alas.
ganeshts - Thursday, January 2, 2014 - link
6x quad ports = 24 1-GbE ports + one onboard 1GbE = 25 GbE in total.
BMNify - Thursday, January 2, 2014 - link
Oh right, so it's 25 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection.
BMNify - Thursday, January 2, 2014 - link
It still seems very odd to have a collection of 24 threads on a dual-socket, 6-core/12-thread test bench with a 10GbE router/switch and this 3K NAS with a dual "10GbE" card that could be bonded together at both ends, and yet AT just tests the kit at the 1GbE port bottleneck. They don't even install another dual "10GbE" card in the PC end and then try, for instance, starting several concurrent ffmpeg jobs upscaling and encoding high-profile/bitrate 1080p content to UHD over iSCSI etc. to the "10GbE" NAS, to max out all the 12 cores/24 threads of SIMD, or other options to try and push that exclusive "10GbE" connection rather than any old combination of antiquated "1GbE" cards.
hoboville - Thursday, January 2, 2014 - link
I hate sounding like a naysayer, but these boxes are so expensive. You can build a system with similar specs for much less with FreeNAS and ZFS (as other commentators have noted). Supermicro makes some great boards, and with the number of case options you get when you DIY, expandability is very much an option if you need it further down the road. Then again, a lot of the cost comes from the 10 Gbit NICs, which cost a lot.
lazn_ - Thursday, January 2, 2014 - link
One thing I would like to see in all your NAS reviews is coverage of any "Branch Office" replication features and how well they work compared to DFS on a Windows box (over VPN, etc.).
xbrit - Thursday, January 2, 2014 - link
Synology DS3612xs isn't even mentioned as a comparable product here??
12 bays for $3000, plus the extra $350 or so to install an Intel X540-T1 10GbE NIC.
I have a DS3612xs, fully populated with 3TB drives in RAID-6. Direct-connected to a desktop PC because 10GbE switches are not ready for the home office market yet.
Has been utterly reliable for >1 year. For large file transfers (typically a few tens of GB of media files), I routinely get 700-900 MB/s writing to the NAS and 400MB/s reading from it.
(The SSD's on the desktop PC are 2x SATA-3 in RAID-0. They are the limiting factor when reading from the NAS because each disk can only support about 200MB/s sustained sequential write... typical for current high-end SSD's.)
centosfan - Saturday, January 18, 2014 - link
I am thinking about buying one of these DS3612xs units for a mission-critical production environment to host a number of VMware virtual machines. What kind of IOPS are you getting? Are you running the SSD read cache, and does it help? Thanks!
klassobanieras - Sunday, January 12, 2014 - link
Any chance of actually testing the error detection / correction and redundancy features? What happens if you yank the power cord during a metadata write? What if you flip a bunch of bits on a drive?
These are primary selling points of these devices, and have the potential to massively impact buyers, so it'd be really useful to know this kind of thing.
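For what such a test could look like, here is a rough, purely illustrative sketch: it flips a few bits directly on a member disk of a disposable test array so that a subsequent scrub/verify pass can show whether the corruption is detected and repaired. /dev/sdX is a placeholder; never run anything like this against real data.

```python
import os
import random

DEVICE = "/dev/sdX"          # placeholder: member disk of a throwaway test pool
FLIPS = 8                    # number of single-bit corruptions to inject
REGION = 10 * 1024**3        # corrupt somewhere within the first 10 GiB

fd = os.open(DEVICE, os.O_RDWR)
try:
    for _ in range(FLIPS):
        offset = random.randrange(REGION)
        os.lseek(fd, offset, os.SEEK_SET)
        original = os.read(fd, 1)[0]
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, bytes([original ^ (1 << random.randrange(8))]))
    os.fsync(fd)             # make sure the corrupted bytes reach the disk
finally:
    os.close(fd)

# Next step, outside this script: trigger a scrub/verify on the NAS and check
# whether the damaged blocks are reported and repaired from redundancy.
```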