28 Comments
buffhr - Friday, March 14, 2014 - link
I can see the merit in a single point of contact; however, at $5.1k USD it is overpriced IMO. Sure, the disks are roughly 50% of the cost, but that's still $2.5k disk-less for a system that does not support SSH and practically has no ecosystem or encryption...
Samus - Friday, March 14, 2014 - link
I can't believe what a hit encryption has on read performance. 25 MB/s as opposed to 102 MB/s? Holy...
extide - Friday, March 14, 2014 - link
It would be a lot better if they used a CPU with AES-NI.
max1001 - Friday, March 14, 2014 - link
Look at the CPU. There's your answer.
Ammohunt - Thursday, March 20, 2014 - link
I agree. $5k buys a lot of JBOD that you can hang off an existing server and configure however you want: ZFS, tgtd, SMB, CIFS, NFS, etc.
Haravikk - Friday, March 21, 2014 - link
For such a large investment I'm pretty surprised by the lack of attention to detail here. There is no hardware support for encryption, which is crazy; my ~$250 Synology DS212j has an ARM processor with hardware encryption, so why doesn't a $5000+ machine? Also, 2x gigabit Ethernet seems pretty meagre these days when any serious data users will be (or should be) investing in 10 gigabit Ethernet at the very least, and while the controllers are pricey it would fit well within the huge premium here.
I mean, I'm nearly finished building a DIY storage box; it's not racked (since I'm building it around a tower case), but it has 15 hot-swappable 3.5" hard drive bays. I'm using it for direct-attached storage and it's coming in around $800 or so, but I don't think a small-form-factor motherboard sufficient to run ReadyNAS would push me much higher after swapping out the DAS parts. I dunno, for $5000+ I would think an enterprise-oriented product should be able to do a lot better than what I can build myself! Even if I switched everything for enterprise parts I'd still come in under.
tech6 - Friday, March 14, 2014 - link
It seems to have become the norm that companies release products with half-finished software and expect their customers to be their beta testers. Why would any business in their right mind pay $5K for an unfinished product when there are much better alternatives available?
Sadrak85 - Friday, March 14, 2014 - link
Did a back-of-the-envelope calculation a while back; the 2.5" ones just make more sense if you need maximum storage at the moment. That said, when we have the next gen of HDDs filled with helium and holding 10+ TB apiece, 3.5" all the way.
Sadrak85 - Friday, March 14, 2014 - link
I eat my words, the 2.5" ones are 9U for 50 drives...which is fewer TB/U, if you can accept the units. This one can make sense after all.
Samus - Friday, March 14, 2014 - link
I've replaced all our Seagate Constellation.2 drives over the past 3 years with Hitachis, as they have failed like clockwork in our HP ML380 that came equipped with them.
When I get the replacement back from HP, I put a Hitachi in the cage, install it in the server, and put the Constellation on eBay, where I usually get $50. That's all they're worth, apparently.
I love Seagate, but between their load/unload-cycle-happy desktop drives that have a pre-determined death, and their ridiculously poor-quality SAS drives, I just hope their SSDs are their saving grace, because my, how the mighty have fallen since the 7200.7 days.
phoenix_rizzen - Friday, March 14, 2014 - link
9U for 50 2.5" drives? Something's not right with that.
You can get 24 2.5" drives into a single 2U chassis (all on the front, slotted vertically). So, if you go to 4U, you can get 48 2.5" drives into the front of the chassis, with room on the back for even more.
Supermicro's SC417 4U chassis holds 72 2.5" drives (with motherboard) or 88 (without motherboard).
http://www.supermicro.com/products/chassis/4U/?chs...
Shoot, you can get 45 full-sized 3.5" drives into a 4U chassis from SuperMicro using the SC416 chassis. 9U for 50 mini-drives is insane!
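A quick back-of-the-envelope density comparison of the chassis mentioned in this thread, sketched in Python (the per-chassis drive counts are the commenters' figures, not verified specs):

    # Rough drives-per-rack-unit comparison, using the chassis counts
    # cited in this thread (commenters' numbers, not verified specs).
    chassis = {
        '9U unit, 50 x 2.5" drives': (9, 50),
        '2U chassis, 24 x 2.5" front bays': (2, 24),
        'Supermicro SC417-style 4U, 72 x 2.5"': (4, 72),
        '4U chassis, 45 x 3.5" drives': (4, 45),
    }

    for name, (rack_units, drives) in chassis.items():
        print(f"{name}: {drives / rack_units:.1f} drives per U")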
jasonelmore - Saturday, March 15, 2014 - link
All HDDs have helium.
ddriver - Friday, March 14, 2014 - link
LOL, it is almost as fast as a single mechanical drive. At that price, it's a giant joke. If you need that much space, such slow access doesn't even qualify for indie professional workstations, much less for the enterprise. With 8 drives in RAID 5 you'd think it would perform at least twice as well as it does.
FunBunny2 - Friday, March 14, 2014 - link
Well, as a short-stroked RAID 10 device, you might be able to get 4TB of SSD speed. With drives of decent reliability, not necessarily the Seagates, you get more TB/$/time than some enterprise SSD. Someone could do the arithmetic?
shodanshok - Friday, March 14, 2014 - link
Mmm, no, SSD speeds are way out of reach.
Even considering only rotational delay and entirely discarding seek time (e.g. an extremely short-stroked disk), disk access time remains much higher than an SSD's. A 15k enterprise-class drive needs ~4 ms to complete a platter rotation, with an average rotational delay of ~2 ms. Considering that you cannot really cancel seek time, the resulting access latency of even a short-stroked disk is surely above 5 ms.
And 15k drives cost much more than consumer drives.
A simple consumer-level MLC disk (e.g. Crucial M500) has a read access latency way lower than 0.05 ms. Write access latency is surely higher, but still way better than a hard disk's.
So: SSDs completely eclipse HDDs on the performance front. Moreover, with high-capacity (~1TB) higher-grade consumer-level / entry-level enterprise-class SSDs with power-failure protection (e.g. Crucial M500, Intel DC S3500), you can build a powerful array at reasonable cost.
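A rough sketch of that latency arithmetic (the ~0.05 ms SSD figure is the consumer-MLC number cited above; the 2 ms residual seek is an illustrative assumption):

    # Average rotational delay is half a revolution: (60 / RPM) / 2 seconds.
    def avg_rotational_delay_ms(rpm):
        return (60.0 / rpm) * 1000.0 / 2.0

    ssd_read_ms = 0.05  # consumer MLC read latency figure cited above

    for rpm in (7200, 10000, 15000):
        rot = avg_rotational_delay_ms(rpm)
        total = rot + 2.0  # assume ~2 ms residual seek even when short-stroked
        print(f"{rpm} rpm: ~{rot:.1f} ms rotation + ~2 ms seek = ~{total:.1f} ms "
              f"(~{total / ssd_read_ms:.0f}x an SSD read)")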
ddriver - Sunday, March 16, 2014 - link
I think he means sequential speed. You need big storage for backup or highly sequential data like raw audio/video/whatever; you will not put random read/write data on such storage. That much capacity needs high sequential speeds. Even if you store databases on that storage, the frequently accessed sets will be cached, and overall access will be buffered.
SSD sequential performance today is pretty much limited by the controller speed to ~530 MB/s. A 1TB WD Raptor drive does over 200 MB/s in its fastest region, so I imagine that 4 of those would be able to hit SSD speed at tremendously higher capacity, and with an even better capacity-to-price ratio.
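Rough numbers for that comparison (per-drive figures as quoted above; perfect striping is assumed, which real arrays won't quite reach):

    # Ideal-case striped throughput vs. a single SATA SSD, using the
    # per-drive figures quoted above; real arrays scale a bit worse.
    raptor_mb_s = 200   # 1TB WD Raptor, fastest (outer) region
    ssd_mb_s = 530      # SATA 6Gb/s controller-limited SSD
    drives = 4

    striped = drives * raptor_mb_s  # assumes perfect RAID-0 scaling
    print(f"{drives} x Raptor (ideal stripe): ~{striped} MB/s "
          f"vs single SSD: ~{ssd_mb_s} MB/s")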
shodanshok - Friday, March 14, 2014 - link
This thing seems too expensive to me. I mean, if the custom Linux-based OS has the limitations explained in the (very nice!) article, it is better to use a general-purpose distro and simply manage it all via LVM. Or even use a storage-centric distribution (e.g. FreeNAS, unRAID) and simply buy a general-purpose PC/server with many disks...
M/2 - Friday, March 14, 2014 - link
$5100??? I could buy a Mac mini or a Mac Pro and a Promise2 RAID for less than that! ...and have gigabit speeds.
azazel1024 - Friday, March 14, 2014 - link
I have a hard time wrapping my head around the price.
Other than the ECC RAM, that is VERY close to my server setup (same CPU, for example). Except mine also has a couple of USB 3.0 ports, twice the USB 2.0 ports, a third GbE NIC (the onboard one) and double the RAM.
Well...it can't take 8 drives without an add-on card, as it only has 6 ports...but that isn't too expensive.
Total cost of building...less than $300.
I can't fathom basically $300 of equipment being upsold for 10x the price! Even an upsell on the drives in it doesn't seem to justify getting it into that price range.
Heck, you could put a REALLY nice RAID card into my system, do 7 drives in RAID5/6 for redundancy with commercial 4TB drives and an SSD as a cache drive, and you'd probably come out at less than half the price, probably with better performance.
I get that building your own is almost always cheaper, but a $3000 discount is just a wee bit cheaper on a $5000 hardware price tag, official support or no official support.
azazel1024 - Friday, March 14, 2014 - link
I might also add, looking at the power consumption figures: with my system being near-identical, other than lacking ECC memory (but with more RAM, more networking connectivity and WITH disks in it), mine consumes 14 W less at idle (21 W idle). The RAID rebuild figures for 1-2 disks and 2-3 disks are also a fair amount lower on my server, by more than a 10 W difference (mine has 2x2TB RAID0 right now and a 60GB SSD as a boot drive).
Also WAY more networking performance. I don't know if the OS doesn't support SMB 3.0, or if AnandTech isn't running any network testing with SMB 3.0 utilized, but with Windows 8 on my server I am pushing 2x1GbE to the max, or at least I was when my desktop RAID array was less full (need a new array; it's 80% utilized on my desktop right now as it is only 2x1TB RAID0).
Even looking at some of the below GbE saturation benchmarks, I am pushing a fair amount more data over my links than the Seagate NAS here is.
With better disks in my server and desktop I could easily patch in the 3rd GbE NIC in the machine to push up over 240MB/sec over the links to the limit of what the drives can do. I realize a lot of SOHO/SMB implementations are about concurrent users and less about maximum throughput, but the beauty of SMB3.0 and SMB Multichannel is...it does both. No limits on per link speed, you can saturate all of the links for a single user or push multiple users through too.
I've done RAM disk testing with 3 links enabled and SMB Multichannel enabled and saw duplex 332MB/sec Rx AND Tx between my server and desktop. I just don't have the current array to support that, so I leave only the Intel NICs enabled and leave the on-board NICs on the machines disabled.
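For reference, the approximate ceiling being described (link count and the 332 MB/s result are from the test above; the ~118 MB/s per-link payload rate is an assumed typical figure after Ethernet/TCP overhead):

    # Approximate SMB Multichannel ceiling over N gigabit links.
    # ~118 MB/s of payload per link is typical after Ethernet/TCP overhead.
    per_link_mb_s = 118

    for links in (1, 2, 3):
        print(f"{links} x 1GbE: ~{links * per_link_mb_s} MB/s aggregate")

    # The RAM-disk test above saw ~332 MB/s each direction, close to the
    # ~354 MB/s theoretical ceiling for three links.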
lorribot - Friday, March 14, 2014 - link
Sorry, but the comment "Most users looking for a balance between performance and redundancy are going to choose RAID-5" is just plain stupid if you value your data at all. Ask anyone serious about enterprise storage and they will tell you RAID 6 is a must with SATA disks over 1TB. SATA is just pants when it comes to error detection, and the likelihood of one disk failing and then finding a second one fail with previously undetected errors when you try a rebuild is quite high.
Rebuild times are often long, too; I have seen 3TB drives stretch into a third day.
So on an 8-disk system you are now looking at only 6 disks, and you really want a hot spare, so now you are down to just 5 data disks and 20TB raw; formatted, this is going to come down to around 19TB. Where has that 32TB storage system gone?
If you are doing SATA drives you need shelves of them; the more the merrier to make any kind of sense in the business world.
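The capacity arithmetic being described, sketched out (4 TB drives as in the review unit; the TB-vs-TiB conversion accounts for much of the space that seems to vanish after formatting):

    # 8-bay system, RAID 6 (two parity disks) plus one hot spare, 4 TB drives.
    total_bays = 8
    parity_disks = 2
    hot_spares = 1
    drive_tb = 4.0  # decimal terabytes, as marketed

    data_disks = total_bays - parity_disks - hot_spares  # 5
    raw_tb = data_disks * drive_tb                       # 20 TB decimal
    usable_tib = raw_tb * 1e12 / 2**40                   # ~18.2 TiB before FS overhead

    print(f"{data_disks} data disks, {raw_tb:.0f} TB raw, "
          f"~{usable_tib:.1f} TiB before filesystem overhead (vs the 32 TB on the box)")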
Penti - Saturday, March 15, 2014 - link
Audience?
I don't quite get who the target audience for this is; surely a rack-mount NAS must mean SMB/enterprise, but I can't really see this fitting there. Lack of encryption is just one point; at this price it surely falls short in many other regards: it has no 10GbE and no RAID controller (rebuild time seems to be ridiculous). The software doesn't really seem up to it for small enterprises either. What is this appliance supposed to compete against? iSCSI is its main feature, but what use is it at this speed? No proper remote management on hardware that costs around 2500 USD? And it's using a 42-dollar processor? I don't get this product; what are you supposed to use it for?
ravib123 - Saturday, March 15, 2014 - link
We often use Openfiler or other Linux-based NAS/SAN platforms.
Looking at this configuration, I agree that most people with an 8-disk array who are looking for maximum storage space would use RAID5; normally we use more disks and RAID10 for improved performance.
My curiosity is how CPU- and memory-bound this thing must be, but I saw no mention of these being limiting factors. The performance is far below most configurations I've used with 8 disks in RAID5 (with a traditional RAID card).
Penti - Saturday, March 15, 2014 - link
The thing is that you get pretty decent hardware for 2000-2500 USD: say, a barebones Intel/Supermicro with IPMI/IP-KVM (BMC), a lower-end Xeon processor with AES-NI and all that, and a case with hot-swap bays and two PSUs. No problem running 10GbE, Fibre Channel or 8 disks (you might need an add-on card or two). I would expect them to at least spend more than $500 on CPU, RAM and board for appliances in this price range. It's not like the software and the case itself are worth 2500 USD, plus whatever markup they have on their drives.
SirGCal - Sunday, March 16, 2014 - link
Well, I used retired hardware and built a RAID6 (RAIDZ2) box with 8 drives, 2TB each, with nothing more than a case to hold them and a $41 internal 4-port SATA controller card. Downloaded Ubuntu, installed the ZFS packages, configured the array, and set up monitoring. Now I have a fully functional Linux rig with SSH, etc. and ~11,464,525,440 1K blocks (roughly 11TB usable).
I have another 23TB usable in an array using 4TB drives and an actual, very expensive, 6G, 8-port RAID card. The ZFS rig is right there in performance, even using slower (5400 RPM) drives.
So you can do it as cheaply as you like and get more functionality than this box offers. Need multiple NICs? Throw 'em in. Need ECC? Server boards are just as available. Need a rack form factor? Easy enough. I agree with the others; I don't see the justification for the $2k+ cost... Even if they had the 'self-encrypting' versions at $400 each, that's $3200, leaving $1900 for the hardware... Eww...
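A quick sanity check on those numbers (drive count, size, and block count as stated above; the gap between the theoretical 12 TB and the reported figure is ZFS overhead plus the TB/TiB difference):

    # 8 x 2 TB drives in RAIDZ2: two drives' worth of parity.
    drives, drive_tb = 8, 2.0
    data_tb = (drives - 2) * drive_tb  # 12 TB decimal before ZFS overhead

    blocks_1k = 11_464_525_440         # pool size reported above, in 1K blocks
    usable_tb = blocks_1k * 1024 / 1e12
    usable_tib = blocks_1k * 1024 / 2**40

    print(f"Theoretical data space: {data_tb:.0f} TB")
    print(f"Reported usable: ~{usable_tb:.1f} TB (~{usable_tib:.1f} TiB) "
          f"after ZFS metadata/reservation overhead")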
alyarb - Thursday, March 20, 2014 - link
Half-assed product. Why is it only 30 inches deep? You could fit another row of disks if you used the entire depth of the rack (assuming you have a meter-deep rack, of course, but who doesn't?).
I just want an empty chassis with a backplane for 3 rows of 4 disks. I want to supply the rest of the gear on my own.