23 Comments
mschira - Monday, April 29, 2013 - link
I was wondering if one could install a proper Linux on these systems, such as Fedora. Then one could use it as a medium-powerful computation server with lots of local storage.
Has anybody tried that?
Cheers
M.
watersb - Monday, April 29, 2013 - link
It's a bit late for coffee, so please forgive me for not finding a price in the review.

This looks like an interesting product, but I'm a ZFS zealot. Running ZFS with only 4 GB RAM isn't going to fly.
That said, I am *very* interested in anything resembling a mid-range NAS. Storage is a real pain point, and it is tough to build acceptable storage out of cheap disks. So thanks for reviewing this thing!
ganeshts - Monday, April 29, 2013 - link
Price is $5K (MSRP), but retailers are selling it at prices ranging from $3500 to $5000.
davegraham - Tuesday, April 30, 2013 - link
Watersb,

Use Nexenta Community Edition (built on OpenIndiana + ZFS) on top of a Supermicro server with the same 12 drive bays (and SAS drives) and I'd kill this particular box AND have a more robust solution to boot.
D
watersb - Tuesday, April 30, 2013 - link
Thanks, Ganesh, for the pricing info.

Dave, that's such a good idea that I switched to OpenIndiana in 2009. I'm running 8 2TB drives as four mirrored pairs with a $100 LSI controller. But it will be quite some time before I have the budget to upgrade to a server motherboard with more than 16GB ECC RAM.
ZFS deduplication is *expensive*, folks. Don't do it. I tried adding a 60GB SSD for L2ARC, but it turns out that I would be better off with 60GB of *swap* to hold the deduplication tables.
My kung fu is weak. But I've been running this system through numerous hardware failures, PEBKAC events, and system software updates, and I haven't lost any data. Solaris isn't bulletproof, but it does warn me of impending drive failures before I lose anything.
Sorry for the long rant -- but it IS possible to play with "enterprise" class system configurations on lousy hardware if you are willing to waste^W commit some time doing so.
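A back-of-the-envelope sketch of the dedup cost being described above. It assumes the commonly cited rule of thumb of roughly 320 bytes of ARC per unique block; the actual per-entry size varies by pool and ZFS version, so treat the numbers as order-of-magnitude only:

```python
# Rough estimate of the RAM needed to keep the ZFS dedup table (DDT) in ARC.
# Assumption: ~320 bytes per unique block (a widely quoted rule of thumb).

def ddt_ram_gib(pool_tib, recordsize_kib=128, bytes_per_entry=320):
    """Estimate GiB of RAM to hold the whole DDT for a fully unique pool."""
    blocks = pool_tib * 2**40 / (recordsize_kib * 2**10)
    return blocks * bytes_per_entry / 2**30

# An 8 x 2 TB pool arranged as four mirrors gives roughly 8 TiB usable.
print(f"{ddt_ram_gib(8):.1f} GiB")  # prints 20.0 GiB
```

At the default 128K recordsize, 8 TiB of unique data already wants about 20 GiB of RAM for the DDT, which is why a 60GB L2ARC (or swap) ends up holding dedup tables instead of cache.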
Walkeer - Thursday, May 9, 2013 - link
I do not understand either why such a powerful NAS has only 4GB of RAM, looking at RAM prices these days...
davegraham - Monday, April 29, 2013 - link
Can you please nix the usage of the word "enterprise" from your reviews? These QNAP boxes (and pretty much any other storage device y'all review these days) are Commercial, SMB, or Consumer level devices at best. Enterprise describes a category of business that would never use this based on uptime, data integrity, performance, and capability requirements.
ganeshts - Monday, April 29, 2013 - link
Hmm.. Not sure why you are doubting the performance and capability of these units. With SSDs, they form a very good storage backend for medium-sized workgroups. Uptime and data integrity need more QA, but with the stable firmware version, I really had no trouble keeping it bombarded with data accesses for days together.
Gigaplex - Monday, April 29, 2013 - link
Read your very own "cons" section. This is exactly why an enterprise wouldn't look at it. As for performance? I've got a dirt-cheap home-built Llano box using 5 WD Green drives in software RAID 5 and it easily reaches 250-300MB/s transfers. This system had 12 SSDs. Colour me underwhelmed.
Walkeer - Thursday, May 9, 2013 - link
I understand you have a 10Gb network at home, right? Or InfiniBand 4x perhaps? Otherwise I do not see how you push 300MBps over a 1Gb line... or are you talking about your desktop? Man, this article is about NAS...
Jeff7181 - Tuesday, April 30, 2013 - link
EMC, Hitachi and NetApp provide enterprise-class NAS and SAN arrays. Neither this nor any QNAP product is anywhere near that level.
Walkeer - Thursday, May 9, 2013 - link
Agreed. Plus, NAS is not really enterprise anyway, since those are SANs.
davegraham - Tuesday, April 30, 2013 - link
Ganesh,

Having worked in the storage industry (and now working for an enterprise and carrier networking company doing data center architecture and design), I can say QNAP, Drobo, et al. aren't names that carry any weight for enterprise-class storage. The systems I deal with (for example, the EMC Symmetrix VMAX 40K) are considered "enterprise class" storage systems (99.999% uptime, SSD caching and tiering, finely tuned atomic memory and storage access, multiple active processing storage engines/directors, Fibre Channel/FCoE/iSCSI front ends, extensive API command/control sets, replication [local & remote], snapshotting/cloning, etc.). As Jeff7181 notes below, these stand alone in a class by themselves.
cheers,
D
Walkeer - Thursday, May 9, 2013 - link
Agreed, this is a SOHO toy...
jaziniho - Wednesday, May 1, 2013 - link
Unless this comes in a model with dual controllers (not just dual PSUs), it's squarely in the SMB rather than the enterprise space.

Support for SAS as well as SATA disks would also be high on the list of potential requirements for enterprise. With RAID rebuild times on large drives so long, you need disks with decent reliability to give you more confidence in making it through the rebuild.
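The rebuild-reliability point above can be made concrete with a simplistic model: treat each bit read during the rebuild as an independent trial against the drive's spec-sheet unrecoverable bit error rate (BER). Real failure modes are correlated and vendors' BER figures are worst-case, so this is an illustration of the trend, not a prediction:

```python
# Probability of re-reading N TB without hitting an unrecoverable read error,
# assuming independent bit errors at the spec-sheet BER:
# ~1e-14 for typical consumer SATA, ~1e-15 for enterprise/SAS drives.

def rebuild_survival(data_read_tb, ber):
    bits = data_read_tb * 1e12 * 8
    return (1.0 - ber) ** bits

# A hypothetical 12 x 3 TB single-parity rebuild re-reads ~33 TB from survivors.
for ber in (1e-14, 1e-15):
    print(f"BER {ber:g}: {rebuild_survival(33, ber):.1%} chance of a clean rebuild")
```

With consumer-class BER the clean-rebuild probability comes out in the single-digit percents, versus roughly three quarters for the enterprise-class figure, which is one way to quantify why SAS/enterprise drives matter once rebuilds get long.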
aloginame - Saturday, May 11, 2013 - link
I agree that this QNAP is not really an "Enterprise" or "High-End" NAS solution; however, I have to disagree when it is compared to something like the EMC Symmetrix VMAX 40K, for those are really SAN solutions, not NAS.
golemite - Monday, April 29, 2013 - link
Hi Ganesh, any chance of getting reviews of lower-end rackmount NAS systems like the Synology RS812/812+?
ganeshts - Wednesday, May 1, 2013 - link
We have the Synology RS10613xs+ in the pipeline, but it costs approx. twice that of the TS-EC1279U-RP and caters to users who require more performance / features.
mmayrand - Tuesday, April 30, 2013 - link
So, you spend $3500 for the box plus 12 SSDs (not free) and you get 1/3 of the effective bandwidth of a single SSD plugged into a $300 PC. Is there a point to these NAS boxes?
davegraham - Tuesday, April 30, 2013 - link
Mmayrand,

The concept behind a NAS box is shareable storage across N-number of users in a SoHo or SMB environment. At that point, it makes more sense to have a common pool of storage that can be "protected" (remember, RAID is NOT backup) and utilized more efficiently than a scattered or siloed collection of independent disks in a laptop or desktop.
Shared storage is also a basic requirement for most virtualization solutions to maintain high availability and portability for virtual machines within a cluster. As a standalone box, you're right, you can hit better performance numbers because you're just straddling a PCIe bus vs. Ethernet. However, change the venue and you're looking at a more ideal solution.
D
Evadman - Wednesday, May 1, 2013 - link
Why is this so expensive for the performance, and why is single-client performance so bad? Granted, I deal with actual enterprise-class SAN devices from EMC and the like, but even my ~4-year-old personal server can beat this box. My crappy home server is 20 rotational 3 TB Hitachi GST Deskstar 0S03230 disks in RAID 60, an E5200 CPU and an Adaptec 52445 running on MS Server 2008, not even close to being decent for enterprise level. Besides the disks, it cost under a grand and will max out a quad-linked 4Gbps connection with one client; I don't need to add 3 or 4 as your graphs show this box needs. There is no excuse for a 20-rotational-disk device to beat this 12-disk SSD NAS/SAN before hitting the network limit. I should get a dozen SSDs and a 10-gig switch and see what my crappy box can do just for kicks. *makes notes to see if a spare switch can be found in the office*
ganeshts - Wednesday, May 1, 2013 - link
The single-client performance is for a single client with a 1 GbE link (so it can't max out a 4GbE link, obviously). Client machines usually have only a single GbE port.

Our multi-client graphs show performance with multiple clients and indicate the limitation is the network link bandwidth on the NAS side.
Evadman - Thursday, May 2, 2013 - link
I must be misreading the graphs being presented then. This real-world graph: http://images.anandtech.com/doci/6922/qnap_ts1279u... shows 5 clients, each at ~20MB/s, for a total of 80 MB/s. The theoretical maximum is 125 MB/s; adding the control data to the payload of the frame, you should have about 97.5% data. So it looks like it's taking more than 5 clients to get to the 1Gbps limit. On the single-client CIFS graph here: http://images.anandtech.com/graphs/graph6922/54437... only 2 of the performance benchmarks appear network-limited at 123 MB/s. Office Productivity is low at 25-28 MB/s, and that is probably what a small business is going to be doing the most of. Is this a client/CIFS issue and not a NAS/SAN issue?
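For reference, a quick sketch of the Gigabit Ethernet line-rate arithmetic discussed in this thread. The ~97.5% figure counts only the Ethernet framing overhead (1500 / 1538 bytes on the wire); once TCP/IP headers are subtracted as well, the achievable application payload is a bit lower. This assumes a standard 1500-byte MTU; jumbo frames would raise the efficiency:

```python
# GbE application-payload ceiling, accounting for per-frame overhead:
# preamble (8) + Ethernet header (14) + FCS (4) + interframe gap (12) = 38 bytes,
# plus 40 bytes of TCP/IP headers inside each 1500-byte MTU.

LINE_RATE_MBPS = 1000  # Gigabit Ethernet

def gbe_payload_mbytes_per_s(mtu=1500, tcpip_overhead=40):
    wire_bytes = mtu + 38             # total bytes on the wire per frame
    app_payload = mtu - tcpip_overhead  # what the application actually receives
    efficiency = app_payload / wire_bytes
    return LINE_RATE_MBPS / 8 * efficiency

print(f"{gbe_payload_mbytes_per_s():.1f} MB/s")  # prints 118.7 MB/s
```

So the 123 MB/s benchmarks in the graph are effectively at the wire, and the practical per-client ceiling is closer to ~119 MB/s than the raw 125 MB/s figure.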