Original Link: https://www.anandtech.com/show/6922/qnaps-tsec1279urp-12bay-flagship-rackmount-nas-review
QNAP's TS-EC1279U-RP 12-bay Flagship Rackmount NAS Review
by Ganesh T S on April 29, 2013 4:30 PM EST - Posted in
- Storage
- NAS
- QNAP
- Enterprise
Introduction
Our enterprise NAS reviews have focused on Atom-based desktop form factor systems until now. These units have enough performance for a moderately sized workgroup, but they lack some essential features in the enterprise space, such as acceptable performance with encrypted volumes. A number of readers have mailed in asking for more coverage of the NAS market straddling the high-end NAS and the NAS - SAN (storage area network) hybrid space. Models catering to this space come in the rackmount form factor and are based on more powerful processors such as the Core series or the Xeon series.
QNAP's 12-bay flagship unit is the TS-EC1279U-RP. Based on the Xeon E3-1225 processor, this 2U rackmount system comes with twelve hot-swappable bays and 4GB of ECC RAM. Both SATA and SAS disks in the 3.5" as well as 2.5" form factors are supported.
The last QNAP unit that we reviewed was the TS-659 Pro II. Around that time, QNAP was struggling with some data corruption issues in their firmware. We were hesitant to carry on reviewing QNAP's products while this issue remained outstanding. The solution came in the form of firmware release v3.7.1, almost a year after the problem was first reported. We have been using a couple of QNAP units in-house with later firmware releases and couldn't replicate the bug in our environment. This gave us enough confidence to ask QNAP for one of their latest NAS units. They obliged with the TS-EC1279U-RP, which also marks our foray into reviewing units in the rackmount space.
The specifications of the TS-EC1279U-RP are as below:
| QNAP TS-EC1279U-RP Specifications | |
| --- | --- |
| Processor | Intel Xeon E3-1225 (4C, 3.1 GHz) |
| RAM | 4 GB DDR3 ECC RAM (upgradable to 8 GB) |
| Flash | 512MB DOM |
| Drive Bays | 12x 3.5"/2.5" SATA / SAS 6 Gbps HDD / SSD |
| Network Links | 4x 1 GbE (two links upgradable to 10 GbE) |
| USB Ports | 2x USB 3.0 / 4x USB 2.0 |
| eSATA Ports | 2x |
| Expansion Slots | 2x (network or storage expansion) |
| VGA | Reserved for maintenance |
| Full Specifications Link | QNAP TS-EC1279U-RP Hardware Specs |
In the rest of the review, I will cover some unboxing and setup impressions. A detailed description of the testbed setup and testing methodology is followed by performance numbers in both single and multi-client modes. As requested by multiple readers, we will also briefly cover performance with encryption enabled. In the final section, power consumption numbers as well as RAID rebuild times will be covered along with some closing notes.
Unboxing and Setup Impressions
The QNAP TS-EC1279U-RP package weighs in north of 50 pounds, with a diskless unit by itself coming in at 48 pounds (21.77 kg). As mentioned earlier, the unit is in the 2U form factor, with dimensions of 19" x 3.5" x 21". Along with the main unit, QNAP also bundles the following with the package:
- Quick Installation Guide and CD with QNAP utilities
- Four Cat 5e Ethernet cables
- Two US AC power cords
- Screws for mounting hard disks in the drive bays
The front panel has twelve 3.5" drive bays arranged in three rows of four bays each. This leaves no room for an LCD display. The power button as well as the status LEDs (network and power) are on the right side of the front panel. On the rear side, we have the VGA port for attaching a display, two USB 3.0 ports, two eSATA ports and four USB 2.0 ports. There are two GbE ports on the mainboard and two more on a half-height PCIe card. Two removable redundant power supplies, rated for 600 W each, round out the rear of the unit.
Inside the chassis, we have the E3-1225 CPU with a massive aluminium heat sink on top (passive cooling for the 95W CPU), one 4GB ECC DDR3 DIMM and a couple of spare SATA ports. The main board is separated from the rear of the drive bays by three high-speed fans, which are user-replaceable if the need arises. The SATA backplane consists of Marvell 88SE9125 SATA to PCIe bridges. There is one bridge chip for each SATA / SAS port, with a 6 Gbps SATA link on one side and a single PCIe 2.0 lane (5 Gbps) on the other.
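As a quick sanity check on that backplane arrangement, a single PCIe 2.0 lane provides about 500 MB/s of usable bandwidth once the 8b/10b encoding overhead is accounted for:

$$5\ \text{GT/s} \times \tfrac{8}{10} = 4\ \text{Gbps} = 500\ \text{MB/s}$$

That is slightly below the ~600 MB/s usable on a 6 Gbps SATA link, but well above what any hard drive can sustain and close to the sequential limits of contemporary SATA SSDs, so the per-port bridge is unlikely to be a meaningful bottleneck.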
On the software side, we found that not much has changed in terms of the UI since we reviewed the TS-659 Pro II. The setup process is very similar. There is an underlying assumption that the user / administrator is aware of various RAID levels (which is perfectly acceptable given the target market for this unit) as there is no automatic RAID level management. SSH access is available.
Even though multiple volumes can be created, a given disk can be a part of only one volume at a time. Disks can also be set up as hot spares. On the whole, the firmware has all the features that most IT admins would frequently use. In addition to the ability to configure the unit using the GUI, it would also be nice to have a CLI (command line interface) reference manual for this product level.
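In the absence of such a manual, the SSH access does expose the standard Linux storage stack that the firmware is built on. The snippet below is a hedged sketch of the kind of inspection an administrator can already perform from the shell; the md device name is an assumption and will vary with the volume configuration:

```bash
# Log in over SSH (admin is the default account on QNAP firmware)
ssh admin@NAS_IP

# Software RAID status: array members and any ongoing sync / rebuild
cat /proc/mdstat

# Detailed view of a specific array (md0 is an assumed device name)
mdadm --detail /dev/md0

# Capacity and mount points of the configured volumes
df -h
```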
An overview of the setup process as well as the other features available in the current firmware is provided in the gallery below.
The firmware also comes with the ability to add more features using QPKG software expansion. There are a host of other features too, which we covered in detail in our review of the TS-659 Pro II. They are only of tangential relevance to this unit's target market, and hence, we won't go into those features in detail.
The disks can be set up in Single Disk, JBOD, RAID 0, RAID 5, RAID 6 or RAID 10 configurations. Available file systems are EXT3 and EXT4. Encryption is available on a per-volume basis.
Testbed Setup and Testing Methodology
Our rackmount NAS testbed uses the same infrastructure and methodology as the other units with a tower form factor. Performance evaluation is done under both single and multiple client scenarios. In the multiple client scenario, we run tests with two ports teamed (and the second pair configured as a backup), as well as with all four ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier.
| AnandTech NAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2x Intel Xeon E5-2630L |
| Coolers | 2x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x 8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND) |
| Other Drives | 12x OCZ Technology Vertex 4 64GB (offline in the host OS) |
| Network Cards | 6x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the RevoDrive Hybrid
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
Our testing environment also required some updates for the evaluation of rackmount units. Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to use our existing NAS testbed (tower form factor) and power measurement unit easily along with the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).
We have been using the Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in our previous reviews, RAID rebuilds take days to complete. With a large number of bays, using hard disks was going to be very cumbersome. In addition, hard disks just don't bring out the performance potential of rackmount units. Therefore, evaluation of the QNAP TS-EC1279U-RP was done by setting up a RAID-5 volume with twelve OCZ Vector 120 GB SSDs. Various shares and iSCSI LUNs were configured in this 1285 GB volume.
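For those wondering why twelve 120 GB drives yield a 1285 GB volume, the figure is consistent with RAID-5's one-drive parity overhead; the per-drive usable figure below (roughly 117 GB after formatting and reserved space) is inferred from the reported volume size rather than stated by QNAP:

$$(12 - 1) \times 116.8\ \text{GB} \approx 1285\ \text{GB}$$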
Thank You!
We thank the following companies for helping us out with our rackmount NAS evaluation:
- Thanks to Supermicro for the CSE-RACK14U mini rack
- Thanks to OCZ Technology for the twelve 120 GB Vector SSDs.
In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support evaluation was also done in a similar manner with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. For evaluation of multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how the performance changed with the number of clients / access pattern. Without further digression, let us move on to the performance numbers.
Single Client Performance - CIFS, NFS and iSCSI
The single client CIFS performance of the QNAP TS-EC1279U-RP was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that the client's storage system shortcomings wouldn't affect the benchmark results.
We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.
A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:
mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o async,hard,proto=tcp,noac
mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o directio
Note that we have not tried to optimize NFS performance for the NAS. In order to keep things as uniform as possible, we will just look at comparative performance on the basis of the default mounting options combined with the above (which are set to avoid caching effects on the client side). The scripts available here were run on IOMeter. The graph below presents the various results.
Multi-Client Performance - CIFS
We put the QNAP TS-EC1279U-RP through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. We configured the network links of the NAS in two ways. In the first configuration, two of the ports were link aggregated (and the other two teamed and configured as an active backup). In the second configuration, all four links were link aggregated as a single 4 Gb link. The following graphs show the total available bandwidth and the average response time while being subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest such as maximum response time, read and write IOPS, separate read and write bandwidth figures etc. Selected metrics from the evaluation of the QNAP TS-EC1279U-RP are available here for the dual teaming case and here for the quad teaming case.
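For readers unfamiliar with how such teaming is realized, Linux-based NAS firmware like QNAP's typically relies on the kernel bonding driver for 802.3ad (LACP) aggregation. The snippet below is a generic sketch of an LACP bond rather than QNAP's actual configuration; the interface names and IP address are assumptions:

```bash
# Load the bonding driver in 802.3ad (LACP) mode with link monitoring
modprobe bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

# Assign an address to the bond interface created by the driver and bring it up
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up

# Enslave two GbE interfaces to the bond
ip link set eth0 down
ip link set eth1 down
ifenslave bond0 eth0 eth1

# Verify LACP negotiation (the switch ports must also form an LACP trunk)
cat /proc/net/bonding/bond0
```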
We don't have any comparison graphs for these because this is the first unit we are evaluating using SSDs (the 120 GB OCZ Vector units). The previous tower form factor NAS units were evaluated using the Western Digital 4TB RE disks. We haven't graphed the results from the sequential 50% read tests since the aggregate bandwidth figures put out by IOMeter seem implausible (even though the standalone read and write bandwidth figures seem to make sense).
Comparing the sequential 100% reads performance between the dual and quad teaming cases, we find that the performance is limited by the network links. In the other cases, we see that it might be possible to extract even more performance out of the unit by having more clients accessing the unit simultaneously.
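A rough back-of-the-envelope calculation (ignoring Ethernet / TCP / SMB protocol overheads) illustrates the ceilings involved:

$$2 \times 1\ \text{Gbps} \approx 250\ \text{MB/s} \qquad 4 \times 1\ \text{Gbps} \approx 500\ \text{MB/s}$$

Sequential read throughput approaching these figures in the dual and quad teaming configurations respectively points to the network links, rather than the NAS itself, as the limiting factor.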
Encryption Support Evaluation
Consumers looking for encryption capabilities can opt to encrypt an iSCSI share with TrueCrypt or an in-built encryption mechanism in the host OS. However, if requirements dictate that the data must be shared across multiple users / computers, relying on encryption in the NAS is the best way to move forward. Most NAS vendors use the industry-standard 256-bit AES encryption algorithm. One approach is to encrypt only a particular shared folder, while the other approach is to encrypt the full volume. Some NAS vendors support both approaches in their firmware, but QNAP opts only for the latter. Details of QNAP's encryption key management mechanism and other caveats are available here.
On the hardware side, encryption support can come in the form of specialized hardware blocks in the SoC (common in ARM / PowerPC based NAS units). In x86-based systems, encryption performance depends on whether the AES-NI instruction set is available on the host CPU (not considering units based on the Intel Berryville platform). Current generation Atom CPUs (and even the Core i3 CPUs) don't support AES-NI. Fortunately, the Xeon E3-1225 used by QNAP in the TS-EC1279U-RP supports AES-NI, so the impact of enabling encryption should be minimal.
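Verifying AES-NI availability (and getting a feel for its impact) is straightforward on any Linux machine with shell access; a minimal sketch:

```bash
# Check whether the CPU advertises the AES-NI flag
grep -m1 -o aes /proc/cpuinfo

# Software-only AES throughput (the legacy, non-EVP code path does not use AES-NI)
openssl speed aes-256-cbc

# EVP code path, which picks up AES-NI acceleration when the CPU supports it
openssl speed -evp aes-256-cbc
```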
We created an encrypted volume and reran our Intel NASPT / robocopy benchmarks on a CIFS share. The results are presented in the graph below (with the unencrypted volume numbers for comparison purposes).
As expected, encryption does carry a performance hit, but the impact is not very large. Depending on the type of workload, the penalty varies between 7.2% and 38%. We will shortly be publishing a review of an Atom-based NAS where we will see that the penalty is much larger.
Miscellaneous Factors and Final Words
The QNAP TS-EC1279U-RP is a 12-bay NAS, and there are many applicable disk configurations (JBOD / RAID-0 / RAID-1 / RAID-5 / RAID-6 / RAID-10). Most users looking for a balance between performance and redundancy are going to choose RAID-5. Hence, we performed all our expansion / rebuild duration testing as well as power consumption recording with the unit configured in RAID-5 mode. The disks used for benchmarking (OCZ Vector 120 GB) were also used in this section. The table below presents the average power consumption of the unit as well as time taken for various RAID-related activities.
QNAP TS-EC1279U-RP RAID Expansion and Rebuild / Power Consumption

| Activity | Duration (HH:MM:SS) | Power Consumption (Outlet 1 / W) | Power Consumption (Outlet 2 / W) | Avg. Power Consumption (W) |
| --- | --- | --- | --- | --- |
| Diskless | - | 40.95 | 50.51 | 91.46 |
| Single Disk Initialization | - | 43.48 | 53.71 | 97.19 |
| RAID-0 to RAID-1 (116 GB to 116 GB / 1 to 2 Drives) | 00:12:41 | 44.99 | 54.22 | 99.21 |
| RAID-1 to RAID-5 (116 GB to 233 GB / 2 to 3 Drives) | 00:25:27 | 44.95 | 54.03 | 98.98 |
| RAID-5 Expansion (233 GB to 350 GB / 3 to 4 Drives) | 00:41:42 | 46.56 | 55.56 | 102.12 |
| RAID-5 Expansion (350 GB to 467 GB / 4 to 5 Drives) | 00:40:51 | 48.63 | 57.87 | 106.50 |
| RAID-5 Expansion (467 GB to 584 GB / 5 to 6 Drives) | 00:37:23 | 50.58 | 60.10 | 110.68 |
| RAID-5 Expansion (584 GB to 700 GB / 6 to 7 Drives) | 00:40:39 | 51.81 | 61.72 | 113.53 |
| RAID-5 Expansion (700 GB to 817 GB / 7 to 8 Drives) | 00:36:41 | 52.80 | 62.82 | 115.62 |
| RAID-5 Expansion (817 GB to 934 GB / 8 to 9 Drives) | 00:41:33 | 55.03 | 64.24 | 119.27 |
| RAID-5 Expansion (934 GB to 1051 GB / 9 to 10 Drives) | 00:42:17 | 57.66 | 65.39 | 123.05 |
| RAID-5 Expansion (1051 GB to 1168 GB / 10 to 11 Drives) | 00:42:34 | 60.44 | 66.70 | 127.14 |
| RAID-5 Expansion (1168 GB to 1285 GB / 11 to 12 Drives) | 00:44:23 | 61.72 | 67.49 | 129.21 |
| RAID-5 Rebuild (1168 GB to 1285 GB / 11 to 12 Drives) | 00:22:31 | 58.47 | 65.13 | 123.60 |
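For context on what the expansion rows above actually involve, QNAP's volumes (like those of most Linux-based NAS units) sit on md software RAID, where online capacity expansion is an add-and-grow operation followed by a file system resize. The commands below are a generic sketch of that process, not QNAP's internal procedure; the device names are assumptions:

```bash
# Add the new disk to the existing RAID-5 array (md0 and sdm are assumed names)
mdadm --add /dev/md0 /dev/sdm

# Grow the array to include the new member; md reshapes data across all drives
mdadm --grow /dev/md0 --raid-devices=12

# After the reshape completes, grow the ext4 file system into the new capacity
resize2fs /dev/md0
```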
Unlike Atom-based units, RAID expansion and rebuild don't seem to take progressively longer as the number of disks increases. Coming to the business end of the review, the pros and cons of the unit must be analyzed while keeping in mind that the unit comes in at less than $5000.
Pros:
- High end features such as ECC RAM for mission critical environments
- Upgradable with 10G network cards
- Minimal performance hit when encryption is enabled
- Affordable price tag considering available feature set
Cons:
- Firmware and UI need rework and updates for usability and feature set parity with competing NAS vendors (e.g., no automatic RAID level management, disks can't be shared across multiple volumes, and encryption is available only on a per-volume basis, not per-folder)
- Target market for this unit may find a CLI (command line interface) guide useful
- Firmware releases should go through more QA.
The last point was triggered by a new firmware update [3.8.2 Build 20130301] which we installed just as the review was about to go live. We tried to do some volume expansion experiments, but the unit became unresponsive twice during the process. We didn't encounter any such issues with the previous firmware release [3.8.1 Build 1205]. Strangely, the new firmware update got pushed through the NAS web UI but is not yet available on QNAP's website. I am willing to cut QNAP some slack here, but the firmware QA definitely needs to improve. On the hardware side, the unit could also do with some noise dampening (the fans run without modulation during the startup sequence, though that is probably not much of an issue when the unit is placed in a server room).
In closing, the balance of feature set and price tips the recommendation in QNAP's favour. More stable firmware would make the deal for prospective consumers even sweeter.