Original Link: https://www.anandtech.com/show/7605/synology-rs10613xs-10gbe-10bay-rackmount-review



Introduction and Setup Impressions

Our enterprise NAS reviews have focused on Atom-based desktop form factor systems until now. These units have enough performance for a moderately sized workgroup, but lack some features essential in the enterprise space, such as acceptable performance with encrypted volumes. A number of readers have mailed in asking for more coverage of the market straddling the high-end NAS and the NAS / SAN (storage area network) hybrid space. Models catering to this space come in the rackmount form factor and are based on more powerful processors such as the Intel Core or Xeon series.

Synology's flagship in this space over the last 12 months or so has been the RS10613xs+. Based on the Intel Xeon E3-1230 processor, this 2U rackmount system comes with twelve hot-swappable bays (two of which are dedicated to caching) and 8 GB of ECC RAM (expandable to 32 GB). Both SATA and SAS disks in the 3.5" as well as 2.5" form factors are supported. In addition to the ten main bays, the unit also has space for two 2.5" drives behind the display module; SSDs can be installed in these bays to serve as a cache.

The specifications of the RS10613xs+ are as below:

Synology RS10613xs+ Specifications
Processor: Intel Xeon E3-1230 (4C/8T, 3.2 GHz)
RAM: 8 GB DDR3 ECC (upgradable to 32 GB)
Drive Bays: 10x 3.5"/2.5" SATA / SAS 6 Gbps HDD / SSD + 2x 2.5" SSD cache bays
Network Links: 4x 1 GbE + 2x 10 GbE (add-on PCIe card)
USB Slots: 4x USB 2.0
SAS Expansion Ports: 2x (compatible with RX1213sas)
Expansion Slots: 2x (10 GbE card occupies one)
VGA / Console: Reserved for maintenance
Full Specifications Link: Synology RS10613xs+ Hardware Specs

Synology is well regarded in the SMB space for the stability as well as the wealth of features offered on its units. The OS (DiskStation Manager - DSM) is very user-friendly. We have been following the evolution of DSM over the last couple of years. The RS10613xs+ is the first unit that we are reviewing with DSM 4.x, and we can say with conviction that DSM only keeps getting better.

Our only real complaint about DSM has been the lack of seamless storage pools with the capability to use a single disk across multiple RAID volumes (the type that Windows Storage Spaces provides). This is useful in scenarios with, say, four-bay units, where the end user wants some data protected against a single disk failure and other data protected against the failure of two disks. This is not a problem with the RS10613xs+, since it has plenty of bays to create two separate volumes in this scenario. In any case, this situation is more common in the home consumer segment than in the enterprise segment at which the RS10613xs+ is targeted. A sketch of the kind of flexibility we mean appears below.
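
For readers unfamiliar with the concept, here is a minimal sketch of that flexibility using stock Linux mdadm with hypothetical device names (this is a generic illustration of the technique, not Synology's implementation): by partitioning the same four disks into two slices each, one array can be protected against a single disk failure while the other is protected against two.

# Partition each of the four (hypothetical) disks into two slices first, then:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2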

The front panel has ten 3.5" drive bays laid out in a three-row by four-column grid, with a two-bay-high LCM display panel (with buttons for administrative tasks) occupying the rightmost column. This panel can be pulled out to reveal the two caching SSD bays. On the rear side, we have redundant power supplies (integrated 1U PSUs of 400W each), a console and VGA port (reserved for maintenance rather than end-user use), 4x USB 2.0 ports, 4x 1 GbE ports (all native to the unit's motherboard) and two SAS-out expansion ports to connect up to eight RX1213sas expansion units. There is also space for a half-height PCIe card; our review unit was outfitted with a dual 10 GbE SFP+ card.

On the software side, not much has changed in the DSM 4.x UI compared to older versions, but there is definitely a more polished look and feel. For example, there is now drag and drop support when assigning disks to different volumes. These types of minor improvements contribute to a better user experience all around. The setup process is a breeze, with the unit's configuration page available on the network even in diskless mode. The unit also ships with a built-in copy of the OS, which can be installed in case the unit / setup computer is not connected to the Internet / Synology's servers. A Quick Start Wizard prompts the user to create a volume to start using the unit.

An interesting aspect of the Storage Manager is the SSD cache for boosting read performance. Automatic generation of file access statistics on a given volume helps in deciding the amount of cache that might benefit the system. Volumes are part of RAID groups, and all volumes in a given RAID group are at the same RAID level. In addition, the Storage Manager also provides for configuration of iSCSI LUNs / targets and management of the disk drives (S.M.A.R.T. and other similar disk-specific aspects).

RAID expansions / migrations as well as rebuilds are handled in the Storage Manager too. The other interesting aspect is the Network section, where it is possible to bond all six network ports together in 802.3ad dynamic link aggregation mode. SSH access is available (as in older DSM versions). A CLI guide for working on the RAID groups / volumes in an SSH session would be a welcome complement to the excellent web UI; a taste of what is already possible over SSH appears below.
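
Since DSM is Linux-based and builds its RAID groups on top of Linux md, standard tools already allow read-only inspection in an SSH session. The two commands below are a minimal sketch; the md device name is an assumption and will vary with the unit's configuration.

# Show all md arrays and their sync / rebuild status
cat /proc/mdstat
# Detailed view of one RAID group (device name is hypothetical)
mdadm --detail /dev/md2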

In the rest of this review, we will describe our testbed setup and present results from our evaluation of single client performance with CIFS and NFS shares as well as iSCSI LUNs. Encryption support is also evaluated for CIFS shares, and a section on performance with Linux clients is included. Multi-client performance is evaluated using IOMeter on CIFS shares. In the final section, we talk about power consumption, RAID rebuild durations and other miscellaneous aspects.



Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as the one used for units in the tower form factor. Performance evaluation is done under both single and multiple client scenarios. In the multiple client scenario, we run tests with all available network ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier. This is the first 10 GbE-equipped NAS we have evaluated, and special mention must be made of the Netgear ProSafe GSM7352S-200 in our setup: it provided the infrastructure necessary to properly evaluate the capabilities of the Synology RS10613xs+.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed and rackmount NAS evaluation:

Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to use our existing NAS testbed (tower form factor) and power measurement unit easily alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in previous reviews, RAID rebuilds with them take days to complete; with a large number of bays, using hard disks would have been very cumbersome. In addition, hard disks simply don't bring out the performance potential of rackmount units. Therefore, evaluation of the Synology RS10613xs+ was done by setting up a RAID-5 volume with ten OCZ Vector 120 GB SSDs. Tests were also done using the Intel SSD 520 240 GB disks that Synology supplied along with the review unit. However, to keep benchmark results consistent across different NAS units, the results we present are those obtained with the OCZ Vector SSDs.

In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOZone benchmarks in Linux, as detailed in a later section. For evaluation of multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how the performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.



Single Client Performance - CIFS and iSCSI on Windows

The single client CIFS performance of the Synology RS10613xs+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark, run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that the client's storage system shortcomings wouldn't affect the benchmark results. Note that all shares / iSCSI LUNs were created in a RAID-5 volume.
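
For those looking to replicate the approach, the line below is a minimal sketch of a robocopy transfer from a RAM disk to a NAS share; the drive letter, share and folder names are hypothetical placeholders rather than our exact benchmark parameters. /E copies the full directory tree (including empty folders), while /NFL and /NDL suppress per-file and per-directory logging so that console output doesn't skew the timing.

robocopy R:\testdata \\NAS_IP\benchmark\testdata /E /NFL /NDL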

Synology RS10613xs+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.
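
As an aside, mapping the LUN does not have to be done through the iSCSI initiator GUI; Windows' built-in iscsicli utility can script the same steps. The portal address and target IQN below are hypothetical.

iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2000-01.com.synology:rs10613xs.target-1

Once logged in, the LUN shows up as a raw disk that can be brought online, initialized and formatted through Disk Management or diskpart.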

Synology RS10613xs+ iSCSI Performance - Windows

Encryption Support Evaluation:

Consumers looking for encryption capabilities can opt to encrypt an iSCSI LUN with TrueCrypt or some built-in encryption mechanism in the client OS. However, if requirements dictate that the data must be shared across multiple users / computers, relying on encryption in the NAS is the best way forward. Most NAS vendors use the industry-standard 256-bit AES encryption algorithm. One approach is to encrypt only a particular shared folder, while the other is to encrypt the full volume. Some NAS vendors support both approaches in their firmware, but Synology opts only for the former. Details of Synology's encryption strategy can be found in this tutorial.

On the hardware side, encryption support can come in the form of specialized hardware blocks in the SoC (common in ARM / PowerPC based NAS units). In x86-based systems, accelerated encryption support depends on whether the AES-NI instruction set is available on the host CPU (not considering units based on the Intel Berryville platform). Fortunately, the Xeon CPU used in the Synology RS10613xs+ does support AES-NI, so we can expect the performance loss due to enabling encryption to be minimal.
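
Verifying this is straightforward from any Linux shell (including, we would assume, an SSH session on the NAS itself, as long as the kernel exposes the CPU flags): the aes flag in /proc/cpuinfo indicates AES-NI support.

# Prints "aes" if the CPU advertises AES-NI support
grep -m1 -ow aes /proc/cpuinfo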

We enabled encryption on a CIFS share and repeated our Intel NASPT / robocopy benchmarks. The results are presented in the graph below (with the unencrypted folder numbers included for comparison).

Synology RS10613xs+ Encryption Performance - Windows

As expected, encryption carries almost no performance hit. In a couple of cases, the numbers even favor the encrypted configuration, which goes to show that the bottleneck in those cases lies on the disk or network side rather than in the RAID and encryption-related computation on the NAS CPU.



Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

Note that these are slightly different from the ones we used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, and this is evident in some of the detailed results.

Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units with similar configuration.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

Some scenarios exhibit client-side caching effects here as well, and these are evident in the detailed results.

The IOZone CSV output can be found here for those interested in the exact numbers.



Multi-Client Performance - CIFS

We put the Synology RS10613xs+ through some IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the unit is subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures.

We put the NAS through this evaluation in multiple network configurations. In one, we teamed just two of the 1 GbE ports and used the others as redundant links (with the 10 GbE ports disconnected); in another, we teamed the two 10 GbE ports alone; finally, we teamed everything together to provide a link theoretically capable of up to 24 Gbps. The graphs below present the results.
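
For reference, IOMeter's multi-client mode works by running its dynamo workload generator on each VM and pointing it at the central IOMeter controller; the hostnames below are hypothetical.

dynamo -i iometer-controller -m vm01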

Synology RS10613xs+ Multi-Client CIFS Performance - 100% Sequential Reads

Synology RS10613xs+ Multi-Client CIFS Performance - Max Throughput - 50% Reads

Synology RS10613xs+ Multi-Client CIFS Performance - Random 8K - 70% Reads

Synology RS10613xs+ Multi-Client CIFS Performance - Real Life - 65% Reads

Readers interested in the actual values can refer to our evaluation metrics tables, available here (two of the 1 Gbps ports teamed, others left unconnected), here (the two 10 Gbps ports teamed, others left unconnected) and here (a 24 Gbps uplink, with all available network ports teamed together).

The graphs for the QNAP TS-EC1279U-RP as well as the Synology DS1812+ are also presented for reference, but do remember that the QNAP unit had twelve drives in RAID-5 compared to ten here. The DS1812+ was evaluated with hard drives in RAID-5 in its eight bays. In addition, neither of the other units was equipped with 10 GbE links. With speeds reaching up to 800 MBps in RAID-5 for certain access patterns, the RS10613xs+ is, by far, the fastest NAS we have evaluated in our labs yet. Synology claims speeds of up to 2000 MBps, and this should definitely be possible with other RAID configurations and specific access patterns.



Miscellaneous Factors and Final Words

The Synology RS10613xs+ is a 10-bay NAS, and there are many applicable disk configurations (JBOD / RAID-0 / RAID-1 / RAID-5 / RAID-6 / RAID-10). Most users looking for a balance between performance and redundancy will choose RAID-5. Hence, we performed all our expansion / rebuild duration testing as well as power consumption recording with the unit configured in RAID-5 mode. The disks used for benchmarking (OCZ Vector 120 GB) were used in this section as well. The table below presents the average power consumption of the unit as well as the time taken for various RAID-related activities.

RS10613xs+ RAID Expansion and Rebuild / Power Consumption
Activity | Duration (HH:MM:SS) | Outlet 1 (W) | Outlet 2 (W) | Total (W)
Diskless | - | 52.9 | 67.4 | 120.3
Single Disk Initialization | - | 46.5 | 61.61 | 108.11
RAID-0 to RAID-1 (116 GB to 116 GB / 1 to 2 drives) | 00:30:05 | 44.4 | 59.37 | 103.77
RAID-1 to RAID-5 (116 GB to 233 GB / 2 to 3 drives) | 00:37:53 | 49.82 | 65.91 | 115.73
RAID-5 Expansion (233 GB to 350 GB / 3 to 4 drives) | 00:24:10 | 54.42 | 70.98 | 125.4
RAID-5 Expansion (350 GB to 467 GB / 4 to 5 drives) | 00:21:40 | 57.61 | 74.29 | 131.9
RAID-5 Expansion (467 GB to 584 GB / 5 to 6 drives) | 00:21:10 | 61.1 | 78.29 | 139.39
RAID-5 Expansion (584 GB to 700 GB / 6 to 7 drives) | 00:21:10 | 63.77 | 81.23 | 145
RAID-5 Expansion (700 GB to 817 GB / 7 to 8 drives) | 00:20:41 | 66.8 | 85 | 151.8
RAID-5 Expansion (817 GB to 934 GB / 8 to 9 drives) | 00:22:41 | 67.92 | 86.16 | 154.08
RAID-5 Expansion (934 GB to 1051 GB / 9 to 10 drives) | 00:25:11 | 69.34 | 87.36 | 156.7
RAID-5 Rebuild (1168 GB to 1285 GB / 9 to 10 drives) | 00:19:33 | 59.78 | 76.6 | 136.38

Unlike with Atom-based units, RAID expansion and rebuild times don't grow progressively longer as the number of disks increases.

Coming to the business end of the review, the Synology RS10613xs+ manages to tick all the right boxes for its market segment. Support for both SAS and SATA disks ensures compatibility with the requirements of a wide variety of SMBs and SMEs. We have not even covered some of the exciting SMB-targeted features in DSM, such as Synology High Availability (which uses a dedicated second unit as a seamless failover replacement) and official support for multiple virtualization solutions including VMware, Citrix and Hyper-V.

A couple of weeks back, Synology introduced the follow-up SATA-only RS3614xs+ with 12 bays and slots for up to two 10 GbE NICs. Compared to the advertised 2000 MBps for the RS10613xs+, the RS3614xs+ can go up to 3200 MBps and 620K IOPS. Given Synology's commitment to this lineup, SMBs looking for enterprise features in their storage server would do little wrong in going with Synology's xs+ series as the perfect mid-point between a NAS and a SAN.
