Original Link: https://www.anandtech.com/show/9519/advatronix-nimbus-400-file-server-review
Advatronix Nimbus 400 File Server Review
by Ganesh T S on August 12, 2015 8:00 AM EST - Posted in
- NAS
- storage server
- Avoton
- Advatronix
Introduction and Testing Methodology
The increasing role of computers and electronic communication has resulted in the need to manage large amounts of data. This need has been felt across a wide spectrum of consumers ranging from home users to big enterprises. In the home / small business space, one option is to use network attached storage units that are ready to use out of the box. Sometimes, it is also necessary to have a server with added flexibility to store the data. This is where storage servers based on Windows or Linux distros come into play. These servers can either be bought as an appliance or assembled in a DIY fashion. Today, we will be looking at a system based on the former approach - the Advatronix Nimbus 400.
Advatronix is a US-based vendor of computing server solutions. We reviewed their Intel Xeon-based Cirrus 1200 solution last year. The Nimbus 400 is a server solution targeting a completely different market segment. Based on the Intel Avoton (Atom) platform, the lineup's members target scenarios where the primary need is a file server and the general computing workload is not expected to be taxing. That said, the Nimbus 400 ticks the feature checkboxes that consumers usually expect from enterprise-level file server platforms:
- ECC RAM
- IPMI 2.0 with iKVM for headless operation and full management control over the network
- TPM (Trusted Platform Module) provision
The Nimbus 400 provides plenty of customization options:
- Choice of OS: Diskless w/o OS, CentOS 7.1, FreeNAS, Windows Home Server 2011, Windows Server 2012 R2 Essentials and Windows Server 2012 R2 Standard
- SoC: Intel Atom C2550 (4C/4T @ 14W TDP) or Intel Atom C2750 (8C/8T @ 20W TDP)
- DRAM: 2GB to 64GB of DDR3 ECC RAM @ 1600 MHz
- PSU: 250W or 660W Modular Gold Rated
- Fixed / OS Drives: 1x or 2x RAID-1 160GB SSD
- Data Drives: 1x - 4x 3.5" NAS Drives (2, 3 or 4TB) or 4x 3.5" Enterprise Drives (2 or 4TB)
The table below compares our review configuration against other systems that have been evaluated by us using the same methodology (and hence, serve as comparison points).
Comparative File Server Configurations

| Aspect | Advatronix Nimbus 400 | ASRock Rack C2750D4I + U-NAS NSC-800 |
|---|---|---|
| Form Factor | 4-bay mini-tower / mITX motherboard | 8-bay mini-tower / mITX motherboard |
| Platform | Intel Avoton C2550 | Intel Avoton C2750 |
| CPU Configuration | 4C/4T Silvermont x86 cores, 2 MB L2, 14W TDP, 2.4 GHz (Turbo: 2.6 GHz) | 8C/8T Silvermont x86 cores, 4 MB L2, 20W TDP, 2.4 GHz (Turbo: 2.6 GHz) |
| SoC SATA Ports | 2x SATA III (for two hot-swap bays), 4x SATA II (none used) | 2x SATA III (for two hot-swap bays), 4x SATA II (for one OS drive) |
| Additional SATA Ports | Marvell SE9172 (2x), Marvell SE9230 (4x) (for two fixed OS drives and two hot-swap bays) | Marvell SE9172 (2x) (for two hot-swap bays), Marvell SE9230 (4x) (for four hot-swap bays) |
| I/O Ports | 4x USB 3.0, 2x USB 2.0, 1x D-Sub, 2x RJ-45 GbE LAN, 1x RJ-45 IPMI LAN, 1x COM1 serial port | 3x USB 2.0, 1x D-Sub, 2x RJ-45 GbE LAN, 1x RJ-45 IPMI LAN, 1x COM1 serial port |
| Expansion Slots | 1x PCIe 2.0 x8 (used by Renesas uPD720201 USB 3.0 host controller) | 1x PCIe 2.0 x8 (unused) |
| Memory | 4x 4GB DDR3-1333 ECC UDIMM (Kingston KVR13E9/8HM, 9-9-9-24) | 2x 4GB DDR3-1333 ECC UDIMM (Samsung M391B5273DH0-YH9) |
| Data Drives | 4x OCZ Vector 128 GB | 8x OCZ Vector 128 GB |
| Chassis Dimensions | 270mm x 200mm x 302mm | 316mm x 254mm x 180mm |
| Power Supply | 250W internal PSU | 400W internal PSU |
| Diskless Price (when built) | USD 767 | USD 845 |
Evaluation Methodology
A file server can be used for multiple purposes, unlike a dedicated NAS. Evaluating a file server with our standard NAS testing methodology wouldn't do justice to the eventual use-cases and would tell the reader only part of the story. Hence, we adopt a hybrid approach in which the evaluation is divided into two parts - one as a standalone computing system, and the other as a storage device on a network.
In order to get an idea of the performance of the file server as a standalone computing system, we boot up the unit with a USB key containing an Ubuntu-on-the-go installation. The drives in the bays are configured in an mdadm RAID-5 array. Selected benchmarks from the Phoronix Test Suite (i.e., those relevant to the usage of a system as a file server) are processed after ensuring that any test utilizing local storage (disk benchmarks, in particular) points to the mdadm RAID-5 array. Usage of the Phoronix Test Suite allows readers to have comparison points for the file server against multiple systems (even those that haven't been benchmarked by us).
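For readers looking to replicate the setup, a minimal sketch of the array creation is given below. The device names and mount point are assumptions for illustration; they are not necessarily the ones used on our testbed.

```python
# Minimal sketch of the mdadm RAID-5 setup used for the Phoronix Test Suite runs.
# Device names and the mount point are illustrative assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

data_disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # four hot-swap bay drives (assumed names)

# Create a 4-disk RAID-5 array, format it, and mount it.
run(["mdadm", "--create", "/dev/md0", "--level=5", "--raid-devices=4"] + data_disks)
run(["mkfs.ext4", "/dev/md0"])
run(["mkdir", "-p", "/mnt/raid5"])
run(["mount", "/dev/md0", "/mnt/raid5"])

# Disk-oriented Phoronix Test Suite profiles should then be pointed at /mnt/raid5
# (for example, by installing and running the tests from a directory on the array).
```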
As a storage device on a network, there are multiple ways to determine the performance. One option would be to repeat all our NAS benchmarks on the system, but that would take too much time for a system that we are already evaluating as a standalone computer. On the other hand, it is also important to look beyond numbers from artificial benchmarks and see how a system performs in terms of business metrics. SPEC SFS 2014 (https://www.spec.org/sfs2014/) comes to our aid here. The benchmark tool is best suited for the evaluation of SANs, but it also helps us gauge the effectiveness of the file server as a storage node in a network. SPEC SFS 2014 has been developed by the IOZone folks, and covers evaluation of the filer in specific application scenarios such as the number of virtual machines that can be run off the filer, the number of simultaneous databases that can be sustained, the number of video streams that can be simultaneously recorded, and the number of simultaneous software builds that can be processed.
Our SPEC SFS 2014 setup consists of an SMB share on the file server under test, connected over an Ethernet network to our NAS evaluation testbed outlined below. Further details about the SPEC SFS 2014 workloads will be provided in the appropriate section.
AnandTech NAS Testbed Configuration

| Component | Details |
|---|---|
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
| Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
The above testbed runs 10 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 10 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
Setup Impressions and Platform Analysis
The Nimbus 400 chassis looks solid and well-built. The hands-on experience doesn't disappoint either. The ventilated front panel design with curved edges provides a premium look to the system.
The side panels have ventilation support at the bottom to allow the air to be drawn in by the fans placed inside. The PSU is mounted at the top end of the chassis and the motherboard at the bottom end. The rear panel shows a clean motherboard face plate with a serial port above the VGA port, a management LAN port with two USB 2.0 ports and two GbE LAN ports next to it. At the far end, we see two USB 3.0 ports from what is obviously a PCIe card.
Removing the side panels allows us to see two fixed SATA drives (Intel 3Gbps 120GB SSDs) mounted on either side of the 4-bay drive cage. The cables are tucked neatly. The first photo below shows the mounting of the fan directly beneath the drive cage. This fan pulls the air in from the front and down through the openings in the drive cage (visible in one of the gallery photos further down) and then pushes it out over the motherboard and through the ventilation holes in the rear as well as the sides. This creates airflow over the passive heatsink on top of the SoC. All in all, this looks like a very nifty design that ought to keep the temperatures down while also keeping the system relatively quiet (the fan is not even visible from the outside).
The other side panel reveals the PCIe card that serves up the four USB 3.0 ports. Two ports are directly on the card, while the other two are enabled by headers (allowing the ports to be placed in the front panel).
The drive trays don't give us any cause for complaint - they support both 2.5" and 3.5" drives, and have an intuitive removal mechanism with a distinctive blue tab. It would have been nice to have a screwless drive tray design, but the ones that come with the Nimbus 400 serve their purpose well. The gallery below provides some more photographs of the Nimbus 400 chassis and internals.
We had no trouble accessing the server using the management LAN interface. The gallery below shows some of the options available over IPMI.
Our review unit came with Windows Server 2012 R2 Essentials pre-installed on a hardware RAID-1 volume. Even though the boot process showed the Advatronix logo, it didn't take long for us to find that the internal motherboard was the ASRock Rack C2550D4I. The diagram below shows the layout of the motherboard. It is the same as that of the ASRock Rack C2750D4I that we reviewed in a DIY configuration earlier.
The photographs above confirm that all four hot-swap bays are served by SATA III 6 Gbps connectors (two of which come directly from the SoC). The Marvell RAID-1 volume for the OS uses two of the four SATA III ports enabled by the SE9230. This setup ensures that there are no bottlenecks in accessing the storage drives. The PCIe slot is occupied by the USB 3.0 card. Despite the available x8 link, the card uses only one lane. This means that the expected bandwidth numbers may not be attained if two or more USB 3.0 devices are concurrently active.
Performance Metrics - Phoronix Test Suite
The file server's BIOS settings allow it to boot off a USB key. We had no trouble doing so with a portable installation of Ubuntu 14.04 (kernel version 3.16).
Database Benchmarks
The first test we look at involves determining the time taken to perform 12,500 record insertions into an indexed database. We use SQLite v3.7.3. SQLite performance depends to a large extent on the capabilities of the CPU. Note that the C2750D4I system has a 20W TDP, more cores and a larger L2 cache compared to the C2550D4I in the Nimbus 400. Benchmarks from other systems can be viewed on OpenBenchmarking.org.
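A rough idea of what this test measures can be had from the sketch below: it times a batch of insertions into an indexed SQLite table. The record count mirrors the description above, but the schema and commit behaviour are assumptions rather than the exact SQL used by the pts/sqlite profile.

```python
# Times 12,500 insertions into an indexed SQLite table (illustrative schema and
# commit behaviour; not the exact SQL used by the pts/sqlite test profile).
import sqlite3, time

conn = sqlite3.connect("bench.db")
cur = conn.cursor()
cur.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
cur.execute("CREATE INDEX idx_payload ON records (payload)")

start = time.time()
for i in range(12500):
    cur.execute("INSERT INTO records (payload) VALUES (?)", (f"record-{i}",))
    conn.commit()   # committing each insert individually (an assumption about the profile)
elapsed = time.time() - start

print(f"12,500 indexed insertions in {elapsed:.2f} s")
conn.close()
```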
The pgbench database benchmark records the transaction rate for database operations using PostgreSQL. Unlike the insertions-only SQLite test, pgbench is based on TPC-B, running five SELECT, UPDATE and INSERT commands per transaction.
Benchmarks from other systems can be viewed on OpenBenchmarking.org.
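To make the workload concrete, the sketch below expresses one TPC-B-like pgbench transaction (three UPDATEs, one SELECT and one INSERT against the standard pgbench_* tables) using psycopg2. The connection string is a placeholder; pgbench itself drives many such transactions concurrently.

```python
# One TPC-B-like pgbench transaction (3 UPDATEs, 1 SELECT, 1 INSERT) expressed
# with psycopg2. The DSN is a placeholder; pgbench issues these at high rates
# against its pgbench_* tables.
import random
import psycopg2

conn = psycopg2.connect("dbname=pgbench user=postgres")  # placeholder connection string

def tpcb_transaction(cur, scale=1):
    aid = random.randint(1, 100000 * scale)
    bid = random.randint(1, scale)
    tid = random.randint(1, 10 * scale)
    delta = random.randint(-5000, 5000)
    cur.execute("UPDATE pgbench_accounts SET abalance = abalance + %s WHERE aid = %s", (delta, aid))
    cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
    cur.execute("UPDATE pgbench_tellers SET tbalance = tbalance + %s WHERE tid = %s", (delta, tid))
    cur.execute("UPDATE pgbench_branches SET bbalance = bbalance + %s WHERE bid = %s", (delta, bid))
    cur.execute("INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) "
                "VALUES (%s, %s, %s, %s, CURRENT_TIMESTAMP)", (tid, bid, aid, delta))

with conn, conn.cursor() as cur:   # commits on successful exit
    tpcb_transaction(cur)
conn.close()
```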
Web Server Benchmarks
The NGINX and Apache benchmarks record the number of static web page requests that can be serviced in a given time interval. They give an idea of the load that could be handled if a given system were configured as a web server. The test load consists of a total of 500K requests (for NGINX) / 1M requests (for Apache), with 100 of them concurrent at any time.
Benchmark numbers for both of these are available on OpenBenchmarking.org (NGINX, Apache).
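As an illustration of this request pattern (not the actual test harness), the sketch below fires a fixed number of requests at a static page with a set concurrency level and reports requests per second. The URL and request counts are placeholders.

```python
# Illustrative static-page load generator: N requests at a fixed concurrency,
# reporting requests/second. URL and counts are placeholders; the actual PTS
# profiles use dedicated load-generation tools against a local static page.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://127.0.0.1/index.html"   # assumed local static page
TOTAL_REQUESTS = 10000
CONCURRENCY = 100

def fetch(_):
    with urllib.request.urlopen(URL) as resp:
        resp.read()

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(fetch, range(TOTAL_REQUESTS)))
elapsed = time.time() - start

print(f"{TOTAL_REQUESTS / elapsed:.0f} requests/second at concurrency {CONCURRENCY}")
```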
TCP Loopback
The efficiency of the networking stack in the system (not to be confused with the hardware network adapter itself) can be determined by measuring loopback TCP performance. We record the time taken to transfer 10GB of data via loopback.
Given that we have the same networking stack for a given OS release across different hardware configurations, the efficiency is going to vary based solely on the CPU capabilities again. Benchmarks from other systems can be viewed on OpenBenchmarking.org.
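A simplified version of such a measurement is sketched below: it streams a fixed amount of data from a client socket to a receiver thread over 127.0.0.1 and reports the elapsed time. The 1GB payload and port number are arbitrary choices for a quick illustration (the actual test moves 10GB).

```python
# Simplified loopback TCP throughput check: stream data to 127.0.0.1 and time it.
# Payload size and port are arbitrary; the PTS loopback test transfers 10GB.
import socket, threading, time

PORT = 50007
TOTAL_BYTES = 1 * 1024**3          # 1 GB for a quick run
CHUNK = bytes(64 * 1024)

def receiver(ready):
    srv = socket.socket()
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    received = 0
    while received < TOTAL_BYTES:
        data = conn.recv(1024 * 1024)
        if not data:
            break
        received += len(data)
    conn.close(); srv.close()

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start(); ready.wait()

cli = socket.socket()
cli.connect(("127.0.0.1", PORT))
start = time.time()
sent = 0
while sent < TOTAL_BYTES:
    cli.sendall(CHUNK)
    sent += len(CHUNK)
cli.close(); t.join()
elapsed = time.time() - start

print(f"Transferred {sent / 1024**3:.1f} GB over loopback in {elapsed:.2f} s")
```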
CacheBench
CacheBench is an artificial benchmark to determine the performance of the cache and DRAM components in a system. It consists of three profiles - reads, writes and read/modify/writes. The bandwidth is recorded for each profile, with bigger numbers indicating better performance.
The numbers depend on the internal cache access speeds as well as the speed of the DRAM in the system. Benchmarks from other systems can be viewed on OpenBenchmarking.org.
Stream
System memory is tested using STREAM, a simple synthetic benchmark designed to measure sustainable memory bandwidth (in MB/s) and a corresponding computation rate for four simple vector kernels (Copy, Scale, Add and Triad).
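For reference, the four kernels are trivial vector operations over large arrays; a NumPy rendition is sketched below. The array size is an arbitrary choice, and a NumPy version will not match the tuned C binary used by the test profile, so treat it purely as an illustration of what is being measured.

```python
# NumPy rendition of the STREAM kernels (Copy, Scale, Add, Triad). Array size is
# arbitrary; the real benchmark is a tuned C/OpenMP binary, so absolute numbers differ.
import time
import numpy as np

N = 50_000_000                      # elements per array (~400 MB per float64 array)
a = np.random.rand(N)
b = np.random.rand(N)
c = np.empty_like(a)
scalar = 3.0

def timed(label, nbytes, fn):
    start = time.time()
    fn()
    elapsed = time.time() - start
    print(f"{label:6s}: {nbytes / elapsed / 1e6:.0f} MB/s")

timed("Copy",  2 * 8 * N, lambda: np.copyto(c, a))                 # c = a
timed("Scale", 2 * 8 * N, lambda: np.multiply(b, scalar, out=c))   # c = scalar * b
timed("Add",   3 * 8 * N, lambda: np.add(a, b, out=c))             # c = a + b
timed("Triad", 3 * 8 * N, lambda: np.add(b, scalar * a, out=c))    # c = b + scalar * a
                                                                    # (temporary adds extra traffic vs. the C kernel)
```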
7-Zip Compression
The 7-Zip compression benchmark records the MIPS for the compression mode. This is the same benchmark that we use in the evaluation of mini-PCs, except that this is based on the Linux version. Higher MIPS ratings correspond to better performance, and the numbers are primarily based on the performance of the CPU in the system.
Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.
Linux Kernel Compilation
The timed Linux kernel compilation benchmark records the time taken to build the Linux 3.18 kernel. It is a good multi-discipline benchmark, stressing multiple aspects of the system including the DRAM, CPU and, to a certain extent, even the storage.
Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.
C-Ray
C-Ray is a simple raytracer designed to evaluate the floating point performance of a CPU. This is a multi-threaded test, and the time taken to complete the routine is recorded.
Benchmark numbers for other systems can be viewed on OpenBenchmarking.org.
Performance Metrics - Storage Subsystem
In the previous section, we looked at various benchmarks covering databases, web servers, and general memory and CPU performance. For a file server, storage performance is of paramount importance, since the main expectation from the system is writing to and reading from a disk volume protected against disk failure by some sort of resiliency configuration. In this section, we use Ubuntu 14.04 and mdadm to configure the disks in the hot-swap drive bays as a RAID-5 volume. Selected benchmarks from the Phoronix Test Suite are run with the RAID-5 volume as the target disk.
AIO Stress
Our first test in the storage benchmark suite is the AIO Stress PTS test profile. It is an asynchronous I/O benchmark, and our configuration tests random writes to a 2048MB test file using a 64KB record size, enabling apples-to-apples comparison with the other results reported to OpenBenchmarking.org. Note that the Nimbus 400 has only four disks, while the ASRock C2750D4I system was tested with eight drives.
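A crude, synchronous approximation of this access pattern is sketched below: 64KB records written at random offsets within a 2048MB file. The real test profile uses the Linux asynchronous I/O interface, so this only conveys the shape of the workload; the file path is an assumption.

```python
# Synchronous approximation of the AIO Stress random-write pattern: 64KB records
# written at random offsets within a 2048MB file on the RAID-5 mount point.
# The real test uses Linux AIO; the target path is an assumed mount point.
import os, random, time

PATH = "/mnt/raid5/aio-test.bin"    # assumed mount point of the mdadm array
FILE_SIZE = 2048 * 1024 * 1024
RECORD = 64 * 1024
record = os.urandom(RECORD)

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY)
os.ftruncate(fd, FILE_SIZE)

offsets = list(range(0, FILE_SIZE, RECORD))
random.shuffle(offsets)

start = time.time()
for off in offsets:
    os.pwrite(fd, record, off)
os.fsync(fd)
elapsed = time.time() - start
os.close(fd)

print(f"Random 64KB writes: {FILE_SIZE / elapsed / 1024**2:.1f} MB/s")
```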
FS-Mark
FS-Mark is used to evaluate the performance of a system's file-system. The benchmark involves determination of the rate of processing files in a given volume. Different test profiles are used - processing 1000 files of 1MB each, processing 5000 files of 1MB each using four threads, processing 4000 files of 1MB each spread over 32 sub-directories and finally, 1000 files of 1MB each without using sync operations to the disk. The processing efficiencies are recorded in the graphs below.
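The first of those profiles is straightforward to approximate: write 1000 files of 1MB each, syncing each file to disk, and report the achieved files-per-second rate. A sketch along those lines is given below; the target directory is an assumption.

```python
# FS-Mark-style approximation: write 1000 x 1MB files with fsync and report files/s.
# The target directory is an assumption (a directory on the mdadm RAID-5 volume).
import os, time

TARGET_DIR = "/mnt/raid5/fsmark"
NUM_FILES, FILE_SIZE = 1000, 1024 * 1024
payload = os.urandom(FILE_SIZE)

os.makedirs(TARGET_DIR, exist_ok=True)
start = time.time()
for i in range(NUM_FILES):
    path = os.path.join(TARGET_DIR, f"file-{i:04d}.dat")
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())    # the "no sync" variant of the test skips this call
elapsed = time.time() - start

print(f"{NUM_FILES / elapsed:.1f} files/second")
```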
PostMark
This benchmark simulates small-file testing similar to the tasks endured by web and mail servers. This test profile performs 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes.
Numbers from the evaluation of other systems can be found on OpenBenchmarking.org.
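PostMark's transaction mix (reads, appends, creations and deletions over a pool of small files) can be approximated along the lines of the sketch below. The file count, transaction count and size range follow the description above, while the working directory and the exact operation mix are simplifying assumptions.

```python
# PostMark-style approximation: a pool of 500 files (5KB-512KB), then 25,000
# transactions mixing reads, appends, creations and deletions. The operation
# mix of the real tool is simplified here; the directory is an assumption.
import os, random, time

WORK_DIR = "/mnt/raid5/postmark"
NUM_FILES, TRANSACTIONS = 500, 25000
MIN_SIZE, MAX_SIZE = 5 * 1024, 512 * 1024

os.makedirs(WORK_DIR, exist_ok=True)
files = []
for i in range(NUM_FILES):
    path = os.path.join(WORK_DIR, f"pm-{i}.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(random.randint(MIN_SIZE, MAX_SIZE)))
    files.append(path)

start = time.time()
for _ in range(TRANSACTIONS):
    op = random.choice(("read", "append", "create", "delete"))
    if op == "read":
        with open(random.choice(files), "rb") as f:
            f.read()
    elif op == "append":
        with open(random.choice(files), "ab") as f:
            f.write(os.urandom(4096))
    elif op == "create":
        path = os.path.join(WORK_DIR, f"pm-new-{time.time_ns()}.dat")
        with open(path, "wb") as f:
            f.write(os.urandom(random.randint(MIN_SIZE, MAX_SIZE)))
        files.append(path)
    elif op == "delete" and len(files) > 1:
        os.remove(files.pop(random.randrange(len(files))))
elapsed = time.time() - start

print(f"{TRANSACTIONS / elapsed:.0f} transactions/second")
```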
Both FS-Mark and PostMark seem to be CPU-bound, rather than just disk-bound, according to the above results.
NAS Performance - SPEC SFS 2014
Evaluation of the system as a storage node on the network can be done using multiple approaches. As a simple NAS accessed from a single client, Intel NASPT would work very well. There are other artificial benchmarking programs such as IOMeter and IOZone (all of which are used in our dedicated NAS reviews). However, when it comes to file servers used in business scenarios, business metrics make more sense. For example, a database administrator might wonder how many simultaneous databases could be sustained on a given machine. An administrator in a software company might want to know how many simultaneous software builds could be processed on the machine if it were used as a storage node. SPEC SFS 2014 allows us to evaluate systems based on such business metrics.
Prior to discussing the various business scenarios, let us take a look at the test setup (including details of the testbed and how the file server itself was configured).
Solution Under Test Bill of Materials
- Advatronix Nimbus 400 (ASRock Rack C2550D4I with 16GB RAM)
- AnandTech NAS Testbed (64GB RAM, 1GB to each member VM)
- Netgear GSM7352S Ethernet Switch
Component Software
- Advatronix Nimbus 400 system running Windows Server 2012 R2 Essentials
- Load generators running on AnandTech NAS Testbed (10x Windows 7 VMs in a Windows Server 2008 R2 Hyper-V Installation)
Storage and File-Systems
- Advatronix Nimbus 400 - 4x OCZ Vector 128GB SSDs : Storage Spaces with Parity Space
- AnandTech NAS Testbed - NTFS partitions created at OS install time on OCZ Vertex 4 64GB SSDs
Transport Configuration
- Advatronix Nimbus 400 - 2x 1GbE LAN ports in 802.3ad LACP to Netgear GSM7352S
- AnandTech NAS Testbed - 11x 1GbE LAN Ports to Netgear GSM7352S (1x management, 1x to each of 10 VMs)
- All SMB benchmark traffic flowed through the Netgear GSM7352S network switch
The four business metrics that we will be looking at today include:
- Database
- Software Build
- Video Data Acquisition (VDA)
- Virtual Desktop Infrastructure (VDI)
The database and software build categories are self-explanatory. The VDA profile refers to usage of a storage node as a recording target for streaming video (usually from IP cameras). The VDI profile refers to the number of virtual desktops / virtual machines that can be supported using the file server as a storage node for the virtualization infrastructure.
Database
The following graphs show the requested and achieved op rates for the database workload. Note that beyond four databases, the gap between the two is more than 10% - this means that the storage system is unable to support more than four databases concurrently. In all the workloads, it is the latency that decides suitability, not the available bandwidth.
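To make the pass/fail criterion concrete, the sketch below walks through a hypothetical set of requested and achieved op rates and flags the load points where the shortfall exceeds 10%. The numbers are illustrative, not our measured results.

```python
# Illustration of the 10% criterion used to read the SPEC SFS 2014 graphs.
# The requested/achieved op rates below are hypothetical, not measured values.
requested = [500, 1000, 1500, 2000, 2500, 3000]   # op/s asked of the filer
achieved  = [500,  998, 1490, 1985, 2210, 2300]   # op/s actually delivered (hypothetical)

for load, (req, ach) in enumerate(zip(requested, achieved), start=1):
    shortfall = (req - ach) / req
    status = "OK" if shortfall <= 0.10 else "NOT SUPPORTED"
    print(f"Load point {load}: requested {req}, achieved {ach} "
          f"({shortfall:.1%} short) -> {status}")
```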
The SPEC SFS 2014 benchmark also provides a summary file for each workload which contains data additional to whatever is graphed above. The summary for the database workload is available here.
Software Build
The software build workload simulations suggest that handling more than 7 concurrent software builds off the NAS would not be a good experience.
The report summary for the software build workload is available here.
Video Data Acquisition
The Advatronix Nimbus 400 has no trouble with handling up to 10 data sources in the video data acquisition benchmark profile.
The report summary for the VDA workload is available here.
Virtual Desktop Infrastructure
VDI again presents a big challenge. While the 8-bay U-NAS build could not handle even one virtual desktop, the Nimbus 400 starts off OK. However, using a Windows Storage Spaces volume as the VM data store proves to be a stumbling block when attempting to run two or more virtual machines.
The report summary for the VDI workload is available here.
Miscellaneous Aspects and Concluding Remarks
Low power consumption and the ability to keep the drives cool under stress are desirable characteristics in a file server. In order to evaluate these aspects, we subject the system (running Windows Server 2012 R2 Essentials) to a stability test using AIDA64. For stressing the disks (the Storage Spaces volume built from the drive bays), we use Microsoft's DiskIO with the pre-defined worst-case SATA patterns issued randomly with verification.
We find that the CPU doesn't throttle and the cores stabilize around 48°C, with the motherboard at 51°C. During this process, we also recorded the power consumption at the wall: the maximum was around 67 W, while the idling power consumption was around 40 W. Note that this is a 14W TDP SoC (compared to the 20W TDP version we had in the U-NAS build).
The Advatronix Nimbus 400 is the second of three file servers that we have evaluated with Ubuntu 14.04 and Windows Server / Storage Spaces. While the Phoronix Test Suite benchmarks have been chosen for their relevance to file server workloads, it appears that the SPEC SFS 2014 workloads do not reflect the typical use-cases for which people adopt the Storage Spaces feature. We are still optimizing the testbed for SPEC SFS 2014, and we plan to evaluate other NAS operating systems used in DIY builds with the same approach down the road.
Coming to the business end of the review, we have a few things to discuss. The chassis is solid and worthy of praise. The thermal solution is top notch, and the choice of a USB 3.0 PCIe card alleviates one of the shortcomings of the Intel Avoton platform / ASRock Rack C2550D4I motherboard. Advatronix has taken care to ensure that the hot-swap bays are all connected to SATA 6 Gbps ports in the system. The only complaint that one could raise here is the sharing of a PCIe 2.0 x1 link amongst four USB 3.0 ports. Performance-wise, the Intel Avoton SoC is a mature and known quantity. The choice of a Windows Storage Spaces volume with simple parity is not efficient from a performance perspective. However, people opting for Windows Server as the OS often have other compelling reasons to do so.
The build presented in this review has a diskless cost of around USD 767. Including a license for Windows Server 2012 R2 Essentials (our review configuration) drives up the price to USD 1493. The Intel Avoton platform has proved very popular for systems with file serving and / or NAS duties. Consumers considering the Avoton option (either build or purchase) would do well to take a closer look at the Advatronix Nimbus 400.