Original Link: https://www.anandtech.com/show/2372
Affordable storage for the SME, part one
by Johan De Gelas on November 7, 2007 4:00 AM EST, posted in IT Computing
Introduction
For years, centralized storage meant complex Fibre Channel Storage Area Networks (FC SANs), which were very expensive. Large enterprises were prepared to pay big premiums for such centralized storage networks, as storing valuable data scattered over hundreds of servers would cost them a lot more: the risk of losing data was higher, and decentralized storage meant a lot more work for the system administrators. As the necessary storage capacity doubled and still doubles every 18 months, scalability, not simplicity, was the priority. Hence, the expensive but scalable SANs paid for themselves over time by making the datacenter easier to maintain with fewer people and lower data loss risks.
However, for an SME (Small/Medium sized Enterprise), SANs were simply overpriced storage arrays. The SME has fewer servers that need access to shared storage, so complex switching fabrics with routing are usually unnecessary. That need for less capacity and more simplicity doesn't mean centralized storage cannot be a huge advantage for some SMEs; those that offer web services are especially interested in an affordable form of centralized storage.
We have been working with quite a few SMEs the past several years, and making storage more scalable is a bonus for those companies. However, it is not the main reason companies are looking into SANs. If your company depends on a web service, you want your server to be available around the clock. That means that you will almost certainly be looking towards clustering and failover techniques. These High Availability (HA) technologies - whether on a virtual (VMware HA, Xen HA) or a physical server - in many cases require a shared storage device to work well.
HA and easier storage maintenance are the two main reasons why affordable shared storage is desirable, even in an environment where only a few servers are necessary. VMware's VMotion is another reason why interest in centralized storage is increasing. VMotion is not really an alternative to the traditional failover and HA technologies, but it allows for hardware maintenance and server migration from one machine to another without any downtime. To make this work, you also need shared storage.
The SME's renewed interest in centralized storage has drawn the attention of the big storage vendors. Since 2006, HP, NetApp, Sun, IBM, Fujitsu-Siemens, and EMC have all launched quite a few product lines targeted at the SME. Many of these "SME products" start at a relatively low price, but a complete storage solution can still carry a very hefty price tag. It is not surprising that the SME product lines are in fact somewhat downsized high-end solutions if you consider that the SME market (about $1 billion) is probably only a small fraction of the $17 billion storage market (see IDC's 2006 report).
Anyway, the idea behind this article is not to discuss the technology and business trends in the professional IT market. There are enough articles covering that. As part of a larger project of helping the SMEs with their datacenter choices, we will try to find out which solutions offer good price/performance without omitting any critical features. If you are relatively new to storage, we'll give you a crash course.
Crash course in Storage and Storage Networks
Here's a quick introduction for those of you who are new to enterprise storage. We'll try to cover the most important concepts.
The first decision to make is what disk interface you are going to use:
- 15000 RPM SAS disks are bought for their robustness and low latency and are mainly used for disk intensive server applications. Transaction based applications in particular need this kind of hard disk. The biggest disks (300GB) consume a lot of power and are very expensive.
- 10000 RPM SAS disks offer higher capacities (up to 400GB) and are used in the same applications as their 15000 RPM brothers, but they are available at lower price points and have lower power consumption. Less intensive databases and web servers are their natural habitat.
- Nearline or "RAID enabled" SATA disks are more robust than normal SATA disks. With capacities of 750GB, SATA disks have pushed the 10000 RPM SCSI based disks out of the backup and fileserver market. With fewer disks needed and lower rotation speeds, power consumption is a lot lower.
Direct Attached Storage (DAS): thanks to the SAS protocol, you can attach two (but no more) servers to a DAS storage rack. Most of the time, you use a cable with an InfiniBand style (SFF-8470) connector carrying four 3Gb/s SAS lanes (12Gb/s in total) to attach your DAS rack directly to the storage controller card in your server. From the point of view of your server, a DAS rack is just one or more logical disks it has access to.
Network Attached Storage (NAS): basically the appliance version of a fileserver. As it is yet another device attached to your network, you can in theory access it with as many servers as you like. You use a UTP cable to attach the NAS to your network. To your server, a NAS behaves like a fileserver - a shared storage that is accessible via the network.
Storage Area Network (SAN): a number of disk arrays accessed by one or more servers, most of the time over a switched network. The servers that need access to the shared storage run "initiator" software to send off block requests to the "target" software running on the disk array. You could say that in this case the server is a "storage client", while the real server is the storage server on which the target software runs, and which will send back the requested blocks. From the server OS point of view, parts of the SAN are yet another logical disk it can access.
Above you can see how the Microsoft iSCSI target appears as a logical disk. The target is in fact running on the storage racks we tested, but to our test "client" OS it looks like a local disk.
Just like with DAS, access to a SAN happens via block transfers, not file transfers (as with a NAS). Implementing a SAN is not enough to give servers concurrent access to the same data. Only with a clustered file system that implements the necessary locking can the servers read and write the same data. Otherwise, each server gets access to a different part of the SAN, or it can only read data that it is not allowed to write to. In a way, a SAN is DAS with added network capabilities. SANs can be accessed over a TCP/IP Ethernet network (iSCSI) or an FC protocol based switched network (using fiber optic cabling most of the time), also called a "switched fabric".
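To make the initiator/target relationship more concrete, here is a minimal sketch of how a Windows server would attach to an iSCSI target using the iscsicli tool that ships with the Microsoft iSCSI initiator. The portal IP address and the target IQN below are hypothetical; in practice you would copy the IQN from the ListTargets output.
rem register the storage rack's iSCSI portal (IP address is an example)
iscsicli AddTargetPortal 192.168.1.50 3260
rem list the target IQNs the portal advertises
iscsicli ListTargets
rem quick login to the target we want (IQN is an example)
iscsicli QLoginTarget iqn.2007-11.lab.example:storage.disk1
After the login, the target simply shows up in Disk Management as an ordinary unformatted local disk, ready to be partitioned and formatted.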
A different low-cost approach
In a first attempt to find affordable storage solutions, we gathered two interesting storage systems in our lab. The two systems are not really competitors but representatives of two different approaches to deliver high performance, centralized-but-affordable storage.
The Promise VTRAK E310f on top, the Intel SSR212MC2 on the bottom
The first option is the Promise VTRAK E310f, a Fibre Channel based storage rack. Promise's roots are in low-cost ATA RAID controllers; recently Promise started to cater to the low-end and midrange enterprise storage market with a strong focus on keeping those racks affordable. The VTRAK E310f should thus give us a good idea of whether or not it is possible to build a high performance but affordable solution with FC SAN building blocks.
The second system is the Intel SSR212MC2, an industry standard server optimized for storage. The reseller can turn it into anything he likes: a NAS, DAS, an iSCSI SAN, or an FC SAN. In this article, we will use the SSR212MC2 as an iSCSI SAN and as (SAS) DAS. This will give us an idea of the advantages and disadvantages an iSCSI device built from industry standard parts has compared to an FC appliance.
One of the big advantages that both the Intel and the Promise approach offer is that you can use industry standard hard disks and no one requires you to buy disks from the SAN manufacturer. While it is normal that a manufacturer tries to avoid users plugging unreliable hardware into their systems, this argument doesn't hold water when it comes to hard disks. After all, most OEMs also buy from Seagate, Fujitsu, and others. The big OEMs will force you to use their disks in two ways: either the controller will check the vendor of the disk ROM, or you will get "dummy trays". These trays don't allow you to add hard disks; the "real" hot-swappable trays come with the hard disks you order.
We decided to see how big the difference is between a "normal" hard disk and a "storage vendor hard disk". Below you find the prices we could find for the end user (second to last column) and the price range we encountered when ordering extra disks with a storage system from Dell/EMC, HP, and IBM.
Hard Drive Price Comparison

| Hard drive speed & interface | Capacity | Retail pricing (US) | Storage vendor pricing (US) |
|------------------------------|----------|---------------------|-----------------------------|
| 15000 RPM SAS                | 147 GB   | $250-$300           | $370-$550                   |
| 10000 RPM SAS                | 300 GB   | $200-$300           | $600-$850                   |
| 10000 RPM FC                 | 300 GB   | $450-$500           | $600-$950                   |
| 7200 RPM SATA (Nearline)     | 500 GB   | $150-$200           | $500-$800                   |
One reason why the "proprietary" SATA drives are a lot more expensive is that they usually include an Active-Active Multiplexer (AAMux). AAMux technology lets single-ported SATA drives connect like native dual-ported SAS drives in enterprise storage systems that use SAS expanders, which is necessary if you want to mix SATA and SAS drives in the same enclosure. With an AAMux, two hosts can access a single SATA drive independently, each through its own SATA interface. However, it is very unlikely that the AAMux alone justifies a $400 premium.
Granted, the premiums that vendors charge for "certified" disks used to be a lot bigger. Still, if you need high capacity courtesy of numerous hard disks, these premiums can rapidly add up to a significant amount of money. Let's take a closer look at pricing on a few options.
Pricing, Continued
We wanted to calculate how much a small SAN with failover would cost. We assumed that four servers would share a dual controller SAN. We opted for a 16-port switch as we assume that additional servers will use this SAN in the future, and 16-port switches probably give the best port/price ratio. Note also that we can easily expand our 12 disk SAN with several JBODs if those servers need more disk capacity.
First, we checked out several tier one storage vendors. To keep things simple, we averaged the prices we encountered at Dell/EMC, IBM, and HP at the end of October 2007. The table below is not a precise calculation or a "best buy" recommendation; it is simply an estimate to give us a reasonable overview of the costs.
Several things make a typical FC SAN quite expensive. One of the most important is the high quality, very low latency FC switch (a Brocade SilkWorm for example). Secondly, the FC HBA required for each server that gets access to the SAN is rather expensive. Other small components also quickly push the cost higher: LC optic cables are still expensive, and each link between your switch and the storage rack needs a small form-factor pluggable (SFP) transceiver. These compact optical transceivers are yet another cost that is usually not included with your storage rack.
SFPs add to the price of the already expensive FC SAN
The result is that for a relatively simple HA SAN configuration with less than 1.7 TB of raw storage capacity, the total cost quickly rises to $35,000 or more. It is nearly impossible to get under $20,000, even without double path HA.
Let us compare this to a SAN based on a storage appliance that leaves all options open. We tried to keep the components the same as much as possible:
- A Brocade M4400 FC Switch
- Seagate ST3146755SS 146GB SAS 15K RPM hard drive
- FC HBA: Emulex LPe1150-F4
The idea is clear: you save a lot of money if you can pick your own switch, your own hard disks, and your own HBAs. In both configurations (HA and no HA) the Promise configuration is significantly less expensive (25-30% less) than a typical tier one configuration. Of course, it may take a bit more effort to put your configuration together depending on your skill. You also need a reliable reseller who can sell you everything, so you have one point of contact if something goes wrong. Even with these stipulations, you can save quite a bit of money.
If this is still too expensive, iSCSI comes to the rescue. iSCSI appliances are not much cheaper than FC appliances; in fact, in some cases they are priced almost as high as their FC counterparts. However, the pricing of switches, cables, and HBAs is significantly lower. That allows you to build a basic SAN for less than $10,000.
Intel's SSR212MC2 barebones starts at prices as low as $2500, bringing the price of a basic storage device down to roughly $3500. Naturally, you have to install the iSCSI software yourself. If you feel that's either too time consuming or too difficult, quite a few resellers offer complete ready-to-use iSCSI boxes based on the Intel SSR212MC2.
Promise VTRAK
The Promise VTRAK E310f 2U RBOD has room for up to 12 hot-swappable SAS and SATA hard drives with support for RAID levels 0, 1, 5, 6, 1E, and 50. Capacity scalability is not a problem: each RBOD supports up to four additional JBOD enclosures, which allows up to 60 hard drives. That is about 18TB using 15000 RPM 300GB SAS disks, or 45TB using 750GB SATA disks. If that is not enough, the 3U VTE610fD gives you 16 drives per enclosure and, in combination with 3U JBODs, allows up to 80 drives.
As long as you keep your FC SAN down to a few switches, it should be easy to set up and maintain. With 16-port FC switches, the cost of your SAN should stay reasonable and still give you a huge amount of storage capacity, depending on how many servers need access to the SAN. That is exactly the power of using FC: a few hundred TB of storage capacity is possible. The E310f supports two 4Gb/s FC host ports per controller and two controllers per storage rack, making "dual path" configurations possible. This in turn makes load balancing and failover possible, but not without drivers that understand there are multiple paths to the same target. Promise has drivers ready for Windows (based on the MPIO driver development kit) and is working on Linux multipath drivers.
The heart of the Promise VTRAK E310f RBOD is the Intel IOP341 processor (one core). This system-on-a-chip I/O processor is based on the low-power XScale architecture and runs at 1.2GHz, with a rather large 512KB L2 cache for an embedded chip. The XScale chip provides "pure hardware" RAID, including support for RAID 6; RAID 0, 1, 1E, 5, 10, 50, and 60 are also supported. By default Promise equips the E310f with 512MB of 533MHz DDR2 cache (expandable to a maximum of 2GB).
Each RBOD can use a dual active/active controller configuration (with failover/failback), or it can use a cheaper single controller configuration.
Intel SSR212MC2: ultra flexible platform
Whether you want an iSCSI target, a NAS, an iSCSI device that doubles as a NAS fileserver, an RBOD, or just a simple JBOD, you can build it with the Intel SSR212MC2. If you want to use it as an iSCSI device, you have several options:
- Using software like we did. You install SUSE SLES on the internal 2.5" SATA/SAS hard disk and make sure that the iSCSI daemon runs as soon as the machine boots (a configuration sketch follows below). If you are an OEM, you can buy Microsoft's Windows 2003 Storage Server with the Microsoft iSCSI target, or another third party iSCSI target.
- Using a SATA, IDE, or USB Disk on Module (DOM). If you don't want to administer a full OS, you buy a minimal one on a flash module that attaches to your IDE/USB/SATA connector via a converter that makes it appear as a disk.
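To give an idea of what the software route involves, here is a minimal sketch of an iSCSI Enterprise Target configuration as you would typically find it in /etc/ietd.conf on a SLES 10 installation. The IQN, the exported device, and the credentials are made up, and the exact service name used to start the target daemon at boot varies per distribution.
# /etc/ietd.conf - minimal example (IQN, device, and credentials are made up)
Target iqn.2007-11.lab.example:ssr212mc2.raid5
    # the logical drive presented by the RAID controller
    Lun 0 Path=/dev/sdb,Type=fileio
    # optional CHAP authentication (secret must be 12-16 characters)
    IncomingUser iscsiuser secretpass1234
Once the target daemon is (re)started, the exported LUN appears during discovery just like in the iscsicli sketch earlier. Note that Type=fileio goes through the Linux page cache, which may be related to the file system caching we observe later in this article.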
The superbly flexible 2U chassis contains an S5000PSL server board with support for two dual-core (51xx) or quad-core (53xx) Intel Xeon CPUs. In the front are twelve SATA/SAS hard disk bays controlled by the Intel RAID SRCSAS144E controller with 128MB of ECC protected DDR-400 RAM. This controller uses the older Intel IOP333 processor running at 500MHz. That was a small disappointment, as by the time the SSR212MC2 launched the more potent IOP341 was available at speeds up to 1.2GHz. That chip not only offers a higher clock, but it also has a lot more internal bandwidth (6.4GB/s vs. 2.7GB/s) and supports hardware enabled RAID 6. Intel's manual claims that a firmware update will enable RAID 6, but we fear that the 500MHz IOP333 might be slightly underpowered to perform RAID 6 quickly. (We'll test this in a later article.) Of course, nothing stops the OEM or you from using a different RAID card.
The S5000PSL provides dual Intel PRO/1000 gigabit Ethernet connections. As it allows you to have up to eight cores in your storage server, you can use this storage server as a regular server performing processing intensive tasks at the same time.
A single or dual redundant 1+1 850W power supply keeps everything powered while 10 hot-swappable fans keep everything cool. If you want to turn this into a simple JBOD, you can buy the SSR212MC2 without the motherboard. The highly integrated VSC410 controller on the enclosure management card works together with the SAS expander (PMC-Sierra PM8388 SXP) to offer 24 ports. You can daisy chain another JBOD onto the first one.
Configuration and benchmarking setup
Thanks to Loes van Emden and Robin Willemse of Promise (Netherlands) for giving us the chance to test the E310f. Billy Harrison and Michael Joyce answered quite a few of the questions we had, so we definitely want to thank them as well. Our thanks also go out to Poole, Frank, and Sonny Banga (Intel US) for helping us to test the Intel SSR212MC2. Last but not least, a big thanks to Tijl Deneut, who spent countless hours in the labs while we tried to figure out the best way to test these storage servers.
SAN FC Storage Server: Promise VTRAK E310f, FC 4Gb/s
Controller: IOP341 1.2 GHz, 512MB Cache
Disks: 8 Fujitsu MAX3073RC 73GB 15k RPM
DAS Storage Server: Intel SSR212MC2
Controller: SRCSAS144E 128MB Cache
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM
SAN iSCSI Storage Server: Intel SSR212MC2, 1Gb/s
Server configuration: Xeon 5335 (quad-core 2 GHz), 2GB of DDR2-667, Intel S5000PSL motherboard
Controller: SRCSAS144E via 1Gb/s Intel NIC, Firmware 1.03.00-0211 (Ver. 2.11)
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM
iSCSI Target: StarWind 3.5 Build 2007080905 or Microsoft iSCSI Target software (alias WinTarget) or the iSCSI software found on Linux SLES 10 SP1
Client Configuration
Configuration: Intel Pentium D 3.2GHz (840 Extreme Edition), Intel Desktop Board D955XBK, 2GB of DDR2-533
NIC (iSCSI): Intel Pro/1000 PM (driver version: 9.6.31.0)
iSCSI Initiator: Windows Microsoft Initiator 2.05
FC HBA: Emulex LightPulse LPe1150-F4 (SCSIport Miniport Driver version: 5.5.31.0)
IOMeter/SQLIO Setup
Your file system, partitioning, controller configuration, and of course disk configuration all influence storage test performance. We chose to focus mostly on RAID 5 as it is probably the most popular RAID level. We selected a 64KB stripe size as we assumed a database application that has to perform sequential and random reads and writes. As we test with SQLIO, Microsoft's I/O stress tool for MS SQL Server 2005, it is important to know that when SQL Server accesses the disks in random fashion, it does so in blocks of 8KB. Sequential accesses (read-ahead) can use I/O sizes from 16KB up to 1024KB, so a 64KB stripe size is a decent compromise.
Next, we aligned our testing partition to a 64KB offset with the diskpart tool. For historical reasons, Windows (XP, 2003, and older) puts the first sector of a partition on the 64th sector (it should start on the 65th or the 129th sector), which results in many unnecessary I/O operations and wasted cache slots on the controller. Windows Longhorn Server (and Vista) automatically uses 2048 sectors as the starting offset, so it will not have this problem. We then formatted the partition with NTFS and a 64KB cluster size (the first sector of the partition is the 129th sector, i.e. an offset of 128 sectors or 64KB). To get an idea of how much this type of tuning helps, take a look below. The non-tuned numbers use the "default Windows installation": 4KB clusters and a non-aligned partition (partition starts at the 64th sector).
All tests are done with a Disk Queue Length (DQL) of 2 per drive, so 16 outstanding I/Os in total for our eight-drive arrays. DQL indicates the number of outstanding disk requests plus the requests currently being serviced for a particular disk. A DQL that averages 2 per drive or higher means that the disk system is the bottleneck.
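For reference, this is roughly what the alignment and formatting steps look like on Windows Server 2003; the disk number and drive letter are examples and depend on your configuration.
rem inside diskpart (disk 1 and drive letter E: are examples)
diskpart
DISKPART> select disk 1
DISKPART> create partition primary align=64
DISKPART> assign letter=E
DISKPART> exit
rem format with a 64KB cluster size to match the stripe size
format E: /FS:NTFS /A:64K /Q
The align=64 parameter starts the partition on a 64KB boundary (sector 128), and /A:64K sets the NTFS allocation unit size to match the 64KB stripe size.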
As you can see, tuning the file system and partition alignment pays off.
The number of storage testing scenarios is huge: you can alter
- RAID level
- Stripe size
- Cache policy settings (write cache, read cache behavior)
- File system and cluster size
- Access patterns (different percentages of sequential and random access)
- Reading or writing
- Access block size (the amount of data that is requested by each access)
- iSCSI target - this is the software that receives the requests of the initiators and processes them
To keep the number of benchmarks manageable, we settled on the following:
- RAID 5 (most of the time, unless indicated otherwise)
- Stripe size 64KB (always)
- Always Adaptive Read Ahead and Write back
- NTFS, 64KB cluster size
- 100% random or 100% sequential
- 100% read, 100% write and 67% read (33% write)
- Access block size 8KB and 64KB
- iSCSI SLES, StarWind (not all tests), and MS iSCSI Target software
I/O Meter
IOMeter is an open source tool (originally developed by Intel) that can measure I/O performance in a large variety of ways: random, sequential, or a combination of the two; read, write, or a combination of the two; in blocks from a few KB to several MB; and so on. IOMeter generates the workload and measures how fast the I/O system handles it. To see how close we can get to the limits of our interfaces, we did one test with RAID 0.
Considering that one of our Fujitsu 15000 RPM SAS disks can do a bit more than 90MB/s, 718MB/s (eight disks striped) is about the maximum performance we can expect. The iSCSI and FC configurations come close to maxing out their interfaces, at 125MB/s and 400MB/s respectively.
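IOMeter is driven from a GUI, but once you have saved an access specification it can also be replayed unattended from the command line; the configuration (.icf) and result file names below are made-up examples.
IOmeter.exe /c raid5-random-8KB.icf /r results-raid5-random-8KB.csv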
Using RAID 0 for running databases is not a good practice, given the potential for data loss, so from now on we'll test with RAID 5. The next test is the fastest way to access the database: reading sequentially.
Again, the Intel IOP333 in our DAS does not let us down. Seven striped disks should achieve about 630MB/s, and our DAS configuration comes very close to this theoretical maximum. The interface speed bottlenecks the other setups. Only the StarWind target starts to show that it is a lower performance offering.
If we read randomly, the disk array has to work a lot harder.
The iSCSI StarWind target gives up, as it cannot cope with a random access pattern. It performed rather badly in other tests too. To reduce the number of tests, we did not include this iSCSI target in further testing.
This is where the iSCSI SLES target shines, delivering performance that is equal to DAS and FC setups. With random accesses, it is little surprise that the larger cache of the Promise VTRAK doesn't help. However, we would have expected a small boost from the newer Intel IOP341 used in the Promise Appliance.
Some applications write a lot to the disks. What happens if we do nothing but writing sequentially or randomly?
The large 512MB cache of the Promise VTRAK E310f pays off: it is capable of writing almost at the maximum speed its 4Gb/s interface allows. The smaller 128MB cache on the controller of our Intel SSR212MC2 makes it about 15% slower. Microsoft's iSCSI target is about 38% faster in sequential writes than the iSCSI target that comes with SUSE's SLES 10 SP1.
A similar advantage for the Microsoft iSCSI target exists in the random write benchmark. The way Microsoft's initiator sends off the blocks to the iSCSI target is apparently helping in this type of test. The VTRAK E310f is the winner again. This is clearly not a result of its faster interface, but probably a consequence of the newer Intel IOP processor.
An OLTP database and other applications will probably do a mix of both reading and writing, so a benchmark scenario with 66% reading and 33% writes is another interesting test.
In this case, the Linux iSCSI target is about 20% faster than the Microsoft iSCSI target. The Linux iSCSI target is quicker in random reads and in mixed read/write workloads, but a lot slower than the Microsoft target when doing nothing but writing. It will be interesting to research this further. Does Linux have a better I/O system than Windows, especially for reads, or is the SLES iSCSI target simply not well optimized for writing? Is using a Microsoft initiator a disadvantage for the Linux iSCSI target? These questions are out of the scope of this article, but they're interesting nonetheless.
The Promise VTRAK E310f has won most of the benchmarks thanks to the larger cache and newer IOP processor. We'll update our benchmarks as soon as we can use a newer RAID controller in our Intel system based on the IOP341.
SQLIO
SQLIO is a tool provided by Microsoft which can determine the I/O capacity of a given disk subsystem. It simulates, to some degree, how MS SQL Server 2000/2005 accesses the disk subsystem. The following tests use RAID 5. A typical SQLIO test command looks like this:
sqlio -s120 -b64 -LS -o16 -fsequential file.tst >> result.txt
We also ran tests as long as 1000 seconds, but there was very little difference compared with our standard 120 second runs.
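For completeness, the random tests we report below would look roughly like the commands that follow: -kR and -kW select read or write, and -b8 matches SQL Server's 8KB random I/O size. The test file and output names are the same placeholders as in the sequential example.
rem random read and random write in 8KB blocks (file names are placeholders)
sqlio -kR -s120 -b8 -LS -o16 -frandom file.tst >> result.txt
sqlio -kW -s120 -b8 -LS -o16 -frandom file.tst >> result.txt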
The SQLIO results mimic the I/O Meter results. The DAS configuration is limited by the maximum throughput of our disks, and the other configurations are limited by their interface speed.
The VTRAK E310f outruns the rest in the SQLIO sequential write test. An important reason is that as we write, parity blocks have to be recalculated and a fast 1.2 GHz IOP processor helps in this scenario.
Next, we tested with Random reads.
Surprisingly, the iSCSI SLES target is almost twice as fast as our DAS configuration. That makes the SLES target effectively faster than the "maximum performance" of the disk array, which is the result of rather clever caching, as we will show below.
The SQLIO random write was the only test where the Promise VTRAK E310f was a bit slower than the rest of the pack. We could not determine any reason for this.
Latency and Further Analyses
So let us delve a bit deeper into our SQLIO benchmarking. What kind of latency may we expect from these systems? In many cases latency (seek time + 1/2 rotation) will be the bottleneck; for a 15000 RPM disk, half a rotation takes 2ms and the average seek is roughly 3.5-4ms, so each random access costs somewhere around 5.5-6ms. To understand the behavior of our different storage systems better, we tested with both a 2GB and a 20GB file.
Bandwidth doesn't really get any lower when you access the hard disks sequentially in our configuration. This is a result of "zone recording": the tracks in the outside zone of a hard disk contain a lot more sectors (and thus data) than the inner tracks. As we only tested with a 20GB file on a total of 500GB (7x 73GB) of disk space, all disk activity took place in the fast outside zone of the disks.
Latency is very low as we don't need to move the head of the hard disk; most of the time the heads stay on the outside tracks. When we have to move the heads, they only have to make a very small move from one track to an adjacent one. Random access is a lot more interesting...
With a 20GB file, the chance that the actuator has to move the head to jump to the next random block is a lot higher than with a 2GB file. In addition, the head movements become longer and are no longer only short strokes.
It now becomes clear why the iSCSI SLES target performs so well in random reads. Look at the latency with the 2GB file; it is "impossibly low", as it is lower than that of the DAS configuration. This indicates that there is more cache activity going on than in the DAS configuration. Since both use the same RAID controller, this extra cache activity is not happening at the level of the RAID controller but at the OS level. Indeed, when we looked at the buffers of the SLES installation, we saw them grow quickly from a few KB to 1708MB. In other words, the majority of the 2GB of RAM in the Intel SSR212MC2 was caching the 2GB test file. Once we moved to a 20GB file, this Linux file system caching could no longer help and probably added latency instead of lowering it. The Microsoft iSCSI target software does not seem to use this kind of caching.
This has an interesting consequence: the iSCSI SLES target is very attractive if you want the best random performance on a relatively small database. In that case you should put as much memory as possible in your iSCSI storage rack. The other side of the coin is that once the cache is too small, performance drops quickly.
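You can observe this behavior yourself on the Linux target while a benchmark is running; a simple sketch, assuming nothing more than the standard tools of any recent distribution:
# on the SLES iSCSI target, while the client runs the 2GB random read test
watch -n 5 free -m
# the "buffers" and "cached" columns climb toward the size of the test file;
# with the 20GB file they simply level off at the amount of free RAM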
RAID 6?
As the Promise system was the only one with RAID 6, we did not put all of these results in graphs. Our testing shows that RAID 6 is, in almost every circumstance, about 5 to 10% slower than RAID 5. For many people that is a small price to pay, as a failed disk no longer means that the array is unprotected until a replacement disk is installed. When a RAID 5 array has to be rebuilt, the disks are accessed very intensively, and as such they are more prone to fail.
Management Interface
While it is not the focus of this article, we should mention that both the Intel storage server and the Promise VTRAK E310f run a web server that offers management access to the storage server configuration via your LAN or the internet. Promise provides a very extensive GUI that guides users through all the possible options, as well as a CLI and a menu driven CLU. The CLI and CLU can be accessed over a relatively fast 115200 bps serial connection. (We don't have fond memories of accessing the Cisco OS via a 9600 bps interface.)
You use the CLI or CLU to set up the password and the network IP, after which you can configure the disk array in a very nice GUI.
Besides diagnostics, disk array management, and user management, it is also possible to set up several other services, such as email alerts that warn you when one of the drives fails and tell you whether the hot spare has been used.
Intel's software is a bit more sober; you won't find red flashy lights going off on a picture of the rack if something is wrong. However, the Intel RAID web console does a great job of quickly showing all the technical data you need such as stripe size, caching policies, etc.
Conclusion so far
This was our second attempt (the first attempt can be seen here: Promise VTRAK J300s) at professional storage benchmarking. We want to remind our readers that the objective was not to compare the Intel SSR212MC2 and the Promise VTRAK E310f directly: the target markets are quite different, with only a moderate amount of overlap. The main reason we reviewed them together is that they are representative of affordable SAN storage arrays.
The Promise VTRAK E310f is a very attractive alternative to the expensive FC SANs of the big storage vendors. It targets medium sized enterprises, which will like the excellent storage capacity scalability via FC switches and JBODs. Promise keeps the total price of a small SAN lower thanks to the fact that you can choose which FC switch, HBAs, and hard drives you want to buy. The counterargument is of course that having only one vendor to blame for problems is easier, but incompatibility/interoperability problems are easy to avoid if you follow Promise's certification documents and guidelines. Promise's support might not be as luxurious as that of the big OEMs, which offer next business day on site support, but the support is free.
According to Promise you get 24/7 support (by phone only) which covers all Promise subsystems (M-Class, E-Class and J-Class). Email support is available five days a week (Monday - Friday). Support representatives can help on all subsystem related issues and/or questions and can also process RMAs (including advance replacements). Support is worldwide.
So where is the catch? Promise is relatively new to the SCSI/SAS/SAN world, which means its products lack some of the more advanced features that the well-established players provide. One of the very handy ones is the ability to make snapshots - backups without any interruption of service. More advanced failover capabilities are also limited to the Windows world. Promise still has some way to go before it can be an alternative to the big players for every storage buyer, but it will certainly attract some of the price conscious buyers.
The Intel SSR212MC2 naturally appeals to techies like us. At a very low price you get a NAS, an iSCSI SAN, and a "normal" server all in one. The caveat is that you need to know what you are doing: you must be capable of installing a Linux iSCSI target, for example. Without the proper knowledge, the price advantage will evaporate as you struggle to configure the system. It is also impossible to get the Microsoft target separately (it is only sold to OEMs), so you must have some Linux knowledge if you want to do it yourself. The alternative Windows iSCSI target (StarWind) we tried did not convince us so far, and the free MySAN iSCSI target is very limited.
That doesn't mean that this storage server is only suited for storage DIYers. At a slightly higher price, you can get it completely ready to deploy, with the iSCSI target configured and more. In that case, it is just a matter of checking how the reseller will support you, and you need a lot less (storage) knowledge to configure and troubleshoot. You can get this server with either Microsoft's iSCSI target or with user friendly (and quick to set up) Linux based iSCSI targets such as Open-E.
Promise VTRAK E310f Advantages
- Offers excellent all around performance...
- ... even with RAID 6!
- Attractive price for the storage rack...
- ... low price when you start building the complete SAN
- You are the one who decides which drives and switches you want to use
- Excellent Capacity scalability (thanks to JBODs) with little hassle
- Easy to use and rich web based management interface
- Low cost but 24/7 Support
Promise VTRAK E310f Disadvantages
- No multipath HA drivers for Linux yet
- Not suited for SMEs without any storage knowledge - you will want (expensive) onsite support in that case
- e610f (3U) probably has a better performance/capacity/price ratio
Intel SSR212MC2 advantages
- Very flexible: Can combine a NAS and iSCSI
- Very flexible part 2: Can combine a server and a storage server into one server
- Very low price, especially if you build it yourself
- You can get a fully loaded OEM version from various resellers - slightly more expensive but less knowledge required
- Support will depend on the reseller, which can be good news
Intel SSR212MC2 disadvantages
- Support could also be bad - it will depend on the reseller you choose
- Supports 32 drives at most (limited JBOD expansion)
- Hardware RAID 6 is questionable with the slightly older IOP processor
- Performance depends a lot on the chosen iSCSI target and configuration
So is iSCSI really an alternative to Fibre Channel? What about CPU load, network load, and TCP/IP offloading? We also need to explain some of the weird performance issues we encountered. However, as this article is already getting quite long, we decided to save the more in-depth testing for our next article.