Original Link: https://www.anandtech.com/show/21202/seagate-ironwolf-pro-22tb-hdd-capsule-review



Seagate's IronWolf Pro lineup of hard drives for network-attached storage units has consistently offered good value for money, particularly at the highest capacity points. I purchased two 22TB IronWolf Pro drives for production deployment late last year. As part of the burn-in testing prior to actual deployment, they were put through our evaluation routine for direct-attached storage drives in both internal and external (Thunderbolt 3 DAS) modes. This capsule review presents an overview of the performance you can expect from the drive in standalone, RAID 0, and RAID 1 modes.

Introduction and Product Specifications

Data storage requirements have kept increasing over the last several years. SSDs have taken over the role of the primary drive in most computing systems. However, when it comes to sheer bulk storage, hard drives (HDDs) continue to be the storage media of choice in areas dealing with large amounts of relatively cold data. The Seagate IronWolf Pro NAS hard drive family targets NAS units with up to 24 bays and is meant for creative professionals, SOHO, and small-to-medium enterprises. These CMR SATA drives have a workload rating of 550 TB/yr, an unrecoverable read error (URE) rate of 1 in 10^15 bits read, an MTBF of 1.2M hours, and a rated load/unload cycle count of 600K for the heads. The family carries a 5-year warranty. The 22TB version contains ten platters with an areal density of 1260 Gb/in². It has a 512MB DRAM cache. Acoustics range from 20 dBA to 34 dBA depending upon the operating mode.
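As a quick illustration of what the 1-in-10^15 URE rating implies at this capacity, the back-of-the-envelope Python sketch below estimates the odds of hitting an unrecoverable error during a single full-drive read (for example, during a RAID rebuild). The rating is a worst-case bound from the spec sheet rather than a measured rate, and the independence assumption is mine.

```python
import math

# Back-of-the-envelope estimate: odds of hitting an unrecoverable read error
# (URE) while reading the full 22TB drive, e.g. during a RAID rebuild.
# Assumes the quoted worst-case rate of 1 error per 10^15 bits read and
# statistically independent errors.

DRIVE_CAPACITY_TB = 22
BITS_PER_TB = 8e12          # 1 TB = 10^12 bytes = 8 x 10^12 bits (decimal TB)
URE_RATE = 1e-15            # unrecoverable errors per bit read (upper bound)

bits_read = DRIVE_CAPACITY_TB * BITS_PER_TB
expected_errors = bits_read * URE_RATE

# Probability of at least one URE over a full-drive pass (Poisson approximation)
p_at_least_one = 1 - math.exp(-expected_errors)

print(f"Bits read in a full pass : {bits_read:.3e}")
print(f"Expected UREs per pass   : {expected_errors:.3f}")
print(f"P(>= 1 URE in one pass)  : {p_at_least_one:.1%}")   # ~16% at the rated bound
```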

 

We put two IronWolf Pro 22TB NAS HDDs to the test in three distinct configurations:

- Standalone, connected internally to the SATA port of the DAS testbed
- RAID 0 in an external Thunderbolt 3 DAS enclosure
- RAID 1 in an external Thunderbolt 3 DAS enclosure

All systems were updated to the latest version of Windows 11 22H2. It must be noted that, for the last several years, Windows has been hampering the performance of removable storage drives connected via Thunderbolt / PCIe by disabling write caching completely. Therefore, the RAID 0 and RAID 1 configurations were evaluated in two modes each: one with write caching off (the default 'Quick removal' setting under Disk Properties > Policies), and another with write caching on (the 'Better performance' setting under Disk Properties > Policies).
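To get a feel for why the 'Quick removal' policy hurts, the sketch below times a sequential write with and without a flush/fsync after every block. It does not toggle the Windows policy itself; forcing a flush per block merely approximates write-through behavior, and the target path is a placeholder for a scratch file on the drive under test.

```python
import os
import time

# Rough illustration of the write-caching impact: flushing after every block
# serializes writes on platter latency, while letting the cache absorb them
# keeps the drive busy with large sequential transfers.

TARGET = r"E:\cache_test.bin"   # placeholder: scratch file on the DAS volume
BLOCK = b"\0" * (1 << 20)       # 1 MiB per write
BLOCKS = 256                    # 256 MiB total per pass

def write_pass(flush_every_block: bool) -> float:
    """Write BLOCKS blocks and return the achieved throughput in MB/s."""
    start = time.perf_counter()
    with open(TARGET, "wb") as f:
        for _ in range(BLOCKS):
            f.write(BLOCK)
            if flush_every_block:
                f.flush()
                os.fsync(f.fileno())    # force data out of the OS buffers
        f.flush()
        os.fsync(f.fileno())            # always flush once at the end
    elapsed = time.perf_counter() - start
    return (BLOCKS * len(BLOCK)) / elapsed / 1e6

for per_block in (False, True):
    label = "fsync per block" if per_block else "cached"
    print(f"{label:>16}: {write_pass(per_block):7.1f} MB/s")

os.remove(TARGET)
```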

The S.M.A.R.T. readouts for the different configurations are presented below.
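For readers who want to collect the same readouts programmatically, a minimal sketch using smartmontools' smartctl with JSON output is shown below. The device paths are assumptions; adjust them to match how the drives enumerate on the test system.

```python
import json
import subprocess

# Pull a few health-related fields from `smartctl --json -a <device>`
# (smartmontools 7.0+). Requires smartctl on the PATH and admin rights.

def smart_summary(device: str) -> dict:
    out = subprocess.run(
        ["smartctl", "--json", "-a", device],
        capture_output=True, text=True, check=False,  # non-zero exit can just be a warning flag
    )
    data = json.loads(out.stdout)
    return {
        "model": data.get("model_name"),
        "capacity_bytes": data.get("user_capacity", {}).get("bytes"),
        "temperature_c": data.get("temperature", {}).get("current"),
        "power_on_hours": data.get("power_on_time", {}).get("hours"),
        "smart_passed": data.get("smart_status", {}).get("passed"),
    }

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):   # assumed paths for the two IronWolf Pro units
        print(dev, smart_summary(dev))
```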

[Chart: Full System Drive Benchmark Bandwidth (MBps)]

Power Consumption

The power consumption of both the 5V and 12V rails was tracked using Quarch's HD Programmable Power Module with the disk connected to the legacy SATA HDD testbed. The graph below presents the recorded numbers while processing the CrystalDiskMark workload, followed by 5 minutes of idling.

The HDD specs do allow for peak currents of as much as 2A on the 12V rail, and we see that happening. However, for the vast majority of the workload, the drive's power consumption hovers around 7W, dropping to 2.8W under idling conditions. It is possible that the drive enters an even lower power state after extended idling.
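The sketch below turns those observations into rough annual energy figures. The duty cycle and electricity price are assumptions for illustration, not measurements.

```python
# Rough energy arithmetic from the observed figures: ~7 W while active,
# ~2.8 W at idle, with 12 V peaks of up to 2 A (24 W) during bursts.
# The duty cycle and electricity price below are assumptions, not measurements.

ACTIVE_W = 7.0
IDLE_W = 2.8
PEAK_12V_W = 12.0 * 2.0          # 24 W worst case on the 12 V rail alone

ACTIVE_FRACTION = 0.25           # assumed: drive busy ~6 hours/day
PRICE_PER_KWH = 0.15             # assumed electricity price, USD/kWh

avg_w = ACTIVE_FRACTION * ACTIVE_W + (1 - ACTIVE_FRACTION) * IDLE_W
kwh_per_year = avg_w * 24 * 365 / 1000

print(f"Average draw     : {avg_w:.2f} W")
print(f"Energy per year  : {kwh_per_year:.1f} kWh")
print(f"Cost per year    : ${kwh_per_year * PRICE_PER_KWH:.2f} per drive")
print(f"12 V peak budget : {PEAK_12V_W:.0f} W (relevant for multi-bay spin-up)")
```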

Concluding Remarks

While enterprise data storage requirements have skyrocketed over the last decade or so, the increase in peak HDD capacities has outpaced the requirements of consumer and SOHO scenarios. When 4TB and 6TB HDDs were the norm, I used to run 8-bay NAS units (a couple of which are still in use) with the drives configured in RAID 5, and rebuilds were not particularly pleasant. Since then, HDD capacities have kept increasing, but peak transfer rates have not scaled to match (they have yet to saturate even the SATA 6 Gbps interface). Performing RAID 5 / RAID 6 rebuilds with 10TB+ HDDs, while praying fervently that another disk in the array does not fail, is best avoided.

Thankfully, increased HDD capacities have made it feasible to operate drive arrays in RAID 10. For a 4-bay array, RAID 5 with 10TB HDDs would have yielded 30TB of usable storage with tolerance for a single disk failure (and the hope that a rebuild is never needed). With 22TB HDDs, RAID 10 provides 44TB of usable space and can tolerate two disk failures, as long as they are not in the same mirrored pair. Rebuilds involve copying data from the other drive in the mirrored pair and do not stress the rest of the drives in the array. That said, units with 6+ bays remain useful in home and SOHO scenarios for SSD caching and for running other applications (such as VMs) off SSD volumes.
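The capacity arithmetic behind those numbers is summarized in the short sketch below. It is a minimal model; real NAS implementations reserve additional space for filesystem metadata, so treat the results as raw upper bounds.

```python
# Usable capacity for the 4-bay scenarios discussed above.

def raid_usable_tb(level: str, drives: int, size_tb: float) -> float:
    if level == "RAID5":
        return (drives - 1) * size_tb          # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * size_tb          # two drives' worth of parity
    if level == "RAID10":
        return (drives // 2) * size_tb         # striped mirrors
    raise ValueError(f"unsupported level: {level}")

print("4 x 10TB RAID5  :", raid_usable_tb("RAID5", 4, 10), "TB usable, tolerates 1 failure")
print("4 x 22TB RAID10 :", raid_usable_tb("RAID10", 4, 22), "TB usable, tolerates 1-2 failures")
```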

Currently, the 22TB drives seem to offer the best $/TB metric at the higher capacity points, particularly after the launch of the 24TB drives (which command a premium, as expected). I purchased mine for $400 each, and the price continues to fluctuate around that mark. Having seen a few reports on various forums of IronWolf Pro 22TB HDDs arriving dead and needing to be RMAed, I was a bit apprehensive at first. Fortunately, the drives I purchased completed their burn-in process without any hiccups. Performance is nothing to write home about, but I will be configuring them in RAID 1 for now, with plans to shift to RAID 10 later.
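For completeness, the cost-per-terabyte arithmetic is trivial to script; substitute current street prices (they fluctuate) before drawing comparisons with other capacity points.

```python
# Simple $/TB figure for the drives as purchased.

def usd_per_tb(price_usd: float, capacity_tb: float) -> float:
    return price_usd / capacity_tb

print(f"22TB @ $400: ${usd_per_tb(400, 22):.2f}/TB")   # ~$18.18 per TB
```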

 
