Rick83 - Wednesday, September 4, 2013 - link
I think the Re at least is only relevant when you are space or controller constrained, as otherwise getting a second cheaper disk is probably going to give better speed and reliability on average.
Generally, I'd have preferred a comparison with the cheaper drives, as I don't see the point of spending more on something that will probably have the same observed failure rates in real usage, and will saturate Gbit LAN when streaming.
Of course, if you commit to only a 2-bay NAS, then it might pay off to go with disks with slightly tighter tolerances and more thorough QA, but once you hit 4+ bays, there's rarely a reason to not just throw redundancy at the problem.
VMguy - Wednesday, September 4, 2013 - link
Excellent review. Very helpful for allowing us to select drives to target specific workloads in smaller (or budget constrained) environments.
Are you planning to continue this style of review with other ESATA/SAS drives such as the Constellation.2? Those drives seem to enjoy wider OEM support and are in the same price range as the Se/Re.
Thanks!
VMguy - Wednesday, September 4, 2013 - link
Er, sorry. That should have read Constellation ES.3.
edlee - Wednesday, September 4, 2013 - link
I wish you had done a temperature torture test; I would have loved to see the results.
arthur449 - Wednesday, September 4, 2013 - link
Running the hard drive(s) at temperatures beyond their stated maximum simply decreases their lifespan; it won't cause a dramatic failure or lead to an escape scenario for the magic smoke within the drive. At least, not for the duration that Ganesh T S devoted to this comparison.
tuxRoller - Wednesday, September 4, 2013 - link
I thought Google's data showed this (higher temperature implies shorter lifespan) to be false?
bobbozzo - Thursday, September 5, 2013 - link
Google said that temperature variations WITHIN A NORMAL DATACENTER ENVIRONMENT did not noticeably affect drive failure rates, i.e., none were overheating.
dingetje - Wednesday, September 4, 2013 - link
Would like to see the platter count of the 4TB Red confirmed.
ganeshts - Wednesday, September 4, 2013 - link
Confirmed to be four 1 TB platters.
dingetje - Thursday, September 5, 2013 - link
Thanks Ganesh.
Arbie - Wednesday, September 4, 2013 - link
Ignorant here, but I want to raise the issue. In casual research on a home NAS w/RAID I ran across a comment that regular drives are not suitable for that service because of their threshold for flagging errors. IIRC the point was that they would wait longer to do so, and in a RAID situation that could make eventual error recovery very difficult. Drives designed for RAID use would flag errors earlier. I came away mostly with the idea that you should only build a NAS / RAID setup with drives (e.g. the WD Red series) designed for that.
Is this so?
fackamato - Wednesday, September 4, 2013 - link
Arbie, good point. You're talking about SCT ERC. Some consumer drives allow you to alter that timeout, some don't.
brshoemak - Wednesday, September 4, 2013 - link
A VERY broad and simplistic explanation is that "RAID enabled" drives will limit the amount of time they spend attempting to correct an error. The RAID controller needs to stay in constant contact with the drives to make sure the array's integrity is intact.
A normal consumer drive will spend much more time trying to correct an internal error. During this time, the RAID controller cannot talk to the drive because it is otherwise occupied. Because the drive is no longer responding to requests from the RAID controller (as it's now doing its own thing), the controller drops the drive out of the array - which can be a very bad thing.
Different ERC (error recovery control) methods like TLER and CCTL limit the time a drive spends trying to correct the error so it will be able to respond to requests from the RAID controller and ensure the drive isn't dropped from the array.
Basically a RAID controller is like "yo dawg, you still there?" - With TLER/CCTL the drive's all like "yeah I'm here" so everything is cool. Without TLER the drive might just be busy fixing the toilet and takes too long to answer so the RAID controller just assumes no one is home and ditches its friend.
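A toy way to picture that timeout race, with made-up numbers standing in for the real controller and firmware values:

```python
# Toy model of why a non-ERC drive gets dropped from an array.
# All timing numbers are illustrative assumptions, not vendor specs.

CONTROLLER_TIMEOUT_S = 8   # how long the controller waits before giving up on a drive (assumed)
ERC_CAP_S = 7              # TLER/CCTL-style cap on internal error recovery (assumed)
DEEP_RECOVERY_S = 60       # how long a desktop drive might grind away on a bad sector (assumed)

def time_until_drive_answers(erc_cap_s=None):
    """An ERC drive gives up early and reports the error; a consumer drive keeps retrying."""
    return min(DEEP_RECOVERY_S, erc_cap_s) if erc_cap_s else DEEP_RECOVERY_S

for label, cap in (("TLER/CCTL drive", ERC_CAP_S), ("plain consumer drive", None)):
    answers_after = time_until_drive_answers(cap)
    verdict = "stays in the array" if answers_after <= CONTROLLER_TIMEOUT_S else "dropped from the array"
    print(f"{label}: responds after {answers_after}s -> {verdict}")
```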
tjoynt - Wednesday, September 4, 2013 - link
brshoemak, that was the clearest and most concise (not to mention funniest) explanation of TLER/CCTL that I've come across. For some reason, most people tend to confuse things and make it more complicated than it is.
ShieTar - Wednesday, September 4, 2013 - link
I can't really follow that reasoning, maybe I am missing something. First off, error checking should in general be done by the RAID system, not by the drive electronics. Second off, you can always successfully recover the RAID after replacing one single drive. So the only way to run into a problem is not noticing damage to one drive before a second drive is also damaged. I've been using cheap drives in RAID-1 configurations for over a decade now, and while several drives have died in that period, I've never had a RAID complain about not being able to restore.
Maybe it is only relevant on very large RAIDs seeing very heavy use? I agree, I'd love to hear somebody from AT comment on this risk.
DanNeely - Wednesday, September 4, 2013 - link
"you can always successfully recover the RAID after replacing one single drive."This isn't true. If you get any errors during the rebuilt and only had a single redundancy drive for the data being recovered the raid controller will mark the array as unrecoverable. Current drive capacities are high enough that raid5 has basically been dead in the enterprise for several years due to the risk of losing it all after a single drive failure being too high.
Rick83 - Wednesday, September 4, 2013 - link
If you have a home usage scenario though, you can schedule surface scans to run every other day; in that case this becomes essentially a non-issue. At worst you'll lose a handful of KB or so.
And of course you have backups on a separate array to cover anything going wrong.
Of course, going RAID 5 beyond 6 disks is being slightly reckless, but that's still 20TB.
By the time you manage that kind of data, ZFS is there for you.
Dribble - Wednesday, September 4, 2013 - link
My experience for home usage is that RAID 1, or no RAID at all plus regular backups, is best. RAID 5 is too complex for its own good and never seems to be as reliable or to repair like it's meant to. Because data is spread over several disks, if it gets upset and goes wrong it's very hard to repair and you can lose everything. Also, because you think you are safe, you don't back up as often as you should, so you suffer the most.
RAID 1 or no RAID means a single disk has a full copy of the data, so it is most likely to work if you run a disk repair program over it. No RAID also focuses the mind on backups, so if it goes, chances are you'll have a very recent backup and lose hardly any data.
tjoynt - Wednesday, September 4, 2013 - link
++ this too. If you *really* need volume sizes larger than 4TB (the size of a single drive or RAID-1), you should bite the bullet and get a pro-class RAID-6 or RAID-10 system, or use a software solution like ZFS or Windows Server 2012 Storage Spaces (don't know how reliable that is though). Don't mess with consumer-level striped-parity RAID: it will fail when you most need it. Even pro-class hardware fails, but it does so more gracefully, so you can usually recover your data in the end.
Gigaplex - Wednesday, September 25, 2013 - link
Avoid Storage Spaces from Windows. It's an unproven and slow "re-imagination" of RAID, as Microsoft likes to call it. The main selling point is the flexibility of adding more drives, but that feature doesn't work as advertised because it doesn't rebalance. If you avoid adding more drives over time it has no benefits over conventional RAID, is far slower, and has had far less real-world testing on it.
Bob Todd - Monday, September 9, 2013 - link
For home use I've gone from RAID 5 to pooling + snapshot parity (DriveBender and SnapRAID respectively). It's still one big ass pool so it's easy to manage, I can survive two disks failing simultaneously with no data loss, and even in the event of a disaster where 3+ fail simultaneously I'll only lose whatever data was on the individual disks that croaked. Storage Spaces was nice in theory, but the write speed for the parity spaces is _horrendous_, and it's still striped so I'd risk losing everything (not to mention expansion in multiples of your column size is a bitch for home use).
coolviper777 - Tuesday, October 1, 2013 - link
If you have a good hardware RAID card, with BBU and memory, and decent drives, then I think RAID 5 works just fine for home use.
I currently have a RAID 5 array using a 3Ware 9560SE RAID card, consisting of 4 x 1.5TB WD Black drives. This card has battery backup and onboard memory. My RAID 5 array works beautifully for my home use. I ran into an issue with a drive going bad. I was able to get a replacement, and the rebuild worked well. There's an automatic volume scan once a week, and I've seen it fix a few errors quite a while ago. But nothing very recent.
I get tremendous speed out of my RAID 5, and even boot my Windows 7 OS from a partition on the RAID 5. I'll probably eventually move that to an SSD, but they're still expensive for the size I need for the C: drive.
My biggest problem with RAID 1 is that it's hugely wasteful in terms of disk space, and it can be slower than just a single drive. I can understand that for mission-critical stuff, RAID 5 might give issues. However, for home use, if you combine true hardware RAID 5 with backup of important files, I think it's a great solution in terms of reliability and performance.
tjoynt - Wednesday, September 4, 2013 - link
++ this. At work we *always* use RAID-6: nowadays single drive redundancy is a disaster just waiting to happen.
brshoemak - Wednesday, September 4, 2013 - link
"First off, error checking should in general be done by the RAID system, not by the drive electronic."The "should in general" port is where the crux of the issue lies. A RAID controller SHOULD takeover the error-correcting functions if the drive itself is having a problem - but it doesn't do it exclusively, it lets the drives have a first go at it. A non-ERC/TLER/CCTL drive will keep working on the problem for too long and not pass the reigns to the RAID controller as it should.
Also, RAID1 is the most basic RAID level in terms of complexity and I wouldn't have any qualms about running consumer drives in a consumer setting - as long as I had backups. But deal with any RAID level beyond RAID1 (RAID10/6), especially those that require parity data, and you could be in for a world of hurt if you use consumer drives.
Egg - Wednesday, September 4, 2013 - link
No. Hard drives have, for a very very long time, included their own error checking and correcting codes, to deal with small errors. Ever heard of bad blocks?
RAID 1 exists to deal more with catastrophic failures of entire drives.
tjoynt - Wednesday, September 4, 2013 - link
RAID systems can't do error checking at that level because they don't have access to it: only the drive electronics do.
The problems with recovering RAID arrays don't usually show up with RAID-1 arrays, but with RAID-5 arrays, because you have a LOT more drives to read.
I swore off consumer level raid-5 when my personal raid-5 (on an Intel Matrix RAID-5 :P) dropped two drives and refused to rebuild with them even though they were still perfectly functional.
Rick83 - Thursday, September 5, 2013 - link
Just fix it by hand - it's not that difficult. Of course, with pseudo-hardware RAID you're buggered, as getting the required access to the disk and forcing partial rebuilds isn't easily possible.
I've had a second disk drop out on me once, and I don't recall how exactly I ended up fixing it, but it was definitely possible. I probably just let the drive "repair" the unreadable sectors by writing 512 rubbish bytes to the relevant locations, and tanked the loss of those few bytes, then rebuilt to the redundancy disk.
So yeah, there probably was some data loss, but bad sectors aren't the end of the world.
And by using surface scans you can make the RAID drop drives with bad sectors at the first sign of an issue, then resync and be done with it. A 3-6 drive RAID 5 is perfectly okay if you only have moderate availability requirements. For high availability, RAID 6/RAID 10 arrays with 6-12 disks are a better choice.
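For setups without a controller or md scrub that can be scheduled, the brute-force equivalent is simply reading the whole disk and logging what fails. A minimal sketch, assuming Linux, root access, and a placeholder /dev/sdX (an md scrub or a SMART long self-test is the better tool where available):

```python
# Minimal user-space "surface scan": read a disk end to end and log regions that
# fail to read, so a weak drive shows up before a rebuild depends on it.
# Assumptions: Linux, run as root, /dev/sdX is a placeholder for the disk to check.

import os
import sys

CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

def surface_scan(device):
    bad_chunks = 0
    fd = os.open(device, os.O_RDONLY)
    try:
        offset = 0
        while True:
            try:
                data = os.pread(fd, CHUNK, offset)
            except OSError as err:
                bad_chunks += 1
                print(f"read error at byte offset {offset}: {err}")
                offset += CHUNK          # skip past the unreadable region and keep going
                continue
            if not data:                 # end of device
                break
            offset += len(data)
    finally:
        os.close(fd)
    print(f"scan of {device} finished: {bad_chunks} unreadable chunk(s)")

if __name__ == "__main__":
    surface_scan(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX")
```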
mooninite - Thursday, September 5, 2013 - link
Intel chipsets do not offer hardware RAID. The RAID you see is purely software. The Intel BIOS just formats your hard drive with Intel's IMSM (Intel Matrix Storage Manager) format. The operating system has to interpret the format and do all the RAID parity/stripe calculations. Consider it like a file system.
Calling Intel's RAID "hardware" or "pseudo-hardware" is a misconception I'd like to see die. :)
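One easy way to see this on Linux: IMSM-formatted arrays are assembled by mdadm and driven by the same kernel md layer as any other software RAID, so they show up in /proc/mdstat. A trivial check:

```python
# Quick Linux-only check: arrays using the Intel BIOS/IMSM format are run by the
# kernel's software md driver, so they appear in /proc/mdstat like any other software RAID.

from pathlib import Path

mdstat = Path("/proc/mdstat")
if mdstat.exists():
    print(mdstat.read_text())
else:
    print("no Linux md arrays active (or /proc/mdstat not available on this system)")
```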
mcfaul - Tuesday, September 10, 2013 - link
"First off, error checking should in general be done by the RAID system, not by the drive electronic. "You need to keep in mind how drives work. they are split into 512b/4k sectors... and each sector has a significant chunk of ECC at the end of the sector, so all drives are continually doing both error checking and error recovery on every single read they do.
Plus, if it is possible to quickly recover an error, then obviously it is advantageous for the drive to do this, as there may not be a second copy of the data available (e.g. when rebuilding a RAID 1 or RAID 5 array).
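A toy sketch of that per-sector check, with CRC32 standing in for the drive's real ECC (actual firmware uses much stronger Reed-Solomon/LDPC codes that can also correct small errors rather than just detect them):

```python
# Toy illustration of the per-sector check: user data is stored with extra check
# bytes, and the drive verifies them on every read. CRC32 here only *detects*
# corruption; real drive ECC can also *correct* small errors before you ever see them.

import os
import zlib

SECTOR = 4096  # bytes of user data per sector (4K-native example)

def store_sector(data):
    """Append a checksum, standing in for the ECC field at the end of a physical sector."""
    assert len(data) == SECTOR
    return data + zlib.crc32(data).to_bytes(4, "little")

def read_sector(stored):
    """Verify on read; a mismatch is what eventually surfaces as a pending/bad sector."""
    data, crc = stored[:SECTOR], int.from_bytes(stored[SECTOR:], "little")
    if zlib.crc32(data) != crc:
        raise IOError("uncorrectable sector: check bytes do not match")
    return data

sector = store_sector(os.urandom(SECTOR))
read_sector(sector)                                   # clean read: passes silently
flipped = bytearray(sector)
flipped[10] ^= 0xFF                                   # simulate a media error
try:
    read_sector(bytes(flipped))
except IOError as err:
    print(err)
```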
joelypolly - Wednesday, September 4, 2013 - link
With a difference of 1 to 2 watts for the Seagate, I fail to see how that would be too much of a cause for concern for cooling systems. Even with a 5 disk array it should still be under 10 watts difference in the most demanding circumstances and about 5 watts on average.
owan - Wednesday, September 4, 2013 - link
I was thinking about that. A 1W difference has got to be negligible for any desktop based system. Even 3-4W differences, while large on the relative scale, are small in the absolute sense. I don't see how you could make the statement "if you want more performance, Seagate, if you need cool and quiet, WD". Is there no other reason to pick one drive over the other besides a 1W power consumption difference?
glugglug - Wednesday, September 4, 2013 - link
It is hard to see the relative differences when quickly switching between the performance graphs for the different drives, because some of them are on different scales for each drive. Is there any way the graph scales can be made uniform?
ganeshts - Wednesday, September 4, 2013 - link
Can you let me know the specific graphs you are seeing the problem in? The numbers are also reported by HD Tune Pro on the side.
glugglug - Wednesday, September 4, 2013 - link
The random read and random write graphs.
I can see the scale on the side, but for example the random read graph has the max Y scale value at 50ms for the WD Se drive, 100ms for the Red drive and WD Re, and 200ms for the Seagate. At first glance, it looks like the Seagate is owning because of the scale -- it requires extra thought to figure out what the graph would look like on the same scale for comparison.
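If the raw latency samples could be exported, re-plotting them on a single shared axis would be straightforward. A sketch assuming hypothetical per-drive CSV exports with a latency_ms column (HD Tune Pro itself doesn't offer this control, as noted in the reply below):

```python
# Re-plot random-access latency samples for several drives on one shared scale.
# File names and the "latency_ms" column are assumptions for illustration only.

import csv
import matplotlib.pyplot as plt

DRIVES = ["wd_se.csv", "wd_red.csv", "wd_re.csv", "seagate_nas.csv"]  # hypothetical exports

fig, axes = plt.subplots(1, len(DRIVES), sharey=True, figsize=(16, 4))
for ax, path in zip(axes, DRIVES):
    with open(path, newline="") as f:
        samples = [float(row["latency_ms"]) for row in csv.DictReader(f)]
    ax.plot(samples, ".", markersize=2)
    ax.set_title(path.replace(".csv", ""))
    ax.set_ylim(0, 200)   # one scale for every drive, so outliers no longer distort comparisons
axes[0].set_ylabel("access time (ms)")
plt.tight_layout()
plt.show()
```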
ganeshts - Wednesday, September 4, 2013 - link
Got it. From what I remember, HD Tune Pro doesn't give user control over graph scales. But I will see what can be done for future articles.
ZRohlfs - Wednesday, September 4, 2013 - link
If you look closely at those graphs you can see some outliers that are very high on the graph. They are away from the statistical clump, but they cause the graph to have the scales that they do.
carlwu - Wednesday, September 4, 2013 - link
Can anyone comment on Seagate reliability as of late? Their 1TB drive fiasco left a bad taste in my mouth.
dawza - Wednesday, September 4, 2013 - link
The Seagate NAS HDDs seem quite good in terms of reliability thus far. I have a 3 TB and a 4 TB in my WHS (JBOD) and they've made it past the crucial 1 month mark without issues. But as mentioned in the review, these haven't been on the market very long.
These are the first Seagates I've purchased in years, due to the past issues you alluded to.
RealBeast - Thursday, September 5, 2013 - link
Carlwu, I had that thought also but set up one NAS with 8 x 3TB Seagate 7200rpm (the ST3000DM001) in RAID 6 and have had no issues for the six months they have run 24/7. Fingers crossed.
zlandar - Wednesday, September 4, 2013 - link
Would really like a comparison in a RAID-5 setup with 4 drives since that's what I use for media storage.
Tell Seagate to send you 3 more drives!
otherwise - Wednesday, September 4, 2013 - link
Does anyone know how read patrolling factors into usage numbers? There is no way I would come even close to 150 TB/yr in a home NAS with my own data, but with ZFS read patrolling going on in the background I don't exactly know what the true load is.
bobbozzo - Thursday, September 5, 2013 - link
I don't really understand these read or read/write ratings... IIRC, Google's data said reads and writes do not affect failure rate on hard drives. (SSDs are obviously a different story, for writes.)
htspecialist - Wednesday, September 4, 2013 - link
I have had good experience with Hitachi drives in NAS use. HGST has both consumer-class and enterprise-class 7200 rpm 4TB drives capable of NAS use. Any plans to include the HGST in the review evaluation of 4TB NAS-capable drives?
wintermute000 - Wednesday, September 4, 2013 - link
Yeah, I've had several WD and Seagate failures over the last 6-7 years of running 4 drives in a RAID 5, but no Hitachi failures; running all Hitachi now.
iwod - Wednesday, September 4, 2013 - link
To me, speed doesn't matter any more, not for the NAS market, since even the slowest HDD will saturate 1Gbit Ethernet in sequential read/write, and random read/write is slow and mostly limited by the NAS CPU anyway.
I want price and disk size. Reliability is also a concern, but since most HDDs will just fail in one way or another over time, it is best to have something like Synology where, over a number of disks, you can tolerate up to 2 HDD failures.
tuxRoller - Wednesday, September 4, 2013 - link
Are the idle power numbers in the chart correct?
It looks like the decimal point was pushed to the right...
KAlmquist - Thursday, September 5, 2013 - link
The power numbers are wall power, so they include power supply losses and the power consumed by the LenovoEMC PX2-300D, in addition to the power consumed by the hard drive. So the absolute values aren't useful (unless you own a PX2-300D), but the numbers do show which drives consume less power.
mcfaul - Tuesday, September 10, 2013 - link
Seconded, I have 32 x 3TB drives... the heat adds up.
mcfaul - Tuesday, September 10, 2013 - link
"We have also been very impressed with WD's response to various user complaints about the first generation Red drives."
Can you expand on what the complaints were, and what WD have done about them? I've only heard good things about the Red drives.
Wwhat - Tuesday, September 10, 2013 - link
Doing a 'torture test' means you use them a lot, constantly, not that you put them on a burner to see what happens.
And frankly a drive should adhere to its stated lifetime/performance somewhat regardless of how heavily you use it.
And don't forget that all drives, unless powered down, spin constantly anyway.
And quite a few NAS boxes for the home have so-so cooling, so it would be valid to test how hot HDDs get during intensive (but normal) use.
chubbypanda - Thursday, September 26, 2013 - link
Ganesh, rated reliability for WD Se is also 1 per 10E15 (see http://www.wdc.com/wdproducts/library/SpecSheet/EN... ), same as Re.
Oller - Friday, October 11, 2013 - link
I am planning to buy a Drobo 5N as a Plex video server and also for Time Machine backup. That would seem to require limited data transfer.
From the review it would seem that the Red is just as good as the RE and, at nearly half the price, would be the better choice.
Do you agree that the Red is a better choice than the RE for my needs?
chrcoluk - Thursday, November 17, 2016 - link
Some bait and switching going on.
I own 4 WD Reds, all 3TB versions.
The first 2, purchased 2 years ago, report NCQ supported, 4K sectors, and security mode, and have no drive parking.
Bought 2 more yesterday. They don't support NCQ (wtf?), have drive parking the same as the Green drives, and one is bigger than the other - it has a few extra LBA blocks, reports an extra gig of size, and uses 512-byte sectors instead of 4K.
Totally bizarre.