62 Comments
Glock24 - Wednesday, November 15, 2017 - link
Who wants to lose 12TB of data? Yeah, not me.
kingpotnoodle - Wednesday, November 15, 2017 - link
That is why RAID was invented and backup should be routine.
Only a fool stores their important data without disk redundancy and/or backup, whether they have 12KB or 12TB of it. It doesn't matter how much important data you have: if you don't want to lose it, don't entrust it to a single drive.
tipoo - Wednesday, November 15, 2017 - link
Storage getting cheaper = backup getting cheaper. This has always been true.
imaheadcase - Wednesday, November 15, 2017 - link
You'd be surprised how much data you DON'T need.
PeachNCream - Wednesday, November 15, 2017 - link
Yeah, as someone that's had a hard drive with pretty much my entire life saved on it (and without a current backup *sniff*), I can say there's a lot of data you can live without. Well, all of it really, but it is a pain to lose things.
MobiusPizza - Wednesday, November 15, 2017 - link
Yeah, that's why I have 4 RAID 5 drives plus cloud backup for my terabytes of porn, I meant important data.
GreenReaper - Sunday, September 2, 2018 - link
Funny - I run a porn site on 4 2TB RAID-5 drives. And another four in a mostly-live backup. And another 4TB one off-site. (Plus all the caches.)
Thankfully most of it is pictures, not video, or they'd need to be 12 12TB drives - or maybe a cluster.
GreenReaper - Sunday, September 2, 2018 - link
Also: disks really do die. We've had four or five go over eight years - thankfully, none at once. You'd better be sure you have a backup - even at 200+ MB/s you're talking 16 hours to rebuild 12TB, assuming *no* other I/O, and let's be honest: if you didn't have I/O you probably wouldn't need RAID. Realistically, it's likely to take a day; with heavy I/O, maybe a week. Consider using only up to about the first 2/3 and migrating to a 24TB drive array (once they exist).
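A rough sketch of that rebuild arithmetic, for anyone who wants to plug in their own numbers (a minimal illustration only; the 200 MB/s figure is the comment's assumption, and the lower rates loosely stand in for rebuilds competing with other I/O):

```python
# Back-of-the-envelope rebuild time for a 12 TB drive at a given sustained
# throughput. Assumes the rebuild is limited purely by sequential transfer
# speed; lower rates roughly model rebuilds competing with other I/O.
CAPACITY_BYTES = 12 * 10**12  # 12 TB, decimal, as drives are marketed

def rebuild_hours(throughput_mb_per_s: float) -> float:
    return CAPACITY_BYTES / (throughput_mb_per_s * 10**6) / 3600

for rate in (200, 100, 50, 20):
    print(f"{rate:>3} MB/s sustained -> {rebuild_hours(rate):6.1f} hours")
# 200 MB/s -> ~16.7 h; 50 MB/s -> ~2.8 days; 20 MB/s -> roughly a week.
```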
fzzzt - Wednesday, November 15, 2017 - link
This is why backups are used. RAID is not a replacement for a backup.
Lord of the Bored - Wednesday, November 15, 2017 - link
A fool and his data are soon parted, as it were.
jabber - Thursday, November 16, 2017 - link
I would also say only fools tie themselves down with multi TB amounts of personal data. Data is a millstone round your neck.
bigboxes - Friday, November 17, 2017 - link
RAID was not invented to protect your data. It's for uptime. You can live without RAID. Backup is essential for anything you value. RAID is not backup.
piroroadkill - Wednesday, November 15, 2017 - link
I find it amusing that people are STILL saying this. When drives hit 500GB, 1TB, you name it, the same comment. Over, and over.
You have a backup. Always have a backup.
blackcrayon - Thursday, November 16, 2017 - link
Exactly. Just double the price of every one of these drives in your head. You need two for backup.
jordanclock - Wednesday, November 15, 2017 - link
Then don't buy just one. Or have a backup pool of 12+TB.
This isn't even a valid talking point. It's just a contrarian interjection.
wumpus - Thursday, November 16, 2017 - link
Which is the whole problem of a 12TB drive with a low TB/$ value. Buy arrays of 3TB drives when you need that type of storage. It isn't a notebook drive; there is no reason to limit the number of drives you are using.
Obviously, once the number of drives gets unwieldy, you will look into 12TB drives. But you will also be looking into tape at those sizes.
Beaver M. - Sunday, November 19, 2017 - link
At least for a NAS, you would have to buy a 4+ bay unit, which is expensive enough that you end up paying pretty much the same as for a 2-bay system with two 12 TB drives, but without the higher chance of failure.
GreenReaper - Sunday, September 2, 2018 - link
If you really just need four bays, grab a HP MicroServer Gen 8. (I wouldn't recommend the Gen 10.)
It's an actual miniature server (with iLO!), not just a NAS - but you can turn it into a NAS if you like.
zanon - Wednesday, November 15, 2017 - link
Seriously, since when has mere data size had much if any relation to data value? All of our business (and my personal) financial, tax and investment data is under 10 gigs, for example, but losing it would be far worse than losing a terabyte of data that could (albeit with a lot of time cost) be regenerated. Of course, even that time cost would translate to a lot of lost money and pain, which is why we have backups and also redundancy (since in general it's preferable to not even suffer any downtime).
The only major effect of larger drives when it comes to data is that of course they take longer to resilver (this one might be 10-12 hours at 80% capacity), and thus, depending on someone's willingness/ability/time cost to deal with downtime in case of multiple failures, they may want to switch up their RAID systems to have a higher level of redundancy. That's just an economic decision though, and it doesn't negate the need for backups as well, preferably offsite.
wumpus - Thursday, November 16, 2017 - link
Backing up 10G should give you plenty of options, and I'd recommend a belt-and-suspenders approach: probably just multiple encrypted USB sticks plus online storage. Make sure the online provider doesn't object to you pre-encrypting the data before sending it to them; most companies like that assume they will be taking unencrypted data for de-duplication purposes, and this data obviously shouldn't sit on their servers unencrypted.
Just make sure you have some means (presumably SHA-256 based) of verifying your USB sticks.
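A minimal sketch of what that SHA-256 based verification could look like - build a manifest when the stick is written, re-run in check mode later. The manifest name, layout and command-line handling here are illustrative assumptions, not a specific tool:

```python
# Minimal sketch: write a SHA-256 manifest for a backup folder, or verify an
# existing one. The manifest name and JSON format are illustrative only.
import hashlib, json, sys
from pathlib import Path

MANIFEST = "MANIFEST.sha256.json"

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def manifest(root: Path) -> dict:
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*"))
            if p.is_file() and p.name != MANIFEST}

if __name__ == "__main__":
    mode, root = sys.argv[1], Path(sys.argv[2])  # mode is "write" or "check"
    mf = root / MANIFEST
    if mode == "write":
        mf.write_text(json.dumps(manifest(root), indent=2))
    else:
        stored, current = json.loads(mf.read_text()), manifest(root)
        for name, digest in stored.items():
            if current.get(name) != digest:
                print("MISMATCH or missing:", name)
```

Run it with "write" right after copying data to the stick, then with "check" whenever you want to confirm nothing has silently rotted.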
rtho782 - Wednesday, November 15, 2017 - link
I'd rather lose a 12TB Plex library than a 100kB bitcoin wallet with 10 bitcoins in it.
The size of the data isn't really relevant.
BurntMyBacon - Wednesday, November 15, 2017 - link
@Glock24: "Who wants to lose 12TB of data? Yeah, not me."
You will only lose as much data as you have stored on the drive. If you only have 3TB of data, then it doesn't matter whether it's on a 12TB drive or a 6TB drive (assuming the same failure rate). If you do have 12TB of data, then you'd need several smaller drives to hold it (2x6TB, 3x4TB, etc.). That presents a trade-off for data protection. While a single catastrophic (total) drive failure won't take all your data with it, you've massively increased the probability of a catastrophic drive failure taking place. Then there's the fact that not all your data is of equal value; if Murphy has anything to say about it, it will be your most valuable data that gets lost. So all going smaller really does is reduce the severity of a data loss (due to total drive failure) at the expense of increasing the likelihood of data loss (and that data possibly being your most valuable data).
So, as kingpotnoodle said, have a backup plan in place. Redundancy via RAID1 (or any RAID level other than 0) is good practice for data protection. Also, if you are so inclined, you can use a file system with built-in redundancy features (e.g. ZFS) and store two or more copies of files on different parts of the drive. This reduces the amount of data that can be stored on the drive, but significantly increases resilience against failures that aren't total drive failures, and it makes data recovery more likely in the case of a total drive failure.
In short, a 12TB drive can be both less likely to lose data and have no more data to lose than a 4TB drive if you set it up that way (ZFS or similar with triple redundancy at the file system level). Of course, this comes at the expense of cash, just like any other data redundancy solution (e.g. RAID1), so choose your methods wisely.
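To put rough numbers on the "more spindles, more chances of a dead drive" side of that trade-off, here is a toy calculation; the 5% annualized failure rate is an assumed round figure for illustration, not a number from this review or from any vendor:

```python
# Toy illustration: probability that at least one drive in a set suffers a
# total failure within a year, assuming independent failures and the same
# annualized failure rate (AFR) per drive. The 5% AFR is an assumed round
# number for illustration, not a measured figure for any particular drive.
AFR = 0.05

def p_any_failure(n_drives: int, afr: float = AFR) -> float:
    return 1 - (1 - afr) ** n_drives

for n, layout in [(1, "1 x 12TB"), (2, "2 x 6TB"), (3, "3 x 4TB"), (4, "4 x 3TB")]:
    print(f"{layout}: P(at least one dead drive per year) = {p_any_failure(n):.1%}")
# 1 drive: ~5.0%; 4 drives: ~18.5% -- smaller loss per incident, but more incidents.
```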
Arbie - Wednesday, November 15, 2017 - link
I suppose it can be sussed out of the performance data, but... can you please say if this drive is shingled technology or not? With any Seagate drive that's one of my first questions, and they seem to have stopped identifying it in the literature.
ganeshts - Wednesday, November 15, 2017 - link
I already clarified in the introductory text with an edit, and also in a comment below - these are NOT shingled drives, but PMR platters in a sealed helium-filled enclosure.
Fallen Kell - Wednesday, November 15, 2017 - link
Exactly. Now, one thing that isn't being mentioned that is very important as we get into these bigger and bigger hard drives for use in RAID systems is the time to rebuild and the single read failure rates. That 12TB drive in full use on a RAID 5 system will take over 18 hours just to read the other disks inside the RAID group; factor in 14 hours to write the parity data to the new disk and give a 10% overhead for calculating the parity, and you are looking at around 36 hours to rebuild from a failed disk, assuming no other activity is happening on the RAID set. If during those 36 hours a single read failure occurs (on a RAID 5), you have just lost all your data.
This is why, as has been stated, things like RAID 6 have been developed, but we are now pushing the boundaries of what RAID 6 can protect against, and really need to be using RAID 5+1 or similar - but that costs double the amount of hard drives to implement.
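For reference, the usual back-of-the-envelope math behind that worry, under the common simplifying assumption that the quoted unrecoverable-read-error rate of 1 per 10^15 bits applies independently to every bit read during the rebuild (the reply below argues real errors arrive as whole bad sectors, so treat this as a worst-case illustration rather than a forecast):

```python
# Probability of hitting at least one unrecoverable read error (URE) while
# reading every surviving drive end to end during a RAID 5 rebuild, assuming
# the spec'd rate of 1 error per 1e15 bits applies independently per bit.
# p = 1 - (1 - q)^bits, computed via log1p/expm1 for numerical stability.
import math

URE_PER_BIT = 1e-15
DRIVE_BITS = 12 * 10**12 * 8  # one full 12 TB drive, in bits

def p_ure_during_rebuild(n_drives: int) -> float:
    bits_read = (n_drives - 1) * DRIVE_BITS  # all surviving drives are read fully
    return -math.expm1(bits_read * math.log1p(-URE_PER_BIT))

for n in (4, 6, 8):
    print(f"{n} x 12TB in RAID 5: P(URE during rebuild) ~ {p_ure_during_rebuild(n):.0%}")
# 4 drives: ~25%; 6 drives: ~38%; 8 drives: ~49% -- hence the push toward RAID 6.
```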
wumpus - Thursday, November 16, 2017 - link
These issues have mostly been proven to be overkill, but I doubt I'd trust Seagates even in RAID 6 (and then some). [Having two arrays of RAID 5 means your software is a kludge. That's just fundamentally stupid, and you should really be looking into some sort of Reed-Solomon based system with many ECC drives. But unfortunately "known good RAID 5" beats "insufficiently tested Reed-Solomon encoding", so I understand how it gets used. Doesn't make it any less of a kludge.]
Also remember that a bit error rate of 1 in 10^15 doesn't mean "expect 1 bad bit every 10^15 bits read/written" but really "expect an aligned 4-kbyte block of garbage every 8*4096*10^15 bits", so the calculations are a bit different. The internals of hard drives mean either the whole sector is good or it is entirely garbage; you don't get individual bit errors.
And if "you just lost all your data" really happens, you have a pretty strange dataset that can't take a single aligned 4k group of garbage (most filesystems store multiple copies of critical data, so that wouldn't be an issue).
Even if you did, you would just break out the tapes and reload (which unfortunately is much, much longer than 36 hours). When *arrays* of 12TB make sense, you are definitely in tape backup land. Hopefully you have a filesystem/backup system that can tag the error to the file (presumably to the RAID sector size) and simply reload the failed RAID sector from tape (because otherwise you will be down for weeks).
GreenReaper - Sunday, September 2, 2018 - link
I think your sums are a little off - it doesn't have to be a serial operation. A good RAID solution will rebuild by reading and writing at the same time. However, I/O contention on reads *can* kill a rebuild, and this can easily turn an operation which "should" take a day into a week-long saga.
bigboxes - Friday, November 17, 2017 - link
One more time... RAID is not backup. Doesn't matter if you have the drive mirrored. If a file gets corrupted/deleted on one drive, then you have the same issue on the mirrored drive.
Pinn - Thursday, November 16, 2017 - link
Store less porn, Glock24.
Samus - Thursday, November 16, 2017 - link
I'd put my family in a Ford Pinto before I put 12TB of data on a Seagate.
HStewart - Thursday, November 16, 2017 - link
Large amounts of storage can be used for storing logs and such, especially when you have multiple locations. Of course, you could have a backup.
jameskatt - Sunday, November 19, 2017 - link
That is why you get three backups and an online backup.
Arbie - Wednesday, November 15, 2017 - link
[Sorry, my question was for Ganesh, not the troll thread]
I suppose it can be sussed out of the performance data, but... can you please say if this drive is shingled technology or not? With any Seagate drive that's one of my first questions, and they seem to have stopped identifying it in the literature.
ganeshts - Wednesday, November 15, 2017 - link
Oh, I covered that in the launch article, which is linked in the first paragraph.
These are standard PMR drives - helium-filled ones with 8 platters.
XZerg - Wednesday, November 15, 2017 - link
I would suggest including whether the drive is PMR, shingled, ... storage in the table, as both technologies co-exist but have different purposes/performance.
Taracta - Wednesday, November 15, 2017 - link
Hello, the HD Tune random access results are both Read, not Read and Write. Please correct. Thank you.
ganeshts - Wednesday, November 15, 2017 - link
My apologies! It has been fixed now.
takeshi7 - Wednesday, November 15, 2017 - link
I really can't stand it when people ask if a drive is SMR or "PMR". Every modern hard drive is PMR regardless of whether it is shingled or not. The proper term for a non-SMR drive is CMR.
SMR - Shingled Magnetic Recording
CMR - Conventional Magnetic Recording
ganeshts - Wednesday, November 15, 2017 - link
That is a peeve I wouldn't object to, but, let us be honest - everyone treats SMR as the 'evolutionary update' to traditional PMR (CMR) drives. But, I appreciate this insight and will probably use it myself in future articles :)
phoenix_rizzen - Wednesday, November 15, 2017 - link
To be pedantic, wouldn't SPMR and CPMR be more accurate?
Shingled Perpendicular Magnetic Recording
Conventional Perpendicular Magnetic Recording
ddrіver - Wednesday, November 15, 2017 - link
"Conventional" magnetic recording is "giant magneto resistor recording". So what you're saying doesn't make sense.
takeshi7 - Thursday, November 16, 2017 - link
It makes perfect sense. CMR is how hard drives have stored data on tracks since their inception in the 1950s. SMR is a different strategy for writing the tracks that has the writer overlap previously written tracks to increase track density.
"Shingled" magnetic recording is also "giant magneto resistor recording". So that's not a good way to differentiate between them.
ddrіver - Thursday, November 16, 2017 - link
If CMR was the way HDDs worked from their inception, then why would you call SMR anything other than that? They're all the same. Magnet. Poles. Everything else is buzzwords the industry is feeding you to make you buy more and more. And the sheeple buy.
cm2187 - Wednesday, November 15, 2017 - link
Important question for usage in consumer NAS: how noisy is it?
ganeshts - Wednesday, November 15, 2017 - link
All hard drives make noise - particularly if there are lots of random accesses making the heads move around a lot - and I don't think this was any noisier than other disks that I have evaluated before. Spec sheets put it in the same category as everyone else - 28 dB typical / 30 dB max at idle, 32 dB typical / 34 dB max for performance seeks.
To be honest, no one has brought up this aspect before, and I don't think
Ahnilated - Wednesday, November 15, 2017 - link
Do people still buy Seagate's stuff? Their HDDs have a notoriously high level of failure.
coburn_c - Wednesday, November 15, 2017 - link
I don't think that's true anymore.
haukionkannel - Wednesday, November 15, 2017 - link
Yep, that was a long time ago.
Senti - Wednesday, November 15, 2017 - link
Nope, it's still very true. Seagate still holds undisputed first place on HDD failure %.
CheapSushi - Thursday, November 16, 2017 - link
I believe that was just mainly for their 3TB or 4TB variants.
phoenix_rizzen - Wednesday, November 15, 2017 - link
Go read through some of the quarterly drive reports from Backblaze. The larger sized Seagates are much more reliable than the earlier 1-3 TB drives. Although Toshiba and Hitachi are still the drives to beat for reliability.
artk2219 - Wednesday, November 15, 2017 - link
Hey Ganesh, thank you for the review. Honestly, I'm always surprised by how much faster hard drives are now than they were even 5 years ago. On an unrelated note, how do you like that Cooler Master case? I've been considering getting one recently for its openness, but I wasn't sure whether or not I could get used to how much horizontal space it takes up on a desk.
ganeshts - Wednesday, November 15, 2017 - link
I simply love it, but I have to let you know that my requirements are not the typical computer builder's requirements.
I wanted something that could be used to easily hot-swap hard drives (since I tend to benchmark internal drives quite a bit) - the two front bays serve this purpose well.
I wanted it to be fairly portable - the 'carry handles' on either side have been put to use many times and have worked well.
There is plenty of ventilation, but, to be honest, I am not overclocking, and I don't even have a dGPU in that build. So, I really can't comment on that aspect.
Lastly, I wanted a LAN box rather than a tower because I wanted all the ports to be easily accessible when placed in makeshift 'rackmount' environments. Currently I have it in a repurposed garage shelving unit [ https://www.walmart.com/ip/Edsal-36-W-x-18-D-x-72-... ].
artk2219 - Wednesday, November 15, 2017 - link
Minus the moving, those were also my considerations. I'm constantly swapping and playing with different GPUs and CPUs, so the ability to just open the top and be there is a big plus. I tend not to keep the exact same config for too long, and in the time I've had my current Corsair Carbide 500R, I've gone through 2 motherboards, 4 different CPUs, 5 GPUs, and multiple hard drives - not to mention all of the parts I've just tested to verify that they work for family and friends. Normally I keep cases for 5 to 7 years or until they've just had enough, but since I saw this case it seemed like it would be perfect for the sort of use I have in mind, especially now that AM4 seems like it'll have quite a few options available to it like AM2+ and AM3+ did, and I'm building up an AM4 build with an Athlon X4 950. I honestly haven't found a good review of one, and it's cheap enough for me to just get one to play with while I see whether or not I want to spring for a 1600, a 1700, or just wait for the refresh, although it is slower than my FX 8320 @ 4.4. Thanks for the information though!
Liltorp - Thursday, November 16, 2017 - link
All my important data (around 75GByte) I have in 4 copies: one copy on my personal PC, one copy on my home NAS hidden away, one copy in Google Drive, and one copy on an external hard disk that I keep at my workplace. Location redundancy is important.
My less important data (around 5TByte, mainly video material) I have in 2 copies: one copy on my personal PC and one copy on my home NAS.
chrysrobyn - Thursday, November 16, 2017 - link
I find these types of benchmarks far less interesting than the SSD benchmarks Anandtech uses. I know spinners would get blown out of the water by the top end SSDs on The Destroyer, but I do wonder how the fastest spinners would compete with the lowest end SSDs. Spinning media has its place in this world, and I'd really like to see benchmarks that tell that story.
b1gtuna - Thursday, November 16, 2017 - link
Would anyone use this without redundancy? It is nice that they throw in 2 years of data recovery as part of the warranty, but I am genuinely curious if the drive technology has matured enough to guarantee safety from data loss.
jameskatt - Sunday, November 19, 2017 - link
I hope one of these days 12 TB drives are < $100.
Beaver M. - Sunday, November 19, 2017 - link
They would be in a few years. But not with this oligopoly. Even 8 TB ones are still extremely expensive, and those are over 3 years old now.
stevenrix - Tuesday, November 21, 2017 - link
Seagate drives? Thanks but no thanks.
DigDeep - Tuesday, September 10, 2019 - link
I bought my first Barracuda 2TB HDD, and never ever will I buy from Seagate again. While in benchmarks everything looks fine, in a real-world scenario where I copy a 6GB file the speed drops to a crawling 20MB/s; even an old WD Green is better. I should have bought the WD EZRZ 2TB, but on paper the Seagate looks better, that's why I went for it, and I regret it. I'm sure the 5400rpm EZRZ from WD performs better. BTW, the max access times of the Barracudas are huge.