22 Comments
Threska - Tuesday, July 19, 2022 - link
"This can make high-capacity HDDs attractive to home consumers / prosumers who may be rightly worried about long RAID rebuild times."Or use ZFS or BTRFS so one doesn't have to worry as much about that.
Reflex - Tuesday, July 19, 2022 - link
Most of us are limited to what companies such as QNAP or Synology make available. In an ideal world we could just select a filesystem, but we aren't there, and lots of us have no interest in building and maintaining a high-powered and noisy server to roll our own.

inighthawki - Tuesday, July 19, 2022 - link
I understand the desire to not have to maintain something yourself, but you can definitely roll your own NAS server that doesn't consume much power and makes very little noise. Heck, there are HATs for Raspberry Pi devices that let you hook up a few SATA drives and are powerful enough for a light-workload NAS. Cheap, lower power than a Synology, and zero noise.

Samus - Tuesday, July 19, 2022 - link
People want support, and QNAP/Synology provide that in a nice, cheap little box that does almost everything well enough.

It's the same reason everyone buys a router instead of running pfSense: a $100 Ubiquiti router is 95% there, and that's good enough for 95% of people, especially if they value the support (and the obvious power-efficiency benefit of running a 5-watt router as opposed to a pfSense PC that will idle at, if you are lucky, 30 watts).
inighthawki - Tuesday, July 19, 2022 - link
And hence my comment: "I understand the desire to not have to maintain something yourself."

I was commenting on their mention of a "high powered and noisy server," as if running a homelab NAS requires you to buy a 1U blade server with a 16-core Xeon CPU that sounds like a jet engine.
Reflex - Tuesday, July 19, 2022 - link
I mean, I have a QNAP with an embedded i7, 64GB, and 12 drives, but its power consumption is lower than comparable PCs', it's much quieter than the server it replaced, and it's more than powerful enough to run a few VMs and a bunch of containers as well as provide network storage.

At any rate, I'm just saying that I and many others are technically capable but have no interest in the roll-your-own experience, and for us the feature you dismissed is actually useful.
inighthawki - Wednesday, July 20, 2022 - link
Which feature did I dismiss? I'm fully supportive of everyone's scenarios here. I merely wanted to point out that rolling your own doesn't mean high powered and noisy.

PEJUman - Wednesday, July 20, 2022 - link
Why not both? I have a QNAP and an OMV-Pi-Plex setup.

Here is why: the RPi4 works well for OMV, and with Docker it runs Pi inside it, but max throughput is less than what gigabit can do, while the QNAP can handle 10GbE. Also, once you have to deal with updates, it becomes a lot less reliable in terms of uptime and quality control of the releases.
I run the QNAP for its RAID: uptime above all else.
I run the OMV for the backups of the QNAP, Windows, etc.; for this I use BTRFS with cheap, junk HDDs.
Total power of this system is less than the RPi's once I factor in the non-idle power consumption of my desktop, which has to wait 10x as long to upload/sync/verify the weekly backups.
Total time of mine devoted to googling the RPi is greater than just managing the QNAP + the Linux VM for the rest.
My point: just because you can, or because it is cheap, does not mean it's the best solution once you look at it as a cog in the ecosystem. Once you have decent throughput and valuable data (think a family of 4, with nightly backups of their digital devices), a professionally sourced solution makes sense.
inighthawki - Wednesday, July 20, 2022 - link
>> Total power of this system is less than the RPi's once I factor in the non-idle power consumption of my desktop, which has to wait 10x as long to upload/sync/verify the weekly backups.

I think this is a very fair point, so the use case here depends very heavily on workload. For example, not all uses of a NAS/storage server are strictly for backups with large amounts of data transfer.
However, I again would like to point out that my point was never that one is better than the other. The person I replied to merely stated that building one yourself meant it being high powered and noisy, which is not strictly true at all. I am 100% totally in support of people buying premade machines; there are plenty of advantages and reasons for doing so.
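As a rough illustration of the throughput-versus-power trade-off discussed above, here is a minimal sketch. All of the numbers in it (backup size, usable link throughput, desktop power draw) are assumptions for the sake of the example, not figures from either commenter:

```python
# Back-of-the-envelope: how long a client machine waits on a weekly backup
# over 1 GbE vs. 10 GbE, and how much energy it burns while waiting.
# Every constant here is an assumed, illustrative value.
backup_gb = 200.0                              # weekly backup size, GB (assumed)
desktop_active_w = 80.0                        # desktop draw while syncing, W (assumed)
links_gbps = {"1 GbE": 0.94, "10 GbE": 6.0}    # usable throughput, Gbit/s (assumed)

for name, gbps in links_gbps.items():
    hours = backup_gb * 8 / gbps / 3600        # GB -> Gbit -> seconds -> hours
    desktop_wh = desktop_active_w * hours      # energy the desktop spends waiting
    print(f"{name:>6}: {hours:.2f} h transfer, ~{desktop_wh:.0f} Wh of desktop energy")
```

With these assumed numbers the slower link makes the client spend roughly six times as long (and six times the energy) waiting on each backup, which is the effect PEJUman is describing.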
MDD1963 - Friday, August 5, 2022 - link
QNAP offers ZFS-based NAS systems and has done so for ~12-18 months, I think....

Dizoja86 - Tuesday, July 19, 2022 - link
Is that 10E13 read error rate on the Red Pro correct? If so, that's ridiculously high.meacupla - Tuesday, July 19, 2022 - link
Yeah, that is oddly high. I would have expected the purple drive to have that error rate, but I guess not.

I don't get the point of the Red drives when they're priced like that against Gold and Purple.
Kamen Rider Blade - Tuesday, July 19, 2022 - link
Realistically, it's the same HDD with different firmware underneath.

Samus - Tuesday, July 19, 2022 - link
I wonder if such a high read error rate is reflective of the testing conditions used to calculate it (like a 24-bay NAS enclosure).

Silver5urfer - Tuesday, July 19, 2022 - link
No, Gold uses ePMR for 16TB and up, and TSA as well, per their previous datasheet, and the only revised Gold drive is the 20TB; all the rest are the same. So I doubt they revised and removed both. Red is not Gold. Not at all. And shucked White Label drives are not binned to Gold standards either. When they sell a retail Gold drive it must have that rating and reliability. And this is WD, who nickel-and-dimes consumers.

inighthawki - Tuesday, July 19, 2022 - link
10E13 is the same as 10^14, which is pretty standard/average for a consumer grade hard drive.

Sheppyb - Tuesday, August 23, 2022 - link
No, it is not! 10E13 = 10^13 :)
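For anyone wondering what these exponents mean in practice, here is a minimal sketch. It assumes the spec means "one unrecoverable read error per 10^N bits read" and uses a simple Poisson model, which real drives don't strictly follow; the figures are illustrative only:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# reading a 22 TB drive end to end, for several quoted URE specs.
# Assumes independent errors (Poisson approximation); illustrative only.
drive_tb = 22
bits_read = drive_tb * 1e12 * 8                  # ~1.76e14 bits per full pass

for exp in (13, 14, 15):                         # "<1 error per 10^exp bits read"
    expected = bits_read / 10.0 ** exp
    p_at_least_one = 1.0 - math.exp(-expected)
    print(f"1 per 10^{exp} bits: expected {expected:.2f} UREs, "
          f"~{p_at_least_one:.0%} chance of at least one during a full read")
```

Under these assumptions a 10^13 spec would make an error during a full-drive read all but certain, while 10^14 and 10^15 bring it down to roughly 80% and 20% respectively, which is why the quoted figure matters for rebuilds.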
jamesindevon - Wednesday, July 20, 2022 - link
At least in part, that's the Time-Limited Error Recovery mentioned in the article. The drives simply won't retry for as long, preferring to fail fast and allow the RAID to get the data off the other copies in the RAID. That's a lot less intrusive for users than locking up the whole RAID for one request for one user.

Also, if the disk waits for too long, some RAID controllers might conclude the whole disk is dodgy, and drop it from the RAID, causing significantly more problems.
Hopefully the RAID will re-write a good copy of the data to the disk with the failing sector(s), allowing the hard disk to substitute spare sectors.
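For readers who want to see where their own drives stand on this, here is a minimal sketch using smartmontools. The device path /dev/sda is an assumption for illustration, and the command typically needs root:

```python
import subprocess

# Query a drive's SCT Error Recovery Control setting (the TLER-style
# timeout discussed above). Assumes smartctl (smartmontools) is installed
# and that /dev/sda is the drive of interest; usually requires root.
result = subprocess.run(
    ["smartctl", "-l", "scterc", "/dev/sda"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)
# Drives that support it report the read/write recovery timeouts (in
# tenths of a second); others report that SCT ERC is unsupported or disabled.
```

On drives that allow it, the same option also accepts read and write timeouts in tenths of a second (for example scterc,70,70 for 7 seconds), though whether the value persists across power cycles varies by drive.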
PEJUman - Wednesday, July 20, 2022 - link
I think this MTBF has got to be a copy-paste error by WD's marketing group. They all have to be 10E15. It's the same media & heads.... or is it?

Silver5urfer - Tuesday, July 19, 2022 - link
WD Gold is the only drive I would go for, especially the 16TB and up: first because of the higher reliability rating, and second because they have ePMR / EAMR and TSA. No other series has that except Gold. OptiNAND is only available for 20TB and up due to the CMR and their new iNAND UFS tech making space within the existing 9- and 10-platter designs.

Also, the datasheet has 2 revisions of the 20TB, and I wonder what the difference is between them, especially since one has lower power consumption than the other. I bought the higher power consumption one; maybe I should wait for the new drive and return this? No idea; WD is really a mess when it comes to these revisions. Last time they did the revision from the WD101KFBX Red Pro 10TB to the WD102XXXX, they removed helium from them.
Finally, the ArmorCache thing was supposed to be on all OptiNAND drives, because their tech brief datasheet showed exactly that, but now they are gating it to the WD Gold line only, and even then only the Gold 22TB variant. Horrendous practices by WD, really.
One has to be super careful in choosing WD drives no doubt.
flyingpants265 - Sunday, July 24, 2022 - link
2010 pricing was 59.99 CAD for 2TB. 2022 is... $599 USD for 22TB?
Like, shouldn't an 8TB drive be $60 by now? What's going on here?
johanpm - Wednesday, July 27, 2022 - link
When a drive like that fails in a RAID or RAIDZ and you have to rebuild/resilver it, it will take somewhere between 3 days and a few weeks depending on workload. Shudder.
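A minimal sketch of where estimates like that come from; the sustained rebuild rates below are assumptions (a mostly idle array versus one still serving other I/O), not figures from the comment:

```python
# Rough resilver/rebuild time estimate for a 22 TB member drive at a few
# assumed sustained rates. Real rebuilds depend on RAID level, fragmentation,
# and concurrent workload; these numbers are purely illustrative.
drive_tb = 22
rates_mb_s = {"lightly loaded array": 100, "heavily loaded array": 20}

for label, mb_s in rates_mb_s.items():
    hours = drive_tb * 1e6 / mb_s / 3600     # TB -> MB, then seconds -> hours
    print(f"{label}: ~{hours / 24:.1f} days")
```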