Hamm Burger - Wednesday, January 17, 2024 - link
Well, the article at least attains a new record in areal density of buzzwords.
abufrejoval - Wednesday, January 17, 2024 - link
Thanks a lot for that laugh!
Threska - Wednesday, January 17, 2024 - link
Mechatronics go brrrr.
Watcherrr - Wednesday, January 17, 2024 - link
In my personal scenario, I have two problems with these 10+ TB HDDs. One is that when a drive fails, it takes 30+ hours for a plain data copy/recovery, and I'm not sure these drives can always withstand a full-speed copy for 30+ hours straight; many times the source/backup HDD will fail during the copy. I know this is not an issue for data centers, but it might be for a power user at home. That said, even data-center rebuilds take significantly more time compared to 5-6TB drives.

The second problem is speed. While capacity increased 3x from 10TB to 30TB, speed increased only about 2x, from 250 MB/s to 550 MB/s for MACH.2 drives. Speed cannot keep pace with capacity. Again, that may not be a big problem for data centers, but home/power users may be put off by it.
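For rough numbers behind that 30+ hour figure: a plain full-disk copy takes capacity divided by average sustained rate. A minimal Python sketch, with assumed average rates (sequential speed falls toward the inner tracks, so the average sits well below the headline outer-track figure):

```python
# Full-disk copy/rebuild time: capacity divided by an assumed average
# sustained rate. The rates below are illustrative guesses, not specs.
def copy_hours(capacity_tb: float, avg_mbps: float) -> float:
    return capacity_tb * 1e6 / avg_mbps / 3600  # 1 TB = 1e6 MB

for tb, mbps in [(10, 180), (16, 150), (30, 450)]:
    print(f"{tb} TB at {mbps} MB/s avg -> {copy_hours(tb, mbps):.0f} h")
# -> 10 TB: ~15 h, 16 TB: ~30 h, 30 TB (dual actuator): ~19 h
```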
charlesg - Wednesday, January 17, 2024 - link
Home user here. I have a bunch of 16TB drives that I use in my Unraid array. I made sure I bought them from different places and that they are different models, so if I have to do a rebuild, it's unlikely the other drives are in the same 'ready to fail' state.

That said, I have an offline array as a backup. Your concerns are legit.
Samus - Wednesday, January 17, 2024 - link
It is deeply concerning that hard disks have increased in capacity by orders of magnitude over decades but have effectively stalled in transfer rate, which, as you said, makes rebuilding a single disk extremely time-consuming, not to mention the time and cost of migrating from old disks to new ones.

But the laws of physics cannot be broken. While areal density and controller performance increases bring minute benefits to throughput, the footprint, rotation speed, and physical limitations of the actuator and head prevent any real gains. WD has tried to address this with dual actuators, but according to Blocks & Files, that doesn't actually scale out to double the throughput because of the obvious overhead. But it's close, and double is at least something :)
The Von Matrices - Wednesday, January 17, 2024 - link
Disks getting slower relative to their capacity is unfortunately just a result of math. Data density increases in three dimensions (angular density, radial density, and z, the number of platters), while transfer rate only increases proportionally to the density increase in the angular dimension.

As a result, transfer rate only increases by sqrt(2) each time areal density doubles, and it gets worse if more platters are added to a drive, because they increase capacity without increasing platter density.
Assuming you can currently read at ~250 MB/s on a 16TB drive, you need 17.8 hours to read the entire disk. If transfer rate scales with the square root of density, a 30TB drive will read at 342 MB/s and take 24.3 hours to read the entire disk. If Seagate reaches its target of 48TB disks, they will read at 433 MB/s and take 30.8 hours to read the entire disk.
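A quick check of those numbers in Python, assuming transfer rate grows with the square root of the capacity ratio at a fixed platter count:

```python
import math

# Transfer rate scales with sqrt(capacity ratio); full-disk read time
# therefore also grows with the square root of capacity.
def scaled(target_tb: float, base_tb: float = 16, base_mbps: float = 250):
    rate = base_mbps * math.sqrt(target_tb / base_tb)  # MB/s
    hours = target_tb * 1e6 / rate / 3600              # 1 TB = 1e6 MB
    return rate, hours

for tb in (16, 30, 48):
    rate, hours = scaled(tb)
    print(f"{tb} TB: {rate:.0f} MB/s, {hours:.1f} h full read")
# 16 TB: 250 MB/s, 17.8 h | 30 TB: 342 MB/s, 24.3 h | 48 TB: 433 MB/s, 30.8 h
```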
Even doubling the number of actuators to double the transfer rate just brings disk write time back to where it was a few years prior, and increasing the spindle speed is a non-starter. I'm not sure if there is any way to deal with this slowdown other than software workarounds.
ballsystemlord - Wednesday, January 17, 2024 - link
So let's make all the actuators separate from each other.

On top of that, you could also create a new size for HDDs so that you could fit actuators on all four sides of the drive. Changing form factors has been done before; take, for example, EATX, which to this day still isn't a standardized ATX size, AFAIK.
GeoffreyA - Thursday, January 18, 2024 - link
SSDs were supposed to solve the problem :)

But yes, with the classic hard disk design of mechanical motion to read a certain point, there isn't going to be much of a change.
FunBunny2 - Thursday, January 18, 2024 - link
"So let's make all the actuators separate from each other."back in the goode olde days, mainframes and some minis had drives with multiple radial actuators. back then, transfer rates were pitiful by today's standards, due mostly to large grains and slow rotation, so having multiples was mostly a requirement. it's just a guess, but having radial actuators should improve performance over the swingers?
DanNeely - Thursday, January 18, 2024 - link
Or just go all in on parallelism: 1 (or 2) actuators, but split each sector into N (N/2) parts spread out over all N (N/2) platters, so that every head is reading/writing concurrently for increased throughput, without having to figure out how to cram more actuators into a fixed-form-factor device.
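A toy Python sketch of that idea, purely illustrative (real drive firmware exposes nothing like this):

```python
# Split one logical sector into N chunks, one per platter, so all N
# heads transfer concurrently; reassemble on read. RAID-0-style
# striping, but internal to a single drive.
def stripe(sector: bytes, n_platters: int) -> list[bytes]:
    chunk = len(sector) // n_platters
    return [sector[i * chunk:(i + 1) * chunk] for i in range(n_platters)]

def unstripe(chunks: list[bytes]) -> bytes:
    return b"".join(chunks)

sector = bytes(range(32))
parts = stripe(sector, 4)          # 4 platters, 8 bytes per head
assert unstripe(parts) == sector   # merging every head's chunk restores it
```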
GeoffreyA - Friday, January 19, 2024 - link
Parallelism seems to be the key.

Or perhaps if there were some way to dispense with heads altogether, but be able to establish, and manipulate, the configuration of the platter at a certain point.
ballsystemlord - Friday, January 19, 2024 - link
That's probably already what HDDs are doing. And that wouldn't increase random IO operations.
FunBunny2 - Saturday, January 20, 2024 - link
"https://www.theguardian.com/us-news/2024/jan/20/st...well... ifs:
- file systems are devised that are tailored to disk drive architecture
- said architecture had multiple actuators with multiple heads per surface, e.g. four heads per actuator per surface
So then one could have files indexed such that relational pointers map to distinct areas on the surfaces (targeted by specific head(s) on an actuator). Thus one could read/write base-key rows and foreign-key rows at the same time. Sounds like a Ph.D. dissertation exercise at MIT CompSci.
GeoffreyA - Saturday, January 20, 2024 - link
Indeed, it sounds like a recipe for bugs as well.StevoLincolnite - Friday, January 19, 2024 - link
Or you can keep the standard size... and shrink the platter sizes so they take up less internal volume.

You would potentially lose capacity, which could be mitigated by more platters overall, but the speed improvements would be a boon.
ballsystemlord - Sunday, January 21, 2024 - link
That's another option. Then you could make HDDs taller to accommodate more platters and/or spin those platters faster because they're smaller.
Byte - Friday, January 26, 2024 - link
They already have this, it's called Mach.2: https://www.seagate.com/innovation/multi-actuator-...
eldakka - Wednesday, January 17, 2024 - link
> Many times the source/backup HDD will fail during the copy.

This is why you use RAID6 (2 redundant disks) or RAID7 (3 redundant disks) - or their equivalents such as ZFS z2 or z3 - when using larger HDDs/array sizes. I've been using RAID6 (actually z2) since 3TB HDDs and on my current 8TBs, and when/if it comes time to upgrade those (probably 16-20TB HDDs, if they ever come down in price) I'll be using RAID7/z3.
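A minimal sketch of why the extra parity matters: the chance that a rebuild is lost to additional failures drops steeply with each redundant disk. The per-disk failure probability during the rebuild window below is an assumed, illustrative number, not a measured one:

```python
from math import comb

# Probability that more than `spare_parity` of the remaining disks fail
# during the rebuild window, for an assumed per-disk failure chance p.
def rebuild_loss(n_remaining: int, spare_parity: int, p: float = 0.02) -> float:
    return sum(comb(n_remaining, k) * p**k * (1 - p)**(n_remaining - k)
               for k in range(spare_parity + 1, n_remaining + 1))

for name, spare in [("RAID5/z1", 0), ("RAID6/z2", 1), ("z3", 2)]:
    print(f"{name}: {rebuild_loss(7, spare):.4f}")
# RAID5/z1: ~0.13, RAID6/z2: ~0.008, z3: ~0.0003 (toy numbers)
```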
charlesg - Thursday, January 18, 2024 - link
Or use UnRaid.
abufrejoval - Wednesday, January 17, 2024 - link
It was only after I replaced the previous NVMe drives in one of my main systems (1TB and 2TB, with 2x4TB) a few days ago that I realized its overall solid-state capacity had surpassed the capacity of any current HDD, because that system also sports 8 SATA SSDs at 2TB each (4 onboard SATA + 4 via an extra M.2 SATA adapter).

It's split between Windows and a ZFS pool on Linux, so I don't tend to think of it as "one storage", but still, I couldn't back it up to an HDD any longer if I wanted to...
It's been a long journey, I remember experimenting with CF to PCMCIA and IDE adapters to run PCs on flash storage, with varying degrees of success.
And my very first Intel Postville 160GB SATA SSD still runs on my firewall today, showing no signs of being worn out.
I still yearn for the easy hot-swap expandability and replaceability of SATA, but even with SATA RAID0, NVMe is hard to beat in physical size and speed.
HDD to me has been a tape replacement for a long time, but those tapes need to keep up!
Fallen Kell - Wednesday, January 17, 2024 - link
You just need U.2 or U.3 NVMe storage solutions to finally hit the home market, with motherboards directly supporting U.2 or U.3 ports instead of M.2. This won't ever happen until Intel and AMD finally give up on limiting PCIe lanes in their "consumer" CPUs and start putting 96-128 lanes in those systems.
Samus - Wednesday, January 17, 2024 - link
The irony is that LTO tapes have been faster than any single hard disk for a decade. Newer-generation LTO tapes are even faster: depending on the compression, many can write over 1 GB/second, saturating many RAID arrays and causing buffer underruns, resulting in compression adjustments or, worse, the drive stopping to reposition while the array catches back up. That's just ridiculous to comprehend. Every corner of the storage community has kept up with performance AND capacity in a linear fashion except hard disk storage.
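Worth noting where the 1 GB/second figure comes from (the reply below is right that native rates are lower): quoted "compressed" tape rates are the native rate multiplied by an assumed compression ratio. A one-line check, assuming LTO-9's 400 MB/s native rate and the commonly advertised 2.5:1 ratio:

```python
lto9_native_mbps = 400      # LTO-9 native (uncompressed) transfer rate
assumed_ratio = 2.5         # vendors' assumed compression ratio
print(f"{lto9_native_mbps * assumed_ratio:.0f} MB/s 'compressed' rate")  # 1000
```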
LTO-9 FTW! The tapes aren't too costly either.markhahn - Sunday, February 4, 2024 - link
No, just 400. Quoting compressed speed is just a marketing cheat.

Not sure why it's a complaint against tape that its linear bandwidth is modestly higher than disk. There are some compromises when you design heads to move around, rather than sit placidly while media flows by ;)
Dante Verizon - Wednesday, January 17, 2024 - link
What are the read and write speeds? 500 MB/s?
RedGreenBlue - Sunday, March 31, 2024 - link
Just want to remind anyone who sees this article that Seagate’s blog post from 2018 is still up and reads: “Built over 40,000 HAMR drives; pilot volume in 2018, volume shipments of 20TB+ drives in 2019; drives are built on the same automated assembly line as current products.”

Seagate and Western Digital have been saying HAMR or MAMR are just around the corner for over 10 years. At some point, it becomes questionable beyond marketing hype whether it’s stock-price manipulation and whether the Securities and Exchange Commission should investigate them.
RedGreenBlue - Sunday, March 31, 2024 - link
May have been 2017: https://blog.seagate.com/craftsman-ship/hamr-next-...
But some publications claim the market is eating it up as another AI play, even though it isn’t much of one. https://www.wsj.com/livecoverage/stock-market-toda...