70 Comments
Wereweeb - Wednesday, March 10, 2021 - link
Ctrl+f "Yo go further", should be "To go further"Ryan Smith - Wednesday, March 10, 2021 - link
Thanks!
MenhirMike - Wednesday, March 10, 2021 - link
As much as I would love to throw out spinning rust, there's just no alternative for capacity. I guess these will still have an error rate of 1 in 10^15? (That's 1 error in 113 TiB, so still not a huge issue, but 1 in 10^14 also seemed like a lot at some point in time.)
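For reference, a quick sketch of the arithmetic behind that figure, assuming the spec means one unrecoverable read error per 10^15 bits read:

```python
# Quick check of the "1 in 10^15" figure, assuming it means one expected
# unrecoverable read error (URE) per 10^15 bits read.
bits_per_ure = 10**15
tib_per_ure = bits_per_ure / 8 / 2**40                # bits -> bytes -> TiB
print(f"{tib_per_ure:.1f} TiB per expected error")    # ~113.7 TiB
print(f"{10**14 / 8 / 2**40:.1f} TiB for the older 1-in-10^14 spec")  # ~11.4 TiB
```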
Wereweeb - Wednesday, March 10, 2021 - link
I assume they'll find other ways to improve the error rate. Doesn't ZFS have a kind of built-in ECC?
heffeque - Wednesday, March 10, 2021 - link
Yes, and it does it in the background, but it's not ideal when there's constant high I/O.
eastcoast_pete - Wednesday, March 10, 2021 - link
Also, doesn't ZFS like (need) a fair amount of RAM and some processing oomph to run at good speeds? Great file system, though!
kgardas - Thursday, March 11, 2021 - link
I was running a ZFS mirror on two spinning-rust drives as my main workstation storage years ago with just 1GB of RAM. If you have >4GB you are fine. It's completely meaningless to talk about ZFS RAM requirements these days, when a common workstation has more than 64GB of RAM...
scineram - Friday, March 12, 2021 - link
No.
phoenix_rizzen - Monday, March 15, 2021 - link
No, ZFS will run fine on a 32-bit Pentium 4 system with 2 GB of RAM (that was my home server for years). You do need to do some manual tweaking, but it runs fine.
Obviously, the more RAM you have, the larger the ARC will be and the better it will perform.
The only feature that requires lots of RAM is deduplication. And 9 times out of 10, you really don't want to enable dedupe. :) Compression will get you virtually the same benefits, without any of the drawbacks.
scineram - Friday, March 12, 2021 - link
No.
Jorgp2 - Thursday, March 11, 2021 - link
Hard drives already have ECC.
Daniel S. Buus - Sunday, September 11, 2022 - link
ZFS is absolutely useless on SMR drives, so unless these drives turn out to somehow solve the issue of write amplification from zone modifications causing full zone rewrites, ZFS won't be viable here either.
flyingpants265 - Saturday, March 13, 2021 - link
Fun fact: with 25TB you can already store almost the entire IMDb collection of movies (or at least those with over 1000 votes) at 700MB quality.
Kamen Rider Blade - Wednesday, March 10, 2021 - link
Please don't let "Multi-Actuator" stop at just 2 independent moving arms.
Gradually increase the tech until all moving arms are fully independent and we can have super-fast sequential R/W & I/O performance.
dwillmore - Wednesday, March 10, 2021 - link
I find it interesting that two actuators and higher areal density may push us past the 6Gb/s SATA and SAS links that we've been on for what seems like forever. SAS at least has 12Gb/s to move to, but we never saw a SATA 4 because consumer SSDs (the only drives capable of exceeding 6Gb/s) just jumped to NVMe over PCIe. There was little marginal advantage to having a 12Gb/s SATA standard--SSDs wouldn't want to use it and HDDs didn't need it.
But now we're coming to a situation where a 12Gb/s SATA makes sense. Or will something like U.2 actually 'become a thing' that you see outside of trade shows and whitepapers? M.2 is only good for solid state--no vibration tolerance and highly problematic mounting for a big spinning drive.
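A rough back-of-the-envelope on when dual actuators collide with the SATA 3 ceiling; the per-actuator sustained rate below is an assumed, illustrative figure:

```python
# Back-of-the-envelope: does a dual-actuator drive outgrow SATA 6 Gb/s?
# The ~280 MB/s sustained rate per actuator is an assumed, illustrative figure.
per_actuator_mb_s = 280
dual_actuator_mb_s = 2 * per_actuator_mb_s     # ~560 MB/s streaming from both halves

sata3_payload_mb_s = 6000 / 10                 # 6 Gb/s with 8b/10b encoding -> ~600 MB/s
sas3_payload_mb_s = 12000 / 10                 # 12 Gb/s, also 8b/10b -> ~1200 MB/s

print(dual_actuator_mb_s, sata3_payload_mb_s, sas3_payload_mb_s)
# 560 600.0 1200.0 -> a dual-actuator drive already grazes the SATA 3 ceiling
```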
Wereweeb - Wednesday, March 10, 2021 - link
There was a standard that could be considered a kind of "SATA 4.0": it was called SATA Express (also called SATA 3.2, for some godforsaken reason), used two PCIe lanes, and had a theoretical bandwidth of up to 16Gbps. It just didn't take off, because neither HDDs nor SSDs needed it.
And I assume they'll eventually switch to using something derived from U.3 for consumer systems, probably using SAS (it appears there will be a 45Gbps SAS-5).
DanNeely - Thursday, March 11, 2021 - link
SATA Express should be considered a proto-M.2/U.2 implementation, not a continuation of SATA. The actual SATA part was completely unchanged from the existing 6Gb implementation. It went nowhere because SSDs surged to the 4-lane PCIe standard, rendering it obsolete even as it was first rolling out on mobos.
Multi-actuator HDDs will bottleneck existing SATA and eventually SAS implementations; initially I'd expect 12Gb to be back-ported to SATA. Longer term, it'll be interesting to see if they choose to double again to 24Gb, or to use PCIe cables as a transport layer.
Jorgp2 - Thursday, March 11, 2021 - link
SATA Express is a continuation of SATA that allowed it to be compatible with 2 SATA drives.
Samus - Friday, March 12, 2021 - link
Correct, SATA Express (U.2 form factor) is a SATA command set, just like M.2. The only difference is the interface connection. Even SAS builds on the SATA command set, thus it is compatible with SATA devices (but SAS devices are not compatible with SATA).
A lot of people confuse M.2 for a command set, when it's just a connector/interface. M.2 can be used with the various SATA command sets or the various NVMe command sets, just like the SAS connector/interface can be used with the SATA2+ command set or the SCSI command set.
Dolda2000 - Thursday, March 11, 2021 - link
As far as I'm concerned, it would be nice to have a faster S-ATA standard for SSD use, if only because S-ATA always has hotplug. It would be nice to not have to rely on a direct system bus connection for storage.
Samus - Friday, March 12, 2021 - link
The problem with SATA is that the command set cannot be modified to gain a lot of the advantages of NVMe (particularly reduced overhead and power efficiency) without losing compatibility with existing SATA drives. The queuing nature of SATA and its block assignment require a command structure that actually conflicts with that of NVMe, and since it hasn't ever been pursued even as a theoretical proof-of-concept within industry circles, I don't think it's possible, or there just doesn't seem to be enough interest within the industry to bother.
SAS has a great deal of performance and very low overhead, as most of the work is offloaded to the drive controllers off the bus, enabling the host controller to manage queuing and RAID management. It will be "some time" before magnetic media exceeds 1.2GBps (and unlike SATA, SAS actually delivers nearly 100% of its available throughput). My guess is that by the time magnetic media hits this kind of performance, a revised SCSI command set with a new interface will be introduced, because they will need to keep it backward compatible, if only for their data center partners, who effectively subsidize the hard disk industry for us casual consumers.
Samus - Friday, March 12, 2021 - link
Correction: the recently introduced SAS-4 supports 2.25GBps, which is backward compatible all the way to SAS-1, while still supporting 65,535 devices on the same bus, and SAS-5 is being developed around PCIe 4.0 to deliver 4.5GBps. Keep in mind these are all PER NODE, and the limitation is really the PCIe bus. Even a common PCIe 3.0 x8 SAS-3 RAID controller doesn't have enough throughput from 8 of the fastest current SAS drives to saturate the bandwidth provided by the x8 PCIe slot. So the real limitation is the bus, if hard disks ever catch up and deliver near-NVMe SSD speed.
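A rough sanity check on that headroom claim; the per-drive sustained rate is an assumed figure:

```python
# Rough check of the "PCIe slot is the real ceiling" argument (illustrative numbers).
pcie3_lane_mb_s = 8000 * 128 / 130 / 8         # 8 GT/s, 128b/130b -> ~985 MB/s per lane
pcie3_x8_mb_s = 8 * pcie3_lane_mb_s            # ~7877 MB/s for an x8 slot

sas_hdd_mb_s = 270                             # assumed sustained rate of a fast SAS hard drive
drives = 8
print(round(drives * sas_hdd_mb_s), "vs", round(pcie3_x8_mb_s))
# 2160 vs 7877 -> eight of today's fastest SAS HDDs leave plenty of x8 headroom
```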
beginner99 - Friday, March 12, 2021 - link
For home use, the good thing about the lack of HDD performance was the lack of any need for >1Gbit/s Ethernet. Common low-power NAS drives barely break this limit in sustained read/write.
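The rough numbers behind that observation; the drive transfer rates below are assumed, typical figures:

```python
# Gigabit Ethernet vs. a single low-power NAS drive (assumed drive figures).
gbe_mb_s = 1000 / 8                      # ~125 MB/s raw; a bit less after TCP/IP overhead
nas_hdd_outer_mb_s = 190                 # assumed outer-track sustained rate
nas_hdd_inner_mb_s = 100                 # assumed inner-track sustained rate
print(gbe_mb_s, nas_hdd_outer_mb_s, nas_hdd_inner_mb_s)
# 125.0 190 100 -> one drive only clears 1 GbE on its faster outer tracks
```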
Wereweeb - Wednesday, March 10, 2021 - link
I assume that today that would increase energy consumption/heat output to intolerable levels. Not to speak of the reliability issues it would bring. Heck, right now they're probably still working on teething issues that the dual-actuator tech might have.
And while I'm sure that in the future we'll have something like that, they might end up reducing the number of platters in the drive to optimize for the total cost of ownership, which would partition the IOPS-optimized drives from cold nearline storage drives again.
And it still wouldn't really be "super fast", since files in general will grow much larger as everyone gets access to more and more storage. Imagine copying a 2TB game from an HDD, even one with a bucketton of actuators.
Multi-actuator tech is just a band-aid to maintain a respectable IOPS/TB metric for their server consumers - and it will still take a very long time to read through an entire 120TB HDD.
Kamen Rider Blade - Wednesday, March 10, 2021 - link
Seagate has already stated in a blog that having "Dual Actuators" only costs you one platter's worth of capacity for each HDD.
So where you would have had a 10-platter HDD with a single actuator, a 9-platter HDD could have dual actuators.
That's well worth the trade-off IMO.
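Put in illustrative numbers (the per-platter capacity here is assumed), the trade-off looks like this:

```python
# The trade-off in the claim above, in numbers (per-platter capacity is assumed).
platter_tb = 2.0
configs = {"single actuator": (10, 1), "dual actuator": (9, 2)}
for name, (platters, actuators) in configs.items():
    capacity_tb = platters * platter_tb
    iops_per_tb = actuators / capacity_tb      # relative random IOPS per TB
    print(f"{name}: {capacity_tb:.0f} TB, {iops_per_tb:.3f} relative IOPS/TB")
# single actuator: 20 TB, 0.050; dual actuator: 18 TB, 0.111
# -> you give up ~10% capacity but more than double the IOPS/TB
```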
vegemeister - Thursday, March 11, 2021 - link
Most of a hard drive's power/heat is from the spindle motor.
Samus - Friday, March 12, 2021 - link
Not for long. I suspect these new platters are going to get VERY hot, with HAMR literally heating up the surface. What's amazing is that heat will naturally be absorbed into the spindle and spread to the motor. It makes me wonder if a lot of enterprise drives using heat-assisted recording will be 5000RPM-class drives.
eastcoast_pete - Friday, March 12, 2021 - link
Aren't those HAMR platters based on a glass-like carrier substrate? I thought the whole idea of heat-assisted recording is the highly localized nature of the heating, so the absolute amount of power used for heating is actually quite low.
ballsystemlord - Wednesday, March 10, 2021 - link
I agree! Maybe we can get HDDs that have lower-latency random I/O than QLC SSDs.
eastcoast_pete - Wednesday, March 10, 2021 - link
Yes, and if they go to 8 arms they could call those their "Kraken" line of drives. At least the name would be catchy.
melgross - Thursday, March 11, 2021 - link
Yeah, and this time it might actually mean something real.
Spunjji - Thursday, March 18, 2021 - link
Doc Ock in my hard disk, yes please!
Arsenica - Wednesday, March 10, 2021 - link
Just commenting so that 2030 readers see that some of us back in 2021 thought that 512-layer 5LC NAND flash made on 450 mm wafers would defeat hard drives in cost at the 100 TB per device range.
nandnandnand - Wednesday, March 10, 2021 - link
A 100 TB SSD for $1,000-$1,500 in 2030 seems possible. 6 new generations of NAND costing 70% of the previous generation would get it close to $0.01/GB. 3D NAND hasn't even been on the market for 9 years yet.
512-layer NAND should be here around 2025, using string stacking. It might be closer to 768-1024 layers by 2030.
Hopefully, the industry doesn't pursue PLC/5LC NAND and beyond, unless they can miraculously fix its problems.
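That cost projection spelled out; the 2021 starting price is an assumed round figure:

```python
# The cost projection above, spelled out. The $0.10/GB 2021 starting point is an assumption.
start_usd_per_gb = 0.10
end_usd_per_gb = start_usd_per_gb * 0.70**6    # six generations at 70% of the previous cost
print(round(end_usd_per_gb, 4))                # 0.0118 -> close to $0.01/GB
print(round(end_usd_per_gb * 100_000))         # ~1176 -> ballpark of a $1,000-$1,500 100 TB drive
```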
bsd228 - Saturday, March 13, 2021 - link
SSDs at $10/TB in 9 years won't beat hard drives. I bought 14TB Exos X16s recently at $18.57/TB.
qap - Thursday, March 11, 2021 - link
There already is a 100TB SSD (for 12,500 USD), so it is only a question of price and what exact technology it will use. 5LC most likely yes - it is under development. 500+ layers, sure. 450mm wafers? I don't think so. That is like EUV on steroids - people have been talking about it even longer, but there is very little incentive to do so versus the initial cost.
Btw - it would be fun to set a timer on these posts that would remind us in 2030.
melgross - Thursday, March 11, 2021 - link
Well, it’s believed that they can’t go much beyond the 144 layers seen now.
nandnandnand - Friday, March 12, 2021 - link
They can with string stacking.
eastcoast_pete - Friday, March 12, 2021 - link
However, isn't it interesting that none of the chip manufacturers has yet moved to wafers larger than 300 mm? And that despite several of the fabs taking a serious look at it. Apparently, the downsides outweigh the upsides, at least for now.
K4AGO - Wednesday, March 10, 2021 - link
Ahhhh, I can see it now. HAMR hard drives used as microprocessor heat sinks. Goodbye radiators.
JoeTheDestroyr - Wednesday, March 10, 2021 - link
"Buy Intel CPUs, the extra heat makes your hard drive faster!"boozed - Wednesday, March 10, 2021 - link
I can't wait for AT to recommend one of these as a good consumer hard drive.
logoffon - Wednesday, March 10, 2021 - link
Yeah, "120 TB", but then it's actually only 109.139 TiB.I still don't understand why manufacturers can't make drives that goes in 1024 instead 1000? Is it purely for costs?
ballsystemlord - Wednesday, March 10, 2021 - link
I second the motion.
Kamen Rider Blade - Wednesday, March 10, 2021 - link
Marketing controls the measurement.
Ergo, Metric over Binary as the chosen form of data measurement.
nandnandnand - Wednesday, March 10, 2021 - link
That battle was lost a long, long time ago. But at least you used the binary prefix correctly.
vegemeister - Thursday, March 11, 2021 - link
The problem is actually with the developers of a single proprietary operating system, who apparently never got the news that integer division isn't super expensive anymore.
Klimax - Thursday, March 11, 2021 - link
You're blaming the wrong people. https://devblogs.microsoft.com/oldnewthing/2009061...
notb - Thursday, March 11, 2021 - link
Because people who don't understand the difference (aren't used to the binary system) are used to decimal prefixes.
And people who understand the difference can convert to TiB if that really makes them happy.
Kjella - Saturday, March 13, 2021 - link
Well, the SI units were there first. One kilometer is 1000 meters. One kilogram is 1000 grams. One kilowatt is 1000 watts. So one kilobyte is...? It's computer scientists that stole the meaning of kilo, not the other way around. Now I'm not pretending the HDD manufacturers do it for correctness, but they're not wrong. Their kilo/mega/giga/terabyte is the official SI way; it's just that nobody else uses it.
ksec - Wednesday, March 10, 2021 - link
Well, the TCO is increasingly flavouring NAND. Speed, IOPS and transfer rate are also part of the equation. But I guess both sides will milk it for as long as possible. NAND is already using dual stacking rather than an actual single 100+ layer stack.
JoeTheDestroyr - Wednesday, March 10, 2021 - link
delicious
ballsystemlord - Wednesday, March 10, 2021 - link
Spelling and grammar errors:
"There are about a dozen of customers that already use Seagate's Mach.2 PMR-based HDDs in their datacenters"
Remove excess "of":
"There are about a dozen customers that already use Seagate's Mach.2 PMR-based HDDs in their datacenters"
ballsystemlord - Wednesday, March 10, 2021 - link
Also:"...the company needs HAMR media featuring an areal density of approximately 2.6 Tb Tb/inch2."
Duplicate "Tb":
"...the company needs HAMR media featuring an areal density of approximately 2.6 Tb/inch2."
ekon - Wednesday, March 10, 2021 - link
2015: "capacity of hard drives will rise to 100TB by 2025"https://www.anandtech.com/show/9858/seagate-hard-d...
ballsystemlord - Wednesday, March 10, 2021 - link
According to the table in the linked article, this year (2021) we should be at about 4Tb/in2, or ~62TB HDDs.
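For anyone curious how such a table is derived, here is a rough sketch; every parameter besides the 4 Tb/in2 figure is an assumption:

```python
# Sketch of how areal density maps to drive capacity; every parameter here is assumed.
import math

areal_density_tbit_in2 = 4.0          # Tb/in^2, the figure from the comment
outer_r_in, inner_r_in = 1.84, 0.80   # assumed usable radii of a ~95 mm (3.5") platter
surfaces = 18                         # assumed 9 platters, both sides recorded
efficiency = 0.85                     # assumed overhead for servo, ECC, spares, zoning

area_in2 = math.pi * (outer_r_in**2 - inner_r_in**2)                   # ~8.6 in^2 per surface
capacity_tbyte = areal_density_tbit_in2 * area_in2 * surfaces * efficiency / 8
print(round(capacity_tbyte))          # ~66 -> same order as the ~62 TB figure in the table
```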
AbRASiON - Thursday, March 11, 2021 - link
I literally, LITERALLY read the same articles about 120TB Toshiba SSDs by 2018, back in 2014.
If we see a disk over 30TB in the next 5 years, I'll be surprised.
romrunning - Thursday, March 11, 2021 - link
Well then, prepare to be surprised. :)
20TB HDDs are already here, and 30TB will probably follow in a year.
anad0commenter - Sunday, March 28, 2021 - link
Highly unlikely. There's just no demand for this kind of capacity outside of server applications. And I will only consider something to be "out" once it's in consumer territory.
damianrobertjones - Thursday, March 11, 2021 - link
I can see it now...
As HDD storage increases, SSD storage will increase alongside.
SSD - Up to 6TB
HDD - Starting at 6TB up to...
Ensuring that the prices remain as high as possible, for more storage. Cha-Ching.
Arbie - Thursday, March 11, 2021 - link
Also ramping for 2030: Seagate's ability to be truthful about which drives are shingled and which are not.
back2future - Friday, March 12, 2021 - link
It would be interesting to compare experiences (on an academic level) between server-side HDD and SSD storage and backup devices over a decade:
https://www.researchgate.net/publication/320025946... (disk replacement periods, energy for data copying)
https://www.researchgate.net/publication/309159564...
https://static.googleusercontent.com/media/researc...
back2future - Sunday, March 14, 2021 - link
It would also be interesting if there's an update to the SSD data retention figures shown in this 2015 summary:
https://www.anandtech.com/show/9248/the-truth-abou...
flyingpants265 - Saturday, March 13, 2021 - link
How about the path to 5TB hard drives first? Drives have remained the same price per gig since 2010. Doesn't make sense at all.
mcnabney - Sunday, March 14, 2021 - link
Spinning rust is doomed.
The price difference between enterprise-grade drives and NVMe/SSD solutions is getting pretty narrow. Combine that with the MUCH higher energy cost (power for spinning rust PLUS power to cool spinning rust), higher rack-space / floor-space requirements, and higher failure/maintenance costs.
Solid-state storage costs are falling far more rapidly than magnetic-based storage. Once the related costs are considered, the total cost leader may flip in the next year or two. And we aren't even talking about speedier equipment being worth more in its own right.
.campbell - Monday, March 15, 2021 - link
Other than historical reasons (such as compatibility), is 3.5" the optimal size for an HDD? Are there engineering reasons why 5" (or larger/smaller) would offer advantages?
badpixel - Friday, November 12, 2021 - link
You must be young. Drives were way larger in the past. 3.5" established itself because it was a practical size for bulk storage.
There were some 5.25" HDDs in the past, but there were always problems with vibrations on such large spinning discs. Not so long ago, 2.5" was the de facto standard for enterprise drives with rotation speeds over 10K RPM, until they were replaced by SSDs (much more IOPS).
And while 7.5mm-12mm 2.5" HDDs were reserved for laptops, enterprise 2.5" HDDs usually used the 15mm height.
mhossain111 - Monday, April 12, 2021 - link
Hello all,
Can somebody advise how I can calculate atoms per bit? I am doing a Master's thesis, and I am trying to calculate how many atoms are required to store a bit with presently available HDD technologies.
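One rough way to set up that estimate: take the area per bit from the areal density, then multiply by the recording-layer thickness and the alloy's atomic density. A sketch with assumed inputs:

```python
# One way to estimate atoms per bit; every input here is an assumption to be refined.
areal_density_tbit_in2 = 1.1            # assumed, roughly current PMR drives
nm2_per_in2 = (2.54e7)**2               # 1 inch = 2.54e7 nm
bit_area_nm2 = nm2_per_in2 / (areal_density_tbit_in2 * 1e12)   # ~590 nm^2 per bit

layer_thickness_nm = 10                 # assumed magnetic recording layer thickness
atoms_per_nm3 = 80                      # assumed atomic density of a CoCrPt-type alloy (~8e22 per cm^3)

atoms_per_bit = bit_area_nm2 * layer_thickness_nm * atoms_per_nm3
print(f"{atoms_per_bit:.1e}")           # ~4.7e5 atoms per bit with these inputs
```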
UltraTech79 - Monday, September 5, 2022 - link
The graphs are hilarious. They are pretending they are going to get to 50TB, let alone 100TB. It's bullshit. Within 6 years SSDs will overtake HDDs in terms of cost per TB AND size, and they KNOW this. After that there is ZERO reason to stay with HDD tech. It's less reliable, more prone to data loss, uses more energy, and is bigger, hotter, and louder. Like, what are they smoking? Who are they trying to fool here? SSD storage doubles every 18-24 months for the same price. We are at 2TB for $100 now. In 2 years that will be 4TB, with 8TB available under $400. 2 more and that's 8TB for $100 and 16TB available... then 16TB and 32TB... Before 2030 we will see storage become dirt fucking cheap, because that's where SSD tech was always headed.
Not to mention how much fucking faster it is. HDD days are numbered, and companies like this with bullshit roadmaps should be laughed at.
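The commenter's doubling assumption, projected forward purely for illustration:

```python
# The doubling assumption from the comment above, projected forward (not an endorsement of the premise).
tb_per_100_usd = 2                      # the "2TB for $100" 2021 starting point
for year in range(2023, 2031, 2):       # assumed doubling every ~2 years
    tb_per_100_usd *= 2
    print(year, tb_per_100_usd, "TB per $100")
# 2023: 4, 2025: 8, 2027: 16, 2029: 32 -> roughly $3/TB by decade's end if the trend held
```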
zavalita2002 - Thursday, September 22, 2022 - link
When are prices going to go down? Even now the best-value drive is the 4 TB capacity, while the 8 TB versions are exactly double the price. The only improvement in the last 4 years (since I bought the versions that I replaced this year) is that the 3 TB was replaced as the best value by the 4 TB version. So the big drives are probably reserved for the data centers.