75 Comments

  • Cellar Door - Thursday, February 6, 2020 - link

    Imagine doing a RAID rebuild on an 80TB array..
  • quorm - Thursday, February 6, 2020 - link

    Drives so big they're impractical for anything except cloud.
  • prisonerX - Thursday, February 6, 2020 - link

    "640K ought to be enough for anyone..."
  • rocky12345 - Thursday, February 6, 2020 - link

    Yep, Mr. Bill Gates thought so way back in the 1980s when he said 640K was more than enough for any modern computer to have installed.
  • Cullinaire - Thursday, February 6, 2020 - link

    Maybe it would have been, but for certain applications that demanded a ridiculous % of the 640K free, life could get difficult if you were dependent on a lot of TSRs...
  • Samus - Friday, February 7, 2020 - link

    Considering 500GB HDDs and now SSDs have been the mainstream capacity for the VAST majority of PCs for over a DECADE, using the Bill Gates analogy is an apples-to-oranges comparison.

    Storage needs have grown incredibly slowly and in many markets have reversed, because of cloud computing making Quorm's statement that much more correct. Very few people have large hard disks in their homes, and even fewer people have disk arrays. The majority of computer users, from home to small business, have cloud based file storage and cloud based backups. Medium business and enterprise all have cloud backups and on-site backups for servers, but these aren't the people being targeted with these massive-capacity drives...most of those servers take near-line SAS drives, often 2.5", anyway.
  • deil - Friday, February 7, 2020 - link

    That's because the time/price ratio of doing this yourself vs. buying the same service from Backblaze, for example, is so skewed toward the cloud that there's almost no point to homemade setups.
    Unless you do illegal stuff, that is, OR have privacy issues.
    With a 1Gb/50Mb down/up link, it's even faster to back up to BB than locally.
  • Golgatha777 - Friday, February 7, 2020 - link

    Well, my cable modem with its 1GbE port only has a 35 Mb/sec uplink. Not to mention a 1TB data cap. Unless you're fortunate enough to have AT&T or Google Fiber, local backups are still the only way to back up TBs of data reliably and in a timely fashion.
  • azazel1024 - Friday, February 7, 2020 - link

    Huh? What do you think you'd be backing up to locally? I have 2x1GbE between my desktop and server. My internet is 200/200Mbps, which actually seems to hit about 250/230Mbps. Pretty thankful I have that, but my local backups are 10x faster than anything over the internet possibly could be. And for cost, I'd need a >1TB plan. Even ignoring my movie folder (almost all BR rips of movies I own, plus a few DVDs), I still have >1TB of data between home movies of my kids, photos, music, and exe/zip installers of games (GOG, rips from old DVD game disks, etc.).

    I see that plenty of services, like Dropbox, are now up to 2TB for $9.99 a month. But if I did want to back up my movie library (on which I've spent a couple of hundred hours, plus many hundreds of hours of compute time/electricity, ripping my BRs for personal use; I could do it again, but that is a HUGE investment of my time I'd like to protect), 2TB wouldn't cut it (I am up over 3.5TB total data). Even using Dropbox's options, they max out at 3TB.

    So not even an option for me. But if I could trim the fat and get it under 3TB, I'd be looking at $16.58 a month. Round it to $200 a year. Total investment in my server has been around $500 for all hardware, drives, etc. That was...I dunno, 4 or 5 years ago? Something like that. I pay around $15 a year in electrical costs to run the server.

    And I can't stream to my apple TV from dropbox...

    Sure, I know my setup is more unusual than probably 97% of home users'. But you can't even say the balance is so skewed toward the cloud that there's no point to local storage.

    A friggen USB3 2TB pocket hard drive is like $100 and way faster than 99.999% of residential internet connections available in the US (maybe slower than the 0.001% who have a 2Gbps internet connection).

    I also wouldn't call it privacy issues. Maybe privacy CONCERNS would be more appropriate.

    I have friends who don't have anything like a media library from piracy. Hell, I helped a neighbor set up a backup routine to a 2TB USB drive they bought a couple of years ago and hadn't gotten around to doing anything with. She had about 600GB of videos and photos of her kids from her husband's camera and their phones to back up, along with a few GB of work-related files from her SE job. That can sure fit in most cloud accounts, but that external USB drive is also, again, less than $100. And DB would be more than that in a year, let alone over years.

    I use cloud storage for some of my most important files where I want FOUR copies of it, one offsite. But it isn't remotely a replacement for "homemade stuff", unless you just want easy. It isn't faster. It isn't cheaper.
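
    As a rough sanity check of the cost argument above (a sketch using only the figures the comment gives: ~$16.58/month for a 3TB cloud tier vs. ~$500 of server hardware plus ~$15/year in power; everything else is illustrative):

    ```python
    # Figures taken from the comment above; names are illustrative.
    CLOUD_PER_YEAR = 16.58 * 12           # ~$199/yr for a 3TB cloud tier
    LOCAL_UPFRONT = 500.0                 # one-time server hardware cost
    LOCAL_POWER_PER_YEAR = 15.0           # yearly electricity estimate

    def cloud_cost(years: float) -> float:
        return CLOUD_PER_YEAR * years

    def local_cost(years: float) -> float:
        return LOCAL_UPFRONT + LOCAL_POWER_PER_YEAR * years

    # Break-even: 500 + 15n = 198.96n  =>  n = 500 / (198.96 - 15)
    break_even = LOCAL_UPFRONT / (CLOUD_PER_YEAR - LOCAL_POWER_PER_YEAR)
    print(f"local storage pays for itself after ~{break_even:.1f} years")
    ```

    By these numbers the DIY server is ahead after roughly 2.7 years, ignoring drive replacements and the offsite protection a cloud copy provides.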
  • evernessince - Friday, February 7, 2020 - link

    Mainstream capacity for HDDs is 2TB. 500GB drives haven't been mainstream since Windows 7 first launched.

    For some people a 500GB SSD might be enough, assuming you do not play too many games or have too many apps. It will certainly fill up quickly installing modern AAA titles at 120GB each though.

    FYI, moving the storage need from a local machine to the cloud doesn't make that quote any more correct. People still need that storage, whether it comes from the cloud or not. One way or another, you are still paying for that drive space, whether it's through selling your information or through a subscription.
  • olde94 - Friday, February 7, 2020 - link

    Most laptops sold over the last few years have had 256GB SSDs. When we changed from HDD to SSD, the capacity dropped for a while, so I'd say 256 or 512GB has been the norm for the last decade. Having more than 500GB in a laptop in 2009 was not common, and running HDD-only after 2015 was rare too.
  • umano - Thursday, February 13, 2020 - link

    That was the fun of playing with autoexec.bat and config.sys. If I remember correctly, to play TIE Fighter the requirement was 615KB free :)
  • lazarpandar - Thursday, February 6, 2020 - link

    wildly misused quote
  • FunBunny2 - Thursday, February 6, 2020 - link

    "wildly misused quote"

    So he says. But it is true that using a PC for internet surfing and email and word processing doesn't require much more.
  • B3an - Thursday, February 6, 2020 - link

    You're not using the past tense... So I'm assuming you mean in today's age, but surely a human can't be this stupid? Even if you ignore the browser and OS memory/disk usage, the AnandTech home page alone is probably over 640KB.
  • HardwareDufus - Thursday, February 6, 2020 - link

    It is. I do wonderful things with the Arduino Due, a 32-bit ARM microcontroller with 512KB of storage and 96KB of SRAM. I even run touch screen apps at 800x480... all bear metal C, C++ programming
  • HardwareDufus - Thursday, February 6, 2020 - link

    ooopps that's bare metal
  • Valantar - Friday, February 7, 2020 - link

    Bear Metal sounds like my new favourite music genre.
  • yetanotherhuman - Friday, February 7, 2020 - link

    Actually, they're not totally full of shit. If you're using traditional RAID, the rebuild times spike so high that the chance you hit an error when rebuilding is far too great.
  • damianrobertjones - Friday, February 7, 2020 - link

    You need to read the entire statement, about 640k, and take it in context.
  • PeachNCream - Friday, February 7, 2020 - link

    It's impressive that this particular quote still gets trotted out by people once in a while.
  • dullard - Friday, February 7, 2020 - link

    Every time this false 640K quote comes up, I have to correct people. Bill Gates wanted 640K as the MINIMUM, not the maximum. This was at a time when computers had 64K. So Gates wanted, at a minimum, 10x the memory.
  • azazel1024 - Friday, February 7, 2020 - link

    Yup.

    Not that I have a strong desire to, but if my entire movie library on my server were 4K low/no-compression h.265, I'd go from around 2TB of data to somewhere in the range of 30-60TB.

    Not that I would. Well, not anytime soon. But hey, if storage continues to grow, I'll find a way to use it. Before h.265 was "common" (well, common for media streamers to decode), my library was mostly 720p with just some 1080p. Now I've been ripping my BRs again, at 1080p this time, and I have some 4K mixed in for a handful.

    If I had 2x the storage (or more), I wouldn't have any qualms about 1080p from the get-go. At some point, once I go from 2x3TB drives to a bigger RAID array, I'll probably source 4K for some of the stuff that is 1080p today (though none of this "uncompressed" stuff).

    If you give me the space, I'll find a way to fill it, which will likely make my life marginally better.
  • olde94 - Friday, February 7, 2020 - link

    Recovery would be 4 and a half days at 200MB/s, and I doubt it'll be 200MB/s across the full platter.
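
    That estimate checks out; a minimal sketch (assuming a constant 200MB/s, which as the comment notes is optimistic toward the inner tracks):

    ```python
    # Time to sequentially rewrite an 80TB drive at a constant 200 MB/s.
    capacity_bytes = 80e12       # 80 TB
    throughput_bps = 200e6       # 200 MB/s, flat (real drives slow toward inner tracks)

    seconds = capacity_bytes / throughput_bps   # 400,000 s
    days = seconds / 86400                      # ~4.6 days
    print(f"{seconds:,.0f} s = {days:.1f} days")
    ```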
  • valinor89 - Friday, February 7, 2020 - link

    Wouldn't the increase in density mean that at least the read speed would also increase? A modern HDD will hit at least 100MB/s average. If the density is increased 5 times, the (sequential) data rate should increase similarly (disregarding limits of SATA, etc.).

    If we maintain the rotational speed and increase the data density, we also increase the data that is read per revolution.

    I have my doubts about writing, as the increased complexity might limit the speed at which we write to the drive.
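
    One caveat worth sketching out (the 50/50 split below is my assumption, not anything from the article): sequential throughput only tracks linear (along-the-track) density, so if a 5x areal gain comes partly from more tracks per inch, the data rate grows by much less than 5x.

    ```python
    def seq_speed_after_density_gain(base_mb_s, areal_gain, linear_share=0.5):
        """Sequential speed scales with linear density only.

        If the areal gain splits evenly (in log terms) between linear
        density and tracks-per-inch, linear density grows by the square
        root of the areal gain.
        """
        linear_gain = areal_gain ** linear_share
        return base_mb_s * linear_gain

    # A 100 MB/s drive with a 5x areal density increase, split evenly:
    print(seq_speed_after_density_gain(100, 5))   # ~224 MB/s, not 500
    ```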
  • MenhirMike - Thursday, February 6, 2020 - link

    With modern hard drive sizes, RAID has become a less and less appealing option. Something software-assisted that understands the file system is a better option: host-managed drives have been in use by cloud providers for a while, and stuff like ZFS or Windows Storage Spaces also avoids the issue of "mirror 100% of a drive when only 20% is used".
  • mukiex - Thursday, February 6, 2020 - link

    ZFS could make that manageable. It only rebuilds the data, and you can go all triple-parity so the downtime doesn't leave you at risk of data loss.
  • Supercell99 - Thursday, February 6, 2020 - link

    Pornhub can't even use that. SSDs are the future. IOPS/GB is real.
  • Brane2 - Friday, February 7, 2020 - link

    If you get 250MB/s per head and you have 9 platters and two arms, each with 18 heads (two heads per surface), that would get you 18 x 0.25 = 4.5GB/s.

    Which means less than 40,000 seconds to read and write the whole 80TB.

    Which is 12-ish hours.
    So what?
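
    Spelling out that arithmetic (taking the comment's assumptions at face value: 250MB/s per head and 18 heads streaming concurrently):

    ```python
    per_head_mb_s = 250          # assumed per-head streaming rate
    active_heads = 18            # 9 platters x 2 surfaces, all active at once

    aggregate_gb_s = per_head_mb_s * active_heads / 1000        # 4.5 GB/s
    # Read the whole 80TB once, then write it all back:
    seconds = 2 * 80e12 / (aggregate_gb_s * 1e9)                # ~35,556 s
    print(f"{aggregate_gb_s} GB/s -> {seconds:,.0f} s ({seconds/3600:.1f} h)")
    ```

    That lands at roughly 35,600 seconds (just under 10 hours), so "less than 40,000 seconds" holds; whether 18 heads can actually stream concurrently is the open question.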
  • Valantar - Friday, February 7, 2020 - link

    Yeah, sure, let's just assume that HDD R/W speeds will multiply by 20x in the next few years when they've increased by <2x over the past decade. Makes sense.
  • Brane2 - Friday, February 7, 2020 - link

    Over the past decade they increased the bandwidth of a SINGLE head, which tracked the longitudinal density increase.

    Now they have brought in mechanisms that make MULTIHEAD data transfer possible, so corresponding multipliers are likely (18 heads on 9 platters, twice that with two arms, etc.).
  • sheh - Friday, February 7, 2020 - link

    Let's get 2 actuator drives into the market first, then see about more concurrently active heads. :)
  • Gigaplex - Friday, February 7, 2020 - link

    If the data you need is all on a single platter, all those extra heads aren't going to help.
  • sheh - Friday, February 7, 2020 - link

    You can write across multiple surfaces.
  • Targon - Thursday, February 6, 2020 - link

    Another development that will be obsolete in another ten years. As it stands currently, solid state drives have an issue with decay when unpowered for extended periods. If that issue is addressed where drives can be unpowered for ten years without data loss or corruption, that will resolve THAT issue and solid state drives will be good for data backups for most people.

    Next comes data volume. With the ability to get 4TB onto an M.2 2280 drive, a larger physical drive could then be made. Getting to 80TB in a 3.5-inch-sized drive really wouldn't be that difficult, and with much higher data transfer speeds. A different form factor, something like a PCIe 4.0 16-lane card, could do the trick as well to get 80TB done. Again, the only current issue is shelf life if the machine is left unplugged for an extended period of time.

    In ten years, would there be any reason to go magnetic, at the rate of SSD development, as long as the big unpowered issue is no longer an issue? High-capacity platters are still subject to problems like head crashes.
  • extide - Thursday, February 6, 2020 - link

    There are already 50TB 3.5" SSDs. The issue is price.
  • bill.rookard - Thursday, February 6, 2020 - link

    Yeah - when Samsung put out the 15.7TB SSD, the MSRP on that one was in the neighborhood of $10k. I'm sure it's come down a bit since then, and they also released a 30TB drive. Not sure on the price for that one. These are both in the 2.5" format.

    Seagate put out a 60TB 3.5" drive, and I'm sure that's in the range of 'stupidly expensive'.
  • HardwareDufus - Thursday, February 6, 2020 - link

    Heat too...
  • MenhirMike - Thursday, February 6, 2020 - link

    In ten years, who knows. But from now for the next 10 years? Yeah, storage density per $ is much better with spinning rust.
  • PeachNCream - Thursday, February 6, 2020 - link

    I would think that as density increases on mechanical drives, we may begin to see data corruption from background level EMI impact long-term unpowered HDDs as we are already seeing with SSDs. What we really need is to put NAND and mechanical drives behind us by using storage technologies that mix the P/E durability and cold storage longevity of mechanical drives with the high throughput, low latency, and physical shock resistance of NAND. I was hopeful about Optane/Xpoint, but that technology and other potential replacements are simply not developing very quickly so we are instead getting a variety of bandaid and stopgap solutions on both sides of the storage development fence that fail to address the fundamental problems of each respective data storage method.
  • prisonerX - Thursday, February 6, 2020 - link

    Most technological developments will be obsolete in 10 years' time. You're assuming that spinning platters won't progress in those 10 years, just like they have every other 10 years since their inception. And they'll do so in a way that satisfies their niche.
  • DanD85 - Thursday, February 6, 2020 - link

    Lol, I guess you've never heard about data tape then? 80TB hard drive? Magnetic tape reached 330TB back in 2017! Every storage technology has its own place.
  • MrSpadge - Thursday, February 6, 2020 - link

    Cramming lots of NAND into a tight space is not an issue, but paying for all those chips is. And yes, you can increase the cold storage time, at the cost of capacity: you have to go back to SLC, increase cell sizes, and insulate them better (slower). Sounds pretty unattractive to me.
  • Beaver M. - Friday, February 7, 2020 - link

    Not to mention resources. It's easy to get materials for HDDs and recycle them. The materials used in SSDs are rare and expensive, hard to come by, limited, and very hard to recycle, if not impossible.
    If you were an eco-activist, you would see SSDs as the big bad combustion engines.
  • Beaver M. - Friday, February 7, 2020 - link

    I recently started up an SSD from my old 2008 machine. It hasn't been touched for 10+ years.
    All data is still there, everything works.
    Humans are good at making an elephant out of a fly.
  • PeachNCream - Friday, February 7, 2020 - link

    Humans are also pretty good at comparing apples to oranges (older, larger feature size MLC tolerance applied as if it were modern, smaller TLC).

    Humans further excel at assuming a sample size of one is statistically significant (my one computer passed a cursory check so everyone will have the same outcome).

    Humans are really good at making up fictional situations which no one can disprove in order to make a false point (a supposed computer from 2008 that has not been turned on in 10+ years).
  • Beaver M. - Saturday, February 8, 2020 - link

    Humans are also good at making shit up. I even know people who still claim that SSDs are far too unreliable because they break early due to the write limit. :)

    Humans are also good at reading too much into a sentence they didn't read properly. I only said it's from my 2008 machine. I never said it's still here. The SSD was out of use all this time, not connected at all for at least 10 years. The funny thing is that I had a 40GB HDD that is a little older and wasn't connected all that time either, and that one isn't working anymore.

    There are many examples like mine in forums, and even articles about it. If you don't believe it, then you are a very human human. :)
    And that's not even my fault. Gasp, huh?
  • PeachNCream - Sunday, February 9, 2020 - link

    I like the goal posts that you've just moved to make your story seem more plausible. It's a good try and you should give yourself a sticker for the effort.
  • MASSAMKULABOX - Friday, February 7, 2020 - link

    It's pretty shocking to me that these are *mechanical* drives operating almost at the atomic scale. And they are expected to endure a pretty hefty duty cycle. What do they use on the ISS? It's gotta be hardened SSDs (maybe just 28nm or bigger?). The takeoff alone would destroy any unparked HDDs. Who would have thought HDDs would still be going strong after SSDs emerged 20 years ago.
  • valinor89 - Friday, February 7, 2020 - link

    They used ThinkPads and mention using hard drives.

    https://www.ibm.com/ibm/history/exhibits/space/spa...
  • cosmotic - Thursday, February 6, 2020 - link

    Any increase in the speed of the data transfers? Is it time to add more read/write heads? Seems the stackexchange answer is out of date and nothing in the answer precludes implementation https://superuser.com/questions/1137805/why-arent-...
  • shabby - Thursday, February 6, 2020 - link

    They'll hit 80tb as soon as intel hits 10ghz on the p4...
  • waterdog - Thursday, February 6, 2020 - link

    Photo caption: "Plan View" not "Plain View" though the surface is likely pretty darn flat.
  • GCappio - Thursday, February 6, 2020 - link

    For my degree in 1993, I worked for 2 years in a university physics lab (the Politecnico, for engineers; I am an electronic engineer, branch "microelectronics, optoelectronics, instrumentation") in Milan, Italy on "Thin Fe (iron) films grown on Ag(100) (crystalline silver) studied by angle- and spin-resolved inverse-photoemission spectroscopy"...

    So, we discovered that the magnetic domains are perpendicular to the surface for iron films up to 5 atomic layers, then the magnetic domains become parallel to the surface if there are more than 5 atomic layers...

    This means that if you are able to make platters with up to five iron atomic layers, you can have much denser bit recording per square inch... I wonder if they will ever be able to make such platters, and what the recording density would be...

    See this article... https://journals.aps.org/prb/abstract/10.1103/Phys...
  • rahvin - Thursday, February 6, 2020 - link

    Is the platinum new? I don't remember Platinum being a platter ingredient. I hope it's a really tiny amount cause yikes that's going to boost drive prices.
  • eldakka - Thursday, February 6, 2020 - link

    I don't think even 20TB HDDs are practical at the currently available data transfer rates, let alone 80TB.

    At least not in RAID (e.g. ZFS raidz2 or raidz3) arrays, as other commenters have noted. I have 8TB HDDs in raidz2, and if I lose a disk, it takes the best part of a day to resilver the replacement drive.

    Adding a second actuator with a second set of heads would help tremendously. However, the cost of this (assuming they could even fit them in) would probably bring them up to near-SSD prices anyway ...
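
    For reference, that resilver figure is consistent with simple arithmetic (assuming ~150MB/s sustained and a full-disk rewrite; real resilvers are far less sequential, so this is a lower bound):

    ```python
    capacity_bytes = 8e12        # one 8 TB member disk
    throughput_bps = 150e6       # ~150 MB/s sustained, an assumed average

    hours = capacity_bytes / throughput_bps / 3600
    print(f"~{hours:.1f} hours minimum to resilver")   # ~14.8 h: "the best part of a day"
    ```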
  • Brane2 - Friday, February 7, 2020 - link

    That's where multihead access and redundancy come in.
    With multihead transfers and 8 platters you get 16x the transfer speed.

    With an extra head per platter you get considerable redundancy.

    One such 20TB drive could be much more reliable than a corresponding bunch of classic HDDs in RAID-6, for example.
  • GreenReaper - Friday, February 7, 2020 - link

    Extra heads per platter sound like more things to break inside an enclosed space, and more heat. Neither seems particularly good for reliability.
  • Brane2 - Friday, February 7, 2020 - link

    The heat issue was addressed with helium.

    More things can go wrong, but if they do, the user will be given options that s/he did not have before. Like using the other good head for data migration from the affected platter side.

    Also, with such high capacity drives, users will expect to pay some more $$$ for better mechanics, as these things will be used for cold storage. Which means better bearing materials, much more detailed SMART, better head mechanics, and the ability to run normally 24/7 in degraded mode (like with 1 faulty head), etc.
  • extide - Friday, February 7, 2020 - link

    More IOPs, not more bandwidth.
  • Brane2 - Friday, February 7, 2020 - link

    Both. You can easily interleave sectors so that they land on sequential heads.
    One doesn't even have to do it in the drive. A computer driver can take care of that.
    And do many other things.

    I suppose Linux filesystem structures will be updated and changed to take advantage of those new mechanisms...
  • bcronce - Friday, February 7, 2020 - link

    OpenZFS recently got DRAID http://open-zfs.org/wiki/DRAID_Rebuild_Performance
    Not sure which platforms currently have it yet or how mature it is.
  • Brane2 - Friday, February 7, 2020 - link

    When they get to 40-ish TB per drive, with multihead RAID-like access and two heads per platter, at low per-unit prices like we have now for 2TB, they will present a GREAT archival option that could trigger an avalanche of new orders.

    AND - interchangeable electronics. So that if the electronics on one drive die, they can easily be swapped from another drive and the data rescued. It was possible long ago and I don't see a reason why it shouldn't be possible again.

    AND, it would be nice to have a separate external pin that would let one turn the drive totally off.
    So you could have, say, 100 drives on the same cold-storage machine, consuming no power at idle.
    With a little help from a cheap microcontroller on the side...

    No one needs the porn so badly that the SSD/HDD access time difference would be meaningful ;o)

    For long-term data archival OTOH, HDDs seem to be unbeatable for mere mortals.
  • GreenReaper - Friday, February 7, 2020 - link

    As someone who runs such a site, it depends. Sure, the full files we have on RAID5. But reduced-sized previews and thumbnails go on RAID10 and SSD if available - for IOPS if not for access times. (Although obviously the two can be almost directly related when it comes to HDDs.)
  • sorten - Friday, February 7, 2020 - link

    I assume that at some point you'll fix the broken Purch JS scripts.

    A little difficult for me to get excited about cold storage drives. LOL @ people talking about using these in a RAID.
  • Valantar - Friday, February 7, 2020 - link

    Let's see ... a 5x increase in density might lead to a near-linear increase in read speeds, but I have a feeling write speeds won't reach those levels - especially if we're talking SMR on top of this stuff. So let's say one of these 80TB drives manages 500MB/s reads and 300MB/s writes. That would make them very fast for an HDD, but ...

    300 x 60 x 60 x 24 = 25,920,000 megabytes of writes per day in a perfectly sequential workload (which never happens). That's more than three full days to fill the drive.

    500 x 60 x 60 x 24 = 43,200,000 megabytes of reads per day. So nearly two full days to read out the entire drive - and that's if you ignore the file system entirely and just clone the raw data track by track sequentially. _Any_ seek time added between files would dramatically increase this time.

    In short, storage like this is wildly impractical outside of deployments with absolutely massive redundancy, and even there it has serious issues.
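
    The same figures, restated as a sketch (same assumptions as the comment: 80TB capacity, 500MB/s sequential reads, 300MB/s sequential writes, zero seeks):

    ```python
    capacity_mb = 80e6                   # 80 TB expressed in MB
    write_mb_s, read_mb_s = 300, 500     # assumed sequential rates
    SECONDS_PER_DAY = 86400

    fill_days = capacity_mb / (write_mb_s * SECONDS_PER_DAY)
    dump_days = capacity_mb / (read_mb_s * SECONDS_PER_DAY)
    print(f"fill: {fill_days:.1f} days, full read: {dump_days:.1f} days")
    # fill: ~3.1 days, full read: ~1.9 days
    ```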
  • eastcoast_pete - Friday, February 7, 2020 - link

    Interesting article - but when will we see these in action?
    And, to those who ask who needs such enormous drives: try backing up 4K and soon 8K footage, and you'll soon see that there is no such thing as too much storage space. And yes, editing is much better done from large NVMe SSDs, but you still want the raw backups on spinning rust.
  • sheh - Friday, February 7, 2020 - link

    Modern 3.5" HDDs average 150-190MB/sec.

    For years now, read/write speed has been growing more slowly than data density. If that weren't the case, reading a whole drive wouldn't have grown from a few minutes in the past to many hours today.

    The problem is that speed grows only from increases in linear density (inside a single track), while most of the density growth is from an increase in the number of tracks (tracks per inch).

    I think write speed should be the same as read.
  • sheh - Friday, February 7, 2020 - link

    The above is a reply to valinor89.

    (AnandTech, reply threading is broken when JavaScript is disabled. It's been like that for... 10 years now?)
  • ksec - Saturday, February 8, 2020 - link

    Most may not need a 70-80TB desktop HDD, but I would surely want an 8-20TB HDD in the notebook 2.5" format.
  • Gonemad - Tuesday, February 11, 2020 - link

    Do they use RAID internally on the drive... like, it has nine platters, so it records a file in nine chunks, one on each platter, at the same head position?
    Or are the heads driven independently?

    Would that strategy even be possible?

    I understand the "temperature" at the recording surface is absurdly high when we talk at the nanometric scale..., but how hot would a drive like this run during a heavy write load? Any models out with HAMR yet?

    Would it purposely run some form of #prochot and force a cooldown under heavy loads?

    Can benchmarkers delve into the controllers and see if there are internal RAID techniques, or do they just run benchmarks as-is and hope to catch a drive "cheating" somehow?

    SO MANY QUESTIONS.
