54 Comments

  • MFinn3333 - Thursday, February 7, 2019 - link

    I don’t think it is going to make it into consumer PCs. Besides, filling up something like a 24TB HDD at 250MB/s would take 96,000 seconds even in the best-case scenario.

    SSDs are just getting way too inexpensive for HDDs to remain the "good enough" storage option, especially at 1-2TB capacities.
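    The fill-time arithmetic above is easy to verify. A quick sketch (using round decimal units, 1 TB = 1,000,000 MB; the rate is the commenter's round number, not a spec from the article):

```python
# Rough fill-time estimate for a large HDD at a sustained transfer rate.
def fill_time_seconds(capacity_tb: float, rate_mb_s: float) -> float:
    """Seconds to write the whole drive, using decimal units (1 TB = 1e6 MB)."""
    return capacity_tb * 1_000_000 / rate_mb_s

secs = fill_time_seconds(24, 250)
print(f"{secs:,.0f} s = {secs / 3600:.1f} hours")  # 96,000 s = 26.7 hours
```

    So "96,000 seconds" is a bit over a day of continuous writing, which matters once at initial load and rarely afterward.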
  • Korguz - Thursday, February 7, 2019 - link

    Maybe, but until SSD capacities are a lot higher, mechanical HDDs will still have their place in consumer computers. 2TB is not enough room for some things, like music or movie collections.
  • bill.rookard - Thursday, February 7, 2019 - link

    Exactly. I have a server in my basement with about 20TB of space (4x4TB + 4x3TB in a ZFS array), and it's about half full. I could put together a similar system with SSDs if they offered an affordable 4TB size, but they just don't.

    3TB and 4TB drives are dirt cheap right now, and in a good ZFS array they offer more than enough performance to saturate a 1Gbit link.
  • PeachNCream - Thursday, February 7, 2019 - link

    I think it's safe to say most people don't have a server in their home, and that your storage needs represent an edge case rather than the mainstream norm. Average folks tend to stream their video content and maybe grab a few shows on a cable-company-supplied DVR box. Smartphone ownership in the US has outpaced desktops and laptops combined, and the number of households that own no PC at all and rely exclusively on phones is increasing. PCs aren't dead, but desktop hardware in the home is in steady decline, with businesses and PC gamers propping up an increasingly premium-priced market segment that keeps shrinking under its self-inflicted cost increases. I expect laptops will eventually follow suit, though for now they are consuming the desktop market. As for servers sitting in homes: well, this is a tech site, and even here you represent a minority of the reader population.
  • Korguz - Friday, February 8, 2019 - link

    I have a NAS box too :-) But even if I didn't, SSD capacities are still just not big enough for my usage needs.

    BTW, 1TB SSDs are not too bad price-wise, but 2TB is a minimum of $350 for the 2.5-inch version and $530 for M.2. I don't know about you, but that's not inexpensive in my books :-)
  • Schmich - Friday, February 8, 2019 - link

    Most people don't buy high-end hard drives either.

    SSDs took over the low-end market and boot drives for enthusiasts, but HDDs will always be there for the rest. People have been saying for 10 years now that HDDs are about to die; they won't. We need too much storage. The average enthusiast definitely has a NAS of some sort, and those of us who shoot a lot of RAW images or video have an extensive one. For either of those uses, SSDs are too expensive or too small in capacity; at best you can use one as a cache/buffer.
  • FunBunny2 - Friday, February 8, 2019 - link

    "The average enthusiast definitely has a NAS of some sort."

    But that's not the average buyer. Your "average" doesn't comport with the "average" user who needed support for Win95/Office/etc. The numbers aren't even close.
  • PeachNCream - Sunday, February 10, 2019 - link

    Yup, we're talking about society's middle ground, which ranges from an individual using a phone as their primary computer to a family with a desktop and a couple of laptops for the kids. Those sorts of people are unlikely to be familiar with the idea of networked storage beyond leaving files in "the cloud" with Google or OneDrive, whose underlying technologies they neither understand nor need to learn. Those people, and the variations of them out there, represent the bulk of the world; they are the average folks to whom I'm referring.
  • wumpus - Friday, February 8, 2019 - link

    Don't expect Dell/HP to ship HDDs to the "average person" much longer; a 1TB drive is pretty much the minimum HDD size (although I assume OEMs manage to buy cut-down 500GB drives for a reason). Meanwhile, most people seem to use about 100GB of space, so a 256GB SSD is probably enough for a consumer drive, and it's about the same price as a 1TB HDD (and will probably drop below it soon enough).

    For those "normal people" who need more capacity (I'm specifically thinking of my father, who, despite having used computers for nearly 40 years, still doesn't get the idea of saving things anywhere but the desktop), I'd strongly recommend AMD's StoreMI caching system (or other caching schemes, but anything else is probably outside the realm of average people). 3-4TB of rotating storage (dirt cheap per bit) plus a 256GB SSD cache ($40 tops?) should work wonders.
  • Korguz - Friday, February 8, 2019 - link

    Most people don't, I agree. But for those that do, SSDs just don't have the capacity, so they get mechanical drives for storage and SSDs for other things: an OS boot drive, a drive to install games on, etc.
  • 29a - Friday, February 8, 2019 - link

    I have a home server.
  • Storagegeek - Wednesday, February 13, 2019 - link

    Average folks are the reason YouTube has cut its bitrate to one third of what it was in 2014; videos from there are now unwatchable.

    I would hope most computer geeks now have some form of NAS. I have 96TB in SnapRAID, 55TB in RAID6, and 28TB in RAID1. 12TB of the RAID1 is in my main PC (32-bit XP with 8GB RAM and a 5GB pagefile on a RAMDisk, since 2011).

    With 40TB drives, no one will be using striped RAID; cloud-like replication will be used instead. See Drivepool.

    My new HTPC server is in the planning stages: 48 100TB drives with double replication (3 copies). That's 4800TB raw, about 1400TB formatted. The best thing about the replication model is that you only have to buy 3 drives to start and can easily add more as needed.
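    The raw-vs-usable numbers under n-copy replication are simple division. A quick sketch (hypothetical helper, not Drivepool's actual accounting; the filesystem overhead fraction is a guess, which is why 1440 comes out a little above the ~1400TB quoted):

```python
def usable_tb(drive_count: int, drive_tb: float, copies: int,
              fs_overhead: float = 0.10) -> float:
    """Usable capacity under n-copy replication, minus filesystem overhead."""
    raw = drive_count * drive_tb
    return raw / copies * (1 - fs_overhead)

print(48 * 100)                       # 4800 TB raw
print(usable_tb(48, 100, copies=3))   # 1440.0 TB usable with 10% overhead
```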
  • ZolaIII - Friday, February 8, 2019 - link

    You can get 1TB microSD cards these days, so NAND capacity is not the issue. Price was, but as it hits rock bottom this year, it will for the first time in history match the price of mechanical storage.
  • stanleyipkiss - Thursday, February 7, 2019 - link

    Some of us have hundreds of terabytes that need to be stored. I do.
    A 24 TB hard drive would make my life easier.
  • alacard - Thursday, February 7, 2019 - link

    Not if you put it on a Seagate drive it won't. Putting your trust in Seagate to store your data long term is like kindly and gently placing your trust in a bucket of acid. Relish the death of your data along with the death of the thousands of dollars you spent trying to protect it.
  • StevoLincolnite - Thursday, February 7, 2019 - link

    This. You couldn't pay me to go Seagate.

    If I could get a 24TB Western Digital drive... I could replace my ageing 5x 4TB drives and still have some spare room.
  • Schmich - Friday, February 8, 2019 - link

    "Couldn't pay." That's when you know not to trust the comment. A free extra NAS full of Seagate drives that you could keep off-site, and you'd say no? Frankly idiotic.
  • piroroadkill - Friday, February 8, 2019 - link

    Well, exactly. I'd definitely take a brand new free Seagate 24TB drive.
  • Spunjji - Monday, February 11, 2019 - link

    Replacing five drives with one is a recipe for data loss that's way, way more likely to bite you than any brand-based misgivings. Not a useful comment.
  • Reflex - Thursday, February 7, 2019 - link

    It's not that simple. Virtually every manufacturer has had issues with some drive models or manufacturing runs while others have been rock solid. Seagate's 3TB and 6TB drives were known to be problematic, but its 4TB and 8TB drives have ranked among the best for reliability. At one point IBM/HGST had a terrible reputation due to the "Deathstar" failures, but they are now perceived as the most reliable drives on the market.

    There are of course fans of various brands, but for those who are serious about data integrity, reliability data is what matters, and it's not tied to brand. I wouldn't buy a 24TB drive at launch from any vendor, but after a year of reliability data I'd consider it.

    It would be nice to shrink my 16TB RAID-1 array from 4x8TB drives to two drives if the cost is right. Even if the failure rate per drive were *higher*, the array would still be more reliable, since I'd go from four points of failure to two (and gain 8TB of capacity). That's the other major reliability gain: as long as you aren't putting all your faith in single drives, larger capacities are inherently less failure-prone for a given amount of data, because they reduce the total number of drives needed.
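    The "fewer drives, fewer points of failure" argument can be made concrete with a toy probability model (the AFR figures below are illustrative assumptions, not real drive statistics, and the model assumes independent failures):

```python
def p_any_failure(n_drives: int, afr: float) -> float:
    """Probability that at least one of n drives fails within a year,
    given a per-drive annualized failure rate, assuming independence."""
    return 1 - (1 - afr) ** n_drives

# Four drives at 2% AFR vs two bigger drives at a *worse* 3% AFR:
print(f"4 drives @ 2%: {p_any_failure(4, 0.02):.3f}")  # ~0.078
print(f"2 drives @ 3%: {p_any_failure(2, 0.03):.3f}")  # ~0.059
```

    Even with a worse per-drive rate, the two-drive configuration sees fewer yearly failure events, which is the point being made above.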
  • alacard - Thursday, February 7, 2019 - link

    Ah, OK, so their 3/6TB drives are known to be "problematic". If you know it, then Seagate MUST know it too, and I have 'x' number of them. Will Seagate stand behind them, retrieve all my data from the dead drives, and then give me my money back, since they're all broken and everyone knows it?

    Stop shilling for these bullshit companies, Reflex. Doing so makes you nothing more than a comedic stooge.
  • Reflex - Thursday, February 7, 2019 - link

    Backblaze has been tracking this for a while now. I don't agree with directly comparing their usage patterns to home use, but as an apples-to-apples comparison between manufacturers, you can pretty easily see which generations of drives have issues and which do not.

    Also, who am I "shilling" for here, and who is paying me for it? I need to collect a check.
  • azazel1024 - Friday, February 8, 2019 - link

    And beyond that, they give a warranty: they'll stand behind the drive if it dies within that period. Is it crappy corporate practice not to replace drives that fail en masse at, say, 1 year and 1 day? Absolutely. But really, all they are guaranteeing is that the product will be free of defects for a year.

    That's it. I have 4 Seagate 3TB drives that have been ticking along fine for... 3 years now? Maybe it has only been 2 and change. No issues, and the SMART data looks good. Sure, I worry about them, but I'd worry about any set of drives, which is why my server mirrors my workstation and vice versa. I also have a 5TB external drive, kept in a fireproof safe, that I back up to every 1-3 months. That sure isn't totally disaster-proof, but it provides redundancy against almost any likely data-loss scenario. I've accidentally deleted folders I shouldn't have, discovered it months later, and had to recover them from the cold-storage drive. I once killed the RAID array by accident when upgrading my server and had to copy all my data back from my desktop. I had a drive die (a 2TB Samsung, back when I ran 2x2TB RAID0 arrays in both machines about 4 years ago; I replaced it, then another drive started throwing pending-sector errors, so I replaced both arrays with my 2x3TB Seagate arrays, since the drives were cheap at the time, about $65 each) and had to copy everything over again. And on plenty of occasions I've accidentally deleted a file or folder and was able to copy it back from my server or desktop, as appropriate; I've only had to go to cold storage ONE time so far. That is also why I don't do daily or continuous backups.

    It gives me some breathing room to find my screw ups.

    For some stuff, a loss would just be annoying: if I lost a video I spent a while transcoding, I'd rip it from my BR again. Other losses would be catastrophic, like losing one or more (or all!) of the pictures from a family vacation. That's also why I don't delete the pictures off my camera's SD card until the card gets close to full and I might need more space for whatever event I'm shooting.

    I pull them all to my desktop ASAP, delete the bad ones, edit the good ones, and convert to JPEGs; everything then gets backed up between that night and a week later, depending on when in the week it was. I don't wipe the card until it has been backed up.

    This way I always have at least 2 copies of anything at any given time (occasionally 3 or 4, and some pictures are on my cloud storage account as well, so on rare occasions 5 copies of a file).
  • saratoga4 - Thursday, February 7, 2019 - link

    >Putting your trust in Seagate to store your data long term is like kindly and gently placing your trust in a bucket of acid.

    Seagate's disks have been the most reliable you can buy for the last several generations: https://www.backblaze.com/blog/2018-hard-drive-fai...

    FWIW I'm probably going to go with the 10TB models (<1% AFR) in my next NAS.
  • PeachNCream - Thursday, February 7, 2019 - link

    Ugh, don't remind me. I have a 1TB Seagate drive stuffed into a laptop running Samba, hosting files for my other computers and acting as a backup repository following the premature death of a different 1TB Seagate drive several months ago. It's hard to believe that after all these years, Seagate still foists garbage drives on the market that die at such an alarming rate.
  • Spunjji - Monday, February 11, 2019 - link

    Is the bigger story here that you've had a single Seagate drive fail or that you have a terrible data storage setup?
  • Schmich - Friday, February 8, 2019 - link

    Thousands? That's the thing: if you build it yourself, it's not in the thousands. An annual failure rate of 2% in a harsh environment is a tad high for sure, but definitely not as bad as your immature comment suggests.

    I'd rather have multiple fail-safes on cheaper HDDs than simply have one NAS with one parity drive whilst spending those "thousands" of dollars you speak of.
  • alacard - Friday, February 8, 2019 - link

    Uh, I have over a hundred terabytes of data. Is math so hard that you can't work out that housing and replicating that costs thousands of dollars?

    I do my best to record history as a backup for future generations, and that takes a lot of storage, and I've been burned so bad by Seagate I will gladly tell everyone not to touch their drives. They've earned it.
  • Reflex - Friday, February 8, 2019 - link

    It's the year 3019. Future archaeologists are excavating the basement of a typical early 21st century suburban home. "Jackpot!" exclaims one. "Look, I found yet another NAS from famed historian alacard!" "That's amazing!" says their companion. "Everyone knows alacard's data caches are by far the most valuable, as they were the only historian to avoid Seagate media! Wait'll the Smithsonian catches wind of this!"

    Seriously, you're just being silly now.
  • PeachNCream - Sunday, February 10, 2019 - link

    I see no need to store a hundred TB of data that has any meaning to me. 500GB is pushing it in my case, and that includes many hundreds of GB of video files I wouldn't exactly miss, since I can stream pretty much everything I need. Looking at the data I store, I really only care about maybe 60GB of it, and even that includes relatively unimportant information. The data that is truly meaningful to me would fit on a 16GB thumb drive with capacity to grow for years.

    If you were holding that data for a business reason, then I'd concede the point, but it sounds a lot more like you're just camping atop as much data as you can find just so you have a reason to operate a needlessly complex and expensive storage array (or you're making this up as you go in order to impress people on the Internet that will probably never call you out on the embellishment).
  • Reflex - Sunday, February 10, 2019 - link

    To be fair, some of us rip our DVD/BR collection onto a NAS for usage with Plex.
  • Storagegeek - Thursday, February 14, 2019 - link

    My experience has been that they all suck equally now (16 years ago, Maxtor was the one with an 80% failure rate over 3 years).

    I archive on both Seagate and Western Digital (2 copies offline) and 1 to 3 copies live on local RAID1 or RAID6 and SnapRAID arrays.

    It's important to refresh the data on a periodic basis. I use HDSentinel to read, rewrite, then re-read and verify the data; it works at the sector level.

    There is also the issue of bitrot in the flash memory chips on each drive. Even stored in a cool environment, after 20 years they'll begin to fail, so be sure to copy your data to new drives before the old ones die. All of my drives that are 30 years old or more, and some audio gear that used EEPROM or flash microcontrollers, have now failed, some quite obviously via bitrot (Onkyo tape decks).
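    The read-rewrite-verify refresh described above can be sketched at the file level. This is only an approximation: HDSentinel works at the sector level, which portable user code can't reach, and the function name here is hypothetical, not any tool's API:

```python
import hashlib

def refresh_and_verify(path: str) -> bool:
    """Read a file, rewrite the same bytes in place (forcing the drive to
    re-record the magnetic/flash cells), then re-read and verify the hash."""
    with open(path, "rb") as f:
        data = f.read()
    before = hashlib.sha256(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
    with open(path, "rb") as f:
        after = hashlib.sha256(f.read()).hexdigest()
    return before == after
```

    A real refresh pass would walk a whole directory tree and log any file whose hash no longer matches a previously stored value, which is how silent bitrot actually gets caught.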
  • Supercell99 - Friday, February 8, 2019 - link

    You should go to pornhub instead of saving it locally
  • wumpus - Friday, February 8, 2019 - link

    Tell that to the people who liked the stills on tumblr.
  • DanNeely - Thursday, February 7, 2019 - link

    That's a bit over a day. Other than during the initial load, that's not a real issue; after that point my NAS only reads/writes a tiny fraction of its data each day.

    That said, when I stand up NAS-2019 later this year to supersede NAS-2014 (RIP WHS), I'll probably only put in 2x 10TB drives, because bigger drives will likely get old enough to justify a precautionary replacement before I run out of space. (I've only used about 3.5-4TB of the 2x6TB mirror on the current system.)
  • azazel1024 - Friday, February 8, 2019 - link

    I don't know; it might make it into consumer HDDs. SSDs are rapidly approaching HDD storage costs, but they are still a few times more expensive. I like that SSDs provide significantly higher transfer speeds, high enough that I wouldn't lean on RAID, which makes it easier to expand storage without needing to exactly match drive specs or completely replace an array.

    I use a pair of 3TB drives in RAID0 in my desktop and in my server, and my free space is steadily shrinking. Right now I run 2x 1GbE links with SMB Multichannel, which gets me ~235MiB/s over the network. But I am starting to be drive-limited in some circumstances, because I am getting ever closer to the inner tracks on my drives.

    I am hoping to upgrade my network to 2.5 or 5GbE in the next 1-3 years. At that point, even on a single link, my HDD arrays will be the limiting factor in many scenarios (and in every scenario on a 5GbE link). A pair of 10-14TB HDDs, one in each machine, would be plenty of storage to last me at least a few more years. With dual actuators, if they can really push ~480MiB/s on the outer tracks, I assume that translates to more like 250MiB/s on the innermost tracks, which is sufficient for my use cases.

    So I could do that without even bothering to build a RAID array, which might tide me over until SSDs are cheap enough or some new HDD technology comes along. I am dipping my toes into 4K; I don't have much of it yet, but throwing a 17GiB file from my desktop to my server takes a while, even at 235MiB/s. Having ~300+ MiB/s of 2.5GbE and a drive that can handle it would be nice.

    And if prices were reasonable, I'd certainly consider a pair of 10-14TB drives in RAID0 for each machine (by reasonable I mean less than $1000 for all of the storage; that is about my price cap).

    Today that buys around 10TB of the cheapest SSD storage, and it's questionable whether the cheapest stuff would cover my needs (300+ MiB/s sustained reads/writes on large files, at least until capacity hits ~80% utilization), at least without RAID.

    And 5TB in each machine would put me around 85% utilization: not good, and no room for growth for long at all. 6TB is already getting crunched (I am at 68% utilization).

    Prices would need to halve, maybe drop to a third (i.e., so I wasn't looking at the most bargain-basement TLC or QLC drives, but at a good-performing TLC drive).

    I understand SLC caching on QLC and TLC drives, but it isn't unheard of for me to need a full-disk backup/transfer from my server to my desktop or vice versa; I've averaged one per year for the last 4 or 5 years. Sure, I can just set it up and walk away, but ~4TiB of data takes a long time as it is at 235MiB/s. I don't like the odds if the cache fills up and we're down to 70-120MiB/s maximums for QLC drives (the crappier TLC drives are in the 170-200 range, but most decent TLC drives seem able to sustain writes in the low-to-mid 200s).

    Most of my use cases are 1.5-4GiB files, but I do toss 10-25GiB files across on occasion, and then there are those rare full-disk backups. Incremental backups may also mean transferring 5-50GiB, depending on what I've been doing that week (my backups run weekly, late at night); those I don't particularly care how slowly they go, as long as they finish by morning.
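    The transfer times behind these estimates are straightforward to work out (a quick sketch in the binary units used above; the sizes are the commenter's examples):

```python
def transfer_seconds(size_gib: float, rate_mib_s: float) -> float:
    """Seconds to move size_gib of data at a sustained rate in MiB/s."""
    return size_gib * 1024 / rate_mib_s

# One 17 GiB 4K rip at ~235 MiB/s over 2x 1GbE with SMB Multichannel:
print(f"{transfer_seconds(17, 235) / 60:.1f} min")    # a bit over a minute

# A ~4 TiB full-disk backup at the same rate:
print(f"{transfer_seconds(4096, 235) / 3600:.1f} h")  # roughly 5 hours
```

    At a QLC drive's worst-case ~100 MiB/s after the SLC cache fills, that same full backup stretches toward half a day, which is the scenario being worried about.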
  • ruzveh - Thursday, February 7, 2019 - link

    Why can't they simply launch a 5.25" HDD with more platters and hence more capacity per drive? Sadly, in the past few years we have seen a trend where each new HDD capacity launches at double the price while existing drive prices are not lowered. Suppose 10TB is available for $300: when a 14TB HDD launches, it should be priced at $300 and push the 10TB down to the next lower slot, e.g. $200.
  • ruzveh - Thursday, February 7, 2019 - link

    I would also like to make the point that one cannot always add more drives to a motherboard or ATX case, and there are drive-size and quantity limitations when expanding a NAS. This really kills our investment when purchasing new drives.
  • eldakka - Friday, February 8, 2019 - link

    I don't understand this comment.

    If you have a free PCIe slot (preferably x8), you can add a SAS HBA, and with SAS expanders hanging off that HBA a single PCIe slot can control 255 (or more) drives.

    And you don't need space in the ATX cabinet/case that hosts the motherboard/HBA, that's what external SAS cables are for.

    The NAS I built has an onboard 8-port SAS controller, 6 SATA connectors, and a free PCIe x8 slot. When I fill the drive bays in the case itself (a Silverstone CS380: 8x hot-swap plus a couple of 5.25" bays, which could be converted to 3x or 4x 3.5" with a bay adapter), I'll buy another CS380 or similar case plus a SAS expander, populate it with HDDs, and run an external SAS cable to it.
  • DanNeely - Thursday, February 7, 2019 - link

    The biggest problem is simply that HDD makers are hitting physics and engineering limits hard. Look back at their capacity predictions over the last decade: they've been missing them by ever larger margins. The days of easy capacity increases are well behind us, and each marginal uptick requires ever more difficult and expensive parts; in the case of helium drives with 6-8 platters instead of 5, simply more parts in total.

    The implosion of the cheap laptop/desktop HDD market hasn't helped either, not so much in lost profits to feed R&D (those were very low-margin parts) as in leaving large excesses of facilities to maintain, with machines going idle before they became obsolete and were due to be scrapped. There has been some closing of facilities and probably will be more, but because most components and tasks are made or performed in only one or two locations, it's not easy to shut facilities down even when all of them are running well below capacity. The death of the 10/15k RPM SAS market was probably much lower impact: those were high-profit but low-volume items, and big companies buying huge numbers of 7200 RPM drives for bulk storage has replaced that revenue. Depending on when R&D for them was wound down, a hit from a generation that failed to sell enough to earn back its costs, or that was cancelled short of release, is certainly possible.
  • rahvin - Friday, February 8, 2019 - link

    The HDD market is actually bigger than it was when the laptop HDD market still existed. The rise of the cloud and the massive storage arrays it has generated have put HDD revenues at record levels, even with the loss of significant prior market share.

    But it's quite amazing how much it's changed over the last ten years. In 2008 the average HDD was a single platter and they couldn't imagine disks with 4 or more platters; now we've moved toward 8 platters as they try to drive up density without any corresponding advance in recording technology, because, as you say, they've hit the physical limits of magnetic recording. The only way to increase areal density at this point is exotic techniques like heating the platter to allow smaller writes, all while they keep adding more and more platters per disk.

    It's frankly going to be interesting to watch the machinations they go through to keep driving 3.5" drive density up, since cloud demand for storage is growing almost exponentially. The cloud companies are willing to pay buckets of money for higher density so they can drive down overall storage cost. Given the physical limits they've hit with magnetic storage, they are now trying things they wouldn't have considered a decade ago. Helium-filled drives are just plain crazy; HAMR and MAMR are crazier still. It will certainly be interesting, that's for sure.
  • DanNeely - Friday, February 8, 2019 - link

    They're growing in dollar terms, but unit shipments are down nearly half from their 2011 peak, which is leading them to shut down some manufacturing sites (as detailed in previous pipeline articles).

    https://www.statista.com/statistics/275336/global-...

    Your claim that 4-platter drives were unimaginable a decade ago just isn't true. I've been dismembering all the dead drives I've owned over the last 15 years, and the ones that were big when new were all 4/5-platter types. At the time, max-tech platters carried little cost premium, so small drives used only 1 or 2 platters, but big ones were always a near-full or full stack.

    HAMR and MAMR were both supposed to be out several years ago; unexpected difficulty getting them into production is a big part of why the industry has missed its predicted TB/drive levels so badly.
  • rahvin - Friday, February 8, 2019 - link

    I might be off half a decade in my timeline. I've been disassembling a bunch of old drives over the last week or two to extract magnets and other fun things, and most of them are 1 or 2 platters at most. But all of those drives are sub-terabyte, so as I say, I'm probably off a few years. I do distinctly remember when all drives were a single platter, and honestly it wasn't that long ago, at least to me.

    Part of the reason HAMR/MAMR are so late is just how crazy the tech is. You're talking about micro-heating the platter right under a tiny head while the disk spins past at 7200 RPM. The level of control needed is crazy difficult: there's a very small temperature window where it works (too hot and you wipe out adjacent data; not hot enough and you can't write). It's a massive addition to what was mechanically a pretty simple system. The precision required is probably right up there with nuclear weapon triggers. It doesn't surprise me at all that it's taken this long to make it reliable (if it actually is reliable; I guess we'll all find out).
  • Lord of the Bored - Friday, February 8, 2019 - link

    Physics: larger platters are harder to spin up and keep at speed, and wider platters can't be spun as fast.
  • DanNeely - Friday, February 8, 2019 - link

    The real killer with 5.25" HDDs (which were a thing into the 90s) was that the double platter radius meant a doubled average seek time; which absolutely clobbered performance.
  • wumpus - Friday, February 8, 2019 - link

    Really? I'm guessing the "elevator algorithm" was still applicable (keep seeking to the nearest requested sector in one direction until there are no more, then reverse direction), so the larger radius would only require the actuator to travel further. I'll also admit that the 3.5" drive probably had a lot to do with making the elevator algorithm obsolete.

    Don't forget that for much of the 1980s, "full height" drives were twice the height of a "modern" 5.25" bay. Such a drive (with a bunch of platters and presumably easier clearances) might make sense on its own now, but nobody wants to alter the existing infrastructure for such small gains (you could eat the latency for the extra storage, but you don't want to redesign all the cases, NAS devices, rack equipment, etc.).
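    The elevator (SCAN) algorithm mentioned above is simple to sketch: service pending requests in the current direction of head travel, then reverse and sweep back. A minimal illustration (track numbers are arbitrary; real firmware schedules at a much lower level):

```python
def elevator_order(head: int, requests: list[int], ascending: bool = True) -> list[int]:
    """SCAN ("elevator") scheduling: serve requests in the direction of
    travel, then reverse, minimizing total head movement vs FIFO order."""
    above = sorted(r for r in requests if r >= head)                  # at or past the head
    below = sorted((r for r in requests if r < head), reverse=True)   # behind the head
    return above + below if ascending else below + above

# Head at track 50, moving toward higher tracks:
print(elevator_order(50, [10, 95, 60, 20, 75]))  # [60, 75, 95, 20, 10]
```

    The relevance to platter size: SCAN bounds how far the actuator travels per request on average, so a wider platter mostly increases worst-case rather than typical seek distance under load.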
  • twotwotwo - Thursday, February 7, 2019 - link

    I hadn't thought about how it will become hard to use higher capacities without higher IOPS at some point. Interesting.
  • Darcey R. Epperly - Friday, February 8, 2019 - link

    Where are the 100TB drives promised in 2010?
  • Supercell99 - Friday, February 8, 2019 - link

    They have them developed, but they are milking the $/TB price curve for another decade.
  • danwat1234 - Friday, February 8, 2019 - link

    So, when will 4TB 2.5" 9.5mm laptop drives come out?
  • DanNeely - Friday, February 8, 2019 - link

    Probably about the same time as 32TB desktop drives. Laptop platters have about half the area of desktop ones, and normal 2.5" drives only have room for 2 platters. A 4TB laptop drive therefore needs 2TB 2.5" platters, equivalent to 4TB 3.5" platters, and the latter would make a 32TB 8-platter helium-filled desktop drive. So probably not anytime soon.

    If a helium-filled 2.5" 9.5mm drive could squeeze in a 3rd platter (I'm not sure whether that's feasible), they'd only need 1.33TB platters, which corresponds to a 2.67TB desktop platter and a ~21TB desktop drive.

    OTOH, the 24TB capacity point for a desktop drive corresponds to a 3TB 2.5" laptop drive. Assuming the price isn't insane, we might see that in the near future.
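    The scaling arithmetic here can be checked under the stated assumptions (a 3.5" platter holds roughly 2x a 2.5" one; 2 platters per 9.5mm laptop drive, 8 per helium desktop drive — all assumptions from the comment, not specs):

```python
LAPTOP_TO_DESKTOP_PLATTER = 2.0  # assumed capacity ratio, 3.5" vs 2.5" platter

def desktop_capacity_tb(laptop_drive_tb: float, laptop_platters: int = 2,
                        desktop_platters: int = 8) -> float:
    """Desktop drive capacity implied by a given 2.5" laptop drive capacity."""
    laptop_platter_tb = laptop_drive_tb / laptop_platters
    return laptop_platter_tb * LAPTOP_TO_DESKTOP_PLATTER * desktop_platters

print(desktop_capacity_tb(4))  # a 4TB 2-platter laptop drive implies 32TB on the desktop
print(desktop_capacity_tb(3))  # and a 3TB laptop drive pairs with the 24TB desktop point
```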
  • rahvin - Friday, February 8, 2019 - link

    I don't think you can get more than 2 platters in 9.5mm. They can't make the platters any thinner or they'll fly to pieces when spun up, and you also need minimum spacing for all the heads to operate. Unless they can shrink the motor dramatically, you can't fit that third platter without going to 12-14mm.
  • madmilk - Monday, February 11, 2019 - link

    There have been a few 3-platter 9.5mm drives, like the 1.5TB HGST 5K1500 and the 2TB Samsung M9T. 9.5mm drives seem to be getting less common though. Seagate at least seems to have replaced their whole Barracuda 2.5" lineup with 7mm and 15mm drives.
  • Sivar - Friday, February 8, 2019 - link

    "First Multi-Actuator HDD: 14 TB, ~480 MB/s"
    These are not the first multi-actuator hard drives, and not even Seagate's first mainstream multi-actuator drives; those were the Barracuda 2HP series (ST-12450W).

    Before that, Imprimis produced them for supercomputers starting in 1987.
