Interesting, but the news is disappointing. If this really does mean that HAMR is dead, that stinks. "It's MAMR time!" doesn't have the same ring to it so I'm still going to advocate the inferior, more expensive HAMR option.
I usually prefer to pronounce acronyms as the letters are pronounced in their respective words. In this case "UH"-ssisted would lead me to pronounce M"UH"MR. "AYE"ssisted just doesn't sound right. "AH"ssisted is close, but weird if you linger on the "H" too long. Although I was pronouncing it as "Mammer," like "Hammer," as I was reading the article; "EH"ssisted doesn't work.
Hi! Erik from Western Digital here (I manage @WesternDigital on Twitter). HAMR isn't dead, it just isn't economically or technically feasible yet. We talk about why, and why MAMR is what we're working with now, in this article: http://innovation.wdc.com/game-changers/energy-ass...
It's great to see representatives of the manufacturer monitor and reply to an article with relevant feedback. Just another reason WD is totally killing it. Cheers!
It isn't dead. Competition is good. It's possible HAMR could have densities well beyond 4 Tb/sq. in., in which case its expected higher costs, complexity, and possibly reduced reliability might still make sense for those data centers where storage density is more critical than long-term reliability. Considering HAMR has wear-leveling algorithms and will share a lot with NAND in terms of firmware and controller technology, HAMR drives might have a more predictable "expiration" date.
But it is clear that WD has made some very good moves. They have always been a strategically successful company in my opinion. Seagate beat them to market in various consumer technologies, but in the corporate and enterprise space, I think their success really began with the RAID Edition drives, nearline innovation, and especially the acquisition of Hitachi, which clearly played out well for both companies and for consumers (Hitachi drives were historically more expensive, but now that tech is available in mass-market WD products).
Then again, they shut down their WDLabs team. It seems less than smart to shut down your innovation and research team to save costs, but I don't know exactly what they did; perhaps there were similar teams in the company...
40TB on one HDD is more than I have spread out over almost 12 drives at the moment. My worry though is that those 40TB drives are going to be pretty pricey.
The bigger concern is throughput - if it takes the bulk of the MTBF of a drive to write then read it we are gonna have a bad time... quick math - maybe I goofed, but given 250MB/s and TB = 1024^4 that's 167,772s or 2796m or 46 hours to read the entire drive. Fun time waiting 2 days for a raid re-build...
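For anyone who wants to check or tweak those numbers, here's a quick sketch of the same arithmetic in Python (the 250 MB/s sustained rate and the TB = 1024^4 convention are the assumptions above, not WD specs):

# time to read or write a full 40 TB drive at a fixed sustained rate
capacity_bytes = 40 * 1024**4        # 40 TB, using TB = 1024^4 bytes as above
rate_bytes_per_s = 250 * 1024**2     # assumed 250 MB/s sustained
seconds = capacity_bytes / rate_bytes_per_s
print(f"{seconds:,.0f} s = {seconds/60:,.0f} min = {seconds/3600:.1f} h")
# -> 167,772 s = 2,796 min = 46.6 h

Swap in a different rate, or a decimal 40e12 bytes, to see how sensitive the two-day figure is.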
If you are using this for home use, you should not be using RAID anyway, since you will have an SSD in your computer; and if it's a server, bandwidth is not a concern since it's on the LAN. And backing up to the cloud is what 99% of people do in that situation. RAID is dead for the most part.
It's dangerous not only for RAID, but also for that "cloud" you speak of and the underlying object storage. A typical object store keeps 3 replicas. With a 250 MBps peak write/read speed you are not looking at just two days to re-replicate all the files. In reality it's more like two weeks to a month, because you are handling a lot of small files and transferring over the LAN, and in that case both read and write suffer. Over the course of several weeks there is too high a probability of 3 random drives failing. We were considering 60TB SATA SSDs for our object storage, but it simply doesn't add up even in the case of SSD-class read/write. Especially if there is only a single supplier of such drives, the chance of a synchronized failure of multiple drives is too high (we had one such scare).
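To put a very rough number on that risk, here's a sketch of the multi-failure probability. The 300-drive pool, 2% annualized failure rate, and 21-day re-replication window are illustrative assumptions, not figures from the comment or the article, and the model ignores whether the failed drives actually overlap on the same objects:

import math

n_drives = 300                 # assumed pool size
afr = 0.02                     # assumed 2% annualized failure rate
window_days = 21               # assumed re-replication window
p = 1 - (1 - afr) ** (window_days / 365)     # per-drive failure probability in the window
# probability of 3 or more drive failures in the window (binomial model)
p3 = 1 - sum(math.comb(n_drives, k) * p**k * (1 - p)**(n_drives - k) for k in range(3))
print(f"per-drive: {p:.3%}, P(>=3 drives fail in the window): {p3:.2%}")

Longer rebuild windows push that probability up quickly, which is the point being made.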
That is not how it works. If you have 3 replicas, and one drive dies, then all of that drive's data has two other replicas.
Those two other replicas are _NOT_ just on two other drives. A large clustered file system will have the data split into blocks, and blocks randomly assigned to other drives. So if you have 300 drives in a cluster, a replica factor of 3, and one drive dies, then that drive's data has two copies, evenly spread out over the other 299 drives. If those are spread out across 30 nodes (each with 10 drives) with 10gbit network, then we have aggregate ~8000 MB/sec copying capacity, or close to a half TB per minute. That is a little over an hour to get the replication factor back to 3, assuming no transfers are local, and all goes over the network.
And that is a small cluster. A real one would have closer to 100 nodes and 1000 drives, and higher aggregate network throughput, with more intelligent block distribution. The real world result is that on today's drives it can take less than 5 minutes to re-replicate a failed drive. Even with 40TB drives, sub 30 minute times would be achievable.
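A sketch of that re-replication arithmetic, using the same assumed cluster shape (30 nodes, ~8,000 MB/s of usable aggregate copy bandwidth) and assuming the failed drive was completely full:

aggregate_mb_s = 8000                          # assumed usable cluster bandwidth
for drive_tb in (10, 40):
    minutes = drive_tb * 1e6 / aggregate_mb_s / 60
    print(f"{drive_tb} TB drive: ~{minutes:.0f} min to restore 3 replicas")
# ~21 min for a 10 TB drive, ~83 min for a 40 TB drive; a bigger cluster with more
# aggregate bandwidth (or a partially-full drive) scales these times down further.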
RAID isn't dead. The same people who used it in the past are still using it. It was never popular outside of enterprise/enthusiast use. I need a place to store all of my 4K videos.
[non-0] RAID almost never made sense for home use (although there was a brief point before SSDs when it was cool to partition two drives so /home could be mirrored and /everything_else could be striped).
Backblaze uses some pretty serious RAID, and I'd expect that all serious datacenters use similar tricks. Redundancy is the heart of storage reliability (SSDs and more old-fashioned drives have all sorts of redundancy built in) and there is always the benefit of having somebody swap out the actual hardware (which will always be easier with RAID).
RAID isn't going anywhere for the big boys, but unless you have a data hoarding hobby (and tons of terabytes to go with it), you probably don't want RAID at home. If you do, then you will probably only need to RAID your backups (RAID on your primary only helps for high availability).
Not sure what game you are playing, but at least 90% of the tier 1 games out there care mostly about throughput not latency when it comes to hard drive speed. Hard drive latency in general is too great for any reasonable game design to assume anything other than a streaming architecture.
Sorry, but if you back up to the cloud you are a fool. All your data is freely accessible to anyone from the script kiddies on up. Not to mention that transferring it over the web is a huge risk.
I never liked the term 'script kiddies'. What is the alternative? Waste your time / bust your ass writing your own exploit(s) - when so many cool exploits already exist?
Some of us who dabble with said scripts have significant other networking/Linux knowledge, so it doesn't fit to denigrate us just because we can't be arsed to write new exploits ourselves.
We've better things to be doing with our time...
I bet you don't make your own clothes, even though you possibly can.
Regarding clothing, there are a lot of older people who would think it fitting to denigrate kids who are wholly unable to make their own clothes, given cloth and needle and thread. It has more to do with the knowledge and appreciation of the craft than with doing everything on your own.
And a generation ago the elders scoffed at kids who didn't know how to shoe a horse. Elders are always scoffing; it's hard to recognize or accept your obsolescence.
LOL, Oh damn, the script kiddie got triggered LMFAO! JK JK JK..... Seriously, the term script kiddies is meant to be denigrating, and it's not meant to apply to you.
Sure. For people with access to a 1Gb internet connection, a cloud backup/restore is much less painful. Of course it would still take the better part of 4 days (1 Gbps ≈ 128 MBps, roughly half the HDD data rate that cekim used in his calculation that revealed a 46-hour (46.6, actually) rebuild time).
However, people with access to such connections are still a vast minority. Despite availability of the service increasing, it is far from ubiquitous. Some providers (in certain countries) implement restrictive data caps, so even if you can get gigabit internet, it will take months to back up or restore a 40TB drive from the cloud without incurring penalties and/or throttling. Comcast, for instance, now has a data cap of 1TB a month (not sure about caveats as I don't live anywhere near Comcast territory ;' ) ). It would take 40 months to get 40TB of data restored without penalty.
1 gigabit Ethernet has an upper limit of 125 MB/s. It would take a minimum of 320,000 seconds or around 3.7 days of continuous 125 MB/s transfer rate to backup a 40 TB drive. Cloud backup of large volumes of data isn't going to be practical until at least 20 gigabit connections are commonplace.
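Same arithmetic, generalized to a few link speeds (decimal units, line rate only; real-world throughput will be lower after protocol overhead):

capacity_bytes = 40e12                         # 40 TB, decimal
for gbps in (0.1, 1, 10, 20):
    days = capacity_bytes / (gbps * 1e9 / 8) / 86400
    print(f"{gbps:>4} Gbps: {days:5.1f} days")
# 0.1 Gbps: 37.0 days, 1 Gbps: 3.7 days, 10 Gbps: 0.37 days, 20 Gbps: ~0.19 days (~4.4 h)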
Backing up to the cloud is really slow. Retrieving from the cloud is really slow. When gigabit becomes ubiquitous then it will make more sense. Even then, you should keep at least one copy of the data you care about offline and local. Btw, I agree that RAID is dead, but for different reasons. Namely, we now have the much more flexible erasure coding schemes (much more sophisticated than the simple XOR encoding used by some of the RAID levels) that let you apply arbitrary amounts of redundancy and decide on placement of data. That's what the data centers have been moving towards.
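For anyone curious what the "simple XOR encoding" looks like in practice, here's a toy single-parity example (RAID-5 style); real erasure codes such as Reed-Solomon generalize the same idea to survive multiple simultaneous failures:

# parity block = XOR of all data blocks; any one lost block can be rebuilt from the rest
data = [b"blockA", b"blockB", b"blockC"]          # equal-length data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))
# pretend the middle block is lost; rebuild it from the survivors plus the parity block
rebuilt = bytes(x ^ y ^ z for x, y, z in zip(data[0], data[2], parity))
assert rebuilt == data[1]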
Even with Gb WAN, it'd still be slow. I have GbE LAN and I'm starting to run into bottlenecks with that being slow, so once I have the funds to do so, I'm likely going to move over to 4x FDR IB.
1Gbps = 128MBps. cekim seems to think that 250MBps is a better estimate and alpha754293 suggests that these drives will increase in speed well beyond that. Granted this will not hold up for small file writes, but for large sequential data sets, the days of 1G ethernet being roughly equal to HDD speed are soon coming to an end.
Even now HDDs have sequential (non-cached) speeds in excess of 300 MB/s (for enterprise 15k drives), but 250 MB/s+ is currently available with the 8TB+ 7200 RPM drives. Those are best case, but they might also be readily achievable depending on how your backup software works (e.g., a block-based object store vs NTFS/ZFS/XFS/etc.).
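For a rough sense of where sequential speeds could land, here's a sketch that assumes areal-density gains are split evenly between track density and linear (bits-per-inch) density, so throughput scales with the square root of the density ratio at the same RPM; that split is an assumption, not a WD figure:

current_density, future_density = 1.1, 4.0      # Tb/sq.in., per the article
current_seq_mb_s = 250                          # assumed current 7200 RPM sequential rate
scale = (future_density / current_density) ** 0.5
print(f"~{current_seq_mb_s * scale:.0f} MB/s sequential")   # ~477 MB/s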
@cekim - Your math is a little bit off. If the areal density increases from 1.1 Tb/in^2 to 4 Tb/in^2, then so too will the data transfer speeds.
It has to.
Check that and update your calcs.
@imaheadcase
RAID is most definitely not dead.
With RAID HBAs addressing SANs, it is still farrr more efficient to map (even with GPT) a logical array rather than lots of physical tables.
You do realise that there are rackmount enclosures that hold like 72 drives, right?
If that were hosted as a SAN (or iSCSI), there isn't anything that you can put in as AICs (add-in cards) that will allow a host to control 72 JBOD drives simultaneously.
It'd be insanity, not to mention the cabling nightmare.
Here's an interesting topic on RAID rebuilds for ZFS. While it can't fix the issue of writing 250MiB/s to a many-TiB storage device, it is fun.
Parity Declustered RAID for ZFS (DRAID)
A quick overview is that ZFS can quickly rebuild a storage device if the storage device was mostly empty. This is because ZFS only needs to rebuild the data, not the entire drive. On the other hand, as the device gets fuller, the rate of a rebuild gets slower because walking the tree causes random IO. DRAID allows for a two-pass approach where it optimistically writes out the data via a form of parity, then scrubs the data after to make sure it's actually correct. This allows the device to be quickly rebuilt by deferring the validation.
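A toy model of that effect; the throughput figures below are invented purely to illustrate why a sequential, declustered rebuild pulls ahead as the pool fills, and are not ZFS or dRAID benchmarks:

def resilver_hours(used_tib, fullness, seq_mib_s=250, rand_mib_s=40):
    # traditional healing resilver: throughput drifts toward random-IO rates
    # as the pool fills and tree walks get more fragmented (illustrative model)
    effective = seq_mib_s * (1 - fullness) + rand_mib_s * fullness
    return used_tib * 1024**2 / effective / 3600

def draid_pass_hours(used_tib, seq_mib_s=250):
    # declustered rebuild: optimistic sequential pass, validation deferred to a scrub
    return used_tib * 1024**2 / seq_mib_s / 3600

for fullness in (0.2, 0.5, 0.9):
    used = 40 * fullness
    print(f"{fullness:.0%} full: resilver ~{resilver_hours(used, fullness):.0f} h, "
          f"dRAID pass ~{draid_pass_hours(used):.0f} h")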
My biggest issue with ZFS is that there are ZERO data recovery tools available for it. You can't do a bit read on the media in order to recover the data if the pool fails.
I was a huge proponent of ZFS throughout the mid 2000s. Now, I am completely back to NTFS because at least if a NTFS array fails, I can do a bit-read on the media to try and recover the data.
(Actually spoke with the engineers who developed ZFS originally at Sun, now Oracle and they were able to confirm that there are no data recovery tools like that for ZFS. Their solution to a problem like that: restore from backup.)
(Except that in my case, the ZFS server was the backup.)
Are there any freely available tools to do this for NTFS? If so, please post, as I'm sure more than a few people here would be interested in acquiring said tools. If not, what is your favorite non-free tool?
I've been a huge fan of ZFS, particularly after my basement flooded and, despite my NAS being submerged, I was able to recover every last bit of my data. Took a lot of work using dd and ddrescue, but I eventually got it done. That all said, a bit-read tool would be nice.
40 TB (40,000,000,000,000 bytes) @ 550 MB/s (which is what one would expect from the density increase) gives about 20 hours to write or read the whole drive (assuming you can drive it at that speed the whole time). This may require an HDD with a direct PCIe connection. :0
SATA 3 is 600 MB/s, so should be adequate. Generally, the SATA controller is connected via PCIe. The bottleneck is still the read and write speeds to/from the media. It is what it is. If it were a 40 TB PCIe SSD with 2 GB/s read speed, it would still take almost 6 hours.
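Quick check on that, treating the ~550 MB/s figure above as the media rate (an estimate from the density scaling, not a spec):

capacity_bytes = 40e12
for label, mb_s in (("MAMR HDD, ~550 MB/s media rate (SATA 3 allows 600)", 550),
                    ("hypothetical 40 TB PCIe SSD, 2000 MB/s", 2000)):
    hours = capacity_bytes / (mb_s * 1e6) / 3600
    print(f"{label}: {hours:.1f} h for a full read")
# ~20.2 h vs ~5.6 h; the media, not the interface, is the limiting factor either way.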
Since MAMR needs 'only' a new type of head and can use today's platters etc, the MAMR drives will not actually cost much more to make. WD will of course price them high at first, if only to recoup the R&D costs. But when the technology spreads, the price will drop rapidly.
This is interesting, and I'm glad WD is still innovating.
But, I still wonder if they are ever planning on changing the deceptive capacity labeling. Is that 40 "TB" hard drive going to be 36 TiB?
The missing space just keeps growing... If you could do me a favor, Anandtech, please put a GiB or TiB figure next to the manufacturer's labeled size in your reviews, so I won't need to use a calculator.
While I agree with the frustration, technically manufacturers aren't lying if they say 40 TB, because if the drive has 40 trillion bytes, then it IS 40 TB.
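The conversion being asked about, for reference (decimal terabytes vs binary tebibytes):

tb = 40
tib = tb * 1e12 / 2**40
print(f"{tb} TB = {tib:.2f} TiB")    # 40 TB = 36.38 TiB; the "missing" ~3.6 TiB is just units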
One big reason for the confusion is that Microsoft chose not to update their File Explorer code to the new standards in 1998. They still haven't done it, so now we're at 20 years of people learning falsehoods by using Windows.
When a word acquires a new meaning, that doesn't make the old meaning disappear. So there are two definitions of kilobyte: 1. 1024 bytes 2. 1000 bytes
This may create confusion--indeed I'm pretty sure that the whole point of introducing the second definition was to create confusion--but I don't see how you can blame Microsoft for that.
The problem lies with the operating system creators considering 1 TB as 1024 GB. All that needs to happen is a simple (ok, maybe not SIMPLE, but not super complex either) rewrite to view storage in blocks of 1000 vs 1024. Then OS and HD would unite in a brilliant flash of unity unseen since...ahh...special relativity and quantum mechanics? No, AT&T and Sprint? No, no...Star Wars and Disney? Oh never mind...it'd be GREAT!
Just for the purpose of making hard drive purchasers who are ignorant more content? No. The OS agrees (and should) with the reality of binary. Not marketing.
For crying out loud. I wish we could get over this nonsense. You do realize that it's the same amount of storage? It doesn't matter which number is used, as long as everyone uses the same way of describing it.
It matters because computing by its very nature lends itself to the binary world, powers of 2, hex, etc., and the idea of not doing this for describing disk capacities only started as a way of making customers think they were getting more storage than they actually were. When I was at uni in the late 1980s, nobody in any context used MB, GB, etc. based on a power of ten, as everything was derived from the notion of bytes and KB, which are powers of 2. Like so many things these days, this sort of change is just yet more dumbing down; oh, we must make it easier for people! Rubbish. How about for once we insist that people actually improve their intellects and learn something properly.
Anyway, great article Ganesh, thanks for that! I am curious though how backup technologies are going to keep up with all this, eg. what is the future of LTO? Indeed, as consumer materials become ever more digital, surely at some point the consumer market will need viable backup solutions that are not ferociously expensive. It would be a shame if in decades' time, the future elderly have little to remember their youth because all the old digital storage devices have worn out. There's something to be said for a proper photo album...
I have a few decrepit 5.25 inch full height hard drives (the sorts that included a bad sector map printed on their label made by companies long dead to this world) sitting in a box in my house that were from the 80s. They used a power of ten to represent capacity even before you attended university. This capacity discussion is absolutely not a new concern. It was the subject of lots of BBS drama carried out over 2400 baud modems.
For the longest time there was no common definition of a "byte". 5-bit byte? 6-bit byte? 7-bit byte? 11-bit byte? Most storage devices were labeled in bits, which are counted in base 10.
"It doesn't matter which number is used, as long as everyone uses the same way of describing it."
You would be amazed at how many times I still get the question : "Did I get ripped off? Windows says my hard drive is smaller than what <insert PC OEM here> said in their specs!"
That's why changing the way we describe this every few years is a problem. We need a standard to be used everywhere, no matter which one. Quite frankly, almost no one will ever do what your friends, according to you, do. Most don't even know offhand who makes their computer, much less how much storage it comes with.
This is a pleasant surprise (cheaper bulk storage is always a win). Projections I'd seen from a year or two ago had flash becoming cheaper per TB than HDDs by the mid-2020s. I assume the high expected cost for HAMR was probably a major driving factor in those projections. If MAMR really will be as cheap to implement as current PMR solutions, it's good news all around.
WD used to lie a lot less than others, but those HDD vs SSD projections are hilarious; new CEO, new habits, I suppose. SSDs will go after nearline soon enough, as they'll need to do that to offload all the bits produced. Anyway, interesting tech, but too late to matter much, and that's a pity.
It's going to take a heck of a long time before SSDs come close to the costs of HDD.
While HDDs seem to have newer tech to enable more storage in the same space, SSDs are relying on multilayering, which is now 64 layers but moving to 72. The path of shrinking the process has ended, unless some unknown breakthrough occurs, which we can't expect, because there's nothing known to have us expect one. How many layers can be made? At some point, it won't be possible to go any higher.
But MAMR was understood to be a slight possibility, and WD has made that breakthrough. There's no reason the 40TB-by-2025 figure shouldn't be believed, since preproduction sampling will be next year and production delivery will be in 2019. That's pretty quick.
Who are you to say it's too low? Are you doing research in this area to know that? Or do you just read articles on it and complain because it's not moving fast enough for you?
The issue with SSDs is that there already is a shortage of flash. Flash is a semiconductor, and hence manufacturing is a bottleneck. Unless a significantly better alternative to flash is found, SSDs will remain niche in terms of total storage amounts.
Exactly. That's the whole point of HAMR, MAMR, and all those other acronyms. If WD wasn't going to increase capacity, then they would just fire their entire R&D department
I really hope this means that SMR will be dumped now that a much better storage technology is available. SMR = slower speed & more error check/correction for reliability, all for just a small increase in storage density over PMR. I really hope it dies out.
In the consumer world I agree; its surprise behavior for people who expect something normal is sketchy enough that it's not worth the extra 20% it gives.
For enterprise though, there'll always be data center customers who're quite willing to make the tradeoff to store archival/backup/write-only data.
This is what the article says WD is currently doing: no consumer model, but if you own a data center and want HDDs by the shipping container (or some other massive quantity), they'll hook you up.
Hopefully MAMR completely takes over. I'd like to see BackBlaze get them as soon as possible so that we can see their reliability rates against other PMR/SMR drives.
"Despite new HDD technology, advancements in solid state memory technology are running at a faster pace. As a result SSD technology and NAND Flash have ensured that performance enterprise HDDs will make up only a very minor part of the total storage capacity each year in the enterprise segment."
What? Am I missing something? Those sentences mesh with each other, but disagree with both graphs shown. HDDs almost always offer more capacity, and the graph shows they are roughly 10x cheaper per TB. It would be tough for SSDs to have far surpassed HDDs in annual storage capacity in enterprise applications.
Hmm, well now I see the "performance" vs "capacity" enterprise HDD distinction in the second graph. Wow, that came out of nowhere. There was nothing in the text that indicated you were breaking HDDs into 2 segments up to that point. I would definitely recommend clarifying that.
The sentences are talking about 10/15k RPM SAS drives, the graph is 5.4/7.2k bulk storage drives. The former are almost as expensive as SSDs but still have all the big HDD limitations.
I have always wondered how the heck HAMR would work in such an environment; heat takes time, and a spinning disk is fast. But then I am no expert, so I could only wait. Turns out HAMR really doesn't work. And would HAMR work within helium?
Now MAMR suddenly comes out of nowhere. And I assume it would work with helium too!
My problem is that a 40TB HDD in 2025 is still too slow. WD could have produced this now at 1.2x the $/GB cost of current HDDs and cloud vendors would still snap them up like crazy.
With more self-learning microchips/code, I think HDDs should include some kind of NVRAM as a buffer/cache for 4K content, especially the pieces detected as part of the OS/programs.
Most of the OS/programs should fit in a 16-32GB NVRAM cache. Even with "novelty" pricing, Intel's Optane is around $45-80 at final consumer prices.
Or just a quality 128GB MLC SSD that could be used either as one big partition together with the HDD part, or as an SSD-only volume, leaving the HDD part for data. It would increase the price of a 4-10TB drive by barely $50-60.
Even with a low-capacity NVRAM/SSD cache, Windows should use this space for image thumbnails, for example, and for frequent apps (or let you pick what you want written to the buffer).
I still think HDDs are dead. I mean, this is fascinating technology, but how does it allow HDDs to really compete with SSDs? They say by 2025 we'll have 40TB MAMR drives. ~40TB SSDs are already available today. SSDs don't suffer from any of the mechanical issues of HDDs, don't necessarily produce as much heat, and have performance characteristics which are an order of magnitude above and beyond what HDDs are capable of, or even going to be capable of. If performance didn't matter and price per TB was the only thing that did, then tape storage would seem to be the better option at those capacities. As far as I know, tape storage is simpler, way cheaper, and comes with many side benefits. Obviously HDDs are cheaper than SSDs today, but how expensive will SSDs be in 2025? And the only thing SSD manufacturers would have to do to catch up is build up production capacity for what is already a known quantity and proven technology.
Well, HAMR was too little, too late, to compete with SSD.
The calculation of price per GB didn't account for the fact that price/GB depends on how many units are sold. As SSDs eat more and more of the market from HDs, price/GB falls for SSDs and increases for HDs.
At the time HAMR was expected to enter the market, the market share for HDs would be a niche too small to support the expected low price per GB.
MAMR will preserve the price advantage for HDs for a while, but it may be the last generation of HDs.
Where is the comment from the person who made $1000's last month working from home? Maybe she's sick or had an accident. I'd sleep more soundly if someone could look in on her. Thanks!
I am setting up my own video server and have 100s or even 1,000s of video files (currently only 480p, 720p, and 1080p), and I expect to fill up my 8 TB HD in a couple more years. That doesn't include any 4K video (which may include 1.5 Mbit (or more) audio in lossless formats like Atmos, DTS-MA, DD TrueHD). If you don't currently do it (or only have a few iTunes or Xbox Video downloads) you don't know what you are missing. My wife refuses to watch a video unless it has been digitized and all "extra" intro and credits have been removed. When we watch videos it is always from high-quality rips (for the most part the video quality is high enough to be "almost indistinguishable from the originals"). That usually translates into 4 GB/hour for 1080p and 1 GB/hour for 720p/480p video and audio. While it may not satisfy audiophiles, it looks quite nice on my brand new LG 55OLEDB6, which is generally considered to be one of the finest TVs on the market right now. I have been far less tolerant of low-quality content (especially older DVD rips at 480p or lower in some cases) than ever before, even though the LG TV does an outstanding job of upscaling lower-quality content to 4K (3840x2160).
You're overestimating the time it takes to fill up. I have a 13.58 TB btrfs array which is already at 11.80 TB usage and it's less than a year old. I try to only use 1080p x265 encodes but these aren't always available.
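For a sense of scale at those bitrates (the 4 GB/hour and 1 GB/hour figures are quoted above; the 4K number is a rough guess on my part, not the poster's):

drive_gb = 40 * 1000                       # 40 TB, decimal
for label, gb_per_hour in (("1080p rips", 4), ("720p/480p rips", 1), ("4K rips, est.", 17.5)):
    print(f"{label}: ~{drive_gb / gb_per_hour:,.0f} hours on a 40 TB drive")
# roughly 10,000 h of 1080p, 40,000 h of 720p/480p, and ~2,300 h of 4K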
Not sure about them posting negativity about HAMR. I think it is about cost and not reliability. The heat from HAMR can be reduced significantly with a shorter and narrower laser pulse.
@Samus - No. Heat cannot beat microwaves and has too many drawbacks that decrease reliability.
@cekim - You are thinking in obsolete RAID-only terms. Current solutions are much more complex and faster, even if they are based on RAID. You failed to consider the speed increase of the drives.
@Jaybus - 1 gigabit Ethernet is obsolete when it comes to storage systems. Storage systems use high-speed optical connections of dozens of gigabits per second. SANs do have a purpose, you know...
@tuxRoller - cloud is not something to use as backup if you have a lot of data.
@alpha754293 - Yes, cekim did not consider the speed increase.
@Arbie - True. And price must stay low so HDDs can keep the largest part of the market.
@sonny73n - Someone decided that the computer KB of 1024 bytes violates the SI definition of kilo, which is 1000. So they invented the KiB, MiB, and so forth absurdities. Veterans don't use that junk naming, though. For us, the KB is still 1024 bytes and the TB is still 1,099,511,627,776 bytes. So no worries.
Ranger1065 - Thursday, October 12, 2017 - link
Thanks for a very interesting article.bill.rookard - Thursday, October 12, 2017 - link
Agreed. Very cool stuff. I guess that the industry in general didn't want their hard drives 'with frikkin laser beams.' </dr.evil>dalewb - Friday, October 13, 2017 - link
Lol thanks for the laugh!BrokenCrayons - Thursday, October 12, 2017 - link
Interesting, but the news is disappointing. If this really does mean that HAMR is dead, that stinks. "It's MAMR time!" doesn't have the same ring to it so I'm still going to advocate the inferior, more expensive HAMR option.lmcd - Thursday, October 12, 2017 - link
If HAMR doesn't make a notable % marketshare, we can repurpose the acronym. Microwave-Assisted -> High frequency-Assisted.zoxo - Thursday, October 12, 2017 - link
ironically, microwave is orders of magnitude lower frequency than "heat"GoodRevrnd - Thursday, October 12, 2017 - link
But now you can load your data center with MAMRies.BrokenCrayons - Thursday, October 12, 2017 - link
This is logic I can't argue with. Long live the data center's MAMRies!milleron - Friday, October 13, 2017 - link
When these devices fail, have they gone tits up?boeush - Thursday, October 12, 2017 - link
It's all in how your pronounce it.For instance, I tend to favor "maimer" over "mah-mer"... ;-)
'nar - Friday, October 13, 2017 - link
I usually prefer to pronounce acronyms as the letters are pronounced in their respective words. In this case "UH"-ssisted would lead me to pronounce M"UH"MR. "AYE"ssisted just doesn't sound right. "AH"ssisted is close, but weird if you linger on the "H" too long. Although, I was pronouncing it as "Mammer," like Hammer, as I was reading the article, "EH"ssisted doesn't work.WesternDigital - Friday, October 13, 2017 - link
Hi! Erik from Western Digital here (I manage @WesternDigital on Twitter). HAMR isn't dead, it just isn't economically or technically feasible yet. We talk about why, and why MAMR is what we're working with now, in this article: http://innovation.wdc.com/game-changers/energy-ass...Alexvrb - Saturday, October 14, 2017 - link
Ah yes, but when should we expect HAMAMR drives? I'm looking forward to 100TB 3.5" drives. To uh, archive newsletters, and such. Yeah... that's it.HollyDOL - Saturday, October 14, 2017 - link
That's a lot of past-10pm "fun", even in 4kSamus - Sunday, October 15, 2017 - link
It's great to see representatives of the manufacture monitor and reply to an article with relevant feedback. Just another reason WD is totally killing it. Cheers!Samus - Sunday, October 15, 2017 - link
It isn't dead. Competition is good. It's possible HAMR could have densities well beyond 4Tb/sq in. in which case its expected increased costs, complexities, and possibly reduced reliabilities, might be of benefit to those data centers where storage density is more critical than long-term reliability. Considering HAMR has wear leveling algorithms and will share a lot with NAND in terms of firmware and controller technology, HAMR drives might have a more predictable "expiration" date.But it is clear that WD has made some very good moves. They have always been a strategically successful company in my opinion. Seagate beat them to market in various consumer technologies, but in the corporate and enterprise space, I think their success really began with the Raid Edition drives, near line innovation, and especially the acquisition of Hitachi, which clearly played out well for both companies, and consumers (Hitachi drives were historically more expensive but now that tech is available in mass market WD products)
jospoortvliet - Thursday, October 19, 2017 - link
Then again they shut down their WDLabs team. Seems less than smart to shut shown your innovation and research team to save costs but I don't know exactly what they did perhaps there were similar teams in the company...MajGenRelativity - Thursday, October 12, 2017 - link
Well, this looks quite interesting. Hopefully this will contribute to higher capacity HDDs with a slight speed increasebill.rookard - Thursday, October 12, 2017 - link
40TB on one HDD is more than I have spread out over almost 12 drives at the moment. My worry though is that those 40TB drives are going to be pretty pricey.MajGenRelativity - Thursday, October 12, 2017 - link
They probably will be very pricey, but come down in price over timecekim - Thursday, October 12, 2017 - link
The bigger concern is throughput - if it takes the bulk of the MTBF of a drive to write then read it we are gonna have a bad time... quick math - maybe I goofed, but given 250MB/s and TB = 1024^4 that's 167,772s or 2796m or 46 hours to read the entire drive. Fun time waiting 2 days for a raid re-build...imaheadcase - Thursday, October 12, 2017 - link
If you are using this for home use, you should not be using raid anyways. Since you will had SSD on computer, and also if its a server bandwidth is not a concern since its on LAN. And backing up to cloud is what %99 of people do in that situation.RAID is dead for most part.
qap - Thursday, October 12, 2017 - link
It's dangerous not only for RAID, but also for that "cloud" you speak of and underlaying object storages. Typical object storage have 3 replicas. With 250MBps peak write/read speed you are not looking at two days to replicate all files. In reality it's more like two weeks to one month because you are handling lot of small files, transfer over LAN and in that case both read and write suffer. Over the course of several weeks there is too high probability of 3 random drives failing.We were considering 60TB SATA SSDs for our object storage, but it simply doesn't add up even in case of SSD-class read/write.
Especially if there is only single supplier of such drives, chance of synchronized failure of multiple drives is too high (we had one such scare).
LurkingSince97 - Friday, October 20, 2017 - link
That is not how it works. If you have 3 replicas, and one drive dies, then all of that drive's data has two other replicas.Those two other replicas are _NOT_ just on two other drives. A large clustered file system will have the data split into blocks, and blocks randomly assigned to other drives. So if you have 300 drives in a cluster, a replica factor of 3, and one drive dies, then that drive's data has two copies, evenly spread out over the other 299 drives. If those are spread out across 30 nodes (each with 10 drives) with 10gbit network, then we have aggregate ~8000 MB/sec copying capacity, or close to a half TB per minute. That is a little over an hour to get the replication factor back to 3, assuming no transfers are local, and all goes over the network.
And that is a small cluster. A real one would have closer to 100 nodes and 1000 drives, and higher aggregate network throughput, with more intelligent block distribution. The real world result is that on today's drives it can take less than 5 minutes to re-replicate a failed drive. Even with 40TB drives, sub 30 minute times would be achievable.
bcronce - Thursday, October 12, 2017 - link
RAID isn't dead. The same people who used it in the past are still using it. It was never popular outside of enterprise/enthusiast use. I need a place to store all of my 4K videos.wumpus - Thursday, October 12, 2017 - link
[non-0] RAID almost never made sense for home use (although there was a brief point before SSDs where it was cool to partitions two drives so /home could be mirrored and /everything_else could be striped.Backblaze uses some pretty serious RAID, and I'd expect that all serious datacenters use similar tricks. Redundancy is the heart of storage reliability (SSDs and more old fashioned drives have all sorts of redundancy built in) and there is always the benefit of having somebody swap out the actual hardware (which will always be easier with RAID).
RAID isn't going anywhere for the big boys, but unless you have a data hording hobby (and tons of terrabytes to go with it), you probably don't want RAID at home. If you do, then you you will probably on need to RAID your backups (RAID on your primary only helps for high availability).
alpha754293 - Thursday, October 12, 2017 - link
I can see people using RAID at home thinking that it will give them the misguided latency advantage (when they think about "speed").(i.e. higher MB/s != lower latency, which is what gamers probably actually want when they put two SSDs on RAID0)
surt - Sunday, October 15, 2017 - link
Not sure what game you are playing, but at least 90% of the tier 1 games out there care mostly about throughput not latency when it comes to hard drive speed. Hard drive latency in general is too great for any reasonable game design to assume anything other than a streaming architecture.Ahnilated - Thursday, October 12, 2017 - link
Sorry but if you backup to the cloud you are a fool. All your data is freely accessible to anyone from the script kiddies on up. Much less transferring it over the web is a huge risk.Notmyusualid - Thursday, October 12, 2017 - link
@ AhnilatedI never liked the term 'script kiddies'.
What is the alternative? Waste your time / bust your ass writing your own exploit(s) - when so many cool exploits already exist?
Some of us who dabble with said scripts, have significant other networking / Linux knowledge, so it doesn't fit to denigrate us, just because we can't be arsed to write new exploits ourselves.
We've better things to be doing with our time...
I bet you don't make your own clothes, even though you possibly can.
BrokenCrayons - Friday, October 13, 2017 - link
If you allow your hair to grow long enough, clothing isn't that important. Oooor, you could just move to the tropics.mkozakewich - Sunday, October 15, 2017 - link
Regarding clothing, there are a lot of older people who would think it fitting to denigrate kids who are wholly unable to make their own clothes, given cloth and needle and thread. It has more to do with the knowledge and appreciation of the craft than with doing everything on your own.surt - Sunday, October 15, 2017 - link
And a generation ago the elders scoffed at kids who didn't know how to shoe a horse. Elders are always scoffing, its hard to recognize or accept your obsolescence.Manch - Monday, October 16, 2017 - link
LOL, Oh damn, the script kiddie got triggered LMFAO! JK JK JK.....Seriously the term script kiddies is meant to be denigrating and its not meant to apply to you.cm2187 - Thursday, October 12, 2017 - link
Maybe you live in a datacentre, but for the rest of us, uploading 40TB worth of data through a DSL connection is a no go...DanNeely - Thursday, October 12, 2017 - link
... and even if you eventually did that, redownloading for a restore would still be nightmarish.I back up some stuff to the cloud, but full system images are local to the NAS on my LAN. (Currently WHS 2011 in a 2 disk RAID 1 equivalentish mode.)
alpha754293 - Thursday, October 12, 2017 - link
My apologies to you for using WHS 2k11.oynaz - Friday, October 13, 2017 - link
There are many places where 1 gigabit connections are commonplace though.BurntMyBacon - Friday, October 13, 2017 - link
@oynazSure. For people with access to a 1Gb internet connection, a cloud backup/restore is much less painful. Of course it would still take the better part of a 4 days. (1 Gb = 128 GB. roughly half the HDD data rate that cekim used in his calculation that revealed a 46 hour (46.6 actually) rebuild time.
However, people with access to such connections are still a vast minority. Despite availability of the service increasing, it is far from ubiquitous. Some providers (in certain countries) implement restrictive data caps so even if you can get gigabit internet, it will take months to backup or restore a 40TB drive from cloud without incurring penalties and/or throttling. Comcast, for instance, now has a data cap of 1TB a month (not sure about caveats as I don't live anywhere near Comcast territory ;' ) ). It would take 40 months to get 40TB of data restored without penalty.
someonesomewherelse - Saturday, October 14, 2017 - link
Download sure, upload not so much. Also using the cloud implies using encryption which is annoying.Jaybus - Monday, October 16, 2017 - link
1 gigabit Ethernet has an upper limit of 125 MB/s. It would take a minimum of 320,000 seconds or around 3.7 days of continuous 125 MB/s transfer rate to backup a 40 TB drive. Cloud backup of large volumes of data isn't going to be practical until at least 20 gigabit connections are commonplace.tuxRoller - Thursday, October 12, 2017 - link
Backing up to the cloud is really slow.Retrieving from the cloud is really slow.
When gigabit becomes ubiquitous then it will make more sense.
Even then, your should keep at least one copy of the data for data about offline and local.
Btw, I agree that raid is dead, but for different reasons. Namely, we've the much more flexible erasure coding (much more sophisticated than the simple xor encoding used by some of the raid levels) schemes that let you apply arbitrary amounts of redundancy and decide on placement of data. That's what the data centers have been moving towards.
alpha754293 - Thursday, October 12, 2017 - link
Even with Gb WAN, it'd still be slow.I have GbE LAN and I'm starting to run into bottlenecks with that being slow that once I have the funds to do so, I'm likely going to move over to 4x FDR IB.
tuxRoller - Thursday, October 12, 2017 - link
Slower than an all ssd, 10g lan, but how many people have that? 1g is roughly HDD speed.BurntMyBacon - Friday, October 13, 2017 - link
@tuxRoller1Gbps = 128MBps. cekim seems to think that 250MBps is a better estimate and alpha75493 suggests that these drives will increase in speed well beyond that. Granted this will not hold up for small file writes, but for large sequential data sets, the days of 1G ethernet being roughly equal to HDD speed are soon coming to an end.
tuxRoller - Friday, October 13, 2017 - link
Even now HDD have sequential (non-cached) speeds in excess of 300MB (for enterprise 15k drives), but 250MB+ is currently available with the 8TB+ 7200 drives.Those are best case, but they might also be readily achievable depending on how your backup software works (eg., a block-based object store vs NTFS/zfs/xfs/etc).
alpha754293 - Thursday, October 12, 2017 - link
@cekimYour math is a little bit off. If the areal density increases from 1.1 Tb/in^2 to 4 Tb/in^2, then so too will the data transfer speeds.
It has to.
Check that and update your calcs.
@imaheadcase
RAID is most definitely not dead.
RAID HBAs addressing SANs is still farrr more efficient to map (even with GPT) a logical array rather than lots of physical tables.
You do realise that there are rackmount enclosures that hold like 72 drives, right?
If that were hosted as a SAN (or iSCSI), there isn't anything that you can put as AICs that will allow a host to control 72 JBOD drives simultaneously.
It'd be insanity, not to mention the cabling nightmare.
bcronce - Thursday, October 12, 2017 - link
Here's an interest topic on raid rebuilds for ZFS. While it can't fix the issue of writing 250MiB/s to a many TiB storage device, it is fun.Parity Declustered RAID for ZFS (DRAID)
A quick overview is that ZFS can quickly rebuild a storage device if the storage device was mostly empty. This is because ZFS only needs to rebuild the data, not the entire drive. On the other hand, as the device gets fuller, the rate of a rebuild gets slower because walking the tree causes random IO. DRAID allows for a two pass where it optimistically writes out the data via a form of parity, then scrubs the data after to make sure it's actually correct. This allows the device to be quickly rebuilt by deferring the validation.
alpha754293 - Thursday, October 12, 2017 - link
My biggest issue with ZFS is that there are ZERO data recovery tools available for it. You can't do a bit read on the media in order to recover the data if the pool fails.I was a huge proponent of ZFS throughout the mid 2000s. Now, I am completely back to NTFS because at least if a NTFS array fails, I can do a bit-read on the media to try and recover the data.
(Actually spoke with the engineers who developed ZFS originally at Sun, now Oracle and they were able to confirm that there are no data recovery tools like that for ZFS. Their solution to a problem like that: restore from backup.)
(Except that in my case, the ZFS server was the backup.)
BurntMyBacon - Friday, October 13, 2017 - link
Are there any freely available tools to do this for NTFS. If so, please post as I'm sure more than an few people here would be interested in acquiring said tools. If not, what is your favorite non-free tool?I've been a huge fan of ZFS, particularly after my basement flooded and despite my NAS being submerged, I was able to recover every last bit of my data. Took a lot of work using DD and DDRescue, but I eventually got it done. That all said, a bit read tool would be nice.
tuxRoller - Friday, October 13, 2017 - link
How often do you run backups?sonofgodfrey - Thursday, October 12, 2017 - link
40 TB (40,000,000,000,000 bytes) @ 550 MB/s (which would expect from the density increase) gives about 20 hours to write or read the whole drive (assuming you can drive it at the speed the whole time). This may require an HDD with a direct PCIe connection. :0Jaybus - Monday, October 16, 2017 - link
SATA 3 is 600 MB/s, so should be adequate. Generally, the SATA controller is connected via PCIe. The bottleneck is still the read and write speeds to/from the media. It is what it is. If it were a 40 TB PCIe SSD with 2 GB/s read speed, it would still take almost 6 hours.Threska - Thursday, October 12, 2017 - link
Just wait till you see what that is in SSDs.Arbie - Thursday, October 12, 2017 - link
Since MAMR needs 'only' a new type of head and can use today's platters etc, the MAMR drives will not actually cost much more to make. WD will of course price them high at first, if only to recoup the R&D costs. But when the technology spreads, the price will drop rapidly.edgineer - Thursday, October 12, 2017 - link
This is interesting, and I'm glad WD is still innovating.But, I still wonder if they are ever planning on changing the deceptive capacity labeling. Is that 40 "TB" hard drive going to be 36 TiB?
The missing space just keeps growing... If you could do me a favor, Anandtech, please put a GiB or TiB figure next to the manufacturer's labeled size in your reviews, so I won't need to use a calculator.
MajGenRelativity - Thursday, October 12, 2017 - link
While I agree with the frustration, technically manufacturers aren't lying if they say 40 TB, because if the drive has 40 billion bytes, then it IS 40 TB.ddriver - Thursday, October 12, 2017 - link
except that tera is for a trillion, not billionDr. Swag - Thursday, October 12, 2017 - link
You know they meant trillion :PMajGenRelativity - Thursday, October 12, 2017 - link
I did mean trillion, my badHollyDOL - Thursday, October 12, 2017 - link
Actually those definitions are not globally same, see https://en.wikipedia.org/wiki/Long_and_short_scale...Flunk - Thursday, October 12, 2017 - link
It's a good thing that we're typing in English where all the major countries have settled on the short scale.sonny73n - Friday, October 13, 2017 - link
Back in school days, I was taught that1 KB = 1024 Bytes
1 MB = 1024 x 1024 Bytes
1 GB = 1024 x 1024 x 1024 Bytes
and so on.
So how is 40 trillions bytes equal to 40TB?
Strom- - Friday, October 13, 2017 - link
Binary multiples were standardized in 1998 https://physics.nist.gov/cuu/Units/binary.html1 KB = 1 kilobyte = 1000 bytes
1 KiB = 1 kibibyte = 1024 bytes
One big reason for the confusion is that Microsoft chose not to update their File Explorer code to the new standards in 1998. They still haven't done it, so now we're at 20 years of people learning falsehoods by using Windows.
KAlmquist - Friday, October 13, 2017 - link
When a word acquires a new meaning, that doesn't make the old meaning disappear. So there are two definitions of kilobyte:1. 1024 bytes
2. 1000 bytes
This may create confusion--indeed I'm pretty sure that the whole point of introducing the second definition was to create confusion--but I don't see how you can blame Microsoft for that.
letmepicyou - Thursday, October 12, 2017 - link
The problem lies with the operating system creators considering 1 TB as 1024 GB. All that needs to happen is a simple (ok, maybe not SIMPLE, but not super complex either) rewrite to view storage in blocks of 1000 vs 1024. Then OS and HD would unite in a brilliant flash of unity unseen since...ahh...special relativity and quantum mechanics? No, AT&T and Sprint? No, no...Star Wars and Disney? Oh never mind...it'd be GREAT!06GTOSC - Thursday, October 12, 2017 - link
Just for the purpose of making hard drive purchasers who are ignorant more content? No. The OS agrees (and should) with the reality of binary. Not marketing.rrinker - Thursday, October 12, 2017 - link
Dog and cats, living together...gammaray - Thursday, October 12, 2017 - link
How? there is a reason why it's 1024. Computers are based on the binary system. Hard drives are measured in power of 2 ( 128, 256, 512, 1024 etc)HollyDOL - Saturday, October 14, 2017 - link
Indeed, I have yet to see a drive with sector size of 4000 instead of 4096 bytes :-)Glaurung - Thursday, October 12, 2017 - link
Mac OS has calculated storage capacity using TB rather than TiB for years now.lmcd - Thursday, October 12, 2017 - link
That'll happen when general users refer to Gibibytes instead of Gigabytes, etc.melgross - Thursday, October 12, 2017 - link
For crying out loud. I wish we could get over this nonsense. You do realize that it's the same amount of storage? It doesn't matter which number is used, as long as everyone uses the same way of describing it.mapesdhs - Thursday, October 12, 2017 - link
It matters because computing by its very nature lends itself to the binary world, powers of 2, hex, etc., and the idea of not doing this for describing disk capacities only started as a way of making customers think they were getting more storage than they actually were. When I was at uni in the late 1980s, nobody in any context used MB, GB, etc. based on a power of ten, as everything was derived from the notion of bytes and KB, which are powers of 2. Like so many things these days, this sort of change is just yet more dumbing down, oh we must make it easier for people! Rubbish, how about for once we insist that people actual improve their intellects and learn something properly.Anyway, great article Ganesh, thanks for that! I am curious though how backup technologies are going to keep up with all this, eg. what is the future of LTO? Indeed, as consumer materials become ever more digital, surely at some point the consumer market will need viable backup solutions that are not ferociously expensive. It would be a shame if in decades' time, the future elderly have little to remember their youth because all the old digital storage devices have worn out. There's something to be said for a proper photo album...
BrokenCrayons - Thursday, October 12, 2017 - link
I have a few decrepit 5.25 inch full height hard drives (the sorts that included a bad sector map printed on their label made by companies long dead to this world) sitting in a box in my house that were from the 80s. They used a power of ten to represent capacity even before you attended university. This capacity discussion is absolutely not a new concern. It was the subject of lots of BBS drama carried out over 2400 baud modems.bcronce - Thursday, October 12, 2017 - link
For the longest time where was no common definition of a "byte". 5 bit byte? 6 bit byte? 7 bit byte? 11 bit byte? Most storage devices were labeled in bits, which is labeled in base 10.melgross - Thursday, October 12, 2017 - link
Sheesh, none of that has any importance whatsoever outside of the small geeky areas in this business.alpha754293 - Thursday, October 12, 2017 - link
...except for the companies that got sued for fraudulent advertising, because 'Murica!Ratman6161 - Thursday, October 12, 2017 - link
"It doesn't matter which number is used, as long as everyone uses the same way of describing it."You would be amazed at how many times I still get the question : "Did I get ripped off? Windows says my hard drive is smaller than what <insert PC OEM here> said in their specs!"
melgross - Thursday, October 12, 2017 - link
That's why changing the way we describe this every few years is a problem. We need a standard to be used everywhere, no matter which one. Quite frankly, almost no one will ever do what your friends, according to you, do. Most don't even know offhand, who makes their computer, much less how much storage it comes with.DanNeely - Thursday, October 12, 2017 - link
This's a pleasant surprise (cheaper bulk storage is always a win). Projections I'd seen from a year or two ago were projecting a mid 2020's point for flash becoming cheaper per TB than HDDs. I assume the high expected cost for HAMR was probably a major driving factor in those projections. If MAMR really will be as cheap to implement as current PMR solutions it's good news all around.MajGenRelativity - Thursday, October 12, 2017 - link
Agreedjjj - Thursday, October 12, 2017 - link
WD used to lie a lot less than others but those HDD vs SSD projections are hilarious, new CEO ,new habits i suppose.SSDs will go after nearline soon enough as they'll need to do that to offload all the bits produced.
Anyway, interesting tech but too late to matter much and that's a pity.
melgross - Thursday, October 12, 2017 - link
It's going to take a heck of a long time before SSDs come close to the costs of HDD.While HDDs seem to have newer tech to enable more storage in the same space, SSD is relying on multilayering, which is now 64, but moving to 72. The idea of using smaller process diminution has ended, unless some unknown breakthrough occurs, which we can't expect, because there's nothing known to have us expect one. How many layers can be made? At some point, it won't be possible to go any higher.
But MAMR was understood to be a slight possibility, and WD has made that breakthrough. There's no reason to believe that the 40TB by 2025 shouldn't be believed since preproduction sampling will be next year, and production delivery will be in 2019. That's pretty quick.
Lolimaster - Thursday, October 12, 2017 - link
But 40TB on the top of the line for 2025 is still too low, considering near 10years of PMR stagnation.HAMR was supposed to give us 15-20TB by now, up to 50TB by 2020-2021 and up to 100TB in 2025.
melgross - Thursday, October 12, 2017 - link
Who are you to say it's too low? Are you doing research in this area to know that? Or do you just read articles on it and complain because it's not moving fast enough for you?AnnonymousCoward - Thursday, October 12, 2017 - link
Samsung says 128TB SSD in 2019.beginner99 - Friday, October 13, 2017 - link
The issue with SSDs is that there already is a shortage of flash. Flash is a semi-conductor and hence manufacturing is a bottleneck. Unless a significantly better alternative to flash is found, ssd will remain niche in terms of total storage amounts.someonesomewherelse - Saturday, October 14, 2017 - link
Cartel agreements you mean.shabby - Thursday, October 12, 2017 - link
And I'm sure when these are released they'll still be the same size like the current drives so wd can keep milking everyone.MajGenRelativity - Thursday, October 12, 2017 - link
Western Digital stated their intention to produce higher capacity drivesmadwolfa - Thursday, October 12, 2017 - link
Because this is what enterprise market is interested in.MajGenRelativity - Thursday, October 12, 2017 - link
Exactly. That's the whole point of HAMR, MAMR, and all those other acronyms. If WD wasn't going to increase capacity, then they would just fire their entire R&D departmentdbrons - Thursday, October 12, 2017 - link
yes, interesting and good news.Says drives avail 2019romrunning - Thursday, October 12, 2017 - link
I really hope this means that SMR will be dumped now that a much better storage technology is available. SMR = slower speed & more error check/correction for reliability, all for just a small increase in storage density over PMR. I really hope it dies out.DanNeely - Thursday, October 12, 2017 - link
In the consumer world I agree, its surprise behavior among people who expect something normal is sketchy enough that it's not worth the extra 20% it gives.For enterprise though, there'll always be data center customers who're quite willing to make the tradeoff to store archival//backup/write only data.
This is what the article says WD is currently doing no consumer model; but if you own a data center and want HDDs by the shipping container (or some other massive quantity) they'll hook you up.
romrunning - Thursday, October 12, 2017 - link
Hopefully MAMR completely takes over. I'd like to see BackBlaze get them as soon as possible so that we can see their reliability rates against other PMR/SMR drives.mikato - Thursday, October 12, 2017 - link
"Despite new HDD technology, advancements in solid state memory technology are running at a faster pace. As a result SSD technology and NAND Flash have ensured that performance enterprise HDDs will make up only a very minor part of the total storage capacity each year in the enterprise segment."What? Am I missing something? Those sentences mesh with each other, but disagree with both graphs shown. HDDs are almost always more capacity and the graph shows they are 10x less $/TB. It would be tough for SSDs to have far surpassed HDDs in annual storage capacity in enterprise applications.
mikato - Thursday, October 12, 2017 - link
Hmm, well now I see the "performance" vs "capacity" enterprise HDD distinction in the second graph. Wow, that came out of nowhere. There was nothing in the text that indicated you were breaking HDDs into two segments up to that point. I would definitely recommend clarifying that.
melgross - Thursday, October 12, 2017 - link
It's obviously wrong.
DanNeely - Thursday, October 12, 2017 - link
The sentences are talking about 10/15K RPM SAS drives; the graph is about 5.4/7.2K RPM bulk storage drives. The former are almost as expensive as SSDs but still have all the big HDD limitations.
Dug - Thursday, October 12, 2017 - link
Nice article. This is the type of writing I miss in the old AnandTech.
Krysto - Thursday, October 12, 2017 - link
The NSA will love this.
melgross - Thursday, October 12, 2017 - link
I'm not worried about the NSA. Anything they want, they can get from Google.
iwod - Thursday, October 12, 2017 - link
I have always wondered how the heck HAMR would work in such an environment: heating takes time, and a spinning disk is fast. But then I am no expert, so I could only wait. Turns out HAMR really doesn't work. And would HAMR work within helium? Now MAMR suddenly comes out of nowhere. And I assume it would work with helium too!
My problem is that a 40TB HDD in 2025 is still too slow a timeline. WD could have produced this now at 1.2x the $/GB of current HDDs and cloud vendors would still snap them up like crazy.
melgross - Thursday, October 12, 2017 - link
The greater the information density, the faster it works, particularly for sequential reads/writes. The largest/fastest HDDs now do over 265MB/sec. I would imagine that with 4 times the density, these speeds would increase several times.
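Quick back-of-envelope, in Python, under one assumption: sequential throughput tracks bits per track (linear density), which grows roughly as the square root of areal density, so this is a rough sketch rather than anything WD has published.

import math

current_speed_mb_s = 265      # today's fastest 3.5" HDDs on outer tracks
areal_density_gain = 4        # assumed jump, e.g. ~1 Tbit/sq.in. -> ~4 Tbit/sq.in. with MAMR

# Bits per track scale roughly with the square root of areal density,
# so project sequential throughput the same way.
projected_speed = current_speed_mb_s * math.sqrt(areal_density_gain)
print(f"Projected sequential speed: ~{projected_speed:.0f} MB/s")

# Time to read one hypothetical 40 TB (decimal) drive end to end.
drive_bytes = 40e12
hours = drive_bytes / (projected_speed * 1e6) / 3600
print(f"Full-drive read: ~{hours:.0f} hours")

So under that assumption a 4x density gain roughly doubles sequential speed, and a full read of a 40TB drive still takes on the order of a day.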
Lolimaster - Thursday, October 12, 2017 - link
With more self-learning microchips/code, I think HDDs should include some kind of NVRAM as a buffer/cache for 4K-block content, especially the blocks detected as part of the OS/programs. Most of the OS/programs should fit in a 16-32GB NVRAM cache. Even at "novelty" prices, Intel's Optane modules are around $45-80 at final consumer prices.
Or just a quality 128GB MLC SSD that could be presented as one big volume together with the HDD portion, or as a separate SSD, leaving the HDD part for data. It would barely increase the price of a 4-10TB drive, by maybe $50-60.
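Something like this toy policy is what I have in mind (hypothetical Python, not any real drive firmware; the pin-the-OS-blocks rule and the block granularity are just assumptions):

from collections import OrderedDict

class HybridCache:
    """Toy NVRAM/SSD cache in front of an HDD: pinned OS/app blocks
    never get evicted; everything else is plain LRU."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.pinned = set()            # blocks flagged as OS/program data
        self.lru = OrderedDict()       # block -> None, kept in access order

    def pin(self, block):
        self.pinned.add(block)
        self.lru.pop(block, None)

    def access(self, block):
        if block in self.pinned:
            return "hit (pinned)"
        if block in self.lru:
            self.lru.move_to_end(block)
            return "hit"
        # Miss: fetch from the HDD, then cache it, evicting LRU entries if needed.
        while len(self.pinned) + len(self.lru) >= self.capacity and self.lru:
            self.lru.popitem(last=False)
        self.lru[block] = None
        return "miss (fetched from HDD)"

cache = HybridCache(capacity_blocks=4)
cache.pin("ntoskrnl.exe:0")
for b in ["app:1", "thumb:2", "app:1", "ntoskrnl.exe:0"]:
    print(b, "->", cache.access(b))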
Lolimaster - Thursday, October 12, 2017 - link
Even with a low-capacity NVRAM/SSD cache, Windows should use this space for image thumbnails, for example, and frequently used apps (or let you pick what you want written to the buffer).
Magichands8 - Thursday, October 12, 2017 - link
I still think HDDs are dead. I mean, this is fascinating technology, but how does it allow HDDs to really compete with SSDs? They say by 2025 we'll have 40TB MAMR drives; ~40TB SSDs are already available today. SSDs don't suffer from any of the mechanical issues of HDDs, don't necessarily produce as much heat, and have performance characteristics an order of magnitude beyond what HDDs are capable of, or are ever going to be capable of. If performance didn't matter and price per TB were the only thing that did, then tape storage would seem to be the better option at those capacities. As far as I know, tape storage is simpler, way cheaper, and comes with many side benefits. Obviously HDDs are cheaper than SSDs today, but how expensive will SSDs be in 2025? And the only thing SSD manufacturers would have to do to catch up is build up production capacity for what is already a known quantity and proven technology.
pavag - Thursday, October 12, 2017 - link
Well, HAMR was too little, too late to compete with SSDs. The calculation of price per GB didn't account for the fact that price/GB depends on how many units are sold. As SSDs eat more and more of the market from HDDs, price/GB falls for SSDs and rises for HDDs.
At the time HAMR was expected to enter the market, the market share for HDDs would be a niche too small to support the expected low price per GB.
MAMR will preserve the price advantage of HDDs for a while, but it may be the last generation of HDDs.
AbRASiON - Thursday, October 12, 2017 - link
OK, great, but when can I buy a 10, 12, or 14TB drive for under $220 US? :(
Zim - Thursday, October 12, 2017 - link
Where is the comment from the person who made $1000s last month working from home? Maybe she's sick or had an accident. I'd sleep more soundly if someone could look in on her. Thanks!
Magichands8 - Thursday, October 12, 2017 - link
AIDS
Arbie - Thursday, October 12, 2017 - link
@Zim +3
trivor - Thursday, October 12, 2017 - link
If you are setting up your own video server and have hundreds or even thousands of video files (currently only 480p, 720p, and 1080p in my case), the space goes fast; I expect to fill up my 8TB HDD in a couple more years. That doesn't include any 4K video (which may carry 1.5 Mbit or higher audio in lossless formats like Atmos, DTS-HD MA, and Dolby TrueHD). If you don't currently do it (or only have a few iTunes or Xbox Video downloads), you don't know what you're missing. My wife refuses to watch a video unless it has been digitized and all the "extra" intros and credits have been removed. When we watch videos it is always from high-quality rips (for the most part the video quality is high enough to be almost indistinguishable from the originals). That usually translates into 4 GB/hour for 1080p and 1 GB/hour for 720p/480p video and audio. While it may not satisfy audiophiles, it looks quite nice on my brand-new LG 55OLEDB6, which is generally considered one of the finest TVs on the market right now. I have been far less tolerant of low-quality content (especially older DVD rips at 480p or lower) than ever before, even though the LG TV does an outstanding job of upscaling lower-quality content to 4K (3840x2160).
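Quick math on those rip sizes (Python; the 4K figure is just my guess, not a measured rate):

gb_per_hour = {"1080p": 4, "720p/480p": 1, "4K (guess)": 15}   # 4K rate is an assumption
for drive_tb in (8, 14, 40):
    capacity_gb = drive_tb * 1000        # decimal TB, as drives are marketed
    for quality, rate in gb_per_hour.items():
        print(f"{drive_tb}TB holds ~{capacity_gb / rate:,.0f} hours of {quality}")

Even an 8TB drive is roughly 2,000 hours of 1080p at those bitrates; 4K is what really eats a 40TB drive.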
foobaz - Thursday, October 12, 2017 - link
You are very lucky to have a wife who appreciates and benefits from your hobby. :-)
someonesomewherelse - Saturday, October 14, 2017 - link
You're overestimating the time it takes to fill up. I have a 13.58 TB btrfs array which is already at 11.80 TB used and is less than a year old. I try to use only 1080p x265 encodes, but these aren't always available.
mazz7 - Thursday, October 12, 2017 - link
A very nice, interesting article. AnandTech FTW for educating tech enthusiasts :)
stux - Thursday, October 12, 2017 - link
Enjoyed the article. Thanks :)
AnTech - Friday, October 13, 2017 - link
Bring SSDs (faster, larger, and cheaper). Once you try SSDs, you do not want rotational mechanical disks, even for free!
AnTech - Friday, October 13, 2017 - link
Bring faster, larger, and cheaper SSDs. Once you try them, you do not want rotational mechanical disks, even for free!
Lolimaster - Saturday, October 14, 2017 - link
For movies, music, pictures (except for thumbnail generation), and game storage, an HDD is what you want. An SSD is basically for the OS/apps, some open-world games, and whatever a casual user stores (<500GB at any given time, and that's the maximum usage for years).
someonesomewherelse - Saturday, October 14, 2017 - link
So where are the cheap (~100 EUR) 12 TB drives? #teamdatahoarding
Peskarik - Sunday, October 15, 2017 - link
This is a really interesting article to read for a layman, thank you!
zodiacfml - Sunday, October 15, 2017 - link
Not sure about them being so negative on HAMR. I think it is about cost and not reliability. The heat from HAMR can be reduced significantly with a shorter and narrower laser pulse.
Overmind - Wednesday, October 25, 2017 - link
@Samus - No. Heat cannot beat microwaves and has too many drawbacks that decrease reliability.
@cekim - You are thinking in obsolete RAID-only terms. Current solutions are much more complex and faster, even if they are based on RAID. You failed to consider the speed increase of the drives.
@Jaybus - 1 gigabit Ethernet is obsolete when it comes to storage systems. Storage systems use high-speed optical connections of dozens of gigabits per second. SANs do have a purpose, you know...
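Rough numbers to illustrate (Python; the link speeds chosen and the ~80% usable throughput are assumptions, not measurements):

drive_bytes = 40e12                     # one hypothetical 40 TB (decimal) drive
links_gbps = {"1 GbE": 1, "10 GbE": 10, "25 GbE": 25, "100 Gb link": 100}
efficiency = 0.8                        # assume ~80% of line rate is usable

for name, gbps in links_gbps.items():
    usable_bytes_per_s = gbps * 1e9 / 8 * efficiency
    hours = drive_bytes / usable_bytes_per_s / 3600
    print(f"{name}: ~{hours:.1f} hours to move one drive's worth of data")

Over 1 GbE that's around 4-5 days per drive; over the links SANs actually use it's a few hours.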
@tuxRoller - The cloud is not something to use as a backup if you have a lot of data.
@alpha754293 - Yes, cekim did not consider the speed increase.
@Arbie - True. And price must stay low so HDDs can keep the largest part of the market.
@sonny73n - Someone decided that the computer KB that equals 1024 bytes violates the SI definition of kilo, which is 1000. So they invented the KiB, MiB, and so forth absurdities. Veterans don't use that junk naming, though. For us, a KB is still 1024 bytes and a TB is still 1099511627776 bytes. So no worries.
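The gap between the two readings is not trivial at these capacities; a quick comparison (Python):

marketed_tb = 40
decimal_bytes = marketed_tb * 10**12     # how the drive is marketed (SI terabytes)
tib = decimal_bytes / 2**40              # the same capacity in 1024-based units
print(f"A '{marketed_tb}TB' drive holds {tib:.2f} TiB")
print(f"Shortfall vs the 1024-based reading: {marketed_tb - tib:.2f} TiB (~{(1 - tib / marketed_tb) * 100:.1f}%)")

So a drive sold as 40TB comes out to about 36.4 TiB, roughly 9% less than the 1024-based reading.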
FvBilsen - Thursday, November 9, 2017 - link
Can somebody make the WD presentation with the slides in this article available? Can somebody give me guidance on how to get it?