
  • eek2121 - Friday, December 29, 2017 - link

    I was wondering when a company was going to step up and do this. They should also have a second set of actuators on the opposite side of the platter. They could boost performance to the point where hard drives become competitive with SSDs. If they could get performance up to the level of a SATA SSD, many people would be willing to pay the premium. A 4TB drive that performs like an SSD for $250? I'll take 4.

    Also, from the sound of things it seems like these drives will be relying on RAID for the performance increase? They should handle this stuff in the drive controller.
  • quiksilvr - Friday, December 29, 2017 - link

    All the actuators in the world will never beat the speed of light. This is great for NAS or hosting VMs but this will not replace SSDs.
  • sor - Friday, December 29, 2017 - link

    You’re right, but I think the theory was that with multiple actuators you could get into the ballpark of SSDs on an IOPs number. Latency will still be close to the same, but you could realistically get a spinning disk up to low 1000s of Random IOPs with multiple actuators, not far off from the 2k-8k IOPs a busy Samsung 840 Pro SSD will do.
  • ImSpartacus - Friday, December 29, 2017 - link

    So you're saying it's no longer orders of magnitude apart?
  • saratoga4 - Friday, December 29, 2017 - link

    > you could realistically get a spinning disk up to low 1000s of Random IOPs with multiple actuators,

    If the HDD read speed were infinitely fast, a 5 ms seek time means 200 IOPS per actuator. Getting to even 1000 IOPS would require an insanely fast read speed and 5 actuators, which doesn't seem too likely.

    >not far off from the 2k-8k IOPs a busy Samsung 840 Pro SSD will do.

    The 840 Pro can hit 20k IOPS for random IO. You would need more than 100 platters + actuators to match that.
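
    A quick back-of-envelope sketch of this math (Python; the seek, rotational-latency, and transfer figures below are illustrative assumptions, not measured numbers):

    ```python
    # Rough random-IOPS estimate if each actuator services requests independently.
    def hdd_iops(seek_ms, rot_latency_ms, transfer_ms, actuators=1):
        service_ms = seek_ms + rot_latency_ms + transfer_ms
        return actuators * 1000.0 / service_ms

    # ~5 ms average seek, ~4.2 ms average rotational latency at 7200 RPM,
    # near-negligible transfer time for small random IO:
    print(hdd_iops(5.0, 4.2, 0.1, actuators=1))  # ~108 IOPS
    print(hdd_iops(5.0, 4.2, 0.1, actuators=2))  # ~215 IOPS
    print(hdd_iops(5.0, 4.2, 0.1, actuators=8))  # ~860 IOPS, sor's hypothetical 8-actuator case
    ```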
  • sor - Friday, December 29, 2017 - link

    Responding to OP’s idea of multiple actuators, given that a single HDD today is good for up to 150 IOPs, we could assume about 150 per actuator. Taking the idea to a logical advancement of say 8 distinct actuators (maybe two pivots on opposite sides of the platter with four distinct heads each) you’d get over 1k IOPs.

    I’m not saying this will ever happen, and there are details like having to be seeking evenly across all platters to get top speed, but there’s potential there. If you’d have asked me if we would see this MAT tech at all I would have been skeptical.
  • Bullwinkle-J-Moose - Friday, December 29, 2017 - link

    "but you could realistically get a spinning disk up to low 1000s of Random IOPs with multiple actuators, not far off from the 2k-8k IOPs a busy Samsung 840 Pro SSD will do."
    ----------------------------------------------------------------------------------------------------------
    An 850 Pro will copy and paste data from and to itself at twice the speed of an 840 Pro

    Internal throughput on SSDs has been ignored at this site ever since I first brought it up over a year ago

    Try improving on the best case scenario / not the worst!

    Synthetic tests have no place in the real world

    Billy and Ryan need to wake up and smell the coffee
  • jordanclock - Friday, December 29, 2017 - link

    Why do we care about how fast an SSD copies data on itself? Is this really a bottleneck for any common use cases?
  • WinterCharm - Friday, December 29, 2017 - link

    Many tasks like video editing require this sort of thing. So there's a significant amount of stuff that uses this type of processing.
  • jordanclock - Saturday, December 30, 2017 - link

    That sounds like an atypical use case to me. But yes, that would absolutely be a scenario where you would want high copy speeds on the same drive.
  • Bullwinkle-J-Moose - Friday, December 29, 2017 - link

    "Why do we care about how fast an SSD copies data on itself? Is this really a bottleneck for any common use cases?"
    -----------------------------------------------------------------------------------------------------------
    Because it accurately shows how fast an SSD can simultaneously Read and Write data

    or in other words.......

    Why then would we care how fast a new Seagate HDD copies data to and from itself simultaneously?
  • jordanclock - Saturday, December 30, 2017 - link

    Well, I don't care about how fast a new Seagate HDD copies to and from itself. I care about how fast data enters and leaves the drive to other mediums, like most people or businesses would.

    These drives sound fantastic for read-intensive activities, especially where you're pulling lots of small reads scattered across large data containers.
  • Bullwinkle-J-Moose - Saturday, December 30, 2017 - link

    "I care about how fast data enters and leaves the drive to other mediums, like most people or businesses would."
    ---------------------------------------------------------
    The internal copy/paste test shows how fast the drive itself is able to read and write at the same time

    This is the MAXIMUM speed at which data can enter and leave the drive at the same time which was what you were most interested in

    Data cannot enter and leave the drive faster than the internal throughput of the drive itself but can be slower due to how the drive is connected to the rest of the system

    Internal throughput can be considerably higher than connected system throughput and an ideal drive would read and write data simultaneously as fast as its (PCIe?) connection allows while at the same time reading and writing data internally at an even faster rate with proper onboard processing

    With PCIe 5.0's arrival, 16 lanes of read / write to and from the drive, in addition to the on-board / internal data handling (copy / paste) speed, would make a great starting point for the next-gen drives after 2020
  • Bullwinkle-J-Moose - Saturday, December 30, 2017 - link

    Ideally, a 20-core CPU running several VMs should have a multilane, parallel processing drive to match it

    Maximum simultaneous read / write to and from the drive with high speed simultaneous internal data handling is just what the doctor ordered for massive multi-core operations
  • Bullwinkle-J-Moose - Saturday, December 30, 2017 - link

    0 = )
  • mode_13h - Friday, December 29, 2017 - link

    The speedup (in IOPS volume) will likely be close to N, where N is the number of actuators. However, latency wouldn't significantly drop, since you're probably still talking about writing to a pre-determined location that needs to be accessed by a specific actuator.

    Where I can see a super-linear speedup is that the actuator/head assembly mass should decrease by close to 1/N. However, the time to write a cluster should similarly decrease (assuming clusters are striped across the platters).

    What I want to know is whether & to what extent each actuator needs to compensate for what the other is doing. When an actuator accelerates, it's not only moving itself, but the entire drive. The tracks are so thin that I'd imagine the drive *must* move by at least a couple track widths, during a typical seek. With multiple actuators moving around, it seems like each would need to compensate for how the others are moving the drive. This could be why such techniques aren't commonplace.
  • mode_13h - Friday, December 29, 2017 - link

    Sorry, I meant to say the speed of writing a cluster should drop to 1/N - not the time. So, that would partially offset the benefit of multi-actuator. In other words, your seek time would halve (with 2 actuators) but your write time would double.

    That said, I know seeking is the part they're worried about, and dominates the time needed to perform small writes.
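
    A toy model of the trade-off described in these two comments, under their stated assumptions (clusters striped across all platters, seek time scaling with 1/N); cjl's replies further down dispute the striping premise, so all parameters here are purely illustrative:

    ```python
    # Per-request service time with N actuators, assuming a cluster is striped
    # across only the surfaces under one actuator (transfer rate drops to 1/N)
    # while seek time improves to 1/N (the lighter-arm assumption made above).
    def service_ms(io_kb, n, base_seek_ms=8.0, full_stack_mb_s=200.0):
        seek = base_seek_ms / n
        transfer = (io_kb / 1024.0) / (full_stack_mb_s / n) * 1000.0
        return seek + transfer

    for io_kb in (4, 64, 4096):
        print(io_kb, round(service_ms(io_kb, 1), 2), round(service_ms(io_kb, 2), 2))
    # 4 KB:    8.02 ms -> 4.04 ms   (seek-dominated: multi-actuator wins)
    # 64 KB:   8.31 ms -> 4.63 ms
    # 4096 KB: 28.0 ms -> 44.0 ms   (transfer-dominated: a single large request loses)
    ```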
  • StevoLincolnite - Friday, December 29, 2017 - link

    Electrons moving down a conduit will never reach the speed of light.

    https://en.wikipedia.org/wiki/Speed_of_electricity
  • ddrіver - Saturday, December 30, 2017 - link

    Nothing with mass ever reaches the speed of light. But since most people here got a TV education what can you expect?
  • chaos215bar2 - Sunday, December 31, 2017 - link

    Data in a wire is not transmitted by particles with mass. It’s transmitted by the electric field created by said particles, which does propagate at the speed of light — more or less.
  • FunBunny2 - Monday, January 1, 2018 - link

    "which does propagate at the speed of light — more or less."
    not really. very much less

    "Answer 3:

    Light travels through empty space at 186,000 miles per second. The electricity which flows through the wires in your homes and appliances travels much slower: only about 1/100 th the speed of light. Part of the reason is that light is massless; it has no weight, whereas the electricity flowing in the wires is made up of a stream of electrons, all of which have some small amount of weight. In addition, the electrons flowing through the wires constantly bump into the atoms of the wire, which slows them down considerably. If you were to take the electrons out of the wire and make them flow through space (which is essentially what you do when you make a spark), they can move faster, but no matter what, they cannot move as fast light."
    here: http://scienceline.ucsb.edu/getkey.php?key=2910
  • FunBunny2 - Monday, January 1, 2018 - link

    and this overstates somewhat. according to my physics teacher: electricity doesn't propagate as water through a pipe, electrical fields through anything. what happens: an electron enters the wire, which promptly hits an electron of an atom of the wire. the hit electron then heads off and hits an electron of its atom, or another, repeat prodigiously. or, IOW, nearly no electron (or wave or whatnot) travels from one end of the wire to the other. it's massive bumper cars on a quantum playing field.
  • UpSpin - Tuesday, January 2, 2018 - link

    "The electricity which flows through the wires in your homes and appliances travels much slower: only about 1/100 th the speed of light."
    But we don't care about the electron flow; we care about the energy flow. The electrons are the carriers, the same as the water molecules. But to transmit energy, neither the water molecules nor the electrons have to reach the end of the water hose, or the end of the wire. The electrical energy, and thus the signals, are transmitted by the electromagnetic wave around the wire. And this wave propagates at 50%-99% of the speed of light, depending on the medium.
    https://en.wikipedia.org/wiki/Velocity_factor
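
    Rough numbers for the two speeds this sub-thread keeps conflating (textbook-style values for a copper wire, chosen only for illustration):

    ```python
    # The signal (the EM wave guided by the wire) moves at a large fraction of c;
    # the electrons themselves only drift at a tiny speed.
    c = 3.0e8                 # speed of light in vacuum, m/s
    velocity_factor = 0.7     # typical cable value, per the velocity-factor link above
    signal_speed = velocity_factor * c

    # Electron drift velocity v = I / (n * q * A) for a copper conductor
    I = 1.0        # current, A
    n = 8.5e28     # free-electron density of copper, per m^3
    q = 1.6e-19    # elementary charge, C
    A = 1.0e-6     # 1 mm^2 cross-section, in m^2
    drift_speed = I / (n * q * A)

    print(f"signal: {signal_speed:.2e} m/s")         # ~2.1e8 m/s
    print(f"electron drift: {drift_speed:.2e} m/s")  # ~7.4e-5 m/s, well under 1 mm/s
    ```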
  • letmepicyou - Tuesday, January 2, 2018 - link

    I tend to think you could come a lot closer to SSD performance if you made each actuator arm move independently rather than grouping them. You're always going to have a seek time, but we've only scratched the surface as far as "normal" hard drive performance. Sure we've added things like NCQ, increased areal density, and slapped on bigger buffers. But there are so many other things you could do...independent actuator arms could theoretically allow you to create a RAID array inside a single drive. I sense a lot of untouched possibilities.
  • mode_13h - Wednesday, January 3, 2018 - link

    One benefit of linking the arms would seem to be mechanical rigidity. Making them all independent might also make them too bouncy.

    Also, consider that moving the arm also jerks the spindle in the opposite direction. So, each actuator might need to compensate for the effects of the others on the spindle's precise location. That could add a lot of computational overhead, as you scale up the number of actuators.
  • Sivar - Friday, December 29, 2017 - link

    This has been done before by Seagate in the 90's.
  • mode_13h - Friday, December 29, 2017 - link

    Perhaps, but it gets much harder when your areal density increases enough that you have to compensate for the movement of the spindle caused by other actuators. I won't repeat any more of what I said above, but perhaps that was a factor in the technique falling into disuse.
  • Sivar - Friday, December 29, 2017 - link

    This was done before by Seagate in the 90's. Seagate sold a drive where each platter had its own actuator. STR was about double that of a standard drive (I don't remember how many platters these had, probably 3), but the product was abandoned due to high costs and greater complexity.

    That's still the case. You can get better performance by adding more drives, especially when not using performance-killing RAID5 or RAID6 configurations. For high IOPS loads, we now have SSDs and Intel's Optane.
  • mode_13h - Friday, December 29, 2017 - link

    RAID is only a real performance problem when a significant number of your writes are updates to records smaller than your stripe size. If you're reading & writing mostly whole stripes, then it's all good.
  • Sivar - Saturday, December 30, 2017 - link

    A true statement, though the article's title suggests that IOPS is the goal, which usually implies the need for small, random IOs.
  • mode_13h - Sunday, December 31, 2017 - link

    RAID stripe element size is still pretty darn small.

    Anyway, I'm led to believe RAID is losing favor among hyperscalers. Not my area of expertise, but I think they prefer object-based storage and use higher-level redundancy.
  • ianmills - Friday, December 29, 2017 - link

    Pretty underwhelming. There's only a 50% chance that the files will be on separate actuators. The use cases for this also intersect with striping use cases... Having more actuators also increases the failure rate of a drive

    Seems pretty useless to me. If IOPs is important, then why even use an HDD in the first place?
  • mode_13h - Friday, December 29, 2017 - link

    Pretty underwhelming analysis.

    They said this is for cloud, where your queues are probably deep enough to really care about IOPS. With deeper queue depths, it should be the norm that requests divide up pretty well.

    And to your latter point, if SSDs were as cost-effective per GB as HDDs, then obviously no one would bother. Since they're not, then HDDs are sometimes still needed. And even if it's not for a realtime or interactive application, the service time of HDDs is an issue for throughput. And that affects even bulk, background operations.
  • Guspaz - Friday, December 29, 2017 - link

    It seems to me that, even though it’s slowed down a lot recently, the cost of SSDs is still decreasing faster than HDDs, which haven’t changed in price per TB in half a decade.
  • mode_13h - Saturday, December 30, 2017 - link

    Wow, then someone had better tell the data centers so they don't make a costly mistake. If they are looking at any forecasts at all, they must be the wrong ones.
  • goatfajitas - Friday, December 29, 2017 - link

    2x the heads means 2x the likelihood of a failure. If they put 2 more stacks on the other side that would be 4x. Not saying it isn't doable, but I doubt we would see this with the same fail rates.
  • mode_13h - Friday, December 29, 2017 - link

    It's not 2x the heads - it's 2x the actuators. So, how many drive failures are due to the actuators? Perhaps they could beef up the actuators to keep the failure rate from increasing much.
  • bcronce - Saturday, December 30, 2017 - link

    I wouldn't be surprised if the opposite occurred. If the IO load does not increase by 2x, each actuator may not need to work as hard.
  • Miggleness - Saturday, December 30, 2017 - link

    I thought about additional autonomous actuators years ago. However, we need to remember that more actuators = higher risk of failure. Resiliency will always be the biggest advantage non-mechanical storage devices will have over mechanical ones. Will you trust Seagate's multi-actuator HDD with your data?
  • Dr. Swag - Saturday, December 30, 2017 - link

    SSDs will forever be ahead when it comes to latency
  • mode_13h - Saturday, December 30, 2017 - link

    LOL, I dunno. Some DRAM-less 3D QLC NAND might end up even worse than fast HDDs, on the write side.
  • Lolimaster - Saturday, December 30, 2017 - link

    They will never compete on the things SSDs ace them at: IOPS, latency and transfer rate, so why even worry? The thing they should work on, density, a.k.a. HAMR, is still nowhere to be found, 3 years later (and probably more).

    Where are our 20-30TB drives? HDDs are just for storage now, especially video and music and non-open-world games; everything else, SSD.
  • boozed - Monday, January 1, 2018 - link

    TBH I'm surprised that the arms aren't already completely independent.

    You could then do some interesting things with parallel reads/writes on a per platter level, depending on cache size.
  • The Hardcard - Friday, December 29, 2017 - link

    This would seem to appeal to workstation users in some use cases as well. Editors will soon be rendering multiple 8K video tracks...
  • Pork@III - Friday, December 29, 2017 - link

    "increase capacities of hard drives for cloud/exascale datacenters to over 20 TB by 2020 and to 40 TB within the next five or seven years.'
    Tooooooo slow and small increase. Toooooo pathetic and miserable!
  • sor - Friday, December 29, 2017 - link

    It would seem one could approximate this tech by running multiple platters in RAID 0. You’d have multiple serial links with RAID, that’s a difference, but as far as having multiple actuators servicing different platters, one could get an idea of what performance increase to expect by thinking in terms of RAID setups.
  • name99 - Friday, December 29, 2017 - link

    I agree. Selling it to data warehouses (who are the group most capable of mass striping their drives) seems a strange choice...
    The better target would seem to be prosumers who need more storage than they're willing to pay for flash, but might be willing to pay a SMALL extra cost (maybe 15% or so) for a drive that was noticeably faster.
  • mode_13h - Friday, December 29, 2017 - link

    I agree. Data warehousing sounds boring. Warehouses remind me of forklifts. Forklifts are slow. Slow forklifts have no concern for throughput, so why bother?

    If people cared how fast their jobs ran, why would they use a virtualized server in some data warehouse? They would use a separate physical machine, overclocked, with a dedicated RAID of NVMe SSDs and it would live in a facility known as a data racetrack. And people who go to racetracks are rich, so the obscene cost of all this hardware would be no problem for them.

    That's brilliant. You should be a consultant.
  • mode_13h - Friday, December 29, 2017 - link

    Sure. If purchase and operating cost are unbounded, just double the drives. Problem solved! Genius!
  • jordanclock - Friday, December 29, 2017 - link

    RAID requires RAID controllers, which will take up space in your servers. Those same controllers represent a new point of failure. This method removes that entire piece of equipment. Or you can keep your controllers and get up to double the IO from a single drive in a RAID array.
  • Pixels303 - Friday, December 29, 2017 - link

    You need to remember, eight heads reading one sector read eight times the information in one movement, whereas four heads moving read four times the information in one movement. While this may decrease latency, the read speeds will decrease by the multiple of actuator splits. This technique of splitting actuators is nothing more than a gimmick. The only way to decrease latency is by decreasing the mass of the arms or increasing the rotational speed of the platters. Ever think of using graphene for the coils and carbon nanotubes for the arms? Should be vastly lighter then. Hard drive manufacturers already have technology that could improve products, but intentionally do not use it to safeguard long-term business strategies. If you knew you could sell sour milk for the same price as fresh milk, would you bother to try getting milk out faster? Seagate is lazy.
  • cjl - Friday, December 29, 2017 - link

    Hard drives do not read all heads in parallel - they read one head at a time. Therefore, your argument is incorrect - read speeds are independent of the number of platters/heads in a drive, they only depend on linear data density (kfci) and RPM, and this drive should (in theory) be able to stream data at twice the rate of a single-actuator drive that is otherwise identical.
  • mode_13h - Friday, December 29, 2017 - link

    Source? Why on earth wouldn't they stripe clusters across platters, or at least put consecutive clusters on adjacent platters? That's just dumb.
  • cjl - Friday, December 29, 2017 - link

    Because the tracks are so small that you can't guarantee that all the heads are on track at once. Slight thermal variations up and down the head stack will skew it by several tracks, vibrations will not affect all the heads equally, and tracks on adjacent disks won't even be perfectly concentric (though they're pretty close). As a result, you can't read from multiple heads simultaneously because you can't keep both of them perfectly centered on each track. Multiple independent actuators solves this, which is why this drive should be able to double sequential performance.

    Now, they do put consecutive data on adjacent platters, so a sequential read will actually (on the disk) read some data from one surface, then jump to the next surface, etc, so the overall drive goes smoothly from outside to inside as you go from the start to the end of the data. You just have to read each surface consecutively rather than concurrently.
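
    A highly simplified sketch of that access order (real drives use zoned recording and serpentine layouts, so the constant sectors-per-track and strict cylinder order below are only for illustration):

    ```python
    # Consecutive blocks fill a track, then move to the next surface in the same
    # cylinder, and only then step the cylinder inward - one head at a time.
    SURFACES = 4
    SECTORS_PER_TRACK = 8   # assumed constant; real drives vary this by zone

    def lba_to_chs(lba):
        cylinder, rem = divmod(lba, SURFACES * SECTORS_PER_TRACK)
        surface, sector = divmod(rem, SECTORS_PER_TRACK)
        return cylinder, surface, sector

    for lba in range(0, 40, 8):
        print(lba, lba_to_chs(lba))
    # 0  (0, 0, 0)   start of surface 0, outer cylinder
    # 8  (0, 1, 0)   jump to surface 1, same cylinder
    # 16 (0, 2, 0)
    # 24 (0, 3, 0)
    # 32 (1, 0, 0)   only now step inward to the next cylinder
    ```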
  • mode_13h - Friday, December 29, 2017 - link

    I'm pretty sure I read sometime in the past 10 years that the tracks are actually created by the heads (during a low-level format?). If true, that should avoid some of the variation you mention. That said, I get that there might be some variation in the arm geometry over time and as the drive heats up, etc. It's just unfortunate they can't exploit that additional parallelism.

    I read about WDC drives with dual-stage actuators, in about 2010. Would be interesting if the second stage was per-platter to enable concurrent reading/writing.

    https://techreport.com/review/17812/western-digita...
    https://www.hgst.com/sites/default/files/resources...
  • cjl - Friday, December 29, 2017 - link

    I believe WD/Hitachi is doing self servo write, but Seagate is still using MDWs (multi disc writers) to write their servo pattern before the disc stack is assembled. Regardless though, even if the stack is aligned in one configuration, it will tilts
  • cjl - Friday, December 29, 2017 - link

    Unfortunately, I can't edit after fat fingering my last post, but to continue...

    I believe WD/Hitachi is doing self servo write, but Seagate is still using MDWs (multi disc writers) to write their servo pattern before the disc stack is assembled. Regardless though, even if the stack is aligned in one configuration, it will still go out of alignment due to thermal expansion, different impacts from vibration, etc. Dual stage actuators don't fix this either - they just allow higher bandwidth and smaller track pitch. As far as I know, every hard drive in production today uses just one read/write head at a time when transferring data.
  • mode_13h - Friday, December 29, 2017 - link

    Since each actuator is driving 1/N the number of arms & heads, why wouldn't the mass decrease to nearly 1/N?

    Also, the time to read a cluster only matters if reads/writes are large. Since they explicitly care about IOPS, presumably the accesses are mostly small, in which case it's dominated by seek time, which benefits from not only parallelizing the accesses but also decreasing the mass of the arms & heads.
  • cjl - Friday, December 29, 2017 - link

    I would tend to expect seek time to be pretty much the same for this drive as for a single actuator, since the lower inertia of the actuator will be offset by the lower torque constant of the thinner voice coil motor. This will be great for higher-queue IOPS though, since you can be servicing two requests simultaneously.
  • prime2515103 - Friday, December 29, 2017 - link

    They stole my idea! I thought of this back in the '90's. Geez...
  • mode_13h - Friday, December 29, 2017 - link

    99% perspiration, dude. The idea is usually the easy part. Probably tens of thousands beat you to it.
  • shabby - Friday, December 29, 2017 - link

    If the platters would spin independently then we'd have a winner but since we don't then the performance numbers will always say "up to twice as fast". And in some instances just as slow as a regular hd.
  • mode_13h - Friday, December 29, 2017 - link

    What on earth would that solve? HDDs are normally constant velocity, you know. Significant variation of rotational speed would burn a tremendous amount of power.
  • FunBunny2 - Friday, December 29, 2017 - link

    "HDDs are normally constant velocity"

    angular velocity, of course. which is why, some decades ago, hard formatted PC drives started to have variable sector boundaries to take advantage of greater density on the edge-ish tracks.
  • cjl - Friday, December 29, 2017 - link

    What would be solved by independent platter spin? No matter how many heads are active at once, you still want minimal variation in rotational speed.
  • Bullwinkle-J-Moose - Friday, December 29, 2017 - link

    "Why do we care about how fast an SSD copies data on itself? Is this really a bottleneck for any common use cases?"
    -------------------------------------------------------------------------------------------------------
    Simultaneous Read/Write speed to and from an SSD can be easily tested by this method of Copy and Paste

    or to put it another way.....
    Why then would we care if this new Seagate HDD attempts to do the same thing at much slower speeds than an SSD ?
  • FunBunny2 - Friday, December 29, 2017 - link

    well, the engineering does work:
    "Read/write heads were fixed in position over each track. That eliminated seek time and contributed substantially to system performance. Data could be written at rates up to 3 million bytes per second."

    here: https://www-03.ibm.com/ibm/history/exhibits/storag...

    not bad performance for 1970
  • Beaver M. - Friday, December 29, 2017 - link

    Before I buy an HDD like that, I will wait quite some time until it's known how reliable they are. Added complexity always nibbles on reliability, especially when it's something like this.
  • mode_13h - Friday, December 29, 2017 - link

    Sure, but are you running a database server or something? What kind of high-IOPS workload do you have that would justify the additional cost, power, and (as you point out) potentially lower reliability?

    They said these are targeted at datacenter customers, so I wouldn't be looking to jump on them for home use.
  • Beaver M. - Saturday, December 30, 2017 - link

    It was a general statement.
  • Zok - Saturday, December 30, 2017 - link

    While not a new idea, as this was done by Conner back in the 90s with their Chinook drive, the ratio of capacity to IOPS has become so skewed (disk rebuild time in RAID, for example) that dual actuators may finally show their worth. BTW, Conner was acquired by none other than Seagate in 1996: https://en.wikipedia.org/wiki/Conner_Peripherals
  • ddriver - Saturday, December 30, 2017 - link

    Not a few months ago I wrote about multiple actuators being one of the many possible directions to improve HDD performance.

    Of course, being mediocre, the industry is taking baby steps, barely incremental, as it is only two independent actuators.

    Meh...

    They have yet to muster the courage to go for completely independent heads and multiple heads per platter.

    Meanwhile my concepts went further, and now I've come up with a head that doesn't move, and can read and write at multiple locations on the same platter.

    Rough simulations put a 4000 RPM drive with 4 such heads per platter at enough sequential throughput to fully saturate a PCIe 4 x8 link. Random performance is on par with the best SATA SSDs currently have to offer. Latency estimates average out at about 0.2 msec; not having to move actuators really helps out there. Additional extra features include a flexible, user-configurable LBA layout with options to maximize performance or redundancy. Power usage estimates for a 100 TB model are around 80 watts, which might sound like a lot compared to a regular HDD, but is actually very good considering the performance. And yes, it is 5.25 inch, it doesn't require noble gases or fancy writing assistance like HAMR and similar; it achieves that performance through massive low-level parallelism, and since it doesn't push the limits of what outdated tech is capable of, reliability should be much better than contemporary high-end HDDs.
  • mode_13h - Saturday, December 30, 2017 - link

    Wow, if only these babies had talked to you...

    I'll bet the whole tech world could advance at like 10x if you'd just come out of your troll cave into the light of day and drop some of your golden knowledge nuggets on all these unenlightened industries. They'd probably add you to Mt. Rushmore and make your birthday a holiday on par with Presidents' and MLK day.
  • ddriver - Sunday, December 31, 2017 - link

    That is a very safe bet. A lot is possible if you put progress and the benefit of the consumer first.

    Alas, the industry is all about milking the maximum amount of profit with the minimum possible amount of effort at the expense of the consumer.
  • mode_13h - Sunday, December 31, 2017 - link

    Seriously dude, if you're so smart, why aren't you out there fighting bad guys, like Tony Stark? I'm sure you totally have an Iron Man suit design sitting around somewhere. Just get off your butt and build the darn thing!
  • cjl - Saturday, December 30, 2017 - link

    Even if your heads are spaced evenly around the platter, and can read any track instantaneously, your latency is still going to average out at just under 2ms, with worst case approaching 4ms. This is because there's no guarantee the data you want is at the rotational position where the head is located. In addition, you'd need hundreds of thousands of read/write heads on each "head", since there are around 400-450k tracks on a modern hard drive. This is absurdly, ludicrously cost prohibitive. I suspect you know almost nothing about the design or tradeoffs involved in modern hard drives, and thus you think your ridiculous concept is viable, but it's actually completely impossible to implement.
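
    The rotational-latency math behind those figures, plugging in the 4000 RPM and 4 heads per platter from the concept above:

    ```python
    # With fixed heads there is no seek, but you still wait for the wanted sector
    # to rotate under one of the evenly spaced head positions.
    def rotational_latency_ms(rpm, access_points):
        rev_ms = 60000.0 / rpm
        worst = rev_ms / access_points
        return worst / 2.0, worst   # (average, worst case)

    print(rotational_latency_ms(4000, 4))   # (~1.9 ms average, ~3.75 ms worst)
    ```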
  • mode_13h - Sunday, December 31, 2017 - link

    Yeah, the hubris is strong with this one. Either that or it's an elaborate troll.

    Literally tens or hundreds of millions of hours have been poured into HDD technology by physicists and engineers intimately familiar with all the myriad issues, many of which you only learn about through experience. In the ~50 years that HDDs have been with us, I'm pretty sure multiple people have explored pretty much every imaginable idea that even kind of makes sense.
  • cjl - Sunday, December 31, 2017 - link

    To be entirely fair, sometimes good ideas do sit idle in an office somewhere for a long time even though they would be beneficial. For example, there were some working helium prototypes a very long time before anyone actually sold a commercial helium drive. However, that's usually a case where it's not unambiguously an advancement - helium significantly increases manufacturing complexity and unit cost, but gives substantially lower power and better track following capability. However, at the time the first prototypes were developed, the turbulence on the head stack wasn't really the limiting factor on density anyways, and the market cared more about drive cost than power consumption, so not much effort was put into it. Similarly, multi-actuator concepts have existed for a while (at least on paper), but it's so much additional cost that it wasn't really worth it until we started getting these 8 disk monstrosities that would take forever for a raid rebuild, and that have significantly worse random IOPS per TB the larger they get.

    Yes, the industry is focused on profit, and occasionally, good ideas slip by because of it. However, if any drive manufacturer could, for almost any price, make a 100TB, 0.2ms latency, hundreds of thousands of IOPS, multi-gigabit-per-second drive, they'd jump all over that in a heartbeat. They'd have instant dominance of the entire storage industry, and put pretty much all HDD and flash companies out of business right there. It would be a CEO's dream.
  • mode_13h - Sunday, December 31, 2017 - link

    Many good ideas sit idle because it's not technically feasible or economically profitable to do, at the time. But the advancement of tech in one area has a way of opening doors in others.

    In other cases, such as your helium example, an idea solves a problem that's not yet a real pain point.
  • cjl - Tuesday, January 2, 2018 - link

    Agreed. I will be very interested to see what the HDD tech brings over the next few years - both HAMR and multi-actuator have been on roadmaps and concepts for a long time, but maybe this time they'll actually be released, and maybe we'll see another period of exponential capacity growth like we saw when PMR came out. As you said, there are a lot of smart engineers and scientists working on it (and as you may have guessed, I used to be one of them, though I've been out of that industry for a bit now), so it should be very interesting to see where things go.
  • mode_13h - Tuesday, January 2, 2018 - link

    Thanks for sharing your knowledge & expertise with us.

    I'm excited by MAMR. I have more faith in it than HAMR, but I'll keep an open mind and wait for some hard data.
  • cjl - Tuesday, January 2, 2018 - link

    MAMR is very interesting. I was at Seagate rather than WD, and I haven't been in the industry for a while anyways, so I know more about HAMR than MAMR, but I'm definitely following the developments in both with some interest.
  • ddriver - Sunday, December 31, 2017 - link

    Not much of an outside the box thinker eh? Incapable of conceiving something you haven't already seen. Assuming the absurd necessity for a head for every track. It must really suck being you.

    My concept employs a head for every group of tracks, currently 1024, but the number can vary. Each head is a motionless actuator on its own, it consists of focusing coils and data access coils. It can instantaneously alter the shape of its magnetic field to concentrate it on a particular track in the group.

    The head array itself is a bar that incorporates about 200 individual heads per inch. That bar is only 5 mm wide, so you can actually have multiple head arrays per platter surface, as many as you can fit. This allows the design to scale significantly higher than the already high estimates of my baseline design, which has only 2 heads per surface.

    Granted, the actual latency cannot go below 0.1 msec since it is after all still a spinning disk, but the addition of head arrays scales up throughput and IOPS almost linearly. The design can easily scale well beyond what is necessary: terabytes of throughput per second and millions of IOPS.
  • cjl - Sunday, December 31, 2017 - link

    I suspect you don't know just how much effort has gone into making the write magnetic field as focused as it is - I'd be very surprised if you could get tracks anywhere close to as narrow as is common today with your "concept". This is even shown by the fact that your undemonstrated "concept" has 200 heads per inch at 1024 tracks per head, when modern hard drives are running approximately double that density (~400ktpi). That having been said, I don't think you could even achieve 200ktpi with your focusing coil concept, since you'd get too much adjacent track interference every time you wanted to write.

    Also, your latency is still ignoring rotational latency. If your baseline design has 2 head arrays per surface, that actually makes my last estimate generous by a factor of 2 - you're looking at typical access times of ~4ms, and worst case around 8ms. This is assuming zero latency on the head array, and comes purely from the rotational latency of a 4000RPM (your spec) disk with 2 read locations spaced 180 degrees apart. To achieve your 0.1ms figure, you'll need around a hundred head arrays spaced equally around the disk at 4000RPM, or alternatively, you'll need one head array and a 300krpm disk.
  • mode_13h - Sunday, December 31, 2017 - link

    You mistakenly assume he cares about the practical details of actually implementing it. In fact, he just needs to develop the idea far enough to feed his ego.
  • lightningz71 - Sunday, December 31, 2017 - link

    I've wondered about this very concept for years. I think that we can look for the peak of this technology to be effectively:
    A 3.5" hard drive external form factor.
    A 2.5" hard drive platter assembly spinning at 15,000 RPM
    4 pivots arrayed in a square around the platters
    2 independent actuators per pivot.

    That would be about the pinnacle of what you're going to achieve in throughput and IOPs for a mechanical hard drive. To handle all of that, you're going to need a significant buffer on the drive itself, something in the order of 512MB. You're also going to likely need multiple SATA interfaces as the interface will begin to restrict the number of ops that you can get to the drive. In the enterprise space, SAS-4 could handle all of this, but SATA-3 would likely hit some severe limitations when you're pushing this configuration hard.

    In the 2.5" external form factor, you might be able to work in a pair of pivots and 4 total actuators. Might be nice for a laptop that needs both high capacity and high performance. Maybe good for high density servers too.
  • mode_13h - Sunday, December 31, 2017 - link

    So, you're saying like 4 heads for each platter? I'm pretty sure I've seen old designs that had two. Perhaps when HDDs become dense enough, it'll be worthwhile to increase the number of heads.
  • cjl - Tuesday, January 2, 2018 - link

    I suspect we won't ever see 15krpm come back - you sacrifice too much in density capability, and the costs are just too high. We may see 10krpm continue to survive for quite a while, and the 2.5" platter in a 3.5" enclosure is pretty much how existing high-RPM HDD designs already work, but I tend to think that the majority of the HDD industry will stick with 7200RPM for now, since that's where the price/capacity/speed tradeoff seems to work best. This is especially true if they go to multiple actuators spaced around the disk, since that cuts rotational latency in half (giving a 7200RPM drive similar rotational latency to a single actuator 15k).
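
    A quick check of that closing point (simple rotational-latency arithmetic; note the announced Seagate design stacks both actuators on a single pivot, so this applies to the hypothetical spaced-around-the-disk layout):

    ```python
    # Average rotational latency: half the time for the wanted sector to reach
    # the nearest of the evenly spaced access points.
    def avg_rotational_latency_ms(rpm, access_points=1):
        return 60000.0 / rpm / access_points / 2.0

    print(avg_rotational_latency_ms(7200, 1))   # ~4.17 ms
    print(avg_rotational_latency_ms(7200, 2))   # ~2.08 ms
    print(avg_rotational_latency_ms(15000, 1))  # ~2.00 ms, i.e. roughly the same
    ```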
