37 Comments
creed3020 - Monday, July 21, 2014 - link
Thanks for this news! I'm glad to see WD come out with updated Red products. That limit on the number of NAS bays was unfounded, so I'm glad to see it extended to 8-bay units. I know plenty of 8-bay Synology NASes that run WD Reds just fine.
This new Pro line is interesting, but if the prices are only meh compared to the Re, I won't bite. My current Re drive continues to work very well in my Syno NAS.
I was hoping to see this put downward pressure on pricing, but these initial MSRPs are definitely slotting in above the 4TB price point, which I usually see bottoming out around $189.
DanNeely - Monday, July 21, 2014 - link
More drives in the enclosure means more vibration to tolerate, and the Red line has always been positioned between consumer drives (which aren't NAS rated at all) and enterprise drives rated for massive enclosures.
What I'm curious about is whether the increase to 8-bay approval is linked to new model numbers or whether it's being applied retroactively to older drives.
basroil - Monday, July 21, 2014 - link
You can find 3TB Red drives for <$150, so a 33% cost increase for marginally better vibration control isn't worth much... especially considering rubber isolation works several times better for first-mode vibration and can be had for cents.
ZeDestructor - Monday, July 21, 2014 - link
Is it just me, or do the new WD Red Pro drives look (visually as well as in specs and features) nearly identical to the WD Se line?
lunadesign - Tuesday, July 22, 2014 - link
It's not you. I thought the exact same thing.
lurker22 - Monday, July 21, 2014 - link
Whoa, that's a LOT of platters. My experience has been that more than 3 platters leads to problems with the drive...
otherwise - Monday, July 21, 2014 - link
How many platters are in the 5/6TB Red drives? I'm assuming it's not the same 800GB platters as the Red Pro -- 8 platters in a drive just doesn't seem feasible space-wise.
DanNeely - Monday, July 21, 2014 - link
The article says the 6TB Red is using five 1.2TB platters and is the first >1TB/platter drive on the market.
DanNeely - Monday, July 21, 2014 - link
Every existing 4TB drive on the market has been a 4 (or more) platter design. The maximum-capacity drives everyone's been selling for at least a half dozen years have all been 4 or 5 platter designs.
The lack of articles about high-capacity drives having higher than normal failure rates makes it clear to me that current drive designs can handle it, whatever may have been the case many years ago.
Speedy710 - Monday, July 21, 2014 - link
What is the unrecoverable read error rate on these drives? Most consumer drives are rated at one error per 10^14 bits read, which means arrays over 11.5TB will almost always have a second failure during the rebuild after replacing a failed drive.
ericloewe - Monday, July 21, 2014 - link
If you read their spec sheet, it says 10^14.
Per Hansson - Wednesday, July 23, 2014 - link
The specs for the regular Red & Red Pro differ; is it just marketing bullshit to make the numbers look better when in fact they are the same?
Non-Recoverable Read Errors per Bits Read:
<1 in 10^14 WD Red
<10 in 10^15 WD Red Pro
Just for reference, the Seagate Constellation ES.3, Hitachi Ultrastar He6 & 7K4000 are all rated for 1 in 10^15.
Guspaz - Monday, July 21, 2014 - link
The solution to that is raid6 or raidz2. The chances of having a read error during a rebuild aren't that remote (as you point out, 10^14 bits implies a read error is likely in larger arrays, although 10^14 bits is 12.5TB rather than 11.5). The chances of having a read error during a rebuild in the exact same spot on two drives, however, are astronomically small.
There definitely is a capacity sacrifice that has to be made to do that, though. My current setup is a single ZFS pool spread over two arrays that are both using raidz2: 7x4TB and 8x2TB, for a total of 44TB, of which I lose 2x4+2x2=12TB, giving me 32TB of usable storage space in the end. So I'm losing a bit over a quarter of my space, which isn't too bad.
If (when) I need more storage space, I can just swap those 2TB drives for 6TB ones, bumping me up to 76TB total, 56TB usable. Won't be cheap, of course, but 6TB drives should be a bunch cheaper by the time I need to do that.
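To put rough numbers on the probabilities being argued in this thread, here is a minimal back-of-the-envelope sketch (my own illustration in Python; it assumes UREs behave as independent per-bit events at the quoted 1-in-10^14 rate, which real drives only approximate, and the constant/function names are mine):

```python
import math

URE_RATE = 1e-14      # quoted consumer spec: one unrecoverable read error per 10^14 bits read
BITS_PER_TB = 8e12    # 1 TB = 10^12 bytes = 8 * 10^12 bits

def p_at_least_one_ure(tb_read):
    """P(>=1 URE) while reading tb_read terabytes, assuming independent per-bit errors."""
    bits = tb_read * BITS_PER_TB
    return -math.expm1(bits * math.log1p(-URE_RATE))

print(1 / (URE_RATE * BITS_PER_TB))   # ~12.5 TB of reads expected per URE
print(p_at_least_one_ure(6.0))        # ~0.38 chance of a URE in a full read of one 6TB drive

# Two drives both hitting a URE in the *same* 4 KiB block:
p_same_block = p_at_least_one_ure(4096 / 1e12)   # ~3.3e-10 for one drive
print(p_same_block ** 2)                         # ~1.1e-19 -- astronomically unlikely
```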
asmian - Monday, July 21, 2014 - link
Which is why anyone who cares about their data wouldn't touch drives this big with such poor error rates. You need the RE class drives with 1 in 10^15 unrecoverable read error rates, an order of magnitude better. Size isn't everything, and the error rates of consumer-class drives make their sheer size a liability. Just putting them in a RAID array isn't a panacea for their basic unreliability, unless you like to waste time nursing failed arrays.
ZeDestructor - Monday, July 21, 2014 - link
You move the error handling higher up by adding more redundancy (RAID6+/RAIDZ2+) instead. In almost all cases, it works out to be significantly cheaper than paying for RE disks.
asmian - Monday, July 21, 2014 - link
I doubt your overall cost saving once you take the respective MTBFs and warranties into account, and it's not only about cost. As the original questioner stated, it's the risk of ANOTHER disk in your array failing while you're rebuilding after a failure. The sheer size of these disks makes that statistically much more likely, since so much more data is being read to recover the array. Throwing more redundancy at the problem saves your array at the initial point of failure, but you're going to be spending more time in a slower, degraded rebuild mode, hence increasing the risk of further cheap disks dying in the process, than if you'd invested in more reliable disks in the first place. :/ It's the combination of the enormous size of the individual disks and their lower intrinsic reliability that's the issue here.
cygnus1 - Monday, July 21, 2014 - link
An unrecoverable read error is not an entire drive failure. raidz2/raid6 protects you during a rebuild from one of the cheap drives having a single-sector unrecoverable read error.
asmian - Monday, July 21, 2014 - link
That is meaningless, because you are invoking that "protection" as if it is a guarantee - it's not. You're still playing the odds against more drives failing. You might be lucky, you might not. The issue is whether that 1 in 10^14 chance is statistically likely to occur during the massive amount of reading required to rebuild arrays using such large disks as these 6TB monsters.
Newsflash - 6TB is approx 0.5 x 10^14 bits. So during a full rebuild, EVERY DISK IN THE ARRAY has roughly a 50% chance of an error if they are 6TB disks with that level of unrecoverable error rate. THAT'S why, the bigger the disk, the more I'd want one with 10x better error-rate reliability - given the size of these disks, it's a matter of weighing a ~50% error risk PER DISK during rebuild against ~5%. Extra redundancy, however clever ZFS is, can't mitigate those statistics.
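For concreteness, a minimal sketch of the comparison being drawn here (same independence assumption as the sketch above; the 50%/5% shorthand tracks the expected number of UREs per full-disk read):

```python
BITS_PER_TB = 8e12    # 1 TB = 10^12 bytes = 8 * 10^12 bits

for ure_rate in (1e-14, 1e-15):                   # consumer-class vs RE-class URE spec
    expected_ures = 6 * BITS_PER_TB * ure_rate    # expected UREs while reading one full 6TB disk
    print(f"{ure_rate:.0e}: ~{expected_ures:.2f} expected UREs per 6TB disk read")
# -> 1e-14: ~0.48 (roughly the coin flip above), 1e-15: ~0.05
```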
ZeDestructor - Monday, July 21, 2014 - link
ZFS should be able to catch said errors. The issue in traditional arrays is that file integrity verification is done so rarely that UREs can remain hidden for very long periods of time and, consequently, only show up when a rebuild from failure is attempted.
ZFS meanwhile will "scrub" files regularly on a well-setup machine, and will catch the UREs well before a rebuild happens. When it does catch a URE, it repairs that block, marks the relevant disk cluster as broken and never uses it again.
In addition, each URE is only a single bit of data in error. The odds of all drives having a URE for the exact same block at the exact same location AND the parity data for that block AND having all of those errors happen at the same time are extremely small compared to the chance of getting 1-bit errors (that magical 1-in-10^14 number).
Besides, if you really get down to it, that error rate is the same for smaller disks as well. If you're building a large array, the error rate is still just as high, but now spread over a larger number of disks. So while the odds of a URE are smaller (arguable, I'm not going there), the odds of a physical crash, motor failure, drive head failure or controller failure are higher, and, as anyone will tell you, far more likely to happen than 2 or more simultaneous UREs for the same block across disks (which is why you want more than one disk's worth of parity, so there is enough redundancy DURING the rebuild to tolerate a second URE happening on another block).
asmian - Monday, July 21, 2014 - link
Yay for ZFS if it really can work like you claim and negate all the reliability deficiencies in drives. You may be safe using these.
However, DanNeely's comment almost immediately below this is the far more likely scenario for the majority who try making non-ZFS arrays. Read that and remember there's roughly a 50% chance of a bit error with EVERY 6TB drive in a full array rebuild. That is how these drives are more likely to be used by low-knowledge users buying only for size at the lowest available $/GB or £/GB - note that these 6TB drives are NOT being marketed as ZFS-only.
ZeDestructor - Tuesday, July 22, 2014 - link
You make a fair point, and tbh, I ramble on about it in every NAS review: gimme ZFS from the factory already, damnit!
I've grown so used to using ZFS for any form of RAID (and this is just another pro-softraid argument) that I forget people actually use non-ZFS RAID at all...
ZeDestructor - Tuesday, July 22, 2014 - link
Also, RAIDZ3 (three-disk parity). Good stuff if, like me, you're thinking about huge arrays... 3x 15-disk vdevs in a Backblaze pod is my plan... when I can finally afford to get 5 disks in a single go and gradually build up from there...
tuxRoller - Tuesday, July 22, 2014 - link
ZFS isn't always the best solution for home use. I preach the gospel of SnapRAID (non-realtime snapshot parity, non-striping, scrub support, arbitrary levels of parity support, mix/match drives, easy addition and removal... Open Source!). It's great for a media server with infrequent data changes, or for use as a backup for PCs.
For high-churn access patterns, ZFS is still the best (well, for home use).
wintermute000 - Saturday, July 26, 2014 - link
Which pretty much means building your own and going ECC to handle ZFS (since we're talking about the home/enthusiast/SOHO/SMB market, not enterprise).
At this level of URE probability, with 6TB drives and consumer-class 10^14 reliability, RAID6 is not much better than RAID5. With a ~50% error rate per disk, statistically speaking, even with RAID6 you're dicing with danger during a rebuild.
ZeDestructor - Tuesday, July 29, 2014 - link
Quoting my own reply to asmian a little above:
"ZFS should be able to catch said errors. The issue in traditional arrays is that file integrity verification is done so rarely that UREs can remain hidden for very long periods of time and, consequently, only show up when a rebuild from failure is attempted.
ZFS meanwhile will "scrub" files regularly on a well-setup machine, and will catch the UREs well before a rebuild happens. When it does catch a URE, it repairs that block, marks the relevant disk cluster as broken and never uses it again.
In addition, each URE is only a single bit of data in error. The odds of all drives having a URE for the exact same block at the exact same location AND the parity data for that block AND having all of those errors happen at the same time are extremely small compared to the chance of getting 1-bit errors (that magical 1-in-10^14 number).
Besides, if you really get down to it, that error rate is the same for smaller disks as well. If you're building a large array, the error rate is still just as high, but now spread over a larger number of disks. So while the odds of a URE are smaller (arguable, I'm not going there), the odds of a physical crash, motor failure, drive head failure or controller failure are higher, and, as anyone will tell you, far more likely to happen than 2 or more simultaneous UREs for the same block across disks (which is why you want more than one disk's worth of parity, so there is enough redundancy DURING the rebuild to tolerate a second URE happening on another block)."
It boils down to one URE being only 1 bit, not a whole disk failure.
Sivar - Monday, July 21, 2014 - link
A single bit error is not the same as a drive failure.
DanNeely - Monday, July 21, 2014 - link
The problem is with the RAID controllers, mostly in that they work below the level of the FS. Instead of being able to report that sector 1098832109 was lost during the rebuild, and that consequently you need to restore the file \blah\stuff\whatever\somethingelse\whocares.ext from an alternate backup source, they report a failure to rebuild the array because the drive with the 1-bit error on it failed. And in a RAID 1/5 configuration, the presence of a second bad drive means that, as far as the RAID controller is concerned, your array is now non-recoverable because you've lost 2 drives in an array that can only tolerate a single failure.
icrf - Monday, July 21, 2014 - link
I was just about to buy a bunch of 4 TB Red drives. Is there any way to tell whether one is getting a 2.0 or 3.0 firmware drive, since it's not a user-upgradable thing?
noeldillabough - Wednesday, July 23, 2014 - link
I'd like to know this too; I've noticed everywhere has a sale on the (presumably older) Red drives right now.
icrf - Monday, July 28, 2014 - link
I asked WD support and got a pretty disappointing answer:
--------------------------------
Thank you for contacting Western Digital Customer Service and Support. My name is Daniel.
You have a very good question, I will be very happy to answer it.
You are right, the oldest drives cannot be updated to the new technology and the model is the same.
However the specs of the hard drives provided by every retailer should provide the correct data.
So, before purchasing the hard drive, check the specs to see if it has the newest technology. I just checked on Newegg and Amazon and they provide that detail. You will realize if it is an old drive or a new one.
If this does not answer your question, please let me know, I will be very happy to help you.
--------------------------------
At the time I asked, Amazon listed their version at 3.0 but the image still said 2.0. I asked them about it and the description reverted to 2.0, so trusting retailers to be accurate sounds like a bad plan.
I'm sure they're just trying to create confusion to clear the old stock. I mean, who would buy the old one when the new one is sitting right beside it for the same price? It's just a little disappointing when I'm looking to buy right now; the 6 TB drives are available and obviously updated since they're new, but the $/GB is just a little high. 3 TB drives are actually the best on that metric, but 4 TB drives aren't far off and get me a little better density.
JimmyWoodser - Friday, August 1, 2014 - link
Hi, I am about to buy my first NAS. I'm looking at the DS415play and was wondering which 4TB drive to put in. I think I want two 4TB drives in RAID 1. I was thinking WD Red, but wondered if the Red Pro is worth the premium? Thanks, Jim
S.D.Leary - Monday, July 21, 2014 - link
Anyone have an idea why they haven't updated the 2.5" drives other than firmware? They obviously can pack more on a platter, but the 2.5" Reds have been stuck in limbo for a while. At this point I would have expected them to be somewhere between two and three TB.
SDLeary
DanNeely - Monday, July 21, 2014 - link
The only time a (laptop-thickness) 2.5" drive would get that close to the max capacity of a 3.5" drive is when the latter has been held back for a long time. 2.5" platters can only hold about half the data of a 3.5" platter, and unless you go to the extra-thick drives in some storage enclosures, you're limited to 2 platters. With everyone limited to 1TB platters in 3.5" drives until today, that meant they were also limited to 500GB platters in 2.5" drives, and a 1TB maximum capacity as a result. We'll probably see 600GB/1.2TB (640GB/1.25TB?) 2.5" drives in the near future as WD upgrades more of its production line.
Dahak - Monday, July 21, 2014 - link
Well, someone is building 1.2TB 2.5-inch drives; we deal with HP servers a lot and they do have them available. Usually Seagate or Hitachi builds them:
http://h30094.www3.hp.com/product/sku/10745146
ZeDestructor - Monday, July 21, 2014 - link
That's a 10K RPM SAS drive. Those come in a 3.5" form factor, so they can put 5 platters into a single disk relatively easily. 2.5" 9mm drives, on the other hand, are where the problem lies.
Adul - Tuesday, July 22, 2014 - link
It still is a 1.2 TB 2.5" drive:
1.2TB 6G SAS 10K rpm SFF (2.5-inch) SC Dual Port ENT 3yr Warranty Hard Drive
Also, Hitachi has 2.5" 1.8 TB 10K SAS drives:
http://www.hgst.com/hard-drives/enterprise-hard-dr...
ZeDestructor - Tuesday, July 22, 2014 - link
The issue is still the number of platters: at 15mm high, the 1.8TB drive uses 4 platters, which divides nicely to 450GB per platter, about on track with what is seen in the 2.5" 9.5mm segment.
At 12mm high you can fit 3 platters, at 9.5mm high 2, at 7mm 1, and at 5mm, thanks to some very clever PCB work, 1 full double-sided platter. HGST is a bit of a trailblazer here, with 3 platters in 2.5" 9.5mm, but it's HGST, and they've more often than not been the leaders in density for a very long time.
Given the current 500GB/platter capacity, it will take a while for things to move forward, although given the previous 500:800GB ratio, we should see 750GB 2.5" platters soon-ish.
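A quick sanity check of that ratio arithmetic (illustrative only, using the figures quoted in this thread):

```python
# Previous generation: ~500 GB 2.5" platters alongside ~800 GB 3.5" platters.
ratio = 500 / 800        # ~0.625 capacity ratio, 2.5" vs 3.5" platter
print(ratio * 1200)      # ~750 GB per 2.5" platter, if the ratio holds for 1.2TB 3.5" platters
```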