50 Comments
Brian Flores - Wednesday, August 9, 2017 - link
Drool...
ddriver - Wednesday, August 9, 2017 - link
Intel may not have heard of that, but there are other SSD vendors out there, and they are already pushing 16 TB and up in 2.5" form factor.
thesandbenders - Wednesday, August 9, 2017 - link
This form factor isn't just about capacity; it's also about managing the heat generated by 16TB+ of flash and (more importantly) a controller that can handle NVMe speeds, all in a small form factor. One slide that AnandTech left out compared the thermal load of a 4TB P4500 in the 2.5" 15mm form factor vs. the ruler form factor. As cdillon pointed out below, the ruler also has a serviceability advantage because you can use the full depth of the chassis without having to pull it out of the rack like you'd have to with multiple ranks of 2.5" drives.
HStewart - Thursday, August 10, 2017 - link
I would say that the Intel Ruler solution will be cheaper, and 256TB of 2.5" drives will not fit in a 1U rack. At $5,000 to $7,000 per 16TB 2.5" drive, it would cost between $80,000 and $112,000.
That is assuming you can even fit 16 drives in a 1U rack - I saw one with 10 drives, but I'm not sure it was 1U.
The heat from that many 2.5" drives would also be a factor.
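A quick sanity check of that estimate (a sketch; the per-drive prices are just the range quoted above, not official pricing):

```python
# Rough cost of filling a 1U box with 16x 16TB 2.5" SSDs,
# using the per-drive price range quoted above (assumed, not official pricing).
drives = 16
price_low, price_high = 5_000, 7_000  # USD per 16TB drive

print(f"Capacity: {drives * 16} TB")                                # 256 TB
print(f"Cost: ${drives * price_low:,} - ${drives * price_high:,}")  # $80,000 - $112,000
```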
wolrah - Sunday, August 13, 2017 - link
"That is assuming you can even fit 16 drives in 1U rack - I saw one with 10 drivers but not sure it is 1U."Supermicro offers a system with 20x 2.5" bays intended for NVMe SSDs, though they won't sell that chassis standalone, just as a prebuilt. They do also have a 10 bay unit intended for DIY.
yomamafor1 - Wednesday, August 16, 2017 - link
Yeah, that sounds like a massive fire hazard.
Deicidium369 - Tuesday, June 23, 2020 - link
The ruler format is an Intel design, and they have 8TB U.2 2.5" drives. Samsung has 30TB 2.5" SSDs - so capacity isn't everything.
Samus - Wednesday, August 9, 2017 - link
Seriously, this article should be on pornhub. I know what I'm looking at later tonight...
SharpEars - Thursday, August 10, 2017 - link
+25!
DanNeely - Wednesday, August 9, 2017 - link
I don't think that rack density comparison is fair. The ruler drives go most of the way back, while the 2.5" drive chassis only has a single row at the front. A high-density storage server would have drives most of the way back.
cdillon - Wednesday, August 9, 2017 - link
It is fair if you consider online serviceability and not just density. Stacking drives in the Z-dimension (depth-wise) in a high-density storage server leads to lower serviceability since you need to -- at the very least -- pull the entire server or a multi-drive sled forward out of the rack to access the drives that are sitting behind the front row.
If you chose not to install rear cable management on the server that allows you to pull the server forward while it is live to perform this potentially delicate internal operation, you would need to shut the server down and disconnect it first before pulling it out to replace a drive.
This ruler format will make it easier to put all of these high-capacity drives at the front of the server where they can be easily accessed and hot-plugged.
ddriver - Wednesday, August 9, 2017 - link
Not necessarily an entire server - you can have multiple drawers per server. So you pull it out, remove the failed SSD, insert another, then put it back in. In and out in 30 seconds. Only 1/4 of the storage array is inaccessible, and only for a very short duration. Depending on the config, it is actually possible to avoid any offline time altogether.
Intel assumes that 1D stacking is the only possible solution, but you can easily do 2D stacking and cram that chassis full of SSDs over its entire depth, easily beating the "ruler" capacity even with modest-size 4 TB SSDs. That ruler is pretty big, and if the capacity for that size is only 8TB, then the actual storage density isn't all that great.
What cable management? SSDs have been plug-in slot compatible since day 1.
The ruler format makes it easier for Intel to rake in more profits.
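For reference, a rough capacity-per-1U comparison using the numbers in this thread (a sketch; the drawer layout is the hypothetical above, and the 8TB/32TB ruler capacities come from the article):

```python
# Rough per-1U capacity comparison (assumptions from this thread, not measurements):
# - Ruler: 32 slots per 1U, at 8 TB today or 32 TB "soon" per the article.
# - Hypothetical 2U chassis: 4 drawers x 32 x 2.5" SSDs of 4 TB each.
ruler_now  = 32 * 8       # 256 TB per 1U
ruler_soon = 32 * 32      # 1024 TB (1 PB) per 1U
drawers_2u = 4 * 32 * 4   # 512 TB per 2U -> 256 TB per 1U

print(ruler_now, ruler_soon, drawers_2u // 2)
```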
Adesu - Wednesday, August 9, 2017 - link
The article says the capacity will be up to 32TB "soon" - 1PB (32TB x 32 Rulers) in a 1U server? That's pretty impressive.
Deicidium369 - Tuesday, June 23, 2020 - link
32TB Rulers have been available for a while now - Intel has the DC series at 32TB. They are special order, but are available. The 16TB are more common...
CheerfulMike - Wednesday, August 9, 2017 - link
He's talking about cable management in the back of a racked server on rails. One of my biggest pet peeves used to be when well-meaning datacenter guys rack a server and then Velcro the cables together all nice and neat but close to taut, which makes it impossible to slide the server forward on its rails without disconnecting everything. This introduces the possibility of an error when reconnecting things and increases service time. Most servers have a kit available that encases the cables in a folding arm that guides them when the server is moved forward and back on rails. Unfortunately, they also can block airflow, so they're not terribly popular.
Samus - Wednesday, August 9, 2017 - link
Totally, that's just a big no-no. The ideal situation is to have a racked server serviceable without removal. Other than motherboards, most are. Back-plane failures are incredibly rare, but motherboards do fail - often it's a network controller. ProLiants now have a door in the center of the chassis to allow only partially removing the server to upgrade the memory. PSUs and drives are all serviced from the rear and front. High-end units have hot-swappable memory as well.
I think the immediate takeaway here is finally official hot-swap support for SSDs without needing SAS. The density and performance benefits will be more important in the future.
petteyg359 - Wednesday, August 16, 2017 - link
2.5" SSDs have "official hot swap support" already. I do it all the time in my Corsair 900D with an ASRock "gaming" motherboard. If you think servers can't do it better, you've been sleeping under a rock for a few decades.petteyg359 - Wednesday, August 16, 2017 - link
2.5" plain-old SATA*Deicidium369 - Tuesday, June 23, 2020 - link
Watched a couple of cabling vids - and they did that - apparently the cable god didn't know about servicing.
Samus - Wednesday, August 9, 2017 - link
Have you ever been in a data center? If they had to pull a server out of the rack, or outfit servers with drawers, every time they had to service a failed drive, maintenance costs would go through the roof. I toured an AWS facility in Chicago and they mentioned they replace around 90 drives A DAY while performing non-stop ongoing expansion. Basically, people are always walking the aisles, which are narrow, and if you had to pull a server out onto a cart or drawer to perform this operation, it would turn a 30-second procedure into a 30-minute procedure.
This is a desperately needed form factor, especially given the utter lack of SAS SSDs, as virtually no SATA drives officially support hot swapping.
ddriver - Thursday, August 10, 2017 - link
Yeah yeah, if it comes from Intel it is intrinsically much needed and utterly awesome, even if it's a literal turd.
Zero-downtime servicing is perfectly possible from an engineering point of view. You have a 2U chassis with 4 drawers, each of them fitting 32 2.5" SSDs, its own motherboard, and status indicators on the front. Some extra cable length for power and network allows you to pull a drawer open while it is still operational; from the right side you can pull out or plug in drives, and from the left you can service or even replace the motherboard - something you actually can't do in Intel's ruler form factor. Obviously, you will have to temporarily shut down the system to replace the motherboard, but that only takes out 1/4 of the chassis and takes no more than a minute, while replacing individual drives or RAM sticks takes seconds.
In contrast, the ruler only allows for easy replacement of drives; if you want to replace the motherboard, you will have to pull out the whole thing and shut down the whole thing, and your downtime will be significantly longer.
The reason I went for a proprietary server chassis is that the standard stuff is garbage, which is the source of all the servicing and capacity issues. There is absolutely nothing wrong with the SSD form factor; it is the server form factor that is problematic.
In this context, introducing a new SSD form factor that is incompatible with existing infrastructure, and which will in all likelihood also come at a premium, is just stupid - especially when you can get even better serviceability, value, compatibility, and efficiency with an improved server form factor.
Samus - Thursday, August 10, 2017 - link
Per usual, ddriver the internet forum troll knows more than industry pioneer Intel...
SkiBum1207 - Friday, August 11, 2017 - link
If your service encounters downtime when you shut a server down, you have a seriously poor architecture. We run ~3000 servers powering our main stack - we constantly lose machines due to a myriad of issues and it literally doesn't matter. Mesos handles re-routing of tasks, and new instances are created to replace the lost capacity.
At scale, if the actual backplane does have issues, the DC provider simply replaces it with a new unit - the time it takes to diagnose, repair, and re-rack the unit is a complete waste of money.
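A minimal sketch of what "new instances are created to replace the lost capacity" can look like in practice, assuming a Marathon scheduler running on top of Mesos (the URL and app definition here are hypothetical placeholders, not details from the comment above):

```python
# Declare a desired instance count with a Marathon-style scheduler on Mesos;
# if a host dies, the scheduler relaunches the lost instances elsewhere.
import requests

app = {
    "id": "/web/frontend",                 # hypothetical app id
    "cmd": "python3 -m http.server 8080",  # placeholder workload
    "cpus": 0.5,
    "mem": 256,
    "instances": 50,                       # scheduler keeps 50 copies running
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
```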
petteyg359 - Wednesday, August 16, 2017 - link
To the contrary, nearly all (every single one I've seen, so "nearly" may be incorrect) SATA drives support hot swapping. It's part of the damn protocol. There are *host controllers* that don't support it, but finding those on modern hardware is challenging.
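For anyone who wants to exercise this on Linux, a minimal sketch of the software side of a SATA hot-swap via sysfs (the device and host names are placeholders; the controller, e.g. AHCI, must have hot-plug enabled, and this needs root):

```python
# Software-initiated SATA hot-swap on Linux via sysfs (sketch; names are placeholders).
from pathlib import Path

def remove_disk(dev: str) -> None:
    # Tell the kernel to detach the device before physically pulling the drive.
    Path(f"/sys/block/{dev}/device/delete").write_text("1")

def rescan_host(host: str) -> None:
    # Ask the SATA/SCSI host to rescan after plugging a replacement in.
    Path(f"/sys/class/scsi_host/{host}/scan").write_text("- - -")

remove_disk("sdb")    # before unplugging the old drive
rescan_host("host1")  # after inserting the new one
```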
Valantar - Wednesday, August 9, 2017 - link
If this form factor is designed to cool devices up to 50W, and you can stick 32 of them in a 1U server, that sounds like a dream come true for GPU accelerators. Good luck fitting eight 250W GPUs (which is what you'd need to exceed the performance of that, assuming perfect scaling) in a 1U server, after all.
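The power budget behind that comparison, as a sketch (the 50W-per-slot and 250W-per-GPU figures are the ones quoted above; "performance scales with power" is an assumption for illustration only):

```python
# Rough power-budget comparison, assuming performance scales linearly with power.
rulers, ruler_watts = 32, 50
gpu_watts = 250

total_accel_watts = rulers * ruler_watts         # 1600 W of accelerators in 1U
equivalent_gpus = total_accel_watts / gpu_watts  # ~6.4 x 250 W GPUs' worth of power

print(f"{total_accel_watts} W total, ~{equivalent_gpus:.1f} GPUs' worth of power")
```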
Deicidium369 - Tuesday, June 23, 2020 - link
Haven't seen that form factor used for GPUs - but Intel did release some Nervana accelerators that use the M.2 type connector...
mode_13h - Wednesday, August 9, 2017 - link
I'm sad to see the gap between HEDT and server/datacenter widening into a chasm.
Also, I'm not clear on how these are meant to dissipate 50 W. Will such high-powered devices have spacing requirements so they can cool through their case? Or would you pack them in like sardines, and let them bake as if in an oven?
mode_13h - Wednesday, August 9, 2017 - link
Just to let you know where I'm coming from, I have an LGA2011 workstation board with a Xeon and a DC P3500 that I got in a liquidation sale. I love being able to run old server HW for cheap, and in a fairly normal desktop case.
I'm worried that ATX boards with LGA3647 sockets are going to be quite rare, and using these "ruler" drives in any kind of desktop form factor seems ungainly and impractical, at best.
cekim - Wednesday, August 9, 2017 - link
12.5mm pitch, 9.5mm width - I'm assuming this indicates an air gap on both sides to allow for cooling.
As for (E)ATX and EEB MBs, I hear ya - ASRock has some up on their page, but they are certainly not looking like they will be as easy to get as C612 boards were/are.
I'm not worried about storage tech for consumers, but the CPU divide is definitely widening. Hopefully AMD keeps Intel honest and we start to see dual TR boards that force Intel to enable ECC and do dual i9 systems.
Software has to get better though; it is still lagging behind, and it just got roughly 2x the cores consumer software could have ever imagined when it had only just started digesting more than 4.
Deicidium369 - Tuesday, June 23, 2020 - link
They are not designed to be used in anything other than the 1U case that is designed for them.
surt - Wednesday, August 9, 2017 - link
According to the article, they specify a pitch between units, which would seem to allow for between-blade (ruler) air cooling.
ddriver - Thursday, August 10, 2017 - link
"Or would you pack them in like sardines, and let them bake as if in an oven?"That's pretty much the plan. This solution WILL NOT COOL WELL. It will however allow for a substantial amount of heat to sink in, and when it saturates, drives will begin to throttle, killing your performance.
Additionally, having the drives bake and sweat in heat throughout their entire lifetime will be a real help when designing drives to fail as soon as possible after their warranty period expires. If corporations like intel hate something, that is products which continue being operational for years after their warranty runs out. Rather than people lining up to buy new hardware and generating as much profit as possible.
Like I already said, it is a decorated money grab. It is possible to have solutions superior to this in every way, while still retaining compatibility with the perfectly adequate 2.5" form factor.
flgt - Thursday, August 10, 2017 - link
The individual drive seems better thermally, since all dies have a direct thermal path to the outer cover and the cover provides a lot of surface area for cooling air to flow over. My worry is whether this is enough to overcome the total heat density that can now be achieved. If the airflow is front to back, the victims may be the circuitry in the back and not the drives, since it will see the elevated air temperatures created by the drives. I have to believe someone looked at all that, though.
HStewart - Wednesday, August 9, 2017 - link
Well, if you combine this with the direction of Intel's Compute Card, you see something interesting: modular upgrading of components and expansion. Keep in mind the Compute Cards are not high-end components, but this might point to future plans for Intel - many Compute Card-based servers in a small amount of space.
My guess is this is going to be quite expensive - but there are many that need this amount of storage. Then again, at my first job we had a machine with 24M of memory and 1G of storage, and that was pretty much unheard of in the PC world in 1992.
ddriver - Thursday, August 10, 2017 - link
Let's hear it for convenience.
The compute card blows a 2-cent capacitor.
You buy another $200 compute card.
$199.98 wasted.
Convenience!
HStewart - Thursday, August 10, 2017 - link
Well, this is no different from today's ultra notebooks or tablets, like Apple and Android devices.
What is the chance of this happening? The only time I had a blown computer was an old HP with a Core 2 or earlier - blown when sticking a USB stick into it. But I have a 10-year-old dual Xeon 5160 that still runs.
It also depends on your usage of the equipment. For example, with this "Ruler" SSD, if one had a hardware failure it would be simpler to replace the stick. The same theory would apply if you had a redundant multi-module server and one of the cards had a failure.
I'm not sure Intel has this in mind - they are thinking smart TVs and such - replace the card with a newer one.
JoeyJoJo123 - Thursday, August 10, 2017 - link
Looks kind of dumb. Unless this form factor is picked up by everyone, this is just going to go the way of Thunderbolt 2 or FireWire or any other custom port design, which automatically invalidates most of the options you have on the market.
If anything, history shows that technology has staying power if it launches already compatible with most designs on the market. USB Type-A's rectangular port design will continue to exist for yet another decade, at least. I don't think anyone with more sense than money in their head would go out and buy a proprietary 1U server that supports only proprietary ruler-shaped SSDs available from the same supplier as the server rack.
While I welcome more efficient and better-designed electronics, I'm not ignorant of the fact that most products with small compatibility ecosystems end up being passed over for more robust and serviceable server designs.
ZeDestructor - Thursday, August 10, 2017 - link
Big players (Google et al.) are already running custom form factors with custom controllers. This (and Samsung's NGSFF initiative) is just playing catch-up so the smaller guys can get some of that NAND and IOPS density that the major players are reaping.
Deicidium369 - Tuesday, June 23, 2020 - link
The ruler format isn't designed for you, desktop user. It is designed for data centers that need 1PB per 1U of height.
name99 - Thursday, August 10, 2017 - link
"Over time Intel also plans to introduce 3D XPoint-based Optane SSDs in the ruler form-factor."Is this AnandTech's interpolation or did Intel actually say this?
Because if Intel said it, WTF??? They seem incapable of controlling themselves when it comes to Optane. WHAT about Optane makes it appropriate for the "TB in a rack" mass storage market? It costs what, 4x already expensive enterprise flash. It doesn't seem to have any sort of power advantage. And shipping it as a drive rather than in DIMM form substantially reduces the primary advantage it DOES have over flash, namely byte-addressability.
Using Optane for this task seems as stupid as using it as a (tiny) cache to speed up magnetic drives. And the fact that Intel is pushing these solutions should make you very worried about the Optane team, who seem unable to ship what they promised three years ago and so are now randomly flailing around. I'm sure they'll find one or two Wall Street firms happy to pay crazy prices for lower-latency bulk storage, but you can't build a product line like Optane on the very limited needs of a few specialty customers; you have to provide something that's actually relevant to the mass market.
And even to the extent that expensive low-latency storage is relevant to the mass market (e.g. enterprise and data warehouses), I would expect it to be sold in fairly small volumes to act as the storage for particular, segregated data, not as a blanket slap-it-down-everywhere solution.
ZeDestructor - Thursday, August 10, 2017 - link
"WHAT about Optane makes it appropriate for the "TB in a rack" mass storage market?"Big, big data processing. Think 100s of TBs of data in your working set with 10s of PBs worth of total data. Demand is obviously there, which is why they're talking.
Besides, even if there weren't, you generally want your server to have only one form factor, so if your mainstream NAND is in the ruler form factor, you may as well have your Optane caching layer as rulers too.
extide - Wednesday, August 16, 2017 - link
Intel said it (it's in one of their slides -- the slide isn't in the article, but if you view the gallery it is the last slide). I could see some people using a couple of rulers' worth of Optane and the rest as flash, then using the few Optane rulers as a cache for the rest of the flash. Filling the whole thing with Optane would be insane.
Deicidium369 - Tuesday, June 23, 2020 - link
Endurance, transfer speeds, latency... They are the #1 data-center-deployed drive.
I have over 100 Optane 2.5" drives deployed... so far for a specialty customer.
You know nothing of what you speak. Optane SSDs are designed for and used in data centers - don't mix up the 2.5" and Ruler drives with the Optane DIMMs.
twotwotwo - Thursday, August 10, 2017 - link
Huh. Samsung's "NGSFF" form factor looks more incremental--30.5mm wide PCB vs 38.6 for the whole "ruler". For comparison, 1U is ~44.5mm high, but you can't use all of that for SSD of course. Curious to see which, if either, wins. The height and depth of the "ruler" looks kind of constraining for server designers, but also potentially useful if Intel wants to build really large individual SSDs, like large early XPoint devices might be. Guess we'll see.Billy Tallis - Thursday, August 10, 2017 - link
The depth of the Ruler is constraining if you plan to fill the entire width of the server with Rulers. If you only put a bank of 16 on one half of the server, you still have plenty of room for as much motherboard area as you could need in a 2S server.
extide - Wednesday, August 16, 2017 - link
Seems like you could fill the whole width and still do 2P if you had no PCIe slots on the back and relied only on built-in controllers for Ethernet and such.
Comdrpopnfresh - Sunday, August 13, 2017 - link
Seems like an odd direction. One might think the evolution of hot-swappable NAND would take place between a central controller and the NAND itself - similar to HDDs as the storage with an IC controller. How much power can be run over those contacts? The dimensions of this thing are huge in areal space and, given the densities of current SSDs, wouldn't it be limited by heat dissipation or power consumption/requirements?
I would think the failure risk of a single module containing a controller and high-capacity storage compounds, and is worse than an array configuration of individual conventional M.2 and 3.5" SSDs.
Bonus points for the size comparison photo of it alongside an Eneloop Pro AA though. Eneloop cells are great.
Comdrpopnfresh - Sunday, August 13, 2017 - link
Correction: I meant to say 2.5"
Billy Tallis - Monday, August 14, 2017 - link
You'll probably never see hot-swap capability on the interface between the controller and NAND. Expecting the SSD to reconfigure the flash translation layer on the fly while preserving data integrity is unreasonable.
Delivering 50W or more over a Ruler connector is not unreasonable, given that M.2 does 8-12.5W at 3.3V, while the Ruler uses 12V. Heat dissipation is improved in two ways: the higher surface-area-to-volume ratio of the Ruler form factor compared to 2.5" 15mm U.2, and the right-angle connectors used on the backplane, which mean that there's no PCB obstructing airflow behind the drives.
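A quick check of the current involved, using the wattage and voltage figures above (a sketch; the 50W target is the design figure mentioned earlier in this thread): at 12V, 50W draws a current comparable to what M.2 already carries at 3.3V.

```python
# Current-draw comparison from the figures quoted above.
m2_watts, m2_volts = 12.5, 3.3         # upper end of the quoted M.2 power range
ruler_watts, ruler_volts = 50.0, 12.0

print(f"M.2:   {m2_watts / m2_volts:.1f} A")        # ~3.8 A
print(f"Ruler: {ruler_watts / ruler_volts:.1f} A")  # ~4.2 A
```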
robert kao - Thursday, October 26, 2017 - link
How much does it weigh?