22 Comments
bill.rookard - Wednesday, March 4, 2015 - link
Oh gosh, will someone PLEASE put together a semi-affordable, multi-TB drive for consumer use? Spinny disks in my NAS make me nervous (even if it is somewhat protected by RAID). I'd like to see how Samsung's V-NAND brings pricing down once the yields come up.

Just looking at this ($1/GB), a 1TB drive is about $1,000, so the 256TB unit is... $256,000.00 - yikes.
SirMaster - Wednesday, March 4, 2015 - link
But it's still nowhere near cost-effective.

If you don't need the performance, HDDs are less than $0.04 per GB. You could store 10 backup copies of your data on HDDs and it's still cheaper than 1 copy on SSD. And 10 HDD copies are far, far more reliable than 1 SSD copy.
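For anyone who wants to sanity-check that claim, here is a minimal back-of-envelope sketch using the per-GB figures quoted in this thread (~$0.04/GB for HDD, ~$1/GB for this array); the 10 TB data-set size is purely an illustrative assumption.

```python
# Back-of-envelope cost comparison. The $/GB figures are the ones quoted in
# this thread, not vendor list prices; the 10 TB data set is an assumption.
def media_cost(capacity_tb, dollars_per_gb):
    """Raw media cost, ignoring enclosures, power, and admin overhead."""
    return capacity_tb * 1000 * dollars_per_gb

data_tb = 10
hdd_ten_copies = media_cost(data_tb, 0.04) * 10   # ten HDD backup copies
flash_one_copy = media_cost(data_tb, 1.00)        # one copy at ~$1/GB

print(f"10 HDD copies: ${hdd_ten_copies:,.0f}")   # $4,000
print(f"1 flash copy:  ${flash_one_copy:,.0f}")   # $10,000
```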
Flunk - Wednesday, March 4, 2015 - link
I'm guessing you're not used to SAN prices because this is positively cut-rate.
Anonymous Blowhard - Wednesday, March 4, 2015 - link
Seriously. Gimme this for my VDI deployment right now.
ats - Wednesday, March 4, 2015 - link
You obviously haven't priced out commercial SAN/NAS storage. That pricing is extremely competitive, especially for an all-flash system. It's basically cheaper than the equivalent capacity in 15K RPM drives and should run rings around them.

1TB SSDs for the consumer are already pretty cheap at ~40c per GB. And it's unlikely that we'll really see capacity increases on consumer drives, because 1TB SSDs already sell in fairly low numbers.
For the pure capacity case, nothing is going to beat spinning disks.
cb216 - Friday, April 17, 2015 - link
Also, I'll say that any published pricing from enterprise vendors is far from the actual selling price. The pricing floors for this kind of hardware are ALL over the board. You see discounts that can go as high as 90% in enterprise hardware sales. Yes 90%.
serendip - Wednesday, March 4, 2015 - link
Not for the likes of Joe and Jane Consumer then. The low per-GB cost and decent speed could make this a nightmare for other flash SAN providers, especially if you can do away with complex optimizations and just throw more and more flash at a problem.

I'm still waiting for slow but cheap flash for mobile devices for use as mainly read-only storage. I wouldn't mind 32 GB of fast eMMC or even SATA on a phone with 128 or 256 GB on microSD to store static content like music and videos.
juhatus - Wednesday, March 4, 2015 - link
AT has usually steered away from enterprise storage; I feel the writer is a bit of a victim of enterprise propaganda. Well spun.
Anonymous Blowhard - Wednesday, March 4, 2015 - link
So that whole subsection under "Cloud/Datacenter and IT dating back to 2010" is just a figment of my imagination then?

Go back to Reddit.
ats - Wednesday, March 4, 2015 - link
Pretty much everyone is going gaga over this release today, primarily based on pricing. The technology itself really isn't that interesting: it's just a repackaging of SAS SSDs into a card form factor, combined with 4 SAS expanders and something like two 8-port SAS switches. So nothing revolutionary there.

It's really the density and price that make it interesting. For a lot of installations, it will be quite attractive. One per rack to one per three racks results in a pretty nice amount of ultra-low-latency, high-I/O, high-capacity storage for a wide variety of workloads.
DataGuru - Wednesday, March 4, 2015 - link
While density and power consumption are excellent, for an all-flash storage array the performance per TB is extremely poor - we're looking at 1,600-4,000 IOPS per TB (vs. a typical 50,000 IOPS or more) and throughput of only 15-30 MB/sec per TB, i.e. less than that of a 7,200 RPM SAS drive, which costs $60-80/TB (vs. $1,000+/TB here).

Basically, it's yet another way to waste perfectly good flash and $$$$$$.
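The per-TB math above is easy to reproduce; here is a minimal sketch, where the aggregate IOPS and throughput figures are assumptions reverse-engineered from the comment's per-TB numbers rather than published specifications.

```python
# Per-TB normalization behind the numbers above. The aggregate figures are
# assumptions implied by the comment (~1M IOPS, ~7.7 GB/s for the whole box),
# not vendor specs.
array_iops = 1_000_000    # assumed aggregate random IOPS
array_mb_s = 7_700        # assumed aggregate throughput in MB/s

for capacity_tb in (256, 512):
    print(f"{capacity_tb} TB: "
          f"{array_iops / capacity_tb:,.0f} IOPS/TB, "
          f"{array_mb_s / capacity_tb:.0f} MB/s per TB")
# 256 TB -> ~3,900 IOPS/TB and ~30 MB/s per TB
# 512 TB -> ~1,950 IOPS/TB and ~15 MB/s per TB
```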
ats - Wednesday, March 4, 2015 - link
It is limited by the SAS expanders/switches, which in the initial version appear to be only SAS2 (i.e. 6 Gb/s). Assuming it's wired up like I think it is, with each 8-port external switch connected up to 4 expanders via a 4x SAS connection, you are looking at a theoretical maximum of 6-ish GB/s of bandwidth. I'm assuming that the SAS expanders are in a mirrored 2+2 configuration.

So it appears that both the 4K and 8K IOPS are bandwidth-limited. The upgrade to SAS3 (i.e. 12 Gb/s) should both double bandwidth and double IOPS.
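A short sketch of the link-rate arithmetic behind that estimate follows; note that the two-switch, mirrored 2+2 topology is the commenter's assumption about the internals, not a confirmed design.

```python
# SAS link arithmetic behind the "6-ish GB/s" estimate. The internal topology
# assumed above (two 8-port switches, 4x links, mirrored 2+2 expanders) is a
# guess, not a confirmed design.
def sas_lane_gb_s(line_rate_gbps, encoding_efficiency=0.8):
    """Usable GB/s per lane; SAS2 and SAS3 both use 8b/10b encoding."""
    return line_rate_gbps * encoding_efficiency / 8

for name, rate in (("SAS2 (6 Gb/s)", 6), ("SAS3 (12 Gb/s)", 12)):
    lane = sas_lane_gb_s(rate)
    print(f"{name}: {lane:.1f} GB/s per lane, {lane * 4:.1f} GB/s per 4x port")
# SAS2: 0.6 GB/s per lane, 2.4 GB/s per 4x wide port
# SAS3: 1.2 GB/s per lane, 4.8 GB/s per 4x wide port
# A couple of 4x uplinks carrying unique data (with the rest mirroring) gets
# you to roughly 6 GB/s on SAS2; doubling the lane rate to SAS3 doubles both
# the bandwidth ceiling and the bandwidth-limited IOPS.
```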
juhatus - Wednesday, March 4, 2015 - link
Yeah, for that amount of performance it should have more front-end bandwidth; the SAS ports are totally limiting, but throw in something like InfiniBand switches and the price goes way up. And as the article says, they are selling it to a select few hyperscale customers (Amazon, FB, Google maybe). Also, as limiting as the ports are, think about backing up the data when you're running that hot 24/7. There's enterprise and then there's enterpricey :)
DataGuru - Thursday, March 5, 2015 - link
Given the price of this storage array ($2K/TB x 256-512TB), the extra cost of a pair of 12-port FDR InfiniBand/40Gb Ethernet switches (each with non-blocking bandwidth of 1+ Tb/s, i.e. over 120 GB/sec) is trivial (something like $10K-15K).

The real problem is that their backplane is most likely way too slow for 64 4-8TB flash drives.
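A quick sketch of that cost comparison, using only the figures quoted in the comment (none of them vendor list prices):

```python
# "Trivial extra cost" claim in numbers: $2K/TB, 256-512 TB, ~$15K for a pair
# of switches. All figures are taken from the comment, not from a price list.
per_tb_price = 2_000
switch_pair = 15_000

for capacity_tb in (256, 512):
    array_price = capacity_tb * per_tb_price
    print(f"{capacity_tb} TB array ~ ${array_price:,}; "
          f"switches add ~{switch_pair / array_price:.1%}")
# 256 TB array ~ $512,000; switches add ~2.9%
# 512 TB array ~ $1,024,000; switches add ~1.5%
```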
DataGuru - Saturday, March 7, 2015 - link
Actually, it's not even necessary to use InfiniBand to get high front-end bandwidth (probably 70-80 GB/sec, not 7 GB/sec!), and it could all be SAS3 and pretty cheap. Given the existence of 48-port and 68-port 12Gb SAS3 expanders (like those from PMC - http://pmcs.com/products/storage/sas_expanders/pm8... and http://pmcs.com/products/storage/sas_expanders/pm8... - which apparently only cost $320 and $200 each), six expander chips (two 68-port at the top level and four 48-port at the lower level) should allow an HA configuration connecting 16-18 4x SAS3 12Gb external ports to 64 SAS3 drives (2 SAS ports per drive). To better visualize it, think of each of those 64 SAS drives as a 2-port NIC, and each of the 4 SAS3 expanders at the bottom layer as a 48-port TOR switch with 32 1x downlinks and 4 (2+2) 4x uplinks connected to 2 top-level 68-port (17x4x) switches, each of those having 8 4x downlinks and 8 (or 9) 4x front-end links.

Maybe someone who actually understands SAS3 at the hardware level can comment on whether there are any hidden bottlenecks in the approach above? And if not, why did SanDisk build this bandwidth-handicapped array?
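To make the proposed fan-out easier to check, here is a minimal sketch that tallies the theoretical bandwidth at each tier of the suggested topology; every count in it comes from the comment above and describes a hypothetical design, not the actual product.

```python
# Bandwidth tally for the hypothetical two-tier SAS3 expander fabric proposed
# above: 64 dual-ported drives, four 48-port bottom expanders with four (2+2)
# 4x uplinks each, and 16 external 4x SAS3 ports. This is a proposal, not the
# shipping array's design.
SAS3_LANE_GB_S = 12 * 0.8 / 8        # 8b/10b encoding -> ~1.2 GB/s per lane

def wide_ports_gb_s(num_4x_ports):
    return num_4x_ports * 4 * SAS3_LANE_GB_S

drive_side = 64 * 2 * SAS3_LANE_GB_S     # 64 drives x 2 narrow ports each
tier_links = wide_ports_gb_s(4 * 4)      # 4 bottom expanders x 4 4x uplinks
front_end = wide_ports_gb_s(16)          # 16 external 4x SAS3 ports

print(f"drive side:          {drive_side:.0f} GB/s")   # ~154 GB/s
print(f"bottom->top uplinks: {tier_links:.0f} GB/s")   # ~77 GB/s
print(f"front-end ports:     {front_end:.0f} GB/s")    # ~77 GB/s
# The narrowest tiers land around 77 GB/s, which is where the 70-80 GB/s
# figure in the comment comes from.
```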
vFunct - Wednesday, March 4, 2015 - link
This is amazing. Pair this up with an 8-socket Xeon system and you have a database monster.
FunBunny2 - Wednesday, March 4, 2015 - link
Welcome to the club. Dr. Codd would be proud.
lorribot - Wednesday, March 4, 2015 - link
If you look at the likes of EMC and NetApp, what differentiates them from this, and why (they would argue) they can justify charging more per GB, is the supporting software they supply that enables provisioning, management, replication, dedupe, integration with virtual platforms, etc. Very little of that is evident in the article; it would seem to be just a bunch of storage. NetApp often state they are a software company: the hardware they have to provide, but really it's just commodity stuff. EMC still think hardware, and if they ever stop buying companies and get all their bits in one place they may go that way too.

This smacks of a "hey, look at what you can do if you really try" slap in the face of these sorts of vendors: "here's the hardware, stick your software in front of this and let's see what we can do together."
ats - Thursday, March 5, 2015 - link
Well, the reality is that as much as a company like NetApp wants to be a software company, they'll never be one. They sell hardware, and their primary market is legacy SAN/NAS.

Anyone building a modern application is bypassing all the legacy storage providers. One need look no further than AWS. Anyone operating at any scale, which is basically everything new, wants either raw block or raw object storage, doesn't care about RAID because RAID is basically useless these days, and will handle redundancy at the application level.
Solutions like this and the Skyera flash boxes (136TB raw per 1U, moving to 300TB per 1U this year!!!) are basically the bee's knees. They are cost-competitive with spinning-disk systems while delivering excellent capacity.

Both are somewhat bandwidth-limited currently, at about 6-8 GB/s.
For the higher-performance arena, you have the custom super-fast systems like IBM's FlashCore et al.
And at the top end there are the NVMe solutions that are now on the market from several vendors.
I won't even get into the trend of rotational media moving to Ethernet interfaces...
DuckieHo - Tuesday, March 10, 2015 - link
Are the stacked pipes on the front panel purely for aesthetics, or is there some function to them? I see the vents behind them.
anon4632 - Tuesday, March 10, 2015 - link
This is reminiscent of the exciting products from the recently cashed-out ("acquired") Skyera. Here's hoping this isn't bullshit vaporware like Skyera's products.
cb216 - Friday, April 17, 2015 - link
It's great to see innovation on the hardware side, but I believe most of the magic is going to come from the software. What happens when a controller fails? What happens when an entire node fails? How graceful is the failure? Do systems go down? There are a *LOT* of hours spent on firmware engineering and software engineering to address those kinds of things. I don't think you'll see these on a banking floor anytime soon.

In the long run, the market is going to be with the companies who provide the intelligence at the software/service layer, such as Amazon, EMC, NetApp, and newcomers like Pure and Qumulo. You'll see this gear in a few shops, but to go and sell direct means you need to do things like parts retention, node compatibility across upgraded platforms, and many other things that take companies many growth cycles and a lot of $$$ to accomplish.