62 Comments
Duncan Macdonald - Friday, December 12, 2014 - link
How does this compare to four 240GB SandForce SSDs in software RAID 0 using the Intel chipset SATA interfaces?
Kristian Vättö - Friday, December 12, 2014 - link
Intel chipset RAID tends not to scale that well with more than two drives. I have to admit that I haven't tested four drives (or any RAID 0 in a while) to fully determine the performance gains, but it's safe to say that the Phoenix Blade is better than any Intel RAID solution since it's more optimized (specific hardware and custom firmware).
nathanddrews - Friday, December 12, 2014 - link
Sounds like you just set yourself up for a Capsule Review.
Havor - Saturday, December 13, 2014 - link
I don't get the high praise for this drive. Sure, it has value for people who need high sequential speed, or people on a budget hosting a database that gets tons of requests and can utilize high QD, but everyone else is better off with a SATA SSD, which performs much better at a QD of 2 or less.
As desktop users almost never go over QD2 in real-world use, they would be much better off with an 8x0 EVO or similar, both performance-wise and price-wise.
I am actually one of the few who could use the drive, if I had space for it (running quad SLI), as I use a RAM drive and copy programs and games, stored on the SSD in RAR files, via a script from a RAID 0 set of SSDs to the RAM disk, so high sequential speed is king for me.
But I count myself among the 0.1% of nerds who do things like that because I like doing stuff like that; any other sane person would just run their programs off an SSD.
Integr8d - Sunday, December 14, 2014 - link
The typical self-centered response: "This product doesn't apply to me. So I don't understand why anyone else likes it or why it should be reviewed," followed by, "Not that my system specs have ANYTHING to do with this, but here they are... 16 video cards, raid-0 with 16 ssd's, 64TB ram, blah blah blah..." They literally just look for an excuse to brag...
It's like someone typing a response to a review of Crest toothpaste. "I don't really know anything about that toothpaste. But I saw some, the other day, when I went to the store in my 2014 Dodge Charger quad-Hemi supercharged with Borla exhaust, 20" BBS with racing slicks, HID headlights, custom sound system, swimming pool in the trunk and with wings on the side so I can fly around."
It's comical.
dennphill - Monday, December 15, 2014 - link
Thanks, Integr8d, you put a smile on my face this morning! My feelings exactly.
pandemonium - Tuesday, December 16, 2014 - link
Hah. Nicely done, Integr8d.
alacard - Friday, December 12, 2014 - link
The DMI interface between the chipset and the processor maxes out at about 1800~1850MB/s, and that bandwidth has to be split between all the devices connected to the PCH, which also incorporates an x8 PCIe 2.0 link. Simply put, there's not enough bandwidth to go around with more than two drives attached to the chipset in RAID, not to mention that scaling beyond two drives is fairly bad in general through the PCH even when nothing else is going on. And to top it all off, 4K performance is usually slightly slower in RAID than on a single SSD (i.e. it doesn't scale at all).
I know Tom's Hardware had an article or two on this subject if you want to Google it.
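A rough back-of-the-envelope sketch of that bandwidth math, assuming ~550 MB/s peak sequential throughput per SATA 6Gb/s SSD and the ~1800 MB/s usable DMI 2.0 figure quoted above (both assumptions, not measurements):

```python
# Back-of-the-envelope: how many SATA SSDs does it take to fill DMI 2.0?
# Both figures below are assumptions taken from the discussion, not measurements.
DMI2_USABLE_MBPS = 1800       # ~1.8 GB/s usable after protocol overhead
SATA_SSD_SEQ_MBPS = 550       # typical peak sequential read of a SATA 6Gb/s SSD

for drives in range(1, 5):
    aggregate = drives * SATA_SSD_SEQ_MBPS
    delivered = min(aggregate, DMI2_USABLE_MBPS)
    print(f"{drives} drive(s): {aggregate} MB/s raw, ~{delivered} MB/s through the PCH")

# With these numbers the third drive already exceeds the DMI ceiling, so a
# fourth drive behind the chipset adds capacity but no sequential throughput.
```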
personne - Friday, December 12, 2014 - link
It takes three SSDs to saturate DMI. And 4K writes are nearly double at long queue depths. So you get more capacity, at higher cost, and much of the performance benefit for many operations; certainly tons more than a single SSD, at a linear cost. Do research your statements.
alacard - Friday, December 12, 2014 - link
To your first point about saturating DMI, we're in agreement. Reread what I said.
To your second point about 4K, you are correct, but I've personally had three separate sets of RAID 0 on my performance machine (2 Vertex 3s, 2 Vertex 4s, 2 Vectors), and I can tell you that those higher 4K results were not impactful in any way when compared to a single SSD. (Programs didn't load faster, for instance.)
http://www.tomshardware.com/reviews/ssd-raid-bench...
That leaves me curious as to what you're doing that lets you get the benefits of high queue depth RAID 0. What's your setup, and what programs do you run? I ask because for me it turned out not to be worth the bother, and this is coming from someone who badly wanted it to be. In the end the higher low queue depth 4K performance of a single SSD was a better option for me, so I switched back.
http://www.hardwaresecrets.com/article/Some-though...
Havor - Sunday, December 14, 2014 - link
What really sucks is that Intel continues to attach the PCH to the host processor through a four-lane DMI 2.0 connection, even on the X99. You only get 2 GB/s of bi-directional throughput.
So a 3-disk RAID 0 or 4-disk RAID 5 is all it takes to saturate the DMI connection between chipset and CPU, even though you've got 10x SATA 3 connectors.
At the moment M.2 and PCIe are the only options for a faster storage solution.
And for the desktop, only M.2 with native PCIe 3.x x4 will be able to deliver cost-effective solutions, once good SSD controllers are finally developed.
alacard - Sunday, December 14, 2014 - link
You're preaching to the choir on that one. 2GB per second (actually only 1800MB/s after overhead) divided between 10 SATA ports, 14 USB ports (6 of them 3.0), Gigabit LAN, and 8 PCI Express lanes is an absolute joke.
TheWrongChristian - Monday, December 15, 2014 - link
What you're missing is that while an SSD at peak speed can saturate a SATA 3 link, and three such drives can saturate a 2GB/s DMI connection, even the best SSDs can rarely reach such speeds with normal workloads.
Random (especially low queue depth 4K random) workloads tend to be limited to much lower speeds, and random I/O is much more representative of typical workloads. Sequential workloads are usually bulk file copy operations, and how often do you do that?
So, given your 10x SATA 3 connectors, what workload do you possibly envisage that would require that combined bandwidth? And benchmark dick swinging doesn't count.
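To illustrate the gap being described, here is a minimal and deliberately naive sketch that times large reads against 4K random reads at queue depth 1. The file path is a placeholder, and a real test would bypass the page cache and use a dedicated tool such as fio:

```python
import os, random, time

PATH = "testfile.bin"        # placeholder: a large file on the drive under test
fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

# Large-block reads: the pattern bulk file copies produce.
start, done = time.perf_counter(), 0
while done < min(size, 512 * 1024 * 1024):
    os.pread(fd, 1 << 20, done)                    # 1 MiB per request
    done += 1 << 20
print(f"1 MiB reads:    {done / (time.perf_counter() - start) / 1e6:.0f} MB/s")

# 4K random reads at queue depth 1: one outstanding I/O, a typical desktop pattern.
start, ios = time.perf_counter(), 2000
for _ in range(ios):
    off = random.randrange(0, size - 4096) // 4096 * 4096   # aligned random offset
    os.pread(fd, 4096, off)
print(f"4K random, QD1: {ios * 4096 / (time.perf_counter() - start) / 1e6:.1f} MB/s")

os.close(fd)
```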
personne - Sunday, December 14, 2014 - link
My tasks are varied but they often involve opening large data sets and importing them into an inverted index store, at the same time running many process agents on the incoming data as well as visualizing it. This host is also used for virtualization. Programs loading faster is the least of my concerns.
AllanMoore - Saturday, December 13, 2014 - link
Well, you can see the blistering speed of the 480GB version compared to the 240GB one; see the table: http://picoolio.net/image/e4OEz
ioAs - Saturday, December 13, 2014 - link
I know RAID 0 (especially with four drives) theoretically gives high performance, but is it really worth the data risk? I do question some laptop manufacturers and PC OEMs that actually ship RAID 0 SSD setups to customers; it's just not good practice, IMO.
personne - Monday, December 15, 2014 - link
RAM is much more volatile than flash or spinning storage, yet it has its place. An SSD is in a sense already a RAID array, since many chips are used. And it's been posted that the failure rate of a good SSD is much lower than that of an HDD; multiple SSDs combined are still less likely to fail than a single HDD. And one should always have good backups regardless. So if the speed is worth it, it's not at all unreasonable.
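The "multiple SSDs are still less than a single HDD" claim is easy to sanity-check with a quick sketch; the annual failure rates below are illustrative assumptions, not measured figures:

```python
# Chance a striped array loses data in a year (any member failing kills RAID 0).
# The AFR numbers are illustrative assumptions only.
SSD_AFR = 0.005    # assumed ~0.5% annual failure rate for a good SSD
HDD_AFR = 0.03     # assumed ~3% annual failure rate for a consumer HDD

def stripe_afr(member_afr, drives):
    # The array survives the year only if every member survives it.
    return 1 - (1 - member_afr) ** drives

for n in (1, 2, 3, 4):
    print(f"{n} SSD(s) in RAID 0: {stripe_afr(SSD_AFR, n):.2%} annual risk")
print(f"single HDD:         {HDD_AFR:.2%} annual risk")

# Even the 4-drive stripe (~2.0%) stays under the single-HDD figure with these
# assumptions, though as noted above backups remain mandatory either way.
```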
Symbolik - Sunday, December 14, 2014 - link
I have 3x Kingston HyperX 240GB in RAID 0. I have four of them, but three maxes out my AMD RAID gains; it's a significant improvement over two, at around 1000 x 1100 read/write (ATTO Disk Benchmark). I have tried four, and the gain was minimal. To get further gains with the fourth, I'd probably need to put in an actual RAID card. I know it's not Intel, but it is SandForce.
Dug - Friday, December 12, 2014 - link
You say - "As a result the XP941 will remain as my recommentation for users that have compatible setups (PCIe M.2 and boot support for the XP941) because I'd say it's slightly better performance wise and at $200 less there is just no reason to choose the Phoenix Blade over the XP941, except for compatibility"I'm curious, what are you using to determine the XP941 has slightly better performance? It just seems to me most of the benchmarks favor the Phoenix Blade.
Kristian Vättö - Friday, December 12, 2014 - link
It's the 2011 Heavy Workload in particular where the XP941 performs considerably better than the Phoenix Blade, whereas in the 2013 and 2011 Light suites the difference between the two is quite small. The XP941 also has better low-QD random performance, which is typically important for desktop workloads.
Supercell99 - Friday, December 12, 2014 - link
Did we ever find out about the endurance of the XP941? Is it artificially limited? The endurance of the G.Skill Blade may actually make it worth an extra $200 if it can really hold up to that kind of write endurance. http://www.anandtech.com/show/8006/samsung-ssd-xp9...
Dug - Friday, December 12, 2014 - link
Ahh ok. Thank you for the response. My fault for not understanding the weight attributed to a certain benchmark.
olderkid - Friday, December 12, 2014 - link
Any idea if we're going to see the Samsung SM951 anytime soon? It's all I've been waiting on for a new X99 build.
Laststop311 - Saturday, December 13, 2014 - link
Bro, I KNOW! I have been on the lookout for the SM951 for a long time. When I saw this drive was going to be a native PCIe 3.0 x4 M.2 SSD + NVMe + available in 1TB capacity, I was like, OMG, this is my new drive, I don't care what the price is, it's 100% going into my next build. That was almost six months ago and still no word. I'm really sad; I hope it's still going to come out.
I am not building until Skylake-E, so I still have plenty of time. Even though I'm on a Gulftown i7-980X, which is over four years old going on five, it still isn't slow enough to be a bottleneck, especially at a 4.2GHz OC. I'm not even upgrading for the CPU, just for features I want like DDR4, PCIe 4.0, an Ultra M.2 slot, SATA Express, and USB 3.0 that isn't from a third-party controller (yes, I still don't have native USB 3.0). I might still buy something other than a PC upgrade. This year I bought LG's 2nd-gen 55" OLED instead of a PC upgrade (best decision ever; it is eye-searingly beautiful).
Laststop311 - Saturday, December 13, 2014 - link
3000 for 55" oled = WINpersonne - Friday, December 12, 2014 - link
I'm disappointed no RAID 0 SSD setups were included. That's a cost-effective option many people will explore, and it often has comparable performance. Three 850 Pros for 768GB still cost less than this device.
HoldDaMayo - Friday, December 12, 2014 - link
Well said, I was thinking the exact same thing.
Kristian Vättö - Friday, December 12, 2014 - link
I don't have any sets of two drives, so I couldn't include any RAID 0 results here. I may provide an update later if I get my hands on some, though.
personne - Friday, December 12, 2014 - link
I've often wondered why these kinds of review sites don't keep databases of results. I realize that the benchmark suites change and you're not a huge operation, but even having recent results to compare openly (using your own front end or even releasing open data) would really up the game and let your users participate better. I don't want to sound harsh, but it's 2014; review sites have been around for nearly twenty years and they have changed little in format. Anandtech is easily one of the best, but many sites come down to a few pictures of results and some fairly arbitrary comments (Storage Review is one exception; since the start they've had a database where results can be arbitrarily compared). I hope sooner or later Wikipedia and other collective open benchmarking sites will start elevating comparison, and I'd hope to see sites like Anandtech leading the way.
Thanks for listening. (=
Kristian Vättö - Friday, December 12, 2014 - link
Well, we've had the Bench section with all of our benchmark data for as long as I can remember.
http://www.anandtech.com/bench
personne - Friday, December 12, 2014 - link
Oh nice, how did I miss that? Must have been thinking of another site. Thanks.
vLsL2VnDmWjoTByaVLxb - Friday, December 12, 2014 - link
"there is a market in users with older motherboards for whom the XP941 is simply not an option due to the lack of boot support."Presumably an enthusiast or one in the higher-end workstation markets would already have a suitable boot device so I don't see this as a hindrance. This is already a high-end product, so buying a motherboard to fit the niche would be expected.
Kinda like how you don't bemoan a high-power GPU for its inability to work on low-power supply systems.
Kristian Vättö - Friday, December 12, 2014 - link
Kristian Vättö - Friday, December 12, 2014 - link
I'm not sure I agree with this. Many enthusiasts/professionals haven't seen the appeal of upgrading from Sandy/Ivy Bridge setups, so it's not just the motherboard that needs to be updated.
DanNeely - Friday, December 12, 2014 - link
I definitely don't agree with it. The only reason I'm planning to replace the core of my even older i7-930 system is that it's gotten old enough that an old-age failure is becoming more likely, and I don't want to do a rush upgrade when something catastrophically fails.
hojnikb - Friday, December 12, 2014 - link
And I'm just here, expecting a native PCIe solution... Damn, RAID 0 SandForce is really annoying. At least use a proper controller, like Marvell.
And TRIM still doesn't seem to work properly...
UltraWide - Friday, December 12, 2014 - link
Would it be more prudent to wait for an NVMe based PCIe SSD? Maybe the Intel DC P3500 that is about to start selling in the next few weeks?
Kristian Vättö - Friday, December 12, 2014 - link
I thought about mentioning the P3500, but it's been "about to start selling in the next few weeks" for the past six months, so I decided not to, since there is still no real schedule for its release.
Luke212 - Friday, December 12, 2014 - link
Kristian, why does no one bother to bench the P3600? I only need read performance, and the P3600 might be suitable. It's cutting-edge tech, yet no one has bothered to review it!
Kristian Vättö - Saturday, December 13, 2014 - link
Intel hasn't sampled media with the P3600, that's why.
otherwise - Monday, December 15, 2014 - link
I too have been waiting for this drive, but considering it was pulled from Intel's website, I don't think we're going to see it. I am going to guess that Intel thought it would completely cannibalize P3600 sales, which it probably will if it sees the light of day.
otherwise - Monday, December 15, 2014 - link
For those interested, here is the P3xxx series page at Intel, which used to contain all three models but now lists just the P3600 and P3700: http://www.intel.com/content/www/us/en/solid-state...
otherwise - Monday, December 15, 2014 - link
Fixed link: http://www.intel.com/content/www/us/en/solid-state...
r3loaded - Friday, December 12, 2014 - link
Is anyone ever going to get around to producing a native PCIe drive that's actually available at retail for enthusiasts to buy for their systems? Bonus points if it supports NVMe. The SSD in my MacBook Pro is faster than the one in my desktop PC and that just doesn't sit right with me.
biostud - Friday, December 12, 2014 - link
Can't it boot from an X99 setup?
FunBunny2 - Friday, December 12, 2014 - link
The text says 2281, but the table 2282?? Typo? Matter much?
Antronman - Friday, December 12, 2014 - link
For just $300 more, I'll take an iO-FX any day over this PCIe SSD.
Poik - Friday, December 12, 2014 - link
The only price I see for an iO-FX is $1380 from Amazon. That's pretty much $700 more, or double the price. Even so, for an extra $300 I'd rather have two XP941s.
Antronman - Friday, December 12, 2014 - link
I was comparing Amazon prices, as I found the Phoenix Blade 480GB for $1000 on Amazon. But after checking Newegg, it is only $680 there. Oh well. The iO-FX is still worth it; the speed and reliability are simply second to none. While it might be a little overkill for the average consumer, I don't believe PCIe SSDs (especially ones this expensive) are for the average consumer anyway. If you're willing to buy a single storage drive that costs $680, you can buy one that costs $1400.
bill.rookard - Friday, December 12, 2014 - link
Interesting piece of hardware, but I'm interested in it more for my used servers. I have several running, and something with that kind of durability rating, plus the all-important backwards compatibility and form factor, would let me have all kinds of fun on my Xeon box.
Of course, two of them would set me back $1500.00 - details, details.
Supercell99 - Saturday, December 13, 2014 - link
My guess is that for a server environment, something that has to run 24/7, a product this new should be tested for a while before I would put it into a production machine, just to see what happens :)
MTEK - Friday, December 12, 2014 - link
Don't really care about a RAID 0 hack. Where are the SF-3700-based SSDs? Anand/Kingston were teasing us with one back at last CES... Where is it??
counterclockwork - Friday, December 12, 2014 - link
In case anyone is wondering, Kristian is wrong about the controller. SBC Designs has nothing to do with this thing, as if their website's lack of specifics and all-around amateurs-from-the-'90s look wasn't a big tip-off. Pure googling actually turns up Comay, and Comay is a brand used by CoreRise. In fact, the Phoenix Blade is nothing more than a rebadge of CoreRise's BladeDrive G24 (see http://www.corerise.com/en/product_show.php?id=95 ). Looking at the text strings in the driver for this confirms as much. As for the chip itself, CoreRise claims the SBC208 is their own proprietary device. Personally, I don't believe this, as their product portfolio doesn't otherwise suggest they have that level of expertise. I'd guess it's an LSI or Marvell controller.
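For anyone curious how driver strings can give a rebadge away, a minimal stand-in for the Unix "strings" utility is easy to sketch; the filename and the vendor keywords filtered for below are placeholders and assumptions, not taken from the actual driver package:

```python
# Minimal stand-in for the `strings` utility: print printable-ASCII runs from a
# binary (e.g. a driver file) and filter them for vendor names. Path is a placeholder.
import re
import sys

MIN_LEN = 6
path = sys.argv[1] if len(sys.argv) > 1 else "phoenix_blade_driver.sys"

with open(path, "rb") as f:
    data = f.read()

pattern = rb"[\x20-\x7e]{%d,}" % MIN_LEN          # runs of printable ASCII
for match in re.finditer(pattern, data):
    text = match.group().decode("ascii")
    if any(key in text.lower() for key in ("comay", "corerise", "sbc")):
        print(text)
```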
Kristian Vättö - Saturday, December 13, 2014 - link
Thanks for the heads-up and detective work. I couldn't find anything on Google, but it looks like I wasn't trying hard enough... Anyway, I've updated the article.
StrongDC - Saturday, December 13, 2014 - link
The text says driven by four SandForce SF-2281 controllers while the table says 4x SandForce SF-2282. :)
Kristian Vättö - Saturday, December 13, 2014 - link
Fixed :)
SanX - Saturday, December 13, 2014 - link
Stop bending common sense with all that slow, self-destructive flash junk and start making battery-backed RAM PCIe drives. Speeds will be 20x immediately and forever. RAM prices will drop with adoption.
SanX - Saturday, December 13, 2014 - link
Battery and hard drive/flash backup.
FunBunny2 - Saturday, December 13, 2014 - link
Texas Memory (now a unit of IBM) made such drives 20 years ago; they were among the pioneers of SSDs before NAND. They didn't sell all that well. Kind of expensive.
Antronman - Saturday, December 13, 2014 - link
To be fair, that was 20 years ago.
If anybody were interested, it might be $500 for an 80GB DDR3 storage drive. But honestly, nobody could utilize that sort of performance except for the largest and busiest data centers. And even they don't need it.
If you really want "teh supr speedi as f*ck spid" then you might as well just grab X79 or X99, and put in 64GBs of RAM and just ramdisk most of it.
incx - Sunday, December 14, 2014 - link
First of all, you'd want proper ECC RAM on that thing, which will cost at least around $500 for the RAM alone. In addition, you'd want logic that drives the whole thing: map and avoid bad chips, store everything to flash (another cost) when power dies and restore it afterwards, check and manage the flash, check and manage the battery, and make it look like a "drive" in general. Then you add in the R&D, manufacturing, support, and warranty costs, etc., and you're not even in the neighborhood any more.
Creating a persistent RAM "disk" is not quite the same thing as software-mapping a bunch of consumer-grade RAM into a ramdisk. Sure, that works and is quite awesome, but everyone who uses it for anything serious acknowledges and works around the risk that the data there may go poof, or worse, go bad, at any random time.
Supercell99 - Sunday, December 14, 2014 - link
Someone makes this now. Just saw a review recently, can't remember where. Very fast, but absurdly expensive. If Samsung or Intel got behind something like this and put out a design that could be easily manufactured with commodity ECC RAM, the adoption rate would be high. Shops that need high file I/O, such as big data or database applications, would benefit from a battery-backed RAM disk card.
gammaray - Monday, December 15, 2014 - link
Why pay $700 for this when I can grab a Samsung 850 Pro for half?