extide - Tuesday, June 3, 2014 - link
They finally did it, a bad ass no-compromises SSD.
extide - Tuesday, June 3, 2014 - link
Disappointed you didn't run the 2013 Destroyer on it; with that amazing low-queue-depth performance, it would probably blow everything else away on that chart.
smartypnt4 - Tuesday, June 3, 2014 - link
I mean really... For $1.50/GB, this could be purchased by a desktop enthusiast, so it's completely valid to test this against other consumer drives to see how it'd do. I'd be very interested in the results.
B3an - Tuesday, June 3, 2014 - link
I agree, needs consumer drive comparisons...
edward1987 - Tuesday, February 7, 2017 - link
I found another good use for this PCIe SSD: you can use it in a QNAP TVS-1282 or TDS-16489U for hot data, since they do tiered storage. It's £600 to get this SSD, but if you want VMs this is great. (https://www.span.com/product/Intel-SSD-DC-P3700-PC...
TelstarTOS - Tuesday, June 3, 2014 - link
Absolutely, but wait until they have a 3600 or 3500 in their hands.
NCM - Tuesday, June 3, 2014 - link
It would also be useful to compare to the PCIe consumer SSDs that Apple has been shipping in the MacBook Pro, MacBook Air and iMac lines for quite some time now. Given its sales volume and early adoption of PCIe drives, I'd have to imagine that Apple may have shipped more of them to users than anyone else. (These drives are supplied by both Samsung and SanDisk, and perhaps others.)
Yes, this new Intel product is for a quite different market, but comparison is how one comes to understand what those differences are and mean.
Marthisdil - Tuesday, June 3, 2014 - link
No one really cares much about Apple's offerings, mainly because they are such a small percentage of the marketplace...
easp - Monday, June 9, 2014 - link
You miss the point. They are one of the largest players in the consumer market. Moreover, most of their sales are at premium price points and include SSDs. Put the two together, and they almost certainly ship more consumer SSDs than anyone else, by far. What's more, many of their lines are already on PCIe SSDs.
So, please explain why Apple's shipping PCIe SSD options aren't a significant point of comparison against an aftermarket SSD that just arrived in the market.
SeanJ76 - Wednesday, May 20, 2015 - link
Cause Apple is shit!
Ammaross - Tuesday, June 3, 2014 - link
The M.2 SATA-protocol-on-PCIe drives? A comparison would mean Apple would need to have support for NVMe first, or the ability to even slot in such a card (which rules out the 3 offerings you outlined).
extide - Tuesday, June 3, 2014 - link
What are you talking about? "The M.2 SATA-protocol-on-PCIe drives?" doesn't even make sense.
All you need to do to compare them is run the benchmarks on the Apple hardware, possibly under a Windows OS. Or, if the drives use the regular M.2-style connector, you could just stick them in a desktop. The fact that they run AHCI over PCIe does not make a comparison impossible; in fact, all of the other PCIe cards benchmarked in this review were also AHCI-based cards. Seems like NVMe is confusing people a lot more than it should.
Galatian - Wednesday, June 4, 2014 - link
I think he tried to say that you can't stick one of those new NVMe drives into a Mac, since OS X does not yet support NVMe.
That being said, Apple discontinued the old Mac Pro where you could put a PCIe device inside, so the point is moot no matter what.
gospadin - Tuesday, June 3, 2014 - link
I'd have liked to see the drive in 25W mode too.
extide - Tuesday, June 3, 2014 - link
Yeah, I would as well. I am assuming the 25W mode has specific cooling requirements? More info on this would be nice. Also, what is the default TDP?
eanazag - Tuesday, June 3, 2014 - link
That's also the first thing I thought. I wanted to see the boost level. That bottom price is pretty close to where I would consider splurging on a 400GB for my desktop. If you consider a RAID card and a few drives, then $600 is justifiable.
I stayed away from the PCIe SSDs because of boot issues and quality concerns. A lot of those were OCZ drives.
Galatian - Tuesday, June 3, 2014 - link
I might have just overlooked it, but I guess those drives are not bootable?
457R4LDR34DKN07 - Tuesday, June 3, 2014 - link
"Booting to NVMe drives shouldn't be an issue either."
Galatian - Tuesday, June 3, 2014 - link
Ah, great... so this might be a nice alternative to the lackluster state M.2 is in right now after all!
dopp - Tuesday, June 3, 2014 - link
NVMe won't necessarily be a replacement for M.2. M.2 is just the connector, and the M.2 standard supports both SATA and NVMe as protocols to control the SSD. That said, you need a motherboard that's wired to provide PCIe over M.2 as well as a drive that supports NVMe, and NVMe M.2 drives will likely be much better than SATA ones.
extide - Tuesday, June 3, 2014 - link
Yeah, this, except more correctly it is SATA vs. PCIe as the interface and AHCI vs. NVMe as the protocol.
Connectors:
M.2 --> Supports AHCI over SATA, AHCI over PCIe, and NVMe over PCIe
SFF-8639 --> Supports AHCI over PCIe and NVMe over PCIe
PCIe card --> AHCI over PCIe, and NVMe over PCIe
Now the latter 2 (and even the first one, if you really wanted to...) could have a PCIe-based SATA controller on it, which would go PCIe --> SATA/SATA RAID controller --> SATA SSD controller(s). (For example, this is how the OCZ RevoDrive works.)
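For what it's worth, the split also shows up in how a Linux host names the devices: NVMe namespaces get their own driver and appear as nvme*n*, while anything going through the AHCI/SATA (or SAS/USB) stack comes up as sd*. A minimal sketch, purely illustrative (the guess_stack helper is just something made up for this example):

```python
# Rough illustration only: infer the host-side stack from the kernel's block device naming.
import os

def guess_stack(dev: str) -> str:
    if dev.startswith("nvme"):
        return "NVMe (native PCIe driver)"
    if dev.startswith("sd"):
        return "SCSI/ATA stack (SATA AHCI, SAS, or USB)"
    return "other/unknown"

for dev in sorted(os.listdir("/sys/block")):
    print(f"{dev}: {guess_stack(dev)}")
```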
Galatian - Wednesday, June 4, 2014 - link
That's not what I meant with my comment. I'm upset that besides ASRock on the Extreme 6 and 9 and ASUS on their Impact, no other manufacturer included a higher-bandwidth M.2 connector. I guess all upcoming PCIe M.2 drives will already be bottlenecked by the lackluster M.2 speeds most mainboard manufacturers are building into their products.
hpvd - Tuesday, June 3, 2014 - link
Hmmm, are you sure? No new mainboard needed? No new BIOS? Should it work in all boards that can boot existing PCIe SSDs?
hpvd - Tuesday, June 3, 2014 - link
I would really appreciate a short test of this. How should this work when AHCI is the standard on today's mainboards/BIOS/UEFI? There is already some work to be done before the Windows/Linux driver takes over the helm (which is of course already available: http://www.nvmexpress.org/blog/open-fabrics-allian...
TelstarTOS - Tuesday, June 3, 2014 - link
404
j00d - Friday, June 6, 2014 - link
Just take off the trailing ) in the URL.
Ryan Smith - Tuesday, June 3, 2014 - link
Since a couple of you asked, I threw it in our X79 testbed.
Windows 8.1U1 sees the drive without issue; however, it is not bootable, as our motherboard cannot see the drive as a bootable device. I preface that with the fact that our X79 testbed is a consumer platform (ASRock X79 Professional) and X79 is a rather old chipset. So I can't speak for how this would behave on a brand spanking new Z97 board, or a server board for that matter.
hpvd - Wednesday, June 4, 2014 - link
Many thanks for giving this a try! Should be further investigated...
hpvd - Wednesday, June 4, 2014 - link
PCIe booting may be a general problem with the standard BIOS settings on these boards. I found a tiny BIOS settings guide on how to fix this (on a similar ASRock X97 board). Would be awesome if you could try this: http://www.oczforum.com/forum/showthread.php?10114...
You would be the very first on the web booting from an NVMe device :-)
hpvd - Wednesday, June 4, 2014 - link
The other way around, the question is: does
- your board
- with this bios version
- with this bios settings
- in this PCIe slot
see other bootable PCIe SSD devices?
if so, then this new Intel PCIe NVMe SSD behaves somehow differently
If the others couldn't be seen either - there is still hope for "normal" boot support :-)
You just need the right board and BIOS settings...
Ryan Smith - Wednesday, June 4, 2014 - link
The RevoDrive 3 presents itself as a SAS device, as I understand it. In any case, the problem isn't that the board can't see the drive, period - the ASRock system browser cheerfully identifies it as an Intel drive - it just doesn't consider it bootable. This is with the latest BIOS (3.10), BTW.
To answer your second question, none of the other PCIe drives we have are bootable. The Intel 910 is explicitly non-bootable, and the Micron P420m uses a proprietary protocol.
hpvd - Wednesday, June 4, 2014 - link
Many thanks for giving that much detail!!
hpvd - Friday, June 6, 2014 - link
Just found a great piece of information - maybe you only need the right driver (or a Windows version where this is included out of the box):
"NVM Express Boot Support added to Windows 8.1 and Windows Server 2012 R2"
details:
http://www.nvmexpress.org/blog/nvm-express-boot-su...
hopefully this works for these devices...
Bloorf - Tuesday, June 3, 2014 - link
Thanks for the article. I'm glad the new interface is showing good results so far. People are probably drooling for the new storage hardware options coming eventually.
Rob94hawk - Tuesday, June 3, 2014 - link
I personally would like to see results in a dedicated gaming rig.
Benjam - Tuesday, June 3, 2014 - link
I think that gaming performance is the least of your concerns with this speed monster.
jimjamjamie - Tuesday, June 3, 2014 - link
Minimal loading times and no I/O-related hitching still sounds wonderful though.
TelstarTOS - Tuesday, June 3, 2014 - link
Excellent QD1 performance is what matters for a workstation or enthusiast machine, so this is EXTREMELY promising.
stevets - Tuesday, June 3, 2014 - link
I would like to see what this could do on an ESXi host running PernixData's FVP. If supported, these cards could make that solution much more affordable from a hardware perspective.
Qasar - Tuesday, June 3, 2014 - link
Are these types of drives only going to be on PCIe, or are SATA Express drives planned as well? Depending on one's usage, finding a PCIe slot to put a drive like this in may not be possible, especially in SLI/Crossfire; add the possibility of a sound card or RAID card...
457R4LDR34DKN07 - Tuesday, June 3, 2014 - link
No, they are x4 PCIe 2.5" SFF-8639 drives. Here is a good article describing the differences between SATA Express and 2.5" SFF-8639 drives: http://www.anandtech.com/show/6294/breaking-the-sa...
Qasar - Tuesday, June 3, 2014 - link
OK, BUT that's not what I asked... will this type of drive, i.e. the NVMe type, be on some other type of connection besides PCIe x4? As I said: depending on one's usage, finding a PCIe slot to put a drive like this in may not be possible, especially in SLI/Crossfire; add the possibility of a sound card or RAID card...
Because one can quickly run out of PCIe slots, or have slots covered/blocked by other PCIe cards. Right now, for example, I have an ASUS P6T, and due to my 7970 the 2nd PCIe x16 slot is unusable and the 3rd slot has a RAID card in it. On a newer board it may be different, but still, SLI/Crossfire can quickly cover up or block slots. Hence: will NVMe-type drives also be on SATA Express?
457R4LDR34DKN07 - Wednesday, June 4, 2014 - link
Right, and what I told you is that a 2.5" SFF-8639 version is also offered. You can probably plug it into a SATA Express connector, but you will only realize 2x PCIe speeds, i.e. around 10 Gb/s.
xdrol - Tuesday, June 3, 2014 - link
It takes 5x 200 GB drives to match the performance of a 1.6 TB drive? That does not sound THAT good... Make it 8x and it's even.
Lonyo - Tuesday, June 3, 2014 - link
Now make a motherboard with 8 PCIe slots to put those drives in.
hpvd - Tuesday, June 3, 2014 - link
Sorry, only 7 :-( http://www.supermicro.nl/products/motherboard/Xeon...
:-)
hpvd - Tuesday, June 3, 2014 - link
Some technical data for the lower-capacity models can be found here: http://www.intel.com/content/www/us/en/solid-state...
Maybe this would be interesting to add to the article...
huge pile of sticks - Tuesday, June 3, 2014 - link
But can it run Crysis?
Homeles - Tuesday, June 3, 2014 - link
It can run 1,000 instances of Crysis. A kilocrysis, if you will.
Shadowmaster625 - Tuesday, June 3, 2014 - link
How is 200 µs considered low latency? What a joke. If Intel had any ambitions besides playing second fiddle to Apple and ARM, they would put the SSD controller on the CPU and create a DIMM-type interface for the NAND. Then they would have read latencies in the 1 to 10 µs range, and even less latency as they improve their caching techniques. It's true that you wouldn't be able to address more than a couple TB of NAND through such an interface, but it would be so blazing fast that it could be shadowed using SATA SSDs with very little perceived performance loss over the entire address space. Think of it as a big cache for NAND - call it L5 or whatnot. It would do for storage what L2 did for CPUs.
gospadin - Tuesday, June 3, 2014 - link
tRead on MLC NAND is > 50 µs. 10 µs latency will never be achievable with an MLC NAND back-end without a redesign of the NAND array.
mavere - Friday, June 6, 2014 - link
"I am angry because my ridiculous fantasies aren't fulfilled."
Antronman - Tuesday, June 3, 2014 - link
Fusion-io blows it all away.
TelstarTOS - Tuesday, June 3, 2014 - link
Did they make it bootable on Windows after, what, 8 years?
extide - Tuesday, June 3, 2014 - link
At what cost?
I'd definitely like to see some Fusion-io benches on this site...
TelstarTOS - Tuesday, June 3, 2014 - link
At last Intel showed its muscles again.
If something similar and cheaper doesn't come out within a 6-month timeframe, the 400GB P3600 will be my next SSD. Really curious about the SF3700 now.
Kevin G - Tuesday, June 3, 2014 - link
I love the raw speed that this delivers. It does not hold anything back in terms of performance. I'm genuinely excited about what a full-blown server implementation using x16 PCIe 3.0 could provide.
My only issue is one of capacity and cost per GB of storage. It is good to see 2 TB solutions, but honestly I was hoping for a bit more. Moving away from the 2.5" SATA, M.2 and mSATA formats should enable far more NAND packages. I can see Intel limiting these consumer/prosumer cards to lower capacities to keep the higher-capacity units in the enterprise space, where ultra-fast storage carries a higher premium. Speaking of costs, I was prepared to accept this costing a bit more, but not this much. Things like moving to an 18-channel design and the cost of the NVMe controller would make it more expensive, but not quite this much. I was hoping to see something closer to $1/GB, as the 2.5" SATA market is well below that and starting to approach $0.50/GB. Speed can carry a premium, but those lower-$/GB SATA drives are still pretty fast on their own.
Actually, are there any subjective impressions? Does the P3700 feel noticeably faster in day-to-day usage than a generic 2.5" SATA SSD?
I'd also like to see some boot testing. Generally there are some quirks here and there that crop up with technology introductions.
kaix2 - Tuesday, June 3, 2014 - link
With the P3700 drive rated at 10 DWPD, it's aimed at high-end enterprise. Expecting $1/GB for this high-end of a drive isn't realistic. Also, it's not a fair comparison if you simply compare the price of this drive to an average consumer SATA drive and don't take NAND endurance and quality into account.
Sacco_svd - Tuesday, June 3, 2014 - link
P3700: 2048 GB * $3/GB = $6,144
P3500: 2048 GB * $1.469/GB = $3,061.76
If those are going to be the prices, they're still not competitively cheap, by a big margin.
balindad - Tuesday, June 3, 2014 - link
As mentioned elsewhere, could you drop this into any open PCIe slot and have it work and be bootable with no other upgrades needed, as long as you're running Windows 8?
andrewaggb - Tuesday, June 3, 2014 - link
That's really the question, isn't it. I'm skeptical until somebody proves otherwise. Seems like you'd need a BIOS update at a minimum.
BeethovensCat - Tuesday, June 3, 2014 - link
Yes, this would be key. Would be annoying to buy a card and not be able to boot Windows from it. Would it only be possible with Z97-based chipsets, or also Z87? I have a relatively new Z87 board. As much as I don't want to change to Apple, one must admit they are better at getting some of these things right. Come on Intel (ASUS) - make it possible to boot from one of these on a Z87 motherboard and I will buy one right away!!
Taurothar - Tuesday, June 3, 2014 - link
Honestly, it's up to the motherboard's capabilities. A BIOS update should be possible, but it depends on many things, like how the PCIe lanes are distributed, etc. I wouldn't count on getting the full performance out of a chipset designed before PCIe SSDs. PCIe RAID cards have the controller to boot from built in, but these standalone SSDs mean the chipset or another onboard controller has to be able to recognize the drive, and that might not be as simple as a BIOS update.
morganf - Tuesday, June 3, 2014 - link
I was disappointed that the 4K QD1 read was no better than the 40 MB/sec that can be achieved by SATA/AHCI SSDs like the Samsung 840 Pro.
Fusion-io has been getting twice that (i.e., around 80 MB/sec) for years. I was expecting NVMe to achieve something similar.
But maybe the 40 MB/sec is an OS driver limitation? Perhaps Fusion-io is able to get around that because they have their own driver.
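For context, QD1 throughput is just the inverse of per-I/O latency, so those figures convert directly. A quick back-of-the-envelope sketch using the numbers quoted above (nothing measured here):

```python
# Convert QD1 4K throughput into an effective per-I/O latency.
BLOCK = 4 * 1024  # bytes per random read

for mb_per_s in (40, 80):  # ~SATA/AHCI QD1 figure vs. the Fusion-io figure above
    iops = mb_per_s * 1_000_000 / BLOCK
    print(f"{mb_per_s} MB/s -> {iops:,.0f} IOPS -> {1e6 / iops:.0f} us per I/O")
```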
boogerlad - Tuesday, June 3, 2014 - link
Why does the P3500 have such low 4K random write IOPS? Is it merely the worst-case/steady-state performance? Is it much lower-quality NAND? Is it a lack of overprovisioning, and not a problem if the drive is not filled to the brim? I've been waiting for a product like this for a very long time. To be honest, I was surprised Intel was the one to deliver. Looking at their CPU lineup, it seems like they checked out of making innovative products.
boogerlad - Tuesday, June 3, 2014 - link
Then again, I guess as long as 4K QD1 write speeds are the same as the P3700, it doesn't really matter. Many enthusiasts will buy the P3500 and put it under a consumer workload anyway, which rarely has QD > 1.
Dangledon - Wednesday, June 4, 2014 - link
Low random write performance is probably an indirect reflection of TLC. The endurance numbers make this pretty clear. TLC has relatively long P/E dwell times. These times become apparent when garbage collection is triggered by sustained random write workloads. I don't know whether these devices support overprovisioning. Having it might help deal with spiky workloads as long as the "runway" is long enough. Though, frankly, the P3500 was not designed for a high random write workload.
Dangledon - Wednesday, June 4, 2014 - link
My bad. They're using MLC, not TLC. The reserve/spare capacity is 7% on the P3500, which in part accounts for the relatively low endurance. Intel is probably also doing NAND part binning, using the poorest-quality parts in the P3500.
rob_allshouse - Tuesday, June 3, 2014 - link
One comment (and I do work for Intel, to be open about it)... the P3700 does all this in x4 while the P420m does it in x8, so half the PCIe lanes are consumed. I didn't see this in the article, and feel it's very relevant. It also explains the disparity in sequential read performance.
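Some rough link math to put the lane counts in context (treating the P420m as a Gen2 x8 card is an assumption here, and real-world throughput sits a fair bit below these ceilings):

```python
# Usable bandwidth per lane: raw rate (GT/s) times encoding efficiency.
def lane_gbit(gen: int) -> float:
    return {2: 5.0 * 8 / 10, 3: 8.0 * 128 / 130}[gen]  # Gen2 uses 8b/10b, Gen3 uses 128b/130b

for name, gen, lanes in (("P3700", 3, 4), ("P420m", 2, 8)):
    print(f"{name}: x{lanes} Gen{gen} link ceiling ~{lanes * lane_gbit(gen) / 8:.2f} GB/s")
```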
mfenn - Tuesday, June 3, 2014 - link
I find it interesting that this article is presented as an enterprise SSD review and even goes so far as to decry the performance of previous implementations, but does not mention Fusion-io or Virident. We've had 500K IOPS and latencies in the tens of microseconds for years now without Intel or NVMe; those are not the stories here.
NVMe is not some wonderful advance from a performance point of view, and should not be presented as such. What it is is a path towards the commoditization of relatively high-performance PCIe SSDs. That's an incredibly important achievement and should have been the focus of the discussion.
As it stands, this article follows the Intel marketing tune a little too closely and does not reflect the deep market insight that I've come to expect from AnandTech.
will792 - Tuesday, June 3, 2014 - link
How do you hardware-RAID these drives?
With SATA/SAS drives I can use LSI/Adaptec controllers and mirror/striping/parity configurations to tune performance, reliability and drive-failure recoverability.
iwod - Wednesday, June 4, 2014 - link
While NVMe only uses a third of the CPU power, that's still quite a lot to achieve those IOPS - although consumer applications would/should hardly ever see those numbers in real life.
We really need PCIe to get faster and gain more lanes. The Ultra M.2 promoted by ASRock was great: direct CPU connection, x4 PCIe 3.0, lots and lots of headroom to work with - compared to the upcoming soon-to-be standard, which will easily get saturated by the time it arrives.
juhatus - Wednesday, June 4, 2014 - link
You should really, really explore how to make this a bootable Win 8.1 drive on Z97. Is it possible or not? With M.2 support on Z97 it really shouldn't be a problem?
Mick Turner - Wednesday, June 4, 2014 - link
Was there any hint of a release date?
7Enigma - Wednesday, June 4, 2014 - link
Why is the S3700 200GB drive being used as the comparison to this gigantic 1.6TB monster? Unless there is something I don't understand, it has always been the case that a larger drive (with more channels used) can deliver significantly higher performance than a smaller drive (with fewer channels). The S3700 came in an 800GB version; that one, IMO, would be more representative of the improvements in the P3700.
shodanshok - Wednesday, June 4, 2014 - link
Hi Anand,
I have some questions regarding the I/O efficiency graphs on the "CPU utilization" page.
What performance counter did you watch when comparing CPU storage load?
I'm asking because if you use the classical "I/O wait time" (common on Unix and Windows platforms), you are basically measuring the time the CPU is waiting for storage, *not* its load.
The point is that while the CPU is waiting for storage, it can schedule another readily-available thread. In other words, while it waits for storage, the CPU is free to do other work. If this is the case, it means that you are measuring I/O performance, *not* I/O efficiency (IOPS per CPU load).
On the other hand, if you are measuring system time and IRQ time, the CPU load graphs are correct.
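A minimal sketch of the distinction, in case it helps (Linux-only; the workload and I/O count are placeholders to be filled in from fio or whatever tool was used - an illustration, not necessarily how the article measured it):

```python
# Count only kernel time (system + irq + softirq); iowait is idle time and is excluded on purpose.
# Crude: system time includes all kernel work on the machine, not just the storage path.
import os, time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # jiffies per second

def kernel_cpu_seconds() -> float:
    with open("/proc/stat") as f:
        # "cpu" line: user nice system idle iowait irq softirq steal ...
        user, nice, system, idle, iowait, irq, softirq = map(int, f.readline().split()[1:8])
    return (system + irq + softirq) / CLK_TCK

busy0, t0 = kernel_cpu_seconds(), time.time()
total_ios = 0  # placeholder: run the benchmark here and take the completed I/O count from its output
busy, elapsed = kernel_cpu_seconds() - busy0, time.time() - t0

if busy > 0 and elapsed > 0:
    print(f"IOPS: {total_ios / elapsed:,.0f}  IOPS per CPU-second of kernel work: {total_ios / busy:,.0f}")
```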
Regards.
Ramon Zarat - Wednesday, June 4, 2014 - link
NET NEUTRALITY
Please, share this video: https://www.youtube.com/watch?v=fpbOEoRrHyU
I wrote an e-mail to the FCC, called them and left a message, and went on their website to file my comment. Took me 5 insignificant minutes. Do it too! Don't let those motherfuckers run over you! SHARE THIS VIDEO!!!!
Submit your comments here http://apps.fcc.gov/ecfs/upload/begin?procName=14-... It's proceeding # 14-28
#FUCKTHEFCC #netneutrality
underseaglider - Wednesday, June 4, 2014 - link
Technological advancements improve the reliability and performance of the tools and processes we all use in our daily routines. Whether for professional or personal needs, technology allows us to perform our tasks more efficiently in most cases.
aperson2437 - Thursday, June 5, 2014 - link
Sounds like once these SSDs get cheap, they are going to eliminate the aggravation of waiting forever for computers to do certain things like loading big programs and games. I can't wait to get my hands on one. I'm super impatient when it comes to computers. Hopefully there will be some intense competition for these NVMe SSDs from Samsung and others, and prices will come down fast.
Shiitaki - Thursday, June 5, 2014 - link
No, it was not out of necessity. SSDs have used SATA because the makers lacked vision, were lazy, or whatever other excuse. PCI Express has been around for years, and so has AHCI. There is no reason there couldn't be a single strap on a PCI Express card to change between operating modes, like AHCI for older machines and whatever this new thing is. An SSD is largely a RISC computer that overwhelmingly provides its functionality using software. mSATA should never have existed; if you have to have a controller anyway, why not PCI Express? After all, SATA controllers connect to PCI Express.
SSDs could have been PCI Express in 2008. Those early drives, however, were terrible, and didn't need the bandwidth or latency, so there was no reason. They were too busy trying to get NAND flash working to bother worrying about other concerns.
Even now, most flash drives being sold are not capable of saturating SATA 3 even on sequential reads. I'm going to jab Kingston again here about their dishonest V300, but Micron's M500 isn't pushing any limits either. Intel SSDs should be fast - this isn't news - but they have been horribly overpriced. What is news is that the price is now justified.
Why isn't the new spec internal Thunderbolt? Oh yeah, they've got to make money on licensing! Why make money producing products when it is so much easier to cash royalty checks? The last thing the PC industry needs is another standard to do something that can already be done 2 other ways, but then we'd need a jobs program making adapters. Those two ways are PCI Express and Thunderbolt.
At some point the hard drive should be replaced by a PCI-express full length card that accepts NAND cards, and the user simply buys and keeps adding cards as space is required. This can already be done with current technology, no reinventing the wheel required.
jamescox - Monday, June 9, 2014 - link
"Why isn't the new spec internal thunderbolt?"Thunderbolt is PCI-e x4 multiplexed with display port. Intel's new SSD is PCI-e x4. I don't think we have a reason to route display port to an SSD, so thunderbolt makes no sense. Was this sarcasm that I missed?
"At some point the hard drive should be replaced by a PCI-express full length card that accepts NAND cards, and the user simply buys and keeps adding cards as space is required. This can already be done with current technology, no reinventing the wheel required."
What protocol are the NAND cards going to use to talk to the controller? There are many engineering limitations and complexities here. Are you going to have an 18-channel controller like this new Intel card? If you populate channels one at a time, then it isn't going to perform well until you populate many of the channels. This is just like system memory; quad-channel systems require 4 modules from the start to get full bandwidth. It gets very complicated unless each "NAND card" is a full PCIe card by itself, and each one being a separate PCIe card is no different from just adding more PCIe cards to your motherboard. Due to this move to PCIe, motherboard makers will probably be putting in more x4 and/or x8 PCIe slots with different spacing from what is required by video cards. This will allow users to just add a few more cards to get more storage. It may be useful for small-form-factor systems to make a PCIe card with several M.2 slots, since several different types of things (or sizes of SSDs) can be plugged into it. This isn't going to perform as well as having the whole card dedicated to being a single SSD, though. I don't think you can fit 18 channels on an M.2 card at the moment.
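As a toy model of the channel-population point (the per-channel figure below is invented purely for illustration):

```python
# Aggregate throughput roughly tracks populated channels, like memory channels:
# a one-card-at-a-time design crawls until most of the controller's channels are filled.
PER_CHANNEL_MB_S = 180  # made-up per-channel NAND throughput

for populated in (1, 2, 4, 9, 18):
    peak = populated * PER_CHANNEL_MB_S / 1000
    print(f"{populated:2d}/18 channels populated -> ~{peak:.2f} GB/s peak sequential")
```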
Anyway, most consumer applications will not really benefit from this. I don't think you would see too much difference in "everyday usability" using a PCIe card vs. a fast SATA 6Gb/s drive. Most consumer applications are not going to even stress this card. I suspect that SATA 6Gb/s SSDs will be around for a while. The SATA Express connector seems like a kludge, though. If you actually need more performance than SATA 6Gb/s (for what?), just get the PCIe card version.
jeffbui - Thursday, June 5, 2014 - link
"I long for the day when we don't just see these SSD releases limited to the enterprise and corporate client segments, but spread across all markets"Too bad Intel is all about profit margin. Having to compete on price (at low profit margins) gives them no incentive to go into the consumer space.
sethk - Saturday, June 7, 2014 - link
Hi Anand,
Long-time fan, and I love your storage and enterprise articles, including this one. One question - what's the driver situation on NVMe as far as dropping one of these into existing platforms (consumer and enterprise) and being able to boot?
Another question is regarding cabling for the non-direct PCIe interfaces like SATA Express and SFF-8639. It would be great if you could have some coverage of these topics, and of timing for consumer availability, when you do your inevitable articles on the P3600 and P3500, which seem like great deals given the performance.
T2k - Monday, June 16, 2014 - link
How come you did not include ANY Fusion-io card? In the enterprise space they are practically cheaper than the P3700 and come in far bigger sizes for less money, with consistently low latency, not to mention advanced software to match... was it a request from Intel to leave them out?
SeanJ76 - Wednesday, May 20, 2015 - link
Intel has always been the best in SSD performance and longevity. I own 3 of the 520 series (240GB) and have never had a complaint.