57 Comments
itanic - Monday, October 15, 2012 - link
Isn't this essentially just what FusionIO has been doing all along, except without all the drivers/software surrounding it?
FunBunny2 - Monday, October 15, 2012 - link
That's my understanding. As to who IDT is?
webdev511 - Monday, October 15, 2012 - link
IDT (Integrated Device Technology) has been doing IC controllers for a Loooooong time.
iamkyle - Wednesday, October 17, 2012 - link
How can you not remember the IDT WinChip!?
ericgl21 - Monday, October 15, 2012 - link
I wish they could have made a bootable SSD card, so the UEFI BIOS can see it as any other storage component. After all, it would be nice to have an OS installed directly on it.
extide - Monday, October 15, 2012 - link
For home/power users, yeah, you'd want that, but for this thing's target market booting from it is not an issue.
blackmagnum - Monday, October 15, 2012 - link
Want SSD storage at the sweet spot of 1 dollar per gigabyte.
Zak - Monday, October 15, 2012 - link
There are plenty of SSDs, although SATA not PCIe, for less than $1 per gigabyte.
colonelpepper - Monday, October 15, 2012 - link
$2,000+ USD for a 2TB drive??? ...In other news, I'd like to buy a 32-inch LCD screen at the sweet-spot price of $16,000.
melgross - Monday, October 15, 2012 - link
There's a sweet spot for RAM (in varying sizes), HDDs, and SSDs; they're all different.
speculatrix - Tuesday, October 16, 2012 - link
Flash memory comes in different forms and longevities. The best SLC flash will outperform and outlast the cheapest TLC flash by considerable margins, but at a cost.
Car analogy: do you want a performance car which is extremely well built and designed to survive in harsh environments, or a cheap family car which can only survive a Californian "winter"?
milkod2001 - Monday, October 15, 2012 - link
They have to make this thing bootable and sell a 120GB version for 300 bucks max; until then, this is just like reading about NASA's new spaceship (interesting, but of no use to 99% of users).
kevith - Monday, October 15, 2012 - link
Spot on!
A5 - Monday, October 15, 2012 - link
If you only care about consumer-ready cheap stuff, there are plenty of really boring tech sites out there for you to read.
ender8282 - Monday, October 15, 2012 - link
I was just talking with a co-worker a little while ago about how a device like this would give us more headroom. It is extremely relevant to our workload, and I'm glad to see this article. Just because it isn't for home use doesn't mean you should ignore it. If this technology works (and works well) in the enterprise market, it will likely eventually trickle down to the consumer market. I would have liked to see prices, but if this is comparable in price to a high-end RAID controller plus a bunch of fast SSDs, it will likely be an option for lots of people.
taltamir - Monday, October 15, 2012 - link
This is not for home users; it's for enterprises with a lot of money and extreme speed needs.
mattlach - Tuesday, October 16, 2012 - link
It's not intended for you, or me, or any other consumers.
This is an enterprise server part, and for that market it's probably priced about right.
I would LOVE to see a bootable native PCIe consumer drive with MLC NAND priced more reasonably for the consumer market, and we likely will in the not TOO distant future, but they aren't here yet.
zlyles - Tuesday, October 16, 2012 - link
What you fail to realize is these SSDs (Micron P320h, Intel 910 and OCZ Z-Drive) are targeted at enterprise markets. The majority of these enterprise customers would purchase these drives as high-IOPS storage or for running virtual servers/desktops. Most of these companies are likely using a virtualized solution (VMware ESXi), in which case the hypervisor is loaded into RAM at boot anyway; there's not much point in booting the OS from the SSD.
If you are looking for a solution for 99% of users, I'm not really sure why you are even looking at PCI-e SSDs; there is a VERY small market for PCI-e SSDs (bootable or not) on the consumer side.
JellyRoll - Monday, October 15, 2012 - link
Man! For the tens of thousands of dollars that customers would spend on one of these SSDs, there is no way in the world that anyone would ever run one under QD 128, ever.
These are designed to be run 100% full bore, not at QD 1. These benchmarks are totally irrelevant, and have no real meaning. This is like testing a NASCAR vehicle in a small parking lot.
Sivar - Monday, October 15, 2012 - link
You're right, who in their right mind would run all these irrelevant real-world database tests to see actual performance in the target market?
I'm sure you're speaking from a lot of industry experience. I'm also sure you've never seen a sustained queue depth of 128 on any real-world system.
JellyRoll - Monday, October 15, 2012 - link
Of course, you have absolutely no experience with virtualization, which would mean that for your archaic workloads you wouldn't need something of this nature. Users that purchase this will not be running one database at such low queue depths; that would be an insane waste of money.
This is designed for high-load OLTP and virtualized environments, not to run the database of one website.
You may be in IT at some small company, but you haven't seen anything at datacenter scale, apparently.
DataC - Tuesday, October 16, 2012 - link
JellyRoll is correct. I work for Micron, and we developed the P320h's controller and firmware through collaboration with enterprise OEMs, which is why we optimized for higher queue depths. When the P320h is run in these environments (which are common in datacenters), you'll see significantly higher performance than what's shown in the charts above.
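(Editorial aside: "queue depth" here means the number of I/O requests kept outstanding against the device at once. The sketch below is a minimal, hypothetical illustration of sustaining a high queue depth from application code; it is not the tooling Micron or the review used, the device path and counts are placeholders, and a real benchmark such as fio or IOMeter would use O_DIRECT and asynchronous I/O instead.)

```python
import os
import random
from concurrent.futures import ThreadPoolExecutor

PATH = "/dev/sdb"      # placeholder: any large test file or block device you can read
BLOCK = 4096           # 4 KiB requests, as in the random-read benchmarks
OUTSTANDING = 128      # approximate queue depth to sustain
TOTAL_IOS = 100_000    # illustrative count

def random_read(fd, size):
    """Issue one pread at a random block-aligned offset."""
    offset = random.randrange(0, size // BLOCK) * BLOCK
    return len(os.pread(fd, BLOCK, offset))

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

# Python threads release the GIL while blocked in pread, so 128 workers keep
# roughly 128 requests in flight against the device at any moment.
# Note: without O_DIRECT these reads may be served from the page cache;
# real benchmarks bypass the cache.
with ThreadPoolExecutor(max_workers=OUTSTANDING) as pool:
    done = sum(pool.map(lambda _: random_read(fd, size), range(TOTAL_IOS)))

os.close(fd)
print(f"read {done} bytes across {TOTAL_IOS} random 4 KiB requests")
```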
jospoortvliet - Tuesday, October 16, 2012 - link
Yup. And it should be tested on a proper enterprise platform - this test is like running a NASCAR vehicle with the handbrakes on.
Time for an upgrade to a real OS, Anand.
Denithor - Monday, October 15, 2012 - link
Would have liked to see the fastest consumer-grade drive thrown in just to see exactly how much faster enterprise drives go. Also would like to see how this drive would perform in the standard Light and Heavy Bench tests.
FunBunny2 - Monday, October 15, 2012 - link
Actually, against a Fusion-io part, the closest example.
jwilliams4200 - Monday, October 15, 2012 - link
Right, enterprise drives should get all the standard consumer SSD tests run on them in addition to the enterprise tests.
mckirkus - Wednesday, October 17, 2012 - link
And I'd argue a RAMDisk should be included just to get a sense of relative performance.
Kevin G - Monday, October 15, 2012 - link
I'm kinda surprised that there wasn't as much discussion about the effects of the native PCI-e controller. Lower latency results do crop up in various benchmarks here. I wonder if the impact is merely 'benchmark only' and not anything that'd be noticeable in more real-world tests.
By going with 34 nm SLC, they have limited capacity, but this article seems to indicate that the controller is capable of supporting MLC in the 20 to 30 nm range. That would allow it to hit the 4 TB maximum capacity of the controller. I'm also curious how such a change would perform. The current P320h does need a PCI-e 2.0 8x connection, as some of the benchmarks are (barely) exceeding what a PCI-e 2.0 4x link can provide. With faster NAND, a move to PCI-e 3.0 8x or PCI-e 2.0 16x may be warranted.
I'm also curious if multiple P320h's can be used in a system behind a RAID. Overkill the overkill?
Now for a few general comments about NVMe. I'd love to see NAND chips on DIMMs at the enterprise level. If the controller detects NAND failure or chips reaching their maximum endurance, they could potentially be swapped out. This is akin to current ECC DIMMs. Along those same lines it would be nice to see a SAS or SATA port on the board so that it could fail over to a hard drive in the event of multiple impending NAND failures. The main reasoning I can see to avoid DIMMs would simply be physical space.
This is also a good preview of what to expect with SATA-Express drives next year. They won't reach such bandwidth figures as they'll be limited to two PCI-e lanes but the latency improvements should carry over with a good controller.
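(Editorial aside: the lane math in this comment is easy to check. The per-lane rates are the public PCIe figures: 5 GT/s with 8b/10b encoding for Gen 2, 8 GT/s with 128b/130b for Gen 3. The sketch below is just that arithmetic and ignores packet/protocol overhead, so real-world numbers land somewhat lower.)

```python
# Rough per-direction PCIe link throughput, ignoring packet/protocol overhead.
GEN = {
    "2.0": (5.0e9, 8 / 10),     # 5 GT/s per lane, 8b/10b encoding
    "3.0": (8.0e9, 128 / 130),  # 8 GT/s per lane, 128b/130b encoding
}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    rate, efficiency = GEN[gen]
    return rate * efficiency * lanes / 8 / 1e9  # bits/s -> GB/s

for gen, lanes in [("2.0", 2), ("2.0", 4), ("2.0", 8), ("2.0", 16), ("3.0", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")
# PCIe 2.0 x4 tops out around 2 GB/s, which some of the sequential results
# approach; x8 gives ~4 GB/s of headroom, and a two-lane SATA Express-style
# link on Gen 2 is limited to roughly 1 GB/s.
```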
PCTC2 - Monday, October 15, 2012 - link
You could probably just do an OS-level software stripe (like in Linux). I think that would be more beneficial in terms of usable capacity than in terms of the increase in performance. However, the increase in performance could be tangible, depending on your workload.
As for the link, I think performance is constrained more by the controller than by the NAND. I don't think we need PCIe 3.0 or PCIe 2.0 x16 links for this iteration of the controller; I don't think it would saturate the link. As you said, some of the tests don't even saturate a PCIe x4 link, if you don't include overhead (there is overhead).
Also, Anand did point out a 25nm eMLC version is coming out in the future.
As for putting chips on DIMMs, for a HH/HL PCIe card, that is a waste of space, as you said yourself. Between the controller, DRAM, and then the NAND, the sockets would just take up space. The daughterboard direction allows a much more compact, proprietary size depending on the board itself. If you wanted a FH/HL card, I'm sure DIMMs would be more possible.
FunBunny2 - Monday, October 15, 2012 - link
Check out the Sun/Oracle flash appliance. Other niche enterprise flash storage products also exist.
PCTC2 - Monday, October 15, 2012 - link
I think some of you guys are missing the point. This is an enterprise drive. You are not going to be booting off of it. You are not going to find it cheap or in smaller sizes. This drive, if it was a car, would be the unholy child of a European supercar and a Marauder. I could put one of these in a compute cluster and slam it 24/7 and it would be happy. And I would be happy, because it means that I don't have to worry about hitting NAND endurance limits and I have a low-latency, highly parallelized storage device.
So no. I (and probably anyone else who deals with enterprise hardware) don't care that it isn't bootable. I don't want it bootable. I don't care that it probably costs $5000+ for 700GB. It's cheaper in the long run. If it were anywhere close to $300, you would probably have 128GB of raw eMLC NAND before over-provisioning/RAIN/etc. Who in the industry would want such a small PCIe SSD when its strength is the large number of channels and large capacity?
But would I give my right testicle to be able to eval one of these units, possibly buying enough for all of the servers? Yes. I probably would.
DukeN - Tuesday, October 16, 2012 - link
So you'd buy this for all your servers without figuring out how to add disk redundancy for these things?
Or if you could RAID them, how that would affect the lifetime and TRIM-related performance.
DataC - Tuesday, October 16, 2012 - link
Dear PCTC2, I'm with Micron and our engineering team loves your Marauder/Supercar analogy. If you're serious about that eval unit, we should chat. You can reach me at [email protected].
And I promise our eval terms don't require quite as much commitment as you've suggested....
DukeN - Monday, October 15, 2012 - link
DukeN - Monday, October 15, 2012 - link
Sorry, but no enterprise is putting their Oracle or MSSQL clusters on a platform based just on individual disk benchmarks.
Numbers compared to disk arrays, SAN devices, etc. would be welcome. Also, no enterprise will run something like this without redundancy, which brings up another slew of questions - TRIM, wear-levelling, etc.
Thanks
SQLServerIO - Monday, October 15, 2012 - link
There are only two cards that I'm aware of that are similar: Fusion-io with their line of cards, and Texas Memory Systems, which was acquired by IBM recently.
The big difference between this card and the Fusion-io cards is where they store their LBAs and block mappings. The TMS drive and this drive both store that on the drive, either in DRAM or flash on the PCB. The Fusion-io cards use your system memory.
On the latency side, Fusion-io has very solid latency numbers even on their MLC products. Between the native PCI-e interface and the use of SLC, this card is very competitive.
I am worried about the driver issues; this is a HUGE problem for those of us running on Windows. TMS and Fusion-io have both had driver problems, but with products that have been on the market for several years they have now ironed them out. Micron, being very late to the game, can't afford to have these issues at launch. Even though they are disclosing them, it will cut them off from a lot of the smaller shops that would buy from them.
I would like to know how many channels are active at a time out of the 32 available. If they come back and say all 32, that is also concerning, as it points to bottlenecks in their custom ASIC with this much SLC flash on board.
Just my thoughts.
Wes - www.sqlserverio.com
FunBunny2 - Monday, October 15, 2012 - link
And no encryption? In an enterprise drive? Also, the R/W performance difference is puzzling.
jospoortvliet - Tuesday, October 16, 2012 - link
You should run a load which requires hardware like this on an enterprise OS, not a playtoy like Windows... I think the comment about 32 vs 512 QD made that clear already. MS is nice for SMBs with unskilled workers due to familiarity, but price/performance-wise it's crappy (requires lots of maintenance) and
Your choices would be:
https://www.suse.com/products/server/
http://www.redhat.com/products/enterprise-linux/
That's what you use, for example, at the London Stock Exchange. Just google "london stock exchange linux dotnet" and see how MS failed at a really demanding workload and the Stock Exchange lost billions on a bad bet.
But I guess you'll have to ignore all that as you're not trained for anything else :D
jospoortvliet - Tuesday, October 16, 2012 - link
(missing part would of course be "... and it fails completely at more demanding workloads")
DataC - Tuesday, October 16, 2012 - link
I'd like to respond to the drivers concern. I work for Micron in our SSD organization and can certify that we have fully tested our drive in two server-class Windows-based operating systems (in addition to Linux). These are Windows Server 2008 and 2012. This is an enterprise-class drive and as such we currently do not support desktop operating systems such as Windows 7. Some of the chipset compatibility issues, like the H67, also fall into the category of desktop systems, and as such we do not explicitly support them. Understandably, while this makes reviewing the card somewhat difficult (most reviewers don't want to spend $10K+ on a server), we need to be clear that this is not a driver maturity issue but a conscious decision we made to support datacenter, server-grade hardware and OSs.
boogerlad - Monday, October 15, 2012 - link
Ah, just the review I was waiting for. This drive isn't usable as an OS boot drive? How unfortunate...
zlyles - Wednesday, October 17, 2012 - link
Because it wasn't meant to be... It makes me laugh to see how many people think Micron, Intel, and OCZ are developing these PCI-e SSDs for the consumer market. This is an enterprise-class drive, and as such is not meant to be a bootable drive unless you are booting VMs on a hypervisor.
Cloakstar - Monday, October 15, 2012 - link
I get the impression these tests did not stress this drive.
Compare the relationship between disk busy time and average QD for the test to the other drives. The higher the QD, the lower the relative disk busy time compared to the competition.
-In IOMeter tests with QD32, no disk busy time is recorded, but the drive is in a solid lead for random noncompressible data throughput.
-The poorest numbers for this drive happen at lowest QD.
-The highest QD listed for any test here is 32.
-"Micron claims much higher sequential read/write numbers under Linux at 256 concurrent IOs."
apmon2 - Tuesday, October 16, 2012 - link
"I get the impression these tests did not stress this drive."Yes, that seems to be the case. thessdreview.com have what looks like a really nice review [1] including tests of a QD up to 512. There one can see that it achieves only about a 1/3 of its peak performance with a QD of 32. Not till a QD of 128 or even 256 does it achieve its full potential. Then however it seems to perform truly amazing, and is able to completely saturate the 8x PCIe 2.1 bus with 4k random reads! It supposedly can sustain 3.3GB/s of 4kb random reads.
Even at small 512B read requests it can, according to them, still achieve on the order of 600MB/s, which is well in excess of 1.5 million IOPS. Even then the limiting factor was the CPU, not the device, despite using a Core i7 overclocked to 4.9 GHz.
So if those numbers are true, Anand didn't even come close to stressing this SSD to its limit (or intended purpose).
[1] http://thessdreview.com/our-reviews/micron-p320h-h...
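(Editorial aside: the two headline figures quoted above can be cross-checked, since throughput is just IOPS multiplied by request size. The snippet below is only that arithmetic, using the 512 B and 4 KiB request sizes mentioned in the comment.)

```python
# Back-of-the-envelope check of the quoted figures: throughput = IOPS * request size.
def throughput_gbs(iops: float, request_bytes: int) -> float:
    return iops * request_bytes / 1e9

def iops_from_throughput(gbs: float, request_bytes: int) -> float:
    return gbs * 1e9 / request_bytes

# 1.5 million IOPS at 512 B requests:
print(f"{throughput_gbs(1.5e6, 512):.2f} GB/s")        # ~0.77 GB/s, i.e. hundreds of MB/s
# 3.3 GB/s of 4 KiB random reads:
print(f"{iops_from_throughput(3.3, 4096):,.0f} IOPS")  # ~806,000 IOPS
```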
bthanos - Monday, October 15, 2012 - link
Hi Anand,
Nice article; however, the Micron P320h is not an NVMe interface drive. It's PCIe Gen 2, AHCI.
Jaybus - Tuesday, October 16, 2012 - link
As stated in the article, the drive is using IDT's 32-channel PCIe Gen 3 x8 controller, but operating it in Gen 2 mode. Since x8 Gen 2 is sufficiently faster than the drive is capable of, it is a good choice, as it allows compatibility for use in systems without Gen 3 slots. IDT claims full compliance with the NVM Express standard. See http://www.idt.com/products/interface-connectivity... for controller specs.
Looks like an NVM Express drive to me. Why would you say it is not?
bthanos - Tuesday, October 16, 2012 - link
Because the IDT controllers released at Flash Memory Summit are new SoCs; the Micron P320h drive is using a previous, jointly developed SoC which is not NVMe. See the comment from Anand.
colonelpepper - Monday, October 15, 2012 - link
On the issue of durability or whatever you want to call it...
"50 petabytes of writes" is totally meaningless marketing intellect abuse.
It only takes on a meaning if you were to fill up the entire drive at once and then erase it and then write to the entire volume again etc until you reached 50 petabytes of writes.
Show me a hard drive that is ever used like that and I'll donate the pot of gold I've got stashed out back to your favorite charity.
DataC - Tuesday, October 16, 2012 - link
Colonel Pepper, at Micron we spec TBW, or total bytes written, but it's closely related to another standard you'll see in the enterprise industry: "X drive fills per day for 5 years." The two specs are simply different ways to express the same number. The spec tracks the amount of bytes you can write to the SSD before the NAND exceeds its wear life and reverts to a write-protect (read-only) mode. It includes any and every write ever made to the drive, not just the full drive fills and erases you've described.
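(Editorial aside: converting between the two endurance specs described above is straight arithmetic. The capacity and fill-rate numbers in the sketch below are illustrative only, built from the 700GB and 50 PB figures mentioned earlier in the thread, and are not official P320h ratings.)

```python
# Converting between "total bytes written" and "X drive fills per day for N years".
def tbw_from_drive_fills(capacity_tb: float, fills_per_day: float, years: float = 5) -> float:
    """Total terabytes written implied by a drive-fills-per-day rating."""
    return capacity_tb * fills_per_day * 365 * years

def fills_per_day_from_tbw(tbw_tb: float, capacity_tb: float, years: float = 5) -> float:
    """Drive fills per day implied by a total-bytes-written rating."""
    return tbw_tb / (capacity_tb * 365 * years)

# Illustrative example: a 0.7 TB drive rated for 50 PB (50,000 TB) of writes over 5 years.
print(f"{fills_per_day_from_tbw(50_000, 0.7):.0f} drive fills per day")   # ~39
print(f"{tbw_from_drive_fills(0.7, 39):,.0f} TB written over 5 years")    # ~49,800
```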
rrohbeck - Monday, October 15, 2012 - link
What I don't understand though is why SSD controllers don't have PHYs that can talk PCIe as well as SATA/SAS. Then manufacturers could leverage the high-volume/high-performance SATA/SAS designs for PCIe too. The firmware would probably be even simpler.
DanNeely - Monday, October 15, 2012 - link
Because it would be adding additional complexity, die size, and cost to mass-produced consumer parts with very thin margins. All of which are good ways to go out of business.
Jaybus - Tuesday, October 16, 2012 - link
Any particular drive is going to be one or the other. Why make a single complex controller when you can make two targeted controllers?
crackedwiseman - Monday, October 15, 2012 - link
Any chance we could see some power consumption numbers on the various PCI-E SSDs? It would be interesting to see if the higher levels of integration in the NVMe controller solution are reflected by power savings. Also, it's a shame it's not PCI-E 3.0 certified (although I'm sure it will be, given time) - it's not that the extra bandwidth is necessary, but you could achieve the same bandwidth with fewer lanes.
cosmotic - Wednesday, October 17, 2012 - link
Where are the test results from something like an Areca 1882 filled with high-performance SSDs?
N00dles71 - Wednesday, October 17, 2012 - link
It's funny to see all the dreamers here wishing it booted, so that when they won the lotto they could have one to go with their quad-chip, triple-SLI dream machine.
Seriously though, the SLC makes it a good sell to the boss who is worried about it dying before its time. With SAS hard drives/SSDs you just pull the dead one out and swap in a new one; not so easy with these cards if you don't have them mirrored. Add the current market cost of a second card for the mirror and it does become a hard sell. You could also come to rely on these running a bit too well, so that a failure becomes a massive concern. So far we only trust the HP ones to store data that can either easily be restored, or that comes from our prime business of massive-scale market data capture and is therefore mostly useless after 30 minutes.
We have just started using the HP-badged (Fusion-io) IO Accelerator and are well impressed with its performance. We would love to start using it in more servers, but even with preferential HP pricing these are not cheap. If this thing were certified to run in ProLiant DL38x & DL58x servers at $3000, I think the market would just have been tipped on its head. I can't see the competition getting much cheaper than around $8000 on the HP cards, so Micron is either going to go in hard and disruptive, or they might settle for a "cheaper" price closer to the competition to keep margins high. It would be a shame if they did; these devices are just about ready for a big push into an enterprise market that is ripe for someone to come in and sweep it all up. We would have bought 4 times the number of cards versus the HP ones.
klmccaughey - Friday, November 2, 2012 - link
This is the second Micron review you have done for which there is no available product. The last one was their new SSD drive, which still hasn't appeared. I am in the UK. I wrote to them about the last review you did, and they said that although Micron was their parent company, they had no information on when a product would or would not be available.
I don't understand why they send you products for review, yet even months later (nearly a year for the first one?) they aren't shipping the product.
I can understand proof of concept and all that, and I love this PCIe card, but it's all fantasy IT until we can actually get our hands on it :)
klmccaughey - Friday, November 2, 2012 - link
I meant Crucial, by the way - Crucial are still only on the M4, and there's no sign of any of the stuff they send you ending up on the shelf ;)
snozzy - Tuesday, November 6, 2012 - link
Go to cdw.com and search for "micron p320". You can purchase the exact same card used in this review.
You can also buy these parts through Dell in a 2.5" form factor, along with a server. Go configure an R620/R720/R820 with the PCIe SSD option.