Original Link: https://www.anandtech.com/show/7843/testing-sata-express-with-asus



During the hard drive era, the Serial ATA International Organization (SATA-IO) had no problems keeping up with the bandwidth requirements. The performance increases that new hard drives provided were always quite moderate because ultimately the speed of the hard drive was limited by its platter density and spindle speed. Given that increasing the spindle speed wasn't really a viable option for mainstream drives due to power and noise issues, increasing the platter density was left as the only source of performance improvement. Increasing density is always a tough job and it's rare that we see any sudden breakthroughs, which is why density increases have only given us small speed bumps every once in a while. Even most of today's hard drives can't fully saturate the SATA 1.5Gbps link, so it's obvious that the SATA-IO didn't have much to worry about. However, that all changed when SSDs stepped into the game.

SSDs no longer relied on rotational media for storage but used NAND, a form of non-volatile storage, instead. With NAND the performance was no longer dictated by the laws of rotational physics because we were dealing with all solid-state storage, which introduced dramatically lower latencies and opened the door for much higher throughputs, putting pressure on SATA-IO to increase the interface bandwidth. To illustrate how fast NAND really is, let's do a little calculation.

It takes 115 microseconds to read 16KB (one page) from IMFT's 20nm 128Gbit NAND. That works out to be roughly 140MB/s of throughput per die. In a 256GB SSD you would have sixteen of these, which works out to over 2.2GB/s. That's about four times the maximum bandwidth of SATA 6Gbps. This is all theoretical of course—it's one thing to dump data into a register but transferring it over an interface requires more work. However, the NAND interfaces have also caught up in the last couple of years and we are now looking at up to 400MB/s per channel (both ONFI 3.x and Toggle-Mode 2.0). With most client platforms being 8-channel designs, the potential NAND-to-controller bandwidth is up to 3.2GB/s, meaning it's no longer a bottleneck.
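If you want to reproduce the arithmetic, here's a minimal back-of-the-envelope sketch in Python using the page size, read latency, die count, and per-channel figures quoted above (nothing here is measured; it's just the math spelled out):

```python
# Back-of-the-envelope NAND bandwidth math using the figures quoted above.

PAGE_BYTES = 16 * 1024      # one 16KB page of IMFT 20nm 128Gbit NAND
READ_LATENCY_S = 115e-6     # 115 microseconds to read one page

per_die_mb_s = PAGE_BYTES / READ_LATENCY_S / 1e6
print(f"Per die:     ~{per_die_mb_s:.0f} MB/s")                # roughly 140 MB/s

dies = 256 // 16            # a 256GB SSD built from 128Gbit (16GB) dies
print(f"16 dies:     ~{per_die_mb_s * dies / 1000:.1f} GB/s")  # over 2.2 GB/s, ~4x SATA 6Gbps

# Controller side: ONFI 3.x / Toggle-Mode 2.0 at up to 400MB/s per channel,
# eight channels in a typical client design
print(f"8 channels:  {400 * 8 / 1000:.1f} GB/s")               # 3.2 GB/s
```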

Given the speed of NAND, it's not a surprise that the SATA interface quickly became a bottleneck. When Intel finally integrated SATA 6Gbps into its chipsets in early 2011, SandForce immediately came out with its SF-2000 series controllers and said, "Hey, we are already maxing out SATA 6Gbps; give us something faster!" The SATA-IO went back to the drawing board and realized that upping the SATA interface to 12Gbps would require several years of development and the cost of such rapid development would end up being very high. Another major issue was power; increasing the SATA protocol to 12Gbps would have meant a noticeable increase in power consumption, which is never good.

Therefore the SATA-IO had to look elsewhere to provide a fast yet cost-efficient standard in a timely manner. Given these constraints, it made the most sense to build on an already existing interface, specifically PCI Express, to shorten time to market and cut costs.

                      Serial ATA              PCI Express
                      2.0        3.0          2.0                         3.0
Link Speed            3Gbps      6Gbps        8Gbps (x2) / 16Gbps (x4)    16Gbps (x2) / 32Gbps (x4)
Effective Data Rate   ~275MBps   ~560MBps     ~780MBps / ~1560MBps        ~1560MBps / ~3120MBps (?)

PCI Express makes a ton of sense. It's already integrated into all major platforms and thanks to scalability it offers the room for future bandwidth increases when needed. In fact, PCIe is already widely used in the high-end enterprise SSD market because the SATA/SAS interface was never enough to satisfy the enterprise performance needs in the first place.

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn't 100% efficient; based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. Remember that SATA 6Gbps isn't 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don't have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making 1GB/s+ the norm.
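As a sketch of where those effective figures come from, here's the calculation in Python. The ~78% PCIe protocol efficiency is the rough number from our tests quoted above, the ~86% SATA figure is simply back-calculated from the ~515MB/s real-world maximum, and applying the same 78% to PCIe 3.0 is an assumption since we don't have measured numbers yet:

```python
# Rough effective-bandwidth estimates for the interfaces discussed above.
# Protocol efficiency figures are the approximate ones quoted in the text.

def effective_mb_s(raw_gbps, encoding_efficiency, protocol_efficiency):
    """Raw line rate (Gbps) -> usable MB/s after encoding and protocol overhead."""
    return raw_gbps * 1e9 * encoding_efficiency * protocol_efficiency / 8 / 1e6

# SATA 6Gbps: 8b/10b encoding, ~86% protocol efficiency back-calculated from ~515MB/s
print(f"SATA 6Gbps:  ~{effective_mb_s(6, 8/10, 0.86):.0f} MB/s")

# PCIe 2.0 x2: 5GT/s per lane, 8b/10b encoding, ~78% observed efficiency -> ~780MB/s
print(f"PCIe 2.0 x2: ~{effective_mb_s(2 * 5, 8/10, 0.78):.0f} MB/s")

# PCIe 3.0 x2: 8GT/s per lane, 128b/130b encoding, same ~78% assumed
# (lands close to the ~1560MBps figure in the table above)
print(f"PCIe 3.0 x2: ~{effective_mb_s(2 * 8, 128/130, 0.78):.0f} MB/s")
```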

But what exactly is SATA Express? Hop on to the next page to read more!



What Is SATA Express?

Officially SATA Express (SATAe from now on) is part of the SATA 3.2 standard. It's not a new command or signaling protocol but merely a connector specification that combines traditional SATA and PCIe signals in one simple connector. As a result, SATAe is fully compatible with all existing SATA drives and cables; the only real difference is that the same connector (paired with a different cable) can also be used with PCIe SSDs.

As SATAe is just a different connector for PCIe, it supports both the PCIe 2.0 and 3.0 standards. I believe most solutions will rely on PCH PCIe lanes for SATAe (like the ASUS board we have), so until Intel upgrades the PCH PCIe to 3.0, SATAe will be limited to ~780MB/s. It's of course possible for motherboard OEMs to route the PCIe for SATAe from the CPU, enabling 3.0 speeds and up to ~1560MB/s of bandwidth, but obviously the PCIe interface of the SSD needs to be 3.0 as well. The SandForce, Marvell, and Samsung designs are all 2.0 but at least OCZ is working on a 3.0 controller that is scheduled for next year.

The board ASUS sent us has two SATAe ports, as you can see in the image above. This is similar to the port you should find in final products once SATAe starts shipping. Notice that the motherboard connector is basically just two SATA ports and a small additional connector—the SATA ports work normally when using a standard SATA cable. It's only when the connector meets the special SATAe cable that the PCIe magic starts happening.

ASUS mentioned that the cable is not a final design and may change before retail availability. I suspect we'll see one larger cable instead of three separate ones for aesthetic and cable management reasons. As there are no SATAe drives available yet, our cable has the same connector on both ends, and the connection to a PCIe drive is provided with the help of a separate SATAe daughterboard. In the final design the other end of the cable will be similar to the current SATA layout (data + power), so it will plug straight into a drive.

That looks like the female part of the SATA connector on your SSD, doesn't it?

Unlike regular PCIe, SATAe does not provide power. This was a surprise to me because I expected SATAe to fully comply with the PCIe spec, which provides up to 25W for x2 and x4 devices. I'm guessing the cable assembly would have become too expensive with the inclusion of power, and not all SATA-IO members are happy even with the current SATAe cable pricing (about $1 in bulk compared to $0.30 for normal SATA cables). As a result, SATAe drives will still source their power straight from the power supply. The SATAe connector is already quite large (about the same size as SATA data + power), so instead of a separate power connector we'll likely see something that looks like this:

In other words, the SATAe cable has a power input, which can be either 15-pin SATA or molex depending on the vendor. The above is just SATA-IO's example/suggestion—they haven't actually made any standard for the power implementation and hence we may see some creative workarounds from OEMs.

 



Why Do We Need Faster SSDs

The claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or professional with a desire for maximum IO performance. There is some truth to that claim but the big picture is much broader than that.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring you the same "wow" factor as going from a hard drive to an SSD did, and for an average user there may not be any noticeable difference at all. However, when you put it that way, does a faster CPU or GPU bring you any noticeable increase in performance unless you have a usage model that specifically benefits from them? No. But what happens if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, there is one essential part that is conspicuously absent: the battery. Compared to other components there haven't been any major improvements in battery technology, and as a result companies have had to rely on improving other components to increase battery life. If you look at Intel's strategy for its CPUs over the past few years, you'll notice that mobile and power saving have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13" MacBook Air, but more efficient chip architectures that provide more performance without consuming more power. The term often used here is "race to idle": a faster chip will complete a task sooner and can hence spend more time idling, which reduces overall power consumption.

SSDs are no exception to this rule. A faster SSD will complete IO requests sooner and thus consume less energy in total because it spends more time idling (assuming the faster drive's idle and load power draw are similar to the slower drive's). If the interface is the bottleneck, there will be cases where the drive could finish its work faster if the interface allowed it. This is where we need PCIe.

To demonstrate the importance of the SSD from a battery life perspective, let's look at a scenario with a hypothetical laptop. Let's assume our hypothetical laptop has a 50Wh battery and only two power states: light and heavy use. The SSD in our laptop consumes 1W in light use and 3W under heavier load. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop
Power Consumption   Light Use   Heavy Use
Whole Laptop        7W          20W
SSD                 1W          3W

Our hypothetical laptop spends 80% of its time in light use and 20% of the time under heavier load. With such characteristics, the average power consumption comes in at 9.6W and with a 50Wh battery we should get a battery life of around 5.2 hours. The scenario here is something you could expect from an ultraportable like the 2013 13" MacBook Air because it has a 54Wh battery, consumes around 6-7W while idling and manages 5.5 hours in our Heavy Workload battery life test.

Now the SSD part. In our scenario above, the average power consumption of the SSD was 1.4W, but that was for a SATA 6Gbps design. What if we took a PCIe SSD that was 20% faster in the light use scenario and 40% faster in heavy use? The SSD would spend the saved time idling (at a minimal <0.05W), and its average power consumption would drop to about 1.1W. That's a 0.3W reduction in the average power consumption of the SSD and thus of the system total. In our hypothetical scenario, that works out to roughly a 10-minute increase in battery life.
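Here's the same arithmetic written out as a small Python sketch; every input is the hypothetical figure from the table and paragraphs above, not a measurement:

```python
# Hypothetical laptop battery-life estimate using the figures from the table above.

BATTERY_WH = 50
LIGHT_SHARE, HEAVY_SHARE = 0.80, 0.20

SYSTEM_W = {"light": 7.0, "heavy": 20.0}      # whole laptop, SATA SSD included
SATA_SSD_W = {"light": 1.0, "heavy": 3.0}
SSD_IDLE_W = 0.05

def avg(values):
    return LIGHT_SHARE * values["light"] + HEAVY_SHARE * values["heavy"]

# Baseline with the SATA SSD
avg_system = avg(SYSTEM_W)                                       # 9.6W
print(f"Average draw: {avg_system:.1f}W -> {BATTERY_WH / avg_system:.1f}h")

# PCIe SSD: 20% faster in light use, 40% faster in heavy use, idles the saved time
speedup = {"light": 1.2, "heavy": 1.4}
pcie_ssd_w = {k: SATA_SSD_W[k] / speedup[k] + (1 - 1 / speedup[k]) * SSD_IDLE_W
              for k in ("light", "heavy")}

saving = avg(SATA_SSD_W) - avg(pcie_ssd_w)                       # ~0.3W
new_avg = avg_system - saving
extra_min = (BATTERY_WH / new_avg - BATTERY_WH / avg_system) * 60
print(f"PCIe SSD saves ~{saving:.1f}W -> +{extra_min:.0f} min of battery life")
```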

Sure, ten minutes is just ten minutes, but bear in mind that a single component can't work miracles for battery life. It's when all components become a little faster and more efficient that we gain an extra hour or two. Over a few years we would lose that hour if the development of one aspect suddenly stopped (i.e. if we were stuck with SATA 6Gbps for eternity), so it's crucial that every aspect keeps being developed even when the improvements aren't immediately noticeable. The idea here is simply to show what faster SSDs provide beyond raw performance: the power savings depend on one's usage, and in more IO-intensive workloads the battery life gains can be much more significant than 10 minutes. We'll also see even bigger gains once the industry moves from PCIe 2.0 to 3.0 with twice the bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I covered a usage scenario that applies to every mobile user regardless of their workload. However, in the prosumer and professional market segments the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, which is way over the limit of SATA 6Gbps. While working with compressed formats is common in 4K due to the storage requirements (an hour of uncompressed 4K video would take 3.22TB), it's not unusual for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
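For reference, the ~900MB/s and 3.22TB-per-hour figures work out like this (frame size, color depth, and frame rate are the ones stated above):

```python
# Uncompressed 4K bandwidth, using the parameters stated above.

WIDTH, HEIGHT = 3840, 2160
BITS_PER_PIXEL = 3 * 12      # 12-bit RGB
FPS = 24

bytes_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL / 8
mb_per_second = bytes_per_frame * FPS / 1e6
print(f"~{mb_per_second:.0f} MB/s")                      # ~896 MB/s, i.e. about 900MB/s
print(f"~{mb_per_second * 3600 / 1e6:.2f} TB per hour")  # ~3.22 TB
```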

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers) and, especially with RAID 0, increases the risk of array failure (one disk fails and the whole array is busted). While 4K is not ready for the mainstream yet, it's important that the hardware base be ready when mainstream adoption begins.



NVMe vs AHCI: Another Win for PCIe

Improving performance is never just about hardware. Faster hardware can only help to reach the limits of software and ultimately more efficient software is needed to take full advantage of the faster hardware. This applies to SSDs as well. With PCIe the potential bandwidth increases dramatically and to take full advantage of the faster physical interface, we need a software interface that is optimized specifically for SSDs and PCIe.

AHCI (Advanced Host Controller Interface) dates back to 2004 and was designed with hard drives in mind. While that doesn't rule out SSDs, AHCI is more optimized for high latency rotating media than low latency non-volatile storage. As a result AHCI can't take full advantage of SSDs and since the future is in non-volatile storage (like NAND and MRAM), the industry had to develop a software interface that abolishes the limits of AHCI.

The result is NVMe, short for Non-Volatile Memory Express. It was developed by an industry consortium with over 80 members and the development was directed by giants like Intel, Samsung, and LSI. NVMe is built specifically for SSDs and PCIe and as software interfaces usually live for at least a decade before being replaced, NVMe was designed to be capable of meeting the industry needs as we move to future memory technologies (i.e. we'll likely see RRAM and MRAM enter the storage market before 2020).

                      NVMe                       AHCI
Latency               2.8 µs                     6.0 µs
Maximum Queue Depth   Up to 64K queues with      Up to 1 queue with
                      64K commands each          32 commands each
Multicore Support     Yes                        Limited
4KB Efficiency        One 64B fetch              Two serialized host DRAM fetches required

Source: Intel

The biggest advantage of NVMe is its lower latency. This is mostly due to a streamlined storage stack and the fact that NVMe requires no register reads to issue a command. AHCI requires four uncacheable register reads per command, which results in ~2.5µs of additional latency.
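As a rough sanity check, the register-read overhead lines up with the latency gap in the table above; the per-read cost below is simply derived from the quoted ~2.5µs total, not an independently measured number:

```python
# How AHCI's uncacheable register reads relate to the latency gap in the table above.

READS_PER_COMMAND = 4
TOTAL_READ_OVERHEAD_US = 2.5      # ~2.5us of added latency, as quoted above

per_read_us = TOTAL_READ_OVERHEAD_US / READS_PER_COMMAND
print(f"~{per_read_us:.2f} us per uncacheable register read")    # ~0.63 us

latency_gap_us = 6.0 - 2.8        # AHCI vs NVMe latency from the table
print(f"Register reads explain ~{TOTAL_READ_OVERHEAD_US / latency_gap_us:.0%} of the gap")
```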

Another important improvement is support for multiple queues and higher queue depths. Multiple queues ensure that the CPU can be used to its full potential and that IOPS are not bottlenecked by a single core.

Source: Microsoft

Obviously the enterprise is the biggest beneficiary of NVMe because its workloads are much heavier and SATA/AHCI can't provide the necessary performance. The client market benefits from NVMe as well, just not as much. As I explained on the previous page, even moderate improvements in performance result in increased battery life, and that's what NVMe will offer: thanks to lower latency the disk usage time will decrease, which results in more time spent at idle and thus increased battery life. There can also be corner cases where the better queue support helps with performance.

Source: Intel

With future non-volatile memory technologies and NVMe the overall latency can be cut to one fifth of the current ~100µs latency and that's an improvement that will be noticeable in everyday client usage too. Currently I don't think any of the client PCIe SSDs support NVMe (enterprise has been faster at adopting NVMe) but the SF-3700 will once it's released later this year. Driver support for both Windows and Linux exists already, so it's now up to SSD OEMs to release compatible SSDs.



Testing SATA Express

SATAe is not commercially available yet but ASUS sent us a pre-production unit of the SATA Express version of their Z87 Deluxe motherboard along with the necessary peripherals to test SATAe. This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality.

Test Setup
CPU                Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard        ASUS Z87 Deluxe SATA Express (BIOS 1707)
Chipset            Intel Z87
Chipset Drivers    9.4.0.1026
Storage Drivers    Intel RST 12.9.0.1001
Memory             Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics           Intel HD Graphics 4600
Graphics Drivers   15.33.8.64.3345
Power Supply       Corsair RM750
OS                 Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the following companies for helping us with our 2014 SSD testbed.

The ASUS Z87 Deluxe SATA Express has two SATAe ports: one routed from the Platform Controller Hub (PCH) and the other provided by an ASMedia ASM106SE chip. The ASMedia is an unreleased chip, hence there is no information to be found about it and ASUS is very tight-lipped about the whole thing. I'm guessing we are dealing with the same SATA 6Gbps design as other ASM106x chips but with added PCIe pass-through functionality to make the chip suitable for SATA Express.

I did a quick block diagram that shows the storage side of the ASUS SATAe board we have. Basically there are four lanes in total dedicated to SATAe with support for up to two SATAe drives in addition to four SATA 6Gbps devices. Alternatively you can have up to eight SATA 6Gbps devices if neither of the SATAe ports is operating in PCIe mode.

Since there are no SATAe drives available at this point, ASUS sent us a SATAe demo daughterboard along with the motherboard. The daughterboard itself is very simple: it has the same SATAe connector as found in the motherboard, two molex power inputs, a clock cable header, and a PCIe slot.

This is what the setup looks like in action (though as you can see, I took the motherboard out of the case since inside case photos didn't turn out so well with the poor camera I have). The black and red cable is the external clock cable, which is only temporary and won't be needed with a final SATAe board.

The Tests

For testing I used Plextor's 256GB M6e PCIe SSD, which is a PCIe 2.0 x2 SSD with Marvell's new 88SS9183 PCIe controller. Plextor rates the M6e at up to 770MB/s read and 580MB/s write, so we should be capable of reaching the full potential of PCIe 2.0 x2. Additionally I tested the SATA 6Gbps ports with a 256GB OCZ Vertex 450. I used the same sequential 128KB Iometer tests that we use in our SSD reviews but I ramped up the queue depth to 32 to make sure we are looking at a maximum throughput situation.

Iometer—128KB Sequential Read (QD32)

There is no practical difference between a PCIe slot on the motherboard and PCIe that is routed through SATA Express. I'm a little surprised that there is absolutely no hit in performance (other than a negligible 1.5MB/s that's basically within the margin of error) because after all we are using cabling that should add latency. It seems that SATA-IO has been able to make the cabling efficient enough to transmit PCIe without additional overhead.

As for SATA 6Gbps, the performance is the same as well, which isn't surprising since only the connector is slightly different while electrically everything is the same. With the ASMedia chipset there is a ~25-27% reduction in performance, but that is in line with previous ASMedia SATA 6Gbps chipsets I've seen. As I mentioned earlier, I doubt the ASM106SE brings anything new to the SATA side of the controller, which is why I wasn't expecting more than 400MB/s. Generally you'll only get full SATA bandwidth from an Intel chipset or a higher-end SATA/RAID card.

Iometer—128KB Sequential Write (QD32)

The same goes for write performance. The only case where you are going to see a difference is if you connect to the ASMedia SATA 6Gbps port. I did run some additional benchmarks (like our performance consistency test) to see if a different workload would yield different results, but all my tests showed that SATAe in PCIe mode is as fast as a real PCIe slot, so I'm not going to post a bunch of additional graphs showing that the two are equivalent.



Final Thoughts

While testing SATA Express and writing this article, I constantly had one thought in my head: do we really need SATA Express? Everything it provides can be accomplished with existing hardware and standards. Desktops already have PCIe slots, so we don't need SATAe to bring PCIe SSDs to desktop users. In fact, SATAe could be viewed as a con because it takes at least two PCIe lanes and dedicates them to storage, whereas normal PCIe slots can be used for any PCIe devices. With only 16+8 (CPU/PCH) PCIe lanes available in mainstream platforms, there are no lanes to waste.

For the average user it wouldn't make much difference if you took two or four lanes away for SATAe, but gamers and enthusiasts can easily use up all the lanes already (higher-end motherboards tend to have additional controllers for SATA, USB 3.0, Thunderbolt, Ethernet, audio etc., which all use PCIe lanes). Sure, there are PCIe switches that add lanes (but not bandwidth); these partially solve the issue but add cost, and if you put too many devices behind a switch, bandwidth can easily become a bottleneck when they are all in use simultaneously.

I'm just not sure I like the idea of taking two, potentially four or six, PCIe lanes and dedicating them to SATAe. I'd much rather have regular PCIe slots and let the end user decide what to do with them. Of course, part of the problem is that there simply aren't enough lanes to satisfy all use cases, and SATAe could spur Intel and other chipset vendors to provide more native PCIe lanes.

For laptops and other small form factor builds SATAe makes even less sense because that's exactly what M.2 is for. 2.5" SSDs can't compete with M.2 in space efficiency, and that is what counts in the mobile industry. The only role for SATAe in mobile that I can see is in laptops that ship with 2.5" SATA drives by default and can then be upgraded to 2.5" PCIe SSDs. That would allow OEMs to use the same core chassis design for multiple SKUs differentiated by the form of storage, and it would also allow better end-user upgradeability. However, I still believe M.2 is the future in mobile, especially as we are constantly moving towards smaller and thinner designs where 2.5" is simply too big. The 2.5" scenario would mainly be a niche for laptops that don't have an M.2 or mSATA slot.

This is how small mSATA and M.2 are

Another issue exists in the OEM space. There are already four dominant form factors: 2.5" SATA, half-height/length PCIe, mSATA, and M.2. With SATA Express we would need an additional one: 2.5" SATAe (PCIe). The half-height/length PCIe is easy because all you need is an adapter for an M.2 PCIe SSD like Plextor has, but 2.5" PCIe is a bit trickier. It would be yet another model for OEMs to build and given the current NAND situation I'm not sure whether the OEMs are very happy about that.

The problem is that the more form factors there are, the harder it is to manage stock efficiently. If you build too many units in a form factor that doesn't sell, you end up having used tons of NAND on something that could have been better used in another form factor with more demand. This is why M.2 and half-height/length PCIe are great for the OEMs—they only need to manufacture M.2 SSDs and the end-product can be altered based on demand by adding a suitable adapter.

Fortunately the inclusion of both SATA and PCIe in the SF-3700 (and some others too, e.g. OCZ's upcoming Jetstream Express controller) helps because OEMs only need to build one 2.5" drive that can be turned into either SATA or PCIe based on demand. However, not all controllers support this, so there are still cases where OEMs face the issue of an additional model. And even for the drives that do support both SATA and PCIe, the dual interface takes additional die area and R&D resources, resulting in higher costs.

Ultimately I don't believe the addition of a new form factor is a major issue because if there is customer demand, the OEMs will offer supply. It may, however, slow down the adoption of SATAe because the available models will be limited (i.e. you can score a better deal by getting a regular PCIe SSD) as some manufacturers will certainly be slower in adopting new form factors.

All in all, the one big issue with SATAe is the uncertainty due to the lack of product announcements. Nobody has really come forward and outlined plans for SATAe integration, which makes me think it's not something we'll see very soon. Leaks suggest that Intel won't be integrating SATAe into its 9-series chipsets, which will push mainstream availability back by at least a year. While chipset integration is not required to enable SATAe, it lowers the cost for motherboard OEMs since fewer parts and validation are required. Thus I suspect that SATAe will mainly be a high-end only feature for the next year and a half or so and it won't be until Intel integrates it into chipsets that we'll see mainstream adoption.
