Original Link: https://www.anandtech.com/show/8006/samsung-ssd-xp941-review-the-pcie-era-is-here



When SATA was introduced a bit over a decade ago, it provided major advantages over the old PATA interface. Not only was it faster and more power efficient thanks to the better signaling protocol, but the cabling was no longer big and clumsy with very limited length. It was no wonder that the industry quickly adopted SATA as the new interface for client storage and it has held its position throughout the years.

Since hard drives were the dominant storage media, SATA provided everything that the industry needed. The first generation SATA 1.5Gbps was already fast enough for the vast majority of use cases but about two years after the initial release of SATA, the second generation SATA was ready for prime time, doubling the throughput to 3Gbps. Even today's fastest hard drives can't fully saturate SATA 3Gbps, so there was obviously no rush to improve the interface as it already met the industry's needs. But that changed quickly around 2008.

The reason wasn't a sudden improvement in hard drive technology, but an emerging new technology that was based on non-volatile memory. We're talking about SSDs of course. The non-volatile memory part wasn't actually anything new as NAND has been around since the late 80s, but it was the first time NAND was being used in a PC form factor. Previously NAND had mainly been the choice for ultra mobile devices like MP3 players and phones but it was realized that the technology had the potential to be used in all computer-like devices, including PCs and servers. Since NAND was a solid-state semiconductor and it didn't have to rely on mechanical rotation, it allowed for much, much greater speeds. It's simply much faster to move electrons around a silicon chip than it is to rotate a heavy metal disk.

Obviously the first generation SSDs weren't all that fast and in many cases a traditional hard drive would still provide better sequential performance (although SSDs would destroy hard drives in a random IO workload). However, as the SSD companies learned to manage NAND and its characteristics better, the performance went up significantly. In 2009 we were already at a point where SATA 3Gbps was bottlenecking drives and a faster interface was needed to unleash the performance of NAND. Fortunately the Serial ATA International Organization (SATA-IO) had already released the standard for third generation SATA, which would again double the bandwidth to 6Gbps.

Crucial's C300

In 2010 we saw the first SATA 6Gbps SSD, Crucial's C300, make its appearance. Back then SATA 6Gbps wasn't integrated into chipsets yet and users had to buy a SATA 6Gbps PCIe card (or a motherboard with a third party SATA 6Gbps controller) to utilize the drive's full performance, but as soon as Intel was ready with their 6-series chipsets with native SATA 6Gbps support, every man and his dog came out with a SATA 6Gbps SSD.

But there was a problem. SATA 6Gbps still wasn't fast enough to meet the needs of SSD manufacturers, as they were already able to saturate it. SATA-IO was given a difficult task: come up with a new standard with drastically better performance only a few years after the previous standard had been announced. Not only would it have to be faster, but it also needed to be cost and power efficient. Instead of developing the SATA protocol further, which would have been expensive and time consuming, SATA-IO decided to utilize an existing interface found in every mainstream computer: PCI Express.

To allow backwards compatibility with the SATA interface, SATA-IO came up with two standards: SATA Express and M.2 (formerly NGFF). The idea behind SATA Express is that it routes PCIe and SATA 6Gbps signals to a single connector, which can then be used to connect either PCIe or SATA devices depending on the drive. It's mainly aimed at the desktop crowd and we did an extensive review of it just a while ago. M.2 on the other hand is the successor of mSATA and is electrically very similar to SATA Express. It also supports both PCIe and SATA 6Gbps signals, although ultimately it's up to the PC OEM to choose whether it will route both to the slot (i.e. you can have an M.2 slot with just PCIe or just SATA functionality).

 



The Samsung XP941

In June last year, Samsung made an announcement that they were mass producing the industry's first native PCIe SSD: the XP941. In the enterprise world, native PCIe SSDs were nothing new but in the consumer market all PCIe SSDs had previously just used several SATA SSDs behind a PCIe bridge. Samsung didn't reveal many details about the XP941 and all we knew was that it was capable of speeds up to 1,400MB/s and would be available in capacities of 128GB, 256GB and 512GB. Given that the XP941 was (and still is) an OEM-only product, that wasn't surprising. OEM clients often don't want the components they are buying to be put under a public microscope as it may lead to confusion among their customers (e.g. why isn't the drive in my laptop performing as well as the drive in the review?).

Photography by Juha Kokkonen

The XP941 is available only in M.2 2280 form factor. The four-digit code refers to the size of the drive in millimeters and 2280 is currently the second largest size according to the M.2 standard. Basically, the first two digits refer to the width, which per M.2 standard is always 22mm (hence the 22 in the code) and the last two digits describe the length, which in this case is 80mm. There are four possible lengths in the M.2 spec: 42mm, 60mm, 80mm and 110mm, and the purpose of different lengths is to allow manufacturers to design drives for multiple uses.
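The naming rule described above is simple enough to express as a tiny parser (the function name is our own, for illustration):

```python
def parse_m2_size(code: str) -> tuple[int, int]:
    """Split an M.2 size code like '2280' into (width_mm, length_mm).

    Per the convention described above, the first two digits are the
    card width (22mm for current drives) and the remaining digits are
    the length (42, 60, 80 or 110mm).
    """
    width_mm = int(code[:2])
    length_mm = int(code[2:])
    return width_mm, length_mm

# The XP941's 2280 form factor: 22mm wide, 80mm long
print(parse_m2_size("2280"))   # (22, 80)
# The longest size in the spec uses a five-digit code
print(parse_m2_size("22110"))  # (22, 110)
```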

The variable length is a lesson learned from mSATA, where the industry had problems with the fixed size; that's why Apple, ASUS, and others went with custom designs in order to fit more NAND packages on the PCB. I believe 80mm (i.e. 2280) will be the most popular form factor as it's capable of holding up to eight NAND packages, but drives aimed at tablets and other ultra mobile devices may utilize the smaller 2242 and 2260 form factors.

As the XP941 isn't available in retail, Samsung isn't sampling it to media. Fortunately an Australian Samsung OEM retailer, RamCity, was kind enough to send us a review sample. The SSD we received is the highest capacity 512GB model, so we should be able to reach the maximum potential performance. RamCity actually sent us two 512GB drives and I couldn't resist putting the two in RAID 0 configuration in order to see what kind of throughput two drives could offer.

RamCity also sent us a PCIe 3.0 x4 adapter for connecting the drive to our testbed. The adapter in question is a Lycom DT-120, which retails for around $25. There are also other adapters in the market, such as Bplus' dual PCIe 2.0 x4 adapter, but the Lycom is the cheapest I've seen and is likely the best option for the average buyer.

Samsung SSD XP941 Specifications
Capacity 128GB 256GB 512GB
Controller Samsung S4LNO53X01 (PCIe 2.0 x4)
NAND Samsung 19nm MLC
Sequential Read 1000MB/s 1080MB/s 1170MB/s
Sequential Write 450MB/s 800MB/s 950MB/s
4KB Random Read 110K IOPS 120K IOPS 122K IOPS
4KB Random Write 40K IOPS 60K IOPS 72K IOPS
Power (idle / active) 0.08W / 5.8W
Warranty Three years (from RamCity)

The XP941 is based on Samsung's in-house PCIe controller. Samsung isn't willing to disclose any exact details of the controller but I would expect it to be quite similar to the MEX controller found in the 840 EVO, except it utilizes a PCIe interface instead of SATA. The controller supports up to four PCIe 2.0 lanes, so in practice it should be good for up to ~1560MB/s without playing with the PCIe clock settings (it's possible to overclock the PCIe interface for even higher bandwidths). In terms of the software interface, the XP941 is still AHCI based but Samsung does have an NVMe based SSD for the enterprise market. I would say that AHCI is a better solution for the consumer market because the state of NVMe drivers is still a bit of a question and the gains of NVMe are much more significant in the enterprise space (see here why NVMe matters).
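That ~1560MB/s figure follows from the PCIe 2.0 link parameters; here's a back-of-the-envelope sketch (the function name is ours):

```python
def pcie_payload_ceiling_mbps(gen2_lanes: int) -> float:
    """Upper bound on PCIe 2.0 payload bandwidth in MB/s.

    Each PCIe 2.0 lane runs at 5 GT/s with 8b/10b encoding, so only
    8 of every 10 bits on the wire carry data: 500MB/s per lane.
    TLP/DLLP packet overhead lowers real-world throughput further,
    which is why ~1560MB/s is about the practical limit for an x4
    link rather than the full 2000MB/s.
    """
    transfers_per_sec = 5e9                   # 5 GT/s per lane
    data_bits = transfers_per_sec * 8 / 10    # 8b/10b line code
    return gen2_lanes * data_bits / 8 / 1e6   # bits -> MB/s

print(pcie_payload_ceiling_mbps(4))  # 2000.0
```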

NAND-wise, the XP941 uses Samsung's own 64Gbit 19nm Toggle-Mode 2.0 MLC NAND. There are only four packages on the PCB, two on each side, which means we are dealing with 16-die packages. I'm not sure if Samsung has a proprietary technology for NAND die stacking, but they seem to be the only manufacturer packing more than eight dies into a single package. From what I have heard, other manufacturers don't want to go above eight dies for signal integrity reasons, which directly impact performance, but Samsung doesn't seem to have issues with this.
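The package math checks out: four packages of sixteen 64Gbit dies gives exactly the raw 512GB (a small slice of which, roughly 7%, is reserved as spare area rather than exposed to the user):

```python
def raw_capacity_gb(packages: int, dies_per_package: int,
                    die_gbit: int = 64) -> int:
    """Raw NAND capacity in GB: total gigabits divided by 8."""
    return packages * dies_per_package * die_gbit // 8

# 512GB XP941: four 16-die packages of 64Gbit dies
print(raw_capacity_gb(4, 16))  # 512
# Apple's larger custom board fits eight packages -> a 1TB-class drive
print(raw_capacity_gb(8, 16))  # 1024
```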

Again, no details such as the P/E cycle count are known but I'm guessing the NAND is still rated at the same 3,000 P/E cycles as its 21nm counterpart. Samsung's datasheet doesn't list any specific endurance for the drive, though that makes sense since it's an OEM drive and the warranty will be provided for the whole product (e.g. a laptop) in which the drive is used.

The XP941 shows up in Samsung's Magician software but it's not recognized as a Samsung SSD. This has been typical for Samsung's OEM SSDs as even though they sometimes share the same hardware and firmware as their retail counterparts, the Magician functionality is disabled. It makes sense because PC OEMs often source their drives from more than one SSD OEM, so it wouldn't be fair to buyers if one drive had additional functionality while the other didn't.

The SSD in 2013 Mac Pro - courtesy of iFixit

Hardware-wise, the XP941 is the same drive that you can find inside 2013 Macs, although most of the laptop versions are limited to a PCIe 2.0 x2 interface. However, the form factor and interface Apple uses are custom, and the larger form factor allows for eight NAND packages and thus support for a 1TB model.

Test System

CPU Intel Core i5-2500K running at 3.3GHz
(Turbo and EIST enabled)
Motherboard AsRock Z68 Pro3
Chipset Intel Z68
Chipset Drivers Intel 9.1.1.1015 + Intel RST 10.2
Memory G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card Palit GeForce GTX 770 JetStream 2GB GDDR5
(1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers NVIDIA GeForce 332.21 WHQL
Desktop Resolution 1920 x 1080
OS Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit



Boot Support: Mac? Yes. PC? Mostly No.

Booting from PCIe devices has always been rather quirky because motherboard BIOSes are designed to boot from SATA devices. Most PCIe SSDs and SATA adapters include boot support via special option ROM drivers that load before the BIOS hands off, which makes the drive visible in the boot screen. Without those drivers the drive won't show up in the BIOS, though once you boot into the OS it will be accessible like any other drive. The problem is that you can't boot from the drive unless it shows up in the BIOS' boot menu.

Unfortunately, the XP941 doesn't seem to include the drivers necessary to enable booting. At least no driver loading screen shows up during boot, which suggests that the XP941 doesn't have such drivers at all (ASUS also tells us this is the case). My guess is that the XP941 is designed for UEFI booting: from what I have heard, UEFI boot doesn't require the drivers, but some sort of UEFI key is still needed to make the drive bootable. I did try the UEFI boot method on my ASUS Z87 Deluxe board, but even though I was able to install the OS to the drive just fine, it wouldn't show up in the boot order.

The good news is that 9-series chipsets bring some ease to the situation. Back when the 8-series was introduced, there weren't many PCIe SSDs on the market but this year we will see PCIe entering the mainstream segment. That obviously forces the motherboard manufacturers to work on PCIe boot support and we can confirm that at least AsRock's Z97 Extreme6, which has a PCIe 2.0 x4 M.2 slot, supports booting from the XP941 out of the box. Whether other 9-series motherboards support booting from the XP941 remains to be seen. Most manufacturers, however, seem to be limiting the M.2 slot to just two PCIe 2.0 lanes, so you wouldn't want to use the XP941 in those boards anyway (unless the XP941 is used in a standard PCIe slot with an adapter). Anyway, we'll be sure to investigate the bootability of the XP941 in our motherboard reviews and work with the OEMs in order to bring better support for PCIe booting.

Update 5/20: ASUS just sent me an email that all their Z97 based motherboards will get a BIOS update that enables booting from the XP941. The BIOS is currently in beta testing and ASUS is expecting public release in about two weeks.

Macs, on the other hand, can boot from the XP941 just fine. I confirmed this using an early 2009 Mac Pro and the volume in the drive shows up in the boot option screen just like any other volume does. I have to admit that I don't know why exactly this is the case, but I'm guessing it's a fundamental difference between how the EFI in Macs and the BIOS/UEFI in PCs handle device recognition.

Under OS X, the XP941 shows up like any other SATA device. Since it utilizes the AHCI command set, OS X thinks it's a SATA device even though it's not. It's also listed under PCI cards, although that page doesn't provide any meaningful info.



Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
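The "instantaneous IOPS every second" sampling behind these graphs amounts to bucketing IO completions into one-second bins; a minimal sketch of that bookkeeping (our own illustration, not the actual test harness):

```python
from collections import Counter

def instantaneous_iops(completion_times_s):
    """Bucket IO completion timestamps (in seconds) into per-second
    IOPS samples, the way the consistency graphs are built: one data
    point per second of the 4KB QD32 random-write run."""
    buckets = Counter(int(t) for t in completion_times_s)
    duration = int(max(completion_times_s)) + 1
    return [buckets.get(sec, 0) for sec in range(duration)]

# Toy trace: three IOs finish during second 0, one during second 2
print(instantaneous_iops([0.1, 0.5, 0.9, 2.3]))  # [3, 0, 1]
```

A drive with good consistency produces a flat line of such samples in steady state; a drive that stalls for garbage collection shows deep dips toward zero.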

For more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

  Samsung SSD XP941 Plextor M6e Samsung SSD 840 Pro SanDisk Extreme II Samsung SSD 840 EVO mSATA
Default
25% Spare Area

The interface has never been the bottleneck when it comes to random write performance, especially in steady-state. Ultimately the NAND performance is the bottleneck, so without faster NAND we aren't going to see any major increases in steady-state performance.

The graphs above and below illustrate this as the XP941 isn't really any faster than the SATA 6Gbps based 840 Pro. Samsung has made some tweaks to their garbage collection algorithms and overall the IO consistency gets a nice bump over the 840 Pro but still, this is something we've already seen with SATA 6Gbps SSDs. I wouldn't say the IO consistency is outstanding because the Plextor M6e does slightly better with the default over-provisioning (both drives have ~7%) but if you increase the over-provisioning the XP941 will show its magic.

  Samsung SSD XP941 Plextor M6e Samsung SSD 840 Pro SanDisk Extreme II Samsung SSD 840 EVO mSATA
Default
25% Spare Area

  Samsung SSD XP941 Plextor M6e Samsung SSD 840 Pro SanDisk Extreme II Samsung SSD 840 EVO mSATA
Default
25% Spare Area

TRIM Validation

Update 5/20: I got an email from one of our readers suggesting that the TRIM issue might be related to Windows 7 and that Windows 8 should have functioning TRIM for PCIe SSDs. To try this, I installed Windows 8.1 to a secondary drive and ran our regular pre-conditioning (fill with sequential data and torture with 4KB random write for 60 minutes). To measure performance, I had to rely on Iometer as HD Tach didn't work properly under Windows 8. I ran the same 128KB sequential write test that we usually run (QD=1, 100% LBA) but extended the length to 10 minutes to ensure that the results are steady and not affected by burst performance.

Samsung SSD XP941 512GB - Iometer 128KB Sequential Write (QD1)
  Clean After TRIM
Samsung SSD XP941 512GB 607.7 MB/s 598.9 MB/s

And TRIM seems to function as it should, so it indeed looks like this is just a Windows 7 limitation, which is excellent news.
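The pass criterion here is simply that post-TRIM write speed recovers to roughly the clean-drive level; expressed as a sketch (the 5% tolerance is our own rule of thumb, not a spec value):

```python
def trim_functional(clean_mbps: float, after_trim_mbps: float,
                    tolerance: float = 0.05) -> bool:
    """Judge TRIM as working if post-TRIM sequential write speed
    recovers to within a few percent of the clean-drive speed.
    A drive without working TRIM stays near its tortured,
    steady-state speed instead."""
    return after_trim_mbps >= clean_mbps * (1 - tolerance)

# The Windows 8.1 numbers from the table above
print(trim_functional(607.7, 598.9))  # True
```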

------------------------

To test TRIM, I took a secure erased XP941 and filled it with sequential data, followed by a 60-minute torture with 4KB random writes (QD32). After the torture, I TRIM'ed all user-accessible LBAs and ran HD Tach to produce the graph below:

It looks like TRIM isn't functional, although I'm not that surprised. I'm waiting to hear back from Samsung on whether this is an operating system limitation, because I've heard that Windows doesn't treat PCIe drives the same way as SATA drives even when they use the same AHCI software stack, as the XP941 does. If that's true, we'll need either an update to Windows or some other solution.

On a Mac, TRIM support is listed as "yes" once TRIM is enabled for third-party drives using TRIM Enabler, though I didn't have time to verify that it actually works.



AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based—we record all IO requests made to a test system and play them back on the drive we're testing and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.

AnandTech Storage Bench 2013 - The Destroyer
Workload Description Applications Used
Photo Sync/Editing Import images, edit, export Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming Download/install games, play games Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization Run/manage VM, use general apps inside VM VirtualBox
General Productivity Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback Copy and watch movies Windows 8
Application Development Compile projects, check out code, download code samples Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
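Computed over a trace, the two metrics reduce to total bytes over wall time versus mean per-IO latency; a hypothetical helper to make the distinction concrete (the real suite replays the 49.8-million-IO trace described above):

```python
def destroyer_metrics(ios):
    """Compute the two Destroyer metrics from a list of
    (bytes_transferred, service_time_us) tuples: average data rate
    in MB/s and average service time in microseconds. Throughput
    rewards big transfers; mean service time penalizes every slow
    IO equally, which is how queued-IO latency gets weighted."""
    total_bytes = sum(b for b, _ in ios)
    total_us = sum(t for _, t in ios)
    avg_data_rate = total_bytes / (total_us / 1e6) / 1e6  # MB/s
    avg_service_time = total_us / len(ios)                # us
    return avg_data_rate, avg_service_time

# Two toy IOs: 1MB served in 2000us, 4KB served in 100us
rate, svc = destroyer_metrics([(1_000_000, 2000), (4096, 100)])
print(round(rate, 1), svc)  # 478.1 1050.0
```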

Storage Bench 2013 - The Destroyer (Data Rate)

In our most demanding storage test, the XP941 is just amazing. It's about 40% faster than any SATA 6Gbps drive we have tested, which is huge. Obviously it's not the random performance that makes the XP941 shine but the large IO sequential performance where the PCIe interface can be used to its full extent. While most IOs in client workloads tend to be random, the sequential performance can certainly make a big difference and high queue depth random reads can also take advantage of the faster interface.

Storage Bench 2013 - The Destroyer (Service Time)



AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

The same goes for our 2011 Storage Bench: the XP941 is nearly unbeatable. Only in the Light Workload test is the 8-controller OCZ behemoth able to edge out the XP941 by a small margin; other than that, nothing challenges it. The consumer-oriented OCZ RevoDrive comes close, but the XP941 once again shows how a good single-controller design can beat a RAID 0 configuration.

Light Workload 2011 - Average Data Rate



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
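Since we report these random results in MB/s while spec sheets quote IOPS, the conversion for a fixed transfer size is worth spelling out (function name is ours):

```python
def iops_to_mbps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MB/s for a fixed transfer size.
    Handy for comparing our 4KB random results (in MB/s) against
    spec-sheet IOPS numbers."""
    return iops * io_size_bytes / 1e6

# The spec sheet's 122K random read IOPS for the 512GB model
print(iops_to_mbps(122_000))  # ~500 MB/s of 4KB reads
```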

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

The random performance of the XP941 doesn't stand out. The random write speeds in particular are quite low by today's standards, and queue depth scaling is close to non-existent. That said, I don't believe high queue depth performance is all that important for client workloads, as our internal workload analysis shows that even under heavy use the average queue depth tends to be no more than 5. Our Storage Benches also show that even though the random performance isn't excellent, the strong sequential performance enabled by the faster PCIe interface makes up for the difference.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

The sequential speeds are the highest we have ever tested in a consumer SSD. Even the 8-controller Z-Drive R4 behemoth can't beat the XP941, which really speaks for the efficiency of a single controller design. If you were to increase the queue depth, the Z-Drive would easily beat the XP941 since higher queue depth would increase parallelism and the Z-Drive could take advantage of all of its eight controllers. However, I was able to reach speeds of up to 1560MB/s with the XP941 at queue depth of 32, which is pretty much as fast as you can go with PCIe 2.0 x4 without tweaking any settings (the PCIe bus can be overclocked to achieve even higher speeds, though there can be a negative impact on random performance. We will investigate this at a later date).

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers. The XP941 does brilliantly in AS-SSD as well but now the strength of eight controllers starts to show for the Z-Drive. Even then, the XP941 is still about twice as fast as the fastest SATA 6Gbps SSD.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance



Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. The XP941 doesn't perform that well at the smaller transfer sizes but once we go over 8KB, there is no question about which drive is the fastest. At the IO size of 64KB, the XP941 is already reaching 1GB/s for read and it stays at ~1050MB/s for all the larger IOs. Write performance isn't as good but still reaches 1GB/s when the IO size is large enough.

Click for full size

 



Mac Benchmarks: QuickBench, AJA & Photoshop Installation

Since the XP941 is currently only bootable in Macs, I decided to run some benchmarks with the XP941 inside a Mac Pro. The specs of the Mac Pro are as follows:

Test Setup
Model Mac Pro 4.1 (Early 2009)
Processor Intel Xeon W3520 (2.66/2.93GHz, 4/8, 8MB L3)
Graphics NVIDIA GeForce GT120 512MB GDDR3
RAM 12GB (2x4GB + 2x2GB) DDR3-1066 ECC
OS OS X 10.9.2

We would like to thank RamCity for providing us with the Mac Pro, which allowed us to run these tests and confirm boot support.

I installed OS X 10.9.2 to all drives and they were the boot drives when benchmarked, just like they would be for most end users. As I mentioned on page one, RamCity actually sent us two 512GB XP941s and I just had to put them in a RAID 0 configuration. With a Mac you can easily boot from a software RAID 0 array, so all I had to do was create a RAID 0 array in Disk Utility and select it as the boot volume. I placed the drives in PCIe slots 2 and 4 to ensure that both drives were getting full PCIe bandwidth and we wouldn't run into bottlenecks there. I picked Intel's 480GB SSD 730 as the comparison point since it was lying on my table and is among the fastest SATA 6Gbps SSDs on the market. Note that the 2009 Mac Pro only supports SATA 3Gbps, so there's obviously some performance penalty from that, as the benchmarks show.

QuickBench

QuickBench is one of the more sophisticated drive benchmark tools for OS X. It's shareware and retails for $15 but compared to the freeware tools available, it's worth it. While QuickBench lacks the option to increase queue depth, it supports various transfer sizes from 4KB to up to 100MB (or more through a custom test). For this test, I just ran the standard test where the IO sizes range from 4KB to xMB. Additionally I ran the extended test, which focuses on very large IOs (20-100MB) in order to get the maximum performance out of the drives. In both cases the tests ran for 10 cycles to ensure sustained results.

QuickBench - 4KB Random Read

QuickBench - 4KB Random Write

The random results don't reveal anything interesting. The RAID 0 array is slightly slower due to the overhead from the software RAID configuration but overall the results make sense when compared with our Iometer scores. Bear in mind that QuickBench only uses queue depth of 1, whereas our Iometer tests are run at queue depth of 3, hence there's a difference that is roughly proportional to the queue depth.

QuickBench - 128KB Sequential Read

QuickBench - 128KB Sequential Write

The sequential tests show that the XP941 seems to be slightly slower in the Mac Pro compared to sequential performance in Iometer. In this case both tests are at a queue depth of 1 and should thus be comparable, but it's certainly possible that there are some other differences that cause the slightly slower performance. Either way, we are still looking at much, much higher performance than any drive would provide under the Mac Pro's native SATA 3Gbps interface.

QuickBench - 90MB Sequential Read

QuickBench - 20MB Sequential Write

Since QuickBench doesn't allow increasing the queue depth, the only way to increase performance is to scale the transfer size. QuickBench's preset tests allow for up to 100MB IO sizes; I ran the preset that tests from 20MB to 100MB and picked the highest-performing IO sizes, which in this case were 90MB for reads and 20MB for writes. There wasn't all that much variation, but those sizes were the fastest for all three configurations.

Now the XP941, and especially the RAID 0 setup, show their teeth. With two XP941s in RAID 0, I was able to reach throughput of nearly 2.5GB/s (!), and half of that with a single drive. Compared to the SSD 730 on the SATA 3Gbps bus, you are getting over four times the performance, and to reach the performance of the XP941 RAID 0 you would need at least ten SSDs in a SATA 3Gbps RAID 0 configuration.

AJA System Test

In addition to QuickBench, I decided to run AJA System Test as it's a freeware tool that is quite widely used to test disk performance. It's mainly designed to test video throughput, but as the results are reported in megabytes per second, it works for general IO testing as well. I set the settings to the maximum (4096x2160 10-bit RGB, 16GB file size) to produce the results below.

AJA System Test - Read Speed

AJA System Test - Write Speed

The results are fairly similar to the QuickBench ones but the performance seems to be slightly lower. Then again, this is likely due to the difference in the data the software uses for testing but the speeds are still well over 1GB/s for a single drive and 2GB/s for RAID 0.

Adobe Photoshop CS6 Installation

One of the most common criticisms I hear about our tests is that we don't run any real-world workloads. I've been playing around with real-time testing a lot lately in order to build a suite of benchmarks that meets our criteria, but for this review I decided to run a quick installation benchmark to see what kind of differences can be expected in the real world. I grabbed the latest Photoshop CS6 trial from Adobe's website and installed it to all three drives while measuring the time with a stopwatch.

Photoshop CS6 Installation

Obviously the gains are much smaller in typical real-world applications. That's because other bottlenecks come into play that are absent when testing only IO performance. Still, especially for IO heavy workloads, the extra performance is always appreciated even if the gains aren't as substantial as the synthetic benchmarks suggest.



Final Words

I don't think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It's not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ's Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.

The only major problem with the XP941 is that it doesn't support booting in most Windows systems. If you are a Mac Pro owner this issue doesn't concern you, but for everyone else it's definitely a major drawback. Using the SSD as a secondary drive can make sense for, say, a video professional who can use it as a fast scratch disk, but otherwise the main use case for an SSD is as a boot drive. There is hope that 9-series motherboards will bring better support for native PCIe booting, but that remains to be seen.

The lack of proper TRIM support is also a minor concern but I'm willing to overlook that because the performance is just so great. I would also like to see hardware encryption support (TCG Opal 2.0 & IEEE-1667) and power loss protection but I understand that for an OEM product, these aren't necessary. Hopefully there will be retail versions of XP941 that address these items.

  120/128GB 240/256GB 480/512GB
Samsung SSD XP941 ~$229 ~$310 ~$569
Plextor M6e $180 $300 -
OCZ RevoDrive 350 - $530 $830

Note that the XP941 prices in the table above do not include the adapter or shipping. The adapter comes in at around $25 and RamCity charges $29 for shipping overseas, so you are looking at about $55 in addition to the drive itself. However, you don't have to pay the 10% Goods and Services Tax (GST) when purchasing from overseas, and I've already subtracted the GST from the listed prices in the table above. To summarize, the total cost with the adapter and shipping included ends up being about $283 for 128GB, $364 for 256GB and $623 for 512GB. In the end, the exact pricing depends on the AUD/USD exchange rate, and banks may also charge a bit extra when paying in a foreign currency.
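For clarity, the landed-cost totals above are just the GST-free drive price plus the two fixed extras (sketched below with the prices quoted in this review; the exchange rate will shift them over time):

```python
def import_cost_usd(drive_price: float, adapter: float = 25.0,
                    shipping: float = 29.0) -> float:
    """Landed cost of an XP941 from RamCity: GST-free drive price
    plus the Lycom adapter (~$25) and overseas shipping ($29)."""
    return drive_price + adapter + shipping

for capacity, price in [("128GB", 229), ("256GB", 310), ("512GB", 569)]:
    print(capacity, import_cost_usd(price))
# 128GB 283.0 / 256GB 364.0 / 512GB 623.0
```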

In terms of pricing, the XP941 is a steal compared to the competition. The M6e is cheaper, but it's also only a PCIe 2.0 x2 design and can't offer the same level of performance as the XP941. Of course, two or three SATA 6Gbps SSDs in RAID 0 would ultimately be the cheapest route, but with RAID 0 you run into other issues (such as losing the whole array if a single drive fails). For the average user I'd still recommend a drive like the Samsung SSD 840 EVO or Crucial M500/M550, but I can certainly see the enthusiast and professional crowd paying the premium for the XP941.

All in all, I can't wait for Samsung to release a retail version of the XP941. Right now the only problems are the limited availability and lack of boot support but once these are sorted out, the XP941 will be the king of the market. I'm guessing that we'll probably see something from Samsung at this year's Global SSD Summit, or at least I deeply hope so. We'd also like to see more competition from other SSD manufacturers, but until SandForce's SF3700 is ready to hit the market in the second half of 2014, there isn't a drive that can challenge the XP941.
