36 Comments
kmmatney - Wednesday, April 17, 2013 - link
"I strongly recommend having at least 25% free space with the M5M. The more you fill the drive, the more likely it is that you'll face inconsistent performance."
Would this really affect the average user? Do you let the drives idle long enough so that normal garbage collection can kick in?
msahni - Wednesday, April 17, 2013 - link
Hi there,
First of all, Kristian, thanks for the reviews. You've finally answered my queries about the best mSATA SSD to get (from the Intel 525 review).
Could you please advise on the best method of leaving 25% free space on the drive for over-provisioning to enhance performance?
Cheers....
Minion4Hire - Wednesday, April 17, 2013 - link
Anand answered that in another article. I believe you are supposed to shrink the partition, create a second partition out of the unallocated space, then delete the new partition. The act of deleting the partition prompts the OS to TRIM that portion of the drive, freeing it up for use as spare area. And since you won't be writing to it any more, it is permanently spare area (well, unless you repartition or something).
xdrol - Wednesday, April 17, 2013 - link
Actually, Windows does not TRIM when you delete a partition, but rather when you create a new one.
Hrel - Wednesday, April 17, 2013 - link
I have wondered for a long time if the extra free space is really necessary. Home users aren't benchmarking, and drives are mostly idle. It's not often that you transfer 100GB at a time or install programs.
JellyRoll - Wednesday, April 17, 2013 - link
Unrealistic workloads for a consumer environment result in unrealistic test results. How many consumer notebooks or laptops, hell, even enterprise mobile devices, will be subjected to this type of load? Answer: zero. Even in a consumer desktop this is NEVER going to happen.
JPForums - Thursday, April 18, 2013 - link
It was stated a long time ago at Anandtech that their testing was harsher than typical consumer loads for the express purpose of separating the field. Under typical consumer workloads, there is practically no difference between modern drives. I don't know how many times I've read that any SSD is a significant step up from an HDD. It has pretty much been a standing assumption since the old JMicron controllers left the market. However, more information is required for those that need (or think they need) the performance to handle heavier workloads.
Personally, everything else being equal, I'd rather have the drive that performs better/more consistently, even if it is only in workloads I never see. I don't think Kristian is trying to pull the wool over your eyes. He simply gives the readers here enough credit to make up their own minds about the level of performance they need.
Kristian Vättö - Wednesday, April 17, 2013 - link
If the drive is nearly full and there's no extra OP, then it's possible that even normal (but slightly larger/heavier, like app installation) usage will cause the performance to become inconsistent, which will affect the overall performance (average IOPS will go down). Performance will of course recover with idle time, but the hit in performance has already been experienced.
JellyRoll - Wednesday, April 17, 2013 - link
Running a simple trace of an application install will show that this is not an accurate statement. This testing also does not benefit from TRIM because there is no filesystem during the test. This ends up making an overly negative portrayal.
JPForums - Thursday, April 18, 2013 - link
Which test in particular are you referring to that has no access to TRIM, but otherwise would?
As far as application traces go, I can confirm Kristian's statement is accurate on both a Corsair Force GT 120GB and a Crucial M4 128GB. Performance drops appreciably when installing programs with a large number of small files (or copying a large number of small files, e.g. libraries). As an aside, it can also tank the performance of Xilinx ISE, which is typically limited by memory bandwidth and single-threaded CPU performance.
JellyRoll - Thursday, April 18, 2013 - link
The consistency testing and all trace-based testing used by this site are run without partitions or filesystems, and with no TRIM functionality. This has been disclosed by the staff in the comment sections of previous reviews.
bobsmith1492 - Wednesday, April 17, 2013 - link
Hi Kristian,
Let me know the regulator part number and I can calculate the loss in the regulator. The main difference is whether it is a switching or linear part. A linear part will waste 100% * (5 - 3.3) / 5 of the power, or 34%, neglecting the usually small quiescent current. A switcher will waste less, usually 10-20%.
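The linear-regulator arithmetic above is easy to sanity-check in a few lines. This is only a sketch: the 5 V and 3.3 V figures come from the comment, while the measured-power reading is a made-up example, not a number from the review.

```python
def linear_loss_fraction(v_in: float, v_out: float) -> float:
    """Fraction of input power an ideal linear regulator dissipates
    (quiescent/ground-pin current neglected)."""
    return (v_in - v_out) / v_in

waste = linear_loss_fraction(5.0, 3.3)
print(f"wasted in regulator: {waste:.0%}")      # 34%
print(f"delivered to load:   {1 - waste:.0%}")  # 66%

# A power figure measured on the 5 V rail therefore overstates what the
# 3.3 V drive itself consumes, since the same current flows through both:
measured_5v_power = 4.17                       # W - hypothetical meter reading
drive_power = measured_5v_power * (3.3 / 5.0)  # ~2.75 W at the drive itself
```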
Kristian Vättö - Wednesday, April 17, 2013 - link
It's the Micrel 29150 as far as I know. Here's the datasheet: http://www.micrel.com/_PDF/mic29150.pdf
Ashaw - Wednesday, April 17, 2013 - link
That is a linear part. Current in = current out + the ground pin current. See the graph on page 10. The ground current is about 1/50 of the output current in this part, so the input current is a good approximation of the output current.
Ashaw - Wednesday, April 17, 2013 - link
So the powers in the graphs above should be approximately 0.41 W, 2.75 W and 2.98 W respectively. (Maybe slightly less in the lower digit if I were to include regulator losses.)
bobsmith1492 - Wednesday, April 17, 2013 - link
Agreed, the SSD is using approximately 66% of the measured power on the 5V rail.
JellyRoll - Wednesday, April 17, 2013 - link
There are two problems with this statement: "In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time."
1. Anand did not introduce this testing; another website did.
2. It isn't looking at individual operations; thousands of operations are happening per second, hence the term 'IOPS' (I/Os Per Second).
JellyRoll - Wednesday, April 17, 2013 - link
Actually, there is a third problem with the statement: it isn't looking at latency either. It is looking at IOPS, which is much different from latency. There are no latency numbers in this test.
JPForums - Thursday, April 18, 2013 - link
There are no latency numbers displayed directly in the results, but latencies are implicit in the IOPS measurement. You may not be getting individual operation latencies, but IOPS is the inverse of the average operation latency: just divide 1 by the IOPS number and you'll get your average operation latency.
In general, I give reviewers the benefit of the doubt and try to put aside small slip-ups in nomenclature or semantics as long as it is relatively easy to understand the points they are trying to make. That said, you seem to have it out for Kristian (or perhaps Anandtech as a whole), giving no slack and even reading things into statements that I'm not sure are there. I have no vested interest in Anandtech beyond the interest of reading good reviews, but I have to ask, did Kristian kick your dog or something? I'm honestly interested if you have a legitimate grievance.
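The inverse relationship, and the way a per-second average can hide a slow outlier (a point raised elsewhere in this thread), can be sketched in a few lines. The numbers here are invented purely for illustration:

```python
# IOPS is operations per second, so 1/IOPS is the *average* operation latency.
iops = 50_000
avg_latency_us = 1.0 / iops * 1e6
print(f"{avg_latency_us:.0f} us average per operation")  # 20 us

# But an average hides outliers: 49,999 ops at 20 us plus a single 100 ms
# stall in the same window still reports a healthy-looking IOPS figure,
# even though one operation took 5,000x longer than average.
samples = [20e-6] * 49_999 + [0.100]  # hypothetical latencies, in seconds
iops_with_stall = len(samples) / sum(samples)
print(f"{iops_with_stall:,.0f} IOPS despite a 100 ms worst case")
```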
JellyRoll - Thursday, April 18, 2013 - link
Pointing out numerous problems with methodology is simply that. In particular, the consistency tests are wildly misleading for a number of reasons, not the least of which is an unrealistic workload. I will not resort to replying to thinly veiled flamebait attempts.
JPForums - Thursday, April 18, 2013 - link
Sorry, I wasn't trying to bait you. The posts just came off as a little hostile. Probably a result of my morning meetings.
If I'm understanding you correctly, your biggest issue is with the consistency testing methodology. I read in another of your posts that this method is similar to the tests that several large enterprises use. You seem to be familiar with these methods. Is there an alternate (better) method in use that Anandtech could be using? Alternatively, do you have a superior method in mind that isn't currently in use? I'm guessing (for starters) you'd be happier with a method that measures individual operation latencies (I would too), but I'm unaware of any tools that could accomplish this.
JellyRoll - Thursday, April 18, 2013 - link
The consistency testing and all trace-based testing used by this site are run without partitions or filesystems, and with no TRIM functionality. This has been disclosed by the staff in the comment sections of previous reviews.
If you are testing consumer hardware, the first order of the day is to use methods that accurately reflect real workloads. Removing the most crucial component required for performance consistency (TRIM), then testing 'consistency' anyway, is ridiculous. Would you test a vehicle without fuel?
Kristian Vättö - Thursday, April 18, 2013 - link
TRIM does not affect the performance consistency of a continuous write workload. TRIM only tells the controller which LBAs are no longer in use - the actual blocks still need to be erased before new data can be written. When you're constantly writing to the drive, it doesn't have time to erase blocks as fast as new write requests come in, which causes performance to sink.
If you know methods that "accurately reflect real workloads" then please share them. Pointing out flaws is easy but unhelpful unless you can provide a method that's better.
JellyRoll - Thursday, April 18, 2013 - link
Pasted from the wiki: "The TRIM command is designed to enable the operating system to notify the SSD which pages no longer contain valid data due to erases either by the user or operating system itself. During a delete operation, the OS will both mark the sectors as free for new data and send a TRIM command to the SSD to be marked as no longer valid. After that the SSD knows not to relocate data from the affected LBAs during garbage collection."
During a pure write workload there is no need for the SSD's internal garbage collection functions to read-write-modify in order to write new data. That is the purpose of TRIM. Without TRIM writes require read-write-modify activity, with TRIM they do not. Very easy to see how it boosts performance.
Kristian Vättö - Thursday, April 18, 2013 - link
You still have to erase the blocks, which is the time-consuming part. Again, there's no time for normal idle garbage collection to kick in. Yes, the drive will know which LBAs are no longer in use, but it still has to erase the blocks containing those LBAs. If you let the drive idle, then it will have time to reorganize the data so that there'll be enough empty blocks to maintain good performance, but that is not the case in a continuous write workload.
JellyRoll - Thursday, April 18, 2013 - link
It is removing the 'write' from the read-write-modify cycle. Writing a page smaller than the block requires the SSD to relocate the other data in the block first, adding work for the SSD. Remember, they erase at block level. If it isn't aware that the rest of the block is also invalid (the point of TRIM), it must first move the other data.
Kristian Vättö - Thursday, April 18, 2013 - link
It's a read-modify-write cycle (read the block to cache, modify the data, write the modified data), so the write operation is still there - otherwise the drive wouldn't complete the write request in the first place. You also seem to be assuming that the rest of the pages in the block are invalid, which is unlikely to be the case unless we're dealing with an empty drive. Hence it's exactly the same cycle with TRIM, as you still have to read at least some of the data and then rewrite it. You may have to read/write less data as some of it will be invalid, but remember that garbage collection (with TRIM off) will also mark pages as invalid on its own. That's the reason why performance will stay high even if TRIM is not supported (e.g. OS X), assuming that the garbage collection is effective (there's at least 7% OP, so there are always invalid pages).
JellyRoll - Thursday, April 18, 2013 - link
I am not assuming the data is still valid; the SSD does. It has to move the data if it considers it valid. TRIM removes the need to move this 'other' data, thus speeding up the drive.
Kristian Vättö - Monday, April 22, 2013 - link
Here are some tests I did with a Plextor M5 Pro Xtreme.
RAW (no partition): https://dl.dropboxusercontent.com/u/128928769/Cons...
NTFS (default cluster size): https://dl.dropboxusercontent.com/u/128928769/Cons...
As you can see, there's no major difference. In fact, there's a bigger slowdown with NTFS versus raw drive.
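The erase-and-relocate dynamics being argued over here can be illustrated with a toy greedy-garbage-collection model. This is purely illustrative - no real controller works exactly like this, and all the sizes and counts below are invented. Overwrites mark old pages invalid; when the drive runs out of erased blocks, it must relocate whatever is still valid from a victim block before erasing it, so more spare area means fewer valid pages per victim and less relocation work:

```python
import random

PAGES_PER_BLOCK = 64
NUM_BLOCKS = 64

def write_amplification(spare_fraction: float, host_writes: int = 30_000,
                        seed: int = 1) -> float:
    """Toy FTL with greedy GC: returns flash page writes / host page writes."""
    rng = random.Random(seed)
    user_lbas = int(NUM_BLOCKS * PAGES_PER_BLOCK * (1 - spare_fraction))
    live = [set() for _ in range(NUM_BLOCKS)]  # valid LBAs held by each block
    used = [0] * NUM_BLOCKS                    # programmed pages per block
    where = {}                                 # lba -> block with its live copy
    open_b, spare_b = 0, 1                     # block being filled / erased spare
    flash_writes = 0

    def program(lba: int, block: int) -> None:
        nonlocal flash_writes
        if lba in where:                       # overwrite: old page goes invalid
            live[where[lba]].discard(lba)
        used[block] += 1
        live[block].add(lba)
        where[lba] = block
        flash_writes += 1

    for _ in range(host_writes):
        program(rng.randrange(user_lbas), open_b)
        if used[open_b] == PAGES_PER_BLOCK:    # open block just filled up
            erased = [b for b in range(NUM_BLOCKS)
                      if used[b] == 0 and b != spare_b]
            if erased:                         # fresh blocks still available
                open_b = erased[0]
                continue
            # GC: pick the block with the fewest valid pages, relocate those
            # pages (the cost that invalidation via TRIM/overwrite reduces),
            # then erase it.
            victim = min((b for b in range(NUM_BLOCKS) if b != spare_b),
                         key=lambda b: len(live[b]))
            for lba in list(live[victim]):
                program(lba, spare_b)
            live[victim].clear()
            used[victim] = 0                   # block erase
            open_b, spare_b = spare_b, victim

    return flash_writes / host_writes

# More spare area -> fewer valid pages per GC victim -> lower amplification.
print(write_amplification(0.07))  # ~7% spare (minimum OP): high amplification
print(write_amplification(0.25))  # 25% spare, as recommended: markedly lower
```

This also ties back to the review's 25% free-space recommendation: in the model, shrinking the spare fraction sharply increases the relocation work done per erased block under sustained writes.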
JPForums - Thursday, April 18, 2013 - link
1) I was not aware that another website created this method of characterizing performance, but I'll give you the benefit of the doubt. Nonetheless, the statement that Anand introduced it to the standard test suite here at Anandtech in the Intel SSD DC S3700 review is a true statement. Given the context of the original statement, this is the more likely intended interpretation. Out of curiosity, which site did create the method?
2) I'm not sure whether or not the test measures individual operation latencies, as IOPS is basically the inverse of an average of those latencies over time. It is kind of like the difference between FPS and frame latencies. That said, the representation on the graphs is more like the inverse of a one-second sliding-window average. Saying as much is kind of a mouthful, though. How would you phrase it?
JellyRoll - Thursday, April 18, 2013 - link
Several large enterprise websites have used this methodology for years for numerous types of testing. This method does not give an accurate portrayal of latency performance. It merely gives one-second intervals with thousands of IOPS each, which hides the maximum results among thousands of other I/Os.
puppies - Wednesday, April 17, 2013 - link
Spelling mistake in the 3rd line. Should be "controlled", not "controller".
Great article btw :D
Kristian Vättö - Thursday, April 18, 2013 - link
Fixed! Thanks for the heads up :-)
iwodo - Thursday, April 18, 2013 - link
And we have to wait till the Broadwell chipset before we get SATA Express with (hopefully) 16Gbps.
abhilashjain30 - Monday, July 29, 2013 - link
You can purchase online from SSD Portal ( http://onlyssd.com/ssd-brand/buy-plextor-ssd ).
abhilashjain30 - Thursday, September 26, 2013 - link
I recently bought from onlyssd dot com and received the drive within 2 working days of the order date. It is working fine. I think among mSATA drives the Plextor is a better option compared to Crucial and others.