Indeed. I'd be interested in seeing how a Crucial M4 64GB mated to a pair of short-stroked single-platter Samsung drives in RAID-0 would perform in a dedicated gaming system.
Really? Man, I thought short-stroking drives was all but dead these days. That's the whole point of SSDs: if you're so concerned about storage performance that you're willing to short-stroke an HDD, just move to a full SSD and be done with it. Plus, storage is only a minor bottleneck in a "dedicated gaming system"; your GPU is the biggest concern, at least if you have any reasonable CPU and enough RAM.
My biggest concern with SRT is the reliability stuff Anand mentions. I would *love* to be able to put in a 128GB SSD with a large 2TB HDD and completely forget about doing any sort of optimization. That seems like something that would need to be done at the hardware level, though, and you always run the risk of data loss if the SSD cache somehow fails (though that should be relatively unlikely). Heck, all HDDs already have a 16-64MB cache on them, and I'd like the SSD to be a slower but much larger supplement to that.
Anyway, what concerns me is that we're not talking about caching at the level of, say, your CPU's L1 or L2 or even L3 cache. There's no reason the caching algorithm couldn't look at a much longer history of use so that things like your core OS files never get evicted (i.e. they are loaded every time you boot and accessed frequently, so even if you install a big application all of the OS files still have far higher hit frequency). Maybe that does happen and it's only in the constraints of initial testing that the performance degrades quickly (e.g. Anand installed the OS and apps, but he hasn't been using/rebooting the system for weeks on end).
The "least recently used" algorithm most caching schemes use is fine, but I wonder if the SSD cache could track something else. Without knowing exactly how they're implementing the caching algorithm, it's hard to say would could be improved, and I understand the idea of a newly installed app getting cached early on ("Hey, they user is putting on a new application, so he's probably going to run that soon!"). Still, if installing 30GB of apps and data evicts pretty much everything from the 20GB cache, that doesn't seem like the most effective way of doing things--especially when some games are pushing into the 20+ GB range.
It seems like a good way to do it would be for the software to recognize periods of high disk activity and weigh caching of all LBAs during that period much higher.
So for example, system boot, where lots and lots of files are read off of the drive, would be a situation where the software would recognize that there is a high rate of disk I/O going on and to weigh all of the files loaded during this time very highly in caching.
The more intense the disk I/O, the higher the weight. This would essentially mean that the periods that you most want to speed up - those with heavy disk I/O - are most likely to benefit from the caching, and disk activity that is typically less intense (say, starting a small application that you use frequently but that is relatively quick to load because of the small number of disk hits) would only be cached if it didn't interfere with the caching of more performance-critical data.
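Something like the following toy weighting loop captures what I mean; it is purely illustrative and has nothing to do with how Intel actually implemented SRT:

    # LBAs touched during an I/O burst (e.g. system boot) get a caching weight
    # proportional to how busy the disk was at that moment.
    from collections import defaultdict

    WINDOW = 1.0  # seconds per sampling window

    def weight_accesses(trace):
        """trace: list of (timestamp, lba) disk reads."""
        per_window = defaultdict(list)
        for ts, lba in trace:
            per_window[int(ts / WINDOW)].append(lba)

        weights = defaultdict(float)
        for lbas in per_window.values():
            intensity = len(lbas)          # requests in this window
            for lba in lbas:
                weights[lba] += intensity  # busy periods count for more
        return weights                     # higher weight -> keep in the cache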
All that being said, I am not a fan of complex caching mechanisms like this as a way to improve performance. The big drawback, as pointed out in this well-presented article, is the lack of consistency: sometimes you will get good performance and sometimes not, depending on tons of intangible factors affecting what is and isn't in the cache. Furthermore, you always introduce extra overhead with the complexity of the caching scheme, in this case because it's driven by a piece of software running on the CPU, and because data gets shuffled around and written/read more times than it would be with no caching involved.
Then again, it is highly unlikely to *hurt* performance. So if you don't mind sometimes waiting longer than other times for the same thing to happen (that in particular drives me crazy, though; if I am used to a program loading in 5 seconds, the time it takes 10 seconds stands out like a sore thumb), and you can absorb the extra cost involved, it's not a totally unreasonable way to pick up a little performance.
What is the algorithm that the filesystem would use to decide what data to cache in preference to other cacheable data? That is the question at hand, and it doesn't matter at what level of the software stack it's done, the problem is effectively the same.
<quote>I would *love* to be able to put in a 128GB SSD with a large 2TB HDD and completely forget about doing any sort of optimization.</quote>
I heartily agree with that. Everyone is so gung ho about having an SSD for OS and applications, an HD for data, and then <b>manually managing the data!</b> Isn't technology supposed to be doing this for us? Isn't that the point? Enthusiast computers should be doing things the consumer-level stuff can't even dream about.
Intel, please, for the love of all that is holy, remove the 64GB limit.
On a completely unrelated note, why is the AT commenting software unable to do things the DailyTech site can? Quotes, bolding, italics and useful formatting features like that would really be welcome. :)
I'm not sure when they got removed, but standard BBS markup still works, if you know the codes. So...
[ B ]/[ /B ] = Bolded text
[ I ]/[ /I ] = Bolded text
[ U ]/[ /U ] = Bolded text
There used to be an option to do links, but that got nuked at some point. I think the "highlight" option is also gone... but let's test:
[ H ]/[ /H ] = [h]Bolded text[/h]
So why don't we have the same setup as DT? Well, we *are* separate sites, even though DT started as a branch off of AT. They have their own site designer/web programmer, and some of the stuff they have (i.e. voting) is sort of cool. However, we would like to think most commenting on AT is of the quality type so we don't need to worry about ratings. Most people end up just saying "show all posts" regardless, so other than seeing that "wow, a lot of people didn't like that post" there's not much point to it. And limiting posts to plain text with no WYSIWYG editor does reduce page complexity a bit I suppose.
Obviously, I missed changing the pasted text above. That's Bold, Italics, and underlined text. (And highlighted text is now gone, thankfully, so people talking about [H]OCP don't look weird. LOL)
Hopefully Intel will be more concerned about what users really need and not simply apply their own set of rules by limiting certain functions as they like.
Obviously, a lot of time goes into these reviews, but I would really like to see an update using a 64GB Vertex 3 or other fast 64GB drive as the cache. I suppose that the only real improvement would be how many apps/files are cached before eviction. But the Vertex 3 is a LOT faster than the new Intel 311 or whatever it is...
Take this with a huge grain of salt. The following quote from the review makes me shiver “In my use I've only noticed two reliability issues with Intel's SRT. The first issue was with an early BIOS/driver combination where I rebooted my system (SSD cache was set to maximized) and my bootloader had disappeared. The other issue was a corrupt portion of my Portal 2 install, which only appeared after I disabled by SSD cache.”
Don’t get me wrong, I’m not trolling. I was really looking forward to SSD caching. But my previous experience, when I randomly lost all data on an Intel RAID 1 array without any sign of hard-drive failure, made me skeptical of Intel's RAID software.
Anand writes: "Paired with a 20GB SLC SSD cache, I could turn a 4-year-old 1TB hard drive into something that was 41% faster than a VelociRaptor."
That's an assertion that really needs some heavy qualification, for instance by appending "at least sometimes and for certain things."
SRT is an intriguing approach on the part of Intel, but ultimately it comes across to me as insufficient and unfinished. I have little confidence in its ability to gauge what's important to cache as opposed to what's used more often. Those aren't the same things at all.
I'd like to see a drive approach where a limited capacity boot/application SSD is combined with a conventional HD within a single standard drive enclosure. This hybrid would present itself to the host as a single drive, but include a software toggle to allow selective access to each drive for setup purposes. You'd install the OS and programs on the SSD for rapid boot/launch, while user mass file storage would be on the HD. In normal use you wouldn't know, other than in performance terms, that two devices were involved.
Yes, I know that we can achieve much of that today by using separate SSD and HD devices. I have two such setups, one a server and the other a workstation. However they both require some technical attention on the part of the user, and it's not an approach that works in a laptop, at least not without big compromises.
Can I install the OS on one 60GB SSD, for example, and then use SRT with a second 60GB SSD to cache a 2TB RAID 0 array?
I've got two 60s in a RAID 0 now, but obviously most of my programs are on separate HDDs. If my question above is possible, maybe this is a way to split the difference, as it were.
Seems that it would be better to designate a partition to be cached and other partitions uncached. With only a 20GB cache SSD, ripping from BD to .MP4 could easily cause cache evictions.
And, will this work with a mixed Rapid Storage array? I typically run hard drives in pairs, and mirror (raid 1) the first 120GB and stripe the remaining so I've got a fault-protected 120GB boot device and a 1700GB speedster. In this case, I'd only want the boot device cached.
This looks like a valid concern. For HTPCs, there is usually a data partition separate from the boot / program files partition. Usage of the SSD cache for the data partition makes no sense at all.
I agree with the validity of this proposal, but I must also comment on the (non)sense of caching the data partition: I for one was disappointed to read that multi-MB writes are not (write) cached. That is the only thing keeping my RAID-5 storage slow, and a nice 32GB cache would be just the thing for me; that's the largest amount I ever write to it in a single chunk. So instead of 100MB/s I'm still stuck with the 40 down to 20MB/s my RAID provides.
Still, this is not the issue at all. I have no idea why manufacturers always think they know it all. Instead of just providing a nice settings screen where one could set preferences, they hard-code them...
1. Manually move the folder from the SSD to the hard drive.
2. On the HDD, select the folder, right-click, "Pick Link Source".
3. On the SSD, right-click, "Drop As" > "Symbolic Link".
4. Profit!
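If you'd rather script the move than use the shell extension, the same trick looks roughly like this in a few lines of Python (the paths are made up, and creating symlinks on Windows needs admin rights or Developer Mode):

    import os, shutil

    old_path = r"C:\Program Files (x86)\Steam\steamapps\common\MyGame"  # hypothetical
    new_path = r"D:\SteamOverflow\MyGame"                               # hypothetical

    shutil.move(old_path, new_path)                            # relocate the game folder to the HDD
    os.symlink(new_path, old_path, target_is_directory=True)   # leave a directory link at the old spot

Steam (or anything else) keeps using the old path and never notices that the files moved.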
I can really 'feel' Windows' SuperFetch advantages, the same way I can feel when I'm requesting files that are not 'fetched'.
This software feature Intel is now rolling out is very similar to SuperFetch, although it uses an SSD instead of RAM, and a lot more space is available on an SSD than there usually is in RAM.
It is a neat feature, and I'm sure it will be copied by other software houses from now on.
In fact, Sun invented a time machine and copied this approach into ZFS years ago. ;-)
The feature is called L2ARC (level 2 adaptive replacement cache) there and works nicely with SSDs (but also "fast" HDDs, battery-backed DDR nonvolatile memory, etc.). The nice thing is that if SRT takes off and these 20g SLC SSDs get competition and a price crunch (as well as general availability), using these features in ZFS is going to become a lot cheaper. Though even now there is nothing preventing you from using a dozen 240g SSDs as L2ARC (and ZIL) to speed up your farm of disks :)
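For anyone who wants to try it, attaching a cache (L2ARC) or log (ZIL) device to an existing pool is a one-liner each; the pool and device names below are just placeholders:

    zpool add tank cache /dev/sdb
    zpool add tank log /dev/sdc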
Damn you Intel, so all my old first gen 16GB SSDs can go to the bin? Only you in the world uses 10bit channel for flash, and that's why you set a 20GB (18.6 formatted) limit? ******
Dan's correct, it doesn't say you have to have a 20GB drive. Intel just happens to be launching a 20GB drive that they are hoping you will use for this.
I am interested in using Linux and I am wondering about various things: 1. Will it work under Linux? Can I configure it from Linux? 2. Is it file system dependent? I guess it is not. 3. Will it work on multi-OS machines? For example, what happens if I dual boot Windows and Linux?
Unrelated to Linux: does this scheme get confused by, say, using lots of VMs?
No. So-called fake RAID (software RAID) supported by the chipset/drivers never works this way in Linux. RST does not work under OSes other than Windows; Mac and Linux have to use their built-in software RAID rather than the non-existent driver-based one, and they will lack all support for SSD caching.
VMs usually write to a virtual hard drive (file) that saves the data to the disk. That should be absolutely fine.
Well, you can always put the root partition on the SSD by creating custom partitions during installation, and put the resource-hogging partitions such as /var and /home on your HDD. This way, all the binaries and libraries load from the SSD. If you don't have enough space on your HDD to do that either, then you are out of luck. That's the closest you can get to SSD caching in Linux. Of course there is a kernel patch to do SSD caching natively, but it's pretty outdated and probably not compatible with your hardware. To do SSD caching in Windows alongside Linux, you have to reserve some space on the SSD for it.
Can you have an SSD as your boot drive, then a large HDD (typical configuration... OS/apps on SSD, data/etc on HDD) and then have yet-another SSD enabled with SRT for caching the HDD? Seems like the best of both worlds (other than cost).
I reckon that setup would almost certainly work fine.
What I'm wondering is whether you could use a single SSD partitioned so that part of it was a boot drive and the other part was a cache for a HDD. Such a setup would solve the problem of the 120gb SSD not being quite the right size for any particular purpose.
A 60-80gb partition with Windows and apps on it and the remaining space used as a cache. This would avoid the problem of having to symbolic link Steam games and so forth, while also not requiring you to buy two SSDs in order to have a boot drive and a cache drive.
Anand did mention that a cache drive could be partitioned so that only part of it needed to be used as a cache. Just not sure if there would be any issues that might arise with using the remaining partition as a boot drive.
Here is a quote from vr-zone.com’s review (http://vr-zone.com/articles/first-look-msi-z68a-gd... ) on SRT: “All existing partitions on the SSD must be deleted before it can be used as a cache”. This makes me believe that using one SSD for dual purposes (boot drive and SRT at the same time) is not possible. I really want to hear Anand’s last word on this.
What do you mean by that? You partition the SSD drive, install the OS in the first partition, set-up the other partition as a cache, and then format your remaining HDD?
I wonder, can you tell SRT to cache blocks only from the HDD onto the cache partition, because by default SRT may decide to cache system files that already reside onto a fast SSD partition...
I know it's early on for Z68, but I'm curious how other SSDs will perform in SRT mode. I ask because the 40 GB X25-V is on sale here for half its usual price...
To answer my own question, Tom's Hardware reviewed SRT with several SSDs, and to put it bluntly, the X25-V sucks. Its very low write speed of 35 MB/sec actually drags the hard drive down in a few tests.
Yeah, that is a nice way of putting it. Talk about sugar coating. Here is a question for ya: was Intel being "conservative" when they tried to shove Rambus down everyone's throats? If it weren't for AMD and DDR, god knows how much memory would cost now. I still have one of those Rambus P4 systems running in the lab right now (Intel 850 chipset with dual-channel RDRAM). I did some memory benchmarking on it and was shocked to find that it was actually slower than any of the P4 DDR 266 machines we have running. (Yes, we are slow to upgrade, lol.) It runs at about DDR200-equivalent speeds. And we really paid out the wazoo for that system.
Discrete graphics cards are limited - even though they often have three, four, or more connectors these days, they can often only drive two monitors at a time (unless you use a DisplayPort connector... and monitors with DP don't really exist yet). I have two monitors driven by my HD 6950 via the digital video out connectors, so the HDMI connector on that card is "dead" until I turn one of the monitors off. What I would like to be able to do is have my dGPU drive my two monitors and the iGPU drive my 1080p TV via HDMI. Can I do that? This discussion of Virtu muddies the water some; it's unclear.
Well, so SRT is a good idea, but again it is limited artificially in its use. Sounds to me like the P67/H67 stunt all over again.
Why is it limited?
* For starters it is driver supported, and I believe that means Windows only (I could find no mention of which OSes are supported). To be fully useful it belongs in the chipset/BIOS realm.
* Next there is the artificial 64GB limit. As is obvious from even these tests, that is not really the practical limit of its usefulness. It is simply a marketing limit so as not to compete with Intel's own full-SSD business. You have to ask yourself: why not use your aging 100GB or 256GB SSD (a couple of years down the road) as an SRT drive?
* "With the Z68 SATA controllers set to RAID (SRT won't work in AHCI or IDE modes) just install Windows 7 on your hard drive like you normally would." So only RAID setups are supported? Well, you are testing with a single hard drive, so this might be a confusing statement. But if it is RAID only, then that is certainly not what Joe Shmoe has in his desktop (let alone his laptop).
If the AT Heavy Workload Storage Bench is a typical usage case for you, then you shouldn't be using SRT anyway - you'd have a RAID array of SSDs to maximize your performance.
For caching purposes, I'm sure 64GB is a very reasonable limit. The more data you cache, the more data you have to pay attention to when it comes to kicking out old data.
And it isn't a RAID set up, per se. You set the motherboard to RAID, but the entire system is handled in software. So Joe Shmoe wouldn't even have to know what a RAID is, though I don't see Joe Shmoe even knowing what a SSD is...
Who says you can't use your old 100 or 256GB SSD as an SRT device? The article clearly states that you can use whatever size drive you want. Up to 64GB of it will be used for cache and the rest can be used for data. If you have more than 64GB of data that you need to have cached at one time then SRT isn't the solution you should be looking into.
As for OS limitations...you can't seriously think Intel would wait until they had this running on every platform imaginable before they released it to the public, can you? This is the first version of the driver that supports it so of course it will have limitations. You can't expect a feature of a Windows-only driver to be supported by a non-Windows OS. I'm sure this feature will be available on Linux once Intel actually makes a Linux RST driver.
And don't forget that if you don't partition the rest of the space on the SSD it will use it for wear levelling, which will be even more important in this situation.
I still don't get why Western Digital doesn't take 4GB of SLC and solder it onto the back of their hard drive controller boards. It's not like they don't have the room. Hopefully now they will do that: 1TB + 4GB SLC, all for under $100 in one package, with 2 SATA ports.
What is the OS support on those drivers (Windows?, Linux?, Mac OS X?, BSD?, Open Source?, ...)?
Does the SRT drive get TRIM? Does it need it?
"With the Z68 SATA controllers set to RAID (SRT won't work in AHCI or IDE modes) just install Windows 7 on your hard drive like you normally would."???
Is there any optimization to allow the hard drive to avoid seeks? If this all happens on the driver level (as opposed to on the BIOS level) then I'd expect to gain extra efficiency from optimizing the cached LBAs so as to avoid costly seeks. In other words you don't want to look at LBAs alone but at sequences of LBAs to optimize the utility. Any mention of this?
Also one could imagine a mode where the driver does automatic defragmentation and uses the SSD as the cache to allow to do that during slow times of hard drive access. Any comment from Intel?
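To illustrate the seek point, the cache could reason about runs of consecutive LBAs rather than individual blocks, something like this little helper (my own sketch; nothing in the article says SRT works this way):

    # Collapse a sorted list of cached block addresses into contiguous extents,
    # so whole runs get cached together and the HDD avoids extra seeks.
    def group_into_extents(lbas):
        extents = []  # list of (start_lba, length)
        for lba in lbas:
            if extents and lba == extents[-1][0] + extents[-1][1]:
                extents[-1] = (extents[-1][0], extents[-1][1] + 1)
            else:
                extents.append((lba, 1))
        return extents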
What happened to the proposed prices? If I remember correctly, the caching drive was supposed to cost only $30-40. Now, at $110, the customer would be better off buying a "real" 60GB SSD.
It's interesting, Anand has a generally positive review and generally positive comments. Tom's Hardware, which I generally don't respect nearly as much as Anand, reviewed SRT both a while back and covered it again recently and is far less impressed as are its readers. I have to say that I agree with Tom's on this particular issue though.
It is *not* a halfway house or a cheaper way to get most of the benefit of an SSD. For $110 extra plus the premium of a Z68 mobo you may as well get an SSD that is 40-60GB bigger than Larson Creek (or 40-60GB bigger than your main system SSD) and just store extra data on it directly and with faster access and no risk of caching errors.
For those who said SRT is a way of speeding up a cheap HTPC: it doesn't seem that way, as it's not really cheap and it won't cache large, sequential media files anyway. For those who said it will speed up your game loading: it will only do so for a few games, on the 2nd or 3rd run, and it will evict them if you play a lot of different games, so you're better off having the few that count directly on the SSD anyway (using Steam Mover if necessary).
For your system drive it's too risky at this point unless you use Enhanced mode (less impressive), and for speeding up your large data (games/movies) it's barely relevant, for the aforementioned reasons. For all other scenarios you're better off with a larger SSD.
It's too little, too late, and too expensive. That it's not worth bothering with is a no-brainer to me, which is a shame, as I was excited by the idea of it.
Could one kindly request that numbers for both the 64GB C300 and the 20GB 311 (sans hard disk) be added? It would give a good idea of the performance hit one could expect from using these for SRT versus as a standalone boot drive.
Wouldn't be persistent between restarts, so that's a problem right there. It would have to build up the cache every time you reboot, and you couldn't use "Max Cache" mode, so you'd have to wait for the HDD for all writes.
In this article, it was pointed out that using an SSD is still better for those who want speed. But is a SATA3 SSD & SATA2 VelociRaptor combo possible? Or what about an SSD + SSD/HDD combo? Some sort of compromise without the great penalties, or smaller penalties & greater value?
I've seen similar numbers using a smaller SSD with Windows 7's ReadyBoost, and it kept the most used data in the cache better. I'd prefer just using that, as it seems more predictable.
You can (and I have) set up ReadyBoost on a SATA SSD. I had a 60GB OCZ Apex as my ReadyBoost drive for about 6 months, before I got my dual Vertex 2s as a new boot drive. Windows 7 has a limit of 32GB for ReadyBoost usage, though. It made a heck of a difference in boot time and some program load times; however, it took a little while to get the caching set up right to cache what I actually used on a regular basis. It started caching Firefox rather quickly, but took a while to pick up on caching Diablo 2.
I'd REALLY like to see you guys compare this SRT caching to two of the fastest 7200rpm drives out there in RAID 0, because 1-4 seconds on launching applications and loading game levels isn't worth 100 extra bucks.
So compare these configurations (MD = mechanical disk):
1 MD
1 MD with cache
2 MD in RAID 0
2 MD in RAID 0 with cache
Vertex 3 SSD by itself (and/or the really fast Corsair one)
You already have most of this testing done and in this article.
PLEASE PLEASE PLEASE PLEASE do this soon! Thanks guys!
RAID 0 can't even compare. With PCMark Vantage, a RAID 0 of MDs gives you roughly a 10% increase in performance in the HDD suite. A high-end SSD is 300-400% faster in the Vantage HDD suite scores. Even if you only achieve 50% of the SSD's performance increase with SRT, you'd still be seeing a 150-200% increase, and this article seems to claim that SRT is much closer to a pure SSD than 50%.
Obviously benchmarks like Vantage HDD suite don't always reflect real world performance but I think there's still an obvious difference between 10% and a couple hundred %...
all I know is since I switched to RAID 0 my games load in 2/3 the time they used to. 10% is crazy. RAID 0 should get you a 50% performance improvement across the board; you did something wrong.
Or the fact that it is an entirely software-based solution. Intel's software does not, as far as I and Google know, run on Linux, nor would I be inclined to install such software on Linux even if it did. So this is a non-starter for me. For Steam and games I say get a 60-120GB consumer-level SSD and call it a day. No software glitches, no stuff like that.
This kind of caching needs to be implemented at the filesystem level if you ask me, which is what I hope some Linux filesystems will bring 'soon'. On Windows the outlook is bleak.
Are there any plans in the future of this technology being made available to P67 boards?
Before I read this I thought it was a chipset feature. I had no idea this was being implemented in software at a driver level.
I am hoping that after a reasonable amount of time passes they make this available for P67 users. I understand that for now they want to add some value to this new launch but after some time passes why not?
Given that the drive has a built-in 4GB of flash, it would be very interesting to compare this to the aforementioned SRT. Architecturally similar, though SRT requires two drives instead of one. Heck, what would happen if you used SRT with a Seagate Momentus?
"Even gamers may find use in SSD caching as they could dedicate a portion of their SSD to acting as a cache for a dedicated games HDD, thereby speeding up launch and level load times for the games that reside on that drive."
Does Intel make any mention of possible future software versions allowing users to specifically select applications that take precedence over others for staying in the cache? For example, say you regularly run 10-12 applications (assuming that workload is sufficient to begin the eviction process): rather than having the algorithm just select the least-utilized files, let you point at an exe so it can track the associated files and keep them in cache at a higher priority than the standard cleaning algorithm would (a rough sketch of what I mean follows below).
2. Would it even make sense to use this in a system that has a 40/64/80 gig OS SSD and then link this to an HDD/array, or would the system SSD already be handling the caching? Just trying to see if this would help offload some of the work/storage to the larger HDDs, since space is already limited on these smaller drives.
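Something along these lines is what I'm picturing for the pinning idea; it is hypothetical, not an existing RST feature, and the path is made up:

    # Files belonging to a user-pinned exe are skipped when the cache looks
    # for eviction victims.
    PINNED_APPS = {r"C:\Games\Foo\foo.exe"}

    def eviction_candidates(cached_files, file_owner):
        """file_owner: maps a cached file to the application that uses it."""
        return [f for f in cached_files if file_owner.get(f) not in PINNED_APPS]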
What is the degradation like with long-term use? I know that without TRIM, SSDs tend to lose performance over time. Is there something like TRIM happening here, since this all seems to be below the OS level?
This technology looks to be a boon for so many users. Whereas technophiles who live on the bleeding edge (like me) probably won't settle for anything less than an SSD for their main boot drive, this SSD cache + HDD combo looks to be an amazing alternative for the vast majority of users out there.
There's several reasons why I really like this technology:
1. Many users are not smart and savvy at organizing their files, so a 500GB+ C drive is necessary. That is not feasible with today's SSD prices.
2. This allows gamers to have a large HDD as their boot drive and an SSD to speed up game loads. A 64GB SSD would be fantastic for this as the cache!
3. This makes the ultimate drop-in upgrade. You can build a PC now with an HDD and pop in an SSD later for a wicked speed bump!
I'm strongly considering swapping my P67 for a Z68 at some point, moving my 160GB SSD to my laptop (where I don't need tons of space but the boot speed is appreciated), and using a 30-60GB SSD as a cache on my desktop for a Seagate 7200.12 500GB, my favourite cheap boot HDD.
Is the Intel 311 the best choice for the money, or would other SSDs of similar cost perform better? For example, the egg has the OCZ Vertex 2 and other SandForce-based drives in the 60GB range for approximately $130. That is a better cache size than the 20GB of the Intel drive.
SandForce relies on compression to get some of its high data rates; would that still work well in this kind of cache scenario?
I was wondering if it would still be possible to do a RAID setup with SRT? For example, I would probably want to do a RAID 5 setup with three 3TB drives but also have the cache enabled; I'm not sure if this would work, though.
RE: Z68 capable of managing SRT and traditional RAID at the same time?
I've looked for an answer to this without success.
I did find out the H67 express chipset can't manage more than one RAID array. I won't be surprised to learn the same for the Z68. Which is to say, your choice: either SRT or traditional RAID, but not both.
" I view SRT as more of a good start to a great technology. Now it's just a matter of getting it everywhere."
It actually doesn't have much of a future. So, OK, Marvell first made its own chip that does this, and now Intel has put similar tech on Z68, but let's look at what's ahead. As you said, NAND prices are coming down, and soon enough SSDs will start to move into the mainstream, eroding the available market for SRT, while at the same time HDD makers will also have much better hybrid drives. All in all, SRT is a few years late.
What happens if we have an SSD as a boot drive? Would it recognize it as an SSD and only cache the secondary HDD? It would be nice to have that, as my boot SSD is only 80GB and my less frequently used programs are on my 2TB drives. This is also great for upgraders, as now you have a use for your last-gen SSDs!
A feature of the Z68 is that any SSD can be used, up to 64GB in size. AnandTech does the best SSD reviews I've read, and I was disappointed not to see some tests with a larger cache drive, especially when there were issues with bumping data off of the 20GB drive.
I think that a larger cache drive will be the real life situation for a majority of users. There are some nice deals on 30GB to 64GB drives right now and it would be great to see a review that tries to pinpoint the sweet spot in cache drive size.
Hopefully my next workstation will have an SSD for cache and an HDD for application storage. This will greatly shorten the length of time required to transition to SSDs in the workplace. A one-drive-letter solution is just what was needed for mass adoption.
This seems like a good usage for old "obsolete" SSDs that you wouldn't use as a boot drive any more. I have a couple of 32GB Mtrons laying around, and while their random write sucks (on par with velociraptor sustained, but not burst) the random read at low QDs is good (10K at QD 1 = 40MB/s). I've been using them as boot drives in older machines running dual cores, but it could be nice to upgrade and use them as cache drives instead.
It would be nice to see a lineup of different older low-capacity SSDs (16-64GB) with the same HDDs used here, for a comparison and to see if there's any point in putting an OCZ Core, Core V2, Apex, Vertex (Barefoot), Transcend TS, Mtron Mobi/Pro, Kingston V+, or WD Silicon Drive on caching duty.
I'd like to see if using something like a 64GB Vertex 3 would make much difference compared to using Intel's 20GB SSD. It seems like it would almost never need to evict, so I'd expect some pretty hefty reliability improvements.
For those working within the X58 chipset world who have access to motherboards supporting the Marvell 9128 "HyperDuo" SATA III (6Gbps) chip, what have people seen in terms of stability and speed?
Understandably, the X58 chipset is a quickly fading market, but I happened to have a spare i7 920 D0 lying around and picked up a recently released LGA 1366 motherboard to put that CPU to use....
I'm looking for a functionality/application acting like: 1. Smart Response Technology (problem: it cannot be used when the OS is installed on the SSD), or 2. ReadyBoost, but without deleting the cache during reboot.
I want a program/function working like a read and write cache(*) for the 7200rpm drive (using e.g. 10-30GB of the SSD or a USB device for the cache) that "survives" an OS restart. Does anyone know of an application with this functionality? (Solutions I already know: 1. buy a second SSD to use as an HD cache, and 2. install the OS on the 7200rpm drive and use part of the SSD as cache.)
(*) By cache I mean something like: mirror the latest files read from the HD, and write data directly to the SSD or USB cache, then mirror it to the hard drive later (once it has spun up from idle to 7200rpm).
Background: my system is Windows 7, a Z68 motherboard, and a 120GB SSD + a 1TB 7200rpm disk. The slower disk goes into standby (which is fine, because I don't use it that often), but when data is needed it spins up slowly, which is annoying.
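In the absence of such a tool, the behaviour I'm after boils down to something like this naive sketch (user-level Python, hypothetical paths; a real implementation would work at the block level and handle crashes):

    # Writes land on the SSD staging area first and are flushed to the sleeping
    # HDD later; the cache survives a reboot because it is just files on the SSD.
    import os, shutil

    SSD_STAGING = r"S:\hdd_write_cache"
    HDD_ROOT = "D:\\"

    def write(relative_path, data):
        staged = os.path.join(SSD_STAGING, relative_path)
        os.makedirs(os.path.dirname(staged), exist_ok=True)
        with open(staged, "wb") as f:
            f.write(data)                    # fast: SSD only, HDD stays asleep

    def flush_to_hdd():                      # run once the HDD has spun up
        for root, _, files in os.walk(SSD_STAGING):
            for name in files:
                staged = os.path.join(root, name)
                rel = os.path.relpath(staged, SSD_STAGING)
                target = os.path.join(HDD_ROOT, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.move(staged, target)  # mirror to the hard drive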
I'm surprised the HD manufacturers have not started fighting back and providing hybrid SSD/HDDs with a write-through cache, etc. A 1TB hard disk with 64GB of SSD on board would rock, especially if they take the supercapacitor route for guaranteed writes to the SSD NAND on power failure. I've recently bought one of the new Comay Venus 120GB SSDs and it has these features, not to mention performance that blows OCZ out of the water. I just wish I didn't have to mess around thinking about what to keep on the SSD and what to keep on the HDD; a hybrid would be simplicity itself.
Do you know if SRT will work with all processors that are otherwise compatible with the Z68 chipset? I've seen some reports that only true "Core" processors are supported, like the i3/i5/i7, while Sandy Bridge-based Celerons and Pentiums are not.
MrCromulent - Wednesday, May 11, 2011
Thanks for the review! Good to see that Intel's SSD caching actually works quite well. I'm looking forward to the next generation of SB notebooks with a ~20GB mSATA SSD combined with a 1TB 2.5" hard drive.
Zoomer - Wednesday, May 11, 2011
Or the filesystem can manage the cache. That would be a much more intelligent and foolproof way to do this.
vol7ron - Wednesday, May 11, 2011
Can you point a RAM disk to this caching drive?
Mr Perfect - Thursday, May 12, 2011
I hadn't thought to try BBCode [b][i][u]. Thanks, Jarred.
Mr Perfect - Thursday, May 12, 2011
Much less use it correctly...
FlameDeer - Thursday, May 12, 2011
Hi Jarred, about the option to do links, I tried it before using the codes below:
[L=text]/[/L] = [L=AnandTech]http://www.anandtech.com/[/L]
The codes I put in are
<L=AnandTech>http://www.anandtech.com/</L>
just replace the < > symbols with [ ] and it should work. :)
Good job on the review & take care, guys! :)
FlameDeer - Thursday, May 12, 2011
Oops, that's not working. Anyway, I'll try a few more codes here; if they still don't work, just abandon it.
[ L ]/[ /L ] = [L]Text[/L]
[ A ]/[ /A ] = [A]Text[/A]
[ B ]/[ /B ] = Text
[ I ]/[ /I ] = Text
[ U ]/[ /U ] = Text
[ H ]/[ /H ] = [H]Text[/H]
djgandy - Wednesday, May 11, 2011
Considering you can pick up a 30GB SSD in the UK for £45, this seems like an easy way to get some performance increase for desktop productivity: http://www.overclockers.co.uk/showproduct.php?prod...
KayDat - Wednesday, May 11, 2011
I know this bears zero relevance to Z68... but that CGI girl that Lucid used in their software is downright creepy.
SquattingDog - Wednesday, May 11, 2011
I tend to agree - maybe if she had some hair it would help... lol
fb - Wednesday, May 11, 2011
SRT is going to be brilliant for Steam installs, since you're restricted to keeping all your Steam apps on one drive. Wish I had a Z68. =)
LittleMic - Wednesday, May 11, 2011
Actually, you can use a Unix trick known as a symbolic link to move an installed game elsewhere. On Windows XP you can use Junction; on Windows Vista and 7, the mklink tool is provided with the OS.
jonup - Wednesday, May 11, 2011
Can you elaborate on this or provide some links? Thanks in advance!
LittleMic - Wednesday, May 11, 2011
Consider c:\program files\steam\...\mygame, which is taking up a lot of space. You can move the directory to d:\mygame, for instance, and then create the link with the following command.
Vista/7 (you need to be an administrator to do so):
mklink /d "c:\program files\steam\...\mygame" d:\mygame
XP (administrator rights required too):
junction "c:\program files\steam\...\mygame" d:\mygame
The trick is that Steam will still find its data in c:\program files\steam\...\mygame, but it will be physically located on d:\mygame.
Junction can be found here:
http://technet.microsoft.com/fr-fr/sysinternals/bb...
LittleMic - Wednesday, May 11, 2011
Update: see arthur449's suggestion. Steam Mover does this exact operation with a nice GUI.
arthur449 - Wednesday, May 11, 2011
Google "Steam Mover"
LittleMic - Wednesday, May 11, 2011
Indeed, someone has made a nice GUI for the operation I just described.
jimhsu - Wednesday, May 11, 2011
http://schinagl.priv.at/nt/hardlinkshellext/hardli... is a general GUI way to do this, not just for Steam but for all content.
MonkeyPaw - Wednesday, May 11, 2011
It's Intel. If there's one thing that is almost certain, it's that forward compatibility is not going to happen.
DanNeely - Wednesday, May 11, 2011
Where does it say that the minimum size of the SRT cache is 20GB?
µBits - Monday, July 11, 2011
http://download.intel.com/support/motherboards/des...
System Requirements:
For a system to support Intel Smart Response Technology it must have the following:
Mulberry - Saturday, May 21, 2011
But on the question of dual booting: can you dual boot, e.g., Windows XP and Windows 7?
MonkeyPaw - Wednesday, May 11, 2011
The Virtu interface is awful. Looks like the ugly tree fell on that android girl.
sunbear - Wednesday, May 11, 2011
Consumer NASes (ReadyNAS, QNAP, etc.) could really benefit from this. Flashcache (http://planet.admon.org/flashcache-caching-data-in... ), released by Facebook, also looks interesting.
jordanclock - Wednesday, May 11, 2011
You can use a drive for both, but you must set up your data partition AFTER you set up the cache partition.
What do you mean by that? You partition the SSD drive, install the OS in the first partition, set-up the other partition as a cache, and then format your remaining HDD?jorkolino - Wednesday, June 6, 2012 - link
I wonder, can you tell SRT to cache blocks only from the HDD onto the cache partition, because by default SRT may decide to cache system files that already reside onto a fast SSD partition...evilspoons - Wednesday, May 11, 2011 - link
I know it's early on for Z68, but I'm curious how other SSDs will perform in SRT mode. I ask because the 40 GB X25-V is on sale here for half its usual price...evilspoons - Wednesday, May 11, 2011 - link
To answer my own question, Tom's Hardware reviewed SRT with several SSDs and to put it bluntly, the X25-V sucks. Its very low write speed of 35 mb/sec actually drags the hard drive down in a few tests.Shadowmaster625 - Wednesday, May 11, 2011 - link
Yeah that is a nice way of putting it. Talk about sugar coating. Here is a question for ya: was intel being "conservative" when they tried to shove rambus down everyone's throats? If it werent for AMD and DDR god knows how much memory would cost now. I still have one of those rambus P4 systems running in the lab right now. (intel 850 chipset with dual channel RDRAM). I did some memory benchmarking on it and was shocked to find that it was actually slower than any of the P4 DDR 266 machines we have running. (Yes we are slow to upgrade lol.) It runs at about DDR200 equivalent speeds. And we really paid out the wazoo for that system.Shinobisan - Wednesday, May 11, 2011 - link
discrete graphics cards are limited - even though they often have three, four.. or more connectors these days, they can often only drive two monitors at a time. (unless you use a displayport connector... and monitors with DP don't really exist yet)I have two monitors driven by my HD6950 via the digital video out connectors. So the HDMI connector on that card is "dead" until I turn one of the monitors off.
What I would like to be able to do... is have my dGPU drive my two monitors, and the iGPU drive my 1080p TV via HDMI.
Can I do that? This discussion of Virtu muddies the waters some; it's unclear.
Conficio - Wednesday, May 11, 2011 - link
Well, so SRT is a good idea, but again it is limited artificially in its use. Sounds to me like the P67/H67 stunt all over again. Why is it limited?
* For starters it is driver-supported, and I believe that means Windows only (I could find no mention of what OS is supported). To be fully useful it belongs in the chipset/BIOS realm.
* Next there is the artificial 64GB limit. As is obvious from even these tests, that is not really the practical limit of its usefulness. It is simply a marketing limit so as not to compete with Intel's own full SSD business. You've got to ask yourself, why not use your aging 100GB or 256GB SSD (a couple of years down the road) as an SRT drive?
* "With the Z68 SATA controllers set to RAID (SRT won't work in AHCI or IDE modes) just install Windows 7 on your hard drive like you normally would." So only RAID setups are supported? Well you are testing with a single hard drive, so this might be a confusing statement. But if it is RAID only then that is ceratinly not what Joe Shmoe has in its desktop (let alone in its Laptop).
A5 - Wednesday, May 11, 2011 - link
If the AT Heavy Workload Storage Bench is a typical usage case for you, then you shouldn't be using SRT anyway - you'd have a RAID array of SSDs to maximize your performance.
jordanclock - Wednesday, May 11, 2011 - link
For caching purposes, I'm sure 64GB is a very reasonable limit. The more data you cache, the more data you have to pay attention to when it comes to kicking out old data. And it isn't a RAID setup, per se. You set the motherboard to RAID, but the entire system is handled in software. So Joe Shmoe wouldn't even have to know what RAID is, though I don't see Joe Shmoe even knowing what an SSD is...
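As a rough illustration of that bookkeeping point, here is a minimal least-recently-used cache sketch in Python. The 4KB block granularity and the pure-LRU policy are assumptions made for the example; Intel has not published SRT's actual block size or eviction policy.

from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache keyed by logical block address (LBA).

    Illustrative only: SRT's real block size and eviction policy are
    not public, so the numbers below are assumptions.
    """
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # lba -> cached block

    def access(self, lba, data=None):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)         # hit: mark most recently used
            return self.blocks[lba]
        if data is not None:                     # miss: insert, evicting if full
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # drop the least recently used block
            self.blocks[lba] = data
        return data

# A 64GB cache tracked in 4KB blocks is ~16.8 million entries of metadata;
# a 20GB cache is ~5.2 million. The bookkeeping grows with cache size.
cache = LRUCache(capacity_blocks=64 * 1024**3 // 4096)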
cbass64 - Wednesday, May 11, 2011 - link
Who says you can't use your old 100 or 256GB SSD as an SRT device? The article clearly states that you can use whatever size drive you want. Up to 64GB of it will be used for cache and the rest can be used for data. If you have more than 64GB of data that you need to have cached at one time then SRT isn't the solution you should be looking into. As for OS limitations... you can't seriously think Intel would wait until they had this running on every platform imaginable before they released it to the public, can you? This is the first version of the driver that supports it so of course it will have limitations. You can't expect a feature of a Windows-only driver to be supported by a non-Windows OS. I'm sure this feature will be available on Linux once Intel actually makes a Linux RST driver.
futrtrubl - Wednesday, May 11, 2011 - link
And don't forget that if you don't partition the rest of the space on the SSD, it will use it for wear levelling, which will be even more important in this situation.
Shadowmaster625 - Wednesday, May 11, 2011 - link
I still don't get why Western Digital doesn't take 4GB of SLC and solder it onto the back of their hard drive controller boards. It's not like they don't have the room. Hopefully now they will do that. 1TB + 4GB SLC, all for under $100 in one package, with two SATA ports.
mamisano - Wednesday, May 11, 2011 - link
Seagate has the Momentus 500GB 7200RPM drive with 4GB SLC. It's in the 2.5" 'notebook' format but obviously can be used in a PC. I am wondering why such a drive wasn't included in these tests.
jordanclock - Wednesday, May 11, 2011 - link
Because, frankly, it sucks. The caching method is terrible and barely helps more than a large DRAM cache.
Conficio - Wednesday, May 11, 2011 - link
What is the OS support on those drivers (Windows? Linux? Mac OS X? BSD? Open source? ...)? Does the SRT drive get TRIM? Does it need it?
"With the Z68 SATA controllers set to RAID (SRT won't work in AHCI or IDE modes) just install Windows 7 on your hard drive like you normally would."???
Is there any optimization to allow the hard drive to avoid seeks? If this all happens on the driver level (as opposed to on the BIOS level) then I'd expect to gain extra efficiency from optimizing the cached LBAs so as to avoid costly seeks. In other words you don't want to look at LBAs alone but at sequences of LBAs to optimize the utility. Any mention of this?
Also, one could imagine a mode where the driver does automatic defragmentation and uses the SSD as the cache to allow it to do that during slow periods of hard drive access. Any comment from Intel?
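On the seek-avoidance idea above: nothing public says how (or whether) Intel groups LBAs, but a rough sketch of coalescing cached LBAs into contiguous extents might look like this (the max_gap threshold is an invented parameter, purely for illustration).

def coalesce_extents(lbas, max_gap=8):
    """Group cached LBAs into contiguous extents.

    Hypothetical illustration: if the cache tracked extents like these
    instead of isolated LBAs, it could keep whole sequential runs either
    fully cached or fully on the HDD, sparing the disk head extra seeks.
    """
    extents = []
    for lba in sorted(set(lbas)):
        if extents and lba - extents[-1][1] <= max_gap:
            extents[-1][1] = lba            # extend the current run
        else:
            extents.append([lba, lba])      # start a new run
    return [(start, end) for start, end in extents]

# Example: scattered hits collapse into two runs instead of six entries.
print(coalesce_extents([100, 101, 102, 110, 500, 501]))
# -> [(100, 110), (500, 501)]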
Lonesloane - Wednesday, May 11, 2011 - link
What happened to the proposed prices? If I remember correctly, the caching drive was supposed to cost only $30-40? Now at $110, the customer would be better off buying a "real" 60GB SSD.
JNo - Thursday, May 12, 2011 - link
+1
It's interesting: Anand has a generally positive review and generally positive comments. Tom's Hardware, which I generally don't respect nearly as much as Anand, reviewed SRT a while back and covered it again recently, and is far less impressed, as are its readers. I have to say that I agree with Tom's on this particular issue though.
It is *not* a halfway house or a cheaper way to get most of the benefit of an SSD. For $110 extra, plus the premium of a Z68 mobo, you may as well get an SSD that is 40-60GB bigger than Larson Creek (or 40-60GB bigger than your main system SSD) and just store extra data on it directly, with faster access and no risk of caching errors.
For those who said SRT is a way of speeding up a cheap HTPC - it doesn't seem that way, as it's not really cheap and it won't cache large, sequential media files anyway. For those who said it will speed up your game loading, it will only do so for a few games, and only on the 2nd or 3rd run, and it will evict them if you use a lot of different games, so you're better off having the few that count directly on the SSD anyway (using Steam Mover if necessary).
For your system drive it's too risky at this point, or you need to use the Enhanced mode (less impressive); and for speeding up your large data (games/movies) it's barely relevant, for the aforementioned reasons. For all other scenarios you're better off with a larger SSD.
It's too little, too late, and too expensive. That it's not worth bothering with is a no-brainer to me, which is a shame, as I was excited by the idea of it.
Boissez - Wednesday, May 11, 2011 - link
Could one kindly request for the numbers from both the 64GB C300 and 20GB sans harddisk 311 to be added. It would give a good idea of the performance hit one could expect for using these in SRT vs as a standalone boot drive.
Boissez - Wednesday, May 11, 2011 - link
First sentence should be: "Could one kindly request for the numbers from both the 64GB C300 and 20GB 311 sans harddisk to be added?"... sorry
dagamer34 - Wednesday, May 11, 2011 - link
As with most technologies, stay away from first-gen implementations.
iwod - Wednesday, May 11, 2011 - link
I wonder if you could install 32GB of DDR3 RAM and just use 20GB of that as the Intel SRT cache. It would be interesting to see how it performs.
kmmatney - Wednesday, May 11, 2011 - link
Wouldn't be persistent between restarts, so that's a problem right there. It would have to build up the cache every time you reboot, and you couldn't use "Max Cache" mode, so you'd have to wait for the HDD for all writes.
liveonc - Wednesday, May 11, 2011 - link
In this article, it was pointed out that using an SSD is still better for those who want speed. But is a SATA3 SSD & SATA2 VelociRaptor combo possible? Or what about an SSD + SSD/HDD combo? Some sort of compromise w/o the great penalties, or smaller penalties & greater value?
dgingeri - Wednesday, May 11, 2011 - link
I've seen similar numbers using a smaller SSD with Windows 7's ReadyBoost, and it kept the most-used data in the cache better. I'd prefer just using that, as it seems more predictable.
jordanclock - Wednesday, May 11, 2011 - link
This IS a big deal. However, a comparison of performance between SRT and ReadyBoost would be handy. Especially ReadyBoost with USB3.
dgingeri - Wednesday, May 11, 2011 - link
You can (and I have) set up ReadyBoost on a SATA SSD. I had a 60GB OCZ Apex as my ReadyBoost drive for about six months, before I got my dual Vertex 2s as a new boot drive. Windows 7 has a limit of 32GB for ReadyBoost usage, though. It made a heck of a difference in boot time and some program load times; however, it took a little while to get the caching set up right to cache what I actually used on a regular basis. It started caching Firefox rather quickly, but it took a while to pick up on caching Diablo 2.
randinspace - Wednesday, May 11, 2011 - link
I still haven't been able to finish the multiplayer mode due to hardware issues stranding me on a glorified netbook.DesktopMan - Wednesday, May 11, 2011 - link
Anand: http://soerennielsen.dk/mod/VGAdummy/index_en.php
Shouldn't this work perfectly fine to enable the iGPU when connected to the dGPU, without any of the driver nonsense?
Hrel - Wednesday, May 11, 2011 - link
I'd REALLY like to see you guys compare this SRT caching to two of the fastest 7200rpm drives out there in RAID 0. 'Cause 1-4 seconds on launching applications or loading game levels isn't worth 100 extra bucks. So compare configurations: 1 MD
1 MD with Cache
2 MD in RAID 0 (MD = mechanical disk)
2 MD in RAID 0 with cache
Vertex 3 SSD by itself (and/or the really fast Corsair one)
You already have most of this testing done in this article.
PLEASE PLEASE PLEASE PLEASE do this soon! Thanks guys!
cbass64 - Wednesday, May 11, 2011 - link
RAID 0 can't even compare. With PCMark Vantage, a RAID 0 of MDs gives you roughly a 10% increase in performance in the HDD suite. A high-end SSD is 300-400% faster in the Vantage HDD suite scores. Even if you only achieve 50% of the SSD performance increase with SRT, you'd still be seeing a 150-200% increase, and this article seems to claim that SRT is much closer to a pure SSD than 50%. Obviously benchmarks like the Vantage HDD suite don't always reflect real-world performance, but I think there's still an obvious difference between 10% and a couple hundred %...
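Working through the rough figures in that comment (illustrative numbers only, not measured data):

# Rough arithmetic with the figures above (illustrative, not measurements).
hdd = 1.0                      # single mechanical drive = baseline score
raid0_hdd = hdd * 1.10         # ~10% gain from striping two HDDs
ssd = hdd * 4.0                # a high-end SSD scoring ~300% higher
srt = hdd + 0.5 * (ssd - hdd)  # SRT capturing only half the SSD's gain

print(f"RAID 0 HDDs: +{(raid0_hdd / hdd - 1) * 100:.0f}%")  # +10%
print(f"SSD:         +{(ssd / hdd - 1) * 100:.0f}%")        # +300%
print(f"SRT at 50%:  +{(srt / hdd - 1) * 100:.0f}%")        # +150%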
Hrel - Thursday, May 12, 2011 - link
All I know is that since I switched to RAID 0 my games load in 2/3 the time they used to. 10% is crazy. RAID 0 should get you a 50% performance improvement across the board; you did something wrong.
DanNeely - Thursday, May 12, 2011 - link
RAID only helps with sequential transfers. If Vantage has a lot of random I/O with small files, it won't do any good.
don_k - Wednesday, May 11, 2011 - link
Or the fact that it is an entirely software-based solution. Intel's software does not, as far as I and Google know, run on Linux, nor would I be inclined to install such software on Linux even if it did. So this is a non-starter for me. For Steam and games, I say get a 60-120GB consumer-level SSD and call it a day. No software glitches, no stuff like that. This kind of caching needs to be implemented at the filesystem level if you ask me, which is what I hope some Linux filesystems will bring 'soon'. On Windows the outlook is bleak.
jzodda - Wednesday, May 11, 2011 - link
Are there any plans in the future of this technology being made available to P67 boards? Before I read this I thought it was a chipset feature. I had no idea this was being implemented in software at the driver level.
I am hoping that after a reasonable amount of time they make this available for P67 users. I understand that for now they want to add some value to this new launch, but after some time passes, why not?
michael2k - Wednesday, May 11, 2011 - link
Given that the drive has 4GB of built-in flash, it would be very interesting to compare it to the aforementioned SRT. Architecturally similar, though SRT requires two drives instead of one. Heck, what would happen if you used SRT with a Seagate Momentus?
kenthaman - Wednesday, May 11, 2011 - link
1. You mention that: "Even gamers may find use in SSD caching as they could dedicate a portion of their SSD to acting as a cache for a dedicated games HDD, thereby speeding up launch and level load times for the games that reside on that drive."
Does Intel make any mention of possible future software versions allowing user customization to specifically select applications to take precedence over others to remain in cache? For example, say that you regularly run 10-12 applications (assuming that this workload is sufficient to begin the eviction process): rather than having the algorithm just select the least-utilized files, you could point it at an exe and it would track the associated files, keeping them in cache at a higher priority than the standard eviction algorithm (see the sketch after this comment).
2. Would it even make sense to use this in a system that has a 40/64/80GB OS SSD and then link this to an HDD/array, or would the system SSD already be handling the caching? Just trying to see if this would help offload some of the work/storage to the larger HDDs, since space is already limited on these smaller drives.
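For the pinning idea in point 1, here is a hypothetical sketch of an eviction policy that prefers to evict unpinned entries first. Intel's RST software does not expose anything like this today; the class, method names, and example file names below are all invented for illustration.

import time

class PriorityCache:
    """Toy cache where user-pinned entries outrank the normal LRU order.

    Hypothetical sketch of the pinning feature asked about above, not
    anything Intel's software actually offers.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> (pinned, last_used_timestamp)

    def access(self, key, pinned=False):
        already_pinned = self.entries.get(key, (False, 0.0))[0]
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        self.entries[key] = (pinned or already_pinned, time.monotonic())

    def _evict(self):
        # Prefer evicting the least recently used *unpinned* entry; only
        # touch pinned entries if everything left in the cache is pinned.
        unpinned = [(ts, k) for k, (p, ts) in self.entries.items() if not p]
        pool = unpinned or [(ts, k) for k, (p, ts) in self.entries.items()]
        _, victim = min(pool)
        del self.entries[victim]

# Example: pinned entries survive while unpinned data gets evicted first
# (names here are purely illustrative).
cache = PriorityCache(capacity=3)
cache.access("os_boot_files", pinned=True)
cache.access("favorite_game.exe", pinned=True)
cache.access("browser_cache")
cache.access("new_app_installer")  # evicts "browser_cache", not the pinned entries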
Midwayman - Wednesday, May 11, 2011 - link
What is the long-term use degradation like? I know that without TRIM, SSDs tend to lose performance over time. Is there something like TRIM happening here, since this all seems to be below the OS level?
jiffylube1024 - Wednesday, May 11, 2011 - link
Great review, as always on AnandTech! This technology looks to be a boon for so many users. Whereas technophiles who live on the bleeding edge (like me) probably won't settle for anything less than an SSD for their main boot drive, this SSD cache + HDD combo looks to be an amazing alternative for the vast majority of users out there.
There's several reasons why I really like this technology:
1. Many users are not smart and savvy at organizing their files, so a 500GB+ C drive is necessary. That is not feasible with today's SSD prices.
2. This allows gamers to have a large HDD as their boot drive and an SSD to speed up game loads. A 64GB SSD would be fantastic for this as the cache!
3. This makes the ultimate drop-in upgrade. You can build a PC now with an HDD and pop in an SSD later for a wicked speed bump!
I'm strongly considering swapping my P67 for a Z68 at some point, moving my 160GB SSD to my laptop (where I don't need tons of space but the boot speed is appreciated), and using a 30-60GB SSD as a cache on my desktop for a Seagate 7200.12 500GB, my favourite cheap boot HDD.
samsp99 - Wednesday, May 11, 2011 - link
Is the Intel 311 the best choice for the $$, or would other SSDs of a similar cost perform better? For example, the egg has the OCZ Vertex 2 and other SandForce-based drives in the 60GB range for approx $130. That is a better cache size than the 20GB of the Intel drive. SandForce relies on compression to get some of its high data rates; would that still work well in this kind of a cache scenario?
davidgamer - Wednesday, May 11, 2011 - link
davidgamer - Wednesday, May 11, 2011 - link
I was wondering if it would still be possible to do a RAID setup with SRT? For example, I would probably want to do a RAID 5 setup with three 3TB drives but also have the cache enabled; not sure if this would work though.
hjacobson - Thursday, May 12, 2011 - link
RE: Z68 capable of managing SRT and traditional RAID at the same time?
I've looked for an answer to this without success.
I did find out the H67 express chipset can't manage more than one RAID array. I won't be surprised to learn the same for the Z68. Which is to say, your choice: either SRT or traditional RAID, but not both.
Sigh.
jjj - Thursday, May 12, 2011 - link
" I view SRT as more of a good start to a great technology. Now it's just a matter of getting it everywhere."It actually doesn't have much of a future,so ok Marvell first made it's own chip that does this,now Intel put similar tech on Z68 but lets look at what's ahead.As you said NAND prices are coming down and soon enough SSDs will start to get into the mainstream eroding the available market for SRT while at the same time HDD makers will also have much better hybrid drives.All in all SRT is a few years late.
HexiumVII - Thursday, May 12, 2011 - link
What happens if we have an SSD as a boot drive? Would it recognize it as an SSD and only cache the secondary HDD? It would be nice to have that, as my boot SSD is only 80GB and my less frequently used programs are on my 2TB drives. This is also great for upgraders, as now you have a use for your last-gen SSDs!
Bytown - Thursday, May 12, 2011 - link
A feature of the Z68 is that any SSD can be used, with up to 64GB of it used as the cache. AnandTech does the best SSD reviews I've read, and I was disappointed not to see some tests with a larger cache drive, especially when there were issues with bumping data off of the 20GB drive. I think that a larger cache drive will be the real-life situation for a majority of users. There are some nice deals on 30GB to 64GB drives right now, and it would be great to see a review that tries to pinpoint the sweet spot in cache drive size.
irsmurf - Thursday, May 12, 2011 - link
Hopefully my next workstation will have an SSD for cache and an HDD for application storage. This will greatly shorten the length of time required to transition to SSDs in the workplace. A one-drive-letter solution is just what was needed for mass adoption. It's like a supercharger for your hard drive.
GullLars - Thursday, May 12, 2011 - link
This seems like a good use for old "obsolete" SSDs that you wouldn't use as a boot drive any more. I have a couple of 32GB Mtrons lying around, and while their random write sucks (on par with a VelociRaptor sustained, but not burst), the random read at low QDs is good (10K at QD 1 = 40MB/s). I've been using them as boot drives in older machines running dual cores, but it could be nice to upgrade and use them as cache drives instead. It would be nice to see a lineup of different older low-capacity SSDs (16-64GB) with the same HDDs used here, for a comparison and to see if there's any point in putting an OCZ Core, Core V2, Apex, Vertex (Barefoot), Transcend TS, Mtron Mobi/Pro, Kingston V+, or WD Silicon Drive on caching duty.
Hrel - Thursday, May 12, 2011 - link
I'd like to see if using something like a 64GB Vertex 3 would make much difference compared to using Intel's 20GB SSD. Seems like it should almost never evict, so I'd expect some pretty hefty reliability improvements.
marraco - Thursday, May 12, 2011 - link
It's only a matter of time until SSD caching is cracked and enabled on any motherboard.
ruzveh - Thursday, May 19, 2011 - link
Not as impressive as I would like it to be.
quang777 - Monday, August 8, 2011 - link
Does it work with older SSDs that don't support TRIM? Will SRT "clean up" like TRIM to keep the cache "clean"?
cbuck - Thursday, September 22, 2011 - link
For those working within the X58 chipset world who have access to motherboards with the Marvell 9128 "HyperDuo" SATA III (6Gbps) chip, what have people seen in terms of stability and speed? Understandably, the X58 chipset is a quickly fading market, but I happened to have a spare i7 920 D0 lying around and picked up a recently released LGA 1366 motherboard to put that CPU to use...
Tastare - Monday, October 31, 2011 - link
I'm looking for a functionality/application acting like:
1. Smart Response Technology (problem: cannot be used when the OS is installed on the SSD), or
2. ReadyBoost, but without deleting the cache during reboot.
I want a program/function working like a read and write cache(*) for the 7200rpm drive (using e.g. 10-30GB of the SSD or a USB drive for cache) that "survives" OS restarts. Does anyone know if there is any application with this functionality? (Solutions I know of: 1. buy a second SSD to use as an HD cache, and 2. install the OS on the 7200rpm drive and use part of the SSD as a cache.)
(*) With cache I mean something like:
- mirror the latest files read from the HD, and
- write data directly to the USB/SSD, and later mirror the data to the hard drive (once it has spun up from idle to 7200rpm) - see the sketch after this comment.
Background: my system is Windows 7, a Z68 motherboard, a 120GB SSD + a 1TB 7200rpm disk. The slower disk goes into standby (which is fine because I don't use it so often), but when data is needed it starts up slowly, which is annoying.
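A rough sketch of the kind of persistent write-back cache described above, working at the file level for simplicity (a real solution would sit at the block/driver level; the directory names and journal format here are invented for illustration).

import json
import os
import shutil

class WriteBackCache:
    """Toy persistent write-back cache along the lines described above.

    Purely illustrative: cache_dir stands in for space on the SSD/USB
    device and backing_dir for the sleeping 7200rpm drive.
    """
    def __init__(self, cache_dir, backing_dir):
        self.cache_dir, self.backing_dir = cache_dir, backing_dir
        self.journal = os.path.join(cache_dir, "dirty.json")
        os.makedirs(cache_dir, exist_ok=True)
        os.makedirs(backing_dir, exist_ok=True)
        # The list of not-yet-mirrored files survives a reboot,
        # unlike ReadyBoost's cache.
        self.dirty = json.load(open(self.journal)) if os.path.exists(self.journal) else []

    def write(self, name, data):
        # Land the write on the fast device immediately...
        with open(os.path.join(self.cache_dir, name), "wb") as f:
            f.write(data)
        if name not in self.dirty:
            self.dirty.append(name)
        with open(self.journal, "w") as j:
            json.dump(self.dirty, j)

    def flush(self):
        # ...and mirror the dirty files to the HDD once it has spun up.
        for name in self.dirty:
            shutil.copy(os.path.join(self.cache_dir, name),
                        os.path.join(self.backing_dir, name))
        self.dirty = []
        with open(self.journal, "w") as j:
            json.dump(self.dirty, j)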
bell2366 - Tuesday, February 28, 2012 - link
I'm surprised the HD manufacturers have not started fighting back and providing hybrid SSD/HDDs with a write-through cache etc.; a 1TB hard disk with 64GB of SSD on board would rock. Especially if they take the supercapacitor route for guaranteed writes to the SSD NAND on power failures.
I've recently bought one of the new Comay Venus 120GB SSDs and it has these features, not to mention performance that blows OCZ out of the water. Just wish I didn't have to mess around thinking about what to keep on the SSD and what to keep on the HDD; a hybrid would be simplicity itself.
astrojny - Friday, May 4, 2012 - link
Any thoughts on using Intel's Smart Response Technology with the 1TB Western Digital Raptor that was just released?
btkcsd - Saturday, December 13, 2014 - link
Do you know if SRT will work with all processors that are otherwise compatible with the Z68 chipset? I've seen some reports that only true "Core" processors are supported, like the i3/i5/i7, while Sandy Bridge-based Celerons and Pentiums are not.