I must have 50 sticks of unused PC100 and PC133 SDRAM.
Something like this for old RAM would be of value (to me).
Does anyone know of an adapter that would take, let's say, 10 sticks of SDRAM and give me an IDE or USB connector?
It was on Gigabyte's site as I looked today and over the past month while making the decision to get it.
There's been a lag while retailers get rid of the v1.2 cards and Gigabyte sends out the v1.3 cards.
I just got one of the new ones and will use it to run my FTP server application. I have 14 or 16 drives connected (6TB) to the server, and previous reviews by others have pointed to the performance increase from the FTP app searching for and retaining the disk locations.
Since I just got it I am not 100% sure of the reality; the real benefit will be realized by the client seeking a file from the server. Using it for MS SQL Server is also a great idea. Other than that I haven't heard any real-world uses. I mean, users might be able to load Doom faster, but this device seems to be a bit expensive for most.
Also, this card is bigger in area than most video cards, so watch out if your box is crammed with wires or liquid pumps and reservoirs. The logistics of getting, say, 2 video cards and the RamDisk into a mid-sized case are pretty absurd. Plus you need a fan or 2 in there to swirl around the heat generated by 3 heat-monger cards... there goes more money on a bigger case.
For the general user, I would go with the new Raptor (the clear one) if you want to balance speed, size, and cost on a rational level.
The simplest and best use of the i-RAM is to store the "Temp DB" for SQL Server. MS SQL Server constantly writes to this database in most larger installations. It is temporary and by definition does not need to survive a reboot. (Alas, SQL Server does not/CANNOT keep this database in RAM.) So on reboot a script will need to verify that the volume is still formatted and that the appropriate file system/files exist -- copied from the hard disk. SQL Server is fussy about hardware, so something masquerading as a plain disk is perfect.
In an hour on Google I can't find anyone selling it, so I can't get my boss to try it out. Sigh.
My prediction is a 5-10% boost to overall throughput on a SQL Server installation with lots of "Temp DB" activity -- well worth the cost of the RAM chips.
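Something like this rough sketch is what that reboot check could look like - Python, with a made-up drive letter and folder, so treat it as an illustration rather than a recipe:

```python
# Hypothetical startup check for an i-RAM volume holding SQL Server's tempdb.
# Drive letter and folder name are invented for illustration.
import os
import sys

IRAM_ROOT = "R:\\"            # assumed drive letter of the i-RAM volume
TEMPDB_DIR = "R:\\tempdb"     # assumed folder SQL Server is configured to use

def ensure_tempdb_volume():
    if not os.path.isdir(IRAM_ROOT):
        # Volume missing or unformatted (e.g. the battery ran down):
        # stop here so an admin can reformat it before SQL Server starts.
        sys.exit("i-RAM volume not present - reformat it before starting SQL Server")
    os.makedirs(TEMPDB_DIR, exist_ok=True)

if __name__ == "__main__":
    ensure_tempdb_volume()
```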
I've been keeping an eye on RAM disks for a little while now, but other than software they are just too expensive. The earlier post that had links to them (both flash and DRAM based disks) was the same stuff that I found. More recently I had been relieved by the availability of 64-bit systems and OSes with more slots/address space for RAM and thus bigger RAM disks. But it still really burned me that someone couldn't make something really cheap that didn't rely on a big fat motherboard (which still has only so many slots, but is admittedly faster).
This qualifies. The second I heard about this while reading Computex coverage I said to myself: Self, this thing only takes power from the PCI bus, therefore it would be a trivial thing to buy some PCI slots (like 8) and wire them for power, then RAID or JBOD these together and get one heck of a database drive at a fraction of the cost of other solutions, and scalable at that (I can start out with 2 or 3).
I also think it would be a nice (and easy) thing for them to put it in a 3.5" form factor with both Molex and/or 3.3V standby loopthrough (through a PCI dummy card or something). And yes, 8 slots would be much more saleable, understanding that the memory controller may not support that (though some sort of bank switching would work, since you have time to wait for the SATA or SATA2 bus; a 3.5" form factor would get difficult with 8 slots, though).
The situation that got me looking at this stuff is that I have a MySQL database (I tested others as well) that has to do a table scan each time I run a query, since it is a '%something%' query (loading web logs and running user-demanded reports on them). The database is already around 4 gigs (about 6 months' worth, including 0.5GB of packed indexes) and the report takes about two minutes (2 15k drives in RAID 1, not bad), but I still have to run it at night and make a summary table. (Maybe a database with multithreaded partitions or a grid would do it, but how much does that cost?!) Anyway, my 2 cents (sorry for the long post). I'd really, really like to know what benchmarks say the latency for this thing is.
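To illustrate why that kind of query hurts, here is a tiny sketch using SQLite (standing in for MySQL, table and index names invented): a leading-wildcard LIKE can't use the index, so the planner falls back to scanning the whole table.

```python
# Leading-wildcard LIKE vs. an indexed lookup, using SQLite as a stand-in.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weblog (url TEXT)")        # hypothetical log table
con.execute("CREATE INDEX idx_url ON weblog (url)")

# '%something%' cannot use idx_url, so every row gets scanned.
print(con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM weblog WHERE url LIKE '%something%'"
).fetchall())

# An exact match can use the index instead.
print(con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM weblog WHERE url = 'something'"
).fetchall())
```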
They should have gone with a SATA II-capable interface instead of regular SATA, since it has much more bandwidth sitting there waiting to be used. The 4 gig memory limit hurts it a tad too.
Now, just as a thought on scary uses for the i-RAM: law enforcement will hate these things. Pedophiles will have instant access to wiping their files without a trace; terrorists won't have to worry about the good guys being able to track their files.
Nope, pedos have a compulsive urge to collect stuff; 4GB wouldn't even come close. Besides, if the pedo was thinking that far in advance, there are plenty of already-existent technologies far more secure. When the cops come busting down someone's door, do you think they'll say something like "freeze, don't move, unless you prefer to go over to your computer and wipe data!"? Then again, general ignorance about the need to keep the evidence battery charged could be an issue.
Anand quotes $90 per GB of RAM here, but I'm wondering if the I-Ram works with the much cheaper high-density junk you see out there all the time. Like 128Mx4 modules. On motherboards, usually only SiS chipsets can handle that type of RAM, but there's no reason the Xilinx FPGA couldn't.
Since the Athlon 64 integrates the memory controller, the north bridge no longer needs one. Why shouldn't the original memory controller be used for i-RAM purposes? By supporting both SDRAM and DDR RAM, people could make use of their old RAM (which is no longer useful nowadays) as a physical RAM drive.
Spare some space for additional DDR module slots on the motherboard exclusively for i-RAM, and an additional daughter card could be added for even more slots.
Would it ultimately be a cheaper solution than the i-RAM?
And more: power could be drawn directly from the motherboard's ATX power. By implementing an approach similar to the i-RAM's, an extra battery could keep the RAM alive for a certain number of hours.
Giving the north bridge DDR/SDRAM capability is not new technology; every chipset company has such tech. They could just stick the original memory controller, with lower performance (DDR200, so more modules can be supported and cost is lower), into the north bridge; the cost overhead is relatively small.
What I think the extra cost comes from is the extra motherboard layout, north bridge die size, and chipset packaging cost (more pins). I suppose it could cost as little as $20?
More: the original SATA physical link could be omitted, since the controller in the north bridge could communicate directly with the SATA controller internally (south bridge through HT?). In this case, would the performance increase considerably and the overall layout be more tidy? (No need for external cables and cards.)
No, these are all problems. The purpose is to have universal platform support that is gentle on power consumption. That means a tailored controller, and even then we're seeing that the main limit is the battery. "Tidy" is an unimportant human desire, especially inside a closed PC case. All they have to do is route the bus traces well on the card and be done.
HP sells an add-on for their DL380 server for $200 (at discount) that gets you 128MB of disk write cache... it makes a good system fast for disk writes too.
This card could be used by Linux vendors to enable file-system data and control logging - for similar money you get GB(s) of write cache... Cheap, reliable, fast general-purpose file servers that have fast disk write speed without risking data loss. Speed meaning no disk-head latency, no rotational latency - just transfer time.
It would sell better with ECC memory.. or the ability to use two cards in a mirror.. at least to careful server buyers..
You could set up the i-RAM drive as the journal device for Reiser or Ext3 logged file systems - and log both control info and data - for fast, safe systems without too much fuss.
I think I want one - but not as much as I want other stuff..
I guess it would add to the base board cost, but a SATA controller on the PCI card would make it a little nicer, as then you are not taking up one of your SATA channels; I only have 2 and they are currently both used for a RAID 0.
Also, if they made the PCI card a SATA interface and then short-circuited the back end to connect directly to the memory, wouldn't they be able to get much higher transfer speeds than SATA, and yet all the existing SATA drivers could still be used with it, given that they emulate an existing SATA interface?
A 33MHz/32-bit PCI slot only grants a max of 133MB/sec. This would make the PCI bus a limiting factor for the SATA controller.
Step beyond that and remember that the PCI bus is shared among all your PCI cards. Depending on the motherboard, some onboard devices can also be built onto the PCI bus.
With bandwidth on current southbridge chips already being dedicated to SATA (or SATA-II), it would be a waste in more ways than one to build a SATA controller into the i-RAM.
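For anyone who wants to check the arithmetic behind that point, the rough theoretical peaks look like this:

```python
# Back-of-the-envelope bus math (theoretical peaks, before any overhead).
pci_bw   = 33.33e6 * 4   # 33 MHz x 4 bytes per transfer ~= 133 MB/s, shared by the whole bus
sata1_bw = 150e6         # SATA 150: 150 MB/s per port
sata2_bw = 300e6         # SATA II: 300 MB/s per port

print(f"plain PCI : {pci_bw / 1e6:.0f} MB/s (shared)")
print(f"SATA 150  : {sata1_bw / 1e6:.0f} MB/s")
print(f"SATA II   : {sata2_bw / 1e6:.0f} MB/s")
```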
I agree with the people who mention server uses for this product. There are already quite a few products like this around in the server space, but they are all VERY expensive. There's a comprehensive list here:
The one thing to note: most of these are flash-based drives, which means they retain their data but are actually quite slow transfer-speed-wise. When it comes to pure performance solutions (which are usually DRAM with battery and/or HD backup), there's only a couple of companies:
We've been long time users of micro memory products, and in general they've been great. We place database journals, filesystem journals, and general server "hot" files on the device and get great performance out of it.
The biggest issue with most of these is price and support. Rocket Drive is Windows only (we have Linux servers). HyperDrive doesn't appear to be shipping yet (we ordered one and haven't heard anything). Jetspeed I've never even been able to get a sensible reply from. Curtis seem to be focusing on Fibre Channel (their SCSI interface drive is now quite old, only 80MB/s), which means you need to spend almost an extra $1000 on just a controller. RamSan are incredibly expensive and FC only, but apparently have amazing performance as well. Umem does have a Linux driver, but Umem are no longer selling retail; they are only selling wholesale to big storage vendors that use them in their products.
So that basically left us really interested in the i-RAM as a potential long-term replacement for Umem in new servers we buy. It's a pity that the apparent performance is a bit lacking. On the other hand, the biggest advantage of RAM-based drives is the latency reduction. Basically you can write, have your data committed to "permanent" storage, and move along to the next task straight away. This is the whole point of database/filesystem journals. It would be great to test the i-RAM with real server scenarios that rely on this low-latency ability. Rerunning the database tests with a combination of journal and full database on the drive would be really interesting.
Basically it seems that this is a really hard product to sell. There's definitely a market for it in the server space, but most of the people who realise that are big DB/file system users, and are usually willing to spend more to get an "enterprise" like product. It would be really nice if all those "middle" users with database/filesystem/email issues could be shown how to use one of these to significantly extend the life/performance of one of their servers...
I see this as a much easier way to run your OS in RAM (hell, I don't think there is a way to run XP on a RAM partition).
If you have 4GB of RAM, you can partition 3.5GB and run Win9x in it. That leaves the max 512MB of conventional RAM for 9x to work with. It takes a lot of work, but I think it is faster than this because you don't have the PCI bus constraint, and the RAM controller on a motherboard is probably flat-out superior.
It is a 300MB folder containing several files that could be located in different positions, which means more random access. The other is a single file; it is larger, but the data is read from adjacent positions on the disk. In the first case you have to add the overhead of the processing time of the OS when dealing with several files.
Actually, you need to make it a bit more clear: it's the Firefox source code, which is likely thousands of small files. It's not just a few or many, but *TONS* of little files. Even though the access times of the i-RAM are much lower than that of a standard HDD, there is still latency associated with the SATA bus and other portions of the system, so it's not "instantaneous". Three times as fast is still good, and that's relative to the Raptor - something like a 7200 RPM drive would be even slower relative to the i-RAM. Still, best case scenario for heavy IO seems to suggest the current i-RAM is only about 3X faster than a good HDD setup. Good but not great.
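If anyone wants to see the per-file overhead on their own machine, a quick-and-dirty comparison like this (Python, with made-up paths) shows how much of the time goes to opening thousands of small files rather than to raw transfer:

```python
# Rough demonstration of the many-small-files effect: reading the same amount
# of data as thousands of tiny files vs. one large file. Paths are hypothetical;
# point them at a real source tree and an archive of it to try this yourself.
import os
import time

def read_tree(root):
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb") as f:
                total += len(f.read())
    return total

def read_one(path):
    with open(path, "rb") as f:
        return len(f.read())

start = time.perf_counter()
small_bytes = read_tree("mozilla-source")        # hypothetical unpacked source tree
t_small = time.perf_counter() - start

start = time.perf_counter()
big_bytes = read_one("mozilla-source.tar")       # hypothetical single archive of the same data
t_big = time.perf_counter() - start

print(f"{small_bytes / 1e6:.0f} MB in small files: {t_small:.1f}s; "
      f"{big_bytes / 1e6:.0f} MB in one file: {t_big:.1f}s")
```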
There's only one comment so far in this entire thread that really touches on where the i-Ram is truly going to succeed, and a few posters flirt with the notion in an offhanded manner.
The benefits of an i-Ram would really come out during I/O intensive operations, as in high volumes of reads and writes, without really being high data transfer volumes, which is the case for a lot of database operations. A lot of the tests performed in the article really had a focus of large volume data retrieval, and that's like using the haft of a katana to hammer in a nail.
Think about web bulletin boards like PHP-Nuke, Slashcode, phpBB - any active dynamic website that is constantly accessing a database to load user preferences, banner ads, static images. Forum posting, article retrieval, content searching, etc. An applicable consumer example would be putting your web browser's cache on the i-RAM, or your mail or news reader's data files, or dumping a copy of your entire documents folder to it, then using Windows' search function to dig through them all for all occurrences of "the". Throw a Squid cache on it. Put your InnoDB transaction log on it. Hell, for that matter, slot a handful of these and use them as InnoDB raw partitions for your data.
The kinds of tests you need to perform to make an I-Ram shine would be high volumes of simultaneous searches across the entire volume, the kind of act that would make a regular disk drive grind to a screaming halt in a fit of schizophrenic head twitching. This isn't video editing, OS booting (with exceptions), game loading, or most of the scenarios commented on above. It's still a SATA drive. Your bulk data isn't going to transfer any faster, but you *can* find it quicker and open, update, and close your files faster. Leverage *those* strengths and stop thinking it's a RocketDrive.
All my concerns on this product were pretty much addressed:
-SATA2
-5.25" Bay drive instead of PCI slot
-Using a 4pin Molex connector or SATA power connector instead
-PCI-E instead of SATA (drivers are made everyday)
A few comments I have on this product that weren't mentioned. Everyone talked about putting these into a RAID 0 array to improve size, but no one mentioned that it could very well double performance. I don't know what's causing the current bottlenecks with these cards besides the SATA interface, but that just doesn't seem right. Anand needs to run benchmarks like the SiSoft file system benchmark or HD Tach to narrow it down. Read/write, sequential and random should all be almost instantaneous, limited only by the bandwidth of SATA and the bridge it is attached to. This card could very well be limited by the chipset they tried it on (southbridge/northbridge interconnect). It might be even faster on a chipset that lacks a southbridge and only has a northbridge, such as the nForce4.
Given the nature of this product, I don't know why motherboard manufacturers don't just add this right onto a board or make a special adapter for it you can buy (with a better interface). I could see a lot more use in something like this if the DIMMs were attached right to my board and straight to my northbridge.
What Gigabyte should've done (all companies with a bright idea should do this) is just give this to review sites such as AnandTech and others to see what feedback emerges before they try to market something like this. I guess Gigabyte is sort of doing this by only producing 1,000, but that's still 1,000 more than they need to. If my guess is correct, the second revision of this product should follow quite shortly after this one hits the market.
As was mentioned the price is a killer (I would rather get a SCSI320 controller and a 15,000 RPM Cheetah).
The bandwidth, which could have really blown SATA drives out of the water in certain tasks, is obviously crippled by its attachment to SATA. Yet if i-RAM was running at full PCI Express speed, then I should think opening the specs for the memory controller would quickly lead to open source drivers. The storage is, after all, cheap DDR sticks.
Sure, these drivers might be written for Linux or BSD instead of Windows, but surely porting GPL'd drivers to Windows would be easy for a company which can open the specs? nVidia and ATI have proprietary drivers because they claim it would be suicide for them to open up their proprietary chip interfaces. But i-RAM?
I thought that compilation would make a good application for this. Source code, intermediate, and output files take up less than 4GB. The large number of small text files involved should allow the i-RAM's random access performance advantage to really shine. Add to that the fact that long compiles can take several hours - or days if you are building Gentoo, for example - and the difference should be quite noticeable. Yet there don't seem to be any compiler tests in this article. Maybe they simply aren't I/O limited?
This is a 4GB PCI drive at $3000 (yes, three thousand), but this is for a native drive with direct access to the PCI bus, and thus it can sustain 133MB/s.
What I'd like to see is a version that fits in a 5.25" drive slot with 12+ slots for RAM, using a standard connector for power and SATA II or SCSI (SCA?).
I can see several advantages for this product IF you think about it
Webcache server (hold the cache)
Temporary files (great for those programs that write temp files like crazy)
Swap space on a database server (look up PAE, SQL Server and 36-bit addressing - 32-bit Windows can address up to 8GB of RAM IF the OS and the app are written for it; been there :( )
Swap space for a badly behaved app - there are apps ported from *nix to Windows that tell the OS "I have pageable RAM", which the server then dumps to disk (4 million page faults in 2 hours!), only for the app to ask for it back
Log files - DB servers write out transaction logs once per transaction; this needs a drive that is FAST
Having more than one of these in a system (power system) means that you can separate out the I/O onto separate physical drives, or even better separate controllers, or best of all separate PCI buses (servers - really big servers - can have three PCI buses). This means for a server ("Unit" means a logical disk made from RAID arrays, separated out as much as possible by controller and PCI bus):
Unit 1 - OS and Apps Binaries
Unit 2 - Paging file
Unit 3 - Logs
Unit 4 - Temp
Unit 5 - Data
First off, another good article Anand. Now, on to my point...
I'm wondering about World of Warcraft. After the first article where the info debuted there was a lot of talk in the comments section, and one of the subjects was WoW. It wouldn't have been possible to install WoW to the i-RAM because it's too big (~4.6GB on my machine). However, once AnandTech receives another i-RAM to test with, either in JBOD or RAID 0, I would like to hear at least a subjective opinion on how WoW runs in large battles and such. I know my brother's machine gets stuttery when there's a big PvP battle, and through my troubleshooting I've gathered that it's a hard drive speed issue. If any of the AnandTech team has a high-level character on their account and likes PvP, please post something on performance in WoW.
I can't see having the i-RAM as being more beneficial to any game than simply adding more RAM to the system. If you're going to have 4x1GB DIMMs installed on the i-RAM, why not just put them into the system itself instead? As for WoW, even if the installed size is 4.6 GB, I doubt the game actually goes much above 1GB of memory use - very few applications do. If you have 2GB or more of RAM, do you still get stuttering issues in WoW? If so, there's a reasonable chance that it's simply GPU power that's lacking rather than RAM - or perhaps GPU RAM would help?
(Note: I'm not a WoW player, so I'm just shooting from the hip.)
There are at least 3 separate data files in the WoW installation that are 1GB in size each, plus a bunch of smaller but still over-100MB files as well. All told, as he said, it's about 4.6GB, and it's more than 4GB in that one folder alone. So yeah, the game would go over 1GB in memory use if it were written well enough.
I play WoW a lot, and loading into highly populated areas sucks. Your hard drive thrashes and you have no control of your character until everything is loaded. I'm assuming it's busy loading the textures of the equipment that all the player characters around you are wearing.
This i-RAM thing might help out a lot, seeing as consumer motherboards don't support over 4GB of memory and the data files alone for WoW total over 4GB. The problem again is that you'd need to RAID two of the i-RAM devices together to get that much storage, and we don't even know if it would result in a tangible benefit.
As others have mentioned, for all fast-action games, it isn't the load times that Anand should be focusing on... it's the in-game stutters when something suddenly has to get loaded from disk. Those are killers, and even if the initial game load times only decrease by 5%, if the stutters are eliminated, this might just be worth the cash - more than a new $600 video card, certainly.
My point wasn't that WoW doesn't ever exceed 1GB, but that it doesn't exceed 2GB of RAM use. Actually, we should have probably mentioned that point as well: no single application under 32-bit Windows (not counting PAE/NUMA setups) can use more than 2GB of RAM. The 32-bit memory space is partitioned into 2GB for applications and 2GB for the OS, if I have my information right. Basically, you need to try out WoW with a 2GB setup before you can say that i-RAM would or wouldn't be able to help.
Going back to the earlier statements, though, i-RAM is still nowhere near as fast as system RAM. The delay of PC3200 is around 140ns worst case, and bandwidth is still 3.2 GBps or 6.4 GBps dual-channel. i-RAM seems to be somewhere in the microseconds range for access times, and it's limited to 150 MBps bandwidth. If you can add RAM to your PC, that would be the first step to improving performance.
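Rough arithmetic with those figures (the i-RAM access time is an assumed ballpark, since an exact number isn't given here):

```python
# Latency + transfer time for a 64 KB read, using the rough figures above.
size = 64 * 1024

ram_latency,  ram_bw  = 140e-9, 3.2e9    # PC3200, single channel
iram_latency, iram_bw = 50e-6,  150e6    # assumed ~50 us via SATA, 150 MB/s link

ram_time  = ram_latency  + size / ram_bw
iram_time = iram_latency + size / iram_bw

print(f"system RAM     : {ram_time  * 1e6:6.1f} us")
print(f"i-RAM over SATA: {iram_time * 1e6:6.1f} us")
# Whatever the exact latency, adding real RAM wins by more than an order of magnitude.
```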
If you have Windows XP Pro, you should be able to make a volume that includes the i-RAM and a regular disk. Then you can make links on the i-RAM (NTFS junction points rather than hard links, since hard links can't span volumes) that point to the additional 600 megabytes or so on the regular disk that won't fit on the i-RAM. I've never done anything like this myself, but I think it should work. Any comments?
Someone's probably said all this, but I don't feel like reading all 80-odd comments:
First, this strikes me more as a proof-of-concept effort. Sure, they'll sell you the engineering samples, for $150. Rev 2 will be the real product.
Second, I did see several people suggest that interfacing the board to the SATA interface rather than directly to the PCI bus makes it slower. Why? Standard 32-bit 33MHz PCI only has 133MB/s of bandwidth, and that's often shared by other devices as well. SATA has 150MB/s of bandwidth, and in most cases is connected to the system by at least a 66MHz PCI link, or more often some other high-speed chipset link.
Interfacing to SATA also means that Gigabyte doesn't have to write drivers for 32- and 64-bit flavors of Windows, various Linux distributions, Mac, and more obscure but definitely present OSes like BSD, NetWare and Solaris (/me wonders about putting the boot partition and SYS volume of a NetWare server on an i-RAM... probably no real benefit, but you never know).
Third, I might imagine that Rev 2 will support SATA II with 300MB/s transfer speeds, ECC, and perhaps 8 DDR slots.
Would have been nice to see some info on what it performed like as the temp folder for Windows - all that internet browser cache and other stuff that Windows sticks off in temp while it does stuff.
This is data that you don't usually mind if it just disappears every once in a while :)
I remember five or six years ago there were products that would plug into a PCI slot and use PC133 RAM to do this same job. They would show up as a harddrive controller and windows would use default drivers unless you needed something different. This was when programs didn't expect you to have enough RAM to keep a scratch file in RAM, so they'd write out files after every action. A PCI card with a gig of RAM for accepting these scratch files made a huge difference. There's just less need now.
Then there's the other problem. SATA may be 150MB/s, but the PCI bus it's attached to is only 133MB/s. This certainly explains why everything runs at DDR200. If they'd made a PCI-X card there might be a bigger improvement. The bright side is that they used an FPGA. If next week they decide to implement SATA2, they can issue an update and everyone can upgrade their cards. Companies like Cisco do this several times a year in telecom products.
I'd hope and pray this thing is a lot faster than the iRam for all the extra cost. But the fact that it sits in a PCI card slot (I'm talking about the QikDrive linked above, not the iRam) makes me question that.
I was really surprised at how little it helped as a page file. Myself I sometimes encounter periods of slowdown due to paging that can last for several minutes where nothing can be done. I don't know if there's a common name for this but I'll call it the "page file wall". I don't know exactly how you would recreate such a tragedy in the lab. Too many apps open with too little memory obviously. But less obviously, it seems that during a period of overnight inactivity (with apps left open) windows will page a lot of stuff out to disk and you can experience the page file wall the next morning. It'd be interesting if Anand could devise a consistent "page file wall" benchmark.
As the article and many posts above suggest doubling my RAM would probably end my problems.
I still think this product (or revision 2 or 3) could bridge an obvious gap with PC's: SLOW harddisks and EXPENSIVE ram. When you run out of ram it can be like hitting a wall. It can be like crossing the country, but you go half by jet and the other half on foot. The gap should be filled with something cheaper than modern DDR and faster than harddisks. (This product is barely either.) I'd like to see a PC with 1 GB normal ram and 2GB of cheap-o 1/8 speed auxiliary ram. The OS could use this slower ram for paging with priority over paging to the harddisk. Not just for enthusiasts, but for regular beige PC's. Owners would then have another upgrade option with a better cost/benefit ratio depending on their needs.
I was waiting for a performance review of this thing and I'm so glad trusty Anandtech provided.
I was in my local computer shop and the guy working there pointed at a stack of hardware and said some guy just dropped $8000 on an Intel 955X (or whatever) system that included around 16 gigs of RAM disks, and I asked if it was based on DDR400 and he said no, it was in fact DDR2-533 I think. A quick search on the internet found nothing about DDR2 RAM drives, and it defies logic to me anyway, since I would think that DDR400 would be faster due to latency issues, etc. Has anyone heard anything like this? Also the guy at the store told me that it boots into Windows XP in 4 seconds. It sounds like a tall tale, but I don't see any reason why he would be making it up, as they are pretty reputable.
Any time you need to write something before you can continue, the latency becomes critical. Database writes (and logging) are a perfect example of this.
Under *nix the journal of a journaling filesystem is performance critical (although it's usually a sequential write, so it is about as good as you can get).
For Database engines that have good crash recovery (MySQL is not that good at this, but Postgres or Oracle are) they need to make sure that their log gets to a safe storage media before they can consider the write completed and tell the caller that it's done.
Even for an Apache webserver, with normal logging Apache will not return the page until the log has been written.
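A minimal sketch of that pattern - each "commit" is forced to stable storage before it is acknowledged, so the log device's write latency directly caps the commit rate (the path and counts are arbitrary):

```python
# Synchronous commit log: nothing is acknowledged until fsync() returns,
# so the latency of the device holding LOG_PATH sets the commit throughput.
import os
import time

LOG_PATH = "txn.log"   # put this on the i-RAM (or any device) you want to measure

def committed_append(f, record: bytes):
    f.write(record)
    f.flush()
    os.fsync(f.fileno())   # don't return until the device says the data is durable

with open(LOG_PATH, "ab") as log:
    n = 1000
    start = time.perf_counter()
    for i in range(n):
        committed_append(log, f"txn {i} committed\n".encode())
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:.0f} synchronous commits/sec")
```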
As a lot of people have posted here, it would make sense to use this as a cache for our hard drives by making it possible to plug the hard drive into the i-RAM and the i-RAM into the motherboard. This would overcome the 4GB limitation, and we probably wouldn't need the full 4GB for cache; 1GB or 2GB would do. But to see more increase in performance they will need to move it to SATA2 and have programmers build precaching into their code to take full advantage of the i-RAM.
Well, it seems that modern hard drives are getting a lot faster, and solid state doesn't seem to help as much as it would have, say, 2 or 4 years ago when we were running crusty low-density HDs...
However, I am also slightly disappointed in the design...
Why put main system memory in a drive and then limit it to SATA I (not SATA2)?
I thought the whole point of a RAM drive was to provide maximum I/O performance...??
Second, not allowing 2GB sticks doesn't make sense to me... I mean, 4GB is really small.
Maybe they should have thought this:
"Gee, let's try to offer more capacity - like, golly bunny, currently available 2GB RAM modules..."
Even so, if this can do 591% higher I/O performance than a Raptor in iPEAK Business Winstone, then I'm sure there are ways to utilize this in computing tasks...
Also, if you put the OS on it you won't ever need to defrag...
Nice, but expensive for now... expensive doesn't mean it's crap, just weakly spec'd to my mind for now...
Why do something like this and then water it down?
I think the disappointing benchmarks say something about current OSes' suitability to the i-RAM, and not the i-RAM's capabilities. I really think this is an idea ahead of its time. Windows XP isn't tuned for solid-state storage, the FPGA chip on the i-RAM isn't the best solution, and the SATA interface itself is a bottleneck. If Windows Vista and future BIOSes had support for PCIe storage, imagine a version of the i-RAM that had a straight PCIe interface supporting the full 1.6GB/s or more depending on the type of memory you put on it, and 8GB or more memory thanks to 64-bit addressing.
Windows Vista will already have support for hybrid drives (NAND+platter) so the caching and paging routines will be optimized for solid-state storage. I actually think iRAM might be better than hybrid drives because 1) you can use existing drives with it, 2) iRAM is expandable (up to a limit), 3) DDR is faster than NAND
I could see SATA II removing the bottleneck, but still, 4GB of data? Gigabyte is smarter than this... it's just not going to fly. Though it is a pretty good start.
The next logical step is probably finding a way to get a standard hard drive to use something like this as a memory buffer (7200rpm with 1GB of DDR200 cache), and then maybe it would actually be worth it.
I was disappointed that nothing was mentioned of the practicalities of moving Windows or a game onto this thing. Is there any software that would transfer whatever data is on this thing (including functioning operating systems) to a normal drive at regular intervals? And keep it functioning? If not, what's the point?! Each time you have to install Windows/a game to this thing (after power failures or just for the sake of having something different on it), you have to install all the updates/personal tweaks/mods/saved games/configurations etc., which would take SO MUCH MORE TIME than the extra few seconds you save from faster boot/game load times... Why does AnandTech not take these things into consideration?! To paraphrase another poster: WHOOPEE-F*CKING-DO
The $150 thing is a killer. But if they can only pump out 1000 of them, it makes business sense to have the price high. This is just like AMD having high X2 prices because they couldn't possibly make enough quantity to fill orders if the price was lower... same exact thing.
$90 per 1GB stick of ram is high, I'm sure people can shop around and find it cheaper.
As for RAIDing two of these, Anand said he only actually had one of them, but was trying to get a second. So maybe more on that later. I think that even if Raid 0 doesn't work for some reason, JBOD would work.
I'm curious what the bottleneck in computers nowadays really is. I think Anand should get an nForce Pro with 8GB of RAM running 64-bit XP, set up the largest RAM disk (a real software-type RAM disk) you can, and see how that affects performance. If performance shows the same mediocre gains that this device showed, then that means a new SATA2 version wouldn't improve things either. If that test showed there were large gains out there to be had, then yeah, there's a future here. I would do it myself but I don't have access to that hardware, hehe.
I'd like to see how this would change the overall latency of a system. I have a pretty nice home studio, and I can see using this as a boot drive, and then recording off to a RAID array. With all the random accesses coming from the solid state drive, and only sequential ones going to the RAID, I'd think the latencies would drop significantly. Could be pretty handy, even extending the life of older systems.
Anand, first of all great review, it's nice to see some numbers on this.
Would it be possible to bench a few tests again with 2GB of system memory? I can vouch that 2GB makes a noticeable difference when loading any game. I realize that you were going for an "enthusiast" level machine, but games like HL2, Doom 3, and Battlefield 2 have started a push at the high end to upgrade to either 2x1GB or 4x512MB.
Could they perhaps have gone with a full-size card and then oriented the DIMM slots perpendicular to the mobo? I had something like that ages ago in an Amiga that worked well from a size perspective. It might get them to 8GB :)
The cost of this unit was increased 3x.
Then it went from SATA2 to SATA.
Real-life performance is not as good as I expected. When I first heard about it I was excited to see them working on removing the bottleneck, but going from a 13-second load time to 10 seconds doesn't warrant the cost of the $150 card and 4GB of RAM.
I think the more useful implementation is to have the RAM pre-installed onto the drive. And I'm not talking RAM sticks. I'm talking about these guys at Gigabyte contacting Samsung, Micron, or Crucial to directly supply the chips and directly solder them onto 5.25" plates. I think in the space of a 5.25" bay, you can fit 2 of these said plates. It won't be hard to think that they'd be able to fit 15GB of RAM in a 5.25" drive's space.
Then with the remaining space, mount a MUCH larger battery. Have the battery be able to last DAYS, not hours. This will set people a little more at ease. It will sure make me feel better. (and no, this 5.25" ramdrive will not be using a molex connector. Simply put in a dummy PCI card to allow the 5.25" to draw power from it)
The fatal flaw in their product design is that most people simply won't have that many RAM sticks laying around to make this thing useful. Why not supply the RAM, and in the process increase the possible size from 4GB, to something much more useful. If we already know that only 'power users' with little budget restraints will buy this, then just supply it the way we know they want it: Big.
Yeah, one really needs about 15-20GB to make this a livable reality. And that would cost about $3K, and about $4K if they did it right, i.e. Ultra SCSI or even a PCIe interface.
If they got really serious: tuned it up with on-PCB DDR3, made it something like a ZIF socket thing, gave it a direct bus to the chip, changed the memory controller to let it throttle wide open, wrote drivers and OSes to just use it. It might be like a really fast BIOS setup for the OS. At first it could be an extra, but as costs came down maybe it would be integrated into the motherboard. Hmm, nearly instant boot-up... it's a dream, even if it's only mine!
I think another possible use (besides certain kinds of servers, like mail servers), is for video capture. The size is a bit small, but if you were capturing segments of footage, it might work. And the price could be reasonable.
"but 32-bit Windows can't use more than 4GB of RAM, including the swap file size."
First of all... "Swap file" is a misnomer. We talked about "swap file" back in the Windows 3.1 days when the OS would swap a process' entire memory space to the *swap* file.
These days the OS will read/write selected pages of a process' memory from/to the cache manager (who may or may not elect to use the disk to get to the physical pagefile). *Paging*, not "swapping". Executables and libraries are memory mapped and thus start their lives with all pages firmly on disk (so a big executable won't necessarily load slow, but many small DLLs OTOH just might).
I don't have Windows XP in front of me, but my 32-bit Windows 2003 Standard ed. with 4GB memory and 1GB pagefile certainly doesn't seem affected by the limitation you mention. Enterprise edition can address even more physical memory... Each process is still limited to a 2GB virtual address space though. (32-bit processes marked capable of such will gain a 4GB virtual address space under 64-bit Windows)
Without PAE (or something similar), 32-bit OSes are indeed limited to 4GB of RAM. This is what is being referred to, as PAE is limited to Intel and I don't believe it's available on non-Server versions of Windows. (Correct me if I'm wrong, but PAE is pretty much only on Xeons, right?)
You're right that it's paging instead of swapping now, but there's really not much difference between the two. Basically, you put data onto the HDD in order to free up physical RAM, on the assumption that the least recently used data that was moved to the HDD won't be accessed again for a while.
Anyway, I've modified the comment to reflect the original intent. If you're running PAE and Server, it's a whole different ball game for high memory systems.
Wow, my friend and I talked about the possibilities for these things several times. But at 3x the initial price and not the performance increase I would have expected, the techie in me is disappointed. My wallet is happy though.
I wonder what the issue is with RAID that Anand comments on.... seems odd that it would behave differently than a HD in this respect and cause problems...
I would love to have 12gb or more... which is enough for Windows XP, a productivity suite, and a modern game... anything more could be run from NAS
I saw someone else posting as well, but I would very much like to see some database performance numbers from this device, as well as perhaps a web-serving benchmark.
I laughed when I saw that line :) A very interesting device and I look forward to where this goes in the future. Your "Final Words" could use a bit of brevity.
Huh, if this was at the $50 price point it would be a bit more interesting.
I didn't like the pagefile test - it made no sense at all. Of course going from, say, 4GB of RAM to 2GB + 2GB of i-RAM isn't going to improve the system... You needed to test what JUST changing the pagefile from HD to i-RAM does. What about a typical 1GB RAM setup that most of us use? I still hit the pagefile on occasion, and I do have ~1GB of old DDR I could use. Load times? No, I'd like to know if it smooths out gameplay. I know Doom 3 hiccups on my machine due to disk accesses.
Otherwise this doesn't look like it makes a lot of sense in its current incarnation.
I know the article says it doesn't support ECC memory, but will it still take it and run it in non-ECC mode? Most mobos, I believe, can at least do this. What about registered memory? I've got a couple of sticks of 1GB DDR266 RECC memory I'd like to use!
I definitely won't purchase this product until they implement SATA-II at 300MB/s. Why should I shell out $150 for SATA150 when my DFI LanParty Ultra-D can do 300?
I even asked one of the product managers at the AMD tech tour. I don't see why they wouldn't do it since SATA-II is backwards compatible to SATA-I.
BTW, I hate this new layout. I have to click to read the next comment. Is there any way to fix this? Also, the forums didn't get a visual makeover.
If more benches are to be done, I would put in a suggestion to test some compile times. Then I guess you should compare it to boosting your system memory and installing a RAM drive, but this could be more convenient if you have those old 256/512MB memory sticks lying around.
A while. You would have to find how much power is dissipated by the i-ram, then use the capacity of your UPS to get an exact number. I would go as far as to say maybe up to a month if you have a good ups.
$600 for 4GB (read useless) drive that maybe is not much faster than two 73GB drives in RAID 0 for half price? Uh Huh. If they sell 3000 I'll be shocked.
I thought they said that they were only going to make 1000. Enough for the crazies who have money to burn...
P.S. If any of you crazies are reading this, I could burn some of that money for you... just let me know.
Thanks for running through the multiple roles for which the iRam might be useful. I'm rather surprised it wasn't MORE useful in the benches. I'd be interested in learning (i.e. slacking back and reading the results of someone else's research) why the i-Ram is still as large a bottleneck as it is. Yes it's faster than the HD, but why isn't it much, much faster? Are we seeing OS inefficiency or something else altogether?
In the end, though, it doesn't fit my needs particularly well, so I'll pass this round. Maybe a future version will be more appealing in terms of cost, speed, size.
No, absolutely not. Even if it were, the SATA interface is *way* too slow to be of use for something like that.
And even if that were not a factor, why spend that kind of money on the i-RAM where the same amount would buy a *much* superior video card with its own dedicated memory?
I think that this would be very helpful as a page file for workstations. Older workstations may be maxed out with 4GB and windows 2000 (which the company does not want to move over to xp-64) and still need additional ram for CAD/CFD/etc. This would be an easy upgrade with a reasonable amount of performance increase.
Was hoping it would offer more, especially as a pagefile. Any plans to make a PCI-E version (IIRC PCI-E has a ton more bandwidth than SATA)? That would likely make this a must-have. As it stands now I'd only use it for the silence in an HT setup.
I too am disappointed that the article lacked any mention of SATA2, which is twice as fast as SATA (300MB/s vs 150MB/s). Considering many motherboards already on the market support SATA2, and the 300MB/s transfer rate that goes with it, it is a bit of an oversight that the article doesn't even MENTION whether the card supports SATA2 or not. Nor do they mention what they think would happen with SATA2, or if Gigabyte is likely to produce a SATA2 version. It's a weak spot in this article, I think, considering how central the bandwidth of SATA is to the performance of the i-RAM.
quote: I too am disappointed that the article lacked any mention of SATA2, which is twice as fast as SATA (300MB/s vs 150MB/s)
33MHz PCI only gets you 133 MB/sec theoretical, and more like 110 MB/sec in the real world. The i-RAM with SATA 1 can completely saturate a PCI bus. SATA2 would cost more to implement, and give you no speed increase at all on a 33MHz bus. If you build the card for higher-end PCI specs (e.g. 66MHz, 64 bit, 66MHz/64bit, PCI-X) then you automatically exclude most PC enthusiasts (unless they like buying server boards for their game boxes).
If they end up doing a PCI Express version, then there would be some reason to support SATA2.
This board is not a replacement for a hard drive. It would be incredibly useful as a transaction log though. Reliable (i.e. won't get lost if the machine crashes) write-behind caching for RAID 5 drives will give you a huge boost to write speeds. And the controller cards that support battery-backed write behind caching cost a lot more money than an i-RAM.
Actually, scratch my comment - I had not had enough coffee when I wrote it. I forgot that the PCI connector is doing essentially squat except providing power to this device. Of course you could have a SATA2 controller on a faster bus talking to this thing. But an SATA2 version would probably cost more. (because it would need a faster FPGA, newer SATA transceivers)
You did miss that reference; on page 2 it says "The i-RAM currently implements the SATA150 spec, giving it a maximum transfer rate of 150MB/s".
Given the 1.6GB/s of the RAM, it seems completely silly not to provide a 300MB/s SATA interface instead, especially considering that the whole contraption including RAM will cost as much as 2 or more decent hard drives.
1.6GB/s is actually more than 5 times 300MB/s, the maximum supported by SATA-II. So 300MB/s could easily be fully utilized, and I don't understand why they didn't support that.
It could be useful for a pagefile if you have a couple of old 128-256MB DDR333 or older sticks lying around, especially if your RAM slots are filled with 4x512MB. This can definitely improve performance over hard drive pagefiling, which is horrible. I wish Gigabyte had done 8 sticks instead of 4. The benefit of 8 sticks is that it would truly allow users to use their old sticks of RAM - 128MB, 256MB, etc. - instead of just 1GB sticks. Right now, the price is too high for the actual i-RAM module, and the price of DDR RAM is too much as well. If Gigabyte does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x i-RAMs with cheap 512MB and 256MB sticks of old RAM running in a RAID configuration would be a good solution to the hard drive bottleneck, especially if people these days are willing to pay a premium for the Raptors.
mattsaccount, you would need 3 cards to run RAID 5.
Here is one thing that is not mentioned on AnandTech in most of the storage reviews, and that is responsiveness (as I like to call it). Back early in the day when people were starting to use RAID 0, most benchmarks showed little improvement in overall system performance; even now the difference between a WD Raptor and a 7200rpm drive is small in terms of overall system performance. However, most benchmarks don't reflect how responsive your computer is; it's very hard to put a number on that. When I set up RAID 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't improve much. Same with the i-RAM card: using it probably feels a lot snappier than using any hard drive, which is very important.
Raid 0 has a higher access time than no raid. Unless you were running highly disk intensive applications the snappiness would be attributed to ram, not the harddrive.
Not at all, Steve; the access time goes down by .5ms at most (don't take my word for it, I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data, you get it twice as fast as from a regular hard drive (assuming around 128k RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the feeling of responsiveness comes from.
RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now have to both seek to the same area - i.e. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, i.e. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate.
RAID 1 can improve access times in theory (if the controller supports it) because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 a rotation to about 1/3 of a rotation (assuming the heads start at the same distance from the desired track). The reality is that RAID really doesn't help much other than for redundancy and/or heavy server loads with a high-end controller.
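A quick Monte Carlo check of that reasoning (idealised: unsynchronised platters, uniform random rotational delay, seek time ignored):

```python
# One drive waits half a rotation on average; the faster of two independent,
# unsynchronised mirrored drives waits about a third of a rotation.
import random

rotation_ms = 60_000 / 10_000        # one revolution of a 10,000 RPM drive: 6 ms
trials = 100_000

single = sum(random.uniform(0, rotation_ms) for _ in range(trials)) / trials
mirror = sum(min(random.uniform(0, rotation_ms), random.uniform(0, rotation_ms))
             for _ in range(trials)) / trials

print(f"single drive: {single:.2f} ms average rotational latency")
print(f"best of two : {mirror:.2f} ms (idealised RAID 1 with a smart controller)")
```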
Um, yes. This is what I meant - mirroring (RAID 1, not RAID 0) could improve access times, as both disks could access different data independently (if the controller was smart). Sorry about the confusion.
I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is spanned across both drives, meaning the seek would be no faster than on a single drive, and likely a tiny bit slower because of overhead.
RAID-0 ought to offer better random read access times as there are two disks that can read independently. Writing would be somewhat slower though as both disks need to be synced.
I'd like to see some server benchmarks with this. For example:
* mail server (especially servers using maildir, which generates lots and lots of files)
* web server
* file server
* database server (mysql, for example)
quote: One of the biggest advantages of the i-RAM is its random access performance, which comes into play particularly in multitasking scenarios where there are a lot of disk accesses.
Anand, how about an update with some server / database benchies?
Gigabyte might have something on its hands if it makes the card SATA-II to use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to REV 2.0 of the i-RAM GigaByte!
This thing would still be useful as a pagefile in some circumstances - if all your memory slots were full and/or you had extra memory lying around. This is what I had been planning to do with it (I currently have 4x512MB, plus a couple of other smaller-capacity DDR sticks which would be nice to use for Photoshop stuff). But the price is too high. I'll wait till it drops.
However, using this card as a journaling device for a normal filesystem, like ReiserFS or Reiser4 might be very beneficial. Wouldn't require much RAM either.
Would there be a difference with other SATA cards, such as 3Ware etc - i.e. would CPU usage make a difference perhaps?
Why not use SATA-IO (SATA-2) instead of the older and slower SATA (re: Gigabyte)?
But otherwise a very informative article, thanks Anand.
It would be best to wait for the second version of the card, which will hopefully have a cheaper IC as well as SATA II support. There's no doubt that the RAM can do 3.0Gb/s.
- File copy performance is mostly a moot point, because copying files from disk to disk will go as fast as the slower of the two can, and other applications that typically require disk performance (unarchiving et al) will only see a minimal performance increase due to bottlenecks in other parts of the system (which becomes even less valuable when you consider that you won't be doing a whole lot of unarchiving to a disk that small).
- Gaming benefit would be okay if you could fit more than about one modern game on it.
- Using it as a pagefile is, as Anand noted, pointless.
- It does improve boot times, but it's not a huge difference, how many of us shut down often enough to actually be bothered by a few seconds in boot?
- It does improve app loading times slightly, but if you're opening and closing apps that take a lot to open and close, it's probably because you don't have enough system memory, so buy more memory instead.
I'm just gonna pick at a single point ... you could install one game to the i-ram at a time and then archive them on another drive.
You get fast zip times on the i-RAM, and a single file transfer to a magnetic disk is faster than multiple small files (moving the archive won't take long). Just unzip the game you want to play to the i-RAM...
but then ... that kinda defeats the purpose doesn't it ...
I could see this being fun to play with, but I have to agree with Anand -- it needs higher capacity before it is really useful.
I don't really see anyone using this; it costs way too much for too little storage and too little performance benefit, not to mention the risk of data loss. I'll give it a look again when they get some higher-bandwidth flash or something like that. This I can pass on for now.
I dunno, I could see the extreme enthusiasts getting these. I mean after all, if they have the money to buy a system with SLI 7800 GTX and FX 57 this would be pocket change.
Have you people thought about using Knoppix and copying it to the drive? For that matter, that and stuff like Damn Small Linux can be run totally from system RAM.
Instead of using this for a silent drive, though, you are better off using flash memory drives for that.
Unfortunately if they did that, it would mean that your computer could never be turned off. As noted in the review, the card is currently still powered even if the machine is "off", due to the fact that when a modern ATX computer is off, it's actually more of a super-standby mode that leaves a few choice items powered on for wake-on events(LAN/modems, and the power button of course). All Gigabyte is doing here is taping in to the 3.3v line on the PCI slot that wake-on power is provided through, which is enough to keep the device powered up even when the system is in its diminished state.
Molex plugs on the other hand are completely powered down when the system is "off", so it would be running off of battery power in this case. A lot of us leave our systems on 24/7 anyhow, but I still think they'd have a hard time selling a device that would require your computer to be off for no more than 16 hours at a time.
They could use the USB power. On most motherboards you can enable with a jumper or BIOS to supply standby power to the USB. Often the setting is called "Wake on USB" or "Wake on Keyboard" etc
"What would be really neat is if they could design an i-Ram device that uses 2 HDD bays and supported 8+ GB of ram and ran from a standard molex."
Was thinking something similar myself as I was reading.
I think once RAM modules are 4GB or larger, this could be very useful. But not until it gets updated with SATA2, DDR400, etc. When the time comes to build an HTPC, then I'll give this another look.
What I would be very interested in seeing is the performance of the thing used as the source for encoding a DVD/MPEG... Most encoders are heavily disk-based, and if it could reduce the time significantly it might be worthwhile - assuming that eventually they come out with one big enough to hold the source. There's now reasonable CPU encode performance; you just have to get the data to/from it... maybe the i-RAM would help.
Hmmm, the WD Raptor has a sustained transfer rate of 72MB/sec. So on a freshly formatted drive, with no fragmentation, it should still be half the speed of the iRam. But at $200 for a 74GB drive, then you could get a pair of these running in RAID0, which would run at around 140MB/sec anyway, and still have spent less than the cost of the iRam and 4GB of DDR DIMMs. It definitely seems like this product falls short.
The use of PCI 3.3V standby power is clever. Perhaps a future version should just use a dummy PCI card to provide the power, connected to a 5" drive-size case with many more DIMM slots. If you can't cram at least 16 DIMMs in there, then the ability to use old memory is kind of wasted, since the old modules will have such small capacities.
Ultimately I think this type of product will always be a failure.
What they should do instead is make it a pass-through cache for a real SATA drive. So you plug the SATA controller into it, and plug it into a real SATA drive, and it caches all I/O operations to the real drive. That's the only way that you can get meaningful benefit out of only 4GB of memory. A card like this would turn any SATA drive into a speed demon; 4GB is definitely a decent size for most caching purposes.
Of course the next logical step is to put the DIMM slots on the SATA controller card, so that access to cached data occurs at real memory speeds, not just SATA bus speed. This would only be a useful product for folks stuck on 32-bit systems, because otherwise it would be best to just increase the system memory instead. But there are plenty of 32-bit systems out there that would benefit from the approach.
That and/or having the possibility to install very large amounts of RAM (like 32GB) on your motherboard and BIOS settings to decide how much of that is non-volatile.
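A minimal sketch of what that pass-through cache logic might look like, assuming a fixed block size, plain LRU eviction, and write-through to the backing disk; the backing file path and sizes here are placeholders, not anything an actual controller exposes:

from collections import OrderedDict

BLOCK_SIZE = 4096                          # assumed cache granularity
CACHE_BLOCKS = 4 * 2**30 // BLOCK_SIZE     # roughly 4GB of DRAM worth of blocks

class PassThroughCache:
    """LRU read/write cache sitting between the host and a backing disk image."""
    def __init__(self, backing_file, capacity=CACHE_BLOCKS):
        self.disk = open(backing_file, "r+b")   # stand-in for the real SATA drive
        self.capacity = capacity
        self.cache = OrderedDict()              # block number -> bytes

    def read_block(self, lba):
        if lba in self.cache:                   # cache hit: serve from DRAM
            self.cache.move_to_end(lba)
            return self.cache[lba]
        self.disk.seek(lba * BLOCK_SIZE)        # cache miss: fall through to the disk
        data = self.disk.read(BLOCK_SIZE)
        self._insert(lba, data)
        return data

    def write_block(self, lba, data):
        self.disk.seek(lba * BLOCK_SIZE)        # write-through keeps the disk authoritative
        self.disk.write(data)
        self._insert(lba, data)

    def _insert(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if len(self.cache) > self.capacity:     # evict the least recently used block
            self.cache.popitem(last=False)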
I have a feeling this is a transitional product that while being a very nice add on to your current system, will become obsolete in 4 to 5 years. If I had to capture loads of high sampled audio (96/24), I'd want one now, though.
I was expecting something closer to the $50 price mentioned at Computex... It would have been a nice device to tinker around with, but at that price (plus the price of RAM) I don't think most of us will get it.
Why they have to waste PCI bus speeds and run through a SATA chip is beyond me. It should connect directly to the PCI bus, have its own BIOS, and run as full-fledged RAM, or as normal RAM with a redirect to act as an HDD - heck, you already have RAM disk software for that. The drive is pretty useless as permanent storage; why no one could see this I do not know.
RobRedman - Saturday, October 7, 2006 - link
I must have 50 sticks of unused PC100 and PC133 SDRAM.Something like this for old RAM would be a value, (for me).
Does anyone know of an adapter that would take, lets say, 10 sticks of SDRAM and give me an IDE or USB connector?
ITLisa - Saturday, October 1, 2005 - link
I spent a little time looking for this and not even the manufacturer lists it on their site.
mtownshend - Tuesday, March 14, 2006 - link
It was on Gigabytes site as I looked today and the past month while making the descion to get it.There's been a lag while retailers get rid of the v.1.2 and Gigabyte sends out the v1.3 cards.
I just got one of the new ones and will use it to run my FTP server application. I have 14 or 16 drives connected (6TB) to the server and previous reviews by others have pointed to the performance increse from the FTP app. searching and retaining the disk locations.
since I just got it I am not %100 on the reality, and the real benifit will be realized by the client seeking a file from the server. Using it for MS SQL Server is also a great idea. Other than that I haven't heard any real world uses, I mean users might be able to load Doom faster, but this device seems to be a bit expensive for most.
Also this card is bigger in area than most video cards, so if your box is crammed w/ wires or liquid pumps and resivours. The logistics of getting say 2 video cards and the RamDisk in a midsized case are pretty obsurd. Plus you need a fan or 2 in there to swirl around the heat generated by 3 heat monger cards) ...There goes more money in a bigger case.
For the general user, I would go with the new Raptor (the clear one) if you want to compromise speed, size and cost on a rational level.
Peace all
lrohrer - Tuesday, September 13, 2005 - link
The simplelist and best use of the i-ram is to store the "Temp DB" for SQL Server. MS SQL Server constanly writes to this database in most larger installations. It is temporary and by definitions does need to exist after reboot. (Alas SQL Server does not/CAN NOT keep this database in RAM) So on reboot a script will need to verify that it is still formatted and the appropriate file system/ files exist -- copied from the hard disk. SQL Serve is fussy about hardware so it masquerading as a disk is perfect.In an hour on google I can't find someone to sell it to my boss to try it out. sigh.
My prediction is a 5-10% boost to overall throughput on a SQL server installation with lots of "temp DB" activilty -- well worth the cost of the ram chips.
brandonbates - Friday, August 5, 2005 - link
I've been keeping an eye on ram disks for a little while now, but other than software they are just too expensive. The earlier post that had links to them (both flash and DRAM based disks) was the same stuff that I found. More recently I had been relieved by the availability of 64bit Systems and OSes with more slots/address space for ram and thus bigger ram disks. But it still really burned me that someoune couldn't make something really cheap that didn't rely on a big fat motherboard (which still has only so many slots, but admittedly faster).This qualifies. The second I heard about this while reading computex stuff I said to myself: Self, this thing only takes power from the PCI bus, therefore it would be a trivial thing to buy some PCI slots (like 8) and wire them for power, then raid or jbod these together and get one heck of database drive at a fraction of the cost of other solutions, and scaleable at that (I can start out with 2 or 3).
I also think it would be a nice (and easy) thing for them to put it in a 3.5" form factor with both molex and/or 3.3v standby loopthrough (through a pci dummy card or something). And yes 8 slots would be much more saleable, understanding that the mem controller may not support that (though some sort of bank switch would work since you have time to wait for the SATA or SATA2 bus, 3.5 form factor would get difficult with 8 slots though).
The situation that got me looking at this stuff is that I have a MySQL database (I tested others as well) that has to do a table scan each time I run a query, since it is a '%something%' query (loading web logs and running user-demanded reports on them). The database is at around 4 gigs already (about 6 months' worth, including 0.5GB of packed indexes) and the report takes about two minutes (2 15k drives in RAID 1, not bad). But I still have to run it at night and make a summary table. (Maybe a database with multithreaded partitions or grid would do it, but how much does that cost???!?) Anyway, my 2 cents (sorry for the long post). I'd really, really like to know what benchmarks say the latency for this thing is.
Zar0n - Tuesday, August 2, 2005 - link
For a mass product Gigabyte needs to add:
8 DIMM slots
SATA2
Let's hope they do it fast...
ybbor - Friday, July 29, 2005 - link
What would happen if you stored a SQL database on the drive... I wonder what it would do for database performance benchmarks. You would probably back up to HD every night, or cluster with the DB on HD for data integrity.
Chadder007 - Friday, July 29, 2005 - link
They should have gone with a SATA II capable interface instead of regular SATA, since it has much more capable bandwidth sitting there waiting to be used. Also the 4 gigs of memory only hurts it a tad too.
optiguy - Friday, July 29, 2005 - link
Now just as a thought on scary uses for the i-RAM: law enforcement will hate these things. Pedophiles will have instant access to wiping their files without a trace, and terrorists won't have to worry about the good guys being able to track their files.
mindless1 - Friday, July 29, 2005 - link
Nope, pedos have a compulsive urge to collect stuff, 4GB wouldn't even come close. Besides, if the pedo was thinking that far in advance, there are plenty of already-existent technologies far more secure. When the cops come busting down someone's door, do you think they'll say something like "freeze, don't move, unless you prefer to go over to your computer and wipe data!"? Then again, general ignorance about the need to keep the evidence battery charged could be an issue.
NStriker - Thursday, July 28, 2005 - link
Anand quotes $90 per GB of RAM here, but I'm wondering if the i-RAM works with the much cheaper high-density junk you see out there all the time, like 128Mx4 modules. On motherboards, usually only SiS chipsets can handle that type of RAM, but there's no reason the Xilinx FPGA couldn't. Right now I'm seeing 1GB of that stuff for $63.
jonsin - Thursday, July 28, 2005 - link
Since the Athlon 64 has an integrated memory controller, the north bridge no longer needs one. Why shouldn't the original memory controller be used for the i-RAM's purpose? By supporting both SDRAM and DDR RAM, people could make use of their old RAM (which is no longer useful nowadays) as a physical RAM drive. Spare some space for additional DDR module slots on the motherboard exclusively for the i-RAM, and an additional daughter card could be added for even more slots.
Would it ultimately be a cheaper solution than the i-RAM?
jonsin - Thursday, July 28, 2005 - link
And more: power could be drawn directly from ATA power on the motherboard. By implementing a similar approach to the i-RAM, an extra battery could power the RAM for several hours. Enabling DDR/SDRAM capability in the north bridge is not new technology; every chipset company has such tech. They could just stick the original memory controller, with lower performance (DDR200, so more modules can be supported at lower cost), into the north bridge; the cost overhead is relatively small.
What I think the extra cost comes from is extra motherboard layout, north bridge die size, and chipset packaging cost (more pins). I suppose it could cost as little as $20?
jonsin - Thursday, July 28, 2005 - link
More: the original SATA physical link could be omitted, as the controller in the north bridge could communicate directly with the SATA controller internally (south bridge through HT?). In this case, would the performance increase considerably and the overall layout be tidier? (No need for an external cable and cards.)
mindless1 - Friday, July 29, 2005 - link
NO, these are all problems. The purpose is to have universal platform support that is gentle on power consumption. That means a tailored controller, and even then we're seeing the main limit is the battery. "Tidy" is an unimportant human desire, particularly less important inside a closed PC case. All they have to do is route bus traces well on the card and be done.
slumbuk - Wednesday, July 27, 2005 - link
HP sells an add-on for their DL380 server for $200 (at discount) that gets you 128MB of disk write cache... it makes a good system also fast for disk writes. This card could be used by Linux vendors to enable file-system data and control logging for similar money for GB(s) of write cache... Cheap, reliable, fast general-purpose file servers that have fast disk write speed without risking data loss. Speed meaning no disk-head latency, no rotational latency - just transfer time.
It would sell better with ECC memory.. or the ability to use two cards in a mirror.. at least to careful server buyers..
slumbuk - Wednesday, July 27, 2005 - link
You could set up the i-RAM drive as the journal device for Reiser or Ext3 logged file systems - and log both control info and data - for fast, safe systems without too much fuss. I think I want one - but not as much as I want other stuff...
AtaStrumf - Wednesday, July 27, 2005 - link
Interesting but hardly useful for most. Kind of makes sense to only make 1000, but of course that's where the $150 price tag comes from.
rbabiak - Wednesday, July 27, 2005 - link
I guess it would add to the base board cost, but a SATA controller on the PCI card would make it a little nicer, as then you are not taking up one of your SATA channels; I only have 2 and they are currently both used for a RAID 0. Also, if they made the PCI card a SATA interface and then short-circuited the back end to connect directly to the memory, wouldn't they then be able to get much higher transfer speeds than SATA, and yet all the existing SATA drivers could be used with it, given they emulate an existing SATA interface?
DerekWilson - Thursday, July 28, 2005 - link
Better to use the onboard ports ...a 33MHz/32bit PCI slot only grants a max of 133MB/sec. This would make the PCI bus a limiting factor to the SATA controller.
Step beyond that and remember that the PCI bus is shared among all your PCI cards. Depending on the motherboard some onboard devices can be built onto the PCI bus.
With bandwidth on current southbridge chips already being dedicated to SATA (or SATA-II), it would be a waste in more ways than one to build a SATA controller into the i-RAM.
That's my take on it anyway.
Derek Wilson
jconan - Wednesday, July 27, 2005 - link
Of all the disk-intensive apps I could think of, aren't BitTorrent clients a bit disk intensive? Would the i-RAM make a good match for BitTorrent?
robmueller - Tuesday, July 26, 2005 - link
I agree with the people who mention server uses for this product. There are already quite a few products like this around in the server space, but they are all VERY expensive. There's a comprehensive list here: http://www.storagesearch.com/ssd-buyers-guide.html
The one thing to note, most of these are flash based drives, which means they retain their data, but are actually quite slow transfer speed wise. When it comes to pure performance solutions (which are usually DRAM with battery and/or HD backup), there's only a couple of companies:
http://www.umem.com/Umem_NVRAM_Cards.html
http://www.superssd.com/default.asp
http://www.curtisssd.com/products/
http://www.cenatek.com/product_rocketdrive.cfm
http://www.hyperossystems.co.uk/07042003/products....
http://www.taejin.co.kr/english/product_intro.html
We've been long-time users of Micro Memory products, and in general they've been great. We place database journals, filesystem journals, and general server "hot" files on the device and get great performance out of it.
The biggest issue with most of these is price and support. RocketDrive is Windows only (we have Linux servers). HyperDrive doesn't appear to be shipping yet (we ordered one and haven't heard anything). Jetspeed I've never even been able to get a sensible reply from. Curtis seem to be focusing on Fibre Channel (their SCSI interface drive is now quite old, only 80MB/s), which means you need to spend almost an extra $1000 on just a controller. RamSan are incredibly expensive and FC only, but apparently have amazing performance as well. Umem does have a Linux driver, but Umem are no longer selling retail; they are only selling wholesale to big storage vendors that use them in their products.
So that basically left us really interested in the i-RAM as a potential long-term replacement for Umem in new servers we buy. It's a pity that the apparent performance is a bit lacking. On the other hand, the biggest advantage of RAM-based drives is the latency reduction. Basically you can write, have your data committed to "permanent" storage, and move along to the next task straight away. This is the whole point of database/filesystem journals. It would be great to test the i-RAM with real server scenarios that rely on this low-latency ability. Rerunning the database tests with a combination of journal and full database on the drive would be really interesting.
http://www.anandtech.com/IT/showdoc.aspx?i=2447
Basically it seems that this is a really hard product to sell. There's definitely a market for it in the server space, but most of the people who realise that are big DB/file system users, and are usually willing to spend more to get an "enterprise" like product. It would be really nice if all those "middle" users with database/filesystem/email issues could be shown how to use one of these to significantly extend the life/performance of one of their servers...
Scarceas - Tuesday, July 26, 2005 - link
I see this as a much easier way to run your OS in RAM (hell, I don't think there is a way to run XP on a RAM partition). If you have 4GB of RAM, you can partition 3.5GB and run Win9x in it. That leaves the max 512MB of conventional RAM for 9x to work with. It takes a lot of work, but I think it is faster than this because you don't have the PCI bus constraint, and the RAM controller on a motherboard is probably flat-out superior.
It would be interesting to see a comparison...
Scarceas - Tuesday, July 26, 2005 - link
Why did the 300MB file from the drive to itself take ~4 times as long as the 693MB file from the drive to itself? What am I missing?
Antiflash - Wednesday, July 27, 2005 - link
It is a 300MB folder containing several files that could be located in different positions, which means more random access. The other is a single file; it is larger, but the data is read from adjacent positions on the disk. In the first case you have to add the overhead of the processing time of the OS when dealing with several files.
JarredWalton - Wednesday, July 27, 2005 - link
Actually, you need to make it a bit more clear: it's the Firefox source code, which is likely thousands of small files. It's not just a few or many, but *TONS* of little files. Even though the access times of the i-RAM are much lower than that of a standard HDD, there is still latency associated with the SATA bus and other portions of the system, so it's not "instantaneous". Three times as fast is still good, and that's relative to the Raptor - something like a 7200 RPM drive would be even slower relative to the i-RAM. Still, the best case scenario for heavy IO seems to suggest the current i-RAM is only about 3X faster than a good HDD setup. Good but not great.
- Tuesday, July 26, 2005 - link
There's only one comment so far in this entire thread that really touches on where the i-RAM is truly going to succeed, and a few posters flirt with the notion in an offhanded manner. The benefits of an i-RAM would really come out during I/O intensive operations, as in high volumes of reads and writes without really being high data transfer volumes, which is the case for a lot of database operations. A lot of the tests performed in the article really had a focus of large-volume data retrieval, and that's like using the haft of a katana to hammer in a nail.
Think about web bulletin boards like PHP-Nuke, Slashcode, phpBB, any active dynamic website that is constantly accessing a database to load user preferences, banner ads, static images. Forum posting, article retrieval, content searching, etc. An applicable consumer example would be putting your web browser's cache on the i-RAM, or your mail or news reader's data files, or dumping a copy of your entire documents folder to it, then using Windows' search function to dig through them all for all occurrences of "the". Throw a squid cache on it. Put your InnoDB transaction log on it. Hell, for that matter, slot a handful of these and use them as InnoDB raw partitions for your data.
The kinds of tests you need to perform to make an I-Ram shine would be high volumes of simultaneous searches across the entire volume, the kind of act that would make a regular disk drive grind to a screaming halt in a fit of schizophrenic head twitching. This isn't video editing, OS booting (with exceptions), game loading, or most of the scenarios commented on above. It's still a SATA drive. Your bulk data isn't going to transfer any faster, but you *can* find it quicker and open, update, and close your files faster. Leverage *those* strengths and stop thinking it's a RocketDrive.
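As a rough sketch of the kind of random small-file workload that would exercise those strengths, assuming the device under test is mounted at a placeholder path; run it once against the i-RAM volume and once against a normal disk to compare:

import os, random, time

TEST_DIR = "/mnt/iram"      # placeholder: wherever the drive under test is mounted
FILE_COUNT = 10000
FILE_SIZE = 4096            # lots of tiny objects, like forum posts or cache entries

def populate():
    for i in range(FILE_COUNT):
        with open(os.path.join(TEST_DIR, f"obj{i}.dat"), "wb") as f:
            f.write(os.urandom(FILE_SIZE))

def random_read_test(reads=50000):
    start = time.time()
    for _ in range(reads):
        name = os.path.join(TEST_DIR, f"obj{random.randrange(FILE_COUNT)}.dat")
        with open(name, "rb") as f:
            f.read()
    elapsed = time.time() - start
    print(f"{reads} random reads in {elapsed:.2f}s ({reads / elapsed:.0f} ops/sec)")

if __name__ == "__main__":
    populate()
    random_read_test()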
Bensam123 - Tuesday, July 26, 2005 - link
All my concerns on this product were pretty much addressed:
-SATA2
-5.25" Bay drive instead of PCI slot
-Using a 4pin Molex connector or SATA power connector instead
-PCI-E instead of SATA (drivers are made everyday)
A few comments I have on this product that weren't mentioned. Everyone talked about putting these into a RAID 0 array to improve size, but no one mentioned that it could very well double performance. I don't know what's causing the current bottlenecks with these cards besides the SATA interface, but that just doesn't seem right. Anand needs to run benchmarks like SiSoft's file system benchmark or HD Tach to narrow it down. Read/write, sequential and random should all be almost instantaneous, limited only by the bandwidth of SATA and the bridge it is attached to. This card could very well be limited by the chipset they tried it on (southbridge/northbridge interconnect). It might be even faster on a chipset that lacks a southbridge and only has a northbridge, such as the nForce4.
Given the nature of this product I don't know why motherboard manufacturers just don't add this right onto a board, or make a special adapter for it you can buy (with a better interface). I could see a lot more use in something like this if the DIMMs were attached right to my board and straight to my northbridge.
What Gigabyte should've done (all companies with a bright idea should do this) is just give this to review sites such as AnandTech and others to see what feedback emerges before they try to market something like this. I guess Gigabyte is sort of doing this by only producing 1,000, but that's still 1,000 more than they need to. If my guess is correct, the second revision of this product should follow quite shortly after this one hits the market.
As was mentioned the price is a killer (I would rather get a SCSI320 controller and a 15,000 RPM Cheetah).
nullpointerus - Tuesday, July 26, 2005 - link
The bandwidth, which could have really blown SATA drives out of the water in certain tasks, is obviously crippled by its attachment to SATA. Yet if i-RAM was running at full PCI Express speed, then I should think opening the specs for the memory controller would quickly lead to open source drivers. The storage is, after all, cheap DDR sticks.
Sure, these drivers might be written for Linux or BSD instead of Windows, but surely porting GPL'd drivers to Windows would be easy for a company which can open the specs? nVidia and ATI have proprietary drivers because they claim it would be suicide for them to open up their proprietary chip interfaces. But i-RAM?
nullpointerus - Tuesday, July 26, 2005 - link
I thought that compilation would make a good application for this. Source code, intermediate, and output files take up less than 4 GB. The large amount of small text files involved should allow the i-RAM's random access performance advantage to really shine. Add to that the fact that long compiles can take several hours - or days if you are building Gentoo, for example - and the difference should be quite noticeable. Yet there don't seem to be any compiler tests in this article. Maybe they simply aren't I/O limited?
abzzeus - Tuesday, July 26, 2005 - link
http://www.cenatek.com/store/category.cfm?Category... This is a 4GB PCI drive @ $3000 (yes, three thousand), but it is a native drive with direct access to the PCI bus and thus can sustain 133MB/s.
What I'd like to see is a version that fits in a 5.25" drive slot, with 12+ slots for RAM, using a standard connector for power and SATA II or SCSI (SCA?).
I can see several advantages for this product IF you think about it
Webcache server (hold the cache)
Temporary files (great for those programs that write temp files like crazy)
Swap space on a database server (look up PAE, SQL Server and 36-bit addressing - 32-bit Windows can address up to 8GB RAM IF the O/S and the app are written for it (been there :( )
Swap space for a badly behaved app - there are apps ported from *nix to Windows that tell the OS they have pageable RAM, which the server then dumps to disk (4 million page faults in 2 hours!), only for the app to ask for it again
Log files - DB servers write out transaction logs once per transaction; this needs a drive that is FAST
Having more than one of these in a system (power system) means that you can separate out the I/O onto separate physical drives, or even better separate controllers, or best of all separate PCI buses (servers - really big servers - can have three PCI buses). For a server this means (Unit means a logical disc made from RAID arrays, separated out as much as possible by controller and PCI bus):
Unit 1 - OS and Apps Binaries
Unit 2 - Paging file
Unit 3 - Logs
Unit 4 - Temp
Unit 5 - Data
Maximum separation equals maximum I/O.
Klober - Tuesday, July 26, 2005 - link
First off, another good article Anand. Now, on to my point... I'm wondering about World of Warcraft. After the first article where the info debuted there was a lot of talk in the comments section, and one of the subjects was WoW. It wouldn't have been possible to install WoW to the i-RAM because it's too big (~4.6GB on my machine). However, once AnandTech receives another i-RAM to test with, either in JBOD or RAID-0, I would like to hear at least a subjective opinion on how WoW runs in large battles and such. I know my brother's machine gets stuttery when there's a big PvP battle, and through my troubleshooting I've gathered that it's a hard drive speed issue. If any of the AnandTech team has a high level character on their account and likes PvP, please post something on performance in WoW.
Thanks!
JarredWalton - Tuesday, July 26, 2005 - link
I can't see having the i-RAM as being more beneficial to any game than simply adding more RAM to the system. If you're going to have 4x1GB DIMMs installed on the i-RAM, why not just put them into the system itself instead? As for WoW, even if the installed size is 4.6 GB, I doubt the game actually goes much above 1GB of memory use - very few applications do. If you have 2GB or more of RAM, do you still get stuttering issues in WoW? If so, there's a reasonable chance that it's simply GPU power that's lacking rather than RAM - or perhaps GPU RAM would help?(Note: I'm not a WoW player, so I'm just shooting from the hip.)
EODetroit - Wednesday, July 27, 2005 - link
There are at least 3 separate data files in the WoW installation that are 1GB in size each, and a bunch of smaller but still over-100MB files as well. All told, as he said, it's about 4.6GB, and it's more than 4GB in that one folder alone. So yeah, the game would go over 1GB in memory use if it was written well enough. I play WoW a lot, and loading into highly populated areas sucks. Your hard drive thrashes and you have no control of your character until everything is loaded. I'm assuming it's busy loading the textures of the equipment that all the player characters around you are wearing.
This I-Ram thing might help out a lot, seeing as consumer motherboards don't support over 4GB of memory and the data files alone for WoW totals over 4GB. The problem again is that you'd need to raid two of the I-Ram devices together to get that much storage, and we don't even know if it would result in a tangible benefit.
As others have mentioned, for all fast action games, it isn't the load times that Anand should be focusing on... it's the in-game stutters when something suddenly has to get loaded from disk. Those are killer, and even if the initial game load times only decrease by 5%, if the stutters are eliminated, this might just be worth the cash - more than a new $600 video card, certainly.
JarredWalton - Thursday, July 28, 2005 - link
My point wasn't that WoW doesn't ever exceed 1GB, but that it doesn't exceed 2GB of RAM use. Actually, we should have probably mentioned that point as well: no single application under 32-bit Windows (not counting PAE/NUMA setups) can use more than 2GB of RAM. The 32-bit memory space is partitioned into 2GB for applications and 2GB for the OS, if I have my information right. Basically, you need to try out WoW with a 2GB setup before you can say that i-RAM would or wouldn't be able to help.
Going back to the earlier statements, though, i-RAM is still nowhere near as fast as system RAM. The delay of PC3200 is around 140ns worst case, and bandwidth is still 3.2 GBps or 6.4 GBps dual-channel. i-RAM seems to be somewhere in the microseconds range for access times, and it's limited to 150 MBps bandwidth. If you can add RAM to your PC, that would be the first step to improving performance.
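As a rough back-of-the-envelope comparison using the peak figures above (nominal rates only, so treat the results as best-case bounds):

# Time to stream a 1GB working set at each interface's nominal peak rate
working_set_mb = 1024
rates_mbps = {
    "i-RAM over SATA150": 150,       # figure quoted above
    "PC3200 single channel": 3200,   # 3.2 GB/s
    "PC3200 dual channel": 6400,     # 6.4 GB/s
}
for name, rate in rates_mbps.items():
    print(f"{name}: {working_set_mb / rate:.2f} s")
# roughly 6.8 s vs 0.32 s vs 0.16 s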
phonon - Wednesday, July 27, 2005 - link
If you have Windows XP Pro, you should be able to make a volume that includes the i-RAM and a regular disk. Then you can make hard links on the i-RAM that point to the additional 600 megabytes or so on the regular disk that won't fit on the i-RAM. I've never done anything like this myself, but I think it should work. Any comments?
johnsonx - Tuesday, July 26, 2005 - link
Someone's probably said all this, but I don't feel like reading all 80-odd comments. First, this strikes me more as a proof-of-concept effort. Sure, they'll sell you the engineering samples, for $150. Rev 2 will be the real product.
Second, I did see several people suggest that interfacing the board to the SATA interface rather than directly to the PCI bus makes it slower. Why? Standard 32-bit 33Mhz PCI only has 133MB/s of bandwidth, and that's often shared by other devices as well. SATA has 150MB/s of bandwidth, and in most cases is connected to the system by at least a 66Mhz PCI link, or more often some other high-speed chipset link.
Interfacing to SATA also means that Gigabyte doesn't have to write drivers for 32- and 64-bit flavors of Windows and various Linux distributions, Mac, and more obscure but definitely present OSes like BSD, NetWare and Solaris (/me wonders about putting the boot partition and SYS volume of a NetWare server on an i-RAM... probably no real benefit, but you never know).
Third, I might imagine that Rev 2 will support SATA II with 300MB/s transfer speeds, ECC, and perhaps 8 DDR slots.
rbabiak - Tuesday, July 26, 2005 - link
Would have been nice to see some info on what it performed like as the temp folder for Windows: all that internet browser cache and other stuff that Windows sticks off in the temp folder while it does stuff. This is data that you don't usually mind if it just disappears every once in a while :)
UrQuan3 - Tuesday, July 26, 2005 - link
I remember five or six years ago there were products that would plug into a PCI slot and use PC133 RAM to do this same job. They would show up as a hard drive controller and Windows would use default drivers unless you needed something different. This was when programs didn't expect you to have enough RAM to keep a scratch file in RAM, so they'd write out files after every action. A PCI card with a gig of RAM for accepting these scratch files made a huge difference. There's just less need now.
Then there's the other problem. SATA may be 150MB/s, but the PCI bus it's attached to is only 133MB/s. This certainly explains why everything runs at DDR200. If they'd made a PCI-X card there might be a bigger improvement. The bright side is that they used an FPGA. If next week they decide to implement SATA2, they can issue an update and everyone can upgrade their cards. Companies like Cisco do this several times a year in telecom products.
EODetroit - Tuesday, July 26, 2005 - link
#82: You can buy these still. Check out this eBay auction: http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&...
I'd hope and pray this thing is a lot faster than the iRam for all the extra cost. But the fact that it sits in a PCI card slot (I'm talking about the QikDrive linked above, not the iRam) makes me question that.
- Tuesday, July 26, 2005 - link
I was really surprised at how little it helped as a page file. Myself I sometimes encounter periods of slowdown due to paging that can last for several minutes where nothing can be done. I don't know if there's a common name for this but I'll call it the "page file wall". I don't know exactly how you would recreate such a tragedy in the lab. Too many apps open with too little memory obviously. But less obviously, it seems that during a period of overnight inactivity (with apps left open) windows will page a lot of stuff out to disk and you can experience the page file wall the next morning. It'd be interesting if Anand could devise a consistent "page file wall" benchmark.
As the article and many posts above suggest doubling my RAM would probably end my problems.
I still think this product (or revision 2 or 3) could bridge an obvious gap with PC's: SLOW harddisks and EXPENSIVE ram. When you run out of ram it can be like hitting a wall. It can be like crossing the country, but you go half by jet and the other half on foot. The gap should be filled with something cheaper than modern DDR and faster than harddisks. (This product is barely either.) I'd like to see a PC with 1 GB normal ram and 2GB of cheap-o 1/8 speed auxiliary ram. The OS could use this slower ram for paging with priority over paging to the harddisk. Not just for enthusiasts, but for regular beige PC's. Owners would then have another upgrade option with a better cost/benefit ratio depending on their needs.
I was waiting for a performance review of this thing and I'm so glad trusty Anandtech provided.
BTW: Long time reader (4+years), 1st time poster.
- Tuesday, July 26, 2005 - link
I was in my local computer shop and the guy working there pointed at a stack of hardware and said some guy just dropped $8000 on an Intel 955X (or whatever) system that included around 16 gigs of RAM disks. I asked if it was based on DDR400 and he said no, it was in fact DDR2-533, I think. A quick search on the internet found nothing about DDR2 RAM drives, and it defies logic to me anyway, since I would think that DDR400 would be faster due to latency issues, etc. Has anyone heard anything like this? Also, the guy at the store told me that it boots into Windows XP in 4 seconds. It sounds like a tall tale, but I don't see any reason why he would be making it up, as they are pretty reputable.
davidlang - Tuesday, July 26, 2005 - link
Where you really suffer is on writes. Any time you need to write something before you can continue, the latency becomes critical. Database writes (and logging) are a perfect example of this.
Under *nix the journal of a journaling filesystem is performance critical (although it's usually a sequential write, so it is about as good as you can get).
Database engines that have good crash recovery (MySQL is not that good at this, but Postgres or Oracle are) need to make sure that their log gets to safe storage media before they can consider the write completed and tell the caller that it's done.
Even for an Apache web server, with normal logging Apache will not return the page until the log has been written.
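A minimal sketch of how that commit latency could be measured directly, assuming the target path points at a file on whatever device is being tested:

import os, time

TARGET = "/mnt/iram/commit_test.log"   # placeholder: a file on the device under test
WRITES = 1000
RECORD = b"x" * 512                    # one small log record per commit

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT, 0o644)
start = time.time()
for _ in range(WRITES):
    os.write(fd, RECORD)
    os.fsync(fd)                       # block until the data is on stable storage
os.close(fd)

elapsed = time.time() - start
print(f"average commit latency: {elapsed / WRITES * 1000:.3f} ms")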
somu - Tuesday, July 26, 2005 - link
As a lot of people have posted here, it would make sense to use this as a cache for our hard drives, by making it possible to plug the hard drive into the i-RAM and the i-RAM into the motherboard. This would overcome the 4GB limitation, and we probably wouldn't need the full 4GB for cache; we could use 1GB or 2GB. But to see more increase in performance they will need to move it to SATA2 and have programmers write into their code to precache data to take full advantage of the i-RAM.
AngleRider - Tuesday, July 26, 2005 - link
Well it seems that modern hard drives are getting a lot faster, and solid state doesn't seem to help as much as it would have, say, 2 or 4 years ago when we were running crusty low-density HDs...
However, I am also slightly disappointed in the design...
Why put main system memory in a drive and then limit it to SATA I (not SATA2)?
I thought the whole point of a RAM drive was to provide maximum I/O performance...??
Second, not allowing 2GB sticks doesn't make sense to me... I mean 4GB is really small.
Maybe they should have thought this:
"Gee, let's try to offer more capacity - like, golly bunny, currently available 2gb ram modules..."
Even so, if this can do 591% higher I/O performance than a Raptor in iPEAK Business Winstone, then I'm sure there are ways to utilize this in computing tasks...
Also, if you put the OS on it you won't ever need to defrag...
Nice, but expensive for now... expensive doesn't mean it's crap, just weakly spec'd to my mind for now...
Why do something like this and then water it down?
D
UNCjigga - Tuesday, July 26, 2005 - link
I think the disappointing benchmarks ought to say something about current OSes' suitability to the i-RAM, and not the i-RAM's capabilities. I really think this is an idea ahead of its time. Windows XP isn't tuned for solid-state storage, the FPGA chip on the i-RAM isn't the best solution, and the SATA interface itself is a bottleneck. If Windows Vista and future BIOSes had support for PCIe storage, imagine a version of i-RAM that had a straight PCIe interface supporting the full 1.6GB/s or more depending on the type of memory you put on, and 8GB or more memory thanks to 64-bit addressing.
Windows Vista will already have support for hybrid drives (NAND+platter), so the caching and paging routines will be optimized for solid-state storage. I actually think i-RAM might be better than hybrid drives because 1) you can use existing drives with it, 2) i-RAM is expandable (up to a limit), 3) DDR is faster than NAND.
shaw - Tuesday, July 26, 2005 - link
I could see SATA II removing the bottleneck, but still, 4GB of data? Gigabyte is smarter than this... it's just not going to fly. Though it is a pretty good start.
The next logical step is probably finding a way to get a standard hard drive to use something like this as a memory buffer (7200rpm with 1GB of DDR200 cache), and then maybe it would actually be worth it.
JNo - Tuesday, July 26, 2005 - link
I was disappointed that nothing was mentioned of the practicalities of moving Windows or a game onto this thing. Is there any software that would transfer whatever data is on this thing (including functioning operating systems) to a normal drive at regular intervals? And keep them functioning? If not, what's the point?! Each time you have to install Windows or a game to this thing (after power failures, or just for the sake of having something different on it), you have to install all the updates/personal tweaks/mods/saved games/configurations etc., which would take SO MUCH MORE TIME than the extra few seconds you save from faster boot/game load times... why does AnandTech not take these things into consideration?! To paraphrase another poster: WHOOPEE-F*CKING-DO
araczynski - Tuesday, July 26, 2005 - link
Disappointing gaming benefits, too small a size, and they should've used a custom controller rather than the connect-the-dots one.
I'll be waiting to see what version 3 brings.
EODetroit - Tuesday, July 26, 2005 - link
The $150 thing is a killer. But if they can only pump out 1000 of them, it makes business sense to have the price high. This is just like AMD having high X2 prices because they can't possibly make enough quantity to fill orders if the price was lower... same exact thing.
$90 per 1GB stick of RAM is high; I'm sure people can shop around and find it cheaper.
As for RAIDing two of these, Anand said he only actually had one of them, but was trying to get a second. So maybe more on that later. I think that even if Raid 0 doesn't work for some reason, JBOD would work.
I'm curious what the bottleneck in computers nowadays really is. I think Anand should get an nForce Pro with 8GB of RAM running 64-bit XP, set up the largest RAM disk (a real software-type RAM disk) you can, and see how that affects performance. If performance shows the same mediocre gains that this device showed, then that means a new SATA2 version wouldn't improve things either. If that test showed there were large gains out there to be had, then yeah, there's a future here. I would do it myself, but I don't have access to that hardware hehe.
pieq3dot14 - Tuesday, July 26, 2005 - link
I'd like to see how this would change the overall latency of a system. I have a pretty nice home studio, and I can see using this as a boot drive, and then recording off to a RAID array. With all the random accesses coming from the solid state drive, and only sequential going to the RAID, I'd think the latencies would drop significantly. Could be pretty handy, even extending the life of older systems.
bwall04 - Tuesday, July 26, 2005 - link
Anand, first of all, great review; it's nice to see some numbers on this. Would it be possible to bench a few tests again with 2GB of system memory? I can vouch that 2GB makes a noticeable difference when loading any game. I realize that you were going for an "enthusiast" level machine, but games like HL2, Doom 3, and Battlefield 2 have started a push with the high end to upgrade to either 2x1GB or 4x512MB.
racolvin - Tuesday, July 26, 2005 - link
Could they perhaps have gone with a full-size card and then oriented the DIMM slots perpendicular to the mobo? I had something like that ages ago in an Amiga that worked well from a size perspective. It might get them to 8GB :)
somu - Tuesday, July 26, 2005 - link
The cost of this unit was increased 3 times. Then it went from SATA2 to SATA.
Real-life performance is not as good as I expected. When I first heard about it, I was excited to see them working on removing the bottleneck, but going from a 13 second load time to 10 seconds doesn't warrant the cost of the $150 card and 4GB of RAM.
shaw - Tuesday, July 26, 2005 - link
#1 4GB space = poop
#2 Still bottlenecked by the SATA bus
I just hope this is the beginning of a bright future, but for now I'm not impressed one bit.
IvanAndreevich - Tuesday, July 26, 2005 - link
How about a RAID 0 test with 2 of these cards :)
JNo - Tuesday, July 26, 2005 - link
How about Read the Frickin Article?
audiophi1e - Tuesday, July 26, 2005 - link
I think the more useful implementation is to have the RAM pre-installed onto the drive. And I'm not talking RAM sticks. I'm talking about these guys at Gigabyte contacting Samsung, Micron, or Crucial to directly supply the chips and solder them onto 5.25" plates. I think in the space of a 5.25" bay you can fit 2 of these said plates. It won't be hard to think that they'd be able to fit 15GB of RAM in a 5.25" drive's space.
Then with the remaining space, mount a MUCH larger battery. Have the battery be able to last DAYS, not hours. This will set people a little more at ease. It will sure make me feel better. (And no, this 5.25" ramdrive will not be using a molex connector. Simply put in a dummy PCI card to allow the 5.25" drive to draw power from it.)
The fatal flaw in their product design is that most people simply won't have that many RAM sticks laying around to make this thing useful. Why not supply the RAM, and in the process increase the possible size from 4GB, to something much more useful. If we already know that only 'power users' with little budget restraints will buy this, then just supply it the way we know they want it: Big.
Zebo - Tuesday, July 26, 2005 - link
Yeah, one really needs about 15-20GB to make this a livable reality. And that would cost about 3K, and about 4K if they did it right, i.e. Ultra SCSI or even a PCIe interface.
Sindar - Tuesday, July 26, 2005 - link
If they got really serious: tuned it up with on-PCB DDR3, made it something like a ZIF socket thing, gave it a direct bus to the chip, changed the memory controller to let it throttle wide open, and wrote drivers and OSes to just use it. It might be like a really fast BIOS setup for the OS. At first it could be an extra, but as costs came down maybe it would be integrated into the motherboard. Hmm, nearly instant boot up... it's a dream, even if it's only mine!
simpletech - Tuesday, July 26, 2005 - link
I think another possible use (besides certain kinds of servers, like mail servers) is for video capture. The size is a bit small, but if you were capturing segments of footage, it might work. And the price could be reasonable.
BikeDude - Tuesday, July 26, 2005 - link
"but 32-bit Windows can't use more than 4GB of RAM, including the swap file size."
First of all... "Swap file" is a misnomer. We talked about the "swap file" back in the Windows 3.1 days, when the OS would swap a process' entire memory space to the *swap* file.
These days the OS will read/write selected pages of a process' memory from/to the cache manager (who may or may not elect to use the disk to get to the physical pagefile). *Paging*, not "swapping". Executables and libraries are memory mapped and thus start their lives with all pages firmly on disk (so a big executable won't necessarily load slow, but many small DLLs OTOH just might).
I don't have Windows XP in front of me, but my 32-bit Windows 2003 Standard ed. with 4GB memory and 1GB pagefile certainly doesn't seem affected by the limitation you mention. Enterprise edition can address even more physical memory... Each process is still limited to a 2GB virtual address space though. (32-bit processes marked capable of such will gain a 4GB virtual address space under 64-bit Windows)
I realise that XPSP2, despite PAE, is limited to 4GB physical memory (http://blogs.msdn.com/carmencr/archive/2004/08/06/...), but the pagefile as well? Nah, sounds iffy.
JarredWalton - Tuesday, July 26, 2005 - link
Without PAE (or something similar), 32-bit OSes are indeed limited to 4GB of RAM. This is what is being referred to, as PAE is limited to Intel and I don't believe it's available on non-Server versions of Windows. (Correct me if I'm wrong, but PAE is pretty much only on Xeons, right?)
You're right that it's paging instead of swapping now, but there's really not much difference between the two. Basically, you put data onto the HDD in order to free up physical RAM, on the assumption that the least recently used data that was moved to the HDD won't be accessed again for a while.
JarredWalton - Tuesday, July 26, 2005 - link
Anyway, I've modified the comment to reflect the original intent. If you're running PAE and Server, it's a whole different ball game for high memory systems.
Penth - Tuesday, July 26, 2005 - link
Wow, my friend and I talked about the possibilities for these things several times. But at 3x the initial price and not the performance increase I would have expected, the techie in me is disappointed. My wallet is happy though.
StanleyBuchanan - Tuesday, July 26, 2005 - link
I wonder what the issue is with RAID that Anand comments on.... seems odd that it would behave differently than a HD in this respect and cause problems...I would love to have 12gb or more... which is enough for Windows XP, a productivity suite, and a modern game... anything more could be run from NAS
Zan Lynx - Sunday, July 31, 2005 - link
Probably something to do with the PCI bus power. Perhaps two of these cards take more juice than the bus expects to provide while on standby.
phaxmohdem - Monday, July 25, 2005 - link
I saw someone else posting as well, but I would very much like to see some database performance numbers from this device, as well as perhaps a web-serving benchmark.
xTYBALTx - Monday, July 25, 2005 - link
How about some FPS benchies?
GTMan - Monday, July 25, 2005 - link
I laughed when I saw that line :) A very interesting device and I look forward to where this goes in the future. Your "Final Words" could use a bit of brevity.
Icehawk - Monday, July 25, 2005 - link
Huh, if this was at the $50 price point it would be a bit more interesting. I didn't like the pagefile test - it made no sense at all. Of course going from, say, 4GB RAM to 2GB + 2GB i-RAM isn't going to improve the system... You needed to test what JUST changing the pagefile from HD to i-RAM does. What about a typical 1GB RAM setup that most of us use? I still hit the pagefile on occasion and I do have ~1GB of old DDR I could use. Load times? No, I'd like to know if it smooths out gameplay. I know Doom 3 hiccups on my machine due to disk accesses.
Otherwise this doesn't look like it makes a lot of sense in its current incarnation.
lewis71980 - Monday, July 25, 2005 - link
No mention of using JBOD instead of RAID 0. That way, with 4 PCI slots used up, you could get 16GB.
Maybe that would be enough space to do some proper server / databases.
Use a pair of normal 80GB IDE HDDs for OS boot in RAID 1, with file backup from the i-RAM card.
Braxus - Monday, July 25, 2005 - link
I know the article says it doesn't support ECC memory, but will it still take it and run it in non-ECC mode? Most mobos I believe can at least do this. What about registered memory? Got a couple of sticks of 1GB DDR266 RECC memory I'd like to use!
RMSistight - Monday, July 25, 2005 - link
I definitely won't purchase this product until they implement SATA-II at 300MB/s. Why should I shell out $150 for SATA150 when my DFI LanParty Ultra-D can do 300?
I even asked one of the product managers at the AMD tech tour. I don't see why they wouldn't do it, since SATA-II is backwards compatible with SATA-I.
Hacp - Monday, July 25, 2005 - link
BTW I hate this new layout. I have to click to read the next comment. Is there any way to fix this? Also, the forums didn't get a visual makeover.
LeftSide - Monday, July 25, 2005 - link
I wonder if the Athlon X2 would have shown a difference in the multitasking tests, instead of using an FX-57?
Nanobaud - Monday, July 25, 2005 - link
If more benches are to be done, I would put in a suggestion to test some compile times. Then I guess you should compare it to boosting your system memory and installing a RAM drive, but this could be more convenient if you have those old 256/512 MB memory sticks lying around.
nBd
Sunbird - Monday, July 25, 2005 - link
I want to know how long it will take the i-RAM to drain a standard UPS if the PC is off but connected to said UPS?
jkostans - Tuesday, July 26, 2005 - link
A while. You would have to find how much power is dissipated by the i-RAM, then use the capacity of your UPS to get an exact number. I would go as far as to say maybe up to a month if you have a good UPS.
Zebo - Monday, July 25, 2005 - link
$600 for a 4GB (read: useless) drive that maybe is not much faster than two 73GB drives in RAID 0 for half the price? Uh huh. If they sell 3000 I'll be shocked.
Aganack1 - Monday, July 25, 2005 - link
I thought they said that they were only going to make 1000. Enough for the crazies who have money to burn...
P.S. if any of you crazies are reading this, I could burn some of that money for you... just let me know.
Houdani - Monday, July 25, 2005 - link
Thanks for running through the multiple roles for which the i-RAM might be useful. I'm rather surprised it wasn't MORE useful in the benches. I'd be interested in learning (i.e. slacking back and reading the results of someone else's research) why the i-RAM is still as large a bottleneck as it is. Yes it's faster than the HD, but why isn't it much, much faster? Are we seeing OS inefficiency or something else altogether?
In the end, though, it doesn't fit my needs particularly well, so I'll pass this round. Maybe a future version will be more appealing in terms of cost, speed, size.
Sunbird - Monday, July 25, 2005 - link
Maybe the SATA interface isn't fast enough?
pio!pio! - Monday, July 25, 2005 - link
I'm constantly shuffling 1-3GB MPEG-2 files around... this would be great
Ged - Monday, July 25, 2005 - link
Would it be possible for an NVIDIA or ATI graphics card that used TurboCache or HyperMemory to make use of the i-RAM? That might be interesting.
Anton74 - Monday, July 25, 2005 - link
No, absolutely not. Even if it were, the SATA interface is *way* too slow to be of use for something like that.
And even if that were not a factor, why spend that kind of money on the i-RAM where the same amount would buy a *much* superior video card with its own dedicated memory?
Anton
kleinwl - Monday, July 25, 2005 - link
I think that this would be very helpful as a page file for workstations. Older workstations may be maxed out with 4GB and Windows 2000 (which the company does not want to move over to XP-64) and still need additional RAM for CAD/CFD/etc. This would be an easy upgrade with a reasonable amount of performance increase.
sandorski - Monday, July 25, 2005 - link
Was hoping it would offer more, especially as a pagefile. Any plans to make a PCI-e version (IIRC PCI-e has a ton more bandwidth than SATA)? That would likely make this a must-have. As it stands now I'd only use it for the silence in an HT setup.
Gatak - Monday, July 25, 2005 - link
Using PCI/PCI-e for transfers would require OS drivers, which wouldn't be available for all OSes.
sprockkets - Tuesday, July 26, 2005 - link
Keep in mind that for many years the IDE/SATA controllers have NOT been on the PCI bus of the southbridge, so PCI is not a limitation.
crazySOB297 - Monday, July 25, 2005 - link
I'm surprised they didn't RAID a few of them... I think you could get some huge performance.
Googer - Tuesday, July 26, 2005 - link
Not to mention it is a way to also get around the 4GB size limitation.
Hacp - Monday, July 25, 2005 - link
Dude, the article said straight out that SATA150 was the only format supported. Read the entire article.
Guspaz - Monday, July 25, 2005 - link
I too am disappointed that the article lacked any mention of SATA2, which is twice as fast as SATA (300MB/s vs 150MB/s). Considering many motherboards already on the market support SATA2, and the 300MB/s transfer rate that goes with it, it is a bit of an oversight that the article doesn't even MENTION if the card supports SATA2 or not. Nor do they mention what they think would happen with SATA2, or if Gigabyte is likely to produce a SATA2 version. It's a weak spot in this article, I think, considering how central the bandwidth of SATA is to the performance of the i-RAM.
snorbert - Tuesday, July 26, 2005 - link
33MHz PCI only gets you 133 MB/sec theoretical, and more like 110 MB/sec in the real world. The i-RAM with SATA 1 can completely saturate a PCI bus. SATA2 would cost more to implement, and give you no speed increase at all on a 33MHz bus. If you build the card for higher-end PCI specs (e.g. 66MHz, 64 bit, 66MHz/64bit, PCI-X) then you automatically exclude most PC enthusiasts (unless they like buying server boards for their game boxes).
If they end up doing a PCI Express version, then there would be some reason to support SATA2.
This board is not a replacement for a hard drive. It would be incredibly useful as a transaction log though. Reliable (i.e. won't get lost if the machine crashes) write-behind caching for RAID 5 drives will give you a huge boost to write speeds. And the controller cards that support battery-backed write behind caching cost a lot more money than an i-RAM.
-Jason
sprockkets - Tuesday, July 26, 2005 - link
Also, to reply here: keep in mind that for many years the IDE/SATA controllers have NOT been on the PCI bus of the southbridge, so PCI is not a limitation.
snorbert - Tuesday, July 26, 2005 - link
Actually, scratch my comment - I had not had enough coffee when I wrote it. I forgot that the PCI connector is doing essentially squat except providing power to this device. Of course you could have a SATA2 controller on a faster bus talking to this thing. But a SATA2 version would probably cost more (because it would need a faster FPGA and newer SATA transceivers).
Sorry folks,
Jason the doofus
Anton74 - Monday, July 25, 2005 - link
You did miss that reference; on page 2 it says "The i-RAM currently implements the SATA150 spec, giving it a maximum transfer rate of 150MB/s".Given the 1.6GB/s of the RAM, it seems completely silly not to provide a 300MB/s SATA interface instead, especially considering that the whole contraption including RAM will cost as much as 2 or more decent hard drives.
Anton
ryanv12 - Monday, July 25, 2005 - link
The controller on the card is not SATA-II...it can do a max of 1.6GB/s...not exactly SATA-II speeds there...Anton74 - Monday, July 25, 2005 - link
1.6GB/s is actually more than 5 times 300MB/s, the maximum supported by SATA-II. So 300MB/s could easily be fully utilized, and I don't understand why they didn't support that.Anton
Hacp - Monday, July 25, 2005 - link
It could be useful for pagefile if you have a couple of old 128-256 DDR 333 or older sticks lying around, especially if your ram slots are filled with 4x 512. This can defenetly improve performance over the hard drive pagefiling, which is horrible. I wish Gigabyte would have done 8 sticks instead of 4. The benefit of 8 sticks is that it will allow users to truley use their old sticks of ram 128,256, etc instead of just 1GB sticks. Right now, the price is too high for the actual I-ram module, and also the price of ddr ram is too much. If Gigabye does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x Irams with cheap 512 and 256 sticks of old ram running in a raid onfiguration would be an good solution to the hard drive bottleneck, especially if people these days are willing to pay a premium for the Raptors.Also, nice article Anand!
zhena - Monday, July 25, 2005 - link
mattsaccount you would need 3 cards to run raid 5.Here is one thing that is not mentioned on anandtech in most of the storage reviews, and that is responsiveness (as i like to call it.) Back early in the day when people were starting to use raid 0, most benchmarks showed little improvement in overall system performance, even now the difference between a WD raptor and a 7200rpm drive is little in terms of overall system performance. However most benchmarks don’t reflect how responsive your computer is, it's very hard to put a number on that. When I setup raid 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't increase much. Something with the i-ram card, using it probably feels a lot snappier than using any hard drive, which is very important.
ss284 - Monday, July 25, 2005 - link
RAID 0 has a higher access time than no RAID. Unless you were running highly disk-intensive applications, the snappiness would be attributed to RAM, not the hard drive.
-Steve
zhena - Monday, July 25, 2005 - link
Not at all, Steve - the access time goes down by .5ms at most (don't take my word for it, I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data, you get it twice as fast as from a regular hard drive (assuming around 128K RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the feeling of responsiveness comes from.
JarredWalton - Monday, July 25, 2005 - link
RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now have to both seek to the same area - i.e. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, i.e. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate.
RAID 1 can improve access times in theory (if the controller supports it) because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 a rotation to 1/4 a rotation (assuming the heads start at the same distance from the desired track). The reality is that RAID really doesn't help much other than for redundancy and/or heavy server loads with a high-end controller.
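A small sketch of the point Jarred is making (the stripe size, block size, and block numbers are arbitrary illustrations, not anything measured in the article): in RAID 0 each logical block maps to exactly one drive, so a small random read still pays one drive's full seek, while a long sequential run alternates stripes across both drives and can stream from them in parallel.

```python
STRIPE_BLOCKS = 256      # e.g. 128KB stripes made of 512-byte blocks
NUM_DRIVES = 2

def locate(logical_block):
    """Map a logical block of a 2-drive RAID 0 array to (drive, physical block)."""
    stripe = logical_block // STRIPE_BLOCKS
    offset = logical_block % STRIPE_BLOCKS
    drive = stripe % NUM_DRIVES
    physical = (stripe // NUM_DRIVES) * STRIPE_BLOCKS + offset
    return drive, physical

# The large read in Jarred's example (blocks 15230-15560) spans stripes on
# both drives, so both can deliver their sequential transfer rate at once...
print("drives touched by the big read:",
      {locate(b)[0] for b in range(15230, 15561)})      # {0, 1}

# ...but a single small random read lands on exactly one drive, which still
# pays the same seek time and rotational latency as a lone disk would.
print("a lone 512-byte read lands on drive", locate(40000)[0])
```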
Gatak - Monday, July 25, 2005 - link
Um, yes. This is what I meant - mirroring (RAID 1, not RAID 0) would improve access times, as both disks could access different data independently (if the controller was smart). Sorry about the confusion.
ss284 - Tuesday, July 26, 2005 - link
I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is spanned across both drives, meaning the seek would be no faster than a single drive, and likely a tiny bit slower because of overhead.
Gatak - Monday, July 25, 2005 - link
RAID-0 ought to offer better random read access times as there are two disks that can read independently. Writing would be somewhat slower though as both disks need to be synced.
Gatak - Monday, July 25, 2005 - link
I'd like to see some server benchmarks with this. For example:
* mail server (especially servers using maildir, which generate lots and lots of files)
* web server
* file server
* database server (mysql, for example)
Maybe some other benchmarks :D
mmp121 - Monday, July 25, 2005 - link
He even states that on page 11:
Anand, how about an update with some server / database benchies?
Gigabyte might have something on its hands if it makes the card SATA-II to use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to rev 2.0 of the i-RAM, Gigabyte!
mattsaccount - Monday, July 25, 2005 - link
This thing would still be useful as a pagefile in some circumstances--if all your memory slots were full and/or you had extra memory lying around. This is what I had been planning to do with it (currently have 4x512MB, plus a couple of other smaller-capacity DDR sticks which would be nice to use for Photoshop stuff). But the price is too high. I'll wait till it drops.
Son of a N00b - Monday, July 25, 2005 - link
I would love to get two of them and run them in RAID-5 possibly... that way you also have a backup...
Gatak - Monday, July 25, 2005 - link
You'd need a minimum of 3 cards/disks for RAID-5.
However, using this card as a journaling device for a normal filesystem like ReiserFS or Reiser4 might be very beneficial. It wouldn't require much RAM either.
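A minimal sketch of why a small, fast journal device helps (the mount points are hypothetical, and real filesystems like ReiserFS or ext3 do this inside the kernel, not in user space): the intent record is committed to the fast device before the slow data write, so crash recovery only has to replay a tiny journal.

```python
import json, os

JOURNAL = "/mnt/iram/journal.log"    # hypothetical mount point on the i-RAM
DATA    = "/mnt/bigdisk/data.bin"    # hypothetical file on the mechanical disk

def journaled_write(offset, payload):
    # 1. Commit the intent to the tiny, fast journal device first.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"off": offset, "len": len(payload)}) + "\n")
        j.flush()
        os.fsync(j.fileno())         # cheap: the journal lives on DRAM-speed storage
    # 2. Now perform the real write on the slow mechanical disk.
    with open(DATA, "r+b" if os.path.exists(DATA) else "wb") as d:
        d.seek(offset)
        d.write(payload)
        d.flush()
        os.fsync(d.fileno())         # expensive: seeks and rotational latency
    # 3. The data is durable on disk, so the journal record can be retired.
    open(JOURNAL, "w").close()       # truncated wholesale here for simplicity
```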
ukDave - Monday, July 25, 2005 - link
Extra things that could have been covered were:
Would there be a difference with other SATA cards, such as 3Ware etc. - i.e. would CPU usage make a difference perhaps?
Why not use SATA-IO (SATA-2) instead of the older and slower SATA (re: Gigabyte)?
But otherwise a very informative article, thanks Anand.
ss284 - Monday, July 25, 2005 - link
It would be best to wait for the second version of the card, which will hopefully have a cheaper IC as well as SATA II support. There's no doubt that the RAM can do 3.0Gb/s. Imagine what 2 of these in RAID 0 would be like.
-Steve
SDA - Monday, July 25, 2005 - link
- File copy performance is mostly a moot point, because copying files from disk to disk will go as fast as the slower of the two can, and other applications that typically require disk performance (unarchiving et al) will only see a minimal performance increase due to bottlenecks in other parts of the system (which becomes even less valuable when you consider that you won't be doing a whole lot of unarchiving to a disk that small).
- Gaming benefit would be okay if you could fit more than about one modern game on it.
- Using it as a pagefile is, as Anand noted, pointless.
- It does improve boot times, but it's not a huge difference; how many of us shut down often enough to actually be bothered by a few seconds at boot?
- It does improve app loading times slightly, but if you're opening and closing apps that take a long time to open and close, it's probably because you don't have enough system memory, so buy more memory instead.
So basically: whoopee.
DerekWilson - Wednesday, July 27, 2005 - link
I'm just gonna pick at a single point ... you could install one game to the i-RAM at a time and then archive them on another drive. You get fast zip times on the i-RAM, and a single file transfer to a magnetic disk is faster than multiple small files (moving the archive won't take long). Just unzip the game you want to play to the i-RAM (a rough sketch of this workflow follows the comment) ...
but then ... that kinda defeats the purpose doesn't it ...
I could see this being fun to play with, but I have to agree with Anand -- it needs higher capacity before it is really useful.
Plus, I'd like to see SATA-II :-)
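A rough sketch of that archive-and-restore workflow (the drive letters and paths are made up for the example; this is just shutil doing the zipping, not anything from the article):

```python
import shutil
from pathlib import Path

ARCHIVE_DIR = Path(r"D:\game_archive")   # big, slow magnetic disk (hypothetical)
IRAM_DIR    = Path(r"I:\games")          # the small i-RAM volume (hypothetical)

def archive_game(name):
    """Zip an installed game into one big file on the magnetic disk,
    then free the scarce i-RAM space."""
    shutil.make_archive(str(ARCHIVE_DIR / name), "zip", IRAM_DIR / name)
    shutil.rmtree(IRAM_DIR / name)

def restore_game(name):
    """Unpack the game you want to play back onto the fast i-RAM volume."""
    shutil.unpack_archive(str(ARCHIVE_DIR / f"{name}.zip"), IRAM_DIR / name)

# Typical cycle: restore_game("doom3"), play, then archive_game("doom3")
```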
miketheidiot - Monday, July 25, 2005 - link
I don't really see anyone using this; it costs way too much for too little storage and too little performance benefit, not to mention the risk of data loss. I'll give it a look again when they get some higher-bandwidth flash or something like that. This I can pass on for now.
Sea Shadow - Monday, July 25, 2005 - link
I dunno, I could see the extreme enthusiasts getting these. I mean, after all, if they have the money to buy a system with SLI 7800 GTXs and an FX-57, this would be pocket change.
BoberFett - Monday, July 25, 2005 - link
I'd imagine that in some areas the CPU is still the bottleneck and for others the 150MB/sec limit of SATA may be.
Sea Shadow - Monday, July 25, 2005 - link
I wonder if the OS is the limiting factor. They should run some tests using other OSes *cough* Linux *cough*.
What would be really neat is if they could design an i-RAM device that uses 2 HDD bays, supports 8+ GB of RAM, and runs from a standard Molex.
sprockkets - Tuesday, July 26, 2005 - link
Have you people thought about using Knoppix and copying it to the drive? For that matter, that and stuff like Damn Small Linux can be run totally from system RAM.
Instead of using this for a silent drive, though, you are better off using flash memory drives for that.
ViRGE - Monday, July 25, 2005 - link
Unfortunately, if they did that, it would mean that your computer could never be turned off. As noted in the review, the card is currently still powered even if the machine is "off", due to the fact that when a modern ATX computer is off, it's actually more of a super-standby mode that leaves a few choice items powered on for wake-on events (LAN/modems, and the power button of course). All Gigabyte is doing here is tapping into the 3.3V line on the PCI slot that wake-on power is provided through, which is enough to keep the device powered up even when the system is in its diminished state.
Molex plugs, on the other hand, are completely powered down when the system is "off", so it would be running off of battery power in this case. A lot of us leave our systems on 24/7 anyhow, but I still think they'd have a hard time selling a device that would require your computer to be off for no more than 16 hours at a time.
Gatak - Monday, July 25, 2005 - link
They could use the USB power. On most motherboards you can enable, with a jumper or a BIOS setting, standby power to the USB ports. Often the setting is called "Wake on USB" or "Wake on Keyboard", etc.
reactor - Monday, July 25, 2005 - link
"What would be really neat is if they could design an i-Ram device that uses 2 HDD bays and supported 8+ GB of ram and ran from a standard molex."Was thinking something similar myself as i was reading.
I think once ram modules are 4gb or larger, then this could be very useful. But not until it gets updated with sata2, ddr400 etc. When the time comes to build an HTPC then ill give this another look.
Nice article.
ranger203 - Monday, July 25, 2005 - link
Not too shabby, but I was honestly expecting like 3-second boots and 5-second game load times... why is there only a 20% speed increase in some areas?
Griswold - Thursday, July 28, 2005 - link
Because the data still has to be processed after being loaded - bandwidth is obviously not the biggest bottleneck here.
forwhom - Tuesday, July 26, 2005 - link
What I would be very interested in seeing is the performance of the thing when using it as the source for encoding a DVD/MPEG... Most encoders are heavily disk-based, and if it could reduce the time significantly it might be worthwhile - assuming that eventually they come out with one big enough to hold the source. There's now reasonable CPU encode performance; you just have to get the data to/from it... maybe the i-drive would help.
highlandsun - Monday, July 25, 2005 - link
Hmmm, the WD Raptor has a sustained transfer rate of 72MB/sec. So on a freshly formatted drive, with no fragmentation, it should still be half the speed of the i-RAM. But at $200 for a 74GB drive, you could get a pair of these running in RAID 0, which would run at around 140MB/sec anyway, and still have spent less than the cost of the i-RAM and 4GB of DDR DIMMs. It definitely seems like this product falls short.
The use of PCI 3.3V standby power is clever. Perhaps a future version should just use a dummy PCI card to provide the power, connected to a 5" drive-size case with many more DIMM slots. If you can't cram at least 16 DIMMs in there, then the ability to use old memory is kind of wasted, since the old modules will have such small capacities.
Ultimately I think this type of product will always be a failure.
What they should do instead is make it a pass-through cache for a real SATA drive. So you plug the SATA controller into it, and plug it into a real SATA drive, and it caches all I/O operations to the real drive. That's the only way that you can get meaningful benefit out of only 4GB of memory. A card like this would turn any SATA drive into a speed demon; 4GB is definitely a decent size for most caching purposes.
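A toy model of that pass-through idea (purely a sketch of the caching logic; the block size, capacity, and write-through policy are assumptions for illustration, not a real controller design):

```python
from collections import OrderedDict

class PassThroughCache:
    """A few GB of DRAM sitting between the SATA controller and a real drive,
    keeping the most recently used blocks at memory speed."""

    def __init__(self, backing_drive, capacity_blocks):
        self.backing = backing_drive       # the real (slow) SATA drive
        self.capacity = capacity_blocks    # e.g. 4GB / 4KB blocks ~= 1M entries
        self.cache = OrderedDict()         # block_no -> data, kept in LRU order

    def read(self, block_no):
        if block_no in self.cache:         # hit: served at DRAM speed
            self.cache.move_to_end(block_no)
            return self.cache[block_no]
        data = self.backing.read(block_no) # miss: fall through to the platters
        self._insert(block_no, data)
        return data

    def write(self, block_no, data):
        self._insert(block_no, data)       # keep the cached copy current
        self.backing.write(block_no, data) # write through so nothing is lost on power failure

    def _insert(self, block_no, data):
        self.cache[block_no] = data
        self.cache.move_to_end(block_no)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used block
```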
highlandsun - Monday, July 25, 2005 - link
Of course the next logical step is to put the DIMM slots on the SATA controller card, so that access to cached data occurs at real memory speeds, not just SATA bus speed. This would only be a useful product for folks stuck on 32-bit systems, because otherwise it would be best to just increase the system memory instead. But there are plenty of 32-bit systems out there that would benefit from the approach.
ceefka - Tuesday, July 26, 2005 - link
That, and/or having the possibility to install very large amounts of RAM (like 32GB) on your motherboard, with BIOS settings to decide how much of that is non-volatile.
I have a feeling this is a transitional product that, while being a very nice add-on to your current system, will become obsolete in 4 to 5 years. If I had to capture loads of high-sample-rate audio (96/24), I'd want one now, though.
Furen - Monday, July 25, 2005 - link
I was expecting something closer to the $50 price mentioned at Computex... It would have been a nice device to tinker around with, but at that price (plus the price of RAM) I don't think most of us will get it.
weazel1 - Sunday, November 4, 2012 - link
Why they have to waste PCI bus speeds and run it through a SATA chip is beyond me. It should connect directly to the PCI bus, have its own BIOS, and run as full-fledged RAM, or as normal RAM with a redirect to act as an HDD - heck, you already have RAM disk software for that idea. The drive is pretty useless as permanent storage; why no one could see this I do not know.