Redundant Array of "Inexpensive" Disks. Raptors don't count as "inexpensive". Also, you did not appear to test other motherboard/RAID controller combinations, or drill down on how each of those is set up.
RAID 0 will give you faster I/O than a normal 7200 RPM drive, if you set it up correctly.
The article also shows something more:
The Seagate disk is a better buy than a Raptor.
Lower price, same performance.
The disk runs cooler and quieter.
More storage capacity.
Most tests are run together with other operations (sometimes manual). If RAID 0 is to be compared with a single disk, the time for those other operations has to be clocked separately. Most of the tests show that it makes no difference which disk you choose; the disk time is the same.
I ran the same benchmarks (more or less) myself and came to an astonishing result: the performance gains with RAID-0 are on average 50%. So I think you are doing something wrong.
Here is a problem with this evaluation. According to this article, the ten-year-old IBM 75GXP 30 GB hard drive is only 8% slower than the latest Western Digital Raptor II. So the conclusion, according to this article, is that Western Digital's 8 MB buffer cache, 4.5 ms seek time, 1,200 Mbits/sec interface, and "world's fastest Serial drive" billing (according to the Western Digital web site) aren't much better than the old 15 GB drives. I would suggest pulling out our old 386s and using their hard drives, if we don't mind a 10% slowdown.
RAID0 disks make a HUGE difference when streaming large amounts of data (such as high-speed video at over 100 MB/sec) for long periods of time (we often go over 20 minutes continuous when viewing rocket launches). Sustained transfer on single IDE disks is usually in the range of 40 to 50 MB/sec (note that the burst read & write rate, which is really what was covered in the article, is only peripherally related to the sustained write rate), with SCSI disks being about 10% higher and SATA disks about 15% higher. Two SCSI RAID0 disks will usually sustain about 110 MB/sec, while 7 SCSI RAID0 disks will sustain about 320 MB/sec (requires 2 on-board SCSI controllers; all PCI SCSI & SATA RAID controllers are junk when it comes to sustaining high write rates!). The SATA RAID disks sustain higher write rates than SCSI disks but often show data dropouts (every second or so) that I can't explain.
I love how, when testing video cards, Anand runs into results like the Multimedia Content Creation Winstone 2004 test (where most of the results are close, like the Comanche 4 benchmarks) and says "Hmm, obviously this game is CPU intensive and is putting little stress on the GPU," or something to that effect.
But with similar results in this benchmark, where it is obvious that there is a bottleneck somewhere else or that the test does not put enough emphasis on the hard drive, he chooses to say "RAID 0 is marginal and not worth investing money in."
"This review is hardly based on any scientific approach - one controller / motherboard combination and one brand of hard drive. Get real. And to not delve into WHY there was only minimal performance increase further dilutes the results of this review/test."
#112, I don't think we would have seen much difference between different card/drive combos in the office tests. The problem is that this "review" is fundamentally flawed. No effort was put forth to quantify the very real effect RAID0 has on the end user's everyday computing experience. For average Joe User, RAID0 is worthless; let him not experience the responsiveness and quickness that RAID0 provides cheaply and easily. Only let us everyday users of RAID0 enjoy that benefit.
Kudos to Anand (and all of the other testers referenced) for giving us a completely lopsided view of what RAID0 is all about.
What we have here, ladies and gentlemen, is a comparison of apples and oranges. I KNOW my RAID 0+1 system is faster than the same system using only one of the same drives. Do I expect it to actually be faster for web surfing? Of course not. The drives are not the bottleneck for most types of web surfing. Do I expect it to be faster for writing a Word document? Well, partially. I expect it to load and save the file faster, but everything else will be the same speed. Do I expect defrags to be faster? Yes, most definitely.
OK, on the flip side: do I expect my X800 video card to speed up word processing? Of course not. Do I expect my X800 to speed up web surfing? No. Do I expect my X800 to speed up gaming? For most types of games, yes I do.
For all of you wanting to see smaller, cheaper, and slower disks in RAID 0 as a comparison:
You will see approximately the same percentage improvement over a single drive as with the faster drives... in all of the tests. The IPEAK tests will be within a few percentage points of roughly 20% and 40% better (respectively) than a single drive of the same model, and the office benchmarks will see maybe at most a 5% improvement over a single drive of the same model.
What Anand fails to bring to this "test" is the realization of what RAID0 is actually used for: speeding up hard drive reads and writes. Since the hard drive is not always reading and writing, how can he jump to the conclusion that RAID 0 is worthless based on a test that, percentage-wise, does very little hard drive reading and writing? (As is obvious from the tests.) Ask anyone who uses a RAID 0 array... the entire computer "feels" much more responsive. More than one person using my home computer for normal web surfing has commented on how much faster my computer seems than theirs. So don't give me that "placebo" crap. They had no clue how my computer was set up until I explained it to them.
RAID 0 does enhance the user experience on a computer... it may just be hard to quantify with the primitive testing that is currently employed. But nonetheless, if you have used RAID 0 and then gone back to a single drive on the same system, you already understand what I am referring to. Everyone else... don't knock it until you have tried it.
Whereas our processors are zooming along in GHz and our RAM in GBs, our humble old HDDs are still stuck around 60 MB/s.
A properly set up RAID (with a dedicated controller) will give close to a 90% performance boost. I run software RAID 0 on my laptop (it's a monster laptop with 2 x 7200 RPM Seagate Momentus drives) for my database... It's not just about raw speed and numbers... the machine 'feels' damn fast. Also, despite being software RAID: a single drive gives 45 MB/s, whereas the RAID 0 partition easily gives 85 MB/s.
While this article on RAID was quite informative, I would be very interested in seeing what the measured performance differences are for an application such as Photoshop, where a secondary hard drive assigned as a scratch disk is recommended for optimum performance. Perhaps one problem here is finding some reliable Photoshop benchmarking tool, but surely one exists. Even without that, just performing a repeatable action that exercises a series of steps certain to force scratch disk usage could demonstrate what RAID 0 advantages exist when it is used as the scratch disk. Such a test would likely require the manipulation of a rather large image file... perhaps 100MB or more. I suspect other applications that work with large data files, such as video editing applications, would also benefit noticeably from a RAID 0 array. If this doesn't prove to be the case, then my plans for my next PC could be simplified and costs reduced. I've anticipated building a system with a RAID 0 system drive and a RAID 0 data drive, although I'm now thinking that RAID 0 for a system drive may be overkill.
"Unfortunately, if you lose any one of the drives in the array, all of your data is lost and isn't recoverable."
If you lose your drive in a single drive system, all of your data is lost and isn't recoverable, thus, the above statement has no special meaning. However, in both cases, if you have backed up your drives, then you can recover. Also, I fail to see how two drives in a RAID 0 will halve the MTBF.
This review is hardly based on any scientific approach - one controller / motherboard combination and one brand of hard drive. Get real. And to not delve into WHY there was only minimal performance increase further dilutes the results of this review/test.
#64 pretty much spells it all out - nice job.
#87 (et al.) ... It doesn't matter if it doubles, triples, or increases 1000-fold ... just as with ONE drive, you are hosed if you don't have a full system backup! The primary point of RAID 0 is to get more performance, not to worry about drive failure. If you are worried about drive failure, then use RAID 1 or some other RAID level that provides redundancy.
Backup is the key here. And I can attest that Acronis True Image (www.acronis.com) is one brilliant piece of backup software. I have used it in multiple scenarios, including restoring an image file on to a brand new, un-partitioned, un-formatted hard drive (both IDE and SCSI), then booting up with the new drive - restored 100 %. And the good news .... it supports (some) SATA RAID configurations - and the new Promise FastTrak TX2200 controller (according to Acronis tech support).
So, the issue is NOT the so-called dangers of a RAID 0 array and drive failure and statistics and probability, but rather performance. It's time to create some new benchmarks that focus specifically on testing RAID configurations (back to the future, so to speak) instead of all the sorry old benchmarks mentioned above.
It will be interesting to see how the new Promise FastTrak TX2200 SATA II controller, connected to a pair of the new Maxtor DiamondMax 10 300GB, 16MB SATA drives in a RAID 0 configuration, fares in tests. In fact, it would be interesting to see how the Promise FastTrak TX4200 with 4 Maxtor drives in a RAID 10 configuration works out. Best of both worlds: performance and redundancy? Since both the new Promise controller and the Maxtor drives support NCQ and SATA TCQ, one would think this should make a dent in RAID 0 (and RAID 10, etc.) performance. I'm about to find out ... as soon as the TX4200 arrives at my doorstep.
I surely do wish that this testing of RAID 0 had included the unique cards from Netcell. Their SyncRaid line looks promising, but in the end, are their results in "real world" testing sufficiently good to make it worthwhile?
How about it, AnandTech: could you expose one of these SyncRaid cards to the same testing you used in this article?
I would have liked to see the differences using RAID with slower hard drives. Not everybody has a Raptor. Does RAID have a greater impact if the hard drives are slower to begin with?
I find the conclusions in the article surprising. When I bought a second drive and configured it in a RAID 0 array, the performance increase could clearly be seen. I was so impressed that the first thing I'm looking for in the next mobo I plan to buy is a good RAID controller with 4-drive RAID 0 capability.
Perhaps the tests simply don't show it, but in reality, at least in my case, the performance did increase noticeably.
You can even see it in Windows Explorer. Opening a directory with a couple hundred files inside is faster than with a single drive. The same applies to other apps that work on large amounts of data.
Nicely put, mdrohn, but I still don't regard backup as superfluous. It's really terms like 'extra redundancy' or 'built-in redundancy' that grate with me. The term seems to be used for things that really mean error protection or failure-proofing. In RAID 1 the second disc is superfluous as far as extra storage goes, but it can be used for simultaneous reads or even striped reads, which improve speed as well as provide backup; hardly superfluous. My usage of the word has been as 'not needed', more akin to obsolescence than anything else. But apparently the term is widely used in the electronics industry. So perhaps this is another example of English being subroutined (or bastardised).
At least we made it to three figures in the posts, and page 6.
Well, my Aussie friend, since we're splitting linguistic hairs, I maintain that the primary meaning of 'redundancy' (from the Latin 'redundare', meaning 'to overflow'), both linguistically and electronically, is its sense of 'superfluity or excess'. The sense of 'uselessness' is a secondary meaning which has evolved from that primary one. Every dictionary I have checked lists the superfluous sense above the useless one, which is an indicator of semantic primacy.
But I suspect that this discussion has become somewhat redundant, since we both seem to be making the same points over and over again ;)
Interesting read on storagereview, Timw. It proves what I suspected -- that beyond 3 disks, performance for large RAID 0 arrays actually declines. It also demonstrates that RAID 0 using slower disks isn't appreciably better than with pricey Raptors.
Also interesting...for single-user operation, command tagging and queuing tends to decrease performance.
The past postings to this thread demonstrate why some people believed in a flat earth up to the 19th century, and some today think the Apollo moonlanding was staged by Hollywood. People cling to illusions, despite reality.
To use technical terms, Desktop Raid-0 sucks. Get over it.
This isn't really anything new. As someone else mentioned, seek time and cache size with the right firmware optimizations are the most important. RAID 0 won't be able to improve that, and may actually be slower than a single drive in many instances. If you don't believe what Anandtech has to say, take a look at the latest article at storagereview.com.
Wrong again, mdrohn; Australia, to be exact. But I was using an Oxford dictionary. Hyperdictionary.com (USA-based, I think, as it advertises cheap dental insurance for US residents) gives redundancy as:
Definition: [n] repetition of an act needlessly
[n] the attribute of being superfluous and unneeded; "the use of industrial robots created redundancy among workers"
[n] (electronics) a system design that duplicates components to provide alternatives in case one component fails
[n] repetition of messages to reduce the probability of errors in transmission
Which agrees with both our definitions. Yours is more correct electronically. Hyperdictionary has a more extensive electronics description but doesn't add much to the above electronic definition:
http://www.hyperdictionary.com/dictionary/redundan...
I'm at odds with that electronic meaning of redundancy. After all, the language came before the electronics.
Heh, I guess that means you are writing from Britain, Pumpkinierre. That special meaning of 'redundancy' in the workplace context of someone losing their job is unknown here in the USA. In fact I'd never heard it in my life until watching 'The Office' on DVD last month ;) We call that layoffs or downsizing here.
The electronics/systems meaning I posted was also taken straight from a dictionary.
Your job becomes redundant when you are no longer of use. You then get a redundancy payout based on the years worked, etc. So I think they initially used the term to describe drives that were no longer of use because they had been replaced by newer, bigger drives. My dictionary has 'redundant' as meaning 'superfluous', which is more like your definition, #100, and I suppose you could regard a backup drive as such... until your main drive lets go. So I don't like the usage of 'redundancy' for duplexed or mirrored drives.
Actually, 'redundant' more precisely means 'exceeding what is required' or 'exactly duplicating the function or meaning of another', which is an important distinction.
'Redundancy' in an electronics or systems context means 'incorporating extra components that perform the same function in order to cope with failures and errors'. Thus RAID 0 is not, strictly speaking, a 'redundant' array of disks despite its RAID name, since every drive in RAID 0 records different data. RAID 1 is classic redundancy--all the drives in a RAID 1 array are reading and writing exactly the same data.
I don't know about that latency increase with RAID. Seek times don't seem to be much affected in the reviews I've seen. If the controller reads the drives simultaneously, then there shouldn't be much effect on latency.
> I'm fairly certain that the performance
> advantages of having 4 or 5 striped drives are
> likely to be a lot better than just 2...
No, not for ATA drives. You're still limited to the max bandwidth of 133MB/s (150 for some SATA implementations), so beyond 3 drives you don't get the full transfer rate of each drive. Plus, the latency gets worse the more drives you add...with 5 or more drives, your mean latency is essentially your max latency of any single drive.
So a larger array is a repeat of the 2-drive situation: much faster in the rare case of a disk-bound app transferring huge files... and no faster (or possibly slightly slower) the rest of the time.
You are right on one thing though. Cheaper (translation: slower) disks would tend to look a bit better here. Not a huge difference, but the slower the disk, the more likely the app is to be disk-bound.
Perhaps the review ought to have pointed out what RAID stands for: Redundant Array of *Inexpensive* Disks. The idea is to improve the performance or reliability of a system by using many smaller/cheaper disks compared to using a single expensive disk. Often this isn't the case (I doubt anyone would say the 15krpm disks in modern servers are 'inexpensive'), but the origin behind the technology remains applicable today.
Maybe a better test would be to take some cheap disks and see how well they perform. Also, am I not right in thinking that SATA RAID allows for more than just two devices? I'm fairly certain that the performance advantages of having 4 or 5 striped drives are likely to be a lot better than just 2...
Umm, MadAd... the "array" of drives can't fail unless a drive in it fails... and it will always fail if one of its drives does. It's just a logical grouping, not a separate entity.
Furthermore, MTBF is the wrong statistic to use here. MTTF is the relevant one.
Obviously the raid controller itself could fail, but this is outside the scope of the argument. And such a failure is highly unlikely to impact data in any case.
"But when we are talking about an ARRAY of drives, the operating life of each individual drive in the array is not what is at issue. What is relevant is ARRAY failure, not DRIVE failure."
Are you also trying to say that if the array fails without a drive failing, then that's still down to the drive's MTBF? Wouldn't that be an array MTBF?
If a drive fails in RAID 0, then of course we expect the array to fail. If a drive does not fail but the array fails (and you can reuse the drive), then that's nothing to do with the drive's MTBF, is it? The drive hasn't failed; it still has service life left; the array failed. You'll need a different way to measure the chance of an array failure, since (unless it's connected with a drive failure) it has nothing to do with the expected longevity of the components, which we measure by the drive manufacturers' MTBF figures.
Yes, that's correct, #93. It's the data that counts. The probability of failure of one drive in a RAID 0 OR RAID 1 over a given period is the same. For two-drive RAIDs, this is double the probability of a single-drive failure over the same period, if all drives start out identical (i.e. same probability of failure). In RAID 0, the probability of LOSS OF DATA corresponds to this doubled single-drive failure probability. However, in RAID 1 the parameters change: here it is the probability of both drives failing on the SAME day in a given period (assuming a backup can be completed in a day). This probability is much, much lower than the single-HDD or RAID 0 data loss probability, which applies to ANY day of a given period.
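A back-of-the-envelope model makes this comparison concrete. This is only a sketch: the per-drive daily failure probability `p` and the one-day RAID 1 rebuild window are assumptions for illustration, not measured figures.

```python
# Toy model: each drive independently fails on any given day with
# probability p. RAID 0 loses its data if EITHER drive fails on ANY
# day of the period; RAID 1 loses data only if BOTH drives fail on
# the SAME day (assuming the failed drive is replaced within a day).
p = 1e-5          # assumed daily failure probability per drive (illustrative)
days = 3 * 365    # a three-year warranty period

# RAID 0 survives only if neither drive ever fails:
p_raid0_loss = 1 - (1 - p) ** (2 * days)

# RAID 1 loses data on a given day only with probability p*p:
p_raid1_loss = 1 - (1 - p * p) ** days

print(f"RAID 0 loss: {p_raid0_loss:.4f}")   # ~0.022
print(f"RAID 1 loss: {p_raid1_loss:.2e}")   # ~1.1e-07
```

With these assumed numbers, the RAID 1 data-loss probability comes out roughly five orders of magnitude lower than RAID 0's, which is the point being made above.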
This makes RAID 1 the superior RAID for desktop use despite the apparent loss of capacity. With cheap 160GB drives around, I don't think that's a problem (I got a 120 and it's not a third full, and I don't back up because I'm lazy and evil). Read requests in RAID 1 ought to be faster than in RAID 0, since variable-size virtual striping could be carried out on this RAID format. Unfortunately, they used to stripe RAID 1 but don't anymore, relegating it to the duplexing or mirroring role. Reads apparently are only improved in modern RAID 1 when simultaneous multiple read requests are initiated; here the controller's extra buffering and its ability to read the RAID drives simultaneously at different locations help out. Once again, good for servers where this is a common requirement, but not good for desktops, where a striped read would be of far greater use for the speed it brings. We really need Arnie on this one: the broom and the Gatling!
Redundancy means 'of no further use'. A backup drive isn't of no use, so 'redundancy' doesn't mean backup, despite how some people use the term to describe RAID 1. RAID, which stands (I think) for Redundant Array of Independent Drives, was initially a method of combining older (hence smaller) drives into one big drive. That saved them from being thrown out, i.e. made redundant.
"Now im sure ill get a roasting from statisticians for not following the rules exactly however as has already been mentioned previously, the notion that by buying 2 of something will halve its chances of enjoying a useful life is just nonsense in individual cases."
I'm not a statistician, nor do I play one on TV ;) But similarly to WaltC, you are misunderstanding the fact that in a RAID 0 setup, if ONE member drive fails, the whole array fails IN ITS FUNCTION AS AN ARRAY. What you say is true--the individual life of a single drive is not affected by how many drives you own. But when we are talking about an ARRAY of drives, the operating life of each individual drive in the array is not what is at issue. What is relevant is ARRAY failure, not DRIVE failure.
Let's say you have two drives in a RAID 0 array. One drive fails and the other drive remains in perfect working order. You can reformat the surviving drive and keep using it as long as it continues to function. But you have lost the data on the ENTIRE ARRAY because in RAID 0 there is no redundancy and no backup, and you need both drives working in order to access the data on the array.
"Thus the chance of failure for a RAID 0 array is the probability at any given time that *_one_* of the component drives will fail. Assuming all the disks are identical, that chance is equal to the failure probability of one drive, multiplied by the number of drives."
OK remind me never to pull formulae out of my butt on a holiday weekend. The actual probability of failure for a RAID 0 array with n members is as follows:
f_RAID0 = 1 - (1 - f_a)(1 - f_b)(1 - f_c)...(1 - f_n)
where f_a, f_b, f_c, etc. are the individual chances of failure for each array member.
The question we are asking, "what is the chance that at least one component drive in a RAID 0 array will fail?" is mathematically identical to asking, "what is the complement (opposite) of the chance that none of the component drives will fail?" The chance that a drive will not fail is the complement of the drive's chance to fail, or 1-fa. The probability that multiple independent events will occur simultaneously ("none of the drives will fail") is the product of those chances. So the probability that multiple independent events will NOT occur simultaneously is the complement of that product.
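The formula can be checked with a few lines of code; a minimal sketch (the failure probabilities fed in are made-up illustrative numbers):

```python
# Chance a RAID 0 array fails = chance that at least one member drive
# fails = complement of "every drive survives".
def raid0_failure_prob(drive_failure_probs):
    p_all_survive = 1.0
    for f in drive_failure_probs:
        p_all_survive *= (1.0 - f)   # independent drives: multiply survival odds
    return 1.0 - p_all_survive

# Two identical drives, each with an assumed 5% chance of failing in
# some period: the array's chance is just under double (9.75%, not 10%).
print(round(raid0_failure_prob([0.05, 0.05]), 4))  # 0.0975
```

For small per-drive probabilities the cross terms in the product are tiny, which is why simply multiplying one drive's failure probability by the number of drives (as done elsewhere in this thread) is a good approximation.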
The problem with probabilities is that they form a general model for making predictions; they are not meant to replicate individual real-world events.
If I get 1 raffle ticket from a raffle of 100, then the probability is 1:100 that I will win. If I buy 2 tickets, then that's 2:100, or a 1-in-50 chance I will win. However, in the worst case there are still 98 other tickets that could be drawn from the hat before one of mine, and the 1-in-50 figure will only be realistic if we run lots and lots of raffles and calculate the results as a set.
As far as MTBF is concerned, I would say a more realistic way to plot the likelihood of failure of multiple units would be to analyse the values within the range of MTBF results, to 2 s.d. (2 standard deviations cover 95% of results).
E.g., if the MTBF is, say, 60 months, and 95% of the results fall within the 55-to-65-month range, then while one drive is likely to last 60 months, either of 2 drives should last at least 57.5 months.
Of course there's a chance that you get a dodgy one that fails in 10 months; that doesn't make the model wrong, just that that one fell outside the 95% level on the curve.
Now I'm sure I'll get a roasting from statisticians for not following the rules exactly. However, as has already been mentioned, the notion that buying 2 of something will halve its chances of enjoying a useful life is just nonsense in individual cases.
#80 says:
> Sending the two seek commands versus one should
> add negligible time. The actual seeks would be
> done concurrently. The rotational latencies on
> each drive are independent. Therefore the time
> to locate the data should be very close to the
> same as for a single drive.
The latencies for each drive are independent, yes... that's the very reason the overall latency is higher. Simple statistics. I'll give you a somewhat simplified explanation of why.
A seek request sent to a single drive finds the disk in a random position, evenly distributed between (best_case_latency) and (worst_case_latency). The mean latency is therefore (best+worst)/2.
Add a second drive to the picture now. On average, half the time it will be faster than the first drive for a given request, and half the time slower. In the first case, the ARRAY speed is limited by the first drive. In the second case, the array is limited by disk two, which will be randomly distributed between (w+b)/2 and w. The average in this case is therefore (3w+b)/4.
Probability of first case = (1/2)
Probability of second case = (1/2)
Overall mean = (1/2)(w+b)/2 + (1/2)(3w+b)/4 = (5w+3b)/8.
Assuming best case = 0 and worst case = 1, you get a mean seek for a single disk of 50%, and a mean seek for a two-disk array of 62.5%.
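The derivation above is deliberately simplified; treating the array latency exactly as the maximum of two independent uniform latencies gives a mean of 2/3 rather than 5/8, i.e. slightly worse still. A quick simulation (purely illustrative) confirms both the single-drive and two-drive figures:

```python
import random

random.seed(1)
N = 200_000

# Single drive: latency uniform on [0, 1]; the mean should come out near 1/2.
single = sum(random.random() for _ in range(N)) / N

# Two-drive stripe: a request completes when the slower drive finishes,
# so the array latency is the max of two independent uniform latencies.
array2 = sum(max(random.random(), random.random()) for _ in range(N)) / N

print(round(single, 2))  # ~0.5
print(round(array2, 2))  # ~0.67 (exact mean of the max of two uniforms: 2/3)

# In general the mean of the max of n uniforms is n/(n+1), so with 5
# drives the mean latency is ~0.83 -- approaching the worst case, as
# noted in an earlier comment.
```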
"(3)Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 drive failure is exactly twice as high as it is for a single drive."
Nighteye2 is correct. The above quote contains a fundamental misstatement and does not correctly represent why RAID 0 multiplies failure rate. WaltC's entire ensuing argument is logically correct, but because it is based on the wrong premise it is not relevant to RAID 0 failure rates. The quote should have read as follows:
"Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 *_ARRAY_* failure is exactly twice as high as it is for a single drive."
Having multiple disks in a RAID 0 array does not, as WaltC correctly says, affect an individual disk's chance of failure. But what is relevant to this subject is the failure of the array as a whole. Since in RAID 0 the component drives are linked together without any redundancy or backup, losing one component means that the entire array fails. Thus the chance of failure for a RAID 0 array is the probability at any given time that *_one_* of the component drives will fail. Assuming all the disks are identical, that chance is equal to the failure probability of one drive, multiplied by the number of drives.
Let's take the car analogy. In WaltC's example the two cars are independent, autonomous vehicles. To make it a proper analogy to RAID 0, the two cars would have to be functionally linked so that they operate as one. Let's say you welded the two cars together side by side with steel bars to make one supervehicle. Then if the tires gave out on any one of the two component cars, the entire supervehicle would be stuck.
If a single HD has a 50% chance of failing in 5 years, a RAID 0 array with 2 of those drives has a 50% chance of failing in about 4 years, depending on the distribution of the failure probability function.
#84, you should study failure theory better. RAID 0 in fact *does* double the chance of failure at any given time. However, this does not mean the MTBF is halved, because disk failure chances are time-dependent and increase over time.
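That time dependence can be made concrete with a quick calculation. A sketch assuming a Weibull wear-out distribution; the shape parameter k = 3 is an assumption for illustration (k = 1, i.e. a constant failure rate with no wear-out, would give 2.5 years instead):

```python
import math

k = 3.0        # assumed Weibull shape; > 1 means the hazard rises with age
median1 = 5.0  # given: a single drive has a 50% chance of failing in 5 years

# Calibrate the scale tau so that exp(-(t/tau)**k) = 0.5 at t = 5:
tau = median1 / math.log(2) ** (1.0 / k)

# A 2-drive RAID 0 survives only while both drives do, so its survival
# function is squared; solve exp(-2*(t/tau)**k) = 0.5 for t:
median_raid0 = tau * (math.log(2) / 2) ** (1.0 / k)

print(round(median_raid0, 2))  # 3.97 -- close to the "about 4 years" above
```

The general result is median_raid0 = median1 * (1/2)^(1/k), which is exactly why the answer "depends on the distribution" as stated.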
#84, even though I agree with some of your comments on the testing, the fact is that Anand was looking at RAID 0 from the viewpoint of the desktop user/gamer, which is the target audience of the AT website. So he is legitimate in using tests that are relevant to and understood by this target audience for testing HDD performance in both single and RAID combinations, rather than specific HDD performance tests. He reaches conclusions similar to storagereview.com's assessment of RAID use in the desktop environment. However, the criticism about the failure to test other controllers (even if limited to onboard controllers) and RAID 1 performance I feel is valid.
With regard to the likelihood of failure of a component, it must be recognised that all processes in nature are stochastic (probability-based); this is at the core of quantum mechanics. So all components have an associated probability of failure. That probability is lessened by better manufacturing, quality control, newness, etc., but it is always present. Naturally, the longer you use the HDD, the greater the probability of failure due to wear etc., but it is still possible for it to fail in the first year (and this does happen). The warranty period doesn't mean your HDD is not going to fail; it means they will replace it if it fails. The laws of probability are clear: if you have two components with associated probabilities of failure, you must ADD the two probabilities (to a first approximation) if you want the probability of ANY ONE of them failing. So, in the case of two new HDDs, RAID 0 has double the probability of losing your data compared to a single HDD.
The consequence of the above (and having lost a HDD at 3 years and 1 day!) means to me, along with many other desktop users who fail to back up (despite having burners) out of laziness, that the oft-forgotten RAID 1 ought to be the prime candidate for the desktop. Here the probability is refined to simultaneous failure of the HDDs on any PARTICULAR day of the 3-year warranty period, which is a different probability from failure of EITHER of the discs over the WHOLE 3 years. Naturally, when one disc fails in RAID 1, the desktop user gets off her butt and backs up on the day prior to any repair. The fact that RAID 1 ought to be better at reads than even RAID 0 (see my previous posts) is even greater reason to adopt this mode for the desktop (where writes matter less), but it has been ignored by the IT community.
There are so many basic errors in this article that it's difficult to know just where to start, but I'll wing it...;)
From the article:
"The overall SYSMark performance graph pretty much says it all - a slight, but completely unnoticeable, performance increase, thanks to RAID-0, is what buying a second drive will get you."
Heh...;) Next time you review a 3D card, you could use all of the "real world" benchmarks you selected for this article and conclude that there's "no difference in performance" between a GF4 and a 6800U, or an R8500 and an X800PE, too...;) That would be, of course, because none of these "real world" benchmarks you selected (SYSmark, Winstone, etc.) was created for the specific purpose of measuring 3D GPU performance. Rather, they measure things other than 3D-card performance, and so the kind of 3D card you install would have minimal to no impact on these benchmark scores. Likewise, in this case, it's the same with hard drive performance relative to the functions measured by the "real world" benchmarks you used.
Basically, overall SYSmark scores, for instance, may derive possibly 10% (or less) of their weight from measuring the performance of the hard drive arrangement in the system tested. So, even if the MB/sec read from the hard disk under RAID 0 is *double* that of a normal single-drive IDE setup, because these benchmarks spend 90% or more of their time in the CPU and system RAM doing things other than testing HD performance, they may reflect only a tiny, near-insignificant increase in overall performance between RAID 0 and single-drive IDE systems -- which is exactly what you report.
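This weighting argument is essentially Amdahl's law, and it is easy to put numbers on it. A sketch with assumed figures: the 10% disk fraction comes from the comment above, and the 2x disk speedup is hypothetical.

```python
# Amdahl's-law style estimate: if a benchmark spends `disk_frac` of its
# time waiting on the disk and RAID 0 makes disk work `disk_speedup`
# times faster, the whole-benchmark speedup is:
def overall_speedup(disk_frac, disk_speedup):
    return 1.0 / ((1.0 - disk_frac) + disk_frac / disk_speedup)

# 10% of time on disk, disk twice as fast -> only ~5% overall gain,
# which is about the spread these office benchmarks showed.
print(round(overall_speedup(0.10, 2.0), 3))  # 1.053
```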
But that's because all of the "real world" benchmarks you used here are designed to tell you little to nothing specifically about hard-drive performance, just as they are not suitable for use in evaluating performance differences between 3d gpus, either. Your conclusions as I quoted them above, to the effect that these "real world" benchmark results prove that RAID 0 has no impact on "real world" performance, are therefore invalid. The problem is that the software you used doesn't specifically attempt to measure the real-world read & write performance of RAID 0, or even the performance of single-drive IDE for that matter, much less provide any basis from which to compare them and draw the conclusions you've reached.
I'd recommend at this point that you return to your own article and carefully read the descriptions of the "real world" benchmarks you used, as quoted by you (verbatim in your article, direct from the purveyors of these "real world" benchmarks), and search for even one of them which declares: "The express purpose of this benchmark is to measure, in terms of MB/sec, the real-world read and write performance of hard drives and their associated controllers." None of the "real-world" benchmarks you used make such a declaration of purpose, do they?
Next, although I consider this really a minor footnote in comparison to the basic flaw in your review method here and the inappropriate conclusions resulting from it, I have to second what others have said in response to your article: if your intent is actually to measure hard drive and controller read/write performance at some point, and to then draw conclusions and make general recommendations, be mindful that just as there are performance differences among hard drives made by competing companies, there are also differences between the hard drive controllers different companies make--and this certainly applies to both standard single-drive IDE controllers and RAID controllers. So I think you want to avoid drawing blanket conclusions based merely on even the appropriate testing of a single manufacturer's hard drive controller, regardless of whether it's a RAID controller or something else. One size surely doesn't fit all.
As to your conclusions in this article, again, I'm also really surprised that you didn't logically consider their ramifications, apparently. I'm surprised it didn't occur to you that if it were true that RAID 0 had no impact on read/write drive performance, it would also have to be true that Intel, nVidia (and all the other core-logic chip and HD-controller manufacturers to which this applies), not to mention controller manufacturers like Promise, are just wasting their time and throwing good money after bad in their development and deployment of RAID 0 controllers.
I think you'll have to agree that this is an illogical proposition, and that all of these manufacturers clearly believe their RAID 0 implementations have a definite performance value over standard single-drive IDE--else the only kind of RAID development we'd see is RAID mirroring for the purpose of concurrent backup.
In reading some of the responses in this thread, it's obvious that a lot of your readership really doesn't understand the real purpose of RAID 0, and views it as a "marketing gimmick" of some ill-defined and vague nature that in reality does nothing and provides no performance advantages over standard IDE controller support. I think it's unfortunate that you haven't served them in providing them with worthwhile information in this regard, but instead are merely echoing many of the myths that persist as to RAID 0, myths based in ignorance as opposed to knowledge. My opinion as to the value of RAID 0 is as follows:
For years, ever since the first hard drives emerged, the chief barrier and bottleneck to hard drive performance has always been found within hard drives themselves, in the mechanisms that have to do with how hard drives work--platters, heads, rotational rate, platter size and density, etc. The bottleneck to IDE hard drive performance, measured in MB/sec read & write performance, has actually never been the host-bus interface for the drive, and even today the vintage ATA100 bus interface is on average 2x+ faster than the fastest mass-market IDE drives you can buy, which average 30-50MB/sec in sustained reads from the platters.
Drives can "burst" today right up to the ceiling of the host-bus interface they support, but these transfer speeds only pertain to data in the drive's cache transferring to the host bus and do not apply to drive data which must be retrieved from the platters because it isn't in the cache--which is when we drop back to the maximums currently possible with platter technology--30-50MB/sec depending on the drive.
Increases in platter density and rotational speeds, and increases in the amount of onboard cache in hard drives, have been the way that hard drive performance has traditionally improved. At a certain point--say 7,200 rpms for platter rotation--an equilibrium of sorts is reached in terms of economies of scale in the manufacture of hard drives, and pushing the platter rotational speed beyond that point--to 10,000 rpms and up-- results in marked diminishing returns both in price and performance, and the price of hard drives then begins to skyrocket in cost per megabyte (thermal issues and other things also escalate to further complicate things.) So the bottom line for mass-market IDE drives in terms of ultimate maximum performance is drawn both by cost and by the current SOA technical ceilings in hard drive manufacturing.
Enter RAID 0 as a relatively inexpensive, workable, and reliable solution to the performance--and capacity--bottlenecks imposed in single-drive manufacturing. With RAID 0, striped according to the average file size that best fits the individual user's environment, it's fairly common to see read speeds (and sometimes write, too) in MB/sec go to *double* that possible with either single drive used in a RAID 0 setup when you run it individually on a standard IDE controller, regardless of the host-bus interface.
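The striping mechanism described above is simple to sketch. The stripe size and drive count below are illustrative assumptions, not any particular controller's defaults:

```python
# Minimal sketch of how RAID 0 striping maps a logical byte offset to a
# physical drive and an offset on that drive. Constants are assumptions.
STRIPE_SIZE = 64 * 1024   # 64 KB stripe, a common choice
N_DRIVES = 2

def locate(offset):
    """Return (drive index, byte offset on that drive) for a logical offset."""
    chunk = offset // STRIPE_SIZE          # which stripe chunk, globally
    drive = chunk % N_DRIVES               # chunks alternate across the drives
    chunk_on_drive = chunk // N_DRIVES     # chunks already placed on this drive
    return drive, chunk_on_drive * STRIPE_SIZE + offset % STRIPE_SIZE

# A 256 KB sequential read touches both drives, so each drive only has
# to deliver half the data -- the source of the near-2x read speeds.
drives_touched = {locate(off)[0] for off in range(0, 256 * 1024, STRIPE_SIZE)}
print(drives_touched)
```

This also shows why stripe size matters: a file smaller than one stripe lands entirely on one drive and gains nothing, which is why matching the stripe to the user's typical file size pays off.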
At home I've been running a total of 4 WD ATA100 100GB PATA drives for the last couple of years. Two of them--the older 2MB-cache versions--I run singly on IDE 0 as M/S through the onboard IDE controller, and the other two are 8MB-cache WD ATA100 100GB drives running in RAID 0 from a PCI Promise TX2K RAID controller as a single 200GB drive, out of which I have created several partitions.
From the standpoint of Windows, the two drives running through the Promise controller in RAID 0 are transparent and indistinguishable from the operation and management of a single 200GB physical hard drive. What I get from it is a 200GB drive with read/write performance up to double the speed possible with each single drive, a 200GB RAID 0 drive utilizing 16MB of onboard drive cache, and a 200GB hard drive which formats and partitions and behaves just like an actual 200GB single drive but which costs significantly less (though not, to be fair, if I include the cost of the RAID controller--but I'm willing to pay it for performance ceilings just not possible with a current 200GB single IDE drive.)
Here are some of the common myths about such a setup that I hear:
(1) The RAID 0 performance benefit is a red herring because you don't always get double the performance of a single drive. It's so silly to say that, imo, since single-drive performance isn't consistent, either, as much depends on the platter location of the data in a single drive as to the speed at which it can be read, and so on, just as it does in a RAID drive. What's important to RAID 0 performance, and is certainly no red herring, is that read/write drive performance is almost always *higher* than the same drive run in single-drive operation on IDE, and can reach double the speed at various times, especially if the user has selected the proper stripe size for his personal environment.
(2) RAID 0 is unsafe for routine use because the drives aren't mirrored. The fact is that RAID 0 is every bit as safe and secure as normal single-drive IDE use, as those aren't mirrored, either (which you'd think ought to be common sense, right?)...;) As with single-drive use, the best way to protect your RAID 0 drive data is to *back it up* to reliable media on a regular basis.
On a personal note, one of my older WD's at home died a couple of weeks ago of natural causes--WD's diagnostic software showed the drive unable to complete both SMART diagnostic checks, so I know the drive is completely gone. The failed drive was my IDE Primary slave, not one of the RAID drives. Apart from what I had backed up, I lost all the data on it, of course. Proves conclusively that single-drive operation is no defense against data loss...;)
OTOH, in two+ years of daily RAID 0 operation, I have yet to lose data in any fashion from it, and have never had to reformat a RAID 0 drive partition because of data loss, etc. It has consistently functioned as reliably as my single IDE drives, and indeed my IDE single-drive failure was the first such failure I've had in several years with a hard drive, regardless of controller.
If people would think rationally about it they'd understand that the drives connected to the RAID controller are the same drives when connected individually to the standard IDE controller, and work in exactly the same way. The RAID difference is a property of the controller, not the drive, and since the drives are the same, the probability of failure is exactly the same for a physical drive connected to a RAID controller and the same drive connected to an IDE controller. There's just no difference.
(3) Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 drive failure is exactly twice as high as it is for a single drive. This is another of those myths that circulates through rumor because people simply don't stop to think it through. It is true that adding a second drive--whether as a slave on the Primary IDE channel or as the second drive in a RAID 0 configuration--slightly elevates the chance that "a drive" will fail above the chance presented by a single drive, since you now have two drives running instead of one. But does this mean you have increased the probability that a drive will fail by 100%? If you think about it, that makes no sense, because...
If I install a single drive which, just for the sake of example, is of sufficient quality that I can reasonably expect it to operate daily for three years, and then I add another drive of exactly the same quality, how can I rationally expect both drives to operate reliably for anything less than three years, since the reliability of either drive is not diminished in the least merely by the addition of another drive just like it? I mean, how does it follow that adding in a second drive just like the first suddenly means I can expect a drive failure in 18 months, instead of three years?...;) Adding a second drive does not diminish the quality of the first, since the second drive is exactly like the first and is of equal quality, and hence both drives should theoretically be equal in terms of longevity.
But the rumor mongering about RAID 0 is that adding in a second drive somehow means that the theoretical operational reliability of *each* drive is magically reduced by 50%...;) That's nonsense of course, since component failure is entirely an individual affair, and is not affected at all by the number of such components in a system. The best way to project component reliability, then, is not by the number of like components in a system, but rather by the *quality* of each of those components when considered individually. Considering components in "pairs," or in "quads," etc., tells us nothing about the likelihood that "a component" among them will fail.
Look at the converse as proof: If I have two drives connected to IDE 0 as m/s, and I expect each of those drives to last for three years, does it follow logically that if I remove the slave drive that I increase the projected longevity of the master drive to six years?...;) Of course not--the projected longevity is the same, whether it's the master drive alone, or master and slave combined, because projected component longevity is calculated completely on an individual basis, and is unaffected entirely by the number of such components in a system. The fact is that I could remove the slave drive and the next day the master could fail...;) But that failure would have had nothing whatever to do with the presence or absence of the second drive.
Putting it another way, does it follow that one 512mb DIMM in a system will last twice as long as two 512mb DIMMs in that system? If I have one floppy drive is it reasonable to expect that adding another just like it will cut the projected longevity of each floppy in half? If I have a motherboard with four USB ports, does it follow that by disabling three of them the theoretical longevity of the remaining USB port will be quadrupled? No? Well, neither does it follow that enabling all four ports will quarter the projected longevity of any one of them, either.
Consider as well the plight of the hard drive makers if the numerical theory of failure likelihood had legs: if it was true that as the number of like components increases the odds for the failure of each of them increases by 100%, irrespective of individual component quality, then assembly-line manufacturing of the type our civilization depends on would have been impossible, since after manufacturing x-number of widgets they would all begin to fail...;)
One last example: my wife and I each bought new cars in '98. Both cars included four factory-installed tires meeting the road. Flash forward four years--and I had replaced my wife's entire set of tires with an entirely different make of tire, because with her factory tires she suffered two tread separations while driving--no accidents though as she was very fortunate, and the other two constantly lost air inexplicably. All the difference with the new set. As for my factory tires, however, I'm still driving on them today, with tread to spare, and never a blow-out or leak since '98. The cars weigh nearly the same (mine is actually about 500lbs heavier), the cars are within 5,000 miles of each other in total mileage, and neither of us is lead-footed. Additionally, I serviced both cars every 3,000 miles with an oil change and tire rotation, balancing, inflation, etc.
The stark variable between us, as it turned out, was that my factory-installed tires were of a much higher quality than her factory-installed tires, as I discovered when replacing hers. It's yet another example in reality of how the number of like components in a system is far less important than the quality of those components individually, when making projections as to when any single component among them might fail.
Anyway, I think it would be nice if we could move into the 21st century when talking about RAID 0, and realize that crossing ourselves, throwing salt over a shoulder, or avoiding walking under ladders won't add anything in the way of longevity to our individual components, nor will this behavior in any way serve to reduce that longevity, which is endemic to the quality of the component, regardless of number. Given time, all components will fail, but when they fail, they always fail individually, and being one of many has nothing to do with it, but being crappy has everything to do with it, which is the point to remember...;)
The article pretty much confirmed my feeling that for general day-to-day usage, RAID 0 is more trouble than its worth.
There are times when RAID 0 could theoretically help, extracting large (CD image sized) archives, or copying (not moving) a large file to another folder on the same drive. Even though I almost exclusively use CD images and Daemon Tools these days, the time spent extracting or copying them is negligible, and certainly not worth the considerably longer amount of time I'd need to spend when either drive in a RAID 0 array fails.
It's true that Windows and applications will load faster from a RAID 0 array, but again we're just talking a second or two for even the largest applications. As for Windows starting up, I personally never turn my main box off except when doing a hardware change, so that's not an issue, but for those who do, it's unlikely to be more than five or six seconds difference, so it's hardly the end of the world. It would take an awful lot longer to reinstall Windows XP when one of the drives in the array fails than the few seconds saved each morning.
I also happen to do video capture and processing which involves files upwards of ten gigs in size and feel RAID 0 is worthless here too, provided the single drive you capture to can keep up with the video bitrate (my Maxtor DiamondMax Plus9 7200rpm drive has no trouble at all with uncompressed lossless Huffyuv encoded 768x576 @ 25fps).
When it comes to processing the video, I read it from one drive and write the output to another, physically different hard drive, meaning it works faster than any RAID 0 array ever could--one drive is doing nothing but reading the source file while the other only needs to write the result. With a RAID 0 array, both drives would be constantly switching between reading and writing two separate files, which would result in constant seek-time overheads even assuming the two-drive array was twice as fast as one drive (which they never are).
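This trade-off can be put in rough numbers. Every figure below is an assumed illustration (seek time, transfer rate, file size), not a measurement:

```python
# Rough model (all numbers assumed) of the point above: reading the source
# on one drive and writing the result to another avoids the read/write
# seek thrash a single RAID 0 array suffers when it holds both files.
SEEK_MS = 9.0            # average seek + rotational latency, ms
RATE_MBS = 50.0          # sustained per-drive transfer, MB/s
FILE_MB = 10_000         # a 10 GB capture file
BLOCK_MB = 1             # processing granularity

blocks = FILE_MB // BLOCK_MB

# Two dedicated drives: each streams sequentially and read/write overlap,
# so the job takes roughly one file's worth of transfer time.
t_dedicated = FILE_MB / RATE_MBS

# RAID 0 (2 drives, ~2x transfer rate) holding BOTH files: read plus write
# at the doubled rate, but the heads bounce between source and destination
# every block, paying a seek each way.
t_raid0 = FILE_MB / (2 * RATE_MBS) * 2 + blocks * 2 * SEEK_MS / 1000

print(f"dedicated drives:  {t_dedicated:.0f} s")
print(f"raid0, same array: {t_raid0:.0f} s")
```

Under these assumptions the seek overhead alone costs the array more than it gains from doubled bandwidth, which matches the commenter's experience.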
So IMO, although the article could have included a few more details about the exact setup, it was overall spot on in suggesting you don't use onboard RAID 0 for desktop and home machines. And I'd add that you're better off *without* RAID 0 and keeping the two drives as separate partitions if you're into video editing.
This includes encoding (don't know about rendering, but that can be CPU intensive as well as GPU). Large sequential reads with minimal CPU requirement will benefit from RAID, e.g. simple file merging. You are better off splitting the RAID up for encoding etc. and using one disc as the read and the other as the write, on different controllers.
Games only benefit in the loading stage if large files are required eg bitmaps in Baldur's Gate.
RAID1 has the advantage of backup recovery as well as improved read speeds, which is more beneficial to desktop use than writes. Raid0 has the capacity improvement advantage. So if size is not the problem (and it never is!), Raid1 is better for the desktop than Raid0. I'm sure if they varied the stripe size in Raid1 then game loading times would be improved. Even AT had one game load substantially faster (equivalent to the double-platter 74GB big brother Raptor). Perhaps an analysis of game file structure and loading by AT would be more beneficial to readers.
> "It's simple, really. Locating data on one disk is faster than locating it on two disks simultaneously. That is no matter which controller you use."

Sending the two seek commands versus one should add negligible time. The actual seeks would be done concurrently. The rotational latencies on each drive are independent. Therefore the time to locate the data should be very close to the same as for a single drive.
However, if the time to locate the data swamps the data transfer time, say twenty times as long, then yes, doubling the data transfer rate is not going to show much. So according to this idea, almost all file transfers take place in approximately the seek + rotational latency time, and the remainder of the transfer is negligible. The problem is that the data transfer would be even more negligible for more drives. Let's say the actual data transfer accounts for 5% with one drive. Then it would be 2-3% for 2 drives, and 1% for 4 drives. OTOH, people are claiming that with higher RAID, you do get dramatic differences, not negligible differences.
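The seek-versus-transfer argument above reduces to a one-line model. The latency and transfer figures here are assumed round numbers for illustration:

```python
# Toy model (assumed figures) of why striping helps large transfers far
# more than small ones: total time = seek + size / transfer rate, and
# striping only shrinks the transfer term, never the seek term.
SEEK_MS = 9.0          # seek + rotational latency per access, ms
RATE_MBS = 50.0        # single-drive sustained rate, MB/s

def access_ms(size_kb, n_drives=1):
    transfer_ms = size_kb / 1024 / (RATE_MBS * n_drives) * 1000
    return SEEK_MS + transfer_ms

for size_kb in (4, 64, 1024, 100 * 1024):
    single = access_ms(size_kb, 1)
    raid0 = access_ms(size_kb, 2)
    print(f"{size_kb:>7} KB: single {single:8.1f} ms, raid0 {raid0:8.1f} ms, "
          f"speedup {single / raid0:.2f}x")
```

For a 4 KB access the speedup rounds to 1.00x; only around the multi-megabyte mark does the doubled transfer rate start to dominate the fixed seek cost.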
>Let me get this straight, you think apps today (I assume you mean >desktop/office apps) aren't dependent enough on disk I/O, and should start >to be written so they are more I/O bound?
>I hope you don't mind, but I'm going to put this in the old sig >library for use someday. :)
No you didn't get it straight. Don't worry, Denial, you will understand what it means when they start doing it in the next few years.
But if you need something for your sig, try this: "People have been saying John Kerry eats excrement sandwiches for lunch at the French embassy. No way. Excrement doesn't go with quiche, croissants and chardonnay. Maybe for breakfast."
Actually #72, Anand tested level loading in Far Cry and Unreal 2004, which to my knowledge fit the bill for games you suggested. The result: RAID 0 was equal or actually a little worse. I guess latencies are still more important than bandwidth here...?
You seem to be the only person turned on by my system. Sorry, but I don't swing that way. You'll have a better chance at the internet cafes over in Chelsea.
#71, the statistics and applicability for MTBF and MTTF are a bit complex...so much so that most drive manufacturers themselves usually don't apply them properly (or intentionally mislead people).
Technically, you're correct...RAID0 doesn't halve MTBF. However, for what the average user means by "chance of failure", a two-disk Raid0 array does indeed double your chance of a failure.
As to your comment of Raid-0 loading maps faster...true if it's a large file (10MB+), and probably not discernibly noticeable till you're in the 20-40MB range. For tiny files or heavily fragmented ones, it may even be slower.
This article was very informative. However, the statements that RAID0 cuts MTBF in half and that RAID1 doubles MTBF are statistically incorrect. Also, RAID0 does improve game performance, especially when large game maps are involved (i.e., BF1942, BF Vietnam, Far Cry). It definitely provides an advantage in online FPS games by loading large maps faster and giving RAID0-equipped players an advantage in first choice for weapons and position. Your tests probably didn't measure map loading times.
"What's the difference between losing one 74gig Raptor in RAID-0 array or one 160gig stand-alone drive? THERE IS NO DIFFERENCE!"
There is. The chance of you getting a HDD failure increases with every drive you add. A 2 disk RAID-0 array will have the same chance at failure as 2 independent non-RAIDed drives. The difference is, with the independent drives, you lose one drive's worth of data when it fails. With the RAID-0 array, you lose two.
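The arithmetic behind this point is short (the per-drive failure probability below is an assumed illustrative figure, not a real drive spec):

```python
# With n independent drives, the chance that at least one fails is
# 1 - (1 - p)^n, which for small p is approximately n * p -- hence the
# common shorthand that a two-drive array "doubles" the failure chance.
p = 0.05                      # assumed chance one drive fails in some period

p_one = p                     # single drive
p_pair = 1 - (1 - p) ** 2     # two drives, RAID 0 or independent

print(f"single drive: {p_one:.4f}")
print(f"two drives:   {p_pair:.4f}")
```

The two-drive figure is slightly under double the single-drive figure, and the difference in consequence is exactly as stated: the independent pair loses one drive's worth of data, the RAID 0 pair loses both.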
I keep hearing about double the cost and the additional risk associated with a RAID-0 array.
First off, double the price gets you DOUBLE the capacity of a single drive. It's a wash price-wise. On top of that, you increase disk performance by up to 20+%. Normally, there tends to be a decrease in performance as capacity increases when comparing similar-generation drives.
Secondly, with regard to risk. What's the difference between losing one 74gig Raptor in a RAID-0 array or one 160gig stand-alone drive? THERE IS NO DIFFERENCE! If you don't have a recent backup, you've lost everything. Just spend an additional $90 to buy a backup 160gig 7200rpm IDE and use Acronis to do a complete disk mirror every week or two. If you lose a RAID-0 drive, you can just boot off of the backup drive and be up and running in a matter of minutes. Worst case, you've lost your most recent work only.
Power to the Raptors and I think I'll stick with my RAID-0 array!!!!!!!
I think #63 summed it up pretty well. For most real world usage the RAID0 setup never gets to shine because of the ratio of seek times/data transfers. Lower (7200) RPM drives will only compound the situation since their seek times are worse. Finally, add to this phenomenon the fact that the ratio of seeks will increase over time as fragmentation increases.
Which begs the question of how well defraggers work in a RAID 0 setup? Anybody know?
#61, jvrobert is right in saying that the advantages of a raptor-raid0 are restricted to faster boot-up times and smoother handling of very large files, eg when it comes to dv.
Although I have heard similar statements before, whether a RAID0 array of two 160GB Samsung drives comes close to the performance of a single 74GB Raptor drive, I don't know - I would surely appreciate being pointed to an appropriate review or at least a couple of significant benchmark results.
First, it doesn't matter much what card you use for RAID 0. There's no parity calculation, so onboard hardware won't help much. Probably Windows striping, Intel RAID, VIA RAID, Highpoint RAID, etc., are within 1 percent of each other.
Second, this is a limited test that comes to an overgeneralized conclusion. As some have mentioned - these are raptors. I have a single raptor as my OS disk. Where RAID helps is with slower drives - you can get a "virtual" raptor of e.g. 360GB by buying 2 cheap, quiet, cool Samsung spinpoints.
Third (OK, 3 points) - it only tests games (which don't use much hard disk IO) and business (again, disk speed doesn't matter much). I'm getting into video now, and RAID 0 will certainly improve performance there. It will also help with load times of the OS and of large applications.
So the article comes to an over-general conclusion limited on a few quick tests.
Let's not forget the target audience of this article: home users. The point of this article was that a two disk RAID0 array has little to no benefit whatsoever for the home user and/or gamer, which is exactly the primary consumer who'd make use of Intel's onboard RAID controller.
Power users with multiple-disk RAID setups would be VERY unlikely to use an onboard RAID controller, opting instead for a dedicated RAID controller with onboard cache, processor offloading, etc.
And as others said, the controller makes very little difference in a two-disk RAID array. It's only with multiple disks (4+) that the controller starts showing its importance by managing read/write requests, caching data, etc.
In summary, for those of you who were expecting a full blown RAID review, go over to Storage Review where they specialize in those types of tests. This article was simply showing that onboard RAID is really quite useless for its target audience.
Funny how quick people are to dismiss an article the moment it doesn't confirm what they already believed...
I might be the only one here, but I'm not really surprised by this article in general. RAID has its place, yes, but not as a desktop system.
Yes, bandwidth goes way up, but so does latency. Instead of locating a file on one drive, you have to locate it on two drives, before you can even start the transfer. With sequential transfers, RAID is obviously faster, but with multiple smaller accesses, it will be slower. There's no magic in it, no faked results, and no incompetent and biased authors of that article.
It's simple, really. Locating data on one disk is faster than locating it on two disks simultaneously. That is no matter which controller you use. Yes, a faster controller might mean a smaller performance penalty, but doesn't change the fact.
The most expensive part of I/O is the seek time. The actual transfer is fast by comparison.
The problem is that RAID aids the already acceptable transfer speed, but slows down seek time, which was already a bottleneck.
So yes, it can improve performance, but only if you have large sequential reads/writes, where you don't need to waste time seeking, and where the faster transfer really becomes useful.
In other words, *not* on normal desktop systems, and not on normal gaming systems.
> "I'd like to get to the truth about RAID0 for > desktop users like myself."
RAID0 really isn't significantly faster for most users and apps. It's not due to the reason KF states, though--HD performance is still very important to most apps.
But a RAID array doesn't increase performance across the board. Bandwidth goes up sharply...but latency rises as well. The only apps you'll see large gains in are ones that favor bandwidth much more than latency-- such as streaming huge files in a diskbound mode.
The Intel onboard raid controller isn't the best one out there. You can buy a dedicated card and scrape another couple percentage points out. A small gain for the dollars invested.
This is my first post in this forum. Let me start by saying that anandtech.com appears to be a great place to get news, and I've enjoyed the articles so far.
While I agree that the "Raptor RAID0" article has some issues, I fail to see how so many of you can dismiss the results, and even the conclusions.
Anand has presented a real-world test of a commonly used RAID0 setup against commonly accepted benchmarks. Frankly, I'm astounded by the number of "I don't care what his results show, my RAID0 setup is faster" comments. If your array IS faster, please post some evidence! There is way too much anecdotal assertion on this thread for my taste.
Honestly, I'm poised to purchase a couple Raptors for a desktop RAID0 setup--based on the general yahoo about the performance benefits of RAID0. I was surprised and concerned to read this article, and the similar articles linked-to in this thread. As someone on the verge of dropping several hundred dollars for the supposed increased performance, I'd like to get to the truth about RAID0 for desktop users like myself.
I appreciate KF's thoughts on "why" RAID0 doesn't make a difference, and I'd like to hear more opinions and thoughts--especially opinions backed up by some kind of evidence!
Anand pretty much (except for the game tests) confined his test to synthetic benchmarks. Anyone have any results with actual applications and/or files?
Specifically, I plan(ned?) on using a dual 74Gb Raptor RAID0 array as a scratch/capture disk for DV work. DV files are huge (multiple Gb), and disk speed is important for smooth and error-free capture from a DV camera. Any thoughts?
> "I can tell you for a fact that my 8 disk RAID > 10 array, with 15k 73GB Cheetahs, running on a > LSI 320-2, installed in a 133MHZ PCI-X slot..."
Is it just me, or does Denial sound like he's trying to score chicks by bragging about the size of his array?
Oh, and BTW Denial...the servers your employer use don't count. You're either a liar for claiming you run this setup in your personal desktop...or an idiot if you're telling the truth.
Nice article, but very incomplete. Next time please include chipsets from VIA & NVIDIA, and more modern drives are available, like the Seagate 200GB. Also include tests with RAID 1.
No SCSI drives, keep it real, most ppl have SATA or ATA drives.
There have been A LOT of issues/concerns raised by various people here regarding things like benches and configuration setups etc. that were left out of the article. I think it would be great if there were a follow-up article to this one in which these issues were addressed and the previous points further explained.
>>> Indeed, if ALL :) the issues were addressed in the said follow-up article, it may end up being the most comprehensive RAID report/review ever!
Anyways, something for the guys at AnandTech to think about - I think it's hard to overlook the fact that a lot of people are feeling quite a bit of discontent at the way this article hit upon its (pre-concluded :) ) conclusion.
"Then programmers (in some cases) will write their programs differently amd the extra speed of RAID 0 will show more in real-life benchmarks."
Let me get this straight, you think apps today (I assume you mean desktop/office apps) aren't dependent enough on disk I/O, and should start to be written so they are more I/O bound?
I hope you don't mind, but I'm going to put this in the old sig library for use someday. :)
Denial: You are in denial. The results of Anand's simple-to-understand test are the same as the results that have been reported in overwhelmingly mind-numbingly-detailed reviews at specialized storage sites. This just happens to be about the IMPORTANT latest incarnation, which will no doubt put RAID capability on 90% of new computers, once the Intel production machine is rolling. Until Pariah opined, I wondered if I was the only one that understood those reviews, the way people seem to tout RAID 0 so relentlessly.
Maybe this will be simple to understand: The authors of programs know what a slug their programs would be if they wrote them in such a way as to depend on the slowest link in the chain; namely the HD. Therefore HD accesses are avoided at all costs, and everything accessed is cached (in memory). The OS (Windows) caches everything out the wazoo as well. In other words: all algorithms are selected to preserve locality. Therefore HD speed only shows up during initialization and where there is no way to arrange locality. Therefore real-life benchmarks have a small dependence on HD speed.
Since HD I/O is interrupt driven, and transfers are DMA, a program does not have to just sit and wait until the I/O is performed. It can do useful work concurrently provided the I/O algorithms look-ahead. Then the data will be there (most of the time) before it is needed.
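The look-ahead overlap the paragraph above describes can be sketched in Python with a prefetch thread: while one chunk is being processed, the next read is already in flight. This is a toy illustration only; the chunk size and the `process` stand-in are invented, and a real OS does this with interrupt-driven DMA rather than threads.

```python
import threading

def process(chunk):
    # Stand-in for the useful CPU work done while the next read is in flight.
    return sum(chunk)

def read_with_lookahead(path, chunk_size=1 << 20):
    """Overlap disk reads with computation: while one chunk is being
    processed, a background thread fetches the next one."""
    results = []
    with open(path, "rb") as f:
        nxt = f.read(chunk_size)
        while nxt:
            cur, box = nxt, {}
            t = threading.Thread(
                target=lambda: box.setdefault("data", f.read(chunk_size)))
            t.start()                     # next read proceeds concurrently
            results.append(process(cur))  # CPU work overlaps the I/O
            t.join()
            nxt = box["data"]
    return results
```

If `process` takes about as long as a read, the data is indeed "there before it is needed" and the HD mostly disappears from the program's wall-clock time.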
As for why the loading of games does not show a RAID 0 boost, I can only guess that they are doing a lot more than just loading HD data into memory. Possibly most of the HD I/O was done before the point that timing was done, and the slowness at that point is due to other operations. Pre-calculating known physics? Buffering major scenery changes?
I still think people could get a feeling of extra speed during times when the HD IS loading. It may only be a tiny part of the whole time a program is run, but you could notice it during that time.
Furthermore, if the past is a guide, every new capability that becomes commonplace gradually is made more and more use of, especially where Intel is concerned. (AGP, 2xAGP, USB, DMA66, SSE.) So Intel putting RAID 0 in its chipset means RAID 0 will be used more and more. Then programmers (in some cases) will write their programs differently and the extra speed of RAID 0 will show more in real-life benchmarks. Before that comes about, people will correctly warn that the extra money buys you very little. Fortunately for the rest of us, there are a few people willing to pay for that extra bit, which gets the ball rolling.
#45, #46: Generally the reviews I've seen on RAID1 have the read rates equal or a little bit more than a single drive, while RAID0 shows 30%+ improvement. Why? I don't know. To me, in the read part of the deal, it should be the same in RAID 0 or 1. With my suggestion of virtual striping, I also suggested variable stripe size in a previous post (not possible in RAID0 but possible in RAID1 because the stripes are virtual). Here a smaller stripe size could be used for smaller data-size requests and a larger stripe for bigger files or sequential data requests. This would speed up reads significantly and give a net advantage over RAID0, which is limited to one stripe size at inception. The controller, on request for a particular data file, would optimise the size of the stripe based on the request. For desktops, where data throughput can range from a few k to gigs, it would be perfect. This seems possible to me but I haven't heard of anyone implementing it.
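The variable-stripe idea above can be sketched as a controller policy: pick a stripe size from the request length, then alternate the virtual stripes between the two mirror copies so both spindles serve one read. Everything here is hypothetical; the size thresholds and the two-copy split are invented for illustration, since no shipping controller is known to do this.

```python
def pick_stripe_size(request_len):
    """Hypothetical policy from the comment: small requests get a small
    stripe, large/sequential requests a big one. Thresholds are invented."""
    if request_len <= 64 * 1024:
        return 4 * 1024
    if request_len <= 4 * 1024 * 1024:
        return 64 * 1024
    return 1024 * 1024

def split_read(offset, length):
    """Split one request into per-mirror sub-reads, alternating virtual
    stripes between the two copies (both hold every block, so either
    copy can serve any stripe)."""
    stripe = pick_stripe_size(length)
    plan = {0: [], 1: []}
    pos, i = offset, 0
    while pos < offset + length:
        n = min(stripe, offset + length - pos)
        plan[i % 2].append((pos, n))  # same LBA exists on either mirror
        pos += n
        i += 1
    return plan
```

A 256 KiB request would be cut into four 64 KiB stripes, two per disk, so both drives stream half the data in parallel.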
If you look way back at comment #37 you will see my last paragraph is basically exactly what you said in your last paragraph. I agree completely that the article stunk, and that basically all the storage related articles on this site throughout its history have stunk. I just think that your nitpicking of his usage of the word RAID in the conclusion was one of the least important problems in the article, as anyone with half a brain knew what he was talking about when he said that, regardless of whether it was a valid point or not.
As you can see by what I wrote above, I agree with you. What he meant and what he said are different things though. The fact that he left out too much of the system's configuration for us to see if it was configured properly means that the test cannot be replicated and is therefore useless. Whatever you think regarding IDE RAID 0, which I have no experience with, you cannot dispute the fact that leaving out critical configuration settings is a no-no for a review such as this. When he runs those gaming tests for one of those new overpriced video cards, he includes quite a bit more detail in the review, yes? Suppose he didn't say what resolution the game was run at, AA on or off, filtering level, etc. Would you be able to verify his results? No.
Why does he spend so much time and put so much detail into a review for items as "important" as a 3d card, then completely blow it on this review? I don't have an answer to that, do you? My assumption is that he is not well versed in modern storage technologies and how the file system and many other details play a major role in overall performance, in which case he should have had somebody else perform the tests and write the article, as he does with many of the other articles on the site.
Denial, you need some help in determining "target audience." 99% of home users using RAID or thinking about it are likely thinking about a 2-drive RAID 0 array. Those 99% of users are who this article is targeted at. And when he says RAID in his conclusion, the setup he tested (2-drive RAID 0) is what he means.
If you are thinking about going with something more complicated/advanced, then this article was NOT for you.
"If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop."
This statement is so far-fetched it is ridiculous. This might be applicable to Raptors on an Intel onboard garbage RAID controller, but the above is a general statement. Maybe if it was changed to
"If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for an ATA RAID-0 array on a desktop computer using an Intel onboard RAID controller."
Written this way his conclusion might be correct, but the way he wrote it, it's flat-out WRONG.
#48, your post was even more worthless than Anand's article. At least Anand's article had a setup that was remotely applicable to what the users he was targeting with the article would have.
Despite what numerous people seem to think in the comments, for a 2 drive ATA RAID 0 array, the controller you use is about as irrelevant an issue as there is. $500 card, free onboard, hardware, software, me using a piece of notebook paper and crayons to calculate drive assignments for data will not make any noteworthy difference in performance.
The above comments about stripe size are true as well. Depending on the app, a 64k array stripe matched with a 64k stripe in NTFS will produce much different results than the default 4k(?) that windoze uses on NTFS partitions. We're talking a HUGE *HUGE* **HUGE** difference in performance here. HUGE!!!! I've never used one of those onboard raid solutions; do they even allow the option of setting the stripe size? I wouldn't be surprised if they didn't, which would make this review even that much more useless.
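The stripe/cluster mismatch above comes down to simple address math. A sketch (the 64 KiB stripe and 4 KiB cluster figures are just the examples from the comment, not measured values):

```python
def disk_for_offset(offset, stripe_size, n_disks=2):
    """Which member disk a byte offset in a RAID-0 array lands on."""
    return (offset // stripe_size) % n_disks

# With a 64 KiB stripe, sixteen consecutive 4 KiB clusters all land on
# the same disk before the array switches spindles, so a string of small
# sequential requests keeps one drive idle. Matching the file-system
# cluster (or request) size to the stripe engages both disks per request.
```

This is also why a small random read sees no benefit: it lives entirely inside one stripe on one disk.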
What kind of scientific testing was this? ALL RAID 0 is useless because an Intel onboard RAID solution sucks? Why did Anand waste his time on this?
I can tell you for a fact that my 8 disk RAID 10 array, with 15k 73GB Cheetahs, running on a LSI 320-2, installed in a 133MHZ PCI-X slot on my dual Xeon 7505 motherboard (Vero) is just a *tad* faster than a single drive setup. ;)
Anand must be on crack making a blanket statement like that. Is this the best he can give us after a few years of college? What a sorry article.
#1 that's called placebo effect! Same goes for any other "feels" faster hardware such as the A64. The user spent the money, expecting a return, and it gave a false positive.
I have been droning on about how RAID 0 is a worthless waste of money ever since I read this:
As others are stating, RAID 1 on any *decent* RAID controller should have faster read rates. www.StorageReview.com has shown this in a recent article. So the statement "We won't be benchmarking RAID-1 here because, for the most part, there's no performance increase or decrease" while true in part (you didn't perform the benchmarks), was a bad decision, as performance should differ from a single drive. Of course, for single-user usage, RAID 1 would be even less useful than RAID 0 - except for adding redundancy.
It's a shame that a RAID 1 array with the same Raptor II (and 7200 RPM drives) wasn't benchmarked. The read performance of RAID 1 can be as good as that of RAID 0 on a good controller. The only case where that's not true is if a lot of writing, which is slower than on either RAID 0 or a single disk, is being done to the array while reads are in progress.
Yeah I agree something's wrong with this article. Anand needs to check his setup closely.
Insomniac, #14 you were right the first time: if you had a slower disc in an intelligent RAID1, you ought to read from the faster disc exclusively (I don't say that present controllers can). Also your suggestion on striped reads in RAID1 is good, and mentioned also by Arth 1 #34. But as far as I am aware, inexplicably, RAID1 doesn't do this anymore (perhaps on more expensive controllers); see: http://arstechnica.com/paedia/r/raid-2.html
"I should note that this discussion is based on the more recent, er, modern definition of RAID 1. The original model for this config actually included striping (as in RAID 0), and not simply "disk duplexing." In the end, however, the duplexing model is what the industry uses, and RAID 1 is synonymous with that. Therefore, notice that RAID 1's contribution to the world of storage technology is the principle of data mirroring"
But they do say earlier and strangely: "Now here's an oddity: a read transaction can theoretically occur twice as fast as on single disk. Hence RAID 1 is often used on low-end web servers. The read performance is standard, if not better than single disk performance, and the poorer write performance is largely irrelevant on most web servers (save those doing transactions, of course). RAID 1 configs are great for mid-volume FTP servers as well."
From what I gather, modern implementations of RAID1 are only a little better at reads due to the extra buffering and faster controllers. In terms of RAID, a RAID1 with virtually striped disks is the way to go for a gamer. It ought to give you faster loads as well as backup, at the cost of slightly slower writes (not a great problem for gamers) and smaller storage (doesn't matter; big HDDs are cheap nowadays). Yet it all but seems to be ignored by the manufacturers and IT industry as only relevant to servers.
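The RAID-1 read-balancing that several comments debate can be shown with a toy scheduler: since either copy holds every block, independent reads can go to alternating disks, while writes must hit both. Real firmware would weigh queue depth and head position; this round-robin version is only a minimal illustration of the principle.

```python
import itertools

class Mirror:
    """Toy RAID-1 scheduler: a 'decent' controller can serve independent
    reads from either copy (roughly doubling read throughput under load),
    but every write must be duplicated to all copies."""
    def __init__(self, n_copies=2):
        self.rr = itertools.cycle(range(n_copies))
        self.queues = [[] for _ in range(n_copies)]

    def read(self, lba):
        d = next(self.rr)            # either copy holds every block
        self.queues[d].append(lba)
        return d

    def write(self, lba):
        for q in self.queues:        # writes go to every copy
            q.append(lba)
```

Controllers that skip this optimization are why most cheap ATA RAID-1 setups read no faster than a single drive.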
As much as I usually agree with articles posted here, I think the reviewer wasn't thinking clearly. My point is that RAID-0 is most obviously beneficial when working with LARGE files: big Photoshop TIFFs, RAW audio files, video, and 3D graphics. Running tests that make use of office applications isn't going to demonstrate the array's function. Kind of like driving a Ferrari in a residential neighborhood isn't going to demonstrate any real performance.
My computer has three identical 160GB drives. Two are in a RAID-0 array and the third is my 'mirror' where art and other important files are backed up. Pulling the exact same file from the working array takes about half the time as it takes when pulling it off of the backup drive without the array. Exactly what would be expected. If you need more speed in loading/saving large files then RAID-0 is for you. Since none of the tests in the article test this obvious advantage I would have to say that the test is either flawed or the writer didn't think through what a RAID-0 array is best at.
If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop.
And in another article:
Recommended: Dual Western Digital Raptor 74GB 10,000RPM SATA in RAID 0 Configuration
"I really wish some regular 7200 RPM drives had been used, considering someone who can afford a 74GB Raptor won't care about the costs of RAID anyway."
Wouldn't make any difference, you can extrapolate the performance of 7200RPM drives by looking at the improvement of the Raptor from one drive to 2 drives. You're not going to see a 5% increase with the Raptor and a 45% increase with 7200 or -30% loss. You'll see just about the same increase/decrease.
"I thought if it was smart, it would use both drives and improve performance."
True, IF it was smart. Unfortunately, basically every ATA/SATA RAID controller does NOT load balance reads in RAID 1. 3Ware controllers do and Highpoint controllers that advertise RAID "1.5" support do as well, I'm not aware of any others that do, though there may be random other models.
"If that wasn't the case, I wondered if you could choose which drive it read from."
No, there is no "drive affinity" setting for RAID arrays.
"But the one really fundamental thing wrong with the whole comparison is that you didn't actually compare RAID0 with a decent RAID 0 card like HighPoint RocketRAID."
Changing the controller would make pretty much no difference whatsoever for the configuration that was tested in the article (2 drive RAID 0 array). Software cards will perform just as well as a $500 3Ware card in 2 drive RAID 0 arrays. As you add drives and use increasingly complex RAID levels, then the controller will play a significant role in overall performance.
"Why bother to waste space describing in detail the differences between RAID 1 & 0 if no benchmarks from a RAID 1 are going to be included in the article??"
I agree, didn't understand that myself.
"AND the differences in CPU utilization between them. Most of the onboard solutions are actually SOFTWARE RAIDs as compared to a true dedicated hardware device."
For RAID 0 and even more so for RAID 1, CPU utilization is irrelevant as far as the controller is concerned, because there are practically no calculations necessary for a 2 drive RAID 0 array. For RAID 1, there really aren't any at all.
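The "practically no calculations" point can be made concrete by contrast with a parity level. RAID 0 only remaps offsets and RAID 1 only duplicates writes; it is RAID 5 that must XOR every data stripe per write, which is where software RAID actually burns CPU. A minimal sketch:

```python
def raid5_parity(stripes):
    """XOR all data stripes to build the parity stripe - the per-write
    CPU work that parity RAID pays and RAID 0/1 do not."""
    parity = bytes(len(stripes[0]))
    for s in stripes:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

# Recovering a lost stripe is the same XOR over the survivors:
#   lost = parity XOR (all remaining data stripes)
```

So for a 2-drive RAID 0 or RAID 1 array, there is simply no equivalent loop for the host CPU to run, which is why "hardware vs software" barely matters there.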
"I use 2 Seagate 15k.3's in RAID 0 on an Adaptec 39320 Host RAID device. It sure feels faster than a single drive to me."
Can you say placebo? Sorry to hear you wasted your money on a garbage controller like that. Adaptec controllers are widely known to be horrendous performers. Adaptec is the only company I'm aware of that has released RAID controllers for any interface that actually don't perform better in even low-level benchmarks in RAID 0 than they would with just a single drive.
"Onboard RAID (and most cheap raid cards such as Promise) are technically software RAID cards and usually do not offer any speed increases over 5%. True hardware RAID cards offer speed increases at about 40% (as shown in the past)."
Maybe 5+ years ago, with a Pentium 90 and non-DMA ATA drives. Not true at all anymore. Moving to hardware for a simple RAID 0 array will net you nothing in additional performance.
"I was surprised not to see any Iometer benchmarks. IOPS and response times are king in determining disk performance. Iometer is still the best tool, as you can configure workers to match typical workloads."
IOMeter is a glorified access time benchmark that doesn't give anything in the way of useful applicable results for home users.
Though the results of the article are not surprising, it was still a pretty poor read overall. Anandtech needs some more work on its storage articles if it wants to catch up to other sites like SR, Tech-Report and Digit-Life. The overall knowledge displayed in articles is noticeably lacking, well below the standards set in its CPU and video card articles.
Why does this site blast a Raptor RAID-0 array in this article so badly and recommend a Raptor RAID-0 array in their high-end buyer's guide so highly? Seems like a massive 180-degree shift to me. Very curious...
The major problem I have with this article is that the 20% to 38% improvement in IO operations with RAID 0 in the first benchmarks is dismissed as "not much", and then the "proof" that RAID 0 doesn't improve performance is Winstones and SysMark, which are sequential or linear benchmarks. I can also tell you that Winstones and SysMark provide about the same scores with onboard Intel video as they do with an X800 XT, and this does NOT prove to me that high-end graphics are a waste of money; it just proves that Winstones and SysMark are not a good tool to measure graphics performance.
The review seems far too strained to prove a pre-concluded idea, IMHO, and really doesn't prove anything except that Winstones and SysMark are terrible tools for comparing hard drive performance. The tests in Winstones, as I understand them, are not my real world; they are an office-worker-running-one-task-at-a-time world. I DO multitask on my computer, as do most users today, and this is where RAID 0 DOES make a difference. Where are the benchmarks that compare performance in multitasking situations?
The article contains several factual errors. RAID 1, for example, does have *read* speed benefits over a single drive, as you can read one block from one drive and the next block from the other drive at the same time. Also, what was the block size used, and what was the stripe size? Was the block size doubled when striping (as is normally recommended to keep the read size identical)? Since non-serial-ATA drives were part of the test, how come THEY were not tried in a RAID? That way we could have seen how much was the striping effect and how much was due to using two serial ATA ports. All in all a very useless article, I'm afraid
Regarding Intel Application Accelerator, I would like to know if that was installed or not as well. It seems to me that could potentially affect performance quite a bit. But perhaps it doesn't make a difference? Either way, I would like to know.
It's funny to see mention of ATA and performance. If you really want disk performance, get some real SCSI drives. Without tagged command queuing, RAID configurations aren't able to reach their full potential.
It would be interesting to see hardware sites measure SCSI performance. Sure, ATA has the price point, but with 15K SCSI spinners so cheap these days, the major cost is the investment in the HBA. With people dropping 500 bucks on a video card, why is it so inconceivable to think power users wouldn't want to run with the best I/O available?
I was surprised not to see any Iometer benchmarks. IOPS and response times are king in determining disk performance. Iometer is still the best tool, as you can configure workers to match typical workloads.
Show me a review of the latest dual-ported Ultra320 hardware RAID HBA striped across four 15k spinners. Compare that with a 2-drive configuration and the SATA stuff. Show me IOPS, response times, and CPU utilization. That would be meaningful, as people could better justify the extra $200-300 cost of going with a real I/O performer.
Of course, RAID 0 makes little sense for raptors, which are already so fast that they hardly form a bottleneck.
RAID 0 makes more sense for slower, cheaper HD's...try 2 WD 80GB 8MB cache harddisks, for example. Together they are cheaper than a raptor, but I expect performance will be very similar, if not faster.
I am tired of seeing these RAID 0 articles just throw 2 disks together, get results that are contrary to what is expected, and not dig deeper into what the problem is. I am only posting my comment here because of my respect for this site. Drive technology and methodology have to play a part in any discussion of RAID technology.

The principle behind RAID 0 is sound. The throughput is a multiple of the number of drives in the array (you will not get 100%, but close to it). If we are not getting this, it should be examined as to WHY. One of my suspicions is that incorrect setup of the array is the primary culprit. How is information written to/from the array and to the individual drives in it? What are the cluster and sector sizes? How is the information broken up by the controller to be written to the array?

Take for example an array where each drive has a minimum data size of 64 bits, so you have array sizes of 128 bits for 2 drives, 192 bits for 3 drives and 256 bits for four drives. In initializing your array, do you initialize for 64, 128, 192 or 256 bits? Does it matter? Say you initialize for 64 bits: does the array controller write 64 bits to each drive? Or does it write 64 bits to the first drive and 0 bits (null spaces, wasting the extra drives and defeating their purpose) to the others, because it is expecting the array-size number of bits (e.g. 128 bits for 2 drives)? Or does it split the 64 bits between the drives, wasting space and killing performance because each drive allocates a minimum of 64 bits?

I was waiting for someone to examine in detail what's happening. Xbitlabs came close (from looking at the charts), so close they could almost taste it I am sure, but still jumped to incorrect reasoning.
I know I am rambling but in short the premise of RAID arrays are sound so why is it not showing up in the results of the testing?
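One answer to "why doesn't the sound principle show up" can be put in a toy cost model: striping divides only the transfer time of a request, not the seek, so small random desktop I/O sees almost nothing while long streams scale nearly linearly. The 8 ms seek and 50 MB/s figures below are illustrative assumptions, not measurements.

```python
def request_time_ms(size_kb, n_disks, seek_ms=8.0, mb_per_s=50.0):
    """Toy model of one request on an n-disk RAID-0 stripe: the seek is
    not reduced (heads on all members move), only the transfer is split."""
    transfer_ms = size_kb / 1024 / mb_per_s * 1000
    return seek_ms + transfer_ms / n_disks

# A 4 KiB random read: ~8.08 ms on one disk vs ~8.04 ms on two - no gain.
# A 100 MiB stream: ~2008 ms on one disk vs ~1008 ms on two - near 2x.
```

Desktop benchmarks like Winstone are dominated by the first case, which is consistent with the article's flat results even though the striping itself works.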
AMDScooter is right on. Onboard RAID (and most cheap raid cards such as Promise) are technically software RAID cards and usually do not offer any speed increases over 5%. True hardware RAID cards offer speed increases of about 40% (as shown in the past). This varies of course with the implementation, but on average hardware RAID has been shown to increase performance much more than these cheap RAID implementations. Regs needs to look into what he's talking about more, because performance advantages are not lost in advertising.
I do some video editing and I'm wondering about the performance gain of, say, reading a 4GB file and directly writing it again (i.e. a copy) in a RAID or non-RAID configuration. I'm using a single HDD right now, but I'm thinking of going to 2 HDDs, reading from one HDD and copying to the other. But I'm wondering if a RAID configuration will offer similar advantages?
I am building myself a new system this year, and I am seriously thinking of getting 2x250GB Western Digital Caviars (SATA) and making them into a RAID 1, for redundancy purposes. I already knew that RAID 0 offers little real-world improvement, but I would like to see how it compares to RAID 0 and just a single drive. I have never understood why you bother comparing 8 normal drives and only one of them in RAID 0.
Why not rerun the tests with just a single type of drive: one standard (stand-alone), one RAID 0 and one RAID 1. All things being equal, this should give a better indication of just how well any drives should do in those configurations using that RAID chip. (Yes, there will be some small differences, but they should end up being negligible.)
I would recommend choosing your favorite three drives, and doing a comparison of each RAID version on that.
I hope this does not come off as a bash, as the review was informative to some extent, but I feel it is lacking in several areas. Why bother to waste space describing in detail the differences between RAID 1 & 0 if no benchmarks from a RAID 1 are going to be included in the article?? And as mentioned earlier, using only a single onboard RAID solution has some merit for parity in benchmarking but is hardly definitive. This would have been a more well-rounded review just by adding some RAID 1 benchmarks along with benchmarks from different RAID IDE/SATA controllers AND the differences in CPU utilization between them. Most of the onboard solutions are actually SOFTWARE RAIDs as compared to a true dedicated hardware device. It would also have been nice to see some SCSI RAID benchies tossed in the mix. SATA drives are almost at the same price as entry-level 15k rpm SCSI U320 drives. While SCSI RAID is not on any normal desktop mobos, many users purchase separate RAID cards anyway. I use 2 Seagate 15k.3's in RAID 0 on an Adaptec 39320 Host RAID device. It sure feels faster than a single drive to me ;)
This article and previous RAID-related ones I've read here have all seemed to be the opposite of the results I've seen with my setup. I have a Promise TX2000 RAID controller and four IBM/Hitachi 180GB 7200rpm drives.
Originally I only had one of the Hitachis, and when I went to a 2-drive RAID 0 the performance increase was definitely noticeable. I won't bother repeating any benchmarks I have of it because I don't feel they really tell anything, nor do I still have any records of them. But the places I could see noticeable improvements were application loads, game loads and, most significantly, Windows boot-up, especially once the Windows install had become old and lots of apps were all trying to load at the same time.
Then last fall I purchased another 2 Hitachi drives and decided to test out a 4-disk RAID 0. Now did that thing fly: application loads were almost instantaneous for all but the largest programs, and my performance was limited almost entirely by the PCI bus (oh how I hate thee), as I was achieving average transfer rates of 120 MB/s as reported by Sandra and HDTach.
Then recently (yesterday, to be exact) I purchased 2 SATA Hitachi 250GB drives and hooked them up as RAID 0 on my onboard SATA RAID controller (a Silicon Image 3112 controller on my Albatron KX18D). Here I would achieve about 65 MB/s transfer rates. This seems on par with what I would expect, but then I noticed that CPU usage with the SATA RAID was around 55%, while it was only about 5% with the Promise IDE RAID.
Even though the average transfer rate of the new array is greater than one drive, performance with programs running off of it doesn't seem any faster than a single drive.
My only thought is whether these onboard RAID solutions use up so much overhead that the performance increase is negligible. All my experience with them seems to say RAID 0 on them is useless, but RAID 0 on a dedicated controller seems to increase performance drastically.
I usually like your articles Anand. But the one really fundamental thing wrong with the whole comparison is that you didn't actually compare RAID0 with a decent RAID 0 card like HighPoint RocketRAID.
I was wondering how RAID1 reads back data. I thought if it was smart, it would use both drives and improve performance. If that wasn't the case, I wondered if you could choose which drive it read from. That way, you could get a WD Raptor II and a low-cost 80GB hard drive to pair up. You get the redundancy and speed of the Raptor for a lower cost. What about RAID 5? (I know the ICH5/6 doesn't support it, but I thought there were some chipset makers that did.) I would like to see what that brings to the mix. Given the choice right now, I'd take redundancy over performance. Maybe RAID 5 can give you both for less than 0+1.
I really wish some regular 7200 RPM drives had been used, considering someone who can afford a 74GB Raptor won't care about the costs of RAID anyway. =P Besides, to me it seems like Raptors already perform so well that it's hard to find any performance gain anyway. I was also under the impression that a lot of people with SATA drives in RAID 0 were actually getting much more noticeable performance gains; i.e. outperforming lone Raptors. Well, whatever.
I noticed that they came to the conclusion of only using 2 drives in a RAID setup, but in my experience, the more drives, the greater the supposed increase in performance. Perhaps they should revisit this with 4 Raptors in a RAID setup.
While your overall disk-bound throughput may be higher, seek times are still only as fast as the slowest drive in the array. Since seek time is a more important desktop performance metric, I would think there would be very little benefit to doing this.
Well, the review was nice if you are thinking of running 2 Raptors on ICH5/6 SATA ports, but what about the other 99% of us that may use VIA, SiS, etc. and/or other 7200 rpm hard drives?
I'd be interested in seeing how using RAID0 with older drives, or one old drive and a newer drive, works out. If you're upgrading your motherboard, then given that RAID comes "for free", it could be a good way to save money by buying a second, smallish hard-drive, and using your old hard-drive with this new one in parallel...
Very good article. The results are not surprising. I have one comment about RAID1. While in theory it is simply a data redundancy mechanism, in practice there are performance benefits. Any good RAID1 algorithm will use read optimizations that allow for parallelism across read requests. Thus, under the right conditions, most RAID1 arrays will achieve higher read IOPS than a single drive. Also, there may be a performance hit on writes due to the fact that writes will only be as fast as the slowest drive.
This article doesn't seem to be up to the standards I've come to expect from Anandtech.
It would be more fair to say "Intel's onboard RAID 0 solution offers no performance gain." I'd be interested to see results from other RAID controllers. You can't take one product and make a blanket comment like "RAID 0 is not worth it." That would be like me reviewing an NVIDIA Vanta graphics card and saying "3D acceleration is not worth it."
Any subjective comments on whether the system using RAID-0 feels any smoother? A lot of people comment that P4s with Hyperthreading produce a system that just feels more responsive regardless of whether it's really any faster.
I find the best thing to do (under Windows) when you've got two drives hooked up is to move your virtual memory onto the one which you use less. There are all sorts of tricks you can use to distribute your system load without necessarily using RAID.
I recently did a test by copying a few GB of data from a WD 160 GB drive to another WD 160 GB drive. It took about 4 mins.
I then renamed the folder that I just copied and then copied it back to the original drive, and again got about the same time, with only a few seconds difference.
I timed my boot from Windows from the time the OS takes over, all the way to the desktop, and it took about 35 seconds. I do NOT have any bloatware or junk on my system.
Finally, I enabled RAID 0 for these two drives. Now the same version of Windows boots up in about 25 seconds (not as fast as you'd think). Also, copying the same folder from my 3rd hard drive to my RAID 0 drives is taking 1 minute and 45 seconds. The seek time itself may be still slow, but once you get the data going, it'll definitely help out.
I play EVE Online, and at any given time I can be running 3 clients of EVE, a music player / video / text-to-speech program, and a browsing client with usually 5-7 tabs, and sometimes I even want to be able to extract files at the same time. I think for that kind of usage RAID 0 would be very worth it. Did you even consider that a lot of users do multiple demanding tasks at once?
Yes, when running one application RAID 0 is usually useless. But most of the time I am running 2-3 clients of a heavily HDD-reliant game, where sometimes it takes a while to get the files for the 3D models, and because of that they won't show up on the screen for up to 5 seconds. I know it's nothing else but the HDD, because I have a new computer and the only piece of hardware that hasn't been updated is the HDD, and I'm still getting the delay.
If I were able, I would also be running an HD movie, or having my computer read a book to me with a text-to-speech program, or playing music, and maybe also extracting something with WinRAR. You can't tell me that with all those I/Os RAID 0 wouldn't help at all. Considering the game I'm playing is EVE Online: when I jump in, a gate can have 1000 ships on it. That's maybe 32 different ship types which have to be fetched from the HD, which is probably something like 100MB, times the 2-3 clients; that's 64 to 96 I/Os, and that's if there aren't multiple files that need to be called up for one ship type. So yeah, I think on the desktop, for power users, there is a place for RAID 0.
127 Comments
Madpeter - Wednesday, June 3, 2009 - link
Redundant Array of "Inexpensive" Disks. Raptors don't count as "inexpensive". Also, you did not appear to test other motherboard/RAID controllers, or drill down on how each of those is set up. RAID 0 will give you faster I/O than a normal 7200 RPM drive, if you set it up correctly.
Dr. Foo - Wednesday, August 4, 2010 - link
Then get the hint. RAID stands for Redundant Array of Independent Disks.
piroroadkill - Friday, November 17, 2017 - link
It used to be Inexpensive, not Independent.
Abki - Friday, May 22, 2009 - link
The article also shows something more: a Seagate disk is a better buy than a Raptor.
Lower price, same performance.
The disk runs cooler and quieter.
Bigger storage capacity.
Most tests are done together with other operations (sometimes manual). If RAID 0 is to be compared with single-disk use, the time for the other operations has to be clocked separately. Most of the tests show that it makes no difference which disk you choose; the disk time is the same.
AstroGuardian - Thursday, March 12, 2009 - link
I did the same benchmarks (more or less) myself and came to an astonishing result: the performance gains with RAID 0 are on average 50%. So I think you are doing something wrong.
Shrikant - Thursday, June 2, 2005 - link
Here is a problem with this evaluation. According to this article, the 10-year-old IBM 75GXP 30 GB hard drive is only 8% slower than the latest Western Digital Raptor II. So the conclusion, according to this article, is that Western Digital's 8 MB buffer cache, 4.5 ms seek time, 1,200 Mbits/sec transfer rate, and the "world's fastest Serial drive" (according to the Western Digital web site) isn't much better than the old 15 GB drives. I would suggest pulling out our old 386s and using their hard drives, if we don't mind a 10% slowdown in speed.
ArnAdams - Thursday, March 10, 2005 - link
RAID0 disks make a HUGE difference when streaming large amounts of data (such as high-speed video at over 100 MegaBytes/sec) for long periods of time (we often go for over 20 minutes continuous when viewing rocket launches). Sustained transfer on single IDE disks is usually in the range of 40 to 50 MB/sec (note that the burst read & write rate, which is really what was covered in the article, is only peripherally related to the sustained data write rate), with SCSI disks being about 10% higher, and SATA disks being about 15% higher. Two SCSI RAID0 disks will usually sustain about 110 MB/sec, while 7 SCSI RAID0 disks will sustain about 320 MB/sec (this requires 2 on-board SCSI controllers; all PCI SCSI & SATA RAID controllers are junk when it comes to trying to sustain high data write rates!). The SATA RAID disks sustain higher data write rates than SCSI disks but often show data dropouts (every second or so) that I can't explain.
t1n0m3n - Sunday, November 14, 2004 - link
One last thing that I find interesting. I love how, when testing video cards, when Anand runs into results like the Multimedia Content Creation Winstone 2004 test (where most of the results are close, like the Comanche 4 benchmarks), he says "Hmm, obviously this game is CPU intensive and is putting little stress on the GPU," or something to that effect.
But with similar results in this benchmark, where it is obvious that there is a bottleneck somewhere else, or that the benchmark does not put enough emphasis on the hard drive, he chooses to say "RAID 0 is marginal and not worth investing money in."
t1n0m3n - Sunday, November 14, 2004 - link
"This review is hardly based on any scientific approach - one controller / motherboard combination and one brand of hard drive. Get real. And to not delve into WHY there was only minimal performance increase further dilutes the results of this review/test."

#112, I don't think we would have seen much difference between different card/drive combos in the office tests. The problem is that this "review" is fundamentally flawed. No effort was put forth to quantify the very real effect RAID0 has on the end user's everyday computing experience. For average Joe User, RAID0 is worthless; let him not experience the responsiveness and quickness that RAID0 provides cheaply and easily. Only let us everyday users of RAID0 enjoy that benefit.
t1n0m3n - Sunday, November 14, 2004 - link
Kudos to Anand (and all of the other testers referenced) for giving us a completely lopsided view of what RAID0 is all about. What we have here, ladies and gentlemen, is a comparison of apples and oranges. I KNOW my RAID 0+1 system is faster than the same system using only one of the same drives. Do I expect it to actually be faster for web surfing? Of course not. The drives are not the bottleneck for most types of web surfing. Do I expect it to be faster for writing a Word document? Well, partially. I expect it to load and save the file faster, but everything else will be the same speed. Do I expect defrags to be faster? Yes, most definitely.
OK, on the flip side: do I expect my X800 video card to speed up word processing? Of course not. Do I expect my X800 to speed up web surfing? No. Do I expect my X800 to speed up gaming? For most types of games, yes I do.
For all of you wanting to see smaller, cheaper, and slower disks in RAID 0 as a comparison:
You will see approximately the same percentage improvement of the faster drives over a single drive in all of the tests. The Ipeak tests will be within a few percentage points of roughly 20% and 40% better (respectively) than a single drive of the same model, and the office benchmarks will see maybe at most a 5% improvement over a single drive of the same model.
What Anand fails to bring to this "test" is the realization of what RAID0 is actually used for. That is to speed up hard drive reads and writes. Since the hard drive does not always read and write, how could he jump to the assumption that RAID 0 is worthless because of a test that percentagewise does very little hard drive reading and writing? (as is obvious per the tests.) Ask anyone that uses a RAID 0 array... The entire computer "feels" much more responsive. I have had more than one person use my home computer for normal web surfing comment on how much faster my computer seems than theirs does. So don't give me that "placebo" crap. They had no clue how my computer was set up until I explained it to them.
RAID 0 does enhance the user experience on a computer... It just may be hard to quantify with the primitive testing that is currently employed. But nonetheless, if you have used RAID 0 and then went back to a single on the same system, you already understand what I am referring to. Everyone else.... Don't knock it until you have tried it.
bhvm - Tuesday, June 22, 2010 - link
t1n0m3n, agreed one hundred percent.
Whereas our processors are zooming along in GHz and RAM in GBs, our humble old HDDs are still stuck around 60 MB/s.
A properly set up RAID (with a dedicated controller) will give close to a 90% performance boost. I run software RAID 0 on my laptop (it's a monster laptop with 2 x 7200 RPM Seagate Momentus drives) for my database. It's not just about the raw speed and numbers; the machine 'feels' damn fast. Also, despite being software RAID, a single drive gives 45 MB/s, whereas the RAID 0 partition easily gives 85 MB/s.
Long live the RAID!
Ambress - Wednesday, November 10, 2004 - link
While this article on RAID was quite informative, I would be very interested in seeing what the measured performance differences are for an application such as Photoshop, where a secondary hard drive assigned as a scratch disk is recommended for optimum performance. Perhaps one problem here is finding some reliable Photoshop benchmarking tool, but surely one exists. Even without that, just performing a repeatable action that exercises a series of steps that are certain to force scratch disk usage could demonstrate what RAID 0 advantages exist when it is used as the scratch disk. Such a test would likely require the manipulation of a rather large image file, perhaps 100MB or more. I suspect other applications that work with large data files, such as video editing applications, would also benefit noticeably from a RAID 0 array. If this doesn't prove to be the case, then my plans for my next PC could be simplified and costs reduced. I've anticipated building a system with a RAID 0 system drive and a RAID 0 data drive, although I'm now thinking that RAID 0 for a system drive may be overkill.
DatabaseMX - Tuesday, October 19, 2004 - link
At the beginning of the article:
"Unfortunately, if you lose any one of the drives in the array, all of your data is lost and isn't recoverable."
If you lose your drive in a single drive system, all of your data is lost and isn't recoverable, thus, the above statement has no special meaning. However, in both cases, if you have backed up your drives, then you can recover. Also, I fail to see how two drives in a RAID 0 will halve the MTBF.
This review is hardly based on any scientific approach - one controller / motherboard combination and one brand of hard drive. Get real. And to not delve into WHY there was only minimal performance increase further dilutes the results of this review/test.
#64 pretty much spells it all out - nice job.
#87 (et al.)... It doesn't matter if it doubles, triples, or multiplies 1000 times; just like with ONE drive, you are hosed if you don't have a full system backup! The primary concept of RAID 0 is to get more performance, not to worry about drive failure. If you are worried about drive failure, then use RAID 1 or some other RAID level which deals with redundancy.
Backup is the key here. And I can attest that Acronis True Image (www.acronis.com) is one brilliant piece of backup software. I have used it in multiple scenarios, including restoring an image file on to a brand new, un-partitioned, un-formatted hard drive (both IDE and SCSI), then booting up with the new drive - restored 100 %. And the good news .... it supports (some) SATA RAID configurations - and the new Promise FastTrak TX2200 controller (according to Acronis tech support).
So, the issue is NOT about the so-called dangers of a RAID 0 array and drive failure and statistics and probability, but instead >> performance. It's time to create some new benchmarks that focus specifically on testing RAID configurations, ie back to the future, instead of all the sorry old benchmarks mentioned above.
It will be interesting to see how the new Promise FastTrak TX2200 SATA II controller connected to a pair of the new Maxtor SATA DiamondMax 10 300GB, 16MB drives in a RAID 0 configuration fares in tests. In fact, it would be interesting to see how the Promise FastTrak TX4200 with 4 Maxtor drives in a RAID 10 configuration works out. Best of both worlds - performance and redundancy? Since both the new Promise controller and the Maxtor drives support NCQ and SATA TCQ, one would think this should make a dent in RAID 0 (and RAID 10, etc.) performance. I'm about to find out, as soon as the TX4200 arrives at my doorstep.
MplsBob - Wednesday, October 13, 2004 - link
I surely do wish that this testing of RAID 0 had included the unique cards from Netcell. Their SyncRaid line looks as though it might be promising, but in the end, are their results in "real world" testing sufficiently good to make it worthwhile? How about it, AnandTech: could you expose one of these SyncRaid cards to the same testing you had in this article?
kmmatney - Monday, September 13, 2004 - link
I would have liked to see the differences using RAID with slower hard drives. Not everybody has a Raptor. Does RAID have a greater impact if the hard drives are slower to begin with?
mbor - Tuesday, August 24, 2004 - link
btw, a striping array of 2 Raptors shouldn't be called RAID, but NAED: Non-redundant Array of Expensive Disks ;)
mbor - Monday, August 23, 2004 - link
I find the conclusions in the article surprising. When I bought a second drive and configured it in a RAID 0 array, the performance increase could be clearly seen. I was so impressed that the first thing I'm looking for in the next mobo I plan to buy is a good RAID controller with 4-drive RAID 0 capability. Perhaps the tests simply don't show it, but in reality, at least in my case, the performance did increase noticeably.
You can even see it in Windows Explorer. Opening a directory with a couple hundred files inside is faster than with a single drive. The same applies to other apps that work on large amounts of data.
Pumpkinierre - Friday, July 9, 2004 - link
Nicely put, mdrohn, but I still don't regard backup as superfluous. It's really terms like 'extra redundancy' or 'built-in redundancy' that grate with me. The term seems to be used for things that actually mean error protection or failure-proofing. In RAID 1 the second disc is superfluous as far as extra storage goes, but it can be used for simultaneous reads, or even striped reads, which improve speed as well as providing backup - hardly superfluous. My usage of the word has been as 'not needed', more akin to obsolescence than anything else. But apparently the term is widely used in the electronics industry. So perhaps another example of English being subroutined (or bastardised). At least we made it to 3 figures in the posts, and page 6.
mdrohn - Friday, July 9, 2004 - link
Well, my Aussie friend, since we're splitting linguistic hairs, I maintain that the primary meaning of 'redundancy' (from the Latin 'redundare', meaning 'to overflow'), both linguistically and electronically, is in its sense of 'superfluity or excess'. The sense of 'uselessness' is a secondary meaning which has evolved from that primary one. Every dictionary I have checked lists the superfluous sense above the useless one, which is an indicator of semantic primacy. But I suspect that this discussion has become somewhat redundant, since we both seem to be making the same points over and over again ;)
masher - Friday, July 9, 2004 - link
Interesting read on storagereview, Timw. It proves what I suspected: that beyond 3 disks, performance for large RAID 0 arrays actually declines. It also demonstrates that RAID 0 using slower disks isn't appreciably better than with pricey Raptors. Also interesting: for single-user operation, command tagging and queuing tend to decrease performance.
The past postings to this thread demonstrate why some people believed in a flat earth up to the 19th century, and some today think the Apollo moonlanding was staged by Hollywood. People cling to illusions, despite reality.
To use technical terms, Desktop Raid-0 sucks. Get over it.
timw - Thursday, July 8, 2004 - link
This isn't really anything new. As someone else mentioned, seek time and cache size with the right firmware optimizations are the most important. RAID 0 won't be able to improve those, and may actually be slower than a single drive in many instances. If you don't believe what Anandtech has to say, take a look at the latest article at storagereview.com.
Pumpkinierre - Thursday, July 8, 2004 - link
Wrong again, mdrohn - Australia, to be exact, but I was using an Oxford dictionary. Hyperdictionary.com (USA-based, I think, as it advertises cheap dental insurance for US residents) gives redundancy as:
Definition: [n] repetition of an act needlessly
[n] the attribute of being superfluous and unneeded; "the use of industrial robots created redundancy among workers"
[n] (electronics) a system design that duplicates components to provide alternatives in case one component fails
[n] repetition of messages to reduce the probability of errors in transmission
Which agrees with both our definitions. Yours is more correct electronically. Hyperdictionary has a more extensive electronic description but doesn't add much more to the above electronic definition:
http://www.hyperdictionary.com/dictionary/redundan...
I'm at odds with that electronic meaning of redundancy. After all, the language came before the electronics.
mdrohn - Thursday, July 8, 2004 - link
Heh, I guess that means you are writing from Britain, Pumpkinierre. That special meaning of 'redundancy', in the workplace context of someone losing their job, is unknown here in the USA. In fact, I'd never heard it in my life until watching 'The Office' on DVD last month ;) We call that layoffs or downsizing here. The electronics/systems meaning I posted was also taken straight from a dictionary.
Pumpkinierre - Thursday, July 8, 2004 - link
Your job becomes redundant when you are no longer of use. You then get a redundancy payout based on the years worked, etc. So I think they initially used the term to describe drives that were no longer of use, as they had been replaced by newer, bigger drives. My dictionary has redundant as meaning superfluous, which is more like your definition, #100, and I suppose you could regard a backup drive as such... until your main drive lets go. So I don't like the usage of redundancy for duplexed or mirrored drives.
mdrohn - Wednesday, July 7, 2004 - link
"Redundancy means of no further use."

Actually, 'redundant' more precisely means 'exceeding what is required' or 'exactly duplicating the function or meaning of another', which is an important distinction.
'Redundancy' in an electronics or systems context means 'incorporating extra components that perform the same function in order to cope with failures and errors'. Thus RAID 0 is not, strictly speaking, a 'redundant' array of disks despite its RAID name, since every drive in RAID 0 records different data. RAID 1 is classic redundancy--all the drives in a RAID 1 array are reading and writing exactly the same data.
Pumpkinierre - Wednesday, July 7, 2004 - link
I don't know about that latency increase with RAID. Seek times don't seem to be much affected in the reviews I've seen. If the controller reads the drives simultaneously, then there shouldn't be much effect on latency. (Will it make it to the 6th page?!)
masher - Wednesday, July 7, 2004 - link
> I'm fairly certain that the performance> advantages of having 4 or 5 striped drives are
> likely to be a lot better than just 2...
No, not for ATA drives. You're still limited to the max bandwidth of 133 MB/s (150 for some SATA implementations), so beyond 3 drives you don't get the full transfer rate of each drive. Plus, the latency gets worse the more drives you add; with 5 or more drives, your mean latency is essentially the max latency of any single drive.
So a larger array is a repeat of the 2-drive situation. It's much faster in the rare case of a disk-bound app transferring huge files, and no faster (or possibly slightly slower) the rest of the time.
You are right on one thing though. Cheaper (translation: slower) disks would tend to look a bit better here. Not a huge difference, but the slower the disk, the more likely the app is to be disk-bound.
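masher's bandwidth ceiling argument can be sketched in a couple of lines. This is a hypothetical illustration, not a model of any real controller: the 133 MB/s parallel ATA limit comes from his comment, and the ~50 MB/s per-drive figure from the sustained-transfer numbers quoted elsewhere in this thread.

```python
def array_throughput(n_drives, per_drive_mb_s, bus_limit_mb_s=133.0):
    """Aggregate sequential throughput of a striped array whose drives
    share one bus: it scales with drive count until the bus saturates."""
    return min(n_drives * per_drive_mb_s, bus_limit_mb_s)
```

With 50 MB/s drives, two drives reach 100 MB/s, but three or more are capped at 133 MB/s, which is why adding drives past that point buys nothing on a shared ATA bus.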
kapowaz - Wednesday, July 7, 2004 - link
Perhaps the review ought to have pointed out what RAID stands for: Redundant Array of *Inexpensive* Disks. The idea is to improve the performance or reliability of a system by using many smaller/cheaper disks compared to using a single expensive disk. Often this isn't the case (I doubt anyone would say the 15krpm disks in modern servers are 'inexpensive'), but the origin behind the technology remains applicable today. Maybe a better test would be to take some cheap disks and see how well they perform. Also, am I not right in thinking that SATA RAID allows for more than just two devices? I'm fairly certain that the performance advantages of having 4 or 5 striped drives are likely to be a lot better than just 2...
masher - Wednesday, July 7, 2004 - link
Umm, MadAd... The "array" of drives can't fail unless a drive in it fails, and it will always fail if one of its drives does. It's just a logical grouping, not a separate entity. Furthermore, MTBF is the wrong statistic to use here; MTTF is the relevant one.
Obviously the raid controller itself could fail, but this is outside the scope of the argument. And such a failure is highly unlikely to impact data in any case.
MadAd - Tuesday, July 6, 2004 - link
"But when we are talking about an ARRAY of drives, the operating life of each individual drive in the array is not what is at issue. What is relevant is ARRAY failure, not DRIVE failure."

Are you also trying to say that if the array fails without a drive failing, then that's still down to the drive's MTBF? Wouldn't that be an array MTBF?
If a drive fails in RAID 0, then of course we expect the array will fail. If a drive does not fail but the array fails (and you can reuse the drive), then that's nothing to do with the drive's MTBF, is it? The drive hasn't failed; it still has service life; the array failed. You'll need a different way to measure the chance of an array failure, since (unless it's connected with a drive failure) it has nothing to do with the expected longevity of the components, which we measure by drive manufacturers' MTBF figures for service life.
Pumpkinierre - Tuesday, July 6, 2004 - link
Yes, that's correct, #93. It's the data that counts. The probability of failure of one drive in a RAID 0 OR RAID 1 over a given period is the same. For two-drive arrays, this is double the probability of a single drive failure over the same period, if all drives are the same at the start of functioning (i.e. same probability of failure). In RAID 0, the probability of LOSS OF DATA corresponds to this doubled single-drive failure probability. However, in RAID 1 the parameters change. Here, it is the probability of both drives failing on the SAME day in a given period (assuming backup can be completed in a day). This probability is much, much lower than a single HDD or RAID 0 data loss probability, which covers ANY day of a given period. This makes RAID 1 the superior RAID for desktop use, despite the apparent loss of capacity. With cheap 160GB drives around, I don't think that's a problem (I've got a 120 and it's not a third full, and I don't back up because I'm lazy and evil). Read requests in RAID 1 ought to be faster than RAID 0, as variable-size virtual striping could be carried out in this RAID format. Unfortunately, they used to stripe RAID 1 but don't anymore, relegating it to the duplexing or mirroring role. Reads apparently are only improved in modern RAID 1 when simultaneous multiple read requests are initiated. Here the controller's extra buffering and ability to read the RAID drives simultaneously at different locations helps out. Once again, good for servers, where this is a common requirement, but not good for desktops, where a striped read would be of far greater use for the speed it brings. We really need Arnie on this one - the broom and the Gatling!
Redundancy means of no further use. A backup drive isn't of no use, so redundancy doesn't mean backup, despite how some people use the term to describe RAID 1. RAID, which stands (I think) for Redundant Array of Independent Drives, was initially a method of combining older (hence smaller) drives into one big drive. That saved them from being thrown out, i.e. made redundant.
mdrohn - Tuesday, July 6, 2004 - link
"Now im sure ill get a roasting from statisticians for not following the rules exactly however as has already been mentioned previously, the notion that by buying 2 of something will halve its chances of enjoying a useful life is just nonsense in individual cases."

I'm not a statistician, nor do I play one on TV ;) But similarly to WaltC, you are misunderstanding the fact that in a RAID 0 setup, if ONE member drive fails, the whole array fails IN ITS FUNCTION AS AN ARRAY. What you say is true: the individual life of a single drive is not affected by how many drives you own. But when we are talking about an ARRAY of drives, the operating life of each individual drive in the array is not what is at issue. What is relevant is ARRAY failure, not DRIVE failure.
Let's say you have two drives in a RAID 0 array. One drive fails and the other drive remains in perfect working order. You can reformat the surviving drive and keep using it as long as it continues to function. But you have lost the data on the ENTIRE ARRAY because in RAID 0 there is no redundancy and no backup, and you need both drives working in order to access the data on the array.
mdrohn - Tuesday, July 6, 2004 - link
"Thus the chance of failure for a RAID 0 array is the probability at any given time that *_one_* of the component drives will fail. Assuming all the disks are identical, that chance is equal to the failure probability of one drive, multiplied by the number of drives."

OK, remind me never to pull formulae out of my butt on a holiday weekend. The actual probability of failure for a RAID 0 array with n members is as follows:
fRAID0 = 1 - (1-fa)(1-fb)(1-fc)...(1-fn)
Where fa, fb, fc, etc are the individual chances of failure for each array member.
The question we are asking, "what is the chance that at least one component drive in a RAID 0 array will fail?" is mathematically identical to asking, "what is the complement (opposite) of the chance that none of the component drives will fail?" The chance that a drive will not fail is the complement of the drive's chance to fail, or 1-fa. The probability that multiple independent events will occur simultaneously ("none of the drives will fail") is the product of those chances. So the probability that multiple independent events will NOT occur simultaneously is the complement of that product.
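mdrohn's complement formula above translates directly into code. A minimal sketch (the function name is mine, not from the thread):

```python
def raid0_failure_prob(drive_probs):
    """Probability that at least one member drive fails (i.e. the array
    fails), computed as the complement of every drive surviving:
    1 - (1-fa)(1-fb)...(1-fn)."""
    survive = 1.0
    for f in drive_probs:
        survive *= (1.0 - f)
    return 1.0 - survive
```

For two drives with a 5% failure chance each, this gives 1 - 0.95 x 0.95 = 0.0975, just under the 0.10 that the simpler "multiply by the number of drives" rule predicts; the two agree closely only while per-drive failure chances are small.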
MadAd - Tuesday, July 6, 2004 - link
There are lies, damn lies, and statistics. The problem with probabilities is that they are a general model for making assumptions, not meant to replicate real-world events.
If I get 1 raffle ticket from a raffle of 100, then the probability is 1:100 that I will win. If I buy 2 tickets, then that's 2:100, or a 1-in-50 chance that I will win. However, in the worst case there are still 98 other tickets that could be drawn from the hat before one of mine, and the 1-in-50 figure will only be realistic if we do lots and lots of raffles and calculate the results as a set.
As far as MTBF is concerned, I would say that a way to more realistically plot the likelihood of failure of multiple units would be to analyse the values within the range of MTBF results, to 2 s.d. (2 standard deviations cover 95% of results).
E.g. if the MTBF is, say, 60 months, and 95% of the results fall within the 55-to-65-month range, then while one drive is likely to last 60 months, either of 2 drives should last at least 57.5 months.
Of course, there's a chance that you get a dodgy one that fails in 10 months; that doesn't make it wrong, just that it was one that fell outside the 95% level on the curve.
Now, I'm sure I'll get a roasting from statisticians for not following the rules exactly. However, as has already been mentioned previously, the notion that buying 2 of something will halve its chances of enjoying a useful life is just nonsense in individual cases.
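MadAd's point that the 1-in-50 figure only emerges "over lots and lots of raffles" is easy to check by simulation. A small sketch (function name and trial count are my own choices):

```python
import random

def raffle_win_rate(tickets_held, total_tickets, raffles=100_000, seed=42):
    """Fraction of simulated raffles in which one of our tickets is drawn.
    Our tickets are numbered 0 .. tickets_held-1 without loss of generality."""
    rng = random.Random(seed)
    wins = sum(
        rng.randrange(total_tickets) < tickets_held
        for _ in range(raffles)
    )
    return wins / raffles
```

Over 100,000 raffles, holding 2 of 100 tickets wins very close to 2% of the time, while any single raffle is still an all-or-nothing event, which is exactly the distinction he is drawing.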
masher - Tuesday, July 6, 2004 - link
#80 says:> Sending the two seek commands versus one should
> add negligeable time. The actual seeks would be
> done concurrently. The rotational latencies on
> each drive is independent. Therefore the time
> to locate the data should be very close to the
> same as for a single drive.
The latencies for each drive are independent, yes; that's the very reason the overall latency is higher. Simple statistics. I'll give you a somewhat simplified explanation of why.
A seek request sent to a single drive finds the disk in a random position, evenly distributed between (best_case_latency) and (worst_case_latency). The mean latency is therefore (best+worst)/2.
Add a second drive to the picture now. On average, half the time it will be faster than the first drive at a given request, and half the time slower. In the first case, the ARRAY speed is limited by the first drive. In the second case, the array is limited by disk two, which will be randomly distributed between (worst+best)/2 and (worst). The average in this case is therefore (3w+b)/4.
Probability of first case = (1/2)
Probability of second case = (1/2)
Overall mean = (1/2)(w+b)/2 + (1/2)(3w+b)/4 = (5w+3b)/8.
Assuming best case = 0 and worst case = 1, you get a mean seek for a single disk of 50%, and a mean seek for a two-disk array of 62.5%.
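masher's simplified two-case argument (about 62% for two disks) can be checked against a quick Monte Carlo run. The sketch below models each disk's seek latency as uniform on [0, 1], an idealization of his best=0, worst=1 case; the array waits for the slowest disk, so the array latency is the maximum. The exact expectation of the maximum of n independent uniforms is n/(n+1), so two disks sit at about 2/3 and five at about 5/6, which supports his broader point that mean latency approaches worst-case latency as disks are added.

```python
import random

def mean_array_seek(n_disks, trials=200_000, seed=1):
    """Monte Carlo estimate of the mean seek latency of an n-disk stripe,
    with each disk's latency uniform on [0, 1] and the array limited by
    the slowest disk (the maximum of the samples)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.random() for _ in range(n_disks))
    return total / trials
```

One disk averages about 0.50, two disks about 0.67, and five disks about 0.83, slightly above the 62.5% his simplified two-case split gives for two disks.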
mdrohn - Monday, July 5, 2004 - link
WaltC says:
"(3)Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 drive failure is exactly twice as high as it is for a single drive."
Nighteye2 is correct. The above quote contains a fundamental misstatement and does not correctly represent why RAID 0 multiplies failure rate. WaltC's entire ensuing argument is logically correct, but because it is based on the wrong premise it is not relevant to RAID 0 failure rates. The quote should have read as follows:
"Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 *_ARRAY_* failure is exactly twice as high as it is for a single drive."
Having multiple disks in a RAID 0 array does not, as WaltC correctly says, affect an individual disk's chance of failure. But what is relevant to this subject is the failure of the array as a whole. Since in RAID 0 the component drives are linked together without any redundancy or backup, losing one component means that the entire array fails. Thus the chance of failure for a RAID 0 array is the probability at any given time that *_one_* of the component drives will fail. Assuming all the disks are identical, that chance is equal to the failure probability of one drive, multiplied by the number of drives.
Let's take the car analogy. In WaltC's example the two cars are independent, autonomous vehicles. To make it a proper analogy to RAID 0, the two cars would have to be functionally linked so that they operate as one. Let's say you welded the two cars together side by side with steel bars to make one supervehicle. Then if the tires gave out on any one of the two component cars, the entire supervehicle would be stuck.
Nighteye2 - Monday, July 5, 2004 - link
If a single HD has a 50% chance of failing in 5 years, a RAID 0 array with 2 of those drives has a 50% chance of failing in about 4 years, dependent on the distribution of the failure probability function.
Nighteye2 - Monday, July 5, 2004 - link
#84, you should study failure theory better. RAID 0 in fact *does* double the chance of failure at any given time. However, this does not mean the MTBF is halved, because disk failure chances are time-dependent and increase over time.
Pumpkinierre - Sunday, July 4, 2004 - link
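Nighteye2's "50% in about 4 years" figure can be sanity-checked under an assumed lifetime distribution. The sketch below is my own illustration using a Weibull model, where the shape parameter controls how strongly failure chances increase over time; for independent drives, array survival S(t)^n = 1/2 reduces to the closed form t = median x n^(-1/shape). With a constant hazard (shape 1) a two-drive array reaches 50% at 2.5 years, while a strongly wear-driven shape of 3 lands close to his 4-year figure, showing exactly the distribution dependence he mentions.

```python
def raid0_median_life(single_median_years, n_drives, weibull_shape):
    """Time at which an n-drive RAID 0 array reaches a 50% cumulative
    failure probability, assuming independent Weibull drive lifetimes
    with the given shape and a single-drive median life in years.
    Derivation: S(t)^n = 1/2 with Weibull S(t) yields
    t = median * n ** (-1 / shape)."""
    return single_median_years * n_drives ** (-1.0 / weibull_shape)
```

For a 5-year single-drive median: shape 1 gives 2.5 years, shape 2 about 3.5 years, and shape 3 about 4.0 years for the two-drive array.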
#84, Even though I agree with some of your comments on the testing, the fact is that Anand was looking at Raid0 from the viewpoint of the desktop user/gamer which is the target audience of the AT website. So he is legitimate in using the tests that are relevant and understood by this target audience for testing HDD performance in both single and RAID combinations rather than specific HDD performance tests. He reaches similar conclusions to storagereviews.com's assessment of RAID use in the desktop environment. However, criticism about failure of testing other controllers (even if limited to onboard controllers) and RAID1 performance I feel are valid.With regards to the likelihood of failure of a component, it must be recognised that all processes in nature are stochastic (probabibility based). This is at the core of quantum mechanics. So all components have an associated probability of failure. That probability is lessened by better manufacturing, quality control, newness etc. but is always present. Naturally, the longer you use the HDD the greater the probability of failure due to wear etc.but it still is possible for it to fail in the first year (and this does happen). The warranty period doesnt mean your HDD is not going to fail, it means they will replace it if it fails. The laws of probability are clear, if you have two components with associated probabilities of failure, you must ADD the two probabilities if you want the probability of ANY ONE of them failing. So, in the case of using two new HDDs Raid O has double the probality of you losing your data to a single HDD.
The consequence of the above (and having lost a HDD at 3 yrs and 1 day!) means to me, along with many other desktop users who fail to back up (despite having burners) because of laziness, that the oft-forgotten RAID 1 ought to be the prime candidate for the desktop. Here the probabilities are refined to simultaneous failure of the HDDs on any PARTICULAR day of the 3 yr warranty period, which is a different probability to failure of EITHER of the discs over the WHOLE 3 years. Naturally, when one disc fails in RAID 1, the desktop user gets off her butt and backs up on the day prior to any repair. The fact that RAID 1 ought to be better at reads than even RAID 0 (see my previous posts) is even greater reason to adopt this mode for the desktop (where writes are less used), but it has been ignored by the IT community.
TheCimmerian - Sunday, July 4, 2004 - link
Thanks for the DV capture stuff, PrinceGaz.
WaltC - Sunday, July 4, 2004 - link
There are so many basic errors in this article that it's difficult to know just where to start, but I'll wing it...;)

From the article:
"The overall SYSMark performance graph pretty much says it all - a slight, but completely unnoticeable, performance increase, thanks to RAID-0, is what buying a second drive will get you."
Heh...;) Next time you review a 3d card you could use all of the "real world" benchmarks you selected for this article and conclude that there's "no difference in performance" between a GF4 and a 6800U, or an R8500 and an x800PE, too...;) That would be, of course, because none of these "real world" benchmarks you selected (Sysmark, Winstone, etc.) was created for the specific purpose of measuring 3d gpu performance. Rather, they measure things other than 3d-card performance, and so the kind of 3d card you install would have minimal to no impact at all on these benchmark scores. Likewise, in this case, it's the same with hard drive performance relative to the functions measured by the "real world" benchmarks you used.
Basically, overall Sysmark scores, for instance, may include possibly 10% (or less) of their weight in measuring the performance of the hard drive arrangements in the system tested. So, even if the mb/sec read from hard disk for RAID 0 is *double* that of normal single-drive IDE in the tested system, because of the fact that these benchmarks spend 90% or more of their time in the cpu and system ram doing things other than testing HD performance, these benchmarks may reflect only a tiny, near insignificant increase in overall performance between RAID 0 and single-drive IDE systems--which is exactly what you report.
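This is just Amdahl's law, and the numbers fall out directly (a back-of-envelope sketch of my own; the 10% disk share and the 2x RAID 0 speedup are assumptions for illustration, not measurements from the article):

```python
def overall_speedup(disk_fraction, disk_speedup):
    """Amdahl's-law estimate of whole-benchmark speedup when only
    disk_fraction of the run time benefits from disk_speedup."""
    return 1 / ((1 - disk_fraction) + disk_fraction / disk_speedup)

# If a benchmark spends ~10% of its time on disk I/O and RAID 0
# doubles disk throughput, the overall score improves by only ~5%.
s = overall_speedup(0.10, 2.0)
print(round(s, 3))  # ~1.053
```

So even a perfect 2x disk improvement would be nearly invisible in an overall Sysmark-style score, which is exactly the pattern the article reports.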
But that's because all of the "real world" benchmarks you used here are designed to tell you little to nothing specifically about hard-drive performance, just as they are not suitable for use in evaluating performance differences between 3d gpus, either. Your conclusions as I quoted them above, to the effect that these "real world" benchmark results prove that RAID 0 has no impact on "real world" performance, are therefore invalid. The problem is that the software you used doesn't specifically attempt to measure the real-world read & write performance of RAID 0, or even the performance of single-drive IDE for that matter, much less provide any basis from which to compare them and draw the conclusions you've reached.
I'd recommend at this point that you return to your own article and carefully read the descriptions of the "real world" benchmarks you used, as quoted by you (verbatim in your article, direct from the purveyors of these "real world" benchmarks), and search for even one of them which declares: "The express purpose of this benchmark is to measure, in terms of MB/sec, the real-world read and write performance of hard drives and their associated controllers." None of the "real-world" benchmarks you used makes such a declaration of purpose, do they?
Next, although I consider this really a minor footnote in comparison to the basic flaw in your review method here and the inaccuracies resulting in the inappropriate conclusions you've reached, I have to second what others have said in response to your article: if your intent is actually at some point to measure hard drive and controller read/write performance, and then to draw conclusions and make general recommendations, be mindful that just as there are performance differences among hard drives made by competing companies, there are also differences between the hard drive controllers different companies make, and this certainly applies to both standard single-drive IDE controllers and to RAID controllers. So I think you want to avoid drawing blanket conclusions based merely on even the appropriate testing of a single manufacturer's hard drive controller, regardless of whether it's a RAID controller or something else. One size surely doesn't fit all.
As to your conclusions in this article, again, I'm also really surprised that you didn't logically consider their ramifications, apparently. I'm surprised it didn't occur to you that if it were true that RAID 0 had no impact on read/write drive performance, it would also have to be true that Intel, nVidia (and all the other core-logic chip and HD-controller manufacturers to which this applies), not to mention controller manufacturers like Promise, are just wasting their time and throwing good money after bad in their development and deployment of RAID 0 controllers.
I think you'll have to agree that this is an illogical proposition, and that all of these manufacturers clearly believe their RAID 0 implementations have a definite performance value over standard single-drive IDE--else the only kind of RAID development we'd see is RAID mirroring for the purpose of concurrent backup.
In reading some of the responses in this thread, it's obvious that a lot of your readership really doesn't understand the real purpose of RAID 0, and views it as a "marketing gimmick" of some ill-defined and vague nature that in reality does nothing and provides no performance advantages over standard IDE controller support. I think it's unfortunate that you haven't served them in providing them with worthwhile information in this regard, but instead are merely echoing many of the myths that persist as to RAID 0, myths based in ignorance as opposed to knowledge. My opinion as to the value of RAID 0 is as follows:
For years, ever since the first hard drives emerged, the chief barrier and bottleneck to hard drive performance has always been found within hard drives themselves, in the mechanisms that have to do with how hard drives work--platters, heads, rotational rate, platter size and density, etc. The bottleneck to IDE hard drive performance, measured in MB/sec read & write performance, has actually never been the host-bus interface for the drive, and even today the vintage ATA100 bus interface is on average 2x+ faster than the fastest mass-market IDE drives you can buy, which average 30-50 MB/sec in sustained reads from the platters.
Drives can "burst" today right up to the ceiling of the host-bus interface they support, but these transfer speeds only pertain to data in the drive's cache transferring to the host bus and do not apply to drive data which must be retrieved from the drive because it isn't in the cache--which is when we drop back to the maximums currently possible with platter technology--30-50 MB/sec depending on the drive.
Increases in platter density and rotational speeds, and increases in the amount of onboard cache in hard drives, have been the way that hard drive performance has traditionally improved. At a certain point--say 7,200 rpms for platter rotation--an equilibrium of sorts is reached in terms of economies of scale in the manufacture of hard drives, and pushing the platter rotational speed beyond that point--to 10,000 rpms and up-- results in marked diminishing returns both in price and performance, and the price of hard drives then begins to skyrocket in cost per megabyte (thermal issues and other things also escalate to further complicate things.) So the bottom line for mass-market IDE drives in terms of ultimate maximum performance is drawn both by cost and by the current SOA technical ceilings in hard drive manufacturing.
Enter RAID 0 as a relatively inexpensive, workable, and reliable solution to the performance--and capacity--bottlenecks imposed by single-drive manufacturing. With RAID 0, striped according to the average file size that best fits the individual user's environment, it's fairly common to see read speeds (and sometimes write, too) in MB/sec go to *double* that possible with either single drive used in a RAID 0 setup when you run it individually on a standard IDE controller, regardless of the host-bus interface.
At home I've been running a total of 4 WD ATA100 100GB PATA drives for the last couple of years. Two of them--the older 2MB-cache versions--I run singly on IDE 0 as M/S through the onboard IDE controller, and the other two are 8MB-cache WD ATA100 100GB drives running in RAID 0 from a PCI Promise TX2K RAID controller as a single 200GB drive, out of which I have created several partitions.
From the standpoint of Windows, the two drives running through the Promise controller in RAID 0 are transparent and indistinguishable from the operation and management of a single 200GB physical hard drive. What I get from it is a 200GB drive with read/write performance up to double the speed possible with each single drive, a 200GB RAID 0 drive utilizing 16MB of onboard drive cache, and I get a 200GB hard drive which formats and partitions and behaves just like an actual 200GB single drive but which costs significantly less (though not, to be fair, if I include the cost of the RAID controller--but I'm willing to pay it for performance ceilings just not possible with a current 200GB single IDE drive.)
Here are some of the common myths about such a setup that I hear:
(1) The RAID 0 performance benefit is a red herring because you don't always get double the performance of a single drive. It's so silly to say that, imo, since single-drive performance isn't consistent, either, as much depends on the platter location of the data in a single drive as to the speed at which it can be read, and so on, just as it does in a RAID drive. What's important to RAID 0 performance, and is certainly no red herring, is that read/write drive performance is almost always *higher* than the same drive run in single-drive operation on IDE, and can reach double the speed at various times, especially if the user has selected the proper stripe size for his personal environment.
(2) RAID 0 is unsafe for routine use because the drives aren't mirrored. The fact is that RAID 0 is every bit as safe and secure as normal single-drive IDE use, as those aren't mirrored, either (which you'd think ought to be common sense, right?)...;) As with single-drive use, the best way to protect your RAID 0 drive data is to *back it up* to reliable media on a regular basis.
On a personal note, one of my older WD's at home died a couple of weeks ago of natural causes--WD's diagnostic software showed the drive unable to complete both smart diagnostic checks, so I know the drive is completely gone. The failed drive was my IDE Primary slave, not one of the RAID drives. Apart from what I had backed up, I lost all the data on it, of course. Proves conclusively that single-drive operation is no defense against data loss...;)
OTOH, in two+ years of daily RAID 0 operation, I have yet to lose data in any fashion from it, and have never had to reformat a RAID 0 drive partition because of data loss, etc. It has consistently functioned as reliably as my single IDE drives, and indeed my IDE single-drive failure was the first such failure I've had in several years with a hard drive, regardless of controller.
If people would think rationally about it they'd understand that the drives connected to the RAID controller are the same drives when connected individually to the standard IDE controller, and work in exactly the same way. The RAID difference is a property of the controller, not the drive, and since the drives are the same, the probability of failure is exactly the same for a physical drive connected to a RAID controller and the same drive connected to an IDE controller. There's just no difference.
(3) Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 drive failure is exactly twice as high as it is for a single drive. This is another of those myths that circulates through rumor because people simply don't stop to think it through. It is true that the addition of a second drive--whether it's added on the Primary IDE channel as a slave, or constitutes the second drive in a RAID 0 configuration--elevates the chance that "a drive" will fail slightly above the chance of failure presented by a single drive, since you now have two drives running instead of one. But does this mean you now have increased the probability that a drive will fail by 100%? If you think about it, that makes no sense, because...
If I install a single drive which, just for the sake of example, is of sufficient quality that I can reasonably expect it to operate daily for three years, and then I add another drive of exactly the same quality, how can I rationally expect both drives to operate reliably for anything less than three years, since the reliability of either drive is not diminished in the least merely by the addition of another drive just like it? I mean, how does it follow that adding in a second drive just like the first suddenly means I can expect a drive failure in 18 months, instead of three years?...;) Adding a second drive does not diminish the quality of the first, since the second drive is exactly like the first and is of equal quality, and hence both drives should theoretically be equal in terms of longevity.
But the rumor mongering about RAID 0 is that adding in a second drive somehow means that the theoretical operational reliability of *each* drive is magically reduced by 50%...;) That's nonsense of course, since component failure is entirely an individual affair, and is not affected at all by the number of such components in a system. The best way to project component reliability, then, is not by the number of like components in a system, but rather by the *quality* of each of those components when considered individually. Considering components in "pairs," or in "quads," etc., tells us nothing about the likelihood that "a component" among them will fail.
Look at the converse as proof: If I have two drives connected to IDE 0 as m/s, and I expect each of those drives to last for three years, does it follow logically that if I remove the slave drive that I increase the projected longevity of the master drive to six years?...;) Of course not--the projected longevity is the same, whether it's the master drive alone, or master and slave combined, because projected component longevity is calculated completely on an individual basis, and is unaffected entirely by the number of such components in a system. The fact is that I could remove the slave drive and the next day the master could fail...;) But that failure would have had nothing whatever to do with the presence or absence of the second drive.
Putting it another way, does it follow that one 512mb DIMM in a system will last twice as long as two 512mb DIMMs in that system? If I have one floppy drive is it reasonable to expect that adding another just like it will cut the projected longevity of each floppy in half? If I have a motherboard with four USB ports, does it follow that by disabling three of them the theoretical longevity of the remaining USB port will be quadrupled? No? Well, neither does it follow that enabling all four ports will quarter the projected longevity of any one of them, either.
Consider as well the plight of the hard drive makers if the numerical theory of failure likelihood had legs: if it was true that as the number of like components increases the odds for the failure of each of them increases by 100%, irrespective of individual component quality, then assembly-line manufacturing of the type our civilization depends on would have been impossible, since after manufacturing x-number of widgets they would all begin to fail...;)
One last example: my wife and I each bought new cars in '98. Both cars included four factory-installed tires meeting the road. Flash forward four years--and I had replaced my wife's entire set of tires with an entirely different make of tire, because with her factory tires she suffered two tread separations while driving--no accidents though as she was very fortunate, and the other two constantly lost air inexplicably. All the difference with the new set. As for my factory tires, however, I'm still driving on them today, with tread to spare, and never a blow-out or leak since '98. The cars weigh nearly the same (mine is actually about 500lbs heavier), the cars are within 5,000 miles of each other in total mileage, and neither of us is lead-footed. Additionally, I serviced both cars every 3,000 miles with an oil change and tire rotation, balancing, inflation, etc.
The stark variable between us, as it turned out, was that my factory-installed tires were of a much higher quality than her factory-installed tires, as I discovered when replacing hers. It's yet another example in reality of how the number of like components in a system is far less important than the quality of those components individually, when making projections as to when any single component among them might fail.
Anyway, I think it would nice if we could move into the 21st century when talking about RAID 0, and realize that crossing ourselves, throwing salt over a shoulder, or avoiding walking under ladders won't add anything in the way of longevity to our individual components, nor will this behavior in any way serve to reduce that longevity, which is endemic to the quality of the component, regardless of number. Given time, all components will fail, but when they fail, they always fail individually, and being one of many has nothing to do with it, but being crappy has everything to do with it, which is the point to remember...;)
PrinceGaz - Saturday, July 3, 2004 - link
The article pretty much confirmed my feeling that for general day-to-day usage, RAID 0 is more trouble than it's worth.

There are times when RAID 0 could theoretically help: extracting large (CD image sized) archives, or copying (not moving) a large file to another folder on the same drive. Even though I almost exclusively use CD images and Daemon Tools these days, the time spent extracting or copying them is negligible, and certainly not worth the considerably longer amount of time I'd need to spend when either drive in a RAID 0 array fails.
It's true that Windows and applications will load faster from a RAID 0 array, but again we're just talking a second or two for even the largest applications. As for Windows starting up, I personally never turn my main box off except when doing a hardware change, so that's not an issue for me, but for those who do, it's unlikely to be more than five or six seconds' difference, so it's hardly the end of the world. It would take an awful lot longer to reinstall Windows XP when one of the drives in the array fails than the few seconds saved each morning.
I also happen to do video capture and processing which involves files upwards of ten gigs in size and feel RAID 0 is worthless here too, provided the single drive you capture to can keep up with the video bitrate (my Maxtor DiamondMax Plus9 7200rpm drive has no trouble at all with uncompressed lossless Huffyuv encoded 768x576 @ 25fps).
When it comes to processing the video, I read it from one drive and write the output to another, different physical hard drive, meaning it works faster than any RAID 0 array ever could--one drive is doing nothing but reading the source file while the other only needs to write the result. With a RAID 0 array, both drives would be constantly switching between reading and writing two separate files, which would result in constant seek-time overheads even assuming the two-drive array was twice as fast as one drive (which they never are).
So IMO, although the article could have included a few more details about the exact setup, it was overall spot on in suggesting you don't use onboard RAID 0 for desktop and home machines. And I'd add that you're better off *without* RAID 0 and keeping the two drives as separate partitions if you're into video editing.
Nighteye2 - Saturday, July 3, 2004 - link
Adding to all the comments already given, the Intel RAID is not very good as far as integrated RAID goes:
http://www.tbreak.com/reviews/article.php?cat=stor...
Especially for business benchmarks:
http://www.tbreak.com/reviews/article.php?cat=stor...
Also, notice the increase in performance between single and RAID in the first link.
If you're HD-limited, RAID 0 helps a lot. Which is why using raptors skews the results of the tests Anand has done for this article.
Pumpkinierre - Friday, July 2, 2004 - link
Apparently anything CPU limited won't be better with RAID 0:
http://faq.storagereview.com/SingleDriveVsRaid0
This includes encoding (don't know about rendering, but that can be CPU intensive as well as GPU). Large sequential reads with minimal CPU requirement will benefit from RAID, e.g. simple file merging. You are better off splitting the RAID up for encoding etc. and using one disc as the read and the other as the write on different controllers.
Games only benefit in the loading stage if large files are required eg bitmaps in Baldur's Gate.
RAID 1 has the advantage of backup recovery as well as improved read speeds, which is more beneficial to desktop use than writes. RAID 0 has the capacity improvement advantage. So if size is not the problem (and it never is!), RAID 1 is better for the desktop than RAID 0. I'm sure if they varied the stripe size then game loading times would be improved. Even AT had one game load substantially faster (equivalent to the double-platter 74GB big brother Raptor). Perhaps an analysis of game file structure and loading by AT would be more beneficial to readers.
KF - Friday, July 2, 2004 - link
> It's simple, really. Locating data on one disk is faster than locating it on two disks simultaneously. That is no matter which controller you use.
Sending the two seek commands versus one should add negligible time. The actual seeks would be done concurrently. The rotational latencies on each drive are independent. Therefore the time to locate the data should be very close to the same as for a single drive.
However, if the time to locate the data swamps the data transfer time, say twenty times as long, then yes, doubling the data transfer rate is not going to show much. So according to this idea, almost all file transfers take place in approximately the seek + rotational latency time, and the remainder of the transfer is negligible. The problem is that the data transfer would be even more negligible with more drives. Let's say the actual data transfer accounts for 5% with one drive. Then it would be 2-3% for 2 drives, and 1% for 4 drives. OTOH, people are claiming that with higher RAID, you do get dramatic differences, not negligible differences.
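A toy model of those proportions (all numbers are assumptions for illustration — the 12 ms seek+latency, 50 MB/sec per-drive rate, and 64 KB file size are hypothetical, not measured):

```python
def access_time_ms(file_kb, seek_ms=12.0, rate_mb_s=50.0, drives=1):
    """Total time to fetch one file: fixed positioning cost plus transfer.
    Striping across n drives is modeled as multiplying the transfer rate."""
    transfer_ms = file_kb / 1024 / (rate_mb_s * drives) * 1000
    return seek_ms + transfer_ms

small = 64  # KB, a typical small file
t1 = access_time_ms(small, drives=1)  # 12 ms seek + 1.25 ms transfer
t2 = access_time_ms(small, drives=2)  # 12 ms seek + 0.625 ms transfer
print(t1, t2)  # seek dominates: nearly identical totals
```

Halving an already-tiny transfer component barely moves the total, which is the whole argument in miniature.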
KF - Friday, July 2, 2004 - link
>Let me get this straight, you think apps today (I assume you mean>desktop/office apps) aren't dependent enough on disk I/O, and should start
>to be written so they are more I/O bound?
>I hope you don't mind, but I'm going to put this in the old sig
>library for use someday. :)
No you didn't get it straight. Don't worry, Denial, you will understand what it means when they start doing it in the next few years.
But if you need something for your sig, try this:
"People have been saying John Kerry eats excrement sandwiches for lunch at the French embassy. No way. Excrement doesn't go with quiche, croissants and chardonnay. Maybe for breakfast."
Pollock - Friday, July 2, 2004 - link
Err, meant #71.
qquizz - Friday, July 2, 2004 - link
For those that are asking about the Intel Application Accelerator: the 875 chipset doesn't need/support it:
http://www.intel.com/support/chipsets/iaa/sb/CS-00...
MiLLeRBoY - Friday, July 2, 2004 - link
I have a RAID 0 array and I definitely notice a dramatic improvement in copying files, file compression, and loading times. It is definitely worth it.
Pollock - Friday, July 2, 2004 - link
Actually #72, Anand tested level loading in Far Cry and Unreal Tournament 2004, which to my knowledge fit the bill for the games you suggested. The result: RAID 0 was equal or actually a little worse. I guess latencies are still more important than bandwidth here...?
Denial - Friday, July 2, 2004 - link
A$$ Masher, you seem to be the only person turned on by my system. Sorry, but I don't swing that way. You'll have a better chance at the internet cafes over in Chelsea.
TheCimmerian - Friday, July 2, 2004 - link
...Anyone have any thoughts on RAID0 for DV capture/editing/rendering?...
masher - Friday, July 2, 2004 - link
#71, the statistics and applicability of MTBF and MTTF are a bit complex--so much so that most drive manufacturers themselves usually don't apply them properly (or intentionally mislead people). Technically, you're correct: RAID0 doesn't halve MTBF. However, for what the average user means by "chance of failure", a two-disk Raid0 array does indeed double your chance of a failure.
As to your comment about Raid-0 loading maps faster: true if it's a large file (10MB+), and probably not discernibly noticeable till you're in the 20-40MB range. For tiny files or heavily fragmented ones, it may even be slower.
Z80 - Friday, July 2, 2004 - link
This article was very informative. However, the statements that RAID0 cuts MTBF in half and that RAID1 doubles MTBF are statistically incorrect. Also, RAID0 does improve game performance, especially when large game maps are involved (i.e., BF1942, BF Vietnam, Far Cry). It definitely provides an advantage in online FPS games by loading large maps faster and giving RAID0-equipped players an advantage in first choice of weapons and position. Your tests probably didn't measure map loading times.
GokieKS - Friday, July 2, 2004 - link
"What's the difference between losing one 74gig Raptor in RAID-0 array or one 160gig stand-alone drive? THERE IS NO DIFFERENCE!"

There is. The chance of you getting a HDD failure increases with every drive you add. A 2-disk RAID-0 array will have the same chance of failure as 2 independent non-RAIDed drives. The difference is, with the independent drives, you lose one drive's worth of data when it fails. With the RAID-0 array, you lose two.
~KS
sparky123321 - Friday, July 2, 2004 - link
I keep hearing about double the cost and the additional risk associated with a RAID-0 array.

First off, double the price gets you DOUBLE the capacity of a single drive. It's a wash price-wise. On top of that, you increase disk performance by up to 20+%. Normally, there tends to be a decrease in performance as capacity increases when comparing similar-generation drives.
Secondly, with regard to risk: what's the difference between losing one 74gig Raptor in a RAID-0 array or one 160gig stand-alone drive? THERE IS NO DIFFERENCE! If you don't have a recent backup, you've lost everything. Just spend an additional $90 to buy a backup 160gig 7200rpm IDE and use Acronis to do a complete disk mirror every week or two. If you lose a RAID-0 drive, you can just boot off of the backup drive and be up and running in a matter of minutes. Worst case, you've lost your most recent work only.
Power to the Raptors and I think I'll stick with my RAID-0 array!!!!!!!
abocz - Friday, July 2, 2004 - link
I think #63 summed it up pretty well. For most real-world usage the RAID0 setup never gets to shine because of the ratio of seek times to data transfers. Lower (7200) RPM drives will only compound the situation since their seek times are worse. Finally, add to this phenomenon the fact that the ratio of seeks will increase over time as fragmentation increases.

Which begs the question of how well defraggers work in a RAID 0 setup. Anybody know?
Inferno - Friday, July 2, 2004 - link
Maybe you should try this test with some lesser drives. The Raptors are kind of the be-all end-all of drives. Maybe a pair of midrange Maxtors.
binger - Friday, July 2, 2004 - link
#61, jvrobert is right in saying that the advantages of a Raptor RAID0 are restricted to faster boot-up times and smoother handling of very large files, e.g. when it comes to DV. Although I have heard similar statements before, whether a RAID0 array of two 160GB Samsung drives comes close to the performance of a single 74GB Raptor drive, I don't know--I would surely appreciate being pointed to an appropriate review or at least a couple of significant benchmark results.
jvrobert - Friday, July 2, 2004 - link
Two points:

First, it doesn't matter much what card you use for RAID 0. There's no parity calculation, so onboard hardware won't help much. Probably Windows striping, Intel RAID, VIA RAID, Highpoint RAID, etc. are within 1 percent of each other.
Second, this is a limited test that comes to an overgeneralized conclusion. As some have mentioned - these are raptors. I have a single raptor as my OS disk. Where RAID helps is with slower drives - you can get a "virtual" raptor of e.g. 360GB by buying 2 cheap, quiet, cool Samsung spinpoints.
Third (OK, 3 points) - it only tests games (which don't use much hard disk IO) and business (again, disk speed doesn't matter much). I'm getting into video now, and RAID 0 will certainly improve performance there. It will also help with load times of the OS and of large applications.
So the article comes to an over-general conclusion based on a few quick tests.
RyanVM - Friday, July 2, 2004 - link
Let's not forget the target audience of this article: home users. The point of this article was that a two-disk RAID0 array has little to no benefit whatsoever for the home user and/or gamer, which is exactly the primary consumer who'd make use of Intel's onboard RAID controller.

Power users with multiple-disk RAID setups would be VERY unlikely to use an onboard RAID controller, opting instead for a dedicated RAID controller with onboard cache, processor offloading, etc.
And as others said, the controller makes very little difference in a two-disk RAID array. It's only with multiple disks (4+) that the controller starts showing its importance by managing read/write requests, caching data, etc.
In summary, for those of you who were expecting a full blown RAID review, go over to Storage Review where they specialize in those types of tests. This article was simply showing that onboard RAID is really quite useless for its target audience.
Jalf - Friday, July 2, 2004 - link
Funny how quick people are to dismiss an article the moment it doesn't confirm what they already believed...

I might be the only one here, but I'm not really surprised by this article in general.
RAID has its place, yes, but not as a desktop system.
Yes, bandwidth goes way up, but so does latency. Instead of locating a file on one drive, you have to locate it on two drives, before you can even start the transfer. With sequential transfers, RAID is obviously faster, but with multiple smaller accesses, it will be slower. There's no magic in it, no faked results, and no incompetent and biased authors of that article.
It's simple, really. Locating data on one disk is faster than locating it on two disks simultaneously.
That is no matter which controller you use. Yes, a faster controller might mean a smaller performance penalty, but doesn't change the fact.
The most expensive part of I/O is the seek time. The actual transfer is fast by comparison.
The problem is that RAID aids the already acceptable transfer speed, but slows down seek time, which was already a bottleneck.
So yes, it can improve performance, but only if you have large sequential reads/writes, where you don't need to waste time seeking, and where the faster transfer really becomes useful.
In other words, *not* on normal desktop systems, and not on normal gaming systems.
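That tradeoff can even be put into rough numbers. A toy model (all figures here are illustrative guesses, not measurements):

```python
# Toy model of one disk request: seek + rotational latency, then transfer.
# Illustrative figures: ~8 ms average positioning time, ~50 MB/s per spindle.
SEEK_MS = 8.0
XFER_MB_S = 50.0

def request_ms(size_kb, disks=1, seek_penalty_ms=0.0):
    """Time to read size_kb striped evenly across `disks`.
    seek_penalty_ms models waiting for the slowest of several
    independently seeking spindles."""
    seek = SEEK_MS + seek_penalty_ms
    xfer = (size_kb / 1024.0) / (XFER_MB_S * disks) * 1000.0
    return seek + xfer

# Small random read (16 KB): dominated by seek; RAID 0 is actually slower.
single_small = request_ms(16)
raid_small = request_ms(16, disks=2, seek_penalty_ms=1.0)

# Large sequential read (64 MB): dominated by transfer; RAID 0 nearly halves it.
single_big = request_ms(64 * 1024)
raid_big = request_ms(64 * 1024, disks=2, seek_penalty_ms=1.0)
```

Plug in desktop-style workloads (lots of small scattered reads) and the striped array comes out behind; plug in long sequential streams and it pulls far ahead. Same hardware, opposite conclusions.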
masher - Friday, July 2, 2004 - link
> "I'd like to get to the truth about RAID0 for desktop users like myself."
RAID0 really isn't significantly faster for most users and apps. It's not due to the reason KF states, though-- HD performance is still very important to most apps.
But a RAID array doesn't increase performance across the board. Bandwidth goes up sharply...but latency rises as well. The only apps you'll see large gains in are ones that favor bandwidth much more than latency-- such as streaming huge files in a diskbound mode.
The Intel onboard raid controller isn't the best one out there. You can buy a dedicated card and scrape another couple percentage points out. A small gain for the dollars invested.
TheCimmerian - Friday, July 2, 2004 - link
This is my first post in this forum. Let me start by saying that anandtech.com appears to be a great place to get news, and I've enjoyed the articles so far.
While I agree that the "Raptor RAID0" article has some issues, I fail to see how so many of you can dismiss the results, and even the conclusions.
Anand has presented a real-world test of a commonly used RAID0 setup against commonly accepted benchmarks.
Frankly, I'm astounded by the number of "I don't care what his results show, my RAID0 setup is faster" comments. If your array IS faster, please post some evidence! There is way too much anecdotal assertion on this thread for my taste.
Honestly, I'm poised to purchase a couple of Raptors for a desktop RAID0 setup, based on the general yahoo about the performance benefits of RAID0. I was surprised and concerned to read this article, and the similar articles linked-to in this thread. As someone on the verge of dropping several hundred dollars for the supposed increased performance, I'd like to get to the truth about RAID0 for desktop users like myself.
I appreciate KF's thoughts on "why" RAID0 doesn't make a difference, and I'd like to hear more opinions and thoughts--especially opinions backed up by some kind of evidence!
Anand pretty much (except for the game tests) confined his test to synthetic benchmarks. Anyone have any results with actual applications and/or files?
Specifically, I plan(ned?) on using a dual 74Gb Raptor RAID0 array as a scratch/capture disk for DV work. DV files are huge (multiple Gb), and disk speed is important for smooth and error-free capture from a DV camera. Any thoughts?
Thanks for the dialog.
masher - Friday, July 2, 2004 - link
> "I can tell you for a fact that my 8 disk RAID 10 array, with 15k 73GB Cheetahs, running on a
> LSI 320-2, installed in a 133MHz PCI-X slot..."
Is it just me, or does Denial sound like he's trying to score chicks by bragging about the size of his array?
Oh, and BTW Denial...the servers your employer use don't count. You're either a liar for claiming you run this setup in your personal desktop...or an idiot if you're telling the truth.
Zar0n - Friday, July 2, 2004 - link
Nice article, but very incomplete. Next time, please include chipsets from VIA & Nvidia.
And include more modern drives, like the Seagate 200GB.
Also include tests with RAID 1.
No SCSI drives, keep it real; most ppl have SATA or ATA drives.
pookie69 - Friday, July 2, 2004 - link
There have been A LOT of issues/concerns raised by various ppl here regarding things like benches and configuration setups etc. that were left out of the article. I think it would be great if there was a follow-up article in which these issues were addressed and previous points further explained. Indeed, if ALL :) the issues were addressed in the said follow-up article, it may end up being the most comprehensive RAID report/review ever!
Anyways, something for the guys at AnandTech to think about - I think it's hard to overlook the fact that alotta ppl are feeling quite a bit of discontent at the way this article hit upon its (pre-concluded :) ) conclusion.
Denial - Friday, July 2, 2004 - link
"Then programmers (in some cases) will write their programs differently and the extra speed of RAID 0 will show more in real-life benchmarks."

Let me get this straight: you think apps today (I assume you mean desktop/office apps) aren't dependent enough on disk I/O, and should start to be written so they are more I/O bound?
I hope you don't mind, but I'm going to put this in the old sig library for use someday. :)
KF - Friday, July 2, 2004 - link
Denial: You are in denial. The results of Anand's simple-to-understand test are the same as the results that have been reported in overwhelmingly mind-numbingly-detailed reviews at specialized storage sites. This just happens to be about the IMPORTANT latest incarnation, which will no doubt put RAID capability on 90% of new computers, once the Intel production machine is rolling. Until Pariah opined, I wondered if I was the only one that understood those reviews, the way people seem to tout RAID 0 so relentlessly.Maybe this will be simple to understand: The authors of programs know what a slug their programs would be if they wrote them in such a way as to depend on the slowest link in the chain; namely the HD. Therefore HD accesses are avoided at all costs, and everything accessed is cached (in memory.) The OS (Windows) caches everything out-the-whazoo as well. In other words: all algorithms are selected to preserve locality. Therefore HD speed only shows up during intitialization and where there is no way to arrange locality. Therefore real-life benchmrks have a small dependence on HD speed.
Since HD I/O is interrupt driven, and transfers are DMA, a program does not have to just sit and wait until the I/O is performed. It can do useful work concurrently, provided the I/O algorithms look ahead. Then the data will be there (most of the time) before it is needed.
As for why the loading of games does not show a RAID 0 boost, I can only guess that they are doing a lot more than just loading HD data into memory. Possibly most of the HD I/O was done before the point that timing was done, and the slowness at that point is due to other operations. Pre-calculating known physics? Buffering major scenery changes?
I still think people could get a feeling of extra speed during times when the HD IS loading. It may only be a tiny part of the whole time a program is run, but you could notice it during that time.
Furthermore, if the past is a guide, every new capability that becomes commonplace gradually is made more and more use of, especially where Intel is concerned (AGP, 2xAGP, USB, DMA66, SSE). So Intel putting RAID 0 in its chipset means RAID 0 will be used more and more. Then programmers (in some cases) will write their programs differently and the extra speed of RAID 0 will show more in real-life benchmarks. Before that comes about, people will correctly warn that the extra money buys you very little. Fortunately for the rest of us, there are a few people willing to pay for that extra bit, which gets the ball rolling.
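The look-ahead overlap described a couple of paragraphs up (the program keeps working while DMA delivers the next chunk) can be sketched like this; the chunked-read interface is invented purely for illustration:

```python
import threading

def process_with_readahead(read_chunk, handle_chunk, n_chunks):
    """Overlap I/O and compute: while chunk i is being handled,
    a background thread is already fetching chunk i+1."""
    result = {}

    def prefetch(i):
        result[i] = read_chunk(i)

    t = threading.Thread(target=prefetch, args=(0,))
    t.start()
    for i in range(n_chunks):
        t.join()                      # wait for chunk i to arrive
        data = result.pop(i)
        if i + 1 < n_chunks:          # kick off the next read...
            t = threading.Thread(target=prefetch, args=(i + 1,))
            t.start()
        handle_chunk(data)            # ...while this chunk is processed

# Toy usage: the "disk" just returns each chunk's own index.
handled = []
process_with_readahead(read_chunk=lambda i: i,
                       handle_chunk=handled.append,
                       n_chunks=5)
```

With this kind of double buffering, raw disk bandwidth only matters when the handler finishes before the next chunk lands; otherwise the disk time is hidden entirely, which is exactly why faster storage often fails to show up in application benchmarks.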
Pumpkinierre - Friday, July 2, 2004 - link
#45, #46: Generally, the reviews I've seen on RAID1 have the read rates equal to or a little more than a single drive, while RAID0 shows 30%+ improvement. Why? I don't know. To me, in the read part of the deal, it should be the same in RAID 0 or 1.

With my suggestion of virtual striping, I also suggested variable stripe size in a previous post (not possible in RAID0, but possible in RAID1 because the stripes are virtual). Here a smaller stripe size could be used for smaller data requests and a larger stripe for bigger files or sequential data requests. This would speed up reads significantly and give a net advantage over RAID0, which is limited to one stripe size at inception. The controller, on request for a particular data file, would optimise the size of the stripe based on the request. For desktops, where data throughput can range from a few KB to the gigs, it would be perfect. This seems possible to me, but I haven't heard of anyone implementing it.
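To be clear, no shipping controller does anything like this, so the following is purely a sketch of the policy being proposed: requests smaller than one stripe go to a single disk (one seek), while larger requests alternate stripe-sized chunks between the two mirrored copies. The stripe size and interface are invented.

```python
def plan_mirror_read(request_kb, stripe_kb=64):
    """Return how many KB to read from each disk of a mirrored pair.
    Small requests hit one spindle; large ones are split across both."""
    if request_kb <= stripe_kb:
        return {0: request_kb, 1: 0}     # one seek, one disk
    per_disk = {0: 0, 1: 0}
    chunk, remaining = 0, request_kb
    while remaining > 0:
        size = min(stripe_kb, remaining)
        per_disk[chunk % 2] += size      # alternate between the two copies
        remaining -= size
        chunk += 1
    return per_disk
```

Because the mirrors hold identical data, the "stripe" boundary can be chosen per request instead of being fixed at array creation, which is the supposed advantage over RAID0 here.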
Pariah - Friday, July 2, 2004 - link
If you look way back at comment #37, you will see my last paragraph is basically exactly what you said in your last paragraph. I agree completely that the article stunk, and that basically all the storage-related articles on this site throughout its history have stunk. I just think that your nitpicking of his usage of the word RAID in the conclusion was one of the least important problems in the article, as anyone with half a brain knew what he was talking about when he said that, regardless of whether it was a valid point or not.

Denial - Friday, July 2, 2004 - link
As you can see by what I wrote above, I agree with you. What he meant and what he said are different things, though. The fact that he left out too much of the system's configuration for us to see whether it was configured properly means that the test cannot be replicated and is therefore useless. Whatever you think regarding IDE RAID 0, which I have no experience with, you cannot dispute the fact that leaving out critical configuration settings is a no-no for a review such as this.

When he runs those gaming tests for one of those new overpriced video cards, he includes quite a bit more detail in the review, yes? Suppose he didn't say what resolution the game was run at, AA on or off, filtering level, etc. Would you be able to verify his results? No. Why does he spend so much time and put so much detail into a review for items as "important" as a 3D card, then completely blow it on this review? I don't have an answer to that, do you? My assumption is that he is not well versed in modern storage technologies and how the file system and many other details play a major role in overall performance, in which case he should have had somebody else perform the tests and write the article, as he does with many of the other articles on the site.
Pariah - Friday, July 2, 2004 - link
Denial, you need some help in determining "target audience." 99% of home users using RAID or thinking about it are likely thinking about a 2 drive RAID 0 array. Those 99% of users are who this article is targeted at. And when he says RAID in his conclusion, the setup he tested (2 drive RAID 0) is what he means. If you are thinking about going with something more complicated/advanced, then this article was NOT for you.
Denial - Friday, July 2, 2004 - link
#50, did you not read the article?

"If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop."
This statement is so far-fetched it is ridiculous. This might be applicable to Raptors on an Intel onboard garbage RAID controller, but the above is a general statement. Maybe if it was changed to
"If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for an ATA RAID-0 array on a desktop computer using an Intel onboard RAID controller."
Written this way his conclusion might be correct, but the way he wrote it, it's flat out WRONG.
Pariah - Thursday, July 1, 2004 - link
#48, your post was even more worthless than Anand's article. At least Anand's article had a setup that was remotely applicable to what the users he was targeting with the article would have.

Despite what numerous people seem to think in the comments, for a 2 drive ATA RAID 0 array, the controller you use is about as irrelevant an issue as there is. $500 card, free onboard, hardware, software, me using a piece of notebook paper and crayons to calculate drive assignments for data: none of it will make any noteworthy difference in performance.
Denial - Thursday, July 1, 2004 - link
The above comments about stripe size are true as well. Depending on the app, a 64k array stripe matched with a 64k stripe in NTFS will produce much different results than the default 4k(?) that windoze uses on NTFS partitions. We're talking a HUGE *HUGE* **HUGE** difference in performance here. HUGE!!!! I've never used one of those onboard raid solutions; do they even allow the option of setting the stripe size? I wouldn't be surprised if they didn't, which would make this review even that much more useless.

Denial - Thursday, July 1, 2004 - link
What kind of scientific testing was this? ALL RAID 0 is useless because an Intel onboard RAID solution sucks? Why did Anand waste his time on this?

I can tell you for a fact that my 8 disk RAID 10 array, with 15k 73GB Cheetahs, running on an LSI 320-2, installed in a 133MHz PCI-X slot on my dual Xeon 7505 motherboard (Vero), is just a *tad* faster than a single drive setup. ;)
Anand must be on crack making a blanket statement like that. Is this the best he can give us after a few years of college? What a sorry article.
Zebo - Thursday, July 1, 2004 - link
#1, that's called the placebo effect! Same goes for any other "feels faster" hardware, such as the A64. The user spent the money, expecting a return, and it gave a false positive.

I have been droning on about how RAID 0 is a worthless waste of money since I read this:
http://faq.storagereview.com/tiki-index.php?page=S...
And thanks to Anand's wonderful work, we have corroboration.
TrogdorJW - Thursday, July 1, 2004 - link
As others are stating, RAID 1 on any *decent* RAID controller should have faster read rates. www.StorageReview.com has shown this in a recent article. So the statement "We won't be benchmarking RAID-1 here because, for the most part, there's no performance increase or decrease", while true in part (you didn't perform the benchmarks), was a bad decision, as performance should differ from a single drive. Of course, for single-user usage, RAID 1 would be even less useful than RAID 0 - except for adding redundancy.

MajorKong - Thursday, July 1, 2004 - link
It's a shame that a RAID 1 array with the same Raptor II (and 7200 RPM drives) wasn't benchmarked. The read performance of RAID 1 can be as good as that of RAID 0 on a good controller. The only case where that's not true is if a lot of writing (which is slower than on either RAID 0 or a single disk) is being done to the array while reads are in progress.

Kaido - Thursday, July 1, 2004 - link
Wow, this article's conclusions were nice to know... I had wanted to do this. My other idea was to get three of them in RAID 5... how would that compare to a single drive? (It'll take a bit more saving tho lol)
Also, what's the best RAID controller, or does it matter?
ranger203 - Thursday, July 1, 2004 - link
How come they didn't post the RAID setup? I mean, anyone that uses it knows there is a difference between a 64K stripe and, like, 512K...

Pumpkinierre - Thursday, July 1, 2004 - link
Yeah, I agree something's wrong with this article. Anand needs to check his setup closely.

Insomniac, #14, you were right the first time: if you had a slower disk in an intelligent RAID1, you ought to read from the faster disk exclusively (I don't say that present controllers can). Also, your suggestion on striped reads in RAID1 is good, and mentioned also by Arth 1 in #34. But as far as I am aware, inexplicably, RAID1 doesn't do this anymore (perhaps on more expensive controllers); see:
http://arstechnica.com/paedia/r/raid-2.html
"I should note that this discussion is based on the more recent, er, modern definition of RAID 1. The original model for this config actually included striping (as in RAID 0), and not simply "disk duplexing." In the end, however, the duplexing model is what the industry uses, and RAID 1 is synonymous with that. Therefore, notice that RAID 1's contribution to the world of storage technology is the principle of data mirroring"
But they do say earlier and strangely:
"Now here's an oddity: a read transaction can theoretically occur twice as fast as on single disk. Hence RAID 1 is often used on low-end web servers. The read performance is standard, if not better than single disk performance, and the poorer write performance is largely irrelevant on most web servers (save those doing transactions, of course). RAID 1 configs are great for mid-volume FTP servers as well."
From what I gather, modern implementations of RAID1 are only a little better at reads due to the extra buffering and faster controllers. In terms of RAID, a RAID1 with virtually striped disks is the way to go for a gamer. It ought to give you faster loads as well as backup, at the cost of slightly slower writes (not a great problem for gamers) and smaller storage (doesn't matter - cheap big HDDs nowadays). Yet it seems to be all but ignored by the manufacturers and IT industry as only relevant to servers.
madgonad - Thursday, July 1, 2004 - link
As much as I usually agree with articles posted here, I think the reviewer wasn't thinking clearly.

My point is that RAID-0 is most obviously beneficial when working with LARGE files: big Photoshop TIFFs, RAW audio files, video, and 3D graphics. Running tests that make use of office applications isn't going to demonstrate the array's function. Kind of like driving a Ferrari in a residential neighborhood isn't going to demonstrate any real performance.
My computer has three identical 160GB drives. Two are in a RAID-0 array and the third is my 'mirror' where art and other important files are backed up. Pulling the exact same file from the working array takes about half the time as it takes when pulling it off of the backup drive without the array. Exactly what would be expected. If you need more speed in loading/saving large files then RAID-0 is for you. Since none of the tests in the article test this obvious advantage I would have to say that the test is either flawed or the writer didn't think through what a RAID-0 array is best at.
binger - Thursday, July 1, 2004 - link
Those that whine about Anand not using a hardware RAID controller in this test, check out the storagereview article @ http://www.storagereview.com/articles/200406/20040... They use a Promise hardware controller and reach the same conclusion.
thatsright - Thursday, July 1, 2004 - link
I waited a long time to see an article like this on AT, because I can expect a fair-handed and comprehensive review. Not this TIME!! This article has got to be the most shoddy, rushed piece I have ever read on AT. It really seems like a Tom's Hardware article.
qooleot - Thursday, July 1, 2004 - link
In this article today you guys write:

"If you haven't gotten the hint by now, we'll spell it out for you: there is no place, and no need for a RAID-0 array on a desktop computer. The real world performance increases are negligible at best and the reduction in reliability, thanks to a halving of the mean time between failure, makes RAID-0 far from worth it on the desktop."
And in another article:
Recommended: Dual Western Digital Raptor 74GB 10,000RPM SATA in RAID 0 Configuration
Pariah - Thursday, July 1, 2004 - link
"I really wish some regular 7200 RPM drives had been used, considering someone who can afford a 74GB Raptor won't care about the costs of RAID anyway."

Wouldn't make any difference; you can extrapolate the performance of 7200RPM drives by looking at the improvement of the Raptor from one drive to 2 drives. You're not going to see a 5% increase with the Raptor and a 45% increase with 7200RPM drives, or a -30% loss. You'll see just about the same increase/decrease.
"I thought if it was smart, it would use both drives and improve performance."
True, IF it was smart. Unfortunately, basically every ATA/SATA RAID controller does NOT load balance reads in RAID 1. 3Ware controllers do and Highpoint controllers that advertise RAID "1.5" support do as well, I'm not aware of any others that do, though there may be random other models.
"If that wasn't the case, I wondered if you could choose which drive it read from."
No, there is no "drive affinity" setting for RAID arrays.
"But the one really fundamental thing wrong with the whole comparison is that you didn't actually compare RAID0 with a decent RAID 0 card like HighPoint RocketRAID."
Changing the controller would make pretty much no difference whatsoever for the configuration that was tested in the article (2 drive RAID 0 array). Software cards will perform just as well as a $500 3Ware card in 2 drive RAID 0 arrays. As you add drives and use increasingly complex RAID levels, then the controller will play a significant role in overall performance.
"Why bother to waste space describing in detail the differences between RAID 1 & 0 if no benchmarks from a RAID 1 are going to be included in the article??"
I agree, didn't understand that myself.
"AND the differences in CPU utilization between them. Most of the onboard soloutions are actually SOFTWARE RAID's as compared to a true dedicated hardware device."
For RAID 0 and even more so for RAID 1, CPU utilization is irrelevant as far as the controller is concerned, because there are practically no calculations necessary for a 2 drive RAID 0 array. For RAID 1, there really aren't any at all.
"I use 2 Seagate 15k.3's in RAID0 on a Adaptec 39320 Host RAID device. It sure feels faster than a single drive to me."
Can you say placebo? Sorry to hear you wasted your money on a garbage controller like that. Adaptec controllers are widely known to be horrendous performers. Adaptec is the only company I'm aware of that has released RAID controllers, for any interface, that actually don't perform better in RAID 0 -- even in low-level benchmarks -- than they would with just a single drive.
"Onboard RAID (and most cheap raid cards such as Promise) are technically software RAID cards and usually do not offer any speed increases over 5%. True hardware RAID cards offer speed increases at about 40% (as shown in the past)."
Maybe 5+ years ago, with a Pentium 90 and non-DMA ATA drives. Not true at all anymore. Moving to hardware for a simple RAID 0 array will net you nothing in additional performance.
"I was surprised not to see any Iometer benchmarks. IOPS and response times are king in determining disk performance. Iometer is still the best tool, as you can configure workers to match typical workloads."
IOMeter is a glorified access time benchmark that doesn't give anything in the way of useful applicable results for home users.
Though the results of the article are not surprising, it was still a pretty poor read overall. Anandtech needs some more work on its storage articles if it wants to catch up to other sites like SR, Tech-Report and Digit-Life. The overall knowledge displayed in articles is noticeably lacking, well below the standards set in its CPU and video card articles.
ir0nw0lf - Thursday, July 1, 2004 - link
Why does this site blast a Raptor RAID-0 array in this article so badly, and recommend a Raptor RAID-0 array in their high-end buyer's guide so highly? Seems like a massive 180-degree shift to me. Very curious...

rjm55 - Thursday, July 1, 2004 - link
The major problem I have with this article is that the 20% to 38% improvement in IO operations with RAID 0 in the first benchmarks is dismissed as "not much", and then the "proof" that RAID 0 doesn't improve performance is Winstones and SysMark, which are sequential or linear benchmarks. I can also tell you that Winstones and SysMark provide about the same scores with onboard Intel video as they do with an X800 XT, and this does NOT prove to me that high-end graphics are a waste of money; it just proves that Winstones and SysMark are not a good tool to measure graphics performance.

The review seems far too strained to prove a pre-concluded idea, IMHO, and really doesn't prove anything except that Winstones and SysMark are terrible tools for comparing hard drive performance. The tests in Winstones, as I understand them, are not my real world; they are an office-worker-running-one-task-at-a-time world. I DO multitask on my computer, as do most users today, and this is where RAID 0 DOES make a difference. Where are the benchmarks that compare performance in multitasking situations?
Arth1 - Thursday, July 1, 2004 - link
The article contains several factual errors.

RAID 1, for example, does have *read* speed benefits over a single drive, as you can read one block from one drive and the next block from the other drive at the same time.
Also, what was the block size used, and what was the stripe size?
Was the block size doubled when striping (as is normally recommended to keep the read size identical)?
Since non-serial-ATA drives were part of the test, how come THEY were not tried in a RAID? That way we could have seen how much was the striping effect and how much was due to using two serial ATA ports.
All in all, a very useless article, I'm afraid.
qquizz - Thursday, July 1, 2004 - link
Hear, hear. What about more ordinary drives?

Kishkumen - Thursday, July 1, 2004 - link
Regarding Intel Application Accelerator, I would like to know if that was installed or not as well. It seems to me that could potentially affect performance quite a bit. But perhaps it doesn't make a difference? Either way, I would like to know.

pieta - Thursday, July 1, 2004 - link
It's funny to see mention of ATA and performance. If you really want disk performance, get some real SCSI drives. Without tagged command queuing, RAID configurations aren't able to reach their full potential.

It would be interesting to see hardware sites measure SCSI performance. Sure, ATA has the price point, but with 15K SCSI spinners so cheap these days, the major cost is the investment in the HBA. With people dropping 500 bucks on a video card, why is it so inconceivable to think power users wouldn't want to run with the best I/O available?
I was surprised not to see any Iometer benchmarks. IOPS and response times are king in determining disk performance. Iometer is still the best tool, as you can configure workers to match typical workloads.
Show me a review of the latest dual-ported Ultra320 hardware RAID HBA striped across four 15K spinners. Compare that with a 2 drive configuration and the SATA stuff. Show me IOPS, response times, and CPU utilization. That would be meaningful, as people could better justify the extra $200-300 cost of going with a real I/O performer.
meccaboy858 - Thursday, July 1, 2004 - link
Nighteye2 - Thursday, July 1, 2004 - link
Of course, RAID 0 makes little sense for Raptors, which are already so fast that they hardly form a bottleneck.

RAID 0 makes more sense for slower, cheaper HDs... try 2 WD 80GB 8MB cache hard disks, for example. Together they are cheaper than a Raptor, but I expect performance will be very similar, if not faster.
Taracta - Thursday, July 1, 2004 - link
I am tired of seeing these RAID 0 articles just throwing 2 disks together, getting results that are contrary to what is expected, and not digging deeper into what the problem is. I am only posting my comment here because of my respect for this site. Drive technology and methodology have to play a part in any discussion of RAID technology.

The principle behind RAID 0 is sound: the throughput is a multiple of the number of drives in the array (you will not get 100%, but close to it). If you're not getting this, it should be examined as to WHY. One of my suspicions is that incorrect setup of the array is the primary culprit. How is information written to/from the drive, the array, and the individual drives in individual arrays? What are the cluster and sector sizes? How is the information broken up by the controller to be written to the array?

Take for example an array where each drive has a minimum data size of 64 bits, so the array sizes are 128 bits for 2 drives, 192 bits for 3 drives, and 256 bits for 4 drives. In initializing your array, do you initialize for 64 bits, 128 bits, 192 bits or 256 bits? Does it matter? Say you initialize for 64 bits: does the array controller write 64 bits to each drive? Does it write 64 bits to the first drive and 0 bits to the others (null spaces, wasting and defeating the purpose of the extra drives) because it is expecting the array-size number of bits (e.g. 128 bits for 2 drives)? Or does it split the 64 bits between the drives, wasting space and killing performance because each drive allocates a minimum of 64 bits?

I was waiting for someone to examine in detail what's happening. Xbitlabs came close (from looking at the charts) - so close they could almost taste it, I am sure - but still jumped to incorrect reasoning. I know I am rambling, but in short: the premise of RAID arrays is sound, so why is it not showing up in the results of the testing?
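For what it's worth, the address arithmetic of plain RAID 0 striping is simple to write down (the stripe size and disk count here are arbitrary examples, not any particular controller's defaults):

```python
def map_lba(lba, stripe_blocks=128, n_disks=2):
    """Map a logical block address to (disk index, block offset on that disk).
    Consecutive stripes rotate round-robin across the member disks."""
    stripe_no = lba // stripe_blocks
    disk = stripe_no % n_disks
    offset = (stripe_no // n_disks) * stripe_blocks + lba % stripe_blocks
    return disk, offset

# A read spanning two stripes lands on both disks in parallel;
# a read inside one stripe touches only a single disk.
```

This makes the setup question concrete: whether a given request gets the promised 2x throughput depends entirely on how the request size and alignment line up with `stripe_blocks`, which is exactly the kind of detail the reviews leave out.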
RDMustang1 - Thursday, July 1, 2004 - link
AMDScooter is right on. Onboard RAID (and most cheap RAID cards, such as Promise) are technically software RAID cards and usually do not offer any speed increases over 5%. True hardware RAID cards offer speed increases of about 40% (as shown in the past). This varies, of course, with the implementation, but on average hardware RAID has been shown to increase performance much more than these cheap RAID implementations. Regs needs to look into what he's talking about more, because performance advantages are not lost in advertising.

pio!pio! - Thursday, July 1, 2004 - link
I do some video editing, and I'm wondering about the performance gain of reading a 4GB file and directly writing it again (i.e. a copy) in a RAID or non-RAID configuration. I'm using a single HDD right now, but I'm thinking of going to 2 HDDs and reading from one HDD while copying to the other... but I'm wondering if a RAID configuration will offer similar advantages?

mkruer - Thursday, July 1, 2004 - link
I am building myself a new system this year, and I am seriously thinking of getting 2x250GB Western Digital Caviars (SATA) and making them into a RAID 1, for redundancy purposes. I already knew that RAID 0 offers little real-world improvement, but I would like to see how RAID 1 compares to RAID 0 and just a single drive. I have never understood why you bother comparing 8 normal drives and one of them in RAID 0.

Why not rerun the tests with just a single type of drive: one standard (stand-alone), one RAID 0 and one RAID 1? All things being equal, this should give a better indication of just how well any drive should do in those configurations, using that RAID chip. (Yes, there will be some small differences, but they should end up being negligible.)
I would recommend choosing your favorite three drives, and doing a comparison of each RAID version on that.
eastvillager - Thursday, July 1, 2004 - link
Ok, you kind of lost me when you didn't install the Intel Application Accelerator...

AMDScooter - Thursday, July 1, 2004 - link
I hope this does not come off as a bash, as the review was informative to some extent, but I feel it is lacking in several areas. Why bother to waste space describing in detail the differences between RAID 1 & 0 if no benchmarks from a RAID 1 are going to be included in the article?? And as mentioned earlier, using only a single onboard RAID solution has some merit for parity in benchmarking, but is hardly definitive. This would have been a more well-rounded review just by adding some RAID 1 benchmarks along with benchmarks from different RAID IDE/SATA controllers AND the differences in CPU utilization between them. Most of the onboard solutions are actually SOFTWARE RAIDs, as compared to a true dedicated hardware device.

It would also have been nice to see some SCSI RAID benchies tossed in the mix. SATA drives are almost in the same price range as entry-level 15k RPM SCSI U320 drives. While SCSI RAID is not on any normal desktop mobos, many users purchase separate RAID cards anyway. I use 2 Seagate 15k.3's in RAID0 on an Adaptec 39320 Host RAID device. It sure feels faster than a single drive to me ;)

MrMoo - Thursday, July 1, 2004 - link
This article, and the previous RAID-related ones I've read here, have all seemed to be the opposite of the results I've seen with my setup. I have a Promise TX2000 RAID controller and four IBM/Hitachi 180GB 7200rpm drives.

Originally I only had one of the Hitachis, and when I went to a 2 drive RAID 0 the performance increase was definitely noticeable. I won't bother repeating any benchmarks I have of it, because I don't feel they really tell anything, nor do I still have any records of them. But the places I could see noticeable improvements were in application loads, game loads, and most significantly when Windows would boot up, especially once the Windows install had become old and lots of apps were all trying to load at the same time.
Then last fall I purchased another 2 Hitachi drives and decided to test out a 4-disk RAID 0. Now did that thing fly: application loads were almost instantaneous for all but the largest programs, and my performance was limited almost entirely by the PCI bus (oh how I hate thee), as I was achieving average transfer rates of 120 MB/s as reported by Sandra and HDTach.
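For what it's worth, that ~120 MB/s ceiling is consistent with a classic 32-bit / 33 MHz PCI bus. A back-of-envelope sketch (the bus figures and the 90% efficiency factor are assumptions, not measurements from this system):

```python
# Why a 4-drive IDE array might top out around 120 MB/s on a
# 32-bit / 33 MHz PCI bus (assumed figures, purely illustrative).

BUS_WIDTH_BITS = 32
BUS_CLOCK_HZ = 33_000_000

# Theoretical peak: one 32-bit transfer per bus clock.
peak_mb_s = BUS_WIDTH_BITS / 8 * BUS_CLOCK_HZ / 1_000_000
print(f"Theoretical PCI peak: {peak_mb_s:.0f} MB/s")  # 132 MB/s

# Real transfers lose cycles to arbitration and protocol overhead;
# ~90% efficiency is a commonly quoted ballpark, not a measured value.
practical_mb_s = peak_mb_s * 0.9
print(f"Practical ceiling:   {practical_mb_s:.0f} MB/s")  # ~119 MB/s
```

Which lines up with the 120 MB/s Sandra and HDTach reported, so the drives themselves likely had more to give.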
Then recently (yesterday, to be exact) I purchased 2 SATA Hitachi 250GB drives and hooked them up as RAID 0 on my onboard SATA RAID controller (a Silicon Image 3112 controller on my Albatron KX18D). Here I would achieve about 65 MB/s transfer rates. This seems on par with what I would expect, but then I noticed that CPU usage with the SATA RAID was around 55%, while it was only about 5% with the Promise IDE RAID.
Even though the average transfer rate of the new array is greater than that of one drive, performance with programs running off of it doesn't seem any faster than a single drive.
My only thought is whether these onboard RAID solutions use up so much overhead that the performance increase is negligible. All my experience with them seems to say RAID 0 on them is useless, but RAID 0 on a dedicated controller seems to increase performance drastically.
Regs - Thursday, July 1, 2004 - link
*what*Regs - Thursday, July 1, 2004 - link
I doubt a better RAID card will offer any more performance. Maybe another 1-2%. 2% is what separates quality these days in advertising.ep0ch - Thursday, July 1, 2004 - link
I usually like your articles Anand. But the one really fundamental thing wrong with the whole comparison is that you didn't actually compare RAID0 with a decent RAID 0 card like HighPoint RocketRAID.Insomniac - Thursday, July 1, 2004 - link
I just realized performance is only as fast as the slowest drive. So pairing a Raptor up with a cheap 80 GB drive is a waste.Insomniac - Thursday, July 1, 2004 - link
I was wondering how RAID1 reads back data. I thought if it was smart, it would use both drives and improve performance. If that wasn't the case, I wondered if you could choose which drive it read from. That way, you could get a WD Raptor II and a low-cost 80GB hard drive to pair up. You get the redundancy and speed of the Raptor for a lower cost. What about RAID5? (I know the ICH5/6 doesn't support it, but I thought there were some chipset makers that did.) I would like to see what that brings to the mix. Given the choice right now, I'd take redundancy over performance. Maybe RAID 5 can give you both for less than 0+1.Pollock - Thursday, July 1, 2004 - link
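For anyone wondering how RAID 5 gets redundancy without mirroring: each stripe stores a parity block that is the XOR of its data blocks, so any single lost block can be rebuilt. A minimal sketch (purely illustrative; real controllers rotate parity across the drives and work on much larger blocks):

```python
# Toy illustration of RAID 5's parity idea: parity = XOR of the data
# blocks in a stripe, so XORing parity with the survivors rebuilds
# whichever single block was lost.

def make_parity(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    # XOR is its own inverse: parity ^ survivors == missing block.
    return make_parity(surviving + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
p = make_parity(stripe)

# Simulate losing the drive that held "BBBB":
recovered = rebuild([stripe[0], stripe[2]], p)
print(recovered)  # b'BBBB'
```

This is why RAID 5 only costs one drive's worth of capacity for redundancy, versus half the capacity for RAID 0+1.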
I really wish some regular 7200 RPM drives had been used, considering someone who can afford a 74GB Raptor won't care about the costs of RAID anyway. =P Besides, to me it seems like Raptors already perform so well that it's hard to find any performance gain anyway. I was also under the impression that a lot of people with SATA drives in RAID 0 were actually getting much more noticeable performance gains; i.e. outperforming lone Raptors. Well, whatever.goku21 - Thursday, July 1, 2004 - link
I noticed that they came to the conclusion of only using 2 drives in a RAID setup, but in my experience, the more drives, the greater the supposed increase in performance. Perhaps they should revisit this with 4 Raptors in a RAID setup.kuk - Thursday, July 1, 2004 - link
Just one small thingy ... why is the manufacturer stated in the article summary as "3Com/U.S. Robotics"?parrybj - Thursday, July 1, 2004 - link
While your overall disk-bound throughput may be higher, seek times are still only as fast as the slowest drive in the array. Since seek time is a more important desktop performance metric, I would think there would be very little benefit to doing this.Marlin1975 - Thursday, July 1, 2004 - link
Well, the review was nice if you are thinking of running 2 Raptors on ICH5/6 SATA ports, but what about the other 99% of us that may use VIA, SiS, etc. and/or other 7200 rpm hard drives?Matthew Daws - Thursday, July 1, 2004 - link
I'd be interested in seeing how using RAID0 with older drives, or one old drive and a newer drive, works out. If you're upgrading your motherboard, then given that RAID comes "for free", it could be a good way to save money by buying a second, smallish hard-drive, and using your old hard-drive with this new one in parallel...parrybj - Thursday, July 1, 2004 - link
Very good article. The results are not surprising. I have one comment about RAID1. While in theory it is simply a data redundancy mechanism, in practice there are performance benefits. Any good RAID1 algorithm will use read optimizations that allow for parallelism during read requests. Thus, under the right conditions, most RAID1 arrays will achieve higher read IOPS than a single drive. Also, there may be a performance hit on writes, due to the fact that writes will only be as fast as the slowest drive.djm2cmu - Thursday, July 1, 2004 - link
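The read optimization described above can be sketched as a toy scheduler: both mirrors hold identical data, so independent read requests can be split between them. This is a simplified model (round-robin here; real firmware may instead pick the drive with the shorter queue or nearer head position):

```python
# Toy model of RAID 1 read parallelism: distribute outstanding read
# requests across the mirrors round-robin. Writes gain nothing, since
# every write must still hit both drives.

from itertools import cycle

def schedule_reads(requests, n_mirrors=2):
    """Assign each read request to a mirror, round-robin."""
    mirrors = cycle(range(n_mirrors))
    return [(req, next(mirrors)) for req in requests]

reads = ["blk 17", "blk 903", "blk 42", "blk 511"]
for req, drive in schedule_reads(reads):
    print(f"{req} -> mirror {drive}")
```

With 4 outstanding requests, each mirror serves 2, which is where the higher random-read IOPS under the right conditions comes from.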
#4: Excellent introduction to all the common RAID levels here: http://www.acnc.com/04_01_00.htmlnofuse - Thursday, July 1, 2004 - link
This article doesn't seem to be up to the standards I've come to expect from Anandtech. It would be more fair to say "Intel's onboard RAID 0 solution offers no performance gain." I'd be interested to see results from other RAID controllers. You can't take one product and make a blanket comment like "RAID 0 is not worth it." That would be like me reviewing an NVIDIA Vanta graphics card and saying "3D acceleration is not worth it."
Runamile - Thursday, July 1, 2004 - link
I liked the diagrams for RAID0 and 1. It would be cool to see 3, 4, 5, and 10 drawn out too, but that wouldn't have been relevant to the article.ciwell - Thursday, July 1, 2004 - link
Excellent article...and for those who think it is faster experientially: it is all in your head. ;)SilverBack - Thursday, July 1, 2004 - link
I'm using two RAID 0 arrays: an A8V mobo with a Promise 378 controller and the onboard VIA as well.
I prefer the system this way. It just makes the whole windows experience faster.
RebolMan - Thursday, July 1, 2004 - link
Any subjective comments on whether the system using RAID-0 feels any smoother? A lot of people comment that P4s with Hyper-Threading produce a system that just feels more responsive, regardless of whether it's really any faster. I find the best thing to do (under Windows) when you've got two drives hooked up is to move your virtual memory onto the one which you use less. There are all sorts of tricks you can use to distribute your system load without necessarily using RAID.
wanosd - Wednesday, August 4, 2010 - link
I recently did a test by copying a few GB of data from a WD 160 GB drive to another WD 160 GB drive. It took about 4 mins. I then renamed the folder that I just copied and then copied it back to the original drive, and again got about the same time, with only a few seconds' difference.
I timed my boot from Windows from the time the OS takes over, all the way to the desktop, and it took about 35 seconds. I do NOT have any bloatware or junk on my system.
Finally, I enabled RAID 0 for these two drives. Now the same version of Windows boots up in about 25 seconds (not as fast as you'd think). Also, copying the same folder from my 3rd hard drive to my RAID 0 drives is taking 1 minute and 45 seconds. The seek time itself may be still slow, but once you get the data going, it'll definitely help out.
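The speedups implied by those timings are easy to work out, since the folder size cancels out of the ratio (the durations below are the approximate figures reported above):

```python
# Rough speedups implied by the copy and boot times reported above.
# The folder size cancels out, so only the two durations matter.

single_drive_s = 4 * 60      # ~4 minutes, drive-to-drive copy
raid0_s = 1 * 60 + 45        # ~1:45, 3rd drive to the RAID 0 array

copy_speedup = single_drive_s / raid0_s
print(f"Copy speedup: {copy_speedup:.2f}x")  # ~2.29x

boot_single_s, boot_raid0_s = 35, 25
print(f"Boot speedup: {boot_single_s / boot_raid0_s:.2f}x")  # 1.40x

# The copy gain is even slightly above the ideal 2x for two stripes,
# likely because the source and target drives differ between the two
# tests; the boot gain is much smaller, consistent with booting being
# seek-bound rather than throughput-bound.
```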
qepsilonp - Monday, October 31, 2011 - link
I play EVE Online, and at any given time I can be running 3 clients of EVE, a music player / video / or a text-to-speech program, a browsing client with usually 5-7 tabs, and sometimes I even want to be able to extract files at the same time. I think for that kind of usage, RAID 0 would be very worth it. Did you even consider that a lot of users do multiple demanding tasks at once?qepsilonp - Saturday, November 5, 2011 - link
While yes, when running one application RAID 0 is usually useless, most of the time I am running 2-3 clients of a heavily HDD-reliant game, where it sometimes takes a while to fetch the files for the 3D images, and because of that they won't show up on the screen for sometimes 5 seconds. I know it's not anything else but the HDD, because I have a new computer and the only piece of hardware that hasn't been updated is the HDD, and I'm still getting it. If I were able, I would also be running an HD movie, or have my computer read a book to me with a text-to-speech program, or be playing music, and also maybe extracting something with WinRAR. You can't tell me that with all those I/Os, RAID 0 wouldn't help at all. Considering the game I'm playing is EVE Online, when I jump in, a gate can have 1000 ships on it; that's maybe 32 different ship types which have to be fetched from the HD, which is probably something like 100MB, times 2-3 clients; that's 64 to 96 I/Os, and that's if there aren't multiple files that need to be called up for 1 ship type. So yeah, I think on the desktop, for power users, there is a place for RAID 0.