Original Link: https://www.anandtech.com/show/545



It is normal for users to have their own preferences as to which specific components they'd like to see in their systems; we see this on a regular basis – there are some users that prefer 3dfx to NVIDIA, AMD to Intel, or on the software platform side, Microsoft to Sun.  But never has there been a situation where so much hatred for a particular company and its products has been so freely expressed and so widely accepted by end users and members of the media alike. 

Visit the newsgroups, the chat rooms, or even our own AnandTech Forums, and you will find that less than 1% of the responses to the question "Should I use RDRAM in my system?" contain the word "yes" or any derivative thereof. 

Rambus has become the outcast of our community: no one wants anything to do with the company, and yet seemingly everyone has something to say against it whenever its flagship product, Direct Rambus DRAM (RDRAM), is brought up. 

In an effort to help clear up some of the major misconceptions about RDRAM, Rambus contacted us and asked to meet, with the understanding that they would be able to answer most of our questions and give us an update on exactly where Rambus as a company and RDRAM as a product stand today, as well as where they see themselves and their product(s) in the near future.

Invalid Reasons to Hate Rambus

As we just mentioned, there is a lot of hatred towards Rambus.  While some of it is fueled by problems with the company and their technology, a considerable amount of it is fueled by reasons that have no bearing on the quality, functionality or performance of the company's products. 

It is these reasons that we ask you to throw out before proceeding any further with this article.  Remaining open minded and willing to weigh the pros and cons of both sides of this controversial story is the only way to move forward. 

It is partially Rambus' own fault for not stepping forward earlier when confronted with allegations that have since become ingrained in the minds of the community as a whole, claims such as "Intel owns Rambus" and "AMD doesn't hold a Rambus license," both of which are completely false.  Surprised?  Keep your mind open and let's see exactly how evil Rambus really is, and whether the company has a chance of succeeding in the future.



The Problem

The one thing that cannot be argued is that today's PC is hungry for more memory bandwidth.  While it can be disputed whether the move from the 100MHz memory bus standard to 133MHz yields a significant performance increase, consider two examples: the success of NVIDIA's DDR GeForce over the originally released GeForce equipped with SDR SDRAM, and the performance hit the newly released GeForce 2 GTS takes in 32-bit color, where storing twice as much information chews up its "limited" 5.3GB/s of memory bandwidth.  Look at those and you'll quickly agree that conventional SDRAM simply cannot offer the memory bandwidth necessary for today's applications and games.

But let's get back to a more relevant question: Is there a need for a faster memory solution?  Although we could give Intel's estimations for the bandwidth requirements by the end of 2000 and into 2001, we'll go one step further and provide you with data that AMD is publishing as to what kind of system/memory bandwidth requirements systems will begin needing towards the end of 2000 and into 2001.

The Athlon, currently running off of the 1.06GB/s of memory bandwidth provided by PC133 SDRAM, will migrate to a 133MHz DDR FSB (effectively running at 266MHz) toward the end of this year.  This translates into about 2.1GB/s of bandwidth present between the CPU and the North Bridge of the chipset (this path is otherwise referred to as the FSB).  Combine that with the 1.06GB/s required by AGP 4X and 64-bit PCI running at 66MHz (528MB/s), and you get about 3.7GB/s of bandwidth that the memory interface must be capable of supplying or your AMD based system won't be able to operate at its peak.  We just mentioned that the Athlon's PC133 SDRAM can currently only deliver 1.06GB/s of memory bandwidth, and even using DDR SDRAM, you're only going to increase that to 2.1GB/s, still only 57% of the total memory bandwidth we need for our next generation AMD based systems.
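The bandwidth budget above is simple arithmetic. As a rough sketch (the figures are the article's: a 266MHz effective FSB with an 8-byte data path, AGP 4X, and 64-bit/66MHz PCI, with MHz times bus width giving MB/s):

```python
# Rough sketch of the next-generation AMD platform bandwidth budget
# described above; all figures come from the article.

def mb_per_s(effective_mhz, bytes_wide):
    """Peak transfer rate in MB/s for a bus at an effective clock."""
    return effective_mhz * bytes_wide

fsb = mb_per_s(266, 8)        # ~2.1 GB/s, CPU <-> North Bridge (133MHz DDR)
agp_4x = 1066                 # ~1.06 GB/s for AGP 4X
pci_64_66 = mb_per_s(66, 8)   # 528 MB/s for 64-bit PCI at 66MHz

total = fsb + agp_4x + pci_64_66
print(round(total / 1000, 1), "GB/s")   # ~3.7 GB/s needed from memory
print(round(fsb / total * 100), "%")    # DDR SDRAM covers only ~57% of it
```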

The same thing applies to Intel.  The 440BX chipset with its PC100 SDRAM is in an even worse position since it only has an available 800MB/s of memory bandwidth.  The reason for using AMD as an example here was to prove that this is a real situation and not just some marketing propaganda from Intel to promote their chipsets.  AMD knows this too, which is why you don't see any mention of DDR SDRAM beyond their current 2000 roadmap: 2.1GB/s of memory bandwidth isn't enough for next year's computing platforms.



What is RDRAM?

Before we begin discussing RDRAM as a solution to this lack of memory bandwidth problem, let's talk about what exactly makes RDRAM different from SDRAM.

The technology behind RDRAM is really no different than that behind SDRAM.  According to Rambus, they design high speed interfaces, not a new memory technology.  In essence, that's what makes RDRAM what it is: the high speed interface that connects the RDRAM module to the outside world that is your computer. 

RDRAM doesn't change the way memory is addressed or any of the fundamentals of memory storage/retrieval; instead, it manages to speed up the actual process of transferring the data from one end (RDRAM) to the other (your chipset). 

The way RDRAM achieves this acceleration is by using a small interface present on the RDRAM chips themselves as well as the memory controller, which in the case of a PC is located on the North Bridge or Memory Controller Hub of the chipset. 

Using these Rambus interfaces, RDRAM is capable of transferring data at up to 400MHz DDR (or double pumped), meaning that the effective transfer rate is equal to that of an 800MHz channel. 

Since the current RDRAM implementation features a 16-bit (2-bytes) wide Rambus channel, we get a theoretical peak transfer rate of 800MHz x 2-bytes or 1600MB/s (1.6GB/s).
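That peak rate is just clock times width. A small sketch of the math (the `channels` parameter is included because, as discussed later in the article, Rambus channels can be ganged, e.g. two on the i840):

```python
# Peak bandwidth of a Rambus channel: 400MHz clock, double pumped,
# 16 bits (2 bytes) wide. Figures are from the article.

def rdram_bandwidth_mb(clock_mhz=400, channels=1):
    effective_mhz = clock_mhz * 2   # data transferred on both clock edges
    bytes_wide = 2                  # 16-bit Rambus channel
    return effective_mhz * bytes_wide * channels

print(rdram_bandwidth_mb())             # 1600 MB/s -> 1.6 GB/s (PC800)
print(rdram_bandwidth_mb(channels=2))   # 3200 MB/s -> 3.2 GB/s (dual channel)
```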

As you can guess, since this interface must be included on each RDRAM chip on a module, the die size of these chips is increased from what it would otherwise be without the high speed interface.

Fortunately, since the Rambus interface is the same size regardless of the capacity of the RDRAM chip it is integrated with, the percentage of the device's die the interface occupies decreases as the memory size increases. 

For example, while the interface occupied 15 – 30% of the die on the first 64Mbit (8MB) RDRAM chips, it accounts for only 10 – 20% of the die on the larger 128Mbit and 256Mbit chips. 

Because of this, the added die size isn't a significant contributor to RDRAM's current price premium over SDRAM; most of the RDRAM modules on the market today use 128Mbit and 256Mbit devices, where the percentage of the die occupied by the Rambus interface is minimal.



The other side of the Interface

So far we have yet to establish a relationship between Rambus and Intel, but it is time for that connection to be made. 

First and foremost, Intel does not own Rambus.  Rambus has been a publicly traded company since 1997 and has been around since 1990 as a privately held company.  Intel's relationship with Rambus, although a very public one, is limited to one specific thing: Intel is a Rambus partner that has a license to produce Rambus memory controller devices. 

Rambus technology is not new.  Long before the release, or even the announcement, of Intel's 820 chipset, Rambus products had made their way onto graphics adapters and even console gaming systems. 

As it stands, there are three generations of RDRAM, the first incarnation being Base RDRAM, the following being Concurrent RDRAM and the current type, Direct RDRAM.  If you take the first letter from all three of these RDRAM generations, you get the three generations of Rambus technology, B, C, and D – how cute.  So while Direct RDRAM just recently made its introduction (it features a wider bus and higher operating frequency than previous versions of RDRAM), the technology has been around for quite some time.

As we mentioned before, the high speed Rambus interface must be present both on the individual RDRAM chips themselves and on the memory controller; in this particular case, on the chipset.  Intel's partnership with Rambus exists so that Intel can license the technology required to include the Rambus memory interface in its i820 chipset and upcoming Tehama chipset.  Intel obviously doesn't have to pay as much for the license as some of the other partners since, according to Rambus, the two companies have a "special" relationship; beyond that, there is no real relationship between Rambus and Intel.  The "special" relationship comes into play partially because Intel has pledged the equivalent of undying support for Rambus and RDRAM in its current and future products; if you were in Rambus' position, you would do the same for the one company that has helped you so much in the past.

Who are the other "partners" that have a license to use the Rambus interface in their chipset designs? 

Well, other than Intel, there are Compaq, Hewlett Packard, National Semiconductor, Acer Laboratories, Inc. (ALi) and Advanced Micro Devices, otherwise known to us as the underdog just about everyone has been rooting for: AMD. 

As the current industry leader, Intel is taking a risk in supporting RDRAM: they are betting that there is a clear need for a higher bandwidth memory solution and that RDRAM is that very solution.  If it turns out that RDRAM is the solution we've all been waiting for, don't be surprised to see AMD supporting RDRAM shortly thereafter.  At the same time, since Intel is the one putting themselves on the line here, if RDRAM isn't all it's cracked up to be, AMD can just sit back and say "we told you so" without losing face. 



Is RDRAM the Solution?

Earlier we illustrated a situation in which we would need approximately 3.7GB/s of available memory bandwidth for the memory bus not to be a limiting factor in the performance of a system.  However, we didn't say that RDRAM as it currently exists solves the problem, because honestly, it doesn't.  As we said at the beginning of this article, keep an open mind; this isn't designed to bash one company or another, just an attempt to clear up some misconceptions.

The most popular form of RDRAM that we are familiar with now is what is known as PC800 RDRAM.  This naming system is unfortunately a bit misleading: current RDRAM has a maximum operating frequency of 400MHz but, as we mentioned earlier, operates in a double pumped fashion, meaning that twice as much data is transferred every clock cycle (à la DDR), which is where the PC800 name comes from (400MHz x 2 = 800MHz).  And since the Rambus channel is 2-bytes wide, we get an effective 1.6GB/s transfer rate for a single RDRAM channel.

Unfortunately, this is only 43% of the 3.7GB/s we calculated earlier, so RDRAM isn't the solution, right?  Not exactly.

Single channel RDRAM (1.6GB/s) offers less bandwidth than DDR SDRAM running at 133MHz DDR (2.1GB/s) and neither of those solutions offer the 3.7GB/s of memory bandwidth we decided was necessary to run the next-generation PC platforms. 

Without developing a brand-new memory technology between now and the end of the year (which is virtually impossible if you plan on shipping it anytime soon), there has to be a way to take one of the currently available memory technologies and manipulate it in such a way that it provides us with more memory bandwidth. 

You can do this one of two ways, either by increasing the operating frequency (or effective operating frequency) of the devices or by increasing the width of the memory bus. 

For example, in order to adapt SDRAM to the needs of the future, DDR SDRAM will be adopted in designs from AMD and VIA for the upcoming Mustang; otherwise, the performance of that platform would be severely hindered by a lack of memory bandwidth.  Since the Mustang will potentially have a large on-die L2 cache, the penalty for an L2 cache miss will be much greater in a system with a slow memory subsystem or a memory bus that is saturated with data requests from other memory masters.  The current Athlon gets around this problem fairly easily because of its off-die, high latency L2 cache running no faster than 350MHz, but once the Thunderbird hits, we'll begin to see a definite need for a faster memory bus. 

Since both RDRAM and DDR SDRAM are already double pumped, the technology for quad pumping (or QDR) the two memory technologies would have to be developed and implemented in order for the effective operating frequency of the DRAM types to be increased (simply increasing the actual clock speed isn't a viable option right now until manufacturing processes improve). 

This leaves the latter option, increasing the width of the memory bus, and this also brings about the reason why our current SDRAM implementation can't grow much further than DDR SDRAM.



Pin Count

The trend in the computer industry has been toward lower pin counts.  At this year's Spring IDF, one of the more interesting demonstrations was of the Serial ATA specification, which is destined to replace our current ATA specification, now referred to as Parallel ATA because of the way it transfers data: multiple bits simultaneously.  With multiple bits of data transferred in parallel, you have an enormous pin count, and with a large pin count it becomes difficult to maintain signal integrity as transfer rates increase.  Serial ATA will help reduce pin count by transferring data one bit at a time, but at much higher transfer rates. 

This idea of lowering pin count is mimicked in RDRAM as it transfers data serially, one bit at a time, instead of in a parallel fashion like SDRAM.  This, combined with the fact that a single RDRAM channel is only 16-bits wide versus 64-bits for a single SDRAM channel, makes implementing more than one RDRAM channel on a motherboard much easier than implementing more than one SDRAM channel. 

The higher your pin count, the more difficult it becomes to route the traces going to your memory banks and your memory controller.  Once you reach a certain point, you're going to have to start adding more layers to the motherboard's PCB, which increases the cost of the motherboard at a pretty hefty rate.  Some motherboard manufacturers, such as ABIT, won't even consider a design with more than 4 layers; chances are that if an 8-layer design were required to support dual SDRAM banks, they would opt for another solution. 

With RDRAM, you can combine two 16-bit RDRAM channels for a total of 3.2GB/s of memory bandwidth without increasing pin count dramatically.  In fact, i840 motherboards currently have this dual channel RDRAM system implemented, and although the demand for i840 isn't that great, it is a "living, breathing" example of dual Rambus channels at work.  Intel's upcoming Tehama chipset for use with the Willamette will also feature dual Rambus channels, and it shouldn't be much longer before we hear about a higher bandwidth implementation of RDRAM.


Notice the density of the traces present between the North Bridge and the RIMM slots in comparison to those on the above two examples



Is Sony Crazy?

The lower pin count of RDRAM is quite attractive to companies outside of the desktop PC realm, including Sony, who just recently started shipping their highly anticipated Playstation 2 game console systems outfitted with 32MB of PC800 RDRAM. 



Why did Sony choose RDRAM for the Playstation 2?  Did you think all the reasons we just mentioned in favor of RDRAM were just made-up facts?  ;)

The Playstation 2 needed quite a bit of memory bandwidth, without having to move to a larger memory size.  Using the same dual RDRAM channel architecture seen on Intel's 840 chipset, the Playstation 2 attains 3.2GB/s of memory bandwidth via two 16MB PC800 RDRAM devices operating on two independent 16-bit Rambus channels. 

With SDRAM, gaining the same bandwidth would require a minimum of 8 SDRAM devices (8 devices x 8-bits = 64-bit wide bus), which would take up much more room and result in a much larger memory size than Sony needed for the PS2. 
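The device-count comparison is just bus width divided by per-device width. A quick sketch, assuming x8 SDRAM chips as in the article's example:

```python
# How many DRAM devices are needed to fill a data bus of a given width,
# when each device contributes `bits_per_device` to the data path.

def devices_needed(bus_width_bits, bits_per_device):
    # Round up: a partially filled bus still needs a whole device.
    return -(-bus_width_bits // bits_per_device)

print(devices_needed(64, 8))    # 8 x8 SDRAM chips for a 64-bit bus
print(devices_needed(16, 16))   # 1 RDRAM device covers a 16-bit channel
```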

As you can see from the picture to the right, very few traces run from the Emotion Engine to the dual RDRAM banks, and the two 16MB RDRAM chips occupy a total of just 2.3 in^2 of board space.  According to Rambus, the RDRAM solution reduces pin count by 30 – 65%, depending on whether you compare it to DDR or SDR SDRAM.

 



Power & Heat

Whenever a new product is on the verge of release, there are always rumors about it overheating or requiring some insane form of cooling.  We saw this with the Voodoo3: it shipped without a fan, and although the chip didn't require one, 3dfx was often bashed for failing to include one.  The first thing you have to understand is that a good engineer will not allow a product to be released in a state where it could potentially overheat.  With that said, let's look at the power draw and heat dissipation characteristics of RDRAM.

Remember that RDRAM is a serial device: it transfers data one bit at a time rather than multiple bits in parallel, as SDRAM does.  Because of the way RDRAM works, only one device (RDRAM chip on a module) can be active at one time – by active we mean actively transferring data.  The rest of the devices on the module are in one of four power modes: Power Down, Nap, Standby or, eventually, Active, where reading/writing can take place. 

According to the PC100 specification, you're looking at about 900mW of power required per SDRAM chip (device) on a module.  All of the devices draw the same amount of power, meaning that they're all drawing 900mW.  This isn't the case for RDRAM.

On an RDRAM module, only one device is active at any given time.  Assume an RDRAM module with 8 devices: the active device will pull around 1165mW of power, while, depending on their state, the remaining devices each consume between 10mW (Nap) and 250mW (Standby) during normal operation.  Overall, an RDRAM module will draw less power than a PC100 SDRAM module. 
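As an illustrative comparison using the article's figures (the exact mix of Nap and Standby devices is an assumption for the sketch, not something the article specifies):

```python
# Illustrative module power draw for 8-device modules, using the
# article's per-device figures. The 3 Standby / 4 Nap split for the
# idle RDRAM devices is an assumed workload, not a measured one.

SDRAM_MW_PER_DEVICE = 900
RDRAM_MW = {"active": 1165, "standby": 250, "nap": 10}

sdram_module_mw = 8 * SDRAM_MW_PER_DEVICE   # every device draws full power

rdram_module_mw = (RDRAM_MW["active"]       # one device transferring data
                   + 3 * RDRAM_MW["standby"]
                   + 4 * RDRAM_MW["nap"])

print(sdram_module_mw)   # 7200 mW
print(rdram_module_mw)   # 1955 mW -- well under the PC100 SDRAM module
```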

But then what's the point of the blue heatsink present on RDRAM modules?

It's actually not a heatsink, but rather a heat spreader.  The heat spreader is bolted onto the RDRAM module after the RDRAM devices have all been individually tested (the module itself is never tested without the spreader), and it doesn't make direct physical contact with the RDRAM devices; instead, a thermal pad touches the devices on one side and the underside of the heat spreader on the other. 

What this heat spreader helps to do is essentially "spread" out the heat from the one concentrated source (the one active RDRAM device) across a larger surface area.  If you touch an RDRAM module during heavy operation, it never gets more than warm to the touch, so the rumors of RDRAM being hot enough to cook eggs on are seriously exaggerated (the Cyrix 6x86 was the only CPU worthy of that title ;)…). 



Mobile

Because of the flexibility offered by the four power modes (Power Down, Nap, Standby and Active), RDRAM actually makes sense for mobile use. 

Mobile RDRAM parts are ready, but there are currently no mobile chipsets with RDRAM support. 


SO-RIMM: Small Outline Rambus Inline Memory Module



Cost, Price & Royalties

Up to this point there hasn't really been a reason not to like Rambus and this "wonderful" RDRAM technology they've brought with them, but here's the real kicker.  When it comes down to money, people will often change their minds in the blink of an eye; it's the same reason the phrase "vote your pocketbook" came about in political elections.  Everyone has a breaking point, and for most users, paying $1000 for 128MB of RDRAM wasn't worth the added security of knowing that you had 1.6GB/s of available memory bandwidth.

When asked to explain the incredible price premium RDRAM held, and currently holds, over SDRAM despite little to no performance increase on current platforms, Rambus' primary explanation came down to one of the fundamental laws of supply and demand: there aren't enough RDRAM devices on the market, and the demand outweighs the minuscule supply that is out there. 

Whether or not you want to believe that is up to you, but Rambus did help put some persistent rumors to rest.  The biggest rumor was that yields on RDRAM parts had dropped to ridiculously low levels.  While we could not get an exact figure out of Rambus, we gathered that RDRAM yields are reasonably close to, though not nearly as high as, those of SDRAM, which has been around since before the days of the old 430TX chipset. 

The price of RDRAM will go down, but don't expect it to match SDRAM by year's end, although the price difference will definitely decrease.  That decrease may come not only from falling RDRAM prices, but potentially from SDRAM prices rising again. 

Currently, you can find a 128MB PC800 RIMM for under $600 if you shop around, but compared to the $100 a generic 128MB PC133 module costs, that's still quite pricey. 

The price will come down as more manufacturers begin shipping larger quantities of RDRAM devices.  NEC, Infineon, Samsung, Hyundai and Toshiba are in production now, but keep in mind that Toshiba is Sony's primary RDRAM supplier for the PS2, meaning that the millions of RDRAM chips Toshiba produces are going mostly to Sony.  Toshiba had to have 4 million 16MB RDRAM chips ready for the launch of the PS2, since Sony planned to have 2 million units ready at the launch of their system.  As production of RDRAM increases, the price will definitely go down, but the point where RDRAM is competitively priced with SDRAM is still quite some time away. 

When the Willamette hits later this year or early next year, RDRAM had better be affordable for Intel's sake, because a Tehama + MTH + SDRAM solution is going to severely cripple the performance of the Willamette. 

The final topic to discuss is the royalties that everyone seems to know everything about when it comes to RDRAM.  Rambus licenses its technology to the DRAM manufacturers and chipset manufacturers that use Rambus interfaces in their devices.  The royalties you've heard so much about are paid by those memory manufacturers and chipset manufacturers. 

The royalty is approximately 1 – 2% per RDRAM chip and 2 – 5% for the memory controller; in the end, this doesn't add up to a significant amount and is definitely not the cause of the price premium RDRAM holds over SDRAM. 



Summary

The conclusion we hoped to convey here is that there is a definite need for a higher bandwidth memory solution with a lower pin count than dual channel DDR SDRAM, and provided that the price does decrease, RDRAM could be that solution. 

We've pointed out that Intel isn't the only one supporting RDRAM and that AMD holds a license to their memory controller technology alongside Intel.  The reason Intel is taking all of the heat is mainly because RDRAM is so incredibly expensive and they are attempting to make a very sudden transition to the technology. 

At the current point in time, DDR SDRAM makes the most sense, since it will offer a much needed increase in memory bandwidth at a small price premium over regular SDRAM (the AMD Thunderbird is going to be hurting with only PC133 SDRAM; luckily, the Mustang will have a DDR SDRAM platform alongside it at launch). 

Further down the road, the transition to RDRAM will probably occur, but don't expect that to happen anytime soon, not in 2000 at least.  The memory manufacturers we've talked to don't see RDRAM as a major contender this year, but provided that prices do come down, there is nothing stopping RDRAM from becoming the next widely accepted memory technology, embraced by both Intel and AMD. 

If RDRAM doesn't catch on, the consequences could be severe for Intel, since they are the ones that have invested so much of their roadmap in this one technology. 
