Original Link: https://www.anandtech.com/show/75

Computer Hardware Info Guide

by Anand Lal Shimpi on October 12, 1997 7:25 AM EST


BIOS, RAM, ROM, Capacity, Transistors, Capacitors, Throughput, Bandwidth, Transfer Rates...and more...these are all buzzwords you've heard used on this site, as well as others...but you've never gotten a complete explanation of what they mean. Well, here I am to provide you with some information. I'll try to add more information to this Guide weekly, each week concentrating on a different topic. This week, special thanks goes out to Avinash Baliga for writing 4, count 'em, 4 sections for this guide. Thanks Avinash!

 

Binary Number System
Without this precious system, there would be no computer hardware for me to test =) Computer Science revolves around the idea that every function, every idea, and every procedure can be expressed in single digits, 1s and 0s. The outcome of any given comparison is either True or False, 1 or 0. If you relate this to electricity and electrical engineering (not going to go into great depth here), your processor can only determine one of two things: whether or not there is any electrical current running over a tiny wire at any given point in time. It takes a LOT of these true-or-false comparisons to generate that pretty little graphical interface you see on your screen now, more than you can imagine.
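To make this concrete, here is a minimal sketch in Python of how a string of 1s and 0s maps to an ordinary decimal number (the function name is mine, just for illustration):

```python
# Each binary digit is a power of two: reading left to right,
# every new bit doubles the running value and adds the new digit.
def bits_to_int(bits):
    """Convert a string of 1s and 0s to its decimal value."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(bits_to_int("10010010"))  # 146
```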

 

 

BIOS & POST
You should be quite familiar with these terms. BIOS is an acronym for Basic Input/Output System: the low-level code containing all the information about your system necessary for basic input and output functions. Sometimes the BIOS Setup will be referred to as CMOS Setup; the acronym CMOS stands for Complementary Metal Oxide Semiconductor, the type of battery-backed memory chip in which your BIOS settings are stored. A term commonly associated with a system's BIOS is POST, or Power-On Self Test. POST is a series of tests which run before you ever see that wonderful "Starting Windows 95" screen. During POST, the system's key components are examined and quickly tested for possible defects or configuration problems. When a machine "won't POST", it has failed one or more of the initial tests made by the BIOS; an error is generated, signaling the hardware to either stop responding or produce an output for the user to diagnose and interpret.
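As a loose illustration (not how a real BIOS is written, and with invented component names), the POST idea is simply a checklist run in order, halting with a diagnostic on the first failure:

```python
# Toy model of POST: run each component check in sequence and
# report the first one that fails, otherwise declare success.
def power_on_self_test(components):
    """components: dict of name -> bool (did the check pass?)."""
    for name, ok in components.items():
        if not ok:
            return "POST failed: %s" % name
    return "POST passed"

print(power_on_self_test({"CPU": True, "RAM": True, "Video": True}))
```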

 

 

Bits & Bytes & Transfer Rates
Now I know you've used the terms bits and bytes sometime in your computer-using lifetime, most likely with some mega or giga prefixes. But have you ever actually understood what you were talking about? Let's simplify this: in computer science, a bit is the amount of space necessary to account for a single comparison's outcome. Confused? A bit is the amount of space needed to store one digit in a binary number system (see Binary Number System), a 1 or a 0. A byte, then, is a combination of 8 bits, or 8 1's and 0's (i.e. 10010010). The megabyte is the most misunderstood measurement of storage. A kilobyte is in fact 2^10 bytes, or 1024 bytes. In that case, a kilobit (not a kilobyte) is 1/8 of a kilobyte (8 bits in a byte), or 128 bytes. A megabyte is 2^10 kilobytes, or 1024 kilobytes (KB); in turn, a megabit is equal to 1/8 of a megabyte, or 128 kilobytes (KB). As you might expect, a gigabyte is equal to 2^10 megabytes, or 1024 megabytes, and a gigabit is 1/8 of a gigabyte, or 128 megabytes (MB). I know you've heard modems referred to by their maximum transfer rates, i.e. 28.8K, 33.6K, 56K, etc. Let's take a 33.6K modem for example, sometimes referred to as a 33,600 bps modem. The 33.6K means the modem can transfer 33.6 kilobits, not kilobytes, per second. Many people have made that mistake, and wonder why their transfer rates are so slow. Well, that's why: your 33.6K modem is transmitting and receiving data at 33.6 kilobits per second, or about 34,406 bits per second. Divide that number by 8 and you get a maximum transfer rate of around 4300 bytes per second, which translates into 4.2 kilobytes per second (KB/s). You also have to factor in net traffic and the current limitations of our analog phone lines, but that is basically the deal behind bits, bytes, and transfer rates.
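The modem arithmetic above can be reproduced directly. Note this follows the 1 kilobit = 1024 bits convention used in this section; modem specs usually count 1000 bits per kilobit, so treat it as a sketch:

```python
# Reproduce the 33.6K modem math using 1K = 1024.
KILO = 1024
kilobits_per_sec = 33.6
bits_per_sec = kilobits_per_sec * KILO   # ~34,406 bits/s
bytes_per_sec = bits_per_sec / 8         # ~4300 bytes/s
kb_per_sec = bytes_per_sec / KILO        # ~4.2 KB/s
print(round(bits_per_sec), round(bytes_per_sec), round(kb_per_sec, 1))
```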

 

 

Bus
The bus is like a set of tiny wires on your motherboard. They allow the processor to access RAM and to interact with other devices, such as a graphics card, printer, sound card, hard-drive, and anything else that's in your computer. It used to be that the bus operated at speeds equivalent to the processor (a 33MHz 80386 had a 33MHz bus), but this quickly changed after processors got to 50MHz (only a 33MHz bus speed back then). Nowadays, with processors at 200+MHz, there's no way that the bus can even come close to the processor's speed. This means that if the processor is accessing memory or an external device (on the motherboard), the time it takes will be significantly longer than if it simply accessed its own internal hardware (registers, L1 cache). The bus is of an ingenious design where the processor sends a signal to all devices; the component that is supposed to receive the message responds, and the rest ignore it. The bus is sort of like a megaphone that the processor shouts commands through, and only the individual component the processor is calling obeys its command.
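The "megaphone" idea can be sketched as a toy model, with invented device addresses: every device sees each broadcast, and only the one whose address matches acts on it:

```python
# Toy bus: the processor broadcasts (address, command) to everyone;
# only the device at the matching address responds.
def broadcast(devices, address, command):
    """devices: dict of bus address -> device name."""
    responses = []
    for dev_address, name in devices.items():
        if dev_address == address:   # only the addressed device obeys
            responses.append("%s executes %s" % (name, command))
        # every other device simply ignores the message
    return responses

devices = {0x10: "RAM", 0x20: "video card", 0x30: "hard drive"}
print(broadcast(devices, 0x20, "READ"))  # ['video card executes READ']
```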

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage

 

 

Capacitors
You've heard everyone make the claim that larger capacitors, and more of them, make motherboards run more stably...but you've never been told why. Let's discuss, first of all, what a capacitor is. A capacitor is simply 2 conductive plates separated by an insulating material. A capacitor's main function is to store an electrical charge. The insulating material separating the 2 conductive plates blocks current from flowing directly between them, which is what allows the plates to hold their charge; in this application the capacitor acts as a sort of regulator, soaking up and releasing charge to smooth the supply. So what makes more capacitors better? If used properly, more capacitors can allow motherboard manufacturers to more accurately and more reliably control the voltages being supplied to the CPU, therefore increasing stability, especially during times when the voltage levels required by the CPU become critical (i.e. when overclocking!). It is because of this that motherboards with larger, better quality, and therefore longer lasting capacitors are much more reliable and stable at higher bus speeds (i.e. AOpen AX5T or Megatrends HX83). In the case of some motherboards, superb design and engineering makes it possible to overclock some processors (like the AMD K6) without having to increase the CPU's voltage. Ever wondered why on some motherboards, in order to overclock a Pentium 133 to 166 or a K6-166 to 225, you need to increase the voltage supplied to the CPU, while on other boards with the exact same chip it isn't necessary? It is partly due to the engineering quality of the motherboard: was the board manufactured using high quality capacitors with stability in mind? If the answer is no, then in some cases you will only receive stable performance at overclocked speeds by increasing the processor's core voltage. This is not true in all situations; the placement and application of the capacitors determine their effect.
Just because one motherboard uses 20 capacitors and another uses 12 doesn't mean that the latter motherboard will be less stable or less reliable. The rule of thumb is that the more capacitors near key components or ICs (see IC - Integrated Circuit), such as the CPU socket, voltage regulators, even expansion slots, the better the application of the capacitors. There are a few types of capacitors commonly used on motherboards to look for when choosing one. Although most Tantalum capacitors are sufficient for normal, and even overclocked, operation, the big bad Sanyo capacitors are what you need for a rock solid motherboard. Companies that use Sanyo capacitors include AOpen, ASUS and Megatrends, just to name a few. And it is because of their excellent engineering and design, as well as proper use of high quality capacitors, that their motherboards are among the best and most reliable.
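A rough sketch of why capacitance helps: a decoupling capacitor supplies charge during a brief current spike, and the resulting voltage droop is dV = I x dt / C, so a larger capacitor droops less. The numbers below are invented round figures, not real motherboard specs:

```python
# Voltage droop when a capacitor alone must supply a current spike:
# dV = I * dt / C (larger C means a smaller, more stable droop).
def voltage_droop(current_amps, duration_s, capacitance_farads):
    return current_amps * duration_s / capacitance_farads

# A 5A spike lasting 1 microsecond, served by two capacitor sizes:
small_cap = voltage_droop(5.0, 1e-6, 100e-6)   # 100 uF capacitor
large_cap = voltage_droop(5.0, 1e-6, 1000e-6)  # 1000 uF capacitor
print(small_cap, large_cap)  # the larger capacitor droops 10x less
```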

 

 

Caches
There are currently two types of caches, L1 (level 1) and L2 (guess...). The L1 cache is built into the processor, so there are no bus-lines to go through; the L2 cache is an external piece, although I understand that on Pentium IIs it's attached to the plug-in card. The cache is a layer between system RAM and your processor. Think of it like a salesperson in a department store: if there is a model on the shelf then it will be handed to you with little hassle, but if the shelf is empty then they will have to go to the storage area. The cache keeps track of the most recently used memory addresses (locations) and the values they contain. When the processor requests an access to memory (trust me, it will), the cache is checked to see whether that address and its value are already present. If they are not, the cache will load that value from memory, replacing some previously cached memory address with this new one (if necessary). If a program is not optimized to take advantage of the cache's ability to make repeated accesses to the same address fast, then severe performance hits result. For instance, to speed itself up, the L1 cache on the Pentium loads strips of 32 bytes at a time from the L2 cache. If the address the program is looking for is not in the L1 cache, approximately 75 nanoseconds will be wasted on a P200. If this value is not in the L2 cache either, then an additional 1.5 to 2.5 microseconds will be wasted in "cache thrashing". All this for one add instruction that uses a seldom-used RAM location! Think about the number of adds, subtracts, and moves that are done in code. Microseconds are certainly an insignificant measure of time for you and me, but now think about whether your 50ns EDO RAM or your 10ns SDRAM is performing up to par! I hope I have proved my point.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
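The lookup-and-evict behavior described in this section can be sketched as a toy cache. Real caches work on 32-byte lines with associativity; this model keeps whole addresses and evicts the oldest entry on a miss:

```python
from collections import OrderedDict

# Toy cache: recently used addresses are served fast (hits); an
# unknown address is fetched from simulated RAM (a miss), evicting
# the oldest cached entry when the cache is full.
class ToyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # address -> value
        self.hits = self.misses = 0

    def read(self, ram, address):
        if address in self.entries:
            self.hits += 1
            return self.entries[address]
        self.misses += 1
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict oldest entry
        self.entries[address] = ram[address]
        return ram[address]

ram = {addr: addr * 2 for addr in range(8)}   # pretend RAM contents
cache = ToyCache(capacity=2)
for addr in [0, 1, 0, 1, 5]:                  # repeated addresses pay off
    cache.read(ram, addr)
print(cache.hits, cache.misses)  # 2 hits, 3 misses
```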

 

 



 


 

 

Cache Memory & Cacheable Areas
Cache memory, you've heard of it, and you're using it constantly...but why? Cache memory is merely RAM that can be accessed at ultra fast speeds, much faster than your system RAM. Your cache memory can cache a certain amount of your system RAM at those ultra fast speeds, therefore making retrieval and storage of commonly used, or cached, data very fast. So, why is it that you experience degraded system performance when using more RAM than you have in your cacheable area? Well, consider your cacheable area the number of customers you can serve at once: when you have more customers (more RAM in use) than you can serve at a time (more RAM than you can cache), you experience delays or slowdowns. If you have 128MB of RAM, for example, in a system that can only cache 64MB, there is still 64MB remaining that cannot be accessed as fast as the other 64MB. Therefore you take a small performance hit when using that uncached RAM.
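A back-of-the-envelope sketch of the slowdown: if only half your RAM is cacheable, accesses landing in the uncached half pay the slower latency. The latency figures below are invented round numbers, just for illustration:

```python
# Average memory access time when only part of RAM is cacheable,
# assuming accesses are spread evenly across all of RAM.
def average_access_ns(total_mb, cacheable_mb, cached_ns, uncached_ns):
    cached_fraction = min(cacheable_mb, total_mb) / float(total_mb)
    return cached_fraction * cached_ns + (1 - cached_fraction) * uncached_ns

print(average_access_ns(64, 64, 20, 60))    # all RAM cacheable: 20.0 ns
print(average_access_ns(128, 64, 20, 60))   # half uncacheable: 40.0 ns
```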

 

 

Disks
A disk-drive is something that comes standard with almost every computer these days; it could be a hard-disk (also called a fixed-disk) or it could be a floppy or ZIP drive (removable media drive). These are called disks because inside the protective plastic covers are flat magnetic circles. Inside more recent hard-disks is not just one magnetic disk, but many, stacked one on top of another. 3 1/2" floppy disks are very low capacity, but a 2GB hard-disk has a huge capacity, at least compared to your RAM. Because the sizes of hard-disks are simply so overwhelming, instead of containing just linear addresses (like RAM) they are broken into sectors. The sector is the base unit of data on a hard-drive, just as a byte is the base unit of data on a microprocessor. A sector is 512 bytes. Since a sector is still quite small (only half a kilobyte), disks are broken up into tracks as well. Tracks can be thought of as concentric circles on the disk, each one containing the same number of sectors (even though the outer tracks are physically longer than the inner ones). When a disk is formatted, it is broken up into these tracks and made so that each track contains the same number of sectors. On larger hard-disks with multiple stacked platters, the number of tracks on one of those platters is also the number of cylinders on the drive. So a cylinder is simply a track, but for a multi-disked hard-drive.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
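The sector arithmetic above is easy to check: capacity is cylinders x heads x sectors-per-track x 512 bytes. The geometry below is a made-up example, not any specific drive:

```python
# Classic cylinder/head/sector (CHS) capacity calculation.
SECTOR_BYTES = 512

def disk_capacity_bytes(cylinders, heads, sectors_per_track):
    return cylinders * heads * sectors_per_track * SECTOR_BYTES

capacity = disk_capacity_bytes(1024, 64, 63)
print(capacity, capacity / (1024.0 ** 3))  # ~2GB drive
```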

 

 

Firmware
I like to think of firmware as a mix between hardware and software. Firmware usually refers to electronic units (hardware) whose contents can be modified by a separate medium (software). For example, your system BIOS lives in a hardware chip, yet it CAN be modified by software, since you can configure the settings contained in it via your BIOS Setup utility.

 

 

Hardware
Hardware is basically the physical equipment used with computers, such as motherboards, peripheral cards, microprocessors, etc...

 

 

IC - Integrated Circuit
Here's another buzzword you must've heard at least once when talking about computer hardware (see Capacitors): an IC, or Integrated Circuit. IC is a short, sometimes fancy, term for an electronic unit composed of a group of transistors (transistors will be explained later) as well as other circuit elements on, in most cases, a silicon wafer or chip. An example of an IC would be a microprocessor, like the Intel Pentium(TM); although a microprocessor is a complex example of an IC, it is an IC nevertheless. Many of the components you find on motherboards and peripheral cards (video cards, sound cards, etc.) are composed of many ICs working cooperatively with each other.

 

 

Pipeline
Have you noticed that Intel is always boasting "Our Pentium series chips have a dual-pipeline that makes your programs run twice as fast!"? Well, I'll explain how a pipeline works, then I'll explain why the above "bold" statement is untrue. The processor has a set of instructions that it understands, like moving values into registers and adding. There are five steps involved in the execution of each instruction (a little walk on the techie side):

FETCH the instruction's code.
DECODE what instruction it means.
CALCULATE what memory is going to be used.
EXECUTE the specified operation and store the results internally.
WRITEBACK the results to the memory or registers specified.

In older processors, each instruction was laboriously executed one at a time. However, on more modern processors (486 and above) the pipeline was introduced. A pipeline is like a miniature assembly line: as the first instruction is at the DECODE (2nd) stage, the second instruction is being FETCHED (1st). Then as the first instruction is at the CALCULATE (3rd) stage, the second instruction will be at the DECODE (2nd) stage, and a third instruction will be FETCHED (1st). This allows for much faster execution, with only minor hitches when slow instructions (like multiply) are coded. A dual pipeline is simply two pipelines, but it's not as great as it sounds. First of all, code must be executed in the order that it is presented (i.e. it's linear), so if two pipelines are present then the instruction in the U-pipe (1st pipeline) must be done before the instruction in the V-pipe (2nd pipeline). Additionally, the V-pipe can handle a pathetically small number of operations (not the full set, as could be inferred from the advertising). To compound this, only a certain set of instructions can be "paired" (instruction A goes in the U-pipe, instruction B goes in the V-pipe). This means that unless the program has been specifically optimized for this particular processor, the difference that the V-pipe makes is insignificant. Trust me, you won't get too "lucky" with your pairings; there are way too many rules. So the next time somebody starts talking about a dual pipeline, you can say "So what...!"

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
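The five-stage assembly line can be sketched as a schedule: at each clock tick, every instruction in flight advances one stage. This models an ideal single pipeline with no stalls or pairing rules:

```python
# Ideal 5-stage pipeline: instruction i enters stage s at tick i+s,
# so several instructions overlap instead of running one at a time.
STAGES = ["FETCH", "DECODE", "CALCULATE", "EXECUTE", "WRITEBACK"]

def pipeline_schedule(num_instructions):
    """Return {tick: [(instruction, stage), ...]} for an ideal pipeline."""
    schedule = {}
    for instr in range(num_instructions):
        for stage_index, stage in enumerate(STAGES):
            tick = instr + stage_index
            schedule.setdefault(tick, []).append((instr, stage))
    return schedule

sched = pipeline_schedule(3)
print(sched[2])  # [(0, 'CALCULATE'), (1, 'DECODE'), (2, 'FETCH')]
# 3 instructions finish by tick 6 (7 ticks total) instead of 15 serially.
```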

 

 

RAM
Memory in your computer is the RAM (random-access memory, for lack of a better term), not your hard-drive space. Memory locations are called addresses and are numbered starting with zero, increasing by one for each byte of RAM until the end of memory is reached. So 4MB of RAM contains over 4 million addresses (that's a lot)! And although the mass media may exclaim that RAM is super fast, you must realize that they lie! If you've noticed, memory is attached to your motherboard, not your processor. This means that the processor must use bus lines to access memory. But not to fear: there are special pins on the processor for this purpose (it's designed to use memory) and there are special bus lines (think of them as embedded wires) for CPU/memory interaction. The fact that memory must be accessed through your bus means that memory access is only as fast as your bus speed. So if you've got a 200MHz Pentium, odds are that your bus is running at 66MHz. This means that if there were no caches whatsoever, then an add instruction which takes one clock cycle would take 15 nanoseconds to access one piece of memory (in optimal conditions) versus 5 nanoseconds to access a register. But all modern systems have caches, and these play a role in determining memory access speed.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage

See the RAM Guide for More Information
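The bus-speed arithmetic above can be checked directly, since a clock period is just the reciprocal of the frequency:

```python
# Clock period in nanoseconds for a given frequency in MHz:
# period = 1 / f, and 1 / (1 MHz) = 1000 ns.
def clock_period_ns(frequency_mhz):
    return 1000.0 / frequency_mhz

print(round(clock_period_ns(66), 1))   # ~15.2 ns per 66MHz bus cycle
print(clock_period_ns(200))            # 5.0 ns per 200MHz CPU cycle
```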

 

 

Registers

The microprocessor is a very complex beast, but programming it can be simple. The microprocessors I will discuss are only x86 processors, but all processors work in a similar style. Because RAM is separate from the processor, the processor has a small number of extremely fast internal memory locations called registers. When a processor is referred to as a 32-bit processor, that denotes that its main registers are 32 bits long (I highlighted "main" because the Pentium has 64-bit internal registers, but those don't count). The main CPU (central processing unit) is the integer unit, which decodes and executes all instructions dealing with simple integer operations (add, subtract, multiply, and divide). When a profiling program like WinBench gives the MIPS of a processor, that's how many millions of instructions the CPU can execute per second! The FPU (floating-point unit) has been a hot topic recently. Floating point numbers are real numbers (they contain a decimal point). FPUs have a separate set of registers from the CPU. Because of the complexity of FPUs, they tend to be slower. Additionally, the registers on an x86 FPU are 80 bits long, and more bits means slower operation. Because accessing registers takes nanoseconds (billionths of a second) versus memory access, which can take microseconds (millionths of a second), fast code is code that keeps memory accesses to a real minimum.

Courtesy of Avinash Baliga of Vzzrzzn's Programming Homepage
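A rough sketch of why register-heavy code wins, using the ballpark figures from the text (a few nanoseconds per register access versus microseconds for a worst-case memory access):

```python
# Total time for N accesses at a given per-access cost.
def total_time_us(num_ops, access_ns):
    return num_ops * access_ns / 1000.0   # nanoseconds -> microseconds

ops = 1000000
print(total_time_us(ops, 5))      # register speed (5ns each): 5000.0 us
print(total_time_us(ops, 2000))   # worst-case memory (2us each): 2000000.0 us
```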

 

 

Software
Software, unlike Hardware, consists of all the programs, applications, functions, etc. necessary to make a computer perform specific productive functions and routines.

 

 

Voltage Regulators
You've heard me, time and time again, refer to the type of voltage regulators and their heatsinks used on motherboards. But what exactly do voltage regulators do...and what is the difference between a passive and a switching voltage regulator? A voltage regulator takes the electrical current from your case's power supply and regulates it down to the voltage necessary for your motherboard and, most importantly, your CPU to operate properly. In some cases a more advanced voltage regulator is necessary to provide the current to the CPU as well as the motherboard. For example, the Pentium MMX's split voltage specification (sometimes referred to as dual voltage) dictates that the I/O voltage (current to the rest of the motherboard) must be at or around 3.3 volts while the core voltage (current to the CPU) must be at or around 2.8 volts; therefore it requires a voltage regulator capable of providing two independent voltages, 3.3v I/O and 2.8v core. On most newer motherboards you find dual voltage regulators, or split rail voltage regulators, capable of providing two independent voltage settings concurrently. Then what is the difference between a passive (or linear) voltage regulator and a switching voltage regulator? It is my understanding that switching voltage regulators sustain a steady current more effectively than passive voltage regulators, and therefore make up for some shortcomings in your case's power supply or small flaws in your motherboard's design.
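A sketch of why linear regulators need those heatsinks: a linear regulator dropping the supply down to the core voltage dissipates P = (Vin - Vout) x I as heat, while an ideal switching regulator wastes far less. The 6A current draw below is an invented round number, not a measured spec:

```python
# Heat dissipated by an ideal linear regulator: the full voltage
# drop times the load current is burned off as heat.
def linear_regulator_waste_watts(v_in, v_out, current_amps):
    return (v_in - v_out) * current_amps

io_volts, core_volts = 3.3, 2.8    # Pentium MMX split voltages
waste = linear_regulator_waste_watts(5.0, core_volts, 6.0)
print(round(waste, 1))  # 13.2 watts of heat at 6A from a 5V input
```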

 
