Monday, March 9, 2009

RAM

DRAM Dynamic random access memory. Comes in 80, 70, or 60 nanosecond (ns) speeds. The lower the number, the faster the memory.
CDRAM Cached RAM (invented by Mitsubishi Electronics). It combines an SRAM cache with 4 or 16 MB of DRAM within a single chip. This onboard SRAM can be used as either a cache or a buffer and gives the RAM an approximate 15 ns access time.
EDRAM Enhanced DRAM (developed by Ramtron International Corp. of Colorado Springs). Like CDRAM, it also incorporates an on-chip SRAM cache.
EDO RAM Extended Data Out RAM is a form of DRAM that works by extending the time during which data can be read from memory. It provides 4 to 15 per cent greater performance than standard DRAM.
RDRAM Rambus DRAM (Toshiba and Samsung). It's similar to SDRAM, but faster, says Rambus.
SRAM Static RAM. Holds its contents for as long as power is supplied and doesn't need to be continually refreshed, unlike dynamic RAM.
SDRAM Synchronous DRAM (from Texas Instruments) has its timing synchronized to the system clock. Is about 10 per cent faster than EDO RAM.
DDR Double Data Rate SDRAM
DDR basically doubles the data transfer rate of standard SDRAM by transferring data on both the rising and falling edges of each clock cycle. DDR memory rated at 333MHz actually runs at 166MHz * 2 (DDR333 / PC2700), and 266MHz DDR runs at 133MHz * 2 (DDR266 / PC2100). DDR is a 2.5 volt technology that uses 184 pins in its DIMMs. It is physically incompatible with SDRAM, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a different technology.
Rambus DRAM (RDRAM)
Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole choice of memory for Intel's Pentium 4. RDRAM is a serial memory technology that arrived in three flavors: PC600, PC700, and PC800. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but higher latency. RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are currently at the top of the heap in memory throughput, especially when paired with PC1066 RDRAM memory.
DIMMs vs. RIMMs DRAM comes in two major form factors: DIMMs and RIMMs.
DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different and not physically compatible. SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts.
RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.

Video memory types:
SGRAM Synchronous graphics RAM is a form of DRAM for graphics controllers and printers. According to Fujitsu, it produces data bandwidth up to five times that of standard DRAM.
VRAM Video RAM. So-called "dual-ported" memory that allows the graphics processor to read from memory and redraw the screen simultaneously.
WRAM Window RAM (developed by Samsung Electronics) is both faster (50 percent performance increase) and less expensive than VRAM.
Memory Types
RAM chips used to be sold as individual chips, but today several RAM chips are soldered together onto a plug-in board called a module. One such RAM module is the SIMM (Single In-line Memory Module). SIMMs come in two basic designs: an older design with 30 connector pins and a newer design with 72 connector pins. The newest modules have 168 connector pins, but these are DIMMs (used for SDRAM) rather than SIMMs. Each computer is designed to use one of these module designs, and most SIMM-based machines use the 72-pin design.

SIMMs come in several different speeds. The most common speed is 70 nanoseconds (ns). The rule with RAM is that the lower (or smaller) the nanosecond number, the faster the RAM operates. Therefore, a 60 ns SIMM is faster than a 70 ns SIMM. The newer SDRAM has a speed of 10 ns, which is 6 times faster than the fastest 72-pin SIMMs. All Pentium II and most new Pentium computers incorporate SDRAM.

Several new memory technologies seek to close the gap between processor and RAM performance. The goal is to increase the chips' speed and widen the bandwidth with which they communicate with the processor. The players include double data rate SDRAM, or DDR SDRAM (also known as SDRAM II), SLDRAM, Direct RDRAM (aka Direct Rambus) and Concurrent RDRAM (aka Concurrent Rambus). Of these, Direct Rambus, endorsed by Intel, offers the greatest speed improvements, moving the peak bandwidth from SDRAM's 125MBps to an astounding 1.6GBps.
When you think about it, it's amazing how many different types of electronic memory you encounter in daily life. Many of them have become an integral part of our vocabulary:
RAM
ROM
Cache
Dynamic RAM
Static RAM
Flash memory
Memory sticks
Volatile memory
Virtual memory
Video memory
BIOS
SIMM
DIMM
EDO RAM
RAMBUS
DIP


Double Data Rate (DDR) SDRAM: DDR is rated by its speed or potential bandwidth (see the short calculation after this list):
PC1600 / DDR200 - 1.6 GB/s throughput, 200 MHz effective bus speed
PC2100 / DDR266 - 2.1 GB/s throughput, 266 MHz effective bus speed
PC2700 / DDR333 - 2.7 GB/s throughput, 333 MHz effective bus speed
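
Those module numbers are roughly the peak bandwidth in MB/s. Here is a minimal sketch (not from the original post) of where they come from, assuming a standard 64-bit (8-byte-wide) DDR DIMM: double the bus clock to get the transfer rate, then multiply by 8 bytes per transfer.

/* Sketch: how the PC1600 / PC2100 / PC2700 ratings are derived.
   Assumes a standard 64-bit (8-byte-wide) DDR DIMM; clocks are nominal. */
#include <stdio.h>

int main(void) {
    const char  *name[]      = { "PC1600/DDR200", "PC2100/DDR266", "PC2700/DDR333" };
    const double clock_mhz[] = { 100.0, 133.3, 166.7 };      /* real bus clock */

    for (int i = 0; i < 3; i++) {
        double data_rate = clock_mhz[i] * 2.0;               /* transfers on both clock edges */
        double peak_mbs  = data_rate * 8.0;                  /* 64-bit bus = 8 bytes per transfer */
        printf("%-15s %6.1f MHz clock, %5.0f MT/s, ~%4.0f MB/s (~%.1f GB/s)\n",
               name[i], clock_mhz[i], data_rate, peak_mbs, peak_mbs / 1000.0);
    }
    return 0;
}

Compiled with any C compiler, it prints roughly 1.6, 2.1, and 2.7 GB/s for the three grades, matching the ratings above (the marketed PC numbers are rounded versions of the MB/s figures).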

buffer (as a verb: to move data into a temporary storage area)

A temporary storage area, usually in RAM. The purpose of most buffers is to act as a holding area, enabling the CPU to manipulate data before transferring it to a device.

Because the processes of reading and writing data to a disk are relatively slow, many programs keep track of data changes in a buffer and then copy the buffer to a disk. For example, word processors employ a buffer to keep track of changes to files. Then when you save the file, the word processor updates the disk file with the contents of the buffer. This is much more efficient than accessing the file on the disk each time you make a change to the file.
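
A minimal sketch of that pattern in C (the file name, buffer size, and helper names are made up for illustration): edits pile up cheaply in an in-memory buffer, and only an explicit save writes the whole buffer to disk in one slow operation.

/* Sketch of an edit buffer: changes collect in RAM and hit the disk only
   when save_buffer() is called. File name and sizes are made up. */
#include <stdio.h>
#include <string.h>

#define BUF_CAP 4096

static char   buf[BUF_CAP];     /* in-memory working copy of the document */
static size_t buf_len = 0;

/* Append text to the buffer instead of touching the disk on every change. */
static void append_text(const char *text) {
    size_t n = strlen(text);
    if (buf_len + n > BUF_CAP) n = BUF_CAP - buf_len;   /* a real editor would grow the buffer */
    memcpy(buf + buf_len, text, n);
    buf_len += n;
}

/* One relatively slow disk write flushes everything accumulated so far. */
static int save_buffer(const char *path) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fwrite(buf, 1, buf_len, f);
    fclose(f);
    return 0;
}

int main(void) {
    append_text("Hello, ");                              /* many cheap in-memory changes... */
    append_text("buffered world.\n");
    return save_buffer("document.txt") == 0 ? 0 : 1;     /* ...one disk access at save time */
}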

Note that because your changes are initially stored in a buffer, not on the disk, all of them will be lost if the computer fails during an editing session. For this reason, it is a good idea to save your file periodically. Most word processors automatically save files at regular intervals.

Another common use of buffers is for printing documents. When you enter a PRINT command, the operating system copies your document to a print buffer (a free area in memory or on a disk) from which the printer can draw characters at its own pace. This frees the computer to perform other tasks while the printer is running in the background. Print buffering is called spooling.
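
Here is a toy, single-threaded illustration of the idea (not a real spooler; all names are invented): the PRINT command just copies the document into a spool buffer and returns, and the printer side later drains that buffer at its own pace. In a real operating system the draining happens in the background while you keep working.

/* Toy illustration of spooling: the document is copied into a spool buffer
   right away, and the "printer" later drains the buffer at its own pace. */
#include <stdio.h>
#include <string.h>

#define SPOOL_CAP 1024

static char   spool[SPOOL_CAP];
static size_t spool_len = 0;

/* The PRINT command: a fast memory copy, after which the application is free. */
static void print_command(const char *document) {
    spool_len = strlen(document);
    if (spool_len > SPOOL_CAP) spool_len = SPOOL_CAP;
    memcpy(spool, document, spool_len);
}

/* The printer side: pulls characters from the spool at its own (slow) rate. */
static void printer_drain(void) {
    for (size_t i = 0; i < spool_len; i++)
        putchar(spool[i]);          /* imagine a delay here for a slow printer */
    spool_len = 0;
}

int main(void) {
    print_command("Quarterly report\n");  /* returns almost immediately */
    /* ...the application could keep doing other work here... */
    printer_drain();                      /* the background spooler's job */
    return 0;
}
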
cache

Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching.

A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.

Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger.

Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.

When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
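
A minimal sketch of the check-the-cache-first pattern with a hit counter, so the hit rate can be measured. The "disk" here is just a simulated array and the cache is a tiny direct-mapped table; all sizes and names are invented for illustration.

/* Toy direct-mapped disk cache: before "reading from disk", check whether the
   block is already cached; count hits and misses to compute the hit rate. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   16
#define NUM_BLOCKS   64          /* simulated disk: 64 blocks */
#define CACHE_SLOTS  8           /* small cache kept in main memory */

static char disk[NUM_BLOCKS][BLOCK_SIZE];          /* stand-in for the slow disk */

static struct {
    int  block;                                    /* which disk block is cached, -1 = empty */
    char data[BLOCK_SIZE];
} cache[CACHE_SLOTS];

static int hits = 0, misses = 0;

/* Read one block, going to "disk" only on a cache miss. */
static const char *read_block(int block) {
    int slot = block % CACHE_SLOTS;                /* direct-mapped placement */
    if (cache[slot].block == block) {
        hits++;                                    /* cache hit: fast path */
    } else {
        misses++;                                  /* cache miss: slow disk access */
        memcpy(cache[slot].data, disk[block], BLOCK_SIZE);
        cache[slot].block = block;
    }
    return cache[slot].data;
}

int main(void) {
    for (int i = 0; i < CACHE_SLOTS; i++) cache[i].block = -1;

    /* Programs tend to reread the same data: revisit a small set of blocks. */
    int pattern[] = { 3, 5, 3, 3, 5, 7, 3, 5 };
    for (int i = 0; i < 8; i++) read_block(pattern[i]);

    printf("hits=%d misses=%d hit rate=%.0f%%\n",
           hits, misses, 100.0 * hits / (hits + misses));
    return 0;
}

Because the access pattern revisits the same few blocks, most reads are hits, which is exactly why caching pays off.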
