
RAM FAQ 1.0




Introduction

We continue our series of "user's guides" devoted to theoretical and practical aspects of various components of a modern PC, a series opened by "Modern desktop x86 processors: general operation principles (x86 CPU FAQ 1.0)". This guide reviews the main modern types of memory used in desktop systems (memory for servers and notebooks is left beyond the scope of this article): SDRAM in its SDR (Single Data Rate), DDR (Double Data Rate), and DDR2 (second-generation DDR) varieties. Admittedly, SDRAM as such (in its original SDR SDRAM form) is rather outdated by now. Nevertheless, all three memory types belong to the same class and are based on the same operation principles, which we review below.

Table of contents

Part 1. Theoretical basics of modern memory

SDRAM: Definition

SDRAM stands for Synchronous Dynamic Random Access Memory. Let's dwell on each part of this definition. "Synchronous" traditionally means strict binding of the control signals and timing of memory operation to the system bus (FSB) clock. In fact, the original sense of the word "synchronous" has become somewhat relative. First of all, the memory bus frequency may differ from that of the system bus (for example, the asynchronous DDR SDRAM modes on AMD K7 platforms with VIA KT333/400 chipsets, where the FSB and memory bus frequencies may correlate as 133/166 or 166/200 MHz). Secondly, there currently exist systems where the very notion of a system bus becomes relative, namely AMD Athlon 64 platforms with a memory controller integrated into the processor. The system bus frequency on these platforms (meaning not the HyperTransport bus used for data exchange with peripheral devices, but the reference clock) is just a base frequency, which the processor multiplies by a specified coefficient to obtain its own clock. The memory controller in this case always operates at the same frequency as the processor, and the memory bus clock is derived from it with an integer divider, which need not match the system bus frequency multiplier. For example, DDR-333 mode on an AMD Athlon 64 3200+ processor corresponds to a system bus frequency multiplier of 10 (the processor and the memory controller run at 2000 MHz) and a memory frequency divider of 12 (a memory bus frequency of 166.7 MHz). Thus, synchronous SDRAM operation currently means strict binding of the timing of commands and data on the corresponding memory interfaces to the memory bus clock (to put it simply, all memory operations are executed strictly on the rising/falling edges of the clock signal on the memory interface). For example, commands can be sent and data read/written on each cycle of the memory bus (on the positive-going pulse, i.e. the rising edge of the clock signal; in case of DDR/DDR2, data can be transferred both on the rising edge and on the negative-going pulse, i.e. the falling edge), but not at arbitrary moments in time (as with asynchronous DRAM).
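
To make the arithmetic of this example easier to follow, here is a minimal sketch of the frequency derivation (the 200 MHz reference clock follows from the figures quoted above; all names in the code are ours and purely illustrative):

    #include <stdio.h>

    /* A minimal sketch of the Athlon 64 example in the text: the CPU clock
     * is the reference clock times the multiplier, and the memory bus clock
     * is the CPU clock divided by an integer divider. The 200 MHz reference
     * value follows from the figures quoted above. */
    int main(void)
    {
        const double reference_mhz  = 200.0; /* reference clock              */
        const int    cpu_multiplier = 10;    /* CPU frequency multiplier     */
        const int    mem_divider    = 12;    /* integer memory clock divider */

        double cpu_mhz = reference_mhz * cpu_multiplier; /* 2000 MHz            */
        double mem_mhz = cpu_mhz / mem_divider;          /* 166.7 MHz (DDR-333) */

        printf("CPU / memory controller clock: %.1f MHz\n", cpu_mhz);
        printf("Memory bus clock:              %.1f MHz\n", mem_mhz);
        return 0;
    }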

The notion of dynamic memory, DRAM, covers all memory types of this kind, from ancient asynchronous dynamic memory to modern DDR2. It is the antonym of static memory (SRAM) and means that the contents of each storage cell must be periodically refreshed (a consequence of a cell design dictated by economic reasons). Static memory, characterized by a more complex and more expensive storage cell design and used as cache memory in processors (it used to be installed on motherboards as well), is free of refresh cycles, as it is based on a flip-flop (a static element) instead of a capacitor (a dynamic element).

And finally, a few words about Random Access Memory, RAM. This notion is traditionally opposed to Read-Only Memory, ROM. Nevertheless, the opposition is not quite correct, as it may suggest that ROM is not random access memory. That's not true: ROM devices can be accessed in any order, not strictly sequentially. In fact, "random access" originally distinguished this memory from early memory types in which read/write operations could be executed only in sequential order. In this respect, the term RWM (Read-Write Memory) expresses the purpose and operation principles of this memory more accurately. Nevertheless, this abbreviation is used very seldom.

SDRAM chips: Physical organization and operation principles

The general principle of organization and operation of DRAM chips is practically the same for all types, both the early asynchronous ones and the modern synchronous ones. The only exceptions are exotic modifications that existed even before the introduction of SDRAM, like Direct Rambus DRAM (DRDRAM). A DRAM memory array can be considered a two-dimensional array of elements (strictly speaking, this notion belongs to the logical organization level of a memory chip, covered in the next section, but it is introduced here to make things clear), each containing one or several physical cells (depending on the chip configuration) that hold an elementary unit of information, one bit. Each cell is a combination of an access transistor (a gate) and a capacitor (the storage element). Array elements are accessed with row and column address decoders, controlled by the RAS# (Row Address Strobe) and CAS# (Column Address Strobe) signals.

To minimize the package size, row and column addresses are transmitted over the same address lines of the chip; in other words, the row and column addresses are multiplexed (here the above-mentioned difference of DRDRAM chips from "common" synchronous/asynchronous DRAM shows itself: in that memory type, row and column addresses are transmitted over separate physical interfaces). For example, a 22-bit full cell address may split into two 11-bit addresses (row and column), which are fed consecutively (at a certain time interval, see the Memory Timings section) to the address lines of a memory chip. The second part of the address (the column address) is transmitted together with the corresponding command (read or write) over the shared command-address interface of an SDRAM chip. The memory chip temporarily stores the row and column addresses in the row and column address buffers (latches).
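
As a rough illustration of address multiplexing, the sketch below splits a 22-bit full address into an 11-bit row part and an 11-bit column part, as in the example above (the particular split and the sample address are ours and purely illustrative; a real chip's split depends on its organization):

    #include <stdio.h>

    /* Illustrative split of a 22-bit full cell address into an 11-bit row
     * address and an 11-bit column address, which a DRAM chip receives one
     * after the other on the same multiplexed address lines. */
    #define COL_BITS 11u
    #define COL_MASK ((1u << COL_BITS) - 1u)

    int main(void)
    {
        unsigned full_addr = 0x2AC355;            /* some 22-bit address   */
        unsigned row = full_addr >> COL_BITS;     /* sent first, with RAS# */
        unsigned col = full_addr & COL_MASK;      /* sent later, with CAS# */

        printf("row address:    0x%03X\n", row);
        printf("column address: 0x%03X\n", col);
        return 0;
    }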

It's important to note that the dynamic memory array is connected to a special static buffer, the SenseAmp, whose size equals that of a single row. This buffer is required for reading and refreshing the data in storage cells. As the latter are actually capacitors that are discharged by every read operation, the SenseAmp must restore the data in the storage cells after the access cycle is completed (a detailed review of SenseAmp operation within a data read cycle is given below).

Besides, as the capacitors discharge over time (regardless of read operations), the contents of the storage cells must be periodically refreshed to prevent data loss. Modern memory types, which support automatic refresh (in the active state) and self refresh (in the sleep state), usually delegate this task to a refresh controller built right into the memory chip.

Here is an outline of storage cell access in the general case:

1. The row address is fed to the address lines of the memory chip, accompanied by the RAS# signal, which latches the address into the row address buffer (latch).

2. After the RAS# signal stabilizes, the row address decoder selects the required row and its contents are transferred into the SenseAmp (the logical state of the row is inverted).

3. The column address is fed to the address lines of the memory chip together with the CAS# signal, which latches the address into the column address buffer (latch).

4. The CAS# signal also serves as the data output enable signal, so once it stabilizes, the SenseAmp sends the selected data (corresponding to the column address) to the output buffer.

5. The CAS# and RAS# signals are deactivated one after another, so that the access cycle can be repeated (after the interval during which the data from the SenseAmp are written back to the storage-cell row, restoring its previous logical state).

That's the actual procedure for accessing a DRAM storage cell in its original form, implemented before the first chips/modules of asynchronous FPM (Fast Page Mode) DRAM. Nevertheless, you can easily notice that this procedure is not optimal. Indeed, if we need to read the contents of several adjacent storage cells (differing only in the column address, not in the row address) rather than a single cell, there is no need to resend the RAS# signal with the same row address (that is, to repeat Steps 1-2). Instead, it is enough to keep the RAS# signal active for the time span corresponding, for example, to four sequential read cycles (Steps 3-4, with CAS# deactivated after each one) and only then deactivate the RAS# signal. This procedure was used in asynchronous FPM DRAM and later in EDO (Extended Data Out) DRAM. The latter was notable for allowing the next column address to be fed in advance, which reduced read latencies.
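
The gain from keeping the row open can be pictured with a schematic cycle count (this is not a hardware model; the cycle costs are arbitrary placeholders chosen only to show the proportion):

    #include <stdio.h>

    /* Schematic comparison of the original access procedure, where every
     * read repeats Steps 1-5, with FPM-style page mode, where the row is
     * opened once and only the column part (Steps 3-4) is repeated. */
    int main(void)
    {
        const int row_cost = 3;  /* Steps 1-2: send row address, open row    */
        const int col_cost = 2;  /* Steps 3-4: send column address, get data */
        const int reads    = 4;  /* four adjacent cells in the same row      */

        int naive = reads * (row_cost + col_cost);  /* reopen row each time */
        int fpm   = row_cost + reads * col_cost;    /* open row only once   */

        printf("full cycle per access: %d cycles\n", naive);
        printf("page-mode (FPM):       %d cycles\n", fpm);
        return 0;
    }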

Modern SDRAM chips feature the same procedure for accessing storage cells. We shall review it in detail later, when we discuss memory access latencies (memory timings).

SDRAM chips: Logical organization

And now let's review the organization of SDRAM memory chips on the logical level. As already said, a DRAM chip is essentially a two-dimensional array of elements, each consisting of one or several elementary physical cells. Obviously, the main characteristic of this array is its capacity, expressed in the number of bits it can store. You may often come across notions like 256-Mbit or 512-Mbit memory chips; that is exactly this parameter. But a given capacity can be made up in different ways, and we are not speaking about the number of rows and columns, but about the size, or "capacity", of an individual element. The latter is directly connected with the number of data lines, that is, the width of the external data bus of the memory chip (though not necessarily with the width of its internal data bus; we'll see this below, when we review the differences between DDR/DDR2 SDRAM and "usual" SDRAM). The data bus width of the first memory chips was just 1 bit. The most popular memory chips now are 4-, 8-, and 16-bit (more rarely 32-bit) ones. Thus, a 512-Mbit memory chip can be made up of 128M (134 217 728) 4-bit elements, 64M (67 108 864) 8-bit elements, or 32M (33 554 432) 16-bit elements; the corresponding configurations are written as "128Mx4", "64Mx8", and "32Mx16". The first number is called the depth of a memory chip (a dimensionless quantity), the second its width (expressed in bits).
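
The depth/width bookkeeping from this paragraph can be reproduced in a few lines (a sketch of the arithmetic only, reproducing the 512-Mbit configurations listed above):

    #include <stdio.h>

    /* For a given chip capacity (in Mbit) and external data bus width,
     * depth = capacity / width. The loop reproduces the three 512-Mbit
     * configurations listed in the text. */
    int main(void)
    {
        const unsigned capacity_mbit = 512;
        const unsigned widths[] = { 4, 8, 16 };

        for (int i = 0; i < 3; i++) {
            unsigned depth_m = capacity_mbit / widths[i];   /* in "Meg" units */
            printf("%uMx%u  (%u elements of %u bits)\n",
                   depth_m, widths[i], depth_m * 1024u * 1024u, widths[i]);
        }
        return 0;
    }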

An essential difference of SDRAM from earlier DRAM types is that the data array is split into several logical banks (at least 2, usually 4). Don't confuse this with the notion of a physical bank (also called a memory rank), which is defined for a module, not for a memory chip; it will be described later on. For now, we'll just note that the external data bus of each logical bank (unlike that of a physical bank, which is made up of several memory chips to "fill" the data bus of the memory controller) has the same width as the external data bus of the memory chip on the whole (x4, x8, or x16). In other words, the logical division of the chip array into banks is done at the level of the number of elements in the array, not of the element width. Thus, the above real examples of the logical organization of a 512-Mbit chip, split into 4 banks, can be written as 32Mx4x4 banks, 16Mx8x4 banks, and 8Mx16x4 banks, correspondingly. Nevertheless, "full" capacity configurations, which do not take the division into logical banks into account, are used much more frequently in memory chip designations (or their expansions in technical documentation). A detailed description of a chip's organization (the number of banks, rows, and columns, the width of the external data bus of a bank) can be found only in detailed technical documentation on a given type of SDRAM chip.

Splitting SDRAM arrays into banks was introduced mainly for performance reasons (to be more exact, to minimize system latencies, that is, delays in the flow of data to the system). To put it simply, after any operation with a memory row and the subsequent RAS# deactivation, some time must pass for precharging. The advantage of multibank SDRAM chips lies in the opportunity to access a row in one bank while a row in another bank is being precharged. You can arrange data in memory and organize access to them so that the next data to be accessed reside in the second bank, already precharged and ready, while that access in turn is used to precharge the first bank, and so forth. This method of memory access is called Bank Interleave.
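
The idea can be sketched as a simple address-to-bank mapping: if consecutive rows of the address space are spread across the banks, a sequential access pattern keeps alternating banks, so the next bank can be precharged while the current one is accessed (the round-robin mapping below is purely illustrative; real controllers and chips use their own schemes):

    #include <stdio.h>

    /* Purely illustrative bank-interleave mapping: consecutive logical rows
     * are assigned to the four banks in round-robin fashion, so a sequential
     * access pattern never hits the same bank twice in a row and the next
     * bank can be precharged in advance. */
    #define N_BANKS 4u

    int main(void)
    {
        for (unsigned row = 0; row < 8; row++) {
            unsigned bank = row % N_BANKS;
            printf("logical row %u -> bank %u\n", row, bank);
        }
        return 0;
    }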

SDRAM modules: Organization

The key parameters of the logical organization of memory chips (capacity, depth, and width) also apply to SDRAM memory modules. The notion of module capacity (or size) is evident: it's the maximum amount of information a given module can hold. Theoretically, it could be expressed in bits as well, but the conventional consumer characteristic of a memory module is its capacity (size) expressed in bytes, or rather, considering modern memory capacities, in megabytes and even gigabytes.

Module width is the width of its data bus interface, which matches the data bus width of the memory controller. It is 64 bits for all modern types of SDRAM (SDR, DDR, and DDR2) memory controllers. Thus, all modern modules have a data bus interface width of "x64". How is the 64-bit data bus of a memory controller (the 64-bit interface of a memory module) matched with the typical width of the external data bus of memory chips, which is usually just 4, 8, or 16 bits? The answer is very simple: the data bus interface of a memory module is made up by simply merging the external data buses of the individual chips on the module. Filling the data bus of the memory controller in this way is called composing a physical bank of memory. Thus, composing one physical bank of a 64-bit SDRAM module requires sixteen x4 chips, eight x8 chips (the most popular variant), or four x16 chips.

The remaining parameter, module depth (the capacity of a memory module expressed in the number of words of a certain width), is calculated by simply dividing the full capacity of a module (in bits) by its width (the width of the external data bus, also in bits). For example, a typical 512-MB SDR/DDR/DDR2 SDRAM module has a depth of 512 MB * 8 (bits/byte) / 64 bits = 64M. Accordingly, width multiplied by depth gives the full capacity of a module and defines its organization, or geometry, which in this case is written as 64Mx64.
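
Putting the last two paragraphs together, here is a sketch of the module arithmetic: how many chips of a given width fill the 64-bit bus of one physical bank, and what depth a 512-MB module has (the calculation only, with no claim about any particular module):

    #include <stdio.h>

    /* Sketch of the module arithmetic from the text: chips per physical
     * bank = bus width / chip width; module depth = capacity in bits / bus
     * width = 512 * 8 Mbit / 64 bit = 64M, i.e. a 64Mx64 organization. */
    int main(void)
    {
        const unsigned bus_width_bits    = 64;
        const unsigned module_size_mbyte = 512;
        const unsigned chip_widths[]     = { 4, 8, 16 };

        for (int i = 0; i < 3; i++)
            printf("x%-2u chips: %u per physical bank\n",
                   chip_widths[i], bus_width_bits / chip_widths[i]);

        unsigned depth_m = module_size_mbyte * 8 / bus_width_bits;
        printf("module organization: %uMx%u\n", depth_m, bus_width_bits);
        return 0;
    }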

Getting back to physical banks in a memory module: in case of the rather "wide" x8 or x16 chips, more of them can actually be installed, corresponding to two physical banks instead of one (sixteen x8 chips or eight x16 chips). That's the difference between single-bank (single-rank) and dual-bank (dual-rank) modules. Dual-bank memory modules are most often represented by the configuration with sixteen x8 chips, where one physical bank (the first eight chips) is on the front side of the module and the second (the remaining eight chips) is on the back side. More than one physical bank in a memory module is not always an advantage, as it may require increased command interface latencies, reviewed further in the Memory timings section.

Memory modules: SPD chip

Even before the appearance of the first SDR SDRAM, JEDEC required that each memory module carry a small special ROM chip called Serial Presence Detect (SPD). This chip contains the main information about the type and configuration of the module and its timings (see the next section), which must be observed when executing operations at the memory chip level, as well as other information, including the manufacturer's JEDEC ID code, serial number, manufacturing date, etc. The latest revision of the SPD standard for DDR2 modules also includes data on temperature conditions of memory operation, which can be used to maintain an optimal temperature regime by controlling memory clocking (the relative duration of clock pulses), the so-called DRAM Throttle. Detailed information about the SPD chip and its contents is published in our article "SPD — Serial Presence Detect" and in a series of our memory module reviews.

Memory timings

Memory timings are an important category of memory chip/module characteristics — this notion is likely to be familiar to each PC user. The notion of timings is closely related to delays that accompany any operations with storage cells due to the finite operating speed of SDRAM devices, like any other integrated circuits. Delays that accompany memory access are called memory latencies.

In this section we shall consider where exactly these latencies appear during operations with the contents of memory chips and how they are related to the most important timing parameters. As this guide covers SDRAM (SDR, DDR, and DDR2) memory modules, we are going to analyze the typical procedure for accessing data stored in SDRAM chip cells. This section will also review a different category of timings, related to selecting a physical bank number when routing commands over the command interface of SDRAM memory modules (the so-called command interface latencies).

SDRAM Data Access Schema

1. Row activation

Before any operation with data stored in some bank of an SDRAM chip (READ or WRITE), the corresponding row in this bank must be activated. For this purpose, the ACTIVATE command is given to the chip together with the bank number (lines BA0-BA1 for a 4-bank chip) and the row address (address lines A0-A12; their actual number depends on the number of rows in a bank, which is 2^13 = 8192 in this case of a 512-Mbit SDRAM chip).

An activated row remains open (available) for subsequent access operations until the PRECHARGE command, which actually closes the row. The minimum period of row activity, from its activation to precharge, is determined by the Row Active Time (tRAS).

It's impossible to activate another row in the same bank while the previous row of this bank is open (since the SenseAmp, the data buffer the size of a single bank row described in the section "SDRAM chips: Physical organization and operation principles", is shared by all rows of a bank in an SDRAM chip). Thus, the minimum period of time between the activation of two different rows in the same bank is determined by the Row Cycle Time (tRC).

At the same time, having activated a row in a given bank, an SDRAM chip can activate a row in another bank (that's the above-mentioned advantage of multibank SDRAM) as early as the next cycle of the memory bus. Nevertheless, in actual fact SDRAM manufacturers usually deliberately introduce an additional delay, called Row-to-Row Delay (tRRD). The reasons for introducing this delay have nothing to do with the operation of the memory chips as such; they are purely electrical: row activation consumes quite a lot of power, so executing this command too frequently could lead to undesirable excessive electric loads.

2. Read/write data

The next timing parameter of memory operation arises because row activation itself takes some time. For this reason, the READ or WRITE command that follows ACTIVATE cannot be given on the very next cycle of the memory bus, but only after a certain interval, called the RAS#-to-CAS# Delay (tRCD).

So, after the tRCD interval, the READ command is given to the memory chip together with the number of the activated bank and a column address. SDRAM devices are intended for reading and writing data in burst mode: a single READ (WRITE) command results in reading/writing several elements, or words, in succession from/to the storage cells (the width of each word equals the width of the external data bus of the memory chip, for example 8 bits). The number of data elements read by a single READ command or written by a single WRITE command is called the Burst Length and usually amounts to 2, 4, or 8 elements (except for the exotic case of Full-Page Burst, where a special BURST TERMINATE command is needed to interrupt the extra-long burst). Note that DDR and DDR2 memory chips cannot have a Burst Length of less than 2 and 4 elements, correspondingly; the reason will be analyzed below, in connection with the differences between SDR/DDR/DDR2 SDRAM devices.

Let's get back to reading data. There are two types of read command: the usual READ command and Read with Auto-Precharge (RD+AP). The latter differs in that, once the burst data transfer over the data bus is completed, the PRECHARGE command is issued automatically, whereas in the first case the selected row remains open for further operations.

After the READ command, the first portion of data does not become available immediately, but after a delay of several memory bus cycles, when data read from SenseAmp are synchronized and transferred to the external pins of the chip. The delay between the read command and availability of data on the bus is the most important parameter, which is called CAS# Latency (tCL). The next chunks of data (in compliance with burst length) become available without any additional latencies at each next memory bus cycle (one element per cycle for SDR devices, two elements per cycle in case of DDR/DDR2 devices).

Data write operations are performed in a similar way. There are likewise two write command types: simple data write (WRITE) and Write with Auto-Precharge (WR+AP). In the same way, the WRITE/WR+AP command is fed to the memory chip together with a bank number and a column address, and data are also written in bursts. The differences from read operations are as follows. Firstly, the first chunk of data to be written must be placed on the data bus simultaneously with the WRITE/WR+AP command, bank number, and column address sent on the address bus; the next chunks, their number determined by the burst length, follow on each subsequent memory bus cycle. Secondly, instead of CAS# latency (tCL), the Write Recovery Time (tWR) becomes of primary importance here. It determines the minimum interval between receiving the last chunk of data to be written and the readiness of the row to be closed by the PRECHARGE command. If, instead of closing the row, data must be read from the same open row, another latency becomes important: the Write-to-Read Delay (tWTR).

3. Row precharge

The cycle of reading/writing data from/into memory rows, which in the general case can be called a row access cycle, ends with closing the open row by the PRECHARGE command (which can be done automatically, as part of RD+AP or WR+AP). Subsequent access to this memory bank becomes possible not immediately, but after an interval called the Row Precharge Time (tRP). It is during this interval that the precharge operation proper is carried out: the data held in the SenseAmp for all columns of the given row are written back into the memory row cells.

Correlations between timings

To conclude this section on memory access latencies, let's review the main correlations between the most important timing parameters, using a simple data read as an example. As written above, a burst read (of 2, 4, or 8 elements) in the general case requires the following operations:

1) ACTIVATE a row in a memory bank

2) Give the READ command

3) Read the data on the external data bus of a memory chip

4) Close the row with the PRECHARGE command (it can be done automatically, if RD+AP is used at the second step).

The time between the first and the second operation is the RAS#-to-CAS# delay (tRCD), between the second and the third the CAS# latency (tCL). The time between the third and the fourth operation depends on the burst length: in memory bus cycles, it equals the burst length (2, 4, or 8) divided by the number of data elements transferred over the external bus per cycle (1 for SDR devices, 2 for DDR/DDR2 devices). And finally, the time between the fourth operation and the next repetition of the first operation in the cycle is the Row Precharge Time (tRP).

At the same time, we have already seen above that the minimum row active time tRAS is actually the interval between the first and the fourth operations. Hence follows the first important correlation:

tRAS > tRCD + tCL,

that is, the minimum tRAS exceeds the sum of tRCD and tCL by the duration of the third operation, which is determined by the burst length. Consider the following example: DDR memory with tCL = 2 and tRCD = 2 (typical of high-speed DDR memory modules). With the minimum burst length of 2 (the DDR minimum), at least 4 memory bus cycles are needed for tRCD and tCL, plus one bus cycle to transfer the data packet. Thus, in this case the minimum tRAS equals 5. Transferring longer packets of four elements (BL = 4) increases this value to 6.

The second important correlation between timings follows from the fact that the duration of a full burst read cycle, from Step 1 to its next repetition, is exactly the Row Cycle Time, tRC. As the first three steps cannot take less time than tRAS and the last step takes no less than tRP, we get the following:

tRC = tRAS + tRP.

Note that some memory controllers (for example, the memory controller integrated into AMD64 processors) allow tRAS and tRC to be set independently, which may theoretically result in a violation of the above equation. Nevertheless, such a discrepancy is not important: it just means that tRAS or tRC will be automatically adjusted (upward) to satisfy the equation.
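
Both correlations can be checked with a few lines of arithmetic (a sketch in memory bus cycles, using the tCL/tRCD values from the example above; the tRP value is our own illustrative assumption):

    #include <stdio.h>

    /* Sketch of the two timing relations discussed above, in memory bus
     * cycles: minimum tRAS = tRCD + tCL + (burst length / elements per
     * cycle), and tRC = tRAS + tRP. tCL and tRCD match the DDR example in
     * the text; tRP = 2 is only an illustrative value. */
    int main(void)
    {
        const int tRCD = 2, tCL = 2, tRP = 2;
        const int per_cycle = 2;               /* DDR: 2 elements per cycle */

        for (int bl = 2; bl <= 8; bl *= 2) {
            int tRAS = tRCD + tCL + bl / per_cycle;
            int tRC  = tRAS + tRP;
            printf("BL=%d: min tRAS = %d, tRC = %d cycles\n", bl, tRAS, tRC);
        }
        return 0;
    }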

Command interface delays

A special group of timings, which has nothing to do with SDRAM data access proper, is the so-called command interface delays, or their inverse characteristic, the command rate. These delays arise at a lower level of the memory system's operation: not that of individual chips, but that of the physical banks composed of them. When a memory system initializes, each physical bank gets a chip select number, which identifies it in each subsequent request (since all banks share the same command/address and data buses). The more physical banks there are, the longer the delays in signal propagation (a direct consequence of longer signal paths), in address encoding/decoding, and in the addressing/control logic.

That's how command interface delays appear. They are best known on AMD Athlon 64 platforms with integrated DDR SDRAM memory controllers. Of course, this does not mean that command interface delays are characteristic of this platform only; it's just that this platform type, as a rule, has a "Command Rate: 1T/2T" BIOS setting, while other modern platforms (for example, Intel Pentium 4 with the Intel 915, 925, 945, 955, and 975 chipsets) lack command interface delay settings, which are most likely controlled automatically. Getting back to AMD Athlon 64: in 2T mode, every command (together with the corresponding address) takes two memory bus cycles to issue, which certainly affects performance, but can be justified from the point of view of memory stability. We shall review this issue in detail later (in the second, "practical" part of this guide, devoted to choosing SDRAM modules).

DDR/DDR2 SDRAM: Differences from SDR SDRAM

We have reviewed the organization and operation principles of SDR SDRAM devices. This section will review the main differences introduced by DDR and DDR2 SDRAM.

Let's start with DDR SDRAM. These devices are largely similar to SDR SDRAM chips: as a rule, both types have the same logical organization (for the same capacity), including the 4-bank organization of the memory array, and the same command-address interface. The fundamental differences between SDR and DDR lie in the organization of the logical layer of the data interface. The data interface of SDR SDRAM transfers data only on the positive-going (rising) edge of the clock signal; the internal frequency of the chips matches that of the external data bus, and the width of the internal SDR SDRAM data bus (from the storage cells to the I/O buffers) matches the width of the external data bus. The data interface of DDR (and DDR2) memory, by contrast, transfers data twice per data bus cycle: on the positive-going (rising) edge and on the negative-going (falling) edge of the clock signal.

This raises the question: how is the doubled transfer rate organized with respect to the memory bus frequency? Two solutions come to mind: either double the internal operating frequency of the memory chips (compared to the frequency of the external bus), or double the width of the internal data bus (compared to the width of the external bus). It would be naive to think that the DDR standard uses the first solution, although it's easy to make this mistake given the purely marketing approach to labeling DDR memory modules, which supposedly operate at double the rate (for example, DDR modules with a real bus frequency of 200 MHz are called DDR-400). In reality, the second solution is much simpler and more efficient in technological and economic terms, and that is what DDR SDRAM devices use. This architecture is called the 2n-prefetch architecture. Data access in it is done "pairwise": each single READ command sends two elements onto the external data bus (their width, as in SDR SDRAM, being equal to the width of the external data bus). Likewise, each WRITE command waits for two elements to arrive on the external data bus. This is exactly why the Burst Length (BL) cannot be less than 2 for data transfers in DDR SDRAM devices.

DDR2 SDRAM devices are a logical development of the 2n-prefetch architecture used in DDR SDRAM. As you might guess, the architecture of DDR2 SDRAM devices is called 4n-prefetch, and the internal data bus is four times (not two times) as wide as the external data bus. But this does not mean a further increase in the number of data elements transferred per cycle of the external data bus; otherwise such devices wouldn't be called Double Data Rate devices of the second generation. Instead, the further widening of the internal data bus allows the internal operating frequency of DDR2 SDRAM devices to be halved compared to that of DDR SDRAM chips with the same theoretical bandwidth. On one hand, the lower internal operating frequency, together with the reduction of the supply voltage from 2.5 V to 1.8 V (thanks to the new 90-nm process technology), noticeably reduces memory power consumption. On the other hand, the 4n-prefetch architecture of DDR2 chips allows the external data bus frequency to be twice as high as that of DDR chips with the same internal operating frequency. That's exactly what we see nowadays: standard DDR2-800 memory modules (400 MHz data bus) are currently rather popular on the memory market, while the last official DDR standard is limited to DDR-400 (200 MHz data bus).

You can get detailed information about DDR2 and its main differences from DDR in our article "DDR2 - a future replacement of DDR. Theoretical basics and the first low-level test results". For now, by analogy with DDR, let's just see how much data are read/written per access in DDR2 chips and what the minimal burst length is. Since DDR2 behaves outwardly just like the old DDR, we still have a double transfer rate per cycle of the external data bus; in other words, a read returns at least two data elements per external bus cycle (their width, as always, equal to that of the external data bus), and a write must supply the memory chip with at least two data elements per external bus cycle. At the same time, remember that the internal operating frequency of DDR2 chips is only half of the external interface frequency. Thus, there are two "external" cycles per each "internal" cycle of the memory chip, and each of them in turn transfers two data elements. Hence four data elements are read/written per "internal" cycle of the memory chip (hence the name, 4n-prefetch); that is, all operations inside the memory chip are carried out in 4-element chunks of data, so the minimal Burst Length (BL) must be 4. In fact, in the general case the minimal Burst Length always equals the prefetch depth: 2 for the 2n-prefetch of DDR, 4 for the 4n-prefetch of DDR2, and 8 for the 8n-prefetch of the future DDR3.
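
The relations between prefetch depth, minimum burst length, and the various clock rates discussed in the last two sections can be summed up schematically (a sketch based on the DDR-400 and DDR2-800 figures mentioned above; the DDR3 case is only implied by the generalization):

    #include <stdio.h>

    /* Schematic summary of the n-prefetch relations described above: the
     * minimum burst length equals the prefetch depth, the external bus
     * clock is (data rate / 2) because the interface is double-pumped, and
     * the memory core clock is (data rate / prefetch). DDR-400 and DDR2-800
     * are the examples from the text; the same formula would extend to an
     * 8n-prefetch DDR3 part. */
    struct gen { const char *name; int prefetch; int data_rate_mt; };

    int main(void)
    {
        const struct gen g[] = {
            { "DDR-400",  2, 400 },
            { "DDR2-800", 4, 800 },
        };

        for (int i = 0; i < 2; i++) {
            printf("%-8s: prefetch %dn, min BL %d, bus clock %d MHz, core clock %d MHz\n",
                   g[i].name, g[i].prefetch, g[i].prefetch,
                   g[i].data_rate_mt / 2, g[i].data_rate_mt / g[i].prefetch);
        }
        return 0;
    }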



Dmitri Besedin (dmitri_b@ixbt.com)
May 14, 2006


