Unit VI: Memory Organization - Computer Architecture - BCA Notes (Pokhara University)

Wednesday, May 20, 2020

Memory Hierarchy:

To achieve the greatest performance, the memory must be able to keep up with the processor. At the same time, the designer would like to use memory technologies that provide large capacity, because large capacity is needed to hold programs and data. Since no single technology is both fast and cheap at large capacity, memory is organized as a hierarchy.
[Figure: A typical memory hierarchy]
A typical hierarchy is shown in the figure above. As we go down the hierarchy, the following occur:
a. Decreasing cost per bit.
b. Increasing capacity.
c. Increasing access time.
d. Decreasing frequency of access to the memory by the processor.

Thus, smaller, more expensive, faster memory is supplemented by larger, cheaper, slower memory. The key to the success of this organization is the decreasing frequency of access at the lower levels.
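To see why this works, consider a hedged two-level example in Python (the 10 ns and 100 ns timings and the hit ratios are illustrative assumptions, not figures from the text):

```python
# Average access time for a two-level memory hierarchy.
# t1: fast, small memory; t2: slow, large memory (assumed values, in ns).
t1, t2 = 10, 100

def average_access_time(h):
    # An access that misses the fast level must also access the slow level.
    return h * t1 + (1 - h) * (t1 + t2)

for h in (0.5, 0.9, 0.99):
    print(f"fraction found in fast memory {h:.2f} -> {average_access_time(h):.1f} ns")
# At h = 0.99 the average is about 11 ns: the hierarchy behaves almost like
# the fast memory while offering the capacity of the slow one.
```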

Semiconductor Main Memory:

The basic element of semiconductor memory is the memory cell. Although a variety of electronic technologies are used, all semiconductor memory cells share certain properties:
a. They exhibit two stable (or semistable) states, which can be used to represent binary 1 and 0.
b. They are capable of being written into (at least once), to set the state.
c. They are capable of being read to sense the state.

[Figure: Operation of a memory cell]

The figure above shows the operation of a memory cell. Most commonly, the cell has three functional terminals capable of carrying an electrical signal. The select terminal selects a memory cell for a read or write operation. The control terminal indicates read or write. For writing, the third terminal provides an electrical signal that sets the state of the cell to 0 or 1; for reading, that terminal is used for the output of the cell's state.

RAM (Random Access Memory):

RAM stands for "Random Access Memory" and is often called primary or main memory. Built from semiconductor chips, it is the working space used by the computer to hold the program that is currently running, along with the necessary data and instructions. It is fast, relatively expensive memory that allows the computer to access data and instructions very quickly.



We can read from RAM as well as write into it; hence it is also called "read-write" memory. The main drawback of RAM is that it is a volatile memory: its contents are lost when the computer is switched off.

It is made of millions of microscopic cells, each distinctly numbered so that it can be identified and located. Each cell can be electrically charged or uncharged; a charged cell represents 1 and an uncharged cell represents 0 in binary. RAM is of two types:

1. DRAM:

DRAM stands for "Dynamic Random Access Memory". It is made up of capacitors that store electric charge. Due to charge leakage, the capacitors discharge gradually and the memory cells lose their contents, so DRAM has to be refreshed periodically to recharge the capacitors and retain its contents. DRAM is slower than SRAM, but it is denser, consumes less electricity, is smaller in size, and is less expensive.

Synchronous Dynamic Random Access Memory (SDRAM) is DRAM with a synchronous interface, and it is widely used in present computers. Traditionally, DRAM has an asynchronous interface, which means that it responds as quickly as possible to changes in its control inputs. SDRAM instead waits for a clock signal before responding, and is therefore synchronized with the computer's system bus.

Examples: SDRAM, DDR "Double Data Rate" SDRAM, DDR2, DDR3, EDO DRAM, RIMM, etc.

2. SRAM:

SRAM stands for "Static Random Access Memory" and is made up of transistors. It is called static because it retains its memory contents without being refreshed or recharged, as long as power is supplied. SRAM is faster than DRAM but more expensive, lower in density, bigger in size, and consumes more electricity.

Differentiate between DRAM and SRAM:

| DRAM | SRAM |
| --- | --- |
| DRAM stands for Dynamic Random Access Memory. | SRAM stands for Static Random Access Memory. |
| Made up of capacitors. | Made up of transistors. |
| High-density RAM: a larger memory can be constructed on one chip. | Low-density RAM: only a smaller memory can be constructed on one chip. |
| Power consumption is lower than that of SRAM. | Power consumption is higher than that of DRAM. |
| Needs to be refreshed periodically (the system refreshes the RAM cells automatically). | Does not need to be refreshed periodically. |
| The cost of DRAM is lower than that of SRAM. | The cost of SRAM is higher than that of DRAM. |
| Data access time is larger (typically more than 40 nanoseconds), hence slower. | Data access time is smaller (typically less than 30 nanoseconds), hence faster. |
| Generally used as low-cost, high-capacity main memory for computers. | Generally used for speed-critical memory such as cache. |
| Examples: SDRAM, DDR (Double Data Rate), DDR2, DDR3, EDO DRAM, RIMM, etc. | Example: the cache memory of a microprocessor. |

ROM (Read-Only Memory):

ROM stands for "Read Only Memory"; it is so called because only the read operation can be performed on it. The binary information stored in ROM is written permanently by the manufacturer and cannot be altered. ROM is used to store software that enables the computer to boot up, because booting instructions do not need modification.


ROM is non-volatile memory: it retains its contents even after the computer is turned off. It is also made of semiconductor chips. A program stored permanently in ROM is called firmware; firmware is immediately available when a device is powered on, to start up the PC or other electronic equipment such as mobile phones and PDAs. ROM is of three types:

1. PROM:

PROM stands for "Programmable Read-Only Memory". Initially it is a blank chip, which can be written (programmed) only once using a special machine called a ROM programmer or ROM burner. Once the PROM is written, it cannot be modified and effectively becomes a ROM.

2. EPROM:

EPROM stands for "Erasable Programmable Read-Only Memory". It is a special chip that can be re-programmed to record different information. The stored data and information are erased by exposing the chip to intense ultraviolet light for about 20 minutes. These chips are used in product development and experimental projects.

3. EEPROM:

EEPROM stands for "Electrically Erasable Programmable Read-Only Memory". These chips can be erased and re-programmed repeatedly with special electrical pulses. EEPROM does not require a special device to write into it, and it can be reprogrammed without being removed from the computer. It has a limited life span: the number of times it can be re-programmed is limited, typically to tens of thousands or hundreds of thousands of write cycles.

Differentiate between RAM and ROM:

| RAM | ROM |
| --- | --- |
| RAM stands for Random Access Memory. | ROM stands for Read Only Memory. |
| Volatile memory: if power fails, data and information are lost. | Inherently non-volatile memory: if power fails, data and information are not lost. |
| Used to hold the currently running programs of the computer system. | Used to store the firmware of computer systems and the system software of embedded systems. |
| Read/write memory. | Read-only memory. |
| The cost of RAM is higher than that of ROM. | The cost of ROM is lower than that of RAM. |
| Two types: SRAM and DRAM. | Three types: PROM, EPROM, and EEPROM. |

Characteristics of Memory System:

1. Location:

    a. Internal (Example: Registers, Main Memory, and Cache)
    b. External (Example: Magnetic Disk, Optical Disk, etc.)

2. Capacity:

    a. Number of words
    b. Number of bytes

3. Units of Transfer:

    a. Word
    b. Block

4. Access Method:

    a. Sequential
    b. Direct
    c. Random
    d. Associative

5. Performance:

    a. Access Time
    b. Cycle Time
    c. Transfer Rate

6. Physical Characteristics:

    a. Volatile or Non-volatile
    b. Erasable or non-erasable

Auxiliary Memory:

1. Magnetic Disk:

A disk is a circular platter constructed of metal or of plastic coated with magnetic material. Data are recorded on, and later retrieved from, the disk through a conducting coil known as the head. During a read or write operation the head is stationary while the platter rotates beneath it. Writing is achieved by producing a magnetic field that records a magnetic pattern on the magnetic surface.

The figure below shows the data layout of a disk. The head is capable of both reading and writing. Data are transferred to and from the disk in blocks, and the sectors may be of fixed or variable length. A disk may have a single platter or multiple platters.

[Figure: Disk data layout]

2. Magnetic Tape:

Tape systems use the same reading and recording techniques as disk systems. Data are recorded in parallel, one byte at a time across the width of the tape, and are structured as a number of parallel tracks running lengthwise.

Data are read and written in contiguous blocks called physical records; blocks on the tape are separated by gaps called inter-record gaps. Magnetic tape is a system for storing digital information on tape using digital recording, and the device that performs the reading and writing of data is a tape drive.


3. Optical Disk:

An optical disc is an electronic data storage medium that can be written to and read using a low-powered laser beam. Originally developed in the late 1960s, the first optical disc, created by James T. Russell, stored data as micron-wide dots of light and dark. A laser read the dots, and the data was converted to an electrical signal, and finally to audio or visual output. However, the technology didn't appear in the marketplace until Philips and Sony came out with the compact disc (CD) in 1982. Since then, there has been a constant succession of optical disc formats, first in CD formats, followed by a number of DVD formats.


The optical disc offers a number of advantages over magnetic storage media. An optical disc holds much more data. The greater control and focus possible with laser beams (in comparison to tiny magnetic heads) means that more data can be written into a smaller space. Storage capacity increases with each new generation of optical media. Emerging standards, such as Blu-ray, offer up to 27 gigabytes (GB) on a single-sided 12-centimeter disc. In comparison, a diskette, for example, can hold 1.44 megabytes (MB). Optical discs are inexpensive to manufacture and data stored on them is relatively impervious to most environmental threats, such as power surges, or magnetic disturbances.

4. Flash Drives:

A flash drive is a small, portable flash memory device that plugs into a computer's USB port and functions as a portable hard drive. USB flash drives are touted as easy to use: they are small enough to be carried in a pocket and can plug into any computer with a USB port. USB flash drives have less storage capacity than an external hard drive, but they are smaller and more durable because they do not contain any internal moving parts.

USB flash drives also are called thumb drives, jump drives, pen drives, key drives, tokens, or simply USB drives.


A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case which can be carried in a pocket or on a key chain. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist.

5. Review Of RAID (Redundant Array Of Independent Disks):

RAID is a set of physical disk drives viewed by the operating system as a single logical drive. Data are distributed across the physical drives of the array in a scheme known as striping. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.


RAID 0:

1. Often called striping
2. Breaks a file into blocks of data
3. Simple to implement
4. Provides no redundancy or error detection

RAID 1:

1. Often called mirroring
2. The complete file is stored on a single disk
3. A second disk contains an exact copy of the file
4. Provides complete redundancy of data

RAID 2:

1. Stripes data across disks at the bit level, similar to level 0
2. Dedicated parity disks are used to reconstruct corrupted or lost data
3. Uses an error-correcting code (Hamming ECC) to monitor the correctness of information

RAID 3:

1. It eliminates the drawback of level 2, namely the need for extra disks to detect which disk has an error.
2. A modern disk controller can already determine by itself whether there is an error.

RAID 4:

1. It consists of block-level striping with a dedicated parity disk.
2. It allows multiple small input/output requests to be served at once.

RAID 5:

1. It consists of block-level striping with distributed parity.
2. Unlike RAID 4, the parity information is distributed among all the drives instead of being kept on a dedicated disk.

RAID 6:

1. Consists of block-level striping with double distributed parity.
2. Double parity provides fault tolerance up to two failed drives.
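The parity that makes the striped RAID levels recoverable is bytewise XOR across the data blocks. A minimal sketch (three hypothetical data drives plus one parity drive; the block contents are made up):

```python
from functools import reduce

# One stripe: three data blocks, one on each data drive (invented contents).
d0 = bytes([0x0F, 0x12, 0xA0, 0x55])
d1 = bytes([0xFF, 0x00, 0x3C, 0x99])
d2 = bytes([0x10, 0x20, 0x30, 0x40])

def xor_blocks(*blocks):
    # Bytewise XOR of equal-length blocks.
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

parity = xor_blocks(d0, d1, d2)        # stored on the parity drive
# Suppose drive 1 fails: XOR-ing the survivors with the parity block
# regenerates the lost block, because XOR is its own inverse.
recovered = xor_blocks(d0, d2, parity)
assert recovered == d1
```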

Associative Memory:

1. Hardware Organization:

The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data itself rather than by an address. A memory unit accessed by content is called an Associative Memory or Content Addressable Memory (CAM). This type of memory is searched simultaneously and in parallel on the basis of data content rather than by a specific address or location. When a word is written into associative memory, no address is given.

[Figure: Hardware organization of associative memory]

As shown in the figure above, associative memory is organized with the following components:

a. Argument Register (A): It contains the word to be searched. It has ‘n’ bits (one for each bit of the word).

b. Key Register (K): This specifies which part of the argument word needs to be compared with words in memory. If all bits in the register are 1, the entire word should be compared. Otherwise, only the bits having K-bits set to 1 will be compared.

c. Associative Memory Array: It contains the words which are to be compared with the argument word.

d. Match Register (M): It has ‘m’ bits, one bit corresponding to each word in the memory array. After the matching process, the bits corresponding to matching words in match register are set to 1.

2. Address Matching Logic:

The key register provides the mask for choosing a particular field in the A register. The entire content of the A register is compared if the key register contains all 1s; otherwise, only the bits whose key-register positions are 1 are compared. If the compared data match, the corresponding bit in the match register is set. Reading is accomplished by sequentially accessing the memory words whose match bits are set.

Example:
[Figure: Match logic for one word of associative memory]

Let us include a key register. If Kj = 0, there is no need to compare Aj and Fij; only when Kj = 1 is the comparison needed. This is achieved by ORing each term with K'j:

Mi = (x1 + K'1)(x2 + K'2)(x3 + K'3) ... (xn + K'n)

where xj = Aj Fij + A'j F'ij equals 1 when bit j of the argument matches bit j of stored word i.
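A minimal software sketch of this matching logic (the argument, key, and memory contents below are made-up examples):

```python
# Word i matches when, for every bit position j where K[j] == 1,
# the argument bit A[j] equals the stored bit F[i][j].
def matches(A, K, F_i):
    return all(a == f for a, k, f in zip(A, K, F_i) if k == 1)

A = [1, 0, 1, 1]          # argument register
K = [1, 1, 0, 0]          # key register: compare only the two leftmost bits
memory = [
    [1, 0, 0, 0],         # word 0: matches (its first two bits are 1, 0)
    [0, 1, 1, 1],         # word 1: no match
    [1, 0, 1, 0],         # word 2: matches
]
M = [1 if matches(A, K, w) else 0 for w in memory]  # match register
print(M)  # [1, 0, 1]
```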

3. Read/Write Operations:

a.  Read Operations:

When a word is to be read from associative memory, the content of the word, or a part of it, is specified. If more than one word matches the content, all the matched words will have a 1 in the corresponding bit position of the match register. Matched words are then read in sequence by applying a read signal to each matched word line. In most applications, the associative memory stores a table in which no two items are identical under a given key.

b.  Write Operations:

If the entire memory is loaded with new information at once, prior to the search operation, then writing can be done by addressing each location in sequence. A tag register contains as many bits as there are words in memory: 1 for an active word and 0 for an inactive word. When a word is to be inserted, the tag register is scanned until a 0 is found; the word is written at that position, and the bit is changed to 1.

4. Types of Associative Memory:

There are two types of associative memory, each used in different conditions.

a.  Auto-Associative:


Auto-associative memory retrieves a previously stored pattern that most closely resembles the current input pattern.

b.  Hetero-Associative:

In hetero-associative memory, the retrieved pattern is, in general, different from the input pattern, not only in content but possibly also in type and format. Neural networks used to implement these associative memory models are called NAM (Neural Associative Memory).

Cache Memory:

1. Cache Initialization:

The cache contains a copy of a portion of main memory. When the processor attempts to read a word from memory, a check is made to determine whether the word is in the cache. If so, the word is delivered to the processor; if not, a block of main memory is read into the cache, and the word is then delivered to the processor.

Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to the same memory locations. As shown in the figure below, a block of data is transferred between the cache and main memory, whereas a word of data is transferred between the CPU and the cache.

[Figure: Data transfer between CPU, cache, and main memory]

Different levels of cache can be created on the basis of use. For example, in terms of speed, a level 1 cache is faster than level 2, and level 2 is faster than level 3.

The performance of cache memory is measured in terms of a quantity called the hit ratio. When the CPU refers to memory and finds the word in the cache, it is said to produce a hit; if the word is not found in the cache, it counts as a miss. The hit ratio is the number of hits divided by the total number of CPU references to memory.
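As a quick worked example with assumed counts: 1,920 hits out of 2,000 references gives a hit ratio of 0.96.

```python
# Hit ratio = hits / (total CPU references to memory). Counts are assumed.
hits, misses = 1920, 80
hit_ratio = hits / (hits + misses)
print(hit_ratio)  # 0.96
```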

2. Mapping Cache Memory:

The transformation of data from main memory to cache memory is known as the mapping process. There are three types of mapping:

a. Direct Mapping:

[Figure: Direct mapping]

This is the simplest mapping technique, in which block 'M' of main memory maps to a fixed block position 'K' of cache memory (typically K = M modulo the number of cache blocks). Since more than one main memory block is mapped to a given cache block position, contention may arise for that position even when the cache is not full.

The main memory address is divided into three fields: tag, block, and word. The tag bits are required to identify the main memory block when it is resident in the cache; when a new block enters the cache, the block field determines the cache position. For a 16-bit address, the three fields are:
Main Memory Address:

| Tag | Block | Word |
| --- | --- | --- |
| 5 bits | 7 bits | 4 bits |
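A sketch of extracting the three fields from a 16-bit address under this 5/7/4 layout (the address value is arbitrary):

```python
# Split a 16-bit main-memory address into tag(5) | block(7) | word(4).
addr = 0b10110_0101100_1101   # arbitrary 16-bit address

word  = addr & 0xF            # low 4 bits: word within the block
block = (addr >> 4) & 0x7F    # next 7 bits: cache block position 0..127
tag   = (addr >> 11) & 0x1F   # top 5 bits: stored with the cache line
print(tag, block, word)       # 22 44 13
# Direct mapping: the block field alone fixes the cache position, so two
# addresses with equal block fields contend for the same cache line.
```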




b. Associative Mapping:

[Figure: Associative mapping]

This is a much more flexible mapping technique: any main memory block can be loaded into any cache block position. In this case, 12 tag bits are required to identify a main memory block. The tag bits of an address received from the CPU are compared with the tag bits of each cache block to see whether the desired block is present in the cache. The main memory address is divided into two fields:
Main Memory Address:

| Tag | Word |
| --- | --- |
| 12 bits | 4 bits |




c.  Block Set Associative Mapping:

[Figure: Set-associative mapping]

In the block set-associative mapping technique, the blocks of the cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a particular set. As shown in the diagram above, a cache with four blocks per set is used for this mapping technique.

The six-bit set field of the address determines which set of the cache the block may reside in. As shown in the diagram above, two kilobytes of main memory transfer their data to the cache memory. The contention problem of the direct method is eased by having a few choices for block placement.

Similarly, the hardware cost is reduced by limiting the associative search to one set, which is this method's advantage. For a 16-bit address, the three fields are:
Main Memory Address:

| Tag | Set | Word |
| --- | --- | --- |
| 6 bits | 6 bits | 4 bits |
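A hedged sketch of a lookup under this 6/6/4 split, with four blocks per set (the cached tags and data below are invented):

```python
# Set-associative lookup: tag(6) | set(6) | word(4), up to 4 blocks per set.
def split(addr):
    return (addr >> 10) & 0x3F, (addr >> 4) & 0x3F, addr & 0xF  # tag, set, word

# cache[set_index] holds up to four (tag, block_data) entries.
cache = {5: [(0b000011, "block A"), (0b101010, "block B")]}

def lookup(addr):
    tag, set_idx, word = split(addr)
    for stored_tag, data in cache.get(set_idx, []):
        if stored_tag == tag:      # associative search within one set only
            return data, word      # hit
    return None                    # miss: fetch the block from main memory

print(lookup((0b101010 << 10) | (5 << 4) | 0b0010))  # ('block B', 2)
```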




3. Write Policy:

When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy.

There are two basic writing approaches:
a. Write-through: Write is done synchronously both to the cache and to the backing store.
b. Write-back (also called write-behind): Initially, writing is done only to the cache. The write to the backing store is postponed until the cache blocks containing the data are about to be modified/replaced by new content.

A write-back cache is more complex to implement since it needs to track which of its locations have been written over, and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, an effect referred to as a lazy write. For this reason, a read miss in a write-back cache (which requires a block to be replaced by another) will often require two memory accesses to service: one to write the replaced data from the cache back to the store, and then one to retrieve the needed data.

Other policies may also trigger data write-back: for example, the client may make changes to data in the cache and then explicitly notify the cache to write the data back. Since no data is returned to the requester on write operations, a decision is needed for write-miss situations, and there are two approaches:

a. Write Allocate (Also Called Fetch on Write): Data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.

b. No-Write Allocate (Also Called Write-No-Allocate or Write Around): Data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, only the reads are being cached.

Both write-through and write-back policies can use either of these write-miss policies, but usually, they are paired in this way:

A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached. A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.
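A minimal sketch contrasting the two write policies (a dictionary-backed cache and store with an explicit evict; purely illustrative, with no capacity or miss handling):

```python
cache, store = {}, {}      # address -> data
dirty = set()              # addresses written in cache but not yet in store

def write_through(addr, data):
    cache[addr] = data
    store[addr] = data     # synchronous write to the backing store

def write_back(addr, data):
    cache[addr] = data     # backing store is updated later (lazy write)
    dirty.add(addr)

def evict(addr):
    if addr in dirty:      # the dirty bit says the store copy is stale
        store[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)
```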

Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.

4. Replacement Algorithms:

A cache algorithm is a detailed list of instructions that directs which items should be discarded in a computing device's cache of information. Examples of cache algorithms include:

a.  First In First Out (FIFO):

Using this algorithm, the cache behaves in the same way as a FIFO queue: it evicts blocks in the order they were added, without any regard to how often or how many times they were accessed before.

b.  Last In First Out (LIFO):

Using this algorithm, the cache behaves in the exact opposite way to a FIFO queue: it evicts the block added most recently first, without any regard to how often or how many times it was accessed before.

c.  Least Frequently Used (LFU):

This cache algorithm uses a counter to keep track of how often an entry is accessed. With the LFU cache algorithm, the entry with the lowest count is removed first. This method isn't used that often, as it does not account for an item that had an initially high access rate and then was not accessed for a long time.

d.  Least Recently Used (LRU):

This cache algorithm keeps recently used items near the top of the cache. Whenever a new item is accessed, the LRU places it at the top of the cache. When the cache limit has been reached, items that have been accessed less recently will be removed starting from the bottom of the cache. This can be an expensive algorithm to use, as it needs to keep "age bits" that show exactly when the item was accessed. In addition, when an LRU cache algorithm deletes an item, the "age bit" changes on all the other items.
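A compact way to realize LRU in software is to keep entries ordered by recency instead of storing explicit age bits; a sketch using Python's OrderedDict (the capacity and accesses are made up):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()          # least recently used entry first

    def access(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)     # mark as most recently used
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

c = LRUCache(2)
c.access("a", 1); c.access("b", 2); c.access("a", 1); c.access("c", 3)
print(list(c.items))  # ['a', 'c'] -- 'b' was the least recently used
```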

e.  Adaptive Replacement Cache (ARC):

Developed at the IBM Almaden Research Center, this cache algorithm keeps track of both frequency (as in LFU) and recency (as in LRU), as well as recently evicted cache entries, to get the best use out of the available cache.

f. Most Recently Used (MRU):

This cache algorithm removes the most recently used items first. An MRU algorithm is good in situations in which the older an item is, the more likely it is to be accessed.

g.  Random Replacement (RR):

Randomly selects a candidate item and discards it to make space when necessary. This algorithm does not require keeping any information about access history. Because of its simplicity, it has been used in ARM processors, and it admits efficient stochastic simulation.
