Lecture 12: Memory Hierarchy
Ways to Reduce Misses
 
Block Replacement
When a miss occurs, the cache controller must select a block to be replaced with the desired data. A replacement policy determines which block should be replaced. With direct-mapped placement the decision is simple because there is no choice: only one block frame is checked for a hit, and only that block can be replaced. With fully-associative or set-associative placement, there is more than one block to choose from on a miss.
Primary strategies (a minimal C sketch of both follows this list):
Random
 - To spread allocation uniformly, candidate blocks are selected at random.
 Advantage: simple to implement in hardware
 Disadvantage: ignores the principle of locality
Least-Recently Used (LRU)
 - To reduce the chance of throwing out information that will be needed soon, accesses to blocks are recorded. The block replaced is the one that has been unused for the longest time.
 Advantage: takes locality into account
 Disadvantage: as the number of blocks to keep track of increases, LRU becomes more expensive (harder to implement, slower, and often only approximated)
Other strategies:
First In First Out (FIFO), Most-Recently Used (MRU), Least-Frequently Used (LFU), Most-Frequently Used (MFU)
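Below is a minimal sketch, not from the lecture, showing how LRU and random victim selection might be implemented for one set of a 4-way set-associative cache; names such as WAYS, struct block, and the LCG constants are illustrative assumptions. LRU must track per-block usage, while random replacement needs no bookkeeping at all, which is the cost/locality trade-off described above.

#include <stdbool.h>
#include <stdint.h>

#define WAYS 4   /* assumed associativity for the example */

struct block {
    bool     valid;
    uint32_t tag;
    uint32_t last_used;   /* timestamp of the most recent access */
};

/* LRU: evict an invalid block if one exists, otherwise the block
 * whose last access is oldest. */
static int choose_victim_lru(struct block set[WAYS])
{
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid)
            return w;                               /* free slot: no real eviction */
        if (set[w].last_used < set[victim].last_used)
            victim = w;                             /* older access -> better victim */
    }
    return victim;
}

/* Random: pick any way; cheap in hardware but ignores locality. */
static int choose_victim_random(unsigned *seed)
{
    *seed = *seed * 1103515245u + 12345u;           /* simple LCG, illustrative only */
    return (int)((*seed >> 16) % WAYS);
}

In real hardware the last_used timestamps are usually replaced by a few LRU ordering bits per set or a pseudo-LRU tree, which is why LRU is "often only approximated" as the associativity grows.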
 
Review: Four Questions for Memory Hierarchy Designers
Q1: Where can a block be placed in the upper level? (Block placement)
 – Fully Associative, Set Associative, Direct Mapped
Q2: How is a block found if it is in the upper level? (Block identification)
 – Tag/Block (see the address-decomposition sketch after this list)
Q3: Which block should be replaced on a miss? (Block replacement)
 – Random, LRU
Q4: What happens on a write? (Write strategy)
 – Write Back or Write Through (with Write Buffer)
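The sketch referenced at Q2 is below: a hypothetical direct-mapped cache in which the address is split into tag | index | block offset, the index selects exactly one block frame, and a hit requires a tag match with the valid bit set. The geometry (64-byte blocks, 256 frames) and all identifiers are assumptions for illustration, not part of the original slides.

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 64u     /* bytes per block  -> 6 offset bits */
#define NUM_FRAMES 256u    /* block frames     -> 8 index bits  */

struct frame {
    bool     valid;
    uint32_t tag;
};

static struct frame cache[NUM_FRAMES];

/* Q1/Q2 in code: the index answers "where can the block be placed?"
 * (exactly one frame, since the cache is direct mapped) and the tag
 * comparison answers "how is the block found?". */
static bool cache_hit(uint32_t addr)
{
    const uint32_t offset_bits = 6;   /* log2(BLOCK_SIZE) */
    const uint32_t index_bits  = 8;   /* log2(NUM_FRAMES) */

    uint32_t index = (addr >> offset_bits) & (NUM_FRAMES - 1u);
    uint32_t tag   = addr >> (offset_bits + index_bits);

    return cache[index].valid && cache[index].tag == tag;
}

For a set-associative cache the index would select a set and the tag would be compared against every block in that set; for a fully associative cache there is no index at all and the tag is compared against every block, which is where the replacement policies of the previous slide come in.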
 
Review: Cache Performance
CPU time = Instruction Count x (CPI_execution + Memory accesses per instruction x Miss rate x Miss penalty) x Clock cycle time

Misses per instruction = Memory accesses per instruction x Miss rate

CPU time = IC x (CPI_execution + Misses per instruction x Miss penalty) x Clock cycle time
(a worked numerical example follows the list below)

To improve cache performance:
1. Reduce the miss rate
2. Reduce the miss penalty
3. Reduce the time to hit in the cache
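As a worked check of the CPU time equation, the short C program below plugs in assumed, purely illustrative numbers (they are not from the slides): CPI_execution = 1.0, 1.5 memory accesses per instruction, a 2% miss rate, a 50-cycle miss penalty, and a 2 ns clock cycle over 10^9 instructions.

#include <stdio.h>

int main(void)
{
    double ic            = 1e9;     /* instruction count (assumed) */
    double cpi_execution = 1.0;
    double mem_per_instr = 1.5;     /* memory accesses per instruction */
    double miss_rate     = 0.02;
    double miss_penalty  = 50.0;    /* clock cycles */
    double clock_cycle   = 2e-9;    /* seconds */

    /* Misses per instruction = Memory accesses per instruction x Miss rate */
    double misses_per_instr = mem_per_instr * miss_rate;                 /* 0.03  */

    /* CPU time = IC x (CPI_execution + Misses/instr x Miss penalty) x Clock cycle time */
    double cpi_total = cpi_execution + misses_per_instr * miss_penalty;  /* 2.5   */
    double cpu_time  = ic * cpi_total * clock_cycle;                     /* 5.0 s */

    printf("Misses per instruction = %.3f\n", misses_per_instr);
    printf("Effective CPI          = %.3f\n", cpi_total);
    printf("CPU time               = %.3f s\n", cpu_time);
    return 0;
}

With these numbers, memory stalls add 1.5 cycles per instruction and more than double the base CPI, which is exactly why the three levers listed above (miss rate, miss penalty, hit time) dominate cache design.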