Computer Organization: Exploring Cache Memory Mapping Techniques: Direct Mapping, Associative Mapping, and Set-Associative Mapping

Cache memory is a small, high-speed memory that stores frequently accessed data and instructions from main memory to reduce the average access time. Cache memory mapping determines how blocks of main memory are placed into specific cache locations. There are three primary cache memory mapping techniques: Direct Mapping, Associative Mapping, and Set-Associative Mapping. Let's walk through each of these mapping techniques with examples:

  1. Direct Mapping: In direct mapping, each block of main memory can be mapped to only one specific cache location. The mapping is determined by extracting certain bits from the memory address and using them as an index to access the corresponding cache location. The cache size is usually smaller than the main memory, so multiple main memory blocks can map to the same cache location, resulting in possible cache conflicts.

Example: Suppose we have a direct-mapped cache with 4 cache lines (L0, L1, L2, L3), and the main memory has 16 blocks (B0, B1, …, B15). The direct mapping function uses the least significant bits of the block number as the cache line index, i.e. line = block number mod 4.

Memory Block (Main Memory) | Cache Line Index (Cache)

B0 | L0
B1 | L1
B2 | L2
B3 | L3
B4 | L0
B5 | L1
B6 | L2
B7 | L3
B8 | L0
... | ...
B15 | L3

In this example, memory blocks B0, B4, B8, and B12 all map to the same cache line L0, which can lead to cache conflicts.
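The modulo mapping above can be sketched in a few lines of Python. This is a minimal illustration, not a hardware model; the function and constant names are my own, and the 4-line/16-block sizes come from the example above.

```python
# Minimal sketch of direct-mapped placement, assuming the 4-line cache
# and 16-block main memory from the example above.
NUM_CACHE_LINES = 4

def direct_map(block_number: int) -> int:
    """Return the single cache line a memory block maps to."""
    # The line index is the block number modulo the number of lines,
    # i.e. its least significant log2(NUM_CACHE_LINES) bits.
    return block_number % NUM_CACHE_LINES

# Blocks B0, B4, B8, and B12 all collide on line L0:
for b in (0, 4, 8, 12):
    print(f"B{b} -> L{direct_map(b)}")
```

Because the mapping is fixed, alternating accesses between any two of these colliding blocks would evict each other on every access, which is exactly the conflict problem direct mapping suffers from.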

  2. Associative Mapping: In associative mapping, each block of main memory can be placed in any cache location. There are no predetermined mappings; the cache controller searches the entire cache to find the requested data. Associative mapping eliminates conflict misses, but it requires additional hardware to compare the tag (derived from the memory address) of every cache line in parallel to identify the correct block.

Example: Consider an associative cache with 4 cache lines (L0, L1, L2, L3), and the main memory still has 16 blocks (B0, B1, …, B15).

Memory Block (Main Memory) | Cache Line (Cache, one possible placement)

B0 | L2
B1 | L1
B2 | L3
B3 | L0
B4 | L1
B5 | L2
B6 | L0
B7 | L3
B8 | L1
... | ...
B15 | L3

In associative mapping, each main memory block can be placed in any available cache line, which allows for efficient use of the cache space.
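A fully associative lookup can be sketched as below. The class name, the tag search, and the trivial eviction policy are all illustrative assumptions (real caches compare all tags in parallel in hardware and use a replacement policy such as LRU); only the 4-line size comes from the example.

```python
# Minimal sketch of a fully associative cache, assuming the 4-line
# cache from the example. Any block can occupy any line; a hit
# requires comparing the requested block against every line's tag.
NUM_CACHE_LINES = 4

class AssociativeCache:
    def __init__(self, num_lines: int):
        # Each entry holds the block number (tag) resident in that
        # line, or None if the line is empty.
        self.lines = [None] * num_lines

    def access(self, block: int) -> bool:
        """Return True on a hit; on a miss, fill any free line
        (evicting line 0 as a placeholder policy if all are full)."""
        if block in self.lines:   # hardware does this tag compare
            return True           # in parallel across all lines
        try:
            free = self.lines.index(None)
        except ValueError:
            free = 0              # placeholder eviction choice
        self.lines[free] = block
        return False

cache = AssociativeCache(NUM_CACHE_LINES)
cache.access(0)          # miss: cache starts empty
cache.access(4)          # miss, but B0 and B4 can coexist
print(cache.access(0))   # hit: no conflict, unlike direct mapping
```

Note that B0 and B4, which collided in the direct-mapped example, coexist here without evicting each other.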

  3. Set-Associative Mapping: Set-associative mapping is a compromise between direct mapping and associative mapping. It divides the cache into multiple sets, and each set contains multiple cache lines. Each memory block maps to exactly one set but can be placed in any cache line within that set. Set-associative mapping reduces cache conflicts compared to direct mapping while still providing some of the advantages of associative mapping.

Example: Consider a 2-way set-associative cache with 4 sets (S0, S1, S2, S3), each set containing 2 cache lines (L0, L1). A block's set is determined by block number mod 4.

Memory Block (Main Memory) | Set (Cache) | Cache Line (Cache, one possible placement)

B0 | S0 | L0
B1 | S1 | L0
B2 | S2 | L0
B3 | S3 | L0
B4 | S0 | L1
B5 | S1 | L1
B6 | S2 | L1
B7 | S3 | L1
B8 | S0 | L0
... | ... | ...

In this example, each memory block maps to a fixed set (its block number mod 4) but can occupy either of the two cache lines within that set, making the cache two-way associative within each set.
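The two-step placement (fixed set, flexible line within the set) can be sketched as below. The class, the per-set lists, and the FIFO eviction are illustrative assumptions; the 4-set, 2-way geometry comes from the example.

```python
# Minimal sketch of 2-way set-associative placement, assuming the
# 4 sets of 2 lines each from the example.
NUM_SETS = 4
WAYS = 2

def set_index(block: int) -> int:
    """A block's set is fixed by its low-order bits (block mod 4)."""
    return block % NUM_SETS

class SetAssociativeCache:
    def __init__(self):
        # One list of (up to WAYS) resident block numbers per set.
        self.sets = [[] for _ in range(NUM_SETS)]

    def access(self, block: int) -> bool:
        """Return True on a hit; otherwise place the block in its set,
        evicting the oldest resident if both ways are full (FIFO)."""
        s = self.sets[set_index(block)]
        if block in s:            # only this set's tags are compared
            return True
        if len(s) == WAYS:
            s.pop(0)              # simple FIFO eviction within the set
        s.append(block)
        return False

cache = SetAssociativeCache()
cache.access(0)          # miss: B0 goes to set 0
cache.access(4)          # miss: B4 also maps to set 0, takes the other way
print(cache.access(0))   # hit: B0 and B4 coexist in set 0
print(cache.access(8))   # miss: a third block in set 0 forces an eviction
```

Compared with the direct-mapped sketch, B0 and B4 no longer conflict; a conflict now requires three blocks competing for the same two-line set.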
