Cache memory is a very high-speed memory placed between the CPU and main memory so that data can be supplied at close to the speed of the CPU.
It is used to reduce the average time to access data from main memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main memory locations. Most CPUs have separate independent caches, including instruction and data caches.
- Cache memory is much faster than main memory, and its access time is much lower than that of main memory.
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.
The cache checks for the contents of the requested memory location in any cache lines that might contain that address.
- If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
- If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.
Hit ratio = hits / (hits + misses) = number of hits / total accesses
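The formula above can be sketched as a small helper function (the example counts of 9 hits and 1 miss are illustrative, not from the article):

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses)."""
    return hits / (hits + misses)

# Example: 9 hits and 1 miss over 10 total accesses.
print(hit_ratio(9, 1))   # → 0.9
```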
The three different types of mapping used for cache memory are as follows:
- Direct mapping
- Associative mapping
- Set-associative mapping
Direct mapping: In direct mapping, each memory block is assigned to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is evicted. The address is split into two parts, an index field and a tag field; the cache stores the tag, while the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
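A minimal sketch of a direct-mapped lookup, assuming an 8-line cache with 4-word blocks (all parameter values and names are illustrative, not from the article):

```python
NUM_LINES = 8    # assumed number of cache lines
BLOCK_SIZE = 4   # assumed words per block

# Each cache line holds (valid, tag); the data itself is omitted for brevity.
cache = [(False, None)] * NUM_LINES

def access(address):
    """Return True on a cache hit, False on a miss (filling the line)."""
    block = address // BLOCK_SIZE
    index = block % NUM_LINES    # the one line this block must use
    tag = block // NUM_LINES     # the remaining high-order bits
    valid, stored_tag = cache[index]
    if valid and stored_tag == tag:
        return True              # hit
    cache[index] = (True, tag)   # miss: the old block is evicted
    return False
```

Two blocks that share an index (here, addresses 0 and 128) evict each other on alternating accesses, which is the thrashing behavior set-associative mapping later addresses.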
Associative mapping: In this type of mapping, associative memory is used to store both the content and the address of the memory word. Any block can go into any line of the cache. The word-id bits are used to identify which word in the block is needed, and the tag consists of all the remaining bits. This enables the placement of any word at any place in the cache memory. It is considered the fastest and most flexible form of mapping.
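A sketch of a fully associative cache, assuming 4 lines and FIFO replacement (the replacement policy and all parameters are assumptions; the article does not specify them):

```python
from collections import OrderedDict

NUM_LINES = 4    # assumed cache capacity in blocks
BLOCK_SIZE = 4   # assumed words per block

# Maps block tag -> line contents (contents omitted); insertion order
# gives us FIFO eviction via popitem(last=False).
cache = OrderedDict()

def access(address):
    """Return True on a cache hit, False on a miss (filling a line)."""
    tag = address // BLOCK_SIZE     # the tag is all the remaining bits
    if tag in cache:
        return True                 # hit: block can be in any line
    if len(cache) >= NUM_LINES:
        cache.popitem(last=False)   # evict the oldest block
    cache[tag] = None
    return False
```

Unlike direct mapping, two blocks never conflict until the whole cache is full, because any block may occupy any line.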
Set-associative mapping: This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct-mapping method: instead of having exactly one line that a block can map to in the cache, a few lines are grouped together to create a set, and a block in memory can then map to any one of the lines of a specific set. Set-associative mapping allows each index address in the cache to hold two or more words from main memory. Set-associative cache mapping combines the best of the direct and associative cache mapping techniques.
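A sketch of a 2-way set-associative lookup, assuming 4 sets with FIFO replacement within each set (the way count, set count, and replacement policy are assumptions for illustration):

```python
NUM_SETS = 4     # assumed number of sets
WAYS = 2         # assumed lines (ways) per set
BLOCK_SIZE = 4   # assumed words per block

# Each set holds up to WAYS tags; the data itself is omitted.
cache = [[] for _ in range(NUM_SETS)]

def access(address):
    """Return True on a cache hit, False on a miss (filling a way)."""
    block = address // BLOCK_SIZE
    set_index = block % NUM_SETS   # which set the block must use
    tag = block // NUM_SETS        # the remaining high-order bits
    ways = cache[set_index]
    if tag in ways:
        return True                # hit in any way of the set
    if len(ways) >= WAYS:
        ways.pop(0)                # evict the oldest tag in this set
    ways.append(tag)
    return False
```

Two blocks that map to the same set (here, addresses 0 and 64) can now coexist, avoiding the thrashing that direct mapping would cause.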
Sample GATE Question
The cache hit ratio for this initialization loop is
Cache hit ratio = number of hits / total accesses
So option (C) is correct.
Article contributed by Pooja Taneja.