
Cache Coherence


Cache coherence is an issue faced in multicore systems in which each core has its own private cache. For example, consider the following system with two cores. Each core has its own private cache, and the two cores share a common main memory. The shared main memory may be reached over a bus or over an interconnection network.





Because the main memory is shared, the caches of the two cores can hold copies of locations from the same memory. There are two possible cases in such a situation.


Case 1:

The two caches store different memory locations. As shown in the figure below, the caches might hold data from different memory locations and so have no memory location in common. In this case there is no problem: the data in either cache can be modified without any issues.






Case 2:

The caches might store a common location. Let us say the memory address 100 is stored in both caches, and assume the data at memory location 100 is initially 10, as shown below.








Let us assume the following operations are performed on memory location 100:

1. P1 increments the value at memory location 100 by 1, making it 11.

If the cache is write-back, this change is not reflected in main memory immediately; even if the cache is write-through, the change becomes visible to main memory but not to the cache of processor P2.

2. P2 reads the data from memory location 100.

As the change in the value has not been communicated to the cache of processor P2, it continues to read the older value of 10, which is now wrong.
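The stale read above can be reproduced with a small sketch. This is a hypothetical simulation, not a real cache model: two write-through private caches (plain dictionaries) sit over a shared main memory, with no coherence mechanism connecting them.

```python
# Hypothetical two-core sketch of the stale-read problem: two private
# write-through caches over a shared memory, with no coherence at all.

class Cache:
    def __init__(self, memory):
        self.memory = memory          # shared main memory (address -> value)
        self.lines = {}               # this core's private cache

    def read(self, addr):
        if addr not in self.lines:    # miss: fetch the block from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value      # update the private copy...
        self.memory[addr] = value     # ...and write through to main memory

memory = {100: 10}
p1, p2 = Cache(memory), Cache(memory)

p2.read(100)                          # P2 caches the initial value 10
p1.write(100, p1.read(100) + 1)       # P1 increments: main memory now holds 11
stale = p2.read(100)                  # P2 hits its private copy: still 10
```

Even though main memory already holds 11, P2's read hits its own cache and returns the stale 10, exactly as in the two-step scenario above.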
Thus we see that when the value of a memory location cached by several processors changes, the change must be communicated to all the caches sharing that location so that they can be kept up to date.
This process of keeping all the caches up to date is termed cache coherence. It is done in two ways.

Invalidation:
If one processor modifies the data in a shared memory location, all the other caches are informed and in turn invalidate their copies of that location. The next time one of those processors issues a read, it misses and fetches the updated data from main memory.
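The invalidation idea can be sketched as follows (again hypothetical classes, not a real protocol): on every write, the writing cache removes the address from every peer cache, so a peer's next read misses and re-fetches the fresh value from memory.

```python
# Minimal invalidation-based coherence sketch: a write drops the copies
# held by all other caches, forcing their next read to miss.

class InvCache:
    def __init__(self, memory, peers):
        self.memory = memory          # shared main memory (address -> value)
        self.peers = peers            # list of all caches in the system
        self.lines = {}               # this core's private cache
        peers.append(self)

    def read(self, addr):
        if addr not in self.lines:    # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value     # write-through, for simplicity
        for peer in self.peers:
            if peer is not self:
                peer.lines.pop(addr, None)   # invalidate other copies

memory, peers = {100: 10}, []
p1, p2 = InvCache(memory, peers), InvCache(memory, peers)

p2.read(100)                          # P2 caches 10
p1.write(100, p1.read(100) + 1)       # P1 writes 11; P2's copy is invalidated
fresh = p2.read(100)                  # miss: re-fetches 11 from memory
```

Here P2's second read correctly returns 11, at the cost of an extra miss.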



Update:
The second method is to send the updated data of the memory location to all the caches. Every cache then updates itself with the new value, so any further reads see the up-to-date data.
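The update approach can be sketched in the same hypothetical style: instead of dropping the peers' copies, the writer pushes the new value into every cache that currently holds the block, so later reads hit fresh data without a miss.

```python
# Minimal update-based coherence sketch: a write pushes the new value
# into every other cache that is currently sharing the block.

class UpdCache:
    def __init__(self, memory, peers):
        self.memory = memory          # shared main memory (address -> value)
        self.peers = peers            # list of all caches in the system
        self.lines = {}               # this core's private cache
        peers.append(self)

    def read(self, addr):
        if addr not in self.lines:    # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value     # write-through, for simplicity
        for peer in self.peers:
            if peer is not self and addr in peer.lines:
                peer.lines[addr] = value     # push the update to sharers

memory, peers = {100: 10}, []
p1, p2 = UpdCache(memory, peers), UpdCache(memory, peers)

p2.read(100)                          # P2 caches 10
p1.write(100, p1.read(100) + 1)       # P1 writes 11; P2's copy is updated too
fresh = p2.read(100)                  # hit: P2 already holds 11
```

P2's second read is a hit and still returns the correct 11; the trade-off is the bus traffic of broadcasting every write to the sharers.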






Coherence Misses:




Because of the coherence issues explained above, multicore systems experience a new kind of miss in addition to the usual capacity, conflict, and compulsory misses: the coherence miss. These are misses that occur when a cache block is invalidated because of a write in some other cache. Any future read of that block then misses and has to fetch the data from main memory or from the cache that holds the updated copy.
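A coherence miss can be distinguished from a compulsory miss with a small self-contained sketch (hypothetical classes): P2's first miss is compulsory (first touch), while its last miss happens only because P1's write invalidated a block P2 already held.

```python
# Counting misses under an invalidation scheme. P2's second miss is a
# coherence miss: the block was valid until P1's write invalidated it.

class CountingCache:
    def __init__(self, memory, peers):
        self.memory = memory          # shared main memory (address -> value)
        self.peers = peers            # list of all caches in the system
        self.lines = {}               # this core's private cache
        self.misses = 0
        peers.append(self)

    def read(self, addr):
        if addr not in self.lines:
            self.misses += 1          # count every read miss
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value
        for peer in self.peers:
            if peer is not self:
                peer.lines.pop(addr, None)   # invalidate other copies

memory, peers = {100: 10}, []
p1, p2 = CountingCache(memory, peers), CountingCache(memory, peers)

p2.read(100)          # compulsory miss (first touch)
p2.read(100)          # hit
p1.write(100, 11)     # invalidates P2's copy
p2.read(100)          # coherence miss: the block had to be re-fetched
```

P2 ends up with two misses for the same block even though its cache never ran out of space, which is what sets coherence misses apart from capacity and conflict misses.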


To handle the coherence issue, a number of protocols are used. The most common ones are:

Invalidation based: protocols which invalidate other copies of the data on a write


MSI protocol (Modified Shared Invalid)
MESI protocol (Modified Exclusive Shared Invalid)
MOESI protocol (Modified Owned Exclusive Shared Invalid)
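As an illustration, the simplest of these, MSI, can be written down as a per-cache-line state machine. The table below is a deliberately simplified sketch of the standard MSI transitions (real protocols also generate bus transactions and data transfers, which are omitted here).

```python
# Simplified MSI state machine for a single cache line. Each cache line
# is Modified, Shared, or Invalid, and moves between states in response
# to local processor accesses and remote (bus-observed) accesses.

M, S, I = "Modified", "Shared", "Invalid"

# (current state, event) -> next state
MSI = {
    (I, "local_read"):   S,   # read miss: fetch block, become a sharer
    (I, "local_write"):  M,   # write miss: fetch exclusively, mark dirty
    (S, "local_read"):   S,   # read hit
    (S, "local_write"):  M,   # upgrade: other sharers get invalidated
    (M, "local_read"):   M,   # read hit on a dirty block
    (M, "local_write"):  M,   # write hit on a dirty block
    (S, "remote_read"):  S,   # another core reads: stay a sharer
    (S, "remote_write"): I,   # another core writes: invalidate our copy
    (M, "remote_read"):  S,   # supply the dirty data, downgrade to Shared
    (M, "remote_write"): I,   # write back dirty data, then invalidate
}

def run(events, state=I):
    """Apply a sequence of events to one cache line, starting Invalid."""
    for event in events:
        state = MSI[(state, event)]
    return state

# The stale-read scenario from this cache's point of view:
final = run(["local_read", "local_write", "remote_write"])  # I -> S -> M -> I
```

Walking P2's line through read, write, then a remote write ends in Invalid, which is exactly the invalidation behaviour described above; MESI and MOESI refine this table with Exclusive and Owned states to cut unnecessary bus traffic.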


Update based: protocols which send the updated data to all the caches on a write


Dragon protocol.








