Explanation : When A[0][0] is accessed, the block from A[0][0] to A[0][15] is brought into the cache, but it is of no use because the next element accessed is A[1][0], which lies in a different block.
Thus there is not a single hit, and all 512 * 512 = 262144 accesses are misses.
∴ M1 / M2 = 16384/262144 = 1/16
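A quick way to check these counts: the sketch below (a minimal check, assuming a 512x512 array, 16 elements per cache block, and a cache too small to keep a row's block resident until it is next reused) simply reproduces the two miss counts and their ratio.

    /* Minimal check of the miss counts above.
     * Assumptions: 512x512 array, 16 elements per cache block, and a
     * cache too small to keep a block resident until its next reuse.
     * Row-major order: one miss per 16-element block -> M1 = 16384.
     * Column-major order: every access misses        -> M2 = 262144. */
    #include <stdio.h>

    int main(void) {
        long n = 512, block = 16;
        long m1 = (n * n) / block;  /* row-major misses    */
        long m2 = n * n;            /* column-major misses */
        printf("M1 = %ld, M2 = %ld, M1/M2 = 1/%ld\n", m1, m2, m2 / m1);
        return 0;
    }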
Explanation : First, the system looks in cache 1. If the word is not found in cache 1, cache 2 is searched, and then main memory (if it is not in cache 2 either). The average access time therefore accounts for a hit in cache 1, a miss in cache 1 but a hit in cache 2, and a miss in both caches followed by a hit in main memory.

Average access time = [H1 * T1] + [(1 - H1) * H2 * T2] + [(1 - H1) * (1 - H2) * Hm * Tm]

where,
H1 = Hit rate of level 1 cache = 0.8
T1 = Access time for level 1 cache = 1 ns
H2 = Hit rate of level 2 cache = 0.9
T2 = Access time for level 2 cache = 10 ns
Hm = Hit rate of main memory = 1
Tm = Access time for main memory = 500 ns

So, Average access time = (0.8 * 1) + (0.2 * 0.9 * 10) + (0.2 * 0.1 * 1 * 500)
= 0.8 + 1.8 + 10
= 12.6 ns
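The same computation written as a small program (a sketch that simply plugs the values above into the formula; the variable names mirror H1, T1, H2, T2, Hm, Tm):

    #include <stdio.h>

    int main(void) {
        double h1 = 0.8, t1 = 1.0;    /* L1 hit rate, L1 access time (ns) */
        double h2 = 0.9, t2 = 10.0;   /* L2 hit rate, L2 access time (ns) */
        double hm = 1.0, tm = 500.0;  /* main memory always hits, 500 ns  */

        /* Each term: fraction of references serviced at that level times
         * that level's access time.                                      */
        double avg = h1 * t1
                   + (1 - h1) * h2 * t2
                   + (1 - h1) * (1 - h2) * hm * tm;

        printf("Average access time = %.1f ns\n", avg);  /* prints 12.6 ns */
        return 0;
    }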
Explanation : With a bandwidth of 1 GBps, it takes 1 s to transfer 10^9 bytes on the line.
So, for 64 bytes it takes 64 * 1/10^9 s = 64 ns.
The given main memory latency is 32 ns.
So, the total time required to place the cache line is
64 + 32 = 96 ns
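As a quick check, the transfer-plus-latency arithmetic can be written out directly (a sketch assuming, as above, a 1 GBps line, a 64-byte cache line, and a 32 ns memory latency):

    #include <stdio.h>

    int main(void) {
        double bandwidth = 1e9;    /* 1 GBps = 1e9 bytes per second */
        double line_bytes = 64.0;  /* cache line size               */
        double latency_ns = 32.0;  /* main memory latency           */

        double transfer_ns = line_bytes / bandwidth * 1e9;  /* 64 ns */
        double total_ns = latency_ns + transfer_ns;         /* 96 ns */

        printf("transfer = %.0f ns, total = %.0f ns\n", transfer_ns, total_ns);
        return 0;
    }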
Explanation : The main memory consists of 24 banks, each 2 bytes wide. Since all banks can be accessed in parallel, one parallel access of all the banks fetches 24 * 2 = 48 bytes, so only two such accesses are needed to fetch the whole requested data (see the check after the working below).
For one parallel access,
Total time = Decoding time + Latency time
= 24/2 + 80
= 12 + 80
= 92 ns
Hence, for 2 such accesses,
time = 2 * 92
= 184 ns
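The bank arithmetic can also be checked with a short program. This is only a sketch: the 64-byte cache block size is an assumption (the explanation above only says that two parallel accesses cover the whole data), while the 24 banks, 2 bytes per bank, k/2 ns decoding time and 80 ns bank latency are taken from the working above.

    #include <stdio.h>

    int main(void) {
        int k = 24;                  /* number of banks                  */
        int c = 2;                   /* bytes delivered per bank         */
        int block = 64;              /* assumed cache block size (bytes) */
        double decode_ns = k / 2.0;  /* decoding time per iteration      */
        double bank_ns = 80.0;       /* latency of one bank access       */

        /* One iteration of parallel accesses yields k*c = 48 bytes, so a
         * 64-byte block needs ceil(64/48) = 2 iterations.               */
        int iterations = (block + k * c - 1) / (k * c);
        double total_ns = iterations * (decode_ns + bank_ns);

        printf("iterations = %d, latency = %.0f ns\n", iterations, total_ns);
        return 0;
    }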