potato

What happens upon a conflict miss? Does eviction still follow LRU?

nassosterz

LRU is one of the eviction policies. There are others, like LFU. I guess the answer depends on the specific eviction policy of the cache.
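For anyone curious, LRU is easy to sketch in code. Here is a minimal Python simulation of LRU eviction for a fully associative cache (the capacity and access names below are just illustrative, not from the slide):

```python
from collections import OrderedDict

def simulate_lru(capacity, accesses):
    """Simulate a fully associative cache with LRU eviction.

    'capacity' is the number of lines the cache can hold;
    'accesses' is a sequence of line addresses.
    Returns the number of misses.
    """
    cache = OrderedDict()
    misses = 0
    for line in accesses:
        if line in cache:
            cache.move_to_end(line)        # hit: mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used line
            cache[line] = True
    return misses

# e.g. simulate_lru(2, ['A', 'B', 'A', 'C', 'B'])
# 'C' evicts 'B' (the LRU line), so the final 'B' also misses.
```

Swapping in a different policy (LFU, random) just means changing which entry `popitem` removes.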

student1

When we say 8-way set associative, are we saying that there are 8 cache lines one can load into the cache? If so, why does the L2 cache have lower associativity despite its larger size? Is there an example of a situation where a conflict miss occurs, and of how 8-way/4-way set associativity actually works?

laimagineCS149

I was confused with cache associativity, and found this illustration which helps a bit for my understanding: http://csillustrated.berkeley.edu/PDFs/handouts/cache-3-associativity-handout.pdf

german.enik

this helped me understand what direct-mapped vs set-associative vs fully associative means! http://csillustrated.berkeley.edu/PDFs/handouts/cache-3-associativity-handout.pdf

kai

I just wanted to note that it might seem like fully associative caches have every advantage over less associative caches; however, as you increase the associativity of a cache, you increase the hardware needed to implement it and consequently the cache hit latency.

parthiv

To recap, the 3 C's model distinguishes between three types of cache misses:

  1. Cold: first load; i.e., we would have had a cache miss even in an infinite-size cache. During the first load, we must bring the line into the cache.

  2. Capacity: working set is larger than the cache; i.e., we would have avoided this miss in an infinite-size cache. Since the cache is not big enough to hold all lines that the program needs, some lines must be evicted (e.g. via LRU).

  3. Conflict: cache lines conflict; i.e., we would have avoided this miss in a fully associative cache. In a fully associative cache, any subset of lines (up to the size of the cache) can be held at once. In the real world, however, full associativity is uncommon, and we may have to evict a line because a given line can only be held in one of, e.g., eight places. In that case, we could have nine cache lines that fully overlap in their available locations, so one must be evicted even though the cache has enough total space to hold them all.
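The conflict case above can be made concrete with a small simulation. The sketch below (in Python, with an illustrative 64-byte line size and made-up set counts, not any specific chip's geometry) shows two caches of equal total capacity: a direct-mapped one where two addresses ping-ponging in the same set miss every time, and a 2-way one where both lines fit and only the two cold misses remain:

```python
from collections import OrderedDict

def count_misses(accesses, num_sets, ways, line_size=64):
    """Count misses in a set-associative cache with per-set LRU.

    Total capacity = num_sets * ways lines. All parameters are
    illustrative values, not a real machine's geometry.
    """
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for addr in accesses:
        line = addr // line_size          # drop the offset bits
        s = sets[line % num_sets]         # pick the set
        if line in s:
            s.move_to_end(line)           # hit: mark most recently used
        else:
            misses += 1
            if len(s) >= ways:
                s.popitem(last=False)     # evict LRU line within this set
            s[line] = True
    return misses

# Two addresses 8 KB apart alternate; with 64-byte lines they map
# to lines 0 and 128, which collide in a 128-set direct-mapped cache.
trace = [0, 8192] * 8

dm_misses  = count_misses(trace, num_sets=128, ways=1)  # every access misses
sa2_misses = count_misses(trace, num_sets=64,  ways=2)  # only the 2 cold misses
```

Both configurations hold 128 lines total; only the associativity differs, so every miss beyond the two cold ones in the direct-mapped run is a conflict miss.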

Could we define a fourth C -- coherence miss? Where a line is a miss since the cache line has been invalidated?

tigerpanda

Is L3 cache different from RAM? After reviewing the slides from last lecture, I became quite confused by what we mean by a "write or read to memory". When we say memory, are we talking about writing or reading to the SSD or RAM? Looking at the cache diagram above, are all of these the caches we have been thinking about in previous written assignments? Also, when an item is stored in the cache, how is it determined which cache it will get stored in?

tigerpanda

Nevermind, I rewatched that part of the lecture and it is clear now:)

alishah

I'm a little confused by what set associative means. Is it a limit on how many cache lines can be present in the cache at one time?

AnonyMouse

Yes, I believe set associativity refers to how many cache lines each set at a given level can hold. We need a way to map any memory address to a place in the cache, but since our cache is much smaller than actual memory, there are bound to be memory regions that map to the same set. Set associativity allows us to accommodate that overlap. With a 1-way set-associative (direct-mapped) cache, each set can hold only one line, so if another memory region comes along that maps to the same set, we'll be forced to evict the previous occupant. In a 2-way set-associative cache we can accommodate 2 overlapping memory regions ("overlapping" meaning their addresses map to the same set), with 4-way set-associative caches we can accommodate 4, and so on. EE180 goes more in-depth into that topic if anyone's interested.
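To make the mapping concrete, here is a rough sketch of how an address picks its set (the 64-byte line size and 64-set count below are made-up example values, not this machine's actual geometry): drop the offset bits within the line, then take the low bits as the set index.

```python
def set_index(addr, line_size=64, num_sets=64):
    """Map an address to a set: discard the byte offset within
    the line, then take the result modulo the number of sets.
    line_size and num_sets are illustrative example values."""
    return (addr // line_size) % num_sets

# Addresses that differ by exactly num_sets * line_size (here 4 KB)
# land in the same set and compete for its 'ways':
a = 0x10000
b = a + 64 * 64
```

With these example numbers, any two addresses 4 KB apart contend for the same set; an N-way cache can hold N such contenders before a conflict eviction.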

kkim801

@tigerpanda It depends on the machine. In academia, the L3 is commonly referred to as the shared last-level cache (LLC), which is responsible for keeping accesses from going out to main memory as much as possible. But sometimes the L2 is the LLC, as on some chips out there. So no, L3 is not RAM.
