Implement LRU Cache

Problem Statement

Least Recently Used (LRU) is a common caching strategy. It defines the policy for evicting elements from the cache to make room for new elements when the cache is full: the least recently used items are discarded first.

Let’s take an example of a cache that has a capacity of 4 elements. We cache elements 1, 2, 3 and 4.

[Diagram: cache state after accessing elements 1, 2, 3, and 4]

The diagram above represents the cache state after the first access of all four elements. We now need to cache another element, “5”.

[Diagram: “1” is evicted; the cache now holds 2, 3, 4, and 5]

In an LRU cache, when a new element needs to be cached and the cache is full, we evict the least recently used element (in this case “1”). Now “2” is next in line to be evicted if another element needs to be cached. Let’s see what happens when “2” is accessed again.

[Diagram: “2” moves to the most recently used position; “3” is now next in line for eviction]

Now “3” becomes the next in line to be evicted from the cache.
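
To summarize the walkthrough, here is the same sequence as a plain-text trace, with the head of the list on the left (next to evict) and the tail on the right (most recently used):

access 1, 2, 3, 4  ->  [1, 2, 3, 4]
access 5           ->  [2, 3, 4, 5]   (“1” is evicted)
access 2           ->  [3, 4, 5, 2]   (“2” moves to the tail)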


Hint

  • Doubly linked list
  • Hashing
  • Think about evictions

Try it yourself

#include <string>
#include <unordered_map>
using namespace std;

// ListNode and LinkedList are helper classes provided by the environment.

// simple single-threaded LRUCache
class LRUCache {
    unordered_map<int, ListNode*> cache;
    // each entry in the linked list is a <key, value> pair
    LinkedList cache_vals;
    int capacity; // total capacity

public:
    LRUCache(int capacity) {
        this->capacity = capacity;
    }

    ~LRUCache() {
        for (auto& t : cache) {
            delete t.second;
        }
    }

    int get(int key) {
        // TODO: Write - Your - Code
        return -1;
    }

    void set(int key, int value) {
        // TODO: Write - Your - Code
    }

    string print() {
        string result = "";
        for (auto& x : cache) {
            result += "(" + std::to_string(x.first) + "," + std::to_string(x.second->value) + "),";
        }
        return result;
    }
};

Solution

#include <iostream>
#include <unordered_set>
using namespace std;

// ListNode and LinkedList are helper classes provided by the environment.
// Linked list operations:
// void add_at_tail(int val)
// void delete_node(ListNode* node)
// void delete_from_head()
// void delete_from_tail()
// ListNode* get_head()
// void set_head(ListNode* node)
// ListNode* get_tail()
// void set_tail(ListNode* node)

// simple single-threaded LRUCache
class LRUCache {
    unordered_set<int> cache;
    // each entry in the linked list is the value of the element
    LinkedList cache_vals;
    int capacity; // total capacity

public:
    LRUCache(int capacity) {
        this->capacity = capacity;
    }

    ~LRUCache() {
        cache.clear();
    }

    ListNode* get(int val) {
        auto p = cache.find(val);
        if (p == cache.end()) {
            return nullptr;
        }
        // the value is cached; find its node with a linear scan of the list
        ListNode* i = cache_vals.get_head();
        while (i != nullptr) {
            if (i->value == val) {
                return i;
            }
            i = i->next;
        }
        return nullptr; // unreachable while the set and the list stay in sync
    }

    void set(int value) {
        ListNode* check = get(value);
        if (check == nullptr) {
            // not cached yet: evict the least recently used element (head) if full
            if (static_cast<int>(cache.size()) >= capacity) {
                int head_val = cache_vals.get_head()->value;
                cache.erase(head_val);
                cache_vals.delete_from_head();
            }
            cache_vals.add_at_tail(value);
            cache.insert(value); // record the new value in the hash set as well
        } else {
            // already cached: move its node to the tail (most recently used)
            if (check == cache_vals.get_tail()) {
                return; // already the most recently used element
            }
            if (check == cache_vals.get_head()) {
                cache_vals.add_at_tail(check->value);
                cache_vals.delete_from_head();
                return;
            }
            // unlink the node from the middle of the list, then re-add at the tail
            if (check->prev != nullptr) {
                check->prev->next = check->next;
            }
            if (check->next != nullptr) {
                check->next->prev = check->prev;
            }
            cache_vals.add_at_tail(check->value);
            delete check;
        }
    }

    void print() {
        ListNode* i = cache_vals.get_head();
        while (i != nullptr) {
            cout << i->value << ", ";
            i = i->next;
        }
        cout << endl;
    }
};
int main(int argc, char const* argv[]) {
    LRUCache cache(4);
    cache.set(1);
    cache.print();
    cache.set(2);
    cache.print();
    cache.set(3);
    cache.print();
    cache.set(4);
    cache.print();
    cache.set(5);
    cache.print();
    cache.set(2);
    cache.print();
    return 0;
}
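
Assuming the LinkedList helper behaves as the comments above describe, this driver prints the cache contents from least recently used (head) to most recently used (tail) after each operation:

1, 
1, 2, 
1, 2, 3, 
1, 2, 3, 4, 
2, 3, 4, 5, 
3, 4, 5, 2, 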

Solution Explanation

Runtime Complexity

get (hash set lookup): O(1) on average, O(n) in the worst case

set (hash set insertion): constant, O(1)

deletion at the head when adding a new element (linked list): constant, O(1)

search for the node to move to the tail (linked list): O(n)

Overall complexity: O(n), dominated by the linear scan of the linked list. Storing a pointer to each element's node in the hash map removes this scan and makes both operations O(1) on average, as shown in the sketch at the end of this lesson.

Memory Complexity

Linear, O(n), where n is the capacity of the cache.


Solution Breakdown

Caching is a technique to store data in faster storage (usually RAM) to serve future requests faster. Below are some common examples where a cache is used:

  • A processor cache is used to read data faster from main memory (RAM).
  • A cache in RAM can store part of the data on disk and serve future requests faster.
  • Network responses can be cached in RAM to avoid too many network calls.

However, the cache store is usually not big enough to hold the full data set, so we need to evict data from the cache whenever it becomes full. There are a number of caching algorithms to implement an eviction policy; LRU is a simple and commonly used one. The core concept of the LRU algorithm is to evict the least recently used data from the cache to make room for new data.

To implement an LRU cache we use two data structures: a hashmap and a doubly linked list. The doubly linked list maintains the eviction order, and the hashmap provides O(1) average-case lookup of cached keys. The algorithm for the LRU cache is as follows:

If the element exists in the hashmap:
	move the accessed element to the tail of the linked list
Otherwise:
	if eviction is needed (i.e., the cache is already full):
		remove the head element from the doubly linked list and delete its hashmap entry
	add the new element at the tail of the linked list and to the hashmap
Get the element from the cache and return it

Note that the doubly linked list keeps track of the most recently accessed elements: the element at the tail of the list is the most recently accessed one. All newly inserted elements (in set) go to the tail of the list, and any element that is accessed (in get) is moved to the tail as well.
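
The solution above stores only values in a hash set and has to scan the linked list to locate a node, which is why its operations are O(n). For comparison, here is a minimal sketch of the O(1) hashmap-plus-doubly-linked-list design described in the pseudocode, using std::list as the doubly linked list and an unordered_map from each key to its node's iterator. The class and member names here are illustrative, not part of the course's helper library:

#include <iostream>
#include <iterator>
#include <list>
#include <unordered_map>

// Sketch of an O(1) LRU cache: the map stores an iterator to each key's
// node in the list, so lookups, moves, and evictions avoid linear scans.
// Assumes capacity >= 1.
class LRUCacheO1 {
    std::list<std::pair<int, int>> order; // <key, value>; tail = most recently used
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index;
    size_t capacity;

public:
    explicit LRUCacheO1(size_t cap) : capacity(cap) {}

    int get(int key) {
        auto it = index.find(key);
        if (it == index.end()) {
            return -1; // not cached
        }
        // splice relinks the node at the tail without invalidating iterators
        order.splice(order.end(), order, it->second);
        return it->second->second;
    }

    void set(int key, int value) {
        auto it = index.find(key);
        if (it != index.end()) {
            it->second->second = value; // update and mark as most recently used
            order.splice(order.end(), order, it->second);
            return;
        }
        if (index.size() >= capacity) {
            // evict the least recently used element (head of the list)
            index.erase(order.front().first);
            order.pop_front();
        }
        order.emplace_back(key, value);
        index[key] = std::prev(order.end());
    }
};

int main() {
    LRUCacheO1 cache(2);
    cache.set(1, 10);
    cache.set(2, 20);
    cache.get(1);                           // 1 becomes most recently used
    cache.set(3, 30);                       // evicts 2, the least recently used
    std::cout << cache.get(2) << std::endl; // prints -1
    return 0;
}

Because std::list::splice only relinks nodes, the iterators stored in the map remain valid when an element is moved to the tail, which is what keeps both get and set at O(1) on average.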

Practice problems like this and many more by checking out our Grokking the Coding Interview: Patterns for Coding Questions course!