LRU Cache

Design a data structure that works like an LRU (Least Recently Used) Cache. Here cap denotes the capacity of the cache and Q denotes the number of queries. Each query can be of two types:

  1. PUT x y: sets the value of key x to y.
  2. GET x: returns the value of key x if it is present, else returns -1.

The LRUCache class has two methods, get() and put(), which are defined as follows:

  1. get(key): returns the value of the key if it is already present in the cache, otherwise returns -1.
  2. put(key, value): if the key is already present, update its value. If it is not present, add the key-value pair to the cache. If the cache has reached its capacity, remove the least recently used item before inserting the new one.
  3. The constructor of the class initializes the cache with the given capacity.

Examples:

Input: cap = 2, Q = 2, Queries = [["PUT", 1, 2], ["GET", 1]]
Output: [2]
Explanation: Cache Size = 2
["PUT", 1, 2] will insert the key-value pair (1, 2) in the cache,
["GET", 1] will print the value corresponding to Key 1, ie 2.
Input: cap = 2, Q = 8, Queries = [["PUT", 1, 2], ["PUT", 2, 3], ["PUT", 1, 5], ["PUT", 4, 5], ["PUT", 6, 7], ["GET", 4], ["PUT", 1, 2], ["GET", 3]]
Output: [5, -1] Explanation: Cache Size = 2 ["PUT", 1, 2] will insert the pair (1,2) in the cache. ["PUT", 2, 3] will insert the pair (2,3) in the cache: 1->2, 2->3(the most recently used one is kept at the rightmost position)  ["PUT", 1, 5] will replace the value of 1 from 2 to 5 : 2 -> 3, 1 -> 5 ["PUT", 4, 5] : 1 -> 5, 4 -> 5 (Cache size is 2, hence we delete the least recently used key-value pair) ["PUT", 6, 7] : 4 -> 5, 6 -> 7  ["GET", 4] : Prints 5 (The cache now looks like 6 -> 7, 4->5) ["PUT", 1, 2] : 4 -> 5, 1 -> 2 (Cache size is 2, hence we delete the least recently used key-value pair) ["GET", 3] : No key value pair having key = 3. Hence, -1 is printed.

Constraints:
1 <= cap <= 10^3
1 <= Q <= 10^5
1 <= x, y <= 10^4


Approach 01:

#include <unordered_map>
#include <list>
#include <utility>
using namespace std;

class LRUCache {
 public:
  LRUCache(int cacheCapacity) : cacheCapacity(cacheCapacity) {}

  // Function to return the value corresponding to the key
  int get(int key) {
    if (lruCacheMap.find(key) == lruCacheMap.end())
      return -1;

    // Move the accessed key to the front of the list
    usageOrder.splice(usageOrder.begin(), usageOrder, lruCacheMap[key]);
    return lruCacheMap[key]->second;
  }

  // Function for storing key-value pair
  void put(int key, int value) {
    if (lruCacheMap.find(key) != lruCacheMap.end()) {
      // Update value and move key to the front
      lruCacheMap[key]->second = value;
      usageOrder.splice(usageOrder.begin(), usageOrder, lruCacheMap[key]);
    } else {
      // If cache is full, remove the least recently used (last element)
      if (static_cast<int>(usageOrder.size()) >= cacheCapacity) {
        int leastUsedKey = usageOrder.back().first;
        usageOrder.pop_back();
        lruCacheMap.erase(leastUsedKey);
      }
      // Insert new key-value pair at the front
      usageOrder.emplace_front(key, value);
      lruCacheMap[key] = usageOrder.begin();
    }
  }

 private:
  int cacheCapacity;
  list<pair<int, int>> usageOrder;                 // List to track usage order
  unordered_map<int, list<pair<int, int>>::iterator> lruCacheMap; // Key-to-iterator mapping
};
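
The class above can be exercised with a small driver. The sketch below is an assumption about how the queries reach the class (the Query struct, the hard-coded query list from the second example, and main() are not part of the original solution); it simply replays Example 2 and prints the results of the GET queries.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical driver: replays the queries from Example 2 against the
// LRUCache class defined above (assumed to be in the same file) and prints
// the result of each GET query.
struct Query {
  std::string op;
  int x;
  int y;  // ignored for GET queries
};

int main() {
  LRUCache cache(2);
  std::vector<Query> queries = {{"PUT", 1, 2}, {"PUT", 2, 3}, {"PUT", 1, 5},
                                {"PUT", 4, 5}, {"PUT", 6, 7}, {"GET", 4, 0},
                                {"PUT", 1, 2}, {"GET", 3, 0}};

  for (const Query& q : queries) {
    if (q.op == "PUT")
      cache.put(q.x, q.y);
    else
      std::cout << cache.get(q.x) << ' ';  // prints: 5 -1
  }
  std::cout << '\n';
}

The Python version below implements the same approach with collections.OrderedDict, which keeps its entries in usage order and supports move_to_end and popitem in constant time.
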
from collections import OrderedDict

class LRUCache:
    def __init__(self, cacheCapacity: int):
        self.cacheCapacity = cacheCapacity
        self.lruCache = OrderedDict()

    # Function to return the value corresponding to the key
    def get(self, key: int) -> int:
        if key not in self.lruCache:
            return -1
        # Move the accessed key to the end to show it was recently used
        self.lruCache.move_to_end(key)
        return self.lruCache[key]

    # Function for storing key-value pair
    def put(self, key: int, value: int) -> None:
        if key in self.lruCache:
            # Remove the old value
            del self.lruCache[key]
        elif len(self.lruCache) >= self.cacheCapacity:
            # Remove the least recently used item
            self.lruCache.popitem(last=False)
        # Insert the new key-value pair
        self.lruCache[key] = value


Time Complexity:

  • For get method:

    Looking up a key in the unordered_map takes \( O(1) \) on average. Moving the accessed element to the front of the list using splice also takes \( O(1) \), since the iterator to the element is directly available from the map. Thus, the time complexity for the get method is \( O(1) \); the splice sketch after this list shows the constant-time relink in isolation.

  • For put method:

    Inserting a new key-value pair starts by checking whether the key exists in the unordered_map, which takes \( O(1) \) on average. If the cache is full, removing the least recently used entry from the back of the list and erasing it from the unordered_map also takes \( O(1) \). Adding the new key-value pair to the front of the list and updating the unordered_map likewise takes \( O(1) \). Thus, the time complexity for the put method is \( O(1) \).

  • Overall Time Complexity:

    The overall time complexity is \( O(1) \) per operation, on average, for both the get and put methods; a rough timing sanity check is sketched at the end of this post.
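
The constant-time claim for the list operations hinges on std::list::splice relinking nodes rather than copying elements. The snippet below is a standalone illustration (not part of the solution): it moves the last node of a list to the front via an iterator, and the iterator stays valid because only internal pointers are rewired. The same reasoning covers pop_back and emplace_front in the put path.

#include <iostream>
#include <iterator>
#include <list>
#include <utility>

int main() {
  std::list<std::pair<int, int>> usage = {{1, 10}, {2, 20}, {3, 30}};

  // Iterator to the node holding key 3 (currently the last element).
  auto it = std::prev(usage.end());

  // A single-element splice is O(1): the node is relinked to the front of
  // the list; nothing is copied and 'it' is not invalidated.
  usage.splice(usage.begin(), usage, it);

  std::cout << usage.front().first << '\n';  // prints 3
  std::cout << it->first << '\n';            // still points to key 3
}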

Space Complexity:

  • List and unordered_map:

    The list and unordered_map store up to \( capacity \) elements. Each element in the list is a pair of integers, and the unordered_map stores key-to-iterator mappings. Thus, the space complexity is \( O(capacity) \).

  • Overall Space Complexity:

    The overall space complexity is \( O(capacity) \).
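
A rough way to sanity-check the constant-per-operation claim is to time batches of random operations of increasing size: if each get/put is \( O(1) \) on average, doubling the number of operations should roughly double the total runtime. The sketch below is illustrative only (the batch sizes, key range, and 50/50 put/get mix are arbitrary choices), assumes the LRUCache class from Approach 01 is pasted into the same file, and is a separate program from the driver sketched earlier, since both define main().

#include <chrono>
#include <iostream>
#include <random>

int main() {
  std::mt19937 rng(42);
  std::uniform_int_distribution<int> keyDist(1, 10000);

  // Time batches of 100k, 200k, and 400k mixed get/put operations.
  for (int ops : {100000, 200000, 400000}) {
    LRUCache cache(1000);
    auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < ops; ++i) {
      int key = keyDist(rng);
      if (i % 2 == 0)
        cache.put(key, i);
      else
        cache.get(key);
    }

    auto elapsed = std::chrono::steady_clock::now() - start;
    std::cout << ops << " ops: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
              << " ms\n";
  }
}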
