
In the world of software architecture, caching is one of the most critical tools for optimizing performance, reducing latency, and improving scalability. While the Least Recently Used (LRU) caching strategy often takes center stage, the Most Recently Used (MRU) Cache is an equally important player that shines in specific scenarios. Let's dive deep into the MRU Cache: its eviction policy, use cases, advantages, and disadvantages, along with an illustrative Java implementation with detailed examples and visuals.

What is an MRU Cache?

The Most Recently Used (MRU) Cache is a caching strategy where the most recently accessed item is evicted first when the cache runs out of space. It operates on the assumption that the most recently accessed data is less likely to be reused compared to older data.

Eviction Policy

  • The MRU Cache removes the most recently used item first.
  • This is the opposite of LRU, which removes the least recently used item first.
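To make the contrast concrete, here is a minimal sketch (with illustrative keys, not a full cache) showing which item each policy would evict from the same recency order. The head of the deque stands for the most recently used item:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy contrast: keys 1, 2, 3 inserted in order, so 3 is the most
// recently used and 1 the least recently used.
public class EvictionContrast {
    static Deque<Integer> recency = new ArrayDeque<>(); // head = most recent

    static int mruVictim() { return recency.peekFirst(); } // MRU evicts the head
    static int lruVictim() { return recency.peekLast(); }  // LRU evicts the tail

    public static void main(String[] args) {
        for (int key : new int[]{1, 2, 3}) recency.addFirst(key);
        System.out.println("MRU would evict " + mruVictim()); // 3
        System.out.println("LRU would evict " + lruVictim()); // 1
    }
}
```

Same data, opposite victims: MRU discards the newest entry, LRU the oldest.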

Real-World Analogy

Think of your fridge:

  • You just cooked a fresh meal and placed it in the fridge.
  • When the fridge gets full, instead of tossing out the old takeout that's been sitting there, you decide to remove the most recently added leftovers.
  • That's how the MRU Cache works: it forgets the newest thing you added to make room for something else.

When to Use MRU Cache?

MRU caching is not a one-size-fits-all solution. It's effective in specific scenarios where the most recently accessed data is less likely to be reused anytime soon.

Use Cases

  1. Database Query Caching:
  • Older queries are more likely to be reused than new ones.
  • Example: analytical systems where queries are repeatedly run over historical data.

  2. Video/Media Streaming:
  • Users may revisit older streams more often than the most recently accessed one.

  3. Applications Without Temporal Locality:
  • If your workload doesn't exhibit temporal locality (i.e., recently used data is unlikely to be reused soon), MRU can outperform LRU.

How Does an MRU Cache Work?

The MRU Cache is implemented using:

  1. Hash Map:
  • Provides O(1) lookups for cache items.
  • Maps each key to the corresponding node in the doubly linked list.

  2. Doubly Linked List:
  • Maintains the order of items based on recency.
  • The head of the list represents the most recently used item.
  • The tail of the list represents the least recently used item.

Representation

Imagine a group of friends on the dance floor:

  • The last person to dance (most recently used) is the first to leave the dance floor when space is needed.

Here's what a sample cache state might look like:

Cache Capacity: 3

MRU Cache (Head -> Tail):
[3 -> 2 -> 1]

- 3 is the most recently used (head).
- 1 is the least recently used (tail).

Eviction Example:

If another item (4) is added and the cache is full:

New Cache State (Head -> Tail):
[4 -> 2 -> 1]

- 3 (the most recently used) is evicted to make space for 4.
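The state transition above can be reproduced with a stripped-down sketch. This toy version tracks only keys in a deque (head first, mirroring the diagrams); it is not the full cache, just the eviction order:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Mirrors the diagrams above: capacity 3, head-first order.
public class MruTrace {
    static final int CAPACITY = 3;
    static Deque<Integer> cache = new ArrayDeque<>(); // addFirst = head

    static void put(int key) {
        if (cache.size() == CAPACITY) {
            cache.removeFirst(); // evict the head (most recently used)
        }
        cache.addFirst(key);
    }

    public static void main(String[] args) {
        put(1); put(2); put(3);
        System.out.println(cache); // [3, 2, 1]
        put(4);                    // cache is full: 3 is evicted
        System.out.println(cache); // [4, 2, 1]
    }
}
```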

MRU Cache Implementation in Java

Here's a step-by-step implementation of the MRU Cache in Java using a HashMap and a Doubly Linked List.

Code Implementation

import java.util.HashMap;

public class MRUCache<K, V> {
    private final int capacity;
    private final HashMap<K, Node<K, V>> map;
    private final DoublyLinkedList<K, V> list;

    // Constructor
    public MRUCache(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();
        this.list = new DoublyLinkedList<>();
    }

    // Get operation
    public V get(K key) {
        if (!map.containsKey(key)) {
            return null; // Cache miss
        }
        Node<K, V> node = map.get(key);
        list.moveToHead(node); // Update recency
        return node.value;
    }

    // Put operation
    public void put(K key, V value) {
        if (map.containsKey(key)) {
            // Update existing node
            Node<K, V> node = map.get(key);
            node.value = value;
            list.moveToHead(node); // Update recency
        } else {
            // Add new node
            if (map.size() == capacity) {
                // Evict the most recently used item (head of the list)
                Node<K, V> evicted = list.removeHead();
                map.remove(evicted.key);
            }
            Node<K, V> newNode = new Node<>(key, value);
            list.addToHead(newNode);
            map.put(key, newNode);
        }
    }

    // Doubly Linked List Node
    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }

    // Doubly Linked List
    private static class DoublyLinkedList<K, V> {
        private Node<K, V> head, tail;

        DoublyLinkedList() {
            head = null;
            tail = null;
        }

        // Add node to the head
        void addToHead(Node<K, V> node) {
            if (head == null) {
                head = tail = node;
            } else {
                node.next = head;
                head.prev = node;
                head = node;
            }
        }

        // Remove a specific node, clearing its links
        void removeNode(Node<K, V> node) {
            if (node == head && node == tail) {
                head = tail = null; // list becomes empty
            } else if (node == head) {
                head = head.next;
                head.prev = null;
            } else if (node == tail) {
                tail = tail.prev;
                tail.next = null;
            } else {
                node.prev.next = node.next;
                node.next.prev = node.prev;
            }
            node.prev = node.next = null; // avoid stale references
        }

        // Move a node to the head
        void moveToHead(Node<K, V> node) {
            removeNode(node);
            addToHead(node);
        }

        // Remove the head node (MRU eviction)
        Node<K, V> removeHead() {
            if (head == null) return null;
            Node<K, V> oldHead = head;
            removeNode(head);
            return oldHead;
        }
    }

    // Main method for testing
    public static void main(String[] args) {
        MRUCache<Integer, String> cache = new MRUCache<>(3);

        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");

        System.out.println(cache.get(1)); // Output: A
        cache.put(4, "D"); // Evicts 1 (most recently used)
        System.out.println(cache.get(1)); // Output: null
        System.out.println(cache.get(2)); // Output: B
    }
}

Step-by-Step Walkthrough

  1. Initialization:
  • A fixed capacity is defined.
  • The HashMap stores key-to-node mappings for O(1) lookups.
  • The DoublyLinkedList maintains the recency order.

  2. Get Operation:
  • Retrieves the value for the given key.
  • Moves the accessed node to the head of the list, marking it as the most recently used.

  3. Put Operation:
  • If the key exists, update its value and move it to the head.
  • If the key doesn't exist:
  • Evict the most recently used item (the head) if the cache is full.
  • Add the new key-value pair to the head of the list.
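As an aside, the same behavior can be sketched more compactly on top of java.util.LinkedHashMap in access-order mode, where the most recently used entry sits last in iteration order. Note the tradeoff: eviction here scans for the last key, so it is O(n) rather than the O(1) of the hand-rolled linked list above. A minimal sketch, with illustrative keys and capacity:

```java
import java.util.LinkedHashMap;

// Compact MRU sketch: accessOrder=true keeps the most recently used
// entry LAST in iteration order, so we evict the last key when full.
public class CompactMru<K, V> {
    private final int capacity;
    private final LinkedHashMap<K, V> map =
            new LinkedHashMap<>(16, 0.75f, true); // access-order mode

    public CompactMru(int capacity) { this.capacity = capacity; }

    public V get(K key) { return map.get(key); } // get() updates recency

    public void put(K key, V value) {
        if (!map.containsKey(key) && map.size() == capacity) {
            K mruKey = null;
            for (K k : map.keySet()) mruKey = k; // last key = most recently used
            map.remove(mruKey);
        }
        map.put(key, value); // put() also updates recency
    }

    public static void main(String[] args) {
        CompactMru<Integer, String> cache = new CompactMru<>(3);
        cache.put(1, "A"); cache.put(2, "B"); cache.put(3, "C");
        cache.get(1);        // 1 becomes most recently used
        cache.put(4, "D");   // evicts 1
        System.out.println(cache.get(1)); // null
        System.out.println(cache.get(2)); // B
    }
}
```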

Advantages and Disadvantages

Advantages

  • Efficiency: Both get and put operations have O(1) time complexity.
  • Specific Use Cases: Shines in scenarios where the most recently accessed data is less likely to be reused.

Disadvantages

  • Not Suitable for Temporal Locality: If your workload exhibits temporal locality (recently accessed data is often reused), MRU may lead to poor performance.
  • Less Common: MRU is less widely applicable compared to LRU.

Conclusion

The MRU Cache is a powerful yet niche caching strategy that excels in specific scenarios such as database query caching and systems with low temporal locality. By leveraging a HashMap and a Doubly Linked List, we can efficiently implement an MRU Cache with O(1) time complexity for both get and put operations. So, the next time you need to design a caching system with unique access patterns, consider giving the MRU Cache a spot in your architecture!

Thank You for Reading!

✅ "Follow me on Medium to never miss an update!"

If you found this content helpful, feel free to show your support with 👏 claps! 😊 Your encouragement keeps me motivated to write more articles.