An LRU cache is a container that never exceeds its maximum capacity: when it is full, it uses a Least Recently Used strategy to decide which element to discard. The LRU algorithm keeps track of the order in which the cache elements are accessed, so it always knows which one to evict when the container is full. We present here a simple implementation using a linked hash map, which ensures O(1) time for both access and insertion. The space complexity is O(1) as well, since the number of elements can't exceed the cache's capacity.

Implementation

The cache is implemented in C# here, but the code could easily be translated to Java. It combines a hash table (the .NET Dictionary<> class) with a doubly linked list, which is implemented by the nested Node class.

using System;
using System.Collections.Generic;

public class LeastRecentlyUsedCache<TKey, TValue>
{
    private readonly Dictionary<TKey, Node> entries;
    private readonly int capacity;
    private Node head;
    private Node tail;        

    private class Node
    {
        public Node Next { get; set; }
        public Node Previous { get; set; }
        public TKey Key { get; set; }
        public TValue Value { get; set; }
    }

    public LeastRecentlyUsedCache(int capacity = 16)
    {
        if (capacity <= 0) 
            throw new ArgumentOutOfRangeException(
                "capacity", 
                "Capacity should be greater than zero");
        this.capacity = capacity;
        entries = new Dictionary<TKey, Node>();
        head = null;
    }

    public void Set(TKey key, TValue value)
    {
        Node entry;
        if (!entries.TryGetValue(key, out entry))
        {
            entry = new Node { Key = key, Value = value };
            if (entries.Count == capacity)
            {
                // The cache is full: evict the least recently used entry, i.e. the list's tail.
                entries.Remove(tail.Key);
                tail = tail.Previous;
                if (tail != null) tail.Next = null;
            }
            entries.Add(key, entry);
        }

        // Refresh the value and mark the entry as the most recently used one.
        entry.Value = value;
        MoveToHead(entry);
        if (tail == null) tail = head; // the very first entry is both head and tail
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        value = default(TValue);
        Node entry;
        if (!entries.TryGetValue(key, out entry)) return false;
        MoveToHead(entry);
        value = entry.Value;
        return true;
    }

    private void MoveToHead(Node entry)
    {
        if (entry == head || entry == null) return;

        var next = entry.Next;
        var previous = entry.Previous;

        // Detach the node from its current position in the list.
        if (next != null) next.Previous = entry.Previous;
        if (previous != null) previous.Next = entry.Next;

        // Re-insert it at the head of the list.
        entry.Previous = null;
        entry.Next = head;

        if (head != null) head.Previous = entry;
        head = entry;

        if (tail == entry) tail = previous;
    }
}
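
To make the behavior concrete, here is a quick, hypothetical usage example (the keys, values and capacity are arbitrary):

var cache = new LeastRecentlyUsedCache<string, int>(capacity: 2);

cache.Set("a", 1);
cache.Set("b", 2);
cache.Set("c", 3); // the capacity is reached: "a", the least recently used key, is evicted

int value;
Console.WriteLine(cache.TryGetValue("a", out value)); // False
Console.WriteLine(cache.TryGetValue("c", out value)); // True, value is 3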

Quick complexity analysis

Getting an element: When an element is requested:

  1. it is fetched from the hash map, and
  2. its corresponding linked list node is moved to the list's head.

Both operations are O(1), which ensures that fetching an element from our cache is O(1) too.

Adding an element: When an element is added to the cache:

  1. the hash table is checked for this element,
  2. if the element is missing and the capacity is reached, the list's tail is evicted,
  3. if the element is not in the hash map, it is added,
  4. the existing or newly added node is moved to the list's head.

All the operations needed to add an element to the cache are O(1). The snippet below illustrates how this access order drives eviction.
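
For example, reading an entry through TryGetValue refreshes its recency and therefore changes which key the next insertion evicts (again a hypothetical scenario):

var cache = new LeastRecentlyUsedCache<string, int>(capacity: 2);
int value;

cache.Set("a", 1);
cache.Set("b", 2);
cache.TryGetValue("a", out value); // "a" becomes the most recently used entry
cache.Set("c", 3);                 // "b", now the least recently used, is evicted

Console.WriteLine(cache.TryGetValue("a", out value)); // True
Console.WriteLine(cache.TryGetValue("b", out value)); // False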

Conclusion

The code snippet is hosted on GitHub. This implementation can be improved in many ways: make the cache implement the IDictionary<> interface, create a thread-safe wrapper (a minimal sketch follows), etc.
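
As an illustration of the thread-safe wrapper idea, here is a minimal, hypothetical sketch that simply serializes every operation with a single lock. Note that even reads mutate the list (through MoveToHead), so they need to be protected as well:

public class ConcurrentLeastRecentlyUsedCache<TKey, TValue>
{
    private readonly LeastRecentlyUsedCache<TKey, TValue> cache;
    private readonly object syncRoot = new object();

    public ConcurrentLeastRecentlyUsedCache(int capacity = 16)
    {
        cache = new LeastRecentlyUsedCache<TKey, TValue>(capacity);
    }

    public void Set(TKey key, TValue value)
    {
        lock (syncRoot) cache.Set(key, value);
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        lock (syncRoot) return cache.TryGetValue(key, out value);
    }
}

A coarse-grained lock like this is the simplest option, but it can become a contention point; finer-grained designs are possible at the cost of extra complexity.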

Questions or remarks? Please post a comment below; we would love to hear from you :-)