LRU Cache (LeetCode #146) is one of the most frequently asked interview questions at Google, Meta, Amazon, and Microsoft. It combines data structure design with practical caching concepts.
This guide covers everything you need to know to solve LRU Cache confidently in interviews.
TL;DR: Quick Solution
Data Structures: Hash Map + Doubly Linked List
Time Complexity: O(1) for both get and put
Key Insight: The hash map gives O(1) lookup. The doubly linked list gives O(1) insertion/deletion at both ends.
The Problem
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
- `LRUCache(int capacity)` — Initialize with positive capacity
- `int get(int key)` — Return value if exists, else -1
- `void put(int key, int value)` — Update or insert. Evict the LRU entry if at capacity.
Constraint: Both operations must run in O(1) average time.
Why This Problem is Tricky
The challenge is achieving O(1) for both operations:
| Operation | Challenge |
|---|---|
| `get` | Need O(1) key lookup |
| `put` (insert) | Need O(1) insert at front |
| `put` (evict) | Need O(1) removal of LRU item |
| Update "recently used" | Need O(1) move to front |
A single data structure won't work:
- Hash Map: O(1) lookup, but no ordering
- Linked List: O(1) insert/delete, but O(n) lookup
- Array: O(n) for removal
Solution: Combine both!
The Key Insight
Use a Hash Map + Doubly Linked List together:
| Structure | Purpose |
|---|---|
| Hash Map | Key → Node (O(1) lookup) |
| Doubly Linked List | Track recency order (O(1) move/remove) |
How It Works:
- Most recently used → Front of list
- Least recently used → Back of list
- Hash Map → Direct access to any node
Step-by-Step Solution
Step 1: Define the Node
```python
class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None
```

Each node stores:

- `key` — needed for eviction (to remove from the hash map)
- `val` — the actual value
- `prev`, `next` — doubly linked list pointers
Step 2: Initialize the Cache
```python
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node
        # Dummy head and tail (simplifies edge cases)
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head
```

Why dummy nodes?
- No null checks needed
- Insert/remove code is cleaner
- `head.next` = most recent
- `tail.prev` = least recent
Step 3: Helper Methods
```python
def _remove(self, node):
    """Remove node from linked list"""
    prev, nxt = node.prev, node.next
    prev.next = nxt
    nxt.prev = prev

def _insert(self, node):
    """Insert node at front (most recent)"""
    prev, nxt = self.head, self.head.next
    prev.next = node
    nxt.prev = node
    node.prev = prev
    node.next = nxt
```

Step 4: Implement get()
```python
def get(self, key):
    if key in self.cache:
        node = self.cache[key]
        # Move to front (mark as recently used)
        self._remove(node)
        self._insert(node)
        return node.val
    return -1
```

Logic:
- If key exists: move to front, return value
- If key doesn't exist: return -1
Step 5: Implement put()
```python
def put(self, key, value):
    if key in self.cache:
        # Remove old node
        self._remove(self.cache[key])
    # Create and insert new node
    node = Node(key, value)
    self.cache[key] = node
    self._insert(node)
    # Evict if over capacity
    if len(self.cache) > self.capacity:
        # Remove LRU (node before tail)
        lru = self.tail.prev
        self._remove(lru)
        del self.cache[lru.key]
```

Logic:
- If key exists: remove old node
- Create new node, add to cache and list
- If over capacity: evict LRU (tail.prev)
Complete Solution
```python
class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node
        # Dummy head and tail
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    def _insert(self, node):
        prev, nxt = self.head, self.head.next
        prev.next = node
        nxt.prev = node
        node.prev = prev
        node.next = nxt

    def get(self, key):
        if key in self.cache:
            node = self.cache[key]
            self._remove(node)
            self._insert(node)
            return node.val
        return -1

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        node = Node(key, value)
        self.cache[key] = node
        self._insert(node)
        if len(self.cache) > self.capacity:
            lru = self.tail.prev
            self._remove(lru)
            del self.cache[lru.key]
```

Complexity Analysis
| Operation | Time | Space |
|---|---|---|
| `get` | O(1) | — |
| `put` | O(1) | — |
| Overall | O(1) | O(capacity) |
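As a quick sanity check before the walkthrough, the complete solution can be exercised against the example sequence from the LeetCode problem statement (the class is repeated here so the snippet runs standalone):

```python
class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node
        self.head = Node(0, 0)  # dummy head
        self.tail = Node(0, 0)  # dummy tail
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    def _insert(self, node):
        prev, nxt = self.head, self.head.next
        prev.next = node
        nxt.prev = node
        node.prev = prev
        node.next = nxt

    def get(self, key):
        if key in self.cache:
            node = self.cache[key]
            self._remove(node)
            self._insert(node)
            return node.val
        return -1

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        node = Node(key, value)
        self.cache[key] = node
        self._insert(node)
        if len(self.cache) > self.capacity:
            lru = self.tail.prev
            self._remove(lru)
            del self.cache[lru.key]


# Example sequence from the problem statement
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # -> 1
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -> -1
cache.put(4, 4)      # evicts key 1
print(cache.get(1))  # -> -1
print(cache.get(3))  # -> 3
print(cache.get(4))  # -> 4
```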
Visual Walkthrough
Initial State (capacity = 2):

```
head <-> tail
cache = {}
```

After put(1, 1):

```
head <-> [1:1] <-> tail
cache = {1: Node(1,1)}
```

After put(2, 2):

```
head <-> [2:2] <-> [1:1] <-> tail
cache = {1: Node(1,1), 2: Node(2,2)}
```

After get(1):

```
head <-> [1:1] <-> [2:2] <-> tail   (1 moved to front)
```

After put(3, 3) — evicts 2:

```
head <-> [3:3] <-> [1:1] <-> tail
cache = {1: Node(1,1), 3: Node(3,3)}   (2 evicted)
```

Common Mistakes
| Mistake | Fix |
|---|---|
| Forgetting to store key in node | Node needs key for eviction |
| Using singly linked list | Need doubly linked for O(1) removal |
| Not updating cache on put | Update cache even if key exists |
| Off-by-one in capacity check | Check after insert, before evict |
Python Shortcut: OrderedDict
Python's OrderedDict combines hash map + ordering:
```python
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)
            return self.cache[key]
        return -1

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
```

Note: Know the manual solution first—interviewers may ask you to implement without built-ins.
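As a related aside (not a substitute for the interview solution): Python's standard library also ships `functools.lru_cache`, which applies the same eviction policy to function memoization rather than to a key-value store:

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # cache at most 2 distinct argument tuples
def square(x):
    return x * x

square(1), square(2)  # cache now holds results for 1 and 2
square(3)             # evicts the least recently used entry (for 1)
print(square.cache_info().currsize)  # -> 2
```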
Interview Tips
When Explaining:
- Start with the O(1) requirement
- Explain why you need two data structures
- Draw the linked list with dummy nodes
- Walk through an example
Follow-Up Questions:
- "How would you handle concurrency?" → Use locks or concurrent data structures
- "What if capacity is 0?" → Handle as an edge case (e.g., validate in the constructor, or make every put a no-op)
- "Can you implement LFU instead?" → LeetCode 460
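For the concurrency follow-up, one minimal sketch (the `ThreadSafeLRUCache` name and the `OrderedDict` backing are illustrative choices, not part of the problem): guard every operation with a single `threading.Lock`. Note that even `get` mutates the recency order, so it needs the lock too, and a read-write lock would not help.

```python
import threading
from collections import OrderedDict


class ThreadSafeLRUCache:
    """Illustrative sketch: one lock serializes get and put."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key not in self.cache:
                return -1
            self.cache.move_to_end(key)  # get also updates recency
            return self.cache[key]

    def put(self, key, value):
        with self.lock:
            if key in self.cache:
                self.cache.move_to_end(key)
            self.cache[key] = value
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)
```

A single coarse lock is the simplest correct answer; sharding the cache across several locked segments is a common refinement when contention matters.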
Why This Problem is Asked
LRU Cache tests:
- Data structure design — combining structures
- Time complexity awareness — achieving O(1)
- Edge case handling — eviction, updates
- Practical knowledge — caching is everywhere
Real-world uses:
- Database query caches
- Web browser caches
- Operating system page replacement
- CDN caching
Related Problems
| Problem | Difficulty | Concept |
|---|---|---|
| LFU Cache (#460) | Hard | Frequency-based eviction |
| Design HashMap (#706) | Easy | Hash table basics |
| Design LinkedList (#707) | Medium | Linked list implementation |
| All O'one Data Structure (#432) | Hard | Max/min tracking |
Practice Strategy
- Understand the concept — Draw the linked list
- Code from scratch — Don't memorize, understand
- Test with examples — Walk through operations
- Use LeetCopilot — Get hints if stuck
FAQ
Why doubly linked list, not singly?
A singly linked list needs O(n) to find the previous node before a removal. A doubly linked list stores a prev pointer, making removal O(1).
Why store key in the node?
When evicting, you need to remove from hash map too. The key is needed for that deletion.
Can I use a deque?
A deque only supports O(1) operations at its ends; removing an arbitrary element is O(n), but this problem needs O(1) removal whenever an existing key is accessed.
Conclusion
LRU Cache is a classic interview problem that tests your ability to combine data structures for optimal performance.
Key takeaways:
- Hash Map → O(1) key lookup
- Doubly Linked List → O(1) ordering operations
- Dummy nodes → Simplify edge cases
- Store key in node → Needed for eviction
Practice until you can implement it from scratch in under 15 minutes.
Good luck!
Want to Practice LeetCode Smarter?
LeetCopilot is a free browser extension that enhances your LeetCode practice with AI-powered hints, personalized study notes, and realistic mock interviews — all designed to accelerate your coding interview preparation.
It's also compatible with Edge, Brave, and Opera.
