
Hash Map Time Complexity: O(1) Average, O(n) Worst Case Explained

LeetCopilot Team
Dec 14, 2025
6 min read
Hash Map · Time Complexity · Big O · Interview Prep
Confused about why hash maps are O(1) if collisions exist? Learn the truth about average vs worst case complexity and how to explain it in interviews.

"Hash maps are O(1)." You've heard this a thousand times.

But then someone asks: "What about collisions? Isn't worst case O(n)?"

You freeze. You know hash maps are fast, but you can't explain why or when they degrade.

This guide gives you the complete explanation you need for interviews.

The O(1) Average Case

What it means

Average case O(1): With a good hash function and reasonable load factor, hash map operations (insert, lookup, delete) take constant time on average.

Why it works

A good hash function distributes keys uniformly across buckets:

text
Hash table with 8 buckets:
[0] → key1
[1] → key2
[2] → key3
[3] → key4
[4] → key5
[5] → key6
[6] → (empty)
[7] → (empty)

Each bucket has 0-1 elements → O(1) lookup

Load factor (α = n/m where n = elements, m = buckets):

  • When α < 0.75, most buckets have 0-1 elements
  • Lookup: Hash → bucket → check 1 element → O(1)

The math

With uniform hashing:

  • Expected elements per bucket = α
  • If α is constant (e.g., 0.75), lookup is O(1)

Example: 100 elements, 128 buckets (simulated in the short sketch after this list)

  • α = 100/128 ≈ 0.78
  • Average bucket has < 1 element
  • Lookup checks ~1 element → O(1)
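
To make this concrete, here is a minimal Python sketch (the key range and counts are arbitrary choices for illustration) that hashes 100 random keys into 128 buckets and reports how full the buckets get:

python
import random
from collections import Counter

NUM_BUCKETS = 128                     # m
NUM_KEYS = 100                        # n, so α = 100/128 ≈ 0.78

keys = random.sample(range(1_000_000), NUM_KEYS)
sizes = Counter(hash(k) % NUM_BUCKETS for k in keys)

print("average bucket size:", NUM_KEYS / NUM_BUCKETS)  # 0.78, i.e. the load factor
print("largest bucket:", max(sizes.values()))          # typically only 2-3, even in the fullest bucket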

The O(n) Worst Case

When it happens

Worst case O(n): All keys hash to the same bucket (collision chain).

text
Hash table with 8 buckets:
[0] → key1 → key2 → key3 → ... → keyn
[1] → (empty)
[2] → (empty)
...
[7] → (empty)

All n elements in one bucket → O(n) lookup

Why it's rare

Good hash functions make this extremely unlikely:

  • Python's dict randomizes its string hashing (SipHash keyed by a per-process seed)
  • Java's HashMap spreads hash bits, and since Java 8 it converts long collision chains into balanced trees (O(log n) per bucket)
  • With non-adversarial keys, the probability that everything lands in one bucket is negligible

Adversarial input: This only happens if an attacker knows your hash function and deliberately crafts colliding keys.
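
As a toy illustration of that adversarial case, consider a naive hash(key) % m table (not Python's real dict, which perturbs its probe sequence and randomizes string hashing). In CPython, hash(i) == i for small integers, so keys chosen as multiples of the table size all collide into one bucket:

python
NUM_BUCKETS = 8

# Multiples of the table size all map to bucket 0 in a naive hash(key) % m table.
evil_keys = [i * NUM_BUCKETS for i in range(10)]
print([hash(k) % NUM_BUCKETS for k in evil_keys])  # [0, 0, 0, ..., 0]: one long chain, O(n) lookups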

Real-world impact

In practice: You'll essentially never hit the O(n) worst case with:

  • Standard library hash maps
  • Random or typical input data
  • Reasonable load factors

In theory: The worst case exists, but quoting it as the everyday cost is a bit like saying "array lookup is O(n)" when you mean a linear scan rather than direct indexing.

How Hash Collisions Work

Collision resolution methods

1. Chaining (most common)

Each bucket is a linked list:

text
[0] → key1 → key4 → key7
[1] → key2
[2] → key3 → key5
[3] → key6

Lookup:

  1. Hash key → bucket index
  2. Traverse linked list in bucket
  3. Compare keys until match

Time: O(1 + α) where α = load factor
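
To make the lookup procedure concrete, here is a minimal separate-chaining sketch in Python, assuming a fixed bucket count and no resizing (the class and method names are illustrative, not a standard API):

python
class ChainedHashMap:
    """Minimal separate-chaining map: each bucket holds a list of (key, value) pairs."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # Hash the key, then map it to one of the buckets.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: append to this bucket's chain

    def get(self, key):
        for k, v in self._bucket(key):   # traverse the chain: expected length is α
            if k == key:
                return v
        raise KeyError(key)

Both put and get hash the key once and then walk a chain whose expected length is the load factor α, which is where the O(1 + α) bound comes from.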

2. Open addressing

On a collision, store the entry in the next empty slot:

text
[0] → key1
[1] → key2
[2] → key3 (collision with key1, placed here)
[3] → (empty)

Lookup:

  1. Hash key → index
  2. If the slot holds a different key, probe the next slot
  3. Continue until the key is found or an empty slot is reached

Time: O(1/(1-α)) with good probing
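
A comparable sketch of linear probing, again with a fixed table size and with deletion and resizing omitted for brevity:

python
class LinearProbingMap:
    """Minimal open-addressing map using linear probing (no deletion, no resizing)."""

    _EMPTY = object()  # sentinel marking an unused slot

    def __init__(self, num_slots=16):
        self.slots = [self._EMPTY] * num_slots

    def put(self, key, value):
        # Assumes the table never fills up completely.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not self._EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)      # collision: probe the next slot
        self.slots[i] = (key, value)

    def get(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not self._EMPTY:
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)      # keep probing until the key or an empty slot
        raise KeyError(key)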

Load Factor and Rehashing

What is load factor?

Load factor (α) = n/m

  • n = number of elements
  • m = number of buckets

Example: 75 elements, 100 buckets → α = 0.75

When to rehash

Most hash maps rehash when α > threshold (typically 0.75):

python
# Python dict rehashes when α > 2/3
# Java HashMap rehashes when α > 0.75

Rehashing process:

  1. Create new table (usually 2x size)
  2. Reinsert all elements
  3. Update bucket count

Cost: O(n) for rehashing, but amortized O(1) per insert
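
A sketch of that process, reusing the chained-bucket layout from the earlier sketch (the doubling policy is illustrative; real implementations choose their own growth factors):

python
def rehash(buckets):
    """Return a bucket array twice as large, with every entry reinserted."""
    new_buckets = [[] for _ in range(2 * len(buckets))]  # 1. new table, 2x the size
    for bucket in buckets:
        for key, value in bucket:                        # 2. reinsert every element
            new_buckets[hash(key) % len(new_buckets)].append((key, value))
    return new_buckets                                   # 3. caller swaps in the new table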

Amortized analysis

Amortized O(1): Even though rehashing costs O(n), it happens rarely enough that average cost per operation is O(1).

Example: Insert n elements (counted concretely in the sketch after this list)

  • Most inserts: O(1)
  • Rehash at sizes 8, 16, 32, 64, ..., n
  • Total rehash cost: 8 + 16 + 32 + ... + n ≈ 2n
  • Amortized: 2n / n = O(1) per insert
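
To check the arithmetic, here is a quick simulation that counts the element moves caused by rehashing while inserting n items (the initial capacity and 0.75 threshold are assumptions, not any specific library's values):

python
def total_rehash_moves(n, initial_capacity=8, max_load=0.75):
    """Count element moves caused by rehashing while inserting n items."""
    capacity, size, moves = initial_capacity, 0, 0
    for _ in range(n):
        size += 1
        if size / capacity > max_load:  # table too full: double it and reinsert everything
            moves += size
            capacity *= 2
    return moves

n = 1_000_000
print(total_rehash_moves(n) / n)  # ≈ 1.6 moves per insert on average, i.e. amortized O(1)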

How to Discuss This in Interviews

The standard answer

Interviewer: "What's the time complexity of hash map lookup?"

You: "O(1) average case, O(n) worst case."

The follow-up

Interviewer: "Why O(1) average?"

You: "With a good hash function, keys are distributed uniformly across buckets. The load factor is kept low (typically < 0.75), so each bucket has on average less than one element. This makes lookup constant time."

The deep dive

Interviewer: "When is it O(n)?"

You: "Worst case is O(n) if all keys collide and hash to the same bucket. However, this is extremely rare with modern hash functions. In practice, with standard library implementations and typical data, you'll see O(1) performance."

The practical answer

Interviewer: "So should I assume O(1) or O(n)?"

You: "For complexity analysis in interviews, assume O(1) average case unless the problem specifically involves adversarial input or you're implementing your own hash function. Standard library hash maps are highly optimized and maintain O(1) performance in practice."

Comparison with Other Data Structures

Operation    Hash Map    Array      BST        Sorted Array
Lookup       O(1) avg    O(n)       O(log n)   O(log n)
Insert       O(1) avg    O(1) end   O(log n)   O(n)
Delete       O(1) avg    O(n)       O(log n)   O(n)
Ordered?     No          No         Yes        Yes
Space        O(n)        O(n)       O(n)       O(n)

When hash map wins: Need fast lookup/insert/delete, don't need ordering

When others win:

  • BST: Need ordering + fast operations
  • Array: Small data, indexed access
  • Sorted array: Static data, need ordering

Practical Implications

When O(1) matters

Large datasets: With n = 1,000,000

  • Hash map lookup: ~1 operation
  • Array search: ~1,000,000 operations
  • Roughly 1,000,000x faster (see the benchmark sketch below)
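
A rough way to see the gap yourself (the sizes are arbitrary and absolute timings vary by machine):

python
import timeit

n = 1_000_000
data_list = list(range(n))
data_dict = dict.fromkeys(data_list)
missing = -1  # not present, so the list scan does all n comparisons

list_time = timeit.timeit(lambda: missing in data_list, number=100)
dict_time = timeit.timeit(lambda: missing in data_dict, number=100)
print(f"list scan: {list_time:.3f}s   dict lookup: {dict_time:.6f}s")
# Expect the dict lookup to be several orders of magnitude faster.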

When it doesn't matter

Small datasets: With n = 10

  • Hash map: ~1 operation
  • Array: ~10 operations
  • Difference is negligible

The rule: Hash maps shine with large datasets and frequent lookups.

Common Misconceptions

Misconception 1: "Hash maps are always O(1)"

Truth: O(1) is the average case; an O(n) worst case exists but is rare.

Misconception 2: "Collisions make hash maps slow"

Truth: Some collisions are fine. Load factor < 0.75 keeps performance good.

Misconception 3: "I should worry about worst case"

Truth: With standard library implementations, the worst case is so unlikely that you can ignore it. Focus on the average case.

Misconception 4: "Rehashing makes inserts O(n)"

Truth: Rehashing is amortized O(1). Occasional O(n) rehash is spread across many O(1) inserts.

Summary

Hash map time complexity explained:

Average case: O(1)

  • Good hash function
  • Low load factor (< 0.75)
  • Uniform distribution

Worst case: O(n)

  • All keys collide
  • Extremely rare in practice
  • Only with adversarial input

Amortized: O(1)

  • Includes rehashing cost
  • Averaged over many operations

For interviews:

  • State: "O(1) average case"
  • Explain: Good hash function + low load factor
  • Acknowledge: O(n) worst case exists but is rare
  • Assume: O(1) for standard library implementations

The truth: Modern hash maps are so well-optimized that you can confidently use O(1) in complexity analysis.

Next steps:

Understand why the complexity holds, then practice explaining it confidently in your own words.

Want to Practice LeetCode Smarter?

LeetCopilot is a free browser extension that enhances your LeetCode practice with AI-powered hints, personalized study notes, and realistic mock interviews — all designed to accelerate your coding interview preparation.
