
Why Monotonic Stack Beats Brute Force: The O(n²) to O(n) Transformation

LeetCopilot Team
Dec 22, 2025
14 min read
Monotonic Stack, Time Complexity, Amortized Analysis, Algorithm Optimization, Interview Prep
Understand the amortized analysis proof of why monotonic stacks are O(n), not O(n²). Learn the mathematical intuition, see visual proofs, and build deep understanding of when this optimization applies.

You're looking at a nested loop solution. The outer loop runs n times. The inner while loop could run n times. Your brain screams: "This is O(n²)!"

But the interviewer says: "Actually, it's O(n)."

How is that possible?

This is the amortized analysis of monotonic stacks—one of the most elegant time complexity proofs in computer science. Understanding it deeply will transform how you think about algorithm optimization.

This guide will teach you why monotonic stacks are O(n), the mathematical proof, visual intuition, and when this optimization applies.

TL;DR

Why monotonic stack is O(n), not O(n²):

  • Each element is pushed to the stack exactly once: n operations
  • Each element is popped from the stack at most once: n operations
  • Total operations: 2n = O(n)

Even though there's a while loop inside a for loop, the total number of pops across all iterations is bounded by n.

This is called amortized analysis.

The Brute Force Baseline

Problem: Next Greater Element

Task: For each element, find the next element to its right that is greater.

Brute Force Solution: O(n²)

python
def next_greater_brute_force(nums):
    """
    Brute force: For each element, scan right to find next greater.
    
    Time: O(n²)
    Space: O(1)
    """
    n = len(nums)
    result = [-1] * n
    
    for i in range(n):              # O(n)
        for j in range(i + 1, n):   # O(n) in worst case
            if nums[j] > nums[i]:
                result[i] = nums[j]
                break
    
    return result

# Example
nums = [2, 1, 2, 4, 3]
print(next_greater_brute_force(nums))
# Output: [4, 2, 4, -1, -1]

Time complexity analysis:

  • Outer loop: n iterations
  • Inner loop: up to n iterations per outer iteration
  • Total: O(n × n) = O(n²)

Worst case: Strictly decreasing array [5, 4, 3, 2, 1]

  • For each element, scan all remaining elements
  • Total comparisons: (n-1) + (n-2) + ... + 1 + 0 = n(n-1)/2 = O(n²)
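
You can check these counts directly. Here is a minimal counting sketch (the wrapper function is just for illustration, not part of the solution):

python
def brute_force_comparisons(nums):
    """Count how many nums[j] > nums[i] checks the brute-force loops perform."""
    n, comparisons = len(nums), 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if nums[j] > nums[i]:
                break
    return comparisons

print(brute_force_comparisons([5, 4, 3, 2, 1]))  # 10 = 5×4/2 (worst case: never breaks early)
print(brute_force_comparisons([1, 2, 3, 4, 5]))  # 4  (best case: breaks after one check)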

The Monotonic Stack Solution: O(n)

Optimized Solution

python
def next_greater_stack(nums):
    """
    Monotonic stack: Maintain decreasing order.
    
    Time: O(n)  ← How is this possible?
    Space: O(n)
    """
    n = len(nums)
    result = [-1] * n
    stack = []  # Store indices
    
    for i in range(n):                          # O(n) outer loop
        while stack and nums[i] > nums[stack[-1]]:  # O(?) inner loop
            idx = stack.pop()
            result[idx] = nums[i]
        stack.append(i)
    
    return result

# Example
nums = [2, 1, 2, 4, 3]
print(next_greater_stack(nums))
# Output: [4, 2, 4, -1, -1]

The question: The while loop is inside the for loop. Why isn't this O(n²)?

The Amortized Analysis Proof

The Key Insight

Each element can be popped at most once.

Even though the while loop can run multiple times in a single iteration of the for loop, across all iterations each element is popped at most once.

Formal Proof

Operations on each element:

  1. Push: Each element is pushed to the stack exactly once (in the for loop)
  2. Pop: Each element is popped from the stack at most once (in the while loop)

Total operations:

  • n pushes (one per element)
  • At most n pops (each element popped at most once)
  • Total: 2n = O(n)

Mathematical formulation:

Let push_count = number of push operations
Let pop_count = number of pop operations

code
push_count = n  (each element pushed once)

pop_count ≤ n   (each element popped at most once)

Total operations = push_count + pop_count
                 = n + pop_count
                 ≤ n + n
                 = 2n
                 = O(n)
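
To see the 2n bound concretely, here is the same algorithm instrumented to count its own pushes and pops (the counters are added purely for illustration):

python
def next_greater_counted(nums):
    """next_greater_stack with push/pop counters added."""
    n = len(nums)
    result = [-1] * n
    stack = []
    pushes = pops = 0

    for i in range(n):
        while stack and nums[i] > nums[stack[-1]]:
            result[stack.pop()] = nums[i]
            pops += 1
        stack.append(i)
        pushes += 1

    return result, pushes, pops

_, pushes, pops = next_greater_counted([5, 4, 3, 2, 1])
print(pushes, pops)  # 5 0  → 5 total operations (nothing is ever popped)

_, pushes, pops = next_greater_counted([3, 1, 4, 1, 5, 9, 2])
print(pushes, pops)  # 7 5  → 12 operations, well within the 2n = 14 budget

No matter what input you feed it, pushes + pops never exceeds 2n.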

Visual Proof

Example: nums = [3, 1, 4, 1, 5, 9, 2]

code
i=0, val=3:
  Stack: []
  Push 3
  Stack: [3]
  Operations: 1 push

i=1, val=1:
  Stack: [3]
  1 < 3, no pops
  Push 1
  Stack: [3, 1]
  Operations: 1 push

i=2, val=4:
  Stack: [3, 1]
  4 > 1, pop 1  ← 1 is popped (will never be pushed again)
  4 > 3, pop 3  ← 3 is popped (will never be pushed again)
  Push 4
  Stack: [4]
  Operations: 2 pops + 1 push

i=3, val=1:
  Stack: [4]
  1 < 4, no pops
  Push 1
  Stack: [4, 1]
  Operations: 1 push

i=4, val=5:
  Stack: [4, 1]
  5 > 1, pop 1  ← 1 is popped
  5 > 4, pop 4  ← 4 is popped
  Push 5
  Stack: [5]
  Operations: 2 pops + 1 push

i=5, val=9:
  Stack: [5]
  9 > 5, pop 5  ← 5 is popped
  Push 9
  Stack: [9]
  Operations: 1 pop + 1 push

i=6, val=2:
  Stack: [9]
  2 < 9, no pops
  Push 2
  Stack: [9, 2]
  Operations: 1 push

Total operations:
  Pushes: 7 (one per element)
  Pops: 5 (every element except 9 and 2 is popped once)
  Total: 12 operations for 7 elements
  Average: ~1.71 operations per element
  Complexity: O(n)

Key observation: Elements 3, 1, 4, 1, 5 were each popped exactly once. Elements 9 and 2 remain in the stack (never popped). Total pops ≤ n.
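
If you want to generate a trace like the one above for your own inputs, a small helper like this works (it is illustrative only, and stores values rather than indices for readability):

python
def trace_next_greater(nums):
    """Print the pops and pushes at every step of the scan."""
    stack = []
    for i, val in enumerate(nums):
        ops = []
        while stack and val > stack[-1]:
            ops.append(f"pop {stack.pop()}")
        stack.append(val)
        ops.append(f"push {val}")
        print(f"i={i}, val={val}: {', '.join(ops)}, stack={stack}")

trace_next_greater([3, 1, 4, 1, 5, 9, 2])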

Why It's NOT O(n²)

Common misconception: "The while loop can run n times, and it's inside a for loop that runs n times, so it's O(n²)."

Why this is wrong:

The while loop doesn't run n times per iteration of the for loop. It runs at most n times total across all iterations.

Analogy: Imagine you have n coins in your pocket. In a loop, you can take out coins and throw them away. You can't throw away more than n coins total, even if the loop runs n times.

Amortized Analysis: The Accounting Method

The Accounting Metaphor

Think of each element as having a budget of 2 operations:

  1. 1 operation for being pushed
  2. 1 operation for being popped

Total budget: 2n operations

Actual usage:

  • Each element uses its push budget: n operations
  • Each element uses at most its pop budget: ≤ n operations
  • Total: ≤ 2n operations = O(n)
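
This bookkeeping can be checked mechanically. In the sketch below (the credit counter is purely illustrative), each element deposits its 2-operation budget when it arrives, every push or pop spends one credit, and the balance never goes negative:

python
def accounting_check(nums):
    """Verify the 2-credits-per-element budget for the next-greater scan."""
    stack = []        # indices
    credits = 0
    for i in range(len(nums)):
        credits += 2                  # element i's budget: 1 push + 1 possible pop
        while stack and nums[i] > nums[stack[-1]]:
            stack.pop()
            credits -= 1              # spend the popped element's pop credit
            assert credits >= 0
        stack.append(i)
        credits -= 1                  # spend element i's push credit
        assert credits >= 0
    return credits                    # unspent pop credits = elements never popped

print(accounting_check([3, 1, 4, 1, 5, 9, 2]))  # 2: only 9 and 2 keep their pop credit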

Amortized Cost Per Operation

Amortized cost per element = Total cost / Number of elements processed

For monotonic stack:

  • Total cost: 2n operations
  • Number of elements: n
  • Amortized cost per element: 2n / n = 2 = O(1)

This means: On average, each element costs O(1) to process, even though some iterations of the while loop process multiple elements.

When This Optimization Applies

Pattern Recognition

Monotonic stack optimization applies when:

  1. Each element is processed at most once (pushed once, popped once)
  2. Stack operations are the only work (no other O(n) work per element)
  3. No backtracking (once popped, never re-added)

Examples where it applies (each fits the generic sketch after this list):

  • Next greater element
  • Largest rectangle in histogram
  • Daily temperatures
  • Stock span
  • Sum of subarray minimums
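
All of these problems reduce to the same loop; only the comparison direction and what you record on each pop change. Here is a generic sketch (the monotonic_scan function and its cmp parameter are illustrative names, not a standard API):

python
import operator

def monotonic_scan(nums, cmp=operator.gt):
    """Generic "next ..." scan: cmp(current, stack_top) decides when the
    stack top has found its answer. operator.gt gives next strictly greater,
    operator.lt gives next strictly smaller."""
    result = [-1] * len(nums)
    stack = []  # indices whose answer is still unknown
    for i, val in enumerate(nums):
        while stack and cmp(val, nums[stack[-1]]):
            result[stack.pop()] = val
        stack.append(i)
    return result

print(monotonic_scan([2, 1, 2, 4, 3]))                # next greater: [4, 2, 4, -1, -1]
print(monotonic_scan([2, 1, 2, 4, 3], operator.lt))   # next smaller: [1, -1, -1, 3, -1]

For the "previous greater/smaller" variants, the answer is read from whatever remains on top of the stack at push time instead of being recorded on a pop; the amortized argument is identical.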

When It DOESN'T Apply

Counterexample 1: Nested loops with independent work

python
for i in range(n):
    for j in range(n):
        # Do independent work for each (i, j) pair
        result[i][j] = compute(i, j)
# Time: O(n²) - no optimization possible

Why: Each (i, j) pair requires independent computation. No elements are "eliminated" like in monotonic stack.

Counterexample 2: Stack with re-additions

python
for i in range(n):
    while stack and condition:
        elem = stack.pop()
        # Process elem
        stack.append(elem)  # Re-add to stack!
# Time: O(n²) - elements can be popped multiple times

Why: Elements can be popped and re-added, violating the "at most once" property.
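
To make the failure concrete, here is a contrived sketch (purely a demonstration, not a real algorithm) that pops everything smaller than the current value and then re-pushes it. On an increasing input, the operation count grows quadratically:

python
def ops_with_readds(nums):
    """Re-adding popped elements breaks the "popped at most once" property,
    so iteration i can re-touch up to i earlier elements."""
    stack, ops = [], 0
    for val in nums:
        popped = []
        while stack and stack[-1] < val:
            popped.append(stack.pop())
            ops += 1
        for x in reversed(popped):    # re-add in original order
            stack.append(x)
            ops += 1
        stack.append(val)
        ops += 1
    return ops

print(ops_with_readds(list(range(1000))))  # 1000000 operations → O(n²), not O(n)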

Complexity Comparison

Time Complexity

Approach           Best Case   Average Case   Worst Case
Brute Force        O(n)        O(n²)          O(n²)
Monotonic Stack    O(n)        O(n)           O(n)

Best case for brute force: Strictly increasing array [1, 2, 3, 4, 5]

  • Each element's next greater is the immediate next element
  • Inner loop breaks after 1 iteration
  • Total: O(n)

Worst case for brute force: Strictly decreasing array [5, 4, 3, 2, 1]

  • Each element has no next greater
  • Inner loop scans all remaining elements
  • Total: O(n²)

All cases for monotonic stack: O(n)

  • Amortized analysis guarantees O(n) regardless of input

Space Complexity

Approach           Space
Brute Force        O(1)
Monotonic Stack    O(n)

Tradeoff: Monotonic stack uses O(n) space to achieve O(n) time.

Real-World Impact

Example: Daily Temperatures (#739)

Problem: Given daily temperatures, return how many days until a warmer temperature.

Input: [73, 74, 75, 71, 69, 72, 76, 73]
Output: [1, 1, 4, 2, 1, 1, 0, 0]

Brute force: O(n²)

python
temps = [73, 74, 75, 71, 69, 72, 76, 73]
n = len(temps)
result = [0] * n

for i in range(n):
    for j in range(i + 1, n):
        if temps[j] > temps[i]:
            result[i] = j - i
            break
# Time: O(n²)

Monotonic stack: O(n)

python
temps = [73, 74, 75, 71, 69, 72, 76, 73]
n = len(temps)
result = [0] * n
stack = []  # indices of days still waiting for a warmer temperature

for i in range(n):
    while stack and temps[i] > temps[stack[-1]]:
        idx = stack.pop()
        result[idx] = i - idx
    stack.append(i)

print(result)  # [1, 1, 4, 2, 1, 1, 0, 0]
# Time: O(n)

Performance difference:

  • n = 1,000: Brute force ~1,000,000 ops vs Stack ~2,000 ops (500× faster)
  • n = 10,000: Brute force ~100,000,000 ops vs Stack ~20,000 ops (5,000× faster)
  • n = 100,000: Brute force ~10,000,000,000 ops vs Stack ~200,000 ops (50,000× faster)

In interviews: The difference between TLE (Time Limit Exceeded) and AC (Accepted).
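
If you want to feel this difference on your own machine, here is a rough timing sketch. It assumes the next_greater_brute_force and next_greater_stack functions defined earlier in this article are already in scope, and exact times will vary with hardware and Python version:

python
import time

nums = list(range(10_000, 0, -1))   # strictly decreasing: worst case for brute force

for solve in (next_greater_brute_force, next_greater_stack):
    start = time.perf_counter()
    solve(nums)
    elapsed = time.perf_counter() - start
    print(f"{solve.__name__}: {elapsed:.3f} s")
# The brute-force run should be several orders of magnitude slower.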

Common Misconceptions

Misconception 1: "While loop inside for loop is always O(n²)"

Wrong: Assumes each iteration of while loop is independent.

Correct: Amortized analysis considers total operations across all iterations.

Counterexample: Monotonic stack is O(n) despite nested loops.

Misconception 2: "Amortized O(n) means some operations are O(n)"

Wrong: Confuses amortized cost with worst-case per-operation cost.

Correct: Amortized O(n) means total cost is O(n), so average cost per operation is O(1).

Clarification: Some iterations of the while loop may pop multiple elements (O(k) for that iteration), but the total pops across all iterations is O(n).

Misconception 3: "Space complexity is O(1) because we only use one stack"

Wrong: Stack can grow to size n.

Correct: Space complexity is O(n) in worst case (all elements in stack).

Example: Strictly decreasing array [5, 4, 3, 2, 1] → nothing is ever popped, so the stack ends up holding all n elements.
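
A quick check (the helper below is illustrative) confirms which inputs actually fill the stack:

python
def stack_size_after(nums):
    """Size of the stack once a next-greater scan finishes."""
    stack = []
    for i, val in enumerate(nums):
        while stack and val > nums[stack[-1]]:
            stack.pop()
        stack.append(i)
    return len(stack)

print(stack_size_after([1, 2, 3, 4, 5]))  # 1: each new element pops the previous one
print(stack_size_after([5, 4, 3, 2, 1]))  # 5: nothing is ever popped, O(n) space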

Proof by Contradiction

Claim: Monotonic stack is O(n²)

Assumption: Suppose the time complexity is O(n²).

Implication: There exists an input where the total number of operations is proportional to n².

Analysis:

  • Each element is pushed once: n operations
  • For total to be n², we need n² - n pops
  • This means some element must be popped (n² - n) / n = n - 1 times on average

Contradiction: Each element can be popped at most once (once popped, it's removed from the stack forever).

Conclusion: The assumption is false. Time complexity is not O(n²).

Claim: Monotonic stack is O(n)

Proof:

  • Each element is pushed once: n operations
  • Each element is popped at most once: ≤ n operations
  • Total: ≤ 2n = O(n) ✓

Conclusion: Time complexity is O(n).

Visualizing the Optimization

Brute Force: Redundant Comparisons

code
Array: [2, 1, 2, 4, 3]

i=0 (val=2): Compare with 1, 2, 4 → Found 4
i=1 (val=1): Compare with 2 → Found 2
i=2 (val=2): Compare with 4 → Found 4
i=3 (val=4): Compare with 3 → None
i=4 (val=3): None

Total comparisons: 3 + 1 + 1 + 1 + 0 = 6

Redundancy: We compare 2 (index 0) with 1, even though 1 < 2 (useless comparison).

Monotonic Stack: Eliminate Redundancy

code
Array: [2, 1, 2, 4, 3]

i=0: Push 2, stack=[2]
i=1: Push 1, stack=[2,1]
i=2: Pop 1 (2>1), Push 2, stack=[2,2]
     → Found next greater for index 1
i=3: Pop 2 (4>2), Pop 2 (4>2), Push 4, stack=[4]
     → Found next greater for indices 2 and 0
i=4: Push 3, stack=[4,3]

Total operations: 5 pushes + 3 pops = 8

Optimization: Once 1 is popped at i=2, no later element is ever compared against it again. The stack keeps only the "useful" candidates that are still waiting for their next greater element.

When to Use This Optimization

Decision Framework

Use monotonic stack when:

  1. ✅ Problem asks for "next/previous greater/smaller"
  2. ✅ Brute force would be O(n²) with nested loops
  3. ✅ Each element needs to be compared with future/past elements
  4. ✅ Can eliminate "useless" candidates (smaller elements for "next greater")

Don't use when:

  1. ❌ Need to compare all pairs (e.g., find all pairs with sum = k)
  2. ❌ Order doesn't matter (e.g., k-th largest element)
  3. ❌ The relationship isn't about the nearest greater/smaller element (e.g., Two Sum, which is better handled with a hash map)

Practice Problems

Master amortized analysis with these problems:

  1. Next Greater Element I (#496) - Basic template
  2. Daily Temperatures (#739) - Classic application
  3. Largest Rectangle in Histogram (#84) - Increasing stack
  4. Trapping Rain Water (#42) - Complex amortization
  5. Sum of Subarray Minimums (#907) - Contribution technique

For each problem:

  • Implement brute force O(n²) solution first
  • Implement monotonic stack O(n) solution
  • Trace through both to see the optimization
  • Verify amortized analysis holds

Conclusion

Monotonic stacks achieve O(n) time complexity through amortized analysis—one of the most elegant optimizations in computer science.

Key principles:

  • Each element pushed once: n operations
  • Each element popped at most once: ≤ n operations
  • Total: 2n = O(n), not O(n²)

The transformation:

  • Brute force: O(n²) - compare each element with all future elements
  • Monotonic stack: O(n) - eliminate redundant comparisons

Amortized analysis:

  • Total cost: 2n operations
  • Average cost per element: O(1)
  • Even though some iterations do more work, total is bounded

When it applies:

  • "Next/previous greater/smaller" problems
  • Histogram and rectangle problems
  • Problems where an O(n²) brute force keeps re-scanning elements that could have been eliminated

Real-world impact:

  • 500× to 50,000× faster than brute force
  • Difference between TLE and AC in interviews

Master this optimization, and you'll transform O(n²) solutions into O(n) elegance.

Next steps:

Next time you see nested loops, ask: "Can I eliminate redundant comparisons with a monotonic stack?"

Want to Practice LeetCode Smarter?

LeetCopilot is a free browser extension that enhances your LeetCode practice with AI-powered hints, personalized study notes, and realistic mock interviews — all designed to accelerate your coding interview preparation.
