Greedy algorithms are deceptively simple to code but notoriously difficult to recognize and prove correct. They transform complex optimization problems into elegant O(n) or O(n log n) solutions. They're used in dozens of LeetCode problems. And once you master them, you'll solve an entire class of problems that stump most developers.
But here's the challenge: greedy algorithms aren't just about making locally optimal choices—they require proving that local optimality leads to global optimality. Unlike patterns with clear structural cues (sorted arrays → binary search, pairs → two pointers), greedy problems require deep understanding of when a greedy choice is safe and when you need dynamic programming instead.
This comprehensive guide will teach you everything about greedy algorithms: all patterns, when to use greedy vs DP, how to prove correctness, common mistakes to avoid, and a systematic recognition framework that works every time.
What Is a Greedy Algorithm?
The Core Concept
A greedy algorithm makes the locally optimal choice at each step, hoping that these local choices lead to a globally optimal solution.
The fundamental idea: Instead of exploring all possible solutions (brute force) or building up solutions from subproblems (dynamic programming), we make the best immediate decision and never reconsider it.
Example:
Problem: Coin Change (Greedy Version - US Coins)
Coins: [25¢, 10¢, 5¢, 1¢], Target: 41¢
Greedy approach:
Step 1: Take largest coin ≤ 41¢ → 25¢ (remaining: 16¢)
Step 2: Take largest coin ≤ 16¢ → 10¢ (remaining: 6¢)
Step 3: Take largest coin ≤ 6¢ → 5¢ (remaining: 1¢)
Step 4: Take largest coin ≤ 1¢ → 1¢ (remaining: 0¢)
Result: 4 coins [25¢, 10¢, 5¢, 1¢] ✓ (Optimal for US coins)
Why Greedy Works (When It Does)
The power of greedy algorithms comes from two critical properties:
1. Greedy Choice Property: A globally optimal solution can be arrived at by making locally optimal choices.
2. Optimal Substructure: An optimal solution contains optimal solutions to subproblems.
When both properties hold, greedy works. When either fails, you need dynamic programming.
Visual proof (Interval Scheduling):
Activities: [(1,4), (3,5), (0,6), (5,7), (3,9), (5,9), (6,10), (8,11)]
Greedy choice: Always pick activity that ends earliest
Step 1: Pick (1,4) - ends at 4
Step 2: Pick (5,7) - ends at 7, starts after 4
Step 3: Pick (8,11) - ends at 11, starts after 7
Result: 3 activities (optimal) ✓
This works because picking the earliest-ending activity leaves the most room for future activities—a greedy choice that's provably optimal.
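To make the walkthrough concrete, here is a minimal sketch of the earliest-end-time rule applied to the activity list above (the function name and structure are illustrative, not a prescribed implementation):
def select_activities(activities):
    # Greedy: sort by end time, keep each activity that starts
    # at or after the end of the last one selected
    activities.sort(key=lambda a: a[1])
    selected = []
    last_end = float('-inf')
    for start, end in activities:
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected
print(select_activities([(1,4), (3,5), (0,6), (5,7), (3,9), (5,9), (6,10), (8,11)]))
# [(1, 4), (5, 7), (8, 11)] -> the same 3 activities found in the trace above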
The Intuition: Local to Global Optimality
Let's see the transformation from brute force to greedy:
Brute force (exponential):
def coin_change_brute(coins, amount):
    if amount == 0:
        return 0
    if amount < 0:
        return float('inf')
    # Try every coin, take minimum
    min_coins = float('inf')
    for coin in coins:
        result = coin_change_brute(coins, amount - coin)
        min_coins = min(min_coins, result + 1)
    return min_coins
# Time: O(k^n) where k = number of coins, n = amount
Greedy (linear after sorting):
def coin_change_greedy(coins, amount):
    coins.sort(reverse=True)  # Largest first
    count = 0
    for coin in coins:
        if amount >= coin:
            count += amount // coin
            amount %= coin
    return count if amount == 0 else -1
# Time: O(k log k) for sorting + O(k) for greedy = O(k log k)
The tradeoff: Greedy is exponentially faster, but only works for certain coin systems (like US coins). For arbitrary coins, you need DP.
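As a quick sanity check, the greedy function above reproduces the 4-coin answer for US coins but misses the optimum for an arbitrary coin system (the calls below are illustrative):
print(coin_change_greedy([25, 10, 5, 1], 41))  # 4 (25 + 10 + 5 + 1)
print(coin_change_greedy([1, 3, 4], 6))        # 3 (4 + 1 + 1), even though 2 coins (3 + 3) is optimal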
Comparing Approaches
Problem: Activity selection with 1000 activities.
Brute force:
- Time: O(2^1000) - try all subsets
- Space: O(1000) - recursion depth
- Guaranteed optimal
Dynamic programming:
- Time: O(n²) - build up solutions
- Space: O(n) - DP table
- Guaranteed optimal
Greedy:
- Time: O(n log n) - sort by end time
- Space: O(1) - no extra storage
- Optimal only if greedy choice property holds
Result: Greedy is orders of magnitude faster than DP when it works (roughly 100× at n = 1000), but requires proof of correctness.
The Three Main Greedy Patterns
Greedy algorithms aren't one pattern—they're three distinct techniques that solve different types of problems. Understanding which pattern to use is the key to mastering greedy algorithms.
1. Interval Scheduling (Activity Selection)
The Pattern: Sort intervals by end time, greedily select non-overlapping intervals.
When to use:
- Scheduling problems (meetings, activities, tasks)
- Interval selection with conflicts
- Want to maximize number of non-overlapping intervals
- Can sort by end time
Why it works:
Selecting the earliest-ending interval leaves maximum room for future intervals. This greedy choice is provably optimal via exchange argument.
Visual representation:
Intervals: [(1,3), (2,5), (4,7), (6,9)]
Sort by end time: [(1,3), (2,5), (4,7), (6,9)]
Greedy selection:
Pick (1,3) - ends earliest
Skip (2,5) - overlaps with (1,3)
Pick (4,7) - starts after 3
Skip (6,9) - overlaps with (4,7)
Result: 2 intervals (optimal)
Real-world analogy: Scheduling meetings in a conference room. To fit the most meetings, always pick the one that ends earliest, freeing up the room for more meetings later.
Example problems: Meeting Rooms (#252), Non-overlapping Intervals (#435), Minimum Number of Arrows (#452)
2. Greedy Choice with Sorting
The Pattern: Sort input by a specific criterion, then make greedy decisions using two pointers or iteration.
When to use:
- Matching/pairing problems (assign cookies, boats)
- Optimization with constraints
- Can sort to enable greedy decisions
- Want O(n log n) time
Why it works:
Sorting creates monotonicity, allowing greedy decisions about which elements to pair or select.
Visual representation:
Problem: Assign Cookies (LeetCode #455)
Children greed: [1, 2, 3]
Cookie sizes: [1, 1, 2]
Sort both: greed=[1,2,3], cookies=[1,1,2]
Greedy matching:
Child 1 (greed=1) ← Cookie 1 (size=1) ✓
Child 2 (greed=2) ← Cookie 2 (size=1) ✗ (too small)
Child 2 (greed=2) ← Cookie 3 (size=2) ✓
Child 3 (greed=3) ← No cookies left
Result: 2 children satisfied (optimal)
Real-world analogy: Distributing food to people with different appetites. Give the smallest satisfying portion to each person, starting with those who need the least.
Example problems: Assign Cookies (#455), Boats to Save People (#881), Two City Scheduling (#1029)
3. Greedy State Tracking
The Pattern: Track cumulative state (balance, reach, fuel) in a single pass, making greedy decisions based on current state.
When to use:
- Reachability problems (Jump Game)
- Balance/accumulation problems (Gas Station)
- Can determine feasibility in one pass
- Want O(n) time, O(1) space
Why it works:
If we can't reach a position with the best possible state so far, we can't reach it at all. Greedy tracking of maximum reach or cumulative balance is sufficient.
Visual representation:
Problem: Jump Game (LeetCode #55)
Array: [2, 3, 1, 1, 4]
Track maximum reach:
i=0: nums[0]=2, reach=max(0, 0+2)=2
i=1: nums[1]=3, reach=max(2, 1+3)=4
i=2: nums[2]=1, reach=max(4, 2+1)=4
i=3: nums[3]=1, reach=max(4, 3+1)=4
i=4: nums[4]=4, reach=max(4, 4+4)=8
Can reach end (index 4)? Yes, reach=8 ≥ 4 ✓
Real-world analogy: Driving with a fuel tank. Track how far you can go with current fuel. If you can't reach the next gas station with maximum possible fuel, the trip is impossible.
Example problems: Jump Game (#55), Jump Game II (#45), Gas Station (#134)
When to Use Greedy Algorithms
Choosing greedy vs DP is the hardest decision. Here's a systematic framework:
The Decision Tree
Question 1: Is this an optimization problem?
├─ No → Greedy probably not applicable
└─ Yes → Continue to Question 2
Question 2: Can you identify a greedy choice?
├─ No → Use DP or brute force
└─ Yes → Continue to Question 3
Question 3: Does the greedy choice property hold?
├─ Unsure → Try to find a counterexample
├─ No (found counterexample) → Use DP
└─ Yes → Continue to Question 4
Question 4: Does optimal substructure hold?
├─ No → Greedy won't work
└─ Yes → Greedy is likely correct ✓
Question 5: Can you prove correctness?
├─ No → Be cautious, test thoroughly
└─ Yes → Greedy is the right choice ✓
Quick Recognition Checklist
Use Greedy when you see:
- ✅ "Maximize/minimize number of intervals"
- ✅ "Earliest/latest deadline first"
- ✅ "Largest/smallest first" with sorting
- ✅ "Can you reach the end?"
- ✅ "Assign/match optimally"
- ✅ Interval scheduling keywords
Use DP (not Greedy) when you see:
- ❌ "0/1 Knapsack" (can't take fractions)
- ❌ "Coin change" (arbitrary denominations)
- ❌ "Longest increasing subsequence"
- ❌ "Edit distance"
- ❌ "Partition into subsets"
- ❌ Greedy counterexample exists
Test with a counterexample (verified in the sketch after this list):
- ❌ Coins [1, 3, 4], target 6: Greedy gives [4,1,1]=3 coins, optimal is [3,3]=2 coins
- ❌ Knapsack: Greedy by value/weight ratio fails for 0/1 variant
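Here is a minimal sketch that verifies the coin counterexample by running a largest-first greedy against a bottom-up DP; both helper functions are assumed here for illustration:
def greedy_coins(coins, amount):
    # Largest-first greedy: correct for US coins, wrong in general
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else -1
def dp_coins(coins, amount):
    # Bottom-up DP: always optimal
    dp = [0] + [float('inf')] * amount
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
print(greedy_coins([1, 3, 4], 6))  # 3 coins (4 + 1 + 1), suboptimal
print(dp_coins([1, 3, 4], 6))      # 2 coins (3 + 3), optimal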
How to Prove Greedy Correctness
Proving greedy correctness is crucial for interviews and avoiding wrong solutions.
Proof Technique 1: Greedy Choice Property
Definition: A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
Template:
- Assume there's an optimal solution that doesn't include the greedy choice
- Show you can "exchange" a choice in the optimal solution with the greedy choice
- Prove the modified solution is still optimal
- Conclude the greedy choice is safe
Example (Interval Scheduling):
Greedy choice: Pick interval that ends earliest
Proof:
1. Let OPT be an optimal solution
2. Let g be the greedy choice (earliest end time)
3. Let f be the first interval in OPT
Case 1: If f = g, greedy choice is in OPT ✓
Case 2: If f ≠ g, then end(g) ≤ end(f) (by greedy choice)
- Replace f with g in OPT
- New solution is still valid (g ends earlier, so no new conflicts)
- New solution has same size as OPT
- Therefore, greedy choice leads to optimal solution ✓
Proof Technique 2: Exchange Argument
Definition: Show that any optimal solution can be transformed into one that includes the greedy choice, without making it worse.
Template:
- Take any optimal solution
- If it doesn't use the greedy choice, exchange one element with the greedy choice
- Show the solution is still optimal (or better)
- Conclude greedy is safe
Example (Activity Selection):
See detailed proof in Exchange Argument Explained.
Proof Technique 3: Induction
Definition: Prove greedy works for base case, then show if it works for n-1, it works for n.
Template:
- Base case: Prove greedy works for smallest input
- Inductive hypothesis: Assume greedy works for n-1
- Inductive step: Show greedy works for n using the hypothesis
- Conclude greedy works for all n
Example (Jump Game):
Base case: Array of size 1 → always reachable ✓
Inductive hypothesis: Greedy (tracking max reach) works for arrays of size n-1
Inductive step: For array of size n
- If we can reach position i, we can reach any j ≤ i + nums[i]
- Max reach at position i includes all positions reachable from 0..i
- If max reach ≥ n-1, we can reach the end ✓
- If max reach < i at some point, we can't reach i (or anything beyond) ✓
Conclusion: Greedy tracking of max reach is correct ✓
Proof Technique 4: Optimal Substructure
Definition: An optimal solution contains optimal solutions to subproblems.
How it differs from DP: In greedy, we only need to solve ONE subproblem (after the greedy choice). In DP, we solve ALL subproblems.
Example (Interval Scheduling):
After picking the greedy interval (earliest end time):
- Remaining problem: Schedule intervals that start after greedy interval ends
- Optimal solution to remaining problem + greedy choice = optimal overall
- We don't need to try all first choices (unlike DP)
Complete Templates
Template 1: Interval Scheduling (Sort by End Time)
def interval_scheduling(intervals):
    """
    For: Maximizing non-overlapping intervals
    Time: O(n log n)
    Space: O(1)
    Returns: Maximum number of non-overlapping intervals
    """
    if not intervals:
        return 0
    # Sort by end time (greedy choice)
    intervals.sort(key=lambda x: x[1])
    count = 1
    last_end = intervals[0][1]
    for start, end in intervals[1:]:
        # If current interval doesn't overlap with last selected
        if start >= last_end:
            count += 1
            last_end = end
    return count
JavaScript:
function intervalScheduling(intervals) {
  if (intervals.length === 0) return 0;
  intervals.sort((a, b) => a[1] - b[1]);
  let count = 1;
  let lastEnd = intervals[0][1];
  for (let i = 1; i < intervals.length; i++) {
    const [start, end] = intervals[i];
    if (start >= lastEnd) {
      count++;
      lastEnd = end;
    }
  }
  return count;
}
Java:
public int intervalScheduling(int[][] intervals) {
    if (intervals.length == 0) return 0;
    Arrays.sort(intervals, (a, b) -> a[1] - b[1]);
    int count = 1;
    int lastEnd = intervals[0][1];
    for (int i = 1; i < intervals.length; i++) {
        if (intervals[i][0] >= lastEnd) {
            count++;
            lastEnd = intervals[i][1];
        }
    }
    return count;
}
Template 2: Greedy Choice with Sorting
def greedy_with_sorting(arr1, arr2):
    """
    For: Matching/pairing problems
    Time: O(n log n)
    Space: O(1)
    """
    # Sort both arrays (criterion depends on problem)
    arr1.sort()
    arr2.sort()
    i, j = 0, 0
    count = 0
    # Two-pointer greedy matching
    while i < len(arr1) and j < len(arr2):
        if can_match(arr1[i], arr2[j]):
            count += 1
            i += 1
            j += 1
        else:
            # Move pointer based on problem logic
            j += 1
    return count
# Example: Assign Cookies
def findContentChildren(g, s):
    g.sort()  # Greed factors
    s.sort()  # Cookie sizes
    child = cookie = 0
    while child < len(g) and cookie < len(s):
        if s[cookie] >= g[child]:
            child += 1
        cookie += 1
    return child
JavaScript:
function greedyWithSorting(arr1, arr2) {
  arr1.sort((a, b) => a - b);
  arr2.sort((a, b) => a - b);
  let i = 0, j = 0, count = 0;
  while (i < arr1.length && j < arr2.length) {
    if (canMatch(arr1[i], arr2[j])) {
      count++;
      i++;
      j++;
    } else {
      j++;
    }
  }
  return count;
}
Java:
public int greedyWithSorting(int[] arr1, int[] arr2) {
    Arrays.sort(arr1);
    Arrays.sort(arr2);
    int i = 0, j = 0, count = 0;
    while (i < arr1.length && j < arr2.length) {
        if (canMatch(arr1[i], arr2[j])) {
            count++;
            i++;
            j++;
        } else {
            j++;
        }
    }
    return count;
}
Template 3: Greedy State Tracking
def greedy_state_tracking(arr):
    """
    For: Reachability, balance, accumulation problems
    Time: O(n)
    Space: O(1)
    """
    state = 0  # Track cumulative state
    for i, val in enumerate(arr):
        # Update state based on current element
        state = update_state(state, val, i)
        # Check if state violates constraint
        if state < threshold:
            return False  # or handle failure
    return True  # or return final state
# Example: Jump Game
def canJump(nums):
    max_reach = 0
    for i in range(len(nums)):
        # Can't reach current position
        if i > max_reach:
            return False
        # Update maximum reachable position
        max_reach = max(max_reach, i + nums[i])
        # Early exit if we can reach the end
        if max_reach >= len(nums) - 1:
            return True
    return True
JavaScript:
function greedyStateTracking(arr) {
  let state = 0;
  for (let i = 0; i < arr.length; i++) {
    state = updateState(state, arr[i], i);
    if (state < threshold) {
      return false;
    }
  }
  return true;
}
// Example: Jump Game
function canJump(nums) {
  let maxReach = 0;
  for (let i = 0; i < nums.length; i++) {
    if (i > maxReach) return false;
    maxReach = Math.max(maxReach, i + nums[i]);
    if (maxReach >= nums.length - 1) return true;
  }
  return true;
}
Java:
public boolean greedyStateTracking(int[] arr) {
    int state = 0;
    for (int i = 0; i < arr.length; i++) {
        state = updateState(state, arr[i], i);
        if (state < threshold) {
            return false;
        }
    }
    return true;
}
// Example: Jump Game
public boolean canJump(int[] nums) {
    int maxReach = 0;
    for (int i = 0; i < nums.length; i++) {
        if (i > maxReach) return false;
        maxReach = Math.max(maxReach, i + nums[i]);
        if (maxReach >= nums.length - 1) return true;
    }
    return true;
}
Pattern 1: Interval Scheduling
Core Problems
1. Non-overlapping Intervals (LeetCode #435)
def eraseOverlapIntervals(intervals: List[List[int]]) -> int:
    """
    Find minimum number of intervals to remove to make rest non-overlapping
    Time: O(n log n), Space: O(1)
    """
    if not intervals:
        return 0
    # Sort by end time
    intervals.sort(key=lambda x: x[1])
    # Track number of non-overlapping intervals we can keep
    keep = 1
    last_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start >= last_end:
            keep += 1
            last_end = end
    # Remove = total - keep
    return len(intervals) - keep
# Example
print(eraseOverlapIntervals([[1,2],[2,3],[3,4],[1,3]]))  # 1
# Keep [1,2], [2,3], [3,4], remove [1,3]
Why it works: Greedy choice of earliest-ending interval maximizes room for future intervals.
See detailed guide: Interval Scheduling Template
2. Minimum Number of Arrows to Burst Balloons (LeetCode #452)
def findMinArrowShots(points: List[List[int]]) -> int:
    """
    Find minimum arrows to burst all balloons
    Time: O(n log n), Space: O(1)
    """
    if not points:
        return 0
    # Sort by end position
    points.sort(key=lambda x: x[1])
    arrows = 1
    arrow_pos = points[0][1]
    for start, end in points[1:]:
        # If balloon starts after arrow position, need new arrow
        if start > arrow_pos:
            arrows += 1
            arrow_pos = end
    return arrows
# Example
print(findMinArrowShots([[10,16],[2,8],[1,6],[7,12]]))  # 2
# Arrow at 6 bursts [2,8] and [1,6]
# Arrow at 12 bursts [10,16] and [7,12]
Why it works: Shooting at the earliest-ending balloon's end position maximizes coverage.
Pattern 2: Greedy with Sorting
Core Problems
1. Assign Cookies (LeetCode #455)
def findContentChildren(g: List[int], s: List[int]) -> int:
    """
    Assign cookies to children to maximize satisfied children
    Time: O(n log n + m log m), Space: O(1)
    """
    g.sort()  # Greed factors
    s.sort()  # Cookie sizes
    child = cookie = 0
    while child < len(g) and cookie < len(s):
        # If cookie satisfies child, move to next child
        if s[cookie] >= g[child]:
            child += 1
        # Always move to next cookie
        cookie += 1
    return child
# Example
print(findContentChildren([1,2,3], [1,1]))  # 1
# Child with greed 1 gets cookie size 1
Why it works: Sorting allows greedy matching of smallest greed with smallest satisfying cookie.
See detailed guide: Assign Cookies Greedy Explained
2. Boats to Save People (LeetCode #881)
def numRescueBoats(people: List[int], limit: int) -> int:
    """
    Find minimum boats needed (each boat holds max 2 people)
    Time: O(n log n), Space: O(1)
    """
    people.sort()
    left, right = 0, len(people) - 1
    boats = 0
    while left <= right:
        # Try to pair heaviest with lightest
        if people[left] + people[right] <= limit:
            left += 1  # Both can go
        right -= 1  # Heaviest always goes
        boats += 1
    return boats
# Example
print(numRescueBoats([3,2,2,1], 3))  # 3
# Boat 1: [1,2], Boat 2: [2], Boat 3: [3]
Why it works: Greedy pairing of heaviest with lightest minimizes boats.
See detailed guide: Greedy with Sorting Template
Pattern 3: Greedy State Tracking
Core Problems
1. Jump Game (LeetCode #55)
def canJump(nums: List[int]) -> bool:
    """
    Determine if you can reach the last index
    Time: O(n), Space: O(1)
    """
    max_reach = 0
    for i in range(len(nums)):
        # Can't reach current position
        if i > max_reach:
            return False
        # Update maximum reachable position
        max_reach = max(max_reach, i + nums[i])
        # Early exit if we can reach the end
        if max_reach >= len(nums) - 1:
            return True
    return True
# Example
print(canJump([2,3,1,1,4]))  # True
# 0 → 1 → 4 (or 0 → 2 → 3 → 4)
Why it works: If we can't reach position i with maximum possible reach, we can never reach it.
See detailed guide: Jump Game: Greedy vs DP
2. Gas Station (LeetCode #134)
def canCompleteCircuit(gas: List[int], cost: List[int]) -> int:
    """
    Find starting gas station to complete circuit
    Time: O(n), Space: O(1)
    """
    total_tank = current_tank = 0
    start = 0
    for i in range(len(gas)):
        total_tank += gas[i] - cost[i]
        current_tank += gas[i] - cost[i]
        # If we can't reach next station, start from next station
        if current_tank < 0:
            start = i + 1
            current_tank = 0
    return start if total_tank >= 0 else -1
# Example
print(canCompleteCircuit([1,2,3,4,5], [3,4,5,1,2]))  # 3
# Start at station 3: 4→5→1→2→3 (circuit complete)
Why it works: If we can't reach station j from station i, we can't reach j from any station between i and j.
See detailed guide: Gas Station: Why Greedy Works
Advanced Techniques
1. Two-Pass Greedy (Candy)
For problems where local constraints come from both directions.
def candy(ratings: List[int]) -> int:
    """
    LeetCode #135: Candy
    Distribute candies such that higher-rated children get more
    Time: O(n), Space: O(n)
    """
    n = len(ratings)
    candies = [1] * n
    # Left to right: ensure right neighbor gets more if rated higher
    for i in range(1, n):
        if ratings[i] > ratings[i-1]:
            candies[i] = candies[i-1] + 1
    # Right to left: ensure left neighbor gets more if rated higher
    for i in range(n-2, -1, -1):
        if ratings[i] > ratings[i+1]:
            candies[i] = max(candies[i], candies[i+1] + 1)
    return sum(candies)
Why it works: Two passes ensure both left and right constraints are satisfied.
See detailed guide: Candy: Two-Pass Greedy Explained
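As a quick check, a standard ratings example behaves as expected (the call below is illustrative, not part of the original):
print(candy([1, 0, 2]))  # 5, candies distributed as [2, 1, 2]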
2. Greedy with Priority Queue
For problems requiring dynamic selection of optimal element.
import heapq
def minCostToHireWorkers(quality, wage, k):
    """
    LeetCode #857: Minimum Cost to Hire K Workers
    Time: O(n log n), Space: O(n)
    """
    workers = sorted([(w/q, q, w) for q, w in zip(quality, wage)])
    heap = []
    sum_quality = 0
    min_cost = float('inf')
    for ratio, q, w in workers:
        heapq.heappush(heap, -q)  # Max heap
        sum_quality += q
        if len(heap) > k:
            sum_quality += heapq.heappop(heap)  # Remove largest quality
        if len(heap) == k:
            min_cost = min(min_cost, sum_quality * ratio)
    return min_cost
See detailed guide: Greedy with Priority Queue
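As a sanity check, a standard instance of this problem (assumed here for illustration) gives the expected minimum cost:
print(minCostToHireWorkers([10, 20, 5], [70, 50, 30], 2))  # 105.0 (hire the workers with quality 5 and 10)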
3. Greedy with Lookahead (Jump Game II)
For problems requiring minimum steps with greedy range expansion.
def jump(nums: List[int]) -> int:
    """
    LeetCode #45: Jump Game II
    Find minimum jumps to reach end
    Time: O(n), Space: O(1)
    """
    jumps = 0
    current_end = 0
    farthest = 0
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        # Reached end of current jump range
        if i == current_end:
            jumps += 1
            current_end = farthest
    return jumps
Why it works: BFS-style greedy expansion of reachable range.
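On the same array used earlier for Jump Game, the function returns the expected minimum (an illustrative check):
print(jump([2, 3, 1, 1, 4]))  # 2 (jump from index 0 to 1, then to 4)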
Common Mistakes
1. Using Greedy When DP Is Required
❌ Wrong:
# Coin Change with arbitrary denominations
def coinChange(coins, amount):
    coins.sort(reverse=True)
    count = 0
    for coin in coins:
        count += amount // coin
        amount %= coin
    return count if amount == 0 else -1
# FAILS for coins=[1,3,4], amount=6
# Greedy: [4,1,1] = 3 coins
# Optimal: [3,3] = 2 coins ✗
✅ Correct:
# Use DP for arbitrary coin systems
def coinChange(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
See guide: Why Greedy Fails on Coin Change
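The counterexample from the greedy version now resolves correctly (an illustrative check):
print(coinChange([1, 3, 4], 6))  # 2 (3 + 3)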
2. Wrong Sorting Criterion
❌ Wrong:
# Sorting intervals by start time instead of end time
intervals.sort(key=lambda x: x[0])  # WRONG for interval scheduling
✅ Correct:
# Sort by end time for interval scheduling
intervals.sort(key=lambda x: x[1])
See guide: Wrong Sorting Criterion
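A small sketch of why the criterion matters; the interval set below is an assumed example chosen to expose the difference:
def max_non_overlapping(intervals, key_index):
    # key_index = 0 sorts by start time, key_index = 1 sorts by end time
    intervals = sorted(intervals, key=lambda x: x[key_index])
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:
            count += 1
            last_end = end
    return count
tests = [[1, 10], [2, 3], [4, 5]]
print(max_non_overlapping(tests, 0))  # 1: start-time sort picks [1,10] first and blocks the rest
print(max_non_overlapping(tests, 1))  # 2: end-time sort picks [2,3] then [4,5]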
3. Not Proving Correctness
❌ Wrong:
# Assuming greedy works without proof
# "It seems like picking the largest first should work..."✅ Correct:
# Always prove or test with counterexamples
# Use exchange argument, greedy choice property, or induction
See guide: Greedy Proof Techniques
Greedy vs Dynamic Programming
This is the most important distinction in algorithm interviews.
When Greedy Works
Characteristics:
- ✅ Greedy choice property holds
- ✅ Can prove correctness
- ✅ Local optimal → global optimal
- ✅ No counterexamples
Examples:
- Interval scheduling (sort by end time)
- Fractional knapsack (sort by value/weight ratio)
- Jump Game (track maximum reach)
- Huffman coding (build tree greedily)
When You Need DP
Characteristics:
- ❌ Greedy choice property fails
- ❌ Counterexamples exist
- ❌ Need to explore multiple choices
- ❌ Overlapping subproblems
Examples:
- 0/1 Knapsack (can't take fractions)
- Coin change (arbitrary denominations)
- Longest increasing subsequence
- Edit distance
The Counterexample Test
Always test greedy with edge cases:
# Coin Change: [1, 3, 4], target = 6
# Greedy: take 4 (2 left), then 1, then 1 = 3 coins
# Optimal: take 3, then 3 = 2 coins
# Greedy FAILS ✗
# Interval Scheduling: [(1,3), (2,5), (4,7)]
# Greedy (end time): Pick (1,3), then (4,7) = 2 intervals
# Optimal: Same = 2 intervals
# Greedy WORKS ✓
See detailed guide: Greedy vs Dynamic Programming
Conclusion
Greedy algorithms are powerful optimization tools that transform complex problems into elegant solutions—when they work.
Key principles:
- Greedy choice property must hold
- Prove correctness with exchange argument, induction, or greedy choice property
- Test with counterexamples to verify greedy works
- Know when to use DP instead
The three main patterns:
- Interval scheduling - sort by end time
- Greedy with sorting - sort then match/pair
- Greedy state tracking - one-pass cumulative state
The decision framework:
- Optimization problem? → Consider greedy
- Can identify greedy choice? → Test it
- Greedy choice property holds? → Prove it
- Counterexample exists? → Use DP
When to use:
- Interval scheduling problems
- Matching/pairing with constraints
- Reachability/balance problems
- Can prove local → global optimality
Master these patterns, learn to prove correctness, and you'll solve greedy problems with confidence. For specific patterns, see Interval Scheduling Template, Greedy vs DP, and Greedy Proof Techniques.
Next time you see "maximize intervals" or "minimize steps," reach for greedy—but prove it first.
Want to Practice LeetCode Smarter?
LeetCopilot is a free browser extension that enhances your LeetCode practice with AI-powered hints, personalized study notes, and realistic mock interviews — all designed to accelerate your coding interview preparation.