Dynamic Programming
Dynamic Programming (DP) is about turning a brute force into an efficient solution by reusing overlapping subproblems and storing intermediate calculations.
1. Starter Problems
Before diving into definitions, here are two “toy” problems that show what DP feels like.
1.1 Staircase Ways
You are at step 0 and want to reach step n. Each move you can climb 1, 2, or 3 steps. How many distinct ways are there to climb up the staircase?
State idea: dp[i] = number of ways to reach step i.
Transition: To reach step i, your last jump came from step i-1, i-2, or i-3.
dp[0] = 1
dp[1] = dp[0] = 1
dp[2] = dp[1] + dp[0] = 2
dp[i] = dp[i-1] + dp[i-2] + dp[i-3] (for i >= 3)
Answer: dp[n].
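A minimal bottom-up sketch of this recurrence in Python (the function name `count_ways` is illustrative):

```python
def count_ways(n):
    """Number of ways to climb to step n with jumps of 1, 2, or 3 steps."""
    if n == 0:
        return 1
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to stand at the start: do nothing
    for i in range(1, n + 1):
        dp[i] = dp[i - 1]
        if i >= 2:
            dp[i] += dp[i - 2]
        if i >= 3:
            dp[i] += dp[i - 3]
    return dp[n]
```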
1.2 Minimum Cost to Climb Stairs
Same staircase rules (move 1, 2, or 3 steps), but each step has a positive cost paid when you land on it. Find the minimum total cost to reach the top.
State idea: dp[i] = minimum total cost to land on step i.
Transition: To land on step i, you came from step i-1, i-2, or i-3, then pay c[i].
dp[0] = c[0]
dp[1] = c[1]
dp[2] = c[2]
dp[i] = min(dp[i-1], dp[i-2], dp[i-3]) + c[i] (for i >= 3)
Answer: If reaching the “top” means you can finish from step n-1, n-2, or n-3 without paying extra, then the answer is min(dp[n-1], dp[n-2], dp[n-3]).
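A sketch of this in Python, following the base cases above (steps 0, 1, and 2 are reachable directly from the ground; `min_cost_climb` is an illustrative name):

```python
def min_cost_climb(c):
    """Minimum cost to finish the staircase, paying c[i] when landing on step i.
    Jumps of 1, 2, or 3 steps; you may finish from any of the last three steps."""
    n = len(c)
    dp = [0] * n
    for i in range(n):
        if i < 3:
            dp[i] = c[i]  # reachable from the ground in one jump
        else:
            dp[i] = min(dp[i - 1], dp[i - 2], dp[i - 3]) + c[i]
    return min(dp[max(0, n - 3):])
```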
Practice:
2. DP Mindset and Key Definitions
Dynamic programming is fundamentally about recurrence: we express the solution to a problem in terms of solutions to smaller instances of the same problem. In many problems, the same smaller instances arise repeatedly across different branches of the recursion, creating overlapping subproblems. Memoization (or bottom-up tabulation) avoids recomputing these repeated subproblems by storing their answers the first time they are computed and reusing them whenever needed.
Core pieces
- State: dp[subproblem] stores the answer for that subproblem.
- Base cases: smallest states you already know.
- Transition: compute dp[subproblem] from previously computed states.
- Answer: usually dp[final_problem] (or max/min over a set of states).
Two common styles for implementation
- Top-down (memoized DFS): easy to write, but you must avoid recursion depth issues in Python, and it can TLE because recursion is generally slower.
- Bottom-up (tabulation): explicit evaluation order, often faster and safer; the more common approach.
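The staircase count from section 1.1 written top-down, as a sketch of the memoized style (`count_ways_topdown` is an illustrative name; the recursion-limit bump is the Python caveat mentioned above):

```python
import sys
from functools import lru_cache

def count_ways_topdown(n):
    """Top-down (memoized) staircase count: same recurrence, cached results."""
    sys.setrecursionlimit(max(10000, n + 10))  # recursion depth grows with n

    @lru_cache(maxsize=None)
    def ways(i):
        if i < 0:
            return 0  # jumped past the start: not a valid way
        if i == 0:
            return 1  # base case: one way to be at the start
        return ways(i - 1) + ways(i - 2) + ways(i - 3)

    return ways(n)
```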
Sanity Checks
- Does your dp have a valid ordering?
- Does each case only reference smaller cases?
- Are you initializing your base cases?
DP Type: Counting vs Optimality
A quick (and useful) mental split for most dynamic programming problems:
- Optimality DP: each state stores a best value (max/min).
  - Think: “What’s the best I can do from here?”
  - Transitions usually use max(...)/min(...).
  - Common extras: reconstruction via parent pointers, handling -inf/+inf, tie-breaking.
- Counting DP: each state stores a number of ways.
  - Think: “How many ways can I get here / finish from here?”
  - Transitions usually sum contributions from previous states.
  - Common extras: modulo arithmetic, avoiding double-counting (ordered vs unordered); base cases are often “1 way” rather than “0 cost”.
Sanity checks by type
- Optimality: Are you initializing unreachable states to -inf/+inf correctly?
- Counting: Are you counting each object exactly once? Are you applying mod consistently?
A good workflow:
- Define the state.
- What information do you need to uniquely describe a subproblem?
- Write the transition.
- Which already-solved states does your current state depend on, and how do you combine them?
- Ensure every transition is correct and covers all cases.
- Decide the order.
- Increasing length, increasing index, topological order, etc.
- Check complexity.
- State count × transitions per state.
3. Classical DP examples
3.1 Maximum Subarray Sum (Kadane)
Goal: maximum sum over all contiguous subarrays.
State idea: best_end[i] = maximum subarray sum that must end at index i.
Transition:
- Either extend the previous subarray or start fresh at i.
best_end[i] = max(a[i], best_end[i-1] + a[i])
answer = max over i of best_end[i]
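The transition above in Python, keeping only the previous `best_end` value instead of the whole array (`max_subarray_sum` is an illustrative name):

```python
def max_subarray_sum(a):
    """Kadane: best_end is the max-sum subarray ending at the current index."""
    best_end = a[0]
    answer = a[0]
    for x in a[1:]:
        best_end = max(x, best_end + x)  # start fresh at x, or extend
        answer = max(answer, best_end)
    return answer
```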
Practice:
- Maximum Subarray Sum (CSES)
- Reverse Subarray Sum (Hacker Devils X Soda Code Challenge XI)
- Given an array of integers, first you may pick some subarray of the given array and reverse it. Then, you pick any subarray and sum its elements. Find the maximum possible value of this sum.
3.2 Knapsack
You have n items. Item i has weight w[i] and value v[i]. You have a knapsack of capacity W. Choose a subset of items with total weight at most W that maximizes total value.
Approach 1:
Let dp[i][c] be the maximum value achievable using the first i items with capacity c.
Transition:
- Don’t take item i: dp[i][c] = dp[i-1][c]
- Take item i (if w[i] <= c): dp[i][c] = dp[i-1][c - w[i]] + v[i]
All together: dp[i][c] = max(dp[i-1][c], dp[i-1][c - w[i]] + v[i])
Base cases: dp[0][c] = 0 for all c, and dp[i][0] = 0 for all i.
Approach 2:
While the above approach uses a two-dimensional dp, knapsack is traditionally implemented with a one-dimensional array. Let dp[c] maintain the best value achievable with capacity c as we process the items. The update with the i-th item is, for all c from W down to w[i] (in reverse order): dp[c] = max(dp[c], dp[c - w[i]] + v[i]).
The idea of maintaining a dp table of information and updating it after processing each element of an array is very common.
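The one-dimensional update as a Python sketch (`knapsack` is an illustrative name; the downward loop over c is what keeps each item used at most once):

```python
def knapsack(weights, values, W):
    """0/1 knapsack with a 1-D table; dp[c] = best value at capacity c."""
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so dp[c - w] still excludes this item.
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]
```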
Practice:
3.3 Longest Increasing Subsequence
Problem: Given an array a of length n, find the maximum length of a strictly increasing subsequence (not necessarily contiguous).
Approach 1: Classic DP
State: dp[i] = length of the longest increasing subsequence that ends at index i.
Transition:
- If we want an increasing subsequence ending at i, the previous element can be any index j < i with a[j] < a[i].
- Compute: dp[i] = 1 + max(dp[j]) over all such j.
- If there is no valid j, then dp[i] = 1.
Base case: dp[i] = 1 for all i (the subsequence consisting of just a[i]).
Answer: max over i of dp[i].
Complexity:
- Time: O(n^2)
- Memory: O(n)
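The classic O(n^2) recurrence as a Python sketch (`lis_length` is an illustrative name):

```python
def lis_length(a):
    """dp[i] = length of the longest strictly increasing subsequence ending at i."""
    n = len(a)
    dp = [1] * n  # base case: each element alone is a subsequence of length 1
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:  # a[i] can extend a subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp) if dp else 0
```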
Reconstruction (optional):
- Store a parent[i] pointer. Whenever dp[i] is improved using some j, set parent[i] = j.
- Start from an index achieving the maximum dp value and follow parent pointers backward.
Approach 2: “tails” method
This method computes the length of LIS efficiently (and can be extended to reconstruct the sequence).
Key idea (definition of tails):
- Maintain an array tails where:
- tails[k] is the smallest possible ending value of a strictly increasing subsequence of length k+1 seen so far.
Invariant:
- tails is increasing as an array of values, and smaller tail values are always better (they make it easier to extend later).
Update rule for each value x:
- Find the smallest index k such that tails[k] >= x.
- If such a k exists, replace tails[k] with x.
- If no such k exists (i.e., x is larger than all tails), append x to tails.
This is correct because:
- Replacing tails[k] with a smaller value keeps a subsequence of length k+1 possible, but makes its ending value as small as possible.
- Appending x corresponds to finding a longer increasing subsequence than any previously seen.
Answer: the final length of tails.
Complexity:
- Each update uses binary search on tails: O(log n)
- Total time: O(n log n)
- Memory: O(n)
Strict vs nondecreasing detail:
- Strictly increasing LIS uses the first index with tails[k] >= x (a “lower bound”).
- Nondecreasing LIS uses the first index with tails[k] > x (an “upper bound”).
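The tails update in Python, where `bisect_left` / `bisect_right` play the role of the lower/upper bound just described (`lis_length_tails` and the `strict` flag are illustrative):

```python
from bisect import bisect_left, bisect_right

def lis_length_tails(a, strict=True):
    """O(n log n) LIS length via the tails array.
    tails[k] = smallest ending value of an increasing subsequence of length k+1."""
    tails = []
    for x in a:
        # Strict LIS: first tail >= x (lower bound); nondecreasing: first tail > x.
        k = bisect_left(tails, x) if strict else bisect_right(tails, x)
        if k == len(tails):
            tails.append(x)  # x extends the longest subsequence seen so far
        else:
            tails[k] = x     # keep length k+1 achievable with a smaller ending value
    return len(tails)
```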
Reconstruction (optional):
- If you also want the actual LIS (not just its length), maintain:
- an index array tracking which original index produced each tail length, and
- parent[i] pointers to backtrack the chosen subsequence.
Practice:
- Increasing Subsequence (CSES)
- Mysterious Present (CF)
- Candy Machine (Baltic IOI)
- Leaping Tak (AC)
- Jump Game (LC)
- Antimatter (CF)
A good source of DP problems to begin with is the USACO Guide. They also link to this beginner-friendly contest.