When I left you, I was but the learner, now I am the master.
Let us explore the intuitions of dynamic programming and transform our thoughts from "what the hell?" to "oh yeah, duh!" via a 3-step heuristic process. In hindsight, we can "see" the ART of dynamic programming is as easy as 1, 2, 3. π
You rarely have time for everything you want in this life, so you need to make choices. And hopefully your choices can come from a deep sense of who you are.
A significant investment of time and mental resources is necessary to begin understanding dynamic programming. Thus, DP is best approached as a "marathon", not a "sprint". Two key building blocks towards a basic understanding of DP are recursion and mathematical induction.
Note: When I mention "recursive stack unwinding" below, what I mean is "frames popping off the recursive call stack". And I imagine a similarity between this action and stepping down the stairs one-by-one, followed by stepping back up the stairs one-by-one in reverse. Stepping down the stairs would be equivalent to the recursive function being invoked as a subroutine of itself. And stepping back up the stairs would be equivalent to the recursive function invocations exiting their subroutines as the base case(s) are reached.
Take the first step in faith. You don't have to see the whole staircase, just take the first step.
A mental leap of faith is necessary to begin processing recursion and mathematical induction, ie. we can only understand the basic principles of recursion and recursive algorithms after we have assumed the inductive hypothesis of a recurrence relation is true. After taking this mental leap of faith, we can look back in hindsight with the mind's eye to discover what that actually means, ie. we can clearly "see" the recursive stack hit the base case(s) and begin to unwind, formulating an optimal solution built upon the optimal solutions to subproblems of the original problem itself.
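To make the staircase analogy concrete, here is a tiny sketch (the function and variable names are my own) whose trace records the frames being pushed ("down") and popped ("up"):

```python
# Tiny sketch of the staircase analogy: "down" is appended as each
# recursive frame is pushed, and "up" is appended as the call stack
# unwinds after the base case n == 0 is reached.
def stairs(n: int, trace: list) -> None:
    trace.append(f"down {n}")  # stepping down: the function invokes itself as a subroutine
    if n > 0:                  # base case: n == 0, where the stack "bottoms out"
        stairs(n - 1, trace)
    trace.append(f"up {n}")    # stepping back up: this invocation exits

trace = []
stairs(2, trace)
print(trace)  # → ['down 2', 'down 1', 'down 0', 'up 0', 'up 1', 'up 2']
```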
We will never know our full potential unless we push ourselves to find it.
There are two key ingredients to problems which can be solved by a Dynamic Programming algorithm: overlapping subproblems and optimal substructure.
Don't only practice your art, but force your way into its secrets. For it and knowledge can raise men to the divine.
ART is an acronym used to intuitively create solutions to DP problems in 3 steps:
- All
- Remember
- Turn
These 3 steps are elaborated upon below; however, let us first take a minute to consider what our end-goal is and how we can intuitively reach it. Our end-goal in general is to minimize or maximize an objective function within the given constraints of an arbitrary universe of discourse (ie. the problem statement).
Ask yourself this question: Is it possible to know the minimum or maximum objective function outcome without first checking all possibilities? For example, let's say we have 3 numbers, and we want to know what is the minimum or maximum of those 3 numbers. Is it possible to know the minimum or maximum value without first checking the values of all 3 numbers? Please take a moment to consider this question before proceeding.
See Answer
The answer is obviously "no." It is not possible to know which of the 3 numbers is minimal or maximal unless we first check all 3 values. Using 3 cards as an example, let's assume we only know the values of the first two cards. Since we have not checked the third card's value, we don't know if it is less-than, equal-to, or greater-than the first two cards, and thus we cannot determine the objective function outcome without first checking all of the card values.

All possibilities of a universe of discourse under consideration need to be checked before we can determine the objective function outcome. This realization allows us to begin creating a DP solution via a naive brute-force algorithm, ie. an exhaustive search of all possibilities. Therefore, we begin by exploring all possibilities via top-down depth-first-search. Since we know we need to check all possibilities, this gives us key insight towards the N dimensions of the corresponding DP memo which is used to remember the optimal solutions to each overlapping subproblem. This intuition leads us to step 2, but before we move on to step 2, let us first take another moment to consider the next key question.
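As a minimal sketch of step 1 (the function name is my own), here is a top-down brute-force DFS for 322. Coin Change that literally checks all possibilities, and is therefore exponential time:

```python
# Step 1 sketch: exhaustive top-down DFS for 322. Coin Change.
# Returns the fewest coins summing to amount, or -1 if impossible.
def coin_change_brute(coins: list, amount: int) -> int:
    if amount == 0:
        return 0                   # base case: nothing left to make
    if amount < 0:
        return -1                  # invalid input: we overshot the target
    best = -1
    for c in coins:                # check *all* possibilities
        sub = coin_change_brute(coins, amount - c)
        if sub != -1 and (best == -1 or sub + 1 < best):
            best = sub + 1         # keep the minimal objective function outcome
    return best

print(coin_change_brute([1, 2, 5], 11))  # → 3 (5 + 5 + 1)
```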
Ask yourself this question: Is it possible to determine the objective function outcome without solving overlapping subproblems more than once?
See Answer
The answer is obviously "yes." With a properly structured N-dimensional memo we can store the optimal solutions to overlapping subproblems as they are computed, and then look up previous solutions upon demand. This is the entire purpose of the DP memo. Simply remember each previous subproblem's optimal solution to avoid re-calculating it. In fact, this is the raison d'être of dynamic programming: remembering the past to formulate the future, ie. use previous optimal subproblem solutions to formulate current optimal subproblem solutions, which formulate the overall optimal solution for the original problem itself.

Remember each previous subproblem's optimal solution to avoid recomputing it. Combined with the previous "top-down brute-force" solution from step 1, we create the "top-down with memo" solution by simply using a memo to store and look up solutions to previously solved subproblems. Thus a simple if-statement is added to the top of the recursive function to check if a solution to the subproblem is available. If the solution is available, then return it immediately. If the solution is not available, then compute and store the solution once, thus making the solution available for future lookups. The memo is shaped as an arbitrary N-dimensional data structure such that each N-th dimension corresponds to a specific variable of the universe of discourse. Thus, the size of the N-dimensional data structure directly corresponds to the product of the coalesced variables of all possibilities under consideration. And it follows that the keys which uniquely identify each subproblem solution's storage position within the DP memo are all valid permutations of those specific variables per the constraints of the problem statement. The base case(s) of the recurrence relation are added to the memo first. And as the recursive stack unwinds, the base case(s) are iteratively and optimally built upon. This iterative building upon each previous subproblem's optimal solution from the bottom-up leads us to step 3.
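Continuing the 322. Coin Change sketch, step 2 adds the if-statement lookup and the single store described above. Here the memo is 1-dimensional because the only variable of the universe of discourse is the remaining amount:

```python
# Step 2 sketch: "top-down with memo" for 322. Coin Change.
def coin_change_memo(coins: list, amount: int, memo: dict = None) -> int:
    if memo is None:
        memo = {}
    if amount in memo:
        return memo[amount]        # solution available: return it immediately
    if amount == 0:
        return 0                   # base case: nothing left to make
    if amount < 0:
        return -1                  # invalid input: we overshot the target
    best = -1
    for c in coins:
        sub = coin_change_memo(coins, amount - c, memo)
        if sub != -1 and (best == -1 or sub + 1 < best):
            best = sub + 1
    memo[amount] = best            # store once; available for future lookups
    return best

print(coin_change_memo([1, 2, 5], 11))  # → 3
```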
Turn the "top-down with memo" solution upside-down to formulate an explicit bottom-up solution. This step can be challenging because the base case(s) must first be explicitly specified before being iteratively built upon. The previous top-down solutions allow the base case(s) to be implied by the recursion towards them and thus implicitly stored in the memo as each recursive stack "bottoms out" (ie. hits the base case(s) and begins to unwind). To prepare for this implicit-to-explicit transformation, it can be helpful to print the key and value each time a subproblem's optimal solution is stored in the memo by the "top-down with memo" solution, in order to explicitly identify the base case(s) and to clearly understand how the recursive stack unwinds and thus dictates the iterative building of the bottom-up recurrence relation. It can also be helpful to print the entire memo when the memo's dimensions can be easily visualized, ie. within 1 or 2 dimensions.
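Step 3 turns the sketch upside-down: for 322. Coin Change, the base case dp[0] = 0 is now explicitly specified first, then iteratively built upon from the bottom up:

```python
# Step 3 sketch: explicit bottom-up solution for 322. Coin Change.
def coin_change_bottom_up(coins: list, amount: int) -> int:
    INF = float('inf')
    dp = [0] + [INF] * amount       # explicit base case: dp[0] = 0
    for a in range(1, amount + 1):  # iteratively build upon smaller subproblems
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(coin_change_bottom_up([1, 2, 5], 11))  # → 3
```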
It may be possible to further reduce the bottom-up solution's memory consumption. For example, if we have a 2D matrix for the DP memo, but the current row is only dependent upon the previous row, then we can reduce memory from O(N²) to O(N) by replacing "dp[i]" with "cur" (ie. the current row) and "dp[i - 1]" with "pre" (ie. the previous row). Furthermore, if "cur" is only dependent upon itself, then we can also remove "pre". See 322. Coin Change and 518. Coin Change 2 as examples of this supplemental memory optimization, which further reduces the memory by a constant factor, ie. we only need N memory instead of 2 * N memory.
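As a sketch of the "cur"/"pre" row-reduction idea, consider 62. Unique Paths (the function name is my own). The full 2D recurrence is dp[i][j] = dp[i - 1][j] + dp[i][j - 1], and row i depends only on row i - 1, so a single row suffices; moreover, since each cell also reads the already-updated cell to its left, "cur" can serve as its own "pre", removing the second row entirely:

```python
# Row-reduction sketch for 62. Unique Paths:
# the single list "cur" plays both roles of the 2D memo's rows.
def unique_paths(m: int, n: int) -> int:
    cur = [1] * n              # base case: one path to each cell of row 0
    for _ in range(1, m):
        for j in range(1, n):
            # before this assignment, cur[j] still holds dp[i - 1][j] ("pre");
            # after it, cur[j] holds dp[i][j] ("cur")
            cur[j] += cur[j - 1]
    return cur[-1]

print(unique_paths(3, 7))  # → 28
```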
Computer, install a recursive algorithm.
-Ensign Harry Kim, Star Trek Voyager, Episode 102
The ART of DP in 3 steps:
- All possibilities are considered via top-down brute-force depth-first-search
- Remember each subproblem's optimal solution via a DP memo
- Turn the top-down solution upside-down to create the bottom-up solution
It seemed unthinkable for me to leave the world forever before I had produced all that I felt called upon to produce.
- Base Case(s)
- Where the recursive stack "bottoms out" and begins to unwind
- Recurrence Relation Target
- Determine the overall objective function outcome by recursively solving subproblems optimally
- Memo
- Store and retrieve optimal solutions to previously solved subproblems within the N-dimensional memo data structure
- Seen
- Track which unique keys of the N-dimensional memo data structure have been previously seen
- With
- "include this item" concept used for 0-1 knapsack algorithms where we find the optimal solution by either including xor excluding each i-th item
- Without
- "exclude this item" concept used for 0-1 knapsack algorithms where we find the optimal solution by either including xor excluding each i-th item
- Exit Conditions
- We can exit early under non-optimal conditions (ie. depth-first-search pruning) or for invalid inputs (ie. out-of-bounds)
- 5. Longest Palindromic Substring
- 62. Unique Paths
- 72. Edit Distance
- 96. Unique Binary Search Trees
- 139. Word Break
- 140. Word Break II
- 198. House Robber
- 213. House Robber II
- 221. Maximal Square
- 279. Perfect Squares
- 300. Longest Increasing Subsequence
- 322. Coin Change
- 416. Partition Equal Subset Sum
- 518. Coin Change 2
- 673. Number of Longest Increasing Subsequence
- 746. Min Cost Climbing Stairs
- 787. Cheapest Flights Within K Stops
- 799. Champagne Tower
- 877. Stone Game
- 983. Minimum Cost For Tickets
- 1025. Divisor Game
- 1035. Uncrossed Lines
- 1140. Stone Game II
- 1406. Stone Game III
- 1458. Max Dot Product of Two Subsequences
- 1463. Cherry Pickup II
- 1473. Paint House III
- 1510. Stone Game IV
- 42. Trapping Rain Water
- 304. Range Sum Query 2D - Immutable
- 307. Range Sum Query - Mutable
- 1139. Largest 1-Bordered Square
- The ART of Dynamic Programming
- Competitive Programmer's Core Skills by Saint Petersburg State University
- Algorithms by Stanford University
- Algorithms and Data Structures by UC San Diego
- Algorithms for DNA Sequencing
- Master Theorem: Determine the asymptotic bound of recursive algorithms via standard recurrences
- Towers of Hanoi