When do we use dynamic programming?

Many computational problems are solved by recursively applying a divide-and-conquer approach. In some of these problems, we see an optimal substructure, i.e., the solution to a smaller problem helps us solve the bigger one.

Let's consider the following problem as an example: is the string "rotator" a palindrome? We can start by observing that the first and the last characters match, so the string might be a palindrome. We shave off these two characters and try to answer the same question for the smaller string "otato". The subproblems we encounter are "rotator", "otato", "tat", and "a". If the answer to any subproblem is no, we know that the overall string is not a palindrome; if the answer is yes for all of the subproblems, then we know that the overall string is indeed a palindrome.
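
Here's a minimal recursive sketch of this check in Python (the function name and the use of string slicing are our own illustrative choices):

```python
def is_palindrome(s: str) -> bool:
    # Base case: an empty or single-character string is a palindrome.
    if len(s) <= 1:
        return True
    # If the outer characters differ, the answer for the whole string is no.
    if s[0] != s[-1]:
        return False
    # Otherwise, the answer is the answer to the smaller subproblem.
    return is_palindrome(s[1:-1])

print(is_palindrome("rotator"))  # True
print(is_palindrome("rotate"))   # False
```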

While each subproblem in this recursive solution is distinct, there are many problems whose recursive solution involves solving some subproblems over and over again. An example is the recursive computation of the nth Fibonacci number. Let's review the definition of the Fibonacci series:

fib(0) = 0

fib(1) = 1

fib(n) = fib(n-1) + fib(n-2), for n > 1
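
Translated directly into Python, this definition gives the naive recursive solution (a sketch, with an arbitrary function name):

```python
def fib(n: int) -> int:
    # Base cases: fib(0) = 0 and fib(1) = 1.
    if n < 2:
        return n
    # Each call spawns two further calls, so the call tree grows exponentially.
    return fib(n - 1) + fib(n - 2)

print(fib(4))  # 3
```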

Here's the call tree for the naive recursive solution to this problem, with n = 4.

We can see above that the subproblem fib(2) is evaluated twice, fib(1) is evaluated thrice, and fib(0) is evaluated twice. These are repeated or overlapping subproblems. As every subproblem is reevaluated every time it appears, the naive recursive implementation of this solution will have exponential time complexity.

An optimization of such a recursive solution is to store and reuse the solutions to subproblems, reducing the time complexity from exponential to polynomial. Such an approach is called dynamic programming (DP).

Defining characteristics of dynamic programming problems

It's useful to formally define the characteristics of problems that can be addressed using DP approaches:

Optimal substructure: If a given problem can be broken down into smaller subproblems, and optimal solutions to these subproblems exist and contribute to the optimal solution of the given problem, then the problem is said to have an optimal substructure.

Overlapping subproblems: If the solution to the overall problem requires solving some of the subproblems repeatedly, the problem is said to feature overlapping subproblems and is a good candidate for optimization using DP.

We may use dynamic programming to optimize a solution only if it has both of these properties.

Examples

A few examples of recursive solutions optimized using DP include computing Fibonacci numbers, as discussed above, and finding the longest increasing subsequence, which we return to later in this section.

When not to use dynamic programming?

At this point, it might seem that we've found the answer to all of our hardest optimization problems. However, it's important to understand that DP is not a silver bullet. Specifically, there are two scenarios where we shouldn't use DP.

Recursion does not translate directly to DP

It's worth stating explicitly that not all recursive solutions can be optimized using DP. If none of the subproblems actually repeat, there is simply no room for improvement, and therefore no need for DP. We saw an example at the start where we discussed using a recursive technique to check if a given string of characters is a palindrome. As each of the subproblems is distinct (in fact, each of them involves a string that is shorter by two characters), we are not recomputing anything and, therefore, the solution cannot be further optimized from that perspective.

Greedy techniques vs. backtracking vs. DP

There's one more thing to take into account when deciding whether to use DP: is it possible to solve the problem using a greedy technique? A greedy technique is one where we break up the overall problem into multiple subproblems and then find the optimal solution to each subproblem individually, without reference to the overall problem. While this approach leads to the globally optimal solution of some problems, in others there is a risk of getting stuck in a locally optimal solution and missing the globally optimal one. However, if the locally optimal solution is considered good enough, or if the cost of an exhaustive search of the solution space is prohibitive, this may be an acceptable risk.

These are some problems that are best tackled using greedy techniques:

  • Minimum spanning tree: In a large LAN with many switches, finding a minimum spanning tree is important to ensure that only the minimum number of packets will be transmitted across the network. We typically use a greedy technique (Kruskal's algorithm) to find the minimum spanning tree.

  • Shortest path: In many kinds of networks (for example, a road network or a social network), we need to find the shortest path between two nodes. A well-known algorithm to solve this problem is Dijkstra's algorithm, which takes a greedy approach to make its decisions (a sketch follows this list).
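
To make the greedy pattern concrete, here is a sketch of Dijkstra's algorithm in Python, run on a small, made-up road network; at every step, it greedily settles the closest node whose distance is not yet final:

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest-path distances from source in a graph given as
    {node: [(neighbor, edge_weight), ...]} with non-negative weights."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]  # (distance, node) pairs, ordered by distance
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale entry; a shorter path was found earlier
        # Greedy step: node's distance is now final, so relax its edges.
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                dist[neighbor] = d + weight
                heapq.heappush(heap, (dist[neighbor], neighbor))
    return dist

# A hypothetical road network with travel times as edge weights.
roads = {"A": [("B", 4), ("C", 1)],
         "B": [("D", 1)],
         "C": [("B", 2), ("D", 5)],
         "D": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```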

However, there are certain problems where systematically selecting the locally optimal solution at every point can be shown to result in a suboptimal global solution. For example, in the Longest Increasing Subsequence (LIS) problem, we need to find the longest subsequence of a given array in which the elements are sorted in strictly increasing order. In the array [1, 5, 2, 3, 4, 10], there are multiple increasing subsequences, for example: [1, 5, 10], [1, 2, 3, 4], and [2, 3, 4, 10]. By inspection, we see that the LIS here is [1, 2, 3, 4, 10].

The greedy technique here would require that, as long as the current element is greater than the previous element in the subsequence, we must include it in the subsequence we're building. Therefore, after picking 1, we would have to pick 5, and would not be able to accept any further element until we reached 10. The resulting subsequence, [1, 5, 10], is indeed an increasing subsequence, but it isn't the longest we could have found. On the other hand, if our algorithm exhaustively checked all possible solutions, it would also evaluate what would happen if the current element were dropped. Such an algorithm would evaluate all the increasing subsequences we get when we skip 5, allowing us to find the actual LIS.
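
To see the difference concretely, here is a sketch in Python contrasting the greedy strategy described above with a standard O(n²) DP solution for the LIS length (the function names are ours):

```python
def lis_greedy(nums: list) -> list:
    """Greedy: keep every element larger than the last one kept."""
    picked = []
    for x in nums:
        if not picked or x > picked[-1]:
            picked.append(x)
    return picked

def lis_length_dp(nums: list) -> int:
    """DP: length[i] is the length of the LIS ending at index i."""
    if not nums:
        return 0
    length = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            # Extend any increasing subsequence that ends below nums[i].
            if nums[j] < nums[i]:
                length[i] = max(length[i], length[j] + 1)
    return max(length)

nums = [1, 5, 2, 3, 4, 10]
print(lis_greedy(nums))     # [1, 5, 10] -- increasing, but not the longest
print(lis_length_dp(nums))  # 5, the length of [1, 2, 3, 4, 10]
```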

If we absolutely require the globally optimal solution, and we can show that a greedy technique will cause us to miss it in some cases (as in the counter-example discussed above), we would need to exhaustively search through the solution space. For this purpose, a technique known as backtracking may be used.

Some problems that can be solved using backtracking are:

  • N-Queens: Find the number of ways to safely place n queens on an n × n chessboard (a sketch follows this list).

  • Sudoku Solver: Given a valid Sudoku board, solve it.
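
To illustrate the backtracking pattern, here is a sketch of an N-Queens counter in Python; it places one queen per row, tracks attacked columns and diagonals in sets, and undoes each placement after exploring it:

```python
def count_n_queens(n: int) -> int:
    """Count the safe placements of n queens on an n x n board."""
    def place(row: int, cols: set, diag1: set, diag2: set) -> int:
        if row == n:
            return 1  # all rows filled: one complete, valid placement
        count = 0
        for col in range(n):
            # A square is unsafe if a queen shares its column or a diagonal.
            if col in cols or row - col in diag1 or row + col in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            count += place(row + 1, cols, diag1, diag2)
            # Backtrack: undo the placement and try the next column.
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
        return count
    return place(0, set(), set(), set())

print(count_n_queens(4))  # 2
print(count_n_queens(8))  # 92
```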

Ways to implement dynamic programming

Now that we have discussed when greedy techniques suffice, when backtracking is needed, and when a solution may be optimized using DP, let's look into the two approaches used to implement DP:

Top-down approach

The top-down approach is also known as memoization. It is usually implemented as an enhancement of the naive recursive solution. It uses recursion to break down larger subproblems into smaller ones. The smallest one is solved and the result is stored in a lookup table for use in computing larger subproblems.

To take advantage of the stored results of subproblems, in every call, the top-down recursive function first checks if a solution to a subproblem already exists. If it does, the result is fetched from the lookup table instead of making a recursive call to compute it. Otherwise, the recursive call is made. As we can see in the illustration below, using the top-down approach, we are able to avoid recomputing the subproblems fib(2), fib(1), and fib(0).
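
Here is a sketch of the top-down Fibonacci solution in Python; the memo dictionary plays the role of the lookup table:

```python
def fib_top_down(n: int, memo: dict = None) -> int:
    if memo is None:
        memo = {}
    # First, check the lookup table and reuse a stored result if one exists.
    if n in memo:
        return memo[n]
    if n < 2:
        return n  # base cases: fib(0) = 0 and fib(1) = 1
    # Otherwise, make the recursive calls and store the result before returning.
    memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]

print(fib_top_down(50))  # 12586269025, computed almost instantly
```

In Python, the decorator functools.lru_cache achieves the same memoization without modifying the body of the naive recursive function.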

Compared to the naive recursive solution, the top-down approach takes up additional space in memory because it stores intermediate results in a lookup table.

Bottom-up approach

The bottom-up approach is also known as tabulation. In this approach, the smallest problem is solved first, the results saved, and then larger subproblems are computed based on the evaluated results. In contrast to the top-down approach, which uses recursion to first break down a larger problem into smaller subproblems, the bottom-up approach starts by solving the smallest subproblem, and then iterates progressively through larger subproblems to reach the overall solution.

We start by initializing a lookup table and setting up the values of the base cases. For every subsequent, larger subproblem, we simply fetch the results of the required preceding smaller subproblems and use them to get the solution to the current subproblem.

For example, to compute the Fibonacci series, we first set up the two base cases, fib(0) and fib(1), and then proceed to calculate the larger subproblems:

fib(2) = fib(1) + fib(0)

and

fib(3) = fib(2) + fib(1)

and so on.

The slides below present a walkthrough of the bottom-up DP solution to the Fibonacci numbers problem:

As we can see, we solve each subproblem only once. Further, in contrast to the top-down solution, the bottom-up solution is typically implemented using iteration, thereby avoiding the use of space on the machine's call stack. With very large problem sizes, the recursive call stack may actually exceed the maximum depth of recursion supported by the machine. In such situations, the top-down solution has to be discarded in favor of the bottom-up approach.
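
Here is a sketch of the bottom-up Fibonacci solution in Python:

```python
def fib_bottom_up(n: int) -> int:
    if n < 2:
        return n
    # Lookup table initialized with the two base cases.
    table = [0] * (n + 1)
    table[1] = 1
    # Each larger subproblem is computed from the two preceding entries.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(50))  # 12586269025, with no recursion at all
```

Since each entry depends only on the two preceding ones, the table could even be reduced to two variables, shrinking the extra space from O(n) to O(1).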

Real-world problems

Many problems in the real world can be solved using DP patterns. Let’s look at some examples:

  • Resource allocation: Allocate a number of resources with varying costs and productivity to complete a task within a specified time.

  • Filling a box: Given an unlimited supply of tennis balls of varying sizes and prices, fill a box of limited capacity so as to maximize the total price of the balls it holds (a DP sketch follows this list). The solutions to this problem, and to its many variations, find application in industries such as paper, steel, and plastic film rolling, as well as in logistics and shipping.

  • Generating a sequence of numbers: Generate a sequence of numbers based on some constraints. This is a common element of games based on chance, and is used in simulations of different kinds as well.

  • Plagiarism detection: Detecting similarities between two texts in order to decide whether one has been plagiarized from the other.

  • Assisting programmers with parentheses: Detecting missing parentheses and identifying the solution with the minimum number of deletion and insertion operations.
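
The box-filling problem above is a variant of the unbounded knapsack problem. Here is a bottom-up DP sketch in Python, using made-up ball sizes and prices:

```python
def max_price(capacity: int, sizes: list, prices: list) -> int:
    """Unbounded knapsack: best[c] is the maximum total price
    achievable with a box of capacity c."""
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for size, price in zip(sizes, prices):
            if size <= c:
                # Either skip this ball type, or place one ball and
                # fill the remaining capacity optimally.
                best[c] = max(best[c], best[c - size] + price)
    return best[capacity]

# Hypothetical ball sizes and prices.
print(max_price(10, sizes=[3, 4, 7], prices=[4, 6, 11]))  # 15 (one size-7 and one size-3 ball)
```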

Match the problem

Match each problem to the problem-solving technique best suited to solve it.


Problems:

  • Given a 2-D grid of letters, as in Boggle, find a given word in the grid, such that all the selected letters in the grid are adjacent to each other.

  • Check if an array of integers can be broken up into two parts, such that the integers in both parts sum up to the same number.

  • Find the minimum number of barges needed to unload containers from a cargo ship, while respecting the maximum load-bearing capacity of each barge and placing no more than two containers on any barge.

Techniques:

  • Greedy techniques

  • Backtracking

  • Dynamic programming


Let's now dive into the course and explore the beauty and power of dynamic programming.