The two properties of dynamic programming are overlapping subproblems and optimal substructure.

Fahim ul Haq

Sep 19, 2024

15 min read


Mastering dynamic programming and its patterns is like unlocking a secret code to ace your coding interviews. As an interviewer at FAANG and beyond, it's something I wish I could've shared with otherwise great candidates when they got stuck in interviews.

Learning dynamic programming patterns reveals new layers of problem-solving efficiency. I encountered my first dynamic programming problem back in my undergraduate algorithms class. While I was intimidated at first, I was amazed by how patterns could help me easily solve **coding problems that seemed so complex**.

Before you can tackle dynamic programming patterns, it’s crucial to have a solid grasp of **dynamic programming’s fundamental concepts**. These concepts form the backbone of dynamic programming and enable you to recognize which problems can be efficiently solved with it. Therefore, this blog will start by addressing some basic questions:

Why dynamic programming questions are asked in coding interviews

What is dynamic programming?

When you should use dynamic programming

5 key dynamic programming patterns

So, let’s begin!

Imagine you’re building a Lego castle: instead of starting from scratch every time, you use pieces you’ve already put together to speed up the process and make it more efficient. That’s exactly how dynamic programming works in solving problems!

**Dynamic programming (DP)** is a method for solving complex problems by breaking them down into simpler subproblems, solving each of those subproblems just once, and storing their solutions. Then, the key idea is to use these stored solutions to avoid redundant work and efficiently solve the original problem.

While many challenging problems can be solved with dynamic programming, it is not a solution for every complex problem. It is crucial to recognize whether a problem can be solved with DP. To do this, you need to identify whether a problem exhibits the fundamental characteristics of DP. This ensures that we don’t mistake such problems for scenarios better suited to techniques like plain recursion, greedy algorithms, or backtracking.

Most tech companies frequently ask dynamic programming questions in technical interviews because it’s an important and fundamental topic in computer science, especially for roles that require strong problem-solving abilities. They use these questions to comprehensively assess your technical skills, algorithmic understanding, and capacity for optimizing solutions. Because dynamic programming problems are more complex and challenging, solving them correctly can really make you stand out from other candidates.

Interviewers design questions to evaluate how well you understand key dynamic programming concepts, such as **optimal substructure** and **overlapping subproblems**. By testing these concepts, interviewers assess your ability to break down complex problems into manageable subproblems and derive efficient solutions.

Then, they test your proficiency in applying different techniques such as recursion, memoization, and tabulation. This showcases your capability to implement and optimize dynamic programming solutions, demonstrating a deeper understanding beyond theoretical knowledge.

Additionally, dynamic programming questions are crucial to assess your ability to optimize solutions in terms of time and space complexity. You must demonstrate your ability to transform inefficient, brute-force solutions into optimized, efficient algorithms, often reducing time complexity from exponential to polynomial.

Suppose you’re planning meals for a week and want to stay within a specific calorie limit while maximizing nutritional value. You have a list of dishes, each with a calorie count and nutritional value. Your goal is to create a meal plan that gives you the best nutrition without exceeding your daily calorie limit.

To make things easier and more efficient, you can divide the task of planning meals for the entire week into smaller tasks of planning meals for each day. Start by figuring out the best meal plan for one day, then use this information to plan meals for the next day. Continue doing this until you have planned the best meals for the entire week.

If you notice, each day’s optimal meal plan builds on the choices made on previous days. This is an example of the optimal substructure property.

**Optimal substructure** means that the best solution to a problem can be built using the best solutions to its smaller subproblems.

Now, during planning, you might find yourself repeatedly evaluating the same combinations of dishes and calorie limits. By storing the results of these evaluations, you can avoid redundant calculations. For example, once you determine the best way to allocate 1500 calories among a specific set of dishes, you can reuse this solution whenever you encounter that calorie scenario. This is an example of the overlapping subproblems property.

**Overlapping subproblems** occur when smaller subproblems are repeatedly used in solving the overall problem.

In summary, a problem can be solved with DP if it exhibits the optimal substructure and the overlapping subproblems property.

Now that we understand when DP is applicable, the next step is to master its implementation. This involves leveraging two fundamental techniques: memoization and tabulation.

**Memoization** (also known as **top-down**) and **tabulation** (also known as **bottom-up**) are two techniques used in DP to optimize the computation of solutions to problems with overlapping subproblems. Both techniques achieve this by avoiding redundant computations of overlapping subproblems, but they differ in their approach and implementation style.

Let’s have a quick overview:

- **Memoization:** This solves the subproblems of a problem recursively while storing the results in a cache. When needed again, stored solutions are retrieved, avoiding redundant recalculations.
- **Tabulation:** This builds up the solution from the smallest subproblems to the largest. It uses the solutions to smaller subproblems to solve larger ones until it eventually reaches the actual solution.

In summary, memoization focuses on optimizing recursive solutions by storing results, while tabulation focuses on building up solutions iteratively using a table.

Dynamic programming can transform an inefficient solution into an efficient one. To understand this, let’s look at the classic problem of finding the nth Fibonacci number.

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1:

$F(0) = 0$, $F(1) = 1$, and $F(n) = F(n-1) + F(n-2)$ for $n > 1$

Now, the task in this problem is to compute the nth Fibonacci number, $F(n)$.

Before we solve this problem, can you match each statement about Fibonacci numbers with the corresponding DP property?

- The solution to $F(n)$ depends on the solutions to $F(n−1)$ and $F(n−2)$. (**Optimal substructure**)
- Calculating $F(2)$ involves $F(1)$ and $F(0)$, calculating $F(3)$ requires $F(2)$ and $F(1)$, and calculating $F(4)$ requires $F(3)$ and $F(2)$, values that were already computed earlier. (**Overlapping subproblems**)

The first solution that comes to our mind is using a simple recursive function that breaks down the problem into smaller parts. If $n$ is 0 or 1, it returns $n$ directly; otherwise, it returns the sum of the two preceding Fibonacci numbers.


Now, let's look at the code of this recursive solution:

```python
def fibonacci_recursive(n):
    # Base case for n = 0 and n = 1
    if n <= 1:
        return n
    # Otherwise, calculate the Fibonacci number using the recurrence relation
    return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2)

# Example usage
print("F(5) =", fibonacci_recursive(5))    # Output: 5
print("F(10) =", fibonacci_recursive(10))  # Output: 55
```

Fibonacci numbers using recursion

While this method is easy to implement and understand, it has a major drawback: it recalculates the same values multiple times. For instance, to compute $F(5)$, it computes $F(4)$ and $F(3)$; computing $F(4)$ then recomputes $F(3)$ all over again, and so on down the recursion tree. This repeated work gives the solution an exponential time complexity of $O(2^n)$.

This is where the magic of dynamic programming truly shines. It optimizes the naive recursive solution by storing the results of subproblems to avoid redundant calculations.

Let’s first see how memoization works. This solution stores previously calculated results in a dictionary, `memo`, so they can be used again, avoiding redundant calculations and making the process much faster. If the value is already in `memo`, it returns it. Otherwise, it computes the value, stores it, and then returns it.


Now, let’s look at the code for this approach:

```python
# Initialize a memoization dictionary
memo = {}

def fibonacci_memo(n):
    # If n is already computed, return its value from the memo
    if n in memo:
        return memo[n]
    # Base case for n = 0 and n = 1
    if n <= 1:
        memo[n] = n
    else:
        # Recursively calculate Fibonacci numbers and store in the memo
        memo[n] = fibonacci_memo(n - 1) + fibonacci_memo(n - 2)
    # Return the computed Fibonacci number
    return memo[n]

# Example usage
print("F(5) =", fibonacci_memo(5))    # Output: 5
print("F(10) =", fibonacci_memo(10))  # Output: 55
```

Fibonacci numbers using memoization

By storing the results, we avoid redundant calculations, bringing the time complexity down to $O(n)$.

Now, in the tabulation method, a list, `lookup_table`, is used to store Fibonacci values from the base cases up to $F(n)$, building the solution iteratively from the bottom up.


Now, let’s look at the code for this approach:

```python
def fibonacci_tab(n):
    # Base case
    if n <= 1:
        return n
    # Initializing the lookup table
    lookup_table = [0] * (n + 1)
    lookup_table[0] = 0
    lookup_table[1] = 1
    for i in range(2, n + 1):
        # Storing the sum of the two preceding values
        lookup_table[i] = lookup_table[i - 1] + lookup_table[i - 2]
    return lookup_table[n]

# Example usage
print("F(5) =", fibonacci_tab(5))    # Output: 5
print("F(10) =", fibonacci_tab(10))  # Output: 55
```

Fibonacci numbers using tabulation

This approach also has a time complexity of $O(n)$, and like memoization, it uses $O(n)$ space for the stored results.

If you want to appreciate the power of dynamic programming, try computing F(60). The naive recursive solution is likely to time out due to its exponential time complexity, while both DP solutions will compute it efficiently in a fraction of the time.

The long-awaited moment is finally here: let’s explore 5 must-know dynamic programming patterns crucial for your coding interview:

1. 0/1 Knapsack
2. Unbounded Knapsack
3. Recursive Numbers
4. Longest Common Substring
5. Palindromic Subsequence

Now, let’s explore these patterns one by one.

Imagine you are going on a hiking trip with a backpack that can hold up to 7 kg. You have several items you can bring along, each with its own weight and benefits. However, because you have a limited space in your bag, you’ll have to select the items that provide the most benefit. This is a well-known coding challenge, the classic knapsack problem, where you have a

The key concept here is that each item can be picked at most once, regardless of how many of that item type are available: it either goes into your knapsack, or it doesn’t.

The **0/1 Knapsack** pattern builds upon this concept. It focuses on efficiently solving the problem by using a methodical approach to decide whether each item should be included (represented as a “1”) or excluded (represented as a “0”). The objective remains the same: to maximize the combined value of selected items while ensuring the total weight does not exceed the knapsack’s limit.


Some of the common coding problems that can be solved using this pattern are as follows:

Subset Sum

Partition Array Into Two Arrays to Minimize Sum Difference

Minimum Number of Refueling Stops

Count Square Submatrices
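To make the pattern concrete, here is a minimal bottom-up sketch of the 0/1 Knapsack for the hiking scenario above. The item weights and benefit values are hypothetical, chosen only to illustrate the idea; iterating capacities in reverse is what enforces the "at most once" rule.

```python
def knapsack_01(weights, values, capacity):
    # dp[c] = best total value achievable with capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Traverse capacities in reverse so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Hypothetical hiking items: weights in kg and their benefit scores
weights = [1, 3, 4, 5]
values = [1, 4, 5, 7]
print(knapsack_01(weights, values, 7))  # → 9 (pick the 3 kg and 4 kg items)
```

Running time is O(n · capacity), a big improvement over trying all 2^n subsets.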

The **Unbounded Knapsack** pattern expands upon the classic knapsack problem by allowing unlimited quantities of each item to be selected.

Similar to the 0/1 Knapsack, this pattern involves maximizing the total value of selected items without exceeding a specified weight capacity. However, unlike the 0/1 Knapsack, where each item can only be chosen once, in the Unbounded Knapsack, each item can be selected an unlimited number of times.

The pattern requires evaluating different combinations of items and weights, so each combination can be considered as a subproblem. We can store and use the results of these subproblems to reach the actual solution.

Some of the common coding problems that can be solved using this pattern are as follows:

Maximum Ribbon Cut

Rod Cutting
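As a sketch of the Unbounded Knapsack pattern, here is Rod Cutting solved bottom-up: each piece length can be reused any number of times, so unlike the 0/1 version, capacities are traversed forward. The piece lengths and prices are hypothetical.

```python
def knapsack_unbounded(weights, values, capacity):
    # dp[c] = best total value with capacity c, items reusable
    dp = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                # Reusing item (w, v) is allowed, so dp[c - w] may already include it
                dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Rod cutting: hypothetical piece lengths and their prices
lengths = [1, 2, 3]
prices = [2, 5, 7]
print(knapsack_unbounded(lengths, prices, 5))  # → 12 (e.g., cuts of 2 + 2 + 1)
```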

A **recursive number** is a number in a sequence that is derived from preceding numbers based on a recursive relationship. A **recursive relationship** is a way of defining something (like a sequence or function) in terms of itself. In other words, to find the value at a particular point, we use previous values of the same sequence or function. It's like building up the solution step-by-step, where each step depends on the result of previous steps. Imagine a scenario where you need to calculate the nth term of such a sequence, as we did with the Fibonacci numbers earlier.

The **Recursive Numbers** pattern focuses on solving problems where the solution can be built from smaller subproblems using a recursive approach.

Some of the common coding problems that can be solved using this pattern are as follows:

Number Factors

Minimum Jumps to Reach End
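As an illustration of this pattern, consider the classic staircase problem: count the ways to climb n steps taking 1 or 2 steps at a time. Each term is built from the two before it, just like Fibonacci, and keeping only the last two values brings the space down to O(1). This is a minimal sketch, not tied to any specific problem statement above.

```python
def climb_stairs(n):
    # ways(n) = ways(n - 1) + ways(n - 2): the last move was a 1-step or a 2-step
    if n <= 1:
        return 1
    prev2, prev1 = 1, 1  # ways(0) and ways(1)
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1

print(climb_stairs(5))  # → 8
```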

The **longest common substring** refers to the longest contiguous sequence of characters that appears in the same order within two or more given strings. In simpler terms, it's the longest segment of characters that exists in both strings without interruption. Imagine you have two strings:

- **String A:** “ababc”
- **String B:** “babca”

Then, the longest substring that appears in both strings is “abc”. This substring appears consecutively in both strings in the same order, and no longer common substring exists.

The **Longest Common Substring** pattern considers subproblems involving a few characters only (starting from one character only) and then uses the solution to these subproblems to get to the actual solution.

The algorithm in the Longest Common Substring pattern iterates through each character of both strings. It compares characters at corresponding positions: if they match, it updates its tracking to indicate a longer common substring has been found; if they don’t match, it resets its tracking because the substring can’t continue. This iterative process builds toward identifying the longest sequence of characters that appears in the same order within both strings. By leveraging previously computed results stored in a table, this pattern ensures efficiency, avoiding redundant calculations and leading to an optimized solution.

A slight variation of the Longest Common Substring is the **Longest Common Subsequence**, in which the common elements are not required to occupy consecutive positions within the original strings. In either case, the indexes of the common characters must be in strictly increasing order, meaning the indexes taken from each original string must be in ascending order. For example, in the strings “abcde” and “ace”, the longest common subsequence is “ace”, where the indexes in the first string are 0, 2, and 4.

Some of the common coding problems that can be solved using this pattern are as follows:

Shortest Common Supersequence

Minimum Deletions & Insertions

Edit Distance

Longest Alternating Subsequence

Subsequence Pattern Matching
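The table-based approach described above can be sketched for the Longest Common Subsequence variant as follows; this is the standard two-dimensional DP, shown here as an illustration rather than a solution to any specific problem in the list.

```python
def longest_common_subsequence(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of the prefixes s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                # Matching characters extend the common subsequence
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise, drop a character from one string or the other
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(longest_common_subsequence("abcde", "ace"))  # → 3 ("ace")
```

Both time and space are O(m · n); the space can be reduced to O(n) by keeping only the previous row.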

A **palindrome** is a sequence of characters that reads the same forward and backward. A **palindromic subsequence** is a palindrome within a string. The elements in the subsequence are not required to be at consecutive positions in the original string. For instance, if we take a string “bbabcbcab”, there could be many palindromic subsequences of different lengths. Some of them are “bab”, “bb”, “bcb”, “bcbcb”, “babab”, “abcba”, “babcbab”, and so on.

The **Palindromic Subsequence** pattern is built on the concept of the Longest Common Subsequence pattern. It is about finding the longest subsequence within a given string that forms a palindrome.

Some of the common coding problems that can be solved using this pattern are as follows:

Minimum Deletions in a String to make it a Palindrome

Count of Palindromic Substrings

Palindromic Partitioning
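One common way to implement this pattern, sketched below on the string from the example above, is an interval DP over substrings: the answer for `s[i..j]` is built from shorter intervals inside it.

```python
def longest_palindromic_subsequence(s):
    n = len(s)
    if n == 0:
        return 0
    # dp[i][j] = length of the longest palindromic subsequence of s[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        dp[i][i] = 1  # every single character is a palindrome
        for j in range(i + 1, n):
            if s[i] == s[j]:
                # Matching ends wrap around the best answer for the inside
                dp[i][j] = dp[i + 1][j - 1] + 2
            else:
                # Otherwise, drop one end or the other
                dp[i][j] = max(dp[i + 1][j], dp[i][j - 1])
    return dp[0][n - 1]

print(longest_palindromic_subsequence("bbabcbcab"))  # → 7 ("babcbab")
```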

Understanding dynamic programming is a game-changer, allowing you to be a more versatile, efficient developer. Remember, practice makes perfect, so dedicate time to solving problems regularly and seek feedback to improve further.

If you want a guided approach to get hands-on with dynamic programming, you may want to check out Educative's courses below.

Mastering dynamic programming is just one piece of the puzzle. To succeed as a developer and an interview candidate, you'll need to understand and practice other coding patterns as well.

**For dynamic programming and beyond**, I recommend the Grokking Coding Interview Patterns series. Based on real-world interviews at tech giants like Apple, Amazon, and Netflix, this course prepares you to handle 26 coding patterns, including the most common dynamic programming patterns. In practicing these, I hope you will excel in your technical interviews and beyond.

*Happy learning!*


