How dynamic programming rescues crashing recursive algorithms

Key takeaways:

  • Recursive algorithms can lead to stack overflow or excessive recomputation, especially when solving problems like the Fibonacci sequence without optimization.

  • Dynamic programming (DP) optimizes recursive algorithms by storing intermediate results, avoiding redundant calculations, and reducing time complexity.

  • Memoization improves recursive algorithms by caching previously computed results, preventing redundant function calls and reducing time complexity.

  • Unlike memoization, tabulation builds the solution iteratively from the bottom up, ensuring that all subproblems are solved once and in the right order.

In software development, recursive algorithms are fundamental due to their straightforward and elegant design, which simplifies complex problem-solving. Yet, this simplicity can lead to significant drawbacks, such as performance bottlenecks and program crashes from stack overflow errors. Dynamic programming (DP) offers a valuable solution to these challenges, enhancing the performance of recursive algorithms and preventing common issues like crashes.

What is the difference between dynamic programming and recursion?

Recursion is a programming technique where a function calls itself to solve smaller instances of a problem. It typically solves problems by breaking them down into subproblems, solving them, and combining the results. However, recursion can be inefficient due to repeated calculations for the same subproblems.

Dynamic programming (DP), on the other hand, optimizes recursion by storing the results of previously solved subproblems, preventing redundant calculations. It involves solving subproblems in a bottom-up or top-down manner, ensuring each subproblem is solved once and reused when needed.
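As a minimal illustration of the difference, Python's standard functools.lru_cache decorator bolts DP-style caching onto an otherwise plain recursive function. The sketch below is a toy example, not the implementation used later in this article:

from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None keeps every result, i.e., an unbounded cache
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(60))  # answers instantly because each distinct subproblem is computed once

Without the decorator, the same function makes an exponential number of calls; with it, each distinct value of n is computed exactly once.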

Why do recursive algorithms crash?

Recursive algorithms break down a problem into smaller, manageable subproblems, solving each recursively until a base case is reached. This technique uses the call stack, where each recursive call pushes a new frame containing its execution context. However, recursive algorithms can crash, primarily due to stack overflow. This occurs when the recursion depth exceeds the stack’s capacity, as each recursive call consumes a portion of the stack’s limited space. Particularly when the algorithm handles large inputs or lacks a proper base case, the stack fills up rapidly, leading to crashes.
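To see this concretely, the hedged sketch below (countdown is a hypothetical helper, not from this article) drives recursion past CPython's default depth limit of roughly 1000 frames. CPython guards its stack and raises a RecursionError rather than letting the process overflow outright:

import sys

def countdown(n):
    # Each call pushes a new frame onto the call stack until the base case
    if n == 0:
        return 0
    return countdown(n - 1)

print(sys.getrecursionlimit())  # typically 1000 in CPython
print(countdown(500))           # fine: the depth stays within the limit
try:
    countdown(10_000)           # blows past the limit long before reaching 0
except RecursionError as err:
    print("Crashed:", err)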

Let’s look into a problem to understand how a recursive algorithm crashes.

Example: Fibonacci number

Fibonacci numbers are a sequence of numbers where each is the sum of the two preceding numbers. The Fibonacci sequence is defined as follows:

F(0) = 0, F(1) = 1, and F(n) = F(n - 1) + F(n - 2) for n ≥ 2

A few starting terms of this sequence are as follows:

0, 1, 1, 2, 3, 5, 8, 13, 21, ...

Our task is to find the nth Fibonacci number.

Naive recursive approach

A naive approach to finding the nth Fibonacci number is to implement the recurrence relation directly. The result is 0 for the zeroth term, 1 for the first term, and the sum of the two preceding values for all other terms in the sequence.

The visualization below shows a dry run of this algorithm.

[Visualization: Fibonacci numbers using recursion (an 18-step dry run of the recursive call tree)]

It’s evident from the visualization above that the algorithm makes redundant recursive calls to compute the same results. For example, while computing fib(4), it calls fib(0) two times, fib(1) three times, and fib(2) two times. These redundancies compound as n grows, so the number of calls explodes exponentially; this is what makes the naive recursive algorithm grind to a halt on larger inputs, and in problems with deeper recursion it can also overflow the call stack.
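A quick way to verify those counts is to tally every call the naive recursion makes. The hedged sketch below (the calls counter is ours, not part of the article's solution) reproduces the numbers above for fib(4):

from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # tally how many times each subproblem is computed
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(4)
print(dict(calls))  # {4: 1, 3: 1, 2: 2, 1: 3, 0: 2}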

The implementation of this recursive algorithm is given below:

def fib(n):
    # Base case for n = 0 and n = 1
    if n < 2:
        return n
    # Otherwise, calculate the Fibonacci number using the recurrence relation
    return fib(n - 1) + fib(n - 2)

# Driver code
def main():
    inputs = [0, 5, 7, 10, 14]
    # Uncomment the line below to see how this recursive solution times out
    # inputs.append(60)
    for i in range(len(inputs)):
        print(str(i + 1) + ". " + str(inputs[i]) + "th Fibonacci number is:", fib(inputs[i]))
        print("-" * 100, "\n", sep="")

if __name__ == "__main__":
    main()
Fibonacci numbers using recursion

The algorithm above is tested on five test cases: 0, 5, 7, 10, and 14. The solution runs fine on these. Let’s test it on a larger value, e.g., 60, by uncommenting that test case in the code. We observe that the algorithm keeps running and eventually times out: the exponential number of redundant recursive calls makes the computation infeasible, which is how the naive recursive algorithm effectively crashes on larger inputs.

Time complexity

This algorithm has exponential time complexity, specifically O(2^n). This is because every call above the base cases spawns two further recursive calls, so the call tree roughly doubles in size at each level.

Space complexity

As the maximum depth of the recursive call tree is n, and each call stores its data on the stack, the space complexity of this approach is O(n).

Dynamic programming to the rescue

Dynamic programming rescues crashing recursive algorithms by optimizing their performance and managing memory use. It achieves this through the following techniques:

  1. Top-down technique such as memoization

  2. Bottom-up technique such as tabulation

DP stores the results of subproblems, thus avoiding the repetitive computation that cripples pure recursion. By ensuring that each subproblem is solved only once and reusing these precomputed answers, DP reduces the number of recursive calls and the depth of the call stack. This approach not only prevents crashes but also significantly speeds up execution by transforming exponential time complexity into a more manageable polynomial complexity. For a deeper understanding of dynamic programming, check out this blog.

The applicability of dynamic programming depends upon the following two properties:

  1. Optimal substructure: This property means that an optimal solution to a problem can be effectively constructed from optimal solutions of its subproblems. In dynamic programming, this property is crucial because it ensures that the problem can be broken down into smaller, manageable parts, and by solving each part optimally, the solution to the overall problem can be derived.

  2. Overlapping subproblems: This property is present when a recursive algorithm revisits the same subproblems repeatedly. Dynamic programming exploits this property by storing the results of these subproblems in a table. Once a subproblem has been solved, its result is stored and reused, which drastically reduces the computational cost by eliminating the redundant calculations that make naive recursion intractable, as the sketch after this list demonstrates.
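To make the overlapping-subproblems property concrete, here is a hedged sketch (the memo dictionary and hits counter are illustrative names, not from the article's solutions) that counts how many results get reused when computing fib(20). Each cache hit replaces an entire subtree of calls that naive recursion would recompute:

memo = {}
hits = 0

def fib(n):
    global hits
    if n in memo:  # overlapping subproblem: reuse the stored result
        hits += 1
        return memo[n]
    if n < 2:
        result = n
    else:
        # optimal substructure: fib(n) is assembled from its sub-solutions
        result = fib(n - 1) + fib(n - 2)
    memo[n] = result
    return result

print(fib(20))  # 6765
print(hits)     # 18 cached results reused, versus roughly 22,000 calls naively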

Let’s see how DP rescues the crashing recursive algorithms using memoization and tabulation.

Top-down technique: Memoization

The top-down solution, commonly known as the memoization technique, enhances the recursive solution. It overcomes the problem of repeatedly recalculating the same subproblems by storing their results in an array.

The basic idea is to check if the array already contains the result of the Fibonacci number we are trying to find before calculating it. This way, we will reduce the number of recursive calls by using the already-stored result in the array.

The solution of the Fibonacci number problem using memoization is given below:

def fib_memo(n, lookup_table):
    # Base case
    if n < 2:
        return n
    # Check if the result is already present in the table
    if lookup_table[n] != -1:
        return lookup_table[n]
    # Adding an entry to the table if not present
    lookup_table[n] = fib_memo(n - 1, lookup_table) + \
                      fib_memo(n - 2, lookup_table)
    return lookup_table[n]

def fib(n):
    # Initializing the lookup table of size n + 1
    lookup_table = [-1] * (n + 1)
    return fib_memo(n, lookup_table)

# Driver code
def main():
    inputs = [0, 5, 7, 10, 14]
    # Test case to check the effect of dynamic programming using memoization
    inputs.append(60)
    for i in range(len(inputs)):
        print(str(i + 1) + ". " + str(inputs[i]) + "th Fibonacci number is:", fib(inputs[i]))
        print("-" * 100, "\n", sep="")

if __name__ == "__main__":
    main()
Fibonacci numbers using memoization

This time, the large test case (60) is included by default, and the code runs fine, indicating the efficacy of dynamic programming. It is worth mentioning that although memoization is also a recursive technique, it no longer times out because it stores and reuses results. Note, however, that memoized recursion still goes n frames deep, so a sufficiently large n could hit Python’s recursion depth limit, a constraint the iterative tabulation approach below avoids entirely.

Time complexity

With memoization, each subproblem from 0 to n is computed only once; every other call finds its answer already in the lookup table and returns in constant time. The recursion therefore makes O(n) calls in total, so the time complexity is O(n).

Space complexity

The lookup table holds n + 1 entries, and the recursive call stack still reaches a depth of n, so the space required remains O(n).
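To get a feel for the difference in practice, the rough benchmark below pits the naive recursion against the memoized version for n = 30. The exact timings will vary by machine; only the gap between them matters:

import time

def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, table):
    if n < 2:
        return n
    if table[n] != -1:
        return table[n]
    table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]

for label, func in [("naive", lambda n: fib_naive(n)),
                    ("memoized", lambda n: fib_memo(n, [-1] * (n + 1)))]:
    start = time.perf_counter()
    result = func(30)
    elapsed = time.perf_counter() - start
    print(f"{label}: fib(30) = {result} in {elapsed:.4f}s")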

Bottom-up technique: Tabulation

The bottom-up solution, also known as the tabulation technique, is an iterative method of solving dynamic programming problems. The tabulation approach is like filling up a table from the start. Let’s now find the nth Fibonacci number using the bottom-up method.

First, we will initialize a lookup table and set the first two values, followed by setting the values of the rest of the Fibonacci numbers by summing up the previous two values.

Have a look at the visualization below for a clearer picture of what is going on when finding the 5th Fibonacci number:

[Visualization: Fibonacci numbers using tabulation (a 6-step walkthrough of filling the lookup table)]

The solution of the Fibonacci number problem using tabulation is given below:

def fib(n):
    # Base case
    if n < 2:
        return n
    # Initializing the lookup table
    lookup_table = [0] * (n + 1)
    lookup_table[0] = 0
    lookup_table[1] = 1
    for i in range(2, n + 1):
        # Storing the sum of the two preceding values
        lookup_table[i] = lookup_table[i - 1] + lookup_table[i - 2]
    return lookup_table[n]

# Driver code
def main():
    inputs = [0, 5, 7, 10, 14]
    # Test case to check the effect of dynamic programming using tabulation
    inputs.append(60)
    for i in range(len(inputs)):
        print(str(i + 1) + ". " + str(inputs[i]) + "th Fibonacci number is:", fib(inputs[i]))
        print("-" * 100, "\n", sep="")

if __name__ == "__main__":
    main()
Fibonacci numbers using tabulation

Again, the large test case (60) is included, and the code gives the desired output.

Time complexity

The time complexity of this code is O(n), since the for loop runs n - 1 times and does constant work per iteration.

Space complexity

The lookup table of n + 1 entries means the above solution occupies O(n) space in memory.
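As an aside, the loop above only ever reads the two most recent table entries, so the same idea can run in constant space. The variant below is a hedged optimization sketch, not part of the article's canonical solution:

def fib(n):
    if n < 2:
        return n
    prev, curr = 0, 1  # fib(0) and fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # slide the two-value window forward
    return curr

print(fib(60))  # 1548008755920, using O(1) extra space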

A quick comparison

The table below provides a quick comparison of the three approaches—naive, memoization, and tabulation—highlighting their time and space complexities.

| Approach | Does the Algorithm Crash or Time Out? | Time Complexity | Space Complexity |
| --- | --- | --- | --- |
| Naive recursive approach | Yes | O(2^n) | O(n) |
| Top-down technique: Memoization | No | O(n) | O(n) |
| Bottom-up technique: Tabulation | No | O(n) | O(n) |

Conclusion

Dynamic programming offers a powerful solution to the common pitfalls of recursive algorithms, such as excessive recomputation and inefficient memory use. By storing intermediate results, techniques like memoization and tabulation can transform a slow, crash-prone recursive approach into an efficient solution.

For those eager to dive deeper, our "Grokking Dynamic Programming Interview" course provides a comprehensive guide to mastering various dynamic programming patterns.

Frequently asked questions



What is the main advantage of dynamic programming?

The primary advantage of dynamic programming is its ability to significantly improve efficiency by reducing time complexity, especially for complex problems. Unlike naive approaches, dynamic programming ensures optimal solutions by solving subproblems once and reusing the results. This makes it highly effective for problems that exhibit optimal substructure and overlapping subproblems.


Does dynamic programming reduce time complexity?

Dynamic programming is a powerful technique that helps solve complex problems by breaking them down into smaller subproblems and reusing previously computed results. While it significantly reduces time complexity by avoiding redundant calculations, it can also increase space complexity if not properly optimized. Proper implementation is key to balancing time and space efficiency.


What is the big O of recursion?

The Big O runtime of a recursive function depends on the number of recursive calls made. This value varies based on the complexity of the recursive algorithm. For instance, if a recursive function with input size N makes N recursive calls, its time complexity would be O(N). The overall complexity can increase if there are nested or overlapping recursive calls.
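As a tiny illustration, a function that walks a list one element per call makes exactly N recursive calls for an input of size N, giving O(N) time. The sum_list helper below is a hypothetical example, not from this article:

def sum_list(values, i=0):
    # One recursive call per element: N calls for input size N
    if i == len(values):
        return 0
    return values[i] + sum_list(values, i + 1)

print(sum_list([3, 1, 4, 1, 5]))  # 14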


What is dynamic programming mainly used to solve?

Dynamic programming is primarily used to solve optimization problems, where the goal is to find the minimum or maximum solution. It guarantees an optimal solution, provided one exists, by systematically breaking down the problem into smaller subproblems and reusing their solutions.


What are the disadvantages of dynamic programming?

While dynamic programming is highly efficient for problems with overlapping subproblems by eliminating redundant calculations, it does come with trade-offs. One major disadvantage is that it can consume a lot of memory, as it stores the results of all subproblems—even those that may not be needed. This can lead to increased space complexity, especially for large-scale problems.


What are some real-life applications of dynamic programming?

Dynamic programming is widely used in various fields, including graph theory, game theory, artificial intelligence, machine learning, economics, and finance. It is also applied in bioinformatics and is commonly used to calculate the shortest path, such as in GPS navigation systems. DP helps solve complex optimization problems across these domains efficiently.

