Introducing Dynamic Programming With Fibonacci Numbers

In this lesson, we'll use a dynamic programming technique called memoization to reduce the time complexity of the Fibonacci function.

In the last lesson, we saw a recursive C++ implementation of a function to calculate the nth Fibonacci number. We also calculated its time complexity by solving a recurrence relation that came out to be O(2^n), i.e. exponential. We are now going to use a dynamic programming technique to reduce the time complexity to linear.
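
As a quick refresher, here is a sketch of that plain recursive version; the exact code in the previous lesson may differ slightly, but the structure is the same:

#include <iostream>

// Plain recursion: every call spawns two more calls, giving O(2^n) growth.
int fib(int n) {
  if (n <= 1)
    return n;                      // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);  // recomputes both subproblems every time
}

int main() {
  std::cout << fib(10) << std::endl;  // prints 55
  return 0;
}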

What is Dynamic Programming?

Dynamic programming algorithms solve problems by combining the results of subproblems, much like divide and conquer algorithms.

“Those who cannot remember the past are condemned to repeat it” – Dynamic Programming

Characteristics

Most problems that can be solved with dynamic programming can also be solved with a divide and conquer approach. The difference is that dynamic programming applies when a problem has overlapping subproblems, i.e. the subproblems of a given problem are not independent; for example, two subproblems may share a sub-subproblem. We saw this in the Fibonacci code from the last lesson, where fib(3) had to be calculated in order to calculate both fib(4) and fib(5).
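
To make the overlap concrete, the short sketch below counts how many times fib(3) is recomputed while evaluating fib(5) with the plain recursive function; the counter is purely illustrative and not part of the algorithm:

#include <iostream>

int callsToFib3 = 0;  // illustrative counter, not part of the algorithm

int fib(int n) {
  if (n == 3)
    ++callsToFib3;     // record every time this subproblem is solved again
  if (n <= 1)
    return n;
  return fib(n - 1) + fib(n - 2);
}

int main() {
  fib(5);
  std::cout << "fib(3) computed " << callsToFib3 << " times" << std::endl;  // prints 2
  return 0;
}

Both fib(5) and fib(4) need fib(3), so the plain recursion solves that subproblem twice; for larger n, this duplication grows exponentially.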

Furthermore, dynamic programming problems have the optimal substructure property, which means that the overall optimal solution to the problem can be constructed from the optimal solutions of its subproblems. For Fibonacci numbers, we know that:

fib(n) = fib(n-1) + fib(n-2)

Hence, a problem of size ‘n’ can be reduced to subproblems of size ‘n-1’ and ‘n-2’. Therefore, Fibonacci numbers have the optimal substructure property.

Dynamic programming algorithms solve every subproblem just once and save the answer in a lookup table, thereby avoiding recalculating the answer every time the subproblem is encountered.

There are two dynamic programming patterns: memoization and tabulation.

Memoization (Top Down)

The memoized version (yes, that is not a typo: it's memoization, not memorization) of a problem is similar to the regular recursive version; the difference is that it looks up the answer to a subproblem in a lookup table before computing it. If the precomputed value exists, it is returned. Otherwise, the value is computed and stored in the lookup table so that it can be reused later.
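
A minimal memoized sketch in C++ could look like the following; the helper name fibMemo and the choice of a vector filled with -1 as the lookup table are illustrative assumptions, not a fixed convention:

#include <iostream>
#include <vector>

// Memoized (top-down) Fibonacci: check the lookup table before recursing.
int fibMemo(int n, std::vector<int>& table) {
  if (n <= 1)
    return n;                  // base cases need no lookup
  if (table[n] != -1)
    return table[n];           // already computed: reuse the stored answer
  table[n] = fibMemo(n - 1, table) + fibMemo(n - 2, table);  // compute once
  return table[n];
}

int fib(int n) {
  std::vector<int> table(n + 1, -1);  // -1 marks "not computed yet"
  return fibMemo(n, table);
}

int main() {
  std::cout << fib(10) << std::endl;  // prints 55, now in linear time
  return 0;
}

Each subproblem is now computed at most once, so the n + 1 distinct subproblems give a running time of O(n).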
