Related Tags

interview
recursion
memoization
communitycreator

# Learn dynamic programming in 10 minutes!

> Those who cannot remember the past are condemned to repeat it. — George Santayana

Dynamic programming is all about remembering answers to the sub-problems you've already solved, and not solving them again.

## Where do we need dynamic programming?

If you are given a problem that can be broken down into smaller sub-problems, and these sub-problems can be broken down into still smaller ones, and you find that some of them overlap, then you've encountered a DP problem.

The core idea of dynamic programming is to avoid repeated work by remembering partial results.

## Dynamic programming and recursion

• Dynamic programming is basically recursion plus memoization.
• Recursion allows you to express the value of a function in terms of other values of that function.
• If you implement your function in a way that the recursive calls are done in advance and stored for easy access in the future, it will make your program faster.
• This is what we call memoization – it is memorizing the results of some specific states, which can then be later accessed to solve other sub-problems.

The intuition behind dynamic programming is that we trade space for time: instead of recomputing the same states over and over, which costs a lot of time but no extra space, we use extra space to store the results of sub-problems so we can save time later.

Let’s try to understand this by taking an example of Fibonacci numbers.

$\text{Fibonacci}(n) = 1$; if $n = 0$

$\text{Fibonacci}(n) = 1$; if $n = 1$

$\text{Fibonacci}(n) = \text{Fibonacci}(n-1) + \text{Fibonacci}(n-2)$; otherwise

So, the first few numbers in this series will be:

$1, 1, 2, 3, 5, 8, 13, 21, 34...$

A code for it using pure recursion:

```c
int fib(int n) {
    if (n < 2)
        return 1;   /* base cases: fib(0) = fib(1) = 1 */
    return fib(n - 1) + fib(n - 2);
}
```
*Recursion-based Fibonacci series*

Explanation:

• The above code uses the recurrence equation for the Fibonacci series.
• The time complexity of the recursion-based approach is a bit tricky to derive, but it comes out to be around $O(2^{N})$, since each call branches into two further calls.
• The space complexity of this approach is $O(N)$ as the recursion tree can be of depth N in the worst case.

Using the dynamic programming approach with memoization:

```c
void fib(int n, int fib[]) {
    fib[0] = 1;   /* base cases */
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i - 1] + fib[i - 2];
}
```
*Dynamic-programming-based Fibonacci series*

Explanation:

• It does not use recursion; instead, it builds a table of previous results bottom-up, so each value is computed exactly once. Note that each step only reads the last two results, which opens the door to a further space optimization.

• The time complexity of the above code is linear, as the loop runs from $2$ to $n$, i.e., it runs in $O(n)$.

• The above code stores the whole table, so as written it uses $O(N)$ space. However, since each step only needs the previous two values, the array can be replaced by two variables; then the space used is the same for fib(6) or fib(100), i.e., as $N$ changes, the memory used remains the same. With that optimization, the space complexity is $O(1)$ or constant.

## How to identify a DP problem

1. Dynamic Programming is typically applied to optimization problems. In such problems, there can be many possible solutions. Each solution has a value, but we wish to find a solution with the optimal (minimum or maximum) value.
2. All dynamic programming problems satisfy the overlapping sub-problems property, i.e., you would need the solutions to the sub-problems again and again to reach the final solution.


Contributor: Harsh Jain