Dynamic programming is applied to solve optimization problems, in which we try to find the maximum or minimum of something. It finds the optimal solution to a problem if one exists; if no solution exists, dynamic programming cannot produce one.

Optimization problems are those that require the lowest or highest possible result. In dynamic programming we explore the possible solutions and then choose the best one. Dynamic programming problems are solved using recursive formulas; even when we do not use recursion in the program itself, the procedures are recursive. Dynamic programming follows the principle of optimality.

Dynamic programming involves the following significant steps:

Dividing the complex problem into simpler subproblems.

Finding the optimal solution to these subproblems.

Storing the results of the subproblems.

Reusing those results so that the same subproblem is not calculated more than once.

Finally, combining the results to compute the answer to the complex problem.

The process of storing the results of intermediate subproblems is called **memoization**.

Suppose we have a complex problem, and on the way to the final answer we solve one part of it. Later, we realize we need that part again. Are we going to solve it again? No — we simply look back at the answer we already got, rather than walking through all the steps of solving it again. That would be a waste of time, and it is hard to imagine anyone doing it. When we program, our aim is to do the same thing wherever possible, and that is where **memoization** comes in.

Memoization is a top-down, depth-first optimization technique that stores previously computed results so that they do not need to be re-computed. It is an optimization procedure used in several programming languages to reduce unwanted, expensive function calls. It is performed by caching the return value of a function based on its inputs. Its purpose is to speed up operations, and it can be done automatically or handcrafted manually.

Memoization is a technique for storing values returned by a function to avoid redoing computation that has already been performed. This technique is beneficial when a function is called regularly and its evaluation is expensive. In real life, memoization becomes extremely important when you are developing software such as custom asset tracking and live tracking, because in those cases we call a function repeatedly and each computation is expensive.

This technique trades space for time: it uses additional memory to save computation time. Occasionally, it can transform a slow function into a quick one. Memoization is essentially a cache. The first time the memoized function is called with a given input, it does all the work; if we call the same function several times with the same arguments, memoization helps. However, if we always pass different arguments, caching will not help at all.

**Working**

Memoization works by breaking a complex problem down into several subproblems. The memoized algorithm then checks whether the result of a subproblem is already stored in a table, which could be an array, a hash, or a map. If the result is in the table, the algorithm simply takes it from there. If the table does not contain the result, the algorithm computes it and enters the value into the table.
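The lookup-then-store pattern just described can be sketched as a small wrapper. This is a minimal illustration, not code from the original text: the names `memoize` and `slowSquare` are hypothetical, and a plain object stands in for the table.

```javascript
// Wrap any single-argument function with a lookup table.
function memoize(fn) {
  const table = {}; // the table could equally be an array or a Map
  return function (x) {
    if (x in table) {
      return table[x]; // result already stored: take it from the table
    }
    const result = fn(x); // not in the table: compute it...
    table[x] = result;    // ...and enter the value into the table
    return result;
  };
}

// A stand-in for an expensive computation.
const slowSquare = memoize(function (x) {
  return x * x;
});

console.log(slowSquare(4)); // computed, then stored: 16
console.log(slowSquare(4)); // looked up from the table: 16
```

The second call with the same argument never reaches the wrapped function; it is answered straight from the table.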

**Recursion and Memoization**

In the Fibonacci sequence, the xth term is equal to the (x minus 1)th term plus the (x minus 2)th term. Consider the sequence 1, 1, 2, 3, 5, 8, 13, 21. To put it simply, 8, the sixth term, is equal to the fifth term, which is 5, plus the fourth term, which is 3: 5 plus 3 equals 8. Moving on to the eighth term, it is equal to the seventh term plus the sixth term: 13 plus 8 equals 21.

Now we will define a function 'fib' and give it one parameter, 'x'. We start with a base case, which returns a constant value: if x is less than or equal to 2, we return a value of 1. If it is not less than or equal to 2, we return the Fibonacci of x minus 1 plus the Fibonacci of x minus 2.

```
function fib(x) {
  if (x <= 2) {
    return 1;
  } else {
    return fib(x - 1) + fib(x - 2);
  }
}
```

Now we call the function with x equal to 4. Since 4 is not less than or equal to 2, control falls through to the recursive case: we need the Fibonacci of 3 plus the Fibonacci of 2. For the Fibonacci of 3 we call the function again; since 3 is not less than or equal to 2, we move to the recursive case and get the Fibonacci of 2 plus the Fibonacci of 1. Both of those hit the base case, so that is 1 plus 1, which is 2. That completes the Fibonacci of 3 — only the first part of the call on 4. Next comes the Fibonacci of 2, because 4 minus 2 is 2; that hits the base case, so it is 1. We already calculated the first part, which is 2, and 2 plus 1 equals 3.

`console.log(fib(4))`

We get output quickly for a small number, but for a larger number the program takes forever. This is a problem with recursion at a fundamental level: we are constantly recalculating answers, which is why the running time blows up for larger numbers. Every repeated addition uses up the processor and makes the answer take longer to calculate. It is highly inefficient. So we will write a version that is far more efficient, taking linear time rather than exponential time. We start by declaring 'nums' equal to empty curly braces. Then we make a new function, fib1, with parameter 'x', and start with the base case as before: if x is less than or equal to 2, we return a value of 1. Since we do not need to recalculate an answer we have already calculated, we next check whether we have previously computed that answer.

```
var nums = {};
function fib1(x) {
  if (x <= 2) {
    return 1;
  }
  if (x in nums) {
    return nums[x];
  } else {
    var num = fib1(x - 1) + fib1(x - 2);
    nums[x] = num;
    return num;
  }
}
```
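With the memoized version, even a large input returns almost instantly, whereas the plain recursive fib would take exponential time. Here is a small demonstration — the definition is repeated so the snippet runs on its own, and the inputs chosen are just examples:

```javascript
// Memoized Fibonacci: each value is computed once and stored in `nums`.
var nums = {};
function fib1(x) {
  if (x <= 2) {
    return 1;
  }
  if (x in nums) {
    return nums[x]; // already computed: reuse the stored value
  } else {
    var num = fib1(x - 1) + fib1(x - 2);
    nums[x] = num; // store the result before returning
    return num;
  }
}

console.log(fib1(10)); // 55
// fib1(50) finishes immediately; the naive fib(50) would make
// billions of recursive calls.
console.log(fib1(50)); // 12586269025
```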

**Fibonacci Series**

Following is a simple function that calculates the nth number of the Fibonacci series. In this example, we have a function called fibonacci, which takes a number as its only argument, and inside the function we have two conditions. The first: if the number is less than or equal to zero, we return zero. The second: if the number is equal to one, we return one. As a recursive function, it calls itself repeatedly, so we call the fibonacci function, passing num minus 1 and num minus 2 as the arguments to our two recursive calls. Since we want the value of the series at that position, we add the two results and return the sum. Running the following code gives 5 as output.

```
function fibonacci(num) {
  if (num <= 0) {
    return 0;
  }
  if (num == 1) {
    return 1;
  }
  return fibonacci(num - 1) + fibonacci(num - 2);
}
console.log(fibonacci(5));
```

Output: 5

**Example**

Let's take an example of the Fibonacci series to understand memoization. In this series, we calculate the next number by adding the previous two. We take one function, 'fib', and pass it the value x, because we want to find the xth number of the Fibonacci series. If the number is negative, it prints an error message. If the value of x is equal to zero, it returns zero, and if the value of x is equal to one, it returns one.

Suppose we pass 5 to the fib function. The first three conditions above are not satisfied, so control reaches the sum condition. There, fib(5) divides into two parts, fib(4) and fib(3), and we keep dividing these subproblems until we reach fib(0) and fib(1); the sums are then returned back up. In total, the fib function is called 15 times.

When we use a plain recursive function, the time complexity and the call count grow exponentially with x, because whenever the same subproblem recurs, a whole recursive tree is generated for it again. Using the concept of memoization, we compute each repeated value only once by storing it: we store the intermediate results and then reuse them, so we do not need to compute the subproblems again and again.

Now, we will take an array to store the values and name it "memo". This array contains six slots, indexed 0 to 5, and is initialized with -1, which indicates an empty entry.

Before executing the function body, we check whether the value for x already exists in the memo; if so, we return the memo value, and otherwise we call the function. If we call fib(1), that entry is still empty in the memo, so we call the function, return 1 at the condition where x equals 1, and write 1 at index 1 in the memo. For fib(0), index 0 in the memo is initially empty, so after calling the function we store 0 at index 0. For fib(2), we store its value, 1, at index 2 in the memo. For fib(3), we need fib(1), but we already have the value 1 at index 1, so we do not call fib(1) again; we just take its stored value, and fib(3) stores 2. For fib(4), we store 3. For fib(5), we need fib(3) and fib(4), but fib(3) already has a value in the memo, so we do not call it again, and 5 is stored at index 5. Throughout this whole memoization process, we have avoided the repeated function calls. This memoization technique is also known as the top-down approach because we start from the top and divide the problem into different subproblems.
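The walkthrough above can be sketched in code. The -1 sentinel, the memo array of six slots, and the negative-input check follow the text; the exact implementation details are an assumption:

```javascript
// Memo array for n = 5: indexes 0 through 5, -1 meaning "not computed yet".
var memo = [-1, -1, -1, -1, -1, -1];

function fib(x) {
  if (x < 0) {
    throw new Error("negative input is not allowed");
  }
  if (memo[x] !== -1) {
    return memo[x]; // value already in the memo: no repeated call
  }
  var result;
  if (x === 0) {
    result = 0;
  } else if (x === 1) {
    result = 1;
  } else {
    result = fib(x - 1) + fib(x - 2);
  }
  memo[x] = result; // store at index x before returning
  return result;
}

console.log(fib(5)); // 5
console.log(memo);   // [0, 1, 1, 2, 3, 5]
```

After the call, every slot of the memo holds its Fibonacci value, and no subproblem was computed twice.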

**Big-O-Notation**

The big-O complexity of the Fibonacci function without memoization is O(2^n), while with memoization it is O(n). In the graph of algorithm complexity against the input size n below, the yellow line represents the version without memoization, and the green, linear line represents the version with memoization. That linear growth makes a significant difference in how fast the program can run as the input grows. For very small inputs memoization is not strictly necessary — you probably do not have to write all of that — but if the input gets large, you should consider using it where applicable.
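Counting recursive calls makes the difference between the two complexity classes concrete. This is an illustrative experiment, not from the original text; the counter names are made up:

```javascript
// Naive recursion: every subproblem is recomputed.
let naiveCalls = 0;
function fibNaive(x) {
  naiveCalls++;
  return x <= 2 ? 1 : fibNaive(x - 1) + fibNaive(x - 2);
}

// Memoized recursion: every subproblem is computed once.
let memoCalls = 0;
const table = {};
function fibMemo(x) {
  memoCalls++;
  if (x <= 2) return 1;
  if (x in table) return table[x];
  return (table[x] = fibMemo(x - 1) + fibMemo(x - 2));
}

fibNaive(20);
fibMemo(20);
console.log(naiveCalls); // thousands of calls: grows roughly like 2^n
console.log(memoCalls);  // a few dozen calls: grows linearly in n
```

Even at n = 20 the naive version already makes thousands of calls, while the memoized version stays in the dozens; the gap widens exponentially as n grows.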

**Graph**

**Drawbacks and Limitations**

Memoization has the following drawbacks and limitations:

It sacrifices space for speed. All the computed results have to be stored somewhere, just as when we solve the problem by hand and keep our intermediate answers.

It is used primarily for pure functions, where we expect the same input to always return the same output. Suppose a function gives different results when passed the same input; we probably cannot use memoization for it, because looking up the table would not make sense — the correct result could be a different value each time. The output has to be the same for the same input.
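To illustrate the pure-function limitation, here is a hypothetical impure function; the name `rollDie` is made up for this example:

```javascript
// Impure: the result depends on hidden randomness, not only on the input,
// so the same input gives different outputs on different calls.
function rollDie(sides) {
  return Math.floor(Math.random() * sides) + 1;
}

// If we cached rollDie(6) the first time, every later "roll" would return
// the same stale number — the table lookup would be meaningless here.
console.log(rollDie(6));
console.log(rollDie(6));
```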