Dynamic Programming

Dynamic Programming is a technique that exploits overlapping subproblems and optimal substructure, trading space for time to improve the runtime complexity of algorithms.

Overview

There are two types of Dynamic Programming: Top-Down and Bottom-Up. The Top-Down method is often called Memoization.

Examples

Fibonacci sequence

The Fibonacci Sequence is defined by a_0 = 0, a_1 = 1, and a_i = a_{i-1} + a_{i-2} for all i ≥ 2. The first few terms are: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

Since it is defined recursively, the natural (but naïve) algorithm would be to simply implement it as:

fib( n )
   if ( n = 0 )
      return 0
   if ( n = 1 )
      return 1
   return fib( n - 1 ) + fib( n - 2 )

Every call that reaches the base case fib(1) contributes exactly 1 to the final result, so computing fib(n) requires at least fib(n) calls. (To be precise, it takes 2 * fib(n + 1) - 1 calls; the proof is left to the reader.) For a modest n = 50, fib(50) is 12,586,269,025, roughly 12.6 billion, so the number of calls is staggering.
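As a concrete illustration, here is a minimal Python sketch of the naive recursion with a call counter bolted on (the counter is not part of the algorithm; it only demonstrates how quickly the number of calls grows):

calls = 0

def fib(n):
    # Count every invocation so we can see the exponential blow-up.
    global calls
    calls += 1
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(20), calls)  # 6765 21891, and 21891 = 2 * fib(21) - 1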

The inefficiency arises because it recalculates the same subproblem more than once.

For example, for fib(5):

fib(5) = fib(3) + fib(4)
 fib(3) = fib(1) + fib(2)
  fib(1) = 1
  fib(2) = fib(1) + fib(0)
   fib(1) = 1
   fib(0) = 0
  fib(2) = 1 + 0 = 1
 fib(3) = 1 + 1 = 2
 fib(4) = fib(3) + fib(2)
  fib(3) = fib(1) + fib(2)
   fib(1) = 1
   fib(2) = fib(1) + fib(0)
    fib(1) = 1
    fib(0) = 0
   fib(2) = 1 + 0 = 1
  fib(3) = 1 + 1 = 2
  fib(2) = fib(1) + fib(0)
   fib(1) = 1
   fib(0) = 0
  fib(2) = 1 + 0 = 1
 fib(4) = 2 + 1 = 3
fib(5) = 3 + 2 = 5

We calculate the same thing over and over again. We can avoid this by saving the result of each calculation when we first perform it and simply returning the saved result when the same calculation is requested later. This is the Top-Down approach, commonly known as memoization:

array map [0...n] = { 0 => 0, 1 => 1 }
fib( n )
   if ( map[ n ] is cached )
      return map[ n ]
   return map[ n ] = fib( n - 1 ) + fib( n - 2 )

Now each subproblem is calculated only once, so the complexity is O(n).
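A minimal Python sketch of the same idea, using a plain dictionary as the cache (the names here are illustrative):

memo = {0: 0, 1: 1}  # base cases are cached up front

def fib(n):
    # Compute a value only the first time it is requested,
    # then answer every later request from the cache.
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(50))  # 12586269025, computed with O(n) recursive calls

The same effect can also be obtained with the functools.lru_cache decorator from the Python standard library.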

In Bottom-Up Dynamic Programming, we start from the smallest cases and store the calculated values in a table for later use, an effective strategy for most problems whose subproblems form a dependency order. This avoids calculating any subproblem twice.

fib( n )
   array map [0...n] = { 0 => 0, 1 => 1 }
   for ( i from 2 to n )
      map[i] = map[i-1] + map[i-2]
   return map[ n ]

We start from the basic cases, the 0th and 1st Fibonacci numbers, and work our way up to larger cases until we reach the n-th Fibonacci number. The complexity of the function is O(n): for n = 50, only about 50 additions are needed using the Dynamic Programming technique, as opposed to the tens of billions of calls made by the naive recursion.
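A minimal Python sketch of the bottom-up version, mirroring the pseudocode above (the table plays the role of map):

def fib(n):
    # Fill the table from the smallest cases upward, so each entry
    # only depends on values that have already been computed.
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(50))  # 12586269025, using about n additions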

Summary

Dynamic Programming (DP) solves all of the smaller subproblems, building towards the larger cases until it reaches the final problem of size n. In the Fibonacci example, DP computed every Fibonacci number up to the n-th.

Once you are given a problem, it is usually a good idea to check if DP is applicable to it.

The second step to solving a problem using DP is to recognize the recursive relationship. The relationship may be straightforward or even pointed out, or it may be hidden and you have to find it. In any case, since you have already determined that it is indeed a DP problem, you should at least have a pretty good idea of the relationship.

Other Dynamic Programming Algorithms and Problems