Greedy algorithms are a class of algorithms for solving optimization problems. At each iteration they make the locally optimal choice, in the hope that these local choices combine into a globally optimal solution. For example, here is a very simple greedy solution to the 0-1 Knapsack Problem.
class Item:
    def __init__(self, value, weight):
        self.value = value
        self.weight = weight

def ZeroOneKnapsack(items, MAX_LOAD):
    loadCarried = 0
    itemsCarried = []
    # Consider the items in descending order of value
    for i in sorted(items, key=lambda it: it.value, reverse=True):
        if loadCarried + i.weight <= MAX_LOAD:
            itemsCarried.append(i)
            loadCarried += i.weight
    return itemsCarried
At every iteration the remaining item of highest value is chosen, until either no items remain or the knapsack is full. Note that this solution is not optimal. For example, consider the set of weights 3, 2, 2 with respective values 3, 2, 2, and a knapsack of capacity 4. The preceding algorithm selects the single item of value 3, whereas an optimal solution selects the two items of value 2 for a total of 4. In fact, no greedy strategy is guaranteed to produce an optimal solution to the 0-1 knapsack problem.
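The counterexample above can be checked directly. The sketch below (function names are illustrative) compares the greedy-by-value strategy against a brute-force enumeration of all subsets, which is feasible for inputs this small:

```python
from itertools import combinations

def greedy_by_value(items, max_load):
    """Greedy: take items in descending order of value while they fit.
    items is a list of (value, weight) pairs."""
    load = 0
    total = 0
    for value, weight in sorted(items, reverse=True):
        if load + weight <= max_load:
            load += weight
            total += value
    return total

def optimal_by_enumeration(items, max_load):
    """Brute force: try every subset (fine for tiny inputs)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= max_load:
                best = max(best, sum(v for v, _ in subset))
    return best

items = [(3, 3), (2, 2), (2, 2)]  # (value, weight) pairs from the example
print(greedy_by_value(items, 4))         # 3: greedy takes only the value-3 item
print(optimal_by_enumeration(items, 4))  # 4: the two value-2 items together
```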
There is, however, a greedy algorithm that solves the continuous knapsack problem, in which the thief is not required to take the entire amount of a selected item. This algorithm sorts the items in order of decreasing value per unit weight, then takes as much as possible of each item in order.
def ContinuousKnapsack(items, MAX_LOAD):
    loadCarried = 0
    knapsack = []  # list of (amount, item) pairs
    # Consider the items in descending order of value per unit weight
    for i in sorted(items, key=lambda it: it.value / it.weight, reverse=True):
        if loadCarried + i.weight <= MAX_LOAD:
            knapsack.append((i.weight, i))  # take the entire item
            loadCarried += i.weight
        else:
            knapsack.append((MAX_LOAD - loadCarried, i))  # take a fraction
            loadCarried = MAX_LOAD
            break
    return knapsack
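To see the continuous version in action, here is a self-contained sketch using plain (value, weight) tuples rather than an item class, run on a common worked example (values 60, 100, 120; weights 10, 20, 30; capacity 50):

```python
def continuous_knapsack(items, max_load):
    """items: list of (value, weight); returns a list of
    (fraction_taken, item) pairs chosen greedily by value per unit weight."""
    load = 0.0
    taken = []
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if load + weight <= max_load:
            taken.append((1.0, (value, weight)))  # take the whole item
            load += weight
        else:
            fraction = (max_load - load) / weight
            if fraction > 0:
                taken.append((fraction, (value, weight)))  # partial item
            load = max_load
            break
    return taken

items = [(60, 10), (100, 20), (120, 30)]
result = continuous_knapsack(items, 50)
total_value = sum(f * v for f, (v, w) in result)
print(total_value)  # 240.0: all of the first two items plus 2/3 of the third
```

Sorting by value per unit weight (60/10 = 6, 100/20 = 5, 120/30 = 4), the algorithm takes the first two items whole and 20/30 of the third, for a value of 60 + 100 + 80 = 240.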
Other greedy algorithms that do produce optimal solutions include Dijkstra's Algorithm and Kruskal's Algorithm. Often, however, a greedy strategy does not yield an optimal algorithm; even then, it frequently produces a close approximation to an optimal solution.
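As a sketch of one of these optimal greedy algorithms, here is a minimal Kruskal's Algorithm in Python (the function names and graph are illustrative). It greedily adds the cheapest remaining edge that joins two different components, using a simple union-find structure to detect cycles:

```python
def kruskal(n, edges):
    """Minimum spanning tree of a connected graph with vertices 0..n-1.
    edges: list of (weight, u, v). Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    tree = []
    for w, u, v in sorted(edges):      # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return total, tree

# Small example graph on 4 vertices
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges)[0])  # 6: edges (0,1), (2,3), (1,2)
```

Although each choice is purely local (take the cheapest safe edge), Kruskal's Algorithm is provably optimal, unlike the greedy 0-1 knapsack strategy above.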