Approximation Algorithms: Knapsack
These are lecture notes from Chandra Chekuri's CS583 course on Approximation Algorithms. Chapter 3: Knapsack.
Other chapters in the series: Chapter 2 (Covering Problems) and Chapter 4 (Packing Problems).
Chapter 3
In this lecture we explore the Knapsack problem. This problem provides a good basis for learning some important procedures used for approximation algorithms that give better solutions at the cost of higher running time.
3.1 The Knapsack Problem
In the Knapsack problem we are given a knapsack capacity $B$ and a set $N$ of $n$ items; each item $i \in N$ has a given size $s_i$ and a profit $p_i$. We will assume that all the input numbers are integers (or more generally rationals). Given a subset of the items $A \subseteq N$, we define two functions, $s(A) = \sum_{i \in A} s_i$ and $p(A) = \sum_{i \in A} p_i$, representing the total size and profit of the group of items respectively. The goal is to choose a subset of the items $A \subseteq N$ such that $s(A) \le B$ and $p(A)$ is maximized. We will assume, without loss of generality, that $s_i \le B$ for all $i$; we can discard items that do not satisfy this constraint.
It is not difficult to see that if all the profits are identical (say 1), the natural greedy algorithm that inserts items in order of non-decreasing size yields an optimal solution. Assuming the profits and sizes are integral, we can still find an optimal solution to the problem using dynamic programming in either $O(nB)$ or $O(nP)$ time, where $P = \sum_i p_i$. These are standard exercises. While these algorithms appear to run in polynomial time, it should be noted that $B$ and $P$ can be exponential in the size of the input written in binary. We call such algorithms pseudo-polynomial time algorithms as their running times are polynomial when the numbers in the input are given in unary. Knapsack is a classical NP-Hard problem and these results (and the proof of NP-Hardness) show that the hardness manifests itself only when the numbers are large: exponential in $n$, which means that the number of bits in the sizes or profits is polynomial in $n$.
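For concreteness, here is a minimal Python sketch of the $O(nB)$ dynamic program; the item representation as (size, profit) pairs and the sample instance are made up for illustration:

```python
def knapsack_dp(items, B):
    """Exact 0/1 Knapsack in O(n*B) time: dp[b] is the best profit
    achievable using capacity at most b."""
    dp = [0] * (B + 1)
    for s, p in items:
        # scan capacities downward so each item is used at most once
        for b in range(B, s - 1, -1):
            dp[b] = max(dp[b], dp[b - s] + p)
    return dp[B]

# a small made-up instance with capacity 10
print(knapsack_dp([(5, 10), (4, 40), (6, 30), (3, 50)], 10))  # -> 90
```

The $O(nP)$ variant instead indexes the table by total profit and stores the minimum size needed to achieve it; the same downward-scan trick applies.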
3.1.1 A Greedy Algorithm
Consider the following greedy algorithm for the Knapsack problem which we will refer to as GreedyKnapsack. We sort all the items by the ratio of their profits to their sizes so that $\frac{p_1}{s_1} \ge \frac{p_2}{s_2} \ge \cdots \ge \frac{p_n}{s_n}$. Afterward, we greedily take items in this order as long as adding an item to our collection does not exceed the capacity of the knapsack. It turns out that this algorithm can be arbitrarily bad. Suppose we only have two items in $N$. Let $s_1 = 1$, $p_1 = 2$, and $s_2 = p_2 = B$. GreedyKnapsack will take only item 1, but taking only item 2 would be a better solution; the ratio of the profits in the two cases is $2/B$, which can be made arbitrarily small. As it turns out, we can easily modify this algorithm to provide a $\frac{1}{2}$-approximation by simply taking the better of GreedyKnapsack's solution and the most profitable single item. We will call this new algorithm ModifiedGreedy.
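Both algorithms can be sketched in a few lines of Python; the (size, profit) pair representation and the concrete capacity below are made up for illustration, with the two-item instance being the bad example just described:

```python
def greedy_knapsack(items, B):
    """GreedyKnapsack: take items in non-increasing profit/size order
    as long as they fit."""
    cap, total = B, 0
    for s, p in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if s <= cap:
            cap -= s
            total += p
    return total

def modified_greedy(items, B):
    """ModifiedGreedy: best of greedy and the most profitable single item."""
    best_single = max(p for s, p in items if s <= B)
    return max(greedy_knapsack(items, B), best_single)

B = 100  # stands in for an arbitrarily large capacity
items = [(1, 2), (B, B)]           # s1 = 1, p1 = 2 and s2 = p2 = B
print(greedy_knapsack(items, B))   # -> 2: arbitrarily bad
print(modified_greedy(items, B))   # -> 100: within a factor 1/2 of OPT
```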
Theorem 3.1. ModifiedGreedy has an approximation ratio of $\frac{1}{2}$ for the Knapsack problem.
Proof. Let $k$ be the index of the first item that is not accepted by GreedyKnapsack. Consider the following claim:
Claim 3.1.1. $p_1 + p_2 + \cdots + p_k \ge \mathrm{OPT}$. In fact, $p_1 + \cdots + p_{k-1} + \alpha p_k \ge \mathrm{OPT}$, where $\alpha \in [0,1)$ is the fraction of item $k$ that can still fit in the knapsack after packing the first $k-1$ items.
The proof of Theorem 3.1 follows immediately from the claim. In particular, either $p_1 + \cdots + p_{k-1}$ or $p_k$ must be at least $\mathrm{OPT}/2$. We now only have to prove Claim 3.1.1. We give an LP relaxation of the Knapsack problem as follows:
$$\max \sum_{i=1}^{n} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{n} s_i x_i \le B, \qquad 0 \le x_i \le 1 \ \text{for all } i.$$
Here, $x_i$ denotes the fraction of item $i$ packed in the knapsack.
Let $\mathrm{OPT}_{LP}$ be the optimal value of the objective function in this linear programming relaxation. Any solution to Knapsack is a feasible solution to the LP and both problems share the same objective function, so $\mathrm{OPT}_{LP} \ge \mathrm{OPT}$. Now set $x_i = 1$ for $i < k$, $x_k = \alpha$, and $x_i = 0$ for all $i > k$. This is a feasible solution to the LP. We leave it as an exercise to the reader to argue that $x$ is an optimum solution to the LP. Therefore, $p_1 + \cdots + p_{k-1} + \alpha p_k = \mathrm{OPT}_{LP} \ge \mathrm{OPT}$. The first statement of the claim follows from the second as $\alpha \le 1$.
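The optimal LP solution described in the proof can be computed directly by the fractional greedy rule; the following Python sketch assumes (size, profit) pairs and reuses the bad example for greedy, where $\mathrm{OPT} = 100$:

```python
def fractional_opt(items, B):
    """Optimum of the LP relaxation: fill greedily by profit/size ratio,
    then take the fitting fraction alpha of the first item that
    does not fully fit."""
    cap, val = B, 0.0
    for s, p in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if s <= cap:
            cap -= s
            val += p
        else:
            val += p * cap / s  # alpha = remaining capacity / s_k
            break
    return val

# on the bad example for greedy: OPT = 100 <= OPT_LP
print(fractional_opt([(1, 2), (100, 100)], 100))  # -> 101.0
```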
3.1.2 A Polynomial Time Approximation Scheme
Using the results from the last section, we make a few simple observations. Some of these lead to a better approximation.
Observation 3.2. If $p_i \le \epsilon \cdot \mathrm{OPT}$ for all $i$, GreedyKnapsack gives a $(1-\epsilon)$-approximation.
Proof. Follows easily from Claim 3.1.1, since $p_1 + \cdots + p_{k-1} \ge \mathrm{OPT} - p_k \ge (1-\epsilon)\mathrm{OPT}$.
Observation 3.3. There are at most $\lfloor 1/\epsilon \rfloor$ items with profit at least $\epsilon \cdot \mathrm{OPT}$ in any optimal solution.
The next claim is perhaps more interesting and captures the intuition that the bad case for greedy happens only when there are "big" items.
Claim 3.1.2. If $s_i \le \epsilon B$ for all $i$, GreedyKnapsack gives a $(1-\epsilon)$-approximation.
Proof. We give a proof sketch via the LP relaxation. Recall that $k$ is the index of the first item that did not fit in the knapsack, and that $\mathrm{OPT}_{LP}$ is the optimum value of the LP relaxation. Suppose we reduce the knapsack capacity to $B' = s_1 + \cdots + s_{k-1}$ while keeping all the items the same. Let $\mathrm{OPT}'_{LP}$ be the LP value for the new capacity. We claim that $\mathrm{OPT}'_{LP} \ge \frac{B'}{B}\mathrm{OPT}_{LP}$; this is because we can take any feasible solution to the original LP and scale each variable by $\frac{B'}{B}$ to obtain a feasible solution with the new capacity. What is $\mathrm{OPT}'_{LP}$? We note that Greedy will fill the new capacity $B'$ exactly with the first $k-1$ items and hence $\mathrm{OPT}'_{LP} = p_1 + \cdots + p_{k-1}$. Combining, we obtain that $p_1 + \cdots + p_{k-1} \ge \frac{B'}{B}\mathrm{OPT}_{LP} \ge \frac{B'}{B}\mathrm{OPT}$.
We note that since item $k$ did not fit, $B' = s_1 + \cdots + s_{k-1} > B - s_k \ge (1-\epsilon)B$, and hence $\frac{B'}{B} \ge 1-\epsilon$. Therefore $p_1 + \cdots + p_{k-1} \ge (1-\epsilon)\mathrm{OPT}$ and this finishes the proof.
We may now describe the following algorithm, which we call GuessGreedy. Let $\epsilon > 0$ be a fixed constant and let $h = \lceil 1/\epsilon \rceil$. We will try to guess the $h$ most profitable items in an optimal solution and pack the rest greedily.
- For each $S \subseteq N$ such that $|S| \le h$ and $s(S) \le B$ do:
  - A. Pack $S$ in the knapsack, using capacity $s(S)$.
  - B. Let $j$ be the least profitable item in $S$. Remove all items $i \notin S$ where $p_i > p_j$.
  - C. Run GreedyKnapsack on the remaining items with the remaining capacity $B - s(S)$.
- Output the best solution found above.
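The steps above can be sketched in Python as follows; the (size, profit) representation, function names, and the sample call are made up for illustration, and the sample instance is the one that fooled plain greedy:

```python
import math
from itertools import combinations

def greedy_fill(items, cap):
    """GreedyKnapsack: pack items in non-increasing profit/size order."""
    total = 0
    for s, p in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if s <= cap:
            cap -= s
            total += p
    return total

def guess_greedy(items, B, eps):
    """Try every candidate set S of at most h = ceil(1/eps) items as the
    guessed most-profitable part of OPT, then fill the rest greedily."""
    h = math.ceil(1 / eps)
    n = len(items)
    best = 0
    for r in range(min(h, n) + 1):
        for idx in combinations(range(n), r):
            S = [items[i] for i in idx]
            sS = sum(s for s, p in S)
            if sS > B:
                continue
            pS = sum(p for s, p in S)
            pmin = min((p for s, p in S), default=float('inf'))
            # step B: drop items more profitable than the cheapest guess
            rest = [items[i] for i in range(n)
                    if i not in idx and items[i][1] <= pmin]
            best = max(best, pS + greedy_fill(rest, B - sS))
    return best

# the instance that fooled plain greedy is now solved exactly
print(guess_greedy([(1, 2), (100, 100)], B=100, eps=0.5))  # -> 100
```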
Theorem 3.4. GuessGreedy gives a $(1-\epsilon)$-approximation and runs in $O(n^{\lceil 1/\epsilon \rceil + 1})$ time.
Proof. For the running time, observe that there are $O(n^h)$ subsets of $N$ of size at most $h$. For each subset, we spend linear time greedily packing the remaining items. The time initially spent sorting the items can be ignored next to the rest of the running time.
For the approximation ratio, consider the run of the loop where $S$ actually is the set of the $h$ most profitable items in an optimal solution, and let $T$ be the set of items packed by the algorithm's greedy stage. Let $T^*$ be the optimal way to pack the smaller items, so that $p(S) + p(T^*) = \mathrm{OPT}$. Let $j'$ be the first item rejected by the greedy packing (if no item is rejected, then $p(T) \ge p(T^*)$ trivially). Every remaining item has profit at most $p_j \le p(S)/h \le \epsilon \cdot \mathrm{OPT}$, so $p_{j'} \le \epsilon \cdot \mathrm{OPT}$, and by Claim 3.1.1, $p(T) \ge p(T^*) - p_{j'}$. This means the total profit found in that run of the loop is $p(S) + p(T) \ge p(S) + p(T^*) - p_{j'} \ge (1-\epsilon)\mathrm{OPT}$.
Note that for any fixed choice of $\epsilon$, the preceding algorithm runs in polynomial time. This type of algorithm is known as a polynomial time approximation scheme or PTAS. The term "scheme" refers to the fact that the algorithm varies with $\epsilon$. We say a maximization problem has a PTAS if for all $\epsilon > 0$, there exists a polynomial time algorithm that gives a $(1-\epsilon)$-approximation (a $(1+\epsilon)$-approximation for minimization problems). In general, one can often find a PTAS for a problem by greedily filling in a solution after first searching for a good basis on which to work. As described below, Knapsack actually admits something stronger known as a fully polynomial time approximation scheme or FPTAS. A maximization problem has an FPTAS if for all $\epsilon > 0$, there exists an algorithm that gives a $(1-\epsilon)$-approximation (a $(1+\epsilon)$-approximation for minimization problems) and runs in time polynomial in both the input size and $1/\epsilon$.
3.1.3 Rounding and Scaling
Earlier we mentioned exact algorithms based on dynamic programming that run in $O(nB)$ and $O(nP)$ time, but noted that $B$ and $P$ may be prohibitively large. If we could somehow decrease one of those quantities to be polynomial in $n$ without losing too much information, we might be able to find an approximation based on one of these algorithms. Let $p_{\max} = \max_i p_i$ and note the following.
Observation 3.5. $\mathrm{OPT} \ge p_{\max}$, since every item fits in the knapsack on its own.
Now, fix some $\epsilon > 0$. We want to scale the profits and round them to be integers so that we may use the $O(nP)$ algorithm efficiently, while still keeping enough information in the numbers to allow for an accurate approximation. For each $i$, let $p_i' = \left\lfloor \frac{n}{\epsilon p_{\max}} p_i \right\rfloor$. Observe that $p_i' \le n/\epsilon$, so the sum of the scaled profits is at most $n^2/\epsilon$. Also, note that we lose at most $n$ from the value of the scaled optimal solution during the rounding, while the scaled optimum is at least $\lfloor n/\epsilon \rfloor$ by Observation 3.5. We have therefore lost at most an $\epsilon$ fraction of the solution. This process of rounding and scaling values for use in exact algorithms is useful in a large number of other maximization problems. We now formally state the algorithm Round&Scale and prove its correctness and running time.
- For each $i \in N$, set $p_i' = \left\lfloor \frac{n}{\epsilon p_{\max}} p_i \right\rfloor$.
- Run the exact algorithm with run time $O(nP')$, where $P' = \sum_i p_i'$, to obtain a solution $A$.
- Output $A$.
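Round&Scale can be sketched in Python on top of the profit-indexed dynamic program; everything here (names, the sample instance, and the choice to return the certified lower bound $\mu \cdot p'(A)$ rather than the item set itself) is made up for illustration:

```python
import math

def max_profit_within(items, B):
    """DP over total profit: dp[q] = min size needed to earn profit q.
    Returns the largest q with dp[q] <= B; runs in O(n * sum of profits)."""
    P = sum(p for s, p in items)
    INF = float('inf')
    dp = [0] + [INF] * P
    for s, p in items:
        for q in range(P, p - 1, -1):
            if dp[q - p] + s < dp[q]:
                dp[q] = dp[q - p] + s
    return max(q for q in range(P + 1) if dp[q] <= B)

def round_and_scale(items, B, eps):
    """Scale profits by mu = eps * pmax / n, floor them, solve exactly.
    Returns mu * p'(A), a lower bound on the profit of the set found,
    which is at least (1 - eps) * OPT by Theorem 3.6."""
    n = len(items)
    pmax = max(p for s, p in items)
    mu = eps * pmax / n
    scaled = [(s, math.floor(p / mu)) for s, p in items]
    return mu * max_profit_within(scaled, B)

items = [(5, 10), (4, 40), (6, 30), (3, 50)]  # OPT = 90 with capacity 10
print(round_and_scale(items, B=10, eps=0.5))  # -> 87.5 >= (1-eps)*90 = 45
```

A production version would keep parent pointers in the DP table to recover the actual item set $A$; the guarantee is on $p(A)$, and $\mu \cdot p'(A)$ merely certifies it.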
Theorem 3.6. Round&Scale gives a $(1-\epsilon)$-approximation and runs in $O(n^3/\epsilon)$ time.
Proof. The rounding can be done in linear time, and as $P' = \sum_i p_i' \le n^2/\epsilon$, the dynamic programming portion of the algorithm runs in $O(nP') = O(n^3/\epsilon)$ time. To show the approximation ratio, let $\mu = \frac{\epsilon p_{\max}}{n}$, let $A$ be the solution returned by the algorithm, and let $A^*$ be an optimal solution. Observe that $\mu p_i' \le p_i \le \mu(p_i' + 1)$ for all $i$, as the rounding lowers each scaled profit by at most 1. The algorithm returns the best solution with respect to the scaled and rounded profits, so we know $p'(A) \ge p'(A^*)$. Therefore,
$$p(A) \ge \mu p'(A) \ge \mu p'(A^*) \ge p(A^*) - n\mu = \mathrm{OPT} - \epsilon p_{\max} \ge (1-\epsilon)\mathrm{OPT},$$
where the last inequality uses Observation 3.5.
It should be noted that this is not the best known FPTAS for Knapsack. In particular, Lawler [1] gives an FPTAS that runs in $O(n \log(1/\epsilon) + 1/\epsilon^4)$ time. There have been several improvements since, and we refer the reader to Chan's paper [2] for the latest.
3.2 Other Problems
There are many variants of Knapsack and it is a fundamental problem of interest in integer programming as well as in several other areas. One can find a book-length treatment in [3]. We close with an interesting variant.
Multiple Knapsack: The input now consists of $m$ knapsacks with capacities $B_1, B_2, \ldots, B_m$ and $n$ items with sizes and profits as in Knapsack. We again wish to pack items so as to obtain as large a profit as possible, except now we have more than one knapsack with which to do so. An interesting special case is when all the knapsack capacities are identical; this case is closely related to the well-known Bin Packing problem.
- Eugene L. Lawler. "Fast approximation algorithms for knapsack problems". In: Mathematics of Operations Research 4.4 (1979), pp. 339–356.
- Timothy M. Chan. "Approximation schemes for 0-1 knapsack". In: 1st Symposium on Simplicity in Algorithms (SOSA 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
- H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack Problems. Springer, 2004. isbn: 9783540402862. url: https://books.google.com/books?id=u5DB7gck08YC.