Approximation Algorithms: Introduction to Network Design

These are lecture notes from Chandra Chekuri's CS583 course on Approximation Algorithms. Chapter 10: Introduction to Network Design.
Chapter 10
(Parts of this chapter are based on previous scribed lecture notes by Nitish Korula and Sungjin Im.)
Network Design is a broad topic that deals with finding a subgraph $H$ of a given graph $G=(V,E)$ of minimum cost while satisfying certain requirements. $G$ represents an existing network or a constraint over where one can build. The subgraph $H$ is what we want to select/build. Many natural problems can be viewed this way. For instance, the minimum spanning tree (MST) problem can be viewed as follows: given an undirected graph $G=(V,E)$ with edge costs $c: E \rightarrow \mathbb{R}_{+}$, find the cheapest connected spanning subgraph of $G$ (spanning means that all vertices are included). The fact that a minimal solution is a tree is clear, but the point is that the motivation does not explicitly mention the requirement that the output be a tree.
Connectivity problems are a large part of network design. As we already saw, MST is the most basic one and can be solved in polynomial time. The Steiner Tree problem is a generalization where we are given a subset $S$ of terminals in an edge-weighted graph $G=(V,E)$ and the goal is to find a cheapest connected subgraph that contains all terminals. This is NP-Hard. The Traveling Salesman Problem (TSP) and its variants can also be viewed as network design problems. Network design is heavily motivated by real-world problems in telecommunication networks; those problems combine aspects of connectivity and routing, and in this context there are several problems related to buy-at-bulk network design, fixed-charge flow problems, etc.
Graph theory plays an important role in most network algorithmic questions. The complexity and nature of the problems vary substantially based on whether the graph is undirected or directed. To illustrate this, consider the Directed Steiner Tree problem. Here $G=(V,E)$ is a directed graph with non-negative edge/arc weights, and we are given a root $r$ and a set of terminals $S \subseteq V$. The goal is to find a cheapest subgraph $H$ of $G$ such that $r$ has a path to each terminal $t \in S$. Note that when $S=V$ the problem is the minimum-cost arborescence problem and is solvable in polynomial time. One can see that Directed Steiner Tree is a generalization of the (undirected) Steiner Tree problem. While the Steiner Tree problem admits a constant factor approximation, it is easy to show that Directed Steiner Tree is at least as hard as Set Cover. This immediately implies that it is hard to approximate to within an $\Omega(\log |S|)$ factor. In fact, substantial technical work has shown that it is harder than Set Cover [1], while we only have a quasi-polynomial time algorithm that gives a poly-logarithmic approximation [2]. In even slightly more general settings, directed graph problems get much harder than their undirected counterparts. This phenomenon is generally true in many graph-related problems, and hence the literature mostly focuses on undirected graphs. We refer the reader to some surveys on network design [3],[4],[5].
This chapter will focus on two basic problems which are extensively studied and describe some simple approximation algorithms for them. Pointers are provided for sophisticated results including some very recent ones.

10.1 The Steiner Tree Problem

In the Steiner Tree problem, the input is a graph $G(V,E)$, together with a set of terminals $S \subseteq V$, and a cost $c(e)$ for each edge $e \in E$. The goal is to find a minimum-cost tree that connects all terminals, where the cost of a subgraph is the sum of the costs of its edges.
The Steiner Tree problem is NP-Hard, and also APX-Hard [6]. The latter means that there is a constant $\delta>1$ such that it is NP-Hard to approximate the solution to within a ratio of less than $\delta$; it is currently known that it is hard to approximate the Steiner Tree problem to within a ratio of 95/94 [7],[8].
Remark 10.1. If $|S|=2$ (that is, there are only 2 terminals), an optimal Steiner Tree is simply a shortest path between these 2 terminals. If $S=V$ (that is, all vertices are terminals), an optimal solution is simply a minimum spanning tree of the input graph. In both these cases, the problem can be solved exactly in polynomial time.
Remark 10.2. There is a $c^{k}\,\mathrm{poly}(n)$-time algorithm where $k$ is the number of terminals. Can you figure it out?
Definition 10.1. Given a connected graph $G(V,E)$ with edge costs, the metric completion of $G$ is a complete graph $H(V,E')$ such that for each $u,v \in V$, the cost of edge $uv$ in $H$ is the cost of the shortest path in $G$ from $u$ to $v$. The graph $H$ with edge costs is a metric on $V$, because the edge costs satisfy the triangle inequality: $\forall u,v,w,\ \mathrm{cost}(uv) \leq \mathrm{cost}(uw)+\mathrm{cost}(wv)$.
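As a concrete illustration of Definition 10.1, the metric completion can be computed by all-pairs shortest paths. A minimal sketch (the vertex-index and edge-list conventions are our own, not from the notes):

```python
import itertools

def metric_completion(n, edges):
    """All-pairs shortest paths (Floyd-Warshall) give the metric completion.
    edges: list of (u, v, cost) with vertices 0..n-1."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, c in edges:
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    # Relax through every intermediate vertex k.
    for k, i, j in itertools.product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

By construction the returned matrix satisfies the triangle inequality, as in the definition.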

Figure 10.1: On the left, a graph. On the right, its metric completion, with new edges and modified edge costs in red.

Observation 10.2. To solve the Steiner Tree problem on a graph $G$, it suffices to solve it on the metric completion of $G$.
We now look at two approximation algorithms for the Steiner Tree problem.

10.1.1 The MST Algorithm

The following algorithm, with an approximation ratio of $(2-2/|S|)$, is due to [9]:

SteinerMST$(G(V,E), S \subseteq V)$:
    Let $H(V,E') \leftarrow$ metric completion of $G$.
    Let $T \leftarrow$ MST of $H[S]$.
    Output $T$.

(Here, we use the notation $H[S]$ to denote the subgraph of $H$ induced by the set of terminals $S$.)
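The two steps above (metric completion, then an MST on the terminals) can be sketched directly; this minimal version returns only the cost, and uses Prim's algorithm on the induced metric (the input format is our own assumption):

```python
import itertools

def steiner_mst_cost(n, edges, terminals):
    """SteinerMST sketch: metric completion, then the cost of an MST on the
    terminal set S only -- a 2(1 - 1/|S|)-approximation."""
    INF = float("inf")
    # Metric completion via Floyd-Warshall.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, c in edges:
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    for k, i, j in itertools.product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    # Prim's algorithm restricted to H[S].
    S = list(terminals)
    cost = 0.0
    best = {t: d[S[0]][t] for t in S[1:]}
    while best:
        t = min(best, key=best.get)       # cheapest terminal to attach
        cost += best.pop(t)
        for u in best:
            best[u] = min(best[u], d[t][u])
    return cost
```

On the tight example of Fig. 10.4 (a center joined to each terminal by cost-1 edges), the metric completion puts pairwise distance 2 between terminals, and the algorithm returns $2(|S|-1)$ while the optimum is $|S|$.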
The following lemma is central to the analysis of the algorithm SteinerMST.
Lemma 10.1. For any instance $I$ of Steiner Tree, let $H$ denote the metric completion of the graph, and $S$ the set of terminals. There exists a spanning tree in $H[S]$ (the graph induced by terminals) of cost at most $2(1-\frac{1}{|S|})\mathrm{OPT}$, where $\mathrm{OPT}$ is the cost of an optimal solution to instance $I$.

Figure 10.2: Illustrating the MST Heuristic for Steiner Tree

Before we prove the lemma, we note that if there exists some spanning tree in $H[S]$ of cost at most $2(1-\frac{1}{|S|})\mathrm{OPT}$, the minimum spanning tree has at most this cost. Therefore, Lemma 10.1 implies that the algorithm SteinerMST is a $2(1-\frac{1}{|S|})$-approximation for the Steiner Tree problem.
Proof of Lemma 10.1. Let $T^{*}$ denote an optimal solution in $H$ to the given instance, with cost $c(T^{*})$. Double all the edges of $T^{*}$ to obtain an Eulerian graph, and fix an Eulerian tour $W$ of this graph. See Fig 10.3. Now, shortcut edges of $W$ to obtain a tour $W'$ of the vertices in $T^{*}$ in which each vertex is visited exactly once. Again, shortcut edges of $W'$ to eliminate all non-terminals; this gives a walk $W''$ that visits each terminal exactly once.
Figure 10.3: Doubling edges of $T^{*}$ and shortcutting gives a low-cost spanning tree on terminals.
It is easy to see that $c(W'') \leq c(W') \leq c(W) = 2c(T^{*})$, where the inequalities follow from the fact that shortcutting can only decrease the length of the walk. (Recall that we are working in the metric completion $H$.) Now, delete the heaviest edge of $W''$ to obtain a path through all the terminals in $S$, of cost at most $(1-\frac{1}{|S|})c(W'')$. This path is a spanning tree of the terminals, and contains only terminals; therefore, there exists a spanning tree in $H[S]$ of cost at most $2(1-\frac{1}{|S|})c(T^{*})$.
A tight example: The following example (Fig. 10.4 below) shows that this analysis is tight; there are instances of Steiner Tree where the SteinerMST algorithm finds a tree of cost $2(1-\frac{1}{|S|})\mathrm{OPT}$. Here, each pair of terminals is connected by an edge of cost 2, and each terminal is connected to the central non-terminal by an edge of cost 1. The optimal tree is a star centered at the central non-terminal, with edges to all the terminals; it has cost $|S|$. However, the only trees in $H[S]$ are formed by taking $|S|-1$ edges of cost 2; they have cost $2(|S|-1)$.

Figure 10.4: A tight example for the SteinerMST algorithm

10.1.2 The Greedy/Online Algorithm

We now describe another simple algorithm for the Steiner Tree problem, due to [10].
GreedySteiner$(G(V,E), S \subseteq V)$:
    Let $\{s_{1}, s_{2}, \ldots, s_{|S|}\}$ be an arbitrary ordering of the terminals.
    Let $T \leftarrow \{s_{1}\}$.
    For $i$ from 2 to $|S|$:
        Let $P_{i}$ be the shortest path in $G$ from $s_{i}$ to $T$.
        Add $P_{i}$ to $T$.
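The loop above can be sketched with a multi-source Dijkstra seeded at every vertex of the current tree; this finds the shortest path from $s_i$ to $T$ in one search. The adjacency-list format is our own assumption:

```python
import heapq

def greedy_steiner(adj, order):
    """GreedySteiner sketch: terminals arrive in the given order; each new
    terminal is joined to the current tree by a shortest path.
    adj: {vertex: [(neighbor, cost), ...]} for an undirected graph."""
    tree = {order[0]}
    total = 0.0
    for s in order[1:]:
        # Multi-source Dijkstra from the whole current tree.
        dist = {v: 0.0 for v in tree}
        parent = {}
        pq = [(0.0, v) for v in tree]
        heapq.heapify(pq)
        done = set()
        while pq:
            d, u = heapq.heappop(pq)
            if u in done:
                continue
            done.add(u)
            if u == s:
                break
            for v, c in adj[u]:
                if d + c < dist.get(v, float("inf")):
                    dist[v] = d + c
                    parent[v] = u
                    heapq.heappush(pq, (d + c, v))
        total += dist[s]
        # Splice the connecting path's vertices into the tree.
        path = [s]
        while path[-1] not in tree:
            path.append(parent[path[-1]])
        tree.update(path)
    return total, tree
```

Note that intermediate non-terminals on the path join the tree too, so later terminals may connect to them cheaply.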
GreedySteiner is a $\lceil\log_{2}|S|\rceil$-approximation algorithm; here, we prove a slightly weaker result.
Theorem 10.3. The algorithm GreedySteiner has an approximation ratio of $2H_{|S|} \approx 2\ln|S|$, where $H_{i}=\sum_{j=1}^{i} 1/j$ denotes the $i$-th harmonic number.
Note that this is an online algorithm; terminals are considered in an arbitrary order, and when a terminal is considered, it is immediately connected to the existing tree. Thus, even if the algorithm could not see the entire input at once, but instead terminals were revealed one at a time and the algorithm had to produce a Steiner tree at each stage, GreedySteiner outputs a tree of cost no more than $O(\log|S|)$ times the cost of the optimal tree.
To prove Theorem 10.3, we introduce some notation. Let $c(i)$ denote the cost of the path $P_{i}$ used in the $i$-th iteration to connect the terminal $s_{i}$ to the already existing tree. Clearly, the total cost of the tree is $\sum_{i=1}^{|S|} c(i)$. Now, let $\{i_{1}, i_{2}, \ldots, i_{|S|}\}$ be a permutation of $\{1,2,\ldots,|S|\}$ such that $c(i_{1}) \geq c(i_{2}) \geq \ldots \geq c(i_{|S|})$. (That is, relabel the terminals in decreasing order of the cost paid to connect them to the tree that exists when they are considered by the algorithm.)
Claim 10.1.1. For all $j$, the cost $c(i_{j})$ is at most $2\,\mathrm{OPT}/j$, where $\mathrm{OPT}$ is the cost of an optimal solution to the given instance.
Proof. Suppose by way of contradiction this were not true; since $s_{i_{j}}$ is the terminal with the $j$-th highest cost of connection, there must be $j$ terminals that each pay more than $2\,\mathrm{OPT}/j$ to connect to the tree that exists when they are considered. Let $S' = \{s_{i_{1}}, s_{i_{2}}, \ldots, s_{i_{j}}\}$ denote this set of terminals.
We argue that no two terminals in $S' \cup \{s_{1}\}$ are within distance $2\,\mathrm{OPT}/j$ of each other. If some pair $x, y$ were within this distance, one of these terminals (say $y$) must be considered later by the algorithm than the other. But then the cost of connecting $y$ to the already existing tree (which includes $x$) must be at most $2\,\mathrm{OPT}/j$, and we have a contradiction.
Therefore, the minimum distance between any two terminals in $S' \cup \{s_{1}\}$ must be greater than $2\,\mathrm{OPT}/j$. Since $S' \cup \{s_{1}\}$ contains $j+1$ vertices, any MST of these terminals has $j$ edges, and hence must have cost greater than $2\,\mathrm{OPT}$. But the MST of a subset of terminals cannot have cost more than $2\,\mathrm{OPT}$, exactly as argued in the proof of Lemma 10.1. Therefore, we obtain a contradiction.
Given this claim, it is easy to prove Theorem 10.3:
$$\sum_{i=1}^{|S|} c(i)=\sum_{j=1}^{|S|} c(i_{j}) \leq \sum_{j=1}^{|S|} \frac{2\,\mathrm{OPT}}{j}=2\,\mathrm{OPT} \sum_{j=1}^{|S|} \frac{1}{j}=2 H_{|S|} \cdot \mathrm{OPT}.$$
Question 10.2. Give an example of a graph and an ordering of terminals such that the output of the Greedy algorithm is $\Omega(\log|S|)\,\mathrm{OPT}$.
Remark 10.3. We emphasize again that the analysis above holds for every ordering of the terminals. A natural variant is to adaptively order the terminals so that in each iteration $i$, the algorithm picks the terminal $s_{i}$ to be the one closest to the tree $T$ built in the first $i-1$ iterations. Do you see that this is equivalent to using the MST Heuristic with Prim's algorithm for MST? This illustrates the need to be careful in the design and analysis of heuristics.

10.1.3 LP Relaxation

A natural LP relaxation for the Steiner Tree problem is the following. For each edge $e \in E$ we have an indicator variable $x_{e}$ to decide if we choose to include $e$ in our solution. The chosen edges should ensure that no two terminals are separated. We write this via a constraint $\sum_{e \in \delta(A)} x_{e} \geq 1$ for any set $A \subset V$ such that $A$ contains a terminal and $V \setminus A$ contains a terminal.
$$\begin{aligned} \min \sum_{e \in E} c_{e} x_{e} & \\ \sum_{e \in \delta(A)} x_{e} &\geq 1 \quad A \cap S \neq \emptyset,\ (V \setminus A) \cap S \neq \emptyset \\ x_{e} &\geq 0 \quad e \in E \end{aligned}$$
Note that the preceding LP has an exponential number of constraints. However, there is a polynomial-time separation oracle: given $x$, it is feasible for the LP iff the $s$-$t$ cut value is at least 1 between every pair of terminals $s, t \in S$ with edge capacities given by $x$. How good is this LP relaxation? We will see later that there is a $2(1-1/|S|)$-approximation via this LP. Interestingly, the LP has an integrality gap of $2(1-1/|S|)$ even when $S=V$, in which case we want to solve the MST problem! Despite the weakness of this cut-based LP for these simple cases, we will see later that it generalizes nicely to higher connectivity problems, and one can derive a 2-approximation even for those much more difficult problems.
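The separation oracle can be sketched with any max-flow routine; by the cut argument it suffices to check one fixed terminal against all the others. A compact Edmonds-Karp version (a toy implementation of ours, not from the notes):

```python
from collections import deque

def max_flow(n, cap_edges, s, t):
    """Edmonds-Karp max-flow. cap_edges: {(u, v): capacity}, undirected."""
    cap = [[0.0] * n for _ in range(n)]
    for (u, v), c in cap_edges.items():
        cap[u][v] += c
        cap[v][u] += c
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        par = [-1] * n
        par[s] = s
        q = deque([s])
        while q and par[t] == -1:
            u = q.popleft()
            for v in range(n):
                if par[v] == -1 and cap[u][v] > 1e-12:
                    par[v] = u
                    q.append(v)
        if par[t] == -1:
            return flow
        # Find the bottleneck, then augment along the path.
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[par[v]][v])
            v = par[v]
        v = t
        while v != s:
            cap[par[v]][v] -= b
            cap[v][par[v]] += b
            v = par[v]
        flow += b

def lp_feasible(n, x, terminals):
    """Separation sketch: x is LP-feasible iff every terminal has min-cut
    value at least 1 from a fixed terminal, under capacities x."""
    s = terminals[0]
    return all(max_flow(n, dict(x), s, t) >= 1 - 1e-9 for t in terminals[1:])
```

Any violated cut $A$ separates the fixed terminal from some other terminal, which is why one root suffices.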

10.1.4 Other Results on Steiner Trees

The 2-approximation algorithm using the MST Heuristic is not the best approximation algorithm for the Steiner Tree problem currently known. Some other results on this problem are listed below.
  1. The first algorithm to obtain a ratio better than 2 was due to Alexander Zelikovsky [11]; the approximation ratio of this algorithm was $11/6 \approx 1.83$. This was improved to $1+\frac{\ln 3}{2} \approx 1.55$ [12]; the improved algorithm is based on local search starting from the MST heuristic, and follows the original approach of Zelikovsky.
  2. Byrka et al. gave an algorithm with an approximation ratio of $\ln 4+\epsilon \approx 1.39$ [13], which is currently the best known for this problem. This was originally based on a combination of techniques; subsequently an LP-based proof [14] achieved the same approximation for the so-called Hypergraphic LP relaxation.
  3. The bidirected cut LP relaxation for the Steiner Tree problem was proposed by [15]; it has an integrality gap of at most $2(1-\frac{1}{|S|})$, but it is conjectured that the gap is smaller. No algorithm is currently known that exploits this LP relaxation to obtain an approximation ratio better than that of the SteinerMST algorithm. Though the true integrality gap is not known, there are examples that show it is at least $6/5 = 1.2$ [16].
  4. For many applications, the vertices can be modeled as points on the plane, where the distance between them is simply the Euclidean distance. The MST-based algorithm performs fairly well on such instances; it has an approximation ratio of $2/\sqrt{3} \approx 1.15$ [17]. An example which achieves this bound is three points at the corners of an equilateral triangle, say of side length 1; the MST heuristic outputs a tree of cost 2 (two sides of the triangle) while the optimum solution is to connect the three points to a Steiner vertex at the center of the triangle, for a total cost of $\sqrt{3}$. One can do better still for instances in the plane (or in any Euclidean space of small dimension); for any $\epsilon>0$, there is a $(1+\epsilon)$-approximation algorithm that runs in polynomial time [18]. Such an approximation scheme is also known for planar graphs [19] and more generally bounded-genus graphs.

10.2 The Traveling Salesperson Problem (TSP)

10.2.1 TSP in Undirected Graphs

In the Traveling Salesperson Problem (TSP), we are given an undirected graph $G=(V,E)$ and a cost $c(e)>0$ for each edge $e \in E$. Our goal is to find a Hamiltonian cycle of minimum cost. A cycle is said to be Hamiltonian if it visits every vertex in $V$ exactly once.
TSP is known to be NP-Hard. Moreover, we cannot hope to find a good approximation algorithm for it unless $P=NP$. This is because if one could find a good approximate solution to TSP in polynomial time, then one could exactly solve the NP-Complete Hamiltonian cycle problem (HAM) in polynomial time, which is not possible unless $P=NP$. Recall that HAM is a decision problem: given a graph $G=(V,E)$, does $G$ have a Hamiltonian cycle?
Theorem 10.4 ([20]). Let $\alpha: \mathbb{N} \rightarrow \mathbb{N}$ be a polynomial-time computable function. Unless $P=NP$, there is no polynomial-time algorithm that on every instance $I$ of TSP outputs a solution of cost at most $\alpha(|I|) \cdot \mathrm{OPT}(I)$.
Proof. For the sake of contradiction, suppose we have an approximation algorithm $\mathcal{A}$ for TSP with approximation ratio $\alpha(|I|)$. We show a contradiction by showing that, using $\mathcal{A}$, we can exactly solve HAM in polynomial time. Let $G=(V,E)$ be the given instance of HAM. We create a new complete graph $H=(V,E')$ with cost $c(e)$ for each $e \in E'$ such that $c(e)=1$ if $e \in E$, and otherwise $c(e)=B$, where $B=n\alpha(n)+2$ and $n=|V|$. Note that this is a polynomial-time reduction since $\alpha$ is a polynomial-time computable function.
We observe that if $G$ has a Hamiltonian cycle, $\mathrm{OPT}=n$; otherwise $\mathrm{OPT} \geq n-1+B \geq n\alpha(n)+1$. (Here, $\mathrm{OPT}$ denotes the cost of an optimal TSP solution in $H$.) Note that there is a "gap" between when $G$ has a Hamiltonian cycle and when it does not. Thus, if $\mathcal{A}$ has an approximation ratio of $\alpha(n)$, we can tell whether $G$ has a Hamiltonian cycle: simply run $\mathcal{A}$ on the graph $H$; if $\mathcal{A}$ returns a TSP tour in $H$ of cost at most $\alpha(n)\,n$, output that $G$ has a Hamiltonian cycle, otherwise output that $G$ has no Hamiltonian cycle. We leave it as an exercise to formally verify that this solves HAM in polynomial time.
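The gap construction in the proof is mechanical; a small sketch (the function name and input format are ours):

```python
def ham_to_tsp(n, edges, alpha_of_n):
    """Build the complete weighted graph H from a HAM instance G:
    edges of G cost 1, non-edges cost B = n * alpha(n) + 2, so any tour
    using even one non-edge costs at least n - 1 + B > alpha(n) * n."""
    B = n * alpha_of_n + 2
    present = {frozenset(e) for e in edges}
    cost = {(u, v): (1 if frozenset((u, v)) in present else B)
            for u in range(n) for v in range(u + 1, n)}
    return cost, B
```

For example, for a 4-cycle and $\alpha(n)=2$ we get $B = 4\cdot 2 + 2 = 10$, and the "no" side of the gap is $n-1+B = 13 > \alpha(n)\,n = 8$.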
Since we cannot even approximate the general TSP problem, we consider more tractable variants.
  • Metric-TSP: In Metric-TSP, the instance is a complete graph $G=(V,E)$ with cost $c(e)$ on each $e \in E$, where $c$ satisfies the triangle inequality, i.e., $c(uw) \leq c(uv)+c(vw)$ for all $u,v,w \in V$.
  • TSP-R: TSP with repetitions of vertices allowed. The input is a graph $G=(V,E)$ with non-negative edge costs as in TSP. Now we seek a minimum-cost walk that visits each vertex at least once and returns to the starting vertex.
Exercise 10.1. Show that an $\alpha$-approximation for Metric-TSP implies an $\alpha$-approximation for TSP-R, and vice-versa.
We focus on Metric-TSP for the rest of this section. We first consider a natural greedy approach, the Nearest Neighbor Heuristic (NNH).
Nearest Neighbor Heuristic$(G(V,E), c: E \rightarrow \mathbb{R}^{+})$:
    Start at an arbitrary vertex $s$.
    While (there are unvisited vertices):
        From the current vertex $u$, go to the nearest unvisited vertex $v$.
    Return to $s$.
Exercise 10.2. 1.Prove that NNH is an O ( log n ) O ( log n ) O(log n)O(\log n)-approximation algorithm. (Hint: Think back to the proof of the 2 H | S | 2 H | S | 2H_(|S|)2 H_{|S|}-approximation for the Greedy Steiner Tree Algorithm.)
  1. NNH is not an O O OO (1)-approximation algorithm; can you find an example to show this? In fact one can show a lower bound of Ω ( log n ) Ω ( log n ) Omega(log n)\Omega(\log n) on the approximation-ratio achieved by NNH.
There are constant-factor approximation algorithms for Metric-TSP; we now consider an MST-based algorithm. See Fig 10.5.
TSP-MST$(G(V,E), c: E \rightarrow \mathbb{R}^{+})$:
    Compute an MST $T$ of $G$.
    Obtain an Eulerian graph $H=2T$ by doubling the edges of $T$.
    An Eulerian tour of $2T$ gives a tour in $G$.
    Obtain a Hamiltonian cycle by shortcutting the tour.
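A useful observation for implementing the steps above: a DFS preorder of the MST is exactly what the shortcut Eulerian tour of the doubled tree visits. A minimal sketch on a distance matrix (our own input convention):

```python
from collections import defaultdict

def tsp_mst_tour(dist):
    """Double-tree heuristic sketch: Prim's MST, then a DFS preorder of the
    MST, which equals the shortcut Eulerian tour of the doubled tree."""
    n = len(dist)
    in_tree = [False] * n
    best = [float("inf")] * n
    parent = [-1] * n
    best[0] = 0.0
    adj = defaultdict(list)
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] != -1:                 # record the MST edge
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # DFS preorder of the MST = shortcut of the Eulerian tour of 2T.
    tour, stack, seen = [], [0], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(adj[u])
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost
```

By Theorem 10.5 the returned cost is at most $2\,c(\text{MST}) \leq 2\,\mathrm{OPT}$.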

Figure 10.5: MST Based Heuristic

Theorem 10.5. The MST heuristic (TSP-MST) is a 2-approximation algorithm.
Proof. We have $c(T)=\sum_{e \in E(T)} c(e) \leq \mathrm{OPT}$, since we can get a spanning tree of $G$ by removing any edge from the optimal Hamiltonian cycle, and $T$ is an MST. Thus $c(H)=2c(T) \leq 2\,\mathrm{OPT}$. Also, shortcutting only decreases the cost.
We observe that the loss of a factor 2 in the approximation ratio is due to doubling edges; we did this in order to obtain an Eulerian tour. But any connected graph in which all vertices have even degree is Eulerian, so one can still get an Eulerian tour by adding edges only between odd-degree vertices of $T$. The Christofides Heuristic [21] exploits this and improves the approximation ratio from 2 to $3/2$. See Fig 10.6 for a snapshot.

Figure 10.6: Christofides Heuristic

Christofides Heuristic$(G(V,E), c: E \rightarrow \mathbb{R}^{+})$:
    Compute an MST $T$ of $G$.
    Let $S$ be the vertices of odd degree in $T$. (Note: $|S|$ is even.)
    Find a minimum-cost perfect matching $M$ on $S$ in $G$.
    Add $M$ to $T$ to obtain an Eulerian graph $H$.
    Compute an Eulerian tour of $H$.
    Obtain a Hamiltonian cycle by shortcutting the tour.
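The new ingredient relative to TSP-MST is the minimum-cost perfect matching on the odd-degree vertices. In practice one uses a polynomial-time Blossom-based matching algorithm; the exponential-time sketch below is our own toy stand-in that conveys the step for tiny instances only:

```python
def min_cost_perfect_matching(S, dist):
    """Exact min-cost perfect matching on an even-sized vertex list S,
    by recursing on who S[0] is matched with. Exponential time: a toy
    stand-in for a real Blossom implementation."""
    if not S:
        return 0.0, []
    best_cost, best_pairs = float("inf"), None
    u = S[0]
    for i in range(1, len(S)):
        v = S[i]
        sub_cost, sub_pairs = min_cost_perfect_matching(S[1:i] + S[i + 1:], dist)
        if dist[u][v] + sub_cost < best_cost:
            best_cost = dist[u][v] + sub_cost
            best_pairs = [(u, v)] + sub_pairs
    return best_cost, best_pairs
```

The proof below shows the matching found this way costs at most $0.5\,\mathrm{OPT}$, which is what yields the $3/2$ ratio.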
Theorem 10.6. Christofides Heuristic is a 1.5-approximation algorithm.
Proof. The main part of the proof is to show that $c(M) \leq 0.5\,\mathrm{OPT}$. Given this, since the solution of the Christofides Heuristic is obtained by shortcutting the Eulerian tour on $H$, its cost is no more than $c(H)=c(T)+c(M) \leq 1.5\,\mathrm{OPT}$. (Refer to the proof of Theorem 10.5 for the fact $c(T) \leq \mathrm{OPT}$.) Therefore we focus on proving that $c(M) \leq 0.5\,\mathrm{OPT}$.
Let $F^{*}$ be an optimal tour in $G$ of cost $\mathrm{OPT}$; since we have a metric instance we can assume without loss of generality that $F^{*}$ is a Hamiltonian cycle. We obtain a Hamiltonian cycle $F_{S}^{*}$ in the graph $G[S]$ by shortcutting the portions of $F^{*}$ that touch the vertices $V \setminus S$. By the metric condition, $c(F_{S}^{*}) \leq c(F^{*}) = \mathrm{OPT}$. Let $S=\{v_{1}, v_{2}, \ldots, v_{|S|}\}$; without loss of generality $F_{S}^{*}$ visits the vertices of $S$ in the order $v_{1}, v_{2}, \ldots, v_{|S|}$. Recall that $|S|$ is even. Let $M_{1}=\{v_{1}v_{2}, v_{3}v_{4}, \ldots, v_{|S|-1}v_{|S|}\}$ and $M_{2}=\{v_{2}v_{3}, v_{4}v_{5}, \ldots, v_{|S|}v_{1}\}$. Note that both $M_{1}$ and $M_{2}$ are matchings, and $c(M_{1})+c(M_{2})=c(F_{S}^{*}) \leq \mathrm{OPT}$. We can assume without loss of generality that $c(M_{1}) \leq c(M_{2})$. Then we have $c(M_{1}) \leq 0.5\,\mathrm{OPT}$.
Also, we know that $c(M) \leq c(M_{1})$, since $M$ is a minimum-cost matching on $S$ in $G[S]$. Hence we have $c(M) \leq c(M_{1}) \leq 0.5\,\mathrm{OPT}$, which completes the proof.

10.2.2 LP Relaxation

We describe a well-known LP relaxation for TSP called the Subtour-Elimination LP; it is sometimes also called the Held-Karp LP relaxation, although the formulation was first given by Dantzig, Fulkerson and Johnson [22]. The LP relaxation has a variable $x_{e}$ for each edge $e \in E$. Note that the TSP solution is a Hamiltonian cycle of least cost. A Hamiltonian cycle can be viewed as a connected subgraph of $G$ with degree 2 at each vertex. Thus we write the degree constraints and also the cut constraints.
$$\begin{aligned} \min \sum_{e \in E} c_{e} x_{e} & \\ \sum_{e \in \delta(v)} x_{e} &= 2 \quad v \in V \\ \sum_{e \in \delta(S)} x_{e} &\geq 2 \quad \emptyset \subsetneq S \subsetneq V \\ x_{e} &\in [0,1] \quad e \in E \end{aligned}$$
The relaxation is not useful for a general graph since, as we saw, TSP is inapproximable. To obtain a relaxation for Metric-TSP we apply the above to the metric completion of the graph $G$.
Another alternative is to consider the following LP, which views the problem as finding a connected Eulerian multigraph of the underlying graph $G$. In other words, we are allowed to take an integer number of copies of each edge with the constraint that the degree of each vertex is even and the graph is connected. It is not easy to write the even-degree condition since we do not have an a priori bound on the number of copies. Instead one can write the following simpler LP, and interestingly one can show that its optimum value is the same as that of the preceding relaxation (when applied to the metric completion).
$$\begin{aligned} \min \; & \sum_{e \in E} c_{e} x_{e} \\ \sum_{e \in \delta(S)} x_{e} &\geq 2 \quad \emptyset \subsetneq S \subsetneq V \\ x_{e} &\geq 0 \quad e \in E \end{aligned}$$
Wolsey showed that the $3/2$-approximation of Christofides can be analyzed with respect to the LP above. Hence the integrality gap of the LP is at most $3/2$ for Metric-TSP. Is it better? There is a well-known example which shows that the gap is at least $4/3$. The $4/3$ conjecture states that the worst-case integrality gap is exactly $4/3$; it was only very recently that the $3/2$ barrier was broken.
Remarks
  1. In practice, local search heuristics are widely used and they perform extremely well. A popular heuristic, 2-Opt, is to swap pairs of edges from $xy, zw$ to $xz, yw$ or $xw, yz$, if it improves the tour.
  2. It was a major open problem to improve the approximation ratio of $\frac{3}{2}$ for Metric-TSP; it is conjectured that the Held-Karp LP relaxation [23] gives a ratio of $\frac{4}{3}$. In a breakthrough, Oveis Gharan, Saberi and Singh [24] obtained a $3/2-\delta$ approximation for some small but fixed $\delta>0$ for the important special case where $c(e)=1$ for each edge $e$ (called Graphic-TSP). Very recently the $3/2$ ratio was finally broken for the general case [25].
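The 2-Opt move in the first remark replaces edges $xy, zw$ by $xz, yw$, which for a tour amounts to reversing a segment. A minimal sketch assuming symmetric costs given as a matrix (the function name and encoding are illustrative):

```python
def two_opt(tour, c):
    """Repeat 2-Opt passes until no improving swap exists: replace
    edges (a,b), (d,e) by (a,d), (b,e) -- i.e. reverse the segment
    between b and d -- whenever that shortens the tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                d, e = tour[j], tour[(j + 1) % n]
                if a == e:  # the two edges share a vertex; skip
                    continue
                if c[a][d] + c[b][e] < c[a][b] + c[d][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Starting from any tour this reaches a 2-Opt local optimum; no worst-case approximation guarantee is claimed here.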

10.2.3 TSP in Directed Graphs

In this subsection, we consider TSP in directed graphs. As in undirected TSP, we need to relax the problem conditions to get any positive result. Again, allowing each vertex to be visited multiple times is equivalent to imposing the asymmetric triangle inequality $c(u, w) \leq c(u, v)+c(v, w)$ for all $u, v, w$. This is called the asymmetric TSP (ATSP) problem. We are given a directed graph $G=(V, A)$ with cost $c(a)>0$ for each arc $a \in A$, and our goal is to find a closed walk visiting all vertices. Note that we are allowed to visit each vertex multiple times, as we are looking for a walk, not a cycle. For an example of a valid Hamiltonian walk, see Fig 10.7.
The MST-based heuristic for the undirected case has no meaningful generalization to the directed setting, because costs on edges are not symmetric. Hence, we need another approach. The Cycle Shrinking Algorithm repeatedly finds a min-cost cycle cover and shrinks cycles, combining the cycle covers found. Recall that a cycle cover is a collection of disjoint cycles covering all vertices. It is known that finding a minimum-cost cycle cover can be done in polynomial time (see Homework 0). The Cycle Shrinking Algorithm achieves a $\log_{2} n$ approximation ratio.

Figure 10.7: A directed graph and a valid Hamiltonian walk

$\underline{\text{Cycle Shrinking Algorithm}(G(V, E), c: E \rightarrow \mathbb{R}_{+})}$:
Transform $G$ so that $G$ is complete and satisfies $c(u, v)+c(v, w) \geq c(u, w)$ for all $u, v, w$
If $|V|=1$ output the trivial cycle consisting of the single node
Find a minimum cost cycle cover with cycles $C_{1}, \ldots, C_{k}$
From each $C_{i}$ pick an arbitrary proxy node $v_{i}$
Recursively solve the problem on $G[\{v_{1}, \ldots, v_{k}\}]$ to obtain a solution $C$
$C^{\prime}=C \cup C_{1} \cup C_{2} \cup \ldots \cup C_{k}$ is an Eulerian graph
Shortcut $C^{\prime}$ to obtain a cycle on $V$ and output it
For a snapshot of the Cycle Shrinking Algorithm, see Fig 10.8.
Figure 10.8: A snapshot of the Cycle Shrinking Algorithm. To the left, a cycle cover $C_{1}$. In the center, blue vertices indicate proxy nodes, and a cycle cover $C_{2}$ is found on the proxy nodes. To the right, pink vertices are new proxy nodes, and a cycle cover $C_{3}$ is found on the new proxy nodes.
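The recursion above can be sketched in Python. For brevity this sketch finds the min-cost cycle cover by brute force over permutations (a permutation with no fixed points is exactly a cycle cover), rather than via the polynomial-time matching reduction mentioned in the text, so it is usable only on tiny instances; all names are illustrative:

```python
import itertools

def min_cycle_cover(nodes, c):
    """Brute-force min-cost cycle cover on a complete digraph with
    cost dict c[u][v]; returns a list of cycles (lists of nodes)."""
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(nodes):
        sigma = dict(zip(nodes, perm))
        if any(v == sigma[v] for v in nodes):  # cycles need >= 2 nodes
            continue
        cost = sum(c[v][sigma[v]] for v in nodes)
        if cost < best:
            best, best_perm = cost, sigma
    cycles, seen = [], set()
    for v in nodes:            # decompose the permutation into cycles
        if v in seen:
            continue
        cyc, u = [], v
        while u not in seen:
            seen.add(u)
            cyc.append(u)
            u = best_perm[u]
        cycles.append(cyc)
    return cycles

def cycle_shrinking(nodes, c):
    """Return a tour (vertex order of a closed walk, shortcut to a
    cycle) following the Cycle Shrinking Algorithm."""
    if len(nodes) <= 1:
        return list(nodes)
    cycles = min_cycle_cover(nodes, c)
    proxies = [cyc[0] for cyc in cycles]       # arbitrary proxy per cycle
    tour = cycle_shrinking(proxies, c)         # recurse on proxies
    # shortcut: splice each cycle into the tour at its proxy node
    expand = {cyc[0]: cyc for cyc in cycles}
    out = []
    for v in tour:
        out.extend(expand.get(v, [v]))
    return out
```

The triangle inequality is assumed to hold for the cost function, as guaranteed by the transformation in the algorithm's first step.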
Lemma 10.3. Let the cost of edges in $G$ satisfy the asymmetric triangle inequality. Then for any $S \subseteq V$, the cost of an optimal TSP tour in $G[S]$ is at most the cost of an optimal TSP tour in $G$.
Proof. Since $G$ satisfies the triangle inequality, there is an optimal TSP tour in $G$ that is a Hamiltonian cycle $C$. Given any $S \subseteq V$, the cycle $C$ can be short-cut to produce another cycle $C^{\prime}$ that visits only $S$ and whose cost is at most the cost of $C$.
Lemma 10.4. The cost of a min-cost cycle-cover is at most the cost of an optimal TSP tour.
Proof. An optimal TSP tour is a cycle cover.
Theorem 10.7. The Cycle Shrinking Algorithm is a $\log_{2} n$-approximation for ATSP.
Proof. We prove the above by induction on $n$, the number of nodes in $G$. It is easy to see that the algorithm finds an optimal solution if $n \leq 2$. The main observation is that the number of cycles in the cycle-cover is at most $\lfloor n/2 \rfloor$; this follows from the fact that each cycle in the cover has at least $2$ nodes and the cycles are disjoint. Thus $k \leq \lfloor n/2 \rfloor$. Let $\text{OPT}(S)$ denote the cost of an optimal solution in $G[S]$. From Lemma 10.3 we have that $\text{OPT}(S) \leq \text{OPT}(V) = \text{OPT}$ for all $S \subseteq V$. The algorithm recurses on the proxy nodes $S=\{v_{1}, \ldots, v_{k}\}$. Note that $|S| < n$, and by induction, the cost of the cycle $C$ output by the recursive call is at most $(\log_{2}|S|)\,\text{OPT}(S) \leq (\log_{2}|S|)\,\text{OPT}$.
The algorithm outputs $C^{\prime}$ whose cost is at most the cost of $C$ plus the cost of the cycle-cover computed in $G$. The cost of the cycle cover is at most $\text{OPT}$ (Lemma 10.4). Hence the cost of $C^{\prime}$ is at most $(\log_{2}|S|)\,\text{OPT}+\text{OPT} \leq (\log_{2}(n/2))\,\text{OPT}+\text{OPT} \leq (\log_{2} n)\,\text{OPT}$; this finishes the inductive proof.
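The induction boils down to a simple recurrence. Writing $A(n)$ for the cost of the tour the algorithm outputs on an $n$-node instance:

$$A(n) \;\leq\; A(\lfloor n/2 \rfloor) + \text{OPT}, \qquad A(2) \leq \text{OPT},$$

which unrolls to $A(n) \leq (\log_{2} n)\,\text{OPT}$, since the argument at least halves at each level and hence there are at most $\log_{2} n$ levels, each contributing one cycle cover of cost at most $\text{OPT}$.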

10.2.4 LP Relaxation

The LP relaxation for ATSP is given below. For each arc $e \in E$ we have a variable $x_{e}$. We view the problem as finding a connected Eulerian multi-graph in the support of $G$. That is, we can choose each edge $e$ an integer number of times. We impose the Eulerian constraint at each vertex by requiring the in-degree to be equal to the out-degree. We impose the connectivity constraint by ensuring that at least one arc leaves each set of vertices $S$ which is neither $V$ nor $\emptyset$.
$$\begin{aligned} \min \; & \sum_{e \in E} c_{e} x_{e} \\ \sum_{e \in \delta^{+}(v)} x_{e}-\sum_{e \in \delta^{-}(v)} x_{e} &= 0 \quad v \in V \\ \sum_{e \in \delta^{+}(S)} x_{e} &\geq 1 \quad \emptyset \subsetneq S \subsetneq V \\ x_{e} &\geq 0 \quad e \in E \end{aligned}$$
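To make the constraints concrete, the sketch below checks a fractional solution against the degree-balance and cut constraints by enumerating all proper subsets, which is feasible only for small $n$ (pure Python; all names are illustrative):

```python
from itertools import combinations

def is_feasible(n, x, tol=1e-9):
    """Check the ATSP LP constraints for a solution x given as a
    dict {(u, v): value} on arcs of a digraph on vertices 0..n-1."""
    # nonnegativity
    if any(v < -tol for v in x.values()):
        return False
    # Eulerian constraints: in-degree equals out-degree at each vertex
    for v in range(n):
        out = sum(x.get((v, u), 0.0) for u in range(n) if u != v)
        inn = sum(x.get((u, v), 0.0) for u in range(n) if u != v)
        if abs(out - inn) > tol:
            return False
    # connectivity: at least one unit of x leaves every proper subset S
    for r in range(1, n):
        for S in combinations(range(n), r):
            S = set(S)
            leaving = sum(x.get((u, v), 0.0)
                          for u in S for v in range(n) if v not in S)
            if leaving < 1 - tol:
                return False
    return True
```

For example, the indicator vector of a directed Hamiltonian cycle is feasible, while two disjoint directed 2-cycles violate the cut constraints.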
Remarks:
  1. It remained an open problem for more than 25 years whether there exists a constant factor approximation for ATSP. Asadpour et al. [26] obtained an $O(\log n/\log\log n)$-approximation for ATSP using some very novel ideas and a well-known LP relaxation.
  2. There is now an $O(1)$-approximation for ATSP, with an initial breakthrough by Svensson for a special case [27], followed up by Svensson, Tarnawski and Végh [28]. The current best constant is $22+\epsilon$ due to [29]. The algorithm is highly non-trivial and is based on the LP relaxation.

  1. Eran Halperin and Robert Krauthgamer. “Polylogarithmic inapproximability”. In: Proceedings of the thirty-fifth annual ACM symposium on Theory of computing. 2003, pp. 585–594. ↩︎
  2. Moses Charikar, Chandra Chekuri, To-Yat Cheung, Zuo Dai, Ashish Goel, Sudipto Guha, and Ming Li. “Approximation algorithms for directed Steiner problems”. In: Journal of Algorithms 33.1 (1999), pp. 73–91. ↩︎
  3. Anupam Gupta and Jochen Könemann. “Approximation algorithms for network design: A survey”. In: Surveys in Operations Research and Management Science 16.1 (2011), pp. 3–20. ↩︎
  4. Guy Kortsarz and Zeev Nutov. “Approximating minimum cost connectivity problems”. In: Parameterized complexity and approximation algorithms. Ed. by Erik D. Demaine, MohammadTaghi Hajiaghayi, and Dániel Marx. Dagstuhl Seminar Proceedings 09511. Dagstuhl, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany, 2010. url: http://drops.dagstuhl.de/opus/volltexte/2010/2497. ↩︎
  5. Zeev Nutov. “Node-connectivity survivable network problems”. In: Handbook of Approximation Algorithms and Metaheuristics. Chapman and Hall/CRC, 2018, pp. 233–253. ↩︎
  6. Marshall Bern and Paul Plassmann. “The Steiner problem with edge lengths 1 and 2”. In: Information Processing Letters 32.4 (1989), pp. 171–176. ↩︎
  7. Miroslav Chlebık and Janka Chlebıková. “Approximation hardness of the Steiner tree problem on graphs”. In: Scandinavian Workshop on Algorithm Theory. Springer. 2002, pp. 170–179. ↩︎
  8. Variants of the Steiner Tree problem, named after Jakob Steiner, have been studied by Fermat, Weber, and others for centuries. The front cover of the course textbook contains a reproduction of a letter from Gauss to Schumacher on a Steiner tree question. ↩︎
  9. Hiromitsu Takahashi and A Matsuyama. “An approximate solution for the Steiner problem in graphs”. In: Math. Jap. 24.6 (1980), pp. 573–577. ↩︎
  10. Makoto Imase and Bernard M Waxman. “Dynamic Steiner tree problem”. In: SIAM Journal on Discrete Mathematics 4.3 (1991), pp. 369–384. ↩︎
  11. Alexander Z Zelikovsky. “An 11/6-approximation algorithm for the network Steiner problem”. In: Algorithmica 9.5 (1993), pp. 463–470. ↩︎
  12. Gabriel Robins and Alexander Zelikovsky. “Tighter bounds for graph Steiner tree approximation”. In: SIAM Journal on Discrete Mathematics 19.1 (2005), pp. 122–134. ↩︎
  13. Jaroslaw Byrka, Fabrizio Grandoni, Thomas Rothvoß, and Laura Sanita. “An improved LP-based approximation for Steiner tree”. In: Proceedings of the forty-second ACM symposium on Theory of computing. 2010, pp. 583–592. ↩︎
  14. Michel X Goemans, Neil Olver, Thomas Rothvoß, and Rico Zenklusen. “Matroids and integrality gaps for hypergraphic steiner tree relaxations”. In: Proceedings of the forty-fourth annual ACM symposium on Theory of computing. 2012, pp. 1161–1176. ↩︎
  15. Jack Edmonds. “Optimum branchings”. In: Journal of Research of the National Bureau of Standards, B 71 (1967), pp. 233–240. ↩︎
  16. Robert Vicari. “Simplex based Steiner tree instances yield large integrality gaps for the bidirected cut relaxation”. In: arXiv preprint arXiv:2002.07912 (2020). ↩︎
  17. D-Z Du and Frank K. Hwang. “A proof of the Gilbert-Pollak conjecture on the Steiner ratio”. In: Algorithmica 7.1 (1992), pp. 121–135. ↩︎
  18. Sanjeev Arora. “Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems”. In: Journal of the ACM (JACM) 45.5 (1998), pp. 753–782. ↩︎
  19. Glencora Borradaile, Philip Klein, and Claire Mathieu. “An $O(n \log n)$ approximation scheme for Steiner tree in planar graphs”. In: ACM Transactions on Algorithms (TALG) 5.3 (2009), pp. 1–31. ↩︎
  20. Sartaj Sahni and Teofilo Gonzalez. “P-complete approximation problems”. In: Journal of the ACM (JACM) 23.3 (1976), pp. 555–565. ↩︎
  21. Nicos Christofides. Worst-case analysis of a new heuristic for the travelling salesman problem. Tech. rep. Carnegie-Mellon Univ Pittsburgh Pa Management Sciences Research Group, 1976. ↩︎
  22. George Dantzig, Ray Fulkerson, and Selmer Johnson. “Solution of a largescale traveling-salesman problem”. In: Journal of the operations research society of America 2.4 (1954), pp. 393–410. ↩︎
  23. Michael Held and Richard M Karp. “The traveling-salesman problem and minimum spanning trees”. In: Operations Research 18.6 (1970), pp. 1138–1162. ↩︎
  24. Shayan Oveis Gharan, Amin Saberi, and Mohit Singh. “A randomized rounding approach to the traveling salesman problem”. In: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. IEEE. 2011, pp. 550–559. ↩︎
  25. Anna R Karlin, Nathan Klein, and Shayan Oveis Gharan. “A (slightly) improved approximation algorithm for metric TSP”. In: Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. 2021, pp. 32–45. ↩︎
  26. Arash Asadpour, Michel X Goemans, Aleksander Mądry, Shayan Oveis Gharan, and Amin Saberi. “An O(log n/log log n)-approximation algorithm for the asymmetric traveling salesman problem”. In: Operations Research 65.4 (2017). Preliminary version in Proc. of ACM-SIAM SODA, 2010., pp. 1043–1061. ↩︎
  27. Ola Svensson. “Approximating ATSP by relaxing connectivity”. In: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science. IEEE. 2015, pp. 1–19. ↩︎
  28. Ola Svensson, Jakub Tarnawski, and László A Végh. “A constant-factor approximation algorithm for the asymmetric traveling salesman problem”. In: Journal of the ACM (JACM) 67.6 (2020), pp. 1–53. ↩︎
  29. Vera Traub and Jens Vygen. “An improved approximation algorithm for ATSP”. In: Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. 2020, pp. 1–13. ↩︎
