




E0 249: Approximation Algorithms, January 7, 2022, Lecture 1
Instructor: Arindam Khan. Scribes: Sarvagya Jain, Prashant Gokhale

1 Introductory Lecture

1.1 Motivation and Definitions

What is an approximation algorithm? It finds a near-optimal solution, is efficient (runs in polynomial time), and works for all instances.

Why study approximation algorithms? They give fast solutions for practical problems, provide mathematical rigour to the study of heuristics, quantify the difficulty of different discrete optimization problems, and lead to cool algorithms.

Some important definitions:

Definition 1 ($OPT_\Pi(I)$). Let $\Pi$ be an optimization problem and $I$ an instance of $\Pi$. Then $OPT_\Pi(I)$ is the value of an optimum solution for the instance $I$.

Definition 2 ($\alpha$-Approximation Algorithm). Let $\alpha \ge 1$. $A$ is an $\alpha$-approximation algorithm for a minimization problem $\Pi$ if $A(I) \le \alpha \cdot OPT_\Pi(I)$ for all instances $I$, where $A(I)$ is the value of the solution that $A$ returns on $I$. Typical values of $\alpha$ are $1.5$ or $O(\log n)$. For a maximization problem the requirement is $A(I) \ge \frac{1}{\alpha}\, OPT_\Pi(I)$. This $\alpha$ is also called the absolute approximation ratio.

Remark. 1. Usually we omit $\Pi$ and $I$ in $OPT_\Pi(I)$ and just write $OPT$. 2. In the literature, $\alpha < 1$ is sometimes used for maximization problems, with the definition modified appropriately.

Surprisingly, NP-hard problems can behave quite differently in terms of approximability. For some problems it is NP-hard to obtain even an $f(n)$-approximation (e.g., general TSP), while for others we can get a $(1+\varepsilon)$-approximation in time $poly(n, 1/\varepsilon)$ for every $\varepsilon > 0$ (e.g., the knapsack problem).

Definition 3 (PTAS). $A_\varepsilon$ is a polynomial time approximation scheme (PTAS) for a minimization problem $\Pi$ if $A_\varepsilon(I) \le (1+\varepsilon)\, OPT(I)$ for all $I$, and for every fixed $\varepsilon > 0$ the running time of $A_\varepsilon$ is polynomial in the input size. Typical running times are $O(n^{1/\varepsilon})$, $2^{1/\varepsilon}\, n^2 \log^2 B$, and $n^{(O(1/\varepsilon))^{1/\varepsilon}}$, where $n$ is the number of objects in the instance and $B$ is the largest number appearing in it.

Definition 4 (EPTAS). $A_\varepsilon$ is an efficient polynomial time approximation scheme (EPTAS) for a minimization problem $\Pi$ if for every $\varepsilon > 0$, $A_\varepsilon(I) \le (1+\varepsilon)\, OPT(I)$ for all $I$, and the running time of $A_\varepsilon$ is $f(1/\varepsilon)$ times a polynomial in the input size, for some function $f$. Typical running times are $2^{1/\varepsilon}\, n^{1000}$ and $(7/\varepsilon)^{(1/\varepsilon)^{1/\varepsilon}} \cdot n$, but not $n^{\log(1/\varepsilon)}$.

Definition 5 (FPTAS). $A_\varepsilon$ is a fully polynomial time approximation scheme (FPTAS) for a minimization problem $\Pi$ if for every $\varepsilon > 0$, $A_\varepsilon(I) \le (1+\varepsilon)\, OPT(I)$ for all $I$, and the running time of $A_\varepsilon$ is polynomial in the input size and in $1/\varepsilon$. A typical running time is $O(n^3/\varepsilon^2)$.

Note: If a problem is APX-hard then it admits no PTAS, if it is W[1]-hard then it admits no EPTAS, and if it is strongly NP-hard then it admits no FPTAS (each under the corresponding standard complexity assumption).

Definition 6 (QPTAS). $A_\varepsilon$ is a quasi-polynomial time approximation scheme (QPTAS) for a minimization problem $\Pi$ if for every $\varepsilon > 0$, $A_\varepsilon(I) \le (1+\varepsilon)\, OPT(I)$ for all $I$, and the running time of $A_\varepsilon$ is quasi-polynomial (a running time between polynomial and exponential). A typical running time is $n^{(\log n)^{O(1)}}$.

Definition 7 (PPTAS). $A_\varepsilon$ is a pseudo-polynomial time approximation scheme (PPTAS) for a minimization problem $\Pi$ if for every $\varepsilon > 0$, $A_\varepsilon(I) \le (1+\varepsilon)\, OPT(I)$ for all $I$, and the running time of $A_\varepsilon$ is pseudo-polynomial (polynomial in the numeric values present in the input, but not necessarily in the number of bits needed to represent them). A typical running time is $(nB)^{O(1)}$, where $B$ is the largest number in the input.

We can divide problems into various classes based on the approximation schemes they admit.
For example, PTAS is the class of problems that admit a polynomial time approximation scheme, and APX is the class of problems that admit a constant-factor approximation. PTAS ⊊ APX, since bin packing has no PTAS under the assumption P ≠ NP. A problem $\Pi_1$ is said to be APX-hard if there is a PTAS-reduction from every problem $\Pi_2 \in$ APX to $\Pi_1$, and APX-complete if $\Pi_1$ is APX-hard and also lies in APX. An APX-hard problem does not admit a PTAS unless P = NP; in fact, there is no quasi-polynomial time approximation scheme (QPTAS) for any APX-hard problem unless NP ⊆ QP.

We can summarise these classes by the following diagram. (Figure: nested classes P ⊆ FPTAS ⊆ EPTAS ⊆ PTAS ⊆ APX, shown together with the APX-hard problems; the containments are strict assuming P ≠ NP.)

Definition 8 (Asymptotic Approximation Ratio). The asymptotic approximation ratio $\rho^\infty_A$ of an algorithm $A$ is defined as
$$\rho^\infty_A = \limsup_{n \to \infty} \rho^n_A, \qquad \text{where } \rho^n_A = \sup_{I} \left\{ \frac{A(I)}{OPT(I)} \;\middle|\; OPT(I) = n \right\}.$$

Consider two algorithms with guarantees $A_1(I) = OPT(I) + 1$ and $A_2(I) = 2\, OPT(I)$. Both attain an absolute approximation ratio of 2, but as $OPT$ gets larger, $A_1$ performs much better in terms of ratio; see the short calculation below.
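Concretely, the two asymptotic ratios can be computed directly from Definition 8 (a short worked calculation):
$$\rho^n_{A_1} = \sup_{OPT(I) = n} \frac{OPT(I) + 1}{OPT(I)} = \frac{n+1}{n} \;\Longrightarrow\; \rho^\infty_{A_1} = \limsup_{n \to \infty} \frac{n+1}{n} = 1, \qquad \rho^n_{A_2} = \frac{2n}{n} = 2 \;\Longrightarrow\; \rho^\infty_{A_2} = 2.$$
In contrast, the absolute ratio of $A_1$ is $\sup_{n \ge 1} \frac{n+1}{n} = 2$, attained at $OPT = 1$, which is why both algorithms have the same absolute ratio. This motivates the asymptotic notion and the following definition.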
Definition 9 (APTAS). $A_\varepsilon$ is an asymptotic polynomial time approximation scheme (APTAS) if for every $\varepsilon > 0$ there is a polynomial-time algorithm with asymptotic approximation ratio $(1+\varepsilon)$. A typical guarantee is $A_\varepsilon(I) \le (1+\varepsilon)\, OPT + O(1)$.

1.2 Different Classes of Inapproximability

Recent inapproximability results divide problems into four broad classes, based on the approximation ratio that is provably hard to achieve. These ratios are, respectively, $1+\varepsilon$ for some fixed $\varepsilon > 0$, $\Theta(\log n)$, $2^{(\log n)^{1-\gamma}}$ for every fixed $\gamma > 0$, and $n^\delta$ for some fixed $\delta > 0$ (throughout, $n$ is the input size). The corresponding classes of problems are called Classes I, II, III, and IV, respectively. Inapproximability results for problems within a class share common ideas. Representative problems for the four classes are MAX-3SAT, SET COVER, LABEL COVER, and CLIQUE, respectively.

1.3 Why are optimization problems hard?

Optimization problems can be hard for several reasons:
- Intractability: computational hardness such as NP-hardness, e.g., finding the best route to deliver all packages (travelling salesman problem) or an optimal assignment of jobs to servers (bin packing). Approximation algorithms are useful in such cases.
- Uncertainty: we may not have complete data, e.g., online ad allocation (matching) or online taxi allocation (k-server). Online algorithms are useful in such cases.
- Dynamic input: the data changes over time and the algorithm must maintain a good solution with few updates (e.g., dynamic set cover). Dynamic algorithms are useful in such cases.
- Big data: the data arrives too fast or is too big to store. Streaming algorithms are useful in such cases.

Approximation algorithms go beyond the P vs NP question: approximation is also very useful in the context of fine-grained algorithms (say, for problems whose best known algorithms run in time $O(n^2)$), parameterized algorithms, distributed algorithms, etc.

1.4 Different Approximation Techniques and Some Examples

- Greedy: set cover, vertex cover, 3-SAT.
- Local search: max cut, facility location.
- Combinatorial algorithms: TSP, Steiner tree.
- Rounding data: bin packing, scheduling jobs on identical parallel machines.
- Dynamic programming: knapsack, maximum independent set of rectangles.
- Cuts and metrics/embeddings: min multicut, multiway cut, sparsest cut.
- Linear programming (rounding, dual fitting, primal-dual): this will be a major part of the course.
- Semidefinite programming: max cut.

1.5 Brief History

One of the first problems studied through the lens of approximation algorithms was Multiprocessor Scheduling with Precedence Constraints, studied by Ron Graham in 1966. The problem is as follows: we are given $n$ jobs $J_1, J_2, \ldots, J_n$, where job $J_i$ has processing time $p_i$, a precedence relation $\prec$ on the jobs ($J_i \prec J_\ell$ means that $J_i$ must be finished before $J_\ell$ is allowed to start), and $m$ identical machines. The precedence constraints can be represented by a directed acyclic graph (DAG). The goal is to find a non-preemptive schedule of the jobs on the $m$ machines, respecting the precedence order, that minimizes the makespan (the time at which the last job finishes). Graham's list scheduling gives a 2-approximation to the makespan:

Algorithm 1: Grahams-List-Scheduling
  Data: jobs $J_1, \ldots, J_n$, precedence relation $\prec$, $m$ machines
  Result: a schedule whose makespan is at most twice the optimum
  for t = 1, 2, ... do
      if some machine $j \in \{1, \ldots, m\}$ is idle at time $t$ and all predecessors of some unscheduled job $J_i$ are finished then
          schedule $J_i$ on machine $j$ starting at time $t$;
      if all jobs are finished then
          return $t$;

In words: at any time $t$, start a job whenever possible, and finally return the time at which all jobs are finished.

Analysis of the algorithm:

Theorem 10. The makespan of the produced schedule is at most 2 times the optimal makespan.

Proof. Consider a sequence of jobs (w.l.o.g. after reordering) $J_1, \ldots, J_k$ such that: $J_k$ is the last job of the whole schedule to finish; $J_1 \prec J_2 \prec \cdots \prec J_k$ (a chain in the partial order $\prec$); and $J_i$ is the predecessor of $J_{i+1}$ that finishes last.

(Figure: the schedule decomposes into periods in which some chain job $J_i$ is running and periods in which all $m$ machines are busy; no machine is unoccupied outside these periods before the makespan.)

After $J_i$ finishes, $J_{i+1}$ is started as soon as a machine becomes available. Hence, between the time $J_i$ finishes and $J_{i+1}$ begins, all machines must be fully busy. Thus the total length of all busy periods is at most $\frac{1}{m} \sum_{i=1}^{n} p_i \le OPT$. The total time taken by the chain $J_1, \ldots, J_k$ is at most $OPT$, since the chain must be processed sequentially in any schedule. Combining, makespan ≤ length of chain + length of busy periods ≤ $2 \cdot OPT$.

Remark. We can further tighten the analysis. Let $T$ denote the time taken by the chain; then the total length of all busy periods is at most $\frac{1}{m}\left(\sum_{i=1}^{n} p_i - T\right) \le OPT - \frac{T}{m}$. Thus, makespan $\le T + OPT - \frac{T}{m} = OPT + \left(1 - \frac{1}{m}\right) T \le \left(2 - \frac{1}{m}\right) OPT$.

We state the following theorem due to Svensson without proof:

Theorem 11. For every fixed $\varepsilon > 0$, there is no $(2-\varepsilon)$-approximation unless a variant of the Unique Games Conjecture is false.

Finding the complexity status of $P3 \mid p_i = 1, prec \mid C_{\max}$ (i.e., precedence scheduling with unit processing times on 3 machines) is still an open problem. It is known that there is a $\frac{4}{3}$-approximation for this problem and that $P2 \mid p_i = 1, prec \mid C_{\max}$ is polynomial-time solvable.
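A minimal Python sketch of Graham's list scheduling (Algorithm 1). The input format (processing times plus (predecessor, successor) index pairs), the event-driven loop, and all helper names are illustrative assumptions, not from the notes; any ready job may be started, since the guarantee of Theorem 10 does not depend on the order.

```python
# Sketch of Graham's list scheduling: whenever a machine is idle and some job
# has all its predecessors finished, start it. Returns the resulting makespan.
import heapq

def list_schedule(proc_times, precedences, m):
    n = len(proc_times)
    preds_left = [0] * n                 # number of unfinished predecessors
    succs = [[] for _ in range(n)]
    for i, j in precedences:             # (i, j): job i must finish before job j
        preds_left[j] += 1
        succs[i].append(j)

    ready = [i for i in range(n) if preds_left[i] == 0]   # jobs ready to start
    machines = [0.0] * m                 # time at which each machine becomes free
    running = []                         # heap of (finish_time, job)
    t, finished = 0.0, 0

    while finished < n:
        # Start ready jobs on machines that are idle at time t.
        while ready and min(machines) <= t:
            job = ready.pop()
            idle = machines.index(min(machines))
            finish = t + proc_times[job]
            machines[idle] = finish
            heapq.heappush(running, (finish, job))
        # Advance time to the next job completion and release its successors.
        t, job = heapq.heappop(running)
        finished += 1
        for succ in succs[job]:
            preds_left[succ] -= 1
            if preds_left[succ] == 0:
                ready.append(succ)
    return t

# Example: 4 unit jobs, chain 0 -> 1 -> 2, job 3 independent, 2 machines.
print(list_schedule([1, 1, 1, 1], [(0, 1), (1, 2)], 2))   # prints 3.0 (here OPT = 3)
```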
1.6 Timeline

Some examples of using problems in the complexity class P to give approximations to other hard problems:
- 1968: Moore showed that an optimal minimum spanning tree can be used to give a solution to the travelling salesman problem of cost at most two times the optimal cost.
- 1969: Graham studied scheduling with no precedence constraints and proposed the longest processing time algorithm, whose makespan is at most $\frac{4}{3} - \frac{1}{3m}$ times the optimal makespan. He also devised the first PTAS (makespan $\le (1+\varepsilon)\, OPT$ in running time $O(n \log m + m^{(m-1-\varepsilon m)/\varepsilon})$).

Some examples of the theory of NP-completeness and reductions:
- 1970s: Boolean satisfiability is NP-complete (Cook-Levin '71). 21 famous problems are NP-complete (Karp '72). Most of the practical problems are NP-hard (Garey-Johnson).
- 1975: David Johnson, in his thesis on the bin packing problem, coined the term approximation algorithms.
- 1975: Ibarra and Kim gave an FPTAS for knapsack.
- 1976: $\frac{3}{2}$-approximation for (metric) TSP (Christofides).
- 1976: Sahni and Gonzalez showed that for some problems (such as general TSP), obtaining a constant-ratio approximation is as hard as obtaining the optimal solution.

Some examples of the theory of inapproximability:
- 1977: Approximation-preserving reductions, APX, and other classes (Ausiello et al.).

1970s and 1980s: development of polynomial-time algorithms for linear programs. Examples include the ellipsoid method (Khachiyan '79), the interior point method (Karmarkar '84), and optimization = separation (Grötschel-Lovász-Schrijver '81: an LP with an exponential number of constraints can be solved using a separation oracle). Integer programming is NP-hard, but linear programs can be good approximations of integer programs:
- 1970s: $O(\log n)$-approximation for set cover.
- 1982: $f$-approximation for set cover using deterministic rounding (Hochbaum '82), where $f$ is the maximum frequency of an element.
- 1987: Randomized rounding (Raghavan and Thompson).
- 1982: Asymptotic FPTAS for bin packing (Karmarkar-Karp).

Some examples of semidefinite programming:
- 1990s: 0.878-approximation algorithm for MAX-CUT.

The 2001 Gödel Prize was awarded for work on the PCP theorem and its connection to the hardness of approximation.

2 Set Cover

2.1 Problem Statement

Definition 12 (Set Cover). Let $E$ be a set of elements and $S = \{S_1, S_2, S_3, \ldots, S_m\}$ a collection of subsets of $E$. A set cover is a sub-collection $S' \subseteq S$ whose union is $E$, that is,
$$\bigcup_{s \in S'} s = E.$$
For example, if $E = \{1,2,3,4,5,6,7,8\}$ and $S_1 = \{1,2,3\}$, $S_2 = \{1,4,5,8\}$, $S_3 = \{2,3,4\}$, $S_4 = \{6,7\}$, $S_5 = \{2,4,6\}$, $S_6 = \{2,3,5,8\}$, then $\{S_2, S_3, S_4\}$ is a set cover but $\{S_1, S_3, S_6\}$ is not.

In the set cover problem, we are given a ground set of $n$ elements $E = \{e_1, e_2, \ldots, e_n\}$, a collection of $m$ subsets of $E$, $S = \{S_1, S_2, \ldots, S_m\}$, and a non-negative weight function $cost: S \to \mathbb{Q}^+$. The goal is to find a minimum-weight collection of subsets (elements of $S$) that covers all elements of $E$. If all weights are equal, the problem is called the unweighted set cover problem. Set cover is a generalisation of many other important problems such as vertex cover and edge cover, and it has many real-life applications, for example in antivirus products and VLSI design.

2.2 The Greedy Paradigm

The greedy paradigm is a general algorithmic paradigm in which we construct a solution iteratively via a sequence of myopic decisions and hope that everything works out at the end. In other words, in each step we select the best augmentation, the one that minimizes the ratio of cost to advantage. Greedy algorithms are easy to design and analyse, but often do not lead to good approximations for many problems.

2.3 Greedy Algorithm

For the set cover problem, a "good strategy" for picking the sets is: in each iteration, pick the set that costs the least per unit of uncovered elements, until every element is covered by the union of the picked sets. Using this naive strategy, we arrive at the following algorithm:
Algorithm 2: Greedy-Set-Cover(E, S, cost)
  Data: universe $E$, collection of sets $S$, cost function $cost$
  Result: an $O(\log n)$-approximation to the set cover problem
  $C \leftarrow \emptyset$ (covered elements); picked $\leftarrow \emptyset$;
  while $C \ne E$ do
      define the cost effectiveness of each set $S_j \in S$ as $\alpha_{S_j} = \frac{cost(S_j)}{|S_j \setminus C|}$;
      let $y$ be the most cost-effective set in the current iteration (the one with the smallest $\alpha_y$);
      pick $y$, and for every newly covered element $e \in y \setminus C$ set $price(e) = \alpha_y$;
      $C \leftarrow C \cup y$; picked $\leftarrow$ picked $\cup \{y\}$;
  return picked;

2.4 Example Run of the Greedy Algorithm

Let the universe be $\{A, B, C, D, E, F\}$ with sets $S_1 = \{A,B,C\}$ of cost 10, $S_2 = \{C,F\}$ of cost 10, $S_3 = \{E,F\}$ of cost 8, $S_4 = \{D,E\}$ of cost 10, and $S_5 = \{B,D,E\}$ of cost 11.

Step 0: $\alpha_{S_1} = \frac{10}{3}$, $\alpha_{S_2} = \frac{10}{2}$, $\alpha_{S_3} = \frac{8}{2}$, $\alpha_{S_4} = \frac{10}{2}$, $\alpha_{S_5} = \frac{11}{3}$. The algorithm picks $S_1$, covering $A, B, C$.
Step 1 (uncovered: $D, E, F$): $\alpha_{S_2} = \frac{10}{1}$, $\alpha_{S_3} = \frac{8}{2}$, $\alpha_{S_4} = \frac{10}{2}$, $\alpha_{S_5} = \frac{11}{2}$. The algorithm picks $S_3$, covering $E, F$.
Step 2 (uncovered: $D$): $\alpha_{S_4} = \frac{10}{1}$, $\alpha_{S_5} = \frac{11}{1}$. The algorithm picks $S_4$, covering $D$.

ALGO $= cost(S_1) + cost(S_3) + cost(S_4) = 10 + 8 + 10 = 28$.

2.5 Analysis of the Greedy Algorithm

This algorithm returns a valid set cover in polynomial time, since it needs at most $O(n)$ iterations and each iteration takes at most $O(m \log m + n)$ time.

In any iteration, the sets of the optimal solution that have not yet been picked can cover the remaining elements $E \setminus C$ at a cost of at most $OPT$. Thus, by an averaging argument, at least one of the remaining sets must have cost effectiveness at most $\frac{OPT}{|E \setminus C|}$.

WLOG assume the elements are numbered in the order in which they were covered, resolving ties arbitrarily, and let $e_1, e_2, \ldots, e_n$ be this numbering. Suppose element $e_k$ was covered by the most cost-effective set in some iteration $i$ ($\le k$). At most $k-1$ elements were covered before iteration $i$, so at least $n - (k-1)$ elements were uncovered before iteration $i$, i.e., $|E \setminus C| \ge n - k + 1$ at the beginning of iteration $i$. Using these observations,
$$price(e_k) \le \frac{OPT}{|E \setminus C|} \le \frac{OPT}{n-k+1} = \frac{OPT}{p}, \qquad \text{where } p = n - k + 1.$$
The prices of the elements are obtained by distributing the set weights over the covered elements, so $\sum_{S_j \in \text{picked}} cost(S_j) = \sum_{e_i \in E} price(e_i)$. Now,
$$\sum_{e_k \in E} price(e_k) \le \sum_{k=1}^{n} \frac{OPT}{n-k+1} = \sum_{p=1}^{n} \frac{OPT}{p} = H_n \cdot OPT.$$
Thus, the greedy algorithm has approximation ratio $H_n = O(\log n)$, where $H_n$ is the $n$-th harmonic number.

Remark. As was done above, finding a good lower bound on $OPT$ is a general strategy used in proving approximation bounds for minimisation problems.

This analysis is actually tight. Consider the following example: the universe has $n$ elements; one set contains all of them and has cost $1 + \varepsilon$, where $0 < \varepsilon \ll 1$; and there are $n$ singleton sets of costs $\frac{1}{n}, \frac{1}{n-1}, \ldots, \frac{1}{2}, 1$. The optimal solution consists of the single set of cost $1 + \varepsilon$. The greedy algorithm, however, returns the $n$ singleton sets with total cost $\frac{1}{n} + \frac{1}{n-1} + \cdots + 1 = H_n$. So the approximation ratio on this example is $\frac{H_n}{1+\varepsilon} \approx H_n$, as $\varepsilon$ can be taken arbitrarily small. Thus our analysis is tight, as we have matching upper and lower bounds.

Remark. A further result by Slavík (STOC '96) shows that the performance ratio of the greedy algorithm is exactly $\ln n - \ln \ln n + \Theta(1)$. Dinur and Steurer (2014) showed that it is NP-hard to approximate the optimum of set cover within a $(1-\varepsilon) \ln n$ factor, for every constant $\varepsilon > 0$.
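A minimal Python sketch of Greedy-Set-Cover (Algorithm 2), run on the instance from Section 2.4; the dictionary-based representation and the function name are illustrative assumptions.

```python
# Sketch of the greedy set cover algorithm: repeatedly pick the set with the
# smallest cost per newly covered element. Sets are given as name -> (elements, cost).
def greedy_set_cover(universe, sets):
    covered, picked, total_cost = set(), [], 0
    while covered != universe:
        # Most cost-effective set in this iteration.
        best, best_alpha = None, float("inf")
        for name, (elems, cost) in sets.items():
            new = elems - covered
            if new and cost / len(new) < best_alpha:
                best, best_alpha = name, cost / len(new)
        elems, cost = sets[best]
        covered |= elems
        picked.append(best)
        total_cost += cost
    return picked, total_cost

# The instance from Section 2.4.
sets = {
    "S1": ({"A", "B", "C"}, 10), "S2": ({"C", "F"}, 10), "S3": ({"E", "F"}, 8),
    "S4": ({"D", "E"}, 10),      "S5": ({"B", "D", "E"}, 11),
}
print(greedy_set_cover({"A", "B", "C", "D", "E", "F"}, sets))
# -> (['S1', 'S3', 'S4'], 28), matching ALGO = 28 above.
```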
2.6 Vertex Cover Problem

In this section we state the vertex cover problem. We are given an undirected graph $G = (V, E)$ with node weights $c: V \to \mathbb{Q}^+$. We want to find a subset $U \subseteq V$ such that every edge is incident to at least one node in $U$ and $\sum_{v \in U} c(v)$ is minimised.

Vertex cover is a special case of set cover. To see this, simply take the universe in set cover to be the set of all edges, and the available subsets to be the sets of edges touched by each particular vertex, i.e., $S_i$ is the set of edges incident on the $i$-th vertex. In this case, a simple 2-approximation algorithm is possible, and we leave it as an exercise for the reader to find one.

Exercise 13. Find a 2-approximation algorithm for the vertex cover problem.

3 Max-Cut

3.1 Problem Statement

We start by defining two quantities essential for stating the Max-Cut problem.

Definition 14 (Cut). Given an undirected graph $G(V, E)$, a cut $[S, V \setminus S]$ is a partition of $V$.

Definition 15 (Separated Edges). Given an undirected graph $G(V, E)$ and a set $S \subseteq V$, the separated edges, denoted $\delta(S)$, are the edges having one endpoint in $S$ and the other in $V \setminus S$.

Definition 16 (Max-Cut). Given a complete undirected graph $G(V, E)$ and edge weights $w: E \to \mathbb{Q}^+$, a Max-Cut is a cut maximising the weight of the separated edges. Mathematically, $[S, V \setminus S]$ is a Max-Cut if
$$S \in \operatorname*{argmax}_{S' \subseteq V} \sum_{e \in \delta(S')} w(e).$$

Intuitively, to find a Max-Cut of a graph $G(V, E)$ we need a set $S \subseteq V$ such that the total weight of the edges from $S$ to $\bar{S} := V \setminus S$ is maximised. Unlike Min-Cut, finding a Max-Cut is an NP-hard problem. To find a good approximation to the Max-Cut, we look for a cut whose weight of separated edges is close to $OPT$, where
$$OPT = \max_{S \subseteq V} \sum_{e \in \delta(S)} w(e).$$

3.2 A Randomized Algorithm

In this section we present a simple randomized approximation algorithm and prove that its approximation factor is 2 in expectation.

Algorithm 3: Randomized-Max-Cut(G(V, E))
  Data: an unweighted graph $G(V, E)$
  Result: a 2-approximation in expectation to the Max-Cut of $G(V, E)$
  for each vertex $v \in V$, add $v$ to the set $S$ with probability 1/2;
  $\bar{S} := V \setminus S$;
  return the set of edges between $S$ and $\bar{S}$;

Theorem 17. The above algorithm is a randomized 2-approximation algorithm (in expectation).

Proof. Let $X_i$ be the indicator random variable for the event that the endpoints of edge $e_i$ belong to different sets, i.e., one endpoint is in $S$ and the other in $\bar{S}$. Since the two endpoints are assigned independently, they land on different sides with probability 1/2, so
$$E[X_i] = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 0 = \frac{1}{2}.$$
By linearity of expectation and the fact that $|E|$ is an upper bound on $OPT$, we get
$$E\big[|[S, \bar{S}]|\big] = E\Big[\sum_{i \in E} X_i\Big] = \sum_{i \in E} E[X_i] = \frac{|E|}{2} \ge \frac{OPT}{2}.$$
Here, $|[S, \bar{S}]|$ denotes the number of edges having one endpoint in $S$ and the other in $\bar{S}$. This shows that the proposed algorithm outputs a 2-approximation in expectation.
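A minimal Python sketch of Randomized-Max-Cut (Algorithm 3) for an unweighted graph given as a vertex list and an edge list; the names and input format are illustrative assumptions. Repeating the experiment and keeping the best cut, or derandomizing as in the next section, removes the "in expectation" caveat.

```python
# Sketch of the randomized max-cut algorithm: put each vertex in S with
# probability 1/2 and return the edges crossing the cut.
import random

def randomized_max_cut(vertices, edges, seed=None):
    rng = random.Random(seed)
    S = {v for v in vertices if rng.random() < 0.5}
    cut = [(u, v) for (u, v) in edges if (u in S) != (v in S)]
    return S, cut

# Example: a 4-cycle. The expected cut size is |E|/2 = 2, while OPT = 4.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
S, cut = randomized_max_cut(vertices, edges, seed=1)
print(S, len(cut))
```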
3.3 Derandomization

In this section we use a technique referred to as derandomization to derandomize the algorithm of the previous section, so that we can always guarantee a 2-approximation. Later in the section, we use the insights gained from the derandomization procedure to give two more 2-approximation algorithms.

Consider the vertices of the graph one by one in some sequence, say $v_1, v_2, \ldots, v_n$, where $n := |V|$. Let $x_i$ be the value assigned to vertex $v_i$. We construct two disjoint sets $A$ and $B$ of vertices using the following rules: if $x_i = 0$ then $v_i \in A$, and if $x_i = 1$ then $v_i \in B$.

Suppose we have assigned values $x_1, \ldots, x_k$ to vertices $v_1, \ldots, v_k$. Consider the expected value of the cut if the remaining vertices $v_{k+1}, \ldots, v_n$ are independently assigned values from $\{0, 1\}$ uniformly at random, i.e., consider the conditional expectation $E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k]$, where $cut(A, B) := |[A, B]|$ (the number of edges between $A$ and $B$).

The strategy is to inductively assign a value $x_{k+1}$ to $v_{k+1}$ such that
$$E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k] \le E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k, v_{k+1} = x_{k+1}].$$
This condition implies
$$E[cut(A, B) \mid v_1 = x_1, \ldots, v_n = x_n] \ge E[cut(A, B)] \ge \frac{|E|}{2} \ge \frac{OPT}{2}.$$
Thus, it suffices to show the existence of such an inductive procedure.

Base case ($k = 1$): Since it does not matter whether we add the first vertex to $A$ or $B$, we have $E[cut(A, B) \mid v_1 = x_1] = E[cut(A, B)]$.

Inductive step: Since $v_{k+1}$ is assigned either 0 or 1 with probability 1/2, we have
$$E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k] = \frac{1}{2}\, E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k, v_{k+1} = 0] + \frac{1}{2}\, E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k, v_{k+1} = 1] = \frac{Q_1 + Q_2}{2},$$
where $Q_1$ and $Q_2$ denote the first and second terms on the right-hand side, respectively. Also,
$$\max(Q_1, Q_2) \ge \frac{Q_1 + Q_2}{2} = E[cut(A, B) \mid v_1 = x_1, \ldots, v_k = x_k].$$
Thus we use the following assignment scheme: $x_{k+1} = 0$ if $Q_1 > Q_2$, and $x_{k+1} = 1$ otherwise.

The only thing that remains is the computation of $Q_1$ and $Q_2$. To compute $Q_1$, assign the vertices $v_1, \ldots, v_k$ the values $x_1, \ldots, x_k$ respectively, and assign $v_{k+1}$ the value 0. Count the number of edges in the graph restricted to $v_1, \ldots, v_{k+1}$ whose endpoints have different values, and add 0.5 times the number of edges of the original graph that are not in the restricted graph (each such edge contributes to $Q_1$ with probability 1/2); this gives $Q_1$. The value of $Q_2$ is computed similarly. It is easy to see that the time required is polynomial in the number of edges and vertices. This completes the inductive procedure.

Remark. In the above algorithm, note that the value assigned to $v_{k+1}$ depends only on whether $v_{k+1}$ has more neighbours assigned 0 or more neighbours assigned 1, since the edges not incident to $v_{k+1}$ contribute the same quantity to $Q_1$ and $Q_2$. Using this observation, we next present a simple greedy algorithm.

Algorithm 4: Greedy-Max-Cut(G(V, E))
  Data: an unweighted graph $G(V, E)$
  Result: a 2-approximation to the Max-Cut of $G$
  consider the vertices in some order $v_1, \ldots, v_n$, where $n = |V|$;
  $A \leftarrow \{v_1\}$; $B \leftarrow \{v_2\}$;
  for each successive vertex $v_i$ do
      if $v_i$ has more neighbours in $A$ than in $B$ then put $v_i$ in $B$;
      else put $v_i$ in $A$;
  return the set of edges between $A$ and $B$, i.e., $[A, B]$;
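A minimal Python sketch of Greedy-Max-Cut (Algorithm 4), assuming adjacency is built from an edge list; for simplicity every vertex (including the first two) is placed by the same rule, which preserves the guarantee that at least half of each vertex's edges to already-placed neighbours, and hence at least half of all edges, are cut.

```python
# Sketch of the greedy max-cut algorithm: place each vertex on the side where
# it currently has fewer neighbours.
def greedy_max_cut(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    A, B = set(), set()
    for v in vertices:                       # consider vertices in the given order
        in_A = len(adj[v] & A)
        in_B = len(adj[v] & B)
        # Put v on the side that cuts more of its already-placed neighbours.
        (B if in_A > in_B else A).add(v)
    cut = [(u, v) for (u, v) in edges if (u in A) != (v in A)]
    return A, B, cut

# Example: a triangle plus a pendant vertex (4 edges).
A, B, cut = greedy_max_cut([0, 1, 2, 3], [(0, 1), (1, 2), (0, 2), (2, 3)])
print(A, B, len(cut))   # the cut contains at least half of the 4 edges
```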
3.4 Local Search Based Algorithm

Given an instance $I$ of a problem, let $\mathcal{S}(I)$ be the set of all feasible solutions for $I$. For $S \in \mathcal{S}(I)$, let $N(S)$ (the neighbourhood of $S$) be the set of all solutions $S'$ that can be obtained from $S$ via some local moves. The general scheme of local search algorithms can be summarised by the following pseudocode:

Algorithm 5: General-Local-Search
  find a good initial solution $S_0 \in \mathcal{S}(I)$;
  $S \leftarrow S_0$;
  while true do
      if $\exists S' \in N(S)$ such that $val(S')$ is strictly better than $val(S)$ then
          $S \leftarrow S'$;
      else
          $S$ is a local optimum; return $S$;

Here $val(S)$ denotes the objective value of solution $S$, which is used to compare solutions in order to progress towards a better one. For minimisation (resp. maximisation) problems, $S'$ is strictly better than $S$ if $val(S') < val(S)$ (resp. $val(S') > val(S)$). In such algorithms, we look for quick termination and a good quality guarantee on local optima.

Based on the property noted in the remark following the derandomization procedure, we now present a local search algorithm for the max-cut problem (let $n := |V|$ and $m := |E|$).

Algorithm 6: Local-Search-Max-Cut(G(V, E))
  Data: an unweighted graph $G(V, E)$
  Result: a set $S \subseteq V$
  initialise $S$ arbitrarily;
  while $\exists u \in V$ with more edges to its own side than across the cut do
      move $u$ to the other side;
  return $S$;

Theorem 18. The above algorithm runs in polynomial time.

Proof. The proof follows from the following observations. Checking whether there exists $u \in V$ with more edges to its own side than across takes at most $O(mn)$ time. Now we bound the number of iterations: initially $|E(S, \bar{S})| \ge 0$, every iteration increases $|E(S, \bar{S})|$ by at least 1, and clearly $|E(S, \bar{S})| \le m \le n^2$. Hence there are at most $n^2$ iterations. Combining, the total running time is $O(mn^3)$.

Theorem 19. The above algorithm is a 2-approximation algorithm.

Proof. Let $S$ be a local optimum, i.e., for every $u \in V$, at least as many edges incident to $u$ cross the cut as stay on the same side (otherwise moving $u$ would improve the cut). The algorithm always returns such an $S$. Therefore,
$$|E(S, \bar{S})| = \frac{1}{2} \sum_{u \in V} d_{across}(u) \ge \frac{1}{2} \sum_{u \in V} \frac{d(u)}{2} = \frac{1}{4} \cdot 2m = \frac{m}{2} \ge \frac{OPT}{2}.$$
Here, $d_{across}(u)$ is the number of edges from $S$ to $\bar{S}$ that have $u$ as an endpoint. We leave the generalisation to the weighted case as an exercise for the reader.

Exercise 20. Extend the local search algorithm to account for weighted edges.
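A minimal Python sketch of the local-search algorithm (Algorithm 6) for the unweighted case (the weighted extension is Exercise 20); the input format and the choice of initial cut are illustrative assumptions.

```python
# Sketch of local-search max-cut: while some vertex has more edges to its own
# side than across the cut, flip it. Each flip increases the cut by at least 1,
# so the procedure terminates in at most m iterations.
def local_search_max_cut(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    S = set(vertices[: len(vertices) // 2])      # arbitrary initial cut
    improved = True
    while improved:
        improved = False
        for u in vertices:
            same_side = S if u in S else set(vertices) - S
            same = len(adj[u] & same_side)       # edges from u to its own side
            across = len(adj[u]) - same          # edges from u across the cut
            if same > across:                    # local move: flip u
                (S.remove if u in S else S.add)(u)
                improved = True
    cut = [(u, v) for (u, v) in edges if (u in S) != (v in S)]
    return S, cut

# Example: the complete graph K4 (OPT = 4); any local optimum cuts at least m/2 = 3 edges.
verts = [0, 1, 2, 3]
K4 = [(i, j) for i in verts for j in verts if i < j]
print(local_search_max_cut(verts, K4))
```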
