Results 1–10 of 96
Approximation algorithms for disjoint paths and related routing and packing problems
Mathematics of Operations Research, 2000
Cited by 57 (1 self)
Abstract. Given a network and a set of connection requests on it, we consider the maximum edge-disjoint paths and related generalizations and routing problems that arise in assigning paths for these requests. We present improved approximation algorithms and/or integrality gaps for all problems considered; the central theme of this work is the underlying multicommodity flow relaxation. Applications of these techniques to approximating families of packing integer programs are also presented. Key words and phrases. Disjoint paths, approximation algorithms, unsplittable flow, routing, packing, integer programming, multicommodity flow, randomized algorithms, rounding, linear programming.
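The randomized-rounding idea behind such multicommodity flow relaxations can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the hypothetical `round_flow_paths` assumes each request already comes with a fractional path decomposition of the LP flow, routes each request on one of its paths with probability equal to that path's flow weight, and discards any request whose chosen path would exceed an edge capacity.

```python
import random

def round_flow_paths(requests, capacity):
    """Randomized-rounding sketch: each request i maps to a fractional
    decomposition [(path, weight), ...] with weights summing to at most 1
    (the LP relaxation value).  Route request i on path P with probability
    weight(P), or not at all with the leftover probability; then drop any
    request whose chosen path would overflow an edge of given capacity."""
    routed = {}
    load = {}  # edge -> number of paths currently using it
    for i, decomposition in requests.items():
        r = random.random()
        chosen = None
        for path, weight in decomposition:
            if r < weight:
                chosen = path
                break
            r -= weight
        if chosen is None:
            continue  # request left unrouted by the coin flip
        if all(load.get(e, 0) + 1 <= capacity for e in chosen):
            for e in chosen:
                load[e] = load.get(e, 0) + 1
            routed[i] = chosen
    return routed
```

With two requests both fully routed by the LP over a single shared edge of capacity 1, only the first survives the capacity check.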
The Probabilistic Method Yields Deterministic Parallel Algorithms
JCSS, 1994
Scheduling Unrelated Machines by Randomized Rounding
SIAM Journal on Discrete Mathematics, 1999
Cited by 32 (4 self)
In this paper, we provide a new class of randomized approximation algorithms for parallel machine scheduling problems. The most general model we consider is scheduling unrelated machines with release dates (or even network scheduling) so as to minimize the average weighted completion time. We introduce an LP relaxation in time-indexed variables for this problem. The crucial idea to derive approximation results is not to use standard list scheduling, but rather to assign jobs randomly to machines (by interpreting LP solutions as probabilities), and to perform list scheduling on each of them. Our main result is a (2 + ε)-approximation algorithm for this general model, which improves upon the performance guarantee of 16/3 due to Hall, Shmoys, and Wein. In the absence of nontrivial release dates, we get a (3/2 + ε)-approximation. At the same time we prove corresponding bounds on the quality of the LP relaxation. A perhaps surprising implication for identical parallel machines is that jobs are ra...
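The rounding step described in this abstract can be sketched as below. This is a hedged illustration, not the paper's full algorithm: the hypothetical `randomized_assignment_schedule` assumes the per-job machine probabilities have already been extracted from a time-indexed LP solution, assigns each job independently according to them, and then list-schedules each machine by Smith's rule (weight over processing time, nonincreasing).

```python
import random

def randomized_assignment_schedule(jobs, probs, seed=None):
    """Sketch of LP-guided randomized rounding for scheduling.
    jobs:  {job_id: (processing_time, weight)}
    probs: {job_id: [prob of machine 0, prob of machine 1, ...]}
           assumed to come from a precomputed LP solution.
    Returns the total weighted completion time of the schedule."""
    rng = random.Random(seed)
    machines = {}
    for j, (p, w) in jobs.items():
        # interpret the LP values as an assignment distribution
        i = rng.choices(range(len(probs[j])), weights=probs[j])[0]
        machines.setdefault(i, []).append((p, w))
    total = 0
    for queue in machines.values():
        # Smith's rule: schedule by w/p nonincreasing on each machine
        queue.sort(key=lambda pw: pw[1] / pw[0], reverse=True)
        t = 0
        for p, w in queue:
            t += p
            total += w * t
    return total
```

For example, with jobs `{1: (2, 4), 2: (1, 3)}` forced onto a single machine, Smith's rule schedules job 2 first (ratio 3 vs. 2), giving total weighted completion time 3·1 + 4·3 = 15.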
Scheduling-LPs bear probabilities: Randomized approximations for min-sum criteria
In R. Burkard and G. J. Woeginger, eds., ESA '97, LNCS 1284, 1997
Cited by 31 (5 self)
Abstract. In this paper, we provide a new class of randomized approximation algorithms for scheduling problems by directly interpreting solutions to so-called time-indexed LPs as probabilities. The most general model we consider is scheduling unrelated parallel machines with release dates (or even network scheduling) so as to minimize the average weighted completion time. The crucial idea for these multiple machine problems is not to use standard list scheduling but rather to assign jobs randomly to machines (with probabilities taken from an optimal LP solution) and to perform list scheduling on each of them. For the general model, we give a (2 + ε)-approximation algorithm. The best previously known approximation algorithm has a performance guarantee of 16/3 [HSW96]. Moreover, our algorithm also improves upon the best previously known approximation algorithms for the special case of identical parallel machine scheduling (performance guarantees (2.89 + ε) in general [CPS+96] and 2.85 for the average completion time [CMNS97], respectively). A perhaps surprising implication for identical parallel machines is that jobs are randomly assigned to machines, with each machine equally likely. In addition, in this case the algorithm has running time O(n log n) and performance guarantee 2. The same algorithm is also a 2-approximation for the corresponding preemptive scheduling problem on identical parallel machines. Finally, the results for identical parallel machine scheduling apply to both the offline and the online settings with no difference in performance guarantees. In the online setting, we are scheduling jobs that arrive continually to be processed and, for each time t, we must construct the schedule up to time t without any knowledge of the jobs that will arrive afterwards.
Further Algorithmic Aspects of the Local Lemma
2001
Cited by 30 (5 self)
We provide a method to produce an efficient algorithm to find an object whose existence is guaranteed by the Lovász Local Lemma. We feel that this method will apply to the vast majority of applications of the Local Lemma, unless the application has one of four problematic traits. However, proving that the method applies to a particular application may require proving two (possibly difficult) concentration-like properties.
Tight Approximation Results for General Covering Integer Programs
In Proc. of the Forty-Second Annual Symposium on Foundations of Computer Science, 2001
Cited by 22 (4 self)
In this paper we study approximation algorithms for solving a general covering integer program. An n-vector x of nonnegative integers is sought which minimizes c^T x, subject to Ax ≥ b, x ≤ d. The entries of A, b, c are nonnegative. Let m be the number of rows of A. Covering problems have been heavily studied in combinatorial optimization. We focus on the effect of the multiplicity constraints, x ≤ d, on approximability. Two long-standing open questions remain for this general formulation with upper bounds on the variables. (i) The integrality gap of the standard LP relaxation is arbitrarily large. Existing approximation algorithms that achieve the well-known O(log m)-approximation with respect to the LP value do so at the expense of violating the upper bounds on the variables by the same O(log m) multiplicative factor. What is the smallest possible violation of the upper bounds that still achieves cost within O(log m) of the standard LP optimum? (ii) The best known approximation ratio for the problem has been O(log(max_j Σ_i A_ij)) since 1982. This bound can be as bad as polynomial in the input size. Is an O(log m)-approximation, like the one known for the special case of Set Cover, possible? We settle these two open questions. To answer the first question, we give an algorithm based on the relatively simple new idea of randomly rounding variables to smaller-than-integer units. To settle the second question, we give a reduction from approximating the problem while respecting multiplicity constraints to approximating the problem with a bounded violation of the multiplicity constraints. Research partially supported by NSERC Grant 22780900 and a CFI New Opportunities Award.
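The core idea named in the abstract, rounding to smaller-than-integer units, can be illustrated with the sketch below. This is an assumption-laden illustration of the rounding primitive only, not the paper's full covering algorithm: the hypothetical `round_to_units` rounds each fractional coordinate to a multiple of 1/k while preserving its expectation.

```python
import math
import random

def round_to_units(x, k, rng=None):
    """Round each fractional coordinate x_j to a multiple of 1/k:
    with m = floor(k * x_j), output (m + 1)/k with probability
    k*x_j - m and m/k otherwise, so E[output] = x_j exactly.
    With k = 1 this is standard randomized rounding to integers;
    larger k rounds to finer, smaller-than-integer units."""
    rng = rng or random.Random()
    out = []
    for xj in x:
        m = math.floor(k * xj)
        frac = k * xj - m  # probability of rounding up
        out.append((m + 1) / k if rng.random() < frac else m / k)
    return out
```

A coordinate that is already a multiple of 1/k is left untouched, since the round-up probability is then zero.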
Approximating Probability Distributions Using Small Sample Spaces
Combinatorica, 1995
Cited by 20 (0 self)
We formulate the notion of a "good approximation" to a probability distribution over a finite abelian group. The approximate distribution is characterized by a parameter ε, the quality of the approximation, which is a bound on the difference between corresponding Fourier coefficients of the two distributions. It is also required that the sample space of the approximate distribution be of size polynomial in the representation length of the group elements as well as in 1/ε. Such approximations are useful in reducing or eliminating the use of randomness in randomized algorithms. We demonstrate the existence of such good approximations to arbitrary distributions. In the case of n random variables distributed uniformly and independently over the range {0, …, d − 1}, we provide an efficient construction of a good approximation. The constructed approximation has the property that any linear combination of the random variables (modulo d) has essentially the same behavior under the ...
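A classic small-sample-space construction in the same spirit (not taken from this paper) replaces many independent coin flips by a tiny explicit distribution: from a seed of s uniform bits, the 2^s − 1 parities over nonempty seed subsets are uniform and pairwise independent, using only 2^s sample points.

```python
from itertools import product

def pairwise_independent_bits(n_seed):
    """Build the full sample space of the classic construction:
    each of the 2**n_seed seed settings yields one sample point,
    whose coordinates are the XORs of the seed bits selected by
    every nonempty subset mask.  The resulting 2**n_seed - 1
    coordinates are uniform and pairwise independent bits."""
    n = 2 ** n_seed - 1
    space = []
    for seed in product((0, 1), repeat=n_seed):
        point = []
        for mask in range(1, n + 1):
            bit = 0
            for i in range(n_seed):
                if mask >> i & 1:
                    bit ^= seed[i]
            point.append(bit)
        space.append(point)
    return space
```

With `n_seed = 2`, four sample points simulate three pairwise-independent fair bits, versus the eight points a fully independent distribution would need.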
Derandomization in Computational Geometry
1996
Cited by 18 (1 self)
We survey techniques for replacing randomized algorithms in computational geometry by deterministic ones with a similar asymptotic running time. 1. Randomized algorithms and derandomization. A rapid growth of knowledge about randomized algorithms stimulates research in derandomization, that is, replacing randomized algorithms by deterministic ones with as small a decrease in efficiency as possible. Related to the problem of derandomization is the question of reducing the number of random bits needed by a randomized algorithm while retaining its efficiency; derandomization can be viewed as the ultimate case. Randomized algorithms are also related to probabilistic proofs and constructions in combinatorics (which came first historically), whose development has similarly been accompanied by an effort to replace them by explicit, nonrandom constructions whenever possible. Derandomization of algorithms can be seen as part of an effort to map the power of randomness and explain its role. ...
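The textbook derandomization pattern alluded to above is the method of conditional expectations; a standard worked example (assumed here for illustration, not taken from this survey) is MAX-CUT. A uniformly random cut includes each edge with probability 1/2, so its expected size is |E|/2; fixing vertices one at a time to whichever side keeps the conditional expectation from dropping yields a deterministic cut of at least that size.

```python
def derandomized_maxcut(n, edges):
    """Method of conditional expectations for MAX-CUT.
    Undecided edges contribute 1/2 to the conditional expectation
    regardless of v's side, so it suffices to place each vertex v
    on the side that cuts more of its already-decided edges.
    Returns (side assignment, cut size); cut size >= len(edges)/2."""
    side = {}
    for v in range(n):
        gain = {0: 0, 1: 0}  # decided edges cut if v goes to side 0 / 1
        for a, b in edges:
            if a == v and b in side:
                gain[1 - side[b]] += 1
            elif b == v and a in side:
                gain[1 - side[a]] += 1
        side[v] = 0 if gain[0] >= gain[1] else 1
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut
```

On a triangle, for instance, the method finds a cut of 2 edges, matching the best possible and exceeding the |E|/2 = 1.5 guarantee.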
Approximating DisjointPath Problems Using Packing Integer Programs
1998
Cited by 18 (2 self)
In a packing integer program, we are given a matrix A and column vectors b, c with nonnegative entries. We seek a vector x of nonnegative integers which maximizes c^T x, subject to Ax ≤ b. The edge- and vertex-disjoint path problems, together with their unsplittable flow generalization, are NP-hard problems with a multitude of applications in areas such as routing, scheduling, and bin packing. These two categories of problems are known to be conceptually related, but this connection has largely been ignored in terms of approximation algorithms. We explore the topic of approximating disjoint-path problems using polynomial-size packing integer programs. Motivated by the...
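The standard randomized-rounding recipe for such packing programs (sketched here under stated assumptions; this is not the paper's specific algorithm) is: scale the fractional LP solution down, round each variable independently, and repair any remaining constraint violations by dropping variables.

```python
import random

def round_packing(x, A, b, scale, rng=None):
    """Rounding sketch for max c^T x s.t. Ax <= b with A, b >= 0.
    Set y_j = 1 independently with probability x_j / scale, then
    greedily zero out set variables while any row of A y exceeds b.
    Since entries are nonnegative, loads only decrease during the
    repair pass, so the returned y is feasible."""
    rng = rng or random.Random()
    y = [1 if rng.random() < xj / scale else 0 for xj in x]

    def row_loads():
        return [sum(A[i][j] * y[j] for j in range(len(y)))
                for i in range(len(b))]

    for j in range(len(y)):
        if y[j] and any(load > bi for load, bi in zip(row_loads(), b)):
            y[j] = 0  # drop this variable to repair the violation
    return y
```

For example, with x = (1, 1), a single constraint x₁ + x₂ ≤ 1, and no scaling, both variables are rounded up and the repair pass drops the first, leaving the feasible solution (0, 1).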