Results 1–10 of 29
On Sparse Approximations To Randomized Strategies And Convex Combinations
, 1994
"... A randomized strategy or a convex combination may be represented by a probability vector p = (p 1 ; : : : ; pm ) . p is called sparse if it has only few positive entries. This paper presents an Approximation Lemma and applies it to matrix games, linear programming, computer chess, and uniform sampli ..."
Abstract

Cited by 21 (0 self)
A randomized strategy or a convex combination may be represented by a probability vector p = (p1, ..., pm). p is called sparse if it has only a few positive entries. This paper presents an Approximation Lemma and applies it to matrix games, linear programming, computer chess, and uniform sampling spaces. In all cases arbitrary probability vectors can be substituted by sparse ones (with only logarithmically many positive entries) without losing too much performance.
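The sampling idea behind such sparsification can be sketched in a few lines (our illustration, not the paper's construction; the name `sparsify` and the choice k = 64 are ours): replace p by the empirical distribution of k independent samples drawn from p, which by definition has at most k positive entries.

```python
import random
from collections import Counter

def sparsify(p, k, seed=0):
    """Approximate the probability vector p by the empirical
    distribution of k i.i.d. samples drawn from p; the result
    has at most k positive entries."""
    rng = random.Random(seed)
    counts = Counter(rng.choices(range(len(p)), weights=p, k=k))
    return [counts[i] / k for i in range(len(p))]

# m = 100 entries, but only at most k = 64 survive the sparsification
p = [0.5, 0.25, 0.125, 0.125] + [0.0] * 96
q = sparsify(p, k=64)
```

By standard Chernoff-bound arguments, a number of samples logarithmic in the number of payoff columns suffices to change each expected payoff by at most a small ε, which is the flavor of guarantee the Approximation Lemma provides.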
Approximation Algorithms Via Randomized Rounding: A Survey
 Series in Advanced Topics in Mathematics, Polish Scientific Publishers PWN
, 1999
"... Approximation algorithms provide a natural way to approach computationally hard problems. There are currently many known paradigms in this area, including greedy algorithms, primaldual methods, methods based on mathematical programming (linear and semidefinite programming in particular), local i ..."
Abstract

Cited by 16 (2 self)
Approximation algorithms provide a natural way to approach computationally hard problems. There are currently many known paradigms in this area, including greedy algorithms, primal-dual methods, methods based on mathematical programming (linear and semidefinite programming in particular), local improvement, and "low distortion" embeddings of general metric spaces into special families of metric spaces. Randomization is a useful ingredient in many of these approaches, and particularly so in the form of randomized rounding of a suitable relaxation of a given problem. We survey this technique here, with a focus on correlation inequalities and their applications.
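A minimal sketch of the basic rounding step (illustrative only; solving the relaxation itself is omitted): given a fractional solution x, set each coordinate to 1 independently with probability x_i, so every rounded coordinate has expectation x_i and linear objectives are preserved in expectation.

```python
import random

rng = random.Random(1)

def randomized_round(x):
    """Round a fractional vector x in [0,1]^n coordinate-wise:
    coordinate i becomes 1 independently with probability x[i]."""
    return [1 if rng.random() < xi else 0 for xi in x]

# Empirically, the mean of each rounded coordinate approaches x_i.
x = [0.2, 0.7, 0.5, 1.0, 0.0]
trials = 20000
totals = [0] * len(x)
for _ in range(trials):
    for i, bit in enumerate(randomized_round(x)):
        totals[i] += bit
means = [t / trials for t in totals]
```

The concentration of the rounded objective around its expectation is exactly where the correlation inequalities surveyed in the paper come into play.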
TRACES OF FINITE SETS: EXTREMAL PROBLEMS AND GEOMETRIC APPLICATIONS
, 1992
"... Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as HS = {E ∩ S: E ∈ H}. The Vapnik–Chervonenkis dimension (VCdimension) of H is the size of the largest subset S for which HS has 2 S edges. Hypergraphs of small VCdimension play a central role in many areas o ..."
Abstract

Cited by 11 (0 self)
Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as HS = {E ∩ S: E ∈ H}. The Vapnik–Chervonenkis dimension (VC-dimension) of H is the size of the largest subset S for which HS has 2^|S| edges. Hypergraphs of small VC-dimension play a central role in many areas of statistics, discrete and computational geometry, and learning theory. We survey some of the most important results related to this concept with special emphasis on (a) hypergraph-theoretic methods and (b) geometric applications.
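Both definitions translate directly into code; a brute-force sketch (hypergraph as a set of frozensets; the helper names are ours):

```python
from itertools import combinations

def trace(H, S):
    """Trace of hypergraph H (a set of frozenset edges) on vertex set S."""
    return {frozenset(E & S) for E in H}

def vc_dimension(H, V):
    """Largest |S|, S a subset of V, with |trace(H, S)| = 2**|S| (brute force)."""
    for k in range(len(V), 0, -1):
        if any(len(trace(H, set(S))) == 2 ** k for S in combinations(V, k)):
            return k
    return 0

# Example: all intervals of consecutive integers in {1, 2, 3, 4}.
# Intervals shatter pairs, but no triple: no trace hits the two
# endpoints of a triple while skipping the middle vertex.
V = [1, 2, 3, 4]
H = {frozenset(range(i, j + 1)) for i in V for j in V if i <= j}
```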
Lattice Approximation and Linear Discrepancy of Totally Unimodular Matrices (Extended Abstract)
 In Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2001
"...  Benjamin Doerr y Abstract This paper shows that the lattice approximation problem for totally unimodular matrices A 2 R mn can be solved eciently and optimally via a linear programming approach. The complexity of our algorithm is O(log m) times the complexity of nding an extremal point of a p ..."
Abstract

Cited by 10 (7 self)
Benjamin Doerr. This paper shows that the lattice approximation problem for totally unimodular matrices A ∈ R^(m×n) can be solved efficiently and optimally via a linear programming approach. The complexity of our algorithm is O(log m) times the complexity of finding an extremal point of a polytope in R^n described by 2(m + n) linear constraints. We also consider the worst-case approximability, called linear discrepancy. Here we derive an upper bound for the linear discrepancy of a totally unimodular m × n matrix A: lindisc(A) ≤ min{1 − 1/(n+1), 1 − 1/m}. This bound is sharp. It proves Spencer's conjecture lindisc(A) ≤ (1 − 1/(n+1)) herdisc(A) for totally unimodular matrices. It seems to be the first time that linear programming is successfully used for a discrepancy problem. Let A ∈ R^(m×n) be any real matrix and b := Ap, p ∈ R^n, a point of the vector space ...
Approximation of Multi-Color Discrepancy
 Randomization, Approximation and Combinatorial Optimization (Proceedings of APPROX-RANDOM 1999), volume 1671 of Lecture Notes in Computer Science
, 1999
"... . In this article we introduce (combinatorial) multicolor discrepancy and generalize some classical results from 2color discrepancy theory to c colors. We give a recursive method that constructs ccolorings from approximations to the 2color discrepancy. This method works for a large class of ..."
Abstract

Cited by 9 (8 self)
In this article we introduce (combinatorial) multi-color discrepancy and generalize some classical results from 2-color discrepancy theory to c colors. We give a recursive method that constructs c-colorings from approximations to the 2-color discrepancy. This method works for a large class of theorems, such as the six-standard-deviations theorem of Spencer, the Beck–Fiala theorem, and the results of Matoušek, Welzl and Wernisch for bounded VC-dimension. On the other hand, there are examples showing that discrepancy in c colors cannot be bounded in terms of two-color discrepancy, even if c is a power of 2. For the linear discrepancy version of the Beck–Fiala theorem the recursive approach also fails. Here we extend the method of floating colors to multi-colorings and prove multi-color versions of the Beck–Fiala theorem and the Bárány–Grünberg theorem. Combinatorial discrepancy theory deals with the problem of partitioning the vertices of a hypergraph (set ...
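The recursive construction of a c-coloring from a 2-coloring procedure can be sketched as follows (assuming c is a power of two; `two_color` stands for any 2-coloring subroutine supplied by the caller, e.g. one with low discrepancy):

```python
def c_coloring(vertices, two_color, c):
    """Build a c-coloring (c a power of two) recursively: 2-color the
    vertices, then color each of the two classes with c/2 colors."""
    if c == 1:
        return {v: 0 for v in vertices}
    split = two_color(vertices)          # maps vertex -> 0 or 1
    left = [v for v in vertices if split[v] == 0]
    right = [v for v in vertices if split[v] == 1]
    result = dict(c_coloring(left, two_color, c // 2))
    for v, col in c_coloring(right, two_color, c // 2).items():
        result[v] = c // 2 + col
    return result

# Toy 2-coloring: alternate vertices (a real one would minimize discrepancy).
two_color = lambda vs: {v: i % 2 for i, v in enumerate(vs)}
coloring = c_coloring(list(range(16)), two_color, 4)
```

If each 2-coloring step keeps the imbalance of every hyperedge small, the log2(c) levels of recursion accumulate only a correspondingly bounded multi-color discrepancy, which is the class of theorems the recursive method handles.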
Polynomials with Littlewood-Type Coefficient Constraints
 MICHIGAN MATH. J
, 2001
"... This survey paper focuses on my contributions to the area of polynomials with Littlewoodtype coefficient constraints. It summarizes the main results from many of my recent papers some of which are joint with Peter Borwein. ..."
Abstract

Cited by 8 (3 self)
This survey paper focuses on my contributions to the area of polynomials with Littlewood-type coefficient constraints. It summarizes the main results from many of my recent papers, some of which are joint with Peter Borwein.
Two-way rounding
 SIAM J. Discrete Math
, 1995
"... Abstract. Given n real numbers 0 ≤ x1,..., xn < 1 and a permutation σ of {1,..., n}, we can always find ¯x1,..., ¯xn ∈ {0, 1} so that the partial sums ¯x1 + · · · + ¯xk and ¯xσ1 + · · · + ¯xσk differ from the unrounded values x1 + · · · + xk and xσ1 + · · · + xσk by at most n/(n + 1), for 1 ..."
Abstract

Cited by 7 (0 self)
Abstract. Given n real numbers 0 ≤ x1,..., xn < 1 and a permutation σ of {1,..., n}, we can always find ¯x1,..., ¯xn ∈ {0, 1} so that the partial sums ¯x1 + · · · + ¯xk and ¯xσ1 + · · · + ¯xσk differ from the unrounded values x1 + · · · + xk and xσ1 + · · · + xσk by at most n/(n + 1), for 1 ≤ k ≤ n. The latter bound is best possible. The proof uses an elementary argument about flows in a certain network, and leads to a simple algorithm that finds an optimum way to round. Many combinatorial optimization problems in integers can be solved or approximately solved by first obtaining a real-valued solution and then rounding to integer values. Spencer [11] proved that it is always possible to do the rounding so that partial sums in two independent orderings are properly rounded. His proof was indirect—a corollary of more general results [7] about discrepancies of set systems—and it guaranteed only that the rounded partial sums would differ by at most 1 − 2^(−2n) from the unrounded values. The purpose of this note is to give a more direct proof, which leads to a sharper result. Let x1,..., xn be real numbers and let σ be a permutation of {1,..., n}. We will write Sk = x1 + · · · + xk, Σk = xσ1 + · · · + xσk, 0 ≤ k ≤ n,
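The statement is easy to verify exhaustively on small instances (brute force for illustration only; the paper's network-flow algorithm is not reproduced here):

```python
from itertools import product

def max_deviation(x, xbar, order):
    """Largest |rounded partial sum - true partial sum| along the given order."""
    dev = sx = sb = 0.0
    for i in order:
        sx += x[i]
        sb += xbar[i]
        dev = max(dev, abs(sx - sb))
    return dev

def best_rounding(x, sigma):
    """Try all 0/1 roundings; minimize the worse of the two deviations
    (identity order and the order given by the permutation sigma)."""
    ident = range(len(x))
    return min(
        (max(max_deviation(x, b, ident), max_deviation(x, b, sigma)), b)
        for b in product((0, 1), repeat=len(x))
    )

x = [0.4, 0.7, 0.3, 0.6]
sigma = [2, 0, 3, 1]
dev, b = best_rounding(x, sigma)
```

For every choice of x and σ the optimum found this way should respect the n/(n + 1) bound of the theorem; here n = 4, so the worst partial-sum error is at most 4/5.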
Gap Inequalities for the Cut Polytope
 EUROPEAN J. COMBIN
, 1996
"... We introduce a new class of inequalities valid for the cut polytope, which we call gap inequalities. Each gap inequality is given by a finite sequence of integers, whose "gap" is defined as the smallest discrepancy arising when decomposing the sequence into two parts as equal as possible. Gap inequa ..."
Abstract

Cited by 7 (0 self)
We introduce a new class of inequalities valid for the cut polytope, which we call gap inequalities. Each gap inequality is given by a finite sequence of integers, whose "gap" is defined as the smallest discrepancy arising when decomposing the sequence into two parts as equal as possible. Gap inequalities include the hypermetric inequalities and the negative type inequalities, which have been extensively studied in the literature. They are also related to a positive semidefinite relaxation of the max-cut problem. A natural question is to decide for which integer sequences the corresponding gap inequalities define facets of the cut polytope. For this property, we present a set of necessary and sufficient conditions in terms of the root patterns and of the rank of an associated matrix. We also prove that there is no facet-defining inequality with gap greater than one that is induced by a sequence of integers using only two distinct values.
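Under the definition quoted above, the gap of a short sequence can be computed by brute force (illustrative sketch; the function name `gap` is ours):

```python
from itertools import product

def gap(b):
    """Smallest achievable |difference of the two part sums| when the
    terms of b are split into two parts: min over sign vectors of
    |sum of s_i * b_i| with each s_i in {+1, -1}."""
    return min(abs(sum(s * v for s, v in zip(signs, b)))
               for signs in product((1, -1), repeat=len(b)))

# 2 + 3 = 5: the sequence splits into two exactly equal parts (gap 0),
# while an odd total like 1 + 1 + 1 can never be balanced perfectly.
```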
On the Discrepancy of Strongly Unimodular Matrices
, 2000
"... A (0, 1) matrix A is strongly unimodular if A is totally unimodular and every matrix obtained from A by setting a nonzero entry to 0 is also totally unimodular. Here we consider the linear discrepancy of strongly unimodular matrices. It was proved by Lovaz, et.al. [5] that for any matrix A, lindisc ..."
Abstract

Cited by 5 (0 self)
A (0, 1) matrix A is strongly unimodular if A is totally unimodular and every matrix obtained from A by setting a nonzero entry to 0 is also totally unimodular. Here we consider the linear discrepancy of strongly unimodular matrices. It was proved by Lovász et al. [5] that for any matrix A, lindisc(A) ≤ herdisc(A). (1) When A is the incidence matrix of a set system, a stronger inequality holds: for any family H of subsets of {1, 2, ..., n}, lindisc(H) ≤ (1 − t_n) herdisc(H), where t_n = 2^(−2n) (J. Spencer, [6]). In this paper we prove that the constant t_n can be improved to 3^(−(n+1)/2) for strongly unimodular matrices. The first author is supported by NSF Grant DMS-9304580. The second author is supported by a Courant Instructorship, New York University. A matrix A is said to be totally unimodular if the determinant of each square submatrix of A is 0 or ±1. Clearly the entries of a totally unimodular matrix must be 0 or ±1. A matr...
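Both definitions can be checked directly on tiny matrices (exponential-time brute force, for illustration only):

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    """Every square submatrix has determinant 0, 1 or -1 (brute force)."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

def is_strongly_unimodular(A):
    """Totally unimodular, and still so after zeroing any one nonzero entry."""
    if not is_totally_unimodular(A):
        return False
    for i in range(len(A)):
        for j in range(len(A[0])):
            if A[i][j] != 0:
                B = [row[:] for row in A]
                B[i][j] = 0
                if not is_totally_unimodular(B):
                    return False
    return True
```

For example, the interval matrix [[1, 1, 0], [0, 1, 1]] is strongly unimodular, while [[1, 1, 0], [0, 1, 1], [1, 0, 1]] has determinant 2 and is not even totally unimodular.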
Linear Discrepancy of Totally Unimodular Matrices
 Combinatorica
, 2001
"... We show that the linear discrepancy of a totally unimodular mn matrix A is at most lindisc(A) 1 1 n+1 : This bound is sharp. In particular, this result proves Spencer's conjecture lindisc(A) (1 1 n+1 ) herdisc(A) in the special case of totally unimodular matrices. If m 2, we also show lin ..."
Abstract

Cited by 5 (3 self)
We show that the linear discrepancy of a totally unimodular m × n matrix A is at most lindisc(A) ≤ 1 − 1/(n+1). This bound is sharp. In particular, this result proves Spencer's conjecture lindisc(A) ≤ (1 − 1/(n+1)) herdisc(A) in the special case of totally unimodular matrices. If m ≥ 2, we also show lindisc(A) ≤ 1 − 1/m. Finally we give a characterization of those totally unimodular matrices which have linear discrepancy 1 − 1/(n+1): besides m × 1 matrices containing a single nonzero entry, they are exactly the ones which contain n + 1 rows such that each n thereof are linearly independent. A central proof idea is the use of linear programs. A preliminary version of this result appeared at SODA 2001. This work was partially supported by the graduate school 'Effiziente Algorithmen und Multiskalenmethoden', Deutsche Forschungsgemeinschaft. A similar result has been independently obtained by T. Bohman and R. Holzman and presented at the Conference on Hypergraphs (Gyula O.H. Katona is 60), Budapest, in June 2001. Mathematics Subject Classification (2000): Primary 11K38, 90C05. Secondary 05C65. Proposed abbreviated title: Linear Discrepancy.