Results 1 – 9 of 9
The Grothendieck constant is strictly smaller than Krivine’s bound
In 52nd Annual IEEE Symposium on Foundations of Computer Science, 2011. Preprint available at http://arxiv.org/abs/1103.6161
Abstract

Cited by 17 (2 self)
The (real) Grothendieck constant K_G is the infimum over those K ∈ (0, ∞) such that for every m, n ∈ N and every m × n real matrix (a_ij) we have

max over {x_i}_{i=1}^m, {y_j}_{j=1}^n ⊆ S^{n+m−1} of ∑_{i=1}^m ∑_{j=1}^n a_ij ⟨x_i, y_j⟩ ≤ K · max over {ε_i}_{i=1}^m, {δ_j}_{j=1}^n ⊆ {−1, 1} of ∑_{i=1}^m ∑_{j=1}^n a_ij ε_i δ_j.

The classical Grothendieck inequality asserts the nonobvious fact that the above inequality does hold true for some K ∈ (0, ∞) that is independent of m, n and (a_ij). Since Grothendieck's 1953 discovery of this powerful theorem, it has found numerous applications in a variety of areas, but despite attracting a lot of attention, the exact value of the Grothendieck constant K_G remains a mystery. The last progress on this problem was in 1977, when Krivine proved that K_G ≤ π/(2 log(1+√2)) and conjectured that his bound is optimal. Krivine's conjecture has been restated repeatedly since 1977, focusing subsequent research on the search for examples of matrices (a_ij) which exhibit (asymptotically, as m, n → ∞) a lower bound on K_G that matches Krivine's bound. Here we obtain an improved Grothendieck inequality that holds for all matrices (a_ij) and yields a bound K_G < π/(2 log(1+√2)) − ε_0 for some effective constant ε_0 > 0. Other than disproving Krivine's conjecture, and along the way also disproving an intermediate conjecture of König that was made in 2000 as a step towards Krivine's conjecture, our main contribution is conceptual: despite dealing with a binary rounding problem, random 2-dimensional projections, when combined with a careful partition of R² in order to round the projected vectors to values in {−1, 1}, perform better than the ubiquitous random hyperplane technique. By establishing the usefulness of higher-dimensional rounding schemes, this fact has consequences in approximation algorithms. Specifically, it yields the best known polynomial-time approximation algorithm for the Frieze–Kannan Cut Norm problem, a generic and well-studied optimization problem with many applications.
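The inequality above can be checked numerically on small instances, since the sign maximum on the right-hand side can be brute-forced when m and n are tiny. The following Python sketch is illustrative only; the matrix, the unit vectors, and the dimensions are arbitrary toy choices, not from the paper:

```python
import itertools
import math

import numpy as np

# Krivine's 1977 bound pi / (2 log(1 + sqrt(2))) ~= 1.7822.
K_KRIVINE = math.pi / (2 * math.log(1 + math.sqrt(2)))

rng = np.random.default_rng(0)
m, n = 4, 4
A = rng.standard_normal((m, n))

# Any unit vectors x_i, y_j in S^{n+m-1} give a feasible point of the
# vector-valued (left-hand) side.
X = rng.standard_normal((m, m + n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.standard_normal((n, m + n))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
vector_value = float(np.einsum("ij,ik,jk->", A, X, Y))

# Brute-force the sign maximum on the right-hand side (2^m * 2^n terms,
# so only feasible for tiny m, n).
sign_max = max(
    float(np.array(eps) @ A @ np.array(delta))
    for eps in itertools.product([-1.0, 1.0], repeat=m)
    for delta in itertools.product([-1.0, 1.0], repeat=n)
)

# Krivine's bound guarantees this for every matrix and every choice of
# unit vectors.
assert vector_value <= K_KRIVINE * sign_max + 1e-9
```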
Efficient Rounding for the Noncommutative Grothendieck Inequality (Extended Abstract)
, 2013
Abstract

Cited by 8 (1 self)
The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a constant-factor polynomial-time approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.
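One of the applications mentioned, the orthogonal Procrustes problem, has a well-known exact solution in the commutative case via the singular value decomposition. The sketch below illustrates only that classical special case on a planted instance, not the paper's rounding procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthogonal Procrustes: given A and B, find an orthogonal Q minimizing
# ||A Q - B||_F. Classical fact: if A^T B = U S V^T is an SVD, then
# Q = U V^T is optimal.
A = rng.standard_normal((10, 3))
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = A @ Q_true  # planted solution, so the optimum achieves zero error

U, _, Vt = np.linalg.svd(A.T @ B)
Q = U @ Vt

assert np.allclose(Q @ Q.T, np.eye(3))  # Q is orthogonal
assert np.allclose(A @ Q, B)            # planted rotation recovered
```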
Community detection in sparse networks via Grothendieck’s inequality
, 2015
Abstract

Cited by 8 (2 self)
We present a simple and flexible method to prove consistency of semidefinite optimization problems on random graphs. The method is based on Grothendieck’s inequality. Unlike the previous uses of this inequality that lead to constant relative accuracy, we achieve any given relative accuracy by leveraging randomness. We illustrate the method with the problem of community detection in sparse networks, those with bounded average degrees. We demonstrate that even in this regime, various simple and natural semidefinite programs can be used to recover the community structure up to an arbitrarily small fraction of misclassified vertices. The method is general; it can be applied to a variety of stochastic models of networks and semidefinite programs.
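As a hedged illustration of the setting only (not the paper's semidefinite-programming method), one can plant two communities in a stochastic block model and recover most labels from the leading eigenvector of the centered adjacency matrix; all parameters below are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
labels = np.repeat([1.0, -1.0], n // 2)

# Stochastic block model: within-community edge probability a/n,
# between-community probability b/n.
a, b = 30.0, 5.0
P = np.where(np.equal.outer(labels, labels), a / n, b / n)
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

# Center the adjacency matrix and take the signs of the top eigenvector.
B = A - (a + b) / (2 * n)
eigvals, eigvecs = np.linalg.eigh(B)
pred = np.sign(eigvecs[:, -1])

# Agreement up to a global sign flip of the labeling.
accuracy = max(np.mean(pred == labels), np.mean(pred == -labels))
assert accuracy > 0.9  # most vertices recovered at this signal strength
```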
SOLUTION OF THE PROPELLER CONJECTURE IN R³
Abstract

Cited by 3 (2 self)
It is shown that every measurable partition {A1, ..., Ak} of R³ satisfies ∑_{i=1}^{k} ...
SPARSE RANDOM GRAPHS: REGULARIZATION AND CONCENTRATION OF THE LAPLACIAN
Abstract

Cited by 3 (2 self)
Abstract. We study random graphs with possibly different edge probabilities in the challenging sparse regime of bounded expected degrees. Unlike in the dense case, neither the graph adjacency matrix nor its Laplacian concentrates around its expectation, due to the highly irregular distribution of node degrees. It has been empirically observed that simply adding a constant of order 1/n to each entry of the adjacency matrix substantially improves the behavior of the Laplacian. Here we prove that this regularization indeed forces the Laplacian to concentrate even in sparse graphs. As an immediate consequence in network analysis, we establish the validity of one of the simplest and fastest approaches to community detection – regularized spectral clustering – under the stochastic block model. Our proof of concentration of the regularized Laplacian is based on Grothendieck's inequality and factorization, combined with paving arguments.
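The regularization described, adding a constant of order 1/n to every entry of the adjacency matrix before forming the Laplacian, can be sketched as follows. The sparse SBM instance and the accuracy threshold are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
labels = np.repeat([1.0, -1.0], n // 2)

# Sparse stochastic block model with bounded expected degrees.
a, b = 20.0, 4.0
P = np.where(np.equal.outer(labels, labels), a / n, b / n)
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

# Regularize: add tau/n to every entry, tau of the order of the mean
# degree. This also guarantees every regularized degree is positive.
tau = A.sum() / n
A_tau = A + tau / n

# Normalized Laplacian of the regularized graph.
deg = A_tau.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A_tau @ D_inv_sqrt

# Regularized spectral clustering: signs of the eigenvector of the
# second-smallest eigenvalue split the two communities.
eigvals, eigvecs = np.linalg.eigh(L)
pred = np.sign(eigvecs[:, 1])

accuracy = max(np.mean(pred == labels), np.mean(pred == -labels))
assert accuracy > 0.8
```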
COMPUTING THE PARTITION FUNCTION OF A POLYNOMIAL ON THE BOOLEAN CUBE
, 2015
Abstract
Abstract. For a polynomial f: {−1, 1}^n → C, we define the partition function as the average of e^{λf(x)} over all points x ∈ {−1, 1}^n, where λ ∈ C is a parameter. We present an algorithm which, given such f, λ and ε > 0, approximates the partition function within a relative error of ε in N^{O(ln n − ln ε)} time, provided |λ| ≤ (2L√d)^{−1}, where d is the degree, L is (roughly) the Lipschitz constant of f, and N is the number of monomials in f. We apply the algorithm to approximate the maximum of a polynomial f: {−1, 1}^n → R.
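For small n the partition function can be computed exactly by enumerating the cube, which makes the definition concrete. The quadratic f and the parameters below are toy assumptions, not from the paper:

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(4)
n, lam = 10, 0.3

# Toy polynomial f(x) = sum_{i<j} w_ij x_i x_j of degree 2.
W = np.triu(rng.standard_normal((n, n)), k=1)

def f(x):
    x = np.asarray(x, dtype=float)
    return float(x @ W @ x)

# Partition function: the average of e^{lam * f(x)} over all 2^n points.
Z = sum(
    math.exp(lam * f(x)) for x in itertools.product([-1.0, 1.0], repeat=n)
) / 2 ** n

# Each monomial x_i x_j averages to zero over the cube, so E[f] = 0 and
# Jensen's inequality gives Z >= e^{lam * E[f]} = 1.
assert Z >= 1.0
```

For real λ the same enumeration also exhibits the connection to optimization mentioned in the abstract: as λ grows, (1/λ) log Z is dominated by the maximum of f over the cube.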