Results 1–10 of 334
Smallest Enclosing Disks (Balls and Ellipsoids)
 in New Results and New Trends in Computer Science
, 1991
Abstract

Cited by 211 (5 self)
A simple randomized algorithm is developed which computes the smallest enclosing disk of a finite set of points in the plane in expected linear time. The algorithm is based on Seidel's recent linear programming algorithm, and it can be generalized to computing smallest enclosing balls or ellipsoids of point sets in higher dimensions in a straightforward way. Experimental results of an implementation are presented.

1 Introduction. In recent years, randomized algorithms have been developed for a host of problems in computational geometry. Many of these algorithms are attractive not only for their efficiency but also for their appealing simplicity, which makes them more accessible to nonexperts in the field and easier to implement. One of these simple algorithms is Seidel's linear programming algorithm [Sei1], which solves a linear program with n constraints and d variables in expected O(n) time, provided d is constant.
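The algorithm described in the abstract is short enough to sketch in full. The following Python sketch is a common iterative rendering of the recursive randomized algorithm (function names are ours, and it assumes no three boundary points are collinear): points are processed in random order, and whenever a point falls outside the current disk it must lie on the boundary of the new one, which bounds the expected work.

```python
import random

def _circumcircle(a, b, c):
    # Circle through three points; assumes they are not collinear.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy)
          + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx)
          + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r

def _diametral_disk(a, b):
    # Smallest disk with a and b both on its boundary.
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    r = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2.0
    return (cx, cy), r

def _inside(disk, p, eps=1e-9):
    (cx, cy), r = disk
    return r >= 0 and (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= (r + eps) ** 2

def smallest_enclosing_disk(points, seed=0):
    # Incremental randomized algorithm: shuffle, then rebuild the disk
    # whenever a point lies outside it, pinning that point to the boundary.
    pts = list(points)
    random.Random(seed).shuffle(pts)
    disk = ((0.0, 0.0), -1.0)  # sentinel "empty" disk
    for i, p in enumerate(pts):
        if not _inside(disk, p):
            disk = (p, 0.0)
            for j, q in enumerate(pts[:i]):
                if not _inside(disk, q):
                    disk = _diametral_disk(p, q)
                    for s in pts[:j]:
                        if not _inside(disk, s):
                            disk = _circumcircle(p, q, s)
    return disk
```

On the four corners of the unit square this returns center (0.5, 0.5) and radius √2/2; each nesting level fixes one more boundary point, which is what yields the expected linear running time.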
An Elementary Introduction to Modern Convex Geometry
 in Flavors of Geometry
, 1997
Abstract

Cited by 167 (3 self)
Introduction to Modern Convex Geometry, Keith Ball. Contents: Preface; Lecture 1. Basic Notions; Lecture 2. Spherical Sections of the Cube; Lecture 3. Fritz John's Theorem; Lecture 4. Volume Ratios and Spherical Sections of the Octahedron; Lecture 5. The Brunn–Minkowski Inequality and Its Extensions; Lecture 6. Convolutions and Volume Ratios: The Reverse Isoperimetric Problem; Lecture 7. The Central Limit Theorem and Large Deviation Inequalities; Lecture 8. Concentration of Measure in Geometry; Lecture 9. Dvoretzky's Theorem; Acknowledgements; References; Index.

Preface. These notes are based, somewhat loosely, on three series of lectures given by myself, J. Lindenstrauss and G. Schechtman, during the Introductory Workshop in Convex Geometry held at the Mathematical Sciences Research Institute in Berkeley, early in 1996. A fourth series was given by B. Bollobas, on rapid mixing and random volume algorithms; they are found elsewhere.
Isoperimetric Problems for Convex Bodies and a Localization Lemma
, 1995
Abstract

Cited by 132 (8 self)
We study the smallest number ψ(K) such that a given convex body K in R^n can be cut into two parts K_1 and K_2 by a surface with an (n−1)-dimensional measure ψ(K) vol(K_1) · vol(K_2) / vol(K). Let M_1(K) be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that ψ(K) ≥ ln 2 / M_1(K), and give other upper and lower bounds. We conjecture that our upper bound is best possible up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.
Lagrange Multipliers and Optimality
, 1993
Abstract

Cited by 120 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
Measured descent: A new embedding method for finite metrics
 In Proc. 45th FOCS
, 2004
Abstract

Cited by 101 (32 self)
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X, d) embeds in Hilbert space with distortion O(√(α_X · log n)), where α_X is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(√(log λ_X · log n)) distortion embedding, where λ_X is the doubling constant of X. Since λ_X ≤ n, this result recovers Bourgain's theorem, but when the metric X is, in a sense, "low-dimensional," improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 ≤ k ≤ n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in l_∞^{O(log n)} with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O((log n)^2).
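Both classical constructions this paper unifies are Fréchet embeddings, in which each coordinate records the distance from a point to some subset of the space. As a rough illustration of that template (not the measured-descent construction itself; the subset sizes and repetition counts below are loose choices in the spirit of Bourgain's method):

```python
import math
import random

def frechet_embedding(points, dist, seed=0):
    # Bourgain-style Fréchet embedding: coordinate j maps x to d(x, S_j),
    # the distance from x to a random subset S_j. Subsets are drawn at
    # geometric scales 2, 4, ..., n, with ~log(n) repetitions per scale.
    rng = random.Random(seed)
    n = len(points)
    reps = max(1, round(math.log(n)))  # repetitions per scale (heuristic)
    subsets = []
    scale = 1
    while 2 ** scale <= n:
        for _ in range(reps):
            subsets.append(rng.sample(points, 2 ** scale))
        scale += 1

    def embed(x):
        # d(x, S) = min over s in S of d(x, s); each coordinate is
        # 1-Lipschitz, so no single coordinate expands any distance.
        return [min(dist(x, s) for s in S) for S in subsets]

    return embed
```

Because each coordinate x ↦ d(x, S) is 1-Lipschitz, the map never expands distances coordinate-wise; Bourgain's analysis shows that, with suitable normalization, the random subsets also preserve every distance up to an O(log n) factor with high probability.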
The dimension of almost spherical sections of convex bodies
 Acta Math
, 1977
Abstract

Cited by 99 (4 self)
The well-known theorem of Dvoretzky [1] states that convex bodies of high dimension have low-dimensional sections which are almost spherical. More precisely, the theorem states that for every integer k and every ε > 0 there is an integer n(k, ε) such that any Banach space X with dimension > n(k, ε) has a subspace Y of dimension k with d(Y, l_2^k) < 1 + ε. Here d(Y, l_2^k) denotes the Banach–Mazur distance coefficient between Y and the k-dimensional Hilbert space l_2^k, i.e. inf ‖T‖ ‖T^{-1}‖ taken over all operators T from Y onto l_2^k. The estimate for n(k, ε) given in [1] was improved in [5] to n(k, ε) = e^{c(ε)k}. In other words (considering the dependence of n(k, ε) on k for fixed ε), the dimension of the almost spherical section (of the unit ball) given by Dvoretzky's theorem is about the log of the dimension of the space. This estimate is in general the best possible since, as observed in [10], it is easy to verify that if X = l_∞^n, any subspace Y of X whose Banach–Mazur distance from a Hilbert space is < 2, say, must be of dimension at most C log n. It turns out however that if
On Metric Ramsey-Type Phenomena
Abstract

Cited by 90 (41 self)
The main question studied in this article may be viewed as a nonlinear analog of Dvoretzky's Theorem in Banach space theory or as part of Ramsey Theory in combinatorics.
Geometric approximation via coresets
 in Combinatorial and Computational Geometry, MSRI
, 2005
Abstract

Cited by 82 (9 self)
The paradigm of coresets has recently emerged as a powerful tool for efficiently approximating various extent measures of a point set P. Using this paradigm, one quickly computes a small subset Q of P, called a coreset, that approximates the original set P, and then solves the problem on Q using a relatively inefficient algorithm. The solution for Q is then translated into an approximate solution for the original point set P. This paper describes the ways in which this paradigm has been successfully applied to various optimization and extent-measure problems.
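A toy instance of the paradigm (our own illustration, not a construction from the survey): to approximate the diameter of a planar point set, keep only the extreme points of P along a few sampled directions, then run the quadratic-time exact algorithm on that small subset Q.

```python
import math

def extent_coreset(points, k=32):
    # Keep the extreme points of P along k evenly spaced directions in
    # [0, pi); retaining both the max and the min in each direction means
    # directional extents of P are approximately preserved by Q.
    coreset = set()
    for i in range(k):
        theta = math.pi * i / k
        ux, uy = math.cos(theta), math.sin(theta)
        proj = [(px * ux + py * uy, (px, py)) for px, py in points]
        coreset.add(max(proj)[1])
        coreset.add(min(proj)[1])
    return list(coreset)

def diameter(pts):
    # Brute-force exact diameter; affordable because it runs on the
    # coreset (at most 2k points), not on the full set P.
    return max(math.dist(p, q) for p in pts for q in pts)
```

For points on a convex curve such as the unit circle, Q has at most 2k points and diameter(Q) underestimates the true diameter by at most roughly a factor of cos(π/(2k)), while never overestimating it.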
On a Reverse Form of the Brascamp–Lieb Inequality
 Invent. Math
, 1998
Abstract

Cited by 72 (9 self)
 Add to MetaCart
We prove a reverse form of the multidimensional Brascamp–Lieb inequality. Our method also gives a new way to derive the Brascamp–Lieb inequality and is rather convenient for the study of equality cases. Introduction. We will work on the space R^n with its usual Euclidean structure. We will denote by ⟨·,·⟩ the canonical scalar product. In [BL], H. J. Brascamp and E. H. Lieb showed that for m ≥ n, p_1, ..., p_m > 1 and a_1, ..., a_m ∈ R^n, the norm of the multilinear operator Φ
A mass transportation approach to quantitative isoperimetric inequalities
 Invent. Math
, 2010
Abstract

Cited by 71 (22 self)
A sharp quantitative version of the anisotropic isoperimetric inequality is established, corresponding to a stability estimate for the Wulff shape of a given surface tension energy. This is achieved by exploiting mass transportation theory, especially Gromov's proof of the isoperimetric inequality and the Brenier–McCann Theorem. A sharp quantitative version of the Brunn–Minkowski inequality for convex sets is proved as a corollary.