Results 1–10 of 173
Smallest Enclosing Disks (Balls and Ellipsoids)
 in New Results and New Trends in Computer Science
, 1991
Abstract

Cited by 175 (5 self)
A simple randomized algorithm is developed which computes the smallest enclosing disk of a finite set of points in the plane in expected linear time. The algorithm is based on Seidel's recent Linear Programming algorithm, and it can be generalized to computing smallest enclosing balls or ellipsoids of point sets in higher dimensions in a straightforward way. Experimental results of an implementation are presented.

1 Introduction
In recent years, randomized algorithms have been developed for a host of problems in computational geometry. Many of these algorithms are attractive not only for their efficiency but also for their appealing simplicity. This feature makes them more accessible to nonexperts in the field, and easier to implement. One of these simple algorithms is Seidel's Linear Programming algorithm [Sei1], which solves a Linear Program with n constraints and d variables in expected O(n) time, provided d is constant.
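The randomized incremental scheme the abstract describes can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: function names and the numerical tolerance are my own, and the circumcircle step assumes the three support points are not collinear.

```python
import random

def _circle_from(boundary):
    # Exact smallest disk determined by 0, 1, 2, or 3 boundary points.
    if not boundary:
        return ((0.0, 0.0), 0.0)
    if len(boundary) == 1:
        return (boundary[0], 0.0)
    if len(boundary) == 2:
        (ax, ay), (bx, by) = boundary
        cx, cy = (ax + bx) / 2, (ay + by) / 2
        return ((cx, cy), ((ax - cx) ** 2 + (ay - cy) ** 2) ** 0.5)
    # Circumcircle of three (non-collinear) points.
    (ax, ay), (bx, by), (cx, cy) = boundary
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ((ux, uy), ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def _inside(center, r, p, eps=1e-9):
    return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= (r + eps) ** 2

def smallest_enclosing_disk(points):
    """Randomized incremental algorithm (Welzl-style), expected O(n):
    whenever a new point lies outside the current disk, it must lie on
    the boundary of the disk of the points seen so far, so we rebuild
    with that point forced onto the boundary."""
    pts = list(points)
    random.shuffle(pts)  # random order gives the expected-linear bound
    center, r = _circle_from([])
    for i, p in enumerate(pts):
        if _inside(center, r, p):
            continue
        center, r = _circle_from([p])
        for j, q in enumerate(pts[:i]):
            if _inside(center, r, q):
                continue
            center, r = _circle_from([p, q])
            for s in pts[:j]:
                if not _inside(center, r, s):
                    center, r = _circle_from([p, q, s])
    return center, r
```

For the four corners of the unit square, for instance, the routine returns the disk centered at (0.5, 0.5) with radius √2/2 regardless of the random insertion order.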
An Elementary Introduction to Modern Convex Geometry
 in Flavors of Geometry
, 1997
Abstract

Cited by 99 (2 self)
Introduction to Modern Convex Geometry, by Keith Ball. Contents: Preface; Lecture 1. Basic Notions; Lecture 2. Spherical Sections of the Cube; Lecture 3. Fritz John's Theorem; Lecture 4. Volume Ratios and Spherical Sections of the Octahedron; Lecture 5. The Brunn–Minkowski Inequality and Its Extensions; Lecture 6. Convolutions and Volume Ratios: The Reverse Isoperimetric Problem; Lecture 7. The Central Limit Theorem and Large Deviation Inequalities; Lecture 8. Concentration of Measure in Geometry; Lecture 9. Dvoretzky's Theorem; Acknowledgements; References; Index. Preface: These notes are based, somewhat loosely, on three series of lectures given by myself, J. Lindenstrauss and G. Schechtman during the Introductory Workshop in Convex Geometry held at the Mathematical Sciences Research Institute in Berkeley, early in 1996. A fourth series was given by B. Bollobás, on rapid mixing and random volume algorithms; those lectures appear elsewhere.
Lagrange Multipliers and Optimality
, 1993
Abstract

Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
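For orientation, the classical viewpoint the abstract refers to can be written out for the smooth, equality-constrained case (a textbook statement, not the paper's generalized theory):

```latex
\min_{x}\; f(x) \quad \text{s.t.}\quad g_i(x) = 0,\ i = 1,\dots,m,
\qquad
L(x,\lambda) \;=\; f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x),
```

and the first-order optimality conditions are the system of equations
\(\nabla_x L(\bar{x},\bar{\lambda}) = 0\), \(g_i(\bar{x}) = 0\) for \(i = 1,\dots,m\), in the unknowns \((\bar{x},\bar{\lambda})\).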
Measured descent: A new embedding method for finite metrics
 In Proc. 45th FOCS
, 2004
Abstract

Cited by 84 (26 self)
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X, d) embeds in Hilbert space with distortion O(√(αX · log n)), where αX is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(√(log λX · log n)) distortion embedding, where λX is the doubling constant of X. Since λX ≤ n, this result recovers Bourgain's theorem, but when the metric X is, in a sense, "low-dimensional," improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 ≤ k ≤ n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in O(log n)-dimensional ℓ∞ with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O((log n)²).
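To make the notion of a Fréchet embedding concrete: each coordinate maps a point to its distance from some subset of the space. The toy Python sketch below samples subsets at geometrically decreasing densities in the spirit of Bourgain's construction; it is NOT the paper's measured-descent technique, and all names and parameters here are my own.

```python
import math
import random

def frechet_embedding(points, dist, seed=0):
    """Toy Bourgain-style Frechet embedding: one coordinate per random
    subset S, mapping x -> d(x, S); subsets are sampled with inclusion
    probability 2^-j for scales j = 1..log n, O(log n) subsets per scale.
    Since x -> d(x, S) is 1-Lipschitz, no coordinate expands a distance."""
    rng = random.Random(seed)
    n = len(points)
    scales = max(1, math.ceil(math.log2(n)))
    subsets = []
    for j in range(1, scales + 1):
        for _ in range(scales):  # O(log n) independent subsets per scale
            S = [p for p in points if rng.random() < 2.0 ** -j]
            if not S:  # keep every coordinate well defined
                S = [points[rng.randrange(n)]]
            subsets.append(S)

    def embed(x):
        # One coordinate per sampled subset: the distance from x to S.
        return [min(dist(x, s) for s in S) for S in subsets]

    return embed
```

The lower-bound side of Bourgain's analysis (showing distances are also not contracted too much, in aggregate) is the hard part and is omitted here; the sketch only exhibits the coordinate structure that measured descent refines.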
The dimension of almost spherical sections of convex bodies
 Acta Math
, 1977
Abstract

Cited by 74 (3 self)
The well-known theorem of Dvoretzky [1] states that convex bodies of high dimension have low-dimensional sections which are almost spherical. More precisely, the theorem states that for every integer k and every ε > 0 there is an integer n(k, ε) such that any Banach space X with dimension ≥ n(k, ε) has a subspace Y of dimension k with d(Y, ℓ2^k) < 1 + ε. Here d(Y, ℓ2^k) denotes the Banach–Mazur distance coefficient between Y and the k-dimensional Hilbert space ℓ2^k, i.e. inf ‖T‖ ‖T⁻¹‖ taken over all invertible operators T from Y onto ℓ2^k. The estimate for n(k, ε) given in [1] was improved in [5] to n(k, ε) = e^{c(ε)k}. In other words (considering the dependence of n(k, ε) on k for fixed ε), the dimension of the almost spherical section (of the unit ball) given by Dvoretzky's theorem is about the log of the dimension of the space. This estimate is in general the best possible, since as observed in [10] it is easy to verify that if X = ℓ∞^n, any subspace Y of X whose Banach–Mazur distance from a Hilbert space is < 2, say, must be of dimension at most C log n. It turns out however that if
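In display form, the quantitative statement summarized in this abstract (with the improved bound from [5]) reads:

```latex
\forall\, k \in \mathbb{N},\ \varepsilon > 0:\qquad
\dim X \;\ge\; n(k,\varepsilon) = e^{c(\varepsilon)\,k}
\;\Longrightarrow\;
\exists\, Y \subseteq X,\ \dim Y = k,\ d\bigl(Y, \ell_2^k\bigr) \le 1 + \varepsilon,
```

so the largest almost-spherical section guaranteed in an n-dimensional space has dimension on the order of \(c(\varepsilon)\log n\).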
Isoperimetric Problems for Convex Bodies and a Localization Lemma
, 1995
Abstract

Cited by 73 (8 self)
We study the smallest number ψ(K) such that a given convex body K in ℝⁿ can be cut into two parts K1 and K2 by a surface with an (n−1)-dimensional measure ψ(K) · vol(K1) · vol(K2)/vol(K). Let M1(K) be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that ψ(K) ≥ ln 2 / M1(K), and give other upper and lower bounds. We conjecture that our upper bound is best possible up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.
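The main inequality of the abstract, written out (taking the natural reading of "average distance" as the volume-normalized integral, with b(K) the center of gravity):

```latex
\psi(K) \;\ge\; \frac{\ln 2}{M_1(K)},
\qquad
M_1(K) \;=\; \frac{1}{\mathrm{vol}(K)} \int_{K} \lVert x - b(K) \rVert \, dx.
```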
On Metric RamseyType Phenomena
Abstract

Cited by 69 (39 self)
The main question studied in this article may be viewed as a nonlinear analog of Dvoretzky's Theorem in Banach space theory or as part of Ramsey Theory in combinatorics.
Geometric approximation via coresets
 Combinatorial and Computational Geometry, MSRI
, 2005
Abstract

Cited by 60 (7 self)
The paradigm of coresets has recently emerged as a powerful tool for efficiently approximating various extent measures of a point set P. Using this paradigm, one quickly computes a small subset Q of P, called a coreset, that approximates the original set P, and then solves the problem on Q using a relatively inefficient algorithm. The solution for Q is then translated into an approximate solution for the original point set P. This paper describes the ways in which this paradigm has been successfully applied to various optimization and extent-measure problems.
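A minimal, concrete instance of the paradigm, under my own simplifying assumptions (this is an illustrative toy, not a construction from the paper): for directional width in the plane, keeping the extreme point along ~2π/ε evenly spaced directions yields a small subset whose width in every direction is within an additive ε · diam(P) of the true width.

```python
import math

def extent_coreset(points, eps=0.1):
    """Toy coreset for directional width in the plane: keep, for each of
    ~2*pi/eps evenly spaced directions, the point of P extreme in that
    direction.  Any direction u is within eps/2 of a sampled direction,
    so the coreset's width in direction u is within ~eps * diam(P) of
    the width of P."""
    k = max(8, math.ceil(2 * math.pi / eps))
    coreset = set()
    for i in range(k):
        a = 2 * math.pi * i / k
        u = (math.cos(a), math.sin(a))
        # Extreme point of P in direction u.
        coreset.add(max(points, key=lambda p: p[0] * u[0] + p[1] * u[1]))
    return list(coreset)

def width(points, u):
    """Directional width of a point set: spread of projections onto u."""
    proj = [p[0] * u[0] + p[1] * u[1] for p in points]
    return max(proj) - min(proj)
```

The expensive step (e.g. an exact width or bounding-box computation) would then be run on the coreset, whose size depends on ε but not on |P|, which is exactly the division of labor the abstract describes.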
A Pattern Search Filter Method for Nonlinear Programming without Derivatives
 SIAM Journal on Optimization
, 2000
Abstract

Cited by 47 (12 self)
This paper presents and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that improves either the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants, or Lagrange multipliers. It reduces trivially to the Torczon GPS (generalized pattern search) algorithm when there are no constraints, and indeed it is formulated here to reduce to the version of GPS designed to handle finitely many linear constraints if they are treated explicitly. A key feature is that it preserves the useful division into search and poll steps. Assuming local smoothness, the algorithm produces a KKT point for a problem related to the original problem. Key words: pattern search algorithm, filter algorithm, surrogate-based optimization, derivative-free convergence analysis, constrained optimization.
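The filter acceptance rule the abstract summarizes can be sketched as a Pareto-style dominance test on pairs (constraint violation h, objective f). The Python below is a hypothetical minimal illustration of that rule only; it omits the GPS search/poll machinery and the envelope margins a practical filter method would use.

```python
def dominates(a, b):
    """Entry a = (h, f) dominates b = (h, f) if a is no worse in both
    the constraint violation h and the objective f."""
    return a[0] <= b[0] and a[1] <= b[1]

class Filter:
    """Minimal sketch of filter-based step acceptance: a trial point is
    acceptable iff no stored entry dominates it; accepted points evict
    any entries they dominate, so the filter stays a Pareto front."""

    def __init__(self):
        self.entries = []  # list of (h, f) pairs, mutually non-dominating

    def acceptable(self, h, f):
        return not any(dominates(e, (h, f)) for e in self.entries)

    def add(self, h, f):
        if not self.acceptable(h, f):
            return False
        self.entries = [e for e in self.entries if not dominates((h, f), e)]
        self.entries.append((h, f))
        return True
```

A step that worsens the objective can still be accepted if it sufficiently reduces infeasibility, and vice versa, which is what lets the method dispense with penalty constants and multiplier estimates.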
Optimization Problems with Perturbations: A Guided Tour
 SIAM Review
, 1996
Abstract

Cited by 46 (10 self)
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods make it possible to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
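For orientation, the classical sensitivity result that such expansions generalize: for the canonical right-hand-side perturbation of a nonlinear program, and under standard constraint qualifications with a unique multiplier vector \(\bar{\lambda}\),

```latex
v(u) \;=\; \min_x \bigl\{\, f(x) \;:\; g_i(x) \le u_i,\ i = 1,\dots,m \,\bigr\},
\qquad
\nabla v(0) \;=\; -\bar{\lambda}.
```

The upper and lower value estimates emphasized in the paper are what extend this kind of first-order information to the harder cases where the multiplier set is unbounded or empty and the gradient formula above is unavailable.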