Results 1–10 of 63
On logarithmic concave measures and functions
1972
Cited by 87 (11 self)
Abstract: The purpose of the present paper is to give a new proof for the main theorem proved in [3].
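The abstract does not state the theorem itself; in the formulation usually given for Prékopa's log-concavity results (stated here from general knowledge, not from this listing), it reads:

```latex
% Prékopa's theorem: marginals of log-concave functions are log-concave.
\text{If } f:\mathbb{R}^n\times\mathbb{R}^m\to[0,\infty) \text{ is log-concave, then }
g(x)=\int_{\mathbb{R}^m} f(x,y)\,dy \text{ is log-concave on } \mathbb{R}^n.
% Consequently, a measure \mu with log-concave density is a log-concave measure:
\mu\big(\lambda A+(1-\lambda)B\big)\ \ge\ \mu(A)^{\lambda}\,\mu(B)^{1-\lambda},
\qquad 0\le\lambda\le 1,
% for convex A, B, where \lambda A + (1-\lambda)B is the Minkowski combination.
```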
The Brunn-Minkowski inequality
Bull. Amer. Math. Soc. (N.S.), 2002
Cited by 74 (5 self)
Abstract: In 1978, Osserman [124] wrote an extensive survey on the isoperimetric inequality. The Brunn-Minkowski inequality can be proved in a page, yet quickly yields the classical isoperimetric inequality for important classes of subsets of R^n, and deserves to be better known. This guide explains the relationship between the Brunn-Minkowski inequality and other inequalities in geometry and analysis, and some applications.
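For reference, the inequality this survey is about states that for nonempty compact sets A, B ⊂ R^n, with A + B = {a + b : a ∈ A, b ∈ B} the Minkowski sum:

```latex
\mathrm{vol}_n(A+B)^{1/n}\ \ge\ \mathrm{vol}_n(A)^{1/n}+\mathrm{vol}_n(B)^{1/n},
```

with equality for convex bodies exactly when A and B are homothetic.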
Isoperimetric Problems for Convex Bodies and a Localization Lemma
1995
Cited by 73 (8 self)
Abstract: We study the smallest number ψ(K) such that a given convex body K in R^n can be cut into two parts K_1 and K_2 by a surface with (n−1)-dimensional measure ψ(K)·vol(K_1)·vol(K_2)/vol(K). Let M_1(K) be the average distance of a point of K from its center of gravity. We prove for the "isoperimetric coefficient" that ψ(K) ≥ ln 2 / M_1(K), and give other upper and lower bounds. We conjecture that our upper bound is best possible up to a constant. Our main tool is a general "Localization Lemma" that reduces integral inequalities over the n-dimensional space to integral inequalities in a single variable. This lemma was first proved by two of the authors in an earlier paper, but here we give various extensions and variants that make its application smoother. We illustrate the usefulness of the lemma by showing how a number of well-known results can be proved using it.
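The isoperimetric bound described here, ψ(K) ≥ ln 2 / M₁(K) in the usual notation, can be sanity-checked on the simplest convex body, the unit interval, where both sides have closed forms; the sketch below (plain Python, purely illustrative) does this numerically:

```python
import math

# Check psi(K) >= ln 2 / M_1(K) on K = [0, 1].  Cutting K at a point t
# gives parts of lengths t and 1 - t; the cutting "surface" is a single
# point of 0-dimensional measure 1, so the ratio for that cut is
#   vol(K) / (vol(K_1) * vol(K_2)) = 1 / (t * (1 - t)).
def psi_interval(steps=10_000):
    # minimize the ratio over a grid of cut points t in (0, 1)
    return min(1.0 / (t * (1.0 - t))
               for t in (i / steps for i in range(1, steps)))

# M_1(K): average distance from the center of gravity 1/2,
# i.e. the integral of |x - 1/2| over [0, 1], which equals 1/4.
M1 = 0.25

psi = psi_interval()          # best cut is t = 1/2, giving psi = 4
bound = math.log(2) / M1      # 4 ln 2 ~ 2.77, so the bound holds

print(psi, bound, psi >= bound)
```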
A Riemannian interpolation inequality à la Borell, Brascamp and Lieb
2001
Cited by 56 (7 self)
Abstract: A concavity estimate is derived for interpolations between L¹(M) mass densities on a Riemannian manifold. The inequality sheds new light on the theorems of Prékopa, Leindler, Borell, Brascamp and Lieb that it generalizes from Euclidean space. Due to the curvature of the manifold, the new Riemannian versions of these theorems incorporate a volume distortion factor which can, however, be controlled via lower bounds on Ricci curvature. The method uses optimal mappings from mass transportation theory. Along the way, several new properties are established for optimal mass transport and interpolating maps on a Riemannian manifold.
Optimization under uncertainty: State-of-the-art and opportunities
Computers and Chemical Engineering, 2004
Cited by 41 (0 self)
Abstract: A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Uncertainty, for instance, governs the prices of fuels, the availability of electricity, and the demand for chemicals. A key difficulty in optimization under uncertainty is in dealing with an uncertainty space that is huge and frequently leads to very large-scale optimization models. Decision-making under uncertainty is often further complicated by the presence of integer decision variables to model logical and other discrete decisions in a multiperiod or multistage setting. This paper reviews theory and methodology that have been developed to cope with the complexity of optimization problems under uncertainty. We discuss and contrast classical recourse-based stochastic programming, robust stochastic programming, probabilistic (chance-constrained) programming, fuzzy programming, and stochastic dynamic programming. The advantages and shortcomings of these models are reviewed and illustrated through examples. Applications and the state of the art in computations are also reviewed. Finally, we discuss several main areas for future development in this field. These include the development of polynomial-time approximation schemes for multistage stochastic programs and the application of global optimization algorithms to two-stage and chance-constrained formulations.
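As a concrete illustration of the recourse-based (two-stage) model contrasted in this survey, the following toy newsvendor example (all numbers invented for illustration) solves a two-stage problem over a small discrete scenario set by direct enumeration:

```python
# Two-stage stochastic program, newsvendor style.  First stage: choose an
# order quantity x at unit cost c before demand is known.  Second stage
# (recourse): once demand d is revealed, sell min(x, d) at price r.
# We maximize expected profit  E[r * min(x, d)] - c * x  over the scenarios.
c, r = 1.0, 3.0
scenarios = [(0.3, 50), (0.5, 100), (0.2, 150)]   # (probability, demand)

def expected_profit(x):
    return sum(p * r * min(x, d) for p, d in scenarios) - c * x

# Enumerate candidate first-stage decisions (fine for a toy problem;
# real instances use stochastic-programming solvers instead).
best_x = max(range(0, 201), key=expected_profit)
print(best_x, expected_profit(best_x))   # order 100 units, profit 155.0
```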
The geometry of log-concave functions and an O∗(n³) sampling algorithm
Cited by 36 (13 self)
Abstract: The class of log-concave functions in R^n is a common generalization of Gaussians and of indicator functions of convex sets. Motivated by the problem of sampling from a log-concave density function, we study their geometry and introduce a technique for “smoothing” them out. This leads to an efficient sampling algorithm (by a random walk) with no assumptions on the local smoothness of the density function. After appropriate preprocessing, the algorithm produces a point from approximately the right distribution in time O∗(n⁴), and in amortized time O∗(n³) if many sample points are needed (where the asterisk indicates that dependence on the error parameter and factors of log n is not shown).
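The paper's O∗(n³) walk is far more involved, but the basic idea of sampling a log-concave density by a random walk can be illustrated with a plain 1-D Metropolis walk on the non-smooth Laplace density (a toy sketch, not the authors' algorithm):

```python
import math, random

# Toy Metropolis random walk targeting the log-concave but non-smooth
# Laplace density f(x) proportional to exp(-|x|).  This is only a 1-D
# illustration of random-walk sampling from a log-concave density.
def sample_laplace_rw(n_steps=200_000, step=1.0, seed=0):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)           # propose a random step
        # accept with probability min(1, f(y)/f(x)) = min(1, exp(|x|-|y|))
        if math.log(rng.random() + 1e-300) < abs(x) - abs(y):
            x = y
        out.append(x)
    return out

xs = sample_laplace_rw()
mean = sum(xs) / len(xs)
second = sum(x * x for x in xs) / len(xs)
print(mean, second)   # theory for Laplace(1): E[x] = 0, E[x^2] = 2
```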
Geometric random walks: a survey
Combinatorial and Computational Geometry, 2005
Cited by 28 (5 self)
Abstract: The developing theory of geometric random walks is outlined here. Three aspects are discussed: general methods for estimating convergence (the “mixing” rate), isoperimetric inequalities in R^n and their intimate connection to random walks, and algorithms for fundamental problems (volume computation and convex optimization) that are based on sampling by random walks.
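One of the simplest walks covered by such surveys is the ball walk; a toy 2-D sketch (unit disk as the convex body, all parameters illustrative) is:

```python
import random

# Ball walk in a convex body K: from the current point, pick a uniformly
# random point in the ball of radius delta around it, and move there only
# if it stays inside K.  Here K is the unit disk in R^2 as a toy example.
def inside_unit_disk(p):
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def ball_walk(n_steps=50_000, delta=0.3, seed=1):
    rng = random.Random(seed)
    x = (0.0, 0.0)
    points = []
    for _ in range(n_steps):
        # uniform proposal in the disk of radius delta (rejection sampling)
        while True:
            dx = rng.uniform(-delta, delta)
            dy = rng.uniform(-delta, delta)
            if dx * dx + dy * dy <= delta * delta:
                break
        y = (x[0] + dx, x[1] + dy)
        if inside_unit_disk(y):    # stay put if the proposal leaves K
            x = y
        points.append(x)
    return points

pts = ball_walk()
mean_r2 = sum(p[0] ** 2 + p[1] ** 2 for p in pts) / len(pts)
print(mean_r2)   # uniform on the unit disk gives E[r^2] = 1/2
```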
Programming Under Probabilistic Constraint with Discrete Random Variable
Trends in Mathematical Programming, L. Grandinetti et al., 1998
Cited by 26 (9 self)
Abstract: A probabilistic constraint of the type P(Ax ≤ β) ≥ p is considered, and it is proved that under some conditions the constraining function is quasi-concave. The probabilistic constraint is embedded into a mathematical programming problem whose algorithmic solution is also discussed.
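With a discrete random right-hand side, the probability P(Ax ≤ β) is a finite sum over scenarios, so checking the constraint at a given x is straightforward; a minimal sketch with invented data:

```python
# Evaluate the chance constraint P(A x <= beta) >= p when beta is a
# discrete random vector: enumerate the scenarios of beta and add up the
# probability of those in which every row constraint holds.
A = [[1.0, 2.0],
     [3.0, 1.0]]
scenarios = [(0.2, [4.0, 5.0]),   # (probability, realization of beta)
             (0.5, [6.0, 7.0]),
             (0.3, [8.0, 9.0])]

def chance(x):
    def feasible(beta):
        return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b
                   for row, b in zip(A, beta))
    return sum(p for p, beta in scenarios if feasible(beta))

x = [1.0, 1.5]
print(chance(x))   # Ax = (4.0, 4.5): feasible in every scenario -> 1.0
```

A constraint P(Ax ≤ β) ≥ p then just requires `chance(x) >= p` at the candidate point.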
Contributions to the theory of stochastic programming
Mathematical Programming, 1973
Cited by 25 (10 self)
Abstract: Two stochastic programming decision models are presented. In the first one, we use probabilistic constraints and constraints involving conditional expectations, and further incorporate penalties into the objective. The probabilistic constraint prescribes a lower bound for the probability of simultaneous occurrence of events, the number of which can be infinite, in which case stochastic processes are involved. The second one is a variant of the model of two-stage programming under uncertainty, where we require the solvability of the second-stage problem only with a prescribed (high) probability. The theory presented in this paper is based to a large extent on recent results of the author concerning logarithmic concave measures.