Results 1–10 of 39
On multivariate normal approximations by Stein’s method and size bias couplings
, 1994
Cited by 56 (17 self)
Stein’s method is used to obtain two theorems on multivariate normal approximation. Our main theorem, Theorem 1.2, provides a bound on the distance to normality for any nonnegative random vector. Theorem 1.2 requires multivariate size bias coupling, which we discuss in studying the approximation of distributions of sums of dependent random vectors. In the univariate case, we briefly illustrate this approach for certain sums of nonlinear functions of multivariate normal variables. As a second illustration, we show that the multivariate distribution counting the number of vertices with given degrees in certain random graphs is asymptotically multivariate normal and obtain a bound on the rate of convergence. Both examples demonstrate that this approach may be suitable for situations involving nonlocal dependence. We also present Theorem 1.4 for sums of vectors having a local type of dependence. We apply this theorem to obtain a multivariate normal approximation for the distribution of the random p-vector that counts the number of edges in a fixed graph both of whose vertices have the same given color when each vertex is colored by one of p colors independently. All normal approximation results presented here do not require an ordering of the summands related to the dependence structure. This is in contrast to the hypotheses of classical central limit theorems and examples, which involve, e.g., martingale, Markov chain, or various mixing assumptions. Keywords and phrases: Stein’s method, coupling, size bias, random graphs, multivariate central limit theorems.
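The size-bias transform the abstract relies on can be sketched numerically (a toy illustration, not the paper's coupling construction; the example pmf below is made up): for a nonnegative random variable X with finite mean, the size-biased variable X^s has P(X^s = x) = x P(X = x) / E[X].

```python
import numpy as np

# Sketch of the size-bias transform (illustrative pmf, not from the paper):
# the size-biased version X^s of a nonnegative X with finite mean satisfies
#   P(X^s = x) = x * P(X = x) / E[X].
def size_biased_pmf(values, probs):
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mean = np.dot(values, probs)      # E[X]
    return values * probs / mean      # reweight each atom by its value

values = [0, 1, 2, 3]
probs = [0.1, 0.2, 0.3, 0.4]          # arbitrary example distribution
sb = size_biased_pmf(values, probs)
# sb sums to 1, puts no mass at 0, and has mean E[X^2]/E[X].
```

One consequence visible even in this toy: size biasing shifts mass toward larger values, which is what makes the coupling useful for bounding distances to normality.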
Multivariate normal approximation using exchangeable pairs
Cited by 43 (5 self)
Abstract. Since the introduction of Stein’s method in the early 1970s, much research has been done in extending and strengthening it; however, there does not exist a version of Stein’s original method of exchangeable pairs for multivariate normal approximation. The aim of this article is to fill this void. We present two abstract normal approximation theorems using exchangeable pairs in multivariate contexts, one for situations in which the underlying symmetries are discrete, and one for situations involving continuous symmetry groups. We provide several illustrative examples, including a multivariate version of Hoeffding’s combinatorial central limit theorem and a treatment of projections of Haar measure on the orthogonal and unitary groups.
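The basic object here, an exchangeable pair (W, W'), can be built by resampling one coordinate of a sum of independent signs (a standard toy construction in the spirit of the method, not one of the article's theorems; the sample size and seed are arbitrary):

```python
import random

# Toy exchangeable pair: W is a sum of n independent +/-1 signs; W' is the
# same sum after one uniformly chosen sign is replaced by a fresh copy.
# Exchanging the old and new sign leaves the joint law of (W, W') unchanged,
# so the pair is exchangeable by construction.
random.seed(2)
n = 100
x = [random.choice([-1, 1]) for _ in range(n)]
W = sum(x)

i = random.randrange(n)          # coordinate to resample
x2 = list(x)
x2[i] = random.choice([-1, 1])   # independent replacement
W_prime = sum(x2)
# W' - W is always -2, 0, or +2: the pair makes only a small step.
```

The small-step property (|W' − W| ≤ 2 here) is exactly what such constructions exploit when turning the pair into a normal approximation bound.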
A new method of normal approximation
, 2006
Cited by 39 (7 self)
Abstract. We introduce a new version of Stein’s method that reduces a large class of normal approximation problems to variance bounding exercises, thus making a connection between central limit theorems and concentration of measure. Unlike Skorokhod embeddings, the object whose variance has to be bounded has an explicit formula that makes it possible to carry out the program more easily. As an application, we derive a general CLT for functions that are obtained as combinations of many local contributions, where the definition of ‘local’ itself depends on the data. Several examples are given, including the solution to a nearest-neighbor CLT problem posed by Peter Bickel.
Central limit theorems for random polytopes in convex polytopes
, 2005
Cited by 35 (9 self)
Let K be a smooth convex set. The convex hull of independent random points in K is a random polytope. Central limit theorems for the volume and the number of i-dimensional faces of random polytopes are proved as the number of random points tends to infinity. One essential step is to determine the precise asymptotic order of the occurring variances.
Normal approximation in geometric probability
 In Stein’s Method and Applications. Lect. Notes
, 2005
Cited by 23 (9 self)
Statistics arising in geometric probability can often be expressed as sums of stabilizing functionals, that is, functionals which satisfy a local dependence structure. In this note we show that stabilization leads to nearly optimal rates of convergence in the CLT for statistics such as the total edge length and total number of edges of graphs in computational geometry and the total number of particles accepted in random sequential packing models. These rates also apply to the one-dimensional marginals of the random measures associated with these statistics.
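A concrete statistic of this kind is the total nearest-neighbor distance: a sum over points of a local contribution. The sketch below (uniform points in the unit square; the sample size and seed are illustrative, not the note's setting) computes it by brute force:

```python
import math
import random

# Toy "sum of stabilizing functionals": total nearest-neighbor distance of
# n uniform points in the unit square. Each point contributes a quantity
# determined by its local neighborhood only.
random.seed(1)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]

def nn_dist(i):
    """Distance from point i to its nearest neighbor (a local functional)."""
    x, y = pts[i]
    return min(math.hypot(x - a, y - b)
               for j, (a, b) in enumerate(pts) if j != i)

total_nn_length = sum(nn_dist(i) for i in range(n))
# For uniform points the mean NN distance scales like 1/(2*sqrt(n)),
# so the total is on the order of sqrt(n)/2.
```

Because each summand depends only on points near it, rescaled versions of such totals fall under the stabilization CLTs the note describes.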
Stein’s method for concentration inequalities
 Prob. Th. Rel. Fields
, 2007
Cited by 22 (5 self)
Abstract. We introduce a version of Stein’s method for proving concentration and moment inequalities in problems with dependence. Simple illustrative examples from combinatorics, physics, and mathematical statistics are provided.
Random polytopes
 Annals of Probability
, 2008
Cited by 18 (5 self)
We prove the central limit theorem for the volume and the f-vector of the random polytope Pn and the Poisson random polytope Πn in a fixed convex polytope P ⊂ R^d. Here Pn is the convex hull of n random points in P, and Πn is the convex hull of the intersection of a Poisson process X(n), of intensity n, with P. A general lower bound on the variance is also proved.
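The random polytope Pn is easy to experiment with in the plane (a two-dimensional sketch; the unit square, the sample size, and the monotone-chain hull routine are illustrative choices, not the paper's setting):

```python
import random

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

random.seed(0)
n = 500
sample = [(random.random(), random.random()) for _ in range(n)]
P_n = convex_hull(sample)   # the random polytope: hull of n uniform points
f_0 = len(P_n)              # number of vertices, the f-vector entry f_0
```

Repeating this over many samples gives empirical distributions of f_0 and of the hull area, the quantities whose CLTs the paper establishes (in a polytope rather than a square).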
Central limit theorems for Gaussian polytopes
, 2006
Cited by 17 (4 self)
Abstract. Choose n random, independent points in R^d according to the standard normal distribution. Their convex hull Kn is the Gaussian random polytope. We prove that the volume and the number of faces of Kn satisfy the central limit theorem, settling a well-known conjecture in the field. 1. The main result. Let Ψd = Ψ denote the standard normal distribution on R^d; its density function is ψd = ψ = (2π)^{−d/2} exp{−‖x‖²/2}, where ‖x‖² is the square of the Euclidean norm of x ∈ R^d. We will use this notation only for d ≥ 2; for d = 1 the standard normal has density function φ(x) = (2π)^{−1/2} exp{−x²/2} and distribution function Φ.
Random points and lattice points in convex bodies
 Bull. Amer. Math. Soc. (N.S.)
Cited by 16 (6 self)
Assume K ⊂ R^d is a convex body and X is a (large) finite subset of K. How many convex polytopes are there whose vertices belong to X? Is there a typical shape of such polytopes? How well does the maximal such polytope (which is actually the convex hull of X) approximate K? We are interested in these questions mainly in two cases. The first is when X is a random sample of n uniform, independent points from K. In this case motivation comes from Sylvester’s famous four-point problem and from the theory of random polytopes. The second case is when X = K ∩ Z^d, where Z^d is the lattice of integer points in R^d, and the questions come from integer programming and the geometry of numbers. Surprisingly (or not so surprisingly), the answers in the two cases are rather similar. 1. Sylvester’s four-point problem. The study of random points in convex bodies started with an innocent-looking question. The year was 1864. The place was London. The journal was the
Sequence Analysis by Additive Scales: DNA Structure for Sequences and Repeats of All Lengths
 Bioinformatics
, 2000
Cited by 15 (3 self)
Motivation: DNA structure plays an important role in a variety of biological processes. Different di- and trinucleotide scales have been proposed to capture various aspects of DNA structure, including base stacking energy, propeller twist angle, protein deformability, bendability, and position preference. Yet a general framework for the computational analysis and prediction of DNA structure is still lacking. Such a framework should in particular address the following issues: (1) construction of sequences with extremal properties; (2) quantitative evaluation of sequences with respect to a given genomic background; (3) automatic extraction of extremal sequences and profiles from genomic databases; (4) distribution and asymptotic behavior as the length N of the sequences increases; and (5) complete analysis of correlations between scales. Results: We develop a general framework for sequence analysis based on additive scales, structural or other, that addresses all these issues. We show how to construct extremal sequences and calibrate scores for automatic genomic and database extraction. We show that distributions rapidly converge to normality as N increases. Pairwise correlations between scales depend both on background distribution and sequence length and rapidly converge to an analytically predictable asymptotic value. For di- and trinucleotide scales, normal behavior and asymptotic correlation values are attained over a characteristic window length of about 10–15 bp. With a uniform background distribution, pairwise correlations between empirically derived scales remain relatively small and roughly constant at all lengths, except for propeller twist and protein deformability, which are positively correlated. There is a positive (resp. negative) correlation between din...
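The "additive scale" idea can be sketched in a few lines (the dinucleotide values below are invented for illustration and are not any published scale; `window_profile` is a hypothetical helper, not part of the paper's framework):

```python
# Sketch of scoring a DNA sequence by an additive dinucleotide scale.
# The numeric values below are made up for illustration only.
scale = {
    "AA": -1.0, "AT": -0.5, "TA": 0.2, "TT": -1.0,
    "AC": 0.1, "CA": 0.3, "CG": 0.7, "GC": 0.5,
}

def additive_score(seq, scale):
    """Sum the scale value of each overlapping dinucleotide in seq."""
    return sum(scale.get(seq[i:i + 2], 0.0) for i in range(len(seq) - 1))

def window_profile(seq, scale, w):
    """Sliding-window scores: one additive score per length-w window."""
    return [additive_score(seq[i:i + w], scale)
            for i in range(len(seq) - w + 1)]

score = additive_score("AATT", scale)   # AA + AT + TT = -2.5
```

Because each window score is a sum of many small terms, the rapid convergence to normality in N that the abstract reports is the behavior one would expect from central limit considerations.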