Results 1–10 of 62
Tracy–Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices
 Ann. Probab.
, 2007
Abstract

Cited by 45 (6 self)
We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n × p matrix, and let its rows be i.i.d. complex normal vectors with mean 0 and covariance Σp. We show that for a large class of covariance matrices Σp, the largest eigenvalue of X*X is asymptotically distributed (after recentering and rescaling) as the Tracy–Widom distribution that appears in the study of the Gaussian unitary ensemble. We give explicit formulas for the centering and scaling sequences that are easy to implement and involve only the spectral distribution of the population covariance, n and p. The main theorem applies to a number of covariance models found in applications. For example, well-behaved Toeplitz matrices as well as covariance matrices whose spectral distribution is a sum of atoms (under some conditions on the mass of the atoms) are among the models the theorem can handle. Generalizations of the theorem to certain spiked versions of our models and a.s. results about the largest eigenvalue are given. We also discuss a simple corollary that does not require normality of the entries of the data matrix and some consequences for applications in multivariate statistics.
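A rough numerical illustration of the setup described above (our own sketch, not from the paper): we take the simplest population covariance Σp = I, sample an n × p complex normal data matrix X, and compute the largest eigenvalue of X*X. For Σp = I, the top eigenvalue of X*X/n concentrates near the Marchenko–Pastur edge (1 + √(p/n))², around which the Tracy–Widom fluctuations occur.

```python
import numpy as np

# Sketch only: identity population covariance, i.i.d. complex normal rows.
rng = np.random.default_rng(0)
n, p = 400, 100

# Complex standard normal entries, normalized so E|X_ij|^2 = 1.
X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

# Largest eigenvalue of the Hermitian matrix X*X.
lam_max = np.linalg.eigvalsh(X.conj().T @ X).max()

# For Sigma_p = I, the classical location of lam_max / n is the
# Marchenko-Pastur edge (1 + sqrt(p/n))^2; Tracy-Widom describes the
# O(n^{-2/3})-scale fluctuations around it.
edge = (1 + np.sqrt(p / n)) ** 2
print(lam_max / n, edge)
```

With n = 400 and p = 100 the ratio lam_max/n lands close to the edge value 2.25, with small random fluctuation.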
Nonembeddability theorems via Fourier analysis
Abstract

Cited by 43 (9 self)
Various new nonembeddability results (mainly into L1) are proved via Fourier analysis. In particular, it is shown that the Edit Distance on {0,1}^d has L1 distortion (log d)^(1/2−o(1)). We also give new lower bounds on the L1 distortion of flat tori, quotients of the discrete hypercube under group actions, and the transportation cost (Earthmover) metric.
Interpolated inequalities between exponential and Gaussian, Orlicz hypercontractivity and isoperimetry
, 2004
Dequantizing compressed sensing: When oversampling and non-Gaussian constraints combine. arXiv:0902.2367 [math.OC]
, 2009
Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Related Fields
, 2008
Abstract

Cited by 24 (3 self)
Linear statistics of eigenvalues in many familiar classes of random matrices are known to obey Gaussian central limit theorems. The proofs of such results are usually rather difficult, involving hard computations specific to the model in question. In this article we attempt to formulate a unified technique for deriving such results via relatively soft arguments. In the process, we introduce a notion of ‘second order Poincaré inequalities’: just as ordinary Poincaré inequalities give variance bounds, second order Poincaré inequalities give central limit theorems. The proof of the main result employs Stein’s method of normal approximation. A number of examples are worked out; some of them are new. One of the new results is a CLT for the spectrum of Gaussian Toeplitz matrices.
The Price of Privately Releasing Contingency Tables and the Spectra of Random Matrices with Correlated Rows
Abstract

Cited by 20 (1 self)
Marginal (contingency) tables are the method of choice for government agencies releasing statistical summaries of categorical data. In this paper, we consider lower bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. We extend a line of recent work on lower bounds on noise for private data analysis [9, 14, 15, 16] to a natural and important class of functionalities. Our investigation also leads to new results on the spectra of random matrices with correlated rows. Consider a database D consisting of n rows (one per individual), each row comprising d binary attributes. For any subset T of attributes of size |T| = k, the marginal table for T has 2^k entries; each entry counts how many times in the database a particular setting of these attributes occurs. We provide lower bounds for releasing k-attribute marginal tables under (i) minimal privacy, a general privacy notion which captures a large class of privacy definitions, and (ii) differential privacy, a rigorous notion of privacy that has received extensive recent study. Our main contributions are: • We give efficient polynomial time attacks which allow an adversary to reconstruct sensitive information given insufficiently perturbed marginal table releases. Using these reconstruction attacks, …
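The marginal tables described in this abstract are simple counting objects, which a short sketch makes concrete (our own toy illustration, not code from the paper): a database D of n rows with d binary attributes each, and for an attribute subset T with |T| = k, a table of 2^k counts, one per setting of those attributes.

```python
import itertools

def marginal_table(D, T):
    """Count, for each of the 2^len(T) settings of the attributes in T,
    how many rows of the binary database D match that setting."""
    counts = {setting: 0 for setting in itertools.product((0, 1), repeat=len(T))}
    for row in D:
        counts[tuple(row[j] for j in T)] += 1
    return counts

# Toy database: n = 4 rows, d = 3 binary attributes.
D = [(0, 1, 1), (0, 1, 0), (1, 1, 0), (0, 0, 0)]

# 2-attribute (k = 2) marginal over attribute columns 0 and 1.
print(marginal_table(D, (0, 1)))
# → {(0, 0): 1, (0, 1): 2, (1, 0): 0, (1, 1): 1}
```

The lower bounds in the paper concern how much noise must be added to such counts before release; the sketch only shows the unperturbed functionality being released.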
Empirical minimization
 Probability Theory and Related Fields, 135(3):311–334
, 2006
Abstract

Cited by 19 (7 self)
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure with the original one on the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
Concentration inequalities for dependent random variables via the martingale method
 Ann. Probab.
, 2008
Abstract

Cited by 19 (4 self)
The martingale method is used to establish concentration inequalities for a class of dependent random sequences on a countable state space, with the constants in the inequalities expressed in terms of certain mixing coefficients. Along the way, bounds are obtained on martingale differences associated with the random sequences, which may be of independent interest. As applications of the main result, concentration inequalities are also derived for inhomogeneous Markov chains and hidden Markov chains, and an extremal property associated with their martingale difference bounds is established. This work complements and generalizes certain concentration inequalities obtained by Marton and Samson, while also providing different proofs of some known results.
Isoperimetry between exponential and Gaussian
 Electron. J. Probab.
Abstract

Cited by 16 (7 self)
We study in detail the isoperimetric profile of product probability measures with tails between the exponential and the Gaussian regime. In particular we exhibit many examples where coordinate half-spaces are approximate solutions of the isoperimetric problem.
Characterization of Talagrand-like transportation-cost inequalities on the real line
, 2006
"... In this paper, we give necessary and sufficient conditions for Talagrand’s like transportation cost inequalities on the real line. This brings a new wide class of examples of probability measures enjoying a dimensionfree concentration of measure property. Another byproduct is the characterization ..."
Abstract

Cited by 14 (4 self)
In this paper, we give necessary and sufficient conditions for Talagrand-like transportation-cost inequalities on the real line. This brings a wide new class of examples of probability measures enjoying a dimension-free concentration of measure property. Another byproduct is the characterization of modified log-Sobolev inequalities for log-concave probability measures on R.