Results 1–10 of 14
Estimation in high dimensions: a geometric perspective
"... Abstract. This tutorial paper provides an exposition of a flexible geometric framework for high dimensional estimation problems with constraints. The paper develops geometric intuition about high dimensional sets, justifies it with some results of asymptotic convex geometry, and demonstrates conn ..."
Abstract

Cited by 6 (1 self)
Abstract. This tutorial paper provides an exposition of a flexible geometric framework for high-dimensional estimation problems with constraints. The paper develops geometric intuition about high-dimensional sets, justifies it with results from asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression, and generalized linear models.
SIGMA-DELTA QUANTIZATION OF SUB-GAUSSIAN FRAME EXPANSIONS AND ITS APPLICATION TO COMPRESSED SENSING
"... Abstract. Suppose that the collection {ei} m i=1 forms a frame for R k, where each entry of the vector ei is a subGaussian random variable. We consider expansions in such a frame, which are then quantized using a SigmaDelta scheme. We show that an arbitrary signal in R k can be recovered from its ..."
Abstract

Cited by 6 (3 self)
Abstract. Suppose that the collection {e_i}_{i=1}^m forms a frame for R^k, where each entry of the vector e_i is a sub-Gaussian random variable. We consider expansions in such a frame, which are then quantized using a Sigma-Delta scheme. We show that an arbitrary signal in R^k can be recovered from its quantized frame coefficients up to an error which decays root-exponentially in the oversampling rate m/k. Here the quantization scheme is assumed to be chosen appropriately depending on the oversampling rate, and the quantization alphabet can be coarse. The result holds with high probability on the draw of the frame, uniformly for all signals. The crux of the argument is a bound on the extreme singular values of the product of a deterministic matrix and a sub-Gaussian frame. For fine quantization alphabets, we leverage this bound to show polynomial error decay in the context of compressed sensing. Our results extend previous results for structured deterministic frame expansions and Gaussian compressed sensing measurements.
Keywords: compressed sensing, quantization, random frames, root-exponential accuracy, Sigma-Delta, sub-Gaussian matrices. 2010 Mathematics Subject Classification: 94A12, 94A20, 41A25, 15B52.
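As an illustrative sketch of the noise-shaping idea behind this abstract (not the paper's exact scheme), a first-order Sigma-Delta quantizer with a greedy quantization rule can be written as follows; the alphabet and input range below are hypothetical choices:

```python
import numpy as np

def first_order_sigma_delta(y, alphabet):
    """First-order Sigma-Delta quantization of the coefficient sequence y.

    Greedy rule: q_i is the alphabet element nearest to u_{i-1} + y_i,
    and the internal state is updated as u_i = u_{i-1} + y_i - q_i,
    so the quantization error is noise-shaped rather than independent.
    """
    u = 0.0
    q = np.empty_like(y, dtype=float)
    for i, yi in enumerate(y):
        q[i] = alphabet[np.argmin(np.abs(alphabet - (u + yi)))]
        u = u + yi - q[i]  # state stays bounded when the alphabet covers the range
    return q
```

For an alphabet of step size δ whose range covers the inputs, the state satisfies |u_i| ≤ δ/2 at every step; bounded internal state is exactly what higher-order schemes exploit to obtain the error decay rates discussed above.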
HIGH-DIMENSIONAL ESTIMATION WITH GEOMETRIC CONSTRAINTS
"... Abstract. Consider measuring a vector x ∈ Rn through the inner product with several measurement vectors, a1, a2,..., am. It is common in both signal processing and statistics to assume the linear response model yi = 〈ai, x〉+ εi, where εi is a noise term. However, in practice the precise relationshi ..."
Abstract

Cited by 4 (2 self)
Abstract. Consider measuring a vector x ∈ R^n through its inner products with several measurement vectors a_1, a_2, ..., a_m. It is common in both signal processing and statistics to assume the linear response model y_i = 〈a_i, x〉 + ε_i, where ε_i is a noise term. However, in practice the precise relationship between the signal x and the observations y_i may not follow the linear model, and in some cases it may not even be known. To address this challenge, in this paper we propose a general model in which it is only assumed that each observation y_i may depend on a_i only through 〈a_i, x〉. We do not assume that the dependence is known. This is a form of the semiparametric single index model, and it includes the linear model as well as many forms of the generalized linear model as special cases. We further assume that the signal x has some structure, which we formulate as a general assumption that x belongs to some known (but arbitrary) feasible set K ⊆ R^n. We carefully detail the benefit of using the signal structure to improve estimation. The theory is based on the mean width of K, a geometric parameter which can be used to understand its effective dimension in estimation problems. We determine a simple, efficient two-step procedure for estimating the signal based on this model: a linear estimation followed by metric projection onto K. We give general conditions under which the estimator is minimax optimal up to a constant. This leads to the intriguing conclusion that in the high-noise regime, an unknown nonlinearity in the observations does not significantly reduce one's ability to determine the signal, even when the nonlinearity may be non-invertible. Our results may be specialized to understand the effect of nonlinearities in compressed sensing.
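The two-step procedure described in this abstract can be sketched for the illustrative case where K is the set of s-sparse vectors, so that metric projection is hard thresholding; the sign nonlinearity below is a hypothetical choice of unknown link function, and the dimensions are arbitrary:

```python
import numpy as np

def two_step_estimate(A, y, project_K):
    """Linear estimation followed by metric projection onto the feasible set K.

    For Gaussian rows a_i, the linear estimate x_lin = (1/m) A^T y is
    (in expectation) proportional to x even when y_i depends on <a_i, x>
    through an unknown nonlinearity.
    """
    m = A.shape[0]
    x_lin = A.T @ y / m
    return project_K(x_lin)

def project_sparse(v, s):
    """Metric projection onto the set of s-sparse vectors: keep the s
    largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
n, m, s = 50, 2000, 3
x = np.zeros(n); x[:s] = 1.0; x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                  # unknown, non-invertible nonlinearity
x_hat = two_step_estimate(A, y, lambda v: project_sparse(v, s))
# x_hat aligns with the direction of x despite the nonlinearity
```

Note that the estimator never needs to know the link function; only the geometry of K enters, via the projection step.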
1-bit matrix completion
 CoRR
"... In this paper we develop a theory of matrix completion for the extreme case of noisy 1bit observations. Instead of observing a subset of the realvalued entries of a matrix M, we obtain a small number of binary (1bit) measurements generated according to a probability distribution determined by th ..."
Abstract

Cited by 3 (0 self)
In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ‖M‖_∞ ≤ α and rank(M) ≤ r. If the log-likelihood is a concave function (e.g., under the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we also show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ‖M‖_∞ ≤ α. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems and illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.
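Under the logistic observation model mentioned in this abstract, the negative log-likelihood over the observed entries takes the following convex form (a minimal sketch; the full estimator additionally enforces the rank and ‖M‖_∞ constraints):

```python
import numpy as np

def neg_log_likelihood(M, Y, mask):
    """Negative log-likelihood of 1-bit observations Y in {-1, +1} under the
    logistic model P(Y_ij = +1) = 1 / (1 + exp(-M_ij)), summed over the
    observed entries indicated by the boolean mask.
    """
    # log(1 + exp(-Y_ij * M_ij)) is the logistic loss, convex in M;
    # logaddexp evaluates it without overflow for large |M_ij|
    return np.sum(np.logaddexp(0.0, -Y * M)[mask])
```

Because this loss is convex in M, intersecting it with a convex constraint set (a nuclear-norm or max-norm ball, for instance) yields a convex program.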
A Max-Norm Constrained Minimization Approach to 1-Bit Matrix Completion
"... We consider in this paper the problem of noisy 1bit matrix completion under a general nonuniform sampling distribution using the maxnorm as a convex relaxation for the rank. A maxnorm constrained maximum likelihood estimate is introduced and studied. The rate of convergence for the estimate is ..."
Abstract

Cited by 2 (0 self)
We consider in this paper the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution, using the max-norm as a convex relaxation for the rank. A max-norm constrained maximum likelihood estimate is introduced and studied, and its rate of convergence is obtained. Information-theoretic methods are used to establish a minimax lower bound under the general sampling model. The minimax upper and lower bounds together yield the optimal rate of convergence for the Frobenius-norm loss. Computational algorithms and numerical performance are also discussed.
An RIP-based approach to ΣΔ quantization for compressed sensing
, 2014
"... In this paper, we provide a new approach to estimating the error of reconstruction from Σ ∆ quantized compressed sensing measurements. Our method is based on the restricted isometry property (RIP) of a certain projection of the measurement matrix. Our result yields simple proofs and a slight genera ..."
Abstract

Cited by 1 (0 self)
In this paper, we provide a new approach to estimating the error of reconstruction from ΣΔ-quantized compressed sensing measurements. Our method is based on the restricted isometry property (RIP) of a certain projection of the measurement matrix. Our result yields simple proofs and a slight generalization of the best-known reconstruction error bounds for Gaussian and sub-Gaussian measurement matrices.
Exponential decay of reconstruction error from binary measurements of sparse signals
, 2014
"... Binary measurements arise naturally in a variety of statistical and engineering applications. They may be inherent to the problem—e.g., in determining the relationship between genetics and the presence or absence of a disease—or they may be a result of extreme quantization. A recent influx of litera ..."
Abstract
Binary measurements arise naturally in a variety of statistical and engineering applications. They may be inherent to the problem (e.g., in determining the relationship between genetics and the presence or absence of a disease), or they may be the result of extreme quantization. A recent influx of literature has suggested that using prior signal information can greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the compressed sensing model but assumes that only the sign of each measurement is retained. It has recently been shown that the number of one-bit measurements required for signal estimation mirrors that of unquantized compressed sensing. Indeed, s-sparse signals in R^n can be estimated (up to normalization) from Ω(s log(n/s)) one-bit measurements. Nevertheless, controlling the precise accuracy of the error estimate remains an open challenge. In this paper, we focus on optimizing the decay of the error as a function of the oversampling factor λ := m/(s log(n/s)), where m is the number of measurements. It is known that the error in reconstructing sparse signals from standard one-bit measurements is bounded below by Ω(λ^{-1}). Without adjusting the measurement procedure, reducing this polynomial ...
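The parenthetical "up to normalization" in this abstract is forced by the measurement model itself: the sign map discards the signal's norm, as the following small sketch (with hypothetical dimensions) illustrates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 100, 400, 5
x = np.zeros(n)
x[:s] = rng.standard_normal(s)       # an s-sparse signal
A = rng.standard_normal((m, n))      # Gaussian measurement matrix

y = np.sign(A @ x)                   # one-bit measurements: signs only

# Scaling the signal leaves every measurement unchanged, so the norm of x
# cannot be recovered; only the direction x / ||x|| is identifiable.
y_scaled = np.sign(A @ (3.0 * x))
```

Any reconstruction guarantee from such measurements is therefore stated for the normalized signal, which is why the error is studied as a function of the oversampling factor λ rather than the signal scale.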
Convex Relaxation for Low-Dimensional Representation: Phase Transitions and Limitations
, 2015
"... ii Dedicated to my family iii Acknowledgments To begin with, it was a great pleasure to work with my advisor Babak Hassibi. Babak has always been a fatherly figure to me and my labmates. Most of our group are thousands of miles away from their home, and we are fortunate to have Babak as advisor, who ..."
Abstract
Dedicated to my family. Acknowledgments. To begin with, it was a great pleasure to work with my advisor Babak Hassibi. Babak has always been a fatherly figure to me and my lab mates. Most of our group are thousands of miles away from home, and we are fortunate to have Babak as an advisor who always helps us with our troubles, whether they are personal or professional. As a graduate student, I found Babak's intuition for identifying new problems and his motivation for mathematics a great inspiration. He always encouraged me to do independent research and to be persistent with the toughest (mostly mathematical) challenges.
Quantization and Compressive Sensing
, 2015
"... Quantization is an essential step in digitizing signals, and, therefore, an indispensable component of any modern acquisition system. This chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Spec ..."
Abstract
Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, properly accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
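As a concrete instance of the uniform scalar quantizer this chapter surveys (a sketch with an arbitrary step size, not the chapter's own code), a mid-rise quantizer and its worst-case error bound look like this:

```python
import numpy as np

def uniform_quantize(x, delta):
    """Mid-rise uniform scalar quantizer with step size delta.

    Each sample is mapped to the midpoint of the quantization cell
    containing it, so the per-sample error is at most delta / 2.
    """
    return delta * (np.floor(x / delta) + 0.5)
```

The delta/2 worst-case error is the baseline that the non-uniform, 1-bit, and Sigma-Delta schemes discussed in the chapter trade off against measurement budget and reconstruction complexity.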