Results 1–10 of 96
Mutual information and minimum mean-square error in Gaussian channels
IEEE Trans. Inform. Theory, 2005
"... This paper deals with arbitrarily distributed finitepower input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the inputoutput mutual information and the minimum meansquare error (MMSE) achievable by optimal estimation of the input given the out ..."
Abstract

Cited by 285 (32 self)
This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: for any input signal with finite power, the causal filtering MMSE achieved at a given SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is chosen uniformly distributed between 0 and that SNR.
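To make the I-MMSE identity concrete, here is a minimal numerical sanity check (an illustration, not code from the paper) for the one case where both sides have closed forms, a standard Gaussian input: I(snr) = ½ log(1 + snr) nats and mmse(snr) = 1/(1 + snr), so dI/dsnr = ½ mmse(snr).

```python
import numpy as np

# Sanity check of dI/dsnr = mmse/2 for a standard Gaussian input, where
# I(snr) = 0.5*log(1+snr) nats and mmse(snr) = 1/(1+snr) in closed form.
snr = np.linspace(0.1, 10.0, 200)
I = 0.5 * np.log1p(snr)            # mutual information in nats
mmse = 1.0 / (1.0 + snr)           # minimum mean-square error
dI = np.gradient(I, snr)           # numerical derivative of I w.r.t. snr
# Interior points only: np.gradient is less accurate at the endpoints.
assert np.allclose(dI[1:-1], 0.5 * mmse[1:-1], atol=1e-3)
```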
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
2009
"... The replica method is a nonrigorous but widelyaccepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to nonGaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measureme ..."
Abstract

Cited by 80 (10 self)
The replica method is a non-rigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector “decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
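As a sketch of the two scalar estimators the abstract mentions: the lasso's decoupled estimator is soft-thresholding and the zero-norm-regularized one is hard-thresholding. The threshold t below stands in for the effective parameter the replica analysis would determine; its calibration is not shown here.

```python
import numpy as np

def soft_threshold(y, t):
    """Lasso's decoupled scalar estimator: shrink y toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def hard_threshold(y, t):
    """Zero-norm-regularized scalar estimator: keep y only if |y| > t."""
    return np.where(np.abs(y) > t, y, 0.0)
```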
Performance of Polar Codes for Channel and Source Coding
"... Polar codes, introduced recently by Arıkan, are the first family of codes known to achieve capacity of symmetric channels using a low complexity successive cancellation decoder. Although these codes, combined with successive cancellation, are optimal in this respect, their finitelength performance ..."
Abstract

Cited by 69 (3 self)
Polar codes, introduced recently by Arıkan, are the first family of codes known to achieve the capacity of symmetric channels using a low-complexity successive cancellation decoder. Although these codes, combined with successive cancellation, are optimal in this respect, their finite-length performance is not record breaking. We discuss several techniques through which their finite-length performance can be improved. We also study the performance of these codes in the context of source coding, both lossless and lossy, in the single-user context as well as for distributed applications.
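For reference, the encoder underlying these codes is Arıkan's recursive polar transform over GF(2); the sketch below is illustrative (it produces codewords in the bit-reversed ordering of the original construction) and is not code from the paper.

```python
import numpy as np

def polar_transform(u):
    """Polar-encode a binary vector of length 2^m: recursively map
    pairs (u1, u2) to (u1 XOR u2, u2) across interleaved halves."""
    u = np.asarray(u, dtype=int)
    if len(u) == 1:
        return u
    combined = polar_transform((u[0::2] + u[1::2]) % 2)  # u1 XOR u2 branch
    passed = polar_transform(u[1::2])                    # u2 branch
    return np.concatenate([combined, passed])

print(polar_transform([1, 0, 1, 1]))  # codeword for u = (1, 0, 1, 1)
```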
A Single-letter Characterization of Optimal Noisy Compressed Sensing
"... Abstract—Compressed sensing deals with the reconstruction of a highdimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimen ..."
Abstract

Cited by 57 (16 self)
Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring about individual elements of the sparse signal given the measurements, where the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to a simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean-squared error and the error probability for reconstructing the support set of the sparse signal. Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices where belief propagation becomes asymptotically optimal.
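A minimal sketch of the scalar channel in such a single-letter characterization, under an assumed Bernoulli-Gaussian prior: one element X is observed as Y = √s·X + N with N ∼ N(0, 1), and the posterior probability of a nonzero element, which drives support recovery, is a two-hypothesis likelihood ratio. The effective SNR s would come from the replica fixed point; here it is left as a free parameter.

```python
import numpy as np
from scipy.stats import norm

def prob_nonzero(y, s, eps=0.1):
    """P(X != 0 | Y = y) for X ~ (1-eps)*delta_0 + eps*N(0,1)
    observed as Y = sqrt(s)*X + N with N ~ N(0,1)."""
    like_zero = norm.pdf(y)                         # Y ~ N(0, 1) if X = 0
    like_gauss = norm.pdf(y, scale=np.sqrt(1 + s))  # Y ~ N(0, 1+s) if X != 0
    return eps * like_gauss / (eps * like_gauss + (1 - eps) * like_zero)
```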
Estimation in Gaussian Noise: Properties of the minimum mean-square error
IEEE Trans. Inf. Theory, 2011
"... Abstract—Consider the minimum meansquare error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise. The MMSE can be regarded as a function of the signaltonoise ratio (SNR) as well as a functional of the input distribution (of the random variable t ..."
Abstract

Cited by 46 (13 self)
Consider the minimum mean-square error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise. The MMSE can be regarded as a function of the signal-to-noise ratio (SNR) as well as a functional of the input distribution (of the random variable to be estimated). It is shown that the MMSE is concave in the input distribution at any given SNR. For a given input distribution, the MMSE is found to be infinitely differentiable at all positive SNR, and in fact a real analytic function in SNR under mild conditions. The key to these regularity results is that the posterior distribution conditioned on the observation through Gaussian channels always decays at least as quickly as some Gaussian density. Furthermore, simple expressions for the first three derivatives of the MMSE with respect to the SNR are obtained. It is also shown that, as functions of the SNR, the curves for the MMSE of a Gaussian input and that of a non-Gaussian input cross at most once over all SNRs. These properties lead to simple proofs of the facts that Gaussian inputs achieve both the secrecy capacity of scalar Gaussian wiretap channels and the capacity of scalar Gaussian broadcast channels, as well as a simple proof of the entropy power inequality in the special case where one of the variables is Gaussian.
Index Terms—Entropy, estimation, Gaussian broadcast channel, Gaussian noise, Gaussian wiretap channel, minimum mean-square error (MMSE), mutual information.
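To illustrate the comparison between Gaussian and non-Gaussian inputs, the sketch below (an illustration with a unit-power binary input, not code from the paper) evaluates both MMSE curves: the Gaussian input gives 1/(1 + snr) in closed form, while for X = ±1 the optimal estimator is E[X|Y] = tanh(√snr·Y) and the MMSE is estimated by Monte Carlo. With matched power the binary curve lies below the Gaussian one at every positive SNR, consistent with the single-crossing property.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(500_000)

def mmse_binary(snr):
    """Monte Carlo MMSE for X = +/-1 equiprobable in Gaussian noise.
    By symmetry, condition on X = +1, so sqrt(snr)*Y = snr + sqrt(snr)*z."""
    est = np.tanh(snr + np.sqrt(snr) * z)   # E[X|Y] given X = +1
    return np.mean((1.0 - est) ** 2)

for snr in (0.5, 1.0, 4.0):
    print(snr, mmse_binary(snr), 1.0 / (1.0 + snr))  # binary vs. Gaussian
```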
Graphical Models Concepts in Compressed Sensing
"... This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1 penalized leastsquares (known as LASSO or BPDN). We discuss how to deri ..."
Abstract

Cited by 38 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
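As a concrete instance of the message passing algorithms surveyed, here is a bare-bones AMP iteration for ℓ1-penalized least-squares. This is a sketch: the threshold rule alpha*tau is a common simplification, and its calibration to a given lasso penalty is not shown.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, alpha=2.0, iters=30):
    """Approximate message passing for l1-penalized least squares."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(m)   # empirical noise level
        x = soft(x + A.T @ z, alpha * tau)     # denoise the pseudo-data
        b = np.count_nonzero(x) / m            # Onsager coefficient
        z = y - A @ x + b * z                  # residual with Onsager term
    return x
```

The Onsager correction term b*z is what distinguishes AMP from plain iterative thresholding; it is precisely this term that makes the high-dimensional limit analysis of the LASSO risk tractable.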
Random Sparse Linear Systems Observed Via Arbitrary Channels: A Decoupling Principle
"... Abstract—This paper studies the problem of estimating the vector input to a sparse linear transformation based on the observation of the output vector through a bank of arbitrary independent channels. The linear transformation is drawn randomly from an ensemble with mild regularity conditions. The c ..."
Abstract

Cited by 35 (0 self)
This paper studies the problem of estimating the vector input to a sparse linear transformation based on the observation of the output vector through a bank of arbitrary independent channels. The linear transformation is drawn randomly from an ensemble with mild regularity conditions. The central result is a decoupling principle in the large-system limit. That is, the optimal estimation of each individual symbol in the input vector is asymptotically equivalent to estimating the same symbol through a scalar additive Gaussian channel, where the aggregate effect of the interfering symbols is tantamount to a degradation in the signal-to-noise ratio. The degradation is determined from a recursive formula related to the score function of the conditional probability distribution of the noisy channel. A sufficient condition is provided for belief propagation (BP) to asymptotically produce the a posteriori probability distribution of each input symbol given the output. This paper extends the authors’ previous decoupling result for Gaussian channels to arbitrary channels, which was based on an earlier work of Montanari and Tse. Moreover, a rigorous justification is provided for the generalization of some results obtained via statistical physics methods.
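For intuition, the score function the recursion depends on is the derivative of log p(y|x) in the output; a tiny sketch (channels chosen here purely for illustration) compares the closed form for a Gaussian channel against a generic numerical score.

```python
import numpy as np

def score_gaussian(y, x, sigma2=1.0):
    """Score of a Gaussian channel p(y|x) = N(y; x, sigma2):
    d/dy log p(y|x) = -(y - x) / sigma2."""
    return -(y - x) / sigma2

def score_numeric(log_pyx, y, x, h=1e-5):
    """Numerical score for an arbitrary channel given log p(y|x)."""
    return (log_pyx(y + h, x) - log_pyx(y - h, x)) / (2 * h)

# The two agree for the Gaussian channel.
logp = lambda y, x: -0.5 * (y - x) ** 2 - 0.5 * np.log(2 * np.pi)
assert abs(score_numeric(logp, 1.3, 0.2) - score_gaussian(1.3, 0.2)) < 1e-5
```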
Improvement of BP-based CDMA multiuser detection by spatial coupling
CoRR, 2011
"... ar ..."
(Show Context)
Support recovery with sparsely sampled free random matrices
Proc. IEEE Int. Symp. Inf. Theory, 2011
"... Abstract—Consider a BernoulliGaussian complex nvector whose components are Vi = XiBi, with Xi ∼ CN (0, Px) and binary Bi mutually independent and iid across i. This random qsparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of th ..."
Abstract

Cited by 26 (0 self)
Consider a Bernoulli-Gaussian complex n-vector whose components are V_i = X_i B_i, with X_i ∼ CN(0, P_x) and binary B_i, mutually independent and i.i.d. across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models where U is typically a matrix with i.i.d. components, to allow U satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit of n → ∞. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and ℓ1 relaxation.
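The measurement model is easy to sample. The sketch below (with illustrative parameter values) draws the Bernoulli-Gaussian vector, a Haar-distributed unitary U, and the randomly observed noisy outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, p, Px, sigma2 = 512, 0.1, 0.5, 1.0, 0.01   # illustrative values

B = rng.random(n) < q                            # sparsity pattern B_i
X = np.sqrt(Px / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
V = X * B                                        # Bernoulli-Gaussian vector

# Haar unitary via QR of a complex Gaussian matrix, with the standard
# phase correction so the distribution is exactly Haar.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(G)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

obs = rng.random(n) < p                          # observed subset, mean size n*p
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(obs.sum())
                               + 1j * rng.standard_normal(obs.sum()))
Y = (U @ V)[obs] + noise                         # noisy partial observations
```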
Optimal phase transitions in compressed sensing
 IEEE Trans. Inf. Theory
"... Abstract—Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonline ..."
Abstract

Cited by 25 (3 self)
Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear, and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase-transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase-transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.
Index Terms—Compressed sensing, joint source-channel coding, minimum mean-square error (MMSE) dimension, phase transition, random matrix, Rényi information dimension, Shannon theory.
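The Rényi information dimension that governs such thresholds can be estimated numerically: for a sparse input (1−γ)δ0 + γN(0, 1) it equals γ, and the entropy of the m-bit quantization divided by m tends to it as m grows (slowly, at rate O(1/m)). A small Monte Carlo sketch, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, N = 0.25, 1_000_000
x = rng.standard_normal(N) * (rng.random(N) < gamma)  # sparse input samples

def info_dim_estimate(x, m):
    """Plug-in estimate of H(floor(x * 2^m)) / m in bits; tends to the
    Renyi information dimension (here gamma) as m grows."""
    _, counts = np.unique(np.floor(x * 2.0**m), return_counts=True)
    prob = counts / counts.sum()
    return float(-(prob * np.log2(prob)).sum() / m)

for m in (4, 8, 12):
    print(m, info_dim_estimate(x, m))  # decreases toward gamma = 0.25
```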