Results 1–10 of 35
Monotonic decrease of the non-Gaussianness of the sum of independent random variables: A simple proof
 IEEE Trans. Inf. Theory
, 2006
Cited by 16 (3 self)
The non-Gaussianness (divergence with respect to a Gaussian random variable with identical first and second moments) of the sum of independent and identically distributed (i.i.d.) random variables is monotonically nonincreasing. We give a simplified proof using the relationship between non-Gaussianness and the minimum mean-square error (MMSE) in Gaussian channels. As in Artstein et al., we also deal with the more general setting of non-identically distributed random variables. Index Terms—Central limit theorem, differential entropy, divergence, entropy power inequality, minimum mean-square error (MMSE), non-Gaussianness, relative entropy.
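The monotonic decrease is easy to observe numerically. The sketch below (my own illustration, not code from the paper; the uniform input and grid spacing are arbitrary choices) computes the non-Gaussianness D_n = h(N(0,1)) − h(S_n) of the standardized sum S_n of n i.i.d. uniform random variables by discretized convolution:

```python
import numpy as np

# Non-Gaussianness D_n = h(N(0,1)) - h(S_n) of the standardized sum
# S_n = (X_1 + ... + X_n)/sqrt(n) of i.i.d. uniform X_i (zero mean, unit variance).
dx = 0.002
half = np.sqrt(3.0)                          # Uniform[-sqrt(3), sqrt(3)] has variance 1
f1 = np.full(int(2 * half / dx) + 1, 1.0)
f1 /= f1.sum() * dx                          # normalize so the grid mass is exactly 1

def entropy(f):
    """Differential entropy -sum f log f dx on the grid (0 log 0 := 0)."""
    p = f[f > 0]
    return -np.sum(p * np.log(p)) * dx

h_gauss = 0.5 * np.log(2.0 * np.pi * np.e)   # entropy of N(0, 1)

f, non_gaussianness = f1.copy(), []
for n in range(1, 5):
    if n > 1:
        f = np.convolve(f, f1) * dx          # density of X_1 + ... + X_n
    h_sn = entropy(f) - 0.5 * np.log(n)      # scaling: h(S_n) = h(sum) - (1/2) log n
    non_gaussianness.append(h_gauss - h_sn)

print(non_gaussianness)                      # strictly decreasing in n
```

For the uniform input, D_1 = (1/2) log(2πe) − log(2√3) ≈ 0.1765 nats, and each further convolution moves the sum closer to Gaussian.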
Hessian and concavity of mutual information, differential entropy, and entropy power in linear vector Gaussian channels
 IEEE Trans. Inf. Theory
, 2009
Cited by 11 (5 self)
Abstract—Within the framework of linear vector Gaussian channels with arbitrary signaling, the Jacobians of the minimum mean-square error and Fisher information matrices with respect to arbitrary parameters of the system are calculated in this paper. Capitalizing on prior research where the minimum mean-square error and Fisher information matrices were linked to information-theoretic quantities through differentiation, the Hessians of the mutual information and the entropy are derived. These expressions are then used to assess the concavity properties of mutual information and entropy under different channel conditions and also to derive a multivariate version of an entropy power inequality due to Costa. Index Terms—Concavity properties, differential entropy, entropy power, Fisher information matrix, Gaussian noise, Hessian matrices, linear vector Gaussian channels, minimum mean-square error (MMSE).
Estimation in Gaussian Noise: Properties of the Minimum Mean-Square Error
 IEEE Trans. Inf. Theory
, 2011
Cited by 10 (4 self)
Abstract—Consider the minimum mean-square error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise. The MMSE can be regarded as a function of the signal-to-noise ratio (SNR) as well as a functional of the input distribution (of the random variable to be estimated). It is shown that the MMSE is concave in the input distribution at any given SNR. For a given input distribution, the MMSE is found to be infinitely differentiable at all positive SNR, and in fact a real analytic function of the SNR under mild conditions. The key to these regularity results is that the posterior distribution conditioned on the observation through Gaussian channels always decays at least as quickly as some Gaussian density. Furthermore, simple expressions for the first three derivatives of the MMSE with respect to the SNR are obtained. It is also shown that, as functions of the SNR, the curves for the MMSE of a Gaussian input and that of a non-Gaussian input cross at most once over all SNRs. These properties lead to simple proofs of the facts that Gaussian inputs achieve both the secrecy capacity of scalar Gaussian wiretap channels and the capacity of scalar Gaussian broadcast channels, as well as a simple proof of the entropy power inequality in the special case where one of the variables is Gaussian. Index Terms—Entropy, estimation, Gaussian broadcast channel, Gaussian noise, Gaussian wiretap channel, minimum mean-square error (MMSE), mutual information.
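The single-crossing property can be seen numerically. A minimal sketch (my own illustration, not from the paper): the MMSE of an equiprobable BPSK input, computed by Gauss–Hermite quadrature, crosses the closed-form MMSE curve of a Gaussian input of variance 1/2 exactly once:

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(200)   # Gauss-Hermite nodes/weights
n, wn = np.sqrt(2.0) * t, w / np.sqrt(np.pi)  # rescaled for expectations over N(0,1)

def mmse_bpsk(snr):
    """MMSE of X in {-1,+1} from Y = sqrt(snr) X + N: E[X|Y] = tanh(sqrt(snr) Y).
    By symmetry we condition on X = +1, so sqrt(snr) Y = snr + sqrt(snr) N."""
    return 1.0 - np.sum(wn * np.tanh(snr + np.sqrt(snr) * n) ** 2)

def mmse_gauss(snr, var=0.5):
    """Closed-form MMSE of a Gaussian input with variance var."""
    return var / (1.0 + var * snr)

snrs = np.linspace(1e-3, 20.0, 2000)
diff = np.array([mmse_bpsk(s) for s in snrs]) - mmse_gauss(snrs)
crossings = int(np.sum(np.sign(diff[:-1]) != np.sign(diff[1:])))
print(crossings)   # one sign change over the grid
```

At snr → 0 the BPSK curve starts at 1 (above the variance-1/2 Gaussian), while at high SNR it decays exponentially and falls below the Gaussian's 1/snr decay, so exactly one crossing occurs.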
Representation of Mutual Information Via Input Estimates
Cited by 9 (3 self)
Abstract—A relationship between information theory and estimation theory was recently shown for the Gaussian channel, relating the derivative of the mutual information with the minimum mean-square error (MMSE). This paper generalizes the link between information theory and estimation theory to arbitrary channels, giving representations of the derivative of the mutual information as a function of the conditional marginal input distributions given the outputs. We illustrate the use of this representation in the efficient numerical computation of the mutual information achieved by inputs such as specific codes or natural language. Index Terms—Computation of mutual information, extrinsic information, input estimation, low-density parity-check (LDPC) codes, minimum mean-square error (MMSE), mutual information, soft channel decoding.
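For the Gaussian channel that this paper takes as its starting point, the cited relationship reads dI/dsnr = mmse(snr)/2 in nats. A quick numerical check for an equiprobable BPSK input (my own sketch, using the standard identities for this input, not code from the paper):

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(200)
n, wn = np.sqrt(2.0) * t, w / np.sqrt(np.pi)  # nodes/weights for E over N(0,1)

def mi_bpsk(s):
    """I(X; sqrt(s) X + N) in nats for equiprobable BPSK:
    I(s) = s - E[log cosh(s + sqrt(s) N)], a standard identity."""
    return s - np.sum(wn * np.log(np.cosh(s + np.sqrt(s) * n)))

def mmse_bpsk(s):
    """MMSE of BPSK: 1 - E[tanh^2(s + sqrt(s) N)]."""
    return 1.0 - np.sum(wn * np.tanh(s + np.sqrt(s) * n) ** 2)

s, ds = 1.5, 1e-5
deriv = (mi_bpsk(s + ds) - mi_bpsk(s - ds)) / (2.0 * ds)   # central difference
print(deriv, 0.5 * mmse_bpsk(s))   # the two values agree
```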
Mismatched estimation and relative entropy
 IEEE Trans. Inf. Theory
, 2010
Cited by 8 (2 self)
Abstract—A random variable with distribution P is observed in Gaussian noise and is estimated by a mismatched minimum mean-square estimator that assumes that the distribution is Q instead of P. This paper shows that the integral over all signal-to-noise ratios (SNRs) of the excess mean-square estimation error incurred by the mismatched estimator is twice the relative entropy D(P‖Q) (in nats). This representation of relative entropy can be generalized to non-real-valued random variables, and can be particularized to give new general representations of mutual information in terms of conditional means. Inspired by the new representation, we also propose a definition of free relative entropy which fills a gap in, and is consistent with, the literature on free probability. Index Terms—Divergence, free probability, minimum mean-square error (MMSE) estimation, mutual information, relative entropy, Shannon theory, statistics.
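When both P and Q are zero-mean Gaussians, the mismatched estimator is linear and everything is in closed form, so the identity can be verified directly. A sketch (my own check; the variances are illustrative choices), using 2D(P‖Q) = log(σ_Q²/σ_P²) + σ_P²/σ_Q² − 1 nats:

```python
import numpy as np
from scipy.integrate import quad

var_p, var_q = 1.0, 2.0   # X ~ P = N(0, var_p); the estimator assumes Q = N(0, var_q)

def excess_mse(s):
    """Excess MSE at SNR s for Y = sqrt(s) X + N. The Q-optimal (linear)
    estimator is x_hat(y) = var_q * sqrt(s) * y / (var_q * s + 1)."""
    mse_mismatched = (var_p + var_q**2 * s) / (var_q * s + 1.0) ** 2
    mmse_matched = var_p / (var_p * s + 1.0)
    return mse_mismatched - mmse_matched

integral, _ = quad(excess_mse, 0.0, np.inf)
two_d_pq = np.log(var_q / var_p) + var_p / var_q - 1.0   # 2 D(P||Q) in nats
print(integral, two_d_pq)   # both equal log 2 - 1/2 here
```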
MMSE dimension
 in Proc. 2010 IEEE Int. Symp. Inf. Theory
, 2010
Cited by 5 (4 self)
Abstract—If N is standard Gaussian, the minimum mean-square error (MMSE) of estimating a random variable X based on √snr · X + N vanishes at least as fast as 1/snr as snr → ∞. We define the MMSE dimension of X as the limit as snr → ∞ of the product of snr and the MMSE. MMSE dimension is also shown to be the asymptotic ratio of nonlinear MMSE to linear MMSE. For discrete, absolutely continuous or mixed distributions we show that the MMSE dimension equals Rényi's information dimension. However, for a class of self-similar singular distributions (e.g., the Cantor distribution), we show that the product of snr and MMSE oscillates around the information dimension periodically in snr (dB). We also show that these results extend considerably beyond Gaussian noise under various technical conditions. Index Terms—Additive noise, Bayesian statistics, Gaussian noise, high-SNR asymptotics, minimum mean-square error (MMSE), mutual information, non-Gaussian noise, Rényi information dimension.
MIMO Gaussian Channels With Arbitrary Inputs: Optimal Precoding and Power Allocation
, 2010
Cited by 5 (1 self)
In this paper, we investigate the linear precoding and power allocation policies that maximize the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error (MMSE). The optimal linear precoder satisfies a fixed-point equation as a function of the channel and the input constellation. For non-Gaussian inputs, a non-diagonal precoding matrix in general increases the information transmission rate, even for parallel non-interacting channels. Whenever precoding is precluded, the optimal power allocation policy also satisfies a fixed-point equation; we put forth a generalization of the mercury/waterfilling algorithm, previously proposed for parallel non-interfering channels, in which the mercury level accounts not only for the non-Gaussian input distributions, but also for the interference among the inputs.
Generalized Mercury/Waterfilling for Multiple-Input Multiple-Output Channels
Cited by 4 (2 self)
Abstract—We determine the power-allocation policy that maximizes the mutual information for general multiple-input multiple-output Gaussian channels with arbitrary input distributions, by capitalizing on the recent relationship between mutual information and minimum mean-square error (MMSE). In this context, we put forth a novel interpretation of the optimal power-allocation procedure that generalizes the mercury/waterfilling algorithm, previously proposed for parallel non-interfering channels. In this generalization the mercury level accounts for the suboptimal (non-Gaussian) input distribution and the interference between the inputs.
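In the parallel non-interfering special case, the optimality condition behind mercury/waterfilling is g_i · mmse_i(g_i p_i) = λ for every active channel i. A rough sketch of that special case (my own illustration under assumed equiprobable BPSK inputs, with arbitrary gains and power budget; the full generalized procedure for interfering channels in the paper is more involved), solved by nested bisection:

```python
import numpy as np

# Parallel channels y_i = sqrt(g_i p_i) x_i + n_i with BPSK inputs.
# KKT condition: g_i * mmse(g_i p_i) = lam for every channel with p_i > 0.
t, w = np.polynomial.hermite.hermgauss(200)
nodes, wn = np.sqrt(2.0) * t, w / np.sqrt(np.pi)

def mmse_bpsk(s):
    """MMSE of equiprobable X in {-1,+1} from sqrt(s) X + N."""
    return 1.0 - np.sum(wn * np.tanh(s + np.sqrt(s) * nodes) ** 2)

def power_for_level(g, lam, p_max=100.0, iters=80):
    """Solve g * mmse(g p) = lam for p (mmse is decreasing in p)."""
    if lam >= g:               # even p = 0 gives g * mmse(0) = g: switch channel off
        return 0.0
    lo, hi = 0.0, p_max
    for _ in range(iters):     # bisection on p
        mid = 0.5 * (lo + hi)
        if g * mmse_bpsk(g * mid) > lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mercury_waterfilling(gains, total_power):
    """Bisect the common level lam until the powers meet the budget."""
    lo, hi = 1e-9, max(gains)
    for _ in range(80):
        lam = 0.5 * (lo + hi)
        powers = [power_for_level(g, lam) for g in gains]
        if sum(powers) > total_power:
            lo = lam           # level too low -> too much power spent
        else:
            hi = lam
    return powers

gains, budget = [2.0, 1.0, 0.25], 3.0    # illustrative values
powers = mercury_waterfilling(gains, budget)
print(powers, sum(powers))               # powers sum to the budget
```

For Gaussian inputs mmse(s) = 1/(1+s) and the same condition reduces to classical waterfilling; the BPSK mmse in the level equation is what the mercury correction captures.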
Optimal precoding for digital subscriber lines
 in Proc. Int. Conf. Commun
, 2008
Cited by 3 (2 self)
Abstract—We determine the linear precoding policy that maximizes the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error (MMSE). The optimal linear precoder can be computed by means of a fixed-point equation as a function of the channel and the input constellation. We show that diagonalizing the channel matrix does not maximize the information transmission rate for non-Gaussian inputs. A full precoding matrix may significantly increase the information transmission rate, even for parallel non-interacting channels. We illustrate the application of our results to typical Gigabit DSL systems.
On optimal precoding in linear vector Gaussian channels with arbitrary input distribution
, 2009
Cited by 2 (1 self)
Abstract—The design of the precoder that maximizes the mutual information in linear vector Gaussian channels with an arbitrary input distribution is studied. Precisely, the precoder's optimal left singular vectors and singular values are derived. The characterization of the right singular vectors is left, in general, as an open problem whose computational complexity is then studied in three cases: Gaussian signaling, low SNR, and high SNR. For the Gaussian signaling case and the low-SNR regime, the dependence of the mutual information on the right singular vectors vanishes, making the optimal precoder design problem easy to solve. In the high-SNR regime, however, the dependence on the right singular vectors cannot be avoided, and we show the difficulty of computing the optimal precoder through an NP-hardness analysis.