Results 1–10 of 17
The empirical distribution of rate-constrained source codes
IEEE Trans. Inform. Theory
Abstract

Cited by 15 (2 self)
Let X = (X1, ...) be a stationary ergodic finite-alphabet source, let X^n denote its first n symbols, and let Y^n be the codeword assigned to X^n by a lossy source code. The empirical kth-order joint distribution Q̂_k[X^n, Y^n](x^k, y^k) is defined as the frequency of appearances of pairs of k-strings (x^k, y^k) along the pair (X^n, Y^n). Our main interest is in the sample behavior of this (random) distribution. Letting I(Q_k) denote the mutual information I(X^k; Y^k) when (X^k, Y^k) ~ Q_k, we show that for any (sequence of) lossy source codes of rate ≤ R,

limsup_{n→∞} (1/k) I(Q̂_k[X^n, Y^n]) ≤ R.
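The empirical kth-order joint distribution and its mutual information are easy to compute for small alphabets. A minimal sketch (function names `empirical_joint` and `mutual_information` are illustrative, not from the paper):

```python
from collections import Counter
from math import log2

def empirical_joint(xs, ys, k):
    """Empirical kth-order joint distribution: the frequency of each pair of
    k-strings (x^k, y^k) appearing along the pair of sequences (x^n, y^n)."""
    pairs = [(tuple(xs[i:i + k]), tuple(ys[i:i + k]))
             for i in range(len(xs) - k + 1)]
    counts = Counter(pairs)
    return {p: c / len(pairs) for p, c in counts.items()}

def mutual_information(q):
    """I(X^k; Y^k) in bits when (X^k, Y^k) is drawn from the joint law q."""
    px, py = Counter(), Counter()
    for (xk, yk), p in q.items():
        px[xk] += p
        py[yk] += p
    return sum(p * log2(p / (px[xk] * py[yk])) for (xk, yk), p in q.items())

# Toy check: if Y^n is a noiseless copy of X^n, then I(Q_k) equals the
# empirical entropy of the k-blocks.
xs = [0, 1, 0, 1, 0, 1, 0, 1]
q = empirical_joint(xs, xs, k=1)
print(mutual_information(q))  # 1.0 bit for a balanced binary sequence
```

With a noiseless copy the mutual information saturates at the empirical block entropy; a degraded codeword sequence drives it down toward the rate bound in the theorem.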
Minimum Complexity Pursuit for Universal Compressed Sensing
Abstract

Cited by 8 (4 self)
The nascent field of compressed sensing is founded on the fact that high-dimensional signals with “simple structure” can be recovered accurately from just a small number of randomized samples. Several specific kinds of structures have been explored in the literature, from sparsity and group sparsity to low-rankedness. However, two fundamental questions have been left unanswered, namely: What are the general abstract meanings of “structure” and “simplicity”? And do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we address these two questions. Using algorithmic information theory tools such as the Kolmogorov complexity, we provide a unified definition of structure and simplicity. Leveraging this new definition, we develop and analyze an abstract algorithm for signal recovery motivated by Occam’s Razor. Minimum complexity pursuit (MCP) requires just O(κ log n) randomized samples to recover a signal of complexity κ and ambient dimension n. We also discuss the performance of MCP in the presence of measurement noise and with approximately simple signals.
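Kolmogorov complexity is uncomputable, so any concrete illustration must substitute a computable proxy. The toy sketch below uses the number of nonzero entries as the complexity measure and brute-force search as the "pursuit"; it is a conceptual stand-in for MCP, not the paper's algorithm:

```python
import itertools
import random

def mcp_toy(A, y, alphabet=(0, 1), tol=1e-9):
    """Brute-force 'minimum complexity pursuit' over a finite candidate set:
    among all signals consistent with y = A x, return one with the fewest
    nonzero entries (a computable proxy for Kolmogorov complexity)."""
    n = len(A[0])
    best = None
    for x in itertools.product(alphabet, repeat=n):
        residual = sum((sum(a * v for a, v in zip(row, x)) - yj) ** 2
                       for row, yj in zip(A, y))
        if residual < tol:
            complexity = sum(1 for v in x if v != 0)
            if best is None or complexity < best[0]:
                best = (complexity, x)
    return None if best is None else best[1]

random.seed(0)
n, m = 6, 4                           # ambient dimension n, measurements m < n
x_true = (0, 0, 1, 0, 0, 0)           # a 1-sparse, hence low-complexity, signal
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sum(a * v for a, v in zip(row, x_true)) for row in A]
print(mcp_toy(A, y))                  # recovers x_true from 4 < 6 measurements
```

Generic Gaussian measurements make it overwhelmingly unlikely that any other candidate matches y exactly, which is why fewer samples than the ambient dimension suffice for a low-complexity signal.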
INFORMATION COMPLEXITY AND ESTIMATION
Abstract

Cited by 7 (7 self)
We consider an input x generated by an unknown stationary ergodic source X that enters a signal processing system J, resulting in w = J(x). We observe w through a noisy channel, y = z(w); our goal is to estimate x from y, J, and knowledge of f_{Y|W}. This is universal estimation, because f_X is unknown. We provide a formulation that describes a tradeoff between information complexity and noise. Initial theoretical, algorithmic, and experimental evidence is presented in support of our approach.
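One way to picture the complexity-noise tradeoff is a two-part cost: a description-length proxy for the candidate input plus a noise-fit term. The sketch below is a hypothetical toy (identity system J, transition-count complexity), not the paper's formulation:

```python
import itertools

def universal_estimate(y, system, candidates, complexity, noise_var):
    """Complexity-regularized estimation: choose the candidate input x that
    minimizes  complexity(x) + ||y - J(x)||^2 / (2 * noise_var),
    a two-part cost trading description length against fit to the noisy data."""
    def cost(x):
        w = system(x)
        residual = sum((yj - wj) ** 2 for yj, wj in zip(y, w))
        return complexity(x) + residual / (2 * noise_var)
    return min(candidates, key=cost)

def transitions(x):
    """Description-length proxy: long runs are 'simple', frequent flips are not."""
    return sum(1 for a, b in zip(x, x[1:]) if a != b)

# Hypothetical setup: J is the identity and the input is a binary string.
y = [0.1, -0.05, 0.2, 0.9, 1.1, 0.95]              # noisy view of 0 0 0 1 1 1
cands = list(itertools.product((0, 1), repeat=6))
xhat = universal_estimate(y, lambda x: x, cands, transitions, noise_var=0.1)
print(xhat)  # -> (0, 0, 0, 1, 1, 1)
```

Raising `noise_var` shifts the balance toward simpler (lower-complexity) estimates, which is the tradeoff the abstract describes.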
Minimum complexity pursuit: Stability analysis
In Proc. Int. Symposium Info. Theory (ISIT), 2012
Universally attainable error-exponents for rate-constrained denoising of noisy sources
IEEE Trans. Inform. Theory, 2002
Compressed Sensing via Universal Denoising and Approximate Message Passing
Abstract

Cited by 2 (2 self)
Abstract—We study compressed sensing (CS) signal reconstruction problems where an input signal is measured via matrix multiplication under additive white Gaussian noise. Our signals are assumed to be stationary and ergodic, but the input statistics are unknown; the goal is to provide reconstruction algorithms that are universal to the input statistics. We present a novel algorithm that combines: (i) the approximate message passing (AMP) CS reconstruction framework, which converts the matrix channel recovery problem into scalar channel denoising; (ii) a universal denoising scheme based on context quantization, which partitions the stationary ergodic signal denoising into independent and identically distributed (i.i.d.) subsequence denoising; and (iii) a density estimation approach that approximates the probability distribution of an i.i.d. sequence by fitting a Gaussian mixture (GM) model. In addition to the algorithmic framework, we provide three contributions: (i) numerical results showing that state evolution holds for non-separable Bayesian sliding-window denoisers; (ii) a universal denoiser that does not require the input signal to be bounded; and (iii) a modified GM learning algorithm extended into an i.i.d. denoiser. Our universal CS recovery algorithm compares favorably with existing reconstruction algorithms in terms of both reconstruction quality and runtime, despite not knowing the input statistics of the stationary ergodic signal. Index Terms—approximate message passing, compressed sensing, Gaussian mixture model, universal denoising.
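The density-estimation component (iii) can be illustrated by a plain EM fit of a 1-D Gaussian mixture to i.i.d. samples. A self-contained sketch (the quantile initialization and fixed iteration count are arbitrary choices, not from the paper):

```python
import math
import random

def fit_gmm_1d(data, k=2, iters=200):
    """EM fit of a k-component 1-D Gaussian mixture: the density-estimation
    step that approximates the distribution of an i.i.d. sequence."""
    srt = sorted(data)
    mu = [srt[(2 * j + 1) * len(srt) // (2 * k)] for j in range(k)]  # quantile init
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        resp = []
        for x in data:
            dens = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return w, mu, var

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(500)]
        + [random.gauss(5.0, 1.0) for _ in range(500)])
w, mu, var = fit_gmm_1d(data)
print(sorted(mu))  # means land near the true centers 0 and 5
```

Once a GM density is fitted to a scalar-channel subsequence, the corresponding Bayesian (posterior-mean) denoiser follows in closed form, which is what makes the GM family convenient inside AMP.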
Universal compressed sensing of Markov sources, arXiv preprint arXiv:1406.7807, 2014
Empirical Bayes and full Bayes for signal estimation
In Inf. Theory Appl. Workshop, 2014
Abstract

Cited by 1 (1 self)
Abstract—We consider signals that follow a parametric distribution where the parameter values are unknown. To estimate such signals from noisy measurements in scalar channels, we study the empirical performance of an empirical Bayes (EB) approach and a full Bayes (FB) approach. We then apply EB and FB to solve compressed sensing (CS) signal estimation problems by successively denoising a scalar Gaussian channel within an approximate message passing (AMP) framework. Our numerical results show that FB achieves better performance than EB in scalar channel denoising problems when the signal dimension is small. In the CS setting, the signal dimension must be large enough for AMP to work well; for large signal dimensions, AMP performs similarly with FB and EB.
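For the scalar Gaussian channel, the EB idea can be sketched with a Gaussian prior of unknown mean and variance: estimate the prior parameters by moments from the observations, then apply the resulting posterior-mean shrinkage. A toy illustration (the Gaussian parametric family here is an assumption for the sketch, not the paper's choice):

```python
import random

def eb_denoise(y, noise_var):
    """Toy empirical Bayes for the scalar channel y_i = x_i + N(0, noise_var),
    with prior x_i ~ N(theta, tau^2) whose parameters are unknown.
    Step 1: estimate theta and tau^2 by moments from the observations.
    Step 2: apply the posterior-mean (linear shrinkage) denoiser."""
    n = len(y)
    theta = sum(y) / n
    second = sum((v - theta) ** 2 for v in y) / n
    tau2 = max(second - noise_var, 1e-12)       # Var(y) = tau^2 + noise_var
    shrink = tau2 / (tau2 + noise_var)
    return [theta + shrink * (v - theta) for v in y]

random.seed(0)
x = [random.gauss(3.0, 1.0) for _ in range(2000)]   # true signal; prior unknown to EB
y = [xi + random.gauss(0.0, 1.0) for xi in x]       # unit-variance noise
xhat = eb_denoise(y, noise_var=1.0)
mse_raw = sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / len(x)
mse_eb = sum((xh - xi) ** 2 for xh, xi in zip(xhat, x)) / len(x)
print(mse_raw, mse_eb)   # shrinkage reduces the error (roughly half here)
```

A full Bayes approach would instead place a hyperprior on (theta, tau^2) and integrate over it, which is why FB can help when the signal dimension is too small for the moment estimates to be reliable.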
and How It Is Currently Solved
Abstract
Detecting arcing faults is an important but difficult-to-solve practical problem. In this paper, we show how the Minimum Description Length (MDL) Principle can help in solving this problem. Mathematics Subject Classification: 68Q30, 93AXX
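The MDL recipe, pick the model minimizing L(model) + L(data | model), can be sketched on a toy fault-detection task: decide whether a binary fault-indicator sequence is better described by one Bernoulli source or by two segments around a change point. All modeling choices below are illustrative, not from the paper:

```python
from math import log2

def bernoulli_bits(seg):
    """Ideal code length (bits) of a binary segment under its ML Bernoulli model."""
    n, k = len(seg), sum(seg)
    if k in (0, n):
        return 0.0
    p = k / n
    return -(k * log2(p) + (n - k) * log2(1 - p))

def mdl_changepoint(bits):
    """Two-part MDL model selection: one Bernoulli source versus two segments
    around a change point; each Bernoulli parameter costs (1/2) log2 n bits,
    and the split position costs log2 n bits."""
    n = len(bits)
    param = 0.5 * log2(n)
    one = bernoulli_bits(bits) + param
    two = min(bernoulli_bits(bits[:t]) + bernoulli_bits(bits[t:])
              + 2 * param + log2(n)
              for t in range(1, n))
    return "change" if two < one else "no change"

steady = [0, 0, 0, 1] * 15            # quiet line with occasional benign spikes
arcing = [0] * 30 + [1] * 30          # abrupt onset of fault pulses
print(mdl_changepoint(steady), mdl_changepoint(arcing))  # no change / change
```

The change-point model wins only when the savings in data bits exceed the extra parameter and split-position costs, which is exactly MDL's built-in guard against false alarms.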
Approximate Message Passing with Universal Denoising
Abstract
Abstract—We study compressed sensing (CS) signal reconstruction problems where an input signal is measured via matrix multiplication under additive white Gaussian noise. Our signals are assumed to be stationary and ergodic, but the input statistics are unknown; the goal is to provide reconstruction algorithms that are universal to the input statistics. We present a novel algorithmic framework that combines: (i) the approximate message passing (AMP) CS reconstruction framework, which solves the matrix channel recovery problem by iterative scalar channel denoising; (ii) a universal denoising scheme based on context quantization, which partitions the stationary ergodic signal denoising into independent and identically distributed (i.i.d.) subsequence denoising; and (iii) a density estimation approach that approximates the probability distribution of an i.i.d. sequence by fitting a Gaussian mixture (GM) model. In addition to the algorithmic framework, we provide three contributions: (i) numerical results showing that state evolution holds for non-separable Bayesian sliding-window denoisers; (ii) an i.i.d. denoiser based on a modified GM learning algorithm; and (iii) a universal denoiser that does not require the input signal to be bounded. We provide two implementations of our universal CS recovery algorithm, one faster and the other more accurate. The two implementations compare favorably with existing reconstruction algorithms in terms of both reconstruction quality and runtime. Index Terms—approximate message passing, compressed sensing, Gaussian mixture model, universal denoising.
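The context-quantization step (ii) can be pictured directly: quantize each sample's left context coarsely, pool samples that share a context into an (approximately) i.i.d. subsequence, and denoise each pool. The sketch below uses the pool mean as a crude i.i.d. denoiser; the paper's scheme uses far more refined machinery:

```python
import random
from collections import defaultdict

def context_denoise(y, context_len=2, levels=2):
    """Toy context-quantization denoiser: group each noisy sample by a
    coarsely quantized version of its left context, treat each group as an
    (approximately) i.i.d. subsequence, and denoise each group by its mean
    (a crude stand-in for a full i.i.d. denoiser)."""
    lo, hi = min(y), max(y)
    def quantize(v):                  # uniform scalar quantizer for the context
        return min(int((v - lo) / (hi - lo + 1e-12) * levels), levels - 1)
    pools = defaultdict(list)
    for i in range(context_len, len(y)):
        ctx = tuple(quantize(y[i - j]) for j in range(1, context_len + 1))
        pools[ctx].append(i)
    xhat = list(y)
    for idx in pools.values():
        mean = sum(y[i] for i in idx) / len(idx)
        for i in idx:
            xhat[i] = mean
    return xhat

random.seed(0)
# Stationary two-state source (long runs at 0 and at 4) plus unit-variance noise.
x, state = [], 0.0
for _ in range(3000):
    if random.random() < 0.02:
        state = 4.0 - state
    x.append(state)
y = [xi + random.gauss(0.0, 1.0) for xi in x]
xhat = context_denoise(y)
mse_noisy = sum((a - b) ** 2 for a, b in zip(y, x)) / len(x)
mse_hat = sum((a - b) ** 2 for a, b in zip(xhat, x)) / len(x)
print(mse_noisy, mse_hat)  # pooling by context reduces the error
```

Grouping by context exploits the source's memory: samples with similar pasts behave like draws from one distribution, so per-group statistics beat treating the whole sequence as a single unknown law.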