Results 1 - 4 of 4
Lossy compression via sparse linear regression: Computationally efficient encoding and decoding
Abstract

Cited by 5 (2 self)
We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance encoding, we show that such a code can attain the rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance σ² with (squared-error) distortion D can compress any ergodic source of variance less than σ² to within distortion D. Thus the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length.
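The construction described above can be made concrete with a toy sketch. The parameters below (n, L, M) and the coefficient scaling are illustrative assumptions chosen for readability, far smaller than the regime analyzed in the paper; the exhaustive minimum-distance search is feasible only at this toy scale.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only, not the paper's asymptotic regime).
n, L, M = 8, 3, 4                # block length, sections, columns per section
A = rng.normal(size=(n, L * M))  # i.i.d. N(0, 1) design matrix
c = 1.0 / np.sqrt(L)             # illustrative per-section coefficient

def codeword(choice):
    """Linear combination: one column from each of the L sections."""
    return c * sum(A[:, s * M + choice[s]] for s in range(L))

def min_distance_encode(x):
    """Exhaustive search over all M**L codewords for the nearest one."""
    best_choice, best_mse = None, np.inf
    for choice in itertools.product(range(M), repeat=L):
        mse = np.mean((x - codeword(choice)) ** 2)
        if mse < best_mse:
            best_choice, best_mse = choice, mse
    return best_choice, best_mse

x = rng.normal(size=n)  # i.i.d. Gaussian source sequence
choice, mse = min_distance_encode(x)
```

Note the compact representation: the codebook has M**L codewords, yet only the n x (L*M) matrix A needs to be stored.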
Sparse regression codes for multiterminal source and channel coding
in 50th Allerton Conf. on Commun., Control, and Computing, 2012
Abstract

Cited by 3 (2 self)
Abstract: We study a new class of codes for Gaussian multiterminal source and channel coding. These codes are designed using the statistical framework of high-dimensional linear regression and are called Sparse Superposition or Sparse Regression codes. Codewords are linear combinations of subsets of columns of a design matrix. These codes were introduced by Barron and Joseph and shown to achieve the channel capacity of AWGN channels with computationally feasible decoding. They have also recently been shown to achieve the optimal rate-distortion function for Gaussian sources. In this paper, we demonstrate how to implement random binning and superposition coding using sparse regression codes. In particular, with minimum-distance encoding/decoding it is shown that sparse regression codes attain the optimal information-theoretic limits for a variety of multiterminal source and channel coding problems.
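One simple way the sectioned codebook structure lends itself to binning can be sketched as follows. This is an illustrative assumption, not necessarily the paper's exact scheme: the column indices of the first few sections serve as the bin index, and the remaining sections index codewords within the bin.

```python
import numpy as np

# A codeword is specified by one column index per section; split those
# indices into a "bin" part and a "within-bin" part. (Sketch only -- the
# paper's construction may differ, e.g. in which sections define the bin.)
L, M = 6, 4      # sections and columns per section
L_bin = 2        # assumption: the first L_bin sections index the bin

def split(choice):
    """Map a full section-index tuple to (bin index, within-bin indices)."""
    bin_part, rest = choice[:L_bin], choice[L_bin:]
    bin_idx = int(np.ravel_multi_index(bin_part, (M,) * L_bin))
    return bin_idx, rest

choice = (3, 1, 0, 2, 2, 1)
bin_idx, rest = split(choice)   # bin index lies in [0, M**L_bin)
```

Because each bin is itself a sparse regression codebook over the remaining sections, the same encoding machinery applies within a bin.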
The Gaussian Rate-Distortion Function of Sparse Regression Codes with Optimal Encoding, arXiv preprint arXiv:1401.5272, 2014
Abstract

Cited by 1 (0 self)
Abstract: We study the rate-distortion performance of Sparse Regression Codes, where the codewords are linear combinations of subsets of columns of a design matrix. It is shown that with minimum-distance encoding and squared-error distortion, these codes achieve R*(D), the Shannon rate-distortion function for i.i.d. Gaussian sources. This completes a previous result which showed that R*(D) was achievable for distortions below a certain threshold. The proof is based on the second moment method, a popular technique to show that a non-negative random variable X is strictly positive with high probability. We first identify the reason behind the failure of the vanilla second moment method for this problem, and then introduce a refinement to show that R*(D) is achievable for all distortions.
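For context, the second moment method referred to here rests on the following standard inequality (a general fact, stated here for the reader rather than taken from the paper): for a non-negative random variable X with E[X] > 0,

```latex
\Pr(X > 0) \;\ge\; \frac{\left(\mathbb{E}[X]\right)^{2}}{\mathbb{E}\!\left[X^{2}\right]},
```

which follows from Cauchy-Schwarz applied to $X = X \cdot \mathbf{1}\{X > 0\}$. In this setting, X would typically count the codewords within distortion D of the source sequence, so showing $\mathbb{E}[X^{2}] \approx (\mathbb{E}[X])^{2}$ gives $\Pr(X > 0) \to 1$, i.e., a good codeword exists with high probability.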
Sparse Regression Codes: Recent Results and Future Directions
Abstract
Abstract: Sparse Superposition or Sparse Regression codes were recently introduced by Barron and Joseph for communication over the AWGN channel. The code is defined in terms of a design matrix; codewords are linear combinations of subsets of columns of the matrix. These codes achieve the AWGN channel capacity with computationally feasible decoding. We have shown that they also achieve the optimal rate-distortion function for Gaussian sources. Further, the sparse regression codebook has a partitioned structure that facilitates random binning and superposition. In this paper, we review existing results concerning Sparse Regression codes and discuss directions for future research.