Results 1 - 2 of 2
Convolutional deep stacking networks for distributed compressive sensing - Signal Processing, 2016
"... a b s t r a c t This paper addresses the reconstruction of sparse vectors in the Multiple Measurement Vectors (MMV) problem in compressive sensing, where the sparse vectors are correlated. This problem has so far been studied using model based and Bayesian methods. In this paper, we propose a deep ..."
Abstract
Cited by 1 (1 self)
This paper addresses the reconstruction of sparse vectors in the Multiple Measurement Vectors (MMV) problem in compressive sensing, where the sparse vectors are correlated. This problem has so far been studied using model-based and Bayesian methods. In this paper, we propose a deep learning approach that relies on a Convolutional Deep Stacking Network (CDSN) to capture the dependency among the different channels. To reconstruct the sparse vectors, we propose a greedy method that exploits the information captured by the CDSN. The proposed method encodes the sparse vectors using random measurements (as usually done in compressive sensing). Experiments using a real-world image dataset show that the proposed method outperforms the traditional MMV solver, Simultaneous Orthogonal Matching Pursuit (SOMP), as well as three of the Bayesian methods proposed for solving the MMV compressive sensing problem. We also show that the proposed method is almost as fast as greedy methods. The good performance of the proposed method depends on the availability of training data (as is the case for all deep learning methods). Such training data, e.g., different images of the same class or signals with similar sparsity patterns, are usually available in many applications.
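As a concrete illustration of the random-measurement encoding described in this abstract, below is a minimal NumPy sketch of the MMV measurement step. The dimensions, the Gaussian choice of the measurement matrix, and the shared-support model of channel correlation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of MMV compressive-sensing encoding: each correlated sparse
# channel is measured with the same wide random matrix Phi (M < N).
# All sizes and the Gaussian/shared-support choices below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, L = 256, 96, 4                                # signal length, measurements, channels
Phi = rng.standard_normal((M, N)) / np.sqrt(M)      # wide random measurement matrix

# Correlated sparse channels: a shared support with channel-specific coefficients.
support = rng.choice(N, size=10, replace=False)
S = np.zeros((N, L))
S[support, :] = rng.standard_normal((10, L))

Y = Phi @ S                                         # compressive measurements, one column per channel
print(Y.shape)                                      # (96, 4)
```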
RECONSTRUCTION OF SPARSE VECTORS IN COMPRESSIVE SENSING WITH MULTIPLE MEASUREMENT VECTORS USING BIDIRECTIONAL LONG SHORT-TERM MEMORY
"... ABSTRACT In this paper we address the problem of compressive sensing with multiple measurement vectors. We propose a reconstruction algorithm which learns sparse structure inside each sparse vector and among sparse vectors. The learning is based on a cross entropy cost function. The model is the Bi ..."
Abstract
In this paper we address the problem of compressive sensing with multiple measurement vectors. We propose a reconstruction algorithm which learns the sparse structure both within each sparse vector and across the sparse vectors. The learning is based on a cross-entropy cost function. The model is the Bidirectional Long Short-Term Memory, which is deep in time. All modifications are made at the decoder, so that the encoder remains the general compressive sensing encoder, i.e., a wide random matrix. Through numerical experiments on a real-world dataset, we show that the proposed method outperforms the traditional greedy algorithm SOMP as well as a number of model-based Bayesian methods, including Multitask Compressive Sensing and Compressive Sensing with Temporally Correlated Sources. We emphasize that since the proposed method is a learning-based method, its performance depends on the availability of training data. Nevertheless, in many applications a huge dataset of offline training data is usually available.

Index Terms: Deep Learning, Compressive Sensing, Sparse Reconstruction.

1. INTRODUCTION

Compressive Sensing (CS) [1] is a framework where sensing and compression are performed at the same time. This is made possible by exploiting the sparsity of a signal, either in the time or spatial domain, or in a transform domain such as DCT or Wavelet. Given that many natural signals are sparse in one of these domains, CS has found numerous applications. In compressive sensing with one measurement vector, instead of acquiring N samples of a signal x ∈ ℝ^{N×1}, M random measurements are acquired, where M < N:

y = Φx,

where y ∈ ℝ^{M×1} is the known measured vector and Φ ∈ ℝ^{M×N} is a wide random measurement matrix. ... To reconstruct S in (3), there are a number of approaches in the literature: the greedy approach [2], where a subset selection is performed that is not necessarily optimal, and the relaxed mixed-norm minimization approach. Recently, a number of methods based on Deep Learning have also been proposed. In this paper, we address the MMV problem where the sparsity patterns in the different channels do not differ much; for example, all channels are DCT or Wavelet transforms of images. This problem has wide practical applications. ...
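The greedy SOMP baseline named above admits a short sketch. The following is a minimal NumPy implementation of the standard SOMP iteration (correlate, select an atom, least-squares fit on the support, update the residual); the fixed sparsity level k used as the stopping rule and all variable names are illustrative assumptions rather than details from either paper.

```python
# Minimal sketch of Simultaneous Orthogonal Matching Pursuit (SOMP), the greedy
# MMV baseline both abstracts compare against. Stopping after k iterations is
# an assumption; noise-aware stopping rules are also common.
import numpy as np

def somp(Phi, Y, k):
    """Recover a row-sparse S (N x L) from Y = Phi @ S, with Phi of size M x N, M < N."""
    N = Phi.shape[1]
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Column of Phi most correlated with the residual, summed over all channels.
        correlations = np.linalg.norm(Phi.T @ residual, axis=1)
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, shared by all measurement vectors.
        Phi_s = Phi[:, support]
        coeffs, *_ = np.linalg.lstsq(Phi_s, Y, rcond=None)
        residual = Y - Phi_s @ coeffs
    S_hat = np.zeros((N, Y.shape[1]))
    S_hat[support, :] = coeffs
    return S_hat

# Toy usage: 3 channels sharing one support, measured with a random Gaussian matrix.
rng = np.random.default_rng(0)
N, M, L, k = 64, 24, 3, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
S = np.zeros((N, L))
S[rng.choice(N, k, replace=False), :] = rng.standard_normal((k, L))
Y = Phi @ S
print(np.linalg.norm(somp(Phi, Y, k) - S))   # near zero in the noiseless case
```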