Results 1–10 of 19
A Fast and Efficient Algorithm for Low-Rank Approximation of a Matrix
Abstract

Cited by 18 (1 self)
The low-rank matrix approximation problem involves finding a rank-k version of an m × n matrix A, labeled Ak, such that Ak is as “close” as possible to the best SVD approximation of A at the same rank level. Previous approaches approximate A by adaptively (non-uniformly) sampling some columns (or rows) of A, hoping that this subset of columns contains enough information about A. The submatrix is then used for the approximation process. However, these approaches are often computationally intensive due to the complexity of the adaptive sampling. In this paper, we propose a fast and efficient algorithm which first preprocesses A in order to spread out the information (energy) of every column (or row) of A, then randomly selects some of its columns (or rows). Finally, a rank-k approximation is generated from the row space of the selected set. The preprocessing step is performed by uniformly randomizing the signs of the entries of A and transforming all columns of A by an orthonormal matrix F with an existing fast implementation (e.g. Hadamard, FFT, DCT...). Our main contribution is summarized as follows. 1) We show that by uniformly selecting at random d rows of the preprocessed matrix with d = O(1 ...
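The preprocess-then-uniformly-sample scheme described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions (row-wise sign flips, and a dense Sylvester Hadamard matrix standing in for a fast transform), not the authors' implementation:

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def randomized_lowrank(A, k, d, rng):
    """Sketch: randomize row signs, mix rows with an orthonormal transform,
    uniformly sample d rows, then project A onto their row space and truncate."""
    m, n = A.shape
    signs = rng.choice([-1.0, 1.0], size=m)
    F = hadamard(m)                      # fast transform, shown here as a dense matrix
    B = F @ (signs[:, None] * A)         # preprocessing spreads energy across all rows
    idx = rng.choice(m, size=d, replace=False)
    S = B[idx, :]                        # uniformly sampled rows
    Q, _ = np.linalg.qr(S.T)             # orthonormal basis of the sampled row space
    P = A @ Q @ Q.T                      # project A onto that row space
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

Because the preprocessing is an invertible left-multiplication, the sampled rows generically span the row space of a low-rank A, so the projection loses little.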
Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques
, 2011
Abstract

Cited by 14 (8 self)
We advocate a compressed sensing strategy that consists of multiplying the signal of interest by a wide-bandwidth modulation before projection onto randomly selected vectors of an orthonormal basis. Firstly, in a digital setting with random modulation, considering a whole class of sensing bases including the Fourier basis, we prove that the technique is universal in the sense that the required number of measurements for accurate recovery is optimal and independent of the sparsity basis. This universality stems from a drastic decrease of coherence between the sparsity and the sensing bases, which for a Fourier sensing basis relates to a spread of the original signal spectrum by the modulation (hence the name “spread spectrum”). The approach is also efficient, as sensing matrices with fast matrix multiplication algorithms can be used, in particular in the case of Fourier measurements. Secondly, these results are confirmed by a numerical analysis of the phase transition of the ℓ1-minimization problem. Finally, we show that the spread spectrum technique remains effective in an analog setting with chirp modulation for application to realistic Fourier imaging. We illustrate these findings in the context of radio interferometry.
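The measurement operator this abstract describes (pre-modulation followed by randomly selected rows of an orthonormal Fourier basis) can be sketched as below. The function names are illustrative, and a random ±1 sequence stands in for the wide-band modulation:

```python
import numpy as np

def make_spread_spectrum_op(n, m, rng):
    """Sketch of the spread spectrum sensing strategy: modulate the signal by a
    random +/-1 sequence, then keep m randomly selected rows of the
    orthonormal DFT. Returns the forward operator and its adjoint."""
    c = rng.choice([-1.0, 1.0], size=n)         # random modulation sequence
    idx = rng.choice(n, size=m, replace=False)  # randomly selected sensing vectors

    def A(x):
        # forward operator: O(n log n) via the FFT
        return np.fft.fft(c * x, norm="ortho")[idx]

    def At(y):
        # adjoint: zero-fill, inverse orthonormal DFT, undo the modulation
        z = np.zeros(n, dtype=complex)
        z[idx] = y
        return c * np.fft.ifft(z, norm="ortho")

    return A, At
```

Both operators cost O(n log n), which is the efficiency advantage the abstract claims over dense random sensing matrices.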
Sparsity adaptive matching pursuit algorithm for practical compressed sensing
 in Proceedings of the 42nd Asilomar Conference on Signals, Systems, and Computers
, 2008
Abstract

Cited by 7 (2 self)
This paper presents a novel iterative greedy reconstruction algorithm for practical compressed sensing (CS), called the sparsity adaptive matching pursuit (SAMP). Compared with other state-of-the-art greedy algorithms, the most innovative feature of the SAMP is its capability of signal reconstruction without prior information of the sparsity. This makes it a promising candidate for many practical applications in which the number of non-zero (significant) coefficients of a signal is not available. The proposed algorithm adopts a flavor similar to the EM algorithm, alternately estimating the sparsity and the true support set of the target signals. In fact, SAMP provides a generalized greedy reconstruction framework in which the orthogonal matching pursuit and the subspace pursuit can be viewed as special cases. Such a connection also gives an intuitive justification of the trade-offs between computational complexity and reconstruction performance. While the SAMP offers theoretical guarantees comparable to the best optimization-based approaches, simulation results show that it outperforms many existing iterative algorithms, especially for compressible signals.
Index Terms—Sparsity adaptive, greedy pursuit, compressed sensing, compressive sampling, sparse reconstruction
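A minimal sketch of the stage-wise idea (a subspace-pursuit-style inner loop whose support size grows whenever the residual stops improving, so the sparsity need not be known in advance) might look like the following. This is a simplified illustration, not the published SAMP pseudocode:

```python
import numpy as np

def samp(A, y, step=1, tol=1e-6, max_iter=100):
    """Sparsity-adaptive greedy pursuit sketch: subspace-pursuit iterations with
    support size L, where L grows by `step` when the residual stops improving."""
    n = A.shape[1]
    L = step
    support = np.zeros(0, dtype=int)
    r = y.copy()
    best_norm = np.linalg.norm(y)
    for _ in range(max_iter):
        # candidate set: current support plus the L best-correlated columns
        corr = np.abs(A.T @ r)
        cand = np.union1d(support, np.argsort(corr)[-L:])
        # least squares over candidates, then keep the L largest coefficients
        x_cand, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        keep = cand[np.argsort(np.abs(x_cand))[-L:]]
        x_keep, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        new_norm = np.linalg.norm(y - A[:, keep] @ x_keep)
        if new_norm >= best_norm:
            L += step                   # stage switch: grow the sparsity estimate
        else:
            support = keep              # accept the improved support
            best_norm = new_norm
            r = y - A[:, keep] @ x_keep
            if best_norm < tol:
                break
    x = np.zeros(n)
    if support.size:
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = sol
    return x
```

With `L` frozen, the inner loop behaves like subspace pursuit; with `step = 1` and immediate acceptance it degenerates toward orthogonal matching pursuit, which is the special-case relationship the abstract mentions.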
Spatially-localized compressed sensing and routing in multi-hop sensor networks
 in GSN
, 2009
Abstract

Cited by 4 (1 self)
We propose energy-efficient compressed sensing for wireless sensor networks using spatially-localized sparse projections. To keep the transmission cost for each measurement low, we obtain measurements from clusters of adjacent sensors. With localized projections, we show that joint reconstruction provides significantly better reconstruction than independent reconstruction. We also propose a metric of energy overlap between clusters and basis functions that allows us to characterize the gains of joint reconstruction for different basis functions. Compared with state-of-the-art compressed sensing techniques for sensor networks, our experimental results demonstrate significant gains in reconstruction accuracy and transmission cost.
FAST AND EFFICIENT DIMENSIONALITY REDUCTION USING STRUCTURALLY RANDOM MATRICES
Abstract

Cited by 2 (0 self)
Structurally Random Matrices (SRM) were first proposed in [1] as fast and highly efficient measurement operators for large-scale compressed sensing applications. Motivated by the bridge between compressed sensing and the Johnson-Lindenstrauss lemma [2], this paper introduces a related application of SRMs: realizing a fast and highly efficient embedding. In particular, it shows that an SRM is also a promising dimensionality reduction transform that preserves all pairwise distances of high-dimensional vectors within an arbitrarily small factor ε, provided that the projection dimension is on the order of O(ε⁻² log³ N), where N denotes the number of d-dimensional vectors. In other words, an SRM can be viewed as a sub-optimal Johnson-Lindenstrauss embedding that nevertheless has very low computational complexity O(d log d) and a highly efficient implementation using only O(d) random bits, making it a promising candidate for practical, large-scale applications where efficiency and speed of computation are critical.
Index Terms—Low-distortion embedding, Johnson-Lindenstrauss, dimensionality reduction, compressed sensing, machine learning.
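The embedding this abstract describes (sign randomization, a fast orthonormal transform, then uniform subsampling with rescaling) can be sketched as follows, with the orthonormal FFT standing in for the fast transform:

```python
import numpy as np

def srm_embedding(X, d, rng):
    """Sketch of an SRM-style Johnson-Lindenstrauss embedding of the rows of X:
    flip column signs (O(n) random bits), apply a fast orthonormal transform,
    then uniformly subsample d coordinates, rescaled by sqrt(n/d)."""
    n = X.shape[1]
    signs = rng.choice([-1.0, 1.0], size=n)       # random sign flips
    idx = rng.choice(n, size=d, replace=False)    # uniform coordinate sampling
    Y = np.fft.fft(X * signs, norm="ortho", axis=1)[:, idx]
    return np.sqrt(n / d) * Y
```

The sign flips and unitary transform spread each vector's energy evenly across coordinates, which is what makes uniform (rather than adaptive) subsampling sufficient.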
Compressed-Sensing Recovery of Images and Video Using Multihypothesis Predictions
 in Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers
, 2011
Abstract

Cited by 2 (1 self)
Compressed-sensing reconstruction of still images and video sequences driven by multihypothesis predictions is considered. Specifically, for still images, multiple predictions for an image block are drawn from spatially surrounding blocks within an initial non-predicted reconstruction. For video, multihypothesis predictions of the current frame are generated from one or more previously reconstructed reference frames. In each case, the predictions are used to generate a residual in the domain of the compressed-sensing random projections. This residual, being typically more compressible than the original signal, leads to improved reconstruction quality. To appropriately weight the hypothesis predictions, Tikhonov regularization of an ill-posed least-squares optimization is proposed. Experimental results demonstrate that the proposed reconstructions outperform alternative strategies that do not employ multihypothesis predictions.
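The Tikhonov weighting step admits a closed form. The sketch below is an assumed formulation (the columns of H are hypothesis predictions already projected into the measurement domain, and the regularizer penalizes hypotheses whose projections lie far from the measurements), not necessarily the paper's exact operator:

```python
import numpy as np

def tikhonov_weights(H, y, lam=0.1):
    """Weight K hypothesis predictions (columns of H, in the measurement domain)
    by Tikhonov-regularized least squares:
        w = argmin ||y - H w||^2 + lam^2 ||G w||^2,
    where G is diagonal with the distance of each hypothesis to y."""
    g = np.linalg.norm(H - y[:, None], axis=0)   # per-hypothesis distance to y
    G = np.diag(g)
    # normal equations of the regularized least-squares problem
    w = np.linalg.solve(H.T @ H + lam**2 * G.T @ G, H.T @ y)
    return w
```

The weighted prediction is then `H @ w`; the regularizer stabilizes the otherwise ill-posed fit when the hypotheses are nearly collinear.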
Joint Optimization of Transport Cost and Reconstruction for Spatially-Localized Compressed Sensing in Multi-Hop Sensor Networks
Abstract

Cited by 2 (1 self)
In sensor networks, energy-efficient data manipulation and transmission are very important for data gathering, due to significant power constraints on the sensors. As a potential solution, Compressed Sensing (CS) has been proposed, because it requires capturing a smaller number of samples for successful reconstruction of sparse data. Traditional CS does not explicitly take into consideration the cost of each measurement (it simply tries to minimize the number of measurements), ignoring the need to transport measurements over the sensor network. In this paper, we study CS approaches for sensor networks that are spatially localized, thus reducing the cost of data gathering. In particular, we study the reconstruction accuracy properties of a new distributed measurement system that constructs measurements within spatially-localized clusters. We first introduce the concept of maximum energy overlap between clusters and basis functions (β), and show that β can be used to estimate the minimum number of measurements required for accurate reconstruction. Based on this metric, we propose a centralized iterative algorithm for joint optimization of the energy overlap and the distance between sensors in each cluster. Our simulation results show that we can achieve significant savings in transport cost with small reconstruction error.
Scalable Video Coding with Compressive Sensing for Wireless Videocast
Abstract
Channel coding such as Reed-Solomon (RS) and convolutional codes has been widely used to protect video transmission in wireless networks. However, this type of channel coding can effectively correct error bits only if the error rate is smaller than a given threshold; when the bit error rate is underestimated, the effectiveness of channel coding drops dramatically, and so does the decoded video quality. In this paper, we propose a low-complexity, scalable video coding architecture based on compressive sensing (SVCCS) for wireless unicast and multicast transmissions. SVCCS achieves good scalability, error resilience, and coding efficiency. The SVCCS encoded bitstream is divided into base and enhancement layers. The layered structure provides quality and temporal scalability, while in the enhancement layer the CS measurements provide fine-granular quality scalability. In addition, we incorporate state-of-the-art technologies to improve the compressive sensing coding efficiency. Experimental results show that SVCCS is more effective and efficient for wireless videocast than existing solutions.
A FAST AND EFFICIENT HEURISTIC NUCLEAR-NORM ALGORITHM FOR AFFINE RANK MINIMIZATION
Abstract
The problem of affine rank minimization seeks to find the minimum-rank matrix that satisfies a set of linear equality constraints. Since affine rank minimization is in general NP-hard, a popular heuristic method is to minimize the nuclear norm, the sum of the singular values of the matrix variable [1]. A recent intriguing paper [2] shows that if the linear transform that defines the set of equality constraints is nearly isometrically distributed and the number of constraints is at least O(r(m + n) log mn), where r and m × n are the rank and size of the minimum-rank matrix, minimizing the nuclear norm yields exactly the minimum-rank matrix solution. Unfortunately, solving the nuclear norm minimization problem with known nearly isometric transforms takes a large amount of computation and memory. This paper presents a fast and efficient algorithm for nuclear norm minimization that employs structurally random matrices [3] for its linear transform and a projected subgradient method that exploits the unique features of structurally random matrices to substantially speed up the optimization process. Theoretically, we show that nuclear norm minimization using structurally random linear constraints guarantees the minimum-rank matrix solution if the number of linear constraints is at least O(r(m + n) log³ mn). Extensive simulations verify that structurally random transforms retain optimal performance while their implementation complexity is just a fraction of that of completely random transforms, making them promising candidates for large-scale applications.
Index Terms—Rank minimization, nuclear norm heuristic, compressed sensing, system identification, structurally random transforms, random matrices
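A bare-bones sketch of a projected subgradient method for this problem follows, assuming the measurement operator has orthonormal rows (as a structurally random transform can be arranged to have), so that the projection back onto the constraint set is a single adjoint application:

```python
import numpy as np

def nuclear_norm_subgradient(X):
    """A subgradient of the nuclear norm: U V^T from the thin SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

def projected_subgradient(A_rows, b, shape, steps=200, alpha=0.01):
    """Minimize ||X||_* subject to A vec(X) = b by alternating a subgradient
    step with re-projection onto the affine constraint set. A_rows is assumed
    to have orthonormal rows, so the projection is X - A^T (A vec(X) - b)."""
    m, n = shape
    X = (A_rows.T @ b).reshape(m, n)        # feasible minimum-norm starting point
    for _ in range(steps):
        X = X - alpha * nuclear_norm_subgradient(X)   # descend on the nuclear norm
        r = A_rows @ X.ravel() - b                    # constraint violation
        X = X - (A_rows.T @ r).reshape(m, n)          # restore feasibility
    return X
```

With orthonormal rows the feasibility projection is exact and costs only one forward and one adjoint application per iteration, which is the speed-up the abstract attributes to structurally random transforms.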