Results 1–10 of 226
Fast Linearized Bregman Iteration for Compressed Sensing and Sparse Denoising
UCLA CAM Reports, 2008
Abstract

Cited by 62 (16 self)
Abstract. Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm for finding a minimal ℓ1-norm solution is needed. This means that the algorithm should be tailored to large-scale, completely dense matrices A for which Au and A^T u can be computed by fast transforms, and the solution sought is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on this analysis, we also derive a new algorithm that is proven to converge with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [28, 32] in approximating a minimal ℓ1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another efficient tool in compressed sensing. 1. Introduction. Let A ∈ R^{m×n} with n > m and f ∈ R^m be given. The aim of a basis pursuit problem is to find u ∈ R^n by solving the constrained minimization problem min_{u∈R^n} ‖u‖₁ subject to Au = f.
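The linearized Bregman iteration the abstract refers to is simple enough to sketch. Below is a minimal illustrative implementation for approximating a minimal ℓ1-norm solution of Au = f; the parameter names and values (mu, delta) and the tiny random test problem are our own choices for the sketch, not taken from the paper.

```python
import numpy as np

def shrink(x, mu):
    """Componentwise soft-thresholding: sign(x) * max(|x| - mu, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu=5.0, delta=0.9, iters=2000):
    """Sketch of linearized Bregman iteration for min ||u||_1 s.t. Au = f."""
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (f - A @ u)      # gradient step on the residual
        u = delta * shrink(v, mu)   # soft-threshold the accumulated gradient
    return u

# tiny demo: recover a 2-sparse vector from 8 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
A /= np.linalg.norm(A, 2)           # rescale so the step size is stable
u_true = np.zeros(20)
u_true[3], u_true[11] = 1.0, -2.0
f = A @ u_true
u_hat = linearized_bregman(A, f)
```

Convergence requires the step parameter delta to be small relative to the spectral norm of A, so A is rescaled to unit spectral norm above to keep delta = 0.9 safe.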
Dictionaries for Sparse Representation Modeling
Abstract

Cited by 51 (3 self)
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, a proper dictionary can be chosen in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
Sparse Representation For Computer Vision and Pattern Recognition
, 2009
Abstract

Cited by 47 (1 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Abstract

Cited by 39 (23 self)
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels
Abstract

Cited by 36 (6 self)
High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that becomes more pronounced as the signal space dimension grows (e.g., due to large bandwidth or a large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on recent advances in the theory of compressed sensing. In particular, it is shown that the proposed approach, termed compressed channel sensing, can potentially achieve a target reconstruction error using far less energy and, in many instances, less latency and bandwidth than dictated by traditional least-squares-based training methods.
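As a toy illustration of the compressed-sensing viewpoint on channel estimation, the sketch below recovers a hypothetical sparse channel (3 active taps out of 64) from 24 random pilot measurements using orthogonal matching pursuit. The matrix Phi, tap positions, and dimensions are invented for the example, and OMP is just one of several sparse-recovery algorithms one could use here; it is not necessarily the method of this paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily estimate a k-sparse x with Ax ≈ y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on support
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# hypothetical sparse channel: 3 active taps out of 64
rng = np.random.default_rng(1)
h = np.zeros(64)
h[[5, 20, 41]] = [1.0, -0.6, 0.3]
Phi = rng.standard_normal((24, 64)) / np.sqrt(24)  # random training/probing matrix
y = Phi @ h                                        # received pilot observations
h_hat = omp(Phi, y, k=3)
```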
Almost optimal unrestricted fast Johnson-Lindenstrauss transform
Abstract

Cited by 33 (1 self)
The problems of random projections and sparse reconstruction have much in common and have individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley’s theorem for bounding Gaussian processes. Our main result states that any set of N = exp(Õ(n)) real vectors in n-dimensional space can be linearly mapped to a space of dimension k = O(log N polylog(n)), while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time O(n log n) on each vector. This improves on the best known N = exp(Õ(n^{1/2})) achieved by Ailon and Liberty and N = exp(Õ(n^{1/3})) by Ailon and Chazelle. The dependence on the distortion constant, however, is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a polylog(n) factor while considerably simplifying their constructions.
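To make the notion of a "fast" Johnson-Lindenstrauss transform concrete, here is a minimal sketch of one standard variant, the subsampled randomized Hadamard transform: random sign flips, an O(n log n) Walsh-Hadamard transform, then uniform coordinate sampling. This illustrates the general idea only; it is not the specific construction analyzed in this paper.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(n log n); n must be a power of two.
    Scaled to be orthonormal (norm-preserving)."""
    x = x.copy()
    n = x.size
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b        # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x / np.sqrt(n)

def srht_embed(x, k, rng):
    """Embed x in R^n into R^k: sign flips, fast Hadamard, coordinate sampling
    (rescaled so squared norms are preserved in expectation)."""
    n = x.size
    d = rng.choice([-1.0, 1.0], size=n)            # random diagonal of signs
    coords = rng.choice(n, size=k, replace=False)  # uniform coordinate sample
    return np.sqrt(n / k) * fwht(d * x)[coords]

# demo: embed an 8-dimensional vector into R^4
rng = np.random.default_rng(0)
y = srht_embed(np.ones(8), 4, rng)
```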
Compressed sensing in astronomy
Abstract

Cited by 24 (1 self)
Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: compressed sensing. This theory is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. In this paper we investigate how compressed sensing (CS) can provide new insights into astronomical data compression and, more generally, how it paves the way for new conceptions in astronomical remote sensing. We first give a brief overview of compressed sensing theory, which provides a very simple coding process with low computational cost, favoring its use for the real-time applications often found on board space missions. We introduce a practical and effective recovery algorithm for decoding compressed data. In astronomy, physical prior information is often crucial for devising effective signal processing methods. We particularly point out that a CS-based compression scheme is flexible enough to account for such information. In this context, compressed sensing is a new framework in which data acquisition and data processing are merged. We also show that CS provides a fantastic new way to handle multiple observations of the same field of view, allowing us to recover information at very low signal-to-noise ratio, which is impossible with standard compression methods. This CS data-fusion concept could lead to an elegant and effective way to solve the problem ESA faces in transmitting to Earth the data collected by PACS, one of the instruments on board the Herschel spacecraft, which will be launched in 2008.
On the Role of Sparse and Redundant Representations in Image Processing
 PROCEEDINGS OF THE IEEE – SPECIAL ISSUE ON APPLICATIONS OF SPARSE REPRESENTATION AND COMPRESSIVE SENSING
, 2009
Abstract

Cited by 24 (0 self)
Much of the progress made in image processing in the past decades can be attributed to better modeling of image content, and a wise deployment of these models in relevant applications. This path of models spans from the simple ℓ2-norm smoothness, through robust and thus edge-preserving measures of smoothness (e.g. total variation), up to the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image processing tasks, and present several applications in which state-of-the-art results are obtained.
Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation
Abstract

Cited by 24 (1 self)
An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this way, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but inefficient and costly to deploy. In this paper we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
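The structure D = ΦA is easy to illustrate: products with D reduce to a sparse multiply by A followed by the (typically fast or implicit) base transform Φ. The toy below uses an orthonormal DCT-II matrix as a stand-in base dictionary and a column-sparse A; all sizes and sparsity levels are arbitrary choices for the example.

```python
import numpy as np

def apply_dict(Phi, A, x):
    """Forward operator D x = Phi (A x): sparse multiply by A, then the base dictionary."""
    return Phi @ (A @ x)

def apply_adjoint(Phi, A, y):
    """Adjoint operator D^T y = A^T (Phi^T y)."""
    return A.T @ (Phi.T @ y)

# toy base dictionary: orthonormal DCT-II matrix (a stand-in for a fast transform)
n = 16
k = np.arange(n)
Phi = np.sqrt(2.0 / n) * np.cos(np.pi / n * (k[:, None] + 0.5) * k[None, :])
Phi[:, 0] /= np.sqrt(2.0)

# column-sparse representation A: each of the 24 atoms uses only 2 base atoms
rng = np.random.default_rng(2)
A = np.zeros((n, 24))
for j in range(24):
    rows = rng.choice(n, size=2, replace=False)
    A[rows, j] = rng.standard_normal(2)
```

Storing Φ implicitly (as a fast transform) and A in a sparse format gives the compact, efficiently applicable dictionary the abstract describes.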
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inform. Theory
Abstract

Cited by 23 (10 self)
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are rich new signal representation methods that use collections of subspaces, instead of vectors, to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. Under the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it need not be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis is provided using a probability model on the sparse signal, showing that under very mild conditions the probability of recovery failure decays exponentially with increasing subspace dimension.
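The mixed ℓ1/ℓ2 norm at the heart of this model is straightforward to compute: an ℓ2 norm within each subspace, summed (an ℓ1 norm) across subspaces. A minimal sketch, with an invented toy coefficient vector and subspace partition:

```python
import numpy as np

def mixed_l1_l2(c, groups):
    """Mixed l1/l2 norm: the l2 norm of the coefficients within each subspace
    (group), summed across subspaces -- an l1 penalty on subspace energies."""
    return sum(np.linalg.norm(c[g]) for g in groups)

# toy coefficient vector over 4 subspaces of dimension 3; only one subspace active
c = np.zeros(12)
c[3:6] = [3.0, 0.0, 4.0]
groups = [slice(i, i + 3) for i in range(0, 12, 3)]
val = mixed_l1_l2(c, groups)  # sqrt(3^2 + 4^2) = 5.0 from the single active group
```

Minimizing this norm favors solutions whose energy concentrates in few subspaces, without forcing sparsity inside each subspace, which is exactly the sparsity notion the abstract describes.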