Results 1–10 of 42
Xampling: Signal acquisition and processing in union of subspaces
 Electr. CCIT Rep. 747, Oct. 2009 [Online]. Available: http://arxiv.org/abs/0911.0519
Abstract

Cited by 16 (11 self)
Abstract—We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexities. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm for that purpose that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
Index Terms—Analog-to-digital conversion, baseband processing, compressed sensing, digital signal processing, modulated wideband converter, sub-Nyquist, Xampling.
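The random demodulator compared in this abstract can be prototyped in a few lines: mix Nyquist-rate samples with a pseudo-random ±1 chipping sequence, then integrate-and-dump down to the sub-Nyquist rate. The NumPy sketch below is illustrative only (the window length N, output count M, and sign sequence are arbitrary test choices, not the paper's parameters); it also builds the equivalent sensing matrix Φ that a CS recovery solver would use.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128          # Nyquist-rate samples per window
M = 16           # sub-Nyquist output samples (N divisible by M)
L = N // M       # integrator (accumulate-and-dump) length

signs = rng.choice([-1.0, 1.0], size=N)   # pseudo-random chipping sequence
x = rng.standard_normal(N)                # Nyquist-rate input samples

# Direct operation: mix with the sign sequence, then integrate-and-dump.
y_direct = (signs * x).reshape(M, L).sum(axis=1)

# Equivalent matrix form Phi = S @ D, the object a CS solver works with.
D = np.diag(signs)                        # mixing
S = np.kron(np.eye(M), np.ones(L))        # block summation (integration)
Phi = S @ D
y_matrix = Phi @ x
```

The two computations agree exactly, which is the point: the analog front end is cheap hardware, while its matrix description is what the digital recovery stage reasons about.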
A Short Note on Compressed Sensing with Partially Known Signal Support
, 2010
Abstract

Cited by 10 (0 self)
This short note studies a variation of the Compressed Sensing paradigm introduced recently by Vaswani et al., i.e. the recovery of sparse signals from a certain number of linear measurements when the signal support is partially known. The reconstruction method is based on a convex minimization program coined innovative Basis Pursuit DeNoise (or iBPDN). Under the common ℓ2-fidelity constraint made on the available measurements, this optimization promotes the ℓ1-sparsity of the candidate signal over the complement of this known part. In particular, this paper extends the results of Vaswani et al. to the cases of compressible signals and noisy measurements. Our proof relies on a small adaptation of the results of Candes in 2008 for characterizing the stability of the Basis Pursuit DeNoise (BPDN) program. We also emphasize an interesting link between our method and the recent work of Davenport et al. on δ-stable embeddings and the cancel-then-recover strategy applied to our problem. For both approaches, reconstructions are indeed stabilized when the sensing matrix respects the Restricted Isometry Property for the same sparsity order. We conclude by sketching an easy numerical method relying on monotone operator splitting and proximal methods that iteratively solves iBPDN.
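The proximal scheme mentioned in the last sentence can be sketched with a plain ISTA iteration in which soft-thresholding is applied only off the known support T, so entries on T are never penalized. Everything below (problem sizes, step size, the lam penalty weight, the helper name ista_known_support) is an illustrative assumption, not the authors' exact algorithm.

```python
import numpy as np

def ista_known_support(A, y, T, lam=0.05, n_iter=500):
    """ISTA for the Lagrangian form 0.5*||Ax - y||^2 + lam*||x_{T^c}||_1,
    where T is a boolean mask of the (partially) known support."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)          # gradient step on the fit
        shrunk = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
        x = np.where(T, g, shrunk)                # threshold only off T
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x0[supp] = rng.standard_normal(k)                 # k-sparse ground truth
y = A @ x0                                        # noiseless measurements
T = np.zeros(n, dtype=bool)
T[supp[:3]] = True                                # partial support knowledge
x_hat = ista_known_support(A, y, T)
```

The np.where line is the only change relative to ordinary ISTA for BPDN: the known coordinates follow pure gradient steps while the rest are shrunk.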
On the Observability of Linear Systems from Random, Compressive Measurements
Abstract

Cited by 9 (6 self)
Abstract—Recovering or estimating the initial state of a high-dimensional system can require a potentially large number of measurements. In this paper, we explain how this burden can be significantly reduced for certain linear systems when randomized measurement operators are employed. Our work builds upon recent results from the field of Compressive Sensing (CS), in which a high-dimensional signal containing few nonzero entries can be efficiently recovered from a small number of random measurements. In particular, we develop concentration of measure bounds for the observability matrix and explain circumstances under which this matrix can satisfy the Restricted Isometry Property (RIP), which is central to much analysis in CS. We also illustrate our results with a simple case study of a diffusion system. Aside from permitting recovery of sparse initial states, our analysis has potential applications in solving inference problems such as detection and classification of more general initial states.
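As a toy version of the problem described above, one can stack randomized measurement matrices C_t against powers of the system matrix to form a compressive observability matrix, then recover a sparse initial state with a greedy solver. The dimensions, the near-identity system matrix, and the simple OMP routine below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, steps, k = 50, 6, 6, 3              # state dim, meas./step, steps, sparsity
A_sys = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # assumed system matrix
Cs = [rng.standard_normal((p, n)) / np.sqrt(p) for _ in range(steps)]

# Stack C_t A^t to form the compressive observability matrix (p*steps x n).
rows, P = [], np.eye(n)
for C in Cs:
    rows.append(C @ P)
    P = A_sys @ P
O = np.vstack(rows)                        # 36 x 50: fewer rows than states

x0 = np.zeros(n)
S = rng.choice(n, k, replace=False)
x0[S] = rng.standard_normal(k)             # sparse initial state
y = O @ x0                                 # compressive observations

def omp(Phi, y, k):
    """Greedy (orthogonal matching pursuit) recovery of a k-sparse vector."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        r = y - Phi[:, idx] @ coef
    x = np.zeros(Phi.shape[1])
    x[idx] = coef
    return x

x0_hat = omp(O, y, k)
```

With 36 randomized measurements of a 3-sparse state in dimension 50, the greedy solver recovers x0 despite the observability matrix having far fewer rows than columns.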
Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification
Abstract

Cited by 9 (8 self)
Abstract—In this paper, we derive concentration of measure inequalities for compressive Toeplitz matrices (having fewer rows than columns) with entries drawn from an independent and identically distributed (i.i.d.) Gaussian random sequence. These inequalities show that the norm of a vector mapped by a Toeplitz matrix to a lower dimensional space concentrates around its mean with a tail probability bound that decays exponentially in the dimension of the range space divided by a factor that is a function of the sample covariance of the vector. Motivated by the emerging field of Compressive Sensing (CS), we apply these inequalities to problems involving the analysis of high-dimensional systems from convolution-based compressive measurements. We discuss applications such as system identification, namely the estimation of the impulse response of a system, in cases where one can assume that the impulse response is high-dimensional, but sparse. We also consider the problem of detecting a change in the dynamic behavior of a system, where the change itself can be modeled by a system with a sparse impulse response.
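The concentration phenomenon this abstract describes is easy to probe empirically: draw an i.i.d. Gaussian sequence, form a fat Toeplitz-structured matrix from its shifted windows, and observe that ||Ax||² for a fixed unit-norm x concentrates around its mean of 1. The sizes and trial count below are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 20, 100, 2000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                    # fixed unit-norm input vector

norms = np.empty(trials)
for t in range(trials):
    a = rng.standard_normal(n + m - 1)    # i.i.d. Gaussian sequence
    # Rows are successive length-n windows of a: a Toeplitz-structured map,
    # scaled so that E[||A x||^2] = ||x||^2 = 1.
    A = np.lib.stride_tricks.sliding_window_view(a, n) / np.sqrt(m)
    norms[t] = np.linalg.norm(A @ x) ** 2
```

The empirical mean of norms sits near 1; the spread around it is what the paper's tail bounds control, with the sample covariance of x entering the decay rate.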
Concentration of Measure for Block Diagonal Matrices with Applications to Compressive Sensing
, 2010
Abstract

Cited by 7 (4 self)
Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is very high for any signal, the random matrices have a variety of known uses in dimensionality reduction and Compressive Sensing. Concentration of measure results are well-established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices, one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as (analytical and empirical) investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. Finally, we discuss applications of these results in establishing performance guarantees for solving signal processing tasks in the compressed domain (e.g., signal detection), and in establishing the Restricted Isometry Property for the Toeplitz matrices that arise in compressive channel sensing.
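A quick simulation illustrates the signal-dependence claim for the independent-block case: a signal whose energy is spread evenly across the J blocks concentrates much more tightly than one whose energy sits entirely in a single block. All sizes and the trial count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
J, m, n, trials = 8, 8, 16, 1000      # J independent blocks, each m x n

def sq_norms(x):
    """Squared norms ||Phi x||^2 over random draws of a block diagonal Phi,
    scaled so that E[||Phi x||^2] = ||x||^2."""
    vals = np.empty(trials)
    for t in range(trials):
        total = 0.0
        for j in range(J):
            B = rng.standard_normal((m, n)) / np.sqrt(m)
            total += np.sum((B @ x[j * n:(j + 1) * n]) ** 2)
        vals[t] = total
    return vals

x_flat = rng.standard_normal(J * n)
x_flat /= np.linalg.norm(x_flat)      # energy spread over all blocks

x_spiky = np.zeros(J * n)
x_spiky[:n] = rng.standard_normal(n)  # all energy in the first block
x_spiky /= np.linalg.norm(x_spiky)

v_flat, v_spiky = sq_norms(x_flat), sq_norms(x_spiky)
```

Both empirical means sit near 1, but the variance for the flat-energy signal is smaller by roughly the number of active blocks, matching the best-case/worst-case distinction drawn in the paper.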
Concentration of measure for block diagonal measurement matrices
 in Proc. Int. Conf. Acoustics, Speech, Signal Proc. (ICASSP)
, 2010
Abstract

Cited by 6 (6 self)
Concentration of measure inequalities are at the heart of much theoretical analysis of randomized compressive operators. Though commonly studied for dense matrices, in this paper we derive a concentration of measure bound for block diagonal matrices where the nonzero entries along the main diagonal blocks are i.i.d. sub-Gaussian random variables. Our main result states that the concentration exponent, in the best case, scales as that for a fully dense matrix. We also identify the role that the energy distribution of the signal plays in distinguishing the best case from the worst. We illustrate these phenomena with a series of experiments.
Index Terms—Compressive Sensing, concentration of measure, Johnson-Lindenstrauss lemma, block diagonal matrices
Wide-angle Micro Sensors for Vision on a Tight Budget
Abstract

Cited by 5 (1 self)
Achieving computer vision on micro-scale devices is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix manipulations, convolution, etc.) to be difficult. This paper proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field of view within a small form factor through a novel optical design. We describe the tradeoffs between the field of view, volume, and mass of these sensors and we provide analytic tools to navigate the design space. We also demonstrate milli-scale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces.
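Template-based convolution of the kind these sensors perform optically can be emulated digitally with an FFT cross-correlation: embed a template in a random scene, correlate the scene with the zero-mean template, and read off the peak location. The scene size, template, and embedding position below are synthetic test data, not the paper's sensor measurements.

```python
import numpy as np
from numpy.fft import rfft2, irfft2

rng = np.random.default_rng(5)
scene = rng.random((64, 64))                 # synthetic scene
template = rng.random((9, 9))
r0, c0 = 20, 33
scene[r0:r0 + 9, c0:c0 + 9] = template      # embed the target

t = template - template.mean()              # zero-mean matched filter
pad = np.zeros_like(scene)
pad[:9, :9] = t

# Circular cross-correlation via FFT; the peak marks the target's
# top-left corner in the scene.
corr = irfft2(rfft2(scene) * np.conj(rfft2(pad)), s=scene.shape)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

In the sensor the multiply-accumulate work happens in optics; digitally, the same correlation costs only two FFTs and a pointwise product.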
Random observations on random observations: Sparse signal acquisition and processing
 Rice University
, 2010
The Restricted Isometry Property for Block Diagonal Matrices
Abstract

Cited by 3 (1 self)
The Restricted Isometry Property (RIP) is a powerful condition on measurement operators which ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. Early papers in CS showed that Gaussian random matrices satisfy the RIP with high probability, but such matrices are usually undesirable in practical applications due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. To alleviate some or all of these difficulties, recent research efforts have focused on structured random matrices. In this paper, we study block diagonal measurement matrices where each block on the main diagonal is itself a Gaussian random matrix. The main result of this paper shows that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on the coherence of the basis in which the signals are sparse. In the best case—for signals that are sparse in the frequency domain—these matrices perform nearly as well as dense Gaussian random matrices despite having many fewer nonzero entries.
Reconstruction and cancellation of sampled multiband signals using discrete prolate spheroidal sequences
 in: Workshop on Signal Proc. with Adaptive Sparse Structured Representations (SPARS)
, 2011
Abstract

Cited by 3 (0 self)
Abstract—There remains a significant gap between the discrete, finite-dimensional compressive sensing (CS) framework and the problem of acquiring a continuous-time signal. In this talk, we will discuss how sparse representations for multiband signals can be incorporated into the CS framework through the use of Discrete Prolate Spheroidal Sequences (DPSS's). DPSS's form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, one obtains a sparse representation for sampled multiband signals. We will discuss the use of DPSS bases for both signal recovery and the cancellation of strong narrowband interferers from compressive samples. In many respects, the core theory of compressive sensing (CS) is now well-settled. Given a suitable number of compressive measurements y = Φx of a finite-dimensional vector x, one can recover x exactly if x can be expressed in some dictionary Ψ as x = Ψα with α sparse.
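SciPy ships the DPSS basis (scipy.signal.windows.dpss), so the modulate-and-merge idea above can be tried directly: modulate K baseband DPSS vectors to a band center f0 and least-squares project a sampled bandpass signal onto them. The band parameters, the margin added to the nominal K ≈ 2NW count, and the test tones below are all illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

N = 256
W = 0.05                                  # normalized half-bandwidth (assumed)
K = int(2 * N * W) + 12                   # ~2NW dominant sequences plus margin
E = dpss(N, N * W, K).T                   # (N, K) baseband DPSS basis

f0 = 0.2                                  # assumed band center
n = np.arange(N)
D = E * np.exp(2j * np.pi * f0 * n)[:, None]   # modulate DPSS to the band

# A sampled signal occupying the band [f0 - W, f0 + W]: three in-band tones.
x = sum(np.exp(2j * np.pi * (f0 + df) * n) for df in (-0.03, 0.01, 0.02))

# Least-squares projection onto the modulated DPSS dictionary.
coef, *_ = np.linalg.lstsq(D, x, rcond=None)
rel_err = np.linalg.norm(x - D @ coef) / np.linalg.norm(x)
```

A small number of modulated DPSS vectors captures the in-band signal almost entirely, which is what makes the merged, modulated bases a sparse representation for sampled multiband signals.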