Results 1–10 of 133
Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals
Abstract

Cited by 109 (62 self)
We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. Prior recovery methods for this sampling strategy either require knowledge of band locations or impose strict limitations on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.
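The finite-dimensional problem mentioned above is a standard sparse-recovery instance, so any tractable CS solver applies. A minimal sketch using orthogonal matching pursuit on a synthetic underdetermined system (the matrix sizes and the Gaussian sensing matrix are illustrative stand-ins, not the paper's multi-coset setup):

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the columns selected so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]   # 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, 3)
```

In this noiseless, well-conditioned regime OMP recovers the support exactly; the paper's point is that the continuous blind-reconstruction problem can be reduced to exactly this kind of finite problem.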
Spectral Compressive Sensing
Abstract

Cited by 37 (5 self)
Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency domain. In practical applications, the standard frequency domain signal representation is the discrete Fourier transform (DFT). Unfortunately, the DFT coefficients of a frequency-sparse signal are themselves sparse only in the contrived case where the sinusoid frequencies are integer multiples of the DFT’s fundamental frequency. As a result, practical DFT-based CS acquisition and recovery of smooth signals does not perform nearly as well as one might expect. In this paper, we develop a new spectral compressive sensing (SCS) theory for general frequency-sparse signals. The key ingredients are an oversampled DFT frame, a signal model that inhibits closely spaced sinusoids, and classical sinusoid parameter estimation algorithms from the field of spectrum estimation. Using periodogram and eigenanalysis-based spectrum estimates (e.g., MUSIC), our new SCS algorithms significantly outperform the current state-of-the-art CS algorithms while providing provable bounds on the number of measurements required for stable recovery.
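The DFT-leakage issue described here is easy to reproduce: a sinusoid whose frequency falls between DFT bins spreads its energy across many coefficients. A small numerical check (the signal length and the particular frequencies are arbitrary choices):

```python
import numpy as np

N = 256
n = np.arange(N)

def top_k_energy(x, k=5):
    """Fraction of total energy captured by the k largest-magnitude DFT bins."""
    mags = np.sort(np.abs(np.fft.fft(x)))[::-1]
    return np.sum(mags[:k] ** 2) / np.sum(mags ** 2)

on_grid = np.cos(2 * np.pi * 10.0 * n / N)    # frequency exactly on the DFT grid
off_grid = np.cos(2 * np.pi * 10.5 * n / N)   # halfway between two DFT bins

e_on = top_k_energy(on_grid)    # essentially all energy in 2 bins
e_off = top_k_energy(off_grid)  # substantial energy leaks outside the top bins
```

The off-grid cosine loses a noticeable fraction of its energy to sidelobe bins, which is exactly why naive DFT-basis CS underperforms on frequency-sparse but off-grid signals.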
Theory And Design Of Optimum FIR Compaction Filters
 IEEE TRANS. SIGNAL PROCESSING
, 1998
Abstract

Cited by 29 (11 self)
The problem of optimum FIR energy compaction filter design for a given number of channels M and a filter order N is considered. The special cases where N < M and N = ∞ have analytical solutions that involve eigenvector decomposition of the autocorrelation matrix and the power spectrum matrix, respectively. In this paper, we deal with the more difficult case of M < N < ∞. For the two-channel case and for a restricted but important class of random processes, we give an analytical solution for the compaction filter, which is characterized by its zeros on the unit circle. This also corresponds to the optimal two-channel FIR filter bank that maximizes the coding gain under the traditional quantization noise assumptions. This can also be used to generate optimal wavelets. For the arbitrary M-channel case, we provide a very efficient suboptimal design method called the window method. The method involves two stages that are associated with the above two special cases. As the order incre...
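For the analytically solvable case N < M, the optimum compaction filter of order N is the dominant eigenvector of the (N+1) × (N+1) autocorrelation matrix. A sketch for an AR(1)-type autocorrelation (the process model, ρ, and the filter order are illustrative choices, not from the paper):

```python
import numpy as np

# Illustrative autocorrelation r[k] = rho^|k| of an AR(1)-type process
rho = 0.9
N = 4                                            # filter order (solvable case N < M)
idx = np.arange(N + 1)
R = rho ** np.abs(np.subtract.outer(idx, idx))   # (N+1) x (N+1) Toeplitz matrix

# Optimum compaction filter = eigenvector of R for its largest eigenvalue;
# that eigenvalue is the output variance (compaction gain) for a unit-norm filter
eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
h = eigvecs[:, -1]
gain = eigvals[-1]
```

The gain exceeds r[0] = 1, the output variance of the trivial delta filter, which is the sense in which the eigenfilter "compacts" energy.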
Sparse Sampling of Signal Innovations: Theory, Algorithms and Performance Bounds
 IEEE SIGNAL PROCESSING MAGAZINE
, 2008
Abstract

Cited by 25 (15 self)
Signal acquisition and reconstruction is at the heart of signal processing, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon (and many others, from Whittaker to Kotel’nikov and Nyquist, to name a few) and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. The sampling rate, at twice the maximum frequency present in the signal, is usually called the Nyquist rate. Bandlimitedness, however, is not necessary, as is well known but only rarely taken advantage of [1]. In this broader, non-bandlimited view, the question is: when can we acquire a signal using a sampling kernel followed by uniform sampling and perfectly reconstruct it? The Shannon case is a particular example, where any signal from the subspace of bandlimited signals, denoted by BL, can be acquired through sampling and perfectly interpolated from the samples. Using the sinc kernel, or ideal lowpass filter, non-bandlimited signals will be projected onto
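A classical instance of such non-bandlimited sampling is a stream of K Diracs, which has 2K degrees of freedom and can be recovered from 2K Fourier coefficients by the annihilating-filter (Prony) method. A noiseless sketch (the locations and weights are made-up values):

```python
import numpy as np

# K Diracs on [0, 1): weights c[k] at locations t[k] (illustrative values)
t = np.array([0.2, 0.55])
c = np.array([1.0, 0.7])
K = len(t)

# 2K Fourier coefficients S[m] = sum_k c[k] * exp(-2j*pi*m*t[k])
m = np.arange(2 * K)
S = (c * np.exp(-2j * np.pi * np.outer(m, t))).sum(axis=1)

# Annihilating filter h (h[0] = 1): sum_{l=0..K} h[l] * S[m-l] = 0 for m >= K.
# Solve the K x K linear system for h[1..K].
A = np.array([[S[K + i - l] for l in range(1, K + 1)] for i in range(K)])
b = -S[K:2 * K]
h = np.concatenate(([1.0], np.linalg.solve(A, b)))

# The filter's roots are exp(-2j*pi*t[k]); read the locations off their angles
roots = np.roots(h)
t_hat = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
```

In the noiseless case the 2K samples determine the K locations and K weights exactly, well below the (infinite) Nyquist rate of the Dirac stream.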
Formally biorthogonal polynomials and a look-ahead Levinson algorithm for general Toeplitz systems
 Linear Algebra Appl
, 1993
On generalized Gaussian quadratures for exponentials and their applications
 Appl. Comput. Harmon. Anal
Abstract

Cited by 24 (10 self)
We introduce new families of Gaussian-type quadratures for weighted integrals of exponential functions and consider their applications to integration and interpolation of bandlimited functions. We use a generalization of a representation theorem due to Carathéodory to derive these quadratures. For each positive measure, the quadratures are parameterized by eigenvalues of the Toeplitz matrix constructed from the trigonometric moments of the measure. For a given accuracy ε, selecting an eigenvalue close to ε yields an approximate quadrature with that accuracy. To compute its weights and nodes, we present a new fast algorithm. These new quadratures can be used to approximate and integrate bandlimited functions, such as prolate spheroidal wave functions, and essentially bandlimited functions, such as Bessel functions. We also develop, for a given precision, an interpolating basis for bandlimited functions on an interval.
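The Toeplitz construction in this abstract starts from ordinary trigonometric moments. A sketch for the simplest case, the uniform weight on an interval [−b, b] (the bandlimit b and the matrix size are arbitrary; selecting near-ε eigenvalues and assembling the quadrature itself is beyond this fragment):

```python
import numpy as np

# Trigonometric moments mu[k] = integral_{-b}^{b} exp(-i*k*x) dx = 2*sin(k*b)/k
b = 0.5
n = 8
k = np.arange(n)
mu = 2 * b * np.sinc(k * b / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x); mu[0] = 2b

# Hermitian (here real symmetric) Toeplitz matrix of the moments
i, j = np.meshgrid(k, k, indexing="ij")
T = mu[np.abs(i - j)]

# The measure is positive, so T is positive semidefinite with real eigenvalues;
# these are the eigenvalues that parameterize the quadratures
eigvals = np.linalg.eigvalsh(T)
```

The eigenvalue spectrum of such matrices decays rapidly, which is what makes an eigenvalue "close to ε" available for any reasonable target accuracy.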
Spectral Estimation via Selective Harmonic Amplification
, 2001
Abstract

Cited by 22 (8 self)
The state-covariance of a linear filter is characterized by a certain algebraic commutativity property with the state matrix of the filter, and also imposes a generalized interpolation constraint on the power spectrum of the input process. This algebraic property and the relationship between state-covariance and the power spectrum of the input allow the use of matrix pencils and analytic interpolation theory for spectral analysis. Several algorithms for spectral estimation will be developed with resolution higher than the state of the art.

Index Terms: Analytic interpolation, nonlinear spectral estimation, spectral analysis, state covariances.
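As a point of comparison for such covariance-based estimators, here is a generic subspace (MUSIC-style) frequency estimate computed from an exact covariance matrix; it illustrates how covariance eigenstructure yields high resolution, but it is not the authors' state-covariance algorithm (all dimensions and frequencies are made up):

```python
import numpy as np

m = 10
freqs = np.array([0.2, 0.23])          # two closely spaced tones (cycles/sample)
steer_vec = lambda f: np.exp(2j * np.pi * f * np.arange(m))

# Exact covariance of two unit-power tones plus white noise of variance 0.01
R = sum(np.outer(steer_vec(f), steer_vec(f).conj()) for f in freqs) + 0.01 * np.eye(m)

# Noise subspace = eigenvectors for the m-2 smallest eigenvalues
w, V = np.linalg.eigh(R)               # ascending eigenvalue order
En = V[:, :m - 2]

# Pseudospectrum peaks where steering vectors are orthogonal to the noise subspace
grid = np.linspace(0, 0.5, 2001)
A = np.exp(2j * np.pi * np.outer(np.arange(m), grid))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
f_hat = np.sort(grid[np.argsort(pseudo)[-2:]])
```

The two tones are separated by 0.03 cycles/sample, well below the 1/m ≈ 0.1 resolution of a length-m periodogram, yet the subspace estimate resolves them; the paper pursues the same kind of super-resolution through state-covariance structure.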
Sparsity and uniqueness for some specific underdetermined linear systems
 in Proc. of IEEE ICASSP ’05
, 2005
Abstract

Cited by 20 (0 self)
The purpose of this contribution is to extend some results on sparse representations of signals in redundant bases, developed for arbitrary bases, to two frequently encountered bases. The general problem is the following: given a matrix A with more columns than rows and a vector b = Ax with x having few nonzero components, find sufficient conditions for x to be the unique sparsest solution to Ax = b. The answer gives an upper bound on the number of nonzero components, depending upon A. We consider the cases where A is a Vandermonde matrix or a real Fourier matrix and the components of x are known to be greater than or equal to zero. The sufficient conditions we get are much weaker than those valid for arbitrary matrices and guarantee further that x can be recovered by solving a linear program.
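The nonnegative Vandermonde case lends itself to a quick experiment: with a sparse nonnegative ground truth, the linear program recovers it exactly. A sketch (the sizes, nodes, and coefficients are arbitrary, and scipy's HiGHS solver stands in for a generic LP solver):

```python
import numpy as np
from scipy.optimize import linprog

# Underdetermined system: m x n Vandermonde matrix, m < n, nonnegative sparse x
m, n = 6, 20
nodes = np.linspace(0.1, 1.9, n)
A = np.vander(nodes, m, increasing=True).T   # rows are powers 0..m-1 of the nodes
x_true = np.zeros(n)
x_true[[2, 11]] = [1.5, 0.4]                 # 2-sparse, nonnegative ground truth
b = A @ x_true

# Recover by linear programming: minimize 1'x subject to A x = b, x >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x
```

With only 2 nonzeros against m = 6 equations, the nonnegative solution is unique (a nonnegative degree-4 polynomial vanishing only on the support certifies this), so the LP has no choice but to return it.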
On A Sturm Sequence Of Polynomials For Unitary Hessenberg Matrices
 SIAM J. Matrix Anal. Appl
, 1993
Abstract

Cited by 17 (3 self)
Unitary matrices have a rich mathematical structure which is closely analogous to that of real symmetric matrices. For real symmetric matrices this structure can be exploited to develop very efficient numerical algorithms, and for some of these algorithms unitary analogues are known. Here we present a unitary analogue of the bisection method for symmetric tridiagonal matrices. Recently Delsarte and Genin introduced a sequence of so-called γ_n-symmetric polynomials which can be used to replace the classical Szegő polynomials in several signal processing problems. These polynomials satisfy a three-term recurrence relation and their roots interlace on the unit circle. Here we explain this sequence of polynomials in matrix terms. For an n × n unitary Hessenberg matrix, we introduce, motivated by the Cayley transformation, a sequence of modified unitary submatrices. The characteristic polynomials of the modified unitary submatrices p_k(z), k = 1, 2, …, n, are exactly the γ_n-symmetric ...
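For reference, the symmetric-tridiagonal bisection method that this paper carries over to the unitary case counts eigenvalues below a shift x from sign changes in a Sturm sequence of principal minors. A compact sketch of that classical count (the unitary Hessenberg analogue is not reproduced here):

```python
import numpy as np

def count_eigs_below(d, e, x):
    """Count eigenvalues < x of the symmetric tridiagonal matrix with diagonal d
    and off-diagonal e, via the Sturm sequence of leading principal minors of
    T - x*I, tracked as ratios q of consecutive minors."""
    count = 0
    q = d[0] - x
    if q < 0:
        count += 1
    for i in range(1, len(d)):
        denom = q if q != 0.0 else 1e-300   # guard against an exact zero pivot
        q = d[i] - x - e[i - 1] ** 2 / denom
        if q < 0:
            count += 1
    return count

# Illustrative matrix: second-difference matrix, eigenvalues 2 - 2*cos(k*pi/(n+1))
n = 5
d = 2.0 * np.ones(n)
e = -1.0 * np.ones(n - 1)
```

Bisection on x with this counter brackets any eigenvalue to arbitrary precision; the paper's contribution is an analogous count on the unit circle via the modified unitary submatrices.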