Results 1–10 of 32
Multichannel sampling of pulse streams at the rate of innovation
IEEE Trans. Signal Process., 2011
Cited by 19 (5 self)
We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way, even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
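The "standard tools taken from spectral estimation" mentioned above refer to annihilating-filter (Prony-type) methods common to FRI recovery. A minimal toy sketch, not the paper's multichannel scheme: recover the delays of K Diracs from 2K or more Fourier-series coefficients (all parameter values below are illustrative).

```python
import numpy as np

# Toy annihilating-filter recovery of K Dirac delays t_m in [0, tau) from
# Fourier coefficients x[k] = sum_m a_m * exp(-2j*pi*k*t_m/tau).

def annihilating_filter_delays(x, K, tau):
    """Estimate K delays from N >= 2K consecutive Fourier coefficients x[0..N-1]."""
    N = len(x)
    # Toeplitz system A h = 0 for the annihilating filter h of length K+1:
    # row i encodes sum_j h[j] * x[(i+K) - j] = 0.
    A = np.array([[x[i + K - j] for j in range(K + 1)] for i in range(N - K)])
    # The filter is the right singular vector of the smallest singular value.
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    # The filter's roots are u_m = exp(-2j*pi*t_m/tau).
    roots = np.roots(h)
    return np.sort(np.mod(-np.angle(roots) * tau / (2 * np.pi), tau))

tau = 1.0
t_true = np.array([0.2, 0.55])
a = np.array([1.0, 0.7])
k = np.arange(8)
x = (a[None, :] * np.exp(-2j * np.pi * k[:, None] * t_true[None, :] / tau)).sum(axis=1)
t_est = annihilating_filter_delays(x, K=2, tau=tau)  # t_est close to [0.2, 0.55]
```

In noise, the SVD step already provides a total-least-squares flavor of robustness; the paper's contribution concerns how the analog front end produces such coefficients at the minimal rate.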
Nonideal Sampling and Regularization Theory
2008
Cited by 12 (3 self)
Shannon’s sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some “shift-invariant” space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling, and the samples are corrupted by additive noise. We adopt a variational approach to the reconstruction problem and minimize a data fidelity term subject to a Tikhonov-like (continuous-domain) L2-regularization to obtain the continuous-space solution. We present theoretical justification for the minimization of this cost functional and show that the globally minimal continuous-space solution belongs to a shift-invariant space generated by a function (generalized B-spline) that is generally not bandlimited. When the sampling is ideal, we recover some of the classical smoothing spline estimators. The optimal reconstruction space is characterized by a condition that links the generating function to the regularization operator and implies the existence of a B-spline-like basis. To make the scheme practical, we specify the generating functions corresponding to the most popular families of regularization operators (derivatives, iterated Laplacian), as well as a new, generalized one that leads to a new brand of Matérn splines. We conclude the paper by proposing a stochastic interpretation of the reconstruction algorithm and establishing an equivalence with the minimax and minimum mean square error (MMSE/Wiener) solutions of the generalized sampling problem.
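A minimal discrete analogue of the variational idea above (the paper's actual result is a continuous-domain theorem about generalized B-splines; everything below is an illustrative finite-dimensional stand-in): fit coefficients to noisy samples by balancing data fidelity against a Tikhonov-like quadratic penalty on a second-difference (discrete Laplacian) operator.

```python
import numpy as np

# Discrete smoothing-spline-like fit: minimize ||c - y||^2 + lam * ||D c||^2,
# with D the second-difference matrix, giving c = (I + lam * D^T D)^{-1} y.

def smooth_reconstruct(y, lam):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
c = smooth_reconstruct(y, lam=5.0)
# The regularized fit is closer to the clean signal than the raw noisy samples.
```

When the sampling is ideal and the operator is a derivative, this is the classical smoothing-spline estimator the abstract recovers as a special case.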
Beyond Bandlimited Sampling: Nonlinearities, Smoothness and Sparsity
2008
Cited by 9 (9 self)
Digital applications have developed rapidly over the last few decades. Since many sources of information are of analog or continuous-time nature, discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Consequently, sampling theories lie at the heart of signal processing devices and communication systems. Examples include sampling rate conversion for software radio [1] and between audio formats [2], biomedical imaging [3], lens distortion correction and the formation of image mosaics [4], and super-resolution of image sequences [5]. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital (ADC) and digital-to-analog (DAC) converters must be developed. Many of the limitations encountered in current converters are due to a traditional assumption that the sampling stage needs to acquire the data at the Shannon–Nyquist rate, corresponding to twice the signal bandwidth [6], [7], [8]. To avoid aliasing, a sharp lowpass filter (LPF) must be implemented prior to sampling. The reconstructed signal is also a bandlimited function, generated by integer shifts of the sinc interpolation kernel. A major drawback of this paradigm is that many natural signals are better represented in alternative bases other than the Fourier basis [9], [10], [11], or possess further structure in the Fourier domain. In addition, ideal pointwise sampling, as assumed by the Shannon theorem, cannot be implemented. More practical ADCs introduce ...
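The bandlimited baseline that this survey argues beyond can be sketched directly: a signal sampled above the Nyquist rate is rebuilt from its uniform samples with the sinc interpolation kernel (a toy numerical illustration; all parameter values are arbitrary, and the finite window introduces a small truncation error).

```python
import numpy as np

# Classical Shannon/sinc reconstruction from uniform samples x[n] = f(n*T).

def sinc_reconstruct(samples, n, T, t):
    """Evaluate sum_n x[n] * sinc((t - n*T)/T) at the times t."""
    return (samples[None, :] * np.sinc((t[:, None] - n[None, :] * T) / T)).sum(axis=1)

T = 0.1                               # sampling period; Nyquist frequency 1/(2T) = 5 Hz
n = np.arange(-200, 201)              # long sample window to limit truncation error
x = np.cos(2 * np.pi * 2.0 * n * T)   # a 2 Hz cosine, well below Nyquist
t = np.linspace(-0.5, 0.5, 11)
x_hat = sinc_reconstruct(x, n, T, t)  # close to cos(2*pi*2*t) away from the window edges
```

The slow 1/t decay of the sinc kernel, visible here in the need for a long sample window, is one practical motivation for the alternative kernels and priors the article surveys.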
Reconstructing signals with finite rate of innovation from noisy samples
Acta Appl. Math.
Cited by 8 (7 self)
A signal is said to have finite rate of innovation if it has a finite number of degrees of freedom per unit of time. Reconstructing signals with finite rate of innovation from their exact average samples has been studied in SIAM J. Math. Anal., 38 (2006), 1389–1422. In this paper, we consider the problem of reconstructing signals with finite rate of innovation from their average samples in the presence of deterministic and random noise. We develop an adaptive Tikhonov regularization approach to this reconstruction problem. Our simulation results demonstrate that our adaptive approach is robust against noise, is almost consistent across various sampling processes, and is also locally implementable.
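A finite-dimensional sketch in the spirit of this entry (the paper's averaging operator and adaptive parameter rule differ; the matrix, kernel width, and λ below are all illustrative): recover coefficients from noisy local average samples via Tikhonov regularization.

```python
import numpy as np

# Recover c from noisy average samples y = A c + noise by solving
#   c_hat = argmin ||A c - y||^2 + lam * ||L c||^2,  L = first difference.

def reconstruct_from_averages(y, A, lam):
    n = A.shape[1]
    L = np.diff(np.eye(n), axis=0)          # first-difference regularizer
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

rng = np.random.default_rng(1)
n = 40
c_true = np.sin(np.linspace(0, np.pi, n))
# Each sample averages 3 consecutive coefficients (a crude averaging kernel).
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = 1.0 / 3.0
y = A @ c_true + 0.05 * rng.standard_normal(n - 2)
c_hat = reconstruct_from_averages(y, A, lam=0.5)
```

Because the solve only couples nearby coefficients (both A and L are banded), such schemes can be implemented locally, which is the property the abstract emphasizes.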
Quasi-Interpolating Spline Models for Hexagonally-Sampled Data
Cited by 5 (1 self)
The reconstruction of a continuous-domain representation from sampled data is an essential element of many image processing tasks, in particular image resampling. To date, most image data have been available on Cartesian lattices, despite the many theoretical advantages of hexagonal sampling. In this paper, we propose new reconstruction methods for hexagonally sampled data that exploit the intrinsically 2D nature of the lattice, and that at the same time remain practical and efficient. To that aim, we deploy box-spline and hex-spline models, which are notably well adapted to hexagonal lattices. We also rely on the quasi-interpolation paradigm to design compelling prefilters; that is, the optimal filter for a prescribed design is found using recent results from approximation theory. The feasibility and efficiency of the proposed methods are illustrated and compared for a hexagonal-to-Cartesian grid conversion problem. Index Terms—Approximation theory, box-splines, hexagonal lattices, hex-splines, interpolation, linear shift-invariant signal ...
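A 1D analogue of the quasi-interpolation idea (the paper works on 2D hexagonal lattices with box-/hex-splines; the specific FIR prefilter below is an assumed textbook example, not the paper's design): instead of exact interpolation, a short prefilter is applied to the samples so that the cubic-B-spline expansion reproduces polynomials up to degree 2.

```python
import numpy as np

# Quasi-interpolation with the cubic B-spline and the short FIR prefilter
# [-1/6, 8/6, -1/6], which inverts the B-spline's sampled values (1/6, 4/6, 1/6)
# accurately enough to reproduce quadratic polynomials exactly.

def cubic_bspline(x):
    x = np.abs(x)
    y = np.where(x < 1, 2/3 - x**2 + x**3 / 2, 0.0)
    return np.where((x >= 1) & (x < 2), (2 - x)**3 / 6, y)

def quasi_interpolate(samples, t):
    c = np.convolve(samples, [-1/6, 8/6, -1/6], mode='same')  # prefilter the samples
    n = np.arange(len(samples))
    return (c[None, :] * cubic_bspline(t[:, None] - n[None, :])).sum(axis=1)

n = np.arange(30)
f = 0.1 * n**2 - n + 3          # a quadratic: reproduced exactly in the interior
t = np.linspace(5, 24, 50)      # stay away from the boundaries
f_hat = quasi_interpolate(f, t)
```

The appeal, as in the paper, is that a 3-tap FIR prefilter replaces the exact (IIR) interpolation prefilter at a small, controlled cost in approximation order.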
Local reconstruction for sampling in shift-invariant spaces
Adv. Comput. Math.
Cited by 5 (4 self)
Local reconstruction from samples is one of the most desirable properties for many applications in signal processing, but it has received comparatively little attention. In this paper, we consider the local reconstruction problem for signals in a shift-invariant space. In particular, we consider finding sampling sets X such that signals in a shift-invariant space can be locally reconstructed from their samples on X. For a locally finite-dimensional shift-invariant space V, we show that signals in V can be locally reconstructed from their samples on any sampling set with sufficiently large density. For a shift-invariant space V(φ1, ..., φN) generated by finitely many compactly supported functions φ1, ..., φN, we characterize all periodic nonuniform sampling sets X such that signals in V(φ1, ..., φN) can be locally reconstructed from the samples taken from X. For a refinable shift-invariant space V(φ) generated by a compactly supported refinable function φ, we prove that for almost all (x0, x1) ∈ [0, 1]², any signal in V(φ) can be locally reconstructed from its samples on {x0, x1} + Z with oversampling rate 2. The proofs of our results on local sampling and reconstruction in the refinable shift-invariant space V(φ) depend heavily on the linearly independent shifts of a refinable function on measurable sets with positive Lebesgue measure and the almost ripplet property for a refinable function, which are new and interesting in their own right.
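A toy illustration of local reconstruction in a shift-invariant space (the paper covers general refinable generators; here φ is simply the linear B-spline, and the sampling set {x0, x1} + Z with x0 = 0.3, x1 = 0.7 is an arbitrary choice): on a finite window, the coefficients are recovered by solving a small, banded least-squares system built only from local samples.

```python
import numpy as np

# Local reconstruction in V(phi) with phi the linear B-spline (hat function),
# from the periodic nonuniform sampling set {0.3, 0.7} + Z restricted to a window.

def hat(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

def local_reconstruct(sample_pts, sample_vals, n_range):
    """Least-squares fit of coefficients c_n, n in n_range, from local samples."""
    Phi = hat(sample_pts[:, None] - n_range[None, :])
    c, *_ = np.linalg.lstsq(Phi, sample_vals, rcond=None)
    return c

n_range = np.arange(6)
c_true = np.array([1.0, 2.0, 0.5, -1.0, 0.0, 1.5])
x0, x1 = 0.3, 0.7
sample_pts = np.sort(np.concatenate([x0 + np.arange(5), x1 + np.arange(5)]))
sample_vals = (c_true[None, :] * hat(sample_pts[:, None] - n_range[None, :])).sum(axis=1)
c_hat = local_reconstruct(sample_pts, sample_vals, n_range)  # recovers c_true
```

Two samples per unit interval (oversampling rate 2) make each local 2x2 subsystem invertible here, mirroring the rate the abstract proves sufficient for refinable generators.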
Performance Bounds and Design Criteria for Estimating Finite Rate of Innovation Signals
Cited by 4 (3 self)
In this paper, we consider the problem of estimating finite rate of innovation (FRI) signals from noisy measurements, and specifically analyze the interaction between FRI techniques and the underlying sampling methods. We first obtain a fundamental limit on the estimation accuracy attainable regardless of the sampling method. Next, we provide a bound on the performance achievable using any specific sampling approach. Essential differences between the noisy and noise-free cases arise from this analysis. In particular, we identify settings in which noise-free recovery techniques deteriorate substantially under slight noise levels, thus quantifying the numerical instability inherent in such methods. This instability, which is only present in some families of FRI signals, is shown to be related to a specific type of structure, which can be characterized by viewing the signal model as a union of subspaces. Finally, we develop a methodology for choosing the optimal sampling kernels for linear reconstruction, based on a generalization of the Karhunen–Loève transform. The results are illustrated for several types of time-delay estimation problems. Index Terms—Cramér–Rao bound (CRB), finite rate of innovation (FRI), sampling, time-delay estimation, union of subspaces.
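The kind of fundamental limit this entry derives can be illustrated with the textbook Cramér–Rao bound for time-delay estimation (an assumed simplified setup, not the paper's model): for a known pulse a·g(t − t0) observed densely in white Gaussian noise of variance σ², the CRB on t0 is σ² / (a² ∫|g′(t)|² dt), up to discretization of the integral.

```python
import numpy as np

# Numerical CRB for the delay of a known pulse in white noise,
# using the derivative-energy (Fisher information) formula.

def delay_crb(g, t, a, sigma):
    dt = t[1] - t[0]
    g_dot = np.gradient(g, dt)                            # numerical pulse derivative
    fisher = (a**2 / sigma**2) * np.sum(g_dot**2) * dt    # Riemann-sum Fisher info
    return 1.0 / fisher

t = np.linspace(-5, 5, 2001)
g = np.exp(-t**2 / 2)                  # Gaussian pulse; int |g'|^2 = sqrt(pi)/2
crb_low_noise = delay_crb(g, t, a=1.0, sigma=0.1)
crb_high_noise = delay_crb(g, t, a=1.0, sigma=0.3)   # larger bound at higher noise
```

The bound scales with the inverse derivative energy of the pulse, which is one way the choice of sampling kernel enters the paper's analysis.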
Minimax approximation of representation coefficients from generalized samples
IEEE Trans. Signal Process., 2007
Cited by 3 (3 self)
Many sources of information are of analog or continuous-time nature. However, digital signal processing applications rely on discrete data. We consider the problem of approximating L2 inner products, i.e., representation coefficients of a continuous-time signal, from its generalized samples. Adopting a robust approach, we process these generalized samples in a minimax-optimal sense. Specifically, we minimize the worst-case approximation error of the desired representation coefficients by proper processing of the given sample sequence. We then extend our results to criteria which incorporate smoothness constraints on the unknown function. Finally, we compare our methods with the piecewise-constant approximation technique commonly used for this problem, and discuss the possible improvements offered by the suggested schemes. Index Terms—Generalized sampling, interpolation, robust approximation, smoothness.
Mean-squared error sampling and reconstruction in the presence of noise
IEEE Trans. Signal Process., 2006
Cited by 3 (2 self)
One of the main goals of sampling theory is to represent a continuous-time function by a discrete set of samples. Here, we treat the class of sampling problems in which the underlying function can be specified by a finite set of samples. Our problem is to reconstruct the signal from nonideal, noisy samples, which are modeled as the inner products of the signal with a set of sampling vectors, contaminated by noise. To mitigate the effect of the noise and the mismatch between the sampling and reconstruction vectors, the samples are linearly processed prior to reconstruction. Considering a statistical reconstruction framework, we characterize the strategies that are mean-squared error (MSE) admissible, meaning that they are not dominated in terms of MSE by any other linear reconstruction. We also present explicit designs of admissible reconstructions that dominate a given inadmissible method. Adapting several classical estimation approaches to our particular sampling problem, we suggest concrete admissible reconstruction methods and compare their performance. The results are then specialized to the case in which the samples are processed by a digital correction filter. Index Terms—Generalized sampling, interpolation, minimax reconstruction, sampling.
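One classical estimator of the kind this entry adapts is the linear MMSE (Wiener) reconstruction, sketched here in finite dimensions (an assumed Gaussian setup with an arbitrary covariance; the paper's admissibility analysis is more general than this single estimator).

```python
import numpy as np

# Linear MMSE reconstruction of x from noisy generalized samples
#   y = S^T x + n,  n ~ N(0, sigma^2 I),  x ~ N(0, C):
#   x_hat = C S (S^T C S + sigma^2 I)^{-1} y.

def lmmse_reconstruct(y, S, C, sigma):
    G = C @ S @ np.linalg.inv(S.T @ C @ S + sigma**2 * np.eye(S.shape[1]))
    return G @ y

rng = np.random.default_rng(3)
n, m = 8, 6
A = rng.standard_normal((n, n))
C = A @ A.T / n                      # signal covariance (symmetric positive definite)
S = rng.standard_normal((n, m))      # sampling vectors as columns
x = np.linalg.cholesky(C) @ rng.standard_normal(n)
y = S.T @ x + 0.1 * rng.standard_normal(m)
x_hat = lmmse_reconstruct(y, S, C, sigma=0.1)
```

Being MSE-optimal among linear rules, this estimator is in particular admissible in the sense defined above; the paper's contribution is characterizing the full admissible class, not only this point in it.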
Regularized Interpolation for Noisy Images
Cited by 2 (2 self)
Interpolation is the means by which a continuously defined model is fit to discrete data samples. When the data samples are free of noise, it seems desirable to build the model by fitting them exactly. In medical imaging, where quality is of paramount importance, this ideal situation unfortunately does not occur. In this paper, we propose a scheme that improves quality by specifying a tradeoff between fidelity to the data and robustness to the noise. We resort to variational principles, which allow us to impose smoothness constraints on the model for tackling noisy data. Based on shift-, rotation-, and scale-invariant requirements on the model, we show that the norm of an appropriate vector derivative is the most suitable choice of regularization for this purpose. In addition to Tikhonov-like quadratic regularization, this includes edge-preserving total-variation-like (TV) regularization. We give algorithms to recover the continuously defined model from noisy samples and also provide a data-driven scheme to determine the optimal amount of regularization. We validate our method with numerical examples in which we demonstrate its superiority over an exact fit, as well as the benefit of TV-like non-quadratic regularization over Tikhonov-like quadratic regularization. Index Terms—Interpolation, regularization, regularization parameter, splines, Tikhonov functional, total-variation functional.
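A minimal 1D sketch of TV-like (non-quadratic) regularized fitting, solved here by iteratively reweighted least squares (an assumed generic solver, not the paper's algorithm, and with illustrative parameter values): unlike a quadratic penalty, the absolute-value penalty preserves the sharp edge of a piecewise-constant signal.

```python
import numpy as np

# TV-like fit: minimize ||c - y||^2 + lam * sum_i |(D c)_i|, D = first difference,
# via IRLS: repeatedly solve (I + lam * D^T W D) c = y with W = diag(1/(|Dc| + eps)).

def tv_fit_irls(y, lam, iters=50, eps=1e-6):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)
    c = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ c) + eps)               # reweighting: |u| ~ w * u^2
        c = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return c

rng = np.random.default_rng(4)
step = np.r_[np.zeros(25), np.ones(25)]               # piecewise-constant ground truth
y = step + 0.1 * rng.standard_normal(50)
c = tv_fit_irls(y, lam=0.4)                           # denoised, edge preserved
```

Replacing the absolute value with a square in the penalty recovers the Tikhonov-like quadratic case, which smooths the noise but also blurs the edge; that contrast is exactly the comparison the abstract reports.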