Results 1-10 of 539
New results in linear filtering and prediction theory
Trans. ASME, Ser. D, J. Basic Eng., 1961. Cited by 459 (0 self).
Abstract:
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side by side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
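For a scalar state, the variance equation can be integrated directly and its stationary point gives the steady-state filter. A minimal sketch with illustrative parameters (not the paper's notation): for dx/dt = a*x + w with process-noise intensity q and observation y = h*x + v with noise intensity r, the error covariance obeys dP/dt = 2*a*P + q - P**2 * h**2 / r.

```python
import math

def integrate_variance(a, h, q, r, p0, dt=1e-3, t_end=10.0):
    """Euler-integrate the scalar Riccati variance equation
       dP/dt = 2*a*P + q - (h*h/r) * P*P
    for dx/dt = a*x + w (intensity q), y = h*x + v (intensity r)."""
    p = p0
    for _ in range(int(t_end / dt)):
        p += dt * (2 * a * p + q - (h * h / r) * p * p)
    return p

# illustrative numbers: the steady state solves 2*a*P + q - P**2*h**2/r = 0,
# which for a=-1, h=q=r=1 gives P = sqrt(2) - 1
p_inf = integrate_variance(a=-1.0, h=1.0, q=1.0, r=1.0, p0=1.0)
kalman_gain = p_inf * 1.0 / 1.0   # K = P*h/r at steady state
```

The stationary point of the Euler map coincides with the root of the right-hand side, so the integration converges to the analytic steady-state covariance.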
Mutual information and minimum meansquare error in Gaussian channels
IEEE Trans. Inform. Theory, 2005. Cited by 231 (28 self).
Abstract:
This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (in nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: for any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is distributed uniformly between 0 and SNR.
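Both claims are easy to check numerically in the scalar Gaussian-input case, where the closed forms I(snr) = 0.5*ln(1+snr) nats and mmse(snr) = 1/(1+snr) are standard; the average of mmse over SNRs in [0, snr] then works out to ln(1+snr)/snr. A sanity check, not the paper's proof:

```python
import math

def mutual_info(snr):   # I(snr) in nats, standard Gaussian input
    return 0.5 * math.log1p(snr)

def mmse(snr):          # noncausal MMSE, standard Gaussian input
    return 1.0 / (1.0 + snr)

snr, eps = 2.0, 1e-6
# claim 1: dI/dsnr = mmse/2, checked with a central difference
deriv = (mutual_info(snr + eps) - mutual_info(snr - eps)) / (2 * eps)

# claim 2's right-hand side: average of mmse over SNRs uniform in [0, snr],
# approximated by the midpoint rule; analytically it equals log(1+snr)/snr
n = 100_000
avg_mmse = sum(mmse(snr * (i + 0.5) / n) for i in range(n)) / n
```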
Advanced Spectral Methods for Climatic Time Series
2001. Cited by 169 (33 self).
Abstract:
The analysis of uni- or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory.
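The most elementary tool in this spectral toolbox is the periodogram, which locates a dominant cycle in a record. A pure-Python sketch on a synthetic monthly series (illustrative data, not from the paper):

```python
import cmath
import math

def periodogram(x):
    """Power at DFT frequencies k = 0..n//2 of a real series (naive DFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

# synthetic monthly record: a 12-month cycle plus a weaker irregular one
n = 120
series = [math.sin(2 * math.pi * t / 12) + 0.1 * math.sin(2 * math.pi * t / 7)
          for t in range(n)]
power = periodogram(series)
k_dominant = max(range(1, len(power)), key=lambda k: power[k])
# the annual cycle completes 10 periods in 120 samples, so k_dominant is 10
```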
Universal prediction
IEEE Trans. Inform. Theory, 1998. Cited by 161 (14 self).
Abstract:
This paper is an overview of universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described, with emphasis on the analogy and the differences between results in the two settings.
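The link between probability assignment under self-information loss and compression can be illustrated with the classical Laplace add-one predictor: on any individual binary sequence, its cumulative log-loss exceeds that of the best constant predictor chosen in hindsight by at most log(n+1). A sketch of this standard result (not specific to the paper):

```python
import math

def laplace_log_loss(bits):
    """Cumulative self-information loss (nats) of the Laplace add-one
    sequential probability assignment on a binary sequence."""
    ones, loss = 0, 0.0
    for t, b in enumerate(bits):
        p_one = (ones + 1) / (t + 2)          # sequential probability of a 1
        loss -= math.log(p_one if b else 1.0 - p_one)
        ones += b
    return loss

bits = [1, 0, 0] * 40                         # any individual sequence, n = 120
n, k = len(bits), sum(bits)
# loss of the best constant (i.i.d.) predictor in hindsight
hindsight = -(k * math.log(k / n) + (n - k) * math.log(1 - k / n))
redundancy = laplace_log_loss(bits) - hindsight
```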
Highquality Motion Deblurring from a Single Image
, 2008
"... Figure 1 High quality single image motiondeblurring. The left subfigure shows one captured image using a handheld camera under dim light. It is severely blurred by an unknown kernel. The right subfigure shows our deblurred image result computed by estimating both the blur kernel and the unblurre ..."
Abstract

Cited by 160 (6 self)
 Add to MetaCart
Figure 1: High-quality single-image motion deblurring. The left subfigure shows an image captured with a hand-held camera under dim light; it is severely blurred by an unknown kernel. The right subfigure shows our deblurred result, computed by estimating both the blur kernel and the unblurred latent image. We show several close-ups of blurred/unblurred image regions for comparison.

We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well as a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an efficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high-quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.
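The paper's unified probabilistic model is far richer than any textbook method, but the image-restoration half of the alternation can be caricatured, once a kernel is known, by classical Tikhonov/Wiener deconvolution in the frequency domain. A toy 1-D stand-in with a naive DFT (all signals and parameters illustrative, not the paper's algorithm):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def circ_conv(x, k):
    """Circular convolution via pointwise product of DFTs."""
    return idft([a * b for a, b in zip(dft(x), dft(k))])

n = 30
latent = [1.0 if 10 <= t < 20 else 0.0 for t in range(n)]   # a box edge
kernel = [0.5, 0.25] + [0.0] * (n - 3) + [0.25]             # centered blur
blurred = circ_conv(latent, kernel)

lam = 1e-9                                                  # Tikhonov weight
G, Y = dft(kernel), dft(blurred)
X = [g.conjugate() * y / (abs(g) ** 2 + lam) for g, y in zip(G, Y)]
restored = idft(X)
```

With noiseless data and a tiny regularizer, the restored signal matches the latent one almost exactly; the regularizer exists to keep near-zero kernel frequencies from blowing up, which is the same stability concern the paper's priors address in a far more principled way.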
Antialiasing through stochastic sampling
SIGGRAPH, 1985. Cited by 123 (0 self).
Abstract:
Stochastic sampling techniques, in particular Poisson and jittered sampling, are developed and analyzed. These approaches allow the construction of alias-free approximations to continuous functions using discrete calculations. Stochastic sampling scatters high-frequency information into broadband noise rather than generating the false patterns produced by regular sampling. The type of randomness used in the sampling process controls the spectral character of the noise. The average sampling rate and the function being sampled determine the amount of noise that is produced. Stochastic sampling is applied adaptively so that a greater number of samples are taken where the function varies most. An estimate is used to determine how many samples to take over a given region. Noise-reducing filters are used to increase the efficacy of a given sampling rate. The filter width is adaptively controlled to further improve performance. Stochastic sampling can be applied spatiotemporally as well as to other aspects of scene simulation. Ray tracing is one example of an image synthesis approach that can be antialiased by stochastic sampling.
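Jittered sampling itself is simple to state: divide the domain into n equal strata and draw one uniform sample inside each, so the average rate matches regular sampling while the phase coherence that causes structured aliasing is destroyed. A minimal 1-D sketch (not the paper's renderer):

```python
import random

def jittered_samples(n, rng):
    """Jittered (stratified) sampling: one uniform sample per
    stratum [i/n, (i+1)/n)."""
    return [(i + rng.random()) / n for i in range(n)]

xs = jittered_samples(16, random.Random(0))
# unlike pure Poisson sampling, every stratum holds exactly one sample,
# so no region of the domain is left unsampled
```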
Fast Motion Deblurring
Cited by 111 (12 self).
Abstract:
This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image, which are then used solely for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for deconvolution to estimate the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives, and accelerate the numerical process by reducing the number of Fourier transforms needed for a conjugate gradient method. We also show that the formulation results in a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. GPU implementation facilitates further speedup, making our method fast enough for practical use.
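The core of the prediction step, reduced to 1-D, is gradient thresholding: keep only the strong edges of the current latent estimate and let those drive kernel estimation. A toy sketch (threshold and data are illustrative; the paper's actual step combines several simple filters, not just this one):

```python
def strong_edges(signal, thresh):
    """Forward-difference gradient, thresholded to keep only strong edges --
    a toy stand-in for the paper's edge-prediction step."""
    grad = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    return [i for i, g in enumerate(grad) if abs(g) >= thresh]

# a gentle ramp with one sharp step between indices 19 and 20
sig = [i * 0.01 for i in range(20)] + [1.0 + i * 0.01 for i in range(20)]
edges = strong_edges(sig, 0.5)
```

Only the sharp step survives the threshold; the low-amplitude ramp gradients are discarded, which is exactly the property that makes the surviving edges useful for kernel estimation.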
Universal Discrete Denoising: Known Channel
IEEE Trans. Inform. Theory, 2003. Cited by 93 (33 self).
Abstract:
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given singleletter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semistochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise.
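The two-pass structure of the proposed denoiser (the DUDE) can be sketched for a binary symmetric channel under Hamming loss: pass 1 counts the observed center symbol for each two-sided context, and pass 2 flips a symbol when the count ratio exceeds a threshold fixed by the crossover probability. A simplified sketch (context length, data, and the scalar threshold derivation are illustrative):

```python
from collections import defaultdict

def dude_bsc(z, delta, k=1):
    """Simplified two-pass DUDE for a binary symmetric channel (crossover
    probability delta) under Hamming loss; context = k symbols each side."""
    n = len(z)
    counts = defaultdict(lambda: [0, 0])
    for i in range(k, n - k):                       # pass 1: context counts
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        counts[ctx][z[i]] += 1
    # flip z[i] iff m(ctx, 1-z[i]) / m(ctx, z[i]) exceeds this threshold
    thresh = ((1 - delta) ** 2 + delta ** 2) / (2 * delta * (1 - delta))
    out = list(z)
    for i in range(k, n - k):                       # pass 2: denoise
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        m = counts[ctx]
        if m[1 - z[i]] > thresh * m[z[i]]:
            out[i] = 1 - z[i]
    return out

clean = [0] * 200
noisy = list(clean)
noisy[50] = noisy[120] = 1       # two isolated channel errors
denoised = dude_bsc(noisy, delta=0.1)
```

Both isolated flips sit in the overwhelmingly-zero context, so the count ratio exceeds the threshold and the errors are corrected; note the algorithm never saw the clean sequence or its statistics, only the noisy output and the channel.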
Contributions to the Theory of Optimal Control
1960. Cited by 84 (1 self).
Abstract:
This paper was in fact the first to introduce the Riccati differential equation (RDE) as an algorithm for computing the state feedback gain of the optimal controller for a general linear system with a quadratic performance criterion. The RDE had emerged earlier in the study of second variations in the calculus of variations, but its use in general linear systems, where the optimal trajectory needs to be generated by a control input, was new. The analysis throughout the paper concentrates on time-varying systems, and uses Hamilton-Jacobi theory to arrive at the RDE and to deduce optimality of the LQ control gain. We now know, however, that an alternative way to prove optimality in least squares is by showing how the RDE allows one to "complete the square" (see, e.g., [5], [18]).
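In modern discrete-time notation the same mechanism is a backward recursion: iterate the Riccati difference equation to a fixed point and read off the state-feedback gain. A scalar sketch (a discrete-time analogue with illustrative numbers; the paper itself works in continuous time):

```python
def lqr_gain(a, b, q, r, iters=500):
    """Backward Riccati recursion for the scalar discrete-time LQ problem
    x[t+1] = a*x[t] + b*u[t], cost sum(q*x**2 + r*u**2); returns (P, K)
    for the optimal feedback u = -K*x."""
    p = q
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)
        p = q + a * a * p - a * b * p * k
    return p, a * b * p / (r + b * b * p)

# an open-loop unstable plant (a > 1) that the LQ gain must stabilize
p_ss, k_ss = lqr_gain(a=1.1, b=1.0, q=1.0, r=1.0)
```

At convergence p_ss satisfies the algebraic Riccati equation, and the closed-loop pole a - b*K lies strictly inside the unit circle even though the plant is unstable.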