Results 1–10 of 60
Image Mosaicing and Super-resolution
, 2004
"... The thesis investigates the problem of how information contained in multiple, overlapping images of the same scene may be combined to produce images of superior quality. This area, generically titled frame fusion, offers the possibility of reducing noise, extending the field of view, removal of movi ..."
Abstract

Cited by 62 (4 self)
The thesis investigates the problem of how information contained in multiple, overlapping images of the same scene may be combined to produce images of superior quality. This area, generically titled frame fusion, offers the possibility of reducing noise, extending the field of view, removing moving objects, removing blur, increasing spatial resolution and improving dynamic range. As such, this research has many applications in fields as diverse as forensic image restoration, computer generated special effects, video image compression, and digital video editing. An essential enabling step prior to performing frame fusion is image registration, by which an accurate estimate of the point-to-point mapping between views is computed. A robust and efficient algorithm is described to automatically register multiple images using only information contained within the images themselves. The accuracy of this method, and the statistical assumptions upon which it relies, are investigated empirically. Two forms of frame fusion are investigated. The first is image mosaicing, which is the alignment of multiple images into a single composition representing part of a 3D scene.
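For the planar or rotation-only scenes used in mosaicing, the point-to-point mapping between views is a 3×3 homography. A minimal sketch of applying such a mapping, where the matrix H and the points are hypothetical toy values rather than anything estimated from real images:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    mapped = pts_h @ H.T                               # project through H
    return mapped[:, :2] / mapped[:, 2:3]              # dehomogenize

# A pure translation by (5, -3), expressed as a homography (toy example).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [10.0, 10.0]])
print(apply_homography(H, pts))   # -> [[ 5. -3.] [15.  7.]]
```

In a real mosaicing pipeline H would be estimated from point correspondences between the overlapping images; the warp above is only the application step.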
Multiscale Bayesian Segmentation Using a Trainable Context Model
 IEEE Trans. on Image Processing
, 2001
"... In recent years, multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to ..."
Abstract

Cited by 56 (1 self)
In recent years, multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to encourage large uniformly classified regions. Consequently, these context models have a limited ability to capture the complex contextual dependencies that are important in applications such as document segmentation. In this paper, we propose a multiscale...
ANALYSIS OF THE RECOVERY OF EDGES IN IMAGES AND SIGNALS BY MINIMIZING NONCONVEX REGULARIZED LEAST-SQUARES
, 2005
"... We consider the restoration of discrete signals and images using leastsquares with nonconvex regularization. Our goal is to find important features of the (local) minimizers of the cost function in connection with the shape of the regularization term. This question is of paramount importance for ..."
Abstract

Cited by 35 (12 self)
We consider the restoration of discrete signals and images using least-squares with nonconvex regularization. Our goal is to find important features of the (local) minimizers of the cost function in connection with the shape of the regularization term. This question is of paramount importance for a relevant choice of regularization term. The main point of interest is the restoration of edges. We show that the differences between neighboring pixels in homogeneous regions are smaller than a small threshold, while they are larger than a large threshold at edges: we can say that the former are shrunk, while the latter are enhanced. This naturally entails a neat classification of differences as belonging to smooth regions or to edges. Furthermore, if the original signal or image is a scaled characteristic function of a subset, we show that the global minimizer is smooth everywhere if the contrast is low, whereas edges are correctly recovered at higher (finite) contrast. Explicit expressions are derived for the truncated quadratic and the “0–1” regularization function. It is seen that restoration using nonconvex regularization is fundamentally different from edge-preserving convex regularization. Our theoretical results are illustrated using a numerical experiment.
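The shrink/enhance dichotomy described above can be seen in a one-variable toy calculation. The sketch below minimizes the scalar cost (t − d)² + β·min(t², α) for a single pixel difference d, using the truncated quadratic regularizer named in the abstract; the parameter names β and α are hypothetical, not the paper's notation:

```python
def prox_truncated_quadratic(d, beta, alpha):
    """Minimize (t - d)^2 + beta * min(t^2, alpha) over t.

    Small differences land in the quadratic branch and are shrunk toward
    zero; large differences fall in the flat branch and are kept intact.
    """
    t_shrunk = d / (1.0 + beta)                          # quadratic-branch minimizer
    cost_shrunk = (t_shrunk - d) ** 2 + beta * min(t_shrunk ** 2, alpha)
    cost_keep = beta * min(d ** 2, alpha)                # cost of keeping t = d
    return t_shrunk if cost_shrunk <= cost_keep else d

print(prox_truncated_quadratic(0.1, 1.0, 1.0))  # -> 0.05 (small difference: shrunk)
print(prox_truncated_quadratic(5.0, 1.0, 1.0))  # -> 5.0  (large difference: kept)
```

There is no intermediate output: the minimizer jumps from the shrunk value to the unchanged value as |d| crosses a threshold, which is exactly the classification of differences into smooth regions versus edges.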
Hyperparameter estimation for satellite image restoration using a MCMC Maximum Likelihood method
 Pattern Recognition
, 2000
"... The satellite image deconvolution problem is illposed and must be regularized. Herein, we use an edgepreserving regularization model using a ' function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to us ..."
Abstract

Cited by 34 (10 self)
The satellite image deconvolution problem is ill-posed and must be regularized. Herein, we use an edge-preserving regularization model using a φ-function, involving two hyperparameters. Our goal is to estimate the optimal parameters in order to automatically reconstruct images. We propose to use the Maximum Likelihood Estimator (MLE), applied to the observed image. We need sampling from the prior and posterior distributions. Since the convolution prevents the use of standard samplers, we have developed a modified Geman-Yang algorithm, using an auxiliary variable and a cosine transform. We present a Markov Chain Monte Carlo Maximum Likelihood (MCMC-ML) technique which is able to simultaneously achieve the estimation and the reconstruction.
Statistical approaches in quantitative positron emission tomography
 Statistics and Computing
"... Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged ..."
Abstract

Cited by 30 (3 self)
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems is then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.
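As one illustration of the statistical approaches reviewed, the classical MLEM (maximum-likelihood expectation-maximization) update for emission tomography can be sketched on a toy system. The system matrix A below is hypothetical; a real PET model would also include detector response, attenuation, and Poisson noise in the data:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Multiplicative EM update for Poisson emission data y = A x."""
    x = np.ones(A.shape[1])               # positive initial image
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image (column sums)
    for _ in range(n_iter):
        ratio = y / (A @ x)               # measured / predicted counts
        x = x * (A.T @ ratio) / sens      # EM fixed-point update
    return x

# Toy 3-detector / 2-pixel system with noiseless, consistent data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
print(mlem(A, y))                         # converges toward [2., 4.]
```

The update preserves positivity automatically, which is one reason this family of algorithms displaced direct analytical inversion in quantitative PET.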
Unsupervised robust nonparametric estimation of the hemodynamic response function for any fMRI experiment
 IEEE Trans. Medical Imaging
, 2003
"... © 2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other w ..."
Abstract

Cited by 27 (6 self)
This paper deals with the estimation of the blood oxygen level-dependent response to a stimulus, as measured in functional magnetic resonance imaging (fMRI) data. A precise estimation is essential for a better understanding of cerebral activations. The most recent works have used a nonparametric framework for this estimation, considering each brain region as a system characterized by its impulse response, the so-called hemodynamic response function (HRF). However, the use of these techniques has remained limited since they are not well-adapted to real fMRI data. Here, we develop a threefold extension to previous works. We consider asynchronous event-related paradigms, account for different trial types and integrate several fMRI sessions into the estimation. These generalizations are simultaneously addressed through a badly conditioned observation model. Bayesian formalism is used to model temporal prior information of the underlying physiological process of the brain hemodynamic response. In this way, the HRF estimate results from a tradeoff between information brought by the data and by our prior knowledge. This tradeoff is modeled with hyperparameters that are set to the maximum-likelihood estimate using an expectation conditional maximization (ECM) algorithm. The proposed unsupervised approach is validated on both synthetic and real fMRI data, the latter originating from a speech perception experiment. Index Terms—Bayesian estimation, ECM algorithm, event-related fMRI paradigm, HRF modeling.
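The paper sets its hyperparameters by maximum likelihood via ECM; as a much simpler illustration of the same data/prior trade-off, a Gaussian smoothness prior on the HRF with a fixed hyperparameter λ yields a closed-form ridge-type estimate. Everything in this sketch (the design matrix X, data y, and λ) is a hypothetical toy setup, not the paper's method:

```python
import numpy as np

def estimate_hrf(X, y, lam):
    """Return argmin_h ||y - X h||^2 + lam * ||D h||^2, D = 2nd differences.

    lam balances fidelity to the data against smoothness of the HRF.
    """
    k = X.shape[1]
    D = np.diff(np.eye(k), n=2, axis=0)          # second-difference operator
    return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

rng = np.random.default_rng(0)
k, n = 12, 200
h_true = np.exp(-0.5 * (np.arange(k) - 4.0) ** 2)  # smooth bump as a toy HRF
X = rng.standard_normal((n, k))                    # toy stimulus design matrix
y = X @ h_true + 0.1 * rng.standard_normal(n)
h_hat = estimate_hrf(X, y, lam=1.0)
print(np.round(h_hat, 2))
```

Larger λ drags the estimate toward a smooth (here, piecewise-linear-penalized) shape; λ → 0 recovers ordinary least squares. The paper's contribution is, in effect, estimating such hyperparameters automatically instead of fixing them by hand.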
Parallelizable Bayesian Tomography Algorithms with Rapid, Guaranteed Convergence
 IEEE Trans. on Image Processing
, 2000
"... Bayesian tomographic reconstruction algorithms generally require the efficient optimization of a functional of many variables. In this setting, as well as in many other optimization tasks, functional substitution (FS) has been widely applied to simplify each step of the iterative process. The functi ..."
Abstract

Cited by 22 (7 self)
Bayesian tomographic reconstruction algorithms generally require the efficient optimization of a functional of many variables. In this setting, as well as in many other optimization tasks, functional substitution (FS) has been widely applied to simplify each step of the iterative process. The function to be minimized is replaced locally by an approximation having a more easily manipulated form, e.g., quadratic, but which maintains sufficient similarity to descend the true functional while computing only the substitute. In this paper, we provide two new applications of FS methods in iterative coordinate descent for Bayesian tomography. The first is a modification of our coordinate descent algorithm with one-dimensional (1D) Newton-Raphson approximations to an alternative quadratic which allows convergence to be proven easily. In simulations, we find essentially no difference in convergence speed between the two techniques. We also present a new algorithm which exploits the FS method to allow parallel updates of arbitrary sets of pixels using computations similar to iterative coordinate descent. The theoretical potential speed-up of parallel implementations is nearly linear with the number of processors if communication costs are neglected.
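Functional substitution as described here is the majorize-minimize idea: replace the objective locally by a quadratic that touches it at the current iterate and lies above it elsewhere, then minimize the substitute. A minimal sketch on a toy problem, not the paper's tomography functional: minimizing f(x) = Σ|x − y_i|, where the surrogate |t| ≤ t²/(2w) + w/2 with w = |x_k − y_i| turns each step into a weighted least-squares solve:

```python
import numpy as np

def mm_median(y, n_iter=50, eps=1e-8):
    """Minimize sum_i |x - y_i| by repeated quadratic substitution."""
    x = np.mean(y)                        # initial guess
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x - y) + eps)   # surrogate weights at current x
        x = np.sum(w * y) / np.sum(w)     # minimize the quadratic substitute
    return x

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(mm_median(y))                       # approaches the median, 3.0
```

Each substitute is easy to minimize exactly, and because it majorizes f at the current point, every iteration descends the true functional — the same guarantee the paper exploits to prove convergence of its coordinate-wise and parallel pixel updates.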
Model-Based Image Reconstruction From Time-Resolved Diffusion Data
"... This paper addresses the issue of reconstructing the unknown field of absorption and scattering coefficients from timeresolved measurements of diffused light in a computationally efficient manner. The intended application is optical tomography, which has generated considerable interest in recent ti ..."
Abstract

Cited by 22 (7 self)
This paper addresses the issue of reconstructing the unknown field of absorption and scattering coefficients from time-resolved measurements of diffused light in a computationally efficient manner. The intended application is optical tomography, which has generated considerable interest in recent times. The inverse problem is posed in the Bayesian framework. The maximum a posteriori (MAP) estimate is used to compute the reconstruction. We use an edge-preserving generalized Gaussian Markov random field to model the unknown image. The diffusion model used for the measurements is solved forward in time using a finite-difference approach known as the alternating-directions implicit method. This method requires the inversion of a tridiagonal matrix at each time step and is therefore of O(N) complexity, where N is the dimensionality of the image. Adjoint differentiation is used to compute the sensitivity of the measurements with respect to the unknown image. The novelty of our method lies in the computation of the sensitivity, since we can achieve it in O(N) time as opposed to the O(N²) time required by the perturbation approach. We present results using simulated data to show that the proposed method yields superior quality reconstructions with substantial savings in computation.
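The O(N) cost per time step comes from the tridiagonal solves inside the alternating-directions implicit method. A minimal sketch of such a solve, the Thomas algorithm (forward elimination plus back substitution); the diffusion-style system at the bottom is a hypothetical toy, and the algorithm assumes a well-conditioned (e.g., diagonally dominant) system:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(N): a = sub-, b = main-, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy 1D diffusion-style system: -x[i-1] + 4 x[i] - x[i+1] = d[i].
n = 5
a = np.full(n, -1.0); a[0] = 0.0                # sub-diagonal (a[0] unused)
b = np.full(n, 4.0)
c = np.full(n, -1.0); c[-1] = 0.0               # super-diagonal (c[-1] unused)
d = np.ones(n)
x = thomas_solve(a, b, c, d)
print(x)
```

Since each sweep touches every unknown once, the whole forward time-stepping stays linear in the number of pixels, which is what makes the adjoint-based O(N) sensitivity computation worthwhile.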
A Content-Aware Image Prior
"... In image restoration tasks, a heavytailed gradient distribution of natural images has been extensively exploited as an image prior. Most image restoration algorithms impose a sparse gradient prior on the whole image, reconstructing an image with piecewise smooth characteristics. While the sparse gr ..."
Abstract

Cited by 18 (2 self)
In image restoration tasks, a heavy-tailed gradient distribution of natural images has been extensively exploited as an image prior. Most image restoration algorithms impose a sparse gradient prior on the whole image, reconstructing an image with piecewise smooth characteristics. While the sparse gradient prior removes ringing and noise artifacts, it also tends to remove mid-frequency textures, degrading the visual quality. We can attribute such degradations to imposing an incorrect image prior. The gradient profile in fractal-like textures, such as trees, is close to a Gaussian distribution, and small gradients from such regions are severely penalized by the sparse gradient prior. To address this issue, we introduce an image restoration algorithm that adapts the image prior to the underlying texture. We adapt the prior to both low-level local structures as well as mid-level textural characteristics. Improvements in visual quality are demonstrated on deconvolution and denoising tasks.
Unified inference for variational Bayesian linear Gaussian state-space models
 In Proceedings of NIPS 2006
"... Abstract. Linear Gaussian StateSpace Models are widely used and a Bayesian treatment of parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinf ..."
Abstract

Cited by 17 (6 self)
Linear Gaussian State-Space Models are widely used and a Bayesian treatment of parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinformatics. The most challenging aspect of implementing the method is in performing inference on the hidden state sequence of the model. We show how to convert the inference problem so that standard and stable Kalman Filtering/Smoothing recursions from the literature may be applied. This is in contrast to previously published approaches based on Belief Propagation. Our framework both simplifies and unifies the inference problem, so that future applications may be easily developed. We demonstrate the elegance of the approach on Bayesian temporal ICA, with an application to finding independent components in noisy EEG signals. Linear Gaussian State-Space Models (LGSSMs) are fundamental in time-series analysis [1, 2, 3]. In these models the observations v_{1:T} are generated from an underlying dynamical system on h_{1:T} according to v_t = B h_t + η_t^v, η_t^v ∼ N(0_V, Σ_V); h_t = A h_{t−1} + η_t^h, η_t^h ∼ N(0_H, Σ_H), where N(µ, Σ) denotes a Gaussian with mean µ and covariance Σ, and 0_X denotes an X-dimensional zero vector. The observation v_t has dimension V and the hidden state h_t dimension H. Probabilistically, the LGSSM is defined by p(v_{1:T}, h_{1:T} | Θ) = p(v_1 | h_1) p(h_1) ∏_{t=2}^{T} p(v_t | h_t) p(h_t | h_{t−1}).
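The "standard and stable Kalman Filtering/Smoothing recursions" the abstract refers to can be sketched in their filtering form for the LGSSM defined above. All matrix values in the toy run are hypothetical, and mu0/P0 act as a prior on the state before the first observation:

```python
import numpy as np

def kalman_filter(v, A, B, SH, SV, mu0, P0):
    """Return filtered means E[h_t | v_{1:t}] for the model
    v_t = B h_t + noise(SV), h_t = A h_{t-1} + noise(SH)."""
    mu, P = mu0, P0
    means = []
    for vt in v:
        mu_pred = A @ mu                        # predict
        P_pred = A @ P @ A.T + SH
        S = B @ P_pred @ B.T + SV               # innovation covariance
        K = P_pred @ B.T @ np.linalg.inv(S)     # Kalman gain
        mu = mu_pred + K @ (vt - B @ mu_pred)   # correct with observation
        P = P_pred - K @ B @ P_pred
        means.append(mu)
    return np.array(means)

# Toy 1D run: a slowly drifting latent state under heavy observation noise.
rng = np.random.default_rng(1)
A = np.array([[1.0]]); B = np.array([[1.0]])
SH = np.array([[0.01]]); SV = np.array([[0.5]])
h = np.cumsum(0.1 * rng.standard_normal(50))[:, None]    # latent random walk
v = h + np.sqrt(0.5) * rng.standard_normal((50, 1))      # noisy observations
m = kalman_filter(v, A, B, SH, SV, np.zeros(1), np.eye(1))
```

The point of the paper is that variational inference for the Bayesian LGSSM can be rewritten so that exactly this kind of numerically well-behaved recursion does the work, instead of hand-derived belief-propagation updates.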