Results 1–10 of 53
Volumetric Transformation of Brain Anatomy
IEEE Transactions on Medical Imaging, 1997
Cited by 115 (10 self)
This paper presents diffeomorphic transformations of three-dimensional (3D) anatomical image data of the macaque occipital lobe and whole-brain cryosection imagery, and of deep brain structures in human brains as imaged via magnetic resonance imagery. These transformations are generated in a hierarchical manner, accommodating both global and local anatomical detail. The initial low-dimensional registration is accomplished by constraining the transformation to lie in a low-dimensional basis. The basis is defined by the Green's function of the elasticity operator placed at predefined locations in the anatomy and the eigenfunctions of the elasticity operator. The high-dimensional large deformations are vector fields generated via the mismatch between the template and target image volumes, constrained to be the solution of a Navier-Stokes fluid model. As part of this procedure, the Jacobian of the transformation is tracked, ensuring the generation of diffeomorphisms. It is shown that transformations constrained by quadratic regularization methods such as the Laplacian, biharmonic, and linear elasticity models do not ensure that the transformation maintains topology and, therefore, must only be used for coarse global registration.
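The topology check mentioned in the abstract can be illustrated with a short sketch. This is not the authors' fluid registration; the toy 2-D warp and grid size are assumptions, shown only to make concrete how tracking the Jacobian determinant verifies that a transformation preserves topology (determinant strictly positive everywhere).

```python
import numpy as np

def jacobian_determinant(phi_x, phi_y):
    """Determinant of the Jacobian of a 2-D transformation
    phi(x, y) = (phi_x, phi_y) sampled on a regular grid."""
    dphix_dy, dphix_dx = np.gradient(phi_x)   # np.gradient returns (d/drow, d/dcol)
    dphiy_dy, dphiy_dx = np.gradient(phi_y)
    return dphix_dx * dphiy_dy - dphix_dy * dphiy_dx

# Identity map plus a smooth, small perturbation: a diffeomorphism.
n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
phi_x = x + 2.0 * np.sin(2 * np.pi * y / n)
phi_y = y + 2.0 * np.cos(2 * np.pi * x / n)
detJ = jacobian_determinant(phi_x, phi_y)
print(detJ.min() > 0)   # True: topology is preserved
```

A large-amplitude perturbation would drive the determinant negative somewhere, which is exactly the failure mode the paper attributes to purely quadratic regularizers under large deformations.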
Mean and Variance of Implicitly Defined Biased Estimators (such as Penalized Maximum Likelihood): Applications to Tomography
IEEE Transactions on Image Processing, 1996
Cited by 84 (30 self)
Many estimators in signal processing problems are defined implicitly as the maximum of some objective function. Examples of implicitly defined estimators include maximum likelihood, penalized likelihood, maximum a posteriori, and nonlinear least-squares estimation. For such estimators, exact analytical expressions for the mean and variance are usually unavailable. Therefore, investigators usually resort to numerical simulations to examine properties of the mean and variance of such estimators. This paper describes approximate expressions for the mean and variance of implicitly defined estimators of unconstrained continuous parameters. We derive the approximations using the implicit function theorem, the Taylor expansion, and the chain rule. The expressions are defined solely in terms of the partial derivatives of whatever objective function one uses for estimation. As illustrations, we demonstrate that the approximations work well in two tomographic imaging applications with Poisson sta...
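The core idea can be sketched numerically on a toy problem (the scalar quadratic objective and all values below are assumptions for illustration, not the paper's tomography examples): for an implicitly defined estimator θ̂(y) = argmax_θ Φ(θ, y), the implicit function theorem gives dθ̂/dy = -Φ_θy / Φ_θθ, and first-order propagation through this derivative approximates the estimator variance.

```python
import numpy as np

# Toy objective: Phi(theta, y) = -(y - theta)**2 / 2 - beta * theta**2 / 2,
# whose maximizer is theta_hat(y) = y / (1 + beta).
beta, sigma2 = 0.5, 1.0

# Implicit function theorem: d theta_hat / dy = -Phi_{theta,y} / Phi_{theta,theta}
phi_tt = -(1.0 + beta)      # second partial w.r.t. theta
phi_ty = 1.0                # mixed partial
dtheta_dy = -phi_ty / phi_tt
var_approx = dtheta_dy**2 * sigma2   # first-order variance propagation

# Monte Carlo check (exact here because the objective is quadratic)
rng = np.random.default_rng(0)
y = rng.normal(0.0, np.sqrt(sigma2), 100_000)
var_mc = (y / (1.0 + beta)).var()
print(var_approx, var_mc)   # both ≈ sigma2 / (1 + beta)**2 ≈ 0.444
```

For non-quadratic objectives the same derivative-based expressions become approximations rather than exact, which is the regime the paper analyzes.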
Penalized Maximum-Likelihood Image Reconstruction Using Space-Alternating Generalized EM Algorithms
IEEE Transactions on Image Processing, 1995
Cited by 82 (31 self)
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical overrelaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints.
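The benefit of sequential updates can be illustrated with a toy sketch. This is not the SAGE hidden-data construction itself; the system matrix, penalty weight, and problem sizes are all assumptions, and the point is only that updating one parameter at a time reduces the penalized Poisson likelihood maximization to decoupled 1-D problems, each solvable essentially exactly, with a monotone objective.

```python
import numpy as np

# Toy penalized Poisson log-likelihood, maximized coordinate-by-coordinate.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (20, 5))          # hypothetical system matrix
x_true = rng.uniform(1.0, 3.0, 5)
y = rng.poisson(A @ x_true).astype(float)   # Poisson measurements
beta = 0.1                                   # quadratic penalty weight

def objective(x):
    ax = A @ x
    return float(np.sum(y * np.log(ax) - ax) - 0.5 * beta * np.sum(x**2))

x = np.ones(5)
history = []
for sweep in range(30):
    for j in range(5):
        # The objective is concave in x_j and its partial derivative is
        # monotone decreasing, so bisect for the root (the 1-D maximizer).
        lo, hi = 0.0, 50.0
        for _ in range(50):
            x[j] = 0.5 * (lo + hi)
            grad = A[:, j] @ (y / (A @ x) - 1.0) - beta * x[j]
            lo, hi = (x[j], hi) if grad > 0 else (lo, x[j])
        x[j] = lo
    history.append(objective(x))
print(round(history[-1], 3))
```

Each exact 1-D maximization can only increase the objective, mirroring (in a much simpler setting) the monotonicity guarantee the paper proves for SAGE.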
Multiscale Modeling and Estimation of Poisson Processes with Application to Photon-Limited Imaging
IEEE Transactions on Information Theory, 1999
Cited by 56 (10 self)
Many important problems in engineering and science are well-modeled by Poisson processes. In many applications it is of great interest to accurately estimate the intensities underlying observed Poisson data. In particular, this work is motivated by photon-limited imaging problems. This paper studies a new Bayesian approach to Poisson intensity estimation based on the Haar wavelet transform. It is shown that the Haar transform provides a very natural and powerful framework for this problem. Using this framework, a novel multiscale Bayesian prior for modeling intensity functions is devised. The new prior leads to a simple Bayesian intensity estimation procedure. Furthermore, we characterize the correlation behavior of the new prior and show that it has 1/f spectral characteristics. The new framework is applied to photon-limited image estimation, and its potential to improve nuclear medicine imaging is examined.
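Why the Haar transform is natural for counts can be sketched as follows (a toy analysis step only; the paper's Bayesian prior on the parent-to-child splitting probabilities is not implemented here): in the unnormalized Haar pyramid, each parent coefficient is the sum of its two children's counts, so the coarsest coefficient is the total photon count, and, conditionally on each parent, the children follow a binomial split.

```python
import numpy as np

def haar_count_pyramid(counts):
    """Unnormalized Haar analysis of a count vector: at each scale,
    adjacent pairs are summed to form parents; details are differences."""
    scales = []
    c = counts.astype(int)
    while c.size > 1:
        detail = c[0::2] - c[1::2]
        scales.append(detail)
        c = c[0::2] + c[1::2]     # parents = pairwise sums of counts
    return c[0], scales            # total count and per-scale details

counts = np.array([3, 1, 4, 1, 5, 9, 2, 6])
total, details = haar_count_pyramid(counts)
print(total)   # 31: the coarsest coefficient is the total photon count
```

Because Poisson sums are again Poisson and children given a parent sum are binomial, a prior on the splitting probabilities at each scale gives the closed-form multiscale posterior the abstract refers to.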
Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction
IEEE Transactions on Image Processing, 2002
Cited by 51 (21 self)
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on non-quadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
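A generic preconditioned CG sketch shows where the preconditioner enters the iteration (this is not the paper's new preconditioners; the toy shift-variant Hessian, built from a Toeplitz blur with nonuniform diagonal weighting, and the diagonal preconditioner are assumptions):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradients for symmetric positive
    definite A; M_inv applies the inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, max_iter

# Toy shift-variant Hessian: Toeplitz blur T, nonuniform noise
# weighting W (as in weighted least squares), plus a regularizer.
n = 60
idx = np.arange(n)
T = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
W = np.diag(np.linspace(0.1, 10.0, n))      # nonuniform variance
H = T.T @ W @ T + 0.1 * np.eye(n)
b = np.ones(n)

x_none, it_none = pcg(H, b, lambda r: r)                    # unpreconditioned
x_diag, it_diag = pcg(H, b, lambda r: r / np.diag(H))       # diagonal
print(it_none, it_diag)
```

The nonuniform W is what destroys the Toeplitz structure: a circulant preconditioner would model T'T well but not W, which is the mismatch the abstract describes.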
Hilbert-Schmidt Lower Bounds for Estimators on Matrix Lie Groups for ATR
1998
Cited by 45 (22 self)
Deformable template representations of observed imagery model the variability of target pose via the actions of matrix Lie groups on rigid templates. In this paper, we study the construction of minimum mean squared error estimators on the special orthogonal group, SO(n), for pose estimation. Due to the non-flat geometry of SO(n), the standard Bayesian formulation of optimal estimators and their characteristics requires modifications. By utilizing the Hilbert-Schmidt metric defined on GL(n), a larger group containing SO(n), a mean squared error criterion is defined on SO(n). The Hilbert-Schmidt estimate (HSE) is defined to be a minimum mean squared error estimator restricted to SO(n). The expected error associated with the HSE is shown to be a lower bound, called the Hilbert-Schmidt bound (HSB), on the error incurred by any other estimator. Analysis and algorithms are presented for evaluating the HSE and the HSB in the case of both ground-based and airborne targets.
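A sketch of the HSE computation for SO(3) (the rotation sampler and noise model below are assumptions; the paper derives its estimators from sensor likelihoods): average the sampled rotations in the ambient matrix space and project the mean back onto SO(n) under the Frobenius norm, i.e. the Hilbert-Schmidt metric, via an SVD with a determinant sign correction.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix exp of the skew matrix of w (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    return (np.eye(3) + np.sin(theta) / theta * K
            + (1 - np.cos(theta)) / theta**2 * (K @ K))

def project_to_SO(M):
    """Closest rotation to M in the Frobenius (Hilbert-Schmidt) norm."""
    U, _, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # enforce det = +1
    return U @ D @ Vt

def hilbert_schmidt_estimate(samples):
    """Project the Euclidean mean of sampled rotations back onto SO(n)."""
    return project_to_SO(np.mean(samples, axis=0))

rng = np.random.default_rng(0)
R_true = rodrigues(np.array([0.3, -0.5, 0.2]))
samples = np.stack([R_true @ rodrigues(rng.normal(0, 0.1, 3))
                    for _ in range(500)])
R_hse = hilbert_schmidt_estimate(samples)
print(np.linalg.norm(R_hse - R_true) < 0.05)   # True: HSE recovers the pose
```

The projection step is what replaces the (undefined) Euclidean conditional mean on the curved group: the plain average of rotations is generally not a rotation, but its Frobenius-nearest point on SO(n) is.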
Asymptotic Performance Analysis of Bayesian Object Recognition
IEEE Transactions on Information Theory, 1998
Cited by 19 (12 self)
This paper analyzes the performance of Bayesian object recognition algorithms in the context of deformable templates. Rigid CAD surface models represent the underlying targets; low-dimensional matrix Lie groups (rotation and translation) extend them to the particular instance of pose and position. For a target α, I_α represents its template, and sI_α is the target template at the pose/location denoted by the parameter s. The remote sensors observing the objects are modeled by the projective transformation T; that is, TsI_α is the signature of target α at pose s when viewed by the sensor T. The observations I^D are modeled as random fields with mean TsI_α. In a Bayesian approach, object recognition and pose estimation are basically optimizations of a given cost function related to the posterior. Recognition performance is analyzed through the probability of error: given a target α_0 at pose s_0, what is the probability of it being recognized as α_1? Asymptotic ex...
Ergodic Algorithms on Special Euclidean Groups for ATR
1997
Cited by 16 (14 self)
The variabilities in orientations and positions of rigid objects can be modeled by applying rotation and translation groups to their surface manifolds. Following the deformable template theory, the rigid templates, given by two-dimensional surface descriptions, are rotated and translated to conform to individual objects in a particular scene. The fundamental group generating rigid motion is the special Euclidean group SE(n), the semidirect product of the special orthogonal group SO(n) and the translation group ℝ^n. Under this model the scene representations take values in Cartesian products of the curved Lie group SE(n). Given the observations of a scene obtained from a set of standard remote sensors, we generate the conditional mean estimates of the transformation groups modeling that scene. Techniques, based on ergodic jumping stochastic gradient flows, are developed which accommodate the curved geometry of these groups. Algorithms and simulation results are presented in the context o...
Statistical imaging and complexity regularization
IEEE Transactions on Information Theory, 2000
Cited by 16 (3 self)
We apply the complexity-regularization principle to statistical ill-posed inverse problems in imaging. We formulate a natural distortion measure in image space and develop non-asymptotic bounds on estimation performance in terms of an index of resolvability that characterizes the compressibility of the true image. These bounds extend previous results that were obtained in the literature under simpler observational models. I. Statement of the Problem: A variety of imaging problems involve estimation of an image from noisy, degraded observations [1, 2]. Examples include tomography, astronomical imaging, ultrasound imaging, radar imaging, forensic science, and restoration of old movies. In some of these problems, a statistical model relating the observations ...
Localization accuracy in single-molecule microscopy
Biophysical Journal, 2004
Cited by 10 (0 self)
One of the most basic questions in single-molecule microscopy concerns the accuracy with which the location of a single molecule can be determined. Using the Fisher information matrix, it is shown that the limit of the localization accuracy for a single molecule is given by λ_em / (2π n_a √(γAt)), where λ_em, n_a, γ, A, and t denote the emission wavelength of the single molecule, the numerical aperture of the objective, the efficiency of the optical system, the emission rate of the single molecule, and the acquisition time, respectively. Using Monte Carlo simulations, it is shown that estimation algorithms can come close to attaining the limit given by this expression. Explicit quantitative results are also provided to show how the limit of the localization accuracy is degraded by factors such as pixelation of the detector and noise sources in the detection system. The results demonstrate what is achievable by single-molecule microscopy and provide guidelines for experimental design.
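The limit λ_em / (2π n_a √(γAt)) is straightforward to evaluate; a short sketch follows, with illustrative parameter values that are assumptions of this example, not numbers from the paper:

```python
import math

def localization_limit(lambda_em, na, gamma, A, t):
    """Fisher-information limit on single-molecule localization accuracy,
    lambda_em / (2 * pi * na * sqrt(gamma * A * t)), in lambda_em's units."""
    return lambda_em / (2 * math.pi * na * math.sqrt(gamma * A * t))

# Hypothetical settings: emission at 520 nm, NA 1.4, 10% collection
# efficiency, 10^4 photons/s emission rate, 0.1 s acquisition.
print(localization_limit(520e-9, 1.4, 0.1, 1e4, 0.1))  # ≈ 5.9e-9 m, i.e. ~5.9 nm
```

Since the limit scales as 1/√(γAt), quadrupling the collected photon budget only halves the achievable localization error.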