Regularization for uniform spatial resolution properties in penalized-likelihood image reconstruction
IEEE Trans. Med. Imag.
, 2000
Abstract

Cited by 59 (28 self)
Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space-variant, asymmetric, and object-dependent even for space-invariant imaging systems. We propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. We have developed computationally efficient methods for PET systems with shift-invariant geometric responses. We demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
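For context, penalized-likelihood reconstruction in this line of work maximizes an objective of the following general form (a sketch in assumed notation, not this paper's exact symbols):

```latex
\hat{x} = \arg\max_{x \ge 0} \Phi(x), \qquad
\Phi(x) = L(y; x) - \beta R(x), \qquad
R(x) = \tfrac{1}{2} \sum_{j} \sum_{k \in \mathcal{N}_j} w_{jk} (x_j - x_k)^2 ,
```

where $L(y;x)$ is the measurement log-likelihood, $\beta > 0$ controls the regularization strength, and $\mathcal{N}_j$ is the neighborhood of voxel $j$. The scheme proposed here replaces fixed weights $w_{jk}$ with spatially varying ones chosen so that the local impulse response is approximately uniform across the image.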
A Theoretical Study of the Contrast Recovery and Variance of MAP Reconstructions From PET Data
 IEEE Trans. Med. Imag
, 1999
Abstract

Cited by 44 (8 self)
We examine the spatial resolution and variance properties of PET images reconstructed using maximum a posteriori (MAP) or penalized-likelihood methods. Resolution is characterized by the contrast recovery coefficient (CRC) of the local impulse response. Simplified approximate expressions are derived for the local impulse response CRCs and variances for each voxel. Using these results we propose a practical scheme for selecting spatially variant smoothing parameters to optimize lesion detectability through maximization of the local CRC-to-noise ratio in the reconstructed image.

I. INTRODUCTION. PET image reconstruction algorithms based on maximum likelihood (ML) or maximum a posteriori (MAP) principles can produce improved spatial resolution and noise properties in comparison to conventional filtered backprojection (FBP) methods. It is often important to be able to quantify this improvement in terms of the resolution (or bias) and variance of the resulting images. These measures can be...
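The local impulse response and its CRC, as used throughout this literature, are commonly defined as follows (a sketch in assumed notation, with $\hat{x}(y)$ the estimator and $e^{j}$ the $j$-th unit vector):

```latex
l^{j}(x) = \lim_{\delta \to 0}
\frac{\mathbb{E}\left[\hat{x}(y) \mid x + \delta e^{j}\right]
      - \mathbb{E}\left[\hat{x}(y) \mid x\right]}{\delta},
\qquad
\mathrm{CRC}_j = l^{j}_{j}(x),
```

i.e. the CRC of voxel $j$ is the central value of its own local impulse response; values near 1 indicate full contrast recovery.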
Statistical approaches in quantitative positron emission tomography
 Statistics and Computing
Abstract

Cited by 38 (3 self)
Positron emission tomography is a medical imaging modality for producing 3D images of the spatial distribution of biochemical tracers within the human body. The images are reconstructed from data formed through detection of radiation resulting from the emission of positrons from radioisotopes tagged onto the tracer of interest. These measurements are approximate line integrals from which the image can be reconstructed using analytical inversion formulae. However, these direct methods do not allow accurate modeling either of the detector system or of the inherent statistical fluctuations in the data. Here we review recent progress in developing statistical approaches to image estimation that can overcome these limitations. We describe the various components of the physical model and review different formulations of the inverse problem. The wide range of numerical procedures for solving these problems is then reviewed. Finally, we describe recent work aimed at quantifying the quality of the resulting images, both in terms of classical measures of estimator bias and variance, and also using measures that are of more direct clinical relevance.
Statistical Image Reconstruction Methods for Randoms-Precorrected PET Scans
Med. Image Anal.
, 1998
Abstract

Cited by 34 (16 self)
PET measurements are usually precorrected for accidental coincidence events by real-time subtraction of the delayed-window coincidences. Randoms subtraction compensates in mean for accidental coincidences but destroys the Poisson statistics. We propose and analyze two new approximations to the exact log-likelihood of the precorrected measurements, one based on a "shifted Poisson" model, the other based on saddle-point approximations to the measurement probability mass function (pmf). The methods apply to both emission and transmission tomography; however, in this paper we focus on transmission tomography. We compare the new models to conventional data-weighted least squares (WLS) and conventional maximum likelihood (based on the ordinary Poisson (OP) model) using simulations and analytic approximations. The results demonstrate that the proposed methods avoid the systematic bias of the WLS method, and lead to significantly lower variance than the conventional OP method. The saddle-point method provides a more accurate approximation to the exact log-likelihood than the WLS, OP and shifted Poisson alternatives. However, the simpler shifted Poisson method yielded comparable bias-variance performance to the saddle-point method in the simulations. The new methods offer improved image reconstruction in PET through more realistic statistical modeling, yet with negligible increase in computation over the conventional OP method.
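The "shifted Poisson" idea can be illustrated with a small simulation (a sketch under assumed notation: `ybar` is the mean of true coincidences, `rbar` the randoms mean; the function names are illustrative, not the paper's code):

```python
import numpy as np

def ordinary_poisson_loglik(y, ybar):
    # Ordinary Poisson (OP) model: pretend the precorrected data y are
    # Poisson(ybar).  Negative y (possible after subtraction) is clipped
    # to zero as a crude workaround -- one symptom of the model mismatch.
    y = np.maximum(y, 0)
    return float(np.sum(y * np.log(ybar) - ybar))

def shifted_poisson_loglik(y, ybar, rbar):
    # Shifted Poisson model: treat y + 2*rbar as Poisson(ybar + 2*rbar),
    # which matches both the mean (ybar) and the variance (ybar + 2*rbar)
    # of the randoms-precorrected measurement.
    s = np.maximum(y + 2 * rbar, 0)
    m = ybar + 2 * rbar
    return float(np.sum(s * np.log(m) - m))

# Toy simulation of precorrection: y = prompts - delays.
rng = np.random.default_rng(0)
ybar, rbar = 10.0, 4.0
prompts = rng.poisson(ybar + rbar, size=100_000)   # true + accidental events
delays = rng.poisson(rbar, size=100_000)           # delayed-window randoms
y = prompts - delays
print(np.mean(y))  # close to ybar = 10
print(np.var(y))   # close to ybar + 2*rbar = 18, not ybar: y is not Poisson
```

The variance check is the key point: precorrected data have variance `ybar + 2*rbar`, which the OP model understates and the shifted Poisson model matches exactly.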
Optimality analysis of sensor-target geometries in passive localization: Part 1 - Bearing-only localization
 In ISSNIP’07
, 2007
Abstract

Cited by 25 (7 self)
In this paper we characterize the relative sensor-target geometry in R^2 in terms of potential localization performance for time-of-arrival based localization. Our aim is to characterize those relative sensor-target geometries which minimize the relative Cramér-Rao lower bound.
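As an illustration of how geometry enters such bounds, here is a minimal sketch of the Fisher information matrix for 2-D bearing-only localization, assuming i.i.d. Gaussian bearing noise (the function names and the noise model are assumptions, not taken from the paper):

```python
import numpy as np

def bearing_fim(sensors, target, sigma=0.1):
    # Fisher information for 2-D bearing-only localization with i.i.d.
    # Gaussian bearing noise of std sigma (radians), using the standard
    # result FIM = (1/sigma^2) * sum_i u_i u_i^T / r_i^2, where r_i is the
    # sensor-target range and u_i is the unit vector orthogonal to the
    # line of sight.
    fim = np.zeros((2, 2))
    for s in sensors:
        d = np.asarray(target, float) - np.asarray(s, float)
        r2 = float(d @ d)
        u = np.array([d[1], -d[0]]) / np.sqrt(r2)  # orthogonal unit vector
        fim += np.outer(u, u) / r2
    return fim / sigma ** 2

target = np.array([0.0, 0.0])
# Equal ranges, different angular spread: orthogonal vs nearly collinear bearings.
good = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
bad = [np.array([1.0, 0.0]), np.array([np.cos(0.1), np.sin(0.1)])]
print(np.linalg.det(bearing_fim(good, target)))  # large: well-conditioned geometry
print(np.linalg.det(bearing_fim(bad, target)))   # small: nearly unobservable
```

Maximizing the FIM determinant (equivalently, minimizing the CRLB volume) over sensor placement is exactly the kind of geometry optimization this paper analyzes.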
New Statistical Models for Randoms-Precorrected PET Scans
In Information Processing in Medical Imaging
, 2001
Abstract

Cited by 24 (19 self)
PET measurements are usually precorrected for accidental coincidence events by real-time subtraction of the delayed-window coincidences. Randoms subtraction compensates in mean for accidental coincidences but destroys the Poisson statistics. We propose and analyze two new approximations to the exact log-likelihood of the precorrected measurements, one based on a "shifted Poisson" model, the other based on saddle-point approximations to the measurement probability mass function (pmf). The methods apply to both emission and transmission tomography; however, in this paper we focus on transmission tomography. We compare the new models to conventional data-weighted least squares (WLS) and conventional maximum likelihood (based on the ordinary Poisson (OP) model) using simulations and analytic approximations. The results demonstrate that the proposed methods avoid the systematic bias of the WLS method, and lead to significantly lower variance than the conventional OP method. The saddle-point method provides a more accurate approximation to the exact log-likelihood than the WLS, OP and shifted Poisson alternatives.
Resolution properties of regularized image reconstruction methods
Dept. of EECS, Univ. of Michigan, Ann Arbor, MI
, 1995
Abstract

Cited by 20 (13 self)
This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
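The linearized local impulse response referred to above is commonly written as follows (a sketch in assumed notation: $A$ the system matrix, $D$ a diagonal weighting determined by the measurement statistics, $R$ the Hessian of the quadratic penalty, and $e^{j}$ the $j$-th unit vector):

```latex
l^{j}(x) \approx \left[ A^{\mathsf{T}} D A + \beta R \right]^{-1}
A^{\mathsf{T}} D A \, e^{j} .
```

Since $D$ depends on the object only through its projections, so does $l^{j}$, which is what makes a projection-based modified penalty practical to compute.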
Minimum variance in biased estimation: Bounds and asymptotically optimal estimators
 IEEE Trans. Signal Processing
, 2004
Abstract

Cited by 19 (6 self)
Abstract—We develop a uniform Cramér–Rao lower bound (UCRLB) on the total variance of any estimator of an unknown vector of parameters, with bias gradient matrix whose norm is bounded by a constant. We consider both the Frobenius norm and the spectral norm of the bias gradient matrix, leading to two corresponding lower bounds. We then develop optimal estimators that achieve these lower bounds. In the case in which the measurements are related to the unknown parameters through a linear Gaussian model, Tikhonov regularization is shown to achieve the UCRLB when the Frobenius norm is considered, and the shrunken estimator is shown to achieve the UCRLB when the spectral norm is considered. For more general models, the penalized maximum likelihood (PML) estimator with a suitable penalizing function is shown to asymptotically achieve the UCRLB. To establish the asymptotic optimality of the PML estimator, we first develop the asymptotic mean and variance of the PML estimator for any choice of penalizing function satisfying certain regularity constraints and then derive a general condition on the penalizing function under which the resulting PML estimator asymptotically achieves the UCRLB. This then implies that from all linear and nonlinear estimators with bias gradient whose norm is bounded by a constant, the proposed PML estimator asymptotically results in the smallest possible variance.

Index Terms—Asymptotic optimality, biased estimation, bias gradient norm, Cramér–Rao lower bound, penalized maximum likelihood, Tikhonov regularization.
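For the linear Gaussian model mentioned above, the bias gradient of the Tikhonov estimator has a closed form, so the variance/bias-gradient trade-off the UCRLB quantifies is easy to see numerically. A minimal illustrative sketch (the helper names and toy data are assumptions, not the paper's construction):

```python
import numpy as np

def tikhonov(A, y, alpha):
    # Tikhonov-regularized least squares: argmin_x ||y - A x||^2 + alpha ||x||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def bias_gradient(A, alpha):
    # The estimator is linear, x_hat = B y with B = (A'A + alpha*I)^{-1} A',
    # so for y = A x + w the bias gradient matrix is B A - I, independent of x.
    n = A.shape[1]
    B = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T)
    return B @ A - np.eye(n)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x = rng.standard_normal(5)
y = A @ x + 0.1 * rng.standard_normal(20)
for alpha in (0.0, 1.0, 10.0):
    # Larger alpha lowers variance but grows the bias gradient norm --
    # exactly the trade-off the UCRLB makes precise.
    print(alpha, np.linalg.norm(bias_gradient(A, alpha)))
```

At `alpha = 0` the bias gradient vanishes (the unbiased least-squares case), and its Frobenius norm grows monotonically with `alpha`.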
Linear Regression with Gaussian Model Uncertainty: Algorithms and Bounds
Abstract

Cited by 14 (3 self)
We consider the problem of estimating an unknown deterministic parameter vector in a linear regression model with random Gaussian uncertainty in the mixing matrix. We prove that the maximum likelihood (ML) estimator is a regularized least squares estimator and develop three alternative approaches for finding the regularization parameter which maximizes the likelihood. We analyze the performance using the Cramér-Rao bound (CRB) on the mean squared error, and show that the degradation in performance due to the uncertainty is not as severe as may be expected. Next, we address the problem again assuming that the variances of the noise and the elements in the model matrix are unknown and derive the associated CRB and ML estimator. We compare our methods to known results on linear regression in the errors-in-variables (EIV) model. We discuss the similarity between these two competing approaches, and provide a thorough comparison which sheds light on their theoretical and practical differences.
Uniformly Improving the Cramér-Rao Bound and Maximum-Likelihood Estimation
Abstract

Cited by 13 (5 self)
Abstract — An important aspect of estimation theory is characterizing the best achievable performance in a given estimation problem, as well as determining estimators that achieve the optimal performance. The traditional Cramér-Rao-type bounds provide benchmarks on the variance of any estimator of a deterministic parameter vector under suitable regularity conditions, while requiring a priori specification of a desired bias gradient. In applications, it is often not clear how to choose the required bias. A direct measure of the estimation error that takes both the variance and the bias into account is the mean-squared error (MSE), which is the sum of the variance and the squared norm of the bias. Here, we develop bounds on the MSE in estimating a deterministic parameter vector x0 over all bias vectors that are linear in x0, which includes the traditional unbiased estimation as a special case. In some settings, it is possible to minimize the MSE over all linear bias vectors. More generally, direct minimization is not possible since the optimal solution depends on the unknown x0. Nonetheless, we show that in many cases we can find bias vectors that result in an MSE bound that is smaller than the CRLB for all values of x0. Furthermore, we explicitly construct estimators that achieve these bounds in cases where an efficient estimator exists, by performing a simple linear transformation on the standard maximum likelihood (ML) estimator. This leads to estimators that result in a smaller MSE than the ML estimator for all possible values of x0.
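The MSE decomposition underlying this work, with the bias constrained to be linear in $x_0$, can be written as follows (a sketch in assumed notation, with $M$ the linear bias matrix):

```latex
\mathrm{MSE}(\hat{x}, x_0)
= \operatorname{tr}\left(\operatorname{Cov}(\hat{x})\right)
+ \left\lVert b(x_0) \right\rVert^2,
\qquad b(x_0) = M x_0 ,
```

so that $M = 0$ recovers unbiased estimation and the traditional CRLB, while a suitably chosen nonzero $M$ can reduce the MSE bound for every value of $x_0$.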