Results 1–10 of 99
Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing
 Advances in Neural Information Processing Systems 9
, 1996
Abstract

Cited by 191 (24 self)
The Support Vector (SV) method was recently proposed for estimating regressions, constructing multidimensional splines, and solving linear operator equations [Vapnik, 1995]. In this presentation we report results of applying the SV method to these problems. 1 Introduction: The Support Vector method is a universal tool for solving multidimensional function estimation problems. Initially it was designed to solve pattern recognition problems, where, in order to find a decision rule with good generalization ability, one selects some (small) subset of the training data, called the Support Vectors (SVs). Optimal separation of the SVs is equivalent to optimal separation of the entire data. This led to a new way of representing decision functions as a linear expansion over a basis whose elements are nonlinear functions parameterized by the SVs (one SV per basis element). This type of function representation is especially useful for high dimensional...
Bayesian reconstructions from emission tomography data using a modified EM algorithm
 IEEE Trans. Med. Imag
, 1990
Abstract

Cited by 191 (3 self)
Abstract—A new method of reconstruction from SPECT data is proposed, which builds on the EM approach to maximum likelihood reconstruction from emission tomography data, but aims instead at maximum posterior probability estimation, taking account of prior belief about “smoothness” in the isotope concentration. A novel modification to the EM algorithm yields a practical method. The method is illustrated by an application to data from brain scans.
Accelerated Image Reconstruction using Ordered Subsets of Projection Data
 IEEE Trans. Med. Imag
, 1994
Abstract

Cited by 154 (2 self)
We define ordered subset processing for standard algorithms (such as Expectation Maximization, EM) for image restoration from projections. Ordered subsets methods group projection data into an ordered sequence of subsets (or blocks). An iteration of ordered subsets EM is defined as a single pass through all the subsets; in each subset, the current estimate is used to initialise an application of EM with that data subset. This approach is similar in concept to the block-Kaczmarz methods introduced by Eggermont et al. [1] for iterative reconstruction. Simultaneous iterative reconstruction (SIRT) and multiplicative algebraic reconstruction (MART) techniques are well-known special cases. Ordered subsets EM (OSEM) provides a restoration imposing a natural positivity condition and with close links to the EM algorithm. OSEM is applicable in both single photon (SPECT) and positron emission tomography (PET). In simulation studies in SPECT, the OSEM algorithm provides an order-of-magnitude acceleration ...
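The OSEM iteration described above (a multiplicative EM-style update applied subset by subset) can be sketched in a few lines. This is a toy illustration; the system matrix, problem sizes, and subset count are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

# Toy OSEM reconstruction from Poisson projection data.
rng = np.random.default_rng(0)
n_proj, n_pix = 40, 16
A = rng.uniform(0.1, 1.0, size=(n_proj, n_pix))      # hypothetical forward model
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = rng.poisson(A @ x_true).astype(float)            # measured counts

def osem(y, A, n_subsets=4, n_passes=5):
    """One pass = one multiplicative EM-style update per ordered subset."""
    x = np.ones(A.shape[1])                          # strictly positive start
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_passes):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)   # data / reprojection
            x = x * (As.T @ ratio) / (As.T @ np.ones(len(s)))
    return x

x_hat = osem(y, A)
```

Because every factor in the update is nonnegative, the natural positivity condition mentioned in the abstract is preserved automatically.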
Penalized Weighted Least-Squares Image Reconstruction for Positron Emission Tomography
 IEEE Trans. Med. Imag
, 1994
Abstract

Cited by 86 (38 self)
This paper presents an image reconstruction method for positron emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, we argue statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. We propose a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to that of the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated...
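A minimal sketch of the PWLS idea with a nonnegativity-constrained successive over-relaxation (+SOR) solver. The weighting scheme, penalty, and sizes below are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np

# Penalized weighted least-squares: minimize
#   0.5*(y - A x)' W (y - A x) + 0.5*beta*||x||^2   subject to x >= 0,
# i.e. a quadratic 0.5 x'Hx - b'x with H = A'WA + beta*I, b = A'Wy.
rng = np.random.default_rng(1)
n_meas, n_pix = 30, 12
A = rng.uniform(0.0, 1.0, size=(n_meas, n_pix))
x_true = rng.uniform(0.0, 2.0, size=n_pix)
y = A @ x_true + rng.normal(0, 0.1, size=n_meas)
w = 1.0 / (np.abs(y) + 1.0)                        # stand-in data-based weights
beta = 0.1                                         # penalty strength (assumed)

H = A.T @ (w[:, None] * A) + beta * np.eye(n_pix)  # Hessian of the objective
b = A.T @ (w * y)

def pwls_sor(H, b, omega=1.4, n_iter=50):
    """Coordinate-wise over-relaxation, clamped to keep x nonnegative."""
    x = np.zeros(len(b))
    for _ in range(n_iter):
        for j in range(len(b)):
            resid = b[j] - H[j] @ x
            x[j] = max(0.0, x[j] + omega * resid / H[j, j])
    return x

x_hat = pwls_sor(H, b)
```

Since H is symmetric positive definite, each clamped coordinate step with 0 < omega < 2 does not increase the objective, which is why the method converges to the constrained global minimum.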
Platelets: A Multiscale Approach for Recovering Edges and Surfaces in Photon-Limited Medical Imaging
 IEEE TRANSACTIONS ON MEDICAL IMAGING
, 2003
Abstract

Cited by 77 (19 self)
The nonparametric multiscale platelet algorithms presented in this paper, unlike traditional wavelet-based methods, are both well suited to photon-limited medical imaging applications involving Poisson data and capable of better approximating edge contours. This paper introduces platelets, localized functions at various scales, locations, and orientations that produce piecewise linear image approximations, and a new multiscale image decomposition based on these functions. Platelets are well suited for approximating images consisting of smooth regions separated by smooth boundaries. For smoothness measured in certain Hölder classes, it is shown that the error of m-term platelet approximations can decay significantly faster than that of m-term approximations in terms of sinusoids, wavelets, or wedgelets. This suggests that platelets may outperform existing techniques for image denoising and reconstruction. Fast, platelet-based, maximum penalized likelihood methods for photon-limited image denoising, deblurring, and tomographic reconstruction problems are developed. Because platelet decompositions of Poisson distributed images are tractable and computationally efficient, existing image reconstruction methods based on expectation-maximization type algorithms can be easily enhanced with platelet techniques. Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.
A Statistical Multiscale Framework for Poisson Inverse Problems
, 2000
Abstract

Cited by 40 (4 self)
This paper describes a statistical modeling and analysis method for linear inverse problems involving Poisson data, based on a novel multiscale framework. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the MAP estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm...
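The multiscale likelihood factorization underlying this framework can be checked numerically: a vector of independent Poisson counts has exactly the same likelihood as a Poisson distribution on the total count combined with a binomial "split" at each node of a recursive dyadic partition. A small stdlib-only sketch (the counts and intensities are arbitrary illustrative values):

```python
import math

def log_poisson(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def log_binom(k, n, p):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

counts = [3, 7, 2, 5]          # hypothetical bin counts
lam    = [1.5, 6.0, 2.5, 4.0]  # underlying intensities

# Direct likelihood: independent Poisson bins.
direct = sum(log_poisson(c, l) for c, l in zip(counts, lam))

# Multiscale factorization: binomial split at each node of the
# recursive dyadic partition, Poisson only for the grand total.
def multiscale(counts, lam):
    if len(counts) == 1:
        return 0.0
    h = len(counts) // 2
    cl, ll = sum(counts[:h]), sum(lam[:h])
    c, l = sum(counts), sum(lam)
    return (log_binom(cl, c, ll / l)
            + multiscale(counts[:h], lam[:h])
            + multiscale(counts[h:], lam[h:]))

total = log_poisson(sum(counts), sum(lam)) + multiscale(counts, lam)
# direct and total agree: the likelihood factorizes over the tree,
# which is what lets the prior be matched split by split.
```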
Parameter expansion to accelerate EM: The PX-EM algorithm
, 1998
Abstract

Cited by 35 (7 self)
The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data. We accomplish this by parameter expansion: we expand the complete-data model while preserving the observed-data model, and use the expanded complete-data model to generate EM. This parameter-expanded EM (PX-EM) algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis. The PX-EM algorithm is illustrated for the multivariate t distribution, a random effects model, factor analysis, probit regression and a Poisson imaging model.
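For the t-distribution example, the contrast between plain EM and PX-EM can be reduced to the denominator of the scale update. The sketch below fits a univariate t with known degrees of freedom; the data, starting values, and iteration count are illustrative assumptions, and the PX variant divides by the sum of the E-step weights rather than by n:

```python
import numpy as np

# EM vs PX-EM for a univariate t-distribution with known nu,
# estimating location mu and squared scale s2.
rng = np.random.default_rng(2)
nu = 4.0
x = rng.standard_t(nu, size=2000) * 1.5 + 3.0      # scale 1.5, location 3

def fit_t(x, nu, n_iter=100, px=False):
    mu, s2 = np.mean(x), np.var(x)
    for _ in range(n_iter):
        d = (x - mu) ** 2 / s2
        w = (nu + 1.0) / (nu + d)                  # E-step weights
        mu = np.sum(w * x) / np.sum(w)             # M-step location
        num = np.sum(w * (x - mu) ** 2)
        # PX-EM: divide by sum of weights; plain EM: divide by n.
        s2 = num / np.sum(w) if px else num / len(x)
    return mu, s2

mu_em, s2_em = fit_t(x, nu)
mu_px, s2_px = fit_t(x, nu, px=True)
```

Both variants converge to the same maximum likelihood estimate; the parameter-expanded version simply takes fewer iterations to get there.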
The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
 SIAM J. Optim
, 2001
Abstract

Cited by 34 (6 self)
Abstract. We describe an optimization problem arising in the reconstruction of 3D medical images from positron emission tomography (PET). A mathematical model of the problem, based on the maximum likelihood principle, is posed as the minimization of a convex function of several million variables over the standard simplex. To solve a problem of these characteristics, we develop and implement a new algorithm, Ordered Subsets Mirror Descent, and demonstrate, both theoretically and computationally, that it is well suited to the PET reconstruction problem. Key words: positron emission tomography, maximum likelihood, image reconstruction, convex optimization, mirror descent.
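The core scheme, mirror descent over the simplex with entropic updates, fits in a few lines. The quadratic objective below is an arbitrary convex stand-in for the PET log-likelihood, and the step size and iteration count are untuned assumptions:

```python
import numpy as np

# Entropic mirror descent over the probability simplex: the update
# x <- x * exp(-eta * grad), renormalized, is the Bregman step for
# the negative-entropy mirror map.
rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(80, n))
b = rng.normal(size=80)

f = lambda x: 0.5 * float(np.sum((A @ x - b) ** 2))   # stand-in objective
grad = lambda x: A.T @ (A @ x - b)

def mirror_descent(f, grad, n, eta=0.01, n_iter=500):
    x = np.full(n, 1.0 / n)                  # start at the simplex center
    best = x
    for _ in range(n_iter):
        g = grad(x)
        x = x * np.exp(-eta * (g - g.max()))  # shifted for numerical stability
        x /= x.sum()                          # stays on the simplex
        if f(x) < f(best):
            best = x
    return best

x_md = mirror_descent(f, grad, n)
```

The multiplicative form keeps every iterate strictly inside the simplex without any explicit projection, which is what makes the method attractive at the millions-of-variables scale the paper targets.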
Information-theoretic image formation
 IEEE Transactions on Information Theory
, 1998
Abstract

Cited by 28 (5 self)
Abstract — The emergent role of information theory in image formation is surveyed. Unlike information-theoretic communication theory, information-theoretic imaging is far from a mature subject. The possible role of information theory in problems of image formation is to provide a rigorous framework for defining the imaging problem, for defining measures of optimality used to form estimates of images, for addressing issues associated with the development of algorithms based on these optimality criteria, and for quantifying the quality of the approximations. The definition of the imaging problem consists of an appropriate model for the data and an appropriate model for the reproduction space, which is the space within which image estimates take values. Each problem statement has an associated optimality criterion that measures the overall quality of an estimate. The optimality criteria include maximizing the likelihood function and minimizing mean squared error for stochastic problems, and minimizing squared error and discrimination for deterministic problems. The development of algorithms is closely tied to the definition of the imaging problem and the associated optimality criterion. Algorithms with a strong information-theoretic motivation are obtained by the method of expectation maximization. Related alternating minimization algorithms are discussed. In quantifying the quality of approximations, global and local measures are discussed. Global measures include the (mean) squared error and discrimination between an estimate and the truth, and the probability of error for recognition or hypothesis testing problems. Local measures include Fisher information. Index Terms — Image analysis, image formation, image processing, image reconstruction, image restoration, imaging, inverse problems, maximum-likelihood estimation, pattern recognition.
Wavelet Methods For The Inversion Of Certain Homogeneous Linear Operators In The Presence Of Noisy Data
, 1994
Abstract

Cited by 22 (1 self)
In this dissertation we explore the use of wavelets in certain linear inverse problems with discrete, noisy data. We observe discrete samples of a process y(u) = (Kf)(u) + z(u), where K is a linear operator, z is a noise process, and f is a function we wish to recover from the data. In the problems that we consider, the inverse of K, K^-1, either does not exist or is poorly behaved. Such problems are termed ill-posed, i.e., ones in which small changes in the data may lead to large changes in the recovered version of f. Our methods are most effective for problems where the operator K is homogeneous with respect to dilations, such as integration, fractional integration, convolution, and the Radon transform. The theoretical framework in which we work is that of Donoho's (1992) Wavelet-Vaguelette Decomposition (WVD). The WVD uses wavelets and vaguelettes (almost wavelets) to decompose the operator K. Although this formally resembles the Singular Value Decomposition (SVD), the use of...
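The instability of naive inversion, and the SVD-style remedy that the WVD refines, can be demonstrated with a truncated SVD on a smoothing operator. The kernel, noise level, and truncation threshold below are illustrative choices, and truncated SVD is used here only as a crude stand-in for the wavelet-vaguelette machinery:

```python
import numpy as np

# Ill-posed inversion of a smoothing (Gaussian-blur-like) operator:
# naive inversion amplifies noise through the tiny singular values,
# while truncating those modes stabilizes the reconstruction.
rng = np.random.default_rng(4)
n = 50
t = np.arange(n)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)  # smoothing kernel
f = np.sin(2 * np.pi * t / n)                               # true signal
y = K @ f + 1e-6 * rng.normal(size=n)                       # tiny noise

U, s, Vt = np.linalg.svd(K)
f_naive = Vt.T @ ((U.T @ y) / s)                 # invert every mode
keep = s > 1e-4 * s[0]                           # drop near-zero singular values
f_trunc = Vt.T[:, keep] @ ((U.T[keep] @ y) / s[keep])

err_naive = np.linalg.norm(f_naive - f)          # blows up
err_trunc = np.linalg.norm(f_trunc - f)          # stays small
```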