Results 1 - 10 of 74
Non-Uniform Random Variate Generation
, 1986
"... Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various ..."
Abstract

Cited by 620 (21 self)
 Add to MetaCart
Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
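As a concrete illustration of two of the classical paradigms named in this abstract (inversion and rejection), here is a minimal Python sketch, not taken from the survey; the choice of an exponential and a half-normal target, and the function names, are assumptions made for illustration.

```python
import math
import random

def exponential_by_inversion(lam):
    """Inversion: apply the inverse CDF F^{-1}(u) = -ln(1 - u) / lam to a uniform draw."""
    u = random.random()
    return -math.log(1.0 - u) / lam

def half_normal_by_rejection():
    """Rejection: propose from Exp(1) and accept with probability f(x) / (c * g(x)).

    Target f(x) = sqrt(2/pi) * exp(-x**2 / 2) on x >= 0, proposal g(x) = exp(-x),
    envelope constant c = sqrt(2e/pi), so the acceptance probability is exp(-(x - 1)**2 / 2).
    """
    while True:
        x = exponential_by_inversion(1.0)
        if random.random() <= math.exp(-((x - 1.0) ** 2) / 2.0):
            return x
```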
A generalized Gaussian image model for edge-preserving MAP estimation
 IEEE Trans. on Image Processing
, 1993
"... Absfrucf We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori MAP solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distri ..."
Abstract

Cited by 238 (34 self)
 Add to MetaCart
Abstract—We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
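For readers unfamiliar with the GGMRF, the prior is usually written in roughly the following form (reconstructed from standard usage, not quoted from the paper); b_{i,j} are nonnegative neighbor weights, p in [1, 2] is the shape parameter, and sigma the scale:

```latex
% GGMRF prior over pairwise cliques {i,j} and the resulting MAP estimate
p(x \mid \sigma) \;\propto\;
  \exp\!\Big( -\tfrac{1}{p\,\sigma^{p}} \sum_{\{i,j\}} b_{i,j}\,\lvert x_i - x_j \rvert^{p} \Big),
\qquad
\hat{x}_{\mathrm{MAP}} \;=\; \arg\min_{x}
  \Big\{ -\log p(y \mid x) \;+\; \tfrac{1}{p\,\sigma^{p}} \sum_{\{i,j\}} b_{i,j}\,\lvert x_i - x_j \rvert^{p} \Big\}.
```

Setting p = 2 recovers a Gaussian MRF, while p near 1 penalizes large differences less severely and so preserves edges; the invariance to data scaling mentioned in the abstract is tied to the homogeneity of this exponent in x/sigma.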
Deterministic edge-preserving regularization in computed imaging
 IEEE Trans. Image Processing
, 1997
"... Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such ..."
Abstract

Cited by 231 (23 self)
 Add to MetaCart
Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of tomography, but this method can be applied in a large number of applications in image processing.
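The alternate-minimization strategy can be sketched on a 1-D denoising special case as follows; this is not the paper's code, and the Charbonnier-style potential phi(t) = sqrt(delta^2 + t^2), the dense linear solve, and all names are illustrative assumptions.

```python
import numpy as np

def artur_like_denoise(y, lam=1.0, delta=0.1, iters=20):
    """Half-quadratic alternate minimization for a 1-D denoising special case.

    Objective: ||y - x||^2 + lam * sum_k phi(x[k+1] - x[k]),
    with the edge-preserving potential phi(t) = sqrt(delta**2 + t**2).
    """
    n = y.size
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    x = y.copy()
    for _ in range(iters):
        t = D @ x
        b = 0.5 / np.sqrt(delta**2 + t**2)   # auxiliary variable: b = phi'(t) / (2 t)
        A = np.eye(n) + lam * (D.T * b) @ D  # normal equations of the quadratic step
        x = np.linalg.solve(A, y)
    return x
```

At fixed b the criterion is quadratic in x (a single linear solve here), and at fixed x each b_k has a closed form, which is what makes the criterion half-quadratic.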
Nonlinear Image Recovery with Half-Quadratic Regularization
, 1993
"... One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function which enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce ..."
Abstract

Cited by 132 (0 self)
 Add to MetaCart
One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function which enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate, but often far less accessible. This is particularly true when the objective function is nonconvex and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function, so the original image estimate can be obtained by joint min...
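One common way to realize the construction described here, written generically (the exact potential and auxiliary penalty used in the paper may differ), is the additive augmentation below, where D collects local difference operators:

```latex
% Extended objective: x appears quadratically, the auxiliary b_k are decoupled
K(x, b) \;=\; \lVert y - A x \rVert^{2}
  \;+\; \lambda \sum_{k} \Big[ \big( (Dx)_k - b_k \big)^{2} + \psi(b_k) \Big],
\qquad
\phi(t) \;=\; \min_{b}\,\big\{ (t - b)^{2} + \psi(b) \big\}
\;\;\Rightarrow\;\;
\min_{b} K(x, b) \;=\; \lVert y - A x \rVert^{2} + \lambda \sum_{k} \phi\big( (Dx)_k \big).
```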
A unified approach to statistical tomography using coordinate descent optimization
 IEEE Trans. on Image Processing
, 1996
"... Abstract 1 Over the past ten years there has been considerable interest in statistically optimal reconstruction of image crosssections from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. ..."
Abstract

Cited by 108 (24 self)
 Add to MetaCart
Abstract—Over the past ten years there has been considerable interest in statistically optimal reconstruction of image cross-sections from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation-maximization (EM) algorithm. In this paper we propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixelwise computations known as iterative coordinate descent (ICD). We show that the ICD iterations require approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments typically 5 iterations). Other advantages of the ICD method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are simply incorporated.
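To make the pixel-wise pattern concrete, here is a minimal Python sketch of coordinate descent with a positivity constraint applied to a quadratic (penalized weighted least-squares) surrogate; the actual paper works with the tomographic log-likelihoods, so the objective, the dense matrices, and the variable names here are simplifying assumptions.

```python
import numpy as np

def icd_pwls(A, w, y, b, lam, n_iters=10):
    """Iterative coordinate descent (ICD) sketch for the quadratic surrogate

        f(x) = 0.5 * sum_i w[i] * (y[i] - (A @ x)[i])**2
               + lam * sum over neighbor pairs {j,k} of b[j, k] * (x[j] - x[k])**2

    subject to x >= 0.  A: system matrix, w: per-measurement weights,
    b: symmetric nonnegative neighbor weights with b[j, j] == 0.
    Each pixel update is the exact 1-D minimizer, clipped to be nonnegative.
    """
    n = A.shape[1]
    x = np.zeros(n)
    e = y - A @ x                                   # running residual y - A x
    for _ in range(n_iters):
        for j in range(n):
            a_j = A[:, j]
            grad = -a_j @ (w * e) + 2.0 * lam * (b[j] @ (x[j] - x))
            curv = a_j @ (w * a_j) + 2.0 * lam * b[j].sum()
            x_new = max(0.0, x[j] - grad / curv)    # exact 1-D minimizer, then clip
            e -= a_j * (x_new - x[j])               # keep the residual consistent
            x[j] = x_new
    return x
```

Because the running residual e is refreshed after every pixel, each 1-D minimization costs roughly one column of A, which is consistent with the abstract's remark that a full ICD pass costs about as much as one EM iteration.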
Using a Deformable Surface Model to Obtain a Shape Representation of the Cortex
 IEEE Trans. Med. Imag
, 1996
"... The problem of obtaining a mathematical representation of the cortex of the human brain is examined. A parametrization of the outer cortex is first obtained using a deformable surface algorithm which, motivated by the structure of the cortex, is constructed to find the central layer of thick surface ..."
Abstract

Cited by 87 (11 self)
 Add to MetaCart
The problem of obtaining a mathematical representation of the cortex of the human brain is examined. A parametrization of the outer cortex is first obtained using a deformable surface algorithm which, motivated by the structure of the cortex, is constructed to find the central layer of thick surfaces. Based on this parametrization, a hierarchical representation of the cortical structure is proposed through its depth map and its curvature maps at various scales. Various experiments on magnetic resonance data are presented. I. Introduction The problem of finding and parametrizing boundaries in two- and three-dimensional images is often an important step toward shape visualization and analysis, and has been extensively studied in the image analysis and computer vision literature. Several methods have been proposed, based both on bottom-up and top-down procedures. One very promising model which combines robustness to noise and the flexibility to represent a broad class of shapes is base...
Penalized Weighted Least-Squares Image Reconstruction for Positron Emission Tomography
 IEEE Trans. Med. Imag.
, 1994
"... This paper presents an image reconstruction method for positronemission tomography (PET) based on a penalized, weighted leastsquares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, we argue statistically that a leastsquares objective function is as approp ..."
Abstract

Cited by 86 (38 self)
 Add to MetaCart
This paper presents an image reconstruction method for positron emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, we argue statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. We propose a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminat...
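In generic notation (not quoted from the paper), a PWLS objective has the form below, with data-based weights w_i intended to approximate inverse measurement variances and a roughness penalty R weighted by beta:

```latex
% PWLS objective with a nonnegativity constraint
\hat{x} \;=\; \arg\min_{x \ge 0}\;
  \tfrac{1}{2}\,(y - A x)^{\mathsf T}\,\mathrm{diag}(w)\,(y - A x) \;+\; \beta\,R(x),
\qquad w_i \;\approx\; 1 \big/ \widehat{\mathrm{Var}}(y_i).
```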
Penalized Maximum-Likelihood Image Reconstruction using Space-Alternating Generalized EM Algorithms
 IEEE Trans. Image Processing
, 1995
"... Most expectationmaximization (EM) type algorithms for penalized maximumlikelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penal ..."
Abstract

Cited by 82 (31 self)
 Add to MetaCart
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical over-relaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints.
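Roughly, and in a standard notation for the SAGE framework rather than this paper's exact formulation, each update chooses an index set S of parameters and a hidden-data space Z^S for that set, then maximizes an EM-style surrogate over x_S only (with the penalty added to the surrogate in the penalized case):

```latex
Q^{S}\!\big(x_S \,;\, x^{(n)}\big) \;=\;
  \mathbb{E}\big[\, \log f\big( Z^{S} ;\, x_S,\; x^{(n)}_{\setminus S} \big) \,\big|\, Y = y,\; x^{(n)} \,\big],
\qquad
x^{(n+1)}_S \;=\; \arg\max_{x_S \ge 0}\; Q^{S}\!\big(x_S \,;\, x^{(n)}\big),
\quad
x^{(n+1)}_{\setminus S} \;=\; x^{(n)}_{\setminus S}.
```

Each such step does not decrease the objective, and choosing Z^S less informative than the full complete-data space yields a less strongly curved surrogate and hence larger steps, which is the source of the acceleration described above.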
A Bayesian Paradigm for Dynamic Graph Layout
, 1997
"... Dynamic graph layout refers to the layout of graphs that change over time. These changes are due to user interaction, algorithms, or other underlying processes determining the graph. Typically, users spend a noteworthy amount of time to get familiar with a layout, i.e. ..."
Abstract

Cited by 52 (13 self)
 Add to MetaCart
Dynamic graph layout refers to the layout of graphs that change over time. These changes are due to user interaction, algorithms, or other underlying processes determining the graph. Typically, users spend a noteworthy amount of time to get familiar with a layout, i.e.
ML parameter estimation for Markov random fields, with applications to Bayesian tomography
 IEEE Trans. on Image Processing
, 1998
"... Abstract 1 Markov random fields (MRF) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (so ..."
Abstract

Cited by 49 (18 self)
 Add to MetaCart
Abstract—Markov random fields (MRFs) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (sometimes referred to as hyperparameters) is difficult in practice for two reasons: 1) direct parameter estimation for MRFs is known to be mathematically and numerically challenging; 2) parameters cannot be directly estimated because the true image cross-section is unavailable. In this paper, we propose a computationally efficient scheme to address both these difficulties for a general class of MRF models, and we derive specific methods of parameter estimation for the MRF model known as a generalized Gaussian MRF (GGMRF). The first section of the paper derives methods of direct estimation of scale and shape parameters for a general continuously valued MRF. For the GGMRF case, we show that the ML estimate of the scale parameter, σ, has a simple closed-form solution, and we present an efficient scheme for computing the ML estimate of the shape parameter, p, by an off-line numerical computation of the dependence of the partition function on p.
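A sketch of the closed form for σ referred to here, reconstructed from the homogeneity of the GGMRF energy rather than quoted from the paper (unit neighbor weights and a 4-neighbor system are illustrative assumptions):

```python
import numpy as np

def ggmrf_sigma_ml(x, p=1.1):
    """ML estimate of the GGMRF scale parameter from a fully observed image x.

    For the prior p(x | sigma) proportional to
        exp( -(1 / (p * sigma**p)) * sum over neighbor pairs |x_i - x_j|**p ),
    the partition function scales as sigma**N, which gives the closed form
        sigma_hat**p = u(x) / N,  with  u(x) = sum over pairs |x_i - x_j|**p.
    Unit neighbor weights b_{ij} = 1 are an illustrative assumption.
    """
    dh = np.abs(np.diff(x, axis=1)) ** p   # horizontal neighbor differences
    dv = np.abs(np.diff(x, axis=0)) ** p   # vertical neighbor differences
    u = dh.sum() + dv.sum()
    return (u / x.size) ** (1.0 / p)
```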