Results 1–10 of 10
Grouped-Coordinate Ascent Algorithms for Penalized-Likelihood Transmission Image Reconstruction
IEEE Trans. Med. Imaging, 1996
"... This paper presents a new class of algorithms for penalizedlikelihood reconstruction of attenuation maps from lowcount transmission scans. We derive the algorithms by applying to the transmission loglikelihood a version of the convexity technique developed by De Pierro for emission tomography. The ..."
Abstract

Cited by 49 (22 self)
 Add to MetaCart
This paper presents a new class of algorithms for penalized-likelihood reconstruction of attenuation maps from low-count transmission scans. We derive the algorithms by applying to the transmission log-likelihood a version of the convexity technique developed by De Pierro for emission tomography. The new class includes the single-coordinate ascent (SCA) algorithm and Lange's convex algorithm for transmission tomography as special cases. The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms. (1) Fewer exponentiations are required than in the transmission ML-EM algorithm or in the SCA algorithm. (2) The algorithms intrinsically accommodate nonnegativity constraints, unlike many gradient-based methods. (3) The algorithms are easily parallelizable, unlike the SCA algorithm and perhaps line-search algorithms. We show that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations. An ex...
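As a loose illustration of the grouped-coordinate idea, here is generic grouped coordinate ascent on a concave quadratic objective with nonnegativity constraints; the objective, the grouping, and the closed-form updates are assumptions for this sketch, not the paper's transmission-tomography algorithm:

```python
import numpy as np

# Illustrative sketch only (NOT the paper's algorithm): grouped coordinate
# ascent for f(x) = -0.5 x'Ax + b'x subject to x >= 0, with A symmetric
# positive definite so each one-dimensional update has a closed form.
def grouped_coordinate_ascent(A, b, groups, iters=100):
    x = np.zeros(len(b))
    for _ in range(iters):
        for group in groups:               # visit one group of coordinates at a time
            for j in group:
                grad_j = b[j] - A[j] @ x   # partial derivative of f at the current x
                # exact maximizer along coordinate j, projected onto x_j >= 0
                x[j] = max(0.0, x[j] + grad_j / A[j, j])
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, -2.0, 3.0])
x = grouped_coordinate_ascent(A, b, groups=[[0, 1], [2]])
print(x)
```

Each update stays closed-form and respects the nonnegativity constraint by simple projection, which is the property that point (2) of the abstract highlights.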
Estimating Normalizing Constants and Reweighting Mixtures in Markov Chain Monte Carlo
1994
"... Markov chain Monte Carlo (the MetropolisHastings algorithm and the Gibbs sampler) is a general multivariate simulation method that permits sampling from any stochastic process whose density is known up to a constant of proportionality. It has recently received much attention as a method of carrying ..."
Abstract

Cited by 40 (0 self)
 Add to MetaCart
Markov chain Monte Carlo (the Metropolis-Hastings algorithm and the Gibbs sampler) is a general multivariate simulation method that permits sampling from any stochastic process whose density is known up to a constant of proportionality. It has recently received much attention as a method of carrying out Bayesian, likelihood, and frequentist inference in analytically intractable problems. Although many applications of Markov chain Monte Carlo do not need estimation of normalizing constants, three do: calculation of Bayes factors, calculation of likelihoods in the presence of missing data, and importance sampling from mixtures. Here reverse logistic regression is proposed as a solution to the problem of estimating normalizing constants, and convergence and asymptotic normality of the estimates are proved under very weak regularity conditions. Markov chain Monte Carlo is most useful when combined with importance reweighting so that a Monte Carlo sample from one distribution can be used fo...
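The Metropolis-Hastings step the abstract refers to needs only an unnormalized density, which a short sketch makes concrete (the target and proposal below are assumptions for illustration):

```python
import math
import random

# Random-walk Metropolis-Hastings: samples from a density known only up to a
# normalizing constant. Target assumed for illustration: unnormalized N(0, 1).
def unnorm_density(x):
    return math.exp(-0.5 * x * x)     # the constant 1/sqrt(2*pi) is never needed

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)   # symmetric random-walk proposal
        # the acceptance probability uses only a *ratio* of unnormalized densities
        if rng.random() < min(1.0, unnorm_density(proposal) / unnorm_density(x)):
            x = proposal
        samples.append(x)                     # a rejected move repeats the state
    return samples

samples = metropolis_hastings(20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```

Because only a ratio of densities enters the acceptance probability, the normalizing constant cancels; recovering that constant (for Bayes factors, missing-data likelihoods, or importance-sampling mixtures) is exactly what requires the extra machinery the paper develops.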
Block-relaxation Algorithms in Statistics
1994
"... this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some wellknown classes of statistical algorithms fit in this common class. The subclasses are, in logical order, blockrelaxation methods augmentation methods majoriza ..."
Abstract

Cited by 29 (1 self)
 Add to MetaCart
In this paper we discuss four such classes of algorithms. Or, more precisely, we discuss a single class of algorithms, and we show how some well-known classes of statistical algorithms fit in this common class. The subclasses are, in logical order: block-relaxation methods, augmentation methods, majorization methods, Expectation-Maximization, Alternating Least Squares, and Alternating Conditional Expectations.
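A minimal majorization example in the spirit of this common class (my own illustration, not taken from the paper): to minimize f(x) = Σᵢ |x − aᵢ|, whose minimizer is a median of the aᵢ, majorize each absolute value at the current iterate by a quadratic and minimize the resulting surrogate in closed form:

```python
# Majorization (MM) sketch: each |u| is majorized at u_k != 0 by
#   |u| <= u^2 / (2 |u_k|) + |u_k| / 2,
# so minimizing sum_i |x - a_i| reduces to iteratively reweighted least squares.
def mm_median(data, iters=200, eps=1e-9):
    x = sum(data) / len(data)                    # start from the mean
    for _ in range(iters):
        w = [1.0 / max(abs(x - a), eps) for a in data]      # majorizer weights
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)  # surrogate minimizer
    return x

m = mm_median([1.0, 2.0, 3.0, 10.0, 100.0])
print(m)   # approaches the median, 3.0
```

Each iteration decreases the original objective because the surrogate touches it at the current iterate and lies above it everywhere else, which is the defining property of a majorization method.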
Accelerated Quantification of Bayesian Networks with Incomplete Data
In Proceedings of the First International Conference on Knowledge Discovery and Data Mining, 1995
"... Probabilistic expert systems based on Bayesian networks (BNs) require initial specification of both a qualitative graphical structure and quantitative assessment of conditional probability tables. This paper considers statistical batch learning of the probability tables on the basis of incomple ..."
Abstract

Cited by 28 (2 self)
 Add to MetaCart
Probabilistic expert systems based on Bayesian networks (BNs) require initial specification of both a qualitative graphical structure and a quantitative assessment of conditional probability tables. This paper considers statistical batch learning of the probability tables on the basis of incomplete data and expert knowledge. The EM algorithm with a generalized conjugate gradient acceleration method has been dedicated to quantification of BNs by maximum posterior likelihood estimation for a superclass of the recursive graphical models. This new class of models allows a great variety of local functional restrictions to be imposed on the statistical model, which thereby extends the control and applicability of the constructed method for quantifying BNs.

Introduction
The construction of probabilistic expert systems (Pearl 1988, Andreassen et al. 1989) based on Bayesian networks (BNs) is often a challenging process. It is typically divided into two parts: First the constructi...
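For readers unfamiliar with the plain, unaccelerated EM iteration that the conjugate gradient method speeds up, here is a generic E-step/M-step sketch; the two-component Gaussian mixture is an assumption for illustration, not the paper's Bayesian-network setting:

```python
import math

# Plain EM for a two-component 1-D Gaussian mixture with unit variances:
# unknown means mu1, mu2 and mixing weight pi1. Illustrates the E-step /
# M-step pattern only; NOT the paper's accelerated BN quantification method.
def em_mixture(data, iters=100):
    mu1, mu2, pi1 = min(data), max(data), 0.5       # crude initialization
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in data:
            a = pi1 * math.exp(-0.5 * (x - mu1) ** 2)
            b = (1.0 - pi1) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing proportion
        s1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - s1)
        pi1 = s1 / len(data)
    return mu1, mu2, pi1

mu1, mu2, pi1 = em_mixture([-2.1, -1.9, -2.0, -1.8, 2.0, 2.2, 1.9, 2.1])
print(mu1, mu2, pi1)
```

EM's per-iteration cost is low but its convergence is only linear near the optimum, which is what motivates gradient-based accelerations like the one in the paper.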
Smoothing Spline Analysis Of Variance For Polychotomous Response Data
1998
"... We consider the penalized likelihood method with smoothing spline ANOVA for estimating nonparametric functions to data involving a polychotomous response. The fitting procedure involves minimizing the penalized likelihood in a Reproducing Kernel Hilbert Space. One Step Block SORNewtonRaphson Algor ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
We consider the penalized likelihood method with smoothing spline ANOVA for estimating nonparametric functions from data involving a polychotomous response. The fitting procedure involves minimizing the penalized likelihood in a Reproducing Kernel Hilbert Space. A one-step Block SOR-Newton-Raphson algorithm is used to solve the minimization problem. Generalized Cross-Validation or unbiased risk estimation is used to empirically assess the amount of smoothing (which controls the bias and variance trade-off) at each one-step Block SOR-Newton-Raphson iteration. Under some regular smoothness conditions, the one-step Block SOR-Newton-Raphson will produce a sequence which converges to the minimizer of the penalized likelihood for the fixed smoothing parameters. Monte Carlo simulations are conducted to examine the performance of the algorithm. The method is applied to polychotomous data from the Wisconsin Epidemiological Study of Diabetic Retinopathy to estimate the risks of cause-specific mortality given several potential risk factors at the start of the study. Strategies to obtain smoothing spline estimates for large data sets with polychotomous response are also proposed in this thesis. Simulation studies are conducted to check the performance of the proposed method.
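The criterion being minimized has the standard penalized-likelihood form (a generic formulation with notation assumed here, not quoted from the thesis):

```latex
\min_{f \in \mathcal{H}} \; -\frac{1}{n} \sum_{i=1}^{n} \log p\bigl(y_i \mid f(x_i)\bigr) \;+\; \lambda\, J(f)
```

where \(\mathcal{H}\) is the reproducing kernel Hilbert space, \(J(f)\) is a roughness penalty, and the smoothing parameter \(\lambda\), selected by GCV or unbiased risk estimation, governs the bias and variance trade-off mentioned above.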
Majorization Methods In Statistics
2000
"... It is a pleasure to comment on such a wellwritten and obviously important paper. We agree with the basic explicit message of Lange, Hunter, and Yang (LHY). Their socalled "optimization transfer" algorithms form a very interesting and versatile class. One of the main reasons for this is that tay ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
It is a pleasure to comment on such a well-written and obviously important paper. We agree with the basic explicit message of Lange, Hunter, and Yang (LHY). Their so-called "optimization transfer" algorithms form a very interesting and versatile class. One of the main reasons for this is that tailor-made statistical techniques written in interpreted languages are becoming more and more common. For such techniques, and in such computational environments, tailor-made algorithms in the "optimization transfer" class are, at least initially, very convenient, although perhaps ultimately not optimal. We also agree with what we read as a more implicit message in LHY. The usual derivations of the EM algorithm tend to be somewhat mysterious, because they confound statistics and numerical analysis. The notion of likelihood and of missing data can be used to provide statistical interpretations of the algorithm, but the engine that drives EM is majorization based on Jensen's inequality.
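The Jensen's-inequality majorization that drives EM can be written out in one line (a standard derivation, with notation assumed here): for any distribution q over the missing data z,

```latex
\log L(\theta) \;=\; \log \sum_{z} p(x, z \mid \theta) \;\ge\; \sum_{z} q(z)\, \log \frac{p(x, z \mid \theta)}{q(z)},
```

with equality when \(q(z) = p(z \mid x, \theta_k)\). The E-step therefore builds a minorizing surrogate of the log-likelihood at the current iterate, and the M-step maximizes that surrogate, which is precisely an optimization-transfer step.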
Bayesian Variable Selection in Qualitative Models by Kullback-Leibler Projections
1998
"... The Bayesian variable selection method proposed in the paper is based on the evaluation of the KullbackLeibler distance between the full (or encompassing) model and the submodels. The implementation of the method does not require a separate prior modeling on the submodels since the corresponding pa ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
The Bayesian variable selection method proposed in the paper is based on the evaluation of the Kullback-Leibler distance between the full (or encompassing) model and the submodels. The implementation of the method does not require separate prior modeling on the submodels, since the corresponding parameters for the submodels are defined as the Kullback-Leibler projections of the full model parameters. The result of the selection procedure is the submodel with the smallest number of covariates which is at an acceptable distance from the full model. We introduce the notion of explanatory power of a model and scale the maximal acceptable distance in terms of the explanatory power of the full model. Moreover, an additivity property between embedded submodels shows that our selection procedure is equivalent to selecting the submodel with the smallest number of covariates which has sufficient explanatory power. We illustrate the performance of this method on a breast cancer dataset, where they...
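The Kullback-Leibler projection on which the method rests can be stated generically (notation is assumed here for illustration): the submodel parameter is defined as

```latex
\tilde{\theta} \;=\; \arg\min_{\theta \in \Theta_{\mathrm{sub}}} \mathrm{KL}\bigl(p(\cdot \mid \theta_{\mathrm{full}}) \,\big\|\, p(\cdot \mid \theta)\bigr), \qquad \mathrm{KL}(f \,\|\, g) \;=\; \int f(y) \log \frac{f(y)}{g(y)} \, dy,
```

so no separate prior is needed on a submodel: its parameters are deterministic functions of the full-model parameters.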
Restricted concentration models – graphical Gaussian models with ...
"... In this paper we introduce restricted concentration models (RCMs) as a class of graphical models for the multivariate Gaussian distribution in which some elements of the concentration matrix are restricted to being identical is introduced. An estimation algorithm for RCMs, which is guaranteed ..."
Abstract
 Add to MetaCart
In this paper we introduce restricted concentration models (RCMs), a class of graphical models for the multivariate Gaussian distribution in which some elements of the concentration matrix are restricted to being identical. An estimation algorithm for RCMs, which is guaranteed to converge to the maximum likelihood estimate, is presented.
Measurement by Subjective Estimation: Testing for Separable Representations
"... Studying how individuals compare two given quantitative stimuli, say d1 and d2, is a fundamental problem. One very common way to address it is through ratio estimation, that is to ask individuals not to give values to d1 and d2, but rather to give their estimates of the ratio p = d1/d2. Several psyc ..."
Abstract
 Add to MetaCart
Studying how individuals compare two given quantitative stimuli, say d1 and d2, is a fundamental problem. One very common way to address it is through ratio estimation, that is, to ask individuals not to give values to d1 and d2, but rather to give their estimates of the ratio p = d1/d2. Several psychophysical theories (the best known being Stevens' power law) claim that this ratio cannot be known directly and that there are cognitive distortions in the apprehension of the different quantities. These theories result in the so-called separable representations (Narens 1996, Luce 2002), which include Stevens' model as a special case. In this paper we propose a general statistical framework that allows for testing in a rigorous way whether the separable representation theory is grounded or not. We conclude in favor of it, but strongly reject Stevens' model. As a by-product, we provide estimates of the psychophysical functions of interest.
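In this literature a separable representation is usually written in a form like the following (a standard schematic; the symbols W and \(\psi\) are assumptions for illustration, not taken from the paper):

```latex
p \;=\; W\!\left( \frac{\psi(d_1)}{\psi(d_2)} \right),
```

where \(\psi\) is a psychophysical scaling function and W a subjective distortion function; Stevens' power-law model is recovered as the special case \(W(u) = u\) and \(\psi(d) = d^{\beta}\), giving \(p = (d_1/d_2)^{\beta}\).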