## Approximate Dirichlet Process Computing in Finite Normal Mixtures: Smoothing and Prior Information (2000)

Venue: Journal of Computational and Graphical Statistics

Citations: 25 (3 self)

### BibTeX

```bibtex
@ARTICLE{Ishwaran00approximatedirichlet,
  author  = {Hemant Ishwaran and Lancelot F. James},
  title   = {Approximate Dirichlet Process Computing in Finite Normal Mixtures: Smoothing and Prior Information},
  journal = {Journal of Computational and Graphical Statistics},
  year    = {2000},
  volume  = {11},
  pages   = {508--532}
}
```


### Citations

2534 | An Introduction to the Bootstrap - Efron, Tibshirani - 1993 |

1235 | Information theory and an extension of the maximum likelihood principle - Akaike - 1973 |

Citation Context: ... (i) Schwarz's BIC criterion (Schwarz 1978), which corresponds to the penalty

$$a_n(\tilde P_N) = \tfrac{1}{2}\log n \cdot \operatorname{dimension}(\tilde P_N) = \log n \left( \sum_{k=1}^N I\{r_k > 0\} - \tfrac{1}{2} \right),$$

and (ii) Akaike's AIC criterion (Akaike 1973),

$$a_n(\tilde P_N) = \operatorname{dimension}(\tilde P_N) = 2 \sum_{k=1}^N I\{r_k > 0\} - 1.$$

See Section 4 for an example illustrating this method. Remark 1. Notice that the blocked Gibbs algorithm makes use of blocked updates... |
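Both penalties in the quoted passage are simple functions of the number of occupied mixture components. A minimal sketch (the function names and the occupancy-vector representation are our own illustration, not the paper's code), assuming the mixture dimension is 2·(number of occupied components) − 1, as in the formulas above:

```python
import math

def mixture_dimension(occupancy):
    """Effective dimension of the fitted mixture: the occupied components
    contribute k means plus k - 1 free weights, i.e. 2k - 1 parameters."""
    k = sum(1 for r in occupancy if r > 0)
    return 2 * k - 1

def bic_penalty(occupancy, n):
    """BIC penalty: a_n = (1/2) * log(n) * dimension."""
    return 0.5 * math.log(n) * mixture_dimension(occupancy)

def aic_penalty(occupancy):
    """AIC penalty: a_n = dimension."""
    return mixture_dimension(occupancy)

# Example: 3 of 5 components occupied, n = 100 observations.
occ = [40, 0, 35, 25, 0]
print(mixture_dimension(occ))  # 5
```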

708 | A Bayesian analysis of some nonparametric problems - Ferguson - 1973 |

Citation Context: ...t and Robert (1994), Escobar and West (1995), Chib (1995), Richardson and Green (1997), and Roeder and Wasserman (1997). As far back as Ferguson (1983) it has been realized that the Dirichlet process (Ferguson 1973, 1974) can be used as a powerful nonparametric approach for studying this model. However, earlier attempts at Dirichlet process computing involving mixtures of normals were based on Monte Carlo simu... |

623 | Non-Uniform Random Variate Generation - Devroye - 1986 |

Citation Context: ... $n_j > 2$: In this case, $\tau$ is a truncated Gamma($n_j/2 - 1$) random variable. Let

$$\gamma(a, t) = \frac{1}{\Gamma(a)} \int_0^t u^{a-1} \exp(-u)\, du, \qquad a > 0,$$

be the normalized incomplete gamma function. Then by inverse sampling (see Devroye 1986), it follows that

$$\tau \overset{D}{=} \gamma^{-1}\!\left(a_j,\; \gamma(a_j, C_j/T) + U \left(1 - \gamma(a_j, C_j/T)\right)\right),$$

where $a_j = n_j/2 - 1$ and $U \sim \mathrm{Uniform}[0, 1]$. The functions $\gamma(a, \cdot)$ and $\gamma^{-1}(a, \cdot)$ are easy to compute in various software pac... |
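The inverse-sampling recipe above can be sketched in a few self-contained lines. This is our own illustration, not the paper's code: the regularized incomplete gamma function is computed by its power series and inverted by bisection, which is adequate for moderate shape parameters and truncation points (production code would call a library routine such as `scipy.special.gammaincinv`):

```python
import math
import random

def reg_inc_gamma(a, t, terms=300):
    """Regularized lower incomplete gamma, gamma(a, t) in the quoted
    notation, via the series t^a e^{-t} sum_k t^k / Gamma(a + k + 1).
    Accurate for moderate a and t (roughly t < 300)."""
    if t <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / math.gamma(a + 1.0)
    for k in range(terms):
        total += term
        term *= t / (a + k + 1.0)
    return (t ** a) * math.exp(-t) * total

def inv_reg_inc_gamma(a, p, hi=100.0):
    """Invert gamma(a, .) on [0, hi] by bisection."""
    lo = 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if reg_inc_gamma(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_truncated_gamma(a, lower, rng=random):
    """Draw from Gamma(a, 1) conditioned on exceeding `lower`, by mapping
    U ~ Uniform[0, 1] onto [gamma(a, lower), 1] and inverting the CDF."""
    p0 = reg_inc_gamma(a, lower)
    u = rng.random()
    return inv_reg_inc_gamma(a, p0 + u * (1.0 - p0))
```

Here `a` plays the role of $a_j = n_j/2 - 1$ and `lower` the role of $C_j/T$ in the quoted passage.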

440 | On Bayesian analysis of mixtures with an unknown number of components - Richardson, Green - 1997 |

397 | Bayesian density estimation and inference using mixtures - Escobar, West - 1995 |

372 | Markov chain sampling methods for Dirichlet process mixture models - Neal - 1998 |

324 | Marginal Likelihood from the Gibbs Output - Chib - 1995 |

323 | Estimating the dimension of a model - Schwarz - 1978 |

Citation Context: ... where $l_n(Q) = \sum_{i=1}^n \log f_Q(X_i)$ is the log-likelihood evaluated at a mixing distribution $Q$ and $a_n(Q)$ is the penalty for $Q$. We will consider two different penalties: (i) Schwarz's BIC criterion (Schwarz 1978), which corresponds to the penalty

$$a_n(\tilde P_N) = \tfrac{1}{2}\log n \cdot \operatorname{dimension}(\tilde P_N) = \log n \left( \sum_{k=1}^N I\{r_k > 0\} - \tfrac{1}{2} \right),$$

and (ii) Akaike's AIC criterion (Akaike 1973), $a_n(\tilde P_N) = \operatorname{dimension}(\tilde P_N) = \ldots$ |

307 | A constructive definition of dirichlet priors - Sethuraman - 1994 |

212 | Gibbs sampling methods for stick-breaking priors - Ishwaran, James - 2001 |

Citation Context: ...tile Gibbs sampler for $Y_i$, one needs to convert these posterior values into inference for $Q_0$, which in the end will require some form of approximation to the Dirichlet process (see Theorem 3 from Ishwaran and James 2001 for a general method for converting posterior $Y_i$ values into draws from the posterior random measure). Our argument is that one might as well start with a Dirichlet process approximation, which as a... |

160 | Prior distributions on space of probability measures - Ferguson - 1974 |

143 | Estimation of Finite Mixture Distributions through Bayesian Sampling - Diebolt, Robert - 1994 |

141 | Estimating mixture of dirichlet process models - MacEachern, Müller - 1998 |

114 | Practical Bayesian density estimation using mixtures of normals - Roeder, Wasserman - 1997 |

106 | On a class of Bayesian nonparametric estimates. I. Density estimates - Lo - 1984 |

103 | Estimating Normal means with a Dirichlet process prior - Escobar - 1994 |

101 | Estimating normal means with a conjugate style dirichlet process prior - MacEachern - 1994 |

97 | A semiparametric Bayesian model for randomised block designs - Bush, MacEachern - 1996 |

91 | Hierarchical priors and mixture models, with application in regression and density estimation. In: Aspects of Uncertainty: A tribute to - West, Müller, et al. - 1994 |

65 | Bayesian Density Estimation by Mixtures of Normal Distributions - Ferguson - 1983 |

65 | Size-biased sampling of Poisson point processes and excursions. Probab. Theory Related Fields 92 - Perman, Pitman, et al. - 1992 |

57 | Density estimation with confidence sets exemplified by superclusters and voids in the galaxies - Roeder - 1990 |

53 | Bayesian Curve Fitting Using Multivariate Normal Mixtures - Müller, Erkanli, et al. - 1996 |

39 | On a class of Bayesian nonparametric estimates - Lo - 1984 |

35 | The mode tree: a tool for visualization of nonparametric density features - Minnotte, Scott - 1993 |

31 | Markov chain Monte Carlo in approximate Dirichlet and beta two-parameter process hierarchical models - Ishwaran, Zarepour - 2000 |

Citation Context: ...les we may only be interested in modeling the mean with a mixture distribution, with the variance component modeled parametrically as a positive parameter (see the analysis of Section 4; also consult Ishwaran and Zarepour 2000 for further applications of this method via the blocked Gibbs sampler). This is easily accommodated within the framework here by setting $\sigma_1 = \sigma_2 = \cdots = \sigma_N$ to equal some parameter, say, $\sigma_0$, and ... |

28 | Approximating distributions of random functionals of Ferguson-Dirichlet priors - Muliere, Tardella - 1998 |

28 | Convergence of Dirichlet Measures and the Interpretation of their Parameter - Sethuraman, Tiwari - 1982 |

Citation Context: ..., N (3), where $V_1, V_2, \ldots, V_{N-1}$ are i.i.d. Beta(1, $\alpha$) random variables and we set $V_N = 1$ to ensure that $\sum_{k=1}^N p_k = 1$. By the construction given in Sethuraman (1994) (see also McCloskey 1965; Sethuraman and Tiwari 1982; Donnelly and Joyce 1989; Perman, Pitman and Yor 1992), it easily follows that $P_N$ converges almost surely to a Dirichlet process with measure $\alpha H$, written as $\mathrm{DP}(\alpha H)$, i.e. $P_N \overset{a.s.}{\to} \mathrm{DP}(\alpha H)$. We refer to $H$ as the reference distr... |
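The finite stick-breaking construction described in this context is straightforward to simulate. A minimal sketch (our own illustration, not the paper's code; `random.betavariate` supplies the Beta(1, α) draws):

```python
import random

def stick_breaking_weights(N, alpha, rng=random):
    """Finite stick-breaking: p_1 = V_1 and, for k >= 2,
    p_k = (1 - V_1)(1 - V_2)...(1 - V_{k-1}) V_k, with V_k ~ Beta(1, alpha)
    i.i.d. and V_N = 1 so that the N weights sum to 1."""
    weights, remaining = [], 1.0
    for k in range(N):
        v = 1.0 if k == N - 1 else rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= (1.0 - v)
    return weights

# Example: 50 weights with concentration alpha = 1.5.
w = stick_breaking_weights(50, 1.5)
print(round(sum(w), 10))  # 1.0
```

Pairing these weights with i.i.d. atoms drawn from a reference distribution $H$ gives the random measure $P_N$ that, as the quoted passage notes, converges almost surely to $\mathrm{DP}(\alpha H)$ as $N \to \infty$.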

23 | Philatelic mixtures and multimodal densities - Izenman, Sommer - 1988 |

22 | Estimating the means of several normal populations by nonparametric estimation of the distribution of the means - Escobar - 1988 |

22 | Optimal rate of convergence for finite mixture models - Chen - 1995 |

21 | A model for the distribution of individuals by species in an environment - McCloskey - 1965 |

Citation Context: ... $V_k$, $k = 2, \ldots, N$, (3) where $V_1, V_2, \ldots, V_{N-1}$ are i.i.d. Beta(1, $\alpha$) random variables and we set $V_N = 1$ to ensure that $\sum_{k=1}^N p_k = 1$. By the construction given in Sethuraman (1994) (see also McCloskey 1965; Sethuraman and Tiwari 1982; Donnelly and Joyce 1989; Perman, Pitman and Yor 1992), it easily follows that $P_N$ converges almost surely to a Dirichlet process with measure $\alpha H$, written as $\mathrm{DP}(\alpha H)$, i.e. ... |

21 | Bayesian model selection in finite mixtures by marginal density decompositions - Ishwaran, James, et al. - 2001 |

20 | Continuity and weak convergence of ranked and size-biased permutations on the infinite simplex. Stochastic Process - Donnelly, Joyce - 1989 |

Citation Context: ... $V_{N-1}$ are i.i.d. Beta(1, $\alpha$) random variables and we set $V_N = 1$ to ensure that $\sum_{k=1}^N p_k = 1$. By the construction given in Sethuraman (1994) (see also McCloskey 1965; Sethuraman and Tiwari 1982; Donnelly and Joyce 1989; Perman, Pitman and Yor 1992), it easily follows that $P_N$ converges almost surely to a Dirichlet process with measure $\alpha H$, written as $\mathrm{DP}(\alpha H)$, i.e. $P_N \overset{a.s.}{\to} \mathrm{DP}(\alpha H)$. We refer to $H$ as the reference distr... |

20 | Computations of Mixtures of Dirichlet Processes - Kuo - 1986 |


6 | Sampling methods for bayesian nonparametric inference involving stochastic processes - Walker, Damien - 1998 |




1 | Using Kernel Density Estimates to Investigate Multimodality - Silverman - 1981 |