Results 1–10 of 12
The consistency of the BIC Markov order estimator.
Abstract

Cited by 56 (3 self)
The Bayesian Information Criterion (BIC) estimates the order of a Markov chain (with finite alphabet A) from observation of a sample path x_1, x_2, …, x_n as the value k = k̂ that minimizes the sum of the negative logarithm of the k-th order maximum likelihood and the penalty term (|A|^k (|A| − 1) / 2) log n. We show that k̂ equals the correct order of the chain, eventually almost surely as n → ∞, thereby strengthening earlier consistency results that assumed an a priori bound on the order. A key tool is a strong ratio-typicality result for Markov sample paths. We also show that the Bayesian estimator, or minimum description length estimator, of which the BIC estimator is an approximation, fails to be consistent for the uniformly distributed i.i.d. process. AMS 1991 subject classification: Primary 62F12, 62M05; Secondary 62F13, 60J10. Key words and phrases: Bayesian Information Criterion, order estimation, ratio-typicality, Markov chains.
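The estimator described in this abstract can be sketched directly from its formula. The following is a minimal illustration, not the authors' code: all names and the simulated chain are my own. For each candidate order k it computes the maximized log-likelihood from empirical transition counts and adds the penalty |A|^k (|A| − 1)/2 · log n, returning the minimizing k.

```python
import math
import random
from collections import Counter

def bic_order_estimate(x, alphabet_size, k_max):
    """Return the order k in 0..k_max minimizing
    -log(kth-order maximum likelihood) + |A|^k (|A|-1)/2 * log n."""
    n = len(x)
    best_k, best_score = 0, float("inf")
    for k in range(k_max + 1):
        trans, ctx = Counter(), Counter()
        for i in range(k, n):
            c = tuple(x[i - k:i])          # length-k context preceding x[i]
            trans[(c, x[i])] += 1
            ctx[c] += 1
        # Maximized log-likelihood: sum of n(c,a) * log(n(c,a)/n(c))
        # over observed (context, symbol) pairs.
        loglik = sum(cnt * math.log(cnt / ctx[c])
                     for (c, _), cnt in trans.items())
        penalty = alphabet_size ** k * (alphabet_size - 1) / 2 * math.log(n)
        score = -loglik + penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Simulated first-order binary chain that stays in its current state
# with probability 0.9; the estimator should typically recover order 1.
random.seed(0)
x = [0]
for _ in range(4999):
    x.append(x[-1] if random.random() < 0.9 else 1 - x[-1])
print(bic_order_estimate(x, alphabet_size=2, k_max=3))
```

With a strongly dependent chain of this length, the order-1 likelihood gain dwarfs the extra penalty, so the estimate is 1 with high probability.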
Consistency issues in Bayesian Nonparametrics
In Asymptotics, Nonparametrics and Time Series: A Tribute, 1998
Consistency of Bayes estimates for nonparametric regression: normal theory
Bernoulli, 1998
Cited by 10 (2 self)
Unawareness, Priors and Posteriors
Abstract

Cited by 4 (0 self)
This note contains first thoughts on awareness of unawareness in a simple dynamic context where a decision situation is repeated over time. The main consequence of increasing awareness is that the model the decision maker uses, and the prior it contains, becomes richer over time. The decision maker is prepared for this change, and we show that if a projection-consistency axiom is satisfied, unawareness does not affect the value of her estimate of a payoff-relevant conditional probability (although it may weaken her confidence in that estimate). Probability-zero events, however, pose a challenge to this axiom, and if it fails, even the estimate values will differ when the decision maker takes unawareness into account. In examining the evolution of knowledge about the relevant variables through time, we distinguish between the transition from uncertainty to certainty and the direct transition from unawareness to certainty, and argue that new knowledge may cause posteriors to jump more when it is also new awareness. Some preliminary considerations on the convergence of estimates are included.
Consistency of Bayes estimators of a binary regression function
Annals of Statistics, 2006
Abstract

Cited by 1 (0 self)
When do nonparametric Bayesian procedures “overfit”? To shed light on this question, we consider a binary regression problem in detail and establish frequentist consistency for a certain class of Bayes procedures based on hierarchical priors, called uniform mixture priors. These are defined as follows: let ν be any probability distribution on the nonnegative integers. To sample a function f from the prior π_ν, first sample m from ν and then sample f uniformly from the set of step functions from [0, 1] into [0, 1] that have exactly m jumps (i.e., sample all m jump locations and m + 1 function values independently and uniformly). The main result states that if a data stream is generated according to any fixed, measurable binary regression function f0 ≢ 1/2, then frequentist consistency obtains: for any ν with infinite support, the posterior of π_ν concentrates on any L1 neighborhood of f0. Solution of an associated large-deviations problem is central to the consistency proof.
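The two-step sampling recipe in this abstract (draw m from ν, then draw the jump locations and function values uniformly) can be implemented directly. A minimal sketch, with all names my own:

```python
import random

def sample_uniform_mixture_prior(nu_sampler, rng=random):
    """Draw a step function f: [0, 1] -> [0, 1] from the uniform
    mixture prior: m ~ nu, then m jump locations and m + 1 values
    sampled independently and uniformly on [0, 1]."""
    m = nu_sampler()
    jumps = sorted(rng.random() for _ in range(m))
    values = [rng.random() for _ in range(m + 1)]

    def f(t):
        # The piece containing t is indexed by the number of jumps <= t.
        return values[sum(1 for j in jumps if j <= t)]

    return f

random.seed(1)
f = sample_uniform_mixture_prior(lambda: 2)  # nu degenerate at m = 2
```

Here ν is taken as a point mass at 2 only to keep the example deterministic in shape; any sampler for a distribution on the nonnegative integers (e.g. geometric) can be passed in.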
On Posterior Consistency in Selection Models
1999
Abstract
Selection models are appropriate when the probability that a potential datum enters the sample is a nondecreasing function of the numeric value of the datum. It is rarely justifiable to model this function, called the weight function, with a specific parametric form, but it is appealing to model it with a nonparametric prior centered around a parametric form. A Bayesian analysis with a Dirichlet process prior for the weight function is considered, and it is proved that the posterior is consistent under the weak topology.
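The sampling mechanism this abstract describes (a potential datum X enters the sample with probability w(X), with w nondecreasing) can be simulated directly. A sketch with a logistic weight function of my own choosing, not from the paper, illustrating the upward bias such selection induces on a standard normal population:

```python
import math
import random

def sample_selection_model(n, draw, weight, rng=random):
    """Accept-reject simulation of a selection model: a potential
    datum X ~ draw() enters the sample with probability weight(X),
    where weight is nondecreasing with values in [0, 1]."""
    sample = []
    while len(sample) < n:
        x = draw()
        if rng.random() < weight(x):
            sample.append(x)
    return sample

random.seed(0)
w = lambda x: 1.0 / (1.0 + math.exp(-2.0 * x))   # nondecreasing weight
obs = sample_selection_model(2000, lambda: random.gauss(0.0, 1.0), w)
# The observed mean sits well above the population mean of 0,
# since larger values are more likely to enter the sample.
```

In the paper's setting the weight function itself is unknown and given a Dirichlet process prior; the simulation above only shows what the biased data-generating mechanism looks like for one fixed w.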