Results 1 - 4 of 4
Generalizing Case Frames Using a Thesaurus and the MDL Principle
 Computational Linguistics
, 1998
Abstract

Cited by 131 (4 self)
this paper, we confine ourselves to the former issue, and refer the interested reader to Li and Abe (1996), which deals with the latter issue
Properties of Jeffreys mixture for Markov sources
Proc. of the Fourth Workshop on Information-Based Induction Sciences (IBIS2001)
, 2001
Abstract

Cited by 4 (2 self)
Abstract: We discuss the properties of Jeffreys mixture for the general FSMX model (a certain class of Markov sources [11]). First, we show that a modified Jeffreys mixture asymptotically achieves the minimax coding regret [7], where we put no restriction on the data sequences at all. This is an extension of the results in [13, 15]. Then, we give an approximation formula for the prediction probability of Jeffreys mixture for FSMX models (a review of the results in [10, 19]). This formula reveals that the prediction probability given by Jeffreys mixture for the first-order Markov chain with alphabet {0, 1} is not of the form (k + α)/(n + β) (n is the data size, k is the number of occurrences of '1'). Moreover, we evaluate by simulation the regret of our approximation formula for the first-order Markov chain and show that the prediction strategy using our approximation formula gives smaller coding regret than the one using the Laplace estimator.
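The contrast the abstract draws can be illustrated with a small simulation. The sketch below (a hypothetical illustration, not the paper's code; the transition probabilities and function name are made up) encodes a binary first-order Markov chain sequentially with two add-constant predictors kept per context: the Laplace estimator (k + 1)/(n + 2) and the Krichevsky–Trofimov (add-1/2) estimator (k + 1/2)/(n + 1), the latter being the Bernoulli Jeffreys mixture applied independently in each context. Note the abstract's point is precisely that the exact Jeffreys mixture for the Markov model is not of either simple additive form, so this only approximates it.

```python
import math
import random

def compare_predictors(n=10000, p01=0.02, p11=0.1, seed=0):
    """Sequentially encode a binary first-order Markov chain with two
    add-constant predictors, keeping separate counts for each context
    (the preceding symbol), and return the total code lengths in bits.

    Laplace: q(x | context) = (k_x + 1)   / (k_0 + k_1 + 2)
    KT:      q(x | context) = (k_x + 1/2) / (k_0 + k_1 + 1)

    p01 = P(next = 1 | prev = 0), p11 = P(next = 1 | prev = 1)
    (illustrative values, not from the paper).
    """
    rng = random.Random(seed)
    counts = {0: [0, 0], 1: [0, 0]}  # counts[c] = [#0s, #1s] seen after c
    bits_laplace = bits_kt = 0.0
    prev = 0
    for _ in range(n):
        # Sample the next symbol from the assumed true chain.
        p1 = p01 if prev == 0 else p11
        x = 1 if rng.random() < p1 else 0
        k = counts[prev][x]
        total = counts[prev][0] + counts[prev][1]
        # Code length contribution: -log2 of each predictor's probability.
        bits_laplace -= math.log2((k + 1.0) / (total + 2.0))
        bits_kt -= math.log2((k + 0.5) / (total + 1.0))
        counts[prev][x] += 1
        prev = x
    return bits_laplace, bits_kt
```

On skewed chains such as this one, the per-context KT mixture typically yields a slightly shorter total code than the Laplace estimator, consistent with the comparison reported in the abstract.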
Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection
, 2000
Abstract

Cited by 2 (0 self)
Investigators interested in model order estimation have tended to divide themselves into widely separated camps; this survey of the contributions of Schwarz, Wallace, Rissanen, and their coworkers attempts to build bridges between the various viewpoints, illuminating connections which may have previously gone unnoticed and clarifying misconceptions which seem to have propagated in the applied literature. Our tour begins with Schwarz's approximation of Bayesian integrals via Laplace's method. We then introduce the concepts underlying Rissanen's minimum description length principle via a Bayesian scenario with a known prior; this provides the groundwork for understanding his more complex non-Bayesian MDL, which employs a "universal" encoding of the integers.ssanen's method of parameter truncation is contrasted with that employed in various versions of Wallace's minimum message length criteria.
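The "universal" encoding of the integers mentioned in this abstract can be made concrete. The sketch below (function name assumed for illustration, not taken from the survey) computes the length in bits that Rissanen's log-star code assigns to a positive integer n: the iterated logarithm log2 n + log2 log2 n + ..., summed while the terms remain positive, plus log2 of the normalizing constant c0 ≈ 2.865064 that makes the implied probabilities sum to one.

```python
import math

def universal_code_length(n):
    """Length in bits of Rissanen's universal code for a positive
    integer n:  L*(n) = log2(c0) + log2 n + log2 log2 n + ...,
    where the iterated logarithm is summed while it stays positive
    and c0 ~= 2.865064 normalizes the code into a probability
    distribution on the positive integers.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    length = math.log2(2.865064)  # contribution of the normalizer c0
    t = math.log2(n)
    while t > 0:
        length += t
        t = math.log2(t)
    return length
```

The resulting code length grows roughly like log2 n, so larger model orders receive longer descriptions, which is the role this encoding plays in Rissanen's non-Bayesian MDL.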
Master Thesis
, 91
Abstract
this paper. ... in encoding y using q(y) is −ln q(y) + ln p_l(y | x(y)) = ln ...