Results 1–10 of 44
Model selection and accounting for model uncertainty in graphical models using Occam's window
1993
"... We consider the problem of model selection and accounting for model uncertainty in highdimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic Pvalues leading to the selection o ..."
Abstract

Cited by 266 (46 self)
We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable …
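The posterior-weighted averaging the abstract describes, together with an Occam's-window style pruning of implausible models, can be sketched as follows. This is a minimal illustration with invented model posteriors and predictions, not the paper's search algorithm or its graphical-model classes:

```python
# Hedged sketch: Bayesian model averaging over a reduced model set.
# Model names, posterior probabilities, predictions, and the window
# factor c are invented illustration values.

def occams_window(posteriors, c=20.0):
    """Keep models whose posterior probability is within a factor c
    of the best model's (one half of the Occam's window idea)."""
    best = max(posteriors.values())
    return {m: p for m, p in posteriors.items() if best / p <= c}

def model_average(posteriors, predictions):
    """Average each retained model's prediction, weighted by its
    renormalized posterior model probability."""
    kept = occams_window(posteriors)
    total = sum(kept.values())
    return sum(kept[m] / total * predictions[m] for m in kept)

posteriors = {"M1": 0.60, "M2": 0.35, "M3": 0.05, "M4": 0.001}
predictions = {"M1": 0.30, "M2": 0.50, "M3": 0.80, "M4": 0.99}
averaged = model_average(posteriors, predictions)
```

Under these invented numbers, M4 falls outside the window and is discarded, and the averaged prediction is a posterior-weighted blend of the remaining three models — the "much smaller set" the abstract argues suffices.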
Multidimensional Scaling
Handbook of Statistics, 2001
"... eflecting the importance or precision of dissimilarity # i j . 1. SOURCES OF DISTANCE DATA Dissimilarity information about a set of objects can arise in many different ways. We review some of the more important ones, organized by scientific discipline. 1.1. Geodesy. The most obvious application, ..."
Abstract

Cited by 33 (2 self)
…reflecting the importance or precision of dissimilarity δ_ij. 1. SOURCES OF DISTANCE DATA. Dissimilarity information about a set of objects can arise in many different ways. We review some of the more important ones, organized by scientific discipline. 1.1. Geodesy. The most obvious application, perhaps, is in sciences in which distance is measured directly, although generally with error. This happens, for instance, in triangulation in geodesy. We have measurements which are approximately equal to distances, either Euclidean or spherical, depending on the scale of the experiment. In other examples, measured distances are less directly related to physical distances. For example, we could measure airplane or road or train travel distances between different cities. Physical distance is usually not the only factor determining these types of dissimilarities.
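The weighted loss that metric MDS minimizes — where each weight w_ij reflects the importance or precision of the dissimilarity δ_ij — can be sketched as a raw-stress computation. The point configuration and dissimilarities below are invented illustration data:

```python
# Hedged sketch: weighted raw stress, sum_ij w_ij * (delta_ij - d_ij)^2,
# where d_ij is the Euclidean distance of the current configuration.
# All points, dissimilarities, and weights are invented.
import math

def raw_stress(points, dissimilarities, weights):
    """points: name -> coordinate tuple; dissimilarities/weights:
    dicts keyed by frozenset pairs of names."""
    total = 0.0
    for pair, delta in dissimilarities.items():
        a, b = tuple(pair)
        d = math.dist(points[a], points[b])   # Euclidean distance
        total += weights[pair] * (delta - d) ** 2
    return total

pts = {"A": (0.0, 0.0), "B": (3.0, 0.0), "C": (0.0, 4.0)}
diss = {frozenset({"A", "B"}): 3.0,
        frozenset({"A", "C"}): 4.0,
        frozenset({"B", "C"}): 5.0}
w = {k: 1.0 for k in diss}
stress = raw_stress(pts, diss, w)   # perfect 3-4-5 embedding: stress 0
```

An MDS algorithm would iteratively move the points to drive this quantity down; here the configuration already reproduces the dissimilarities exactly, so the stress is zero.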
Soft Evidential Update for Probabilistic Multiagent Systems
International Journal of Approximate Reasoning, 2000
"... We address the problem of updating a probability distribution represented by a Bayesian network upon presentation of soft evidence. Our motivation ..."
Abstract

Cited by 26 (5 self)
We address the problem of updating a probability distribution represented by a Bayesian network upon presentation of soft evidence. Our motivation …
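For a single variable, the classical way to absorb soft (uncertain) evidence is Jeffrey's rule, P'(x) = Σ_e P(x | e) Q(e). The sketch below is that one-variable rule only — not the paper's multiagent update algorithm — and all distributions are invented:

```python
# Hedged sketch: Jeffrey's rule for soft evidence on one variable E.
# cond[e][x] = P(x | E=e); soft_evidence[e] = Q(e), the new uncertain
# distribution over E. All numbers are invented illustration values.

def jeffrey_update(cond, soft_evidence):
    """Return P'(x) = sum_e Q(e) * P(x | E=e)."""
    updated = {}
    for e, q in soft_evidence.items():
        for x, p in cond[e].items():
            updated[x] = updated.get(x, 0.0) + q * p
    return updated

cond = {"e1": {"x0": 0.9, "x1": 0.1},
        "e2": {"x0": 0.2, "x1": 0.8}}
q = {"e1": 0.7, "e2": 0.3}   # soft evidence: E = e1 with probability 0.7
posterior = jeffrey_update(cond, q)
```

The result is a proper distribution over x that reflects the residual uncertainty in the evidence, rather than conditioning on a single observed value of E.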
Network Routing
Phil. Trans. R. Soc. Lond. A, 337, 1991
"... How should flows through a network be organized, so that the network responds sensibly to failures and overloads? The question is currently of considerable technological importance in connection with the development of computer and telecommunication networks, while in various other forms it has a lo ..."
Abstract

Cited by 24 (2 self)
How should flows through a network be organized, so that the network responds sensibly to failures and overloads? The question is currently of considerable technological importance in connection with the development of computer and telecommunication networks, while in various other forms it has a long history in the fields of physics and economics. In all of these areas there is interest in how simple, local rules, often involving random actions, can produce coherent and purposeful behaviour at the macroscopic level. This paper describes some examples from these various fields, and indicates how analogies with fundamental concepts such as energy and price can provide powerful insights into the design of routing schemes for communication networks.
Polyhedral conditions for the nonexistence of the MLE for hierarchical log-linear models
2006
"... ..."
On Maximum Likelihood Estimation in Log-Linear Models
"... In this article, we combine results from the theory of linear exponential families, polyhedral geometry and algebraic geometry to provide analytic and geometric characterizations of loglinear models and maximum likelihood estimation. Geometric and combinatorial conditions for the existence of the M ..."
Abstract

Cited by 11 (4 self)
In this article, we combine results from the theory of linear exponential families, polyhedral geometry and algebraic geometry to provide analytic and geometric characterizations of log-linear models and maximum likelihood estimation. Geometric and combinatorial conditions for the existence of the Maximum Likelihood Estimate (MLE) of the cell mean vector of a contingency table are given for general log-linear models under conditional Poisson sampling. It is shown that any log-linear model can be generalized to an extended exponential family of distributions parametrized, in a mean value sense, by points of a polyhedron. Such a parametrization is continuous and, with respect to this extended family, the MLE always exists and is unique. In addition, the set of cell mean vectors forms a subset of a toric variety consisting of nonnegative points satisfying a certain system of polynomial equations. These results are of theoretical and practical importance for estimation and model selection.
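When the MLE of the cell mean vector exists, it can be computed for simple log-linear models by iterative proportional fitting (IPF). The sketch below fits the independence model of a two-way table by alternately matching the row and column margins (the sufficient statistics); it illustrates standard log-linear MLE computation, not the paper's polyhedral or algebraic machinery, and the counts are invented:

```python
# Hedged sketch: IPF for the MLE of cell means under the independence
# log-linear model of a two-way contingency table. Counts are invented.

def ipf_independence(table, iters=50):
    """table: list of rows of observed counts. Returns fitted cell
    means whose row and column margins match the observed margins."""
    n_rows, n_cols = len(table), len(table[0])
    row_m = [sum(r) for r in table]
    col_m = [sum(r[j] for r in table) for j in range(n_cols)]
    fit = [[1.0] * n_cols for _ in range(n_rows)]
    for _ in range(iters):
        for i in range(n_rows):            # scale rows to row margins
            s = sum(fit[i])
            fit[i] = [v * row_m[i] / s for v in fit[i]]
        for j in range(n_cols):            # scale cols to column margins
            s = sum(fit[i][j] for i in range(n_rows))
            for i in range(n_rows):
                fit[i][j] *= col_m[j] / s
    return fit

obs = [[10, 20], [30, 40]]
fit = ipf_independence(obs)   # independence model: fit[i][j] = row_i * col_j / N
```

If some observed margin is zero, IPF pushes the corresponding fitted cells to the boundary and no MLE exists in the ordinary family — the nonexistence phenomenon whose polyhedral characterization is the subject of these two papers.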
Approximate string comparator search strategies for very large administrative lists
Statistical Research Division, U.S. Census Bureau, 2005
"... Rather than collect data from a variety of surveys, it is often more efficient to merge information from administrative lists. Matching of person files might be done using name and dateofbirth as the primary identifying information. There are obvious difficulties with entities having a commonly oc ..."
Abstract

Cited by 10 (3 self)
Rather than collect data from a variety of surveys, it is often more efficient to merge information from administrative lists. Matching of person files might be done using name and date-of-birth as the primary identifying information. There are obvious difficulties with entities having a commonly occurring name such as John Smith that may occur 30,000+ times (1.5 for each date-of-birth). If there is a 5% typographical error rate in each field, then using fast character-by-character searches can miss 20% of true matches among non-commonly occurring records where name plus date-of-birth might be unique. This paper describes some existing solutions and current research directions.
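The paper's string comparators are of the Jaro/Winkler family; as a simpler illustration of why exact character-by-character search misses typo-affected matches, a plain Levenshtein edit distance (not the paper's comparator) already shows that a misspelled name sits only a few edits from the true record. The names below are invented examples:

```python
# Hedged sketch: Levenshtein edit distance, a basic primitive for
# approximate string comparison. The paper uses Jaro/Winkler-style
# comparators; this simpler metric is for illustration only.

def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

d = levenshtein("JOHN SMITH", "JON SMITH")   # one typo away
```

An exact hash or trie lookup on "JON SMITH" would miss the "JOHN SMITH" record entirely, while a comparator that tolerates distance 1 retains it as a candidate match.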
Pattern discovery by residual analysis and recursive partitioning
IEEE Transactions on Knowledge and Data Engineering, 1999
"... AbstractÐIn this paper, a novel method of pattern discovery is proposed. It is based on the theoretical formulation of a contingency table of events. Using residual analysis and recursive partitioning, statistically significant events are identified in a data set. These events constitute the importa ..."
Abstract

Cited by 10 (2 self)
In this paper, a novel method of pattern discovery is proposed. It is based on the theoretical formulation of a contingency table of events. Using residual analysis and recursive partitioning, statistically significant events are identified in a data set. These events constitute the important information contained in the data set and are easily interpretable as simple rules, contour plots, or parallel axes plots. In addition, an informative probabilistic description of the data is automatically furnished by the discovery process. Following a theoretical formulation, experiments with real and simulated data will demonstrate the ability to discover subtle patterns amid noise, the invariance to changes of scale, cluster detection, and discovery of multidimensional patterns. It is shown that the pattern discovery method offers the advantages of easy interpretation, rapid training, and tolerance to noncentralized noise.

Index Terms: Pattern discovery, residual analysis, recursive partitioning, events, contingency tables.
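The residual-analysis step can be sketched for a two-way table: compute expected counts under independence and flag cells whose standardized residual (observed − expected)/√expected exceeds a significance threshold. The paper builds on adjusted residuals and adds recursive partitioning; this sketch shows only the basic residual test, with invented counts:

```python
# Hedged sketch: standardized residuals of a two-way contingency table
# under the independence hypothesis. Cells with |residual| > 1.96 are
# flagged as significant "events". Counts are invented illustration data.
import math

def standardized_residuals(table):
    """Return (obs - exp) / sqrt(exp) for every cell, with exp the
    expected count under independence, row_i * col_j / N."""
    n = sum(sum(r) for r in table)
    row_m = [sum(r) for r in table]
    col_m = [sum(r[j] for r in table) for j in range(len(table[0]))]
    return [[(table[i][j] - row_m[i] * col_m[j] / n)
             / math.sqrt(row_m[i] * col_m[j] / n)
             for j in range(len(table[0]))]
            for i in range(len(table))]

obs = [[50, 10], [10, 50]]
res = standardized_residuals(obs)
significant = [(i, j) for i, row in enumerate(res)
               for j, r in enumerate(row) if abs(r) > 1.96]
```

In this toy table every cell deviates strongly from independence, so all four cells are flagged; on real data only the cells carrying genuine structure survive the threshold.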