Results 21-30 of 53
Learning Bayesian Network Classifiers: Searching . . .
, 2005
Abstract

Cited by 10 (4 self)
There is a commonly held opinion that the algorithms for learning unrestricted types of Bayesian networks, especially those based on the score+search paradigm, are not suitable for building competitive Bayesian network-based classifiers. Several specialized algorithms that carry out the search over different types of directed acyclic graph (DAG) topologies have since been developed, most of these being extensions (using augmenting arcs) or modifications of the basic Naive Bayes topology. In this paper, we present a new algorithm to induce classifiers based on Bayesian networks which obtains excellent results even when standard scoring functions are used. The method performs a simple local search in a space different from that of unrestricted or augmented DAGs. Our search space consists of a type of partially directed acyclic graph (PDAG) which combines two concepts of DAG equivalence: classification equivalence and independence equivalence. The results of exhaustive experimentation indicate that the proposed method can compete with state-of-the-art algorithms for classification.
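The abstract contrasts the proposed PDAG search with methods built on the Naive Bayes topology. As background only, a minimal discrete Naive Bayes classifier (the basic topology those augmented methods extend) can be sketched as follows; the toy data, function names, and add-one smoothing choice are illustrative, not from the paper:

```python
from collections import Counter, defaultdict
from math import log

def train_nb(examples, labels):
    """Fit a discrete Naive Bayes model with Laplace (add-one) smoothing.
    examples: list of tuples of discrete feature values
    labels:   list of class labels, parallel to examples"""
    class_counts = Counter(labels)
    n_features = len(examples[0])
    # feature_counts[j][c][v] = count of value v for feature j within class c
    feature_counts = [defaultdict(Counter) for _ in range(n_features)]
    values = [set() for _ in range(n_features)]
    for x, y in zip(examples, labels):
        for j, v in enumerate(x):
            feature_counts[j][y][v] += 1
            values[j].add(v)
    return class_counts, feature_counts, values, len(labels)

def predict_nb(model, x):
    """Return the class maximizing log P(c) + sum_j log P(x_j | c)."""
    class_counts, feature_counts, values, n = model
    best, best_lp = None, float("-inf")
    for c, nc in class_counts.items():
        lp = log(nc / n)  # class prior
        for j, v in enumerate(x):
            # Laplace-smoothed conditional P(x_j = v | c)
            lp += log((feature_counts[j][c][v] + 1) / (nc + len(values[j])))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# toy data: (outlook, windy) -> play
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
y = ["yes", "no", "yes", "no"]
model = train_nb(X, y)
print(predict_nb(model, ("sunny", "no")))  # -> yes
```

Augmented approaches (e.g. TAN) relax the conditional-independence assumption above by adding arcs between features; the paper's method instead searches a PDAG space directly.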
A New Criterion for Comparing Fuzzy Logics for Uncertain Reasoning
, 1996
Abstract

Cited by 9 (1 self)
A new criterion is introduced for judging the suitability of various 'fuzzy logics' for practical uncertain reasoning in a probabilistic world; the relationship of this criterion to several established criteria, and its consequences for truth-functional belief, are investigated.
Introduction
It is a rather widespread assumption in uncertain reasoning, and one that we shall make for the purpose of this paper, that a piece of uncertain knowledge can be adequately captured by attaching a real number (signifying the degree of uncertainty) on some scale to some unequivocal statement or conditional, and that an intelligent agent's knowledge base consists of a large, but nevertheless finite, set K of such expressions. Whether or not this is the correct picture for animate intelligent agents such as ourselves is, perhaps, questionable, but it is certainly the case that many expert systems (which one might feel should be included under the vague title of 'intelligent agent') have, by design...
Probability Bounds Analysis in Environmental Risk Assessment
 Applied Biomathematics, Setauket
, 2003
Abstract

Cited by 6 (1 self)
This document provides a detailed overview of probability bounds analysis. In the sections that follow, the conceptual background of the approach is briefly presented, followed by the mathematical derivation of probability bounds around parametric, nonparametric, empirical, and assumed or stipulated models. Computation with p-boxes is then described, and numerical examples of computations are provided. In the next section, probability bounds analysis is compared and contrasted with Monte Carlo simulation techniques. Methods used by Monte Carlo analysts for treating input variables, dependencies between input variables, and model uncertainty are compared to methods used in probability bounds analysis. Techniques for implementing micro-exposure event analysis models are also compared, along with methods for conducting sensitivity analysis. Finally, the use of probability bounds analysis within the tiered framework for conducting probabilistic risk assessments recommended by EPA is discussed.
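The "computation with p-boxes" mentioned here can be sketched in miniature. A p-box is represented below as n equal-mass quantile slices, each an interval; addition of two independent quantities follows the Williamson-Downs convolution idea of summing all slice pairs and condensing back to n slices. This is a simplified sketch under those assumptions, not the document's own implementation:

```python
def pbox_from_interval(lo, hi, n=10):
    """Degenerate p-box for a quantity known only to lie in [lo, hi]:
    every quantile slice is the whole interval."""
    return [(lo, hi)] * n

def pbox_add(p, q):
    """Williamson-Downs-style convolution of two p-boxes under independence
    (a sketch; both p-boxes must use the same number of slices n):
    form all n*n pairwise interval sums with equal mass, then condense
    back to n slices by sorting the endpoints separately and taking the
    outer envelope of each group of n atoms."""
    n = len(p)
    sums_lo = sorted(a + c for a, _ in p for c, _ in q)
    sums_hi = sorted(b + d for _, b in p for _, d in q)
    # slice i spans atoms i*n .. (i+1)*n - 1 of the sorted endpoint lists
    return [(sums_lo[i * n], sums_hi[i * n + n - 1]) for i in range(n)]

# sum of x in [1, 2] and y in [3, 5] lies in [4, 7]:
print(pbox_add(pbox_from_interval(1, 2, 2), pbox_from_interval(3, 5, 2)))
```

With point-valued (degenerate) slices the same routine reduces to an ordinary discrete convolution, which is a useful sanity check on the envelope construction.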
The Effect Of Small Disjuncts And Class Distribution On Decision Tree Learning
 RUTGERS UNIVERSITY
, 2003
Abstract

Cited by 5 (0 self)
The main goal of classifier learning is to generate a model that makes few misclassification errors. Given this emphasis on error minimization, it makes sense to try to understand how the induction process gives rise to classifiers that make errors and whether we can identify those parts of the classifier that generate most of the errors. In this thesis we provide the first comprehensive studies of two major sources of classification errors. The first study concerns small disjuncts, which are those disjuncts within a classifier that cover only a few training examples. An analysis of classifiers induced from thirty data sets shows that these small disjuncts are extremely error prone and often account for the majority of all classification errors. Because small disjuncts largely determine classifier performance, we use them as a "lens" through which to study classifier induction. Factors such as pruning, training-set size, noise and class imbalance are each analyzed to determine how they affect small disjuncts and, more generally, classifier learning. The second
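The small-disjunct effect described above, error concentrating in disjuncts that cover few training examples, can be illustrated with a toy computation. The leaf representation, the size threshold of 5, and the numbers below are hypothetical, not taken from the thesis:

```python
def error_by_disjunct_size(leaves, small=5):
    """Split a classifier's leaves (disjuncts) into 'small' ones covering
    at most `small` training examples and 'large' ones, then compare
    their aggregate error rates. `leaves` is a list of
    (n_covered, n_errors) pairs; the threshold is an assumption here."""
    small_cov = small_err = large_cov = large_err = 0
    for n, e in leaves:
        if n <= small:
            small_cov += n
            small_err += e
        else:
            large_cov += n
            large_err += e
    rate = lambda e, c: e / c if c else 0.0
    return rate(small_err, small_cov), rate(large_err, large_cov)

# hypothetical tree: two tiny leaves and one big one
print(error_by_disjunct_size([(2, 1), (3, 1), (100, 2)]))  # -> (0.4, 0.02)
```

In this contrived example the small disjuncts cover 5 of 105 examples yet contribute half of all errors, which is the kind of concentration the thesis measures across thirty real data sets.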
Hierarchical Non-Emitting Markov Models
, 1998
Abstract

Cited by 4 (2 self)
We describe a simple variant of the interpolated Markov model with non-emitting state transitions and prove that it is strictly more powerful than any Markov model. More importantly, the non-emitting model outperforms the classic interpolated model on natural language texts under a wide range of experimental conditions, with only a modest increase in computational requirements. The non-emitting model is also much less prone to overfitting.
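For context, the "classic interpolated model" that the non-emitting variant is compared against mixes higher- and lower-order n-gram estimates with a fixed weight. A minimal order-1 version can be sketched as follows; the function name, the fixed interpolation weight, and the maximum-likelihood estimates are illustrative assumptions, and the paper's non-emitting refinement is not shown:

```python
from collections import Counter

def interpolated_bigram(tokens, lam=0.7):
    """Classic interpolated Markov model of order 1: mix the bigram
    maximum-likelihood estimate with the unigram estimate.
    `lam` is the weight on the higher-order (bigram) model."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def prob(w, prev):
        p_uni = unigrams[w] / total
        p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    return prob

p = interpolated_bigram("a b a b a".split(), lam=0.7)
print(p("b", "a"))  # 0.7 * P_bigram(b|a) + 0.3 * P_unigram(b)
```

The non-emitting variant replaces this fixed blend with transitions through non-emitting states, which the abstract reports is strictly more powerful and less prone to overfitting.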
Empirical Bayes Adjustments for Multiple Results in Hypothesis-generating or Surveillance Studies
, 2000
Abstract

Cited by 4 (0 self)
Traditional methods of adjustment for multiple comparisons (e.g., Bonferroni adjustments) have fallen into disuse in epidemiological studies. However, alternative kinds of adjustment for data with multiple comparisons may sometimes be advisable. When a large number of comparisons are made, and when there is a high cost to investigating false positive leads, empirical or semi-Bayes adjustments may help in the selection of the most promising leads. Here we offer an example of such adjustments in a large surveillance data set of occupation and cancer in Nordic countries, in which we used empirical Bayes (EB) adjustments to evaluate standardized incidence ratios (SIRs) for cancer and occupation among craftsmen and laborers. For men,
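The EB adjustment of many SIRs can be sketched with the standard normal-normal shrinkage model: each observed log-SIR is pulled toward the overall mean, more strongly when its sampling variance is large. The method-of-moments estimator and the variance approximation in the comments are textbook choices assumed for illustration, not necessarily the exact model used in the paper:

```python
from math import fsum

def eb_shrink(y, v):
    """Normal-normal empirical Bayes shrinkage (a sketch).
    y: observed log-SIRs; v: their sampling variances (often approximated
    as 1 / observed case count). Returns posterior-mean estimates that
    shrink each y_i toward the overall mean mu."""
    n = len(y)
    mu = fsum(y) / n
    # method-of-moments estimate of the between-unit variance tau^2:
    # sample variance of y minus the average sampling variance
    s2 = fsum((yi - mu) ** 2 for yi in y) / (n - 1)
    tau2 = max(s2 - fsum(v) / n, 0.0)
    # posterior mean = precision-weighted blend of y_i and mu
    return [(tau2 * yi + vi * mu) / (tau2 + vi) if tau2 + vi > 0 else mu
            for yi, vi in zip(y, v)]

# two hypothetical log-SIRs with equal variance are each pulled halfway in:
print(eb_shrink([0.0, 2.0], [1.0, 1.0]))  # -> [0.5, 1.5]
```

With hundreds of occupation-cancer combinations, this kind of shrinkage damps the extreme SIRs that are most likely to be false positive leads, which is the selection problem the abstract describes.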
A Unified Treatment of Uncertainties
 In Proceedings of the Fourth International Conference for Young Computer Scientists
, 1993
Abstract

Cited by 3 (3 self)
"Uncertainty in artificial intelligence" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it is hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then shows why some existing approaches cannot be used in such a situation. Then a new approach, the Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.
1 The Problem
The central issue of this paper is uncertaint...
Some Bayesian perspectives on statistical modelling
, 1988
Abstract

Cited by 3 (2 self)
I would like to thank my supervisor, Professor A. F. M. Smith, for all his advice and encourage
Testing the Untestable: Reliability in the 21st Century
 IEEE Transactions on Software Reliability
, 2002
Abstract

Cited by 3 (0 self)
and industry are relying more and more on science’s advanced methods to determine reliability. Unfortunately, political, economic, time, and other constraints imposed by the real world inhibit the ability of researchers to calculate reliability efficiently and accurately. Because of such constraints, reliability must undergo an evolutionary change. The first step in this evolution is to reinterpret the concept so that it meets the new century’s needs. The next step is to quantify reliability using both empirical methods and auxiliary data sources, such as expert knowledge, corporate memory, and mathematical modeling and simulation.