Results 1-10 of 20
Numerical Uncertainty Management in User and Student Modeling: An Overview of Systems and Issues
, 1996
Abstract

Cited by 104 (10 self)
A rapidly growing number of user and student modeling systems have employed numerical techniques for uncertainty management. The three major paradigms are those of Bayesian networks, the Dempster-Shafer theory of evidence, and fuzzy logic. In this overview, each of the first three main sections focuses on one of these paradigms. It first introduces the basic concepts by showing how they can be applied to a relatively simple user modeling problem. It then surveys systems that have applied techniques from the paradigm to user or student modeling, characterizing each system within a common framework. The final main section discusses several aspects of the usability of these techniques for user and student modeling, such as their knowledge engineering requirements, their need for computational resources, and the communicability of their results. Key words: numerical uncertainty management, Bayesian networks, Dempster-Shafer theory, fuzzy logic, user modeling, student modeling 1. Introdu...
Learning hybrid Bayesian networks from data
, 1998
Abstract

Cited by 11 (1 self)
We illustrate two different methodologies for learning hybrid Bayesian networks, that is, Bayesian networks containing both continuous and discrete variables, from data. The two methodologies differ in how they handle continuous data when learning the Bayesian network structure. The first methodology uses discretized data to learn the Bayesian network structure, and the original non-discretized data for the parameterization of the learned structure. The second methodology uses non-discretized data both to learn the Bayesian network structure and its parameterization. For the direct handling of continuous data, we propose the use of artificial neural networks as probability estimators, to be used as an integral part of the scoring metric defined to search the space of Bayesian network structures. With both methodologies, we assume the availability of a complete dataset, with no missing values or hidden variables. We report experimental results aimed at comparing the two methodologies. These results provide evidence that learning with discretized data presents advantages, both in efficiency and in accuracy of the learned models, over the alternative approach of using non-discretized data.
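The first methodology described above can be sketched in a few lines: discretize each continuous variable only for the structure-search phase, and keep the raw values for parameterization. This is a minimal illustration, not the paper's implementation; the bin count (4) and equal-width binning are assumptions.

```python
import numpy as np

def discretize_equal_width(column, n_bins=4):
    """Map a continuous column to integer bin labels 0..n_bins-1 (equal-width bins)."""
    edges = np.linspace(column.min(), column.max(), n_bins + 1)
    # digitize against the interior edges; clip keeps the maximum value in the last bin
    return np.clip(np.digitize(column, edges[1:-1]), 0, n_bins - 1)

rng = np.random.default_rng(0)
raw = rng.normal(size=100)             # a continuous variable
labels = discretize_equal_width(raw)   # used only for structure learning
# The raw values would then parameterize the learned structure,
# e.g. as a conditional Gaussian:
mu, sigma = raw.mean(), raw.std()
```

The point of the split is that structure search needs only the coarse discrete labels, while the fitted distribution retains the full resolution of the original data.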
A Case Study in Knowledge Discovery and Elicitation in an Intelligent Tutoring Application
, 2001
Abstract

Cited by 10 (3 self)
Most successful Bayesian network (BN) applications to date have been built through knowledge elicitation from experts. This is difficult and time-consuming, which has led to recent interest in automated methods for learning BNs from data. We present a case study in the construction of a BN in an intelligent tutoring application, specifically decimal misconceptions. We describe the BN construction using expert elicitation and then investigate how certain existing automated knowledge discovery methods might support the BN knowledge engineering process.
An Empirical Study of Probability Elicitation under Noisy-OR Assumption
 Norman E. Fenton is Professor of Computer Science at Queen Mary (University of London)
, 2004
Abstract

Cited by 7 (2 self)
A Bayesian network is a popular modeling tool for uncertain domains that provides a compact representation of a joint probability distribution among a set of variables. Even though Bayesian networks significantly reduce the number of probabilities required to specify probabilistic relationships in the domain, the number of parameters required to quantify large models is still a serious bottleneck. Further reduction of parameters in a model is usually achieved by utilization of parametric probability distributions such as noisy-OR gates. In this paper we report the results of an empirical study suggesting that, under the assumption that the underlying modeled distribution follows the noisy-OR assumptions, human experts provide parameters with better accuracy when eliciting noisy-OR parameters than when eliciting conditional probability tables directly. It also seems that, of the two alternative noisy-OR parameterizations due to Henrion and to Díez, the latter results in better elicitation accuracy.
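The parameter reduction that noisy-OR gates provide can be made concrete: instead of a full conditional probability table, one probability per cause (plus a leak term) determines the whole distribution. The sketch below assumes the common leaky noisy-OR formulation in which Díez-style parameters exclude the leak and Henrion-style parameters include it; treat that correspondence as an assumption, not a restatement of the paper.

```python
from functools import reduce

def noisy_or(leak, c, active):
    """P(effect present) for a leaky noisy-OR gate with Diez-style parameters:
    c[i] = probability that cause i alone (leak excluded) produces the effect.
    The effect is absent only if the leak and every active cause all fail
    independently."""
    q_fail = (1.0 - leak) * reduce(lambda acc, i: acc * (1.0 - c[i]), active, 1.0)
    return 1.0 - q_fail

def henrion_param(leak, c_i):
    """Henrion-style parameter: P(effect | only cause i present), leak included."""
    return 1.0 - (1.0 - leak) * (1.0 - c_i)

# Two causes active with leak 0.1: P(effect) = 1 - 0.9 * 0.2 * 0.5 = 0.91
p = noisy_or(leak=0.1, c=[0.8, 0.5], active=[0, 1])
```

With n binary causes, this gate needs n + 1 numbers where a full CPT would need 2**n, which is exactly why elicitation accuracy of those few parameters matters so much.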
Learning Bayesian belief networks with neural network estimators
 In Neural Information Processing Systems 9
, 1997
Abstract

Cited by 6 (2 self)
In this paper we propose a method for learning Bayesian belief networks from data. The method uses artificial neural networks as probability estimators, thus avoiding the need for making prior assumptions on the nature of the probability distributions governing the relationships among the participating variables. This new method has the potential for being applied to domains containing both discrete and continuous variables arbitrarily distributed. We compare the learning performance of this new method with the performance of the method proposed by Cooper and Herskovits in [10]. The experimental results show that, although the learning scheme based on the use of ANN estimators is slower, the learning accuracy of the two methods is comparable. (To appear in Advances in Neural Information Processing Systems, 1996.) 1 Introduction. Bayesian belief networks (BBN), often referred to as probabilistic networks, are a powerful formalism for representing and reasoning under uncertainty. This...
Know Your Whereabouts
 2004 Communication Networks and Distributed Systems Modeling and Simulation Conference (CNDS'04)
, 2004
Abstract

Cited by 4 (3 self)
Context recognition alone is not adequate to enable intelligence in ubiquitous systems; routine learning is also needed. A routine is a temporal context sequence that occurs frequently. The methodology is presented via a use case scenario. Association rules are used to determine routines. The routine learning algorithms are implemented on an architecture that also offers other adaptive mobile services. The first prototype of the architecture, containing routine learning, is already available.
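The association-rule idea behind routine detection can be illustrated with the usual support and confidence measures over consecutive context pairs. The context log and the rule below are fabricated for illustration; the paper's actual mining algorithm is not reproduced here.

```python
from collections import Counter

# Hypothetical log of consecutive (context, next_context) observations:
log = [("home", "commute"), ("home", "commute"), ("home", "gym"),
       ("office", "commute"), ("home", "commute")]

def rule_stats(log, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent
    over the observed context pairs."""
    pairs = Counter(log)
    support = pairs[(antecedent, consequent)] / len(log)
    ante_total = sum(n for (a, _), n in pairs.items() if a == antecedent)
    confidence = pairs[(antecedent, consequent)] / ante_total
    return support, confidence

# "home" is followed by "commute" in 3 of 5 pairs overall (support 0.6),
# and in 3 of the 4 pairs starting from "home" (confidence 0.75).
s, c = rule_stats(log, "home", "commute")
```

A context sequence qualifies as a routine when both measures exceed chosen thresholds, which is what makes frequent temporal patterns separable from one-off behavior.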
Uncertainty Management
Abstract

Cited by 2 (0 self)
The purpose of this fairly non-technical introduction to uncertainty management is to identify various forms of uncertainty, and to survey methods for managing some of these uncertainties. Our emphasis is on topics that may not be familiar to software engineers or, to a lesser extent, to knowledge engineers. These topics include Bayesian estimation, fuzziness, time Petri nets, rough sets, belief and evidence, and possibility theory. Uncertainty management has been studied in the contexts of information systems and of artificial intelligence. We attempt to present a balanced view of the contributions from both areas.
FMEA and BBN for robustness analysis in web-based applications. Accepted to the European Safety and Reliability Conference
, 2007
Abstract

Cited by 1 (1 self)
ABSTRACT: In this paper we present a general framework for conducting robustness analysis early in the development life cycle of web-based systems. This framework exploits the robustness failure modes and evaluates the impact of modifications that can be applied to reduce the severity of these failures. First, the system is systematically decomposed into its components using Jacobson's analysis. Next, with Failure Modes and Effects Analysis (FMEA) we identify all failure modes, causes and effects. Finally, by using Bayesian Belief Networks (BBNs) we model each subsystem and evaluate failure severities and possible improvements. A more complex model of the system can also be built up by integrating the subsystem models, giving a better understanding of the overall system behavior. We present a practical example and we discuss the benefits of applying this framework and BBN models to analyze the robustness of web-based systems. 1 INTRODUCTION. BBNs have been applied successfully in many domains. Market pressure brings new challenges in terms of quality issues in web-based systems development. In order to improve the system in an inexpensive way it is crucial to be able to analyze qualities early in the
Using a Relevance Model for Performing Feature Weighting
, 2003
Abstract

Cited by 1 (0 self)
Feature weighting is one of the most difficult tasks when developing Case Based Reasoning applications. This complexity grows when dealing with ill-defined wide domains with a sparse case base. Moreover, most widely used feature selection and feature weighting methods assume that features are either relevant in the whole instance space or irrelevant throughout. However, it is often the case that specific features are only relevant within the context of other features' values (i.e., feature Y is relevant if feature X=1, but irrelevant if X=0). Therefore, feature weight and relevance are context-sensitive. This paper defines a model...
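The "Y is relevant only when X=1" situation described above is easy to demonstrate on a toy case base. The data below is fabricated for illustration: feature y predicts the label perfectly inside the x=1 context and is pure noise outside it, so any single global weight for y misrepresents it in one context or the other.

```python
# Fabricated case base: tuples of (x, y, label).
cases = [
    (1, 1, "pos"), (1, 1, "pos"), (1, 0, "neg"), (1, 0, "neg"),
    (0, 1, "pos"), (0, 1, "neg"), (0, 0, "pos"), (0, 0, "neg"),
]

def accuracy_of_y(subset):
    """How often y alone predicts the label (rule: y == 1 -> 'pos') in a subset."""
    hits = sum(1 for _, y, lab in subset if (lab == "pos") == (y == 1))
    return hits / len(subset)

in_ctx  = accuracy_of_y([c for c in cases if c[0] == 1])  # y perfectly predictive
out_ctx = accuracy_of_y([c for c in cases if c[0] == 0])  # y at chance level
```

Here `in_ctx` is 1.0 while `out_ctx` is 0.5, so a context-sensitive scheme would assign y a high weight only when x=1, which is exactly the behavior global weighting methods cannot express.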
Relieving the elicitation burden of Bayesian Belief Networks
Abstract

Cited by 1 (1 self)
In this paper we present a new method (EBBN) that aims at reducing the need to elicit formidable amounts of probabilities for Bayesian belief networks, by reducing the number of probabilities that need to be specified in the quantification phase. This method enables the derivation of a variable's conditional probability table (CPT) in the general case that the states of the variable are ordered and the states of each of its parent nodes can be ordered with respect to the influence they exercise. EBBN requires only a limited amount of probability assessments from experts to determine a variable's full CPT and uses piecewise linear interpolation. The number of probabilities to be assessed in this method is linear in the number of conditioning variables. EBBN's performance was compared with the results achieved by applying the normal copula vine approach of Hanea & Kurowicka (2007) and by using a simple uniform distribution.
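The interpolation idea behind such elicitation-reducing methods can be sketched as follows. This is an illustrative simplification, not EBBN itself: it assumes two elicited anchor distributions (the most and least favourable parent configurations) and a scalar position parameter, whereas the actual method derives positions from the orderings of parent-state influences.

```python
def interpolate_distribution(p_low, p_high, t):
    """Piecewise-linear step: blend two elicited distributions for t in [0, 1].
    A convex combination of probability distributions is itself a distribution,
    so every interpolated CPT column remains properly normalized."""
    assert abs(sum(p_low) - 1.0) < 1e-9 and abs(sum(p_high) - 1.0) < 1e-9
    assert 0.0 <= t <= 1.0
    return [(1 - t) * a + t * b for a, b in zip(p_low, p_high)]

# A parent configuration judged halfway between the two elicited extremes:
col = interpolate_distribution([0.9, 0.1], [0.2, 0.8], 0.5)  # [0.55, 0.45]
```

The expert thus assesses only the anchor columns (a number of probabilities linear in the parents) and the interpolation fills in the exponentially many remaining CPT columns.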