Results 1–10 of 13
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
, 1996
Abstract
Cited by 54 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
A Review of Uncertainty Handling Formalisms
, 1998
Abstract
Cited by 21 (1 self)
Many different formal techniques, both numerical and symbolic, have been developed over the past two decades for dealing with incomplete and uncertain information. In this paper we review some of the most important of these formalisms, describing how they work, and in what ways they differ from one another. We also consider heterogeneous approaches which incorporate two or more approximate reasoning mechanisms within a single reasoning system. These have been proposed to address limitations in the use of individual formalisms.
Foundations for Bayesian networks
, 2001
Abstract
Cited by 11 (7 self)
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the
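The independence assumption discussed above says that each node is probabilistically independent of its non-descendants given its parents, so the graph plus the specified conditional probabilities determine a single joint distribution as a product of local terms. A minimal sketch of that factorization, using a hypothetical three-node network and illustrative numbers (none of which come from the chapter):

```python
# Hypothetical sketch: the joint distribution a Bayesian network determines
# under the independence (causal Markov) assumption. The network
# Rain -> WetGrass <- Sprinkler and all numbers are illustrative.
p_rain = 0.2
p_sprinkler = 0.1
# P(WetGrass=1 | Rain, Sprinkler), indexed by (rain, sprinkler)
p_wet = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}

def joint(rain, sprinkler, wet):
    """Markov condition: joint = product of P(node | parents)."""
    pr = p_rain if rain else 1 - p_rain
    ps = p_sprinkler if sprinkler else 1 - p_sprinkler
    pw = p_wet[(rain, sprinkler)] if wet else 1 - p_wet[(rain, sprinkler)]
    return pr * ps * pw

# The eight joint probabilities must sum to 1.
total = sum(joint(r, s, w) for r in (0, 1) for s in (0, 1) for w in (0, 1))
```

The failures catalogued in §2 are precisely situations where this product formula no longer matches the objective distribution, even though the network still defines one formally.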
Learning Structure from Data and its Application to Ozone Prediction
 Appl. Intell
, 1997
Abstract
Cited by 7 (4 self)
In this paper we propose an algorithm for structure learning in predictive expert systems based on a probabilistic network representation. The idea is to have the "simplest" structure (minimum number of links) with acceptable predictive capability. The algorithm starts by building a tree structure based on measuring mutual information between pairs of variables, and then it adds links as necessary to obtain a certain predictive performance. We have applied this method to ozone prediction in Mexico City, where the ozone level is used as a global indicator of air quality in different parts of the city. It is important to predict the ozone level a day, or at least several hours, in advance, to reduce the health hazards and industrial losses that occur when the ozone reaches emergency levels. As a first approximation we obtained a tree-structured dependency model for predicting ozone in one part of the city. We observe that even with only three parameters, its estimations are accepta...
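The first stage described in this abstract, connecting variable pairs with high mutual information into a tree skeleton, can be sketched as a Chow-Liu-style maximum spanning tree. The function names and the toy data are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: build a tree skeleton over discrete variables by
# greedily keeping the highest-mutual-information edges (Kruskal's algorithm
# with a union-find). This mirrors the paper's first stage only; the second
# stage (adding extra links for predictive performance) is not shown.
from itertools import combinations
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete sequences."""
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for x, y in zip(xs, ys):
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def tree_skeleton(data):
    """data: dict of variable name -> list of observations. Returns tree edges."""
    variables = list(data)
    scored = sorted(
        ((mutual_information(data[a], data[b]), a, b)
         for a, b in combinations(variables, 2)),
        reverse=True)
    parent = {v: v for v in variables}
    def find(v):  # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    edges = []
    for _, a, b in scored:  # keep an edge only if it joins two components
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            edges.append((a, b))
    return edges
```

On a dataset where two variables are strongly dependent and a third is nearly independent, the skeleton links the dependent pair directly and attaches the third variable by its best remaining edge.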
Automated Endoscope Navigation and Advisory System from medical imaging
, 1999
Abstract
Cited by 4 (0 self)
In this paper, we present a review of the research conducted by our group to design an automatic endoscope navigation and advisory system. The whole system can be viewed as a two-layer system. The first layer is at the signal level, which consists of the processing that will be performed on a series of images to extract all the identifiable features. The information is purely dependent on what can be extracted from the 'raw' images. At the signal level, the first task is performed by detecting a single dominant feature, the lumen. A few methods of identifying the lumen are proposed. The first method uses contour extraction. Contours are extracted by edge detection, thresholding and linking. This method requires images to be divided into overlapping squares (8 by 8 or 4 by 4) from which line segments are extracted using a Hough transform. Perceptual criteria such as proximity, connectivity, similarity in orientation, contrast and edge pixel intensity are used to group both strong and weak edges. This approach is called perceptual grouping. The second method is based on region extraction using a split-and-merge approach on spatial domain data. An n-level (for a 2^n by 2^n image) quadtree-based pyramid structure is constructed to find the most homogeneous large dark region, which in most cases corresponds to the lumen. The algorithm constructs the quadtree from the bottom (pixel) level upward, recursively, and computes the mean and variance of the image regions corresponding to quadtree nodes. On reaching the root, the largest uniform seed region, whose mean corresponds to the lumen, is selected and grown by merging with its neighboring regions. In addition to the use of two-dimensional information in the form of regions and contours, three-dimensional shape can provide additi...
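The bottom-up quadtree stage of the second lumen-detection method can be sketched as follows. The thresholds, the tiny test image, and the seed-selection rule (largest homogeneous block, darkest mean) are illustrative assumptions; the paper's actual criteria and the region-growing step are not reproduced:

```python
# Hedged sketch of a bottom-up quadtree over a 2^n-by-2^n grayscale image:
# compute mean and variance per node, then pick the largest homogeneous dark
# block as a seed region (region growing by merging neighbors is omitted).

def quadtree_stats(img):
    """img: 2^n x 2^n list of lists. Returns {(row, col, size): (mean, var)}."""
    size = len(img)
    stats = {}
    for r in range(size):          # level 0: single pixels, zero variance
        for c in range(size):
            stats[(r, c, 1)] = (float(img[r][c]), 0.0)
    s = 1
    while s < size:                # merge 4 children into each parent node
        for r in range(0, size, 2 * s):
            for c in range(0, size, 2 * s):
                kids = [stats[(r + dr, c + dc, s)]
                        for dr in (0, s) for dc in (0, s)]
                mean = sum(m for m, _ in kids) / 4.0
                # pooled variance: child variances plus between-child spread
                var = sum(v + (m - mean) ** 2 for m, v in kids) / 4.0
                stats[(r, c, 2 * s)] = (mean, var)
        s *= 2
    return stats

def darkest_uniform_region(stats, min_size=2, var_thresh=5.0):
    """Seed region: prefer larger homogeneous blocks, then darker means."""
    candidates = [(sz, mean, (r, c))
                  for (r, c, sz), (mean, var) in stats.items()
                  if sz >= min_size and var <= var_thresh]
    return min(candidates, key=lambda t: (-t[0], t[1])) if candidates else None
```

On an image with a uniformly dark quadrant, the selected seed is that quadrant's block, which would then be grown by merging similar neighboring regions as the abstract describes.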
Probabilistic Reasoning and Multiple-Expert Methodology for Correlated Objective Data
 Artificial Intelligence in Engineering
, 1998
Abstract
Cited by 1 (1 self)
In this paper, a numerical expert system using probabilistic reasoning, with an influence structure generated from the observed data, is demonstrated. Instead of using an expert to encode the influence diagram, the system has the capability to construct it from the objective data. In cases where data are correlated, instead of compromising performance by wrestling with different influence structures based on the assumption that all the environment variables are observed, we incorporated the flexibility of including unobservable variables in our system. The resulting methodology minimised the intervention of a domain expert during modelling and improved the system's performance. Global optimisation using all variables is often very difficult and unmanageable in probabilistic network construction. In our approach, we group all the variables into subsets and generate advice for these subsets of features using multiple small probabilistic networks, and then seek to aggregate these into a consensus output. We proposed a probabilistic aggregation using the joint probability of the data and model approaches. In this approach, we avoided the very high-dimensional integration over all possible parameter configurations. The resulting system has the benefits of a multiple-expert system and is easily expandable when new information is to be added. © 1997 Elsevier Science Limited. Key words: probabilistic network, Bayesian inference, multiple-expert system, unobservable variables.
An Object Oriented Shell For Probabilistic Reasoning In Expert Systems
Abstract
A new way of representing and reasoning with uncertainty in expert systems is developing, namely Probabilistic Networks. Probabilistic networks are graphical structures used for representing expert knowledge, drawing conclusions from input data and explaining the reasoning process to the user. We developed a shell for uncertainty management in expert systems based on Probabilistic Networks. The shell allows the user to define a probabilistic model in a friendly form, by providing a graphical user interface. It includes techniques for probability propagation, parameter learning and evaluation. Using this shell, we are developing two applications: fault diagnosis in electrical networks and ozone prediction in Mexico City. Key words: expert systems, Bayesian networks, human/machine interface. 1 Introduction. Knowledge-based systems are being applied in many areas in which the information is uncertain, such as medical diagnosis, speech recognition and image understanding. Uncertainty aris...
Probabilistic Graphical Models in Artificial Intelligence
Abstract
In this paper, we review the role of probabilistic graphical models in artificial intelligence. We start by giving an account of the early years when there was important controversy about the suitability of probability for intelligent systems. We then discuss the main milestones for the foundations of graphical models starting with Pearl’s pioneering work. Some of the main techniques for problem solving (abduction, classification, and decision making) are briefly explained. Finally, we propose some important challenges for future research and highlight relevant applications (forensic reasoning, genomics and the use of graphical models as a general optimization tool).