Results 1-10 of 49
Reasoning about Beliefs and Actions under Computational Resource Constraints
 In Proceedings of the 1987 Workshop on Uncertainty in Artificial Intelligence
, 1987
"... ion Modulation In many cases, it may be more useful to do normative inference on a model that is deemed to be complete at a particular level of abstraction than it is to do an approximate or heuristic analysis of a model that is too large to be analyzed under specific resource constraints. It may pr ..."
Abstract

Cited by 179 (18 self)
• Abstraction Modulation: In many cases, it may be more useful to do normative inference on a model that is deemed to be complete at a particular level of abstraction than it is to do an approximate or heuristic analysis of a model that is too large to be analyzed under specific resource constraints. It may prove useful in many cases to store several belief-network representations, each containing propositions at different levels of abstraction. In many domains, models at higher levels of abstraction are more tractable. As the time available for computation decreases, network modules of increasing abstraction can be employed. • Local Reformulation: Local reformulation is the modification of specific troublesome topologies in a belief network. Approximation methods and heuristics designed to modify the microstructure of belief networks will undoubtedly be useful in the tractable solution of large uncertain-reasoning problems. Such strategies might be best applied at knowledge-encoding time. An...
Decision Theory in Expert Systems and Artificial Intelligence
 International Journal of Approximate Reasoning
, 1988
"... Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision ..."
Abstract

Cited by 89 (18 self)
Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems sett...
Toward normative expert systems: Part I. The Pathfinder project
 Methods Inf. Med
, 1992
"... Pathfinder is an expert system that assists surgical pathologists with the diagnosis of lymphnode diseases. The program is one of a growing number of normative expert systems that use probability and decision theory to acquire, represent, manipulate, and explain uncertain medical knowledge. In this ..."
Abstract

Cited by 83 (15 self)
Pathfinder is an expert system that assists surgical pathologists with the diagnosis of lymph-node diseases. The program is one of a growing number of normative expert systems that use probability and decision theory to acquire, represent, manipulate, and explain uncertain medical knowledge. In this article, we describe Pathfinder and our research in uncertain-reasoning paradigms that was stimulated by the development of the program. We discuss limitations with early decision-theoretic methods for reasoning under uncertainty and our initial attempts to use non-decision-theoretic methods. Then, we describe experimental and theoretical results that directed us to return to reasoning methods based in probability and decision theory.
From Laplace to Supernova SN 1987A: Bayesian Inference in Astrophysics
, 1990
"... . The Bayesian approach to probability theory is presented as an alternative to the currently used longrun relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions ..."
Abstract

Cited by 51 (2 self)
The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The reasons for earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by application to a simple problem based on the Gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the measurement of wea...
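The Gaussian parameter-estimation problem mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's own worked example: it assumes a known noise standard deviation and a flat prior on the mean, under which the posterior for the mean is itself Gaussian.

```python
import math

def gaussian_mean_posterior(data, sigma):
    """Posterior over the mean of a Gaussian with known standard
    deviation `sigma`, under a flat prior: the posterior is Gaussian
    with mean equal to the sample mean and standard deviation
    sigma / sqrt(n)."""
    n = len(data)
    mean = sum(data) / n
    return mean, sigma / math.sqrt(n)

# Four measurements with assumed instrument noise sigma = 0.2
post_mean, post_std = gaussian_mean_posterior([4.9, 5.1, 5.0, 5.2], sigma=0.2)
```

As more data arrive, the posterior standard deviation shrinks as 1/sqrt(n), which is the Bayesian counterpart of the familiar standard error of the mean.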
A decision theoretic framework for approximating concepts
 International Journal of Manmachine Studies
, 1992
"... This paper explores the implications of approximating a concept based on the Bayesian decision procedure, which provides a plausible unification of the fuzzy set and rough set approaches for approximating a concept. We show that if a given concept is approximated by one set, the same result given by ..."
Abstract

Cited by 36 (20 self)
This paper explores the implications of approximating a concept based on the Bayesian decision procedure, which provides a plausible unification of the fuzzy set and rough set approaches for approximating a concept. We show that if a given concept is approximated by one set, the same result given by the α-cut in fuzzy set theory is obtained. On the other hand, if a given concept is approximated by two sets, we can derive both the algebraic and probabilistic rough set approximations. Moreover, based on the well-known principle of maximum (minimum) entropy, we give a useful interpretation of fuzzy intersection and union. Our results enhance the understanding and broaden the applications of both fuzzy and rough sets.
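The one-set versus two-set distinction in the abstract can be made concrete with a small sketch. The function names and thresholds below are illustrative, not from the paper: a single threshold gives the fuzzy α-cut, while a pair of thresholds splits elements into a positive region and a boundary region in the style of rough set approximation.

```python
def alpha_cut(membership, alpha):
    """One-set approximation: the alpha-cut keeps every element whose
    fuzzy membership grade meets the threshold alpha."""
    return {x for x, mu in membership.items() if mu >= alpha}

def two_set_approximation(membership, lower, upper):
    """Two-set (rough-set-style) approximation: a positive region of
    elements clearly in the concept, and a boundary region of
    undecided elements between the two thresholds."""
    positive = {x for x, mu in membership.items() if mu >= upper}
    boundary = {x for x, mu in membership.items() if lower < mu < upper}
    return positive, boundary

# Hypothetical membership grades for a concept over four elements
mu = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.1}
core = alpha_cut(mu, 0.5)
positive, boundary = two_set_approximation(mu, 0.25, 0.75)
```

In the paper's framing, the thresholds themselves are not arbitrary: they fall out of the Bayesian decision procedure's loss functions.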
Imprecision in Engineering Design
 ASME JOURNAL OF MECHANICAL DESIGN
, 1995
"... Methods for incorporating imprecision in engineering design decisionmaking are briefly reviewed and compared. A tutorial is presented on the Method of Imprecision (MoI), a formal method, based on the mathematics of fuzzy sets, for representing and manipulating imprecision in engineering design. The ..."
Abstract

Cited by 33 (6 self)
Methods for incorporating imprecision in engineering design decision-making are briefly reviewed and compared. A tutorial is presented on the Method of Imprecision (MoI), a formal method, based on the mathematics of fuzzy sets, for representing and manipulating imprecision in engineering design. The results of a design cost estimation example, utilizing a new informal cost specification, are presented. The MoI can provide formal information upon which to base decisions during preliminary engineering design and can facilitate set-based concurrent design.
Aggregation Functions for Engineering Design Tradeoffs
, 1998
"... The choice of an aggregation function is a common problem in Multi Attribute Decision Making (MADM) systems. The Method of Imprecision (MoI) is a formal theory for the manipulation of preliminary design information that represents preferences among design alternatives with the mathematics of fuzzy s ..."
Abstract

Cited by 25 (14 self)
The choice of an aggregation function is a common problem in Multi-Attribute Decision Making (MADM) systems. The Method of Imprecision (MoI) is a formal theory for the manipulation of preliminary design information that represents preferences among design alternatives with the mathematics of fuzzy sets. The MoI formulates the preliminary design problem as a MADM problem. To date, two aggregation functions have been developed for the MoI, one representing a compensating strategy and one a non-compensating strategy. Much of the prior fuzzy sets research on aggregation functions has been inappropriate for application to engineering design. In this paper, the selection of an aggregation function for MADM schemes is discussed within the context of the MoI. The general restrictions on design-appropriate aggregation functions are outlined, and a family of functions, modeling a range of trade-off strategies, is presented. The results are illustrated with an example.
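The compensating versus non-compensating strategies named in the abstract can be sketched with two simple aggregation functions. These are generic illustrations, not the MoI's actual functions: a weighted arithmetic mean lets strength in one attribute offset weakness in another, while a worst-case (min) rule does not.

```python
def non_compensating(prefs):
    """Non-compensating strategy: the overall preference is limited
    by the weakest attribute (worst-case aggregation)."""
    return min(prefs)

def compensating(prefs, weights):
    """Compensating strategy: a weighted arithmetic mean, where a
    strong attribute can offset a weak one."""
    return sum(w * p for w, p in zip(weights, prefs)) / sum(weights)

# Hypothetical preference ranks for two design attributes
prefs = [0.9, 0.4]
worst_case = non_compensating(prefs)            # dominated by the 0.4
averaged = compensating(prefs, [1.0, 1.0])      # midpoint of the two
```

A family of functions interpolating between these two extremes is exactly the kind of "range of trade-off strategies" the abstract describes.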
The method of imprecision compared to utility theory for design selection problems
 In Proceedings of the 1993 ASME Design Theory and Methodology Conference
, 1993
"... Two methods have been proposed for manipulating uncertainty reflecting designer choice: utility theory and the method of imprecision. Both methods represent this uncertainty across decision making attributes with zero to one ranks, higher preference modeled with a higher rank. The two methods can di ..."
Abstract

Cited by 20 (5 self)
Two methods have been proposed for manipulating uncertainty reflecting designer choice: utility theory and the method of imprecision. Both methods represent this uncertainty across decision-making attributes with zero-to-one ranks, with higher preference modeled by a higher rank. The two methods can differ, however, in the combination metrics used to combine the ranks of the incommensurate design attributes. Utility theory resolves the multiple attributes using various well-proven additive metrics. In contrast, the method of imprecision also considers non-additive metrics, such as ranking by the worst-case performance, or multiplicative metrics. The axioms of utility theory are appropriate for designs where it is deemed the attributes can always be traded off, even to the point of achieving zero preference in some attributes. In the case of a design with attributes which cannot have zero preference, such as stress limits or maximum allowed cost, the method of imprecision is more appropriate: it trades off attribute levels without permitting any of them to be traded off to zero performance.
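The key behavioral difference claimed above, that additive metrics can fully compensate a zero-preference attribute while multiplicative ones cannot, is easy to demonstrate. The functions below are illustrative sketches, not the metrics from either paper:

```python
def additive(prefs, weights):
    """Utility-style additive metric: a zero-preference attribute can
    still be compensated by high ranks elsewhere."""
    return sum(w * p for w, p in zip(weights, prefs))

def multiplicative(prefs):
    """Multiplicative metric in the spirit of the method of
    imprecision: any zero-preference attribute (e.g. a violated
    stress limit) drives the overall rank to zero."""
    result = 1.0
    for p in prefs:
        result *= p
    return result

# One attribute at zero preference, the other at full preference
ranks = [0.0, 1.0]
additive_rank = additive(ranks, [0.5, 0.5])   # nonzero despite the violation
multiplicative_rank = multiplicative(ranks)   # zero: the design is rejected
```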
FORMALISMS FOR NEGOTIATION IN ENGINEERING DESIGN
, 1996
"... Engineering projects often undergo several design iterations before being completed. Information received from other groups working on a project (analysis, manufacturing, marketing, sales) will often necessitate changes in a design. The interaction between different groups associated with a design p ..."
Abstract

Cited by 20 (5 self)
Engineering projects often undergo several design iterations before being completed. Information received from other groups working on a project (analysis, manufacturing, marketing, sales) will often necessitate changes in a design. The interaction between different groups associated with a design project often takes the form of informal “negotiation.” This form of interaction commonly arises when engineering information is imprecise. The Method of Imprecision (MoI) is a formal method for the representation and manipulation of preliminary and imprecise design information. It provides a mechanism for the formalization of these informal negotiations. The nature and scope of informal negotiation in engineering is explored and discussed, and application of the MoI is illustrated with an example.
Lattice duality: The origin of probability and entropy
 In press: Neurocomputing
, 2005
"... Bayesian probability theory is an inference calculus, which originates from a generalization of inclusion on the Boolean lattice of logical assertions to a degree of inclusion represented by a real number. Dual to this lattice is the distributive lattice of questions constructed from the ordered set ..."
Abstract

Cited by 19 (6 self)
Bayesian probability theory is an inference calculus, which originates from a generalization of inclusion on the Boolean lattice of logical assertions to a degree of inclusion represented by a real number. Dual to this lattice is the distributive lattice of questions constructed from the ordered set of down-sets of assertions, which forms the foundation of the calculus of inquiry, a generalization of information theory. In this paper we introduce this novel perspective on the spaces in which machine learning is performed and discuss the relationship between these results and several proposed generalizations of information theory in the literature.