Results 1 - 10 of 218
Partial Constraint Satisfaction
, 1992
"... . A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to solve these problems completely. We may seek to partially solve the problem, in particular by satisfying ..."
Abstract

Cited by 427 (23 self)
 Add to MetaCart
A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to solve these problems completely. We may seek to partially solve the problem, in particular by satisfying a maximal number of constraints. Standard backtracking and local consistency techniques for solving constraint satisfaction problems can be adapted to cope with, and take advantage of, the differences between partial and complete constraint satisfaction. Extensive experimentation on maximal satisfaction problems illuminates the relative and absolute effectiveness of these methods. A general model of partial constraint satisfaction is proposed.

1 Introduction
Constraint satisfaction involves finding values for problem variables subject to constraints on acceptable combinations of values. Constraint satisfaction has wide application in artificial intelligence, in areas ranging from temporal r...
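The adaptation of standard backtracking that the abstract describes can be illustrated with a minimal branch-and-bound sketch: instead of failing on the first violated constraint, the search tracks the fewest violations seen so far and prunes branches that cannot beat it. All names here (`max_csp`, the dict-based constraint encoding) are illustrative, not from the paper.

```python
def max_csp(variables, domains, constraints):
    """Branch-and-bound search for an assignment violating the fewest
    constraints (maximal constraint satisfaction).

    variables:   list of variable names
    domains:     dict variable -> list of candidate values
    constraints: list of ((u, v), predicate) binary constraints,
                 where predicate(val_u, val_v) -> bool
    """
    best = {"violations": len(constraints) + 1, "assignment": None}

    def violated(assignment):
        # Count constraints whose variables are both assigned and which fail.
        count = 0
        for (u, v), ok in constraints:
            if u in assignment and v in assignment and not ok(assignment[u], assignment[v]):
                count += 1
        return count

    def search(i, assignment):
        cost = violated(assignment)
        if cost >= best["violations"]:   # bound: this branch cannot improve
            return
        if i == len(variables):
            best["violations"] = cost
            best["assignment"] = dict(assignment)
            return
        var = variables[i]
        for value in domains[var]:
            assignment[var] = value
            search(i + 1, assignment)
            del assignment[var]

    search(0, {})
    return best["assignment"], best["violations"]
```

On an over-constrained instance such as 2-coloring a triangle (three mutually unequal variables, two values), complete satisfaction is impossible and the search returns an assignment violating exactly one constraint.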
Operations for Learning with Graphical Models
 Journal of Artificial Intelligence Research
, 1994
"... This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Wellknown examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models ..."
Abstract

Cited by 249 (12 self)
 Add to MetaCart
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feedforward networks, and learning Gaussian and discrete Bayesian networks from data. The paper conclu...
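One of the two algorithm schemas the review covers, Gibbs sampling, can be sketched in a few lines: each variable is repeatedly resampled from its conditional distribution given the current values of the others. The two-variable joint table below is an invented toy example, not one from the paper.

```python
import random

# Unnormalized joint over two binary variables (illustrative numbers only).
JOINT = {(0, 0): 4.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 3.0}

def gibbs(n_samples, rng, burn_in=500):
    """Gibbs sampling: alternately resample each variable from its
    full conditional given the current value of the other."""
    a, b = 0, 0
    samples = []
    for t in range(burn_in + n_samples):
        # p(a=1 | b) is proportional to JOINT[(1, b)]
        p1 = JOINT[(1, b)] / (JOINT[(0, b)] + JOINT[(1, b)])
        a = 1 if rng.random() < p1 else 0
        # p(b=1 | a) is proportional to JOINT[(a, 1)]
        p1 = JOINT[(a, 1)] / (JOINT[(a, 0)] + JOINT[(a, 1)])
        b = 1 if rng.random() < p1 else 0
        if t >= burn_in:
            samples.append((a, b))
    return samples
```

The empirical marginal of the samples approaches the true marginal of the joint table, e.g. P(a=1) = (1+3)/10 = 0.4 here.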
How to improve Bayesian reasoning without instruction: Frequency formats
 Psychological Review
, 1995
"... Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one s ..."
Abstract

Cited by 220 (21 self)
 Add to MetaCart
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference. Is the mind, by design, predisposed against performing Bayesian inference? The classical probabilists of the Enlightenment, including Condorcet, Poisson, and Laplace, equated probability theory with the common sense of educated people, who were known then as “hommes éclairés.” Laplace (1814/1951) declared that “the theory of probability is at bottom nothing more than good sense reduced to a calculus which evaluates that which good minds know by a sort of instinct,
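The computational claim, that the same Bayesian inference is simpler over natural frequencies than over probabilities, can be checked directly. The sketch below uses the often-cited mammography numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate) as an assumed example; the probability route needs Bayes' rule, while the frequency route is one division over two counts.

```python
from fractions import Fraction

# Probability format: Bayes' rule applied to conditional probabilities.
prior = Fraction(1, 100)          # P(disease)
sens = Fraction(80, 100)          # P(positive | disease)
false_pos = Fraction(96, 1000)    # P(positive | no disease)

posterior = (prior * sens) / (prior * sens + (1 - prior) * false_pos)

# Frequency format: the same inference over natural frequencies.
# Of 1000 people, 10 have the disease and 8 of them test positive;
# of the 990 without it, about 95 test positive.
d_pos = 1000 * prior * sens                # disease & positive: 8
nd_pos = 1000 * (1 - prior) * false_pos    # no disease & positive: ~95
freq_posterior = d_pos / (d_pos + nd_pos)
```

Both routes give the identical posterior (about 0.078), but the frequency version never manipulates a conditional probability.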
Numerical Uncertainty Management in User and Student Modeling: An Overview of Systems and Issues
, 1996
"... . A rapidly growing number of user and student modeling systems have employed numerical techniques for uncertainty management. The three major paradigms are those of Bayesian networks, the DempsterShafer theory of evidence, and fuzzy logic. In this overview, each of the first three main sections fo ..."
Abstract

Cited by 104 (10 self)
 Add to MetaCart
A rapidly growing number of user and student modeling systems have employed numerical techniques for uncertainty management. The three major paradigms are those of Bayesian networks, the Dempster-Shafer theory of evidence, and fuzzy logic. In this overview, each of the first three main sections focuses on one of these paradigms. It first introduces the basic concepts by showing how they can be applied to a relatively simple user modeling problem. It then surveys systems that have applied techniques from the paradigm to user or student modeling, characterizing each system within a common framework. The final main section discusses several aspects of the usability of these techniques for user and student modeling, such as their knowledge engineering requirements, their need for computational resources, and the communicability of their results.
Key words: numerical uncertainty management, Bayesian networks, Dempster-Shafer theory, fuzzy logic, user modeling, student modeling
1. Introdu...
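Of the three paradigms named above, Dempster-Shafer theory has the most self-contained core operation: Dempster's rule of combination, which merges two basic mass assignments and renormalizes away the conflicting mass. A minimal sketch, with focal elements as `frozenset`s (an encoding choice of this example, not of the survey):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    m1, m2: dict frozenset -> mass. Products landing on the empty set
    are conflict, which is normalized away."""
    combined = {}
    conflict = 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}
```

For example, over the frame {flu, cold}, combining evidence m1({flu}) = 0.6, m1(Θ) = 0.4 with m2({flu}) = 0.7, m2(Θ) = 0.3 concentrates 0.88 of the mass on {flu} while 0.12 stays on the whole frame.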
Decision Theory in Expert Systems and Artificial Intelligence
 International Journal of Approximate Reasoning
, 1988
"... Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision ..."
Abstract

Cited by 89 (18 self)
 Add to MetaCart
Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems sett...
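At the core of the decision-theoretic framework the survey promotes is a single operation: choose the action maximizing expected utility under a probability distribution over states. A minimal sketch (the umbrella example and all numbers are invented for illustration):

```python
def best_action(actions, p_state, utility):
    """Return the expected-utility-maximizing action and its value.
    p_state: dict state -> probability (summing to 1)
    utility: dict (action, state) -> numeric utility
    """
    def eu(action):
        return sum(p * utility[(action, s)] for s, p in p_state.items())
    chosen = max(actions, key=eu)
    return chosen, eu(chosen)
```

With P(rain) = 0.3 and utilities U(take umbrella, rain) = 70, U(take, dry) = 80, U(leave, rain) = 0, U(leave, dry) = 100, taking the umbrella wins with expected utility 77 versus 70.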
Hierarchical Constraint Logic Programming
, 1993
"... A constraint describes a relation to be maintained ..."
Abstract

Cited by 67 (3 self)
 Add to MetaCart
A constraint describes a relation to be maintained
Adaptive provision of evaluation-oriented information: Tasks and techniques
 Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence
, 1995
"... Evaluationoriented information provision is a function performed by many systems that serve as personal assistants, advisors, or sales assistants. Five general tasks are distinguished which need to be addressed by such systems. For each task, techniques employed in a sample of systems are discussed ..."
Abstract

Cited by 60 (9 self)
 Add to MetaCart
Evaluation-oriented information provision is a function performed by many systems that serve as personal assistants, advisors, or sales assistants. Five general tasks are distinguished which need to be addressed by such systems. For each task, techniques employed in a sample of systems are discussed, and it is shown how the lessons learned from these systems can be taken into account with a set of unified techniques that make use of well-understood concepts and principles from Multi-Attribute Utility Theory and Bayesian networks. These techniques are illustrated as realized in the dialog system PRACMA.
Preference Ratios in Multiattribute Evaluation (PRIME) - Elicitation and Decision Procedures Under Incomplete Information
, 2001
"... This paper presents the preference ratios in multiattribute evaluation (PRIME) method which supports the analysis of incomplete information in multiattribute weighting models. In PRIME, preference elicitation and synthesis is based on 1) the conversion of possibly imprecise ratio judgments into an i ..."
Abstract

Cited by 59 (20 self)
 Add to MetaCart
This paper presents the preference ratios in multiattribute evaluation (PRIME) method, which supports the analysis of incomplete information in multiattribute weighting models. In PRIME, preference elicitation and synthesis are based on 1) the conversion of possibly imprecise ratio judgments into an imprecisely specified preference model, 2) the use of dominance structures and decision rules in deriving decision recommendations, and 3) the sequencing of the elicitation process into a series of elicitation tasks. This process may be continued until the most preferred alternative is identified or, alternatively, stopped with a decision recommendation if the decision maker is prepared to accept the possibility that the value of some other alternative is higher. An extensive simulation study on the computational properties of PRIME is presented. The method is illustrated with a reanalysis of an earlier case study on international oil tanker negotiations.
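The dominance idea the abstract mentions can be made concrete for an additive value model with interval weight bounds: alternative x dominates y if its score is at least y's for every feasible weight vector. Since the objective is linear, it suffices to check the vertices of the feasible region (weights in their intervals, summing to 1), which have all but one coordinate at an interval endpoint. This is a generic sketch under that assumption, not the PRIME procedure itself; `dominates` and its encoding are invented names.

```python
from itertools import product

def dominates(x_scores, y_scores, bounds):
    """True iff x weakly dominates y for every weight vector w with
    bounds[i][0] <= w[i] <= bounds[i][1] and sum(w) == 1.
    Assumes the bounds admit at least one such weight vector.
    Minimizes the linear score difference over the polytope's vertices."""
    n = len(bounds)
    diff = [xs - ys for xs, ys in zip(x_scores, y_scores)]
    eps = 1e-9
    worst = float("inf")
    for j in range(n):                       # j is the one "free" weight
        others = [i for i in range(n) if i != j]
        for ends in product(*[bounds[i] for i in others]):
            wj = 1.0 - sum(ends)
            if bounds[j][0] - eps <= wj <= bounds[j][1] + eps:
                w = [0.0] * n
                for i, v in zip(others, ends):
                    w[i] = v
                w[j] = wj
                worst = min(worst, sum(wi * di for wi, di in zip(w, diff)))
    return worst >= -eps
```

For two attributes with weights each in [0.3, 0.7], an alternative scoring (0.9, 0.8) dominates one scoring (0.5, 0.6) because it is better on both attributes under every admissible weighting.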
The Role of Aspiration Level in Risky Choice: A Comparison of Cumulative Prospect Theory and SP/A Theory
 Journal of Mathematical Psychology
, 1999
"... In recent years, descriptive models of risky choice have incorporated features that reflect the importance of particular outcome values in choice. Cumulative prospect theory (CPT) does this by inserting a reference point in the utility function. SP/A (securitypotential/aspiration) theory uses aspir ..."
Abstract

Cited by 53 (0 self)
 Add to MetaCart
In recent years, descriptive models of risky choice have incorporated features that reflect the importance of particular outcome values in choice. Cumulative prospect theory (CPT) does this by inserting a reference point in the utility function. SP/A (security-potential/aspiration) theory uses aspiration level as a second criterion in the choice process. Experiment 1 compares the ability of the CPT and SP/A models to account for the same within-subjects data set and finds in favor of SP/A. Experiment 2 replicates the main finding of Experiment 1 in a between-subjects design. The final discussion brackets the SP/A result by showing the impact on fit of both decreasing and increasing the number of free
Problem-Focused Incremental Elicitation of Multi-Attribute Utility Models
, 1997
"... Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations and tools f ..."
Abstract

Cited by 41 (3 self)
 Add to MetaCart
Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations and tools for elicitation of probabilities, relatively little work has addressed the elicitation of utility models. This imbalance is not particularly justified considering that probability models are relatively stable across problem instances, while utility models may be different for each instance. Spending large amounts of time on elicitation can be undesirable for interactive systems used in low-stakes decision making and in time-critical decision making. In this paper we investigate the issues of reasoning with incomplete utility models. We identify patterns of problem instances where plans can be proved to be suboptimal if the (unknown) utility function satisfies certain conditions. We present an...