A Challenging Example Background to Qualitative Decision Theory
BibTeX
@MISC{Doyle_achallenging,
  author = {Jon Doyle and Richmond H. Thomason},
  title = {A Challenging Example Background to Qualitative Decision Theory},
  note = {AI Magazine, Summer 1999},
  year = {1999}
}
Abstract
This article provides an overview of the field of qualitative decision theory: its motivating tasks and issues, its antecedents, and its prospects. Qualitative decision theory studies qualitative approaches to problems of decision making and their sound and effective reconciliation and integration with quantitative approaches. Although it inherits from a long tradition, the field offers a new focus on a number of important unanswered questions of common concern to AI, economics, law, psychology, and management.

The field of decision theory and its companion methodology of decision analysis deal with the merits and making of decisions. As developed by philosophers, economists, and mathematicians over some 300 years, these disciplines have developed many powerful ideas and techniques, which exert major influences over virtually all the biological, cognitive, and social sciences. Their uses range from providing mathematical foundations for microeconomics to daily application in a range of fields of practice, including finance, public policy, medicine, and now even automated device diagnosis. In spite of these remarkable achievements, the tools of traditional decision theory have not proven fully adequate for supporting recent attempts in AI to automate decision making. The field of qualitative decision theory aims to provide better support for automation efforts by developing qualitative and hybrid qualitative-quantitative representations and procedures that complement and improve the quantitative approach's ability to address the full range of decision-making tasks in the way such tasks appear within larger activities of planning, learning, and collaboration. The following brief survey of qualitative decision theory seeks to stimulate new work in the area and alert researchers in other areas to topics of mutual interest.
We first illustrate some of the motivations for pursuing more qualitative approaches and continue by examining the nature of traditional decision theory and analysis. We then identify a number of technical issues and topics for investigation. We provide sketches of representative results and work concerning these matters. Much of this work is incomplete and preliminary, providing many opportunities for further research. The concluding remarks seek to reflect on the available results to help set the context for future studies.

A Challenging Example

Naive and expert humans regularly and routinely solve decision problems that challenge the formalizations and methods of decision making that predominate in economics and AI. To illustrate this point, consider someone, call him Aethelred, who goes to meet with a financial planner. Aethelred brings a number of preferences to the meeting, some of them implicit and unexamined, some pretty abstract or generic, and many of them competing. He feels restless and dissatisfied and would rather retire early, although he will not be forced to retire on account of age. He is more timid about financial risk than he was when he was younger and unsure about the financial markets. He feels increasingly uneasy about his health (one reason he wants to retire early). When Aethelred meets with the planning expert, he somehow focuses on one part of this mix of preferences and beliefs and produces a goal. He says, "I want to retire at age 60." This opening remark clearly does not provide a total picture of his state of mind, nor does traditional decision theory provide a way of expressing the goal formally. A good advisor might explore what happens through planning from the supposed goal but will also be quite willing to challenge the goal itself.

Suppose, for example, that the expert describes some scenarios based on the announced goal, and Aethelred is unhappy with them all. The expert asks him why he wants to retire at 60. Aethelred finds he can't produce any very compelling reason. He chose the number 60 arbitrarily. His reasons for preserving his salary and employee benefits are much more compelling than his reasons for early retirement. The expert points out that a large proportion of early retirees are restless and dissatisfied with their retirement. At the end of the discussion, Aethelred's preferences (or the degrees of importance he attaches to preference factors) have changed. He decides to do nothing for at least five years and to try harder to enjoy his job before rethinking the matter. The expert has neither brainwashed nor compelled him but has instead helped Aethelred to reexamine and readjust his preferences.

This scenario exhibits features typical of many formal and informal advising situations. The advisor initially knows nothing about the advisee's preferences, possessing only expectations developed from stereotypes and evidence provided by the advisee's current financial situation. The advisor develops better information about these preferences in the course of the advisory dialog, but some uncertainties might remain even at the end of the conversation. The same situation obtains concerning the advisee's beliefs and subjective probabilities, for which the advisor refines or corrects initial ignorance or expectations in the course of the interview. The advisor might also treat information about the advisee's beliefs differently from information about preferences; rather than simply accepting both types of information as parameters defining the financial planning problem, the advisor might challenge or seek to correct the client's views, for example, on the likelihood of inflation, the riskiness of certain investments, or the permanence of current government policies. In discussing both preference and probability information, the advisor and advisee mainly stick to generalities and explanations of reasons and assumptions; they do not descend, at least in the preliminary stages, to talk about specific probability or utility values.

The communications and actions illustrated in this scenario involve nothing out of the ordinary but challenge both the theory of expected utility and the formalisms used in AI planning systems because these make no provision for making decisions in cases where there is uncertainty about goals and preferences. Neither do they provide a satisfactory model for rational revision of goals and preferences, for using stereotypes concerning values and beliefs in modeling an advisee, or for the incremental extraction of information about either values or beliefs through a conversational medium.

The scenario suggests the importance of these issues in advising tasks, but the issues also arise, perhaps even more strongly, in tasks involving planning and acting, for example, should Aethelred hire someone to manage his estate in accord with his preferences. To better understand how to respond to these challenges, we first reexamine the nature of decision theory and its standard methods of application.

Decision Theory and Its Discontents

Decision theory and decision analysis provide answers to four main questions:

1. What Is a Decision? A decision is a choice made by some entity of an action from some set of alternative actions. Decision theory has nothing to say about actions, either about their nature or about how a set of them becomes available to the decision maker. The main branch of decision theory treats decisions of individual actors, and other branches treat decisions made by groups of individuals or groups of groups. We focus here on choices made by individual decision makers.

2. What Makes a Decision Good? A good decision identifies an alternative that the decision maker believes will prove at least as good as other alternative actions. We here recount the standard answer about ideal decisions. Variant decision theories, touched on later, provide different answers, such as calling a decision good if it identifies an alternative that the decision maker considers good enough, even if not as good as other alternatives that might have been found through further deliberation.

3. How Should One Formalize Evaluation of Decisions? Good decisions are formally characterized as actions that maximize expected utility, a notion involving both belief and goodness. Decision theory develops this notion in stages. Let A stand for the set of actions or alternatives. Decision theory first presumes an association of a set of outcomes with each action. As with actions, the theory says little about the nature of outcomes (for example, whether they range over long or short intervals of time) or about the factors that determine the association, leaving it to the decision analyst to decide these things. Let Ω stand for the set of all outcomes identified with any actions (the union of those associated with each action). Second, the theory presumes a measure U of outcome value that assigns a utility U(ω) to each outcome ω ∈ Ω. To make this assignment possible, the theory requires that outcomes be identified so as to have some determinate value or utility in the context of the decision under consideration, to ensure that a single outcome cannot come about in ways that differ in value. Given this requirement, the theory takes outcomes as unrefinable because no refinements matter to the model of the decision. Third, the theory presumes a measure of the probability of outcomes conditional on actions, with Pr(ω | a) denoting the probability that outcome ω comes about after taking action a ∈ A in the situation under consideration. Using these elements, the theory defines the expected utility EU(a) of an action a as the average utility of the outcomes associated with the alternative, weighting the utility of each outcome by the probability that the outcome results from the alternative, that is,

EU(a) = Σ_{ω ∈ Ω} Pr(ω | a) U(ω).

Decision theory defines rational decision makers as those that always maximize expected utility. Again, we have described only the orthodox account; some approaches employ nonprobabilistic notions of belief, different notions of worth, or criteria other than expected utility maximization.

4. How Should One Formulate the Decision Problem Confronting a Decision Maker? One identifies alternatives, outcomes, probabilities, and utilities through an iterative process of hypothesizing, testing, and refining a sequence of tentative formulations. One identifies the alternatives and outcomes through direct queries or knowledge of the circumstances. (How can you intervene in this situation? How do you evaluate the success of an intervention?) One assesses probabilities and utilities either through direct queries (How likely do you think this outcome is? How much would you be willing to pay for this outcome?) or indirectly by inferring them from patterns of hypothetical choices. The indirect approach exploits fundamental theorems of decision theory showing that certain patterns uniquely determine the probabilities and utilities. (In particular, Savage's [1972] representation theorem provides an isomorphism between rational preferences over uncertain outcomes and quantitative representations in terms of expected utility.) One repeatedly improves tentative identifications of alternatives, outcomes, probabilities, and utilities by evaluating their completeness, consistency, conceptual convenience, sensitivity to errors in assessment, and relation to intuitive judgments, if any. Complicated cases can require many iterations and substantial analytic effort to obtain a satisfactory decision formulation.
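To make the formal machinery of question 3 concrete, here is a minimal sketch in Python of expected utility maximization over a toy version of Aethelred's choice. The action names, outcomes, probabilities, and utilities are invented for illustration; they are not drawn from the article.

```python
# Minimal sketch of the classical formalization: actions, outcomes,
# Pr(outcome | action), U(outcome), and choice by maximizing expected
# utility. All names and numbers are illustrative assumptions.

def expected_utility(action, prob, utility):
    """EU(a) = sum over outcomes w of Pr(w | a) * U(w)."""
    return sum(p * utility[w] for w, p in prob[action].items())

utility = {"comfortable": 1.0, "strained": 0.0}
prob = {
    "retire_at_60": {"comfortable": 0.4, "strained": 0.6},
    "retire_at_65": {"comfortable": 0.8, "strained": 0.2},
}

best = max(prob, key=lambda a: expected_utility(a, prob, utility))
print(best)  # retire_at_65
```

Note how much the sketch presumes: a fixed menu of actions, outcomes with determinate utilities, and fully specified conditional probabilities. It is exactly these preconditions that the rest of the article questions.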
These central questions by no means exhaust the important practical or philosophical issues. Philosophical criticisms of utility-based approaches to good decisions go back at least to the debate over utilitarianism in the late eighteenth and early nineteenth centuries. They include doubts about the existence of any universal currency for reconciling very different sorts of goods and evils (see, for example, chapter 7 of Hare [1963]).

The issues that arise in automated decision making recall some of the traditional philosophical questions but locate the debate in a new setting by focusing on the need for efficient reasoning in the very complex decision-making cases that can arise in human affairs. Quantitative representations of probability and utility, and procedures for computing with these representations, do provide an adequate framework for manual treatment of very simple decision problems, but they are less successful in more complicated and realistic cases. For example, virtually every autonomous system, whether it be a robotic spacecraft exploring the solar system or a housecleaning robot exploring under a third-grader's bed, must possess the ability to formulate decisions in unforeseen settings. Performing these tasks might require unforeseen preferences and call for the structuring of the new decision problem by applying general guidelines to the particulars of the situation. Such creativity in individual decision making requires reasoning quite similar in quality to the advisory reasoning that we found in the "challenging example" presented earlier. Simply attempting to mechanize elements of the traditional decision-analytic approach is challenging in itself and provides many highly useful projects. (See Linden, Hanks, and Lesh [1997], for example, for an application of multiattribute utility theory to infer user preferences over fully determinate outcomes.)
One can get even further by seeking to apply standard knowledge representation methods for structuring and representing decision-analytic information, as can be seen in the developing field of automated decision-model construction. (See Wellman, Breese, and Goldman [1992] for an overview of this area.)

Information-Limited Rationality

Traditional decision theory provides an account of the information that, in principle, suffices for making a rational decision. In practice, however, the decision maker might have never considered the type of choice in question and so might not happen to possess this information. Groups seeking to come to decisions, as well as individual decision makers possessing group-like internal organizations (for example, "society of mind" agents), can harbor competing preferences and so might suffer additional limitations illustrated by Arrow's theorem, which shows the impossibility of combining group-member preferences in ways that meet conditions of ideal rationality (see Arrow [1959]). Even when a decision maker with coherent preferences knows how to obtain the information needed for a decision, the process can take an amount of time that, even if known, can exceed what is available or reasonable for deliberation, leading the deliberation to proceed without complete information. In other cases, the decision maker may feel quite sure that the decision in question requires only a few preferences already possessed, not the large number needed to relate all possible contingencies.

The need for an approach to decision making based on partial information provides a powerful motivation for a qualitative decision theory. Common experience shows people offering partial, abstract, generic, tentative, and uncertain information in explaining their decisions. This information includes qualitative probability ("I'm likely to need $50,000 a year for retirement"), generic preference information ("I prefer investments in companies that respect the environment"), and generic goals ("I want to retire young enough to enjoy it"). If we wish a more direct model of the way people seem to think about decisions, we need to deal with such information and need models of deliberative reasoning that can make use of it in formulating and making decisions. Because qualitative decision theory seeks to formalize reasonable decision making by relaxing the decision preconditions of classical decision theory, the study of qualitative decision making is closely related to the study of limited rationality (see, for example, Doyle [1992]). Work on limited rationality uses notions such as expected value of information to weigh the benefits of further deliberation against its costs. The recent work in qualitative decision theory, however, has not yet developed to the point where it has a distinct contribution to make to the field of limited rationality.

The traditional decision-theoretic approach, however, fails on many counts to provide useful guidance for decision-making activities. It does not address making decisions in unforeseen circumstances or changing the assumptions that underlie decisions. It offers no means for capturing generic preferences (that is, preferences among classes, such as "I'd rather eat Italian than Chinese tonight") and other common human expressions of decision-making guidelines in convenient formal terms, even though such expressions compactly communicate large amounts of information at comfortable levels of detail. The traditional approach provides little help in modeling decision makers who exhibit discomfort with numeric tradeoffs or who exhibit outright inconsistency.
Finally, the traditional approach provides little help in effectively representing and reasoning about decisions involving broad knowledge of the world and in communicating about the reasons for decisions in ways that humans will find intelligible. These failures severely limit the utility of the unadorned traditional concepts for automating autonomous decision making and, more generally, for eliciting decision-making information and guidelines from decision makers or communicating such information to decision makers or decision-making aids, such as emergency room physicians, executors of wills, governmental representatives, or even (one might hope) everyday software systems. Automating these abilities can require rethinking not only the classical approach to decision theory to take into account qualitative goals, generic preferences, and the changeability of preferences but also the usual approaches to planning and learning to relate their goals and methods to preferences and to base their decisions on decision-theoretic information rather than ad hoc rules.

Qualitative decision theory seeks to adapt or extend traditional decision-theoretic and analytic concepts to a broader context, building whenever possible on existing strengths. Doing this requires facing up to the limitations of traditional decision-theoretic conceptual tools. In some cases, such as how to best represent actions and outcomes, these limitations correspond to genuine unanswered questions about knowledge representation, questions reaching far beyond qualitative decision theory. In other cases, the limitations represent weaknesses of the quantitative representations and methods used, not of the underlying decision theory. Recent research has begun to tackle some of these problems, but others remain essentially unexplored.
Developing qualitative representations and methods for automated decision making and for direct models of the way people think about decisions leads to a variety of challenging formalization tasks. Some of these tasks concern the notions of alternative actions, outcomes, probabilities, and preferences that have already received substantial attention. For example, although decision theory takes outcomes as unrefinable, practical approaches must take the opposite view and seek to reveal an internal structure for outcomes that facilitates computation of probability and utility information (see, for example, Boutilier, Dean, and Hanks [1999]). This means interpreting these nominally unrefinable outcomes as constructs from sets of properties or attributes, a task that shades off into the much larger area of general knowledge representation. Beyond these well-developed topics, qualitative decision theory must also look for formalizations of a number of other notions. Fortunately, AI and philosophical logic have developed formalizations of some of these notions already, although much remains undone. Of course, the usual standards of formalization apply. In particular, representations should have a sound semantics. The formalization should illuminate concepts and methods with deep theorems that aid mathematical analysis, computation, and reasoning. If possible, the formalization should also reflect familiar concepts and constructs and relax classical decision-theoretic concepts in reasonable ways.

Generic and Structural Information

Standard decision theory rests on qualitative foundations, expressed in terms of qualitative preference orders and comparisons among alternatives. Unfortunately, these foundations do not provide any ready way of expressing generic probabilities and preferences.
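One way to see what a formal treatment of generic preferences might look like is a ceteris paribus reading, under which a statement such as "I'd rather eat Italian than Chinese tonight" compares any two outcomes that differ only in the stated attribute. The following is a minimal sketch of that reading, with invented attributes and outcomes; it is one candidate semantics, not the field's settled definition.

```python
# One candidate semantics for a generic preference, read ceteris paribus:
# an outcome with the preferred attribute value beats an outcome with the
# dispreferred value whenever every other attribute agrees. The attribute
# names and outcomes below are illustrative assumptions.

def ceteris_paribus_prefers(o1, o2, attr, better, worse):
    """True if o1 is preferred to o2 under 'attr: better > worse',
    all other attributes being equal."""
    others_equal = all(o1[k] == o2[k] for k in o1 if k != attr)
    return others_equal and o1[attr] == better and o2[attr] == worse

a = {"cuisine": "italian", "cost": "moderate", "distance": "near"}
b = {"cuisine": "chinese", "cost": "moderate", "distance": "near"}
c = {"cuisine": "chinese", "cost": "cheap", "distance": "near"}

print(ceteris_paribus_prefers(a, b, "cuisine", "italian", "chinese"))  # True
print(ceteris_paribus_prefers(a, c, "cuisine", "italian", "chinese"))  # False: cost differs
```

Note that the generic statement says nothing about the comparison between a and c, where other attributes differ; resolving such trade-offs is exactly where qualitative information alone runs out.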
Recent work has begun to address these gaps with active investigations of logics of probability and preference; work on multiattribute utility structures; partial probability theory; and variations from the standard decision-theoretic model, including possibilistic logic.

Work on qualitative representation of probabilistic information has explored several paths, notably qualitative probabilistic networks, logics of probability, and partial probabilities. Qualitative probabilistic networks, introduced by Wellman (1990b), directly represent generic probabilistic influences, for example, "The more likely that an investment has done well recently, the more likely it will do well in the future." Formal semantics for such generic probabilistic relationships bear close similarities to some logics of preferences discussed later. Direct logics of probability have also been investigated, as in the work of Haddawy (1991), and direct logics of preference have been studied in the philosophical literature.

It has recently occurred to many members of the AI community that multiattribute utility theory might supply libraries of standard forms for utility functions. Multiattribute decision theory identifies some basic structural forms here, but investigation of decision-specific forms and ways of organizing them holds much promise. Now, multiattribute utility theory in itself merely provides a way of representing and calculating the classical utilities of outcomes by decomposing them into the utilities of the value-relevant factors that make up the outcomes. The theory becomes relevant to qualitative (or at least nonclassical) approaches to decision analysis only by combining the decomposition of utilities into factors with fixed assumptions about the independence of the values assigned to different factors and with defeasible (nonmonotonic) assumptions about the magnitudes of these values. The papers mentioned earlier explore this idea; perhaps the most ambitious application of this idea appears in Yoav Shoham's work. At the moment, much of this work remains unpublished, but it gives reason to hope that a formalism for decision theory might emerge from it that extends the Bayesian network representation of the structure of probability distributions to represent the structure of utility as well.

Possibilistic logic is an extension of possibility theory (Dubois and Prade 1986), a theory inspired in part by fuzzy logic. Possibility theory takes utilities into account, as well as probabilities, but provides a qualitative approach to decision problems by replacing numeric preferences and probabilities with linear orderings. To obtain nontrivial recommendations of actions and useful algorithms, additional assumptions need to be made: the scales for preferences and belief need to be interrelated, and a decision-making policy needs to be adopted, for example, a minimax policy.

Properties of Decision Formulations

The decision-analytic process of formulating decisions naturally involves analyzing a number of properties of tentative formulations, such as stability under perturbations and agreement with intuitions. The set of properties grows significantly when formulations can involve partial information about alternatives, outcomes, probabilities, and utilities. The most obvious properties of interest regarding probabilities and utilities concern the logical consistency and completeness of the partial specifications, especially when we think of the partial specifications as axioms that partially characterize the ultimate probability and utility measures. The extra analysis induced by partial formulations is not a mere artifact of the formalism because one can view it as a more direct way of performing the standard perturbation analysis of decision analysis. One advantage of starting with qualitative formulations is that they force the analyst to capture the stable, qualitative information about comparisons directly, only proceeding to detailed quantitative comparisons once the stable core has been identified.

The most obvious properties of alternatives for analysis concern their relations to each other. Standard decision analysis already considers aspects of this issue in Savage's (1972) notion of "small worlds," that is, in choosing how one individuates alternatives and outcomes to avoid unnecessary complexity and to reduce the complexity to intelligible levels. The knowledge-structuring techniques of AI can help ensure intelligibility even when the formulation must contain great numbers of alternatives and outcomes. These techniques include the standard multiattribute techniques, in which one represents alternatives and outcomes as sets of attribute values, but also include methods for representing abstract attributes and taxonomic or other relations among these abstractions and the most concrete attributes. The true power of abstractions of alternatives and outcomes comes to the fore when one considers similarity or distance relations among alternatives and outcomes that permit the decision analyst to group these objects into qualitatively different classes. These groupings satisfy one of the principal needs of decision-support systems: the need to provide a human decision maker with a set of meaningful options rather than an incomprehensibly large number of alternatives, most of which represent unimportant variations on each other. See Young (1997) for a detailed treatment of this issue, which uses similarity of utility as a grouping criterion. Similarity orderings play a prime role in case-based reasoning.
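The pessimistic decision rule from the possibilistic approach discussed above can be sketched concretely. Here the unit interval merely stands in for a common ordinal scale relating belief and preference (the commensurability assumption the text mentions), and all possibility and preference levels are invented for illustration.

```python
# Pessimistic possibilistic utility: U_pes(a) = min over outcomes w of
# max(n(poss(w | a)), pref(w)), where n is the order-reversing map on the
# scale (here n(x) = 1 - x). An act scores well only if every outcome it
# makes quite possible is quite desirable. All values are illustrative.

def pessimistic_utility(poss, pref):
    return min(max(1 - p, pref[w]) for w, p in poss.items())

pref = {"modest_gain": 0.6, "small_loss": 0.4,
        "large_gain": 1.0, "large_loss": 0.0}

u_safe = pessimistic_utility({"modest_gain": 1.0, "small_loss": 0.3}, pref)
u_risky = pessimistic_utility({"large_gain": 1.0, "large_loss": 0.8}, pref)
print(u_safe, u_risky)  # cautious act wins: 0.6 versus roughly 0.2
```

Only the ordering of the levels matters to the recommendation, which is what makes the criterion qualitative in spirit despite the numeric stand-ins.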
It might prove very interesting to study the relation between similarity relations useful for decision formulation and those useful for case-based decision making, as in Chajewska et al. (1998).

Reasons and Explanations

People tend to recommend, explain, criticize, and excuse decisions in terms of generic preferences (for example, "He bought the Jaguar because he likes sports cars"). This tendency makes generic preferences important for communicating about decisions as well as for coming to decisions on the basis of partial information. The problem of formalizing generic preferences bears close connections to the problem of formalizing obligations, especially conditional obligations. Recently, the logic of conditional obligation has even been used as a basis for formalizing generic statements in natural language. Conditional obligation forms the subject of a long-standing and increasingly sophisticated tradition in modal logic, begun in the first half of this century by Georg H. von Wright (1963). The more recent forms of this study draw on ideas from nonmonotonic logic (see Horty [1994, 1993]). The AI literature offers a corresponding argument-based approach to decision making in which reasoning identifies explicit reasons for alternatives, generic preferences, and other elements of decision formulations. These explicit reasons themselves can enter into reasoning about the decision formulation through comparisons of reasons with respect to importance measures and through a variety of methods for making decisions on the basis of the arguments represented by the reasons.
Studies of argument-based decision making have also taken the opposite tack, using preferences to formalize the meanings of reasons and arguments.

Revision of Preferences

Developing a satisfactory theory of decision capable of providing an account of rational preference revision, and, more generally, rational guidelines for decision-theoretic reasoning about novel circumstances, requires much foundational work. The scarcity of work devoted to the rational revision of preferences seems remarkable in view of the attention that has been paid to rational belief revision and the many analogies between belief and preference (but see, for example, Jeffrey [1977]). To some extent, lack of attention to preference revision might reflect a long-standing attitude in economics that preferences change very slowly compared to beliefs as well as a more general empiricist conception that beliefs and belief changes have something like a rational (logical) basis in experience but that desires and preferences stem from irrational or physiological processes. Such attitudes dissipate quickly as the topic turns to planning and acting in novel circumstances, which forms a requirement for all but the most circumscribed robots or human proxies. Novel circumstances call for creating preferences prior to revising or refining them. For example, on finding oneself living in or traveling through a strange town, it is necessary to reinvent the preferences needed for everyday life. Daily shopping decisions, for example, require standing preferences about where to shop and what brands to buy. Shopping preferences from one's previous life do not apply automatically to the new circumstances of different stores and different selections of goods on the shelves. In such cases, theoreticians often refer to high-level standing goals, such as the goal to lead a good life.
However, such references rarely say anything about the reasoning that could lead from such uninformative general goals to the specific preferences needed for daily activities. Generic preferences offer a more natural and more informative starting point for such reasoning. A generic preference, for example, for food stores that minimize driving time over those that minimize cost, could provide the sort of specific guidance needed for shopping decisions. Of course, once created in this fashion, specific preferences might require revision in light of experience. Indeed, if one interprets goals as generic preferences, then goal adoption and abandonment constitute reasoned adoption and abandonment of preferences. Ideas from case-based planning can prove useful in providing more complex forms of preference revision, although work in this area has not, as far as we know, dealt with the problem of goal modification (see, for instance, Hanks and Weld [1992]).

Agent Modeling

Formalized decision-making information should not exist in a vacuum but should be integrated in coherent ways into agent models developed in the broader fields of AI and distributed systems.

Hybrid Representation and Reasoning

One need not limit qualitative decision-making techniques to explicit reasoning with qualitative information. Such reasoning can adequately address some tasks, but one should not expect purely qualitative information to settle the trade-off questions likely to be involved in the majority of practical decisions. More generally, one should expect the need to represent every sort of decision-making information in different forms for different computational purposes, even within the same decision-making process. In the case of preferences, generic representations might provide the best basis for the reasoning involved in elicitation, revision, and reuse of preferences, but numeric re-representations of this same information might provide the best basis for computing expected utilities. These comparative advantages call for integrating qualitative and quantitative information even when the mechanisms one obtains in the end for making actual decisions do not differ in computational efficiency from hand-crafted quantitative mechanisms.

Practicable Qualitative Decision-Making Procedures

An adequate qualitative decision theory should go beyond the needs of formalizing a fairly broad spectrum of decision problems to provide decision-making methods that obtain solutions in many cases without numeric calculation. Some qualitative decision-making procedures have long standing in the literature, but others might await discovery. Dominance relationships provide an important and common type of qualitative decision-making method. Speaking informally, a plan p dominates a plan q when one can be sure that adopting p would yield better outcomes than adopting q. Leonard Savage (1972) named the rule of preferring a dominating action the sure-thing principle. Michael Wellman's (1990a) work showed how to formalize purely qualitative dominance reasoning and use it profitably in planning. Recent work, by Brafman and others, has extended such reasoning to settings with multiple agents. By making independence assumptions concerning the actions of different agents, one can define counterfactual selection functions in models of agents and their histories and consider perturbations of a history h in which a planning agent follows a plan p to obtain a set of maximally close histories in which the agent counterfactually follows another plan q. (Think of these perturbations as minimal alterations of the history in which only the planning agent's choice of plan changes.)

Qualitative Specification of Quantitative Models

Even when quantitative trade-offs prove necessary, the decision-making process can benefit from indirect use of the qualitative information. More specifically, the generic information might provide constraints that guide selection or construction of a quantitative utility measure.
This approach draws on a lesson learned from the study of qualitative reasoning about physical systems. In this approach, the decision maker would perform most reasoning using ordinary quantitative utility functions in ways well known from decision analysis, especially in computing optimal choices and assessing the sensitivity of these choices to variations in probabilities. When this analysis reveals a deficiency in the current utility model, the analyst formulates the deficiency as a change in the qualitative specification of the utility function and uses the updated specification to compute a new utility function, either directly or as a modification to the previous one. This approach could offer some attractive benefits: obtaining the efficient computations possible with numeric utility representations yet preserving the intelligibility and easy modifiability of the qualitative constraints underlying the raw numbers. Making this idea work requires methods for constructing utility functions from generic constraints on preferences, a topic hardly explored to date.

Graphical Representations

Graphical representations of probability distributions, such as Bayesian networks, constitute perhaps the best-known hybrid representation. Bayesian networks simplify the specification of many useful probability distributions by using a network structure to encode relationships of probabilistic dependence and independence. The network specifies direct probabilistic dependence explicitly and probabilistic independence relationships implicitly. Each node in the network bears quantitative annotations that indicate the conditional probabilities of the node's values given the values of the nodes on which it depends.
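The preceding section notes that constructing utility functions from generic preference constraints is a topic hardly explored to date; the following sketch shows one naive way such a construction might look, purely as an illustration and not as an established method. The outcome names and preference constraints are invented, and the construction assumes the constraints form an acyclic partial order.

```python
# Illustrative sketch (the article stresses this topic is hardly
# explored): compiling generic preference constraints of the form
# "a is preferred to b" into a numeric utility function that
# respects every constraint.  Here an outcome's utility is the
# length of the longest preference chain beneath it.
# Outcomes and constraints are invented for illustration.

def utility_from_constraints(outcomes, prefers):
    """prefers: set of (better, worse) pairs; returns {outcome: utility}
    satisfying u[better] > u[worse] for every pair, assuming acyclicity."""
    worse_than = {o: [w for (b, w) in prefers if b == o] for o in outcomes}
    memo = {}
    def depth(o):
        if o not in memo:
            memo[o] = 1 + max((depth(w) for w in worse_than[o]), default=0)
        return memo[o]
    return {o: depth(o) for o in outcomes}

outcomes = ["fast_cheap", "fast_pricey", "slow_cheap", "slow_pricey"]
prefers = {("fast_cheap", "fast_pricey"), ("fast_cheap", "slow_cheap"),
           ("fast_pricey", "slow_pricey"), ("slow_cheap", "slow_pricey")}
u = utility_from_constraints(outcomes, prefers)
# Every stated generic preference maps to a strict numeric inequality.
print(all(u[b] > u[w] for (b, w) in prefers))  # True
```

A sketch like this captures the attraction described in the prose: when the analyst revises a qualitative constraint, the numeric utility function can simply be recomputed, keeping the intelligible constraints as the authoritative representation.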
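The Bayesian-network scheme just described, in which arcs record direct dependence and each node carries conditional probabilities given its parents, can be illustrated with a minimal two-node network. The variables and numbers below are invented for illustration.

```python
# Illustrative two-node Bayesian network: Rain -> WetGrass.
# The arc records direct probabilistic dependence; each node
# carries a conditional probability table given its parents.
# All numbers are invented for illustration.

p_rain = {True: 0.2, False: 0.8}            # prior at the root node
p_wet_given_rain = {True: 0.9, False: 0.1}  # CPT: P(wet | rain value)

# Marginal P(wet), summing out the parent as the network structure licenses:
p_wet = sum(p_rain[r] * p_wet_given_rain[r] for r in (True, False))
print(round(p_wet, 2))  # 0.2*0.9 + 0.8*0.1 = 0.26

# Diagnostic inference by Bayes rule: P(rain | wet)
p_rain_given_wet = p_rain[True] * p_wet_given_rain[True] / p_wet
print(round(p_rain_given_wet, 3))  # 0.18 / 0.26 ≈ 0.692
```

The economy the prose describes shows up even at this scale: the network requires only the four local numbers above rather than an explicit table over all joint states, and the savings grow rapidly with the number of variables.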
Details of these representations can be found in many sources. The success of Bayesian networks in representing probability information suggests extending these graphical methods to represent preference and utility information as well as to model and implement reasoning about expected utility. The most direct such extensions include influence diagrams.

Decision Making in Context

Other issues for investigation relate to the roles and forms of qualitative decision-theoretic representations and reasoning as they enter into planning, collaboration, and learning of decision-making information. The obvious roles here concern the decisions made in the course of planning, collaboration, or learning, but the more important questions for the development of qualitative decision theory concern the identification of requirements or constraints placed on decision-theoretic representations by the informational and computational needs of these processes. For example, several works have explored the role of utility in guiding learning, both in the elementary case of search and in higher-level processes of learning concepts and procedures (see, for example, Russell and Wefald [1991]). We cannot provide a full discussion of these contextual issues here. Instead, we provide brief discussions of how they arise in planning and collaboration.

Sympathetic Planning

Successful collaboration depends on identifying and adopting preferences and other information adequate to make decisions on behalf of one's collaborators or to further their own or mutual objectives. One of the most common collaborative settings is that of advisory systems, which collaborate with their users to help solve the users' problems. For example, in domains involving elements of uncertainty and risk, a user can seek the advice of an automated planning expert, not about alternative outcomes, which the user might understand full well, but about alternative courses of action or plans for obtaining desired outcomes. The user can hope to benefit from the computer's enhanced ability to search large plan spaces and perhaps from its greater domain knowledge. It goes without saying that the advice will not be appropriate unless it is based on an accurate model of the user's preferences. The task of the system then is one of decision-theoretic planning, in which the system holds beliefs (including probabilistic beliefs) constant and adjusts the utilities to accommodate the user.

Sympathetic planning tasks, as illustrated by the challenging example presented early in this article, exercise many of the representational and reasoning abilities desired of a qualitative approach to decision making, including formation of representations of the preferences of another, generation and evaluation of alternative actions, and revision of preferences. Sympathetic planning seems a natural and potentially useful extension of current planning systems, offering the challenge of integrating the planning with a version of decision theory that is able to cope with domains that involve uncertainty, substantial variation in the utilities of outcomes, and users who might differ substantially in their needs and preferences. Recent research trends indicate growing interest in the problem of sympathetic planning (see Haddawy [1998b, 1997b]; Boutilier et al.), although to our knowledge, almost nobody develops advisory systems that use decision theory as the foundation of their advice.

Decision-Theoretic Planning

Decision-theoretic planning has already grown into a substantial topic in its own right, as can be seen from the companion article on this topic. Another rich topic concerns how one determines the preferences and beliefs appropriate for specific planning tasks. The most familiar methods involve ordinary debriefing of experts or informants, as practiced by decision analysts. Less familiar but potentially rewarding techniques seek to infer such information from plans. Specifically, what information about the beliefs and preferences of the planner can one obtain from the subgoal relationships of a plan, from temporal ordering, or from other structural information? Can one infer when the satisfaction of one subgoal is more important than the satisfaction of another? How much do such inferences depend on the background knowledge used in constructing the plan?

Conclusion

Formal studies of decision making have their origins in the seventeenth century, reaching a point of maturity in the mid and late 1940s with reasonably solid mathematical foundations and reasonably practical quantitative methods. Just as it attained this maturity, however, Herbert Simon and others led an exodus from the new theory, charging it with requiring excessive information, memory, and reasoning power. The critique hardly budged the main part of decision theory but led to exploration and development of the notion of qualitative goals (generalizing Simon's utility aspiration levels), formalization of goal-based problem solving, and the modern field of AI. Work on reasoned deliberation then proceeded largely oblivious to the notions of economic decision theory, facing instead large hurdles posed by fundamental representational issues and inadequate computational tools. The new areas eventually developed to the point where researchers realized the need to reconnect the methods of AI with the qualitative foundations and quantitative methods of economics.
Although the field of qualitative decision theory benefits from the substantial contributions of economic decision analysis and AI, many central questions of interest remain unanswered by the relatively small literature produced by recent studies, to the point where this brief survey constitutes more an indication of directions for future work than a presentation of answered questions. Although the existing literature reveals an emerging field with some promising ideas and potentially important applications, a large gap still separates the ideas from workable applications. Development of foundations for qualitative decision theory might require as much effort as the earlier development of foundations for quantitative decision theory. One might identify the most important trends as grappling in one way or another with the challenge of reworking a field that provides powerful theoretical arguments for representations of preferences that are not at all commonsensical and that can be difficult to elicit. How to emerge from this foundational stage with an apparatus that will integrate with problem-solving applications remains unclear at present. Lack of a single dominant approach also complicates matters; different people have quite different ideas about how to proceed. It might take a while for a few dominant paradigms to emerge, but we fully expect further studies to yield expansions of decision theory that serve applications well.