Results 1–10 of 153
Learning Bayesian networks: The combination of knowledge and statistical data
Machine Learning, 1995
Cited by 913 (38 self)
Abstract:
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user’s prior knowledge. In particular, a user can express his knowledge—for the most part—as a single prior Bayesian network for the domain.
Approximate Signal Processing
1997
Cited by 324 (2 self)
Abstract:
It is increasingly important to structure signal processing algorithms and systems to allow for trading off between the accuracy of results and the utilization of resources in their implementation. In any particular context, there are typically a variety of heuristic approaches to managing these tradeoffs. One of the objectives of this paper is to suggest that there is the potential for developing a more formal approach, including utilizing current research in Computer Science on Approximate Processing and one of its central concepts, Incremental Refinement. Toward this end, we first summarize a number of ideas and approaches to approximate processing as currently being formulated in the computer science community. We then present four examples of signal processing algorithms/systems that are structured with these goals in mind. These examples may be viewed as partial inroads toward the ultimate objective of developing, within the context of signal processing design and implementation,...
Coalition Structure Generation with Worst Case Guarantees
1999
Cited by 209 (10 self)
Abstract:
Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired...
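The minimal search the abstract refers to corresponds, in the paper, to the bottom two levels of the coalition structure graph: the grand coalition plus every split into exactly two coalitions, 2^(n-1) structures in total. A minimal sketch of that partial search, with a hypothetical toy value function (the real algorithm then continues searching to tighten the bound):

```python
from itertools import combinations

def bottom_two_levels(agents):
    """Yield the grand coalition and every split into two coalitions.

    These 2**(n-1) coalition structures are the minimal set that must
    be searched before any worst-case bound can be established.
    """
    agents = tuple(agents)
    yield (agents,)
    for r in range(1, len(agents) // 2 + 1):
        for left in combinations(agents, r):
            right = tuple(a for a in agents if a not in left)
            # For even splits, skip the mirror image of a pair seen already.
            if len(left) == len(right) and left > right:
                continue
            yield (left, right)

def best_bounded_structure(agents, value):
    """Best structure found by the minimal partial search; by the
    paper's result its value is within a factor n of the unseen optimum."""
    return max(bottom_two_levels(agents),
               key=lambda cs: sum(value(c) for c in cs))
```

With a toy superadditive value such as `value(c) = len(c) ** 2` (an illustrative stand-in, not from the paper), the search returns the grand coalition.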
Coalitions Among Computationally Bounded Agents
Artificial Intelligence, 1997
Cited by 167 (24 self)
Abstract:
This paper analyzes coalitions among self-interested agents that need to solve combinatorial optimization problems to operate efficiently in the world. By colluding (coordinating their actions by solving a joint optimization problem) the agents can sometimes save costs compared to operating individually. A model of bounded rationality is adopted where computation resources are costly. It is not worthwhile solving the problems optimally: solution quality is decision-theoretically traded off against computation cost. A normative, application- and protocol-independent theory of coalitions among bounded-rational agents is devised. The optimal coalition structure and its stability are significantly affected by the agents' algorithms' performance profiles and the cost of computation. This relationship is first analyzed theoretically. Then a domain classification including rational and bounded-rational agents is introduced. Experimental results are presented in vehicle routing with real data from five dispatch centers. This problem is NP-complete and the instances are so large that, with current technology, any agent's rationality is bounded by computational complexity.
Introducing the Tileworld: Experimentally Evaluating Agent Architectures
In Proceedings of the Eighth National Conference on Artificial Intelligence, 1990
Cited by 163 (13 self)
Abstract:
We describe a system called Tileworld, which consists of a simulated robot agent and a simulated environment which is both dynamic and unpredictable. Both the agent and the environment are highly parameterized, enabling one to control certain characteristics of each. We can thus experimentally investigate the behavior of various metalevel reasoning strategies by tuning the parameters of the agent, and can assess the success of alternative strategies in different environments by tuning the environmental parameters. Our hypothesis is that the appropriateness of a particular metalevel reasoning strategy will depend in large part upon the characteristics of the environment in which the agent incorporating that strategy is situated. We describe our initial experiments using Tileworld, in which we have been evaluating a version of the metalevel reasoning strategy proposed in earlier work by one of the authors [5]. Topic: Automated Reasoning. Subtopics: Planning and Scheduling, Resource-Bo...
Principles of Metareasoning
Artificial Intelligence, 1991
Cited by 162 (10 self)
Abstract:
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified metalevel control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems, a...
Decision-Theoretic Deliberation Scheduling for Problem Solving in . . .
Artificial Intelligence, 1994
Cited by 157 (3 self)
Abstract:
We are interested in the problem faced by an agent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problem-solving tasks in which the time spent in decision making affects the quality of the responses generated by a system.
Using Anytime Algorithms in Intelligent Systems
1996
Cited by 145 (8 self)
Abstract:
Anytime algorithms give intelligent systems the capability to trade deliberation time for quality of results. This capability is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control. What characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. This article surveys the main control problems that arise when a system is composed of several anytime algorithms. These problems relate to optimal management of uncertainty and precision. After a brief introduction to anytime computation, I outline a wide range of existing solutions to the metalevel control problem and describe current work that is aimed at increasing the applicability of anytime computation.
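The defining property this abstract describes (a usable answer whenever interrupted, improving with more time) can be illustrated by a toy iterative-refinement loop. The deadline interface and the Leibniz series for pi are illustrative choices, not from the article:

```python
import time

def anytime_pi(deadline_s):
    """Estimate pi by the Leibniz series until the deadline expires.

    Interrupting later yields a better estimate, which is exactly the
    anytime property. Returns (estimate, number of refinement steps).
    """
    start = time.monotonic()
    total, k = 0.0, 0
    while time.monotonic() - start < deadline_s:
        total += (-1.0) ** k / (2 * k + 1)  # next series term
        k += 1
    return 4.0 * total, k
```

A metalevel controller would choose `deadline_s` by weighing the expected quality gain of further refinement against the cost of delay.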
Reasoning Under Varying and Uncertain Resource Constraints
1988
Cited by 117 (20 self)
Abstract:
We describe the use of decision theory to optimize the value of computation under uncertain and varying resource limitations. The research is motivated by the pursuit of formal models of rational decision making for computational agents, centering on the explicit consideration of preferences and resource availability. We focus here on the importance of identifying the multiattribute structure of partial results generated by approximation methods for making control decisions. Work on simple algorithms and on the control of decision-theoretic inference itself is described.

1 Computation Under Uncertainty

We are investigating the decision-theoretic control of problem solving under varying constraints in resources required for reasoning, such as time and memory. This work is motivated by the pursuit of formal models of rational decision making under resource constraints and our goal of extending foundational work on normative rationality to computational agents. We describe here a portion...
Optimal Composition of Real-Time Systems
Artificial Intelligence, 1996
Cited by 113 (21 self)
Abstract:
Real-time systems are designed for environments in which the utility of actions is strongly time-dependent. Recent work by Dean, Horvitz and others has shown that anytime algorithms are a useful tool for real-time system design, since they allow computation time to be traded for decision quality. In order to construct complex systems, however, we need to be able to compose larger systems from smaller, reusable anytime modules. This paper addresses two basic problems associated with composition: how to ensure the interruptibility of the composed system...
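A standard construction in this literature for obtaining interruptibility is to run a contract algorithm (one that needs its time budget up front) under exponentially growing contracts, keeping the latest result. A minimal sketch, with `contract_algorithm` as a hypothetical callable that accepts a time budget:

```python
from itertools import islice

def interruptible(contract_algorithm, base=0.1, growth=2.0):
    """Wrap a contract algorithm as an interruptible anytime generator.

    Each iteration reruns the algorithm with a larger time contract and
    yields its result; the consumer keeps the most recent value and may
    stop at any time. The doubling schedule bounds the time wasted on
    discarded runs by a constant factor.
    """
    budget = base
    while True:
        yield contract_algorithm(budget)
        budget *= growth

# Stand-in contract algorithm that just reports the budget it was given,
# so the doubling schedule itself is visible.
schedule = list(islice(interruptible(lambda t: t, base=0.1), 4))
```

The same wrapper shape applies whatever the module computes, which is what makes contract modules reusable building blocks for composed systems.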