Results 1 - 10 of 219
Learning Bayesian networks: The combination of knowledge and statistical data - Machine Learning, 1995
Abstract - Cited by 1158 (35 self)
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user’s prior knowledge. In particular, a user can express his knowledge—for the most part—as a single prior Bayesian network for the domain.
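The parameter-modularity property mentioned in this abstract implies that a network's score decomposes into one term per node, so a local edge change re-scores only the affected family. Below is a minimal sketch of such a decomposable score; the toy log-likelihood family score, the dict-based structure representation, and all names (local_score, network_score) are illustrative assumptions, not the paper's metric.

```python
from collections import Counter
from math import log

def local_score(node, parents, data):
    """Toy log-likelihood family score over categorical records
    (each record is a dict column -> value). A real metric would use,
    e.g., a Bayesian-Dirichlet marginal likelihood instead."""
    joint = Counter(tuple(r[p] for p in parents) + (r[node],) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    return sum(n * log(n / marg[key[:-1]]) for key, n in joint.items())

def network_score(structure, data):
    """Parameter modularity: the total score is a sum of local terms,
    one per node, each depending only on that node's parent set."""
    return sum(local_score(node, parents, data)
               for node, parents in structure.items())

# Example: score the structure X -> Y on a few records.
records = [{"X": 0, "Y": 0}, {"X": 0, "Y": 1}, {"X": 1, "Y": 1}]
print(network_score({"X": (), "Y": ("X",)}, records))
```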
Approximate Signal Processing, 1997
Abstract - Cited by 538 (2 self)
It is increasingly important to structure signal processing algorithms and systems to allow for trading off between the accuracy of results and the utilization of resources in their implementation. In any particular context, there are typically a variety of heuristic approaches to managing these tradeoffs. One of the objectives of this paper is to suggest that there is the potential for developing a more formal approach, including utilizing current research in Computer Science on Approximate Processing and one of its central concepts, Incremental Refinement. Toward this end, we first summarize a number of ideas and approaches to approximate processing as currently being formulated in the computer science community. We then present four examples of signal processing algorithms/systems that are structured with these goals in mind. These examples may be viewed as partial inroads toward the ultimate objective of developing, within the context of signal processing design and implementation,...
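The incremental-refinement concept the abstract highlights can be illustrated with a toy example: reconstruct a signal from its largest-magnitude DFT coefficients, adding one coefficient per step, so each extra unit of computation tightens the approximation. This sketch is illustrative only and does not reproduce any algorithm from the paper.

```python
# Incremental refinement, toy version: each step keeps one more DFT
# coefficient (largest magnitudes first), so more computation spent
# means lower reconstruction error - the accuracy/resource tradeoff.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def refine(x, steps):
    """Yield successively better reconstructions of x, one extra
    coefficient per step."""
    n = len(x)
    coeffs = dft(x)
    order = sorted(range(n), key=lambda k: -abs(coeffs[k]))
    kept = [0j] * n
    for k in order[:steps]:
        kept[k] = coeffs[k]
        # Inverse DFT restricted to the coefficients kept so far.
        yield [sum(kept[j] * cmath.exp(2j * cmath.pi * j * t / n)
                   for j in range(n)).real / n for t in range(n)]

signal = [0, 1, 0, -1, 0, 1, 0, -1]
for i, approx in enumerate(refine(signal, 3), 1):
    err = sum((a - s) ** 2 for a, s in zip(approx, signal))
    print(f"step {i}: squared error {err:.4f}")
```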
Coalition Structure Generation with Worst Case Guarantees, 1999
Abstract - Cited by 270 (9 self)
Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired ...
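As a rough illustration of the minimal search phase the abstract alludes to, the sketch below evaluates the grand coalition and every split into two coalitions; by the paper's result (assuming nonnegative coalition values), the best structure found after this search is within a factor n, the number of agents, of optimal. The characteristic function v and all names here are assumptions for the example, not the paper's code.

```python
# Sketch (not the paper's full algorithm) of its first search phase:
# evaluate the grand coalition and every two-coalition split. The best
# structure seen is then within a factor n of the optimum.
# `v` is an assumed characteristic function: frozenset -> value >= 0.

from itertools import combinations

def bounded_search(agents, v):
    grand = frozenset(agents)
    best = ([grand], v(grand))
    for r in range(1, len(agents) // 2 + 1):
        for group in combinations(agents, r):
            a = frozenset(group)
            b = grand - a
            val = v(a) + v(b)
            if val > best[1]:
                best = ([a, b], val)
    return best  # (structure, value), within factor n of optimal

# Toy characteristic function: size-based values plus one synergy pair.
agents = [1, 2, 3, 4]
v = lambda c: len(c) ** 2 + (5 if c == frozenset({1, 2}) else 0)
print(bounded_search(agents, v))
```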
Coalitions Among Computationally Bounded Agents - Artificial Intelligence, 1997
Abstract - Cited by 203 (26 self)
This paper analyzes coalitions among self-interested agents that need to solve combinatorial optimization problems to operate efficiently in the world. By colluding (coordinating their actions by solving a joint optimization problem) the agents can sometimes save costs compared to operating individually. A model of bounded rationality is adopted where computation resources are costly. It is not worthwhile solving the problems optimally: solution quality is decision-theoretically traded off against computation cost. A normative, application- and protocol-independent theory of coalitions among bounded-rational agents is devised. The optimal coalition structure and its stability are significantly affected by the agents' algorithms' performance profiles and the cost of computation. This relationship is first analyzed theoretically. Then a domain classification including rational and bounded-rational agents is introduced. Experimental results are presented in vehicle routing with real data from five dispatch centers. This problem is NP-complete and the instances are so large that, with current technology, any agent's rationality is bounded by computational complexity.
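The decision-theoretic tradeoff described here can be made concrete with a small sketch: given a performance profile mapping computation time to solution value, a bounded-rational coalition stops computing where marginal value no longer covers marginal cost. The profile shape and cost model below are illustrative assumptions, not the paper's.

```python
# Sketch of costly computation: a coalition's effective worth is the
# best achievable net of solution quality (from its performance
# profile) minus the cost of the compute time spent reaching it.

def coalition_worth(profile, cost_per_step, horizon):
    """profile(t) -> solution value after t computation steps
    (monotone, diminishing returns). Returns the optimal stopping
    point and the net worth at that point."""
    best_t, best_net = 0, profile(0)
    for t in range(1, horizon + 1):
        net = profile(t) - cost_per_step * t
        if net > best_net:
            best_t, best_net = t, net
    return best_t, best_net

# Toy diminishing-returns profile: value approaches 100.
profile = lambda t: 100 * (1 - 0.8 ** t)
print(coalition_worth(profile, cost_per_step=3.0, horizon=50))
```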
Introducing the Tileworld: Experimentally evaluating agent architectures - Proceedings of the National Conference on Artificial Intelligence, 1990
Abstract - Cited by 195 (13 self)
We describe a system called Tileworld, which consists of a simulated robot agent and a simulated environment which is both dynamic and unpredictable. Both the agent and the environment are highly parameterized, enabling one to control certain characteristics of each. We can thus experimentally investigate the behavior of various meta-level reasoning strategies by tuning the parameters of the agent, and can assess the success of alternative strategies in different environments by tuning the environmental parameters. Our hypothesis is that the appropriateness of a particular meta-level reasoning strategy will depend in large part upon the characteristics of the environment in which the agent incorporating that strategy is situated. We describe our initial experiments using Tileworld, in which we have been evaluating a version of the meta-level reasoning strategy proposed in earlier work by one of the authors [5].
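A heavily simplified sketch of the kind of experiment this abstract describes: one environment parameter controls how quickly opportunities appear and expire, one agent parameter controls deliberation time, and their interaction is measured. All names, dynamics, and parameter values below are illustrative, not the paper's.

```python
# Tileworld-flavored toy experiment: holes appear and expire at a
# tunable rate (environment dynamism); an agent that deliberates
# longer before acting scores fewer of them in a fast-changing world.

import random

def run_trial(dynamism, deliberation_steps, horizon=1000, seed=0):
    rng = random.Random(seed)
    score, busy, hole_ttl = 0, 0, 0
    for _ in range(horizon):
        if hole_ttl == 0 and rng.random() < dynamism:
            hole_ttl = rng.randint(5, 20)  # a hole appears
            busy = deliberation_steps      # agent starts deliberating
        if hole_ttl > 0:
            hole_ttl -= 1
            if busy > 0:
                busy -= 1                  # still thinking
            elif hole_ttl > 0:
                score += 1                 # acted before the hole expired
                hole_ttl = 0
    return score

# Same environment seed per row, varying only deliberation time.
for d in (0.05, 0.2, 0.5):
    print(d, [run_trial(d, k) for k in (0, 5, 15)])
```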
Using Anytime Algorithms in Intelligent Systems, 1996
Abstract - Cited by 193 (8 self)
Anytime algorithms give intelligent systems the capability to trade deliberation time for quality of results. This capability is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control. What characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. This article surveys the main control problems that arise when a system is composed of several anytime algorithms. These problems relate to optimal management of uncertainty and precision. After a brief introduction to anytime computation, I outline a wide range of existing solutions to the metalevel control problem and describe current work that is aimed at increasing the applicability of anytime computation.
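A minimal sketch of the interruptible behavior this article surveys: the algorithm always holds a usable answer and improves it monotonically until its time budget runs out, so deliberation time trades directly for quality. The random-restart maximization task below is purely illustrative.

```python
# Interruptible anytime algorithm, toy version: best-so-far answer is
# always available; more deliberation time yields better quality.

import random
import time

def anytime_maximize(f, sample, deadline_s):
    """Improve a best-so-far answer until the deadline; interrupting
    at any point still yields a valid (if lower-quality) result."""
    best_x = sample()
    best_q = f(best_x)
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        x = sample()
        q = f(x)
        if q > best_q:
            best_x, best_q = x, q
    return best_x, best_q

# Toy objective on [0, 10]: larger budgets find points closer to 7.3.
f = lambda x: -(x - 7.3) ** 2
sample = lambda: random.uniform(0, 10)
for budget in (0.001, 0.01, 0.1):
    print(budget, anytime_maximize(f, sample, budget)[1])
```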
Principles of Metareasoning - Artificial Intelligence, 1991
Abstract - Cited by 183 (10 self)
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems, a...
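The "general formula for the utility of computations" the abstract mentions is often rendered schematically as follows; this is a common textbook paraphrase of the idea, not a quotation of the paper's own notation. The net value of a computational action S_j is

\[
  V(S_j) \;=\; \mathbb{E}\!\left[\, U\!\left(\alpha_{S_j}\right) \right] \;-\; U(\alpha),
\]

where \(\alpha\) is the external action currently believed best, \(\alpha_{S_j}\) is the action that would be chosen after performing the computation \(S_j\), and the expectation reflects the uncertainty about the computation's outcome noted in the abstract. Time cost enters because \(U\) is evaluated on (action, time) pairs: a computation pays for itself only if the improved choice outweighs the delay it introduces.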
CIRCA: The Cooperative Intelligent Real-Time Control Architecture
Abstract - Cited by 180 (49 self)
The Cooperative Intelligent Real-time Control Architecture (CIRCA) is a novel architecture for intelligent real-time control that can guarantee to meet hard deadlines while still using unpredictable, unrestricted AI methods. CIRCA includes a real-time subsystem used to execute reactive control plans that are guaranteed to meet the domain's real-time deadlines, keeping the system safe. At the same time, CIRCA's AI subsystem performs higher-level reasoning about the domain and the system's goals and capabilities, to develop future reactive control plans. CIRCA thus aims to be intelligent about real-time: rather than requiring the system's AI methods to meet deadlines, CIRCA isolates its reasoning about which time-critical reactions to guarantee from the actual execution of the selected reactions. The formal basis for CIRCA's performance guarantees is a state-based world model of agent/environment interactions. Borrowing approaches from real-time systems research, the world model provides the information required to make real-time performance guarantees, but avoids unnecessary complexity. Using the world model, the AI subsystem develops reactive control plans that restrict the world to a limited set of safe and desirable states, by ...
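As a toy illustration of the guarantee structure described here (not CIRCA's actual representation), the sketch below checks that a reactive plan keeps every state reachable in a small world model inside a designated safe set. A real-time guarantee would additionally require that each reaction's worst-case execution time beats the environment's transition deadlines, which this sketch abstracts away.

```python
# Illustrative safety check over a state-based world model: verify
# that a reactive plan confines all reachable states to a safe set.

def plan_is_safe(initial, safe, env_moves, plan):
    """env_moves: state -> set of states the environment may force;
    plan: state -> resulting state after the guaranteed reaction."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if s not in safe:
            return False
        for nxt in env_moves.get(s, set()) | {plan.get(s, s)}:
            if nxt not in seen:
                frontier.append(nxt)
    return True

# Toy model: the guaranteed reaction in "warning" fires before any
# transition to an unsafe state, so no such move appears in env_moves.
env = {"nominal": {"warning"}}
plan = {"warning": "nominal"}
print(plan_is_safe("nominal", {"nominal", "warning"}, env, plan))
```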
Decision-Theoretic Deliberation Scheduling for Problem Solving In . . . - Artificial Intelligence, 1994
Abstract - Cited by 173 (3 self)
We are interested in the problem faced by an agent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problem-solving tasks in which the time spent in decision making affects the quality of the responses generated by a system.
Iterative Combinatorial Auctions: Achieving Economic and Computational Efficiency - Department of Computer and Information Science, University of Pennsylvania, 2001
Abstract - Cited by 159 (19 self)
This thesis presents new auction-based mechanisms to coordinate systems of self-interested and autonomous agents, and new methods to design such mechanisms and prove their optimality...