Results 1–10 of 30
A Statistical Approach to Solving the EBL Utility Problem
1992
Abstract

Cited by 50 (7 self)
Many "learning from experience" systems use information extracted from problem-solving experiences to modify a performance element PE, forming a new element PE′ that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE′ is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, PALO, that sidesteps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum...
A Formal Framework for Speedup Learning from Problems and Solutions
 Journal of Artificial Intelligence Research, 1996
Abstract

Cited by 21 (0 self)
Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied by learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and the Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning.
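One of the two representations named above, macro-operators, can be illustrated with a minimal sketch that is not taken from the paper: composing two STRIPS-style operators (precondition, add, and delete sets) into one macro-operator that future problems can apply in a single step. The dict encoding and all names here are illustrative assumptions.

```python
def compose(op1, op2):
    """Compose two STRIPS-style operators into one macro-operator with the
    same effect as applying op1 and then op2.
    Each operator is a dict with 'pre', 'add', 'del' sets of atoms."""
    # If op1 deletes (and does not re-add) an atom op2 needs, the pair
    # can never be applied in sequence.
    if op2['pre'] & (op1['del'] - op1['add']):
        raise ValueError("operators do not chain")
    return {
        'pre': op1['pre'] | (op2['pre'] - op1['add']),
        'add': (op1['add'] - op2['del']) | op2['add'],
        'del': (op1['del'] - op2['add']) | op2['del'],
    }

def apply_op(state, op):
    """Apply a STRIPS operator to a state (a set of atoms)."""
    assert op['pre'] <= state, "precondition not satisfied"
    return (state - op['del']) | op['add']

# Toy 'move' operators: the learned macro behaves like the two-step sequence.
op1 = {'pre': {'at_A'}, 'add': {'at_B'}, 'del': {'at_A'}}
op2 = {'pre': {'at_B'}, 'add': {'at_C'}, 'del': {'at_B'}}
macro = compose(op1, op2)
```

Storing the macro amortizes the search that originally discovered the two-step sequence, which is the efficiency gain speedup learning is after.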
Exploiting Irrelevance Reasoning to Guide Problem Solving
 In Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1993
Abstract

Cited by 18 (7 self)
Identifying that parts of a knowledge base (KB) are irrelevant to a specific query is a powerful method of controlling search during problem solving. However, finding methods of such irrelevance reasoning and analyzing their utility are open problems. We present a framework based on a proof-theoretic analysis of irrelevance that enables us to address these problems. Within the framework, we focus on a class of strong-irrelevance claims and show that they have several desirable properties. For example, in the context of Horn-rule theories, we show that strong-irrelevance claims can be derived efficiently either by examining the KB or as logical consequences of other strong-irrelevance claims. An important aspect is that our algorithms reason about irrelevance using only a small part of the KB. Consequently, the reasoning is efficient and the derived irrelevance claims are independent of changes to other parts of the KB.
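As a toy illustration of the flavor of such irrelevance claims (not the paper's algorithm), one very weak test for Horn-rule theories is backward reachability: a rule whose head predicate is not reachable from the query predicate in the rule-dependency graph cannot participate in any proof of the query. All predicate names below are hypothetical.

```python
def relevant_predicates(rules, query_pred):
    """Predicates reachable backward from the query in the rule-dependency
    graph; anything outside this set cannot appear in a proof of the query.
    rules: list of (head_predicate, [body_predicates]) Horn rules."""
    relevant = {query_pred}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head in relevant:
                for p in body:
                    if p not in relevant:
                        relevant.add(p)
                        changed = True
    return relevant

def prune_irrelevant(rules, query_pred):
    """Drop rules whose head predicate cannot matter to the query.
    Note the test inspects only rule heads and bodies, not the facts,
    so it touches only a small part of the KB."""
    keep = relevant_predicates(rules, query_pred)
    return [(h, b) for h, b in rules if h in keep]

rules = [
    ("ancestor", ["parent"]),
    ("ancestor", ["parent", "ancestor"]),
    ("sibling", ["parent"]),   # irrelevant to any 'ancestor' query
]
pruned = prune_irrelevant(rules, "ancestor")
```

Because the check never looks at the facts, adding or removing facts elsewhere in the KB leaves the derived irrelevance claims intact, mirroring the independence property the abstract describes.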
Probabilistic Hill-Climbing: Theory and Applications
 In Proceedings of CSCSI-92, 1992
Abstract

Cited by 16 (6 self)
Many learning systems search through a space of possible performance elements, seeking an element with high expected utility. As the task of finding the globally optimal element is usually intractable, many practical learning systems use hill-climbing to find a local optimum. Unfortunately, even this is difficult, as it depends on the distribution of problems, which is typically unknown. This paper addresses the task of approximating this hill-climbing search when the utility function can only be estimated by sampling. We present an algorithm that returns an element that is, with provably high probability, essentially a local optimum. We then demonstrate the generality of this algorithm by sketching three meaningful applications that respectively find an element whose efficiency, accuracy or completeness is nearly optimal. These results suggest approaches to solving the utility problem from explanation-based learning, the multiple extension problem from nonmonotonic reasoning and the ...
Speeding Up Inferences Using Relevance Reasoning: A Formalism and Algorithms
 Artificial Intelligence, 1997
Abstract

Cited by 16 (3 self)
Irrelevance reasoning refers to the process in which a system reasons about which parts of its knowledge are relevant (or irrelevant) to a specific query. Aside from its importance in speeding up inferences from large knowledge bases, relevance reasoning is crucial in advanced applications such as modeling complex physical devices and information gathering in distributed heterogeneous systems. This article presents a novel framework for studying the various kinds of irrelevance that arise in inference and efficient algorithms for relevance reasoning. We present a ...
PALO: A Probabilistic Hill-Climbing Algorithm
 Artificial Intelligence, 1995
Abstract

Cited by 16 (2 self)
Many learning systems search through a space of possible performance elements, seeking an element whose expected utility, over the distribution of problems, is high. As the task of finding the globally optimal element is often intractable, many practical learning systems instead hill-climb to a local optimum. Unfortunately, even this is problematic as the learner typically does not know the underlying distribution of problems, which it needs to determine an element's expected utility. This paper addresses the task of approximating this hill-climbing search when the utility function can only be estimated by sampling. We present a general algorithm, PALO, that returns an element that is, with provably high probability, essentially a local optimum. We then demonstrate the generality of this algorithm by presenting three distinct applications that respectively find an element whose efficiency, accuracy or completeness is nearly optimal. These results suggest approaches to solving the utility problem ...
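A minimal sketch of the flavor of such an algorithm, assuming independent problem samples and bounded utility differences; this is an illustrative reconstruction using a Hoeffding-style confidence bound, not the paper's PALO. A neighbor is adopted only when its estimated advantage over the current element exceeds the sampling error, so each accepted step is, with high probability, a real improvement.

```python
import math
import random

def hoeffding_radius(n, delta, spread):
    """Half-width of a (1 - delta) confidence interval for the mean of n
    i.i.d. samples lying in an interval of width `spread` (Hoeffding)."""
    return spread * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def significant_improvement(current, neighbors, utility, sample_problems,
                            n=200, delta=0.05, spread=1.0):
    """Return a neighbor whose estimated advantage over `current` exceeds
    the sampling error (so it is, w.h.p., a true improvement), else None."""
    problems = [sample_problems() for _ in range(n)]
    for cand in neighbors(current):
        diffs = [utility(cand, p) - utility(current, p) for p in problems]
        if sum(diffs) / n > hoeffding_radius(n, delta, spread):
            return cand
    return None

def climb(start, neighbors, utility, sample_problems, max_steps=50, **kw):
    """Hill-climb until no statistically significant improvement remains;
    the result is then, w.h.p., essentially a local optimum."""
    elem = start
    for _ in range(max_steps):
        nxt = significant_improvement(elem, neighbors, utility,
                                      sample_problems, **kw)
        if nxt is None:
            break
        elem = nxt
    return elem

# Toy example: integers as "performance elements", problems drawn near 3;
# utility differences between neighbors lie in [-1, 1], so spread = 2.
random.seed(0)
best = climb(0,
             neighbors=lambda x: [x - 1, x + 1],
             utility=lambda e, p: -abs(e - p),
             sample_problems=lambda: random.gauss(3.0, 0.5),
             spread=2.0)
```

The distribution of problems is never consulted directly; only samples from it are, which is exactly what makes the scheme applicable when the distribution is unknown.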
Diagnosing Double Regular Systems
 Annals of Mathematics and Artificial Intelligence, 1999
Abstract

Cited by 16 (0 self)
We consider the problem of sequentially testing the components of a double regular system, when the testing of each component is costly. Generalizing earlier results about k-out-of-n systems, we provide a polynomial-time algorithm for the most cost-efficient sequential testing of double regular systems. The algorithm can be implemented to work efficiently both for explicitly given systems and for systems given by an oracle.
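A classical special case of such results, shown here only as an illustration (it is not the paper's algorithm): for a series system, i.e. an n-out-of-n system where testing stops at the first failure, expected cost is minimized by testing components in nondecreasing order of cost divided by failure probability. An adjacent-swap argument proves the ordering optimal.

```python
def series_test_order(components):
    """components: list of (cost, p_work) pairs for independent components.
    For a series system (every component must work), sort by
    cost / P(failure); swapping any adjacent pair out of this order
    can only raise the expected cost."""
    return sorted(components, key=lambda cp: cp[0] / (1.0 - cp[1]))

def expected_cost(order):
    """Expected total cost when testing stops at the first failing component."""
    total, p_reach = 0.0, 1.0
    for cost, p_work in order:
        total += p_reach * cost   # this test runs only if earlier ones passed
        p_reach *= p_work
    return total

# (cost, probability the component works)
comps = [(5.0, 0.9), (1.0, 0.5), (3.0, 0.99)]
best = series_test_order(comps)
```

The cheap, failure-prone component (cost 1.0, works with probability 0.5) is tested first: it is the most informative per unit cost, since a failure there ends the search immediately.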
Measuring and Improving the Effectiveness of Representations
 In Proceedings of IJCAI-91, 1991
Abstract

Cited by 13 (9 self)
This report discusses what it means to claim that a representation is an effective encoding of knowledge. We first present dimensions of merit for evaluating representations, based on the view that usefulness is a behavioral property, and is necessarily relative to a specified task. We then provide methods (based on results from mathematical statistics) for reliably measuring effectiveness empirically, and hence for comparing different representations. We also discuss weak but guaranteed methods of improving inadequate representations. Our results are an application of the ideas of formal learning theory to concrete knowledge representation formalisms.

1 Introduction

A principal aim of research in knowledge representation and reasoning is to design good formalisms for representing knowledge about the world. This paper gives operational criteria for evaluating the goodness of a "representation". Many areas of AI research can use these results. For example, many papers on nonmonoton...
Probably Approximately Optimal Derivation Strategies
 In Proc. KR-91, 1991
Abstract

Cited by 11 (8 self)
An inference graph can have many "derivation strategies", each a particular ordering of the steps involved in reducing a given query to a sequence of database retrievals. An "optimal strategy" for a given distribution of queries is a complete strategy whose "expected cost" is minimal, where the expected cost depends on the conditional probabilities that each requested retrieval succeeds, given that a member of this class of queries is posed. This paper describes the PAO algorithm that first uses a set of training examples to approximate these probability values, and then uses these estimates to produce a "probably approximately optimal" strategy; i.e., given any ε, δ > 0, PAO produces a strategy whose cost is within ε of the cost of the optimal strategy, with probability greater than 1 − δ. This paper also shows how to obtain these strategies in time polynomial in 1/ε, 1/δ and the size of the inference graph, for many important classes of graphs, including all and...
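An illustrative reconstruction of the two phases, under the simplifying assumption that a strategy is just an ordering of retrievals for a disjunctive query answered by the first success; the brute-force search over orderings stands in for the paper's polynomial-time construction, and all names are hypothetical.

```python
import itertools

def estimate_probs(training, n_retrievals):
    """Phase 1: estimate each retrieval's success probability from training
    examples (one tuple of success flags per training query)."""
    return [sum(t[i] for t in training) / len(training)
            for i in range(n_retrievals)]

def expected_query_cost(order, costs, probs):
    """Expected cost of attempting retrievals in `order` until the first
    success answers the (disjunctive) query."""
    total, p_all_failed = 0.0, 1.0
    for i in order:
        total += p_all_failed * costs[i]  # attempted only if earlier ones failed
        p_all_failed *= 1.0 - probs[i]
    return total

def best_strategy(training, costs):
    """Phase 2: pick the ordering that is cheapest under the estimates.
    (Brute force over permutations for clarity only.)"""
    probs = estimate_probs(training, len(costs))
    return min(itertools.permutations(range(len(costs))),
               key=lambda order: expected_query_cost(order, costs, probs))

# Four training queries; each tuple says which retrievals succeeded on it.
training = [(True, True, False), (False, True, True),
            (False, True, False), (False, False, True)]
costs = [2.0, 1.0, 3.0]
strategy = best_strategy(training, costs)
```

With enough training examples the estimated probabilities concentrate around the true ones, so the chosen ordering's cost lands within ε of optimal with probability at least 1 − δ, which is the "probably approximately optimal" guarantee described above.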
Probably Approximately Optimal Satisficing Strategies
 Artificial Intelligence, 1990
Abstract

Cited by 11 (3 self)
A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, seeking a satisfying configuration of successes and failures. The expected cost of the search depends both on the success probabilities of the individual experiments and on the search strategy, which specifies the order in which the experiments are to be performed. A strategy that minimizes the expected cost is optimal. Earlier work has provided "optimizing functions" that compute optimal strategies for certain classes of search problems from the success probabilities of the individual experiments. We extend those results by providing a general model of such strategies, and an algorithm, PAO, that identifies an approximately optimal strategy when the probability values are not known. The algorithm first estimates the relevant probabilities from a number of trials of each undetermined experiment, and then uses these estimates, and the proper optimizing function, to identify a strategy ...