Results 1–10 of 24
Forget It!
 In Proceedings of the AAAI Fall Symposium on Relevance, 1994
Abstract
Cited by 103 (8 self)
This paper describes in general terms the particular forms of forgetting used in (Lin and Reiter [2; 3]). Specifically, we propose a logical theory to account for: forgetting about a fact (forget that John is a student), and forgetting about a relation (forget the student relation). We then apply our notion of forgetting in defining various notions of relevance.
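The propositional case of forgetting a fact, as described in this abstract, can be sketched directly: forgetting a variable p from a theory T yields the disjunction of T with p set to true and T with p set to false. The variable names (student, enrolled) below are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def forget(theory, var):
    """Forget `var` from a theory (a predicate over assignments):
    forget(T, p) = T[p := True] or T[p := False]."""
    def forgotten(assign):
        a_true = dict(assign, **{var: True})
        a_false = dict(assign, **{var: False})
        return theory(a_true) or theory(a_false)
    return forgotten

# Toy theory: "John is a student AND John is enrolled"
T = lambda a: a["student"] and a["enrolled"]

# Forget that John is a student; only "enrolled" still constrains models.
T2 = forget(T, "student")

for student, enrolled in product([True, False], repeat=2):
    a = {"student": student, "enrolled": enrolled}
    print(a, T2(a))
```

After forgetting, T2 is equivalent to "John is enrolled": the student variable no longer affects which assignments are models, which is exactly the sense in which the fact has become irrelevant.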
Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes
 Journal of Machine Learning Research, 2006
Abstract
Cited by 88 (11 self)
This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) a general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators; (ii) a specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP; (iii) a three-phased procedure called representation policy iteration, comprising a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions; (iv) a specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method; (v) several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs; and (vi) a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaborations of the proposed framework are briefly summarized at the end.
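The core construction in (ii) above can be sketched in a few lines: build the combinatorial Laplacian of an undirected state graph and take its smoothest eigenvectors as basis functions. The 5-state chain below is an illustrative toy, not an MDP from the paper.

```python
import numpy as np

# Adjacency matrix of an undirected 5-state chain: i <-> i+1.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# Eigenvectors of L, ordered by eigenvalue (np.linalg.eigh returns
# ascending order for symmetric matrices): smooth, global basis functions.
eigvals, eigvecs = np.linalg.eigh(L)
pvfs = eigvecs[:, :3]        # keep the 3 smoothest proto-value functions

print(np.round(eigvals, 3))
```

The first eigenvector (eigenvalue 0 on a connected graph) is constant over the state space; later ones oscillate progressively faster, giving a Fourier-like basis adapted to the graph's geometry.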
Automated Model Selection for Simulation Based on Relevance Reasoning
 Artificial Intelligence, 1997
Abstract
Cited by 25 (5 self)
Constructing an appropriate model is a crucial step in performing the reasoning required to successfully answer a query about the behavior of a physical situation. In the compositional modeling approach [7], a system is provided with a library of composable pieces of knowledge about the physical world called model fragments. The model construction problem involves selecting appropriate model fragments to describe the situation. Model construction can be considered either for static analysis of a single state or for simulation of dynamic behavior over a sequence of states. The latter is significantly more difficult than the former since one must select model fragments without knowing exactly what will happen in the future states. The model construction problem in general can advantageously be formulated as a problem of reasoning about relevance of knowledge that is available to the system using a general framework for reasoning about relevance described in [21, 16]. In this paper, we p...
Irrelevance Reasoning In Knowledge Based Systems
 1993
Abstract
Cited by 22 (9 self)
Speeding up inferences made from large knowledge bases is a key to scaling up knowledge-based systems. To do so, a system must have the ability to automatically identify and ignore information that is irrelevant to a specific task. Identifying irrelevant knowledge is also key to enabling reasoning in environments in which several systems (and their respective knowledge bases) interoperate. This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. As a basis for addressing these problems, we present a formal framework for analyzing irrelevance. The framework includes a space of possible definitions of irrelevance, based on a proof-theoretic analysis of the notion. Within the space of definitions, we identify the class of...
Samuel meets Amarel: Automating Value Function Approximation using Global State Space Analysis
 2005
Abstract
Cited by 20 (1 self)
Most work on value function approximation adheres to Samuel's original design: agents learn a task-specific value function using parameter estimation, where the approximation architecture (e.g., polynomials) is specified by a human designer. This paper proposes a novel framework generalizing Samuel's paradigm using a coordinate-free approach to value function approximation. Agents learn both representations and value functions by constructing geometrically customized task-independent basis functions that form an orthonormal set for the Hilbert space of smooth functions on the underlying state space manifold. The approach rests on a technical result showing that the space of smooth functions on a (compact) Riemannian manifold has a discrete spectrum associated with the Laplace-Beltrami operator. In the discrete setting, spectral analysis of the graph Laplacian yields a set of geometrically customized basis functions for approximating and decomposing value functions. The proposed framework generalizes Samuel's value function approximation paradigm by combining it with a formalization of Saul Amarel's paradigm of representation learning through global state space analysis.
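The decomposition step this abstract describes can be sketched as a least-squares projection of a value function onto the smoothest graph-Laplacian eigenvectors. The chain graph and quadratic target values below are illustrative assumptions, not an example from the paper.

```python
import numpy as np

# Build a 20-state undirected chain graph and its combinatorial Laplacian.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Spectral basis: the 5 smoothest eigenvectors of the Laplacian.
_, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :5]

# Toy target value function (quadratic over the chain) and its
# least-squares approximation in the subspace spanned by the basis.
V = np.linspace(0.0, 1.0, n) ** 2
w, *_ = np.linalg.lstsq(basis, V, rcond=None)
V_hat = basis @ w

err = float(np.max(np.abs(V - V_hat)))
print(err)
```

Because the value function is smooth over the state graph, a handful of low-frequency basis functions already captures most of it; adding higher-frequency eigenvectors would shrink the residual further.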
A logical notion of conditional independence: Properties and applications
 Artificial Intelligence, 1997
Abstract
Cited by 19 (3 self)
We propose a notion of conditional independence with respect to propositional logic and study some of its key properties. We present several equivalent formulations of the proposed notion, each oriented towards a specific application of logical reasoning such as abduction and diagnosis. We suggest a framework for utilizing logical independence computationally by structuring a propositional logic database around a directed acyclic graph. This structuring explicates many of the independences satisfied by the underlying database. Based on these structural independences, we develop an algorithm for a class of structured databases that is not necessarily Horn. The algorithm is linear in the size of a database structure and can be used for deciding entailment, computing abductions and diagnoses. The presented results are motivated by similar results in the literature on probabilistic and constraint-based reasoning.
Exploiting Irrelevance Reasoning to Guide Problem Solving
 In Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1993
Abstract
Cited by 18 (7 self)
Identifying that parts of a knowledge base (KB) are irrelevant to a specific query is a powerful method of controlling search during problem solving. However, finding methods of such irrelevance reasoning and analyzing their utility are open problems. We present a framework based on a proof-theoretic analysis of irrelevance that enables us to address these problems. Within the framework, we focus on a class of strong-irrelevance claims and show that they have several desirable properties. For example, in the context of Horn-rule theories, we show that strong-irrelevance claims can be derived efficiently either by examining the KB or as logical consequences of other strong-irrelevance claims. An important aspect is that our algorithms reason about irrelevance using only a small part of the KB. Consequently, the reasoning is efficient and the derived irrelevance claims are independent of changes to other parts of the KB.
Semantic Abstraction for Concept Representation And Learning
 Symposium on Abstraction, Reformulation and Approximation (SARA98), Asilomar Conference, 1998
Abstract
Cited by 18 (3 self)
So far, abstraction has been mainly investigated in problem solving tasks. In this paper, we are interested in the role of abstraction in representing and learning concepts (i.e., intensional descriptions of classes of objects). We propose a novel perspective on abstraction, originating from the observation that a conceptualization of a domain involves entities belonging to at least three levels. The fundamental level is the perception of the world, where concrete objects reside. For memorizing objects, some kind of structure, which describes objects and relations perceived in the world, is needed. Finally, to communicate with others, and also to perform reasoning, a language has to be used; the language allows both the world and theories about the world to be described intensionally.
The Representation Race – Preprocessing for Handling Time Phenomena
 In Ramon Lopez de Mantaras and Enric Plaza, editors, Machine Learning: ECML 2000, Lecture Notes in Artificial Intelligence, 2000
Abstract
Cited by 18 (4 self)
Designing the representation languages for the input, LE, and output, LH, of a learning algorithm is the hardest task within machine learning applications. This paper emphasizes the importance of constructing an appropriate representation LE for knowledge discovery applications, using the example of time-related phenomena. Given the same raw data – most frequently a database with time-stamped data – rather different representations have to be produced for the learning methods that handle time. In this paper, a set of learning tasks dealing with time is given, together with the input required by the learning methods which solve the tasks. Transformations from raw data to the desired representation are illustrated by three case studies.
Speeding Up Inferences Using Relevance Reasoning: A Formalism and Algorithms
 Artificial Intelligence, 1997
Abstract
Cited by 16 (3 self)
Irrelevance reasoning refers to the process in which a system reasons about which parts of its knowledge are relevant (or irrelevant) to a specific query. Aside from its importance in speeding up inferences from large knowledge bases, relevance reasoning is crucial in advanced applications such as modeling complex physical devices and information gathering in distributed heterogeneous systems. This article presents a novel framework for studying the various kinds of irrelevance that arise in inference and efficient algorithms for relevance reasoning. We present a ...