Results 1–10 of 24
Learning Action Strategies for Planning Domains
Artificial Intelligence, 1997
Abstract

Cited by 71 (3 self)
This paper reports on experiments where techniques of supervised machine learning are applied to the problem of planning. The input to the learning algorithm is composed of a description of a planning domain, planning problems in this domain, and solutions for them. The output is an efficient algorithm, a strategy, for solving problems in that domain. We test the strategy on an independent set of planning problems from the same domain, so that success is measured by its ability to solve complete problems. A system, L2Act, has been developed in order to perform these experiments. We have experimented with the blocks world domain and the logistics domain, using strategies in the form of a generalization of decision lists, where the rules on the list are existentially quantified first-order expressions. The learning algorithm is a variant of Rivest's [39] algorithm, improved with several techniques that reduce its time complexity. As the experiments demonstrate, generalization is a...
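The ordered-rule strategies this abstract describes can be illustrated with a minimal sketch (the predicates, rules, and state encoding below are hypothetical illustrations, not the L2Act system itself): rules are tried in order, and the first rule whose existentially quantified condition matches some object determines the action.

```python
# Minimal decision-list strategy sketch (hypothetical rules, not L2Act).
# A state is a set of ground facts; each rule pairs an existentially
# quantified condition with an action template. Rules are tried in order,
# and the first match determines the action.

def first_match(rules, state, objects):
    """Return the action of the first rule whose condition holds for some binding."""
    for condition, action in rules:
        for x in objects:
            if condition(state, x):
                return action(x)
    return None  # no rule fires

# Blocks-world flavour: move any clear, misplaced block onto the table.
rules = [
    (lambda s, x: ("clear", x) in s and ("on-table", x) not in s,
     lambda x: ("move-to-table", x)),
    (lambda s, x: ("on-table", x) in s and ("clear", x) in s,
     lambda x: ("noop", x)),
]

state = {("clear", "a"), ("on", "a", "b")}
print(first_match(rules, state, ["a", "b"]))  # ('move-to-table', 'a')
```

The rule order matters: swapping the two rules would change which action fires for a block that satisfies both conditions, which is exactly what makes the representation a decision list rather than an unordered rule set.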
Learning to Take Actions
1998
Abstract

Cited by 50 (8 self)
We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial-time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies, where a subroutine already acquired can be used as a basic action in a higher-level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard.
Robust Logics
Abstract

Cited by 29 (6 self)
Suppose that we wish to learn from examples and counterexamples a criterion for recognizing whether an assembly of wooden blocks constitutes an arch. Suppose also that we have preprogrammed recognizers for various relationships, e.g. on-top-of(x, y), above(x, y), etc., and believe that some possibly complex expression in terms of these base relationships should suffice to approximate the desired notion of an arch. How can we formulate such a relational learning problem so as to exploit the benefits that are demonstrably available in propositional learning, such as attribute-efficient learning by linear separators, and error-resilient learning? We believe that learning in a general setting that allows for multiple objects and relations in this way is a fundamental key to resolving the following dilemma that arises in the design of intelligent systems: Mathematical logic is an attractive language of description because it has clear semantics and sound proof procedures. However, as a basis for large programmed systems it leads to brittleness because, in practice, consistent usage of the various predicate names throughout a system cannot be guaranteed, except in application areas such as mathematics where the viability of the axiomatic method has been demonstrated independently. In this paper we develop the following approach to circumventing this dilemma. We suggest that brittleness can be overcome by using a new kind of logic in which each statement is learnable. By allowing the system to learn rules empirically from the environment, relative to any particular programs it may have for recognizing some base predicates, we enable the system to acquire a set of statements approximately consistent with each other and with the world, without the need for a globally knowledgeable and consistent programmer. We illustrate
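The bridge from relational scenes to propositional learning that this abstract motivates can be sketched very simply (the relation names and scene encoding are illustrative assumptions, not the paper's formalism): each existentially quantified relational pattern becomes one boolean feature, after which any propositional learner applies.

```python
# Propositionalization sketch (hypothetical relations, not Robust Logics itself):
# a scene is a set of ground facts such as ("on-top-of", "a", "b"); each
# relation name becomes the boolean feature "some pair of objects stands in
# this relation", yielding a fixed-length vector for a propositional learner.

def featurize(scene, relations):
    """Map a set of ground facts to one existential boolean feature per relation."""
    return [int(any(fact[0] == rel for fact in scene)) for rel in relations]

relations = ["on-top-of", "above", "touching"]
arch = {("on-top-of", "top", "left"), ("on-top-of", "top", "right")}
print(featurize(arch, relations))  # [1, 0, 0]
```

Richer feature spaces (patterns over specific variable bindings rather than bare relation names) follow the same recipe; the point is only that once scenes are vectors, attribute-efficient propositional algorithms become available.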
Relational Learning for NLP using Linear Threshold Elements
1999
Abstract

Cited by 28 (12 self)
We describe a coherent view of learning and reasoning with relational representations in the context of natural language processing. In particular, we discuss the Neuroidal Architecture, Inductive Logic Programming, and the SNoW system, explaining the relationships among these, and thereby offer an explanation of the theoretical basis for the SNoW system. We suggest that extensions of this system along the lines suggested by the theory may provide new levels of scalability and functionality.
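The linear threshold elements in the title are commonly trained with mistake-driven multiplicative updates such as Winnow; a minimal sketch of that update rule follows (the parameter values and tiny dataset are illustrative, and this is a generic textbook Winnow, not the SNoW implementation).

```python
# Minimal Winnow learner over boolean features, the kind of linear threshold
# element discussed above. Weights start at 1, the threshold at n, and only
# mistakes trigger a multiplicative promotion/demotion of the active weights.

def winnow_train(examples, n, alpha=2.0):
    w, theta = [1.0] * n, float(n)
    for x, label in examples:
        pred = sum(wi for wi, xi in zip(w, x) if xi) >= theta
        if pred != label:  # mistake-driven multiplicative update
            factor = alpha if label else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta

# Toy target: the label is just the first feature.
examples = [([1, 0, 0], True), ([0, 1, 1], False)] * 3
w, theta = winnow_train(examples, 3)
predict = lambda x: sum(wi for wi, xi in zip(w, x) if xi) >= theta
print(predict([1, 0, 0]), predict([0, 1, 1]))  # True False
```

The attraction for NLP is attribute efficiency: with very many candidate features and few relevant ones, Winnow's mistake bound grows only logarithmically in the total number of features.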
Many-Layered Learning
Neural Computation, 2002
Abstract

Cited by 20 (1 self)
We explore incremental assimilation of new knowledge by sequential learning. Of particular interest is how a network of many knowledge layers can be constructed in an online manner, such that the learned units represent building blocks of knowledge that serve to compress the overall representation and facilitate transfer. We motivate the need for many layers of knowledge, and we advocate sequential learning as an avenue for promoting construction of layered knowledge structures. Finally, our novel STL algorithm demonstrates an efficient method for simultaneously acquiring and organizing a collection of concepts and functions from a stream of rich but otherwise unstructured information.
Accelerated dense random projections
2009
Abstract

Cited by 8 (0 self)
In dimensionality reduction, a set of points in R^d is mapped into R^k, with the target dimension k smaller than the original dimension d, while distances between all pairs of points are approximately preserved. Currently popular methods for achieving this involve random projection, or choosing a linear mapping (a k × d matrix) from a distribution that is independent of the input points. Applying the mapping (chosen according to this distribution) is shown to give the desired property with at least constant probability. The contributions in this thesis are twofold. First, we provide a framework for designing such distributions. Second, we derive efficient random projection algorithms using this framework. Our results achieve performance exceeding other existing approaches. When the target dimension is significantly smaller than the original dimension we gain significant improvement by designing efficient algorithms for applying certain linear algebraic transforms.
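The baseline this thesis accelerates, a dense Gaussian random projection, can be sketched directly (the point set and dimensions below are arbitrary test values): a k × d matrix with i.i.d. N(0, 1/k) entries approximately preserves pairwise distances with high probability.

```python
# Plain dense Gaussian random projection sketch: map points from R^d to R^k
# with a k x d matrix of i.i.d. N(0, 1/k) entries, so squared distances are
# preserved in expectation.
import math
import random

def random_projection(points, k, seed=0):
    """Project a list of length-d vectors into k dimensions via a Gaussian matrix."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in R] for p in points]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

pts = [[1.0] * 1000, [0.0] * 1000]
proj = random_projection(pts, 100)
# The ratio concentrates near 1 with high probability.
print(dist(proj[0], proj[1]) / dist(pts[0], pts[1]))
```

Applying this dense matrix costs O(kd) per point; the structured transforms the abstract alludes to exist precisely to beat that cost when k is much smaller than d.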
The Required Measures of Phase Segregation in Distributed Cortical Processing
2001
Abstract

Cited by 7 (6 self)
Many studies conducted in the field of neuroscience suggest that synchronous and oscillatory activity plays an essential role in the neural processing among cortical areas. A recent doctrine, the temporal correlation hypothesis, attempts to integrate the synchronous activities of neurons at distributed areas of the cortex to represent separate objects, overcoming what is known as the binding problem. The segregation of phase helps preserve the integrity of the synchronized neural activity as it propagates to deeper layers of the cortex. Here, the timing is crucial, especially in the case where synchronized spike volleys must meet after crossing different paths in the cortex. The purpose of this work is to show that cortical circuits can act as a phase-locked segregation mechanism to desynchronize the neural responses associated with different objects. In particular, the inhibitory interneurons that are found in cortex give the desired behavior. As neural correlate, we employ the spike response model (SRM) on top of a localist, connectionist architecture designed for representing symbolic and relational information.
Using Temporal Binding for Hierarchical Recruitment of Conjunctive Concepts over Delayed Lines
2003
Abstract

Cited by 4 (2 self)
The temporal correlation hypothesis proposes using distributed synchrony for the binding of different stimulus features. However, synchronized spikes must travel over cortical circuits that have varying-length pathways, leading to mismatched arrival times. This raises the question of how initial stimulus-dependent synchrony might be preserved at a destination binding site. Earlier, we proposed constraints on tolerance and segregation parameters for a phase-coding approach, within cortical circuits, to address this question [22]. The purpose of the present paper is twofold. First, we conduct simulation experiments to test the proposed constraints. Second, we explore the practicality of temporal binding to drive a process of long-term memory formation based on a recruitment learning method [15].
Towards a Game Agent
2002
Abstract

Cited by 4 (0 self)
The objective of this report is to give the reader a survey of state-of-the-art techniques and academic research in the field of artificial life, where the simulation of complex and emergent behavior is the central point of investigation. Furthermore, we focus on games, artificial intelligence, and the concept of agents to give a classification and comparison of modern techniques used to simulate and/or animate creatures and other life-like forms.
Algorithmic Theories of Learning
Foundations of Computer Science, 1999
Abstract

Cited by 4 (0 self)
We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples, in spite of the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples are sufficient to learn rich concept classes (including threshold functions, boolean formulae and polynomial surfaces). As a result, we obtain simple intuitive proofs for the generalization bounds of Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust halfspace can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are...
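The margin-based halfspace learning discussed here can be illustrated with a standard margin perceptron sketch (a textbook variant chosen for illustration, not the paper's algorithm; the margin value and toy data are arbitrary): updates fire not only on misclassifications but whenever an example is classified with insufficient margin.

```python
# Minimal margin perceptron sketch: update on any example whose signed
# score falls at or below the margin, not just on outright mistakes.
# This pushes the learned halfspace toward a robust (large-margin) separator.

def margin_perceptron(examples, dim, margin=0.1, epochs=20):
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in examples:  # y in {-1, +1}
            score = y * sum(wi * xi for wi, xi in zip(w, x))
            if score <= margin:  # too close to the boundary: additive update
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

data = [([1.0, 0.0], 1), ([0.0, 1.0], -1)]
w = margin_perceptron(data, 2)
print(w)  # [1.0, -1.0]
```

The connection to the abstract's theme is that robustness (a margin separating the classes) is exactly what lets few examples suffice: the classical mistake bound scales with the inverse squared margin, independent of the number of attributes.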