Results 11–20 of 24
Toward a Theory of Learning Coherent Concepts
In AAAI/IAAI, 2000
Cited by 3 (0 self)
We develop a theory for learning scenarios where multiple learners coexist but there are mutual compatibility constraints on their outcomes. This is natural in cognitive learning situations, where "natural" compatibility constraints are imposed on the outcomes of classifiers so that a valid sentence, image or any other domain representation is produced. We suggest that work in this direction may help to resolve the contrast between the hardness of learning as predicted by the current theoretical models and the apparent ease with which cognitive systems seem to learn. A model of concept learning is studied in which the target concept is required to cohere with other concepts of interest. The coherency is expressed via a (Boolean) constraint that the concepts have to satisfy. Under this model, learning a concept is shown to be easier (in terms of sample complexity and mistake bounds) and the concepts learned are shown to be more robust to noise in their input (attribute noise).
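The effect of a coherency constraint on the hypothesis space can be seen in a toy sketch (an illustration of the general idea, not the paper's construction): over a four-point Boolean domain, requiring that one classifier's positives be contained in the other's cuts the number of admissible hypothesis pairs from 256 to 81.

```python
from itertools import product

# Tiny illustration: two Boolean concepts over a 2-bit domain must satisfy
# the (assumed) coherency constraint c1(x) -> c2(x), i.e. c1's positives
# are a subset of c2's positives.
DOMAIN = [(0, 0), (0, 1), (1, 0), (1, 1)]

def all_concepts():
    # Every Boolean function on the 4-point domain, as a frozenset of positives.
    for bits in product([0, 1], repeat=len(DOMAIN)):
        yield frozenset(x for x, b in zip(DOMAIN, bits) if b)

def coherent(c1, c2):
    # The compatibility constraint: c1 is pointwise contained in c2.
    return c1 <= c2

unconstrained = sum(1 for c1 in all_concepts() for c2 in all_concepts())
constrained = sum(1 for c1 in all_concepts() for c2 in all_concepts()
                  if coherent(c1, c2))

print(unconstrained, constrained)  # 256 81
```

The count 81 = 3^4 reflects that each domain point independently lies in neither concept, in c2 only, or in both, which is the kind of hypothesis-space shrinkage the abstract's sample-complexity claim rests on.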
Molecular learning of wDNF formulae
In Proc. 11th Int. Meeting on DNA Computing (DNA), 2005
Cited by 1 (0 self)
We introduce a class of generalized DNF formulae called wDNF, or weighted disjunctive normal form, and present a molecular algorithm that learns a wDNF formula from training examples. Realized in DNA molecules, the wDNF machines have a natural probabilistic semantics, allowing for their application beyond the pure Boolean logical structure of the standard DNF to real-life problems with uncertainty. The potential of the molecular wDNF machines is evaluated on real-life genomics data in simulation. Our empirical results suggest the possibility of building error-resilient molecular computers that are able to learn from data, potentially from wet DNA data.
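One plausible reading of a weighted-DNF classifier with probabilistic semantics can be sketched in software (the function names and the normalized-weight semantics below are illustrative assumptions, not the paper's exact definitions):

```python
# Sketch of a weighted-DNF formula: each term is a set of required literal
# values with a nonnegative weight, and the score of an input is the
# fraction of total weight carried by the terms it satisfies.

def satisfies(term, x):
    # term: dict var -> required value; x: dict var -> value
    return all(x[v] == val for v, val in term.items())

def wdnf_score(terms, x):
    total = sum(w for _, w in terms)
    hit = sum(w for t, w in terms if satisfies(t, x))
    return hit / total if total else 0.0

terms = [({"a": 1, "b": 1}, 3.0),   # a AND b, weight 3
         ({"c": 1}, 1.0)]           # c, weight 1

print(wdnf_score(terms, {"a": 1, "b": 1, "c": 0}))  # 0.75
print(wdnf_score(terms, {"a": 0, "b": 1, "c": 1}))  # 0.25
```

With all weights equal and the score thresholded at zero, this degenerates to ordinary Boolean DNF, which is the sense in which wDNF generalizes the standard class.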
Probabilistic logic and induction
 J. of Logic and Computation
Cited by 1 (0 self)
We give a probabilistic interpretation of first-order formulas based on Valiant's model of PAC-learning. We study the resulting notion of probabilistic or approximate truth and take some first steps in developing its model theory. In particular, we show that every fixed error parameter determining the precision of universal quantification gives rise to a different class of tautologies. Finally, we study the inductive inference of first-order formulas from atomic truths.

The goal of this paper is to develop a notion of model-theoretic PAC-learning and to study the corresponding notion of probabilistic truth. This parallels the fact that Gold's model of language learning [5] can be transformed into a more general model-theoretic one (Osherson et al. [12], see also Terwijn [13]). This has already yielded some interesting results, e.g. connections with the theory of belief revision (Martin and Osherson [11]). The model of PAC-learning was introduced by Valiant [15]. It was the first probabilistic model of learning amenable to a complexity-theoretic analysis of learning tasks, and in subsequent years it became one of the most prominent models in learning-theory research. A good introduction to the theory of this model is Kearns and Vazirani [8]. The connections between logic and probability are old and manifold. An early critic of the use of universal statements outside the synthetic realm of mathematics was the sceptic Sextus Empiricus (2nd–3rd century). He pointed out that without a formal context, where a universal statement can hold by definition, such a statement can only be true when every instance ...
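The idea of an error parameter governing universal quantification can be sketched as follows (the names and the sampling setup are illustrative assumptions, not the paper's formal definitions): a statement "for all x, phi(x)" counts as epsilon-true under a distribution when the probability of drawing a counterexample is at most epsilon, which can be estimated by sampling.

```python
import random

# Approximate truth of a universal statement, estimated by sampling:
# "forall x. phi(x)" is judged epsilon-true when the observed fraction
# of counterexamples among n draws is at most epsilon.

def approx_forall(phi, sample, n, epsilon, rng):
    failures = sum(1 for _ in range(n) if not phi(sample(rng)))
    return failures / n <= epsilon

rng = random.Random(0)
# "Every integer drawn uniformly from 0..99 is below 95" is false exactly,
# but 0.1-true: counterexamples occur with probability 0.05.
phi = lambda x: x < 95
sample = lambda r: r.randrange(100)
print(approx_forall(phi, sample, 10_000, 0.1, rng))  # True
```

Shrinking epsilon makes the judgment stricter, which is the intuition behind the abstract's claim that each fixed error parameter yields a different class of tautologies.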
A Connectionist Model for Constructive Modal Reasoning
Cited by 1 (0 self)
We present a new connectionist model for constructive, intuitionistic modal reasoning. We use ensembles of neural networks to represent intuitionistic modal theories, and show that for each intuitionistic modal program there exists a corresponding neural network ensemble that computes the program. This provides a massively parallel model for intuitionistic modal reasoning, and sets the scene for integrated reasoning, knowledge representation, and learning of intuitionistic theories in neural networks, since the networks in the ensemble can be trained by examples using standard neural learning algorithms.
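A minimal sketch of the kind of translation involved (in the spirit of connectionist logic encodings generally, not this paper's ensemble construction): a single threshold unit computes the rule "if A and B then C" by firing only when both antecedent neurons are active.

```python
# A threshold unit as used in rule-to-network translations: the unit's
# weights and bias are chosen so that it fires exactly when the rule's
# antecedents are all satisfied.

def threshold_unit(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def rule_a_and_b(a, b):
    # Weights 1,1 and bias -1.5: fires iff a + b > 1.5, i.e. both inputs are 1.
    return threshold_unit([1.0, 1.0], -1.5, [a, b])

print([rule_a_and_b(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

Because such units are ordinary perceptrons, the encoded rules remain trainable by standard neural learning algorithms, which is the integration the abstract points to.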
Evolvability
Cited by 1 (0 self)
Living organisms function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as the result of a random search guided by selection. However, there has existed no theory that would explain quantitatively which mechanisms can so evolve in realistic population sizes within realistic time periods, and which are too complex. In this paper we suggest such a theory. Evolution is treated as a form of computational learning from examples in which the course of learning is influenced only by the fitness of the hypotheses on the examples, and not otherwise by the specific examples. We formulate a notion of evolvability that quantifies the evolvability of different classes of functions. It is shown that in any one phase of evolution where selection is for one beneficial behavior, monotone Boolean conjunctions and disjunctions are demonstrably evolvable over the uniform distribution, while Boolean parity functions are demonstrably not. The framework also allows a wider range of issues in evolution to be quantified. We suggest that the overall mechanism that underlies biological evolution is evolvable target pursuit, which consists of a series of evolutionary stages, each one pursuing an evolvable target in our technical sense, each target being rendered evolvable by the serendipitous combination of the environment and the outcome of previous evolutionary stages.
Learning to Assign Degrees of Belief in Relational Domains
Cited by 1 (1 self)
A recurrent question in the design of intelligent agents is how to assign degrees of belief, or subjective probabilities, to various events in a relational environment. In the standard knowledge representation approach, these probabilities are evaluated according to a knowledge base, such as a logical program or a Bayesian network. However, even for very restricted representation languages, the problem of evaluating probabilities from a knowledge base is computationally prohibitive. By contrast, this study adopts the learning to reason (L2R) framework, which aims at eliciting degrees of belief in an inductive manner. The agent is viewed as an anytime reasoner that iteratively improves its performance in light of the knowledge induced from its mistakes. By coupling exponentiated gradient strategies in online learning and weighted model counting techniques in reasoning, the L2R framework is shown to provide efficient solutions to relational probabilistic reasoning problems that are provably intractable in the classical framework.
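The online-learning half of that coupling can be illustrated with a generic exponentiated-gradient update (a standard EG step on the probability simplex, not the paper's full L2R procedure):

```python
import math

# One exponentiated-gradient (EG) step: multiplicative reweighting by the
# negative gradient, followed by renormalization so the weights stay a
# probability distribution.

def eg_update(weights, grad, eta):
    new = [w * math.exp(-eta * g) for w, g in zip(weights, grad)]
    z = sum(new)
    return [w / z for w in new]

w = [0.25, 0.25, 0.25, 0.25]
# A gradient penalizing the first two coordinates shifts mass to the others.
w = eg_update(w, [1.0, 1.0, -1.0, -1.0], eta=0.5)
print(w)
```

EG's multiplicative form keeps weights positive and normalized by construction, which is why it pairs naturally with mistake-driven anytime reasoners of the kind the abstract describes.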
Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence
Endowing computers with the ability to apply commonsense knowledge with human-level performance is a primary challenge for computer science, comparable in importance to past great challenges in other fields of science such as the sequencing of the human genome. The right approach to this problem is still under debate. Here we shall discuss and attempt to justify one approach, that of knowledge infusion. This approach is based on the view that the fundamental objective that needs to be achieved is robustness in the following sense: a framework is needed in which a computer system can represent pieces of knowledge about the world, each piece having some uncertainty, and the interactions among the pieces having even more uncertainty, such that the system can nevertheless reason from these pieces so that the uncertainties in its conclusions are at least controlled. In knowledge infusion, rules are learned from the world in a principled way so that subsequent reasoning using these rules will also be principled, and subject only to errors that can be bounded in terms of the inverse of the effort invested in the learning process.
Tractable Feature Generation through Description Logics with Value and Number Restrictions
In line with a feature generation paradigm based on relational concept descriptions, we extend the applicability to other languages of the Description Logics family endowed with specific language constructors that do not have a counterpart in the standard relational representations, such as clausal logics. We show that the adoption of an enhanced language does not increase the complexity of feature generation, since the process is still tractable. Moreover, this can be considered a formalization for future employment of even more expressive languages from the Description Logics family.
On Channels between Knowledge and Objective Worlds
Channel Theory is a mathematical theory of information flow proposed by Barwise and Seligman in the 1990s. In this paper, we discuss basic concepts of Channel Theory from the viewpoint of learning and of interfaces between agents and objective worlds, as well as the relations between Channel Theory, Valiant's notion of scenes in Robust Logic, and Khardon and Roth's formulation of Learning to Reason. We also present some examples of channels in this context.
Learning to assign degrees of belief in relational domains
DOI 10.1007/s10994-008-5075-5
A recurrent problem in the development of reasoning agents is how to assign degrees of belief to uncertain events in a complex environment. The standard knowledge representation framework imposes a sharp separation between learning and reasoning; the agent starts by acquiring a “model” of its environment, represented in an expressive language, and then uses this model to quantify the likelihood of various queries. Yet, even for simple queries, the problem of evaluating probabilities from a general-purpose representation is computationally prohibitive. In contrast, this study adopts the learning to reason (L2R) framework, which aims at eliciting degrees of belief in an inductive manner. The agent is viewed as an anytime reasoner that iteratively improves its performance in light of the knowledge induced from its mistakes. Indeed, by coupling exponentiated gradient strategies in learning and weighted model counting techniques in reasoning, the L2R framework is shown to provide efficient solutions to relational probabilistic reasoning problems that are provably intractable in the classical paradigm.