Results 11–20 of 41
Learning in Natural Language: Theory and Algorithmic Approaches
, 2000
"... This article summarizes work on developing a learning theory account for the major learning and statistics based approaches used in natural language processing. It shows that these approaches can all be explained using a single distribution free inductive principle related to the pac model of learni ..."
Abstract

Cited by 4 (1 self)
This article summarizes work on developing a learning theory account for the major learning- and statistics-based approaches used in natural language processing. It shows that these approaches can all be explained using a single distribution-free inductive principle related to the PAC model of learning. Furthermore, they all make predictions using the same simple knowledge representation: a linear representation over a common feature space. This is significant both to explaining the generalization and robustness properties of these methods and to understanding how these methods might be extended to learn from more structured, knowledge-intensive examples, as part of a learning-centered approach to higher-level natural language inferences.
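The unifying claim above, that these methods all predict with a linear representation over a common feature space, can be illustrated with a small multiplicative-update learner in the Winnow family. This is a hedged sketch of one such linear learner over Boolean features; the function names and parameters are illustrative and not taken from the article.

```python
def winnow_train(examples, n_features, alpha=2.0, max_epochs=100):
    """Learn a linear separator (w . x >= threshold) over Boolean features
    with Winnow-style multiplicative promotion and demotion updates."""
    threshold = n_features / 2.0
    w = [1.0] * n_features
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in examples:  # x: tuple of 0/1 features, y: 0/1 label
            score = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if score >= threshold else 0
            if pred != y:
                mistakes += 1
                # promote active weights on a false negative,
                # demote them on a false positive
                factor = alpha if y == 1 else 1.0 / alpha
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
        if mistakes == 0:  # consistent over a full pass
            break
    return w, threshold

def winnow_predict(w, threshold, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
```

Multiplicative updates of this kind incur a number of mistakes that grows only logarithmically with the number of irrelevant features, which is one reason linear representations over large feature spaces remain practical.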
Probabilistic logic and induction
 J. of Logic and Computation
"... We give a probabilistic interpretation of firstorder formulas based on Valiants model of paclearning. We study the resulting notion of probabilistic or approximate truth and take some first steps in developing its model theory. In particular we show that every fixed error parameter determining the ..."
Abstract

Cited by 3 (1 self)
We give a probabilistic interpretation of first-order formulas based on Valiant's model of PAC-learning. We study the resulting notion of probabilistic or approximate truth and take some first steps in developing its model theory. In particular we show that every fixed error parameter determining the precision of universal quantification gives rise to a different class of tautologies. Finally we study the inductive inference of first-order formulas from atomic truths.
1 Introduction
The goal of this paper is to develop a notion of model-theoretic PAC-learning and to study the corresponding notion of probabilistic truth. This parallels the fact that Gold's model of language learning [5] can be transformed to a more general model-theoretic one (Osherson et al. [12], see also Terwijn [13]). This has already yielded some interesting results, e.g. connections with the theory of belief revision (Martin and Osherson [11]). The model of PAC-learning was introduced by Valiant [15]. This model was the first probabilistic model of learning amenable to a complexity-theoretic analysis of learning tasks, and in the subsequent years it became one of the most prominent models in learning theory research. A good introduction to the theory of this model is Kearns and Vazirani [8]. The connections between logic and probability are old and manifold. An early critic of the use of universal statements outside of the synthetic realm of mathematics was the sceptic Sextus Empiricus (2nd–3rd century). He pointed out that without a formal context, where a universal statement can hold by definition, such a statement can only be true when every instance
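The idea of an error parameter governing the precision of universal quantification can be illustrated by a sampling test: declare "for all x, phi(x)" approximately true when the observed rate of counterexamples stays below epsilon. This is a sketch under assumed semantics, not the paper's exact definitions.

```python
import random

def approximately_true(phi, sample, n_samples=1000, epsilon=0.05, seed=0):
    """Accept 'forall x. phi(x)' as epsilon-approximately true if the
    fraction of sampled counterexamples falls below epsilon.
    Illustrative sketch only; the paper's notion is model-theoretic."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if not phi(sample(rng)))
    return failures / n_samples < epsilon
```

Under this reading, shrinking epsilon makes the universal quantifier stricter, so different values of the error parameter accept different sets of formulas, which matches the abstract's claim that each fixed error parameter yields a different class of tautologies.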
Molecular learning of wDNF formulae
 In Proc. 11th Int. Meeting on DNA Computing (DNA
, 2005
"... Abstract. We introduce a class of generalized DNF formulae called wDNF or weighted disjunctive normal form, and present a molecular algorithm that learns a wDNF formula from training examples. Realized in DNA molecules, the wDNF machines have a natural probabilistic semantics, allowing for their app ..."
Abstract

Cited by 2 (1 self)
Abstract. We introduce a class of generalized DNF formulae called wDNF, or weighted disjunctive normal form, and present a molecular algorithm that learns a wDNF formula from training examples. Realized in DNA molecules, the wDNF machines have a natural probabilistic semantics, allowing for their application beyond the pure Boolean logical structure of the standard DNF to real-life problems with uncertainty. The potential of the molecular wDNF machines is evaluated on real-life genomics data in simulation. Our empirical results suggest the possibility of building error-resilient molecular computers that are able to learn from data, potentially from wet DNA data.
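One plausible reading of a weighted DNF with probabilistic semantics is that each term carries a nonnegative weight and the formula outputs the weight fraction of satisfied terms. The sketch below uses that hypothetical semantics for illustration; it is not necessarily the paper's exact definition, and the DNA realization is of course not modeled.

```python
def term_satisfied(term, assignment):
    """term: dict mapping variable name to its required truth value."""
    return all(assignment.get(v) == val for v, val in term.items())

def wdnf_prob(terms, assignment):
    """terms: list of (term, weight) pairs. Returns the total weight of
    satisfied terms normalized by the total weight, read as a probability."""
    total = sum(w for _, w in terms)
    hit = sum(w for t, w in terms if term_satisfied(t, assignment))
    return hit / total if total else 0.0
```

With all weights equal this degenerates to counting satisfied terms, and with a single term it recovers ordinary Boolean evaluation, which is one way a weighted semantics can extend the standard DNF.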
New revision algorithms
 In Algorithmic Learning Theory, 15th International Conference, ALT 2004
"... Abstract. A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the resource one is interested in) is polynomial in the syntactic distance between the initial and the ..."
Abstract

Cited by 2 (0 self)
Abstract. A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the resource one is interested in) is polynomial in the syntactic distance between the initial and the target concept, but only polylogarithmic in the number of variables in the universe. We give efficient revision algorithms in the model of learning with equivalence and membership queries. The algorithms work in a general revision model where both deletion- and addition-type revision operators are allowed. In this model one of the main open problems is the efficient revision of Horn sentences. Two revision algorithms are presented for special cases of this problem: for depth-1 acyclic Horn sentences, and for definite Horn sentences with unique heads. We also present an efficient revision algorithm for threshold functions.
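The notions of syntactic distance and membership-query revision can be made concrete for monotone conjunctions. The sketch below is a deliberately naive toy: it measures distance as the symmetric difference of literal sets and identifies the target with one membership query per variable. The paper's algorithms aim for cost polylogarithmic in the number of variables, which this toy does not achieve.

```python
def syntactic_distance(initial, target):
    """Number of single-literal additions/deletions separating two monotone
    conjunctions, each given as a frozenset of variable indices."""
    return len(initial ^ target)

def revise(initial, membership_query, n_vars):
    """Toy revision via membership queries: variable v belongs to the target
    conjunction iff the all-ones-except-v point is a negative example.
    Uses n queries regardless of distance; illustrative only."""
    target = set()
    for v in range(n_vars):
        point = [1] * n_vars
        point[v] = 0
        if not membership_query(point):
            target.add(v)
    return frozenset(target)
```

The efficiency criterion in the abstract asks that query cost scale with `syntactic_distance(initial, target)` rather than with `n_vars`, which is what makes revision harder than learning from scratch with unlimited queries.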
Evolvability
"... Abstract. Living organisms function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as result of a random search guided by selection. However, there has existed no theory that would explain quantitative ..."
Abstract

Cited by 2 (0 self)
Abstract. Living organisms function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as a result of a random search guided by selection. However, there has existed no theory that would explain quantitatively which mechanisms can so evolve in realistic population sizes within realistic time periods, and which are too complex. In this paper we suggest such a theory. Evolution is treated as a form of computational learning from examples in which the course of learning is influenced only by the fitness of the hypotheses on the examples, and not otherwise by the specific examples. We formulate a notion of evolvability that quantifies the evolvability of different classes of functions. It is shown that in any one phase of evolution where selection is for one beneficial behavior, monotone Boolean conjunctions and disjunctions are demonstrably evolvable over the uniform distribution, while Boolean parity functions are demonstrably not. The framework also allows a wider range of issues in evolution to be quantified. We suggest that the overall mechanism underlying biological evolution is evolvable target pursuit, which consists of a series of evolutionary stages, each pursuing an evolvable target in our technical sense, each target being rendered evolvable by the serendipitous combination of the environment and the outcome of previous evolutionary stages.
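The flavor of the framework, random single-step mutation with selection driven only by aggregate fitness, can be conveyed by a toy search for a monotone conjunction under the uniform distribution. This is an informal illustration, not Valiant's formal model: it uses exact fitness over all points (feasible only for tiny variable counts) and a single lineage rather than a population.

```python
import itertools
import random

def conj(s, x):
    """Monotone conjunction over variable set s, evaluated on point x."""
    return all(x[v] for v in s)

def agreement(hyp, target, n):
    """Fitness: fraction of agreement with the target under the uniform
    distribution, computed exactly over all 2^n points."""
    pts = list(itertools.product([0, 1], repeat=n))
    return sum(conj(hyp, x) == conj(target, x) for x in pts) / len(pts)

def evolve_conjunction(target, n, generations=300, seed=0):
    """Mutate one random literal per generation; keep the mutant whenever
    its fitness is at least as good (selection sees only fitness)."""
    rng = random.Random(seed)
    hyp = frozenset()
    for _ in range(generations):
        if agreement(hyp, target, n) == 1.0:
            break
        v = rng.randrange(n)
        mutant = hyp ^ {v}  # add or remove a single literal
        if agreement(mutant, target, n) >= agreement(hyp, target, n):
            hyp = mutant
    return hyp
```

For conjunctions this fitness landscape has an improving move from every non-target hypothesis, so the walk converges; for parity functions every wrong hypothesis agrees with the target on exactly half the points, so fitness gives the search no gradient, which is the intuition behind the non-evolvability result.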
Learning to Assign Degrees of Belief in Relational Domains
"... Abstract. A recurrent question in the design of intelligent agents is how to assign degrees of beliefs, or subjective probabilities, to various events in a relational environment. In the standard knowledge representation approach, these probabilities are evaluated according to a knowledge base, such ..."
Abstract

Cited by 1 (1 self)
Abstract. A recurrent question in the design of intelligent agents is how to assign degrees of belief, or subjective probabilities, to various events in a relational environment. In the standard knowledge representation approach, these probabilities are evaluated according to a knowledge base, such as a logic program or a Bayesian network. However, even for very restricted representation languages, the problem of evaluating probabilities from a knowledge base is computationally prohibitive. By contrast, this study adopts the learning-to-reason (L2R) framework, which aims at eliciting degrees of belief in an inductive manner. The agent is viewed as an anytime reasoner that iteratively improves its performance in light of the knowledge induced from its mistakes. By coupling exponentiated-gradient strategies in online learning with weighted model counting techniques in reasoning, the L2R framework is shown to provide efficient solutions to relational probabilistic reasoning problems that are provably intractable in the classical framework.
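The exponentiated-gradient component mentioned above has a standard form: weights on a probability simplex are scaled multiplicatively by the exponentiated negative gradient of the loss and renormalized. The sketch below shows that generic update for squared loss; the paper's relational encoding and weighted model counting are not reproduced here.

```python
import math

def predict(weights, features):
    """Linear prediction with simplex-constrained weights."""
    return sum(w * f for w, f in zip(weights, features))

def eg_update(weights, features, y_true, y_pred, eta=0.5):
    """Exponentiated-gradient step: w_i <- w_i * exp(-eta * grad_i),
    then renormalize. For squared loss (y_pred - y_true)^2 the gradient
    with respect to w_i is 2 * (y_pred - y_true) * features[i]."""
    grad_scale = 2.0 * (y_pred - y_true)
    new_w = [w * math.exp(-eta * grad_scale * f)
             for w, f in zip(weights, features)]
    total = sum(new_w)
    return [w / total for w in new_w]
```

After a mistake, weight mass shifts multiplicatively toward the features that would have reduced the error, which is what gives EG-style learners their fast convergence when few features are relevant.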
Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence ∗
"... ABSTRACT. Endowing computers with the ability to apply commonsense knowledge with humanlevel performance is a primary challenge for computer science, comparable in importance to past great challenges in other fields of science such as the sequencing of the human genome. The right approach to this pr ..."
Abstract

Cited by 1 (0 self)
ABSTRACT. Endowing computers with the ability to apply commonsense knowledge with human-level performance is a primary challenge for computer science, comparable in importance to past great challenges in other fields of science such as the sequencing of the human genome. The right approach to this problem is still under debate. Here we shall discuss and attempt to justify one approach, that of knowledge infusion. This approach is based on the view that the fundamental objective is robustness in the following sense: a framework is needed in which a computer system can represent pieces of knowledge about the world, each piece having some uncertainty, and the interactions among the pieces having even more uncertainty, such that the system can nevertheless reason from these pieces so that the uncertainties in its conclusions are at least controlled. In knowledge infusion, rules are learned from the world in a principled way so that subsequent reasoning using these rules will also be principled, and subject only to errors that can be bounded in terms of the inverse of the effort invested in the learning process.
PAC Quasiautomatizability of Resolution over Restricted Distributions
, 2013
"... We consider principled alternatives to unsupervised learning in data mining by situating the learning task in the context of the subsequent analysis task. Specifically, we consider a queryanswering (hypothesistesting) task: In the combined task, we decide whether an input query formula is satisfie ..."
Abstract

Cited by 1 (1 self)
We consider principled alternatives to unsupervised learning in data mining by situating the learning task in the context of the subsequent analysis task. Specifically, we consider a query-answering (hypothesis-testing) task: in the combined task, we decide whether an input query formula is satisfied over a background distribution by using input examples directly, rather than invoking a two-stage process in which (i) rules over the distribution are learned by an unsupervised learning algorithm and (ii) a reasoning algorithm decides whether or not the query formula follows from the learned rules. In previous work [15], we observed that the learning task could satisfy numerous desirable criteria in this combined context – effectively matching what could be achieved by agnostic learning of CNFs from partial information – that are not known to be achievable directly. In this work, we show that likewise, there are reasoning tasks that are achievable in such a combined context that are not known to be achievable directly (and indeed, have been seriously conjectured to be impossible, cf. Alekhnovich and Razborov [1]). Namely, we test for a resolution proof of the query formula of a given size in quasi-polynomial time (that is, “quasi-automatizing” resolution). The learning setting we consider is a partial-information, restricted-distribution setting that generalizes learning parities over the uniform distribution from partial information, another task that is known not to be achievable directly in various models (cf. Ben-David and Dichterman [5] and Michael [20]).
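For context on the parity-learning task mentioned above: with fully observed, noise-free examples, learning a parity reduces to solving a linear system over GF(2), which Gaussian elimination handles directly. The sketch below covers only that easy special case; the partial-information, restricted-distribution setting the paper studies is substantially harder.

```python
def solve_gf2(rows):
    """rows: list of (bits, label), bits a list of 0/1, label the parity
    of the unknown subset on those bits. Returns a parity vector consistent
    with all examples (free variables set to 0), or None if inconsistent."""
    n = len(rows[0][0])
    mat = [list(bits) + [label] for bits, label in rows]  # augmented matrix
    pivots = []
    r = 0
    for col in range(n):
        pivot = next((i for i in range(r, len(mat)) if mat[i][col]), None)
        if pivot is None:
            continue  # free column
        mat[r], mat[pivot] = mat[pivot], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col]:
                mat[i] = [a ^ b for a, b in zip(mat[i], mat[r])]
        pivots.append(col)
        r += 1
    for row in mat[r:]:
        if row[-1]:
            return None  # 0 = 1: no consistent parity
    x = [0] * n
    for i, col in enumerate(pivots):
        x[col] = mat[i][-1]
    return x
```

When some bits of each example are hidden, this reduction breaks down, which is exactly why the partial-information variant is known not to be achievable directly in the models cited.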
A Connectionist Model for Constructive Modal Reasoning
"... We present a new connectionist model for constructive, intuitionistic modal reasoning. We use ensembles of neural networks to represent intuitionistic modal theories, and show that for each intuitionistic modal program there exists a corresponding neural network ensemble that computes the program. T ..."
Abstract

Cited by 1 (0 self)
We present a new connectionist model for constructive, intuitionistic modal reasoning. We use ensembles of neural networks to represent intuitionistic modal theories, and show that for each intuitionistic modal program there exists a corresponding neural network ensemble that computes the program. This provides a massively parallel model for intuitionistic modal reasoning, and sets the scene for integrated reasoning, knowledge representation, and learning of intuitionistic theories in neural networks, since the networks in the ensemble can be trained by examples using standard neural learning algorithms. 1
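The basic building block behind connectionist encodings of logic programs is a threshold unit that fires exactly when the body of a rule is satisfied. The minimal sketch below encodes a propositional conjunction (the body of a rule such as A & B -> C) as one unit; the paper's intuitionistic modal machinery layers much more on top of this.

```python
def threshold_unit(weights, bias, inputs):
    """Classic threshold neuron: fires iff the weighted sum plus bias
    is positive."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def rule_and(inputs):
    """Encode an n-ary conjunction: unit weights and bias -(n - 0.5),
    so the unit fires exactly when all n inputs are 1."""
    n = len(inputs)
    return threshold_unit([1.0] * n, -(n - 0.5), inputs)
```

Because such units are ordinary perceptron-style neurons, a network built from them can also be trained by examples with standard neural learning algorithms, which is the integration of reasoning and learning the abstract points to.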
Electronic Colloquium on Computational Complexity
, 2006
"... Living cells function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as result of a random search guided by selection and realized by genetic mutations. However, as some observers have noted, there has ..."
Abstract

Cited by 1 (0 self)
Living cells function according to complex mechanisms that operate in different ways depending on conditions. Evolutionary theory suggests that such mechanisms evolved as a result of a random search guided by selection and realized by genetic mutations. However, as some observers have noted, there has existed no theory that would explain quantitatively which mechanisms can so evolve, and which are too complex. In this paper we provide such a theory. As a starting point, we relate evolution to the theory of computational learning, which addresses the question of the complexity of recognition mechanisms that can be derived from examples. We derive a quantitative notion of evolvability that quantifies the population sizes and numbers of generations that suffice for the evolution of significant classes of mechanisms. It is shown that in any one phase of evolution where selection is for one beneficial behavior, certain specific classes of mechanisms are demonstrably evolvable, while certain others are demonstrably not.