Results 1-10 of 28
Exploiting the Past and the Future in Protein Secondary Structure Prediction
, 1999
"... Motivation: Predicting the secondary structure of a protein (alphahelix, betasheet, coil) is an important step towards elucidating its three dimensional structure, as well as its function. Presently, the best predictors are based on machine learning approaches, in particular neural network archite ..."
Abstract

Cited by 115 (22 self)
Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three-dimensional structure, as well as its function. Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids centered at the prediction site. Although a fixed small window avoids overfitting problems, it does not permit capturing variable long-ranged information. Results: We introduce a family of novel architectures which can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the input and output levels. While our system currently achieves an overall performance close to 76% correct prediction (at least comparable to the best existing systems), the main emphasis here is on the development of new algorithmic ideas. Availability: The executable program for predicting protein secondary structure is available from the authors free of charge. Contact: pfbaldi@ics.uci.edu, gpollast@ics.uci.edu, brunak@cbs.dtu.dk, paolo@dsi.unifi.it.
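The bidirectional dynamics the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' system: the input dimension, hidden size, random weights, and the three-class softmax (helix/sheet/coil) are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def bidirectional_rnn_predict(x, Wf, Wb, Vf, Vb, Wo):
    """Label each position of sequence x (T x d) with one of 3 classes
    (helix / sheet / coil) using a forward (upstream) and a backward
    (downstream) hidden chain, as in a bidirectional RNN."""
    T, _ = x.shape
    h = Wf.shape[0]
    hf = np.zeros((T, h))        # forward states: left-to-right pass
    hb = np.zeros((T, h))        # backward states: right-to-left pass
    for t in range(T):
        prev = hf[t - 1] if t > 0 else np.zeros(h)
        hf[t] = np.tanh(Vf @ prev + Wf @ x[t])
    for t in reversed(range(T)):
        nxt = hb[t + 1] if t < T - 1 else np.zeros(h)
        hb[t] = np.tanh(Vb @ nxt + Wb @ x[t])
    # Concatenate both directions, project to 3 classes, softmax per position.
    logits = np.concatenate([hf, hb], axis=1) @ Wo
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

d, h = 20, 8  # e.g. a 20-dim amino-acid profile and hidden size 8 (illustrative)
Wf, Wb = rng.normal(size=(h, d)), rng.normal(size=(h, d))
Vf, Vb = rng.normal(size=(h, h)), rng.normal(size=(h, h))
Wo = rng.normal(size=(2 * h, 3))
probs = bidirectional_rnn_predict(rng.normal(size=(15, d)), Wf, Wb, Vf, Vb, Wo)
print(probs.shape)  # one helix/sheet/coil distribution per residue
```

Because each output position sees both passes, the prediction at site t can, in principle, depend on arbitrarily distant residues on either side, which is exactly what a fixed window cannot do.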
Types of Monotonic Language Learning and Their Characterization
 in "Proceedings 5th Annual ACM Workshop on Computational Learning Theory," July 27  29, Pittsburgh
, 1992
"... The present paper deals with strongmonotonic, monotonic and weakmonotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce always better and b ..."
Abstract

Cited by 32 (26 self)
The present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt. We characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem of Angluin (1980). Moreover, we study monotonic inference with iteratively working learning devices, which are of special interest in applications. In particular, it is proved that strong-monotonic inference can be performed with iteratively learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot.
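The strong-monotonicity requirement, L(h_i) ⊆ L(h_{i+1}) for successive hypotheses, can be illustrated with a toy learner. The finite-language strategy below is a hypothetical example for intuition, not one of the paper's constructions:

```python
def strong_monotonic_learner(stream):
    """Toy learner for finite languages from positive data: the hypothesis
    after each example is simply the set of strings seen so far. Since the
    seen-set only grows, every hypothesis generalizes its predecessor,
    i.e. L(h_i) is a subset of L(h_{i+1}): the strong-monotonicity condition."""
    seen = set()
    hypotheses = []
    for w in stream:
        seen.add(w)
        hypotheses.append(frozenset(seen))  # snapshot of the current guess
    return hypotheses

hyps = strong_monotonic_learner(["ab", "aabb", "ab", "aaabbb"])
# Successive hypotheses form a chain of generalizations:
assert all(h1 <= h2 for h1, h2 in zip(hyps, hyps[1:]))
print(sorted(hyps[-1]))  # ['aaabbb', 'aabb', 'ab']
```

Plain monotonic and weak-monotonic learning relax this subset chain (comparing hypotheses only relative to the target language, or only while the data remain consistent), which is why they admit strictly more learners.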
Monotonic Versus Nonmonotonic Language Learning
, 1993
"... In the present paper strongmonotonic, monotonic and weakmonotonic reasoning is studied in the context of algorithmic language learning theory from positive as well as from positive and negative data. Strongmonotonicity describes the requirement to only produce better and better generalization ..."
Abstract

Cited by 21 (13 self)
In the present paper strong-monotonic, monotonic and weak-monotonic reasoning is studied in the context of algorithmic language learning theory from positive as well as from positive and negative data. Strong-monotonicity describes the requirement to produce only better and better generalizations when more and more data are fed to the inference device. Monotonic learning reflects the eventual interplay between generalization and restriction during the process of inferring a language; however, it is demanded that, for any two hypotheses, the one output later has to be at least as good as the previously produced one with respect to the language to be learnt. Weak-monotonicity is the analogue of cumulativity in learning theory. We relate all these notions to one another as well as to previously studied modes of identification, thereby in particular obtaining a strong hierarchy.
Characterizations of Monotonic and Dual Monotonic Language Learning
 Information and Computation
, 1995
"... The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed m ..."
Abstract

Cited by 20 (7 self)
The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the concept to be learned.
Ignoring Data May be the Only Way to Learn Efficiently
, 1994
"... In designing learning algorithms it seems quite reasonable to construct them in a way such that all data the algorithm already has obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to t ..."
Abstract

Cited by 19 (13 self)
In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail: it may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.
On the Impact of Forgetting on Learning Machines
 Journal of the ACM
, 1993
"... this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that ..."
Abstract

Cited by 10 (3 self)
This paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to learn. (This work was facilitated by an international agreement under NSF Grant 9119540.)
Training Sequences
"... this paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence ..."
Abstract

Cited by 8 (1 self)
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only possible way for computers to make certain inferences.
Recursion Theoretic Models of Learning: Some Results and Intuitions
 Annals of Mathematics and Artificial Intelligence
, 1995
"... View of Learning To implement a program that somehow "learns" it is neccessary to fix a set of concepts to be learned and develop a representation for the concepts and examples of the concepts. In order to investigate general properties of machine learning it is neccesary to work in as representati ..."
Abstract

Cited by 5 (2 self)
To implement a program that somehow "learns" it is necessary to fix a set of concepts to be learned and develop a representation for the concepts and examples of the concepts. In order to investigate general properties of machine learning it is necessary to work in as representation-independent a fashion as possible. In this work, we consider machines that learn programs for recursive functions. Several authors have argued that such studies are general enough to include a wide array of learning situations [2,3,22,23,24]. For example, a behavior to be learned can be modeled as a set of stimulus and response pairs. Assuming that any behavior associates only one response to each possible stimulus, behaviors can be viewed as functions from stimuli to responses. Some behaviors, such as anger, are not easily modeled as functions. Our primary interest, however, concerns the learning of fundamental behaviors such as reading (mapping symbols to sounds), recognition (mapping pa...
Trading Monotonicity Demands versus Efficiency
 Bull. Inf. Cybern
, 1995
"... The present paper deals with the learnability of indexed families L of uniformly recursive languages from positive data. We consider the influence of three monotonicity demands and their dual counterparts to the efficiency of the learning process. The efficiency of learning is measured in depend ..."
Abstract

Cited by 3 (1 self)
The present paper deals with the learnability of indexed families L of uniformly recursive languages from positive data. We consider the influence of three monotonicity demands, and of their dual counterparts, on the efficiency of the learning process. The efficiency of learning is measured by the number of mind changes a learning algorithm is allowed to perform. The three notions of (dual) monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations (specializations, respectively) when fed more and more data on the target concept.
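Measuring efficiency by mind changes can be sketched as follows; the indexed family and the least-consistent-index strategy below are hypothetical illustrations, not constructions from the paper:

```python
def count_mind_changes(learner, stream):
    """Run `learner` (a function from a finite data sequence to a hypothesis
    index) on a growing text and count how often its conjecture changes."""
    changes, last = 0, None
    for t in range(1, len(stream) + 1):
        h = learner(stream[:t])
        if last is not None and h != last:
            changes += 1
        last = h
    return changes

# Illustrative indexed family L_k = {a^n : n <= k}: conjecture the least
# index k consistent with the data, i.e. the length of the longest string.
learner = lambda data: max(len(w) for w in data)
print(count_mind_changes(learner, ["a", "aaa", "aa", "aaaa"]))  # 2
```

A bound on the number of mind changes then plays the role of a complexity measure: demanding (dual) monotonicity can force a learner to use more mind changes on the same family, which is the trade-off the abstract refers to.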
Two Variations of Inductive Inference of Languages from Positive Data
, 1995
"... The present paper deals with the learnability of indexed families of uniformly recursive languages by single inductive inference machines (abbr. IIM) and teams of IIMs from positive and both positive and negative data. We study the learning power of single IIMs in dependence on the hypothesis space ..."
Abstract

Cited by 3 (3 self)
The present paper deals with the learnability of indexed families of uniformly recursive languages by single inductive inference machines (abbr. IIM) and teams of IIMs from positive as well as from both positive and negative data. We study the learning power of single IIMs in dependence on the hypothesis space and on the number of anomalies the synthesized language is allowed to have. Our results are fourfold. First, we show that allowing anomalies does not increase the learning power as long as inference from positive and negative data is considered. Second, we establish an infinite hierarchy in the number of allowed anomalies for learning from positive data. Third, we prove that every learnable indexed family L may even be inferred with respect to the hypothesis space L itself. Fourth, we characterize learning with anomalies from positive data. Finally, we investigate the error-correcting power of team learners, and relate the inference capabilities of teams in dependence on their size to one another...