Results 1–10 of 6,413
Neural network ensembles, cross validation, and active learning
Neural Information Processing Systems 7, 1995
"... Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it qua ..."
Abstract

Cited by 479 (6 self)
 Add to MetaCart
Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so
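The ambiguity defined above has a direct computational form. A minimal sketch in Python, assuming equally weighted ensemble members and a (members × inputs) array of predictions; the function name and toy data are illustrative, not from the paper:

import numpy as np

def ensemble_ambiguity(preds, weights=None):
    # preds: (M, N) outputs of M ensemble members on N unlabeled inputs.
    M = preds.shape[0]
    w = np.full(M, 1.0 / M) if weights is None else np.asarray(weights)
    mean_pred = w @ preds                  # weighted ensemble output per input
    amb = w @ (preds - mean_pred) ** 2     # per-input variation around the ensemble
    return amb.mean()                      # averaged over the unlabeled data

preds = np.random.default_rng(0).normal(size=(5, 100))  # 5 members, 100 inputs
print(ensemble_ambiguity(preds))

In the paper's decomposition, the ensemble generalization error is the weighted average member error minus this average ambiguity; the ambiguity term needs no labels, which is what makes unlabeled data useful here.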
Improving generalization with active learning
Machine Learning, 1994
"... Abstract. Active learning differs from "learning from examples " in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples ..."
Abstract

Cited by 544 (1 self)
 Add to MetaCart
neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful. " We test our implementation, called an SGnetwork, on three domains and observe significant improvement
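A hedged illustration of the selective-sampling idea, not the paper's SG network, whose construction is specific to it: a committee of learners queries the oracle where its members disagree most. The names and the variance-based query rule are assumptions for this sketch:

import numpy as np

def select_queries(committee_preds, k):
    # committee_preds: (M, N) outputs of M committee members on N unlabeled candidates.
    disagreement = committee_preds.var(axis=0)   # high variance marks a "useful" region
    return np.argsort(disagreement)[-k:]         # indices worth sending to the oracle

preds = np.random.default_rng(1).normal(size=(7, 50))
print(select_queries(preds, k=5))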
Exponential stability of BAM neural networks with transmission delays
Neurocomputing 57, 2004
"... In this paper, a generalized model of bidirectional associative memory (BAM) neural networks delays and impulses is investigated. By constructing suitable Lyapunov functional, Halanaly differential inequality and Mmatrix theory, some sufficient conditions for global exponential stability of genera ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
In this paper, a generalized model of bidirectional associative memory (BAM) neural networks delays and impulses is investigated. By constructing suitable Lyapunov functional, Halanaly differential inequality and Mmatrix theory, some sufficient conditions for global exponential stability
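The model class in question can be written down concretely. A LaTeX sketch of a generic delayed, impulsive BAM system of the kind the abstract describes; the notation ($a_i$, $p_{ji}$, $\tau_{ji}$, and so on) is assumed here, not taken from the paper:

\begin{aligned}
\dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{m} p_{ji}\, f_j\big(y_j(t-\tau_{ji})\big) + I_i, \\
\dot{y}_j(t) &= -b_j y_j(t) + \sum_{i=1}^{n} q_{ij}\, g_i\big(x_i(t-\sigma_{ij})\big) + J_j,
\end{aligned}

with impulsive jumps $x_i(t_k^+) = x_i(t_k^-) + \Delta_{ik}\big(x_i(t_k^-)\big)$ at impulse times $t_k$. Results of this type establish exponential stability when a matrix assembled from the decay rates $a_i, b_j$, the connection weights, and the Lipschitz constants of $f_j, g_i$ is a nonsingular M-matrix.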
Complete discrete 2D Gabor transforms by neural networks for image analysis and compression
1988
"... A threelayered neural network is described for transforming twodimensional discrete signals into generalized nonorthogonal 2D “Gabor” representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [lo], [15], which provide a comp ..."
Abstract

Cited by 478 (8 self)
 Add to MetaCart
A threelayered neural network is described for transforming twodimensional discrete signals into generalized nonorthogonal 2D “Gabor” representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [lo], [15], which provide a
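The elementary functions of this representation are 2D Gabor functions: a Gaussian envelope modulated by a complex exponential carrier. A short Python sketch of one such kernel; the parameterization is a common one and assumed, since the snippet does not give the paper's exact form:

import numpy as np

def gabor_2d(x, y, x0, y0, sx, sy, u0, v0):
    # Gaussian envelope centered at (x0, y0) with widths (sx, sy) ...
    envelope = np.exp(-np.pi * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))
    # ... modulated by a complex carrier with spatial frequency (u0, v0).
    carrier = np.exp(2j * np.pi * (u0 * (x - x0) + v0 * (y - y0)))
    return envelope * carrier

xx, yy = np.meshgrid(np.arange(32), np.arange(32))
kernel = gabor_2d(xx, yy, x0=16, y0=16, sx=6, sy=6, u0=0.1, v0=0.0)
print(kernel.shape)

Because such a family is non-orthogonal, expansion coefficients cannot be obtained by inner products alone, which is why the paper uses a network to find them iteratively.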
Optimal Brain Damage
1990
"... We have used informationtheoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved sp ..."
Abstract

Cited by 510 (5 self)
 Add to MetaCart
We have used informationtheoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved
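The pruning rule in Optimal Brain Damage scores each weight by its saliency, s_k = ½ h_kk w_k², using a diagonal approximation to the Hessian of the loss. A minimal sketch, assuming the diagonal Hessian terms are already available (the paper computes them with a backprop-like pass); names and toy data are illustrative:

import numpy as np

def obd_prune_mask(weights, h_diag, prune_fraction=0.2):
    # Saliency: estimated loss increase from deleting each weight.
    saliency = 0.5 * h_diag * weights ** 2
    cutoff = np.quantile(saliency, prune_fraction)
    return saliency > cutoff          # False marks the low-saliency weights to remove

rng = np.random.default_rng(2)
w = rng.normal(size=1000)
h = np.abs(rng.normal(size=1000))     # stand-in for precomputed second derivatives
mask = obd_prune_mask(w, h)
print(mask.mean())                    # fraction of weights kept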
A Neural Probabilistic Language Model
Journal of Machine Learning Research, 2003
"... A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen ..."
Abstract

Cited by 447 (19 self)
 Add to MetaCart
is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on stateoftheart ngram models, and that the proposed approach allows to take advantage of longer contexts.
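The architecture behind these results learns a distributed representation for each word jointly with the probability function. A compact forward-pass sketch in Python; the layer sizes are illustrative, and the paper's optional direct input-to-output connections are omitted for brevity:

import numpy as np

V, n, d, h = 5000, 4, 30, 50            # vocab size, context length, embedding dim, hidden units
rng = np.random.default_rng(0)
C = rng.normal(0, 0.1, (V, d))          # word embeddings, shared across context positions
H = rng.normal(0, 0.1, (n * d, h))      # input-to-hidden weights
U = rng.normal(0, 0.1, (h, V))          # hidden-to-output weights

def next_word_probs(context_ids):
    x = C[context_ids].reshape(-1)      # concatenate the n context embeddings
    a = np.tanh(x @ H)                  # hidden layer
    logits = a @ U
    e = np.exp(logits - logits.max())   # numerically stable softmax over the vocabulary
    return e / e.sum()

p = next_word_probs([12, 7, 903, 44])
print(p.shape, p.sum())                 # (5000,) 1.0

Training maximizes the log-likelihood of observed next words, so probability mass is shared among semantically similar contexts through the embedding matrix.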
Discrete-Time BAM Neural Networks With Variable Delays
"... This letter deals with the global exponential stability of discretetime bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delaydependent exponential stability criterion for BAM ne ..."
Abstract
 Add to MetaCart
neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical
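For the discrete-time case, the systems the letter analyzes have the general form below; a LaTeX sketch with assumed notation, where $\tau_j(k)$ and $\sigma_i(k)$ are the variable delays:

\begin{aligned}
x_i(k+1) &= a_i x_i(k) + \sum_{j=1}^{m} p_{ji}\, f_j\big(y_j(k-\tau_j(k))\big) + I_i, \\
y_j(k+1) &= b_j y_j(k) + \sum_{i=1}^{n} q_{ij}\, g_i\big(x_i(k-\sigma_i(k))\big) + J_j.
\end{aligned}

A delay-dependent LMI criterion of the kind mentioned can then be verified numerically with a semidefinite-programming solver.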
Statistical pattern recognition: A review
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
"... The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques ..."
Abstract

Cited by 1035 (30 self)
 Add to MetaCart
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network
Regularization Theory and Neural Networks Architectures
Neural Computation, 1995
"... We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Ba ..."
Abstract

Cited by 395 (32 self)
 Add to MetaCart
Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead
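The framework the abstract summarizes starts from a variational problem; written out in LaTeX (standard regularization-network notation, assumed here):

H[f] = \sum_{i=1}^{N} \big(y_i - f(\mathbf{x}_i)\big)^2 + \lambda\, \|P f\|^2,

where $P$ is a smoothness-enforcing differential operator and $\lambda > 0$ trades data fit against smoothness. The minimizer is a one-hidden-layer expansion

f(\mathbf{x}) = \sum_{i=1}^{N} c_i\, G(\mathbf{x} - \mathbf{x}_i),

with $G$ the Green's function of $P^{*}P$; choosing a smoothness functional whose $G$ is a Gaussian recovers Radial Basis Function networks.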
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
Advances in Neural Information Processing Systems 8, 1996
"... On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have ..."
Abstract

Cited by 433 (20 self)
 Add to MetaCart
On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results
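The sparse coarse coding referred to is CMAC-style tile coding: several offset grid tilings over the state space, each activating exactly one binary feature, so a state becomes a sparse vector suitable for linear value-function approximation. A sketch with assumed grid sizes and a simple uniform offset scheme; the paper's experiments use CMACs, with hashing for large spaces:

import numpy as np

def tile_features(state, n_tilings=8, tiles_per_dim=10, lo=0.0, hi=1.0):
    # state: sequence of floats, each in [lo, hi].
    dims = len(state)
    scaled = (np.asarray(state) - lo) / (hi - lo) * tiles_per_dim
    active = []
    for t in range(n_tilings):
        offset = t / n_tilings                           # shift each tiling slightly
        idx = np.floor(scaled + offset).astype(int) % tiles_per_dim
        flat = np.ravel_multi_index(tuple(idx), (tiles_per_dim,) * dims)
        active.append(t * tiles_per_dim ** dims + flat)  # unique index per tiling
    return np.array(active)                              # indices of the active tiles

print(tile_features([0.3, 0.7]))                         # 8 active features out of 800

The approximate value of a state is then just the sum of the learned weights at the active indices, which keeps each update cheap and local.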