Results 1 – 10 of 3,842
NEURAL NETWORKS IN NON-EUCLIDEAN SPACES.
"... Abstract. Multilayer Perceptrons (MLPs) use scalar products to compute weighted activation of neurons, providing decision borders using combinations of soft hyperplanes. The weighted fan-in activation function may be replaced by a distance function between the inputs and the weights, offering a natur ..."
Abstract

Cited by 1 (1 self)
natural generalization of the standard MLP model. Non-Euclidean distance functions may also be introduced by normalization of the input vectors into an extended feature space. Both approaches influence the shapes of decision borders dramatically. An illustrative example showing these changes is provided.
Neural networks in non-Euclidean metric spaces
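A minimal sketch of the mechanism this abstract describes: replacing the fan-in activation w·x with a distance ||x − w|| turns soft-hyperplane borders into soft-hypersphere borders. This is illustrative only, not the authors' code; the names and the radius parameter r are assumptions.

```python
import numpy as np

def fanin_activation(x, w, b):
    # Standard MLP neuron: weighted sum (fan-in) through a sigmoid,
    # giving a soft-hyperplane decision border.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def distance_activation(x, w, r):
    # Distance-based neuron: the fan-in w.x is replaced by the Euclidean
    # distance ||x - w||; the decision border becomes a soft hypersphere
    # of (assumed) radius r centered at the weight vector.
    return 1.0 / (1.0 + np.exp(np.linalg.norm(x - w) - r))

x = np.array([0.5, 0.5])
w = np.array([0.5, 0.5])
# An input at the weight vector itself yields a high activation.
print(distance_activation(x, w, 1.0))
```

Non-Euclidean variants follow by swapping `np.linalg.norm` for another metric (e.g. a Minkowski norm).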
, 1999
"... Multilayer Perceptrons (MLPs) use scalar products to compute weighted activation of neurons, providing decision borders using combinations of soft hyperplanes. The weighted fan-in activation function corresponds to Euclidean distance functions used to compute similarities between input and weight vec ..."
Abstract

Cited by 2 (1 self)
vector. Replacing the fan-in activation function by a non-Euclidean distance function offers a natural generalization of the standard MLP model, providing more flexible decision borders. An alternative way leading to similar results is based on renormalization of the input vectors using non-Euclidean norms
Neural Network-Based Face Detection
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1998
"... We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present ..."
Abstract

Cited by 1206 (22 self)
TABU SEARCH
"... Tabu Search is a metaheuristic that guides a local heuristic search procedure to explore the solution space beyond local optimality. One of the main components of tabu search is its use of adaptive memory, which creates a more flexible search behavior. Memory-based strategies are therefore the hallm ..."
Abstract

Cited by 822 (48 self)
the hallmark of tabu search approaches, founded on a quest for "integrating principles," by which alternative forms of memory are appropriately combined with effective strategies for exploiting them. In this chapter we address the problem of training multilayer feedforward neural networks
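The adaptive-memory idea can be sketched on a toy integer minimization problem. This is a hedged illustration of the short-term tabu list, not the chapter's network-training procedure; the problem, tenure, and function names are assumptions.

```python
import random

def tabu_search(objective, start, neighbors, iters=100, tenure=5, seed=0):
    # Minimal tabu search: take the best non-tabu neighbor each step,
    # even if it worsens the objective, while a short-term memory
    # (the tabu list) forbids recently visited solutions so the search
    # can escape local optima instead of cycling back to them.
    rng = random.Random(seed)
    current = best = start
    tabu = []
    for _ in range(iters):
        cands = [n for n in neighbors(current, rng) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=objective)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)          # expire the oldest tabu entry
        if objective(current) < objective(best):
            best = current       # long-term memory: best solution seen
    return best

# Toy problem: minimize (x - 7)^2 over the integers, moving +/-1 per step.
obj = lambda x: (x - 7) ** 2
nbrs = lambda x, rng: [x - 1, x + 1]
print(tabu_search(obj, 0, nbrs))  # -> 7
```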
Probabilistic Inference Using Markov Chain Monte Carlo Methods
, 1993
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. R ..."
Abstract

Cited by 736 (24 self)
from data, and Bayesian learning for neural networks.
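A minimal sketch of the core MCMC idea the review covers, here the random-walk Metropolis algorithm on a one-dimensional target. The target, step scale, and burn-in length are assumptions for illustration, not the review's examples.

```python
import math
import random

def metropolis(log_prob, x0, steps=20000, scale=1.0, seed=0):
    # Random-walk Metropolis: propose x' = x + noise and accept with
    # probability min(1, p(x') / p(x)). Only ratios of the target are
    # needed, so the (often intractable) normalizing constant cancels;
    # the chain's states approximate samples from the target.
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_prob(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop      # accept the proposal
        samples.append(x)              # on rejection, repeat current x
    return samples

# Target: standard normal, specified by its log-density up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=5.0)
mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

After discarding burn-in, the sample mean and variance approximate the target's 0 and 1.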
A Growing Neural Gas Network Learns Topologies
 Advances in Neural Information Processing Systems 7
, 1995
"... An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this m ..."
Abstract

Cited by 401 (5 self)
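A hedged sketch of the competitive Hebbian rule underlying such topology-learning models, with a fixed set of units for brevity. Growing Neural Gas additionally inserts and removes units and ages edges; all names here are assumptions.

```python
def learn_topology(data, units, epochs=1, eps=0.05):
    # Competitive Hebbian learning: for each input, connect the nearest
    # and second-nearest units by an edge and nudge the winner toward
    # the input. This simple Hebb-like rule recovers the topology of
    # the data as an undirected graph over the units.
    edges = set()
    for _ in range(epochs):
        for x in data:
            ranked = sorted(range(len(units)),
                            key=lambda i: sum((u - a) ** 2
                                              for u, a in zip(units[i], x)))
            s1, s2 = ranked[0], ranked[1]
            edges.add(frozenset((s1, s2)))    # Hebb-like edge creation
            units[s1] = [u + eps * (a - u)    # move the winner toward x
                         for u, a in zip(units[s1], x)]
    return edges

# Two well-separated clusters: edges should stay within each cluster.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
units = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 5.0]]
edges = learn_topology(data, units, epochs=10)
```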
A scaled conjugate gradient algorithm for fast supervised learning
 NEURAL NETWORKS
, 1993
"... A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate is introduced. The algorithm is based upon a class of optimization techniques well known in numerical analysis as the Conjugate Gradient Methods. SCG uses second order information from the neural netwo ..."
Abstract

Cited by 451 (0 self)
 Add to MetaCart
and avoids a time-consuming line search, which CGB and BFGS use in each iteration in order to determine an appropriate step size.
Incorporating problem-dependent structural information in the architecture of a neural network often lowers the overall complexity. The smaller the complexity of the neural
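The "no line search" property can be seen in plain conjugate gradient on a quadratic, where the optimal step along each direction has a closed form. This is a sketch of the underlying conjugate-gradient step, not Møller's SCG (which estimates the needed second-order information and scales it for non-quadratic error surfaces).

```python
def conjugate_gradient(A, b, x, iters=50, tol=1e-10):
    # CG for the quadratic f(x) = 0.5 x^T A x - b^T x (A symmetric
    # positive definite). Along each conjugate direction p the optimal
    # step is alpha = r.r / (p.Ap) in closed form, so no line search
    # is needed to determine the step size.
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(len(v)))
                           for i in range(len(M))]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    r = [bi - avi for bi, avi in zip(b, matvec(A, x))]  # residual b - Ax
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                 # closed-form step size
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Solve a small SPD system A x = b; exact answer is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
sol = conjugate_gradient(A, b, [0.0, 0.0])
```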
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
 Advances in Neural Information Processing Systems 8
, 1996
"... On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have ..."
Abstract

Cited by 433 (20 self)
When Networks Disagree: Ensemble Methods for Hybrid Neural Networks
, 1993
"... This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argu ..."
Abstract

Cited by 349 (3 self)
in functional space which helps to avoid overfitting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct
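The basic ensemble claim, that the averaged predictor's squared error is never worse than the population's average squared error, can be checked numerically. This toy population of linear estimators is an assumption for illustration, not the paper's experiments.

```python
def ensemble_predict(models, x):
    # Basic ensemble (simple average): disagreement between the models
    # partially cancels in the mean, so the averaged predictor's MSE is
    # at most the average MSE of the population (by Jensen's inequality).
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy population: biased estimators of the target f(x) = 2x.
target = lambda x: 2.0 * x
models = [lambda x, d=d: 2.0 * x + d for d in (-0.4, -0.1, 0.2, 0.3)]

x = 1.5
avg_err = sum((m(x) - target(x)) ** 2 for m in models) / len(models)
ens_err = (ensemble_predict(models, x) - target(x)) ** 2
```

Here the biases average to zero, so the ensemble error vanishes while the population's average error does not.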