Results 1–10 of 12
Improving the Rprop Learning Algorithm
Proceedings of the Second International Symposium on Neural Computation (NC 2000), 2000
"... The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing firstorder learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speedup is experimentally shown for a set of neural network learning tasks a ..."
Abstract

Cited by 41 (7 self)
 Add to MetaCart
The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing firstorder learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speedup is experimentally shown for a set of neural network learning tasks as well as for artificial error surfaces.
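The baseline this paper modifies can be sketched as follows. This is the classic Riedmiller–Braun update rule in its simple variant without weight backtracking, not the paper's improvements; the step-size constants are the commonly quoted defaults.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, weights,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop iteration: adapt each weight's step size from the sign
    of its gradient, then move the weight against the gradient."""
    sign_change = grad * prev_grad
    # Same gradient sign as last step: direction is stable, grow the step.
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    # Sign flip: a minimum was overshot, shrink the step.
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # Zero the gradient after a flip so this step neither moves the
    # weight nor triggers another adaptation next iteration.
    grad = np.where(sign_change < 0, 0.0, grad)
    weights = weights - np.sign(grad) * step
    return weights, grad, step

# Toy quadratic error surface E(w) = sum(w**2), gradient 2w.
w = np.array([3.0, -2.0])
g_prev = np.zeros_like(w)
delta = np.full_like(w, 0.1)
for _ in range(100):
    w, g_prev, delta = rprop_step(2 * w, g_prev, delta, w)
# w has converged close to the minimum at the origin.
```

Note that only the sign of the gradient is used; the per-weight step size carries all magnitude information, which is what makes Rprop robust to badly scaled error surfaces.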
Operator Adaptation in Evolutionary Computation and its Application to Structure Optimization of Neural Networks
2001
Cited by 14 (6 self)
Abstract: In this study, we give a brief overview of search strategy adaptation in evolutionary computation. …
Neural network regularization and ensembling using multiobjective evolutionary algorithms
In: Congress on Evolutionary Computation (CEC’04), IEEE, 2004
"... Abstract — Regularization is an essential technique to improve generalization of neural networks. Traditionally, regularization is conduced by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter that ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
Abstract — Regularization is an essential technique to improve generalization of neural networks. Traditionally, regularization is conduced by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter that determines to which extension the regularization in¤uences the learning algorithm must be determined beforehand. This paper addresses the neural network regularization problem from a multiobjective optimization point of view. During the optimization, both structure and parameters of the neural network will be optimized. A slightly modi£ed version of two multiobjective optimization algorithms, the dynamic weighted aggregation (DWA) method and the elitist nondominated sorting genetic algorithm (NSGAII) are used and compared. An evolutionary multiobjective approach to neural network regularization has a number of advantages compared to the traditional methods. First, a number of models with a spectrum of model complexity can be obtained in one optimization run instead of only one single solution. Second, an ef£cient new regularization term can be introduced, which is not applicable to gradientbased learning algorithms. As a natural byproduct of the multiobjective optimization approach to neural network regularization, neural network ensembles can be easily constructed using the obtained networks with different levels of model complexity. Thus, the model complexity of the ensemble can be adjusted by adjusting the weight of each member network in the ensemble. Simulations are carried out on a test function to illustrate the feasibility of the proposed ideas. I.
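The traditional single-objective formulation the abstract contrasts with (training error plus a weighted complexity term) can be sketched as follows; the sum-of-squared-weights penalty and all numbers below are illustrative assumptions, not the paper's regularizer.

```python
import numpy as np

def regularized_cost(residuals, weights, lam):
    """Classic regularized cost: error plus lam times a complexity term.
    The hyperparameter lam must be fixed before training starts."""
    mse = np.mean(residuals ** 2)       # objective 1: training error
    complexity = np.sum(weights ** 2)   # objective 2: model complexity
    return mse + lam * complexity

residuals = np.array([0.1, -0.2, 0.05])
weights = np.array([1.0, -0.5])

# Each lam value yields one error/complexity trade-off and requires its
# own training run; the paper's multiobjective approach instead returns
# a whole Pareto front of trade-offs from a single optimization run.
costs = [regularized_cost(residuals, weights, lam) for lam in (0.0, 0.1, 1.0)]
```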
On Approximate Learning by Multilayered Feedforward Circuits
Algorithmic Learning Theory, 2000
"... We deal with the problem of efficient learning of feedforward neural networks. First, we consider the objective to maximize the ratio of correctly classified points compared to the size of the training set. We show that it is NPhard to approximate the ratio within some constant relative error if ar ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
We deal with the problem of efficient learning of feedforward neural networks. First, we consider the objective to maximize the ratio of correctly classified points compared to the size of the training set. We show that it is NPhard to approximate the ratio within some constant relative error if architectures with varying input dimension, one hidden layer, and two hidden neurons are considered where the activation function in the hidden layer is the sigmoid function, and the situation of epsilonseparation is assumed, or the activation function is the semilinear function. For single hidden layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NPhard even if restricted to situations where the number of examples is limited with respect to n.
On Verification & Validation of Neural Network Based Controllers
2003
Cited by 4 (1 self)
Abstract: Artificial neural networks (ANNs) are used as an alternative to traditional models in the realm of control. Unfortunately, ANN models rarely provide any indication of the accuracy or reliability of their predictions. Before ANNs can be used in safety-critical applications (aircraft, nuclear plants, etc.), a certification process must be established for ANN-based controllers. Traditional approaches to validation of neural networks are mostly based on empirical evaluation through simulation and/or experimental testing. For online-trained ANNs used in safety-critical applications, traditional methods of verification and validation cannot be applied, leaving a wide technological gap, which we attempt to address in this paper. We describe a layered approach to ANN V&V that includes a V&V software process for pretrained neural networks, a detailed discussion of numerical issues, and techniques for dynamically measuring and monitoring the confidence of the ANN output.
Using an artificial neural network to predict parameters for frost deposition on …
Iowa State University, 2003
Cited by 4 (0 self)
Abstract: Forecasting frost formation on bridgeways in Iowa is an important yet difficult problem. Frost forms when water vapor in the air sublimates onto a surface (which occurs when the dew-point temperature of the air is greater than the surface temperature) and the surface temperature is below freezing. Only small amounts of moisture are needed to cover surfaces with frost and create hazardous travel conditions. Recently, a frost model was devised by Knollhoff et al. (2001) to predict frost deposition based on moisture-flux principles. The inputs required by the frost model are: (1) air temperature, (2) dew-point temperature, (3) wind speed, and (4) surface temperature. An artificial neural network predicts these four inputs at 20-minute intervals over a 24-hour period. The output from the neural network models can then be used as input to the frost deposition model to predict frost formation on bridgeways in Iowa. The proper development of an artificial neural network requires the dataset to be subdivided into at least a training set and a validation set. A test set can also be used to evaluate the model(s) further. Key words: artificial neural network, bridgeways, frost deposition
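The dataset subdivision the abstract calls for (training, validation, and optionally test sets) can be sketched as follows; the 70/15/15 ratio and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Four inputs per sample: air temp, dew-point temp, wind speed, surface temp.
X = rng.normal(size=(n, 4))
y = rng.normal(size=n)  # synthetic stand-in for the quantity to predict

# Shuffle once, then carve out disjoint subsets.
idx = rng.permutation(n)
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_idx = idx[:n_train]               # fit the network here
val_idx = idx[n_train:n_train + n_val]  # tune / early-stop here
test_idx = idx[n_train + n_val:]        # final held-out evaluation
```

Keeping the three index sets disjoint is what makes the validation and test errors honest estimates of performance on unseen weather conditions.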
Fast Learning for Problem Classes Using Knowledge Based Network Initialization
2000
Cited by 3 (0 self)
Abstract: The success of learning, as well as the learning speed, of an artificial neural network (ANN) strongly depends on the initial weights. If problem- or domain-specific knowledge exists, it can be transferred to the ANN through a special choice of the initial weights. In this paper, we focus on choosing a set of initial weights well suited to fast and robust learning of all particular problems from a class of related problems. Our evolutionary approach explicitly takes the learning algorithm into consideration in the design of the initial weights. The superior properties of the resulting initial weights are corroborated on a problem class defined by solving a differential equation with variable boundary conditions.
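The core idea, reusing knowledge gained on one member of a problem class as the starting weights for a related member, can be illustrated on a toy task; the quadratic problem class and plain gradient descent below are illustrative stand-ins, not the paper's evolutionary method.

```python
import numpy as np

def gradient_descent(w, grad_fn, lr=0.1, steps=50):
    """Minimal gradient descent; stands in for any weight-learning rule."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# A class of related problems: minimize ||w - c||^2 for nearby targets c.
def make_grad(c):
    return lambda w: 2 * (w - c)

# Solve one member of the class; its solution encodes class knowledge.
w_knowledge = gradient_descent(np.zeros(3), make_grad(np.array([1.0, 1.0, 1.0])))

# A new, related problem with a slightly shifted target.
target = np.array([1.1, 0.9, 1.0])
w_random = np.array([5.0, -5.0, 5.0])  # arbitrary uninformed initialization

# Same small training budget from both starting points.
after_knowledge = gradient_descent(w_knowledge, make_grad(target), steps=5)
after_random = gradient_descent(w_random, make_grad(target), steps=5)

dist_knowledge = np.linalg.norm(after_knowledge - target)
dist_random = np.linalg.norm(after_random - target)
# The knowledge-based start lands much closer on the same budget.
```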
Linear Algebra for Neural Networks
2001
Cited by 2 (2 self)
Abstract: Neural networks are quantitative models which learn to associate input and output patterns adaptively through learning algorithms. We present four main concepts from linear algebra that are essential for analyzing these models: (1) the projection of a vector, (2) the eigen- and singular-value decompositions, (3) the gradient vector and Hessian matrix of a vector function, and (4) the Taylor expansion of a vector function. We illustrate these concepts through the analysis of the Hebbian and Widrow-Hoff rules and some basic neural network architectures (the linear autoassociator, the linear heteroassociator, and the error backpropagation network). We also show that neural networks are equivalent to iterative versions of standard statistical and optimization models such as multiple regression analysis and principal component analysis.
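One of the rules the paper analyzes, Hebbian learning, can be sketched in a few lines: the weight matrix accumulates outer products of output and input patterns, W ← W + η·y·xᵀ. The patterns and learning rate below are illustrative.

```python
import numpy as np

eta = 0.1
x = np.array([1.0, 0.0, -1.0])   # input pattern
y = np.array([0.5, -0.5])        # output pattern to associate with x
W = np.zeros((2, 3))             # 2 output units, 3 input units

for _ in range(10):
    W += eta * np.outer(y, x)    # Hebbian update: co-active units strengthen

# Recall: projecting the stored input through W reproduces y scaled by
# the squared norm of x (here ||x||^2 = 2).
recall = W @ x
```

The scaling by ‖x‖² is why such analyses normalize input patterns; with unit-norm inputs the stored output is recalled exactly.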
Evaluation of the Use of Artificial Neural Networks for the Simulation of Hybrid Solar Collectors
"... In the last decade, artificial neural networks (ANNs) have been receiving an increasing attention for simulating engineering systems due to some interesting characteristics such as learning capability, fault tolerance, speed and nonlinearity. This paper describes an alternative approach to assess t ..."
Abstract
 Add to MetaCart
In the last decade, artificial neural networks (ANNs) have been receiving an increasing attention for simulating engineering systems due to some interesting characteristics such as learning capability, fault tolerance, speed and nonlinearity. This paper describes an alternative approach to assess two types of hybrid solar collector/heat pipe systems (plate heat pipe type and tube heat pipe type) using ANNs. Multiple Layer Perceptrons (MLPs) and Radial Basis Networks (RBFs) were considered. The networks were trained using results from mathematical models generated by Monte Carlo simulation. The mathematical models were based on energy balances and resulted in a system of nonlinear equations. The solution of the models was very sensitive to initial estimates, and convergence was not obtained under certain conditions. Between the two neural models, MLPs performed slightly better than RBFs. It can be concluded that similar configurations were adequate for both collector systems. It was found that ANNs simulated both collector efficiency and heat output with high accuracy when “unseen ” data were presented to the networks. An important advantage of a trained ANN over the mathematical models is that convergence is not an issue and the result is obtained almost instantaneously.