Results 1–2 of 2
Improving Model Accuracy using Optimal Linear Combinations of Trained Neural Networks
 IEEE Transactions on Neural Networks
, 1992
Abstract

Cited by 38 (3 self)
Neural network (NN) based modeling often requires trying multiple networks with different architectures and training parameters in order to achieve an acceptable model accuracy. Typically, only one of the trained networks is selected as "best" and the rest are discarded. We propose using optimal linear combinations (OLCs) of the corresponding outputs of a set of NNs as an alternative to using a single network. Modeling accuracy is measured by mean squared error (MSE) with respect to the distribution of random inputs. Optimality is defined by minimizing the MSE, with the resultant combination referred to as MSE-OLC. We formulate the MSE-OLC problem for trained NNs and derive two closed-form expressions for the optimal combination weights. An example that illustrates significant improvement in model accuracy as a result of using MSE-OLCs of the trained networks is included. I. INTRODUCTION Constructing neural network (NN) based models often involves training a number of networks. The cr...
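The idea in the abstract can be sketched numerically: with a set of trained models evaluated on a common sample, the MSE-optimal combination weights are the least-squares solution of regressing the target on the member outputs. The sketch below uses noisy synthetic "networks" as illustrative stand-ins for actual trained NNs (all names and the noise setup are assumptions, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function sampled at random inputs (a stand-in for the true response).
x = rng.uniform(-1.0, 1.0, size=200)
t = np.sin(np.pi * x)

# Hypothetical stand-ins for K trained networks: each output is the target
# plus its own systematic bias and noise (illustrative only).
K = 4
Y = np.column_stack(
    [t + rng.normal(0.05 * (k + 1), 0.1, size=x.shape) for k in range(K)]
)

# MSE-optimal linear combination: weights w minimizing ||Y w - t||^2.
# np.linalg.lstsq gives the closed-form least-squares solution.
w, *_ = np.linalg.lstsq(Y, t, rcond=None)

combined = Y @ w
mse_combined = float(np.mean((combined - t) ** 2))
mse_single = np.mean((Y - t[:, None]) ** 2, axis=0)

print(f"combined MSE: {mse_combined:.5f}, best single MSE: {mse_single.min():.5f}")
```

On the sample used to fit the weights, the combined MSE can never exceed the best single network's MSE, since selecting any one network (a unit weight vector) is itself a feasible combination.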
Approximating a Function and its Derivatives Using MSE-Optimal Linear Combinations of Trained Feedforward Neural Networks
 In Proceedings of the Joint Conference on Neural Networks
, 1993
Abstract

Cited by 23 (2 self)
In this paper, we show that using MSE-optimal linear combinations of a set of trained feedforward networks may significantly improve the accuracy of approximating a function and its first- and second-order derivatives. Our results are compared to the accuracies achieved by the single best network and by the simple averaging of the outputs of the trained networks. 1 Introduction Feedforward neural networks (FNN) are widely used for function approximation. They are considered universal approximators capable of approximating an unknown mapping and its derivatives arbitrarily well (Hornik et al. 1990). Approximating the derivatives, that is, the derivatives of the output with respect to the inputs, is of significant importance in many applications. For example, in process optimization, the first- and second-order derivatives obtained from a neural network, which was trained on the process response, may be used in approximating the gradient vector and the Hessian matrix of the process response...
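The key fact behind combining derivative approximations is linearity: since the combined model is a weighted sum of the member models, its derivative is the same weighted sum of the member derivatives. A minimal sketch, using polynomial fits as illustrative stand-ins for trained feedforward networks (the weights and model choices here are assumptions for illustration, not the paper's):

```python
import numpy as np

# Two hypothetical surrogate models (polynomials standing in for trained
# feedforward networks), fit to sin(pi*x) on a grid.
x = np.linspace(-1.0, 1.0, 50)
t = np.sin(np.pi * x)
p1 = np.polynomial.Polynomial.fit(x, t, deg=3)
p2 = np.polynomial.Polynomial.fit(x, t, deg=5)

# Illustrative combination weights (in practice, the MSE-optimal ones).
w = np.array([0.3, 0.7])

# Derivative of the combination as the weighted sum of member derivatives.
xq = np.linspace(-0.9, 0.9, 7)
combo_deriv = w[0] * p1.deriv()(xq) + w[1] * p2.deriv()(xq)

# Equivalent: form the combined model first, then differentiate it.
combined = w[0] * p1 + w[1] * p2
direct = combined.deriv()(xq)

print(np.allclose(combo_deriv, direct))  # the two routes agree
```

The same argument extends to second-order derivatives (and hence to gradient and Hessian estimates), since differentiation of any order commutes with a fixed linear combination.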