Results 11–20 of 60
Predicting the Stock Market
, 1998
Abstract

Cited by 16 (1 self)
This paper presents a tutorial introduction to predictions of stock time series. The various approaches of technical and fundamental analysis are presented, and the prediction problem is formulated as a special case of inductive learning. The problems with performance evaluation of near-random-walk processes are illustrated with examples, together with guidelines for avoiding the risk of data-snooping. The connections to concepts like the "bias-variance dilemma", overtraining, and model complexity are further covered. Existing benchmarks and testing metrics are surveyed and some new measures are introduced.
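The evaluation pitfall for near-random-walk processes can be sketched with synthetic data: a predictor's raw MSE looks respectable unless it is scored against the trivial persistence forecast. The AR(1) series and the 0.05 mean-reversion coefficient below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a near-random-walk series: a random walk plus a weak
# mean-reverting component (the faint predictable structure).
n = 5000
noise = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] - 0.05 * x[t - 1] + noise[t]

# Naive "persistence" forecast: tomorrow's value equals today's.
persistence_mse = np.mean((x[1:] - x[:-1]) ** 2)

# A predictor that knows the true (tiny) mean-reverting component.
pred = x[:-1] - 0.05 * x[:-1]
model_mse = np.mean((x[1:] - pred) ** 2)

# Normalized error: only slightly below 1.0 even for the ideal
# predictor, so raw MSE alone is a misleading benchmark here.
ratio = model_mse / persistence_mse
print(f"model/persistence MSE ratio: {ratio:.4f}")
```

Even the ideal predictor barely beats persistence, which is why the paper stresses benchmarks and testing metrics beyond raw error.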
Feature Selection with Neural Networks
 Behaviormetrika
, 1998
Abstract

Cited by 15 (0 self)
Features gathered from the observation of a phenomenon are not all equally informative: some of them may be noisy, correlated, or irrelevant. Feature selection aims at selecting a feature set that is relevant for a given task. This problem is complex and remains an important issue in many domains. In the field of neural networks, feature selection has been studied for the last ten years, and classical as well as original methods have been employed. This paper is a review of neural network approaches to feature selection. We first briefly introduce baseline statistical methods used in regression and classification. We then describe families of methods which have been developed specifically for neural networks. Representative methods are then compared on different test problems.

Keywords: Feature Selection, Subset Selection, Variable Sensitivity, Sequential Search
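The variable-sensitivity idea the review covers can be sketched minimally with a linear model standing in for a trained network (for a network, the derivative of the output with respect to each input plays the role of the coefficient). The data, the 10% threshold, and the feature counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: only the first two of five inputs are relevant.
n, d = 500, 5
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit a linear model (a baseline statistical method in the review's sense).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Variable sensitivity: mean absolute effect of each input on the output.
sensitivity = np.abs(w) * X.std(axis=0)

# Backward elimination: drop inputs whose sensitivity falls below a
# fraction of the strongest one (the threshold is an assumption).
keep = np.where(sensitivity > 0.1 * sensitivity.max())[0]
print("selected features:", keep.tolist())
```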
Adaptive Regularization in Neural Network Modeling
, 1997
Abstract

Cited by 14 (2 self)
In this paper we address the important problem of optimizing regularization parameters in neural network modeling. The suggested optimization scheme is an extended version of the recently presented algorithm [24]. The idea is to minimize an empirical estimate of the generalization error, such as the cross-validation estimate, with respect to the regularization parameters. This is done by employing a simple iterative gradient descent scheme with virtually no additional programming overhead compared to standard training. Experiments with feedforward neural network models for time series prediction and classification tasks showed the viability and robustness of the algorithm. Moreover, we provide some simple theoretical examples in order to illustrate the potential and limitations of the proposed regularization framework.

1 Introduction

Neural networks are flexible tools for time series processing and pattern recognition. By increasing the number of hidden neurons in a 2-layer architec...
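The scheme described, descending on a validation estimate of generalization error with respect to the regularization parameters, can be sketched with ridge regression standing in for the network. The finite-difference gradient, step size, and data sizes below are illustrative choices, not the algorithm of [24].

```python
import numpy as np

rng = np.random.default_rng(2)

# Ridge regression stands in for the regularized network; the single
# weight-decay parameter lambda is adapted on a validation set.
n, d = 60, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.5 * rng.normal(size=n)
Xt, yt, Xv, yv = X[:40], y[:40], X[40:], y[40:]

def val_error(log_lam):
    """Validation MSE of the ridge solution for lambda = exp(log_lam)."""
    w = np.linalg.solve(Xt.T @ Xt + np.exp(log_lam) * np.eye(d), Xt.T @ yt)
    return np.mean((Xv @ w - yv) ** 2)

# Iterative scheme: finite-difference gradient descent in log(lambda),
# so lambda stays positive automatically.
log_lam, eps, step = 0.0, 1e-4, 0.3
for _ in range(300):
    g = (val_error(log_lam + eps) - val_error(log_lam - eps)) / (2 * eps)
    log_lam -= step * g

print(f"adapted lambda = {np.exp(log_lam):.3f}, "
      f"val MSE = {val_error(log_lam):.3f}")
```

Working in log(lambda) is a common trick for positivity-constrained hyperparameters; the paper's scheme differentiates the validation error analytically rather than by finite differences.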
Automatic model selection in a hybrid perceptron/radial network
 To appear: Special Issue of Information Fusion on Multiple Experts
, 2002
MLPs (monolayer polynomials and multilayer perceptrons) for nonlinear modeling. JMLR, 3:1383–1398 (this issue)
 Journal of Machine Learning Research
, 2003
Abstract

Cited by 12 (0 self)
This paper presents a model selection procedure which stresses the importance of the classic polynomial models as tools for evaluating the complexity of a given modeling problem, and for removing non-significant input variables. If the complexity of the problem makes a neural network necessary, the selection among neural candidates can be performed in two phases. In an additive phase, the most important one, candidate neural networks with an increasing number of hidden neurons are trained. The addition of hidden neurons is stopped when the effect of the round-off errors becomes significant, so that, for instance, confidence intervals cannot be accurately estimated. This phase leads to a set of approved candidate networks. In a subsequent subtractive phase, a selection among the approved networks is performed using statistical Fisher tests. The series of tests starts from a possibly too large unbiased network (the full network), and ends with the smallest unbiased network whose input variables and hidden neurons all have a significant contribution to the regression estimate. This method was successfully tested against the real-world regression problems proposed at the NIPS 2000 Unlabeled Data Supervised Learning Competition; two of them are included here as illustrative examples.
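The Fisher tests of the subtractive phase compare nested models: the F statistic measures how much the residual sum of squares grows when a candidate term is removed. A minimal sketch on a linear toy model (the paper applies the tests to nested neural networks; the data and coefficients here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression: inputs x0 and x1 matter, x2 does not.
n = 100
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

def rss(Xm):
    """Residual sum of squares of the least-squares fit on columns Xm."""
    w, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    return np.sum((y - Xm @ w) ** 2)

def fisher_stat(drop):
    """F statistic for removing one column from the full model."""
    full = rss(X)
    reduced = rss(np.delete(X, drop, axis=1))
    q, dof = 1, n - X.shape[1]          # 1 removed term, residual dof
    return ((reduced - full) / q) / (full / dof)

F_relevant = fisher_stat(0)    # removing a true input: huge F
F_irrelevant = fisher_stat(2)  # removing a spurious input: F near 1
print(f"F(x0) = {F_relevant:.1f}, F(x2) = {F_irrelevant:.2f}")
```

Comparing each statistic against the F(q, dof) critical value decides whether the removed term had a significant contribution.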
Bias and Variance of Validation Methods for Function Approximation Neural Networks Under Conditions of Sparse Data
 IEEE Transactions on Systems, Man, and Cybernetics, Part C
, 1998
Abstract

Cited by 11 (6 self)
Neural networks must be constructed and validated with strong empirical dependence, which is difficult under conditions of sparse data. This paper examines the most common methods of neural network validation, along with several general validation methods from the statistical resampling literature, as applied to function approximation networks with small sample sizes. It is shown that the increase in computation required by the statistical resampling methods produces networks that perform better than those constructed in the traditional manner. The statistical resampling methods also result in lower variance of validation; however, some of the methods are biased in estimating network error.

1. INTRODUCTION

To be beneficial, system models must be validated to assure the users that the model emulates the actual system in the desired manner. This is especially true of empirical models, such as neural network and statistical models, which rely primarily on observed data rather th...
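The variance difference between a single holdout split and a resampling estimator can be reproduced on a toy problem, with linear models and leave-one-out cross-validation standing in for the networks and resampling schemes studied in the paper; all sizes and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_mse(Xt, yt, Xe, ye):
    """Train a linear model on (Xt, yt), return MSE on (Xe, ye)."""
    w, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    return np.mean((Xe @ w - ye) ** 2)

def holdout(X, y):
    """Single 50/50 split: cheap, but a high-variance estimate."""
    return fit_mse(X[:10], y[:10], X[10:], y[10:])

def loo_cv(X, y):
    """Leave-one-out cross-validation: n fits, a steadier estimate."""
    errs = []
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
        errs.append((X[i] @ w - y[i]) ** 2)
    return np.mean(errs)

# Many replications of a sparse-data problem (n = 20, 3 inputs).
ho, cv = [], []
for _ in range(200):
    X = rng.normal(size=(20, 3))
    y = X @ np.array([1.0, -1.0, 0.5]) + 0.5 * rng.normal(size=20)
    ho.append(holdout(X, y))
    cv.append(loo_cv(X, y))

print(f"holdout var = {np.var(ho):.4f}, LOO-CV var = {np.var(cv):.4f}")
```

The resampling estimate costs n fits instead of one, which is exactly the computation-for-variance trade the abstract describes.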
Nonlinear versus Linear Models in Functional Neuroimaging: Learning Curves and Generalization Crossover
, 1997
Abstract

Cited by 10 (6 self)
We introduce the concept of generalization for models of functional neuroactivation, and show how it is affected by the number, N, of neuroimaging scans available. By plotting generalization as a function of N (i.e., a "learning curve") we demonstrate that while simple, linear models may generalize better for small N's, more flexible, low-biased nonlinear models, based on artificial neural networks (ANNs), generalize better for larger N's. We demonstrate that for sets of scans of two simple motor tasks (one set acquired with [15O]water using PET, and the other using fMRI) practical N's exist for which "generalization crossover" occurs. This observation supports the application of highly flexible ANN models to sufficiently large functional activation datasets.

Keywords: Multivariate brain modeling, ill-posed learning, generalization, learning curves.

1 Introduction

Datasets that result from functional activation studies of the living, human brain typically consist of two ...
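The generalization-crossover effect can be reproduced on synthetic data, with a straight line versus a degree-5 polynomial standing in for the linear and ANN models; the target function and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def draw(n):
    """Synthetic data with a smooth nonlinear signal plus noise."""
    x = rng.uniform(-2, 2, n)
    return x, np.tanh(2 * x) + 0.3 * rng.normal(size=n)

def avg_test_mse(degree, n, reps=200):
    """Average test error of a degree-`degree` fit trained on n points."""
    errs = []
    for _ in range(reps):
        x, y = draw(n)
        coef = np.polyfit(x, y, degree)
        xt, yt = draw(1000)
        errs.append(np.mean((np.polyval(coef, xt) - yt) ** 2))
    return float(np.mean(errs))

# One point on each learning curve, below and above the crossover.
results = {n: (avg_test_mse(1, n), avg_test_mse(5, n)) for n in (8, 200)}
for n, (lin, flex) in results.items():
    print(f"N={n:3d}: linear {lin:.3f}, degree-5 {flex:.3f}")
```

At small N the flexible model's variance dominates and the biased linear model wins; at large N the relation reverses, which is the crossover the paper demonstrates on PET and fMRI scans.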
Visualization of Neural Networks Using Saliency Maps
, 1995
Abstract

Cited by 9 (6 self)
The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input field by far exceeds the number of examples. Several levels of approximation are discussed. The saliency maps are applied to medical imaging (PET scans) for identification of paradigm-relevant regions in the human brain.

Keywords: saliency map, model interpretation, ill-posed learning, PCA, SVD, PET.

1. Introduction

Mathematical modeling is of increasing importance in medical informatics. In a biomedical context the aim of neural network modeling is often twofold. Besides using empirical relations established within a given model, there is typically a wish to interpret the model in order to achieve an understanding of the processes underlying and generating the data. This paper presents a new tool for such opening of the neural network "black box". Our method is aimed at n...
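For a differentiable network, one natural saliency measure for input i is the derivative of the output with respect to that input, averaged over the data. A minimal sketch with a hand-built two-hidden-unit network (the weights are chosen so input 2 is provably irrelevant; this is an illustration of the gradient idea, not the paper's specific approximation levels):

```python
import numpy as np

rng = np.random.default_rng(6)

# A tiny 1-hidden-layer network built by hand: the output depends
# strongly on input 0, weakly on input 1, and not at all on input 2.
W1 = np.array([[2.0, 0.5, 0.0],
               [-1.5, 1.0, 0.0]])   # hidden x input
w2 = np.array([1.0, -1.0])          # output weights

def forward(x):
    return w2 @ np.tanh(W1 @ x)

def saliency(x):
    """d(output)/d(input): backpropagation through the tanh layer."""
    h = np.tanh(W1 @ x)
    return (w2 * (1 - h ** 2)) @ W1

# The saliency map: mean absolute input gradient over a sample.
xs = rng.normal(size=(500, 3))
smap = np.mean([np.abs(saliency(x)) for x in xs], axis=0)
print("saliency map:", np.round(smap, 3))
```

In the imaging application each input is a voxel, so the same vector of averaged gradients is displayed as a brain map of paradigm-relevant regions.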
Design and Regularization of Neural Networks: The Optimal Use of a Validation Set
, 1996
Abstract

Cited by 9 (5 self)
In this paper we derive novel algorithms for the estimation of regularization parameters and for the optimization of neural net architectures based on a validation set. Regularization parameters are estimated using an iterative gradient descent scheme. Architecture optimization is performed by approximate combinatorial search among the relevant subsets of an initial neural network architecture, employing a validation-set-based Optimal Brain Damage/Surgeon (OBD/OBS) or a mean field combinatorial optimization approach. Numerical results with linear models and feedforward neural networks demonstrate the viability of the methods.

INTRODUCTION

Neural networks are flexible tools for function approximation, and by expanding the network any relevant target function can be approximated [6]. The associated risk of overfitting on noisy data is of major concern in neural network design [2]. The objective of architecture optimization is to minimize the generalization error. The literature suggests a v...
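The OBD-style ranking behind the architecture search can be sketched on a linear model, where the Hessian of the error is available in closed form: the saliency s_i = 0.5 * H_ii * w_i^2 predicts the error increase from deleting weight i (at a minimum, the gradient term vanishes). The data and coefficients are illustrative, and this shows the training-set version rather than the paper's validation-set variant.

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear model as stand-in: E(w) = ||Xw - y||^2 / n, Hessian = 2 X^T X / n.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5, 0.05, 0.0]) + 0.1 * rng.normal(size=n)

# Train to the error minimum, then take the diagonal of the Hessian
# (OBD's diagonal approximation is exact for this quadratic error).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
H_diag = 2.0 * np.sum(X ** 2, axis=0) / n

# OBD saliency: predicted error increase from zeroing each weight.
saliencies = 0.5 * H_diag * w ** 2
prune = int(np.argmin(saliencies))
print("prune weight", prune, "with saliency", round(float(saliencies[prune]), 5))
```

The lowest-saliency weight (the one fitting the zero coefficient) is deleted first; repeating the rank-and-delete step yields the combinatorial search over subsets of the initial architecture.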