Results 1–10 of 11
Input selection for radial basis function networks by constrained optimization
 Proceedings of the 17th International Conference on Artificial Neural Networks (ICANN 2007)
Abstract

Cited by 4 (3 self)
Abstract. Input selection in nonlinear function approximation is an important and difficult problem. Neural networks provide good generalization in many cases, but their interpretability is usually limited. However, the contributions of the input variables to the prediction of the output would be valuable information in many real-world applications. In this work, an input selection algorithm for Radial Basis Function networks is proposed. The selection of input variables is achieved using a constrained cost function in which each input dimension is weighted, with constraints imposed on the values of the weights. The proposed algorithm solves a log-barrier reformulation of the original optimization problem. The input selection algorithm was applied to both simulated and benchmark data, and the obtained results were compelling.
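The per-dimension weighting idea behind this abstract can be sketched as follows. This is a minimal illustration, not the authors' log-barrier algorithm: the weight vector `w`, the random choice of centers among training points, and the plain least-squares output fit are all assumptions made for the sketch.

```python
import numpy as np

def rbf_design(X, centers, w):
    # Gaussian basis with a per-dimension input weight w[d]:
    #   phi_j(x) = exp(-sum_d w[d] * (x[d] - c[j, d])**2)
    # Driving w[d] toward 0 effectively removes dimension d from the model,
    # which is what makes the weights usable for input selection.
    diff2 = (X[:, None, :] - centers[None, :, :]) ** 2
    return np.exp(-(diff2 * w).sum(axis=-1))

def rbf_fit(X, y, w, n_centers=20, seed=0):
    # Pick centers among the training points, then solve the linear
    # output layer by least squares for the given input weights w.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    coef, *_ = np.linalg.lstsq(rbf_design(X, centers, w), y, rcond=None)
    return centers, coef

def rbf_predict(X, centers, coef, w):
    return rbf_design(X, centers, w) @ coef
```

Comparing a fit whose weight mass sits on a relevant dimension against one whose mass sits on an irrelevant dimension reproduces the effect the constrained cost function exploits: the relevant weighting yields a much lower error.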
Long-Term Prediction of Time Series using NNE-based Projection and OP-ELM
Abstract

Cited by 2 (0 self)
Abstract — This paper proposes a combination of methodologies based on a recent development, called the Extreme Learning Machine (ELM), which drastically decreases the training time of nonlinear models. Variable selection is performed beforehand on the original dataset, using Partial Least Squares (PLS) and a projection based on Nonparametric Noise Estimation (NNE), to ensure proper results from the ELM method. Then, after the network is first created using the original ELM, the most relevant nodes are selected using a Least Angle Regression (LARS) ranking of the nodes and a Leave-One-Out estimation of the performance, leading to an Optimally-Pruned ELM (OP-ELM). Finally, the prediction accuracy of the global methodology is demonstrated using the …
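The core ELM training step mentioned in this abstract — random, untrained hidden weights followed by a single linear least-squares solve for the output layer — can be sketched as follows. This is a minimal single-output version; the layer size, tanh activation, and data are illustrative assumptions, and the LARS ranking / Leave-One-Out pruning that turns ELM into OP-ELM is not shown.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    # Hidden layer: random input weights and biases, never trained.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # Output layer: one linear least-squares solve -- this single
    # step is why ELM training is so fast.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

OP-ELM would additionally rank the `n_hidden` neurons with LARS and keep only the prefix of that ranking that minimizes a Leave-One-Out error estimate.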
Duroc and Iberian Pork Neural Network Classification by Visible and Near Infrared Reflectance Spectroscopy
 Journal of Food Engineering
, 2009
Abstract

Cited by 1 (0 self)
Visible and near infrared reflectance spectroscopy (VIS/NIRS) was used to differentiate between Duroc and Iberian pork in the M. masseter. Samples of Duroc (n = 15) and Iberian (n = 15) pig muscles were scanned in the VIS/NIR region (350–2500 nm) using a portable spectral radiometer. Both mutual information and VIS/NIRS spectra characterization were used to generate a ranking of variables, and the data were then processed by artificial neural networks, establishing 1, 3, or 10 wavelengths as input variables for classifying between the pig breeds. The models correctly classified >70% of all problem assumptions, with a correct classification of >95% for the three-variable assumption using either the mutual information ranking or the VIS/NIRS spectra characterization. These results demonstrate the potential value of the VIS/NIRS technique as an objective and rapid method for the authentication and identification of Duroc and Iberian pork.
Variable Selection in a GPU Cluster Using Delta Test
Abstract

Cited by 1 (0 self)
Abstract. The work presented in this paper is an adaptation of a Genetic Algorithm (GA) to perform variable selection in a heterogeneous cluster whose nodes are themselves clusters of GPUs. Due to this heterogeneity, several load-balancing mechanisms are discussed, as well as the optimization of the fitness function to take advantage of the available GPUs. The algorithm is compared with previous parallel implementations, analysing the advantages and disadvantages of the approach and showing that, for large data sets, the proposed approach is the only one that can provide a solution.
Applying Mutual Information for Prototype or Instance Selection in Regression Problems
Abstract

Cited by 1 (0 self)
Abstract. The problem of selecting the patterns to be learned by a model is usually not considered at the time of designing the model itself, but rather treated as a preprocessing step. Information theory provides a robust theoretical framework for performing input variable selection thanks to the concept of mutual information. The computation of mutual information for regression tasks has recently been proposed, so this paper presents a new application of the concept: not to select variables, but to decide which prototypes should belong to the training data set in regression problems. The proposed methodology consists in deciding whether a prototype should belong to the training set, using as criterion the estimation of the mutual information between the variables. The novelty of the approach is its focus on prototype selection for regression problems instead of classification, as the majority of the literature deals only with the latter. Another element that distinguishes this work from others is that it is proposed not as an outlier identifier but as an algorithm that determines the best subset of input vectors at the time of building a model to approximate the function. As the experiments section shows, the new method is able to identify a high percentage of the real data set when applied to highly distorted data sets.
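The criterion described in this abstract can be sketched roughly as follows. This is not the authors' method: regression-oriented MI estimators in this literature are typically nearest-neighbour based, and here a crude histogram estimator stands in for them; the leave-one-out scoring rule is likewise one possible reading of "using as criterion the estimation of the mutual information", assumed for illustration.

```python
import numpy as np

def mi_hist(x, y, bins=8):
    # Crude histogram estimate of I(x; y) in nats -- a stand-in for the
    # nearest-neighbour MI estimators used for regression tasks.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def prototype_scores(x, y, bins=8):
    # Leave-one-out criterion: how much does removing sample i change
    # the estimated input-output dependency?  Samples whose removal
    # *raises* the MI estimate are the least consistent with the rest,
    # so high-scoring prototypes are candidates for exclusion.
    base = mi_hist(x, y, bins)
    return np.array([mi_hist(np.delete(x, i), np.delete(y, i), bins) - base
                     for i in range(len(x))])
```

Thresholding `prototype_scores` then yields the subset of prototypes kept for training, which is the selection step the abstract describes.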
New Methodologies Based on Delta Test for Variable Selection in Regression Problems
Abstract

Cited by 1 (0 self)
Abstract — The problem of selecting an adequate set of variables from a given data set of a sampled function becomes crucial at the time of designing the model that will approximate it. Several approaches have been presented in the literature, although recent studies have shown that the Delta Test is a powerful tool to determine whether a subset of variables is correct. This paper presents new methodologies based on the Delta Test, such as Tabu Search, Genetic Algorithms, and their hybridization, to determine a subset of variables that is representative of a function. The paper also considers the scaling problem, where a relevance value is assigned to each variable. The new algorithms were adapted to run on parallel architectures so that better performance could be obtained in a small amount of time, presenting great robustness and scalability.
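The Delta Test criterion underlying this abstract is simple to state: it estimates the noise variance of the output from first nearest neighbours in input space, and a lower value indicates a better variable subset. A minimal brute-force sketch follows; the exhaustive subset loop is only workable for a handful of variables and merely stands in for the Tabu Search / Genetic Algorithm machinery of the paper.

```python
from itertools import combinations
import numpy as np

def delta_test(X, y):
    # delta = 1/(2N) * sum_i (y[nn(i)] - y[i])**2, where nn(i) is the
    # nearest neighbour of x_i; this converges to the output noise
    # variance when the selected inputs explain y.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    return float(((y[nn] - y) ** 2).mean() / 2.0)

def best_subset(X, y):
    # Exhaustive search over non-empty variable subsets (brute force;
    # the paper's Tabu/GA search replaces this loop for large inputs).
    dims = range(X.shape[1])
    subsets = (s for r in dims for s in combinations(dims, r + 1))
    return min(subsets, key=lambda s: delta_test(X[:, list(s)], y))
```

The scaling variant mentioned in the abstract would replace the binary subset with a relevance weight per dimension inside the distance computation.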
Instance or Prototype Selection for Function Approximation using Mutual Information
Abstract
The problem of selecting the patterns to be learned by a model is usually not considered at the time of designing the model itself, but rather treated as a preprocessing step. Information theory provides a robust theoretical framework for performing input variable selection thanks to the concept of mutual information. The computation of mutual information for regression tasks has recently been proposed, providing good results in feature selection. This paper presents a new application of the concept: not to select variables, but to decide which prototypes should belong to the training data set in regression problems. The proposed methodology consists in deciding whether a prototype should belong to the training set, using as criterion the estimation of the mutual information between the variables. The novelty of the approach is its focus on prototype selection for regression problems instead of classification, as the majority of the literature deals only with the latter. Another element that distinguishes this work from others is that it is proposed not as an outlier identifier but as an algorithm that determines the best subset of input vectors at the time of building a model to approximate the function. As the experiments section shows, the new method is able to identify a high percentage of the real data set when applied to highly distorted data sets.
*Corresponding author
Abstract
Abstract: The problem of selecting an adequate set of variables from a given data set of a sampled function becomes crucial at the time of designing the model that will approximate it. Several approaches have been presented in the literature, although recent studies have shown that the Delta Test is a powerful tool to determine whether a subset of variables is correct. This paper presents new methodologies based on the Delta Test, such as Tabu Search, Genetic Algorithms, and their hybridization, to determine a subset of variables that is representative of a function. The paper also considers the scaling problem, where a relevance value is assigned to each variable. The new algorithms were adapted to run on parallel architectures so that better performance could be obtained in a small amount of time, presenting great robustness and scalability.
New method for instance or prototype selection using mutual information in time series prediction
 journal homepage: www.elsevier.com/locate/neucom
Towards Using Neural Networks to Perform Object-Oriented Function Approximation
Abstract
Abstract — Many computational methods are based on the manipulation of entities with internal structure, such as objects, records, or data structures. Most conventional approaches based on neural networks have trouble dealing with such structured entities. The algorithms presented in this paper represent a novel approach to neural-symbolic integration that allows symbolic data in the form of objects to be translated to a scalar representation that can then be used by connectionist systems. We present the implementation of two translation algorithms that aid in performing object-oriented function approximation. We argue that objects provide an abstract representation of data that is well suited for the input and output of neural networks, as well as other statistical learning techniques. By examining the results of a simple sorting example, we illustrate the efficacy of these techniques.