Results 1–10 of 11
Evaluation Of Gaussian Processes And Other Methods For Non-Linear Regression
, 1996
Abstract

Cited by 140 (16 self)
This thesis develops two Bayesian learning methods relying on Gaussian processes and a rigorous statistical approach for evaluating such methods. In these experimental designs, the sources of uncertainty in the estimated generalisation performance due to variation in both training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment. Two new nonparametric Bayesian learning methods relying on Gaussian process priors over functions are developed. These priors are controlled by hyperparameters which set the characteristic length scale for each input dimension. In the simplest method, these parameters are fit from the data using optimization. In the second, fully Bayesian method, a Markov chain Monte Carlo technique is used to integrate over the hyperparameters. One advantage of these Gaussian process methods is that the priors and hyperparameters of the trained models are easy to interpret.
Biogeography-based optimization of neuro-fuzzy system parameters for diagnosis of cardiac disease
 in: Genetic and Evolutionary Computation Conference
, 2010
Abstract

Cited by 5 (2 self)
Cardiomyopathy refers to diseases in which the heart muscle becomes enlarged, thick, or rigid. These changes affect the electrical stability of the myocardial cells, which in turn predisposes the heart to failure or arrhythmias. Cardiomyopathy in its two common forms, dilated and hypertrophic, implies enlargement of the atria; therefore, we investigate its diagnosis through P wave features. In particular, we design a neuro-fuzzy network trained with a new evolutionary algorithm called biogeography-based optimization (BBO). The neuro-fuzzy network recognizes and classifies P wave features for the diagnosis of cardiomyopathy. In addition, we incorporate opposition-based learning in the BBO algorithm for improved training. First we develop a neuro-fuzzy model structure to diagnose cardiomyopathy using P wave features. Next we train the network using BBO and a clinical database of ECG signals. Preliminary results indicate that cardiomyopathy can be reliably diagnosed with these techniques.
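The migration mechanics behind BBO can be illustrated with a minimal, generic sketch (not the authors' implementation; `bbo_minimize` and all parameters are illustrative). Habitats are candidate solutions; good habitats get high emigration rates and poor habitats get high immigration rates, so poor solutions tend to import decision variables from good ones:

```python
import numpy as np

def bbo_minimize(f, dim, pop=20, iters=50, bounds=(-5.0, 5.0), seed=None):
    """Minimal biogeography-based optimization sketch (illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, dim))      # habitats = candidate solutions
    cost = np.array([f(x) for x in P])
    for _ in range(iters):
        order = np.argsort(cost)                  # best habitat first
        rank = np.empty(pop)
        rank[order] = np.arange(pop)
        mu = 1.0 - rank / (pop - 1)               # emigration rate: high for good habitats
        lam = rank / (pop - 1)                    # immigration rate: high for poor habitats
        newP = P.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-wheel selection of an emigrating habitat
                    j = rng.choice(pop, p=mu / mu.sum())
                    newP[i, d] = P[j, d]
            if rng.random() < 0.05:               # light mutation
                newP[i, rng.integers(dim)] = rng.uniform(lo, hi)
        newP[order[0]] = P[order[0]]              # elitism: the best habitat survives
        P = newP
        cost = np.array([f(x) for x in P])
    best = np.argmin(cost)
    return P[best], cost[best]
```

Because of the elitism step, the best cost in the population never increases from one generation to the next. The opposition-based learning extension mentioned in the abstract is not shown here.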
Safe-Level-SMOTE: Safe-Level-Synthetic Minority Over-Sampling TEchnique for Handling the Class Imbalanced Problem
Abstract

Cited by 5 (0 self)
Abstract. The class imbalanced problem occurs in various disciplines when one of the target classes has a tiny number of instances compared to the other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is an over-sampling technique that remedies this situation: it generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique, called Safe-Level-SMOTE, carefully samples minority instances along the same line with a different weight, called the safe level. The safe level is computed using nearest-neighbour minority instances. By synthesizing more minority instances around larger safe levels, we achieve better accuracy performance than SMOTE and Borderline-SMOTE.
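The interpolation step that SMOTE performs, and that Safe-Level-SMOTE then reweights, can be sketched as follows. This is a minimal illustration assuming numeric features; `smote_sample` is a hypothetical name, and the safe-level weighting itself is not shown:

```python
import numpy as np

def smote_sample(minority, k=5, n_new=10, rng=None):
    """Generate synthetic minority samples by interpolating between a
    minority instance and one of its k nearest minority neighbours
    (the basic SMOTE step, illustrative)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # distances from X[i] to every minority instance
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # uniform in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)
```

Safe-Level-SMOTE's refinement is to bias `gap` toward the safer end of the segment, using the count of minority instances among each endpoint's neighbours.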
Associative Neural Network
, 2002
Abstract

Cited by 2 (2 self)
An associative neural network (ASNN) is a combination of an ensemble of feed-forward neural networks and the K-nearest neighbor technique. The introduced network uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the nearest neighbor technique and provides an improved prediction by bias correction of the neural network ensemble, both for function approximation and classification. The proposed method corrects the bias of a global model for a considered data case by analyzing the biases of its nearest neighbors determined in the space of calculated models. An associative neural network has a memory that can coincide with the training set. If new data become available, the network can provide a reasonable approximation of such data without a need to retrain the neural network ensemble. Applications of ASNN for prediction of the lipophilicity of chemical compounds and classification of the UCI letter and satellite data sets are presented. The developed algorithm is available online at http://www.virtuallaboratory.org/lab/asnn.
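The bias-correction idea can be sketched as follows: given the outputs of an already-trained ensemble, the prediction for a query case is shifted by the mean residual of its nearest neighbours, with "nearest" measured by correlation between ensemble responses (the space of models) rather than in input space. This is a simplified illustration, not the published ASNN algorithm, and `asnn_predict` is a hypothetical name:

```python
import numpy as np

def asnn_predict(train_preds, train_y, query_preds, k=3):
    """ASNN-style correction sketch.
    train_preds: (n_train, n_models) ensemble outputs on training cases
    train_y:     (n_train,) observed targets
    query_preds: (n_query, n_models) ensemble outputs on query cases
    """
    train_mean = train_preds.mean(axis=1)
    residuals = train_y - train_mean          # ensemble bias per training case
    out = []
    for q in query_preds:
        # 1 - Pearson correlation as a distance in the space of models
        r = np.array([np.corrcoef(q, t)[0, 1] for t in train_preds])
        nn = np.argsort(1.0 - r)[:k]
        # correct the global (ensemble-mean) prediction by the local bias
        out.append(q.mean() + residuals[nn].mean())
    return np.array(out)
```

Note how the "memory" the abstract mentions shows up here: the training predictions and targets are kept and consulted at query time, so new cases can be absorbed without retraining the ensemble.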
Combinatorial QSAR Modeling of Chemical Toxicants Tested against
, 2007
Abstract

Cited by 2 (0 self)
Selecting the most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups, which used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I), as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress), were used as two independent validation sets to assess the external predictive power of the individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal
Benchmarking of linear and nonlinear approaches for quantitative structure-property relationship studies of metal complexation with ionophores
Abstract

Cited by 2 (2 self)
property relationships (QSPR) of stability constants log K1 for the 1:1 (M:L) and log β2 for the 1:2 complexes of metal cations Ag+ and Eu3+ with diverse sets of organic molecules in water at 298 K and ionic strength 0.1 M. The methods were tested on three types of descriptors: molecular descriptors including E-state values, counts of atoms determined for E-state atom types, and substructural molecular fragments (SMF). Comparison of the models was performed using a 5-fold external cross-validation procedure. Robust statistical tests (bootstrap and Kolmogorov-Smirnov statistics) were employed to evaluate the significance of the calculated models. The Wilcoxon signed-rank test was used to compare the performance of the methods. Individual structure-complexation property models obtained with nonlinear methods demonstrated significantly better performance than the models built using multilinear regression analysis (MLRA). However, averaging several MLRA models based on SMF descriptors provided predictions as good as those of the most efficient nonlinear techniques. Support Vector Machines and Associative Neural Networks contributed the largest number of significant models. Models based on fragments (SMF descriptors and E-state counts) had higher prediction ability than those based on E-state indices. The use of SMF descriptors and E-state counts provided
Associative Neural Network
 Neur. Proc. Lett
, 2002
Abstract
An associative neural network (ASNN) is a combination of an ensemble of feed-forward neural networks and the K-nearest neighbor technique. The introduced network uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the nearest neighbor technique and provides an improved prediction by bias correction of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network further improves its predictive ability and can often provide a reasonable approximation of the unknown function without a need to retrain the neural network ensemble.
Systems Analysis Modelling Simulation
Abstract
This article presents the Robust Polynomial Neural Networks, a self-organizing multilayered iterative GMDH-type algorithm that provides robust linear and nonlinear polynomial regression models. The accuracy of the algorithm is compared to traditional GMDH and multiple linear regression analysis using artificial and real data sets in quantitative structure-activity relationship studies. The calculated data show that the proposed method is able to select nonlinear models characterized by high prediction ability; it is insensitive to outliers and irrelevant variables and is thus of considerable interest in quantitative structure-activity relationship studies.
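The core of a GMDH-type layer is to fit a quadratic polynomial to every pair of inputs by least squares, rank the candidate models by validation error, and keep the best outputs as inputs to the next layer. A simplified sketch of one such layer (not the Robust Polynomial Neural Networks algorithm itself; `gmdh_layer` is a hypothetical name):

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X, y, Xv, yv, keep=3):
    """One layer of a GMDH-type polynomial network (illustrative).
    Fits y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2 for every
    input pair, ranks candidates by validation MSE, returns the `keep`
    best outputs (which become the next layer's inputs)."""
    def design(xi, xj):
        return np.column_stack(
            [np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    candidates = []
    for i, j in combinations(range(X.shape[1]), 2):
        A = design(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
        err = np.mean((design(Xv[:, i], Xv[:, j]) @ coef - yv) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])                # best candidates first
    best = candidates[:keep]
    new_X = np.column_stack([design(X[:, i], X[:, j]) @ c for _, i, j, c in best])
    new_Xv = np.column_stack([design(Xv[:, i], Xv[:, j]) @ c for _, i, j, c in best])
    return new_X, new_Xv, [e for e, *_ in best]
```

Stacking such layers until the validation error stops improving gives the self-organizing, iterative structure the abstract describes; the "robust" variant would replace the least-squares fit with an outlier-resistant estimator.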
other Methods for Non-Linear Regression
, 1997
Abstract
This thesis develops two Bayesian learning methods relying on Gaussian processes and a rigorous statistical approach for evaluating such methods. In these experimental designs, the sources of uncertainty in the estimated generalisation performance due to variation in both training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment. Two new nonparametric Bayesian learning methods relying on Gaussian process priors over functions are developed. These priors are controlled by hyperparameters which set the characteristic length scale for each input dimension. In the simplest method, these parameters are fit from the data using optimization. In the second, fully Bayesian method, a Markov chain Monte Carlo technique is used to integrate over the hyperparameters. One advantage of these Gaussian process methods is that the priors and hyperparameters of the trained models are easy to interpret.
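The prior structure described above, with one characteristic length scale per input dimension (automatic relevance determination), can be sketched as follows. This is a minimal illustration with fixed hyperparameters; the function names are illustrative, and neither the optimization nor the MCMC treatment of the hyperparameters is shown:

```python
import numpy as np

def ard_kernel(A, B, scales, signal=1.0):
    """Squared-exponential covariance with one length scale per input
    dimension (ARD): dimensions with large scales barely influence the
    covariance, so their relevance is effectively switched off."""
    d = (A[:, None, :] - B[None, :, :]) / scales   # scaled pairwise differences
    return signal**2 * np.exp(-0.5 * (d**2).sum(-1))

def gp_posterior_mean(X, y, Xs, scales, noise=1e-2):
    """Posterior mean of a zero-mean GP at test points Xs."""
    K = ard_kernel(X, X, scales) + noise * np.eye(len(X))
    Ks = ard_kernel(Xs, X, scales)
    return Ks @ np.linalg.solve(K, y)
```

Near a training point the posterior mean follows the observed target; far away (relative to the length scale) it falls back to the zero prior mean, which is what makes the fitted length scales directly interpretable.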