Results 1–10 of 12
Computational Intelligence Methods for Rule-Based Data Understanding
 Proceedings of the IEEE
, 2004
Abstract

Cited by 31 (3 self)
This paper is focused on the extraction and use of logical rules for data understanding. All aspects of rule generation, optimization, and application are described, including the problem of finding good symbolic descriptors for continuous data, trade-offs between accuracy and simplicity at the rule-extraction stage, and trade-offs between rejection and error level at the rule-optimization stage. Stability of rule-based description, calculation of probabilities from rules, and other related issues are also discussed. Major approaches to the extraction of logical rules based on neural networks, decision trees, machine learning, and statistical methods are introduced. Optimization and application issues for sets of logical rules are described. Applications of such methods to benchmark and real-life problems are reported and illustrated with simple logical rules for many datasets. Challenges and new directions for research are outlined.
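As a toy instance of one problem the survey treats, finding symbolic descriptors (thresholds) for continuous data, the sketch below exhaustively learns the single most accurate threshold rule from labeled data. The `one_rule` helper is an illustrative assumption, not a method from the paper:

```python
import numpy as np

def one_rule(X, y):
    """Exhaustively search for the single most accurate threshold rule
    of the form 'feature f > t' or 'feature f <= t' predicting class 1.
    A hypothetical, minimal stand-in for symbolic-rule extraction."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for op, pred in ((">", X[:, f] > t), ("<=", X[:, f] <= t)):
                acc = float(np.mean(pred.astype(int) == y))
                if best is None or acc > best[3]:
                    best = (f, op, float(t), acc)
    return best  # (feature index, operator, threshold, training accuracy)
```

On cleanly separable data the scan recovers a separating threshold; the systems surveyed add pruning, rejection thresholds, and probability estimates on top of such raw rules.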
Quantum Associative Memory with Exponential Capacity
 Proceedings of the International Joint Conference on Neural Networks
, 1998
Abstract

Cited by 8 (4 self)
Quantum computation uses microscopic quantum-level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts, by taking advantage of quantum parallelism. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper covers the necessary high-level quantum mechanical ideas and introduces a simple quantum associative memory. Further, it provides discussion, empirical results, and directions for future work.
Cross Validation and MLP Architecture Selection
 In Proceedings of the International Joint Conference on Neural Networks
, 1999
Abstract

Cited by 7 (4 self)
The performance of cross-validation (CV) based MLP architecture selection is examined using 14 real-world problem domains. When testing many different network architectures, the results show that CV is only slightly more likely than random to select the optimal network architecture, and that the strategy of using the simplest available network architecture performs better than CV in this case. Experimental evidence suggests several reasons for the poor performance of CV. In addition, three general strategies which lead to a significant increase in the performance of CV are proposed. While this paper focuses on using CV to select the optimal MLP architecture, the strategies are also applicable when CV is used to select between several different learning models, whether the models are neural networks, decision trees, or other types of learning algorithms. When using these strategies, the average generalization performance of the network architecture which CV selects is significantly better than the performance of several other well-known machine learning algorithms on the data sets tested.
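The selection procedure being evaluated can be sketched generically. In this minimal sketch (with hypothetical helper names, and with k-NN models of varying neighbor count standing in for the MLP architectures compared in the paper), k-fold CV scores each candidate and the best mean score wins:

```python
import numpy as np

def kfold_cv_score(fit, X, y, k=5, seed=0):
    """Mean validation accuracy of a fit(X, y) -> predict(Xq) function
    over k shuffled folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = fit(X[trn], y[trn])
        scores.append(np.mean(predict(X[val]) == y[val]))
    return float(np.mean(scores))

def knn_fit(n_neighbors):
    """Build a k-NN 'candidate model'; different n_neighbors values stand
    in here for the different MLP architectures compared in the paper."""
    def fit(X, y):
        def predict(Xq):
            d = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            nn = np.argsort(d, axis=1)[:, :n_neighbors]
            return np.array([np.bincount(y[row]).argmax() for row in nn])
        return predict
    return fit

# CV-based selection: score every candidate, keep the best mean fold score.
candidates = {k: knn_fit(k) for k in (1, 3, 5)}
```

Picking `max(candidates, key=lambda c: kfold_cv_score(candidates[c], X, y))` is the selection step the paper examines; its finding is that on real data this winner is often barely better than a random pick among the candidates.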
Search-based Algorithms for Multilayer Perceptrons
, 2005
Abstract

Cited by 3 (1 self)
Algorithms based on systematic search techniques can be successfully applied to multilayer perceptron (MLP) training and to logical rule extraction from data using MLP networks. The proposed solutions are easier to implement and frequently outperform gradient-based optimization algorithms. Search-based techniques, popular in artificial intelligence and almost completely neglected in neural networks, can be the basis for MLP network training algorithms. There are plenty of well-known search algorithms; however, since they are not suitable for MLP training, new algorithms dedicated to this task must be developed. Search algorithms applied to MLP networks change network parameters (weights and biases) and check the influence of the changes on the error function. The MLP networks considered in this thesis are used for data classification and logical rule-based understanding of the data. The proposed solutions in many cases outperform gradient-based backpropagation algorithms. The thesis is organized in three parts. The first part of the thesis concentrates on a better understanding of MLP properties.
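The core loop the abstract describes (change a parameter, check the effect on the error function, keep improvements) can be sketched as follows; the greedy single-parameter search and the tiny tanh network are assumptions for illustration, not the thesis's actual algorithms:

```python
import numpy as np

def mse(w, forward, X, y):
    """Error function: mean squared error of the network output."""
    return float(np.mean((forward(w, X) - y) ** 2))

def search_train(forward, w, X, y, step=0.1, iters=200, seed=0):
    """Greedy weight search: perturb one randomly chosen parameter and
    keep the change only if the error decreases (a minimal sketch of
    search-based training; no gradients are used)."""
    rng = np.random.default_rng(seed)
    err = mse(w, forward, X, y)
    for _ in range(iters):
        i = rng.integers(len(w))                  # pick a parameter
        delta = step * rng.choice([-1.0, 1.0])
        w[i] += delta
        new_err = mse(w, forward, X, y)
        if new_err < err:
            err = new_err                          # keep the improvement
        else:
            w[i] -= delta                          # revert the change
    return w, err

def mlp_forward(w, X, hidden=4):
    """One-hidden-layer tanh MLP; all weights packed into one vector w."""
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    b1 = w[d * hidden:d * hidden + hidden]
    W2 = w[d * hidden + hidden:d * hidden + 2 * hidden]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2
```

By construction the loop never accepts a step that worsens the error, which is the basic invariant search-based trainers exploit.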
Ventura, Training a Quantum Neural Network
 http://books.nips.cc/papers/files/nips16/NIPS 2003_ET05.pdf
Abstract

Cited by 2 (0 self)
Most proposals for quantum neural networks have skipped over the problem of how to train the networks. The mechanics of quantum computing are different enough from classical computing that the issue of training should be treated in detail. We propose a simple quantum neural network and a training method for it. It can be shown that this algorithm works in quantum systems. Results on several real-world data sets show that this algorithm can train the proposed quantum neural networks, and that it has some advantages over classical learning algorithms.
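The abstract does not specify the network or the training rule, so the following is only a loosely analogous classical simulation, not the paper's method: a single simulated qubit whose rotation angle encodes the input, trained by numeric gradient descent on the measurement probability. All names and details here are hypothetical:

```python
import numpy as np

def qubit_neuron(theta, x):
    """Classically simulated toy 'quantum neuron': the input scales a
    qubit rotation angle, and the output is P(measuring |1>) after
    applying Ry(theta[0]*x + theta[1]) to |0>. Purely illustrative."""
    angle = theta[0] * x + theta[1]
    # state after Ry(angle) on |0> is [cos(angle/2), sin(angle/2)]
    return float(np.sin(angle / 2.0) ** 2)

def train(theta, X, y, lr=0.1, epochs=300, eps=1e-4):
    """Tune the rotation parameters by central-difference gradient
    descent on the mean squared error of the measurement probability."""
    th = np.array(theta, dtype=float)
    for _ in range(epochs):
        for j in range(len(th)):
            tp, tm = th.copy(), th.copy()
            tp[j] += eps
            tm[j] -= eps
            lp = np.mean([(qubit_neuron(tp, x) - t) ** 2 for x, t in zip(X, y)])
            lm = np.mean([(qubit_neuron(tm, x) - t) ** 2 for x, t in zip(X, y)])
            th[j] -= lr * (lp - lm) / (2 * eps)
    return th
```

On a true quantum device the probability would come from repeated measurements rather than a closed-form sine, which is exactly why the paper argues training deserves separate treatment.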
The little neuron that could
 Proceedings of the International Joint Conference on Neural Networks
, 1999
Abstract

Cited by 2 (1 self)
SLPs (single-layer perceptrons) often exhibit reasonable generalization performance on many problems of interest. However, due to the well-known limitations of SLPs, very little effort has been made to improve their performance. This paper proposes a method for improving the performance of SLPs called "wagging" (weight averaging). This method involves training several different SLPs on the same training data, and then averaging their weights to obtain a single SLP. The performance of the wagged SLP is compared with other more complex learning algorithms (bp, c4.5, ib1, MML, etc.) on 15 data sets from real-world problem domains. Surprisingly, the wagged SLP has better average generalization performance than any of the other learning algorithms on the problems tested. This result is explained and analyzed. The analysis includes looking at the performance characteristics of the standard delta-rule training algorithm for SLPs and the correlation between training and test set scores as training progresses.
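The method itself is simple enough to sketch directly: train several SLPs on the same data from different random initializations, then average their weight vectors. The logistic unit and the hyperparameters below are assumptions for illustration:

```python
import numpy as np

def delta_rule_train(X, y, epochs=20, lr=0.1, seed=0):
    """Train one single-layer perceptron (logistic unit) with the
    standard online delta rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.1, X.shape[1] + 1)      # weights + bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            out = 1.0 / (1.0 + np.exp(-xi @ w))
            w += lr * (yi - out) * xi            # delta rule update
    return w

def wag(X, y, n=10):
    """'Wagging': average the weights of n independently trained SLPs."""
    ws = [delta_rule_train(X, y, seed=s) for s in range(n)]
    return np.mean(ws, axis=0)

def predict(w, X):
    """Classify with the (possibly averaged) SLP weight vector."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)
```

Because each SLP sees the same data, averaging mostly cancels initialization noise rather than combining diverse hypotheses, which is part of what the paper's analysis digs into.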
A Global k-means Approach for Autonomous Cluster Initialization of Probabilistic Neural Network
, 2007
Abstract

Cited by 1 (0 self)
This paper focuses on the statistically based Probabilistic Neural Network (PNN) for pattern classification problems, with Expectation-Maximization (EM) chosen as the training algorithm. This brings about the problem of random initialization, meaning the user has to predefine the number of clusters through trial and error. Global k-means is used to solve this and to provide a deterministic number of clusters using a selection criterion. On top of that, Fast Global k-means was tested as a substitute for Global k-means, to reduce the computational time taken. Tests were done on both homoscedastic and heteroscedastic PNNs using benchmark medical datasets and also vibration data obtained from a U.S. Navy CH-46E helicopter aft gearbox (Westland). Povzetek [translated from Slovenian]: A neural network method is described.
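Global k-means, the initialization scheme the paper adopts, builds the k-cluster solution incrementally: starting from the optimal single center, it tries every data point as the candidate next center and keeps the best resulting k-means solution. A minimal sketch, assuming standard Lloyd iterations:

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Standard Lloyd iterations from the given initial centers;
    returns final centers and total squared reconstruction error."""
    c = centers.copy()
    for _ in range(iters):
        d = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        new = np.array([X[lab == j].mean(0) if np.any(lab == j) else c[j]
                        for j in range(len(c))])
        if np.allclose(new, c):
            break
        c = new
    err = float(((X - c[lab]) ** 2).sum())
    return c, err

def global_kmeans(X, K):
    """Global k-means: grow from 1 to K clusters, trying every data
    point as the candidate new center at each stage and keeping the
    lowest-error solution. Deterministic, unlike random restarts."""
    centers = X.mean(0, keepdims=True)          # optimal single center
    for _ in range(2, K + 1):
        best = None
        for x in X:                              # each point as new center
            c, err = kmeans(X, np.vstack([centers, x]))
            if best is None or err < best[1]:
                best = (c, err)
        centers = best[0]
    return centers
```

The cost is one full k-means run per data point per stage, which is exactly what the Fast Global k-means variant mentioned in the abstract is designed to cut down.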
Victor José de Almeida e Sousa Lobo, Ship Noise Classification
Abstract
Although a PhD program is supposed to be a personal accomplishment, I feel that this one was really a team effort of many people. I would like to express my great gratitude, admiration, and friendship towards my supervisor, Prof. Dr. Fernando Moura Pires. He was extremely patient, spending long hours discussing and working with me, sometimes showing more faith than myself in the success of the project. I am also in debt towards my co-supervisor Prof. Dr. Roman Swiniarski. Thanks to him I spent several months at San Diego State University, opening my eyes to a New World that changed me forever. His attention to detail and the many discussions and corrections of the manuscript had an enormous impact on the final text of the thesis. I must also give a very special acknowledgement to Commander Paulo Mónica de Oliveira and Eng. Nuno Bandeira, who had a very direct impact on my thesis and could be considered my unofficial co-supervisors. Having done his PhD on signal processing, Commander Mónica de Oliveira helped me with all signal processing aspects of my work, and being head of department at the Naval Academy gave me all the support possible. More than that, his advice as a very dear friend changed the way I work and think about many things. Eng. Nuno Bandeira is, besides the
Training a Quantum Neural Network
 http://books.nips.cc/papers/files/nips16/NIPS 2003_ET05.pdf
, 2003
Abstract
Most proposals for quantum neural networks have skipped over the problem of how to train the networks. The mechanics of quantum computing are different enough from classical computing that the issue of training should be treated in detail. We propose a simple quantum neural network and a training method for it. It can be shown that this algorithm works in quantum systems. Results on several real-world data sets show that this algorithm can train the proposed quantum neural networks, and that it has some advantages over classical learning algorithms.
Discrete quasi-gradient features weighting algorithm
Abstract
A new method of feature weighting, useful also for feature extraction, is described. It is efficient and gives accurate results. The weighting algorithm may be used with any kind of learning algorithm. The weighting algorithm with a k-nearest-neighbors model was used to estimate the best feature base for a given distance measure. Results obtained with this algorithm clearly show its superior performance in several benchmark tests.
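The abstract leaves the update rule unspecified; a plausible minimal reading (assumed here, not taken from the paper) is a discrete search over per-feature scale factors, keeping a change only when leave-one-out 1-NN accuracy under the weighted distance improves:

```python
import numpy as np

def loo_1nn_acc(X, y, w):
    """Leave-one-out 1-NN accuracy under per-feature weights w."""
    Xw = X * w
    d = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                  # exclude each point itself
    return float(np.mean(y[d.argmin(1)] == y))

def weight_search(X, y, steps=(0.5, 2.0), iters=20):
    """Discrete quasi-gradient weighting sketch: scale one feature's
    weight by a fixed factor and keep the change only if leave-one-out
    accuracy strictly improves; stop when a full sweep finds nothing."""
    w = np.ones(X.shape[1])
    acc = loo_1nn_acc(X, y, w)
    for _ in range(iters):
        improved = False
        for f in range(X.shape[1]):
            for s in steps:
                cand = w.copy()
                cand[f] *= s
                a = loo_1nn_acc(X, y, cand)
                if a > acc:
                    w, acc, improved = cand, a, True
        if not improved:
            break
    return w, acc
```

A weight driven repeatedly toward zero effectively removes its feature, which is how such weighting doubles as feature extraction.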