Results 1–10 of 24
Neural networks for classification: a survey
 IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
, 2000
"... Abstract—Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability esti ..."
Abstract

Cited by 45 (0 self)
Abstract—Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, the learning and generalization tradeoff in classification, feature variable selection, and the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and to stimulate further research interest and effort in the identified topics. Index Terms—Bayesian classifier, classification, ensemble methods, feature variable selection, learning and generalization, misclassification costs, neural networks.
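Two of the surveyed issues, posterior probability estimation and misclassification costs, combine naturally: once a network's outputs approximate class posteriors, the decision rule should minimize expected cost rather than simply take the largest posterior. A minimal sketch (the posterior values and cost matrix are invented for illustration):

```python
import numpy as np

# Posterior estimates for two classes, e.g. softmax outputs of a trained
# network (values invented for illustration).
posteriors = np.array([0.6, 0.4])

# cost[i, j] = cost of deciding class j when the true class is i
# (an asymmetric, invented cost matrix: missing class 1 is expensive).
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])

# Expected cost of each possible decision, given the posteriors.
expected_cost = posteriors @ cost

# Minimum-expected-cost decision: class 1 wins despite its lower posterior.
decision = int(np.argmin(expected_cost))
```

With a symmetric 0-1 cost matrix this reduces to the usual maximum-posterior (Bayes) rule; the asymmetry is what moves the decision boundary.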
Parallel consensual neural networks
, 1997
"... Abstract — A new type of a neuralnetwork architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage ne ..."
Abstract

Cited by 37 (4 self)
Abstract—A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times, and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data. Index Terms—Consensus theory, wavelet packets, accuracy, classification, probability density estimation, statistical pattern
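The weighted consensual combination the abstract describes can be sketched as follows. This illustrates the general idea only, not the paper's optimized weighting; the stage-network outputs and weights are invented:

```python
import numpy as np

def consensual_decision(stage_outputs, weights):
    """Weighted consensus over stage-classifier responses.

    stage_outputs: (n_stages, n_classes) class-membership responses.
    weights: (n_stages,) nonnegative reliabilities (normalized below).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    combined = w @ np.asarray(stage_outputs, dtype=float)
    return combined, int(np.argmax(combined))

# Three hypothetical stage networks, each fed a different transform of the
# same input; outputs and weights are invented for illustration.
outs = [[0.7, 0.3],
        [0.4, 0.6],
        [0.8, 0.2]]
combined, label = consensual_decision(outs, weights=[0.5, 0.2, 0.3])
```

In the PCNN the weights themselves are found by optimization; here they are simply given.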
Discriminative Training of Hidden Markov Models
, 1998
"... vi Abbreviations vii Notation viii 1 Introduction 1 2 Hidden Markov Models 4 2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 HMM Modelling Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.3 HMM Topology . . . . . . . . . ..."
Abstract

Cited by 20 (0 self)
Front matter: Abbreviations; Notation.
1 Introduction
2 Hidden Markov Models: 2.1 Definition; 2.2 HMM Modelling Assumptions; 2.3 HMM Topology; 2.4 Finding the Best Transcription; 2.5 Setting the Parameters; 2.6 Summary
3 Objective Functions: 3.1 Properties of Maximum Likelihood Estimators; 3.2 Maximum Likelihood; 3.3 Maximum Mutual Information; 3.4 Frame Discrimination ...
Global Search Methods For Solving Nonlinear Optimization Problems
, 1997
"... ... these new methods, we develop a prototype, called Novel (Nonlinear Optimization Via External Lead), that solves nonlinear constrained and unconstrained problems in a unified framework. We show experimental results in applying Novel to solve nonlinear optimization problems, including (a) the lear ..."
Abstract

Cited by 15 (1 self)
... these new methods, we develop a prototype, called Novel (Nonlinear Optimization Via External Lead), that solves nonlinear constrained and unconstrained problems in a unified framework. We show experimental results from applying Novel to nonlinear optimization problems, including (a) the learning of feedforward neural networks, (b) the design of quadrature-mirror-filter digital filter banks, (c) the satisfiability problem, (d) the maximum satisfiability problem, and (e) the design of multiplierless quadrature-mirror-filter digital filter banks. Our method either achieves better solutions than existing methods or achieves solutions of the same quality at a lower cost.
Hybrid Consensus Theoretic Classification
 IEEE Transactions on Geoscience and Remote Sensing
, 1997
"... Abstract — Hybrid classification methods based on consensus from several data sources are considered. Each data source is at first treated separately and modeled using statistical methods. Then weighting mechanisms are used to control the influence of each data source in the combined classification. ..."
Abstract

Cited by 8 (2 self)
Abstract—Hybrid classification methods based on consensus from several data sources are considered. Each data source is first treated separately and modeled using statistical methods. Weighting mechanisms are then used to control the influence of each data source on the combined classification. The weights are optimized to improve the combined classification accuracies. Both linear and nonlinear optimization methods are considered and used in the classification of two multisource remote sensing and geographic data sets. A nonlinear method that utilizes a neural network gives excellent experimental results. The hybrid statistical/neural method outperforms all other methods in terms of test accuracies in the experiments.
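One standard consensus-theoretic combiner in this line of work is the logarithmic opinion pool, a weighted geometric mean of the per-source posteriors. A hedged sketch of that combiner only (the source posteriors and weights are invented; in the paper the weights are obtained by optimization):

```python
import numpy as np

def log_opinion_pool(source_posteriors, weights):
    """Weighted geometric mean of per-source posteriors, renormalized."""
    p = np.asarray(source_posteriors, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    pooled = np.prod(p ** w, axis=0)   # product over sources, weighted
    return pooled / pooled.sum()       # renormalize to a distribution

# Two data sources giving class posteriors for the same sample;
# posteriors and weights are invented for illustration.
sources = [[0.9, 0.1],
           [0.6, 0.4]]
pooled = log_opinion_pool(sources, weights=[0.7, 0.3])
```

Raising a source's weight sharpens its influence on the pooled decision, which is exactly the control knob the weighting mechanisms above tune.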
NEURObjects: an object-oriented library for neural network development
"... NEURObjects is a set of C library classes for neural network development, exploiting the potentialities of objectoriented design and programming. The main goal of the library consists in supporting experimental research in neural networks and fast prototyping of inductive machine learning applicati ..."
Abstract

Cited by 7 (5 self)
NEURObjects is a set of C++ library classes for neural network development, exploiting the potential of object-oriented design and programming. The main goal of the library is to support experimental research in neural networks and fast prototyping of inductive machine learning applications. We present NEURObjects' design issues and main functionalities, along with programming examples showing how neural network concepts map onto the design of the library classes.
Optimization and Global Minimization Methods Suitable for Neural Networks
, 1998
"... Neural networks are usually trained using local, gradientbased procedures. Such methods frequently find suboptimal solutions being trapped in local minima. Optimization of neural structures and global minimization methods applied to network cost functions have strong influence on all aspects of n ..."
Abstract

Cited by 7 (4 self)
Neural networks are usually trained using local, gradient-based procedures. Such methods frequently find suboptimal solutions, becoming trapped in local minima. Optimization of neural structures and global minimization methods applied to network cost functions strongly influence all aspects of network performance. Recently, genetic algorithms have frequently been combined with neural methods to select the best architectures and to avoid the drawbacks of local minimization methods. Many other global minimization methods are suitable for this purpose, although they are used rather rarely in this context. This paper provides a survey of such global methods, including some aspects of genetic algorithms.
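As a concrete illustration of the kind of population-based global search the survey covers, a bare-bones genetic algorithm with truncation selection and Gaussian mutation can escape the local minima that trap gradient descent. The test function and all hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # A multimodal 1-D cost with many local minima; global minimum at x = 0.
    return x**2 + 10.0 * (1.0 - np.cos(x))

pop = rng.uniform(-10.0, 10.0, size=40)      # random initial population
for _ in range(100):
    fitness = cost(pop)
    parents = pop[np.argsort(fitness)[:10]]  # truncation selection: keep best 10
    # Offspring: resample parents and apply Gaussian mutation.
    pop = rng.choice(parents, size=40) + rng.normal(0.0, 0.5, size=40)

best = pop[np.argmin(cost(pop))]             # settles near the global minimum
```

A gradient method started in one of the outer basins would stall at a local minimum with cost above 30; the population search reaches the global basin at x = 0.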
A Conjugate Gradient Learning Algorithm for Recurrent Neural Networks
, 1998
"... The realtime recurrent learning (RTRL) algorithm, which is originally proposed for training recurrent neural networks, requires a large number of iterations for convergence because a small learning rate should be used. While an obvious solution to this problem is to use a large learning rate, this ..."
Abstract

Cited by 7 (0 self)
The real-time recurrent learning (RTRL) algorithm, which was originally proposed for training recurrent neural networks, requires a large number of iterations for convergence because a small learning rate must be used. While an obvious solution to this problem is to use a large learning rate, this could result in undesirable convergence characteristics. This paper attempts to improve the convergence capability and convergence characteristics of the RTRL algorithm by incorporating conjugate gradient computation into its learning procedure. The resulting algorithm, referred to as the conjugate gradient recurrent learning (CGRL) algorithm, is applied to train fully connected recurrent neural networks to simulate a second-order low-pass filter and to predict the chaotic intensity pulsations of an NH3 laser. Results show that the CGRL algorithm exhibits substantial improvement in convergence (in terms of the reduction in mean squared error per epoch) as compared to the RTRL and batch-mode RT...
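The conjugate gradient computation at the heart of CGRL can be illustrated on a toy quadratic cost using the Fletcher-Reeves direction update. This is a generic sketch of the optimizer only, not the paper's integration with RTRL; the matrix and vector are invented:

```python
import numpy as np

# Toy quadratic cost 0.5 * w^T A w - b^T w with SPD A, standing in for a
# network error surface; its minimizer solves A w = b, i.e. w = [0.2, 0.4].
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

w = np.zeros(2)
g = A @ w - b                          # gradient of the cost at w
d = -g                                 # initial direction: steepest descent
for _ in range(2):                     # exact CG finishes in n = 2 steps here
    alpha = -(g @ d) / (d @ A @ d)     # exact line search for a quadratic
    w = w + alpha * d
    g_new = A @ w - b
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # conjugate direction update
    g = g_new
```

The conjugate direction update is what lets the method take large, well-scaled steps without the oscillation a large fixed learning rate would cause.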
NEURObjects: A set of library classes for neural networks development
"... NEURObjects is a set of C++ library classes for neural networks development, exploiting the potentialities of objectoriented design and programming. The main goal of the library is to support fast prototyping of inductive machine learning applications based on neural networks. In this paper we pres ..."
Abstract

Cited by 6 (6 self)
NEURObjects is a set of C++ library classes for neural network development, exploiting the potential of object-oriented design and programming. The main goal of the library is to support fast prototyping of inductive machine learning applications based on neural networks. In this paper we present the library's design issues, its main functionalities, and simple examples of programming with NEURObjects.

I. Introduction
Neural networks play an important role in machine learning; in particular, they make it possible to tackle problems such as regression and classification efficiently [13]. Moreover, neural networks are often relevant components of complex systems used in inductive learning tasks [12]. Nowadays, the relatively limited diffusion of neural-network technology in industrial applications mainly stems from the high costs of the long development time required when neural-network algorithms are implemented from scratch in order to embed those tools in new software products....
Improved Real Time Recurrent Learning Algorithms: a Review and some New Approaches
 Neurocomputing
, 1995
"... This paper reviews the techniques that reduce the time complexity and improve the convergence capability of the realtime recurrent learning algorithm. A comparison among the various approaches was made by training several recurrent networks to model a chaotic time series produced by the Henon model ..."
Abstract

Cited by 6 (0 self)
This paper reviews the techniques that reduce the time complexity and improve the convergence capability of the real-time recurrent learning algorithm. A comparison among the various approaches was made by training several recurrent networks to model a chaotic time series produced by the Hénon model.

1. INTRODUCTION
The real-time recurrent learning (RTRL) algorithm [1] is one of the successful learning algorithms in which the gradient of errors is propagated forward in time. It is therefore particularly suitable for online training of recurrent neural networks (RNNs). Nevertheless, its time complexity is O(n^4), where n is the number of processing units in the network. Since its introduction in 1989, a number of suggestions have been made to improve the learning speed and convergence of the algorithm.

2. METHODS TO IMPROVE THE RTRL ALGORITHM
Before the improved RTRL algorithms are discussed, we need to define the original RTRL algorithm [1]. Let the parameters of a fully connected ...
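The O(n^4) per-step cost comes from updating the full table of sensitivities p[k, i, j] = dy_k/dw_ij at every time step. A hedged sketch of that recursion for a small fully connected network, following the general RTRL formulation of [1] rather than any particular improved variant (weights, input, and sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                     # number of fully connected units
W = 0.1 * rng.standard_normal((n, n))     # recurrent weights (invented)
x = 0.5 * np.ones(n)                      # constant external input (invented)
y = np.zeros(n)                           # unit outputs
p = np.zeros((n, n, n))                   # sensitivities p[k, i, j] = dy_k/dw_ij

for t in range(5):
    s = W @ y + x
    y_new = np.tanh(s)
    fprime = 1.0 - y_new**2               # tanh'(s)
    # RTRL recursion:
    # p'[k,i,j] = f'(s_k) * (sum_l W[k,l] * p[l,i,j] + delta_{k,i} * y[j])
    delta_term = np.zeros((n, n, n))
    for i in range(n):
        delta_term[i, i, :] = y           # direct dependence only when k == i
    p = fprime[:, None, None] * (np.einsum('kl,lij->kij', W, p) + delta_term)
    y = y_new
# Storing p costs O(n^3) memory; each einsum update costs O(n^4) per time step,
# which is the complexity the improved algorithms below try to reduce.
```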