Results 1–10 of 27
Classification, Association and Pattern Completion Using Neural Similarity Based Methods
Applied Math. & Comp. Science, 2000
Cited by 17 (16 self)
Abstract:
A framework for Similarity-Based Methods (SBMs) includes many classification models as special cases: neural networks of the Radial Basis Function (RBF) type, Feature Space Mapping neurofuzzy networks based on separable transfer functions, Learning Vector Quantization, variants of the k-nearest-neighbor method, and several new models that may be presented in network form. Multilayer Perceptrons (MLPs) use scalar products to compute the weighted activations of neurons, combining soft hyperplanes to provide decision borders. Distance-based multilayer perceptrons (D-MLPs) evaluate the similarity of inputs to weights, offering a natural generalization of standard MLPs. A cluster-based initialization procedure determining the architecture and the values of all adaptive parameters is described. Networks …
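The contrast between scalar-product and distance-based activation mentioned in this abstract can be sketched in a few lines. This is only an illustration of the idea, assuming a sigmoidal output and Euclidean distance; the function names are ours and the paper's exact D-MLP formulation may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_neuron(x, w, b):
    # Standard MLP neuron: weighted activation via a scalar product,
    # yielding a soft-hyperplane decision border.
    return sigmoid(np.dot(w, x) + b)

def dmlp_neuron(x, w, theta):
    # Distance-based neuron: activation depends on the similarity of the
    # input to the weight vector; theta sets where the soft border lies.
    return sigmoid(theta - np.linalg.norm(x - w))
```

At the weight vector itself the distance is zero, so the distance-based neuron outputs 0.5 and its activation decays with distance, giving a localized border rather than a hyperplane.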
Uncertainty of Data, Fuzzy Membership Functions, and MultiLayer Perceptrons
2004
Cited by 15 (6 self)
Abstract:
The probability that a crisp logical rule applied to imprecise input data is true may be computed using a fuzzy membership function. All reasonable assumptions about input uncertainty distributions lead to membership functions of sigmoidal shape. Convolution of several inputs with uniform uncertainty leads to bell-shaped, Gaussian-like uncertainty functions. Relations between input uncertainties and fuzzy rules are systematically explored and several new types of membership functions are discovered. Multilayer perceptron (MLP) networks are shown to be a particular implementation of hierarchical sets of fuzzy threshold logic rules based on sigmoidal membership functions. They are equivalent to crisp logical networks applied to input data with uncertainty. Leaving fuzziness on the input side makes the networks or the rule systems easier to understand. Practical applications of these ideas are presented for the analysis of questionnaire data and gene expression data.
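The claim that reasonable input uncertainties produce sigmoidal membership functions is easy to verify for the simplest crisp rule, "true value > θ". The sketch below (function names are ours) computes the probability that the rule holds under uniform and under Gaussian measurement uncertainty:

```python
import math

def membership_uniform(x, theta, s):
    # Probability that the crisp rule "true input > theta" holds when the
    # measured value x carries uniform uncertainty over [x - s, x + s].
    # The result is a piecewise-linear, sigmoid-shaped membership function.
    lo, hi = x - s, x + s
    if hi <= theta:
        return 0.0
    if lo >= theta:
        return 1.0
    return (hi - theta) / (2 * s)

def membership_gaussian(x, theta, sigma):
    # The same rule under Gaussian uncertainty gives an erf-shaped
    # (sigmoidal) membership function.
    return 0.5 * (1 + math.erf((x - theta) / (sigma * math.sqrt(2))))
```

Uniform uncertainty gives the piecewise-linear sigmoid; Gaussian uncertainty gives the erf-shaped one, consistent with the abstract's observation that all such assumptions lead to sigmoidal shapes.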
Projection Pursuit Constructive Neural Networks Based on Quality of Projected Clusters
 Lecture Notes in Computer Science 5164 (2008) 754–762
Cited by 9 (4 self)
Abstract:
A linear projection pursuit index measuring the quality of projected clusters (QPC) is used to discover non-local clusters in high-dimensional multi-class data, to reduce dimensionality, and for feature selection, data visualization, and classification. Constructive neural networks that optimize the QPC index are able to discover the simplest models of complex data, solving problems that standard networks based on error minimization are not able to handle. Tests on problems with complex Boolean logic and on a few real-world datasets show the high efficiency of this approach.
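The exact QPC index is defined in the cited paper; the sketch below only illustrates the general idea of scoring a projection direction by rewarding same-class points that land close together on the projected line and penalizing close pairs from different classes. The function name, the Gaussian window, and the weights are our assumptions:

```python
import numpy as np

def qpc_like_index(w, X, y, a_plus=1.0, a_minus=1.0, sigma=1.0):
    # Hedged sketch of a projected-cluster quality index in the spirit of
    # QPC: project all samples onto direction w, then for every sample
    # reward nearby same-class neighbors and penalize nearby neighbors
    # from other classes, using a localized Gaussian window G.
    w = w / np.linalg.norm(w)
    z = X @ w
    score = 0.0
    for i in range(len(z)):
        g = np.exp(-((z - z[i]) ** 2) / (2 * sigma ** 2))
        same = (y == y[i])
        score += a_plus * g[same].sum() - a_minus * g[~same].sum()
    return score / len(z) ** 2
```

A direction that separates the classes into compact projected clusters scores higher than one that mixes them, which is what a constructive network maximizing such an index exploits.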
Heterogeneous Adaptive Systems.
World Congress of Computational Intelligence, 2002
Cited by 8 (6 self)
Abstract:
Most adaptive systems are homogeneous, i.e. they are built from processing elements of the same type. MLP neural networks and decision trees use nodes that partition the input space with hyperplanes. Other types of neural networks use nodes that provide spherical or ellipsoidal decision borders. This may not be the best inductive bias for given data, frequently requiring a large number of processing elements even in cases where simple solutions exist. In heterogeneous adaptive systems (HAS) different types of decision borders are used at each stage, enabling discovery of the most appropriate bias for the data. Neural, decision tree and similarity-based systems of this sort are described here. Results from a novel heterogeneous decision tree algorithm are presented as an example of this approach.
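A minimal sketch of a heterogeneous tree node: offer the node two candidate test types, an axis-parallel hyperplane and a spherical test, and keep whichever reduces class impurity more. The parameter search is omitted and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float((p ** 2).sum())

def impurity_gain(y, left_mask):
    # Gini impurity reduction achieved by splitting y with left_mask.
    left, right = y[left_mask], y[~left_mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)

def heterogeneous_node(X, y, threshold, feature=0, center=None, radius=1.0):
    # HAS-style node: evaluate a hyperplane test (x[feature] <= threshold)
    # and a spherical test (||x - center|| <= radius), then keep the test
    # type that gives the larger impurity reduction.
    if center is None:
        center = X.mean(axis=0)
    plane = X[:, feature] <= threshold
    sphere = np.linalg.norm(X - center, axis=1) <= radius
    gains = {"hyperplane": impurity_gain(y, plane),
             "sphere": impurity_gain(y, sphere)}
    return max(gains, key=gains.get), gains
```

On data where one class forms a compact cluster surrounded by the other, the spherical test separates the classes in one node while any single hyperplane cannot, illustrating why matching the border type to the data pays off.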
Approximation and Classification in Medicine with IncNet Neural Networks
In: Machine Learning and Applications, Workshop on Machine Learning in Medical Applications, 1999
Cited by 7 (1 self)
Abstract:
The structure of the incremental neural network (IncNet) is controlled by growing and pruning to match the complexity of the training data. The Extended Kalman Filter algorithm and its fast version are used as the learning algorithm. Bicentral transfer functions, more flexible than other functions commonly used in artificial neural networks, are employed. The latest improvement is the ability to rotate the contours of constant values of the transfer functions in multidimensional spaces with only N − 1 adaptive parameters. Results on approximation benchmarks and on a real-world psychometric classification problem clearly show the superior generalization performance of the presented network compared with other classification models.
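Bicentral transfer functions are built from a pair of sigmoids per dimension, one rising and one falling, whose product forms a soft window. The sketch below illustrates that construction; the shared slope s and the simplified parameterization are our assumptions (the paper's version uses independent slopes and the rotation parameters mentioned above):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bicentral(x, t, b, s=5.0):
    # Bicentral-style window: per dimension i, a sigmoid rising near
    # t[i] - b[i] is multiplied by a sigmoid falling near t[i] + b[i];
    # the product over dimensions gives a localized, flexible response
    # centered at t with half-widths b.
    out = 1.0
    for xi, ti, bi in zip(x, t, b):
        out *= sigmoid(s * (xi - ti + bi)) * sigmoid(-s * (xi - ti - bi))
    return out
```

Unlike a radial Gaussian, each dimension's width can adapt independently, which is the flexibility the abstract refers to.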
Universal Learning Machines
Lecture Notes in Computer Science 5864 (2009) 206–215
Cited by 7 (7 self)
Abstract:
All existing learning methods have particular biases that make them suitable for specific kinds of problems. A Universal Learning Machine (ULM) should find the simplest data model for arbitrary data distributions. Several ways to create ULMs are outlined, and an algorithm based on the creation of new global and local features, combined with meta-learning, is introduced. This algorithm is able to find simple solutions that sophisticated algorithms ignore, learn complex Boolean functions and complicated probability distributions, and handle problems requiring multi-resolution decision borders.
Mathematical Aspects of Neural Networks
European Symposium on Artificial Neural Networks, 2003
Cited by 6 (4 self)
Abstract:
In this tutorial paper on mathematical aspects of neural networks we focus on two directions: on the one hand, we motivate standard mathematical questions and the well-studied theory of classical neural models used in machine learning; on the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Optimal transfer function neural networks
9th European Symposium on Artificial Neural Networks (ESANN), Brugge, 2001. Defacto publications
Cited by 5 (3 self)
Abstract:
Neural networks use neurons of the same type in each layer, but such an architecture cannot lead to data models of optimal complexity and accuracy. Networks with architectures (number of neurons, connections, and types of neurons) optimized for a given problem are described here. Each neuron may implement a transfer function of a different type. The complexity of such networks is controlled by statistical criteria and by adding penalty terms to the error function. Results of numerical experiments on artificial data are reported.
Almost Random Projection Machine
Cited by 5 (5 self)
Abstract:
Backpropagation of errors is not only hard to justify from a biological perspective, but it also fails to solve problems requiring complex logic. A simpler algorithm based on the generation and filtering of useful random projections has better biological justification, is faster and easier to train, and may in practice solve non-separable problems of higher complexity than typical feedforward neural networks. Confidence in network decisions is estimated by visualizing the number of nodes that agree with the final decision. Key words: neural networks, learning, random projections.
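The generate-and-filter idea can be sketched as follows. The "usefulness" criterion here, a long single-class run along the projected line, is our stand-in for the filtering described in the paper; all names are illustrative:

```python
import numpy as np

def useful_projection(z, y, min_frac=0.3):
    # Keep a projection if, after sorting the projected values, some
    # contiguous run of samples shares one class label and covers a
    # reasonable fraction of the data (a hedged usefulness test).
    ys = y[np.argsort(z)]
    best, run = 1, 1
    for a, b in zip(ys[:-1], ys[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best >= min_frac * len(y)

def arpm_features(X, y, n_candidates=200, rng=None):
    # Generate random directions and keep only those whose 1-D
    # projections pass the usefulness filter; no error backpropagation
    # is involved at any point.
    rng = np.random.default_rng(rng)
    kept = []
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        if useful_projection(X @ w, y):
            kept.append(w)
    return np.array(kept)
```

The surviving projections become hidden features for a simple output layer, so training reduces to candidate generation plus filtering rather than gradient descent through the whole network.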
Quo Vadis, Computational Intelligence?
In: Machine Intelligence: Quo Vadis? Advances in Fuzzy Systems – Applications and Theory, 2004
Cited by 3 (3 self)
Abstract:
What are the most important problems of computational intelligence? A sketch of the road to intelligent systems is presented. Several experts have made interesting comments on the most challenging problems.