Results 1–10 of 267,542
Fuzzy if-then rule-based nonlinear classifier
 INT. J. APPL. MATH. COMPUT. SCI., 2003
"... This paper introduces a new classifier design method that is based on a modification of the classical Ho-Kashyap procedure. The proposed method uses the absolute error, rather than the squared error, to design a linear classifier. Additionally, easy control of the generalization ability and robustness to outliers are obtained. Next, an extension to a nonlinear classifier by the mixture-of-experts technique is presented. Each expert is represented by a fuzzy if-then rule in the Takagi-Sugeno-Kang form. Finally, examples are given to demonstrate the validity of the introduced method."
Cited by 1 (1 self)
Selecting fuzzy if-then rules for classification problems using genetic algorithms
 IEEE TRANS. FUZZY SYST., 1995
"... This paper proposes a genetic-algorithm-based method for selecting a small number of significant fuzzy if-then rules to construct a compact fuzzy classification system with high classification power. The rule selection problem is formulated as a combinatorial optimization problem with two objectives: to maximize the number of correctly classified patterns and to minimize the number of fuzzy if-then rules. Genetic algorithms are applied to this problem. A set of fuzzy if-then rules is coded into a string and treated as an individual in genetic algorithms. The fitness of each individual ..."
Cited by 132 (21 self)
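The encoding described in the abstract above can be sketched in a few lines: a candidate rule set is a bitstring over a pool of fuzzy if-then rules, and fitness trades off correct classifications against rule count. The weights, the 4-rule pool, and the stand-in accuracy function below are illustrative assumptions, not values from the paper.

```python
# A candidate rule set is a bitstring over a pool of candidate fuzzy rules.
def fitness(bits, classify_correct, w_acc=10.0, w_rules=1.0):
    """Reward correctly classified patterns, penalize the number of rules."""
    n_rules = sum(bits)
    if n_rules == 0:
        return float("-inf")  # an empty rule set classifies nothing
    return w_acc * classify_correct(bits) - w_rules * n_rules

# Toy stand-in for evaluating the rule set on training patterns:
# pretend only rules 0 and 2 contribute correct classifications.
def classify_correct(bits):
    return 50 * bits[0] + 40 * bits[2]

# With only 4 candidate rules we can check all 16 bitstrings exhaustively;
# a GA would search this space with selection, crossover, and mutation.
best = max(([(i >> k) & 1 for k in range(4)] for i in range(16)),
           key=lambda b: fitness(b, classify_correct))
print(best)  # [1, 0, 1, 0]: the two useful rules, nothing redundant
```

In a real run, each bitstring would be a GA individual, and the fitness above would drive selection toward compact, accurate rule sets.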
How Good Are Fuzzy If-Then Classifiers?
"... This paper gives some known theoretical results about fuzzy rule-based classifiers and offers a few new ones. The ability of Takagi-Sugeno-Kang (TSK) fuzzy classifiers to match exactly and to approximate classification boundaries is discussed. The lemma by Klawonn and Klement about the exact match of a c ..."
Cited by 14 (0 self)
Ensemble Methods in Machine Learning
 MULTIPLE CLASSIFIER SYSTEMS, LBCS-1857, 2000
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting ..."
Cited by 607 (3 self)
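The weighted vote that the abstract above describes is simple to sketch: each base classifier casts a vote for a class, weighted by its (e.g. validation-estimated) reliability. The toy spam classifiers and weights below are made up for illustration.

```python
from collections import defaultdict

# Minimal weighted-vote combiner: each classifier is a predict function
# paired with a weight (here hand-picked; normally estimated on held-out data).
def weighted_vote(classifiers, x):
    scores = defaultdict(float)
    for predict, weight in classifiers:
        scores[predict(x)] += weight
    return max(scores, key=scores.get)

ensemble = [
    (lambda x: "spam" if "free" in x else "ham", 0.7),
    (lambda x: "spam" if "!!!" in x else "ham", 0.4),
    (lambda x: "ham", 0.2),  # a weak classifier that always says ham
]
print(weighted_vote(ensemble, "free offer!!!"))  # spam (1.1 vs 0.2)
print(weighted_vote(ensemble, "hello"))          # ham  (all three agree)
```

Bagging, boosting, and error-correcting output codes differ mainly in how the base classifiers and their weights are produced; the combination step stays essentially this vote.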
Bayesian Network Classifiers
1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly ..."
Cited by 788 (23 self)
Estimating Continuous Distributions in Bayesian Classifiers
 In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995
"... When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality ..."
Cited by 489 (2 self)
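The alternative the abstract above points toward, dropping the single-Gaussian assumption, can be sketched with a kernel density estimate: model p(x | class) as an average of Gaussian kernels centered on the training values. The bandwidth, data, and priors below are hypothetical choices for illustration, not the paper's.

```python
import math

# Kernel density estimate of p(x | class): average of Gaussian kernels
# centered on the observed training values (bandwidth h is a free choice).
def kernel_density(x, samples, h=0.5):
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples)

def classify(x, class_samples, priors):
    # Bayes rule: posterior is proportional to prior times likelihood.
    return max(priors, key=lambda c: priors[c] * kernel_density(x, class_samples[c]))

class_samples = {"low": [1.0, 1.2, 0.9], "high": [3.8, 4.1, 4.0]}
priors = {"low": 0.5, "high": 0.5}
print(classify(1.1, class_samples, priors))  # low
print(classify(4.0, class_samples, priors))  # high
```

Unlike a single Gaussian per class, this estimate can follow multimodal or skewed class-conditional distributions, at the cost of keeping the training values around.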
Mining Generalized Association Rules
1995
"... We introduce the problem of mining generalized association rules. Given a large database of transactions, where each transaction consists of a set of items, and a taxonomy (is-a hierarchy) on the items, we find associations between items at any level of the taxonomy. For example, given a taxonomy th ..."
Cited by 577 (7 self)
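The basic trick behind mining rules at any taxonomy level can be sketched directly: extend each transaction with every is-a ancestor of its items, then count support as usual, so associations surface at whichever level they hold. The tiny clothing taxonomy below is illustrative.

```python
# Toy is-a hierarchy: child -> parent.
taxonomy = {"jacket": "outerwear", "ski pants": "outerwear",
            "outerwear": "clothes", "shirt": "clothes"}

def ancestors(item):
    """Walk up the taxonomy, yielding every ancestor of an item."""
    while item in taxonomy:
        item = taxonomy[item]
        yield item

transactions = [{"jacket"}, {"ski pants"}, {"jacket", "shirt"}]
# Extend each transaction with the ancestors of its items.
extended = [t | {a for i in t for a in ancestors(i)} for t in transactions]

def support(itemset):
    """Fraction of (extended) transactions containing the itemset."""
    return sum(itemset <= t for t in extended) / len(extended)

print(support({"jacket"}))     # 2/3: only two transactions have jackets
print(support({"outerwear"}))  # 1.0: jacket and ski pants both roll up
```

An itemset too rare at the leaf level ("jacket") can still be frequent at an ancestor level ("outerwear"), which is exactly what makes mining across taxonomy levels worthwhile.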
The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation
2003
"... The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the ... implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble-based optimal interpolation (En ..."
Cited by 482 (4 self)
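The core analysis step of an EnKF can be sketched for a scalar state in the stochastic (perturbed-observation) form: the Kalman gain is built from the ensemble's own spread, and each member assimilates its own noisy copy of the observation. The numbers and noise levels below are illustrative, not from the paper.

```python
import random

def enkf_update(ensemble, y_obs, obs_var, rng):
    """One scalar EnKF analysis step with a direct (identity) observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast spread
    gain = var / (var + obs_var)                            # Kalman gain
    # Each member assimilates its own perturbed copy of the observation,
    # which keeps the analysis ensemble spread statistically consistent.
    return [x + gain * (y_obs + rng.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
forecast = [rng.gauss(10.0, 2.0) for _ in range(200)]  # forecast ensemble
analysis = enkf_update(forecast, y_obs=12.0, obs_var=1.0, rng=rng)
mean_a = sum(analysis) / len(analysis)
print(mean_a)  # pulled from around 10 toward the observation 12
```

With forecast variance near 4 and observation variance 1, the gain is about 0.8, so the analysis mean lands most of the way toward the observation, as expected.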
Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers
 JOURNAL OF MACHINE LEARNING RESEARCH, 2000
"... We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used. We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound ..."
Cited by 560 (20 self)
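The simplest of the reductions named above, one class against all others, can be sketched as follows: train one binary scorer per class on +1/-1 relabeled data, then predict the class whose scorer is most confident. The distance-to-center "learner" below is a hypothetical stand-in for any margin-based binary algorithm.

```python
def train_one_vs_rest(X, y, classes, fit_binary):
    """Fit one binary scorer per class on +1 (this class) / -1 (rest) labels."""
    return {c: fit_binary(X, [1 if label == c else -1 for label in y])
            for c in classes}

def predict(scorers, x):
    """Pick the class whose binary scorer is most confident."""
    return max(scorers, key=lambda c: scorers[c](x))

# Toy 1-D data: class "a" near 0, "b" near 5, "c" near 10.
X = [0.1, 0.3, 5.0, 5.2, 9.8, 10.1]
y = ["a", "a", "b", "b", "c", "c"]

def fit_binary(X, signs):
    # Stand-in binary learner: score by closeness to the positive-class mean.
    center = sum(x for x, s in zip(X, signs) if s == 1) / signs.count(1)
    return lambda x: -abs(x - center)

scorers = train_one_vs_rest(X, y, ["a", "b", "c"], fit_binary)
print(predict(scorers, 5.1))  # b
```

All-pairs and error-correcting output codes follow the same shape, differing only in which binary problems are generated and how the binary outputs are decoded back to a class.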
Very simple classification rules perform well on most commonly used datasets
 Machine Learning, 1993
"... The classification rules induced by machine learning systems are judged by two criteria: their classification accuracy on an independent test set (henceforth "accuracy"), and their complexity. The relationship between these two criteria is, of course, of keen interest to the machine ..."
Cited by 542 (5 self)
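The "very simple rules" of the title refer to one-attribute rules in the style of Holte's 1R, which can be sketched briefly: for each attribute, map every observed value to its majority class, and keep the single attribute with the fewest training errors. The weather-style rows below are illustrative.

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Pick the single attribute whose value->majority-class rule errs least."""
    best = None
    for attr in rows[0]:
        if attr == target:
            continue
        buckets = defaultdict(Counter)  # attr value -> class counts
        for row in rows:
            buckets[row[attr]][row[target]] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in buckets.values())
        if best is None or errors < best[1]:
            best = (attr, errors, rule)
    return best

rows = [
    {"outlook": "sunny",  "windy": "no",  "play": "no"},
    {"outlook": "sunny",  "windy": "yes", "play": "no"},
    {"outlook": "rain",   "windy": "no",  "play": "yes"},
    {"outlook": "rain",   "windy": "yes", "play": "yes"},
    {"outlook": "cloudy", "windy": "no",  "play": "yes"},
]
attr, errors, rule = one_r(rows, "play")
print(attr, errors)  # outlook 0: outlook alone classifies this toy set
```

The paper's point is that on many benchmark datasets such one-attribute rules come surprisingly close to the accuracy of far more complex learners.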