Results 1–10 of 13
Learning Similarity Measures: A Formal View Based on a Generalized CBR Model
, 2005
Cited by 7 (2 self)
Although similarity measures play a crucial role in CBR applications, clear methodologies for defining them have not yet been developed. One approach to simplifying the definition of similarity measures involves the use of machine learning techniques. In this paper we investigate important aspects of these approaches in order to support a more goal-directed choice and application of existing approaches and to initiate the development of new techniques. This investigation is based on a novel formal generalization of the classic CBR cycle, which allows a more suitable analysis of the requirements, goals, assumptions and restrictions that are relevant for learning similarity measures.
Heterogeneous distance functions for prototype rules: influence of parameters on probability estimation
Probabilistic Distance Measures for Prototype-Based Rules
Cited by 1 (1 self)
Probabilistic distance functions, including several variants of value difference metrics, the minimum risk metric and Short-Fukunaga metrics, are used with prototype-based rules (P-rules) to provide a very concise and comprehensible classification model. Application of probabilistic metrics to nominal or discrete features is straightforward. Heterogeneous metrics that handle continuous attributes with discretized or interpolated probabilistic metrics were combined with several methods of probability density estimation. Numerical experiments on artificial and real data show the usefulness of such an approach as an alternative to neuro-fuzzy models.
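Several entries in this listing build on the value difference metric (VDM), which measures distance between nominal feature values through their class-conditional probabilities. A minimal sketch of the idea, assuming estimation by simple relative frequencies (the function name and data layout are illustrative, not taken from any of the papers):

```python
from collections import Counter, defaultdict

def vdm_distance(X, y, a, b, q=2):
    """Value Difference Metric between two nominal feature vectors a, b,
    estimated from training data (X, y):

        d(a, b) = sum_f sum_c |P(c | a_f) - P(c | b_f)|^q
    """
    classes = sorted(set(y))
    n_features = len(X[0])
    # Class counts conditioned on each feature value, per feature
    cond = [defaultdict(Counter) for _ in range(n_features)]
    for row, label in zip(X, y):
        for f, v in enumerate(row):
            cond[f][v][label] += 1
    dist = 0.0
    for f in range(n_features):
        ca, cb = cond[f][a[f]], cond[f][b[f]]
        na, nb = sum(ca.values()) or 1, sum(cb.values()) or 1
        for c in classes:
            # Compare class-conditional probabilities of the two values
            dist += abs(ca[c] / na - cb[c] / nb) ** q
    return dist
```

With q = 2 this is the form typically embedded in heterogeneous distance functions (e.g. HVDM), where continuous attributes are handled by a separate, normalized numeric component.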
Against-Expectation Pattern Discovery: Identifying Interactions within Items with Large Relative Contrasts in Databases
Synthesis of digital circuits with ability to generalize, Wilian Soares Lacerda
, 2010
Abstract. A method for synthesis of digital hardware classification circuits is presented in this paper. The method works by first selecting data from the truth table that are important for the generalization of the circuit. The selected subset is provided to a Boolean minimization algorithm (Espresso) that, by hypercube expansion, generates a classifier with a smoother separation surface between classes. The results show that the obtained circuits have a generalization performance comparable to Support Vector Machines.
KNN-CF Approach: Incorporating Certainty Factor to kNN Classification
Abstract—KNN classification finds the k nearest neighbors of a query in the training data and then predicts the class of the query as the most frequent one occurring among those neighbors. This is a typical method based on the majority rule. Although majority-rule based methods have been widely and successfully used in real applications, they can be unsuitable in learning settings with skewed class distributions. This paper incorporates a certainty factor (CF) measure into kNN classification, called kNN-CF classification, so as to deal with the above issue. This CF-measure based strategy can be applied on top of a kNN classification algorithm (or a hot-deck method) to meet the needs of imbalanced learning. As a result, an existing kNN classification algorithm can easily be extended to the setting of skewed class distributions. Experiments conducted to evaluate the approach demonstrate that kNN-CF classification outperforms standard kNN classification in accuracy. Index Terms—Classification, kNN classification, imbalanced classification.
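The abstract does not spell out the exact CF formula, so the following is only a plausible minimal sketch: it assumes a MYCIN-style certainty factor that scores each class by how much its share among the k neighbors exceeds its prior in the training data, which is one way to favor minority classes under skew. The function name and the CF form are assumptions, not the paper's method:

```python
import math
from collections import Counter

def knn_cf_predict(X_train, y_train, x, k=3):
    """Predict the class of x with a certainty-factor style score
    instead of the raw majority rule among the k nearest neighbors."""
    # Euclidean distances to all training points, nearest first
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(X_train, y_train)
    )
    neigh = Counter(yi for _, yi in dists[:k])
    prior = Counter(y_train)
    n = len(y_train)

    def cf(c):
        local = neigh[c] / k            # P(c | neighbors)
        p = prior[c] / n                # P(c) prior over training data
        if local >= p:                  # positive evidence for c
            return (local - p) / (1 - p) if p < 1 else 0.0
        return (local - p) / p          # negative evidence for c

    # Choose the class with the strongest certainty factor
    return max(prior, key=cf)
```

On a skewed toy set such as three points of class 0 and one of class 1, a query near the lone class-1 point can be assigned class 1 even when class 0 holds the neighborhood majority, illustrating the intended effect.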
Influence of probability estimation parameters on stability of accuracy in prototype rules using heterogeneous distance functions
Department of Informatics, Nicolaus Copernicus University,
"... distance functions for prototype rules: ..."
Threshold rules decision list
Understanding data is one of the most important problems. Popular crisp logic rules are easy to understand and compare; however, for some datasets the number of extracted rules is very large, which reduces generalization and makes the system less transparent. An alternative is fuzzy logic rules, which are much more flexible; however, they do not support symbolic and nominal attributes. Another family of rule-extraction systems is based on prototype rules, which derive from similarity-based learning. The presented threshold rules algorithm extracts from data a small number of ordered, highly accurate rules. Numerical experiments on real data show the usefulness of such an approach as an alternative to neuro-fuzzy models.
Off-Line Learning with Transductive Confidence Machines: an Empirical Evaluation
Abstract. The recently introduced transductive confidence machines (TCMs) framework allows classifiers to be extended such that they satisfy the calibration property. This means that the error rate can be set by the user prior to classification. An analytical proof of the calibration property was given for TCMs applied in the on-line learning setting. However, the nature of this learning setting restricts the applicability of TCMs. In this paper we provide strong empirical evidence that the calibration property also holds in the off-line learning setting. Our results extend the range of applications in which TCMs can be applied. We may conclude that TCMs are appropriate in virtually any application domain.
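The calibration property rests on conformal p-values computed from nonconformity scores: a candidate label is kept when its p-value exceeds the chosen significance level. A minimal sketch of the p-value computation (the function name is an assumption, and the abstract does not specify which nonconformity measure the paper uses):

```python
def conformal_p_value(cal_scores, test_score):
    """p-value of a test example in conformal prediction: the fraction
    of nonconformity scores (calibration scores plus the test example's
    own score) that are at least as large as the test example's."""
    n = len(cal_scores) + 1
    ge = sum(1 for s in cal_scores if s >= test_score) + 1
    return ge / n
```

Under exchangeability, rejecting any label whose p-value is at most a user-chosen eps gives an error rate of at most eps; this is the calibration property whose off-line validity the paper evaluates empirically.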