Results 1–10 of 32
Logical Analysis of Numerical Data
 Mathematical Programming
, 2000
Abstract

Cited by 45 (12 self)
The "Logical Analysis of Data" (LAD) is a methodology developed since the late eighties, aimed at discovering hidden structural information in data sets. LAD was originally developed for analyzing binary data by using the theory of partially defined Boolean functions. An extension of LAD to the analysis of numerical data sets is achieved through the process of "binarization", consisting of the replacement of each numerical variable by binary "indicator" variables, each showing whether the value of the original variable is above or below a certain level. Binarization was successfully applied to the analysis of a variety of real-life data sets. This paper develops the theoretical foundations of the binarization process, studying the combinatorial optimization problems related to the minimization of the number of binary variables. To provide an algorithmic framework for the practical solution of such problems, we construct compact linear integer programming formulations of them. We develop...
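The binarization step described above can be sketched as follows. This is an illustrative assumption, not the paper's optimized formulation: cut-points are taken naively as midpoints between consecutive distinct values, and each numerical attribute is replaced by indicators of the form x >= t.

```python
# Hypothetical sketch of LAD-style binarization: each numerical attribute is
# replaced by binary indicator variables of the form "x >= t" for cut-points t.
# Choosing midpoints between consecutive distinct values is an illustrative
# assumption; the paper studies how to minimize the number of such variables.

def cut_points(values):
    """Midpoints between consecutive distinct values of one attribute."""
    s = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(s, s[1:])]

def binarize(rows):
    """Map numerical rows to binary vectors via per-attribute cut-points."""
    n = len(rows[0])
    cuts = [cut_points([r[j] for r in rows]) for j in range(n)]
    return [
        [1 if r[j] >= t else 0 for j in range(n) for t in cuts[j]]
        for r in rows
    ]

rows = [[1.0, 10.0], [2.0, 20.0], [3.0, 10.0]]
print(binarize(rows))  # [[0, 0, 0], [1, 0, 1], [1, 1, 0]]
```

Each original row is expanded into one bit per (attribute, cut-point) pair; the paper's contribution is minimizing how many such indicators are needed.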
Improved Pairwise Coupling Classification with Correcting Classifiers
, 1997
Abstract

Cited by 27 (3 self)
The benefits obtained from the decomposition of a classification task involving several classes into a set of smaller classification problems involving only two classes, usually called dichotomies, have been demonstrated on various occasions. Among the multiple ways of applying such a decomposition, Pairwise Coupling is one of the best known. Its principle is to separate a pair of classes in each binary subproblem, ignoring the remaining ones, resulting in a decomposition scheme containing as many subproblems as there are possible pairs of classes in the original task. Pairwise Coupling decomposition has so far been used in different applications. In this paper, various ways of recombining the outputs of all the classifiers solving the existing subproblems are explored, and an important intrinsic handicap is exposed: the use of irrelevant information in classification. A solution for this problem is suggested and it is shown how it can significa...
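A minimal sketch of the pairwise (one-vs-one) decomposition described above, with simple majority voting as the recombination rule. The base learner (nearest centroid) and the voting scheme are assumptions chosen for brevity; they are not the paper's correcting-classifier method.

```python
# One-vs-one decomposition sketch: one binary sub-model per pair of classes,
# recombined by majority vote. Nearest-centroid base learners are an
# illustrative assumption, not the paper's method.
from itertools import combinations
from collections import Counter

def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

def train_pairwise(X, y):
    """Train one (centroid-pair) sub-model per pair of classes."""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        models[(a, b)] = (
            centroid([x for x, c in zip(X, y) if c == a]),
            centroid([x for x, c in zip(X, y) if c == b]),
        )
    return models

def predict(models, x):
    dist = lambda p, q: sum((u - v) ** 2 for u, v in zip(p, q))
    votes = Counter()
    for (a, b), (ca, cb) in models.items():
        votes[a if dist(x, ca) <= dist(x, cb) else b] += 1
    return votes.most_common(1)[0][0]

X = [[0.0], [0.1], [1.0], [1.1], [2.0], [2.1]]
y = [0, 0, 1, 1, 2, 2]
m = train_pairwise(X, y)
print(predict(m, [1.05]))  # 1
```

Note the "impertinent information" issue the abstract raises: the (0, 2) sub-model must still cast a vote on a point from class 1, even though neither of its classes is correct.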
On the Decomposition of Polychotomies Into Dichotomies
, 1996
Abstract

Cited by 26 (5 self)
Many important classification problems are polychotomies, i.e. the data are organized into K classes with K > 2. Given an unknown function F : Ω → {1, …, K} representing a polychotomy, an algorithm aimed at "learning" this polychotomy will produce an approximation of F, based on the knowledge of a set of pairs {(x_p, F(x_p))}, p = 1, …, P. Although in the wide variety of learning tools there exist some learning algorithms capable of handling polychotomies, many of the interesting tools were designed for dichotomies (K = 2). Therefore, many researchers are compelled to use techniques to decompose a polychotomy into a series of dichotomies in order to apply their favorite algorithms to the resolution of a general problem. A decomposition method based on error-correcting codes has lately been proposed and shown to be very efficient. However, this decomposition is designed only on the basis of K, without taking the data into account. In this paper, we explore alter...
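The error-correcting-code decomposition mentioned above can be sketched as follows: each class is assigned a binary codeword, each codeword bit defines one dichotomy, and a new observation is assigned to the class whose codeword is nearest in Hamming distance to the vector of dichotomy outputs. The code matrix below is an illustrative fixed choice, i.e. exactly the kind of data-independent design the paper criticizes.

```python
# Error-correcting output code (ECOC) decoding sketch. The code matrix is an
# illustrative assumption designed only from K, not from the data.

CODES = {            # class -> codeword; each bit position is one dichotomy
    "A": (0, 0, 1, 1, 0),
    "B": (0, 1, 0, 1, 1),
    "C": (1, 0, 0, 0, 1),
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def decode(bits):
    """Class whose codeword is Hamming-nearest to the dichotomy outputs."""
    return min(CODES, key=lambda c: hamming(CODES[c], bits))

# Even with one dichotomy answering incorrectly, decoding recovers class B:
print(decode((0, 1, 0, 0, 1)))  # B
```

With enough Hamming distance between codewords, a few erroneous dichotomies can be corrected, which is the source of the method's robustness.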
Spanned Patterns for the Logical Analysis of Data
, 2002
Abstract

Cited by 12 (6 self)
In a finite dataset consisting of positive and negative observations represented as real-valued n-vectors, a positive (negative) pattern is an interval in R^n with the property that it contains sufficiently many positive (negative) observations, and sufficiently few negative (positive) ones. A pattern is spanned if it does not properly include any other interval containing the same set of observations. Although large collections of spanned patterns can provide highly accurate classification models within the framework of the Logical Analysis of Data, no efficient method for their generation is currently known. We propose in this paper an incremental polynomial-time algorithm for the generation of all spanned patterns in a dataset, which runs in linear time in the output; the algorithm closely resembles the Blake and Quine consensus method for finding the prime implicants of Boolean functions. The efficiency of the proposed algorithm is tested on various publicly available datasets. In the last part of the paper, we present the results of a series of computational experiments which show the high degree of robustness of spanned patterns.
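The "spanned" property can be illustrated with a brute-force check, assuming axis-parallel intervals (boxes): the span of a set of points is the smallest box containing them, and a pattern is spanned precisely when shrinking it to the span of the observations it covers changes nothing. The names and the check below are illustrative; they are not the paper's consensus-type generation algorithm.

```python
# Sketch of the "spanned pattern" notion: a box is spanned iff it equals the
# span (componentwise min/max) of the observations it covers. Brute-force
# check for illustration only.

def span(points):
    """Smallest box, as (lo, hi) per coordinate, containing all points."""
    return [(min(c), max(c)) for c in zip(*points)]

def covers(box, p):
    return all(lo <= x <= hi for (lo, hi), x in zip(box, p))

def is_spanned(box, observations):
    inside = [p for p in observations if covers(box, p)]
    return bool(inside) and span(inside) == [tuple(b) for b in box]

obs = [(0, 0), (1, 2), (2, 1)]
print(is_spanned([(0, 2), (0, 2)], obs))  # True: equals the span of obs
print(is_spanned([(0, 3), (0, 3)], obs))  # False: larger box, same coverage
```

The second box covers exactly the same observations but properly includes the first, so only the first is spanned.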
Finding essential attributes from binary data
, 2002
Abstract

Cited by 8 (1 self)
We consider data sets that consist of n-dimensional binary vectors representing positive and negative examples for some (possibly unknown) phenomenon. A subset S of the attributes (or variables) of such a data set is called a support set if the positive and negative examples can be distinguished by using only the attributes in S. In this paper we study the problem of finding small support sets, a frequently arising task in various fields, including knowledge discovery, data mining, learning theory, logical analysis of data, etc. We study the distribution of support sets in randomly generated data, and discuss why finding small support sets is important. We propose several measures of separation (real-valued set functions over the subsets of attributes), formulate optimization models for finding the smallest subsets maximizing these measures, and devise efficient heuristic algorithms to solve these (typically NP-hard) optimization problems. We prove that several of the proposed heuristics have a guaranteed constant approximation ratio, and we report on computational experience comparing these heuristics with some others from the literature, both on randomly generated and on real-world data sets.
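The support-set condition itself is simple to state in code: S is a support set iff no positive and negative example agree on every attribute in S. The exhaustive search below is an illustrative brute force, not one of the paper's heuristics (which exist precisely because this search is exponential).

```python
# Support-set sketch: S is a support set iff the projections of the positive
# and negative examples onto S are disjoint. Brute-force smallest-set search
# for illustration; the paper develops heuristics for this NP-hard problem.
from itertools import combinations

def is_support_set(S, pos, neg):
    project = lambda v: tuple(v[i] for i in S)
    return not set(map(project, pos)) & set(map(project, neg))

def smallest_support_set(pos, neg):
    n = len(pos[0])
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if is_support_set(S, pos, neg):
                return S
    return None

pos = [(1, 0, 1), (1, 1, 0)]
neg = [(0, 0, 1), (0, 1, 1)]
print(smallest_support_set(pos, neg))  # (0,): attribute 0 alone separates
```

Here attribute 0 alone distinguishes the classes, so the smallest support set is a singleton.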
Modeling Country Risk Ratings Using Partial Orders
 European Journal of Operational Research
, 2004
Abstract

Cited by 8 (7 self)
In order to evaluate the creditworthiness of various countries, a learning model is induced from the 1998 S&P country risk ratings, using the 1998 values of nine economic and three political indicators. This learning model allows the construction of a partially ordered set describing the relative superiority of countries on the basis of their creditworthiness, and it is shown that the Condorcet linear extensions of this poset closely match the S&P ratings. Moreover, the ratings derived from the model correlate highly with those of other rating agencies. The model is shown to provide excellent ratings even when applied to the following years' data or to the ratings of previously unrated countries. Rating changes implemented by S&P in subsequent years resolved most of the (few) discrepancies between the constructed poset and S&P's initial ratings.
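One simple way a partial order of this kind can arise is componentwise dominance: country a ranks above country b when a is at least as good on every indicator and strictly better on at least one. The data and the dominance rule below are invented for illustration (the paper's model is learned from the S&P ratings, not from raw dominance), but the sketch shows why the result is a poset rather than a total order.

```python
# Illustrative dominance poset: a > b iff a is componentwise >= b and not
# equal (assuming higher indicator values are better). Countries X, Y, Z and
# their indicator vectors are hypothetical.

indicators = {
    "X": (3, 2, 3),
    "Y": (2, 2, 1),
    "Z": (1, 3, 2),
}

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and a != b

relation = sorted(
    (a, b)
    for a in indicators for b in indicators
    if dominates(indicators[a], indicators[b])
)
print(relation)  # [('X', 'Y')]: X dominates Y; Z is incomparable to both
```

Because Z is incomparable to both X and Y, the poset has several linear extensions; picking among them (e.g. by a Condorcet-type rule, as in the paper) is what produces a usable rating.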
Appendix to “Combinatorial analysis of breast cancer data from gene expression microarrays”, http://rutcor.rutgers.edu/ ~alexe/Appendix_LAD_BC.xls
Abstract

Cited by 7 (5 self)
Using the methodology of the Logical Analysis of Data (LAD) we have reanalyzed the publicly available breast cancer gene expression microarray dataset.
Pheromonic representation of user quests by digital structures
 In Proceedings of the 62nd Annual Meeting of the American Society for Information Science
, 1999
Abstract

Cited by 6 (3 self)
In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other, are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents.
Vector space model: The vector space model represents a document as a bag of words, in which the order of occurrence is ignored, but the frequency with which a word appears in the text is considered (Salton, 1971). We use this approach as the starting point for our analysis.
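The vector space model referenced above can be sketched in a few lines: documents become bags of words (term-frequency vectors) compared by cosine similarity. The tokenizer and example texts are illustrative assumptions, not the project's DIP representation.

```python
# Bag-of-words vector space model sketch: term-frequency vectors compared by
# cosine similarity. Whitespace tokenization is an illustrative simplification.
from collections import Counter
from math import sqrt

def bag(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d1 = bag("ants follow pheromone trails")
d2 = bag("pheromone trails guide ants")
d3 = bag("vector space retrieval model")
print(cosine(d1, d2) > cosine(d1, d3))  # True: d1 is closer to d2
```

Word order is discarded, so d1 and d2 score highly despite different phrasing, while d3 shares no terms with d1 and scores zero.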
Pattern-based clustering and attribute analysis
 RUTCOR Research
, 2003
Abstract

Cited by 5 (3 self)
The Logical Analysis of Data (LAD) is a combinatorics-, optimization- and logic-based methodology for the analysis of datasets with binary or numerical input variables and binary outcomes. It has been established in previous studies that LAD provides a competitive classification tool, comparable in efficiency to the top classification techniques available. The goal of this paper is to show that the methodology of LAD can be useful in the discovery of new classes of observations and in the analysis of attributes. After a brief description of the main concepts of LAD, two efficient combinatorial algorithms are described for the generation of all prime, respectively all spanned, patterns (rules) satisfying certain conditions. It is shown that the application of classic clustering techniques to the set of observations represented in prime pattern space leads to the identification of a subclass of, say, positive observations which is accurately recognizable and is sharply distinct from the observations in the opposite, say negative, class. It is also shown that the set of all spanned patterns allows the introduction of a measure of significance and of a concept of monotonicity in the set of attributes. Acknowledgements: The partial support provided by ONR grant N00014-92-J-1375 and DIMACS is gratefully acknowledged.
Comprehensive vs. Comprehensible Classifiers in Logical Analysis of Data
 RUTCOR Research Report, RRR 92002; DIMACS Technical Report 200249; Annals of Operations Research (in print
, 2002
Abstract

Cited by 5 (3 self)
The main objective of this paper is to compare the classification accuracy provided by large, comprehensive collections of patterns (rules) derived from archives of past observations with that provided by small, comprehensible collections of patterns. This comparison is carried out on the basis of an empirical study, using several publicly available datasets. The results of this study show that the use of comprehensive collections allows a slight increase in classification accuracy, and that the "cost of comprehensibility" is small.