Results 1–7 of 7
Robust Linear Programming Discrimination Of Two Linearly Inseparable Sets
, 1992
Abstract

Cited by 238 (32 self)
INTRODUCTION We consider the two point sets A and B in the n-dimensional real space R^n, represented by the m × n matrix A and the k × n matrix B respectively. Our principal objective here is to formulate a single linear program with the following properties: (i) If the convex hulls of A and B are disjoint, a strictly separating plane is obtained. (ii) If the convex hulls of A and B intersect, a plane is obtained that minimizes some measure of misclassified points, for all possible cases. (iii) No extraneous constraints are imposed on the linear program that rule out any specific case from consideration. Most linear programming formulations 6,5,12,4 have property (i)
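The linear program sketched in this abstract can be written down concretely. The following is a minimal sketch, assuming the standard robust-LP formulation that minimizes the average violations of A w ≥ e γ + e and B w ≤ e γ − e; the variable names, solver choice (scipy), and toy data are ours, not the paper's:

```python
# Minimal sketch of robust LP discrimination of two point sets, assuming the
# standard formulation: minimize (1/m) sum(y) + (1/k) sum(z) subject to
# A w - e*gamma + y >= e,  -B w + e*gamma + z >= e,  y, z >= 0.
import numpy as np
from scipy.optimize import linprog

def robust_lp(A, B):
    """Find a plane x.w = gamma that separates A from B when possible,
    and otherwise minimizes the average constraint violations."""
    m, n = A.shape
    k, _ = B.shape
    # Decision vector: [w (n), gamma (1), y (m), z (k)].
    c = np.concatenate([np.zeros(n + 1), np.full(m, 1.0 / m), np.full(k, 1.0 / k)])
    # -A w + e*gamma - y <= -e   (i.e. A w - e*gamma + y >= e)
    top = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    #  B w - e*gamma - z <= -e   (i.e. -B w + e*gamma + z >= e)
    bot = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    A_ub = np.vstack([top, bot])
    b_ub = -np.ones(m + k)
    # w and gamma are free; the slack variables y, z are nonnegative.
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.x[n]

# Toy data: two well-separated clusters, so a strictly separating plane exists.
rng = np.random.default_rng(0)
A = rng.normal(loc=2.0, scale=0.5, size=(20, 2))
B = rng.normal(loc=-2.0, scale=0.5, size=(15, 2))
w, gamma = robust_lp(A, B)
print((A @ w > gamma).all(), (B @ w < gamma).all())
```

Because the toy clusters are disjoint, the optimal slacks are zero and the returned plane strictly separates the two sets, which is exactly property (i) above; with overlapping clusters the same program returns the violation-minimizing plane of property (ii).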
A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-three Old and New Classification Algorithms
, 2000
Abstract

Cited by 225 (8 self)
Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based algorithm called Polyclass at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is Quest with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. Polyclass, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The Quest and logistic regression algor...
Pattern Recognition Via Linear Programming: Theory And Application To Medical Diagnosis
, 1990
Abstract

Cited by 79 (13 self)
A decision problem associated with a fundamental nonconvex model for linearly inseparable pattern sets is shown to be NP-complete. Another nonconvex model, which employs the ∞-norm instead of the 2-norm, can be solved in polynomial time by solving 2n linear programs, where n is the (usually small) dimensionality of the pattern space. An effective LP-based finite algorithm is proposed for solving the latter model. The algorithm is employed to obtain a nonconvex piecewise-linear function for separating points representing measurements made on fine needle aspirates taken from benign and malignant human breasts. A computer program trained on 369 samples has correctly diagnosed each of 45 new samples encountered and is currently in use at the University of Wisconsin Hospitals. 1. Introduction. The fundamental problem we wish to address is that of distinguishing between elements of two distinct pattern sets. Mathematically we can formulate the problem as follows. Given two disjoint fin...
Extracting Rules From Pruned Neural Networks for Breast Cancer Diagnosis
 Artificial Intelligence in Medicine
, 1996
Abstract

Cited by 31 (3 self)
A new algorithm for neural network pruning is presented. Using this algorithm, networks with a small number of connections and high accuracy rates for breast cancer diagnosis are obtained. We then describe how rules can be extracted from a pruned network by considering only a finite number of hidden unit activation values. The accuracy of the extracted rules is as high as the accuracy of the pruned network. For the breast cancer diagnosis problem, the concise rules extracted from the network achieve an accuracy rate of more than 95% on both the training data set and the test data set. Keywords: neural network pruning; penalty function; rule extraction; breast cancer diagnosis. 1 Introduction Neural network techniques have recently been applied to many medical diagnostic problems [1, 2, 4, 5, 11, 22]. Although the predictive accuracy of neural networks is often higher than that of other methods or human experts, it is generally difficult to understand how the network arrives a...
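The key step described above — considering only a finite number of hidden unit activation values — can be illustrated with a rough sketch. The quantization scheme here (snapping each activation to the nearest of a few quantile-based centers) is our simplification for illustration, not necessarily the paper's exact clustering procedure:

```python
# Sketch: replace each hidden unit's continuous activations by a small set of
# discrete values, so rules can be enumerated over finitely many network states.
import numpy as np

def discretize(acts, n_levels=3):
    """Snap each activation to the nearest of n_levels representative values
    (quantile-based centers; a stand-in for the paper's clustering step)."""
    centers = np.quantile(acts, np.linspace(0.0, 1.0, n_levels))
    nearest = np.argmin(np.abs(acts[:, None] - centers[None, :]), axis=1)
    return centers[nearest]

# Six continuous activations collapse onto three representative values,
# after which the rule extractor only needs to consider 3 states per unit.
acts = np.array([0.02, 0.05, 0.48, 0.51, 0.97, 0.99])
disc = discretize(acts)
print(disc)
```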
NeuroLinear: From neural networks to oblique decision rules
, 1997
Abstract

Cited by 22 (4 self)
We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classification of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a significant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artificial datasets. Our experimental results on real-world datasets show that the system is effective in extracting compact and comprehensible rules with high predictive accuracy from neural networks. Keywords: rule extraction, oblique rule, pruning, discretization. 1 Introduction Neural networks have been widely applied to solve classification problems. Comparisons between neural networks and decision tre...
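A hypothetical example makes the axis-parallel/oblique distinction concrete. The weights, threshold, and class labels below are made up for illustration; they are not taken from the paper:

```python
# Hypothetical oblique decision rule of the kind NeuroLinear extracts:
# the condition tests a general hyperplane, not a single attribute.
def oblique_rule(x):
    w = [2.1, -0.8, 0.5]                 # hyperplane normal (made up)
    s = sum(wi * xi for wi, xi in zip(w, x))
    return "class-1" if s <= 3.0 else "class-2"

# An axis-parallel rule, by contrast, may only test one attribute at a time,
# e.g. "if x[0] <= 3.0 then ...", so approximating a tilted boundary can
# require many such rules where a single oblique condition suffices.
print(oblique_rule([1.0, 1.0, 1.0]))     # 2.1 - 0.8 + 0.5 = 1.8 <= 3.0
```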
An Empirical Comparison of Decision Trees and Other Classification Methods
, 1998
Abstract

Cited by 17 (1 self)
Twenty-two decision tree, nine statistical, and two neural network classifiers are compared on thirty-two datasets in terms of classification error rate, computational time, and (in the case of trees) number of terminal nodes. It is found that the differences in average error rates for a majority of the classifiers are not statistically significant, but the computational times of the classifiers differ over a wide range. The statistical POLYCLASS classifier based on a logistic regression spline algorithm has the lowest average error rate. However, it is also one of the most computationally intensive. The classifier based on standard polytomous logistic regression and a decision tree classifier using the QUEST algorithm with linear splits have the second lowest average error rates and are about 50 times faster than POLYCLASS. Among decision tree classifiers with univariate splits, the classifiers based on the C4.5, INDCART, and QUEST algorithms have the best combination of error rate and speed, althoug...
A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-three Old and New Classification Algorithms
, 1997
Abstract
Abstract. Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based algorithm called Polyclass at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is Quest with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. Polyclass, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The Quest and logistic regression algorithms are substantially faster. Among decision tree algorithms with univariate splits, C4.5, IndCart, and Quest have the best combinations of error rate and speed. But C4.5 tends to produce trees with twice as many leaves as those from IndCart and Quest.