Results 1–10 of 21
The Complexity and Approximability of Finding Maximum Feasible Subsystems of Linear Relations
Theoretical Computer Science, 1993
Cited by 76 (12 self)
Abstract:
We study the combinatorial problem which consists, given a system of linear relations, of finding a maximum feasible subsystem, that is, a solution satisfying as many relations as possible. The computational complexity of this general problem, named Max FLS, is investigated for the four types of relations =, ≥, > and ≠. Various constrained versions of Max FLS, where a subset of relations must be satisfied or where the variables take bounded discrete values, are also considered. We establish the complexity of solving these problems optimally and, whenever they are intractable, we determine their degree of approximability. Max FLS with =, ≥ or > relations is NP-hard even when restricted to homogeneous systems with bipolar coefficients, whereas it can be solved in polynomial time for ≠ relations with real coefficients. The various NP-hard versions of Max FLS belong to different approximability classes depending on the type of relations and the additional constraints. We show that the ran...
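The abstract above defines Max FLS operationally: score each candidate solution by the number of relations it satisfies. A minimal brute-force sketch, illustrative only and not the paper's method; the encoding of relations as (coefficients, right-hand side, operator) triples is my own assumption:

```python
from itertools import product

def satisfies(a, b, op, x):
    """Check a single linear relation  a.x  op  b."""
    lhs = sum(ai * xi for ai, xi in zip(a, x))
    return {"==": lhs == b, ">=": lhs >= b, ">": lhs > b, "!=": lhs != b}[op]

def max_fls(relations, candidates):
    """Return a candidate satisfying the most relations. Exhaustive
    search: Max FLS is NP-hard, so this only scales to toy instances."""
    return max(candidates, key=lambda x: sum(satisfies(a, b, op, x)
                                             for a, b, op in relations))

# Homogeneous system with bipolar coefficients; x1 + x2 > 0 and
# -x1 - x2 > 0 contradict each other, so at most 2 of 3 relations hold.
relations = [((1, 1), 0, ">"), ((1, -1), 0, ">"), ((-1, -1), 0, ">")]
candidates = list(product([-1, 0, 1], repeat=2))
best = max_fls(relations, candidates)
```

The exhaustive scan makes the objective explicit; the papers in this list study how well that objective can be approximated when enumeration is infeasible.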
On the Approximability of Minimizing Nonzero Variables Or Unsatisfied Relations in Linear Systems
1997
Cited by 71 (4 self)
Abstract:
We investigate the computational complexity of two closely related classes of combinatorial optimization problems for linear systems which arise in various fields such as machine learning, operations research and pattern recognition. In the first class (Min ULR) one wishes, given a possibly infeasible system of linear relations, to find a solution that violates as few relations as possible while satisfying all the others. In the second class (Min RVLS) the linear system is supposed to be feasible and one looks for a solution with as few nonzero variables as possible. For both Min ULR and Min RVLS the four basic types of relational operators =, ≥, > and ≠ are considered. While Min RVLS with equations was known to be NP-hard in [27], we established in [2, 5] that Min ULR with equalities and inequalities are NP-hard even when restricted to homogeneous systems with bipolar coefficients. The latter problems have been shown hard to approximate in [8]. In this paper we determine strong bou...
FROM FINDING MAXIMUM FEASIBLE SUBSYSTEMS OF LINEAR SYSTEMS TO FEEDFORWARD NEURAL NETWORK DESIGN
1994
Margin-based generalization error bounds for threshold decision lists
Journal of Machine Learning Research, 2003
Cited by 9 (2 self)
Abstract:
This paper concerns the use of threshold decision lists for classifying data into two classes. The use of such methods has a natural geometrical interpretation and can be appropriate for an iterative approach to data classification, in which some points of the data set are given a particular classification according to a linear threshold function (or hyperplane), are then removed from consideration, and the procedure is iterated until all points are classified. We analyse theoretically the generalization properties of data classification techniques that are based on the use of threshold decision lists and the subclass of multilevel threshold functions. We obtain bounds on the generalization error that depend on the levels of separation, or margins, achieved by the successive linear classifiers.
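A threshold decision list, as described above, tests linear threshold functions in order and outputs the label of the first test that fires. A hypothetical sketch; the (weights, threshold, label) representation is my own, not the paper's notation:

```python
def predict(decision_list, x, default=-1):
    """Evaluate a threshold decision list: the first linear threshold
    test w.x >= t that fires determines the label; otherwise fall
    through to the default label."""
    for w, t, label in decision_list:
        if sum(wi * xi for wi, xi in zip(w, x)) >= t:
            return label
    return default

# A one-dimensional multilevel threshold function: +1 on [2, inf),
# -1 on [1, 2), and the default label +1 below 1.
dl = [((1.0,), 2.0, +1), ((1.0,), 1.0, -1)]
labels = [predict(dl, (v,), default=+1) for v in (3.0, 1.5, 0.0)]
```

The margins the paper bounds are the separations each successive hyperplane achieves on the points it classifies before they are removed.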
On the Approximability of Removing the Smallest Number of Relations from Linear Systems to Achieve Feasibility
Department of Mathematics, Swiss Federal Institute of Technology, Lausanne, 1995
Cited by 6 (4 self)
Abstract:
We investigate the computational complexity of the problem which consists, given a system of linear relations, of finding a solution violating as few relations as possible while satisfying all the others. This general combinatorial problem, referred to as Min ULR, is considered for the four basic types of relational operators =, ≥, > and ≠. We proved in [3] that Min ULR with =, ≥ or > relations is NP-hard even when restricted to homogeneous systems with bipolar coefficients, whereas it is trivial for ≠ relations. In this paper we determine strong bounds on the approximability of various intractable variants, including constrained ones where the variables are restricted to take bounded discrete values. The various NP-hard versions of Min ULR belong to different approximability classes depending on the type of relations and the additional constraints, but none of them can be approximated within any constant factor unless P=NP. In the process of studying Min ULR we also derive strong boun...
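Min ULR is the complement of Max FLS: instead of maximizing satisfied relations, one counts and minimizes the violated ones. A toy sketch under the same assumed (coefficients, right-hand side, operator) encoding, not the paper's algorithm:

```python
def violations(relations, x):
    """Count how many relations a.x op b the point x violates;
    Min ULR asks for an x minimizing this count (0 iff feasible)."""
    ops = {"==": lambda l, b: l == b, ">=": lambda l, b: l >= b,
           ">": lambda l, b: l > b, "!=": lambda l, b: l != b}
    return sum(not ops[op](sum(ai * xi for ai, xi in zip(a, x)), b)
               for a, b, op in relations)

# x = 1 satisfies the first equation but violates the second:
count = violations([((1,), 1, "=="), ((1,), 2, "==")], (1,))
```

For ≠ relations the problem is trivial in the sense the abstract notes: a generic point violates none of finitely many ≠ constraints.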
The Decision List Machine
Advances in Neural Information Processing Systems 15 (pp. 921–928), MIT Press, 2003
Cited by 6 (1 self)
Abstract:
We introduce a new learning algorithm for decision lists to allow features that are constructed from the data and to allow a tradeoff between accuracy and complexity. We bound its generalization error in terms of the number of errors and the size of the classifier it finds on the training data. We also compare its performance on some natural data sets with the set covering machine and the support vector machine.
On the Approximability of Some NP-hard Minimization Problems for Linear Systems
1996
Cited by 5 (0 self)
Abstract:
We investigate the computational complexity of two classes of combinatorial optimization problems related to linear systems and study the relationship between their approximability properties. In the first class (Min ULR) one wishes, given a possibly infeasible system of linear relations, to find a solution that violates as few relations as possible while satisfying all the others. In the second class (Min RVLS) the linear system is supposed to be feasible and one looks for a solution with as few nonzero variables as possible. For both Min ULR and Min RVLS the four basic types of relational operators =, ≥, > and ≠ are considered. While Min RVLS with equations was known to be NP-hard in [27], we established in [2, 6] that Min ULR with equalities and inequalities are NP-hard even when restricted to homogeneous systems with bipolar coefficients. The latter problems have been shown hard to approximate in [8]. In this paper we determine strong bounds on the approximability of various variants of Min RVLS and Min ULR, including constrained ones where the variables are restricted to take bounded discrete values or where some relations are mandatory while others are optional. The various NP-hard versions turn out to have different approximability properties depending on the type of relations and the additional constraints, but none of them can be approximated within any constant factor, unless P=NP. Two interesting special cases of Min RVLS and Min ULR that arise in discriminant analysis and machine learning are also discussed. In particular, we disprove a conjecture presented in [57] regarding the existence of a polynomial time algorithm to design linear classifiers (or perceptrons) that use a close-to-minimum number of features.
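For Min RVLS the system is feasible and one minimizes the number of nonzero variables. A brute-force sketch over a small discrete grid, illustrative only; Min RVLS with equations is NP-hard, and this grid encoding is my own assumption:

```python
from itertools import product

def min_rvls(equations, grid, n):
    """Among grid points satisfying every equation a.x = b, return one
    with the fewest nonzero variables (exhaustive; toy instances only)."""
    feasible = [x for x in product(grid, repeat=n)
                if all(sum(ai * xi for ai, xi in zip(a, x)) == b
                       for a, b in equations)]
    return min(feasible, key=lambda x: sum(v != 0 for v in x))

# x1 + x2 + x3 = 2 admits sparse solutions with a single nonzero entry.
best = min_rvls([((1, 1, 1), 2)], (0, 1, 2), 3)
```

This sparsity objective is exactly the feature-minimization question behind the linear-classifier special case discussed at the end of the abstract.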
Learning with decision lists of datadependent features
Journal of Machine Learning Research, 2005
Cited by 5 (0 self)
Abstract:
We present a learning algorithm for decision lists which allows features that are constructed from the data and allows a tradeoff between accuracy and complexity. We provide bounds on the generalization error of this learning algorithm in terms of the number of errors and the size of the classifier it finds on the training data. We also compare its performance on some natural data sets with the set covering machine and the support vector machine. Furthermore, we show that the proposed bounds on the generalization error provide effective guides for model selection.
On Learning µ-Perceptron Networks on the Uniform Distribution
Neural Networks, 1995
Cited by 4 (2 self)
Abstract:
We investigate the learnability, under the uniform distribution, of neural concepts that can be represented as simple combinations of nonoverlapping perceptrons (also called µ-perceptrons) with binary weights and arbitrary thresholds. Two perceptrons are said to be nonoverlapping if they do not share any input variables. Specifically, we investigate, within the distribution-specific PAC model, the learnability of perceptron unions, decision lists, and generalized decision lists. In contrast to most neural network learning algorithms, we do not assume that the architecture of the network is known in advance. Rather, it is the task of the algorithm to find both the architecture of the net and the weight values necessary to represent the function to be learned. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target con...
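The nonoverlap condition above is purely structural: two perceptrons are nonoverlapping when their input-variable sets are disjoint. A small sketch, representing each perceptron just by the set of input indices it reads, which is my own simplification:

```python
def nonoverlapping(perceptrons):
    """True iff no input variable is shared by two perceptrons,
    where each perceptron is given as a set of input indices."""
    seen = set()
    for inputs in perceptrons:
        if seen & inputs:        # a shared variable means overlap
            return False
        seen |= inputs
    return True

disjoint = nonoverlapping([{0, 1}, {2, 3}])   # no shared inputs
shared = nonoverlapping([{0, 1}, {1, 2}])     # both read input 1
```

Disjoint inputs are what make the statistical estimates in the paper's algorithms tractable: each perceptron's contribution can be probed independently.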