Results 1–10 of 67
The Complexity and Approximability of Finding Maximum Feasible Subsystems of Linear Relations
 Theoretical Computer Science
, 1993
Abstract
Cited by 75 (12 self)
We study the combinatorial problem which consists, given a system of linear relations, of finding a maximum feasible subsystem, that is, a solution satisfying as many relations as possible. The computational complexity of this general problem, named Max FLS, is investigated for the four types of relations =, ≥, > and ≠. Various constrained versions of Max FLS, where a subset of relations must be satisfied or where the variables take bounded discrete values, are also considered. We establish the complexity of solving these problems optimally and, whenever they are intractable, we determine their degree of approximability. Max FLS with =, ≥ or > relations is NP-hard even when restricted to homogeneous systems with bipolar coefficients, whereas it can be solved in polynomial time for ≠ relations with real coefficients. The various NP-hard versions of Max FLS belong to different approximability classes depending on the type of relations and the additional constraints. We show that the ran...
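As an illustrative aside (not from the paper): while Max FLS is NP-hard in general, the special case of equality relations in a single variable is trivial, since each relation a·x = b with a ≠ 0 is satisfied only by x = b/a. A minimal sketch, with all names hypothetical:

```python
from collections import Counter
from fractions import Fraction

def max_fls_1d(relations):
    # Each relation (a, b) encodes a*x == b with integer coefficients.
    # A relation with a != 0 is satisfied only by x = b/a, so the best x
    # is the most frequent ratio; a == 0 relations hold for every x iff b == 0.
    always = sum(1 for a, b in relations if a == 0 and b == 0)
    ratios = Counter(Fraction(b, a) for a, b in relations if a != 0)
    if not ratios:
        return None, always
    x, count = ratios.most_common(1)[0]
    return x, count + always

# 2x=4, 3x=6 and 4x=8 agree on x=2; 1x=5 disagrees; 0x=0 always holds.
x, satisfied = max_fls_1d([(2, 4), (3, 6), (1, 5), (0, 0), (4, 8)])
```

With several variables the subsystems interact and no such counting argument applies, which is where the hardness results above take over.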
On the Approximability of Minimizing Nonzero Variables Or Unsatisfied Relations in Linear Systems
, 1997
Abstract
Cited by 69 (4 self)
We investigate the computational complexity of two closely related classes of combinatorial optimization problems for linear systems which arise in various fields such as machine learning, operations research and pattern recognition. In the first class (Min ULR) one wishes, given a possibly infeasible system of linear relations, to find a solution that violates as few relations as possible while satisfying all the others. In the second class (Min RVLS) the linear system is supposed to be feasible and one looks for a solution with as few nonzero variables as possible. For both Min ULR and Min RVLS the four basic types of relational operators =, ≥, > and ≠ are considered. While Min RVLS with equations was known to be NP-hard in [27], we established in [2, 5] that Min ULR with equalities and inequalities is NP-hard even when restricted to homogeneous systems with bipolar coefficients. The latter problems have been shown hard to approximate in [8]. In this paper we determine strong bou...
A Polynomial-time Algorithm for Learning Noisy Linear Threshold Functions
, 1996
Abstract
Cited by 61 (12 self)
In this paper we consider the problem of learning a linear threshold function (a halfspace in n dimensions, also called a "perceptron"). Methods for solving this problem generally fall into two categories. In the absence of noise, this problem can be formulated as a Linear Program and solved in polynomial time with the Ellipsoid Algorithm or Interior Point methods. Alternatively, simple greedy algorithms such as the Perceptron Algorithm are often used in practice and have certain provable noise-tolerance properties; but their running time depends on a separation parameter, which quantifies the amount of "wiggle room" available for a solution, and can be exponential in the description length of the input. In this paper, we show how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. Suitably combining these hypotheses results in a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise. (Also, a polynomial-time algorithm for learning linear threshold functions in the Statistical Query model of Kearns.) Our algorithm is based on a new method for removing outliers in data. Specifically, for any set S of points in R^n, each given to b bits of precision, we show that one can remove only a small fraction of S so that in the remaining set T, for every vector v, max_{x in T} (v · x)^2 ≤ poly(n, b) · E_{x in T}[(v · x)^2]; i.e., for any hyperplane through the origin, the maximum distance (squared) from a point in T to the plane is at most polynomially larger than the average. After removing these outliers, we are able to show that a modified v...
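The "simple greedy" baseline the abstract refers to, the classic Perceptron Algorithm, can be sketched in a few lines (illustrative only; this is not the paper's modified algorithm, and the toy data are invented):

```python
def perceptron(samples, epochs=100):
    # Classic rule: on a mistake for (x, y) with y in {-1, +1}, add y*x.
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        mistakes = 0
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:   # converged: all examples classified correctly
            break
    return w

# Separable toy data: the label is the sign of the first coordinate.
data = [((1.0, 0.5), 1), ((2.0, -1.0), 1), ((-1.0, 0.3), -1), ((-2.0, 1.0), -1)]
w = perceptron(data)
```

The number of epochs needed grows with the inverse of the separation margin, which is exactly the dependence the paper works to remove.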
A "Thermal" Perceptron Learning Rule
, 1992
Abstract
Cited by 48 (0 self)
The thermal perceptron is a simple extension to Rosenblatt's perceptron learning rule for training individual linear threshold units. It finds stable weights for non-separable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gradient descent. The learning rule stabilizes the weights (learns) over a fixed training period. For separable problems it finds separating weights much more quickly than the usual rules.
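A minimal sketch of the thermal update, assuming a linear annealing schedule for the temperature (the paper's exact schedule and the choice of T0 may differ; all names here are illustrative):

```python
import math

def thermal_perceptron(samples, t0=1.0, epochs=50):
    # On a mistake, damp the usual correction y*x by exp(-|phi|/T), where
    # phi = w.x is the net input: examples far on the wrong side of the
    # boundary (likely noise) barely move the weights.  T anneals linearly
    # from t0 toward 0, freezing the weights over the fixed training period.
    n = len(samples[0][0])
    w = [0.0] * n
    for epoch in range(epochs):
        temp = t0 * (1.0 - epoch / epochs)
        for x, y in samples:
            phi = sum(wi * xi for wi, xi in zip(w, x))
            if y * phi <= 0 and temp > 0:
                damp = math.exp(-abs(phi) / temp)
                w = [wi + y * xi * damp for wi, xi in zip(w, x)]
    return w

data = [((1.0, 0.5), 1), ((2.0, -1.0), 1), ((-1.0, 0.3), -1), ((-2.0, 1.0), -1)]
w = thermal_perceptron(data)
```

On separable data the damping has little effect; its value shows on non-separable problems, where the plain perceptron would keep oscillating.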
AdaBoosting neural networks
 Neural Computation
, 1997
Abstract
Cited by 44 (5 self)
Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multilayer artificial neural networks. We show that training multilayer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting one hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.
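The incremental scheme described, repeatedly adding one weak classifier that minimizes a weighted error and then reweighting, is AdaBoost-like. A hedged sketch with 1-D threshold "stumps" standing in for hidden units (all names and data invented for illustration):

```python
import math

def adaboost(samples, rounds=10):
    # samples: list of (x, y) with scalar x and label y in {-1, +1}.
    m = len(samples)
    d = [1.0 / m] * m                     # per-example weights
    ensemble = []                         # list of (alpha, threshold, sign)
    for _ in range(rounds):
        # Weak learner: the threshold stump minimizing the weighted error.
        best = None
        for thr in sorted({x for x, _ in samples}):
            for sign in (1, -1):
                err = sum(di for di, (x, y) in zip(d, samples)
                          if (sign if x >= thr else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Reweight: mistakes gain weight, correct examples lose it.
        d = [di * math.exp(-alpha * y * (sign if x >= thr else -sign))
             for di, (x, y) in zip(d, samples)]
        z = sum(d)
        d = [di / z for di in d]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x >= t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

data = [(0.0, -1), (1.0, -1), (2.0, 1), (3.0, 1)]
ensemble = adaboost(data, rounds=5)
```

In the paper's setting each round's weak learner is a full linear classifier over the inputs rather than a stump, but the insert-then-reweight loop is the same shape.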
Generative Learning Structures and Processes for Generalized Connectionist Networks
, 1991
Abstract
Cited by 29 (18 self)
Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes the popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative, for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network t...
On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists
, 1992
Abstract
Cited by 27 (5 self)
In this paper, we take a close look at the problem of learning simple neural concepts under the uniform distribution of examples. By simple neural concepts we mean concepts that can be represented as simple combinations of perceptrons (halfspaces). One such class of concepts is the class of halfspace intersections. By formalizing the problem of learning halfspace intersections as a set covering problem, we are led to consider the following subproblem: given a set of non-linearly separable examples, find the largest linearly separable subset of it. We give an approximation algorithm for this NP-hard subproblem. Simulations, on both linearly and non-linearly separable functions, show that this approximation algorithm works well under the uniform distribution, outperforming the Pocket algorithm used by many constructive neural algorithms. Based on this approximation algorithm, we present a greedy method for learning halfspace intersections. We also present extensive numerical...
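The Pocket algorithm used as the baseline here can be sketched as follows (a deterministic variant that cycles through the data; Gallant's original draws examples at random, and all names are illustrative):

```python
def pocket(samples, epochs=20):
    # Run perceptron updates, but keep "in the pocket" the weight vector
    # that has correctly classified the most training examples so far; on
    # non-separable data this approximates the largest separable subset.
    n = len(samples[0][0])
    w = [0.0] * n

    def score(v):
        return sum(1 for x, y in samples
                   if y * sum(vi * xi for vi, xi in zip(v, x)) > 0)

    best_w, best_score = list(w), score(w)
    for _ in range(epochs):
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]  # perceptron step
                s = score(w)
                if s > best_score:                         # pocket the best
                    best_w, best_score = list(w), s
    return best_w, best_score

# Non-separable 1-D data: (3.0, -1) contradicts the positive examples,
# so the largest linearly separable subset has size 3.
samples = [((1.0,), 1), ((2.0,), 1), ((-1.0,), -1), ((3.0,), -1)]
w, hits = pocket(samples)
```

The pocketed weights never come with a guarantee of optimality, which is what motivates the paper's approximation algorithm for the subset problem.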
Noise tolerant variants of the perceptron algorithm
 Journal of Machine Learning Research
, 2005
Abstract
Cited by 27 (2 self)
A large number of variants of the Perceptron algorithm have been proposed and partially evaluated in recent work. One type of algorithm aims for noise tolerance by replacing the last hypothesis of the perceptron with another hypothesis or a vote among hypotheses. Another type simply adds a margin term to the perceptron in order to increase robustness and accuracy, as done in support vector machines. A third type borrows further from support vector machines and constrains the update function of the perceptron in ways that mimic soft-margin techniques. The performance of these algorithms, and the potential for combining different techniques, has not been studied in depth. This paper provides such an experimental study and reveals some interesting facts about the algorithms. In particular, the perceptron with margin is an effective method for tolerating noise and stabilizing the algorithm. This is surprising since the margin in itself is not designed or used for noise tolerance, and there are no known guarantees for such performance. In most cases, similar performance is obtained by the voted perceptron, which has the advantage that it does not require parameter selection. Techniques using soft-margin ideas are runtime-intensive and do not give additional performance benefits. The results also highlight the difficulty with automatic parameter selection which is required with some of these variants.
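The margin variant the study singles out can be sketched as follows (assuming a fixed additive update and a fixed margin threshold; the evaluated implementations may differ):

```python
def margin_perceptron(samples, margin=1.0, epochs=100):
    # Update whenever the functional margin y*(w.x) is below the target,
    # not only on outright mistakes; this pushes the boundary away from
    # the data, which is what the study credits for the noise tolerance.
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        updated = False
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) < margin:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:   # every example now clears the margin
            break
    return w

data = [((1.0, 0.5), 1), ((2.0, -1.0), 1), ((-1.0, 0.3), -1), ((-2.0, 1.0), -1)]
w = margin_perceptron(data)
```

The margin value is the parameter whose selection the paper flags as difficult to automate.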
Combining Prior Symbolic Knowledge And Constructive Neural Network Learning
 Connection Science
, 1993
Abstract
Cited by 23 (8 self)
The concepts of knowledge-based systems and machine learning are combined by integrating an expert system and a constructive neural network learning algorithm. Two approaches are explored: embedding the expert system directly and converting the expert system rule base into a neural network. This initial system is then extended by constructively learning additional hidden units in a problem-specific manner. Experiments performed indicate that generalization of a combined system surpasses that of each system individually.
On Sequential Construction of Binary Neural Networks
, 1995
Abstract
Cited by 20 (8 self)
A new technique, called Sequential Window Learning (SWL), for the construction of two-layer perceptrons with binary inputs is presented.