Results 1–10 of 49
On Projection Algorithms for Solving Convex Feasibility Problems
, 1996
"... Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of the ..."
Abstract

Cited by 189 (32 self)
 Add to MetaCart
Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given.
1991 Mathematics Subject Classification. Primary 47H09, 49M45, 65-02, 65J05, 90C25; Secondary 26B25, 41A65, 46C99, 46N10, 47N10, 52A05, 52A41, 65F10, 65K05, 90C90, 92C55.
Key words and phrases. Angle between two subspaces, averaged mapping, Cimmino's method, computerized tomography, convex feasibility problem, convex function, convex inequalities, convex programming, convex set, Fejér monotone sequence, firmly nonexpansive mapping, H...
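The prototypical member of this algorithm family is cyclic (alternating) projection. A minimal sketch in Python, assuming two sets with closed-form projections (a halfspace and a Euclidean ball); the set choices and helper names are illustrative, not taken from the paper:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a.y <= b} (closed form)."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def project_ball(x, center, r):
    """Project x onto the Euclidean ball of radius r around center."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + (r / n) * d

def cyclic_projections(x0, projections, iters=500):
    """Cyclically project onto each set; when the intersection is
    nonempty, the iterates converge to a point in it."""
    x = x0
    for _ in range(iters):
        for P in projections:
            x = P(x)
    return x

a, b = np.array([1.0, 1.0]), 1.0   # halfspace x1 + x2 <= 1
c, r = np.array([0.0, 0.0]), 2.0   # ball of radius 2 at the origin
x = cyclic_projections(np.array([5.0, 5.0]),
                       [lambda v: project_halfspace(v, a, b),
                        lambda v: project_ball(v, c, r)])
```

The returned point lies (to numerical tolerance) in both sets, i.e. in their intersection.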
Analysis of perceptron-based active learning
 In COLT
, 2005
"... Abstract. We start by showing that in an active learning setting, the Perceptron algorithm needs \Omega ( 1ffl2) labels to learn linear separators within generalization error ffl. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron ..."
Abstract

Cited by 68 (10 self)
 Add to MetaCart
Abstract. We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just Õ(d log 1/ε) labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm.
1 Introduction. In many machine learning applications, unlabeled data is abundant but labeling is expensive. This distinction is not captured in the standard PAC or online models of supervised learning, and has motivated the field of active learning, in which the labels of data points are initially hidden, and the learner must pay for each label it wishes revealed. If query points are chosen randomly, the number of labels needed to reach a target generalization error ε, at a target confidence level 1 − δ, is similar to the sample complexity of supervised learning. The hope is that there are alternative querying strategies which require significantly fewer
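A schematic of the margin-based selective sampling idea in Python. The fixed query threshold, dimension, and target vector below are simplifications for illustration (the paper's filtering rule adapts the threshold over time); the update shown is the norm-preserving "modified perceptron" step:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, d):
    """Draw n points uniformly from the unit sphere in R^d."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

d = 5
w_star = np.zeros(d); w_star[0] = 1.0   # hidden target separator (our choice)
w = sample_sphere(1, d)[0]              # current hypothesis, a unit vector

queried = 0
for x in sample_sphere(20000, d):
    margin = w @ x
    # Query the label only when x falls near the current decision boundary
    # (a fixed band here; the paper adapts this threshold).
    if abs(margin) > 0.1:
        continue
    queried += 1
    y = np.sign(w_star @ x)
    if np.sign(margin) != y:
        # Modified perceptron update: reflect the component of w along x.
        # Since ||x|| = 1, this keeps ||w|| = 1 exactly.
        w = w - 2 * margin * x

# Generalization error of a linear separator on the uniform sphere
# equals the angle between w and w_star divided by pi.
error = np.arccos(np.clip(w @ w_star, -1.0, 1.0)) / np.pi
```

Only points inside the margin band are queried, so far fewer than 20000 labels are requested, while the error drops well below that of a random hypothesis.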
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
"... A smooth approximation p(x; ff) to the plus function: maxfx; 0g, is obtained by integrating the sigmoid function 1=(1 + e \Gammaffx ), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization probl ..."
Abstract

Cited by 66 (6 self)
 Add to MetaCart
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
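The smoothing itself is easy to reproduce: integrating the sigmoid 1/(1 + e^(−αx)) gives p(x, α) = x + (1/α) log(1 + e^(−αx)), which overestimates max(x, 0) by at most log(2)/α, with the worst gap at x = 0. A small sketch (the stable evaluation via logaddexp is our choice, not from the paper):

```python
import numpy as np

def p(x, alpha):
    """Smooth approximation to max(x, 0), obtained by integrating the
    sigmoid 1/(1 + exp(-alpha*x)); p(x, alpha) -> max(x, 0) as alpha grows."""
    # logaddexp(0, -alpha*x) = log(1 + exp(-alpha*x)), computed stably
    return x + np.logaddexp(0.0, -alpha * x) / alpha

xs = np.linspace(-3.0, 3.0, 601)  # grid includes x = 0
gaps = {alpha: np.max(p(xs, alpha) - np.maximum(xs, 0.0))
        for alpha in (1.0, 10.0, 100.0)}
# The maximum gap is attained at x = 0, where it equals log(2)/alpha,
# so it shrinks to zero as alpha increases.
```

Differentiating p confirms the construction: dp/dx = 1 − e^(−αx)/(1 + e^(−αx)) = 1/(1 + e^(−αx)), the sigmoid itself.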
Arbitrary-Norm Separating Plane
 Operations Research Letters
, 1997
"... A plane separating two point sets in ndimensional real space is constructed such that it minimizes the sum of arbitrarynorm distances of misclassified points to the plane. In contrast to previous approaches that used surrogates for distanceminimization, the present work is based on a precise norm ..."
Abstract

Cited by 47 (13 self)
 Add to MetaCart
A plane separating two point sets in n-dimensional real space is constructed such that it minimizes the sum of arbitrary-norm distances of misclassified points to the plane. In contrast to previous approaches that used surrogates for distance minimization, the present work is based on a precise norm-dependent explicit closed form for the projection of a point on a plane. This projection is used to formulate the separating-plane problem as a minimization of a convex function on a unit sphere in a norm dual to that of the arbitrary norm used. For the 1-norm, the problem can be solved in polynomial time by solving 2n linear programs or by solving a bilinear program. For a general p-norm, the minimization problem can be transformed via an exact penalty formulation to minimizing the sum of a convex function and a bilinear function on a convex set. For the one and infinity norms, a finite successive linearization algorithm can be used for solving the exact penalty formulation.
1 Introduction...
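The duality at the heart of this formulation can be illustrated with distances: the p-norm distance from a point to the plane {y : w·y = γ} is the residual |w·x − γ| divided by the dual q-norm of the normal, where 1/p + 1/q = 1. A small numerical check (the example vectors are ours, not from the paper):

```python
import numpy as np

def dist_to_plane(x, w, gamma, p):
    """p-norm distance from x to the plane {y : w.y = gamma}:
    |w.x - gamma| / ||w||_q, where q is the norm dual to p."""
    q = np.inf if p == 1 else (1 if p == np.inf else p / (p - 1))
    return abs(w @ x - gamma) / np.linalg.norm(w, q)

w = np.array([3.0, 4.0])
x = np.array([2.0, 2.0])
gamma = 4.0                                  # residual |w.x - gamma| = 10
d2 = dist_to_plane(x, w, gamma, 2)           # Euclidean: 10 / ||w||_2 = 10/5
d1 = dist_to_plane(x, w, gamma, 1)           # 1-norm uses ||w||_inf = 4
dinf = dist_to_plane(x, w, gamma, np.inf)    # inf-norm uses ||w||_1 = 7
```

For p = 2 this reduces to the familiar Euclidean point-to-plane distance; the 1-norm and infinity-norm cases swap in the dual norm of the normal vector.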
Mathematical Programming in Neural Networks
 ORSA Journal on Computing
, 1993
"... This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard d ..."
Abstract

Cited by 41 (13 self)
 Add to MetaCart
This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard description in terms of a mean square error in the output space is also given, which leads to the use of unconstrained minimization techniques for training a neural network. The linear programming approach is demonstrated by a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility.
1 What is a Neural Network? A neural network is a representation of a map between an input space and an output space. A principal aim of such a map is to discriminate between the elements of a finite number of disjoint sets in the input space. Typically one wishes to discriminate between the elements of two disjoint point sets in the n-dim...
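As a concrete illustration of the linear-programming route to a separating plane, here is one standard LP of this kind: minimize the average violations of the separation inequalities A·w ≥ γ + 1 and B·w ≤ γ − 1, solved with scipy.optimize.linprog. This is a sketch of the general approach under our own toy data, not the specific system described in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def lp_separating_plane(A, B):
    """Find (w, gamma) for a plane w.x = gamma separating point sets A and B
    by minimizing average slacks y, z in A w >= gamma*1 + 1 - y and
    B w <= gamma*1 - 1 + z.  Variables stacked as [w, gamma, y, z]."""
    m, n = A.shape
    k = B.shape[0]
    c = np.concatenate([np.zeros(n + 1), np.full(m, 1 / m), np.full(k, 1 / k)])
    # -A w + gamma*1 - y <= -1  (class A on the positive side)
    G1 = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    #  B w - gamma*1 - z <= -1  (class B on the negative side)
    G2 = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    G = np.vstack([G1, G2])
    h = -np.ones(m + k)
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
    res = linprog(c, A_ub=G, b_ub=h, bounds=bounds)
    return res.x[:n], res.x[n]

A = np.array([[2.0, 2.0], [3.0, 1.0], [2.5, 3.0]])      # one class (toy data)
B = np.array([[-1.0, -1.0], [0.0, -2.0], [-2.0, 0.0]])  # the other class
w, gamma = lp_separating_plane(A, B)
```

Since this toy data is linearly separable, the optimal slacks are zero and the returned plane strictly separates the two sets with unit margin.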
Multicategory Discrimination via Linear Programming
 OPTIMIZATION METHODS AND SOFTWARE
, 1992
"... A single linear program is proposed for discriminating between the elements of k disjoint point sets in the ndimensional real space R n : When the conical hulls of the k sets are (k \Gamma 1)point disjoint in R n+1 , a kpiece piecewiselinear surface generated by the linear program completely ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
A single linear program is proposed for discriminating between the elements of k disjoint point sets in the n-dimensional real space R^n. When the conical hulls of the k sets are (k − 1)-point disjoint in R^(n+1), a k-piece piecewise-linear surface generated by the linear program completely separates the k sets. This improves on a previous linear programming approach which required that each set be linearly separable from the remaining k − 1 sets. When the conical hulls of the k sets are not (k − 1)-point disjoint, the proposed linear program generates an error-minimizing piecewise-linear separator for the k sets. For this case it is shown that the null solution is never a unique solver of the linear program and occurs only under the rather rare condition when the mean of each point set equals the mean of the means of the other k − 1 sets. This makes the proposed linear programming formulation computationally useful for approximately discriminating between k sets...
The Maximum Feasible Subsystem Problem and Vertex-Facet Incidence of Polyhedra, PhD thesis
, 2002
"... BranchAndCut for the ..."
Stochastic Algorithms for Exact and Approximate Feasibility of Robust LMIs
, 2001
"... In this note, we discuss fast randomized algorithms for determining an admissible solution for robust linear matrix inequalities (LMIs) of the form ( 1) 0, where is the optimization variable and 1 is the uncertainty, which belongs to a given set 1. The proposed algorithms are based on uncertainty r ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
In this note, we discuss fast randomized algorithms for determining an admissible solution for robust linear matrix inequalities (LMIs) of the form F(x, Δ) ⪯ 0, where x is the optimization variable and Δ is the uncertainty, which belongs to a given set 𝚫. The proposed algorithms are based on uncertainty randomization: the first algorithm finds a robust solution in a finite number of iterations with probability one, if a strong feasibility condition holds. In case no robust solution exists, the second algorithm computes an approximate solution which minimizes the expected value of a suitably selected feasibility indicator function. The theory is illustrated by examples of application to uncertain linear inequalities and quadratic stability of interval matrices.
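A toy version of the randomization idea, applied to one of the note's illustrations (uncertain linear inequalities rather than full LMIs): repeatedly sample the uncertainty, and apply a projection-type correction whenever the sampled constraint is violated. All constants below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

def robust_feasibility(sample_constraint, x0, iters=2000):
    """Randomized scheme in the spirit of the note: at each step, draw a
    random instance of the uncertain constraint a.x <= b, and if it is
    violated, project x onto the corresponding halfspace."""
    x = x0
    for _ in range(iters):
        a, b = sample_constraint()
        viol = a @ x - b
        if viol > 0:
            x = x - (viol / (a @ a)) * a   # Euclidean projection step
    return x

def sample_constraint():
    # Uncertain inequality (1 + delta)*x1 + x2 <= 1 with |delta| <= 0.5
    delta = rng.uniform(-0.5, 0.5)
    return np.array([1.0 + delta, 1.0]), 1.0

x = robust_feasibility(sample_constraint, np.array([3.0, 3.0]))
```

After enough random draws, the iterate approximately satisfies even near-worst-case realizations of the uncertainty, since each projection lands on the sampled constraint's boundary and later steps only push the point further inside.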