Results 1–10 of 22
Coresets for Polytope Distance
, 2009
Abstract

Cited by 14 (4 self)
Following recent work of Clarkson, we translate the coreset framework to the problems of finding the point closest to the origin inside a polytope, finding the shortest distance between two polytopes, Perceptrons, and soft- as well as hard-margin Support Vector Machines (SVM). We prove asymptotically matching upper and lower bounds on the size of coresets, stating that ɛ-coresets of size ⌈(1 + o(1))E*/ɛ⌉ do always exist as ɛ → 0, and that this is best possible. The crucial quantity E* is what we call the excentricity of a polytope, or of a pair of polytopes. Additionally, we prove linear convergence speed of Gilbert’s algorithm, one of the earliest known approximation algorithms for polytope distance, and generalize both the algorithm and the proof to the two-polytope case. Interestingly, our coreset bounds also imply that we can for the first time prove matching upper and lower bounds for the sparsity of Perceptron and SVM solutions.
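Gilbert’s algorithm, cited in the abstract above, admits a short sketch for the one-polytope case (the point of a convex hull nearest the origin). The rendering below is a textbook version with my own naming, not the paper’s exact formulation:

```python
import numpy as np

def gilbert_min_norm(points, iters=1000, tol=1e-8):
    """Approximate the point of conv(points) nearest the origin
    (a textbook sketch of Gilbert's algorithm)."""
    P = np.asarray(points, dtype=float)
    x = P[0].copy()                           # start at any vertex
    for _ in range(iters):
        p = P[np.argmin(P @ x)]               # vertex most aligned with -x
        d = p - x
        dd = d @ d
        if dd < tol:                          # x already coincides with p
            break
        t = np.clip(-(x @ d) / dd, 0.0, 1.0)  # exact line search on [x, p]
        if t == 0.0:                          # no further improvement
            break
        x = x + t * d
    return x
```

For the triangle with vertices (1, 1), (1, -1), (3, 0), the routine converges to (1, 0), whose norm is the polytope's distance to the origin.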
Iterative Least Squares Functional Networks Classifier
 IEEE Transactions on Neural Networks
, 2007
Abstract

Cited by 3 (0 self)
Abstract—This paper proposes unconstrained functional networks as a new classifier for pattern recognition problems. Both the methodology and the learning algorithm for this kind of computational intelligence classifier, based on the iterative least squares optimization criterion, are derived. The performance of the new scheme is demonstrated and examined on real-world applications, and a comparative study with the most common classification algorithms from both the machine learning and statistics communities is carried out. The study uses only sets of second-order linearly independent polynomial functions to approximate the neuron functions. The results show that this new classifier framework is reliable, flexible, stable, and achieves high-quality performance. Index Terms—Functional networks, minimum description length, statistical pattern recognition.
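As a rough illustration of the abstract’s ingredients, the snippet below fits ±1 labels by least squares over a second-order polynomial basis; the exact basis and the iterative scheme of the paper may differ, and the names are mine:

```python
import numpy as np

def poly2_features(X):
    """All monomials of degree <= 2 in the input coordinates."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_ls_classifier(X, y):
    """Least-squares fit of +/-1 labels on the polynomial basis."""
    Phi = poly2_features(X)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(w, X):
    """Classify by the sign of the fitted polynomial."""
    return np.sign(poly2_features(X) @ w)
```

The quadratic basis handles XOR-type data that no linear classifier can separate, since the product term x1·x2 is available as a feature.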
Conjugate Relation between Loss Functions and Uncertainty Sets in Classification Problems
Abstract

Cited by 2 (1 self)
There are two main approaches to binary classification problems: the loss function approach and the uncertainty set approach. The loss function approach is widely used in real-world data analysis, and statistical decision theory has been used to elucidate its properties, such as statistical consistency. Conditional probabilities can also be estimated from the minimizer of the loss function. In the uncertainty set approach, an uncertainty set is defined for each binary label from the training samples, and the best separating hyperplane between the two uncertainty sets is used as the decision function. Although the uncertainty set approach provides an intuitive understanding of learning algorithms, its statistical properties have not been sufficiently studied. In this paper, we show that the uncertainty set is deeply connected with the convex conjugate of a loss function. On the basis of this conjugate relation, we propose a way of revising the uncertainty set approach so that it has good statistical properties, such as statistical consistency. We also introduce statistical models corresponding to uncertainty sets in order to estimate conditional probabilities. Finally, we present numerical experiments verifying that learning with revised uncertainty sets improves prediction accuracy.
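The convex conjugate at the heart of this correspondence can be illustrated with the hinge loss; the following is a standard computation, not taken from the paper:

```latex
\ell^*(u) \;=\; \sup_{v \in \mathbb{R}} \bigl\{\, uv - \ell(v) \,\bigr\},
\qquad \ell(v) = \max(0,\, 1 - v)
\;\;\Longrightarrow\;\;
\ell^*(u) \;=\;
\begin{cases}
  u, & -1 \le u \le 0,\\
  +\infty, & \text{otherwise.}
\end{cases}
```

The finite domain [-1, 0] of the conjugate bounds the dual variables, and it is this bounded set that, roughly speaking, plays the role of the uncertainty set in the conjugate relation the abstract describes.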
Learning Translation Invariant Kernels for Classification
Abstract

Cited by 1 (0 self)
Appropriate selection of the kernel function, which implicitly defines the feature space of an algorithm, plays a crucial role in the success of kernel methods. In this paper, we consider the problem of optimizing a kernel function over the class of translation invariant kernels for the task of binary classification. The learning capacity of this class is invariant with respect to rotation and scaling of the features, and it encompasses the set of radial kernels. We show how translation invariant kernel functions can be embedded in a nested set of subclasses and consider the kernel learning problem over one of these subclasses; this allows the choice of an appropriate subclass based on the problem at hand. We use the criterion proposed by Lanckriet et al. (2004) to obtain a functional formulation for the problem. It is proven that the optimal kernel is a finite mixture of cosine functions. The kernel learning problem is then formulated as a semi-infinite programming (SIP) problem, which is solved by a sequence of quadratically constrained quadratic programming (QCQP) subproblems. Using the fact that the cosine kernel is of rank two, we propose a formulation of a QCQP subproblem that does not require the kernel matrices to be loaded into memory, making the method applicable to large-scale problems. We also address the issue of including
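The rank-two property of the cosine kernel mentioned in the abstract follows from cos(a − b) = cos a cos b + sin a sin b, so the n × n kernel matrix factors as the sum of two outer products of length-n vectors and never needs to be stored. A minimal sketch (the function name is mine):

```python
import numpy as np

def cosine_kernel_features(X, omega):
    """For the cosine kernel K_ij = cos(omega . (x_i - x_j)),
    return vectors c, s with K = c c^T + s s^T (rank two)."""
    z = X @ omega          # projections of all samples onto omega
    return np.cos(z), np.sin(z)
```

A QCQP subproblem can then work with c and s directly instead of the full kernel matrix, which is what makes a memory-free formulation possible.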
Shape-Based Tumor Retrieval in Mammograms Using Relevance-Feedback Techniques
 Artificial Neural Networks – ICANN 2010
, 2010
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Abstract. This paper presents an experimental "morphological analysis" retrieval system for mammograms, using Relevance-Feedback techniques. The features adopted are first-order statistics of the Normalized Radial Distance, extracted from the annotated mass boundary. The system is evaluated on an extensive dataset of 2274 masses from the DDSM database, covering 7 distinct classes. The experiments verify that involving the radiologist as part of the retrieval process improves the results, even for such a hard classification task, reaching a precision rate of almost 90%. Relevance-Feedback can therefore be employed as a very useful complementary tool for a Computer Aided Diagnosis system.
unknown title
Abstract
The usage of convex hulls for classification is discussed with a practical algorithm in which a sample is classified according to its distances to the convex hulls. Sometimes the convex hulls of the classes are too close to keep a large margin. In this paper, we discuss a way to keep a margin larger than a specified value. To do this, we introduce the concept of an "expanded convex hull" and confirm its effectiveness. Keywords—Pattern recognition, margin, convex hull.
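The distance-to-hull rule described above can be sketched by solving, for each class, min ||Pᵀa − q|| over the simplex. The penalty-row trick below enforces the sum-to-one constraint only approximately and is my own simplification, not the paper’s algorithm:

```python
import numpy as np
from scipy.optimize import nnls

def dist_to_hull(q, P, penalty=1e4):
    """Distance from point q to conv(rows of P): min ||P.T a - q||
    over a >= 0, sum(a) = 1; the equality constraint is enforced
    softly by a heavily weighted extra row in the NNLS system."""
    P = np.asarray(P, float)
    q = np.asarray(q, float)
    A = np.vstack([P.T, penalty * np.ones((1, len(P)))])
    b = np.concatenate([q, [penalty]])
    a, _ = nnls(A, b)
    return np.linalg.norm(P.T @ a - q)

def hull_classify(q, classes):
    """Label q by the nearest class convex hull."""
    return min(classes, key=lambda lbl: dist_to_hull(q, classes[lbl]))
```

An "expanded" hull in the spirit of the abstract could then be emulated by scaling each class's points outward from their centroid before computing distances, though the paper's exact construction may differ.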
RIDGE-ADJUSTED SLACK VARIABLE OPTIMIZATION FOR SUPERVISED CLASSIFICATION
Abstract
This document has been downloaded from Chalmers Publication Library (CPL). It is the author's
The Rotating Calipers: An Efficient, Multipurpose, Computational Tool
, 2014
Abstract
A paper published in 1983 established that the rotating calipers paradigm provides an elegant, simple, and yet powerful computational tool for solving several geometric problems. In the present paper the history of this tool is reviewed, and stock is taken of the rich variety of computational two-dimensional problems and applications that have been tackled with it during the past thirty years.
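A classic instance of the paradigm is computing the diameter of a convex polygon in linear time by rotating a pair of parallel supporting lines; the sketch below assumes the vertices are given in counter-clockwise order:

```python
def polygon_diameter(hull):
    """Diameter of a convex polygon (vertices counter-clockwise),
    via the rotating-calipers antipodal-pair scan."""
    n = len(hull)
    if n < 2:
        return 0.0
    if n == 2:
        (x0, y0), (x1, y1) = hull
        return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

    def cross(o, a, b):  # twice the signed area of triangle o-a-b
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def d2(a, b):        # squared distance between two vertices
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    best, j = 0, 1
    for i in range(n):
        ni = (i + 1) % n
        # advance the caliper: j stays antipodal to edge (i, i+1)
        while cross(hull[i], hull[ni], hull[(j + 1) % n]) > \
              cross(hull[i], hull[ni], hull[j]):
            j = (j + 1) % n
        best = max(best, d2(hull[i], hull[j]), d2(hull[ni], hull[j]))
    return best ** 0.5
```

Each vertex and each caliper position is visited a constant number of times, giving the O(n) bound that makes the technique attractive.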
Complex Support Vector Regression
Abstract
Abstract—We present a support vector regression (SVR) rationale for treating complex data, exploiting the notions of widely linear estimation and pure complex kernels. To compute the Lagrangian and derive the dual problem, we employ the recently presented Wirtinger's calculus on complex RKHS. We prove that this approach is equivalent to solving two real SVR problems with a specific real kernel, which is induced by the chosen complex kernel.
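The "two real problems, one shared real kernel" decomposition can be illustrated with kernel ridge regression standing in for SVR; this is a deliberate simplification for brevity, and the names below are assumptions, not the paper's construction:

```python
import numpy as np

def complex_kernel_ridge(K, y, lam=1e-2):
    """Fit complex targets y by solving two independent real kernel
    ridge problems (real and imaginary parts) that share one real
    kernel matrix K; predictions are K_test @ alpha."""
    A = K + lam * np.eye(len(y))
    alpha_re = np.linalg.solve(A, y.real)
    alpha_im = np.linalg.solve(A, y.imag)
    return alpha_re + 1j * alpha_im
```

Because the two real systems share the single matrix A, one factorization serves both solves, mirroring the economy the abstract claims for the induced real kernel.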