Results 1 – 10 of 13,319
Learning Linearly Separable Languages
In Proceedings of the 17th International Conference on Algorithmic Learning Theory (ALT 2006), 2006
"... Abstract. This paper presents a novel paradigm for learning languages that consists of mapping strings to an appropriate high-dimensional feature space and learning a separating hyperplane in that space. It initiates the study of the linear separability of automata and languages by examining the ric ..."
Cited by 11 (3 self)
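A minimal sketch of the paradigm this abstract describes, using a made-up toy feature map (bigram counts over a two-letter alphabet) and a made-up toy "language"; the paper's actual kernels and feature spaces are not reproduced here:

```python
# Sketch: map strings into a feature space where a language becomes linearly
# separable. Toy language: strings over {a, b} that contain the substring "ab".

BIGRAMS = ["aa", "ab", "ba", "bb"]

def phi(s):
    # Feature vector of bigram counts (an illustrative choice, not the paper's).
    return [sum(1 for i in range(len(s) - 1) if s[i:i+2] == g) for g in BIGRAMS]

pos = ["ab", "aab", "abb"]       # contain "ab"         -> label +1
neg = ["ba", "bb", "aa", "b"]    # do not contain "ab"  -> label -1

# In this feature space the rule "count of 'ab' >= 1" is a separating
# hyperplane, e.g. w = (0, 1, 0, 0), b = -0.5:
w, b = (0, 1, 0, 0), -0.5
ok = all(sum(wi * xi for wi, xi in zip(w, phi(s))) + b > 0 for s in pos) and \
     all(sum(wi * xi for wi, xi in zip(w, phi(s))) + b < 0 for s in neg)
print(ok)    # prints True
```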
Query by Committee, Linear Separation and . . .
Theoretical Computer Science, 2002
"... Recent works have shown the advantage of using Active Learning methods, such as the Query by Committee (QBC) algorithm, in various learning problems. This class of algorithms requires an oracle with the ability to randomly select a consistent hypothesis according to some predefined distribution. When trying to implement such an oracle for the linear separators family of hypotheses, various problems should be solved. ..."
The Linear Separation Problem . . .
2002
"... We investigate the use of accpm (analytic center cutting plane method) for solving the linear separation problem, which is an important instance of the general data mining concept. Given two disjoint subsets of points, the problem is to find a hyperplane which separates these two subsets as well as po ..."
THE GEOMETRY OF LINEAR SEPARABILITY IN DATA SETS
"... Abstract. We study the geometry of datasets, using an extension of the Fisher linear discriminant to the case of singular covariance, and a new regularization procedure. A dataset is called linearly separable if its different clusters can be reliably separated by a linear hyperplane. We propose a me ..."
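The classical Fisher linear discriminant underlying this abstract can be sketched for two well-conditioned 2D clusters (the singular-covariance extension and the regularization procedure the paper proposes are not reproduced; the clusters are made up): the direction is w = Sw⁻¹(m1 − m2), with Sw the pooled within-class scatter.

```python
# Sketch of Fisher's linear discriminant for two made-up 2D clusters.

A = [(1.0, 2.0), (2.0, 3.0), (3.0, 3.0)]   # toy cluster 1
B = [(6.0, 5.0), (7.0, 7.0), (8.0, 6.0)]   # toy cluster 2

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def scatter(pts, m):
    # Within-class scatter matrix of one cluster about its mean.
    sxx = sum((p[0] - m[0]) ** 2 for p in pts)
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
    syy = sum((p[1] - m[1]) ** 2 for p in pts)
    return [[sxx, sxy], [sxy, syy]]

mA, mB = mean(A), mean(B)
SA, SB = scatter(A, mA), scatter(B, mB)
Sw = [[SA[i][j] + SB[i][j] for j in range(2)] for i in range(2)]

# w = Sw^{-1} (mA - mB), via the explicit 2x2 inverse (Sw is nonsingular here;
# the paper's contribution is precisely the singular case, which this skips).
det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
d = (mA[0] - mB[0], mA[1] - mB[1])
w = (( Sw[1][1] * d[0] - Sw[0][1] * d[1]) / det,
     (-Sw[1][0] * d[0] + Sw[0][0] * d[1]) / det)

# Projections onto w separate the clusters with no overlap.
projA = [w[0] * p[0] + w[1] * p[1] for p in A]
projB = [w[0] * p[0] + w[1] * p[1] for p in B]
print(max(projB) < min(projA))   # prints True
```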
Selection of the Linearly Separable Feature Subsets
"... Abstract. We address a situation when more than one feature subset allows for linear separability of given data sets. Such a situation can occur if a small number of cases is represented in a highly dimensional feature space. The method of feature selection based on minimisation of a special crite ..."
Cited by 2 (1 self)
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection
1997
"... We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images ..."
Cited by 2310 (17 self)
"... from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well separated classes ..."
Large Margin Classification Using the Perceptron Algorithm
Machine Learning, 1998
"... We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large ..."
Cited by 521 (2 self)
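The underlying Rosenblatt perceptron this abstract builds on can be sketched in a few lines (a minimal illustration on made-up data, not the voted/averaged variant the paper introduces):

```python
# Minimal perceptron for linearly separable data (illustrative sketch).
# Labels are +1/-1; w and b are updated only on mistakes, per Rosenblatt's rule.

def perceptron(points, labels, epochs=100):
    dim = len(points[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:                    # converged: all points separated
            break
    return w, b

# Toy separable data: points with first coordinate > 2 are labeled +1.
pts = [(1.0, 1.0), (2.0, 0.5), (3.0, 1.0), (4.0, 2.0)]
ys  = [-1, -1, +1, +1]
w, b = perceptron(pts, ys)
```

For linearly separable data the mistake-driven loop is guaranteed to terminate; the paper's contribution is extracting a large-margin classifier from the run rather than just any separator.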
Support-Vector Networks
Machine Learning, 1995
"... The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special pr ..."
Cited by 3703 (35 self)
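The core idea in this abstract, in miniature, on made-up 1D data (no margin maximization or kernels, just the non-linear map followed by a linear surface): classes that no threshold on x separates become linearly separable after mapping x to (x, x²).

```python
# Toy 1D classes: "inner" points (|x| < 1, label -1) vs "outer" (|x| > 1, +1).
# No single threshold on x separates them, but phi(x) = (x, x**2) does.

xs     = [-2.0, -1.5, -0.5, 0.5, 1.5, 2.0]
labels = [+1,   +1,   -1,   -1,  +1,  +1]

def phi(x):
    return (x, x * x)            # non-linear map into a 2D feature space

# In feature space the hyperplane w.z + b = 0 with w = (0, 1), b = -1
# (i.e. the curve x**2 = 1 in input space) separates the classes:
w, b = (0.0, 1.0), -1.0
separated = all(y * (w[0] * z0 + w[1] * z1 + b) > 0
                for (z0, z1), y in ((phi(x), y) for x, y in zip(xs, labels)))
print(separated)                 # prints True
```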
An analysis of transformations
Journal of the Royal Statistical Society, Series B (Methodological), 1964
"... In the analysis of data it is often assumed that observations y1, y2, ..., yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedasti ..."
Cited by 1067 (3 self)
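The power-transformation family this paper introduced (the Box-Cox transform) can be sketched directly; the illustrative values below are made up, and the paper's maximum-likelihood choice of λ is not reproduced:

```python
import math

def box_cox(y, lam):
    # Box-Cox power transform of a positive observation y:
    # (y**lam - 1)/lam for lam != 0, and log(y) at lam = 0 (the lam -> 0 limit),
    # so the family is continuous in lam.
    if y <= 0:
        raise ValueError("Box-Cox requires positive observations")
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# lam = 1 shifts the data without changing shape; small lam approaches log(y).
print(box_cox(10.0, 1.0))                                   # prints 9.0
print(round(box_cox(10.0, 1e-6), 4), round(box_cox(10.0, 0.0), 4))
```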
A tutorial on support vector machines for pattern recognition
Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Cited by 3393 (12 self)