Results 1–10 of 71
A tutorial on support vector regression
, 2004
Cited by 470 (2 self)
Abstract:
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from an SV perspective.
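As a hedged illustration (our own sketch, not code from the tutorial), the ε-insensitive loss that distinguishes SV regression from ordinary least-squares fitting can be written in a few lines; the function name and default tolerance are ours:

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    # Errors within the eps tube cost nothing; larger errors are
    # penalized linearly, which is what yields sparse SV solutions.
    return max(0.0, abs(y_true - y_pred) - eps)
```

Minimizing this loss plus a norm penalty on the weights is the regularization view the tutorial discusses.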
Multi-Instance Kernels
 In Proc. 19th International Conf. on Machine Learning
, 2002
Cited by 112 (3 self)
Abstract:
Learning from structured data is becoming increasingly important. However, most prior work on kernel methods has focused on learning from attribute-value data. Only recently has research started investigating kernels for structured data. This paper considers kernels for multi-instance problems, a class of concepts on individuals represented by sets. The main result of this paper is a kernel on multi-instance data that can be shown to separate positive and negative sets under natural assumptions. This kernel compares favorably with state-of-the-art multi-instance learning algorithms in an empirical study. Finally, we give some concluding remarks and propose future work that might further improve the results.
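One concrete instance of such a set-level kernel can be sketched as follows (a simplification under our own naming: an RBF instance kernel raised to a power, summed over all cross-bag instance pairs, in the spirit of the construction the paper studies):

```python
import math

def rbf(x, y, gamma=1.0):
    # Instance-level RBF kernel on fixed-length attribute vectors.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mi_kernel(bag_x, bag_y, p=2, gamma=1.0):
    # Multi-instance (set) kernel: sum the p-th power of the
    # instance-level kernel over every pair of instances drawn
    # from the two bags.
    return sum(rbf(x, y, gamma) ** p for x in bag_x for y in bag_y)
```

Because the kernel is a sum over instance pairs, it is positive semidefinite whenever the instance-level kernel is.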
Unsupervised Learning of Derivational Morphology From Inflectional Lexicons
 UNIVERSITY OF MARYLAND
, 1999
Cited by 49 (0 self)
Abstract:
We present in this paper an unsupervised method to learn suffixes and suffixation operations from an inflectional lexicon of a language. The elements acquired with our method are used to build stemming procedures and can assist lexicographers in the development of new lexical resources.
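A minimal sketch of a first step such a method might take (our own simplification, not the paper's algorithm: collect candidate suffix pairs from lexicon entries that share a sufficiently long common prefix; the function name and stem-length threshold are hypothetical):

```python
from collections import Counter

def candidate_suffix_pairs(lexicon, min_stem=3):
    # Count suffix pairs left over after stripping the longest
    # common prefix of each word pair; frequent pairs suggest
    # real suffixation operations (e.g. -ed / -ing).
    pairs = Counter()
    words = sorted(set(lexicon))
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            k = 0
            while k < min(len(w1), len(w2)) and w1[k] == w2[k]:
                k += 1
            if k >= min_stem:
                pairs[(w1[k:], w2[k:])] += 1
    return pairs
```

On a toy lexicon like ["walked", "walking", "talked", "talking"], the pair ("ed", "ing") is counted twice, hinting at a productive alternation.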
Clustering methods for the analysis of DNA microarray data
, 1999
Cited by 40 (0 self)
Abstract:
It is now possible to simultaneously measure the expression of thousands of genes during cellular differentiation and response, through the use of DNA microarrays. A major statistical task is to understand the structure in the data that arise from this technology. In this paper we review various methods of clustering, and illustrate how they can be used to arrange both the genes and cell lines from a set of DNA microarray experiments. The methods discussed are global clustering techniques including hierarchical, K-means, and block clustering, and tree-structured vector quantization. Finally, we propose a new method for identifying structure in subsets of both genes and cell lines that are potentially obscured by the global clustering approaches.
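As a hedged, toy-scale sketch of one of the reviewed methods (plain K-means on expression vectors; the function and data below are ours, not the paper's):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as the mean of its cluster
        # (keep the old center if a cluster goes empty).
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters
```

In the microarray setting, each point would be one gene's expression profile across cell lines (or vice versa for clustering cell lines).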
PNrule: A new framework for learning classifier models in data mining (a case study in network intrusion detection)
 IBM Research Report, Computer Science/Mathematics
, 2000
Cited by 25 (1 self)
Abstract:
Learning classifier models is an important problem in data mining. Observations from the real world are often recorded as a set of records, each characterized by multiple attributes. Associated with each record is a categorical attribute called class. Given a training set of records with known class labels, the problem is to ...
Metric-Based Methods for Adaptive Model Selection and Regularization
 Machine Learning
, 2001
Cited by 20 (0 self)
Abstract:
We present a general approach to model selection and regularization that exploits unlabeled data to adaptively control hypothesis complexity in supervised learning tasks. The idea is to impose a metric structure on hypotheses by determining the discrepancy between their predictions across the distribution of unlabeled data. We show how this metric can be used to detect untrustworthy training error estimates, and devise novel model selection strategies that exhibit theoretical guarantees against overfitting (while still avoiding underfitting). We then extend the approach to derive a general training criterion for supervised learning, yielding an adaptive regularization method that uses unlabeled data to automatically set regularization parameters. This new criterion adjusts its regularization level to the specific set of training data received, and performs well on a variety of regression and conditional density estimation tasks. The only proviso for these methods is that s...
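The hypothesis metric the abstract alludes to can be sketched minimally (our own simplified version, not the paper's exact definition: the discrepancy between two hypotheses is their average prediction difference over the unlabeled sample):

```python
def prediction_distance(f, g, unlabeled_xs):
    # Pseudo-metric on hypotheses induced by unlabeled data:
    # mean absolute difference between their predictions.
    return sum(abs(f(x) - g(x)) for x in unlabeled_xs) / len(unlabeled_xs)
```

Comparing this unlabeled-data distance with the distance measured on the training set is what flags hypotheses whose training error estimates are untrustworthy.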
Rule extraction from support vector machines
 In Proceedings of European Symposium on Artificial Neural Networks
, 2002
Cited by 20 (0 self)
Abstract:
Support vector machines (SVMs) are learning systems based on statistical learning theory that exhibit good generalization ability on real data sets. Nevertheless, a possible limitation of SVMs is that they generate black-box models. In this work, a procedure for rule extraction from support vector machines is proposed: the SVM+Prototypes method. This method gives explanation ability to SVMs. Once the decision function has been determined by means of an SVM, a clustering algorithm is used to determine prototype vectors for each class. These points are combined with the support vectors using geometric methods to define ellipsoids in the input space, which are later translated into if-then rules. By using the support vectors we can establish the limits of these regions.
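A hedged sketch of the region-to-rule step (our own simplification: the paper defines ellipsoids, while here an axis-aligned box is built around a cluster of same-class points and read off as one "if l_i <= x_i <= u_i then class" rule; the function name is hypothetical):

```python
def box_rule(class_points):
    # Axis-aligned stand-in for the paper's ellipsoids: one
    # (lower, upper) interval per input dimension, spanning the
    # cluster of points assigned to a prototype.
    dims = range(len(class_points[0]))
    return [(min(p[i] for p in class_points),
             max(p[i] for p in class_points)) for i in dims]
```

In the actual method, support vectors on the cluster boundary would tighten these limits to follow the SVM decision function.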
Using Log-linear Models to Compress Datacubes
 In WAIM ’00: Proceedings of the First International Conference on Web-Age Information Management
, 1999
Cited by 17 (0 self)
Abstract:
A data cube is a popular organization for summary data. A cube is simply a multidimensional structure that contains in each cell an aggregate value, i.e., the result of applying an aggregate function to an underlying relation. In practical situations, cubes can require a large amount of storage, so compressing them is of practical importance. In this paper, we propose an approximation technique that reduces the storage cost of the cube at the price of getting approximate answers for the queries posed against the cube. The idea is to characterize regions of the cube by using statistical models whose descriptions take less space than the data itself. Then, the model parameters can be used to estimate the cube cells with a certain level of accuracy. To increase the accuracy, some of the "outliers," i.e., cells that incur the largest errors when estimated, are retained. The storage taken by the model parameters and the retained cells, of course, should take a fraction of the space of the...
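The model-plus-outliers idea can be sketched with a deliberately toy analogue (our own construction, not the paper's log-linear models: a single robust constant stands in for the fitted model, and only badly estimated cells are retained exactly):

```python
from statistics import median

def compress_cells(cells, max_err):
    # Fit a trivial "model" (the median), then keep exactly the
    # cells whose model estimate errs by more than max_err.
    model = median(cells)
    outliers = {i: v for i, v in enumerate(cells) if abs(v - model) > max_err}
    return model, outliers

def estimate_cell(i, model, outliers):
    # Exact for retained outliers, approximate (model-based) otherwise.
    return outliers.get(i, model)
```

Storage is then one parameter plus the outlier list, instead of every cell; the real method replaces the constant with region-wise log-linear models.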
Knowledge Discovery from Sequential Data
, 2003
Cited by 17 (0 self)
Abstract:
A new framework for analyzing sequential or temporal data such as time series is proposed. It differs from other approaches by the special emphasis on the interpretability of the results, since interpretability is of vital importance for knowledge discovery, that is, the development of new knowledge (in the head of a human) from a list of discovered patterns. While traditional approaches try to model and predict all time series observations, the focus in this work is on modelling local dependencies in multivariate time series. This ...
Learning of Boolean functions using support vector machines
 In Proc. of the 12th International Conference on Algorithmic Learning Theory
, 2001
Cited by 16 (0 self)
Abstract:
This paper concerns the design of a Support Vector Machine (SVM) appropriate for the learning of Boolean functions. This is motivated by the need for a more sophisticated algorithm for classification in discrete attribute spaces. Classification in discrete attribute spaces is reduced to the problem of learning Boolean functions from examples of their input/output behavior. Since any Boolean function can be written in Disjunctive Normal Form (DNF), it can be represented as a weighted linear sum of all possible conjunctions of Boolean literals. This paper presents a particular kernel function called the DNF kernel which enables SVMs to efficiently learn such linear functions in the high-dimensional space whose coordinates correspond to all possible conjunctions. For a limited form of DNF consisting of positive Boolean literals, the monotone DNF kernel is also presented. SVMs employing these kernel functions can perform the learning in a high-dimensional feature space whose features are derived from given basic attributes. In addition, it is expected that SVMs' well-founded capacity control alleviates overfitting. In fact, an empirical study on learning of randomly generated Boolean functions shows that the resulting algorithm outperforms C4.5. Furthermore, in comparison with SVMs employing the Gaussian kernel, it is shown that the DNF kernel produces accuracy comparable to the best-adjusted Gaussian kernels.
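The monotone DNF kernel admits a simple closed form, which (to the best of our understanding, and hedged accordingly) counts the nonempty monotone conjunctions of positive literals satisfied by both binary inputs:

```python
def monotone_dnf_kernel(x, y):
    # Closed form 2^<x,y> - 1: each attribute active in both
    # vectors may or may not appear in a conjunction; subtract 1
    # to exclude the empty conjunction.
    shared = sum(a & b for a, b in zip(x, y))
    return 2 ** shared - 1
```

For example, with x = y = (1, 1, 0) the shared count is 2, and the three satisfied conjunctions are x1, x2, and x1 AND x2, so the kernel value is 3. Computing this inner product implicitly is what makes the exponential-size conjunction feature space tractable for an SVM.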