Results 1–10 of 39
Mining Distance-Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule
, 2003
"... Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic ..."
Abstract

Cited by 145 (4 self)
 Add to MetaCart
Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
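The nested loop and pruning rule described in this abstract can be sketched in a few lines (a minimal one-dimensional sketch; the function names and the use of absolute difference as the distance are ours, not the paper's):

```python
import random

def top_outliers(data, k=2, n=1, seed=0):
    """Top-n distance-based outliers (largest distance to their k-th
    nearest neighbour) via a simple nested loop plus pruning rule:
    scan in random order and abandon a candidate as soon as its k-NN
    distance drops below the weakest score in the current top-n."""
    rng = random.Random(seed)
    order = list(data)
    rng.shuffle(order)          # random order makes pruning kick in early
    top = []                    # (score, point), sorted descending
    cutoff = 0.0                # weakest score among the current top-n
    for i, x in enumerate(order):
        nearest = []            # distances to x's k nearest so far
        pruned = False
        for j, y in enumerate(order):
            if i == j:
                continue
            nearest.append(abs(x - y))
            nearest.sort()
            del nearest[k:]     # keep only the k smallest distances
            if len(nearest) == k and nearest[-1] < cutoff:
                pruned = True   # x can no longer be a top-n outlier
                break
        if not pruned and len(nearest) == k:
            top.append((nearest[-1], x))
            top.sort(reverse=True)
            del top[n:]
            if len(top) == n:
                cutoff = top[-1][0]
    return top
```

For most candidates the inner loop stops after a few comparisons, which is where the near linear average-case behaviour comes from.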
A Survey of Kernels for Structured Data
, 2003
"... Kernel methods in general and support vector machines in particular have been successful in various learning tasks on data represented in a single table. Much ‘realworld’ data, however, is structured – it has no natural representation in a single table. Usually, to apply kernel methods to ‘realwor ..."
Abstract

Cited by 138 (2 self)
 Add to MetaCart
Kernel methods in general and support vector machines in particular have been successful in various learning tasks on data represented in a single table. Much ‘real-world’ data, however, is structured: it has no natural representation in a single table. Usually, to apply kernel methods to ‘real-world’ data, extensive preprocessing is performed to embed the data into a real vector space and thus into a single table. This survey describes several approaches to defining positive definite kernels on structured instances directly.
Kernels and Distances for Structured Data
 Machine Learning
, 2004
"... This paper brings together two strands of machine learning of increasing importance: kernel methods and highly structured data. We propose a general method for constructing a kernel following the syntactic structure of the data, as defined by its type signature in a higherorder logic. Our main theo ..."
Abstract

Cited by 62 (3 self)
 Add to MetaCart
This paper brings together two strands of machine learning of increasing importance: kernel methods and highly structured data. We propose a general method for constructing a kernel following the syntactic structure of the data, as defined by its type signature in a higher-order logic. Our main theoretical result is the positive definiteness of any kernel thus defined. We report encouraging experimental results on a range of real-world datasets. By converting our kernel to a distance pseudo-metric for 1-nearest neighbour, we were able to improve the best accuracy from the literature on the Diterpene dataset by more than 10%.
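The kernel-to-distance conversion mentioned at the end of the abstract is the standard pseudo-metric induced by a kernel; a minimal sketch (the helper name is ours, not the paper's):

```python
import math

def kernel_to_distance(k, x, y):
    """Pseudo-metric induced by a kernel k, usable for 1-nearest
    neighbour:  d(x, y)^2 = k(x, x) - 2*k(x, y) + k(y, y).
    The max(..., 0.0) guards against tiny negative rounding errors."""
    return math.sqrt(max(k(x, x) - 2.0 * k(x, y) + k(y, y), 0.0))
```

With the linear kernel k(x, y) = x·y on numbers this reduces to the ordinary absolute difference.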
Logical Hidden Markov Models
 Journal of Artificial Intelligence Research
, 2006
"... Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evalu ..."
Abstract

Cited by 51 (13 self)
 Add to MetaCart
(Show Context)
Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence, and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
Frequent Subgraph Mining in Outerplanar Graphs
 Proc. 12th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining
, 2006
"... In recent years there has been an increased interest in frequent pattern discovery in large databases of graph structured objects. While the frequent connected subgraph mining problem for tree datasets can be solved in incremental polynomial time, it becomes intractable for arbitrary graph databases ..."
Abstract

Cited by 34 (8 self)
 Add to MetaCart
In recent years there has been increased interest in frequent pattern discovery in large databases of graph-structured objects. While the frequent connected subgraph mining problem for tree datasets can be solved in incremental polynomial time, it becomes intractable for arbitrary graph databases. Existing approaches have therefore resorted to various heuristic strategies and restrictions of the search space, but have not identified a practically relevant tractable graph class beyond trees. In this paper, we consider the class of outerplanar graphs, a strict generalization of trees, develop a frequent subgraph mining algorithm for outerplanar graphs, and show that it works in incremental polynomial time for the practically relevant subclass of well-behaved outerplanar graphs, i.e., those having only polynomially many simple cycles. We evaluate the algorithm empirically on chemo- and bioinformatics applications.
Fisher kernels for logical sequences
 In Proc. of the 15th European Conference on Machine Learning (ECML-04)
, 2004
"... Abstract. One approach to improve the accuracy of classifications based on generative models is to combine them with successful discriminative algorithms. Fisher kernels were developed to combine generative models with a currently very popular class of learning algorithms, kernel methods. Empiricall ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
(Show Context)
One approach to improving the accuracy of classifications based on generative models is to combine them with successful discriminative algorithms. Fisher kernels were developed to combine generative models with a currently very popular class of learning algorithms, kernel methods. Empirically, the combination of hidden Markov models with support vector machines has shown promising results. So far, however, Fisher kernels have only been considered for sequences over flat alphabets. This is mostly due to the lack of a method for computing the gradient of a generative model over structured sequences. In this paper, we show how to compute the gradient of logical hidden Markov models, which allow for the modelling of logical sequences, i.e., sequences over an alphabet of logical atoms. Experiments show a considerable improvement over results achieved without Fisher kernels for logical sequences.
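The recipe behind Fisher kernels can be illustrated with a much simpler generative model than a logical HMM: a sketch using a unigram model over a flat alphabet, with the common identity-matrix approximation to the inverse Fisher information (all names here are illustrative, not from the paper):

```python
from collections import Counter

def fisher_score(seq, theta):
    """Fisher score (gradient of the log-likelihood) of a sequence under
    a unigram model p(seq) = prod_a theta[a]**count_a, so that
    d/d theta_a log p(seq) = count_a / theta_a."""
    counts = Counter(seq)  # missing symbols count as zero
    return [counts[a] / theta[a] for a in sorted(theta)]

def fisher_kernel(x, y, theta):
    """Fisher kernel as the inner product of Fisher scores, using the
    identity matrix in place of the inverse Fisher information."""
    gx, gy = fisher_score(x, theta), fisher_score(y, theta)
    return sum(a * b for a, b in zip(gx, gy))
```

The paper's contribution is computing the analogous gradient for logical hidden Markov models, where the alphabet consists of logical atoms rather than flat characters.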
Distances and (indefinite) kernels for sets of objects
 In ICDM
, 2006
"... For various classification problems involving complex data, it is most natural to represent each training example as a set of vectors. While several distance measures for sets have been proposed, only a few kernels over these structures exist since it is difficult in general to design a positive sem ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
(Show Context)
For various classification problems involving complex data, it is most natural to represent each training example as a set of vectors. While several distance measures for sets have been proposed, only a few kernels over these structures exist, since it is difficult in general to design a positive semidefinite (PSD) similarity function. The main disadvantage of most existing set kernels is that they are based on averaging, which might be inappropriate for problems where only specific elements of the two sets should determine the overall similarity. In this paper we propose a class of kernels for sets of vectors directly exploiting set distance measures and, hence, incorporating various semantics into set kernels and lending the power of regularization to learning in structural domains where natural distance functions exist. These kernels belong to two groups: (i) kernels in the proximity space induced by set distances and (ii) set distance substitution kernels (non-PSD in general). We report experimental results which show that our kernels compare favorably with kernels based on averaging and achieve results similar to other state-of-the-art methods. At the same time, our kernels bring systematic improvements over the naive way of exploiting distances.
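A minimal sketch of a set-distance substitution kernel, using the Hausdorff distance as one concrete set distance (the paper considers several; as the abstract notes, such kernels need not be PSD):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite sets of numbers: the
    largest distance from any element of one set to the other set."""
    directed = lambda S, T: max(min(abs(s - t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

def set_distance_kernel(A, B, gamma=1.0):
    """Distance substitution kernel k(A, B) = exp(-gamma * d(A, B))."""
    return math.exp(-gamma * hausdorff(A, B))
```

Unlike averaging-based set kernels, the similarity here is driven by the specific elements that realise the set distance.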
Relational Sequence Learning
 Probabilistic Inductive Logic Programming. Volume 4911/2008 of Lecture Notes in Computer Science
, 2008
"... Abstract. Sequential behavior and sequence learning is essential to intelligence. Often the elements of sequences exhibit an internal structure that can elegantly be represented using relational atoms. Applying traditional sequential learning techniques to such relational sequences requires either ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
(Show Context)
Sequential behavior and sequence learning are essential to intelligence. Often the elements of sequences exhibit an internal structure that can elegantly be represented using relational atoms. Applying traditional sequential learning techniques to such relational sequences requires one either to ignore the internal structure or to put up with a combinatorial explosion in the model complexity. This chapter briefly reviews relational sequence learning and describes methods that have been developed for it, including data mining techniques, (hidden) Markov models, conditional random fields, dynamic programming, and reinforcement learning techniques.
Bridging the gap between distance and generalisation: Symbolic learning in metric spaces
, 2008
"... Distancebased and generalisationbased methods are two families of artificial intelligence techniques that have been successfully used over a wide range of realworld problems. In the first case, general algorithms can be applied to any data representation by just changing the distance. The metric ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
Distance-based and generalisation-based methods are two families of artificial intelligence techniques that have been successfully used over a wide range of real-world problems. In the first case, general algorithms can be applied to any data representation by just changing the distance. The metric space sets the search and learning space, which is generally instance-oriented. In the second case, models can be obtained for a given pattern language, which can be comprehensible. The generality-ordered space sets the search and learning space, which is generally model-oriented. However, the concepts of distance and generalisation clash in many different ways, especially when knowledge representation is complex (e.g. structured data). This work establishes a framework where these two fields can be integrated in a consistent way. We introduce the concept of distance-based generalisation, which connects all the generalised examples in such a way that all of them are reachable inside the generalisation by using straight paths in the metric space. This makes the metric space and the generality-ordered space coherent (or even dual). Additionally, we also introduce a definition of minimal distance-based generalisation that can be seen as the first formulation of the Minimum Description Length (MDL)/Minimum Message Length (MML) principle in terms of a distance function. We instantiate and develop the framework for the most common data representations and distances, where we show that consistent instances can be found for numerical data, nominal data, sets, lists, tuples, graphs, first-order atoms and clauses. As a result, general learning methods that integrate the best from distance-based and generalisation-based methods can be defined and adapted to any specific problem by appropriately choosing the distance, the pattern language and the generalisation operator.
Distance-based Learning over Extended Relational Algebra Structures
 In: Proceedings of the 15th International Conference on Inductive Logic Programming (2005)
"... Abstract. In (Kalousis et al., 2005) we presented a novel unifying framework for relational distancebased learning where learning examples are stored in a relational database. This approach is based on concepts from relational algebra and exploits the notion of foreign keys associations to define a ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
(Show Context)
In (Kalousis et al., 2005) we presented a novel unifying framework for relational distance-based learning where learning examples are stored in a relational database. This approach is based on concepts from relational algebra and exploits the notion of foreign-key associations to define a new attribute of type set. We defined several relational distances whose building blocks are distances between tuples of relations and distances between sets. In this paper we extend this relational algebra representation language such that it allows for the modeling of lists of complex objects (relational instances in our case). We define a new type of foreign-key association which, in addition to attributes of type set, gives rise to a new attribute of type list. We extend the well-known alignment-based edit distance measure on lists to fit within our framework. Our extended distance-based learning algorithm is tested on a protein fingerprint classification dataset, for which promising results are reported.
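The alignment-based edit distance on lists extended here is, at its core, the classic dynamic program with an element-level cost plugged in (a sketch; `sub_cost` stands in for whatever tuple distance the framework supplies, and `gap` for the insertion/deletion cost):

```python
def list_edit_distance(xs, ys, sub_cost, gap=1.0):
    """Alignment-based edit distance between two lists: minimal total
    cost of aligning xs with ys, where aligning elements a and b costs
    sub_cost(a, b) and leaving an element unaligned costs gap."""
    m, n = len(xs), len(ys)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * gap                   # delete a prefix of xs
    for j in range(1, n + 1):
        D[0][j] = j * gap                   # insert a prefix of ys
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(
                D[i - 1][j] + gap,          # delete xs[i-1]
                D[i][j - 1] + gap,          # insert ys[j-1]
                D[i - 1][j - 1] + sub_cost(xs[i - 1], ys[j - 1]),
            )
    return D[m][n]
```

With a 0/1 equality cost this reduces to the ordinary Levenshtein distance on sequences.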