CiteSeerX
Results 1 - 10 of 3,907

A Guided Tour to Approximate String Matching

by Gonzalo Navarro - ACM Computing Surveys, 1999
"... We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining t ..."
Cited by 598 (36 self)
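
The workhorse measure in this line of work is the edit (Levenshtein) distance. As a point of reference for the techniques the survey covers, here is a minimal sketch of the classic dynamic-programming computation; the survey's focus is on faster online algorithms built on top of this idea, and the function name below is purely illustrative.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(|a| * |b|) dynamic program for the Levenshtein distance:
    the minimum number of insertions, deletions and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))            # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                            # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute or match
        prev = curr
    return prev[-1]

assert edit_distance("survey", "surgery") == 2
```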

Linear pattern matching algorithms

by Peter Weiner - In Proceedings of the 14th Annual IEEE Symposium on Switching and Automata Theory. IEEE, 1972
"... In 1970, Knuth, Pratt, and Morris [1] showed how to do basic pattern matching in linear time. Related problems, such as those discussed in [4], have previously been solved by efficient but sub-optimal algorithms. In this paper, we introduce an interesting data structure called a bi-tree. A linear ti ..."
Abstract - Cited by 546 (0 self) - Add to MetaCart
time algorithm for obtaining a compacted version of a bi-tree associated with a given string is presented. With this construction as the basic tool, we indicate how to solve several pattern matching problems, including some from [4], in linear time.
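
The linear-time matching result of Knuth, Pratt, and Morris that the abstract cites is usually implemented with a failure-function automaton. A minimal sketch in that style follows; it illustrates the cited prior work, not Weiner's bi-tree construction, which the paper itself introduces.

```python
def kmp_search(text: str, pattern: str) -> list[int]:
    """Find all occurrences of pattern in text in O(len(text) + len(pattern))
    time, in the style of the Knuth-Morris-Pratt algorithm."""
    if not pattern:
        return list(range(len(text) + 1))
    # failure[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = failure[k - 1]
    return matches

assert kmp_search("abababca", "abab") == [0, 2]
```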

Suffix arrays: A new method for on-line string searches

by Udi Manber, Gene Myers, 1991
"... A new and conceptually simple data structure, called a suffix array, for on-line string searches is intro-duced in this paper. Constructing and querying suffix arrays is reduced to a sort and search paradigm that employs novel algorithms. The main advantage of suffix arrays over suffix trees is that ..."
Abstract - Cited by 835 (0 self) - Add to MetaCart
A new and conceptually simple data structure, called a suffix array, for on-line string searches is intro-duced in this paper. Constructing and querying suffix arrays is reduced to a sort and search paradigm that employs novel algorithms. The main advantage of suffix arrays over suffix trees
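
The sort-and-search paradigm the abstract describes can be sketched very compactly: sort the suffix start positions, then binary-search the sorted suffixes for the query. The naive comparison sort below is O(n² log n); the paper's contribution includes much more efficient construction and search algorithms, so treat this only as an illustration of the data structure itself.

```python
def build_suffix_array(text: str) -> list[int]:
    # Naive construction: sort suffix start positions by the suffixes themselves.
    return sorted(range(len(text)), key=lambda i: text[i:])

def contains(text: str, sa: list[int], pattern: str) -> bool:
    # On-line search: binary search for a suffix whose prefix equals the pattern.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern

text = "banana"
sa = build_suffix_array(text)   # [5, 3, 1, 0, 4, 2]: "a", "ana", "anana", "banana", "na", "nana"
assert contains(text, sa, "ana") and not contains(text, sa, "nab")
```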

The pyramid match kernel: Discriminative classification with sets of image features

by Kristen Grauman, Trevor Darrell - In ICCV, 2005
"... Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondenc ..."
Cited by 544 (29 self)
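
The idea behind the pyramid match can be sketched for the simplest possible case: one-dimensional integer features, histogrammed at doubling bin widths, with matches counted by histogram intersection and matches that only appear at coarser levels weighted less. This toy version (assuming features lie in [0, 2^levels)) is meant only to convey the structure of the kernel, not the authors' implementation.

```python
from collections import Counter

def pyramid_match(x: list[int], y: list[int], levels: int = 4) -> float:
    """Toy pyramid match score for 1-D integer features in [0, 2**levels).
    At level i the bin width is 2**i; new matches found at level i
    (not already matched at the finer level i-1) get weight 1 / 2**i."""
    def intersection(a, b, width):
        ha = Counter(v // width for v in a)
        hb = Counter(v // width for v in b)
        return sum(min(ha[k], hb[k]) for k in ha)

    score, prev = 0.0, 0
    for i in range(levels + 1):
        curr = intersection(x, y, 2 ** i)
        score += (curr - prev) / (2 ** i)   # weight new matches by current bin width
        prev = curr
    return score

# Two small feature sets of different cardinality still get a match score.
print(pyramid_match([1, 5, 7, 12], [2, 7, 13]))
```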

A training algorithm for optimal margin classifiers

by Bernhard E. Boser, et al. - Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classifiaction functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Abstract - Cited by 1865 (43 self) - Add to MetaCart
is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC
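
This is the algorithm now known as the support vector machine. As a quick illustration, and assuming scikit-learn is available, a linear SVC with a very large C behaves like the maximal margin classifier described here, and the support vectors it reports are the "supporting patterns" closest to the decision boundary; the data below is made up.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],     # class -1
              [3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])    # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)            # very large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)                       # width of the separating band

print("support vectors:", clf.support_vectors_)
print("margin width:", margin)
```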

Inducing Features of Random Fields

by Stephen Della Pietra, Vincent Della Pietra, John Lafferty - IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Cited by 670 (10 self)
... introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration ...
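
Setting the induction step aside, the weight-training part the abstract mentions can be sketched on a tiny discrete domain: fit the weights of a log-linear (Gibbs) model by gradient ascent on the log-likelihood, where the gradient is the gap between empirical and model feature expectations. The greedy induction of which features to include, which is the paper's main contribution, is not shown, and the feature set below is purely illustrative.

```python
import numpy as np

# Sketch of the weight-fitting step only: a log-linear model
# p(x) proportional to exp(sum_i w_i * f_i(x)) over a small discrete domain,
# trained by gradient ascent on the log-likelihood.
domain = [(a, b) for a in (0, 1) for b in (0, 1)]        # all configurations x
features = [lambda x: float(x[0]),                       # f1: first bit is on
            lambda x: float(x[1]),                       # f2: second bit is on
            lambda x: float(x[0] == x[1])]               # f3: the bits agree

def fit(samples, steps=2000, lr=0.1):
    F = np.array([[f(x) for f in features] for x in domain])            # |domain| x |features|
    emp = np.mean([[f(x) for f in features] for x in samples], axis=0)  # empirical E[f]
    w = np.zeros(len(features))
    for _ in range(steps):
        logits = F @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()                          # current model distribution over the domain
        w += lr * (emp - F.T @ p)             # gradient: empirical minus model expectations
    return w

samples = [(1, 1)] * 6 + [(0, 0)] * 3 + [(1, 0)]
print(fit(samples))
```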

How Iris Recognition Works

by John Daugman, 2003
"... Algorithms developed by the author for recogniz-ing persons by their iris patterns have now been tested in six field and laboratory trials, producing no false matches in several million comparison tests. The recognition principle is the failure of a test of statis-tical independence on iris phase st ..."
Abstract - Cited by 509 (4 self) - Add to MetaCart
structure encoded by multi-scale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 244 degrees of freedom and gen-erates a discrimination entropy of about 3.2 bits/mm over the iris, enabling real-time decisions about per-sonal identity
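
In Daugman's scheme the test of statistical independence amounts to a masked, fractional Hamming distance between two binary iris codes: codes from different eyes disagree on about half of their valid bits, while codes from the same iris disagree far less. A minimal sketch follows; the code length, the random data and the 0.33 threshold are illustrative, not the paper's exact parameters.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (unoccluded) in both masks."""
    valid = mask_a & mask_b
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)
code_b = rng.integers(0, 2, 2048, dtype=np.uint8)        # unrelated code
mask = np.ones(2048, dtype=np.uint8)

# Unrelated codes behave like independent coin flips, so HD is near 0.5,
# the independence test "passes", and the comparison is rejected.
hd = hamming_distance(code_a, code_b, mask, mask)
THRESHOLD = 0.33                                          # illustrative cut-off
print(f"HD = {hd:.3f} ->", "same iris" if hd < THRESHOLD else "different irises")
```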

Very simple classification rules perform well on most commonly used datasets

by Robert C. Holte - Machine Learning, 1993
"... The classification rules induced by machine learning systems are judged by two criteria: their classification accuracy on an independent test set (henceforth "accuracy"), and their complexity. The relationship between these two criteria is, of course, of keen interest to the machin ..."
Cited by 547 (5 self)
... pruning method in Mingers (1989). This method produced the most accurate decision trees, and in four of the five domains studied these trees had only 2 or 3 leaves ...
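
The "very simple rules" studied here are one-level rules in the 1R family: classify on a single attribute by mapping each of its values to that value's majority class. A simplified sketch follows, ignoring 1R's handling of numeric attributes and missing values and using a made-up toy dataset.

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Pick the single attribute whose value -> majority-class rule makes the
    fewest training errors (a simplified 1R)."""
    best = None
    for attr in rows[0]:
        if attr == target:
            continue
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        rule = {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}
        errors = sum(c for v, counts in by_value.items()
                     for cls, c in counts.items() if cls != rule[v])
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

rows = [
    {"outlook": "sunny",    "windy": "no",  "play": "no"},
    {"outlook": "sunny",    "windy": "yes", "play": "no"},
    {"outlook": "rainy",    "windy": "no",  "play": "yes"},
    {"outlook": "rainy",    "windy": "yes", "play": "no"},
    {"outlook": "overcast", "windy": "no",  "play": "yes"},
]
attr, rule, errors = one_r(rows, "play")
print(attr, rule, errors)   # e.g. ('outlook', {...}, 1)
```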

Statistical Decision-Tree Models for Parsing

by David M. Magerman - In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 1995
"... Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor per- formance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to textprocessing in gen ..."
Abstract - Cited by 367 (1 self) - Add to MetaCart
in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too

Complexity of finding embeddings in a k-tree

by Stefan Arnborg, Derek G. Corneil, Andrzej Proskurowski - SIAM Journal of Discrete Mathematics, 1987
"... A k-tree is a graph that can be reduced to the k-complete graph by a sequence of removals of a degree k vertex with completely connected neighbors. We address the problem of determining whether a graph is a partial graph of a k-tree. This problem is motivated by the existence of polynomial time al ..."
Abstract - Cited by 386 (1 self) - Add to MetaCart
algorithms for many combinatorial problems on graphs when the graph is constrained to be a partial k-tree for fixed k. These algorithms have practical applications in areas such as reliability, concurrent broadcasting and evaluation of queries in a relational database system. We determine the complexity
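
The definition quoted above gives an immediate recognition procedure for k-trees themselves: repeatedly delete a degree-k vertex whose neighbours form a clique, and check that the complete graph on k vertices remains. A small sketch follows; note that the problem whose complexity the paper addresses is recognising partial k-trees (subgraphs of k-trees), which this simple reduction does not decide.

```python
from itertools import combinations

def is_k_tree(adj, k):
    """Decide whether the graph (dict: vertex -> set of neighbours) is a k-tree
    by repeatedly deleting a degree-k vertex whose neighbours are pairwise
    adjacent, as in the definition quoted above."""
    adj = {v: set(ns) for v, ns in adj.items()}          # work on a copy
    while len(adj) > k:
        for v, ns in adj.items():
            if len(ns) == k and all(b in adj[a] for a, b in combinations(ns, 2)):
                for u in ns:                             # detach the removable vertex
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False                                 # no removable vertex left
    # what remains must be the complete graph on k vertices
    return all(len(ns) == k - 1 for ns in adj.values())

# A 2-tree: a triangle with one extra vertex attached to an edge.
triangle_plus = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(is_k_tree(triangle_plus, 2))   # True
```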