Results 1 - 10 of 20,447

Convergent Tree-reweighted Message Passing for Energy Minimization

by Vladimir Kolmogorov - IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2006
"... Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper we focus on the recent technique proposed by Wainwright et al. [33]- tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound on the energy ..."
Abstract - Cited by 489 (16 self) - Add to MetaCart
Algorithms for discrete energy minimization are of fundamental importance in computer vision. In this paper we focus on the recent technique proposed by Wainwright et al. [33]- tree-reweighted max-product message passing (TRW). It was inspired by the problem of maximizing a lower bound
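
The excerpt names tree-reweighted max-product message passing (TRW) without showing the machinery. For orientation, a minimal sketch of the plain, non-reweighted min-sum message passing that TRW generalizes: exact energy minimization on a chain of discrete variables by dynamic programming. This is not Kolmogorov's algorithm itself, and the names and cost-table layout are illustrative.

```python
import numpy as np

def chain_min_sum(unary, pair):
    # unary: list of n length-k cost vectors; pair: list of n-1 (k, k) cost
    # matrices, pair[i][a, b] = cost of label a at node i and b at node i+1.
    n, k = len(unary), len(unary[0])
    M = [np.zeros(k) for _ in range(n)]         # M[i][a]: best cost of nodes i+1..n-1
    for i in range(n - 2, -1, -1):              # backward message pass
        M[i] = np.min(pair[i] + (unary[i + 1] + M[i + 1])[None, :], axis=1)
    labels = [int(np.argmin(unary[0] + M[0]))]  # forward decode of the minimizer
    for i in range(1, n):
        labels.append(int(np.argmin(pair[i - 1][labels[-1]] + unary[i] + M[i])))
    return labels
```

TRW runs updates of this flavor over a collection of trees covering a loopy graph, reweighting messages so that the result bounds the optimal energy from below, as the excerpt notes.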

Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach

by Jiawei Han, Jian Pei, Yiwen Yin, Runying Mao - Data Mining and Knowledge Discovery, 2004
"... Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still co ..."
Abstract - Cited by 1752 (64 self) - Add to MetaCart
costly, especially when there exist a large number of patterns and/or long patterns. In this study, we propose a novel frequent-pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree
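
The excerpt describes the FP-tree as an extended prefix tree holding compressed frequency information. A minimal construction sketch under the usual two-pass reading of that idea: count item frequencies, then insert each transaction with its frequent items in descending-frequency order so shared prefixes are stored once. Names are illustrative, and the FP-growth mining procedure the paper develops on top of this structure is not shown.

```python
from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_fp_tree(transactions, min_support):
    # Pass 1: global item frequencies, keeping only frequent items.
    freq = Counter(i for t in transactions for i in t)
    freq = {i: c for i, c in freq.items() if c >= min_support}
    # Pass 2: insert each transaction along a shared, counted prefix path.
    root = Node(None, None)
    for t in transactions:
        items = sorted((i for i in t if i in freq), key=lambda i: (-freq[i], i))
        node = root
        for i in items:
            node = node.children.setdefault(i, Node(i, node))
            node.count += 1
    return root, freq
```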

Linear pattern matching algorithms

by Peter Weiner - In Proceedings of the 14th Annual IEEE Symposium on Switching and Automata Theory, 1972
"... In 1970, Knuth, Pratt, and Morris [1] showed how to do basic pattern matching in linear time. Related problems, such as those discussed in [4], have previously been solved by efficient but sub-optimal algorithms. In this paper, we introduce an interesting data structure called a bi-tree. A linear ti ..."
Abstract - Cited by 546 (0 self) - Add to MetaCart
In 1970, Knuth, Pratt, and Morris [1] showed how to do basic pattern matching in linear time. Related problems, such as those discussed in [4], have previously been solved by efficient but sub-optimal algorithms. In this paper, we introduce an interesting data structure called a bi-tree. A linear
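
The excerpt credits Knuth, Pratt, and Morris with linear-time matching; Weiner's bi-tree is a different, more involved structure, so here is a sketch of that cited algorithm instead. A failure table records, for each pattern prefix, its longest proper border, which lets the text scan proceed without ever backing up. The function name is illustrative.

```python
def kmp_search(text: str, pattern: str) -> int:
    if not pattern:
        return 0
    # fail[i]: length of the longest proper border of pattern[:i+1].
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once; on a mismatch, fall back in the pattern, not the text.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1      # index of the first occurrence
    return -1
```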

The X-tree: An index structure for high-dimensional data

by Stefan Berchtold, Daniel A. Keim, Hans-Peter Kriegel - In Proceedings of the Int'l Conference on Very Large Data Bases, 1996
"... In this paper, we propose a new method for index-ing large amounts of point and spatial data in high-dimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the over ..."
Abstract - Cited by 592 (17 self) - Add to MetaCart
In this paper, we propose a new method for index-ing large amounts of point and spatial data in high-dimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures

A tutorial on support vector machines for pattern recognition

by Christopher J. C. Burges - Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Abstract - Cited by 3393 (12 self) - Add to MetaCart
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when
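
For reference, the linear soft-margin problem the tutorial treats for the non-separable case can be written in its standard primal form, with slack variables xi_i and penalty C (the tutorial derives this and its dual in detail):

```latex
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{\ell}\xi_i
\quad\text{subject to}\quad y_i\,(w\cdot x_i + b) \ge 1-\xi_i,\quad \xi_i \ge 0.
```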

Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web

by David Karger, Eric Lehman, Tom Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy - In Proc. 29th ACM Symposium on Theory of Computing (STOC), 1997
"... We describe a family of caching protocols for distrib-uted networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and ..."
Abstract - Cited by 699 (10 self) - Add to MetaCart
of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good
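
The excerpt defines a consistent hash function only by its stability property. A minimal sketch of the now-standard "hash ring" realization of that property: servers and keys hash onto a circle, each key is served by the first server clockwise from it, so adding or removing a server remaps only the keys on one arc. Class and method names are illustrative, not taken from the paper.

```python
import bisect
import hashlib

def _point(key: str) -> int:
    # Map a string to a position on the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers=(), replicas=100):
        self.replicas = replicas     # virtual nodes per server, for load balance
        self._ring = []              # sorted list of (point, server)
        for s in servers:
            self.add(s)

    def add(self, server: str):
        for i in range(self.replicas):
            bisect.insort(self._ring, (_point(f"{server}#{i}"), server))

    def remove(self, server: str):
        self._ring = [(p, s) for (p, s) in self._ring if s != server]

    def lookup(self, key: str) -> str:
        # First virtual node clockwise from the key's point (wrapping around).
        i = bisect.bisect_left(self._ring, (_point(key), ""))
        return self._ring[i % len(self._ring)][1]
```

Adding one server to a ring of n remaps only roughly a 1/(n+1) fraction of the keys, which is the "changes minimally" behavior the excerpt describes.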

Features of similarity.

by Amos Tversky - Psychological Review, 1977
"... Similarity plays a fundamental role in theories of knowledge and behavior. It serves as an organizing principle by which individuals classify objects, form concepts, and make generalizations. Indeed, the concept of similarity is ubiquitous in psychological theory. It underlies the accounts of stimu ..."
Abstract - Cited by 1455 (2 self) - Add to MetaCart
of stimulus and response generalization in learning, it is employed to explain errors in memory and pattern recognition, and it is central to the analysis of connotative meaning. Similarity or dissimilarity data appear in di¤erent forms: ratings of pairs, sorting of objects, communality between associations
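
For orientation, the paper's central proposal, the contrast model, scores similarity from common and distinctive features: with A and B the feature sets of objects a and b, f a salience measure, and theta, alpha, beta nonnegative weights,

```latex
S(a, b) = \theta\, f(A \cap B) - \alpha\, f(A \setminus B) - \beta\, f(B \setminus A).
```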

Greedy Function Approximation: A Gradient Boosting Machine

by Jerome H. Friedman - Annals of Statistics, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Abstract - Cited by 1000 (13 self) - Add to MetaCart
Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed
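
The excerpt's functional-gradient view is easy to make concrete for squared-error loss, where the negative gradient of the loss at the current fit is simply the residual vector. A minimal sketch using scikit-learn regression trees as the weak learner; the shrinkage rate and stage count are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_stages=100, lr=0.1):
    base = float(np.mean(y))
    f = np.full(len(y), base)                    # initial constant model
    trees = []
    for _ in range(n_stages):
        residual = y - f                         # negative gradient of squared loss
        t = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        f += lr * t.predict(X)                   # steepest-descent step in function space
        trees.append(t)
    return base, trees

def predict(base, trees, X, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)
```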

Regression Shrinkage and Selection Via the Lasso

by Robert Tibshirani - Journal of the Royal Statistical Society, Series B, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Abstract - Cited by 4212 (49 self) - Add to MetaCart
We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients
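
Written out, the estimator the excerpt describes in words is the constrained least-squares problem, with t the tuning constant that controls shrinkage:

```latex
\hat{\beta} = \arg\min_{\beta}\ \sum_{i=1}^{N}\Big(y_i - \beta_0 - \sum_{j} x_{ij}\beta_j\Big)^{2}
\quad\text{subject to}\quad \sum_{j}|\beta_j| \le t.
```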

Inducing Features of Random Fields

by Stephen Della Pietra, Vincent Della Pietra, John Lafferty - IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Abstract - Cited by 670 (10 self) - Add to MetaCart
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing
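
The excerpt stops mid-sentence at the training criterion. As a sketch of the log-linear family such fields belong to, here is weight fitting by plain gradient ascent over a small, enumerable configuration space; the gradient drives model feature expectations toward the empirical ones. The paper's actual training (iterative scaling) and its greedy feature induction are more elaborate, and all names here are illustrative.

```python
import numpy as np

def fit_loglinear(F, emp, steps=500, lr=0.5):
    # F: (n_configs, n_features) table of feature values f_k(x) over every
    # configuration x; emp: empirical mean of each feature in the training set.
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        logits = F @ w
        p = np.exp(logits - logits.max())
        p /= p.sum()                  # model distribution p(x) ~ exp(sum_k w_k f_k(x))
        grad = emp - p @ F            # data expectations minus model expectations
        w += lr * grad
    return w
```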