CiteSeerX

Results 1 - 10 of 2,902,350

Learning the Kernel Matrix with Semi-Definite Programming

by Gert R. G. Lanckriet, Nello Cristianini, Laurent El Ghaoui, Peter Bartlett, Michael I. Jordan , 2002
"... Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information ..."
Abstract - Cited by 780 (22 self) - Add to MetaCart
is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space---classical model selection
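The snippet's claim that the kernel matrix alone fixes the embedding geometry is easy to illustrate. Below is a minimal numpy sketch, not from the paper (which goes further and learns the matrix by semi-definite programming): it builds an RBF Gram matrix for toy data and checks the symmetry and positive semi-definiteness the abstract refers to. The data, gamma, and tolerance are illustrative assumptions.

import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram (kernel) matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# Toy data: the kernel matrix alone determines the geometry of the embedding.
X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel_matrix(X, gamma=0.5)

# A valid kernel matrix is symmetric and positive semi-definite.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-10)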

Shallow Parsing with Conditional Random Fields

by Fei Sha, Fernando Pereira , 2003
"... Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluati ..."
Abstract - Cited by 575 (8 self) - Add to MetaCart
Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard
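For readers unfamiliar with linear-chain CRFs, decoding reduces to Viterbi search over per-position emission scores plus label-transition scores. A minimal numpy sketch follows; the toy scores and two-label setup (e.g. B-NP vs. O) are made up for illustration, and the paper's contribution is training the weights that produce such scores:

import numpy as np

def viterbi(emissions, transitions):
    """MAP label sequence for a linear-chain CRF.

    emissions:   (T, K) per-position label scores
    transitions: (K, K) score of moving from label i to label j
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t]  # (K, K)
        back[t] = np.argmax(cand, axis=0)   # best predecessor per label
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):           # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 3-position, 2-label example with made-up scores.
em = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 0.9]])
tr = np.array([[0.5, -0.5], [-0.5, 0.5]])
print(viterbi(em, tr))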

Random forests

by Leo Breiman - Machine Learning , 2001
"... Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the fo ..."
Abstract - Cited by 3433 (2 self) - Add to MetaCart
Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees
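The "random vector" driving each tree can be made concrete: a bootstrap sample of the rows plus a random feature subset considered at each split. A hedged sketch using scikit-learn's DecisionTreeClassifier; the ensemble loop, dataset, and hyperparameters are illustrative, not Breiman's exact procedure:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Each tree sees an independent random vector: a bootstrap sample of the
# rows plus random feature subsets at each split (max_features="sqrt").
trees = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap sample
    t = DecisionTreeClassifier(max_features="sqrt",
                               random_state=int(rng.integers(1 << 31)))
    trees.append(t.fit(X[idx], y[idx]))

# Majority vote over trees; in Breiman's analysis adding trees does not
# overfit, since the generalization error converges a.s. to a limit.
votes = np.stack([t.predict(X) for t in trees])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("train accuracy:", (pred == y).mean())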

Randomized Algorithms

by Rajeev Motwani , 1995
"... Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simp ..."
Abstract - Cited by 2210 (37 self) - Add to MetaCart
Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available
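The number-theoretic roots the abstract mentions are typified by Miller-Rabin primality testing, where randomization buys both simplicity and speed. A standard textbook-style sketch, not taken from the book:

import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: never wrong when it rejects; wrong with
    probability at most 4**-rounds when it accepts."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a is a witness that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime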

Random walks and electric networks

by Peter G. Doyle, J. Laurie Snell , 2006
"... ..."
Abstract - Cited by 481 (11 self) - Add to MetaCart
Abstract not found

Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

by Emmanuel J. Candès , Terence Tao , 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract - Cited by 1513 (20 self) - Add to MetaCart
-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f | (1) ≥ |f | (2) ≥... ≥ |f | (N), and define the weak-ℓp ball
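The decreasing rearrangement and weak-ℓp membership in this excerpt are simple to compute. A small numpy sketch under illustrative assumptions (signal size, decay exponent, and measurement count are made up; actual recovery would additionally need an ℓ1 solver, or a greedy method such as the OMP entry below):

import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 0.5

# A compressible signal: sorted magnitudes decay like a power law.
f = rng.standard_normal(N) * np.arange(1, N + 1) ** (-1.0 / p)
rng.shuffle(f)

# Decreasing rearrangement |f|_(1) >= |f|_(2) >= ... >= |f|_(N).
mags = np.sort(np.abs(f))[::-1]

# f lies in the weak-l_p ball of radius R iff |f|_(k) <= R * k**(-1/p);
# the smallest such R is the weak-l_p quasi-norm.
R = np.max(mags * np.arange(1, N + 1) ** (1.0 / p))
print("weak-l_p radius:", R)

# "Random measurements": K inner products with Gaussian test vectors.
# These K numbers are all the decoder gets to see.
K = 100
y = rng.standard_normal((K, N)) @ f
print("measurements:", y.shape)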

Markov Random Field Models in Computer Vision

by S. Z. Li , 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Abstract - Cited by 515 (18 self) - Add to MetaCart
. The latter relates to how data is observed and is problem domain dependent. The former depends on how various prior constraints are expressed. Markov Random Field Models (MRF) theory is a tool to encode contextual constraints into the prior probability. This paper presents a unified approach for MRF modeling
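A toy instance of the MAP-with-MRF-prior recipe: iterated conditional modes (ICM) descent on an energy combining a Gaussian likelihood (how the data is observed) with an Ising smoothness prior (the contextual constraint). This is an illustrative sketch, not the paper's unified framework; beta, lam, and the image are assumptions:

import numpy as np

def icm_denoise(obs, beta=1.5, lam=2.0, sweeps=5):
    """Iterated conditional modes for MAP binary labeling.

    Energy = lam * sum_i (x_i - obs_i)^2          (likelihood term)
           + beta * sum_{i~j} [x_i != x_j]        (Ising MRF prior)
    """
    x = (obs > 0.5).astype(int)
    H, W = x.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                best, best_e = x[i, j], np.inf
                for v in (0, 1):
                    e = lam * (v - obs[i, j]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e += beta * (v != x[ni, nj])
                    if e < best_e:
                        best, best_e = v, e
                x[i, j] = best   # greedy local MAP update
    return x

rng = np.random.default_rng(0)
clean = np.zeros((16, 16), int)
clean[4:12, 4:12] = 1
noisy = clean + 0.4 * rng.standard_normal(clean.shape)
print("pixel agreement:", (icm_denoise(noisy) == clean).mean())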

Reversible Markov chains and random walks on graphs

by David Aldous, James Allen Fill , 2002
"... ..."
Abstract - Cited by 549 (13 self) - Add to MetaCart
Abstract not found

Pseudo-Random Generation from One-Way Functions

by Johan Håstad, Russell Impagliazzo, Leonid A. Levin, Michael Luby - PROC. 20TH STOC , 1988
"... Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom gene ..."
Abstract - Cited by 887 (22 self) - Add to MetaCart
Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom generator iff there is a one-way function.
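The length-doubling-and-iterate pattern behind such generators can be sketched as follows, with SHA-256 standing in as a heuristic placeholder for a real one-way function (a loud assumption: nothing here is proven one-way, and this is not the paper's construction):

import hashlib

def prg(seed: bytes, out_len: int) -> bytes:
    """Stretch a short seed into out_len pseudorandom bytes by iterating a
    length-doubling step G(s) -> (s', block). SHA-256 is only a heuristic
    stand-in here, not a proven one-way function."""
    out, state = b"", seed
    while len(out) < out_len:
        out += hashlib.sha256(state + b"out").digest()    # output block
        state = hashlib.sha256(state + b"next").digest()  # next state
    return out[:out_len]

# If prg were a true PRG, then f(seed) = prg(seed, 2 * len(seed)) would be
# one-way: an inverter would let a distinguisher tell its output from random.
print(prg(b"short seed", 48).hex())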

Signal recovery from random measurements via Orthogonal Matching Pursuit

by Joel A. Tropp, Anna C. Gilbert - IEEE TRANS. INFORM. THEORY , 2007
"... This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous ..."
Abstract - Cited by 780 (9 self) - Add to MetaCart
This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over
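OMP itself is short enough to state in full. A hedged numpy sketch; the measurement ensemble, sparsity level, and the constant in O(m ln d) are illustrative choices, not the report's exact experimental setup:

import numpy as np

def omp(Phi, y, m):
    """Orthogonal Matching Pursuit: recover an m-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(m):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
d, m = 256, 5
n = int(4 * m * np.log(d))                 # O(m ln d) measurements
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))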