Results 1 - 10 of 5,735

Training Support Vector Machines: an Application to Face Detection

by Edgar Osuna, Robert Freund, Federico Girosi , 1997
"... We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision sur ..."
Abstract - Cited by 727 (1 self) - Add to MetaCart
global optimality, and can be used to train SVM's over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also establish the stopping
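
Below is a minimal sketch in the spirit of the decomposition idea described above: solve a sub-problem on a working set, check the margin (optimality) conditions on the remaining data, and swap violators in while keeping the current support vectors. The dataset, feature dimensions, and chunk sizes are illustrative stand-ins, and scikit-learn's SVC is used in place of the authors' own QP solver.

```python
# A simplified chunking/decomposition-style SVM training loop (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                       # stand-in for face / non-face patch features
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

work = rng.choice(len(X), size=500, replace=False)    # initial working set
clf = SVC(kernel="poly", degree=2, C=1.0)

for _ in range(5):
    clf.fit(X[work], y[work])                         # solve the sub-problem
    margins = y * clf.decision_function(X)            # check optimality conditions globally
    violators = np.setdiff1d(np.where(margins < 1)[0], work)
    if len(violators) == 0:
        break                                         # stopping criterion: no violators left
    worst = violators[np.argsort(margins[violators])[:200]]
    work = np.union1d(work[clf.support_], worst)      # keep support vectors, add worst violators
```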

Gradient-based learning applied to document recognition

by Yann Lecun, Léon Bottou, Yoshua Bengio, Patrick Haffner - Proceedings of the IEEE , 1998
"... Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradientbased learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify hi ..."
Abstract - Cited by 1533 (84 self) - Add to MetaCart
transformer networks (GTN’s), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility
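
The global-training idea, for a system of composed modules trained against one overall performance measure, can be illustrated with a small end-to-end gradient sketch. This assumes PyTorch and uses random stand-in data; it is not the paper's GTN implementation.

```python
# Two modules composed into one differentiable system and trained on a single
# global objective; gradients flow through every module (illustrative sketch).
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
system = nn.Sequential(feature_extractor, classifier)

optimizer = torch.optim.SGD(system.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 1, 28, 28)       # stand-in mini-batch
labels = torch.randint(0, 10, (64,))

loss = loss_fn(system(images), labels)    # one overall performance measure
loss.backward()                           # global gradient through both modules
optimizer.step()
```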

A tutorial on support vector machines for pattern recognition

by Christopher J. C. Burges - Data Mining and Knowledge Discovery , 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Abstract - Cited by 3393 (12 self) - Add to MetaCart
SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very
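
The kernel mapping mentioned in the abstract can be written out directly: the decision function is a kernel-weighted sum over support vectors, f(x) = sum_i alpha_i * y_i * K(s_i, x) + b, so the classifier is nonlinear in the input without ever forming the feature map explicitly. The support vectors, multipliers, and bias below are illustrative values, not a trained solution.

```python
# Kernel-form SVM decision function (illustrative values, not a trained model).
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

support_vectors = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
labels = np.array([1.0, -1.0, 1.0])
alpha = np.array([0.7, 0.7, 0.2])         # Lagrange multipliers (made up for illustration)
b = 0.1

def decision(x):
    # f(x) = sum_i alpha_i * y_i * K(s_i, x) + b
    return sum(a * y * rbf_kernel(s, x)
               for a, y, s in zip(alpha, labels, support_vectors)) + b

print(np.sign(decision(np.array([0.9, 0.9]))))
```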

Statistical shape influence in geodesic active contours

by Michael E. Leventon, W. Eric, L. Grimson, Olivier Faugeras - In Proc. 2000 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Hilton Head, SC , 2000
"... A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero l ..."
Abstract - Cited by 396 (4 self) - Add to MetaCart
A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero
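
The phrase "embeds an initial curve as the zero level set" refers to the level-set representation: the curve is stored implicitly as the zero crossing of a signed distance function. A minimal sketch of that embedding, assuming SciPy and an illustrative circular initial curve:

```python
# Embed an initial curve as the zero level set of a signed distance function.
import numpy as np
from scipy.ndimage import distance_transform_edt

h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
inside = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2        # illustrative initial curve

# signed distance: negative inside the curve, positive outside, zero on the curve
phi = distance_transform_edt(~inside) - distance_transform_edt(inside)

curve_pixels = np.abs(phi) < 1.0                          # recover the curve from phi
```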

A discriminative global training algorithm for statistical MT

by Christoph Tillmann - In Proc. of ACL , 2006
"... This paper presents a novel training algorithm for a linearly-scored block sequence translation model. The key component is a new procedure to directly optimize the global scoring function used by a SMT decoder. No translation, language, or distortion model probabilities are used as in earlier work ..."
Abstract - Cited by 43 (2 self) - Add to MetaCart
This paper presents a novel training algorithm for a linearly-scored block sequence translation model. The key component is a new procedure to directly optimize the global scoring function used by a SMT decoder. No translation, language, or distortion model probabilities are used as in earlier work
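
Directly optimizing a global linear scoring function over candidate outputs can be sketched with a structured, perceptron-style update. The toy feature function and candidate lists below are stand-ins; this is not the paper's exact block-sequence procedure.

```python
# Perceptron-style update toward the reference-best candidate (illustrative only).
import numpy as np

def features(candidate):
    # toy global feature vector for a candidate translation
    return np.array([len(candidate), candidate.count("the"), 1.0])

w = np.zeros(3)
training = [
    (["the cat sat", "cat the sat the"], 0),     # index 0 marks the reference-best candidate
    (["a dog ran", "dog a ran the the"], 0),
]

for _ in range(5):
    for candidates, best in training:
        scores = [w @ features(c) for c in candidates]
        predicted = int(np.argmax(scores))
        if predicted != best:
            # move the weights toward the reference-best candidate's features
            w += features(candidates[best]) - features(candidates[predicted])
```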

Global Training of Document Processing Systems using Graph Transformer Networks.

by Leon Bottou, Yoshua Bengio, Yann Le Cun - In Proc. of Computer Vision and Pattern Recognition , 1997
"... We propose a new machine learning paradigm called Graph Transformer Networks that extends the applicability of gradient-based learning algorithms to systems composed of modules that take graphs as inputs and produce graphs as output. Training is performed by computing gradients of a global objective ..."
Abstract - Cited by 24 (5 self) - Add to MetaCart
We propose a new machine learning paradigm called Graph Transformer Networks that extends the applicability of gradient-based learning algorithms to systems composed of modules that take graphs as inputs and produce graphs as output. Training is performed by computing gradients of a global

Relaxed Clipping: A Global Training Method for Robust Regression and Classification

by Yaoliang Yu, Min Yang, Linli Xu, Martha White, Dale Schuurmans
"... Robust regression and classification are often thought to require non-convex loss functions that prevent scalable, global training. However, such a view neglects the possibility of reformulated training methods that can yield practically solvable alternatives. A natural way to make a loss function m ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
Robust regression and classification are often thought to require non-convex loss functions that prevent scalable, global training. However, such a view neglects the possibility of reformulated training methods that can yield practically solvable alternatives. A natural way to make a loss function
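
The clipping idea itself is easy to illustrate: cap the per-example loss so that a single outlier cannot dominate the objective. The paper's contribution is a relaxation that makes such clipped losses tractable to train globally; the sketch below only shows the original (non-convex) clipped loss, with made-up residuals.

```python
# Clipped squared loss: bounds the influence of any single outlier (illustrative).
import numpy as np

def clipped_squared_loss(residual, cap=1.0):
    return np.minimum(residual ** 2, cap)

residuals = np.array([0.1, -0.3, 0.5, 8.0])     # last point is an outlier
print(residuals ** 2)                            # outlier contributes 64.0
print(clipped_squared_loss(residuals))           # its contribution is capped at 1.0
```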

What Do Packet Dispersion Techniques Measure?

by Constantinos Dovrolis , Parameswaran Ramanathan, David Moore - IN PROCEEDINGS OF IEEE INFOCOM , 2001
"... The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets [1][2][3]. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects th ..."
Abstract - Cited by 313 (8 self) - Add to MetaCart
that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum sized packet pairs
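
The packet-pair estimate behind this can be sketched in a few lines: each dispersion sample maps to a bandwidth estimate (packet size divided by dispersion), and the distribution of those estimates is what turns out to be multimodal. The dispersion samples below are synthetic stand-ins.

```python
# Packet-pair bandwidth estimates and their histogram modes (synthetic data).
import numpy as np

packet_size_bits = 1500 * 8
dispersions_s = np.array([0.00012, 0.00013, 0.00012, 0.00030,
                          0.00031, 0.00012, 0.00029])         # measured pair spacings

estimates_mbps = packet_size_bits / dispersions_s / 1e6
counts, edges = np.histogram(estimates_mbps, bins=10)
mode_bin = int(np.argmax(counts))
print(f"global mode near {edges[mode_bin]:.1f}-{edges[mode_bin + 1]:.1f} Mbps")
# Under cross traffic the global mode need not correspond to the true path
# capacity, which is exactly the paper's caution about naive estimators.
```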

Context-Based Vision System for Place and Object Recognition

by Antonio Torralba, Kevin P. Murphy, William T. Freeman, Mark Rubin , 2003
"... While navigating in an environment, a vision system has' to be able to recognize where it is' and what the main objects' in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is' to identify familiar locations' (e ..."
Abstract - Cited by 317 (9 self) - Add to MetaCart
(e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street), and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation
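
A low-dimensional global image representation of the kind the abstract refers to can be caricatured as coarse, grid-pooled gradient energy. The paper's actual features come from a filter bank reduced with PCA, so the sketch below is only a simplified stand-in.

```python
# A crude "gist"-like global descriptor: mean gradient energy on a coarse grid.
import numpy as np

def global_descriptor(image, grid=4):
    gy, gx = np.gradient(image.astype(float))
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    cells = energy[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()       # grid*grid numbers summarize the scene

image = np.random.rand(128, 128)                 # stand-in for a place image
print(global_descriptor(image).shape)            # (16,)
```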