Results 1 - 10 of 2,559

Benchmarking Least Squares Support Vector Machine Classifiers

by Tony Van Gestel, Johan A. K. Suykens, Bart Baesens, Stijn Viaene, Jan Vanthienen, Guido Dedene, Bart De Moor, Joos Vandewalle - Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations ..."
Cited by 476 (46 self)
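Where a standard SVM solves a QP, the LS-SVM classifier above reduces training to one linear system. A minimal NumPy sketch of that system (the RBF kernel and the gamma and sigma defaults are illustrative assumptions, not taken from the paper's experiments):

    import numpy as np

    def lssvm_train(X, y, gamma=1.0, sigma=1.0):
        # Solve the LS-SVM dual: the block linear system
        #   [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
        # with Omega_ij = y_i y_j K(x_i, x_j), instead of a QP.
        # X: (N, d) inputs; y: (N,) labels in {-1, +1}.
        N = X.shape[0]
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))          # RBF kernel (an assumption)
        A = np.zeros((N + 1, N + 1))
        A[0, 1:] = y
        A[1:, 0] = y
        A[1:, 1:] = np.outer(y, y) * K + np.eye(N) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(N))))
        return sol[1:], sol[0]                      # alpha, bias b

    def lssvm_predict(X_train, y_train, alpha, b, X_test, sigma=1.0):
        sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))
        return np.sign(K @ (alpha * y_train) + b)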

An empirical comparison of voting classification algorithms: Bagging, boosting, and variants.

by Eric Bauer, Philip Chan, Salvatore Stolfo, David Wolpert - Machine Learning, 1999
"... Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants ..."
Cited by 707 (2 self)
"... in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting ..."
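For readers unfamiliar with the voting schemes being compared, bagging is the simpler of the two: train each base classifier on a bootstrap resample and take a majority vote. A hedged sketch (the decision-tree base learner and the ensemble size are illustrative choices, not the study's exact setup):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
        # y_train is assumed to hold integer-coded class labels.
        rng = np.random.default_rng(seed)
        n = len(X_train)
        votes = []
        for _ in range(n_estimators):
            idx = rng.integers(0, n, size=n)   # bootstrap sample, drawn with replacement
            tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
            votes.append(tree.predict(X_test))
        votes = np.asarray(votes)
        # majority vote over the ensemble, column by column
        return np.array([np.bincount(col).argmax() for col in votes.T])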

Manifold regularization: A geometric framework for learning from labeled and unlabeled examples

by Mikhail Belkin, Partha Niyogi, Vikas Sindhwani - Journal of Machine Learning Research, 2006
"... We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including Support Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize properties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide a theoretical basis for the algorithms. As a result (in contrast to purely ..."
Cited by 578 (16 self)
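The framework this abstract describes is usually written as a single regularized objective; a sketch in the standard notation (the loss V, the ambient and intrinsic weights gamma_A and gamma_I, and the graph Laplacian L are the conventional symbols, assumed here rather than quoted from the paper):

\[
f^{*} = \arg\min_{f \in \mathcal{H}_K} \; \frac{1}{l}\sum_{i=1}^{l} V(x_i, y_i, f) \;+\; \gamma_A \|f\|_K^2 \;+\; \frac{\gamma_I}{(l+u)^2}\, \mathbf{f}^{\top} L \mathbf{f},
\]

where l labeled and u unlabeled points contribute, and Representer theorems of the kind the excerpt mentions guarantee an expansion \( f^{*}(x) = \sum_{i=1}^{l+u} \alpha_i K(x_i, x) \) over both labeled and unlabeled examples.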

The Kernel Recursive Least Squares Algorithm

by Yaakov Engel, Shie Mannor, Ron Meir - IEEE Transactions on Signal Processing, 2003
"... We present a non-linear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean-squared-error regressor. Spars ..."
Cited by 141 (2 self)
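A hedged sketch of the recursive step: each new sample grows the kernel matrix by one row and column, and the block matrix inversion lemma updates its inverse incrementally. The sparsification that keeps the published KRLS dictionary small is omitted here, and the small ridge term lam is an assumption for numerical stability:

    import numpy as np

    def rbf(a, b, sigma=1.0):
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

    class KernelRLS:
        # Non-sparsified kernel RLS: every sample joins the dictionary,
        # so memory and per-step cost grow with t.
        def __init__(self, sigma=1.0, lam=1e-6):
            self.sigma, self.lam = sigma, lam
            self.X, self.y, self.Kinv = [], [], None

        def update(self, x, d):
            if self.Kinv is None:
                self.Kinv = np.array([[1.0 / (rbf(x, x, self.sigma) + self.lam)]])
            else:
                k = np.array([rbf(xi, x, self.sigma) for xi in self.X])
                a = self.Kinv @ k
                # Schur complement of the grown matrix K_t + lam*I
                delta = rbf(x, x, self.sigma) + self.lam - k @ a
                n = len(self.X)
                Kinv = np.empty((n + 1, n + 1))
                Kinv[:n, :n] = self.Kinv + np.outer(a, a) / delta
                Kinv[:n, n] = Kinv[n, :n] = -a / delta
                Kinv[n, n] = 1.0 / delta
                self.Kinv = Kinv
            self.X.append(np.asarray(x, dtype=float))
            self.y.append(d)

        def predict(self, x):
            k = np.array([rbf(xi, x, self.sigma) for xi in self.X])
            return k @ (self.Kinv @ np.array(self.y))   # alpha = (K + lam*I)^{-1} y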

Concept Decompositions for Large Sparse Text Data using Clustering

by Inderjit S. Dhillon, Dharmendra S. Modha - Machine Learning, 2000
"... Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors -- a few thousand dimensions and a sparsity of 95 to 99% is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we ..."
Cited by 407 (27 self)
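A minimal sketch of the spherical k-means iteration the entry describes, assuming the rows of X are tf-idf document vectors (the initialization and iteration cap are illustrative choices):

    import numpy as np

    def spherical_kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)       # unit-norm documents
        concepts = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # assign each document to the nearest concept in cosine similarity
            labels = np.argmax(X @ concepts.T, axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    c = members.sum(axis=0)
                    concepts[j] = c / np.linalg.norm(c)        # normalized centroid = concept vector
        return labels, concepts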

Mixture Kernel Least Mean Square

by Rosha Pokharel, Sohan Seth, Jose C. Principe
"... Abstract—Instead of using single kernel, different approaches of using multiple kernels have been proposed recently in kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL in order to select the appropriate kernel given a pool of ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
mixture of models. We propose mixture ker-nel least mean square (MxKLMS) adaptive filtering algorithm, where the kernel least mean square (KLMS) filters learned with different kernels, act in parallel at each input instance and are competitively combined such that the filter with the best kernel
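The KLMS filters this entry combines are themselves simple; a hedged sketch of one such filter (RBF kernel, step size eta, both illustrative), of which MxKLMS would run one per candidate kernel and gate their outputs competitively:

    import numpy as np

    class KLMS:
        def __init__(self, eta=0.5, sigma=1.0):
            self.eta, self.sigma = eta, sigma
            self.centers, self.coeffs = [], []

        def predict(self, x):
            # f_t(x) = sum_i c_i * K(x_i, x) over all past inputs
            return sum(c * np.exp(-np.sum((x - xc) ** 2) / (2 * self.sigma ** 2))
                       for c, xc in zip(self.coeffs, self.centers))

        def update(self, x, d):
            e = d - self.predict(x)            # a-priori error on the new sample
            self.centers.append(np.asarray(x, dtype=float))
            self.coeffs.append(self.eta * e)   # LMS gradient step in the feature space
            return e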

The Kernel Recursive Least-Squares Algorithm

by unknown authors
"... Abstract—We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-s ..."
Abstract - Add to MetaCart
Abstract—We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-squares

Kernel partial least squares regression in reproducing kernel Hilbert space

by Roman Rosipal, Leonard J. Trejo - Journal of Machine Learning Research, 2001
"... A family of regularized least squares regression models in a Reproducing Kernel Hilbert Space is extended by the kernel partial least squares (PLS) regression model. Similar to principal components regression (PCR), PLS is a method based on the projection of input (explanatory) variables to the latent ..."
Cited by 154 (10 self)
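A sketch of a NIPALS-style iteration for kernel PLS, assuming K is an already-centered kernel matrix and Y the response matrix (the convergence test and deflation written here follow the usual presentation, not a quote from the paper):

    import numpy as np

    def kernel_pls(K, Y, n_components, tol=1e-8, max_iter=500):
        K, Y = K.copy(), Y.astype(float).copy()
        n = K.shape[0]
        T, U = [], []
        for _ in range(n_components):
            u = Y[:, [0]].copy()
            for _ in range(max_iter):
                t = K @ u
                t /= np.linalg.norm(t)
                u_new = Y @ (Y.T @ t)              # project responses onto the score t
                u_new /= np.linalg.norm(u_new)
                if np.linalg.norm(u_new - u) < tol:
                    u = u_new
                    break
                u = u_new
            # deflate K and Y with the extracted latent score t
            P = np.eye(n) - t @ t.T
            K = P @ K @ P
            Y = Y - t @ (t.T @ Y)
            T.append(t)
            U.append(u)
        return np.hstack(T), np.hstack(U)          # latent scores of inputs and responses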

Bayesian Extensions of Kernel Least Mean Squares

by Il Memming Park, Sohan Seth, Steven Van Vaerenbergh
"... Abstract—The kernel least mean squares (KLMS) algorithm is a computationally efficient nonlinear adaptive filtering method that “kernelizes ” the celebrated (linear) least mean squares algorithm. We demonstrate that the least mean squares algorithm is closely related to the Kalman filtering, and thu ..."
Abstract - Add to MetaCart
Abstract—The kernel least mean squares (KLMS) algorithm is a computationally efficient nonlinear adaptive filtering method that “kernelizes ” the celebrated (linear) least mean squares algorithm. We demonstrate that the least mean squares algorithm is closely related to the Kalman filtering
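A hedged sketch of the connection the abstract points to, in standard notation rather than the paper's own: the LMS update

\[ w_{t+1} = w_t + \eta\, e_t x_t, \qquad e_t = d_t - w_t^{\top} x_t, \]

has the form of a Kalman measurement update \( w_{t+1} = w_t + k_t e_t \) in which the covariance-weighted Kalman gain \( k_t = P_t x_t / (x_t^{\top} P_t x_t + r) \) is replaced by the fixed gain \( \eta x_t \); reading LMS this way is what opens the door to Bayesian, Kalman-style extensions.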

A Least-Squares Approach to Blind Channel Identification

by Guanghan Xu, Hui Liu, Lang Tong, Thomas Kailath - IEEE Trans. Signal Processing, 1995
"... Conventional blind channel identification algorithms are based on channel outputs and knowledge of the probabilistic model of the channel input. In some practical applications, however, the input statistical model may not be known, or there may not be sufficient data to obtain accurate enough estimates ..."
Cited by 183 (7 self)
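One least-squares formulation in the spirit of this entry is the two-channel cross-relation: both sensor outputs share one input, so y1 * h2 = y2 * h1, and stacking convolution matrices gives a homogeneous system whose least-squares solution is the smallest right singular vector. A sketch under the assumption that the channel order L is known:

    import numpy as np
    from scipy.linalg import toeplitz

    def cross_relation_identify(y1, y2, L):
        # Returns unit-norm estimates (h1, h2), identifiable only up to
        # a common scalar, as is inherent to blind identification.
        def conv_matrix(y, L):
            # row i is [y[L+i], y[L+i-1], ..., y[i]]: valid linear convolution
            return toeplitz(y[L:], y[L::-1])
        A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
        _, _, Vt = np.linalg.svd(A)    # A @ [h1; h2] ~ 0 in the noiseless case
        h = Vt[-1]
        return h[:L + 1], h[L + 1:]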