CiteSeerX

Results 1 - 10 of 4,686

Near-optimal algorithms for online matrix prediction

by Elad Hazan, Satyen Kale, Shai Shalev-Shwartz - CoRR
"... In several online prediction problems of recent interest the comparison class is composed of matrices with bounded entries. For example, in the online max-cut problem, the comparison class is matrices which represent cuts of a given graph and in online gambling the comparison class is matrices which ..."
Abstract - Cited by 16 (6 self)
online learning algorithm that enjoys a regret bound of Õ(√(βτT)) for all problems in which the comparison class is composed of (β, τ)-decomposable matrices. By analyzing the decomposability of cut matrices, triangular matrices, and low trace-norm matrices, we derive near-optimal regret bounds
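Spelled out (assuming the standard online-learning notion of regret against the best fixed comparator, which the snippet does not restate), the quoted guarantee reads:

```latex
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(W_t) \;-\; \min_{W \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(W) \;\le\; \tilde{O}\!\left(\sqrt{\beta \tau T}\right)
```

where W_t is the matrix predicted in round t and 𝒲 is the comparison class of (β, τ)-decomposable matrices.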

Commentary on “Near-Optimal Algorithms for Online Matrix Prediction”

by Rina Foygel, Shie Mannor, Nathan Srebro, Robert C. Williamson
"... This piece is a commentary on the paper by Hazan et al. (2012b). In their paper, they introduce the class of (β, τ)-decomposable matrices, and show that well-known matrix regularizers and matrix classes (e.g. matrices with bounded trace norm) can be viewed as special cases of their construction. The ..."
Abstract

Near-Optimal Algorithms for Online Matrix Prediction (25th Annual Conference on Learning Theory)

by Elad Hazan, Satyen Kale, Shai Shalev-Shwartz
"... In several online prediction problems of recent interest the comparison class is composed of matrices with bounded entries. For example, in the online max-cut problem, the comparison class is matrices which represent cuts of a given graph and in online gambling the comparison class is matrices which ..."
Abstract
online learning algorithm that enjoys a regret bound of Õ(√(βτT)) for all problems in which the comparison class is composed of (β, τ)-decomposable matrices. By analyzing the decomposability of cut matrices, low trace-norm matrices and triangular matrices, we derive near-optimal regret bounds

Near-optimal sensor placements in Gaussian processes

by Andreas Krause, Ajit Singh, Carlos Guestrin, Chris Williams - In ICML , 2005
"... When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in t ..."
Abstract - Cited by 342 (34 self)
) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual
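The criterion in this snippet, choosing sensor locations that maximize mutual information between selected and unselected sites, admits a simple greedy sketch when the GP covariance matrix K over candidate locations is known. The function names and the brute-force candidate scan below are illustrative, not the authors' implementation:

```python
import numpy as np

def cond_var(K, i, S):
    # GP prior variance of location i conditioned on observing the set S.
    if not S:
        return K[i, i]
    S = list(S)
    k_iS = K[np.ix_([i], S)]
    K_SS = K[np.ix_(S, S)]
    return K[i, i] - (k_iS @ np.linalg.solve(K_SS, k_iS.T))[0, 0]

def greedy_mi_placement(K, k):
    # Greedily add the location y with the largest mutual-information gain
    # H(y | A) - H(y | V \ (A ∪ {y})).  For Gaussians this gain is
    # (1/2) * log of a variance ratio, so comparing the ratio suffices.
    n = K.shape[0]
    A = []
    for _ in range(k):
        best, best_ratio = None, -np.inf
        for y in range(n):
            if y in A:
                continue
            rest = [v for v in range(n) if v != y and v not in A]
            ratio = cond_var(K, y, A) / cond_var(K, y, rest)
            if ratio > best_ratio:
                best, best_ratio = y, ratio
        A.append(best)
    return A
```

The point of the paper is that this greedy rule is not a mere heuristic: the mutual-information objective is submodular, which yields a constant-factor approximation guarantee for the greedy placement.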

A new learning algorithm for blind signal separation

by S. Amari, A. Cichocki, H. H. Yang , 1996
"... A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Abstract - Cited by 622 (80 self)
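The update rule derived in that paper is usually written ΔW ∝ (I − φ(y)yᵀ)W, with y = Wx and φ a fixed score nonlinearity. A toy numpy sketch follows; the tanh nonlinearity, learning rate, and epoch loop are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

def ica_natural_gradient(X, lr=0.01, epochs=50, seed=0):
    # Blind source separation: adapt the unmixing matrix W with the rule
    # W <- W + lr * (I - phi(y) y^T) W, which drives the outputs y = W x
    # toward statistical independence (low average mutual information).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.eye(n)
    for _ in range(epochs):
        for x in X.T[rng.permutation(X.shape[1])]:
            y = W @ x
            phi = np.tanh(y)  # a common score choice for super-Gaussian sources
            W += lr * (np.eye(n) - np.outer(phi, y)) @ W
    return W
```

When the rule converges, W·A (with A the unknown mixing matrix) is approximately a scaled permutation, i.e. the sources are recovered up to order and scale.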

Dynamic Bayesian Networks: Representation, Inference and Learning

by Kevin Patrick Murphy , 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have bee ..."
Abstract - Cited by 770 (3 self)
) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy

Eigentaste: A Constant Time Collaborative Filtering Algorithm

by Ken Goldberg, Theresa Roeder, Dhruv Gupta, Chris Perkins , 2000
"... Eigentaste is a collaborative filtering algorithm that uses universal queries to elicit real-valued user ratings on a common set of items and applies principal component analysis (PCA) to the resulting dense subset of the ratings matrix. PCA facilitates dimensionality reduction for offline clusterin ..."
Abstract - Cited by 378 (6 self)
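The offline stage described here is ordinary PCA on the dense gauge-item ratings. A compact numpy sketch (the function name and shapes are illustrative):

```python
import numpy as np

def eigentaste_embed(R, d=2):
    # R: dense (users x gauge items) ratings matrix.
    # Mean-center, eigendecompose the item covariance, keep the top-d axes.
    Rc = R - R.mean(axis=0)
    cov = Rc.T @ Rc / (R.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:d]]   # top-d principal directions
    return Rc @ top                             # user coordinates in the PCA plane
```

In Eigentaste the resulting low-dimensional plane is then partitioned into user clusters offline, which is what makes the online recommendation step constant time.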

Online learning for matrix factorization and sparse coding

by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro , 2010
"... Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to ad ..."
Abstract - Cited by 330 (31 self)
to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations
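The online scheme alternates two steps per sample: sparse-code it against the current dictionary, then improve the dictionary from accumulated sufficient statistics. In the sketch below, ISTA stands in for the lasso solver used in the paper, and all names and constants are illustrative:

```python
import numpy as np

def ista(D, x, lam=0.1, iters=50):
    # Lasso sparse coding of x in dictionary D by proximal gradient (ISTA).
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the quadratic
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def online_dictionary_learning(X, k, lam=0.1, seed=0):
    # One pass over the columns of X: code each sample, accumulate the
    # statistics A = sum(a a^T), B = sum(x a^T), then update the dictionary
    # column by column (block coordinate descent, unit-norm constraint).
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    D = rng.standard_normal((m, k))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((k, k))
    B = np.zeros((m, k))
    for x in X.T:
        a = ista(D, x, lam)
        A += np.outer(a, a)
        B += np.outer(x, a)
        for j in range(k):
            if A[j, j] > 1e-10:
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = u / max(1.0, np.linalg.norm(u))
    return D
```

The accumulated statistics are what make the algorithm online: the dictionary update never revisits past samples, only A and B.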

Greedy layer-wise training of deep networks

by Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle , 2006
"... Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allow ..."
Abstract - Cited by 394 (48 self)
introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success
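The greedy layer-wise idea can be illustrated with plain autoencoder layers trained one at a time, each on the codes produced by the layer below. This is a stand-in for the RBM and autoencoder layers studied in the paper; the learning rate, tied weights, and layer sizes are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, h, lr=0.1, epochs=100, seed=0):
    # One tied-weight sigmoid autoencoder layer, squared-error loss,
    # full-batch gradient descent.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], h)) * 0.1
    b, c = np.zeros(h), np.zeros(X.shape[1])
    for _ in range(epochs):
        H = sigmoid(X @ W + b)       # encode
        Xr = H @ W.T + c             # decode (linear output, tied weights)
        err = Xr - X
        dpre = (err @ W) * H * (1 - H)               # backprop through encoder
        W -= lr * (X.T @ dpre + err.T @ H) / len(X)  # both uses of W
        b -= lr * dpre.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

def greedy_pretrain(X, layer_sizes):
    # Train layers greedily: each new layer fits the previous layer's codes.
    params, H = [], X
    for h in layer_sizes:
        W, b = train_autoencoder(H, h)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return params
```

After pretraining, the stacked weights initialize a deep network that is then fine-tuned on the supervised objective.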

Ultraconservative Online Algorithms for Multiclass Problems

by Koby Crammer, Yoram Singer - Journal of Machine Learning Research , 2001
"... In this paper we study online classification algorithms for multiclass problems in the mistake bound model. The hypotheses we use maintain one prototype vector per class. Given an input instance, a multiclass hypothesis computes a similarity-score between each prototype and the input instance and th ..."
Abstract - Cited by 320 (21 self)
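The hypothesis class described (one prototype per class, predict by highest similarity score) pairs naturally with the simplest mistake-driven update, which is one member of the ultraconservative family analyzed by Crammer and Singer. The sketch below is that basic variant, not the paper's full family:

```python
import numpy as np

def multiclass_perceptron(X, y, n_classes, epochs=10):
    # One prototype vector per class; predict the class whose prototype
    # has the highest inner product with the input.  On a mistake, update
    # only the true and predicted prototypes (an ultraconservative update:
    # prototypes of classes not involved in the error are untouched).
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, r in zip(X, y):
            pred = int(np.argmax(W @ x))
            if pred != r:
                W[r] += x
                W[pred] -= x
    return W
```

The general ultraconservative family may spread the negative part of the update across every prototype that out-scored the correct class, but it never touches prototypes uninvolved in the mistake.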

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University