Results 1–10 of 290,138
Learning Preferences with Millions of Parameters by Enforcing Sparsity
"... We study the retrieval task that ranks a set of objects for a given query in the pairwise preference learning framework. Recently, researchers found that raw features (e.g. words for text retrieval) and their pairwise features, which describe relationships between two raw features (e.g. w ..."
Cited by 1 (0 self)
"... of parameters. In this paper, we propose to learn a sparse representation of the pairwise features under the preference learning framework using L1 regularization. Based on stochastic gradient descent, an online algorithm is devised to enforce sparsity using a mini-batch shrinkage strategy. On multiple ..."
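The mini-batch shrinkage idea this abstract describes — take a stochastic gradient step, then soft-threshold the weights so the L1 penalty drives many of them to exactly zero — can be sketched as follows. This is a generic illustration on squared loss, not the paper's exact formulation; the function names and hyperparameters are made up for the sketch.

```python
import numpy as np

def soft_threshold(w, t):
    # Shrink each weight toward zero by t; weights within [-t, t] become exactly 0.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def sgd_l1_minibatch(X, y, lam=0.1, lr=0.01, epochs=10, batch=4, seed=0):
    # Minimize (1/2)||Xw - y||^2 / n + lam * ||w||_1 online: one gradient step
    # per mini-batch, followed by one shrinkage (proximal) step.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch):
            b = idx[s:s + batch]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

On data where only a few features matter, the irrelevant weights are shrunk to (near) zero while the informative ones survive, which is the sparsity-enforcing behavior the abstract refers to.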
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, 2008
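The scheme this title refers to is commonly known as ISTA (iterative soft-thresholding). A minimal sketch for the sparsity-constrained inverse problem min_x ||Ax − b||² + λ||x||₁ — a generic illustration under standard assumptions, not the paper's exact presentation:

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    # Iterative soft-thresholding for min_x ||Ax - b||^2 + lam * ||x||_1.
    # Step size 1/L, where L = 2 * sigma_max(A)^2 is the Lipschitz constant
    # of the smooth part's gradient; each iteration is a gradient step
    # followed by a shrinkage (soft-threshold) step.
    L = 2 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x
```

For A = I the iteration reduces to soft-thresholding b itself, which makes the sparsifying effect of the threshold easy to check by hand.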
An Efficient Boosting Algorithm for Combining Preferences, 1999
"... The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting ..."
Cited by 707 (18 self)
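A sketch in the spirit of this boosting approach to combining preferences: maintain a distribution over preference pairs, repeatedly pick the weak ranker (here, a simple feature-threshold rule) that best agrees with the weighted pairs, and reweight toward the pairs it misorders. This is a simplified illustration with {0,1}-valued weak rankers, not the paper's exact algorithm.

```python
import math

def boost_preferences(features, prefs, rounds=10):
    # features: list of per-object feature vectors.
    # prefs: list of (i, j) pairs meaning object i should rank above object j.
    # Weak rankers are threshold rules h(x) = 1[x[f] > theta], with candidate
    # thresholds taken from the observed feature values.
    n_feat = len(features[0])
    weak = [(f, x[f]) for f in range(n_feat) for x in features]
    D = {p: 1.0 / len(prefs) for p in prefs}   # distribution over pairs
    ensemble = []                              # (alpha, feature, theta)
    for _ in range(rounds):
        best = None
        for f, th in weak:
            # r in [-1, 1]: weighted tendency of this rule to agree with prefs.
            r = sum(D[(i, j)] * ((features[i][f] > th) - (features[j][f] > th))
                    for (i, j) in prefs)
            if best is None or abs(r) > abs(best[0]):
                best = (r, f, th)
        r, f, th = best
        if abs(r) >= 1.0:                      # perfect ranker: stop early
            ensemble.append((1.0 if r > 0 else -1.0, f, th))
            break
        alpha = 0.5 * math.log((1 + r) / (1 - r))
        # Reweight: pairs this ranker orders wrongly (or not at all) gain mass.
        for (i, j) in prefs:
            D[(i, j)] *= math.exp(-alpha * ((features[i][f] > th) - (features[j][f] > th)))
        z = sum(D.values())
        for p in D:
            D[p] /= z
        ensemble.append((alpha, f, th))
    def score(x):
        return sum(a * (x[f] > th) for a, f, th in ensemble)
    return score
```

The returned scoring function induces a single combined ranking; the same loop structure applies when the weak rankers are themselves the outputs of different search engines.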
Locally weighted learning, Artificial Intelligence Review, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 594 (53 self)
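A minimal sketch of the technique the survey focuses on, locally weighted linear regression: fit a separate weighted least-squares model for each query point, with a Gaussian kernel as the weighting function and tau as the smoothing parameter. Function and parameter names here are illustrative, not from the survey.

```python
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    # Locally weighted linear regression: weight each training point by a
    # Gaussian kernel of its distance to the query, then solve the weighted
    # normal equations for a local linear model (with intercept).
    Xa = np.hstack([np.ones((len(X), 1)), X])            # add intercept column
    xq = np.concatenate([[1.0], np.atleast_1d(x_query)])
    d2 = np.sum((X - x_query) ** 2, axis=1)              # squared distances
    w = np.exp(-d2 / (2 * tau ** 2))                     # kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return xq @ theta
```

Because the model is refit per query, this is "lazy": nothing is learned until a prediction is requested, and the smoothing parameter tau controls how local the fit is.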
Semi-Supervised Learning Literature Survey, 2006
"... We review the literature on semi-supervised learning, which is an area in machine learning and, more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semi-supervised learning. This document is a chapter ..."
Cited by 757 (8 self)
Machine Learning in Automated Text Categorization, ACM Computing Surveys, 2002
"... The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual ..."
Cited by 1658 (22 self)
Sparse Bayesian Learning and the Relevance Vector Machine, 2001
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vec ..."
Cited by 958 (5 self)
What Can Economists Learn from Happiness Research?, forthcoming in Journal of Economic Literature, 2002
"... Happiness is generally considered to be an ultimate goal in life; virtually everybody wants to be happy. The United States Declaration of Independence of 1776 takes it as a self-evident truth that the “pursuit of happiness” is an “unalienable right”, comparable to life and liberty. It follows that economics is – or should be – about individual happiness. In particular, the question is: how do economic growth, unemployment and inflation, as well as institutional factors such as good governance, affect individual well-being? In addition to this intrinsic interest, there are three major reasons for economists to consider happiness. The first is economic policy. At the micro-level, it is often impossible to make a Pareto-optimal proposal, because a social action entails costs for some individuals. Hence an evaluation of the net effects, in terms of individual utilities, is needed. On an aggregate level, economic policy must deal with trade-offs, especially those between unemployment and ..."
Cited by 517 (24 self)
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, 2006
"... In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and inc ..."
Cited by 930 (41 self)
"... signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method—the K-SVD algorithm—generalizing the K-means clustering process. K-SVD is an iterative method ..."
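A compact sketch of the alternating scheme the abstract describes: sparse-code every training signal against the current dictionary, then update one atom at a time by taking the best rank-1 (leading SVD) approximation of the residual restricted to the signals that use that atom. Orthogonal matching pursuit stands in for the sparse coding stage here (a common pairing, though any pursuit method works); all names and parameters are illustrative.

```python
import numpy as np

def omp(D, y, k):
    # Greedy orthogonal matching pursuit: repeatedly pick the atom most
    # correlated with the residual, then refit coefficients by least squares.
    idx, x = [], np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, sparsity, iters=10, seed=0):
    # Y: (dim, n_signals) matrix of training signals, one per column.
    # Alternate (1) sparse coding of every signal and (2) per-atom updates:
    # for atom k, form the residual E that ignores atom k's contribution,
    # restrict it to the signals that actually use atom k, and replace the
    # atom and its coefficients with the leading SVD pair of E.
    rng = np.random.default_rng(seed)
    dim, n_sig = Y.shape
    D = rng.normal(size=(dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)             # unit-norm initial atoms
    for _ in range(iters):
        X = np.column_stack([omp(D, Y[:, i], sparsity) for i in range(n_sig)])
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]
            if len(users) == 0:
                continue                       # unused atom: leave as-is
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                  # updated atom stays unit-norm
            X[k, users] = S[0] * Vt[0]
    return D, X
```

Updating the coefficients together with the atom (rather than the atom alone) is what distinguishes this scheme from a plain gradient update of the dictionary, and is the sense in which it generalizes the K-means cluster-centroid update.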