Results 1–10 of 5,967
Bias-Variance Analysis in Estimating True Query Model for Information Retrieval
oro.open.ac.uk
"... Bias-variance analysis in estimating true query model for information retrieval ..."
Abstract
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract

Cited by 879 (14 self)
, where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability
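The Dantzig selector minimizes ‖x‖₁ subject to ‖Aᵀ(y − Ax)‖∞ ≤ δ, which can be rewritten as a linear program by splitting x into its positive and negative parts. A minimal sketch with scipy (the problem sizes and δ below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, delta):
    """Solve min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= delta
    as a linear program via the split x = u - v with u, v >= 0."""
    n, p = A.shape
    G = A.T @ A                       # p x p Gram matrix
    b = A.T @ y                       # correlations with the observations
    c = np.ones(2 * p)                # objective: sum(u) + sum(v) = ||x||_1
    # Constraints:  G(u - v) <= b + delta  and  -G(u - v) <= -b + delta
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([b + delta, -b + delta])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Sparse recovery with far fewer rows than columns (p >> n)
rng = np.random.default_rng(0)
n, p = 15, 40
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[[3, 17]] = [2.0, -1.5]
y = A @ x_true                        # noiseless, for illustration
x_hat = dantzig_selector(A, y, delta=1e-4)
```

Since the true x is feasible here, the LP optimum is guaranteed to satisfy the residual constraint and to have ℓ₁ norm no larger than ‖x_true‖₁.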
Markov Random Field Models in Computer Vision
, 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Abstract

Cited by 516 (18 self)
. A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model
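MAP labeling under an MRF prior is often approximated by local moves; a classic cheap approximation is Iterated Conditional Modes (ICM), which repeatedly sets each label to the value minimizing its local energy. A minimal binary (Ising-style) denoising sketch, with all parameters illustrative rather than taken from the paper:

```python
import numpy as np

def icm_denoise(obs, beta=1.0, n_sweeps=5):
    """Approximate MAP labeling by Iterated Conditional Modes (ICM).
    Energy: -sum_i obs_i * x_i  -  beta * sum_{i~j} x_i * x_j,
    with labels x_i in {-1, +1} on a 4-connected grid."""
    x = np.sign(obs).astype(float)        # initialize from the data
    x[x == 0] = 1.0
    H, W = obs.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nb = 0.0                  # sum of 4-neighbour labels
                if i > 0:
                    nb += x[i - 1, j]
                if i < H - 1:
                    nb += x[i + 1, j]
                if j > 0:
                    nb += x[i, j - 1]
                if j < W - 1:
                    nb += x[i, j + 1]
                # local argmin of the energy: sign(obs + beta * neighbours)
                x[i, j] = 1.0 if obs[i, j] + beta * nb >= 0 else -1.0
    return x

def energy(x, obs, beta=1.0):
    pair = np.sum(x[1:, :] * x[:-1, :]) + np.sum(x[:, 1:] * x[:, :-1])
    return -np.sum(obs * x) - beta * pair

rng = np.random.default_rng(1)
clean = np.ones((12, 12))
clean[:, 6:] = -1
noisy = clean + 0.8 * rng.standard_normal(clean.shape)
labels = icm_denoise(noisy, beta=1.0)
```

Each ICM update is an exact local minimization, so the total energy never increases across sweeps; the result is a local (not global) MAP estimate.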
The adaptive LASSO and its oracle properties
 Journal of the American Statistical Association
"... The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain sc ..."
Abstract

Cited by 683 (10 self)
as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso
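The adaptive lasso penalizes each coefficient by a data-driven weight (typically 1/|pilot estimate|^γ), and reduces to a plain lasso after rescaling the columns of X, which is why the same lasso algorithm applies. A minimal numpy sketch (the coordinate-descent solver and the toy data are illustrative; function names are ours):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=300):
    """Plain lasso, 0.5*||y - X b||^2 + lam*||b||_1, by coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ b                            # residual, kept up to date
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]              # remove feature j's contribution
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def adaptive_lasso(X, y, lam, gamma=1.0):
    """Adaptive lasso via the rescaling trick: weight each penalty term by
    1/|b_pilot_j|^gamma, i.e. solve a plain lasso on rescaled columns."""
    b_pilot, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS pilot estimate
    w = 1.0 / (np.abs(b_pilot) ** gamma + 1e-8)       # penalty weights
    Xs = X / w                                        # column j scaled by 1/w_j
    bs = lasso_cd(Xs, y, lam)
    return bs / w                                     # undo the rescaling

rng = np.random.default_rng(2)
n, p = 60, 8
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0, 0, 1.5, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y, lam=2.0)
```

Large coefficients get small weights (little shrinkage) while coefficients near zero get large weights (heavy shrinkage), which is what drives the oracle property.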
Investing for the long run when returns are predictable
 Journal of Finance
, 2000
"... We examine how the evidence of predictability in asset returns affects optimal portfolio choice for investors with long horizons. Particular attention is paid to estimation risk, or uncertainty about the true values of model parameters. We find that even after incorporating parameter uncertainty, th ..."
Abstract

Cited by 444 (0 self)
We examine how the evidence of predictability in asset returns affects optimal portfolio choice for investors with long horizons. Particular attention is paid to estimation risk, or uncertainty about the true values of model parameters. We find that even after incorporating parameter uncertainty
MAC/FAC: A Model of Similarity-based Retrieval
 Cognitive Science
, 1991
"... We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes ex ..."
Abstract

Cited by 409 (111 self)
redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been
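The MAC stage's key idea is that a flat content vector (predicate counts, ignoring structure) is cheap to compare by dot product, and that score correlates with how well the full structural match will go. A toy sketch of that first stage (the vocabulary, items, and function names are illustrative; the real FAC stage would run SME on the survivors):

```python
import numpy as np
from collections import Counter

def content_vector(description, vocab):
    """Encode a structured description as counts of its predicates.
    The flat vector deliberately ignores structure -- the MAC-stage idea."""
    counts = Counter(description)
    return np.array([counts[t] for t in vocab], dtype=float)

def mac_stage(probe, memory, vocab, top_k=2):
    """Cheap filter: rank memory items by dot product of content vectors."""
    pv = content_vector(probe, vocab)
    scores = [(name, float(pv @ content_vector(d, vocab)))
              for name, d in memory.items()]
    scores.sort(key=lambda s: -s[1])
    return scores[:top_k]    # candidates handed to the expensive matcher

# Toy "structured" items flattened to predicate lists (illustrative only)
vocab = ["cause", "greater", "flow", "pressure", "temperature", "revolve"]
memory = {
    "water-flow":   ["cause", "greater", "flow", "pressure", "pressure"],
    "heat-flow":    ["cause", "greater", "flow", "temperature", "temperature"],
    "solar-system": ["cause", "greater", "revolve"],
}
probe = ["cause", "greater", "flow", "pressure"]
candidates = mac_stage(probe, memory, vocab)
```

Items sharing many predicates with the probe score high even when their structures differ, which is exactly how superficial remindings outnumber structural ones.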
Efficient noise-tolerant learning from statistical queries
 Journal of the ACM
, 1998
"... In this paper, we study the problem of learning in the presence of classification noise in the probabilistic learning model of Valiant and its variants. In order to identify the class of “robust” learning algorithms in the most general way, we formalize a new but related model of learning from stat ..."
Abstract

Cited by 353 (5 self)
statistical queries. Intuitively, in this model, a learning algorithm is forbidden to examine individual examples of the unknown target function, but is given access to an oracle providing estimates of probabilities over the sample space of random examples. One of our main results shows that any class
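In the statistical query model, the learner may only ask an oracle for estimates of probabilities over random examples, never for the examples themselves. A minimal simulation of such a STAT oracle (the Hoeffding-based sample size and the parity example are our illustrative choices, not from the paper):

```python
import math
import random

def stat_oracle(chi, examples, tau):
    """Simulated STAT oracle: estimate E[chi(x, f(x))] to within additive
    tolerance tau with high probability. The learner sees only this
    aggregate statistic, never individual examples."""
    # Hoeffding: m >= log(2/delta) / (2 tau^2) samples suffice; delta = 0.01
    m = math.ceil(math.log(200) / (2 * tau ** 2))
    sample = random.choices(examples, k=m)
    return sum(chi(x, y) for x, y in sample) / m

# Toy target: label is the parity of two bits; ask the oracle how often
# the first bit agrees with the label (true answer: 0.5).
random.seed(0)
data = [((a, b), a ^ b) for a in (0, 1) for b in (0, 1) for _ in range(25)]
est = stat_oracle(lambda x, y: float(x[0] == y), data, tau=0.05)
```

Noise tolerance falls out of this interface: classification noise perturbs each query answer by a bounded, correctable amount, so any algorithm phrased in terms of such queries inherits robustness.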
Toward Optimal Active Learning through Sampling Estimation of Error Reduction
 In Proc. 18th International Conf. on Machine Learning
, 2001
"... This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other methods are popular because for many learning models, closed form calculation of the expec ..."
Abstract

Cited by 353 (2 self)
of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer
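The scheme can be sketched as: for each candidate query, average over its possible labels (weighted by the current model's posterior), retrain, and score the resulting expected error over the unlabeled pool; query the point with the lowest expected risk. A toy pool-based version using a nearest-centroid model with softmax posteriors (the model and risk proxy are our stand-ins, not the paper's classifier):

```python
import numpy as np

def posteriors(centroids, X):
    """Class posteriors via softmax of negative squared distances
    to class centroids (a simple stand-in probabilistic model)."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def fit_centroids(X, y, n_classes):
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def select_query(X_lab, y_lab, X_pool, n_classes=2):
    """Pick the pool point whose labeling is expected to most reduce
    the estimated future error (mean 1 - max posterior over the pool)."""
    base = fit_centroids(X_lab, y_lab, n_classes)
    p_now = posteriors(base, X_pool)
    best_i, best_risk = None, np.inf
    for i in range(len(X_pool)):
        risk = 0.0
        for c in range(n_classes):          # average over possible labels
            X2 = np.vstack([X_lab, X_pool[i]])
            y2 = np.append(y_lab, c)
            cents = fit_centroids(X2, y2, n_classes)
            p = posteriors(cents, X_pool)   # retrained model's posteriors
            risk += p_now[i, c] * np.mean(1.0 - p.max(axis=1))
        if risk < best_risk:
            best_i, best_risk = i, risk
    return best_i

rng = np.random.default_rng(3)
X_lab = np.array([[0.0, 0.0], [3.0, 3.0]])
y_lab = np.array([0, 1])
X_pool = rng.standard_normal((20, 2)) * 1.5 + 1.5
query = select_query(X_lab, y_lab, X_pool)
```

The double loop (candidates × labels × retrain) is what makes exact expected-error reduction expensive, and why the paper resorts to sampling.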
Eigentaste: A Constant Time Collaborative Filtering Algorithm
, 2000
"... Eigentaste is a collaborative filtering algorithm that uses universal queries to elicit real-valued user ratings on a common set of items and applies principal component analysis (PCA) to the resulting dense subset of the ratings matrix. PCA facilitates dimensionality reduction for offline clusterin ..."
Abstract

Cited by 378 (6 self)
, an online joke recommending system. Jester has collected approximately 2,500,000 ratings from 57,000 users. We use the Normalized Mean Absolute Error (NMAE) measure to compare performance of different algorithms. In the Appendix we use Uniform and Normal distribution models to derive analytic estimates
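NMAE is just the mean absolute error divided by the width of the rating scale, making scores comparable across systems with different scales. A minimal sketch (Jester ratings span roughly -10 to +10, which the defaults below assume):

```python
import numpy as np

def nmae(predicted, actual, r_min=-10.0, r_max=10.0):
    """Normalized Mean Absolute Error: MAE divided by the rating range,
    so that 0 is perfect and scores are scale-independent."""
    err = np.abs(np.asarray(predicted) - np.asarray(actual))
    return float(np.mean(err) / (r_max - r_min))

pred = [1.0, -2.0, 5.0]
true = [2.0, -2.0, 3.0]
score = nmae(pred, true)   # MAE = 1.0 over a range of 20 -> 0.05
```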