Results 1–10 of 67,022
Bradley–Terry Models in R: The BradleyTerry2 Package. Journal of Statistical Software, 2012
"... This is a short overview of the R add-on package BradleyTerry2, which facilitates the specification and fitting of Bradley–Terry logit, probit or cauchit models to pair-comparison data. Included are the standard 'unstructured' Bradley–Terry model, structured versions in which the ..."
Cited by 10 (3 self)
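For readers new to this model class, the logit variant referenced above has a standard textbook statement (this is the general model, not package-specific notation): each item i carries an ability parameter λ_i, and the probability that i beats j in a pair comparison is

```latex
\operatorname{logit}\Pr(i \text{ beats } j) = \lambda_i - \lambda_j,
\qquad
\Pr(i \text{ beats } j) = \frac{e^{\lambda_i}}{e^{\lambda_i} + e^{\lambda_j}}.
```

The probit and cauchit variants replace the logistic link with the normal or Cauchy CDF.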
The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 1993
A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, 1997
"... We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model ..."
Cited by 693 (4 self)
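The first of the tutorial's two applications can be illustrated with a minimal NumPy implementation of EM for a one-dimensional, two-component Gaussian mixture; the synthetic data and initial values below are illustrative assumptions, not taken from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data: two Gaussian components (illustrative choice).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Initial guesses for mixing weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss(x, mu, var):
    """Normal density, broadcast over components."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibility of each component for each point.
    dens = w * gauss(x[:, None], mu, var)            # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

Each iteration provably does not decrease the data log-likelihood; here the estimated means settle near the true values of -2 and 3.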
Performance tournaments with crowdsourced judges. In Proceedings of the American Statistical Association, Section on Marketing Statistics, 2013
"... A performance slam is a competition among a fixed set of performances whereby pairs of performances are judged by audience participants. When performances are recorded on electronic media, performance slams become amenable to audiences that watch online and judge asynchronously (crowdsourced). In o ..."
"... (2) Its parameters are calculated by rewriting the likelihood gradient as a fixed-point estimate, one which mimics the estimate of Mantel and Haenszel (1959). (3) Each pair of performances is chosen sequentially, always chosen to minimize the weighted variance of (the logarithms of) the Bradley–Terry ..."
Cited by 1 (0 self)
MM algorithms for generalized Bradley–Terry models. The Annals of Statistics, 2004
"... The Bradley–Terry model for paired comparisons is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs. Among the many studies of the model in the past 75 years, numerous authors have generalized it in several ..."
Cited by 72 (2 self)
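For the basic model, the MM approach of this paper yields a simple fixed-point update, π_i ← W_i / Σ_{j≠i} n_ij/(π_i + π_j), where W_i is item i's total wins and n_ij the number of comparisons between i and j. A minimal sketch with a made-up win matrix:

```python
import numpy as np

# Hypothetical win matrix for 3 items: wins[i, j] = times i beat j.
wins = np.array([[0., 7., 4.],
                 [3., 0., 6.],
                 [6., 4., 0.]])
n = wins + wins.T              # comparisons per pair (here 10 each)
p = np.ones(3)                 # initial ability parameters

for _ in range(200):
    # MM update: pi_i <- W_i / sum_{j != i} n_ij / (pi_i + pi_j)
    W = wins.sum(axis=1)
    denom = n / (p[:, None] + p[None, :])
    np.fill_diagonal(denom, 0.0)
    p = W / denom.sum(axis=1)
    p /= p.sum()               # fix the scale (only ratios are identified)
```

With a balanced design like this one, the resulting ranking matches the ranking by total wins (11, 9 and 10 wins here, so item 0 ranks first and item 1 last).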
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
"... single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ɛ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of the SVM. For a linear kernel, the total ..."
Cited by 542 (20 self)
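The per-iteration update the abstract describes, shrink w by the regularizer's sub-gradient and add the example's hinge-loss sub-gradient when its margin is violated, can be sketched as follows; the toy data, λ value, and iteration count are illustrative assumptions:

```python
import numpy as np

def pegasos(X, y, lam=0.1, T=2000, seed=0):
    """Stochastic sub-gradient solver for the linear SVM objective
    lam/2 * ||w||^2 + mean_i max(0, 1 - y_i * <w, x_i>)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                   # one random example per step
        eta = 1.0 / (lam * t)                 # step size 1/(lambda * t)
        margin_ok = y[i] * (X[i] @ w) >= 1
        w *= 1 - eta * lam                    # regularizer sub-gradient
        if not margin_ok:                     # hinge loss active
            w += eta * y[i] * X[i]
    return w

# Toy separable data (illustrative; no bias term).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = pegasos(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

The full algorithm also admits an optional projection onto a ball of radius 1/√λ after each step, which is omitted here for brevity.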
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity, 1980
"... This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ..."
Cited by 3211 (5 self)
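The estimator described (often called HC0, or the White estimator) is the "sandwich" (X'X)⁻¹ X' diag(ê_i²) X (X'X)⁻¹ built from OLS residuals. A minimal sketch on synthetic heteroskedastic data (all data-generating choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 3, n)
X = np.column_stack([np.ones(n), x])           # intercept + regressor
# Heteroskedastic errors: noise scale grows with x (illustrative).
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + x)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS point estimates
resid = y - X @ beta

# Sandwich estimator: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (resid[:, None] ** 2 * X)
V_hc0 = XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(V_hc0))                   # robust standard errors
```

Note that only the standard errors change; the OLS coefficient estimates themselves are untouched.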
The Dantzig selector: statistical estimation when p is much larger than n, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Cited by 879 (14 self)
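For reference, the estimator the paper proposes for this setting is the solution of a convex (in fact linear) program, where λ_p is a scalar tuning level and σ the noise level:

```latex
\hat{x} = \arg\min_{\tilde{x} \in \mathbb{R}^p} \|\tilde{x}\|_{\ell_1}
\quad \text{subject to} \quad
\|A^{\top}(y - A\tilde{x})\|_{\ell_\infty} \le \lambda_p \, \sigma .
```

The ℓ∞ constraint bounds the correlation of every column of A with the residual, which is what distinguishes the Dantzig selector from the lasso's penalized least-squares formulation.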
A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. International Joint Conference on Artificial Intelligence, 1995
"... We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), te ..."
"... ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment (over half a million runs of C4.5 and a Naive-Bayes algorithm) to estimate the effects of different parameters on these algorithms on real-world datasets. For cross ..."
Cited by 1283 (11 self)
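Ten-fold cross-validation as compared in this study can be sketched generically; the nearest-class-mean "classifier" below is a deliberately simple stand-in (not C4.5 or Naive Bayes), and the data are synthetic:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def cross_val_accuracy(X, y, fit, predict, k=10):
    """Mean held-out accuracy over k train/test splits."""
    accs = []
    for fold in kfold_indices(len(y), k):
        train = np.ones(len(y), dtype=bool)
        train[fold] = False                      # hold this fold out
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[fold]) == y[fold]))
    return float(np.mean(accs))

# Stand-in classifier: predict the class with the nearest mean.
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = np.array(list(model))
    dists = np.stack([np.linalg.norm(X - m, axis=1) for m in model.values()])
    return classes[np.argmin(dists, axis=0)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
acc = cross_val_accuracy(X, y, fit, predict, k=10)
```

Leave-one-out is the k = n special case; the paper's point is that k = 10 is both cheaper and, for model selection, often the better estimate.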
Choosing multiple parameters for support vector machines. Machine Learning, 2002
"... The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing para ..."
Cited by 470 (17 self)
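The idea of descending on an error estimate rather than grid-searching can be illustrated with a much simplified stand-in: finite-difference gradient descent on held-out squared error over the (log) regularization strength of ridge regression, instead of an actual SVM with the paper's span or radius-margin bounds. All data and step-size choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 10))
beta_true = rng.normal(size=10)
y = X @ beta_true + rng.normal(0.0, 1.0, 80)
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]

def val_loss(log_lam):
    """Held-out squared error of ridge regression at a given log-penalty."""
    lam = np.exp(log_lam)
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(10), Xtr.T @ ytr)
    return float(np.mean((Xva @ w - yva) ** 2))

# Gradient descent on the validation-error estimate over log(lambda),
# with a central finite difference standing in for the analytic gradient.
log_lam, lr, h = 0.0, 0.5, 1e-4
for _ in range(100):
    grad = (val_loss(log_lam + h) - val_loss(log_lam - h)) / (2 * h)
    log_lam -= lr * grad
```

Optimizing in log-space keeps the penalty positive, and the same descent loop scales to many hyperparameters at once, which is exactly where grid search breaks down.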