Results 1 - 10 of 5,757

Additive Logistic Regression: a Statistical View of Boosting

by Jerome Friedman, Trevor Hastie, Robert Tibshirani - Annals of Statistics , 1998
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data ..."
Abstract - Cited by 1750 (25 self)
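The snippet above describes boosting's core idea: sequentially refitting a weak learner to reweighted data. A minimal sketch of discrete AdaBoost with decision stumps, on a made-up 1-D toy dataset (all names and data here are illustrative, not from the paper):

```python
import numpy as np

def fit_stump(X, y, w):
    # Find the (threshold, sign) stump minimizing weighted error on 1-D data.
    best = None
    for thr in np.unique(X):
        for sign in (1, -1):
            pred = np.where(X >= thr, sign, -sign)
            err = np.sum(w * (pred != y))
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)          # start with uniform weights
    stumps = []
    for _ in range(rounds):
        err, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X >= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified points
        w /= w.sum()
        stumps.append((alpha, thr, sign))
    return stumps

def predict(stumps, X):
    # Weighted vote of the stumps.
    F = sum(a * np.where(X >= thr, s, -s) for a, thr, s in stumps)
    return np.sign(F)

X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([-1, -1, 1, 1, -1, 1])   # no single stump separates this
model = adaboost(X, y, rounds=20)
acc = np.mean(predict(model, X) == y)
```

No single stump can fit these labels, but the reweighting loop concentrates weight on the hard points and the combined vote does.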

Structural Equation Modeling And Regression: Guidelines For Research Practice

by David Gefen, Detmar W. Straub, Marie-Claude Boudreau - COMMUNICATIONS OF THE ASSOCIATION FOR INFORMATION SYSTEMS , 2000
"... The growing interest in Structural Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be appropriately selected. After assessing the extent to which these techniques ..."
Abstract - Cited by 454 (9 self)

The group Lasso for logistic regression

by Lukas Meier, Sara Van De Geer, Peter Bühlmann, Eidgenössische Technische Hochschule Zürich - Journal of the Royal Statistical Society, Series B , 2008
"... Summary. The group lasso is an extension of the lasso to do variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regression ..."
Abstract - Cited by 276 (11 self)
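The groupwise selection the abstract describes can be illustrated by the group-lasso penalty and its proximal operator (blockwise soft-thresholding), which zeroes a whole group at once. A hedged sketch; the groups, coefficients, and `lam` value are made up:

```python
import numpy as np

def group_lasso_penalty(beta, groups, lam):
    # Penalty: sum over groups of lam * sqrt(p_g) * ||beta_g||_2
    return sum(lam * np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)

def prox_group(beta, groups, step, lam):
    # Blockwise soft-thresholding: shrink each group's norm, or zero it out.
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        scale = lam * np.sqrt(len(g)) * step
        out[g] = 0.0 if norm <= scale else (1 - scale / norm) * beta[g]
    return out

beta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]                 # two predefined groups
shrunk = prox_group(beta, groups, step=1.0, lam=0.2)
```

The strong group survives (slightly shrunk); the weak group is set exactly to zero, which is the "variable selection on groups" behavior.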

The Determinants of Credit Spread Changes.

by Pierre Collin-Dufresne, Robert S. Goldstein, J. Spencer Martin - Journal of Finance , 2001
"... ABSTRACT Using dealer's quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are ..."
Abstract - Cited by 422 (2 self)
... are highly cross-correlated, and principal components analysis implies that they are mostly driven by a single common factor. An important implication of this finding is that, if any explanatory variables have been omitted, they are likely not firm-specific. We therefore re-run the regression, but this time
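The diagnostic described (regress each series on candidate explanatory variables, then run PCA on the residuals; one dominant component suggests a common omitted factor) can be sketched on synthetic data. Everything here is simulated, not the paper's bond data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 300, 8                           # time periods, series
x = rng.normal(size=(T, 2))             # observed explanatory variables
common = rng.normal(size=T)             # omitted common factor

# Each series loads on x plus the same omitted factor, plus small noise.
Y = (x @ rng.normal(size=(2, N))
     + 0.9 * np.outer(common, np.ones(N))
     + 0.1 * rng.normal(size=(T, N)))

# Residuals from regressing each column of Y on x.
B, *_ = np.linalg.lstsq(x, Y, rcond=None)
resid = Y - x @ B
resid -= resid.mean(axis=0)

# PCA via SVD: share of residual variance on the first principal component.
_, s, _ = np.linalg.svd(resid, full_matrices=False)
share = s[0] ** 2 / (s ** 2).sum()
```

Because the omitted factor is shared across all series, the first principal component of the residuals absorbs almost all of their variance, mirroring the paper's finding.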

The robust beauty of improper linear models in decision making

by Robyn M. Dawes - American Psychologist , 1979
"... ABSTRACT: Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression ..."
Abstract - Cited by 267 (1 self)
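Dawes's contrast between proper and improper linear models can be demonstrated in a few lines: on simulated data, a unit-weight composite (just sum the predictors with theoretically sensible signs) tracks the criterion nearly as well as the optimally weighted regression. The data and coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 3))
y = 0.6 * x[:, 0] + 0.5 * x[:, 1] + 0.4 * x[:, 2] + rng.normal(size=n)

# Proper model: least-squares (optimal) weights.
w, *_ = np.linalg.lstsq(x, y, rcond=None)
proper = x @ w

# Improper model: unit weights (all +1 here; signs chosen from "theory").
improper = x.sum(axis=1)

r_proper = np.corrcoef(proper, y)[0, 1]
r_improper = np.corrcoef(improper, y)[0, 1]
```

The fitted weights beat unit weights in-sample by construction, but only by a small margin, which is the "robust beauty" the title refers to.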

Evaluating the predictive performance of habitat models developed using logistic regression

by Jennie Pearce, Simon Ferrier - Ecological Modelling , 2000
"... The use of statistical models to predict the likely occurrence or distribution of species is becoming an increasingly important tool in conservation planning and wildlife management. Evaluating the predictive performance of models using independent data is a vital step in model development. Such eva ..."
Abstract - Cited by 191 (3 self)
... Lack of reliability can be attributed to two systematic sources, calibration bias and spread. Techniques are described for evaluating both of these sources of error. The discrimination capacity of logistic regression models is often measured by cross-classifying observations and predictions in a two
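The discrimination side of the evaluation described above is commonly summarized by the area under the ROC curve (AUC), which avoids the single cut-off of a cross-classification table. A small sketch using the Mann-Whitney formulation, on invented scores and presence/absence labels (the calibration checks the abstract mentions are not shown):

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney formulation: P(score of a positive > score of a negative),
    # with ties counted as one half.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

labels = np.array([0, 0, 0, 1, 1, 1])           # absence / presence
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])  # predicted probabilities
a = auc(scores, labels)
```

Here 7 of the 9 positive-negative pairs are ranked correctly, so the AUC is 7/9.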

Kernel Logistic Regression and the Import Vector Machine

by Ji Zhu, Trevor Hastie - Journal of Computational and Graphical Statistics , 2001
"... The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an on-going research issue. In this paper, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel ..."
Abstract - Cited by 119 (4 self)
... on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in binary classification, but also can naturally be generalized to the multi-class case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the "support points
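A hedged sketch of the KLR building block: logistic loss on an RBF kernel expansion, fit by plain gradient descent on a tiny synthetic dataset. (The IVM itself greedily selects a subset of "import" points; this full-kernel fit shows only the KLR part, and all data and hyperparameters are made up.)

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, lam=1e-3, lr=0.1, steps=500):
    # y in {0,1}; model: p(x) = sigmoid(sum_i alpha_i k(x_i, x)),
    # with a small ridge-type penalty lam * alpha^T K alpha / 2.
    K = rbf_kernel(X, X)
    alpha = np.zeros(len(y))
    for _ in range(steps):
        p = 1 / (1 + np.exp(-K @ alpha))
        grad = K @ (p - y) / len(y) + lam * K @ alpha
        alpha -= lr * grad
    return alpha, K

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
alpha, K = fit_klr(X, y)
p = 1 / (1 + np.exp(-K @ alpha))        # fitted class probabilities
```

Unlike an SVM decision value, `p` is a probability estimate, which is the property the abstract highlights.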

Transforming Data to Satisfy Privacy Constraints

by Vijay S. Iyengar , 2002
"... Data on individuals and entities are being collected widely. These data can contain information that explicitly identifies the individual (e.g., social security number). Data can also contain other kinds of personal information (e.g., date of birth, zip code, gender) that are potentially identifying ..."
Abstract - Cited by 250 (0 self)

Partially Improper Gaussian Priors for Nonparametric Logistic Regression

by Nandini Raghavan, Dennis D. Cox , 1995
"... A "partially improper" Gaussian prior is considered for Bayesian inference in logistic regression. This includes generalized smoothing spline priors that are used for nonparametric inference about the logit, and also priors that correspond to generalized random effect models. Necessary and ..."
Abstract - Cited by 2 (2 self)

An Empirical Comparison of Supervised Learning Algorithms

by Rich Caruana, Alexandru Niculescu-Mizil - In Proc. 23rd Intl. Conf. Machine Learning (ICML'06) , 2006
"... A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90’s. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, n ..."
Abstract - Cited by 212 (6 self)
... neural nets, logistic regression, naive Bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our
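Of the two calibration methods mentioned, Platt scaling is the simpler: fit sigmoid(a*s + b) mapping raw classifier scores s to calibrated probabilities by maximum likelihood. A sketch by gradient descent on invented scores and labels (the paper's own setup also regularizes the targets, which is omitted here):

```python
import numpy as np

def platt_fit(scores, labels, lr=0.1, steps=2000):
    # Fit p = sigmoid(a*s + b) by gradient descent on the logistic loss.
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(a * scores + b)))
        g = p - labels
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

# Fake classifier margins with one noisy label on each side.
scores = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
labels = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
a, b = platt_fit(scores, labels)
p = 1 / (1 + np.exp(-(a * scores + b)))
```

The fitted map is monotone (a > 0), so it recalibrates probabilities without changing the ranking of the scores, and hence leaves ranking metrics like AUC untouched.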

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University