Results 1–10 of 26
Logistic Regression in Rare Events Data
, 1999
"... We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a ..."
Abstract

Cited by 56 (4 self)
We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables.
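The flavor of the intercept correction used after event-oversampling can be sketched in a few lines. This is a generic illustration of the standard prior-correction formula for case-control (choice-based) sampling, not code from the paper; the function name and numbers are invented:

```python
import numpy as np

def prior_corrected_intercept(beta0_hat, tau, ybar):
    """Correct a logit intercept estimated on an event-oversampled subsample.

    beta0_hat : intercept fitted by logistic regression on the subsample
    tau       : true fraction of events (ones) in the population
    ybar      : fraction of events in the subsample
    Under pure case-control sampling the slope coefficients need no correction;
    only the intercept absorbs the sampling distortion.
    """
    return beta0_hat - np.log(((1.0 - tau) / tau) * (ybar / (1.0 - ybar)))

# E.g. events are 1% of the population but were sampled to be 50% of the data:
beta0 = prior_corrected_intercept(0.0, tau=0.01, ybar=0.5)
```

With these numbers the correction subtracts log 99, pulling the predicted event probabilities back down toward their rare population frequency.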
On the distribution of discounted loss reserves using generalized linear models
Scandinavian Actuarial Journal
, 2004
"... Renshaw and Verrall (1994) specified the generalized linear model (GLM) underlying the chainladder technique and suggested some other GLMs which might be useful in claims reserving. The purpose of this paper is to construct bounds for the discounted loss reserve within the framework of GLMs. Exact ..."
Abstract

Cited by 6 (5 self)
Renshaw and Verrall (1994) specified the generalized linear model (GLM) underlying the chain-ladder technique and suggested some other GLMs which might be useful in claims reserving. The purpose of this paper is to construct bounds for the discounted loss reserve within the framework of GLMs. Exact calculation of the distribution of the total reserve is not feasible, and hence the determination of lower and upper bounds with a simpler structure is a possible way out. The paper ends with numerical examples illustrating the usefulness of the presented approximations.
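For context, the chain-ladder point estimate that the underlying GLM reproduces (Renshaw and Verrall's result is that an over-dispersed Poisson GLM with accident-year and development-year factors gives the same fitted reserve) can be sketched directly. The 3×3 cumulative triangle below is made up for illustration:

```python
import numpy as np

# Made-up 3x3 cumulative claims triangle: rows = accident years,
# columns = development periods, NaN = not yet observed.
cum = np.array([
    [100.0, 150.0, 175.0],
    [110.0, 165.0, np.nan],
    [120.0, np.nan, np.nan],
])
n = cum.shape[0]

# Volume-weighted development factors from the observed part of the triangle.
factors = [
    cum[: n - 1 - j, j + 1].sum() / cum[: n - 1 - j, j].sum()
    for j in range(n - 1)
]

# Project the unobserved lower triangle forward with those factors.
proj = cum.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(proj[i, j + 1]):
            proj[i, j + 1] = proj[i, j] * factors[j]

# Reserve = projected ultimate claims minus the latest observed diagonal.
reserve = sum(proj[i, -1] - cum[i, n - 1 - i] for i in range(n))
```

This yields only the point estimate; the paper's contribution is bounding the full distribution of the discounted reserve, which the sketch does not attempt.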
Explaining Rare Events in International Relations
, 2000
"... Some of the most important phenomena in international conflict are coded as "rare events data," binary dependent variables with dozens to thousands of times fewer events, such as wars, coups, etc., than "nonevents". Unfortunately, rare events data are difficult to explain and predict, a problem that ..."
Abstract

Cited by 5 (2 self)
Some of the most important phenomena in international conflict are coded as "rare events data," binary dependent variables with dozens to thousands of times fewer events, such as wars, coups, etc., than "nonevents". Unfortunately, rare events data are difficult to explain and predict, a problem that seems to have at least two sources. First, and most importantly, the data collection strategies used in international conflict are grossly inefficient. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs, or to collect much more meaningful explanatory variables. Second, logistic regression, and other commonly ...
Local likelihood estimation in varying-coefficient models including additive bias correction
, 1995
"... Varying coefficient models result from generalized linear models by allowing the parameter of the linear predictor to vary across some additional explanatory quantity called effect modifier. While Hastie & Tibshirani (1993) have used spline smoothing techniques in varyingcoefficient models with uni ..."
Abstract

Cited by 2 (2 self)
Varying-coefficient models result from generalized linear models by allowing the parameters of the linear predictor to vary across some additional explanatory quantity called the effect modifier. While Hastie & Tibshirani (1993) used spline smoothing techniques in varying-coefficient models with univariate response, here the local likelihood approach is considered within the framework of multivariate generalized linear models. The local likelihood approach has several advantages. It allows the derivation of asymptotic properties under weak assumptions; consistency and asymptotic normality of the estimates are shown under rather general conditions. The estimation procedure may be performed with standard software. This holds even for the additive bias reduction method which is proposed and investigated. The results are given for discrete as well as for continuous effect modifiers, and asymptotically optimal rates of smoothing are derived. An alternative normalization of weights is proposed which corresponds to the augmentation of the information supplied by the observations. The normalization results from theoretical considerations and is supported by simulations, which show improved variance properties for finite sample sizes. A real data example demonstrates the applicability of the results.
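A minimal illustration of the local likelihood idea in its simplest (Gaussian) case, where it reduces to kernel-weighted least squares centred on the effect modifier; the function name, kernel choice, and toy data are invented for the sketch:

```python
import numpy as np

def local_vc_estimate(u0, u, x, y, bandwidth):
    """Estimate a varying coefficient beta(u0) in y = beta(u) * x + noise.

    In the Gaussian case the local likelihood estimate is ordinary least
    squares with Gaussian kernel weights centred on the effect modifier
    value u0; for non-Gaussian GLMs the same weights enter the likelihood.
    """
    w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)  # kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)

# Toy data where the true coefficient is beta(u) = u:
u = np.linspace(0.0, 1.0, 101)
x = np.ones_like(u)
y = u * x
beta_half = local_vc_estimate(0.5, u, x, y, bandwidth=0.1)
```

On this noiseless symmetric grid the estimate at u0 = 0.5 recovers the true value 0.5; the bandwidth plays the role of the smoothing rate whose optimal order the paper derives.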
Bias in odds ratios by logistic regression modelling and sample size
, 2009
"... This is an Open Access article distributed under the terms of the Creative Commons Attribution License ..."
Abstract

Cited by 2 (0 self)
This is an Open Access article distributed under the terms of the Creative Commons Attribution License
Birnbaum–Saunders nonlinear regression models
, 901
"... We introduce, for the first time, a new class of Birnbaum–Saunders nonlinear regression models potentially useful in lifetime data analysis. The class generalizes the regression model described by Rieck and Nedelman [1991, A loglinear model for the Birnbaum–Saunders distribution, Technometrics, 33, ..."
Abstract

Cited by 1 (1 self)
We introduce, for the first time, a new class of Birnbaum–Saunders nonlinear regression models potentially useful in lifetime data analysis. The class generalizes the regression model described by Rieck and Nedelman [1991, A log-linear model for the Birnbaum–Saunders distribution, Technometrics, 33, 51–60]. We discuss maximum likelihood estimation for the parameters of the model, and derive closed-form expressions for the second-order biases of these estimates. Our formulae are easily computed as ordinary linear regressions and are then used to define bias-corrected maximum likelihood estimates. Some simulation results show that the bias correction scheme yields nearly unbiased estimates without increasing the mean squared errors. We also give an application to a real fatigue data set.
Key words: bias correction, Birnbaum–Saunders distribution, maximum likelihood estimation, nonlinear regression.
Translated by Kimon Friar
"... Discipline is the highest of all virtues. Only so may strength and desire be counterbalanced and the endeavors of man bear fruit. N. KAZANTZAKIS, ..."
Abstract
Discipline is the highest of all virtues. Only so may strength and desire be counterbalanced and the endeavors of man bear fruit. N. KAZANTZAKIS,
Iterative adjustment of responses for the reduction of bias in binary regression models
, 2009 and 2010
Abstract
A general iterative algorithm is developed for the computation of reduced-bias parameter estimates in regular statistical models using the adjusted score approach of Firth (1993, Biometrika 80, 27–38). The algorithm unifies and provides an appealing new interpretation for iterative methods that have been published previously for some specific model classes. The new algorithm can usefully be viewed as a series of iterative bias corrections, thus facilitating the adjusted score approach to bias reduction in any model for which the first-order bias of the maximum likelihood estimator has already been derived. The method is tested by application to a logit-linear multiple regression model with beta-distributed responses; the results confirm the effectiveness of the new algorithm, and also reveal some important errors in the existing literature on beta regression.
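A compact sketch of the adjusted-score idea in the best-known special case, Firth-type logistic regression, one of the model classes such iterative methods cover. This is an illustrative implementation with invented toy data, not the paper's general algorithm:

```python
import numpy as np

def firth_logit(X, y, n_iter=200, tol=1e-10):
    """Bias-reduced logistic regression via Firth's (1993) adjusted score.

    Each Newton step uses the usual ML quantities plus the hat-matrix
    leverage correction h * (1/2 - pi) added to the residual, which keeps
    the estimates finite even under complete separation.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = pi * (1.0 - pi)
        info = X.T @ (w[:, None] * X)                # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # Leverages of the weighted hat matrix W^0.5 X (X'WX)^-1 X' W^0.5.
        h = w * np.sum((X @ info_inv) * X, axis=1)
        step = info_inv @ (X.T @ (y - pi + h * (0.5 - pi)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: ordinary ML diverges, Firth's estimate is finite.
X = np.column_stack([np.ones(4), np.array([-2.0, -1.0, 1.0, 2.0])])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logit(X, y)
```

By the symmetry of this toy data the intercept estimate sits at zero and the slope settles at a finite positive value, illustrating why bias reduction is attractive precisely where ML breaks down.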