Results 1-10 of 97
Least angle regression
 Ann. Statist.
Abstract

Cited by 816 (38 self)
The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising
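The Lasso constraint mentioned above has a well-known closed form in one special case. As a minimal sketch (assuming an orthonormal design, a simplification not made by the paper, with illustrative toy numbers), each Lasso coefficient is a soft-thresholded version of its least-squares estimate, which is why coefficients enter and leave the model one at a time along the piecewise-linear path that LARS traces efficiently:

```python
# Hedged sketch: the Lasso solution in the special case of an orthonormal
# design, where each coefficient is an independent soft-thresholding of the
# ordinary-least-squares estimate. The function name `soft_threshold` and
# the numbers below are illustrative, not from the paper.

def soft_threshold(b_ls, lam):
    """Shrink the OLS estimate b_ls toward zero by the penalty lam,
    setting it exactly to zero when |b_ls| <= lam -- this is how the
    L1 constraint on the sum of absolute coefficients produces sparsity."""
    if b_ls > lam:
        return b_ls - lam
    if b_ls < -lam:
        return b_ls + lam
    return 0.0

# As the penalty grows, coefficients drop out one by one, mirroring the
# piecewise-linear coefficient path that the LARS modification computes.
ols = [3.0, -1.5, 0.4]
path = {lam: [soft_threshold(b, lam) for b in ols] for lam in (0.0, 0.5, 2.0)}
```

At `lam = 0` the Lasso reproduces the OLS estimates; at larger penalties the smaller coefficients are zeroed first.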
Spatial Econometrics
 Palgrave Handbook of Econometrics: Volume 1, Econometric Theory
, 2001
Abstract

Cited by 108 (6 self)
Spatial econometric methods deal with the incorporation of spatial interaction and spatial structure into regression analysis. The field has seen recent and rapid growth, spurred both by theoretical concerns and by the need to apply econometric models to emerging large geocoded databases. The review presented in this chapter outlines the basic terminology and discusses in some detail the specification of spatial effects, the estimation of spatial regression models, and specification tests for spatial effects.
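The central object in the spatial specifications this chapter surveys is a row-standardized spatial weights matrix W, whose product Wy gives each unit's "spatial lag": the average outcome among its neighbours. A minimal sketch (the 4-region chain layout and names below are illustrative, not from the chapter):

```python
# Hedged sketch: a row-standardized spatial weights structure and the
# spatial lag Wy it induces. The chain of 4 regions is a toy example.

neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # contiguity structure

def spatial_lag(y, neighbours):
    """Row-standardized lag: element i is the mean of y over region i's
    neighbours, i.e. (W y)_i with w_ij = 1/|N(i)| for j in N(i), else 0."""
    return [sum(y[j] for j in ns) / len(ns) for i, ns in sorted(neighbours.items())]

y = [1.0, 2.0, 3.0, 4.0]
lag = spatial_lag(y, neighbours)
```

The lag vector then enters a spatial regression, e.g. as the endogenous regressor in the spatial lag model y = rho*Wy + Xb + e, whose estimation and testing the chapter discusses.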
The Decomposition of Promotional Response: An Empirical Generalization
 Marketing Science
, 1999
Abstract

Cited by 46 (4 self)
Price promotions are used extensively in marketing for one simple reason: consumers respond. The sales increase for a brand on promotion could be due to consumers accelerating their purchases (i.e., buying earlier than usual and/or buying more than usual) and/or consumers switching their choice from other brands. Purchase acceleration and brand switching relate to the primary demand and secondary demand effects of a promotion. Gupta (1988) captures these effects in a single model and decomposes a brand's total price elasticity into these components. He reports, for the coffee product category, that the main impact of a price promotion is on brand choice (84%), and that there is a smaller impact on purchase incidence (14%) and stockpiling (2%). In other words, the majority of the effect of a promotion is at the secondary level (84%) and there is a relatively small primary demand effect (16%). This paper reports the decomposition of total price elasticity for 173 brands acros...
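The decomposition arithmetic in the abstract can be checked directly. A minimal sketch (the shares are those quoted from Gupta (1988); the variable names are illustrative):

```python
# Hedged sketch of the elasticity decomposition described above:
# shares of total price elasticity for the coffee category.
brand_choice, incidence, stockpiling = 0.84, 0.14, 0.02

secondary_demand = brand_choice            # switching from other brands
primary_demand = incidence + stockpiling   # buying earlier and/or buying more

# The three components exhaust the total elasticity.
assert abs(secondary_demand + primary_demand - 1.0) < 1e-9
```

This reproduces the abstract's split: a dominant secondary-demand effect (84%) and a small primary-demand effect (16%).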
The Proximity of an Individual to a Population With Applications in Discriminant Analysis
, 1995
Abstract

Cited by 23 (13 self)
We develop a proximity function between an individual and a population from a distance between multivariate observations. We study some properties of this construction and apply it to a distance-based discrimination rule, which contains the classic linear discriminant function as a particular case. Additionally, this rule can be used advantageously for categorical or mixed variables, or in problems where a probabilistic model is not well determined. This approach is illustrated and compared with other classic procedures using four real data sets. Keywords: Categorical and mixed data; Distances between observations; Multidimensional scaling; Discrimination; Classification rules. AMS Subject Classification: 62H30. The authors thank M. Abrahamowicz, J. C. Gower and M. Greenacre for their helpful comments, and W. J. Krzanowski for providing us with a data set and his quadratic location model program. Work supported in part by CGYCIT grant PB930784. Authors' address: Departam...
From association to causation via regression
 Indiana: University of Notre Dame
, 1997
Abstract

Cited by 23 (7 self)
For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C,... hold, then H can be tested against the data. However, if A, B, C,... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work: a principle honored more often in the breach than in the observance.
A unified measure of uncertainty of estimated best linear unbiased predictors in small area estimation problems
 Statistica Sinica
, 2000
System Misspecification Testing and Structural Change in the Demand for Meats
Abstract

Cited by 20 (1 self)
A misspecification testing strategy designed to ensure that the statistical assumptions underlying a system of equations are appropriate is outlined. The system tests take into account information in, and interactions between, all equations in the system and can be used in a wide variety of applications where systems of equations are estimated. The system testing approach is demonstrated by modeling U.S. consumer demand for meats. The example illustrates how the approach can be used to disentangle issues regarding structural change and other forms of model misspecification. Key words: econometric modeling, misspecification testing, regression diagnostics, systems of equations
Design and analysis of MIMO spatial multiplexing systems with quantized feedback
 IEEE Trans. Signal Process
, 2006
On Selection Biases in Book-to-Market Based Tests of Asset Pricing Models
, 1995
Abstract

Cited by 16 (0 self)
Many studies have documented portfolio strategies that provide returns in excess of those expected, given the level of risk of the portfolio. Variables that seem to have predictive power for equity returns include the market capitalization of the firm’s equity and the ratio of the firm’s book equity to market equity (BE/ME). Firms with low market capitalization and high book-to-market values seem to earn high returns. With respect to the book-to-market anomaly, it has been argued that the apparent superior performance is due to a subtle selection bias in a typical data source used to implement the tests of asset pricing models, the COMPUSTAT data. We use a sample of COMPUSTAT data that is free from this bias to investigate whether the previous evidence on the book-to-market anomaly is an artifact of this selection bias. The postulated selection bias does not seem to be important for samples restricted to NYSE/AMEX firms. There is some difference when NASDAQ firms are included in the standard COMPUSTAT sample. This may be due to a truly stronger BE/ME effect or to a more severe selection bias in that sample. Our data do not allow us to disentangle these two possible explanations.
A Kullback-Leibler approach to Gaussian mixture reduction
 IEEE Trans. Aerosp. Electron. Syst
, 2007
Abstract

Cited by 15 (0 self)
© 2006 IEEE. Abstract: A common problem in multitarget tracking is to approximate a Gaussian mixture by one containing fewer components; similar problems can arise in integrated navigation. A common approach is successively to merge pairs of components, replacing the pair with a single Gaussian component whose moments up to second order match those of the merged pair. Salmond [1] and Williams [2], [3] have each proposed algorithms along these lines, but using different criteria for selecting the pair to be merged at each stage. The paper shows how under certain circumstances each of these pair-selection criteria can give rise to anomalous behaviour, and proposes that a key consideration should be the Kullback-Leibler discrimination of the reduced mixture with respect to the original mixture. Although computing this directly would normally be impractical, the paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches. The behaviour of the three algorithms is compared using a high-dimensional example drawn from terrain-referenced navigation. Index Terms: Gaussian mixture, data fusion, integrated navigation, tracking.
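The moment-matched merge step that the abstract describes has a simple closed form. A minimal one-dimensional sketch (names and numbers are illustrative; the paper's Kullback-Leibler pair-selection criterion is not implemented here):

```python
# Hedged sketch: replace two weighted Gaussian components with a single
# Gaussian whose weight, mean, and variance preserve the first two
# moments of the pair -- the merge operation common to the algorithms
# compared in the paper.

def merge_pair(w1, m1, v1, w2, m2, v2):
    """Merge components (w1, N(m1, v1)) and (w2, N(m2, v2)), preserving
    the pair's total weight, mean, and second moment."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # E[x^2] of the pair, minus the new mean squared, gives the variance.
    second_moment = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w
    return w, m, second_moment - m ** 2

# Two equally weighted unit-variance components centred at 0 and 2.
w, m, v = merge_pair(0.5, 0.0, 1.0, 0.5, 2.0, 1.0)
```

Note that the merged variance exceeds either component's variance, since it absorbs the spread between the two means; a pair-selection criterion decides which merges lose the least information.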