Results 1–10 of 78
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 186 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
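The averaging described above can be sketched concretely for a small predictor set. The function below is an illustration only, not the paper's procedure: it enumerates every subset of predictors, fits each by least squares, and weights the resulting predictions by BIC-based approximate posterior model probabilities (the function name and the BIC weighting are assumptions made for this sketch).

```python
# Minimal sketch of Bayesian model averaging over all predictor subsets.
# Posterior model probabilities are approximated with BIC weights
# (an assumption for illustration; the paper develops exact posteriors).
import itertools
import numpy as np

def bma_predict(X, y, X_new):
    n, p = X.shape
    bics, preds = [], []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            cols = list(subset)
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            sigma2 = max(resid @ resid / n, 1e-12)
            # BIC = n*log(sigma^2) + (number of parameters)*log(n)
            bics.append(n * np.log(sigma2) + (len(cols) + 1) * np.log(n))
            A_new = np.column_stack([np.ones(len(X_new)), X_new[:, cols]])
            preds.append(A_new @ beta)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))   # approximate model posteriors
    w /= w.sum()
    return np.array(preds).T @ w             # posterior-weighted prediction
```

Enumeration over all 2^p subsets is only feasible for small p, which is exactly why the reduced-search strategies in the papers below exist.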
Predictive Model Selection
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 61 (4 self)
this article we propose three criteria that can be used to address model selection. These emphasize observables rather than parameters and are based on a certain Bayesian predictive density. They have a unifying basis that is simple and interpretable, are free of asymptotic definitions, and allow the incorporation of prior information. Moreover, two of these criteria are readily calibrated.
Model Selection and Accounting for Model Uncertainty in Linear Regression Models
, 1993
Abstract

Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called "Occam's Window" which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small number of models including the null model, thus largely resolving the paradox.
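The core pruning step of an Occam's Window-style procedure can be sketched in a few lines. This is a simplified illustration, not the paper's full algorithm (which also discards models that have a more probable submodel): it keeps only models whose approximate posterior is within a factor C of the best model's, then renormalizes the weights over the survivors. The function name and the use of BIC values as posterior surrogates are assumptions for the sketch.

```python
# Simplified Occam's Window pruning: discard models whose (approximate)
# posterior falls more than a factor C below the best model, then
# average over the survivors only.
import numpy as np

def occams_window(bics, C=20.0):
    bics = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (bics - bics.min()))   # BIC-based model weights
    keep = w >= w.max() / C                  # within factor C of the best
    w = np.where(keep, w, 0.0)
    return keep, w / w.sum()                 # renormalized over survivors
```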
The variable selection problem
 Journal of the American Statistical Association
, 2000
Abstract

Cited by 39 (2 self)
The problem of variable selection is one of the most pervasive model selection problems in statistical applications. Often referred to as the problem of subset selection, it arises when one wants to model the relationship between a variable of interest and a subset of potential explanatory variables or predictors, but there is uncertainty about which subset to use. This vignette reviews some of the key developments which have led to the wide variety of approaches for this problem.
Bayesian Model Averaging in proportional hazard models: Assessing the risk of a stroke
 Applied Statistics
, 1997
Abstract

Cited by 29 (5 self)
Evaluating the risk of stroke is important in reducing the incidence of this devastating disease. Here, we apply Bayesian model averaging to variable selection in Cox proportional hazard models in the context of the Cardiovascular Health Study, a comprehensive investigation into the risk factors for stroke. We introduce a technique based on the leaps and bounds algorithm which efficiently locates and fits the best models in the very large model space and thereby extends all-subsets regression to Cox models. For each independent variable considered, the method provides the posterior probability that it belongs in the model. This is more directly interpretable than the corresponding P-values, and also more valid in that it takes account of model uncertainty. P-values from models preferred by stepwise methods tend to overstate the evidence for the predictive value of a variable. In our data, Bayesian model averaging predictively outperforms standard model selection methods for assessing the risk of stroke.
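The "posterior probability that a variable belongs in the model" mentioned above has a simple definition once a set of models and their posterior weights is in hand: it is the total weight of the models that include that variable. The helper below illustrates just that summation step (the function name is an assumption; obtaining the models and weights from Cox fits via leaps and bounds is the paper's contribution and is not reproduced here).

```python
# Posterior inclusion probability per predictor: the summed posterior
# weight of all models that contain it.
import numpy as np

def inclusion_probs(model_masks, weights):
    # model_masks: (n_models, p) 0/1 indicators of which variables
    #              each model includes; weights: posterior model
    #              probabilities summing to 1.
    M = np.asarray(model_masks, dtype=float)
    w = np.asarray(weights, dtype=float)
    return M.T @ w   # length-p vector of inclusion probabilities
```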
Shotgun stochastic search for “large p” regression
 Journal of the American Statistical Association
, 2007
Abstract

Cited by 17 (3 self)
Model search in regression with very large numbers of candidate predictors raises challenges for both model specification and computation, and standard approaches such as Markov chain Monte Carlo (MCMC) and stepwise methods are often infeasible or ineffective. We describe a novel shotgun stochastic search (SSS) approach that explores “interesting” regions of the resulting, very high-dimensional model spaces to quickly identify regions of high posterior probability over models. We describe algorithmic and modeling aspects, priors over the model space that induce sparsity and parsimony over and above the traditional dimension penalization implicit in Bayesian and likelihood analyses, and parallel computation using cluster computers. We discuss an example from gene expression cancer genomics, comparisons with MCMC and other methods, and theoretical and simulation-based aspects of performance characteristics in large-scale regression model search. We also provide software implementing the methods.
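The flavor of a shotgun-style stochastic search can be conveyed with a toy version: score the entire add/delete neighborhood of the current variable subset, then jump to a neighbor drawn in proportion to its exponentiated score, remembering the best subset seen. This is a heavily simplified sketch under stated assumptions (no swap moves, no parallel scoring, a generic score function), not the SSS algorithm of the paper.

```python
# Toy shotgun-style stochastic search over variable subsets.
# At each step, every one-variable add/delete neighbor of the current
# subset is scored, and the next subset is sampled with probability
# proportional to exp(score).
import numpy as np

def sss(score, p, steps=100, seed=0):
    rng = np.random.default_rng(seed)
    current = frozenset()
    best, best_s = current, score(current)
    for _ in range(steps):
        nbrs = [current ^ {j} for j in range(p)]   # toggle one variable
        s = np.array([score(m) for m in nbrs], dtype=float)
        probs = np.exp(s - s.max())
        probs /= probs.sum()
        current = nbrs[rng.choice(len(nbrs), p=probs)]
        cs = score(current)
        if cs > best_s:
            best, best_s = current, cs
    return best, best_s
```

In the real method the score would be a (log) posterior over models; here any function of a subset works.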
Gradient Radial Basis Function Networks for Nonlinear and Nonstationary Time Series Prediction
, 1995
Abstract

Cited by 16 (0 self)
We present a method of modifying the structure of radial basis function (RBF) network to work with nonstationary series that exhibit homogeneous nonstationary behaviour. In the original RBF network, the hidden node's function is to sense the trajectory of the time series and to respond when there is a strong correlation between the input pattern and the hidden node's center. This type of response, however, is highly sensitive to changes in the level and trend of the time series. To counter these effects, the hidden node's function is modified to one which detects and reacts to the gradient of the series. We call this new network the gradient RBF (GRBF) model. Single and multi-step predictive performance for the Mackey-Glass chaotic time series were evaluated using the classical RBF and GRBF models. The simulation results for the series without and with a time-varying mean confirm the superior performance of the GRBF predictor over the RBF predictor.
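The key structural change described above, hidden units that respond to the gradient of the series rather than its level, can be illustrated with the feature computation alone. The sketch below (function name and Gaussian basis are assumptions) feeds the first differences of the input window to the radial units, which makes the activations invariant to a constant shift in level.

```python
# Sketch of a gradient-RBF hidden layer: Gaussian radial units applied
# to the first differences (local gradient) of the input window, so a
# level shift of the series leaves the activations unchanged.
import numpy as np

def grbf_features(window, centers, width=1.0):
    grad = np.diff(window)                     # gradient of the input pattern
    d2 = ((centers - grad) ** 2).sum(axis=1)   # squared distance to each center
    return np.exp(-d2 / (2.0 * width ** 2))    # Gaussian activations
```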
Neural Networks for Intelligent Sensors and Control: Practical Issues and Some Solutions
, 1996
Abstract

Cited by 15 (1 self)
Multilayer neural networks have been successfully applied as intelligent sensors for process modeling and control. In this paper, a few practical issues are discussed and some solutions are presented. Several biased regression approaches, including ridge regression, PCA, and PLS, are integrated with neural net training to reduce the prediction variance. 1 Introduction The availability of process control computers and associated data historians makes it easy to generate neural network solutions for process modeling and control. Numerous applications of neural networks in the field of process engineering have been reported in recent annual meetings and technical journals. Neural network solutions are well accepted in process industries since they are cost-effective, easy-to-understand, nonlinear, and data-driven. This chapter addresses several practical issues and some solutions regarding the use of neural networks for intelligent sensors and control. The chapter begins with an introducti...
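Ridge regression, the first of the biased estimators the paper folds into network training, trades a little bias for a substantial reduction in variance via an L2 penalty. A minimal sketch of the closed-form ridge solution (the function name is an assumption; the paper's integration with neural net training is not reproduced here):

```python
# Closed-form ridge regression: beta = (X'X + lam*I)^{-1} X'y.
# Larger lam shrinks the coefficients, reducing prediction variance
# at the cost of bias.
import numpy as np

def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

With lam = 0 this reduces to ordinary least squares; increasing lam monotonically shrinks the coefficient norm.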
Testing composite hypotheses applied to AR order estimation; the Akaike criterion revised
, 1997
Abstract

Cited by 11 (9 self)
Akaike's criterion is used to test composite hypotheses; for example to determine the order of AR models. A modification is presented to test composite hypotheses given an upper bound on the error of the first kind (Neyman-Pearson). The presented theory is applied to AR order estimation and verified by simulations. The experimental results are so good that we consider the AR order estimation problem as solved. 1 Introduction This paper is a shortened version of the full paper submitted for publication [1]. We test composite hypotheses [2, pp. 86–96], hypotheses specified except for a few parameters to be estimated, to select the AutoRegressive (AR) model order. This is a difficult estimation problem because lower-order models are contained in higher-order models. Especially in the critical test, e.g. if both models are nearly equally likely, a reliable test is needed. The Akaike criterion [3, 4], a heuristic which is not convincingly motivated, integrates the method of Maximum Like...
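The baseline that the paper revises, order selection by Akaike's criterion, can be sketched briefly: fit AR(k) for each candidate order k by least squares on lagged values and pick the k minimizing AIC = n·log(residual variance) + 2k. This is an illustrative baseline under stated assumptions (least-squares AR fitting, the common AIC form), not the paper's modified Neyman-Pearson test.

```python
# Baseline AR order selection with Akaike's criterion:
# fit AR(k) by least squares for k = 1..max_order and choose the
# order minimizing AIC = n*log(sigma^2_hat) + 2k.
import numpy as np

def ar_order_aic(x, max_order):
    x = np.asarray(x, dtype=float)
    aics = []
    Y = x[max_order:]                      # same targets for every order
    n = len(Y)
    for k in range(1, max_order + 1):
        # column i holds x[t-i] aligned with Y = x[t]
        A = np.column_stack(
            [x[max_order - i: len(x) - i] for i in range(1, k + 1)]
        )
        coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
        resid = Y - A @ coef
        sigma2 = max(resid @ resid / n, 1e-300)
        aics.append(n * np.log(sigma2) + 2 * k)
    return int(np.argmin(aics)) + 1
```

Because AR(k) models are nested in AR(k+1) models, the residual variance alone always favors the larger model; the 2k penalty is what makes the comparison meaningful.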