Results 1 - 6 of 6
Bayesian Input Variable Selection Using Posterior Probabilities and Expected Utilities
, 2002
"... We consider the input variable selection in complex Bayesian hierarchical models. Our goal is to find a model with the smallest number of input variables having statistically or practically at least the same expected utility as the full model with all the available inputs. A good estimate for the ..."
Abstract

Cited by 6 (1 self)
We consider input variable selection in complex Bayesian hierarchical models. Our goal is to find a model with the smallest number of input variables that has statistically or practically at least the same expected utility as the full model with all available inputs. A good estimate of the expected utility can be computed using cross-validation predictive densities. In the case of input selection with a large number of input combinations, computing the cross-validation predictive densities for each model easily becomes prohibitive. We propose to use the posterior probabilities obtained via variable-dimension MCMC methods to identify potentially useful input combinations, for which the final model choice and assessment is then done using the expected utilities.
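A minimal sketch of the utility side of this approach (not the authors' code; the model, utility, and all names here are illustrative assumptions): estimate the expected utility of a candidate input subset by k-fold cross-validation, scoring each fold by the mean log predictive density of a cheap Gaussian linear model standing in for a full Bayesian posterior.

```python
# Sketch: k-fold cross-validation estimate of expected utility (mean log
# predictive density) for a chosen subset of input columns. A ridge (MAP)
# Gaussian linear model stands in for the Bayesian hierarchical model.
import numpy as np

def cv_expected_utility(X, y, inputs, k=5, alpha=1.0, seed=0):
    """Mean log predictive density over k CV folds, using only `inputs` columns."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    lpd = []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[j] for j in range(k) if j != f])
        Xtr, Xte = X[np.ix_(train, inputs)], X[np.ix_(test, inputs)]
        ytr, yte = y[train], y[test]
        # Ridge (MAP) fit: a crude stand-in for integrating over the posterior.
        A = Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1])
        w = np.linalg.solve(A, Xtr.T @ ytr)
        resid = ytr - Xtr @ w
        sigma2 = resid @ resid / max(len(ytr) - 1, 1)
        mu = Xte @ w
        # Gaussian log predictive density of the held-out targets.
        lpd.append(np.mean(-0.5 * np.log(2 * np.pi * sigma2)
                           - (yte - mu) ** 2 / (2 * sigma2)))
    return float(np.mean(lpd))

# Usage: compare a small input subset against the full set of inputs.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
u_full = cv_expected_utility(X, y, inputs=[0, 1, 2, 3, 4])
u_sub = cv_expected_utility(X, y, inputs=[0, 1])
```

In the paper's terms, one would accept the submodel when its estimated expected utility is statistically or practically indistinguishable from the full model's.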
Darwinian Evolution in Parallel Universes: A Parallel Genetic Algorithm for Variable Selection
"... The need to identify a few important variables that affect a certain outcome of interest commonly arises in various industrial engineering applications. The genetic algorithm (GA) appears to be a natural tool for solving such a problem. In this article we first demonstrate that the GA is actually no ..."
Abstract

Cited by 5 (1 self)
The need to identify a few important variables that affect a certain outcome of interest commonly arises in various industrial engineering applications. The genetic algorithm (GA) appears to be a natural tool for solving such a problem. In this article we first demonstrate that the GA is actually not a particularly effective variable selection tool, and then propose a very simple modification. Our idea is to run a number of GAs in parallel without allowing any single GA to fully converge, and to consolidate the information from all the individual GAs at the end. We call the resulting algorithm the parallel genetic algorithm (PGA). Using a number of both simulated and real examples, we show that the PGA is an interesting, highly competitive, and easy-to-use variable selection tool.
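The consolidation idea can be sketched as follows (a rough illustration under my own assumptions, not the paper's implementation): run several deliberately short GAs on a binary variable-selection encoding, then summarize by how often each variable appears in the best chromosome of each run.

```python
# Sketch of a parallel genetic algorithm (PGA) for variable selection:
# many short, non-converged GA runs, consolidated by selection frequency.
import numpy as np

def fitness(mask, X, y, penalty=0.01):
    """Negative mean squared residual of least squares on the selected columns,
    minus a size penalty (all choices here are illustrative)."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ w
    return -(resid @ resid) / len(y) - penalty * mask.sum()

def short_ga(X, y, gens=10, pop=20, seed=0):
    """One GA run, stopped well before convergence on purpose."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    popu = rng.integers(0, 2, size=(pop, p))
    for _ in range(gens):
        fit = np.array([fitness(ind, X, y) for ind in popu])
        parents = popu[np.argsort(fit)[::-1][: pop // 2]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, p)
            child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
            flip = rng.random(p) < 0.05                     # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        popu = np.vstack([parents, children])
    fit = np.array([fitness(ind, X, y) for ind in popu])
    return popu[int(np.argmax(fit))]

def pga(X, y, runs=8):
    """Consolidate: per-variable selection frequency across the parallel runs."""
    best = [short_ga(X, y, seed=s) for s in range(runs)]
    return np.mean(best, axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 8))
y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=150)
freq = pga(X, y)   # variables 0 and 3 should be selected in most runs
```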
Model Selection via Predictive Explanatory Power
 Helsinki University of Technology, Laboratory of Computational Engineering
, 1998
"... We consider model selection as a decision problem from a predictive perspective. The optimal Bayesian way of handling model uncertainty is to integrate over model space. Model selection can then be seen as point estimation in the model space. We propose a model selection method based on KullbackLei ..."
Abstract

Cited by 2 (0 self)
We consider model selection as a decision problem from a predictive perspective. The optimal Bayesian way of handling model uncertainty is to integrate over the model space; model selection can then be seen as point estimation in the model space. We propose a model selection method based on the Kullback-Leibler divergence from the predictive distribution of the full model to the predictive distributions of the submodels. The loss of predictive explanatory power is defined as the expectation of this predictive discrepancy. The goal is to find the simplest submodel whose predictive distribution is similar to that of the full model, that is, the simplest submodel whose loss of explanatory power is acceptable. To compute the expected predictive discrepancy between complex models, for which analytical solutions do not exist, we propose to use predictive distributions obtained via k-fold cross-validation. We compare the performance of the method to posterior probabilities (Bayes factors), the deviance information criterion (DIC), and direct maximization of the expected utility via cross-validation.
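An illustrative sketch of the discrepancy measure (my own simplified stand-in, not the paper's code: Gaussian predictives from plain least squares replace full Bayesian predictive distributions): the expected Kullback-Leibler divergence from the full model's predictive distribution to a submodel's, averaged over k-fold cross-validation test points.

```python
# Sketch: expected KL divergence from the full model's Gaussian predictive
# distribution to a submodel's, estimated by k-fold cross-validation.
import numpy as np

def gauss_kl(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ), elementwise over test points."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

def predictive(Xtr, ytr, Xte, inputs):
    """Gaussian predictive (mean per test point, single noise scale) from
    least squares on the selected columns."""
    Xtr_s, Xte_s = Xtr[:, inputs], Xte[:, inputs]
    w, *_ = np.linalg.lstsq(Xtr_s, ytr, rcond=None)
    resid = ytr - Xtr_s @ w
    sigma = np.sqrt(resid @ resid / max(len(ytr) - len(inputs), 1))
    return Xte_s @ w, sigma

def expected_discrepancy(X, y, sub, full, k=5, seed=0):
    """Loss of predictive explanatory power of `sub` relative to `full`."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    kls = []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[j] for j in range(k) if j != f])
        mu_f, s_f = predictive(X[train], y[train], X[test], full)
        mu_s, s_s = predictive(X[train], y[train], X[test], sub)
        kls.append(np.mean(gauss_kl(mu_f, s_f, mu_s, s_s)))
    return float(np.mean(kls))

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
d_good = expected_discrepancy(X, y, sub=[0, 1], full=[0, 1, 2, 3])
d_bad = expected_discrepancy(X, y, sub=[2, 3], full=[0, 1, 2, 3])
```

In the method's spirit, one would pick the simplest submodel whose discrepancy (here `d_good` versus `d_bad`) stays below an acceptable threshold.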
Expected Utility Estimation via Cross-Validation
, 2003
"... SSVALIDATION; MODEL ASSESSMENT; MODEL COMPARISON; PREDICTIVE DENSITIES; INFORMATION CRITERIA. 1. INTRODUCTION 1.1. Expected Utilities In prediction and decision problems, it is natural to assess the predictive ability of the model by estimating the expected utilities, that is, the relative valu ..."
Abstract
CROSS-VALIDATION; MODEL ASSESSMENT; MODEL COMPARISON; PREDICTIVE DENSITIES; INFORMATION CRITERIA. 1. INTRODUCTION 1.1. Expected Utilities In prediction and decision problems, it is natural to assess the predictive ability of the model by estimating the expected utilities, that is, the relative values of the consequences of using the model (Good, 1952; Bernardo and Smith, 1994). The posterior predictive distribution of output y for the new input x given the training data D = {(x_i, y_i); i = 1, 2, ..., n} is obtained by p(y | x, D, M) = \int p(y | x, \theta, D, M) p(\theta | x, D, M) d\theta, where \theta denotes all the model parameters and hyperparameters of the prior structures and M denotes all the prior knowledge in the model specification. We assume that knowing x does not give more information about \theta, that is, p(\theta | x, D, M) = p(\theta | D, M). We would like to estimate how good our model is by estimating the quality of the predictions the model makes for future observations from the same process that generated the data.
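In practice the posterior predictive integral in the abstract is usually approximated by Monte Carlo over posterior draws, p(y | x, D, M) ≈ (1/S) Σ_s p(y | x, θ_s, M). A toy sketch (the model, noise scale, and the synthetic "posterior" draws are all my own assumptions for illustration):

```python
# Sketch: Monte Carlo approximation of the posterior predictive density,
# averaging the likelihood p(y | x, theta_s) over posterior draws theta_s.
import numpy as np

def predictive_density(y, x, theta_draws, sigma=0.5):
    """Average Gaussian likelihood over draws, for the toy model
    y ~ Normal(theta * x, sigma^2); `theta_draws` plays the role of
    MCMC samples from p(theta | D, M)."""
    mu = theta_draws * x
    dens = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.mean(dens))

# Fake "posterior" draws concentrated near theta = 2.
rng = np.random.default_rng(4)
theta_draws = 2.0 + 0.05 * rng.normal(size=4000)
p_near = predictive_density(y=2.0, x=1.0, theta_draws=theta_draws)  # near the predictive mean
p_far = predictive_density(y=6.0, x=1.0, theta_draws=theta_draws)   # deep in the tail
```

Cross-validation then scores such predictive densities on held-out data, giving the expected-utility estimates the abstract describes.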
unknown title
, 901
"... Bayesian projection approaches to variable selection and exploring model uncertainty ..."
Abstract
Bayesian projection approaches to variable selection and exploring model uncertainty