Results 11–20 of 158
Model Selection and Accounting for Model Uncertainty in Linear Regression Models
, 1993
Abstract

Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called "Occam's Window" which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small number of models including the null model, thus largely resolving the paradox.
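The Occam's Window idea described above can be sketched in a few lines: approximate each candidate model's posterior probability with BIC, then keep only models whose probability is within some factor C of the best model's. This is a minimal illustrative sketch, not the authors' implementation; the data, cutoff C, and BIC approximation are all assumptions for the example.

```python
# Hypothetical sketch of Occam's Window over all-subsets linear regression:
# score each subset by BIC, convert to approximate posterior model weights,
# and retain only models within a factor C of the best. Data are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only predictor 0 matters

def bic(subset):
    """BIC of an OLS fit using the predictors in `subset` plus an intercept."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

subsets = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
bics = np.array([bic(s) for s in subsets])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()               # approximate posterior model probabilities

C = 20.0                               # Occam's Window cutoff (illustrative)
window = [s for s, w in zip(subsets, weights) if weights.max() / w <= C]
print(window)
```

On this synthetic data every surviving model contains the one genuinely relevant predictor; averaging predictions over `window` with the corresponding `weights` gives the reduced-set model average the abstract describes.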
Homo Heuristicus: Why Biased Minds Make Better Inferences
, 2008
Abstract

Cited by 47 (6 self)
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: (a) the discovery of less-is-more effects; (b) the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; (c) an advancement from vague labels to computational models of heuristics; (d) the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an "adaptive toolbox"; and (e) the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people's adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
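One of the computational models of heuristics this literature formalises is "take-the-best": to decide which of two objects scores higher, check binary cues one at a time in order of validity and stop at the first cue that discriminates, ignoring everything else. The sketch below is illustrative; the cue names and values are invented.

```python
# Minimal sketch of the take-the-best heuristic: compare two objects on
# cues ordered from most to least valid, stopping at the first cue that
# discriminates. Cue data below are invented for illustration.
def take_the_best(cues_a, cues_b, cue_order):
    """Return 'a', 'b', or 'tie' using the first discriminating cue."""
    for cue in cue_order:                 # most valid cue first
        if cues_a[cue] != cues_b[cue]:
            return 'a' if cues_a[cue] > cues_b[cue] else 'b'
    return 'tie'                          # no cue discriminates: guess

# Which of two cities is larger? 1 = cue present, 0 = absent.
berlin = {'capital': 1, 'airport': 1, 'team': 1}
bonn   = {'capital': 0, 'airport': 1, 'team': 0}
print(take_the_best(berlin, bonn, ['capital', 'airport', 'team']))  # 'a'
```

The point of the abstract is that such a rule, despite discarding most of the available information, can match or beat full-information models in the right environments.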
Bayesian model averaging
 STAT.SCI
, 1999
Abstract

Cited by 42 (0 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of
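The BMA predictive average itself is simple to state: weight each candidate model's prediction by an (approximate) posterior model probability. A common approximation uses BIC for the weights, as in this hedged sketch; the three candidate models and the data are invented for illustration.

```python
# Illustrative BMA prediction: fit three nested regression models, weight
# them by BIC-approximated posterior probabilities, and average their
# predictions at a new point. Data and model set are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(-1, 1, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)   # truth is linear

def fit(design):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    return beta, n * np.log(rss / n) + design.shape[1] * np.log(n)

designs = {
    'intercept': np.ones((n, 1)),
    'linear':    np.column_stack([np.ones(n), x]),
    'quadratic': np.column_stack([np.ones(n), x, x ** 2]),
}
fits = {name: fit(D) for name, D in designs.items()}
bics = np.array([b for _, b in fits.values()])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                  # approximate P(model | data)

x0 = 0.5                                      # prediction at a new point
preds = [fits['intercept'][0] @ [1.0],
         fits['linear'][0] @ [1.0, x0],
         fits['quadratic'][0] @ [1.0, x0, x0 ** 2]]
bma_pred = float(np.dot(w, preds))
print(round(bma_pred, 2))
```

Because the weights are data-driven, the clearly inadequate intercept-only model contributes essentially nothing, while uncertainty between the plausible models is retained in the averaged prediction.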
Multivariate autoregressive modeling of fMRI time series. NeuroImage
, 2003
Abstract

Cited by 36 (9 self)
We propose the use of Multivariate Autoregressive (MAR) models of fMRI time series to make inferences about functional integration within the human brain. The method is demonstrated with synthetic and real data showing how such models are able to characterise interregional dependence. We extend linear MAR models to accommodate nonlinear interactions to model top-down modulatory processes with bilinear terms. MAR models are time series models and thereby model temporal order within measured brain activity. A further benefit of the MAR approach is that connectivity maps may contain loops, yet exact inference can proceed within a linear framework. Model order selection and parameter estimation are implemented using Bayesian methods.
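The core object here, a first-order MAR model y_t = A y_{t-1} + e_t, can be illustrated with a plain least-squares fit on synthetic two-region data (the paper uses Bayesian estimation; this sketch only shows the model structure, and the connectivity matrix below is invented).

```python
# Illustrative MAR(1) fit, not the paper's Bayesian estimator: simulate a
# two-region system y_t = A y_{t-1} + e_t and recover A by least squares.
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.3],
                   [0.0, 0.4]])        # region 2 drives region 1, not vice versa
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Regress current values on lagged values: Y = X A^T + E.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))
```

The recovered matrix `A_hat` is asymmetric, which is what lets a MAR model express directed (and looped) interregional dependence rather than mere correlation.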
Efficient Leave-One-Out Cross-Validation of Kernel Fisher Discriminant Classifiers
 PATTERN RECOGNITION
, 2003
Abstract

Cited by 27 (5 self)
Mika et al. [1] apply the "kernel trick" to obtain a nonlinear variant of Fisher's linear discriminant analysis method, demonstrating state-of-the-art performance on a range of benchmark datasets. We show that leave-one-out cross-validation of kernel Fisher discriminant classifiers can be implemented with a computational complexity of only O(l³) operations rather than the O(l⁴) of a naive implementation, where l is the number of training patterns. Leave-one-out cross-validation then becomes an attractive means of model selection in large-scale applications of kernel Fisher discriminant analysis, being significantly faster than conventional k-fold cross-validation procedures commonly used.
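The underlying trick can be sketched for the closely related kernel ridge regression on ±1 labels: when the classifier is a (kernel) least-squares fit, each leave-one-out residual has the closed form r_i / (1 − h_ii), so all l residuals come from a single O(l³) factorisation instead of l refits. This is a hedged illustration of the idea, not the paper's exact algorithm; kernel, data, and regularisation settings are assumptions.

```python
# Closed-form leave-one-out residuals for kernel ridge regression on +/-1
# labels: compute the hat matrix H once (O(l^3)) and use r_i / (1 - H_ii),
# then verify against a literal leave-one-out refit for one example.
import numpy as np

rng = np.random.default_rng(3)
l = 40
X = rng.normal(size=(l, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

gamma, lam = 0.5, 1e-2
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq)                        # RBF kernel matrix

H = K @ np.linalg.inv(K + lam * np.eye(l))     # hat matrix, computed once
resid = y - H @ y
loo_resid = resid / (1.0 - np.diag(H))         # closed-form LOO residuals

i = 0                                          # brute-force check for point i
mask = np.arange(l) != i
alpha = np.linalg.solve(K[np.ix_(mask, mask)] + lam * np.eye(l - 1), y[mask])
pred_i = K[i, mask] @ alpha
print(np.isclose(y[i] - pred_i, loo_resid[i]))
```

The sign of each `loo_resid`-corrected prediction gives the leave-one-out classification, so the LOO error rate used for model selection falls out of one matrix factorisation.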
A method for simultaneous variable selection and outlier identification in linear regression
 COMPUTATIONAL STATISTICS & DATA ANALYSIS
, 1996
Experiments With Noise Filtering in a Medical Domain
 Proc. of 16th ICML
, 1999
Abstract

Cited by 25 (2 self)
The paper presents a series of noise detection experiments in a medical problem of coronary artery disease diagnosis. The following algorithms for noise detection and elimination are tested: a saturation filter, a classification filter, a combined classification-saturation filter, and a consensus saturation filter. The distinguishing feature of the novel consensus saturation filter is its high reliability, which is due to the multiple detection of potentially noisy examples. Reliable detection of noisy examples is important for the analysis of patient records in medical databases, as well as for the induction of rules from filtered data, representing genuine characteristics of the diagnostic domain. Medical evaluation in the problem of coronary artery disease diagnosis shows that the detected noisy examples are indeed noisy or non-typical class representatives.
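Of the filters compared, the classification filter is the simplest to sketch: flag an example as potentially noisy when a classifier trained on the remaining data misclassifies it. The 1-NN classifier, toy data, and leave-one-out folding below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal classification-filter sketch: train on all other examples,
# flag example i if the classifier disagrees with its recorded label.
# A 1-NN classifier on synthetic 1-D data stands in for the real learner.
import numpy as np

rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(-2, 0.5, size=(20, 1)),
                    rng.normal(+2, 0.5, size=(20, 1))])
y = np.array([0] * 20 + [1] * 20)
y_noisy = y.copy()
y_noisy[0] = 1                        # inject one mislabelled example

def one_nn_predict(X_tr, y_tr, x):
    """Label of the nearest training example (1-NN)."""
    return y_tr[np.argmin(np.abs(X_tr - x).sum(axis=1))]

flagged = []
for i in range(len(y_noisy)):         # leave-one-out classification filter
    mask = np.arange(len(y_noisy)) != i
    if one_nn_predict(X[mask], y_noisy[mask], X[i]) != y_noisy[i]:
        flagged.append(i)
print(flagged)
```

The consensus variant the paper favours would only flag examples that several such detectors agree on, trading recall for the reliability that matters in a medical domain.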
Evolutionary Monte Carlo: Applications to C_p Model Sampling and Change Point Problem
 STATISTICA SINICA
, 2000
Abstract

Cited by 25 (5 self)
Motivated by the success of genetic algorithms and simulated annealing in hard optimization problems, the authors propose a new Markov chain Monte Carlo (MCMC) algorithm called evolutionary Monte Carlo. This algorithm incorporates several attractive features of genetic algorithms and simulated annealing into the framework of MCMC. It works by simulating a population of Markov chains in parallel, where each chain is attached to a different temperature. The population is updated by mutation (Metropolis update), crossover (partial state swapping) and exchange operators (full state swapping). The algorithm is illustrated through examples of Cp-based model selection and change-point identification. The numerical results and the extensive comparisons show that evolutionary Monte Carlo is a promising approach for simulation and optimization.
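The population structure can be sketched with just the mutation and exchange operators (the crossover move between chains is omitted here): several chains run at different temperatures, each updated by Metropolis, with occasional swaps between adjacent temperatures. The bimodal target, temperature ladder, and step counts below are invented for illustration.

```python
# Simplified population-MCMC sketch (mutation + exchange only): chains at
# temperatures 1, 2, 4 sample a bimodal density; the hot chains cross the
# barrier easily and exchange moves pass good states down to the cold chain.
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    # bimodal target: equal mixture of N(-3, 1) and N(3, 1), up to a constant
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

temps = np.array([1.0, 2.0, 4.0])     # temperature ladder
x = np.zeros(len(temps))              # one state per chain
samples = []
for step in range(10000):
    for c, T in enumerate(temps):     # mutation: Metropolis at temperature T
        prop = x[c] + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < (log_target(prop) - log_target(x[c])) / T:
            x[c] = prop
    c = rng.integers(len(temps) - 1)  # exchange: propose swapping chains c, c+1
    delta = (1 / temps[c] - 1 / temps[c + 1]) * \
            (log_target(x[c + 1]) - log_target(x[c]))
    if np.log(rng.uniform()) < delta:
        x[c], x[c + 1] = x[c + 1], x[c]
    samples.append(x[0])              # record the cold (T = 1) chain

print(round(float((np.array(samples) > 0).mean()), 2))
```

A single Metropolis chain at T = 1 with this proposal would tend to stay stuck in one mode; the tempered population visits both, which is the feature the algorithm exploits for multimodal model-selection posteriors.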
Privacy preserving regression modelling via distributed computation
 In Proc. Tenth ACM SIGKDD Internat. Conf. on Knowledge Discovery and Data Mining
, 2004
"... www.niss.org ..."