Results 1-10 of 25
Lang S: Generalized structured additive regression based on Bayesian P-splines
 Computational Statistics & Data Analysis
Abstract

Cited by 26 (7 self)
Generalized additive models (GAMs) for modelling nonlinear effects of continuous covariates are now well-established tools for the applied statistician. In this paper we develop Bayesian GAMs and extensions to generalized structured additive regression based on one- or two-dimensional P-splines as the main building block. The approach extends previous work by Lang and Brezger (2003) for Gaussian responses. Inference relies on Markov chain Monte Carlo (MCMC) simulation techniques, and is either based on iteratively weighted least squares (IWLS) proposals or on latent utility representations of (multi)categorical regression models. Our approach covers the most common univariate response distributions, e.g. the Binomial, Poisson or Gamma distribution, as well as multicategorical responses. As we will demonstrate through two applications on the forest health status of trees and a space-time analysis of health insurance data, the approach allows realistic modelling of complex problems. We consider the enormous flexibility and extendability of our approach a main advantage of Bayesian inference based on MCMC techniques compared to more traditional approaches. Software for the methodology presented in the paper is provided within the public domain package BayesX. Key words: geoadditive models, IWLS proposals, multicategorical response, structured additive
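The random-walk prior on the spline coefficients used in Bayesian P-splines is, up to its variance parameter, determined by a difference penalty matrix. A minimal sketch of building the second-order penalty (a hypothetical illustration with numpy, not the BayesX implementation):

```python
import numpy as np

def second_order_penalty(n_coef):
    """Penalty matrix K = D2' D2 induced by a second-order
    random-walk prior on n_coef P-spline coefficients."""
    # Second-order difference operator: each row encodes
    # b[j] - 2*b[j+1] + b[j+2]
    D2 = np.diff(np.eye(n_coef), n=2, axis=0)
    return D2.T @ D2

K = second_order_penalty(5)
# K is symmetric and annihilates linear sequences, so the prior
# leaves the level and slope of the fitted curve unpenalized.
```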
Bayesian Variable Selection and the Swendsen-Wang Algorithm
Abstract

Cited by 16 (0 self)
The need to explore model uncertainty in linear regression models with many predictors has motivated improvements in Markov chain Monte Carlo sampling algorithms for Bayesian variable selection. Currently used sampling algorithms for Bayesian variable selection may perform poorly when there are severe multicollinearities among the predictors. This article describes a new sampling method based on an analogy with the Swendsen-Wang algorithm for the Ising model, which can give substantial improvements over alternative sampling schemes in the presence of multicollinearity. In linear regression with a given set of potential predictors, we can index possible models by a binary parameter vector that indicates which of the predictors are included or excluded. By thinking of the posterior distribution of this parameter as a binary spatial field, we can use auxiliary variable methods inspired by the Swendsen-Wang algorithm for the Ising model to sample from the posterior, where dependence among parameters is reduced by conditioning on auxiliary variables. Performance of the method is described for both simulated and real data.
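The binary indexing of candidate models described above can be sketched in a few lines; this is a hypothetical illustration of the model space, not the paper's Swendsen-Wang sampler itself:

```python
from itertools import product

def enumerate_models(predictors):
    """Index every candidate regression model by a binary vector
    gamma, where gamma[j] = 1 includes predictor j and 0 excludes it."""
    for gamma in product([0, 1], repeat=len(predictors)):
        included = [p for p, g in zip(predictors, gamma) if g]
        yield gamma, included

models = list(enumerate_models(["x1", "x2", "x3"]))
# 2**3 = 8 candidate models, from the null model () to (x1, x2, x3)
```

A sampler over this space visits gamma vectors rather than enumerating all 2^K of them; the paper's contribution is the auxiliary-variable move that updates correlated blocks of gamma jointly.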
Learning to Recognize Objects with Little Supervision
, 2008
Abstract

Cited by 10 (0 self)
This paper shows (i) improvements over state-of-the-art local feature recognition systems, (ii) how to formulate principled models for automatic local feature selection in object class recognition when there is little supervised data, and (iii) how to formulate sensible spatial image context models using a conditional random field for integrating local features and segmentation cues (superpixels). By adopting sparse kernel methods, Bayesian learning techniques and data association with constraints, the proposed model identifies the most relevant sets of local features for recognizing object classes, achieves performance comparable to the fully supervised setting, and obtains excellent results for image classification.
Eils R: Inferring Genetic Regulatory Logic from Expression Data
 Bioinformatics
Abstract

Cited by 8 (0 self)
Motivation: High-throughput molecular genetics methods allow the collection of data about the expression of genes at different time points and under different conditions. The challenge is to infer gene regulatory interactions from these data and to gain insight into the mechanisms of genetic regulation. Results: We propose a model for genetic regulatory interactions which has a biologically motivated Boolean logic semantics but is probabilistic in nature, and is hence able to cope with noisy biological processes and data. We propose a method for learning the model from data based on a Bayesian approach and utilizing Gibbs sampling. We tested our method on previously published data of the S. cerevisiae cell cycle and found relations between genes consistent with biological knowledge.
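The idea of a Boolean logic semantics of a probabilistic nature can be illustrated by a noisy logic gate; the gate type and noise model here are hypothetical choices for illustration, not the paper's exact parameterization:

```python
def noisy_and(parent_states, eps=0.1):
    """Probability that a target gene is expressed, given its
    regulators, under a noisy Boolean AND: the deterministic AND
    output is flipped with probability eps."""
    deterministic = all(parent_states)
    return 1.0 - eps if deterministic else eps

p_on = noisy_and([True, True])    # both regulators active
p_off = noisy_and([True, False])  # one regulator inactive
```

Gibbs sampling can then be used to infer, for each gene, which regulators and which logic function best explain the observed expression states.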
On the consistency of Bayesian variable selection for high dimensional binary regression and classification
 Neural Comput
, 2006
Abstract

Cited by 7 (0 self)
Bayesian variable selection has recently gained much empirical success in a variety of applications where the number K of explanatory variables (x1,...,xK) is possibly much larger than the sample size n. For generalized linear models, if most of the xj's have very small effects on the response y, we show that it is possible to use Bayesian variable selection to reduce overfitting caused by the curse of dimensionality K ≫ n. In this approach a suitable prior can be used to choose a few out of the many xj's to model y, so that the posterior will propose probability densities p that are "often close" to the true density p∗ in some sense. The closeness can be described by a Hellinger distance between p and p∗ that scales at a power very close to n^{-1/2}, which is the "finite-dimensional rate" corresponding to a low-dimensional situation. These findings extend some recent work of Jiang [Technical Report 0502 (2005), Dept. Statistics, Northwestern Univ.] on consistency of Bayesian variable selection for binary classification.
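Under one common convention (conventions differ by a constant factor), the Hellinger distance referred to above is

```latex
d_H(p, p^*) = \left( \int \big( \sqrt{p} - \sqrt{p^*} \big)^2 \, d\mu \right)^{1/2},
```

and the result states that the posterior concentrates on densities $p$ with $d_H(p, p^*)$ shrinking at a rate close to $n^{-1/2}$, as in a fixed low-dimensional model.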
VAR forecasting using Bayesian variable selection
 Journal of Applied Econometrics
, 2012
Bayesian Input Variable Selection Using Posterior Probabilities and Expected Utilities
, 2002
Abstract

Cited by 6 (1 self)
We consider input variable selection in complex Bayesian hierarchical models. Our goal is to find a model with the smallest number of input variables having statistically or practically at least the same expected utility as the full model with all the available inputs. A good estimate for the expected utility can be computed using cross-validation predictive densities. In the case of input selection and a large number of input combinations, the computation of the cross-validation predictive densities for each model easily becomes computationally prohibitive. We propose to use the posterior probabilities obtained via variable-dimension MCMC methods to find potentially useful input combinations, for which the final model choice and assessment is done using the expected utilities.
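The cross-validation predictive density computation that becomes prohibitive over many input combinations can be sketched for a single model as follows; `fit` and `log_pred` are hypothetical user-supplied callables, and the toy Gaussian model is only for illustration:

```python
import math

def loo_log_predictive(data, fit, log_pred):
    """Leave-one-out estimate of the expected log predictive
    density: refit without point i, then score point i."""
    total = 0.0
    for i, held_out in enumerate(data):
        rest = data[:i] + data[i + 1:]
        model = fit(rest)
        total += log_pred(model, held_out)
    return total / len(data)

# Toy model: Gaussian with known sigma = 1, mean fit by averaging.
fit = lambda xs: sum(xs) / len(xs)
log_pred = lambda mu, x: -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2
u = loo_log_predictive([0.0, 1.0, 2.0], fit, log_pred)
```

Repeating this refit-and-score loop for every candidate input combination is exactly the cost the posterior-probability screening step is designed to avoid.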
Modelling Longitudinal Data using a Pair-Copula Decomposition of Serial Dependence
, 2009
Abstract

Cited by 6 (3 self)
Copulas have proven to be very successful tools for the flexible modelling of cross-sectional dependence. In this paper we express the dependence structure of continuous time series data using a sequence of bivariate copulas. This corresponds to a type of decomposition recently called a 'vine' in the graphical models literature, where each copula is termed a 'pair-copula'. We propose a Bayesian approach for the estimation of this dependence structure for longitudinal data. Bayesian selection ideas are used to identify any independence pair-copulas, with the end result being a parsimonious representation of a time-inhomogeneous Markov process of varying order. Estimates are Bayesian model averages over the distribution of the lag structure of the Markov process. Overall, the pair-copula construction is very general and the Bayesian approach generalises many previous methods for the analysis of longitudinal data. Both the reliability of the proposed Bayesian methodology and the advantages of the pair-copula formulation are demonstrated via simulation and two examples. The first is an agricultural science example, while the second is an econometric model for the forecasting of intraday electricity load. For both examples the Bayesian pair-copula model is substantially more flexible than longitudinal models employed previously.
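For the special case of a first-order Markov series with a common marginal distribution function $F$ (a simplifying assumption made here only for illustration), the pair-copula decomposition follows from applying Sklar's theorem to each consecutive pair:

```latex
f(y_1, \dots, y_T)
  = f(y_1) \prod_{t=2}^{T} f(y_t \mid y_{t-1})
  = \prod_{t=1}^{T} f(y_t)
    \prod_{t=2}^{T} c_{t-1,t}\big( F(y_{t-1}), F(y_t) \big),
```

where each $c_{t-1,t}$ is a bivariate pair-copula density; higher-order serial dependence adds further pair-copulas conditioned on the intermediate observations.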
Bayesian Prediction Using Adaptive Ridge Estimators
Abstract

Cited by 4 (0 self)
The Bayesian linear model framework has become an increasingly popular building block in regression problems. It has been shown to produce models with good predictive power and can be used with basis functions that are nonlinear in the data to provide flexible estimated regression functions. Further, model uncertainty can be accounted for by Bayesian model averaging. We propose a simpler way to account for model uncertainty that is based on generalized ridge regression estimators. This is shown to predict well and to be much more computationally efficient than standard model averaging methods. Further, we demonstrate how to efficiently mix over different sets of basis functions, letting the data determine which are most appropriate for the problem at hand. Keywords: Bayesian model averaging, generalized ridge regression, prediction, regression splines, shrinkage.
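The generalized ridge estimator underlying this approach allows a separate shrinkage parameter per coefficient; a minimal numpy sketch (hypothetical, not the authors' code):

```python
import numpy as np

def generalized_ridge(X, y, lambdas):
    """Generalized ridge estimator with one shrinkage parameter
    per coefficient: beta = (X'X + diag(lambdas))^{-1} X'y."""
    return np.linalg.solve(X.T @ X + np.diag(lambdas), X.T @ y)

# Equal lambdas recover ordinary ridge regression; lambdas -> 0
# recovers least squares.
beta = generalized_ridge(np.eye(2), np.array([1.0, 2.0]), [1.0, 1.0])
```

Adapting each lambda to the data is what makes the estimator "adaptive" and lets it approximate the effect of model averaging at a fraction of the cost.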