Results 1–10 of 115
Bayesian computing with INLA: New features
 Computational Statistics & Data Analysis
, 2013
A computational framework for infinite-dimensional Bayesian inverse problems, Part I: The linearized case, with application to global seismic inversion
, 2013
Abstract

Cited by 16 (9 self)
We present a computational framework for estimating the uncertainty in the numerical solution of linearized infinite-dimensional statistical inverse problems. We adopt the Bayesian inference formulation: given observational data and their uncertainty, the governing forward problem and its uncertainty, and a prior probability distribution describing uncertainty in the parameter field, find the posterior probability distribution over the parameter field. The prior must be chosen appropriately in order to guarantee well-posedness of the infinite-dimensional inverse problem and facilitate computation of the posterior. Furthermore, straightforward discretizations may not lead to convergent approximations of the infinite-dimensional problem. And finally, solution of the discretized inverse problem via explicit construction of the covariance matrix is prohibitive due to the need to solve the forward problem as many times as there are parameters. Our computational framework builds on the infinite-dimensional formulation proposed by Stuart [Acta Numer., 19 (2010), pp. 451–559] and incorporates a number of components aimed at ensuring a convergent discretization of the underlying infinite-dimensional inverse problem. The framework additionally incorporates algorithms for manipulating the prior, constructing a low-rank approximation of the data-informed component of the posterior covariance operator, and exploring the posterior …
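The linearized Gaussian setting described above admits a compact numerical sketch. The toy script below (small dense matrices with illustrative sizes, not the authors' matrix-free PDE implementation) forms the prior-preconditioned data-misfit Hessian, retains its dominant eigenpairs, and assembles the low-rank approximation of the posterior covariance:

```python
import numpy as np

# Toy linearized Gaussian inverse problem: y = F u + noise, Gaussian prior on u.
# Everything is a small dense matrix here; in the paper the operators are
# infinite-dimensional PDE operators accessed only through their action.
rng = np.random.default_rng(0)
n, m, r = 50, 12, 5                          # parameters, data, retained rank

# Forward operator with decaying sensitivity, so a low-rank update is accurate
F = np.diag(2.0 ** -np.arange(m)) @ rng.standard_normal((m, n))
Gamma_pr = 0.5 * np.eye(n)                   # prior covariance (toy choice)
sigma2 = 0.1                                 # observation noise variance

# Prior-preconditioned data-misfit Hessian: H = Gpr^{1/2} F^T F Gpr^{1/2} / sigma^2
S = np.linalg.cholesky(Gamma_pr)             # Gpr = S S^T
H = S.T @ F.T @ F @ S / sigma2

# The r dominant eigenpairs span the data-informed parameter directions
lam, V = np.linalg.eigh(H)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]

# Low-rank posterior covariance:
#   Gpost ≈ Gpr - Gpr^{1/2} V diag(lam/(1+lam)) V^T Gpr^{1/2}
D = np.diag(lam / (1.0 + lam))
Gpost_lr = Gamma_pr - S @ V @ D @ V.T @ S.T

# Exact posterior covariance for comparison (feasible only at this toy scale)
Gpost = np.linalg.inv(F.T @ F / sigma2 + np.linalg.inv(Gamma_pr))
err_lr = np.linalg.norm(Gpost_lr - Gpost) / np.linalg.norm(Gpost)
err_prior = np.linalg.norm(Gamma_pr - Gpost) / np.linalg.norm(Gpost)
```

Truncating the remaining directions is what makes the approach scale: only r Hessian eigenpairs, not one forward solve per parameter, are needed.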
A generalized least-square matrix decomposition
 Journal of the American Statistical Association
Abstract

Cited by 13 (6 self)
Variables in many massive high-dimensional data sets are structured, arising for example from measurements on a regular grid as in imaging and time series or from spatio-temporal measurements as in climate studies. Classical multivariate techniques ignore these structural relationships, often resulting in poor performance. We propose a generalization of the singular value decomposition (SVD) and principal components analysis (PCA) that is appropriate for massive data sets with structured variables or known two-way dependencies. By finding the best low-rank approximation of the data with respect to a transposable quadratic norm, our decomposition, entitled the Generalized least-square Matrix Decomposition (GMD), directly accounts for structural relationships. As many variables in high-dimensional settings are often irrelevant or noisy, we also regularize our matrix decomposition by adding two-way penalties to encourage sparsity or smoothness. We develop fast computational algorithms using our methods to perform generalized PCA (GPCA), sparse GPCA, and functional GPCA on massive data sets. Through simulations and a whole brain functional MRI example we demonstrate the utility of our methodology for dimension reduction, signal recovery, and feature selection with high-dimensional structured data.
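A decomposition of this kind reduces to an ordinary SVD after transforming the data by square-root factors of the two quadratic operators. A minimal sketch with hypothetical AR(1)-type structure matrices Q and R (illustrative stand-ins, not the paper's data or penalties):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 30, 20, 3
X = rng.standard_normal((n, p))

# Hypothetical positive-definite structure operators Q (rows) and R (columns);
# in the paper these encode known two-way dependence, e.g. spatial smoothness.
def ar1(m, rho=0.4):
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

Q, R = ar1(n), ar1(p)

# Best rank-k fit in the norm ||A||_{Q,R}^2 = tr(Q A R A^T): take an ordinary
# SVD of the transformed matrix Lq^T X Lr (Cholesky square roots), map back.
Lq, Lr = np.linalg.cholesky(Q), np.linalg.cholesky(R)   # Q = Lq Lq^T, R = Lr Lr^T
Ut, d, Vt = np.linalg.svd(Lq.T @ X @ Lr, full_matrices=False)

U = np.linalg.solve(Lq.T, Ut[:, :k])     # satisfies U^T Q U = I
V = np.linalg.solve(Lr.T, Vt.T[:, :k])   # satisfies V^T R V = I
X_hat = U @ np.diag(d[:k]) @ V.T         # rank-k reconstruction
```

The factors are orthonormal in the Q- and R-inner products rather than the Euclidean one, which is exactly how the decomposition "accounts for structural relationships."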
Estimation and prediction in spatial models with block composite likelihoods
 Journal of Computational and Graphical Statistics (to appear)
, 2013
Abstract

Cited by 11 (2 self)
A block composite likelihood is developed for estimation and prediction in large spatial datasets. The composite likelihood is constructed from the joint densities of pairs of adjacent spatial blocks. This allows large datasets to be split into many smaller datasets, each of which can be evaluated separately, and combined through a simple summation. Estimates for unknown parameters are obtained by maximizing the block composite likelihood function. In addition, a new method for optimal spatial prediction under the block composite likelihood is presented. Asymptotic variances for both parameter estimates and predictions are computed using Godambe sandwich matrices. The approach gives considerable improvements in computational efficiency, and the composite structure obviates the need to load entire datasets into memory at once, completely avoiding memory limitations imposed by massive datasets. Moreover, computing time can be reduced even further by distributing the operations using parallel computing. A simulation study shows that composite likelihood estimates and predictions, as well as their corresponding asymptotic confidence intervals, are competitive with those based on the full likelihood. The procedure is demonstrated on one dataset from the mining industry and one dataset of satellite retrievals. The real-data examples …
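The construction can be illustrated in a few lines: split the sites into blocks and sum exact Gaussian log-densities over adjacent block pairs, so each term factorises a small matrix instead of the full one. A toy 1-D sketch with a hypothetical exponential covariance (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D layout: n sites on a line with an exponential covariance model
n, bs = 60, 15
s = np.linspace(0.0, 10.0, n)

def cov(theta, x):
    # theta = (variance, range); tiny jitter keeps the Cholesky stable
    d = np.abs(x[:, None] - x[None, :])
    return theta[0] * np.exp(-d / theta[1]) + 1e-10 * np.eye(len(x))

def gauss_loglik(y, C):
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, y)
    return -0.5 * z @ z - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

theta_true = (1.0, 2.0)
y = np.linalg.cholesky(cov(theta_true, s)) @ rng.standard_normal(n)

blocks = [np.arange(i, min(i + bs, n)) for i in range(0, n, bs)]

def block_composite_loglik(theta):
    # Sum joint log-densities over ADJACENT block pairs only: each term needs
    # a (2*bs) x (2*bs) factorisation instead of one n x n factorisation.
    ll = 0.0
    for b1, b2 in zip(blocks[:-1], blocks[1:]):
        idx = np.concatenate([b1, b2])
        ll += gauss_loglik(y[idx], cov(theta, s[idx]))
    return ll
```

Maximising `block_composite_loglik` over `theta` gives the composite-likelihood estimates; since the pair terms are independent computations, they parallelise trivially, as the abstract notes.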
A toolbox for fitting complex spatial point process models using integrated nested Laplace approximation (INLA)
, 2010
Multivariate Gaussian Random Fields Using Systems of Stochastic Partial Differential Equations
, 2013
Abstract

Cited by 9 (8 self)
In this paper a new approach for constructing multivariate Gaussian random fields (GRFs) using systems of stochastic partial differential equations (SPDEs) is introduced and applied to simulated and real data. By solving a system of SPDEs, we can construct multivariate GRFs. On the theoretical side, the notorious requirement of non-negative definiteness for the covariance matrix of the GRF is satisfied, since the covariance matrices constructed with this approach are automatically symmetric positive definite. Using the approximate stochastic weak solutions to the systems of SPDEs, multivariate GRFs are represented by multivariate Gaussian Markov random fields (GMRFs) with sparse precision matrices. On the computational side, the sparse structures therefore make it possible to use numerical algorithms for sparse matrices for fast sampling from the random fields and for statistical inference, so the big-n problem can also be partially resolved for these models. These models outperform existing multivariate GRF models on a commonly used real dataset.
Parameter Estimation in High Dimensional Gaussian Distributions
, 2012
Abstract

Cited by 9 (5 self)
In order to compute the log-likelihood for high-dimensional Gaussian models, it is necessary to compute the determinant of the large, sparse, symmetric positive definite precision matrix. Traditional methods for evaluating the log-likelihood, which are typically based on Cholesky factorisations, are not feasible for very large models due to the massive memory requirements. We present a novel approach for evaluating such likelihoods that only requires the computation of matrix-vector products. In this approach we utilise matrix functions, Krylov subspaces, and probing vectors to construct an iterative numerical method for computing the log-likelihood.
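The ingredients named above (matrix functions, Krylov subspaces, probing vectors) combine naturally into stochastic Lanczos quadrature for tr(log Q) = log det Q. The sketch below is a generic textbook variant of that idea, not the paper's exact algorithm:

```python
import numpy as np

def logdet_matvec(matvec, n, n_probes=50, m=25, seed=0):
    """Estimate log det(A) = tr(log A) for SPD A from matrix-vector products
    only: Hutchinson (Rademacher) probing plus Lanczos (Gauss) quadrature."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
        nv2 = v @ v
        q, q_prev = v / np.sqrt(nv2), np.zeros(n)
        Qb = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m)
        k = m
        for j in range(m):                           # Lanczos tridiagonalisation
            Qb[:, j] = q
            w = matvec(q) - beta[j - 1] * q_prev     # beta[-1] == 0 on first pass
            alpha[j] = q @ w
            w -= alpha[j] * q
            w -= Qb[:, :j + 1] @ (Qb[:, :j + 1].T @ w)   # full reorthogonalisation
            b = np.linalg.norm(w)
            if b < 1e-12:                            # invariant subspace found
                k = j + 1
                break
            beta[j] = b
            q_prev, q = q, w / b
        T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
        theta, S = np.linalg.eigh(T)
        # Gauss quadrature: v^T log(A) v ≈ ||v||^2 * sum_i (e1^T s_i)^2 log(theta_i)
        total += nv2 * (S[0, :] ** 2 @ np.log(theta))
    return total / n_probes

# Example: an SPD matrix accessed only through products with vectors
rng = np.random.default_rng(1)
n = 80
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)                          # well-conditioned SPD test matrix
est = logdet_matvec(lambda v: A @ v, n)
exact = np.linalg.slogdet(A)[1]
```

Because only `matvec` is required, the matrix never needs to be factorised or even stored densely, which is the memory saving the abstract describes.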
Infinite-Dimensional Kalman Filtering Approach to Spatio-Temporal Gaussian Process Regression
Abstract

Cited by 8 (1 self)
We show how spatio-temporal Gaussian process (GP) regression problems (or the equivalent kriging problems) can be formulated as infinite-dimensional Kalman filtering and Rauch-Tung-Striebel (RTS) smoothing problems, and present a procedure for converting spatio-temporal covariance functions into infinite-dimensional stochastic differential equations (SDEs). The resulting infinite-dimensional SDEs belong to the class of stochastic pseudo-differential equations and can be numerically treated using the methods developed for deterministic counterparts of the equations. The scaling of the computational cost in the proposed approach is linear in the number of time steps, as opposed to the cubic scaling of the direct GP regression solution. We also show how separable covariance functions lead to a finite-dimensional Kalman filtering and RTS smoothing problem, present analytical and numerical examples, and discuss numerical methods for computing the solutions.
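For a purely temporal Matérn-3/2 covariance the idea can be shown exactly in finite dimensions: the GP becomes a two-state linear SDE and a Kalman filter reproduces the dense GP marginal likelihood in O(T) rather than O(T^3) time. A sketch under that standard state-space conversion (parameter values illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Matérn-3/2 GP on the line as a 2-state linear SDE with state (f, f'):
# f'' + 2*lam*f' + lam^2*f = white noise, where lam = sqrt(3)/lengthscale.
sigma2, ell, R = 1.0, 1.5, 0.1
lam = np.sqrt(3.0) / ell
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
Pinf = np.diag([sigma2, lam**2 * sigma2])    # stationary state covariance

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 10.0, 40))
y = np.sin(t) + rng.normal(0.0, np.sqrt(R), t.size)

# Kalman filter over time: linear in the number of observations
mean, P = np.zeros(2), Pinf.copy()
t_prev, loglik = 0.0, 0.0
for tk, yk in zip(t, y):
    A = expm(F * (tk - t_prev))              # exact transition for the LTI SDE
    Qd = Pinf - A @ Pinf @ A.T               # matching process-noise covariance
    mean, P = A @ mean, A @ P @ A.T + Qd     # predict
    S = P[0, 0] + R                          # innovation variance (H = [1, 0])
    K = P[:, 0] / S                          # Kalman gain
    r = yk - mean[0]
    loglik += -0.5 * (np.log(2.0 * np.pi * S) + r * r / S)
    mean = mean + K * r                      # update
    P = P - np.outer(K, P[0, :])
    t_prev = tk
```

Because the SDE representation of this covariance is exact, the filter's accumulated `loglik` matches the usual dense GP marginal likelihood to numerical precision, which is a convenient correctness check.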
Interpolation of Spatial Data  A Stochastic or a Deterministic Problem?
, 2010
Abstract

Cited by 8 (1 self)
Interpolation of spatial data is a very general mathematical problem with various applications. Different approaches, corresponding to different modeling assumptions, have been proposed to tackle it. In geostatistics, it is assumed that the underlying structure of the data is a stochastic process, which leads to an interpolation procedure known as kriging. This method is mathematically equivalent to kernel interpolation, a method used in numerical analysis for the same problem but derived under completely different modeling assumptions. In this article we present the two approaches and discuss their modeling assumptions, notions of optimality, and the different concepts used to quantify interpolation accuracy. It turns out that their relation is much closer than has been appreciated so far. In particular, we show that results in the kernel interpolation literature on the convergence rates of approximation errors apply to the statistical framework as well, though with a different interpretation. We finally sketch the answers given in both frameworks to the issue of kernel misspecification, present some methods …
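The stated equivalence is easy to exhibit: the simple-kriging mean and the kernel interpolant are the same linear combination k(x, X) K^{-1} y; the stochastic reading additionally supplies a prediction variance. A small sketch with a hypothetical exponential kernel (data and lengthscale illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.sort(rng.uniform(0.0, 5.0, 12))     # scattered data sites
y = np.cos(X)                              # observed values

def k(a, b, ell=1.0):
    # exponential kernel = covariance of an Ornstein-Uhlenbeck process
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

K = k(X, X)
alpha = np.linalg.solve(K, y)              # kernel-interpolation coefficients

xs = np.linspace(0.0, 5.0, 101)
# The kernel interpolant s(x) = k(x, X) @ alpha is exactly the simple-kriging
# mean E[Z(x) | data] when k is read as a zero-mean covariance function.
s = k(xs, X) @ alpha
# Kriging adds an accuracy measure: the prediction variance
#   v(x) = k(x, x) - k(x, X) K^{-1} k(X, x), which vanishes at the data sites.
v = 1.0 - np.einsum('ij,ji->i', k(xs, X), np.linalg.solve(K, k(X, xs)))
```

The deterministic framework reads `v` instead through native-space error bounds; the numbers are the same, only the interpretation differs, which is the article's point.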