Results 1–10 of 13
Interactive furniture layout using interior design guidelines
 ACM Trans. Graph
, 2011
Abstract

Cited by 10 (0 self)
Figure 1: Interactive furniture layout. For a given layout (left), our system suggests new layouts (middle) that respect the user’s constraints and follow interior design guidelines. The red chair has been fixed in place by the user. One of the suggestions is shown on the right. We present an interactive furniture layout system that assists users by suggesting furniture arrangements that are based on interior design guidelines. Our system incorporates the layout guidelines as terms in a density function and generates layout suggestions by rapidly sampling the density function using a hardware-accelerated Monte Carlo sampler. Our results demonstrate that the suggestion generation functionality measurably increases the quality of furniture arrangements produced by participants with no prior training in interior design.
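The mechanism this abstract describes — design guidelines encoded as terms of a density function, explored with a Monte Carlo sampler — can be illustrated in miniature. The one-dimensional density, target position, and step size below are illustrative stand-ins, not the paper's actual guideline terms or its hardware-accelerated sampler:

```python
import math
import random

def layout_density(x):
    # Stand-in "guideline" term: score a 1-D furniture position by its
    # distance from a hypothetical recommended position of 2.0.
    return math.exp(-((x - 2.0) ** 2))

def metropolis(n_steps, step=0.5, seed=0):
    # Random-walk Metropolis sampler over the layout density.
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)).
        if rng.random() < layout_density(proposal) / layout_density(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
post_burn = samples[5000:]
mean_position = sum(post_burn) / len(post_burn)
```

High-density samples correspond to suggested arrangements; after burn-in the chain concentrates near the recommended position.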
Understanding GPU programming for statistical computation: Studies in massively parallel massive mixtures
 Journal of Computational and Graphical Statistics
, 2010
Abstract

Cited by 9 (3 self)
This paper describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large data sets. An example context concerns common biological studies using high-throughput technologies generating many, very large data sets and requiring increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, examples of the benefits of GPU implementations in terms of processing speed and scale-up in ability to analyze large data sets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speedup and, critically, enable statistical analyses that presently will not be performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data and details that will enable readers to implement and explore the GPU approach in this mixture modelling context.
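The data parallelism such GPU implementations exploit comes from the mixture likelihood factorising over observations, so each data point's density can be evaluated independently. A plain-Python stand-in for that per-observation kernel (weights, means, and data below are illustrative, not from the paper):

```python
import math

def mixture_logpdf(y, weights, means, sds):
    # Per-observation Gaussian-mixture log-density. On a GPU each
    # observation is mapped to its own thread; here a serial loop over
    # the data stands in for that data parallelism.
    total = 0.0
    for w, m, s in zip(weights, means, sds):
        total += w * math.exp(-((y - m) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
    return math.log(total)

data = [0.1, 2.2, -1.0, 1.9]
# Independent per-point evaluations: the embarrassingly parallel step.
loglik = sum(mixture_logpdf(y, [0.5, 0.5], [0.0, 2.0], [1.0, 1.0]) for y in data)
```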
Efficient Bayesian Inference for Switching State-Space Models using Particle Markov Chain Monte Carlo Methods
, 2010
Abstract

Cited by 6 (0 self)
Switching state-space models (SSSM) are a popular class of time series models that have found many applications in statistics, econometrics and advanced signal processing. Bayesian inference for these models typically relies on Markov chain Monte Carlo (MCMC) techniques. However, even sophisticated MCMC methods dedicated to SSSM can prove quite inefficient as they update potentially strongly correlated variables one at a time. Particle Markov chain Monte Carlo (PMCMC) methods are a recently developed class of MCMC algorithms which use particle filters to build efficient proposal distributions in high dimensions [1]. The existing PMCMC methods of [1] are applicable to SSSM, but are restricted to employing standard particle filtering techniques. Yet, in the context of SSSM, much more efficient particle techniques have been developed [22, 23, 24]. In this paper, we extend the PMCMC framework to enable the use of these efficient particle methods within MCMC. We demonstrate the resulting generic methodology on a variety of examples including a multiple change-points model for well-log data and a model for U.S./U.K. exchange rate data. These new PMCMC algorithms are shown experimentally to outperform state-of-the-art MCMC techniques for a fixed computational complexity. Additionally, they can be easily parallelized [39], which allows further substantial gains.
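The basic PMCMC construction — a particle filter supplying a likelihood estimate inside a Metropolis-Hastings update — can be sketched on a toy linear-Gaussian model. This is the plain particle marginal Metropolis-Hastings of [1] with a bootstrap filter, not the more efficient particle techniques the paper incorporates; the model, noise scales, and tuning constants are all illustrative:

```python
import math
import random

rng = random.Random(1)

def simulate(a, T=40, sx=0.5, sy=0.5):
    # Linear-Gaussian state-space model: x_t = a*x_{t-1} + v, y_t = x_t + w.
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + rng.gauss(0.0, sx)
        ys.append(x + rng.gauss(0.0, sy))
    return ys

def pf_loglik(a, ys, N=100, sx=0.5, sy=0.5):
    # Bootstrap particle filter; returns a log-likelihood estimate up to
    # an additive constant independent of a (the Gaussian normaliser is
    # dropped from the weights, so it cancels in the MH ratio).
    parts = [0.0] * N
    ll = 0.0
    for y in ys:
        parts = [a * p + rng.gauss(0.0, sx) for p in parts]
        ws = [math.exp(-((y - p) ** 2) / (2.0 * sy * sy)) for p in parts]
        s = sum(ws)
        if s == 0.0:
            return float("-inf")  # all weights underflowed: hopeless proposal
        ll += math.log(s / N)
        parts = rng.choices(parts, weights=ws, k=N)  # multinomial resampling
    return ll

def pmmh(ys, n_iters=150, step=0.1):
    # Metropolis-Hastings on the parameter a, with the exact likelihood
    # replaced by the particle filter's estimate.
    a = 0.5
    ll = pf_loglik(a, ys)
    chain = []
    for _ in range(n_iters):
        a_new = a + rng.gauss(0.0, step)
        ll_new = pf_loglik(a_new, ys)
        if math.log(rng.random()) < ll_new - ll:
            a, ll = a_new, ll_new
        chain.append(a)
    return chain

ys = simulate(a=0.8)
chain = pmmh(ys)
```

The chain explores values of a consistent with the data; the paper's contribution is replacing the bootstrap filter here with far more efficient particle methods tailored to SSSM.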
Optimising Performance of Quadrature Methods with Reduced Precision
Abstract

Cited by 2 (1 self)
This paper presents a generic precision optimisation methodology for quadrature computation targeting reconfigurable hardware to maximise performance at a given error tolerance level. The proposed methodology optimises performance by considering integration grid density versus mantissa size of floating-point operators. The optimisation provides the number of integration points and the mantissa size that maximise throughput while meeting the given error tolerance requirement. Three case studies show that the proposed reduced-precision designs on a Virtex-6 SX475T FPGA are up to 6 times faster than comparable FPGA designs with double-precision arithmetic. They are up to 15.1 times faster and 234.9 times more energy efficient than an i7-870 quad-core CPU, and are 1.2 times faster and 42.2 times more energy efficient than a Tesla C2070 GPU.
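The trade-off the paper optimises — a denser integration grid bought by narrower floating-point mantissas — can be imitated in software by rounding operands to a given number of mantissa bits. The integrand, grid sizes, and bit widths below are illustrative; real designs implement this with custom FPGA operators rather than emulation:

```python
import math

def round_mantissa(x, bits):
    # Emulate a floating-point value stored with a reduced mantissa width.
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (bits - e)
    return round(x * scale) / scale

def trapezoid(f, a, b, n, bits):
    # Composite trapezoidal rule with interior evaluations rounded to
    # `bits` mantissa bits.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += round_mantissa(f(a + i * h), bits)
    return total * h

exact = 2.0  # integral of sin(x) over [0, pi]
# Dense grid with a narrow mantissa vs. coarse grid at full precision:
err_dense_narrow = abs(trapezoid(math.sin, 0.0, math.pi, 4096, bits=16) - exact)
err_coarse_full = abs(trapezoid(math.sin, 0.0, math.pi, 64, bits=52) - exact)
```

Here the dense reduced-precision grid wins: truncation error shrinks as 1/n² while the per-sample rounding error stays bounded, mirroring the grid-density-versus-mantissa-size trade-off the paper exploits.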
Iterative Numerical Methods for Sampling from High Dimensional Gaussian Distributions
, 2011
Abstract

Cited by 2 (0 self)
Many applications require efficient sampling from Gaussian distributions. The method of choice depends on the dimension of the problem as well as the structure of the covariance (Σ) or precision matrix (Q). The most common black-box routine for computing a sample is based on Cholesky factorisation. In high dimensions, computing the Cholesky factor of Σ or Q may be prohibitive due to massive fill-in. We compare different methods for computing the samples iteratively, adapting ideas from numerical linear algebra. These methods assume that matrix-vector products, Qv, are fast to compute. We show that some of the methods are competitive and faster than Cholesky sampling and that a parallel version of one method on a Graphics Processing Unit (GPU) using CUDA can achieve a speedup of up to 30x. Moreover, one method is used to sample from the posterior distribution of petroleum reservoir parameters in a North Sea field, given seismic reflection data on a large 3D grid.
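The setting — a precision matrix Q that is cheap to apply while its Cholesky factor is not — can be made concrete with the simplest iterative Gaussian sampler, the component-wise Gibbs sweep. This is a deliberately simpler method than the Krylov-style samplers the paper compares, and the 2-D Q below is purely illustrative:

```python
import random

# Illustrative 2-D precision matrix; the target covariance is Q^{-1},
# here [[2/3, 1/3], [1/3, 2/3]].
Q = [[2.0, -1.0], [-1.0, 2.0]]

def gibbs_gaussian(n_draws, seed=0):
    # Sweep the coordinates, drawing each from its full conditional
    # x_i | x_{-i} ~ N(-sum_{j != i} Q[i][j]*x[j] / Q[i][i], 1 / Q[i][i]).
    # Only entries of Q are touched; no factorisation is ever formed.
    rng = random.Random(seed)
    x = [0.0, 0.0]
    draws = []
    for _ in range(n_draws):
        for i in range(2):
            j = 1 - i
            mean = -Q[i][j] * x[j] / Q[i][i]
            x[i] = rng.gauss(mean, (1.0 / Q[i][i]) ** 0.5)
        draws.append(tuple(x))
    return draws

draws = gibbs_gaussian(50000)
n = len(draws)
m0 = sum(d[0] for d in draws) / n
var0 = sum((d[0] - m0) ** 2 for d in draws) / n  # should approach 2/3
```

The empirical marginal variance converges to the (0,0) entry of Q⁻¹, which is the kind of correctness check any iterative sampler in this family admits.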
An efficient computational approach for prior sensitivity analysis and cross-validation
 LA REVUE CANADIENNE DE STATISTIQUE
, 2010
Some of the What?, Why?, How?, Who? and Where? of Graphics Processing Unit Computing for Bayesian Analysis
Abstract
Over the last 20 years or so, a number of Bayesian researchers and groups have invested a good deal of time, effort and money in parallel computing for Bayesian analysis. The growth from “small research group” to “institutionally supported” cluster computational facilities has had a substantial impact on a number of areas of Bayesian analysis, enabling analyses that are otherwise practically infeasible. Parallel computing has also motivated new approaches to simulation- and optimisation-based Bayesian computations that aim to maximally exploit the “master-slave” and “embarrassingly parallel” computational models [e.g., 3, 4, 6]. In more recent years, increasingly prevalent …
TOWARDS OPTIMAL SCALING OF METROPOLIS-COUPLED
Abstract
We consider optimal temperature spacings for Metropolis-coupled …
Parallel Implementation of Particle MCMC Methods on a GPU
Abstract
This paper examines the problem of estimating the parameters describing system models of quite general nonlinear and multivariable form. The approach is a computational one in which quantities that are intractable to evaluate exactly are approximated by sample averages from randomized algorithms. The main contribution is to illustrate the viability and utility of this approach by examining how high computational loads can be simply managed using commodity hardware. The proposed algorithms and solution architectures are profiled on concrete examples.
ON THE CHOICE OF MCMC KERNELS FOR APPROXIMATE BAYESIAN COMPUTATION WITH SMC SAMPLERS
Abstract
Approximate Bayesian computation (ABC) is a class of simulation-based statistical inference procedures that are increasingly being applied in scenarios where the likelihood function is either analytically unavailable or computationally prohibitive. These methods use, in a principled manner, simulations of the output of a parametrized system in lieu of computing the likelihood to perform parametric Bayesian inference. Such methods have wide applicability when the data generating mechanism can be simulated. While approximate, they can usually be made arbitrarily accurate at the cost of computational resources. In fact, computational issues are central to the successful use of ABC in practice. We focus here on the use of sequential Monte Carlo (SMC) samplers for ABC and, in particular, on the choice of Markov chain Monte Carlo (MCMC) kernels used to drive their performance, investigating the use of kernels whose mixing properties are less sensitive to the quality of the approximation than standard kernels.
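Underneath the SMC and MCMC-kernel machinery the paper studies sits plain rejection ABC: simulate from the model and keep parameters whose synthetic data lie close to the observations. A toy version inferring a Gaussian mean (the prior range, tolerance, and summary statistic are illustrative choices, not the paper's):

```python
import random

rng = random.Random(42)

def simulate(theta, n=100):
    # The simulable model: n draws from N(theta, 1).
    return [rng.gauss(theta, 1.0) for _ in range(n)]

observed = simulate(3.0)  # pretend data with unknown mean 3.0
obs_mean = sum(observed) / len(observed)

def abc_rejection(n_proposals=5000, eps=0.1):
    # Keep theta whenever the simulated summary statistic (the sample
    # mean) falls within eps of the observed one; no likelihood needed.
    accepted = []
    for _ in range(n_proposals):
        theta = rng.uniform(-10.0, 10.0)  # flat prior over a wide range
        sim_mean = sum(simulate(theta)) / 100
        if abs(sim_mean - obs_mean) < eps:
            accepted.append(theta)
    return accepted

posterior = abc_rejection()
```

Shrinking eps sharpens the approximation at the price of many more rejections, which is exactly the computational pressure that motivates SMC samplers and careful MCMC kernel choice.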