Results 1–10 of 46
Spatial Econometrics
 PALGRAVE HANDBOOK OF ECONOMETRICS: VOLUME 1, ECONOMETRIC THEORY
, 2001
"... Spatial econometric methods deal with the incorporation of spatial interaction and spatial structure into regression analysis. The field has seen a recent and rapid growth spurred both by theoretical concerns as well as by the need to be able to apply econometric models to emerging large geocoded da ..."
Abstract

Cited by 64 (5 self)
 Add to MetaCart
Spatial econometric methods deal with the incorporation of spatial interaction and spatial structure into regression analysis. The field has seen recent and rapid growth, spurred both by theoretical concerns and by the need to apply econometric models to emerging large geocoded databases. The review presented in this chapter outlines the basic terminology and discusses in some detail the specification of spatial effects, the estimation of spatial regression models, and specification tests for spatial effects.
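The specification tests for spatial effects mentioned in this abstract typically start from Moran's I, a test statistic for spatial autocorrelation in regression residuals or raw data. A minimal sketch follows; the weight matrix, toy data, and function name are invented for illustration and are not taken from the chapter:

```python
import numpy as np

def morans_i(y, W):
    """Moran's I statistic for spatial autocorrelation.

    y : (n,) array of observations
    W : (n, n) spatial weight matrix with zero diagonal
    """
    n = len(y)
    z = y - y.mean()                 # deviations from the mean
    num = n * (z @ W @ z)            # spatially weighted cross-products
    den = W.sum() * (z @ z)
    return num / den

# Toy example: four locations on a chain 0-1-2-3 (rook contiguity).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 2.0, 3.0, 4.0])  # a spatial trend => positive autocorrelation
print(round(morans_i(y, W), 3))     # → 0.333
```

Values of I well above its (slightly negative) null expectation indicate positive spatial autocorrelation and motivate the spatial regression specifications the review discusses.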
On Block Updating in Markov Random Field Models For . . .
 SCANDINAVIAN JOURNAL OF STATISTICS
, 2002
"... Gaussian Markov random field (GMRF) models are commonlyufz to model spatial correlation in disease mapping applications. For Bayesian inference by MCMC, so far mainly singlesiteuinglealgorithms have been considered. However, convergence and mixing properties ofsuD algorithms can be extremely ..."
Abstract

Cited by 51 (7 self)
 Add to MetaCart
Gaussian Markov random field (GMRF) models are commonly used to model spatial correlation in disease mapping applications. For Bayesian inference by MCMC, so far mainly single-site updating algorithms have been considered. However, convergence and mixing properties of such algorithms can be extremely poor due to strong dependencies of parameters in the posterior distribution. In this paper, we propose various block sampling algorithms in order to improve the MCMC performance. The methodology is rather general, allows for non-standard full conditionals, and can be applied in a modular fashion in a large number of different scenarios. For illustration we consider three different applications: two formulations for spatial modelling of a single disease (with and without additional unstructured parameters respectively), and one formulation for the joint analysis of two diseases. The results indicate that the largest benefits are obtained if parameters and the corresponding hyperparameter are updated jointly in one large block. Implementation of such block algorithms is relatively easy using methods for fast sampling of Gaussian Markov random fields (Rue 2001). By comparison, Monte Carlo estimates based on single-site updating can be rather misleading, even for very long runs. Our results may have wider relevance for efficient MCMC simulation in hierarchical models with Markov random field components.
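The fast-sampling method the abstract attributes to Rue (2001) exploits the fact that a GMRF is specified through a sparse precision matrix Q: one factorises Q (rather than the covariance) and back-solves to draw a joint sample, which is what makes large block updates cheap. A hedged numpy sketch under a toy dense precision matrix (not the authors' code, and omitting the sparse-matrix machinery that makes the method scale):

```python
import numpy as np

def sample_gmrf(mu, Q, rng):
    """Draw x ~ N(mu, Q^{-1}) via a Cholesky factor of the precision Q.

    Q = L L^T; solving L^T v = z with z ~ N(0, I) gives Cov(v) = Q^{-1}.
    """
    L = np.linalg.cholesky(Q)
    z = rng.standard_normal(len(mu))
    v = np.linalg.solve(L.T, z)
    return mu + v

# Toy tridiagonal precision (a tiny GMRF on a chain of 3 sites).
Q = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
mu = np.zeros(3)
rng = np.random.default_rng(0)
X = np.array([sample_gmrf(mu, Q, rng) for _ in range(20000)])
```

In a block sampler, the same factor-and-solve step produces the joint proposal for all field values (and, in the paper's best-performing variant, the hyperparameter alongside them) in one move.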
Under the hood: issues in the specification and interpretation of spatial regression models
 Agricultural Economics
, 2002
"... This paper reviews a number of conceptual issues pertaining to the implementation of an explicit “spatial ” perspective in applied econometrics. It provides an overview of the motivation for including spatial effects in regression models, both from a theorydriven as well as from a datadriven persp ..."
Abstract

Cited by 47 (1 self)
 Add to MetaCart
This paper reviews a number of conceptual issues pertaining to the implementation of an explicit “spatial” perspective in applied econometrics. It provides an overview of the motivation for including spatial effects in regression models, both from a theory-driven and from a data-driven perspective. Considerable attention is paid to the inferential framework necessary to carry out estimation and testing, and to the different assumptions, constraints and implications embedded in the various specifications available in the literature. The review combines insights from the traditional spatial econometrics literature as well as from geostatistics, biostatistics and medical image analysis.
Modelling Risk from a Disease in Time and Space
 Statistics in Medicine 17
, 1997
"... This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time and space varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a ten ..."
Abstract

Cited by 32 (8 self)
 Add to MetaCart
This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time- and space-varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a tentative reanalysis of Ohio lung cancer data 1968–88. Two approaches that adjust for unmeasured spatial covariates, particularly tobacco consumption, are described. The first includes random effects in the model to account for unobserved heterogeneity; the second adds a simple urbanization measure as a surrogate for smoking behaviour. The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility in the southwest of the state may have caused increased levels of lung cancer there. However, we contend here that the data are inadequate for a proper investigation of this issue.
Modelling spatially correlated data via mixtures: a Bayesian approach
 Journal of the Royal Statistical Society, Series B
, 2002
"... This paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our ..."
Abstract

Cited by 32 (2 self)
 Add to MetaCart
This paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson-distributed data, and applications in disease mapping. We work in a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially dependent weights, based on transformations of autoregressive Gaussian processes: in one (the logistic normal model), the mixture component labels are exchangeable; in the other (the grouped continuous model), they are ordered. Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performance of both of these formulations is examined on synthetic data and real data on mortality from a rare disease.
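The logistic normal construction described above maps unconstrained Gaussian fields onto valid mixture weights (positive, summing to one) at every location. A minimal sketch of that transformation follows; the crude random-walk fields stand in for the paper's autoregressive Gaussian processes, and all names are invented for illustration:

```python
import numpy as np

def logistic_normal_weights(Z):
    """Map k-1 Gaussian fields at n sites to n sets of k mixture weights.

    Additive logistic (softmax) transform with the k-th component as the
    zero reference, so each row of the result lies on the simplex.
    """
    n = Z.shape[0]
    E = np.exp(np.column_stack([Z, np.zeros(n)]))
    return E / E.sum(axis=1, keepdims=True)

# Crudely spatially smooth Gaussian fields on a 1-D chain of 10 sites
# (cumulative sums of noise as a stand-in for an autoregressive process).
rng = np.random.default_rng(1)
Z = np.cumsum(rng.standard_normal((10, 2)), axis=0)
w = logistic_normal_weights(Z)   # (10, 3): 3 mixture weights per site
```

Because nearby sites have similar underlying Gaussian values, their weight vectors are similar too, which is exactly the spatial smoothing of the mixture that the model is after.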
MCMC Methods for Computing Bayes Factors: A Comparative Review
 Journal of the American Statistical Association
, 2000
"... this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint ..."
Abstract

Cited by 30 (1 self)
 Add to MetaCart
In this paper we review several of these methods, and subsequently compare them in the context of two examples: the first a simple regression example, and the second a much more challenging hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less in the way of additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in relatively few blocks. We caution, however, that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
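The Bayes factor that all of these MCMC methods target is a ratio of marginal likelihoods. The idea is easiest to see in a conjugate toy case where the integral is analytic and no MCMC is needed; the sketch below (function names invented, not from the paper) compares a Beta(1,1) prior on a Bernoulli success probability against a fixed null of 0.5:

```python
from math import lgamma, log, exp

def log_marginal_beta_binomial(k, n, a=1.0, b=1.0):
    """log p(observed sequence with k successes in n trials | M1),
    with theta ~ Beta(a, b) integrated out analytically."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b))

def bayes_factor_10(k, n):
    """Bayes factor of M1 (theta ~ Beta(1,1)) against M0 (theta = 0.5).

    Both marginals are for the observed sequence; the binomial
    coefficient is common to both models and cancels.
    """
    return exp(log_marginal_beta_binomial(k, n) - n * log(0.5))
```

For 5 successes in 10 trials the factor is 1024/2772 ≈ 0.37, mildly favouring the null, while 10 successes in 10 trials favour the free-parameter model. The MCMC methods the paper reviews are needed precisely when this integral has no closed form.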
Hierarchical Bayesian Space-Time Models
 Environmental and Ecological Statistics
, 1998
"... Spacetime data are ubiquitous in the environmental sciences. Often, as is the case with atmospheric and oceanographic processes, these data contain many different scales of spatial and temporal variability. Such data are often nonstationary in space and time and may involve many observation/predic ..."
Abstract

Cited by 27 (6 self)
 Add to MetaCart
Space-time data are ubiquitous in the environmental sciences. Often, as is the case with atmospheric and oceanographic processes, these data contain many different scales of spatial and temporal variability. Such data are often nonstationary in space and time and may involve many observation/prediction locations. These factors can limit the effectiveness of traditional space-time statistical models and methods. In this article, we propose the use of hierarchical space-time models to achieve more flexible models and methods for the analysis of environmental data distributed in space and time. The first stage of the hierarchical model specifies a measurement-error process for the observational data in terms of some "state" process. The second stage allows for site-specific time series models for this state variable. This stage includes large-scale (e.g., seasonal) variability plus a space-time dynamic process for the "anomalies". Much of our interest is with this anomaly process. In the...
Bayesian Modelling of Inseparable Space-Time Variation in Disease Risk
, 1998
"... This paper proposes a unified framework for the analysis of incidence or mortality data in space and time. The problem with such analysis is that the number of cases and the corresponding population at risk in any single unit of space \Theta time are too small to produce a reliable estimate of the u ..."
Abstract

Cited by 21 (2 self)
 Add to MetaCart
This paper proposes a unified framework for the analysis of incidence or mortality data in space and time. The problem with such analysis is that the number of cases and the corresponding population at risk in any single unit of space × time are too small to produce a reliable estimate of the underlying disease risk without "borrowing strength" from neighbouring cells. The goal here could be described as one of smoothing, in which both spatial and nonspatial considerations may arise, and spatio-temporal interactions may become an important feature. Based on an extended version of the main effects model proposed in Knorr-Held and Besag (1998), four generic types of space × time interactions are introduced. Each type implies a certain degree of prior (in)dependence for interaction parameters, and corresponds to the product of one of the two spatial main effects with one of the two temporal main effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by an analysis of Ohio lung cancer data 1968–88. We compare the fit and the complexity of each model by the DIC criterion, recently proposed in Spiegelhalter et al. (1998).
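One common reading of the four interaction types expresses each interaction structure (precision up to a scale factor) as a Kronecker product of a spatial and a temporal main-effect structure, with the identity standing in for the unstructured main effect. The sketch below is an illustration under that reading, using a first-order random-walk structure for time and a path-graph structure for space; both toy graphs and all names are invented here, not taken from the paper:

```python
import numpy as np

def rw1_structure(T):
    """Structure matrix of a first-order random walk on T points."""
    D = np.diff(np.eye(T), axis=0)   # (T-1, T) first-difference matrix
    return D.T @ D

def interaction_structure(K_space, K_time, kind):
    """Structure matrix for the four generic space x time interaction types:
    a Kronecker product of spatial and temporal main-effect structures."""
    S, T = K_space.shape[0], K_time.shape[0]
    if kind == 1:
        return np.eye(S * T)                 # both unstructured: iid interaction
    if kind == 2:
        return np.kron(np.eye(S), K_time)    # temporal structure only
    if kind == 3:
        return np.kron(K_space, np.eye(T))   # spatial structure only
    if kind == 4:
        return np.kron(K_space, K_time)      # both structured
    raise ValueError(kind)

# Toy graphs: 3 regions on a path (stand-in for a spatial adjacency
# structure) and 4 time points under an RW1 prior.
K_space = rw1_structure(3)
K_time = rw1_structure(4)
Q4 = interaction_structure(K_space, K_time, kind=4)
```

The rank deficiency multiplies under the Kronecker product (rank 2 x rank 3 = 6 for the 12 x 12 type-IV structure here), which is why the fully structured type needs the most identifiability constraints in practice.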
Markov random field models for high-dimensional parameters in simulations of fluid flow in porous media
 Technometrics
, 2002
"... We give an approach for using flow information from a system of wells to characterize hydrologic properties of an aquifer. In particular, we consider experiments where an impulse of tracer fluid is injected along with the water at the input wells and its concentration is recorded over time at the up ..."
Abstract

Cited by 21 (8 self)
 Add to MetaCart
We give an approach for using flow information from a system of wells to characterize hydrologic properties of an aquifer. In particular, we consider experiments where an impulse of tracer fluid is injected along with the water at the input wells and its concentration is recorded over time at the uptake wells. We focus on characterizing the spatially varying permeability field, which is a key attribute of the aquifer for determining flow paths and rates for a given flow experiment. As is standard for estimation from such flow data, we make use of a complicated subsurface flow code which simulates the fluid flow through the aquifer for a particular well configuration and aquifer specification, which includes the permeability field over a grid. This ill-posed problem requires that some regularity be imposed on the permeability field. Typically this is accomplished by specifying a stationary Gaussian process model for the permeability field. Here we use an intrinsically stationary Markov random field, which compares favorably and offers some additional flexibility and computational advantages. Our interest in quantifying uncertainty leads us to take a Bayesian approach, using Markov chain Monte Carlo for exploring the high-dimensional posterior distribution. We demonstrate our approach with several examples.
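The structure of such an MCMC-based inversion can be sketched in miniature: a likelihood built from a forward model, an intrinsic pairwise-difference MRF log-prior that regularises the field, and random-walk Metropolis updates of the field values. The sketch below substitutes a trivial linear map for the subsurface flow simulator and uses invented names throughout; it illustrates the pattern, not the authors' implementation:

```python
import numpy as np

def log_posterior(x, A, d, pairs, beta, sigma=0.1):
    """Gaussian likelihood for a linear stand-in forward model A @ x,
    plus an intrinsic pairwise-difference MRF log-prior (up to constants)."""
    resid = d - A @ x
    log_lik = -0.5 * (resid @ resid) / sigma ** 2
    log_prior = -beta * sum((x[i] - x[j]) ** 2 for i, j in pairs)
    return log_lik + log_prior

def metropolis_step(x, log_post, rng, step=0.3):
    """One random-walk Metropolis update of a single randomly chosen site."""
    i = rng.integers(len(x))
    prop = x.copy()
    prop[i] += step * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        return prop
    return x

# Toy inversion: recover a 4-site field observed through the identity map.
rng = np.random.default_rng(2)
pairs = [(0, 1), (1, 2), (2, 3)]          # neighbour pairs on a chain
A = np.eye(4)
truth = np.array([0.0, 0.5, 1.0, 1.5])
lp = lambda x: log_posterior(x, A, truth, pairs, beta=1.0)

x = np.zeros(4)
draws = []
for t in range(20000):
    x = metropolis_step(x, lp, rng)
    if t >= 10000:                        # discard burn-in
        draws.append(x)
post_mean = np.mean(draws, axis=0)
```

In the paper's setting each posterior evaluation costs a full flow simulation, which is what makes the choice of prior and proposal so consequential.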
A Bayesian model to forecast new product performance in domestic and international markets
 Marketing Science, 115–136
, 1999
"... This paper attempts to shed light on the following research questions: When a firm introduces a new product (or service) how can it effectively use the different information sources available to generate reliable new product performance forecasts? How can the firm account for varying information ava ..."
Abstract

Cited by 19 (0 self)
 Add to MetaCart
This paper attempts to shed light on the following research questions: When a firm introduces a new product (or service), how can it effectively use the different information sources available to generate reliable new product performance forecasts? How can the firm account for varying information availability at different stages of the new product launch and generate forecasts at each stage? We address these questions in the context of the sequential launches of motion pictures in international markets. Players in the motion picture industry require forecasts at different stages of the movie launch process to aid decision-making, and the information sets available to generate such forecasts vary at different stages. Despite the importance of such forecasts, the industry struggles to understand and predict