Results 1–10 of 17
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 113 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Model Selection and Accounting for Model Uncertainty in Linear Regression Models
, 1993
Abstract

Cited by 47 (6 self)
We consider the problems of variable selection and accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. The complete Bayesian solution to this problem involves averaging over all possible models when making inferences about quantities of interest. This approach is often not practical. In this paper we offer two alternative approaches. First we describe a Bayesian model selection algorithm called "Occam's Window" which involves averaging over a reduced set of models. Second, we describe a Markov chain Monte Carlo approach which directly approximates the exact solution. Both these model averaging procedures provide better predictive performance than any single model which might reasonably have been selected. In the extreme case where there are many candidate predictors but there is no relationship between any of them and the response, standard variable selection procedures often choose some subset of variables that yields a high R² and a highly significant overall F value. We refer to this unfortunate phenomenon as "Freedman's Paradox" (Freedman, 1983). In this situation, Occam's Window usually indicates the null model as the only one to be considered, or else a small number of models including the null model, thus largely resolving the paradox.
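The averaging idea above is easy to sketch with the BIC approximation to posterior model probabilities. The following Python snippet is illustrative only (it is not the authors' Occam's Window or MCMC implementation, and the data and candidate models are synthetic): it enumerates all predictor subsets and converts BIC values into approximate posterior model weights.

```python
# A minimal sketch of Bayesian model averaging for linear regression,
# using the BIC approximation to posterior model probabilities.
# Synthetic data; not the paper's implementation.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only predictor 0 matters

def bic(subset):
    """BIC of the least-squares fit using the given predictor columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

# Enumerate all 2^p predictor subsets and turn BICs into weights.
models = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
bics = np.array([bic(m) for m in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                             # approximate P(model | data)
best = models[int(np.argmax(w))]
```

With many predictors, exhaustive enumeration of the 2^p subsets is infeasible, which is precisely what motivates the paper's two approximations.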
Spatstat: An R package for analyzing spatial point patterns
 Journal of Statistical Software
, 2005
Abstract

Cited by 44 (2 self)
spatstat is a package for analyzing spatial point pattern data. Its functionality includes exploratory data analysis, model-fitting, and simulation. It is designed to handle realistic datasets, including inhomogeneous point patterns, spatial sampling regions of arbitrary shape, extra covariate data, and ‘marks’ attached to the points of the point pattern. A unique feature of spatstat is its generic algorithm for fitting point process models to point pattern data. The interface to this algorithm is a function ppm that is strongly analogous to lm and glm. This paper is a general description of spatstat and an introduction for new users.
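spatstat and its ppm function are R-language tools; as a language-neutral illustration of the simplest model ppm handles, the Python sketch below (synthetic data, not part of the package) simulates a homogeneous Poisson point pattern in a rectangular window W and computes the closed-form maximum-likelihood intensity estimate, lambda-hat = n / |W|.

```python
# Illustrative sketch: simulate a homogeneous Poisson point pattern in a
# rectangular window and recover the intensity by maximum likelihood.
# spatstat's ppm fits far more general (e.g. inhomogeneous) models.
import numpy as np

rng = np.random.default_rng(1)
true_lambda, width, height = 50.0, 2.0, 1.0
area = width * height                    # |W|

# Number of points is Poisson(lambda * |W|); locations are uniform on W.
n = rng.poisson(true_lambda * area)
points = rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

lambda_hat = n / area                    # MLE of the intensity
```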
Intelligent Data Analysis: Issues and Opportunities
, 1998
Abstract

Cited by 17 (0 self)
Data analysis is an interdisciplinary science. Traditionally its development has been driven by the areas of application, but nowadays its development is also stimulated by the ever-changing possibilities promised by progress in computer technology. Huge data sets and non-numerical data, such as text data, image data, and metadata, present both challenges and opportunities for modern data analysts. These in turn lead to new types of problems and require the development of new types of models. Intelligent data analysis also requires that one take proper advantage of the largely complementary abilities of humans and computers. Interactive graphics, an important tool for modern intelligent data analysis, nicely illustrates this: the production of such graphics, and the ability to manipulate them in real time, requires advanced computational facilities; but the ability to interpret them requires a capacity for synthesis possessed only by the human eye and mind. Intelligent data analysis ...
Modelling spatial point patterns in R
 Case Studies in Spatial Point Pattern Modelling. Lecture Notes in Statistics 185, 23–74
, 2006
Abstract

Cited by 10 (3 self)
Summary. We describe practical techniques for fitting stochastic models to spatial point pattern data in the statistical package R. The techniques have been implemented in our package spatstat in R. They are demonstrated on two example datasets.
Data Mining At The Interface Of Computer Science And Statistics
, 2001
Abstract

Cited by 7 (0 self)
This chapter is written for computer scientists, engineers, mathematicians, and scientists who wish to gain a better understanding of the role of statistical thinking in modern data mining. Data mining has attracted considerable attention both in the research and commercial arenas in recent years, involving the application of a variety of techniques from both computer science and statistics. The chapter discusses how computer scientists and statisticians approach data from different but complementary viewpoints and highlights the fundamental differences between statistical and computational views of data mining. In doing so we review the historical importance of statistical contributions to machine learning and data mining, including neural networks, graphical models, and flexible predictive modeling. The primary conclusion is that closer integration of computational methods with statistical thinking is likely to become increasingly important in data mining applications. Keywords: Data mining, statistics, pattern recognition, transaction data, correlation.
A hierarchical statistical modeling approach for the unsupervised 3D reconstruction of the scoliotic spine
 in Proc. 10th IEEE Int. Conf. Image Processing
, 2003
Abstract

Cited by 4 (2 self)
In this paper, we propose a new and accurate 3D reconstruction technique for the scoliotic spine from a pair of planar conventional radiographic images (posteroanterior and lateral). The proposed model uses a priori hierarchical global knowledge, both on the geometric structure of the whole spine and of each vertebra. More precisely, it relies on the specification of two 3D templates. The first, a rough geometric template on which rigid admissible deformations are defined, is used to ensure a crude registration of the whole spine. The 3D reconstruction is then refined for each vertebra by a template on which nonlinear admissible global deformations are modeled, with statistical modal analysis of the pathological deformations observed in a representative scoliotic vertebra population. This unsupervised coarse-to-fine 3D reconstruction procedure is stated as a double energy-function minimization problem, efficiently solved with a stochastic optimization algorithm. The proposed method, tested on several pairs of biplanar radiographic images with scoliotic deformities, is comparable in terms of accuracy with the classical CT-scan technique while being unsupervised and requiring a lower amount of radiation for the patient.
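The stochastic energy-minimization step can be sketched generically. The Python snippet below is a plain simulated-annealing loop on a hypothetical two-parameter toy energy; the paper's templates, deformation parameters, and actual double energy function are not reproduced here.

```python
# Generic sketch of stochastic energy minimization via simulated
# annealing on a toy quadratic energy (minimum at (1, -2)).
# Purely illustrative; not the paper's energy or optimizer.
import numpy as np

rng = np.random.default_rng(3)
energy = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2

z = np.zeros(2)                          # initial configuration
e = energy(z)
temp = 1.0
for step in range(2000):
    zp = z + 0.1 * rng.normal(size=2)    # random perturbation
    ep = energy(zp)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if ep < e or rng.random() < np.exp((e - ep) / temp):
        z, e = zp, ep
    temp *= 0.995                        # geometric cooling schedule
```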
Vertical Restraints and Technology Transfer: Interfirm Agreements in Eastern Europe's Car Component Industry
, 2000
Abstract

Cited by 2 (1 self)
We test the relationship between exclusive agreements and technology transfer among firms in the automotive supply industry in EU candidate countries. Exclusive agreements come in bundles, are reciprocal, and are passed on up- or downstream. The type of exclusivity employed by a firm depends on its position in the supply chain. Downstream firms are more likely to be subject to and/or impose vertical restraints. Technology trickles upstream: multinational final assemblers transfer a lot of technology; lower-tier suppliers transfer less. Technology transfer is negatively related to the exclusive agreements that should protect it, suggesting a certain incidence of anti- rather than pro-competitive motives. Complementary case studies reveal three possible motives for vertical restraints: owners of technology protect their intellectual property; recipients of technology protect investments in relation-specific assets; and either or both engage in attempts to increase market power. This has implications for competition policy in an enlarging Europe. [Keywords: vertical restraints, technology transfer, automotive supply networks, competition policy] Journal of Economic Literature classification: F23; L42; L62. * Corresponding author: Department of International Economics and Management, Copenhagen Business School, Howitzvej 60, DK-2000 Frederiksberg, Fax: +45 38 15 25 00. Fritz Laux, Gustavo Merino, and Guillermo Musik of Instituto Tecnológico Autónomo de México provided helpful comments, as did Klaus Meyer and Niels Mygind, both at the Centre for East European Studies. Participants in seminars at CBS, at the Centre for Industrial Economics in Copenhagen, at AIB Phoenix, and at EIBA Maastricht offered valuable suggestions. Marek Jakoby, Tiiu Paas, Marcin Peterlik, Matija Rojec, Miklo...
Accurate directional inference for vector parameters
, 2013
Abstract

Cited by 2 (1 self)
We consider a vector-valued parameter of interest in the presence of a finite-dimensional nuisance parameter, based on higher-order asymptotic theory for likelihood inference. We propose a directional test for the vector parameter of interest that is computed using one-dimensional integration. For discrete responses this extends the development of Davison et al. (2006), and several examples below concern testing hypotheses in contingency tables. For continuous responses the work extends the directional test of Cheah et al. (1994). Exponential family examples and simulations illustrate the high accuracy of the method, which we compare with an adjusted likelihood ratio test of Skovgaard (2001). In a high-dimensional covariance selection example the approach works essentially perfectly, whereas its competitors fail catastrophically.
On a directionally adjusted Metropolis-Hastings algorithm
, 2008
Abstract

Cited by 1 (0 self)
We propose a new Metropolis-Hastings algorithm for sampling from smooth, unimodal distributions; a restriction of the method is that the target must be optimizable. The method can be viewed as a mixture of two types of MCMC algorithm; specifically, we seek to combine the versatility of the random walk Metropolis and the efficiency of the independence sampler as found with various types of target distribution. This is achieved through a directional argument that allows us to adjust the thickness of the tails of the proposal density from one iteration to another. We discuss the relationship between the acceptance rate of the algorithm and its efficiency. We finally apply the method to a regression example concerning the cost of construction of nuclear power plants, and compare its performance to the random walk Metropolis algorithm with a Gaussian proposal.
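A minimal sketch of the mixture idea, assuming a fixed 50/50 mixture of a local Gaussian random-walk step and a heavier-tailed independence proposal centred at the target's mode (the paper's directional tail-adjustment rule is not reproduced, and all tuning constants are illustrative). Note that for validity the acceptance ratio must use the full mixture proposal density, not just the component that generated the draw:

```python
# Metropolis-Hastings with a 50/50 mixture proposal: a local Gaussian
# random walk plus a wide independence proposal at the mode.
# Target: standard normal (unnormalised). Illustrative constants only.
import numpy as np

rng = np.random.default_rng(2)
log_target = lambda x: -0.5 * x * x
mode, rw_sd, ind_scale = 0.0, 1.0, 3.0

def log_q(xp, x):
    """Log density of the mixture proposal at xp, given current state x."""
    rw = np.exp(-0.5 * ((xp - x) / rw_sd) ** 2) / (rw_sd * np.sqrt(2 * np.pi))
    ind = np.exp(-0.5 * ((xp - mode) / ind_scale) ** 2) / (ind_scale * np.sqrt(2 * np.pi))
    return np.log(0.5 * rw + 0.5 * ind)

x, chain, accepts = 0.0, [], 0
for _ in range(5000):
    if rng.random() < 0.5:               # random-walk component
        xp = x + rw_sd * rng.normal()
    else:                                # independence component
        xp = mode + ind_scale * rng.normal()
    log_alpha = (log_target(xp) + log_q(x, xp)) - (log_target(x) + log_q(xp, x))
    if np.log(rng.random()) < log_alpha:
        x, accepts = xp, accepts + 1
    chain.append(x)

chain = np.array(chain)
```

The wide independence component plays the role of the "thick-tailed" proposal: it can jump across the support in one move, while the random-walk component handles local exploration.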