Results 11–20 of 174
The value-added model
, 1986
Abstract

Cited by 41 (6 self)
Service systems fail all too frequently. 'Overdue, over budget and disappointing' are the words often used by organisations to describe their experience in the development and commissioning of complex information-systems-enabled services. More considered analyses question anticipated productivity gains and, in the longer term, a failure of service provision to track the changing requirements of the organisation. As a major supplier of IT and IT-enabled services, Hewlett-Packard has invested heavily in developing an understanding of the reasons that services fail to delight, as well as developing technologies and management processes that mitigate failure. This paper describes a (predictive) model-based approach to service-systems analysis that aids in understanding the goals, the specifications and the dynamics of a service system. Our contribution is a model-based service discovery process and technology that can be used to dramatically improve inter-stakeholder communications, provide a design and management infrastructure that is robust to the inevitable changes that affect any commissioning organisation, and lay the grounds for more sophisticated cost-benefit analyses than are currently commonly used. We draw on a number of large-scale (multi-billion dollar) service projects to illustrate the application and benefits of this approach to service discovery and management.
Shotgun stochastic search for “large p” regression
 Journal of the American Statistical Association
, 2007
Abstract

Cited by 36 (3 self)
Model search in regression with very large numbers of candidate predictors raises challenges for both model specification and computation, and standard approaches such as Markov chain Monte Carlo (MCMC) and stepwise methods are often infeasible or ineffective. We describe a novel shotgun stochastic search (SSS) approach that explores “interesting” regions of the resulting very high-dimensional model spaces to quickly identify regions of high posterior probability over models. We describe algorithmic and modeling aspects, priors over the model space that induce sparsity and parsimony over and above the traditional dimension penalization implicit in Bayesian and likelihood analyses, and parallel computation using cluster computers. We discuss an example from gene expression cancer genomics, comparisons with MCMC and other methods, and theoretical and simulation-based aspects of performance characteristics in large-scale regression model search. We also provide software implementing the methods.
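The neighborhood-exploration idea in this abstract can be sketched in a few lines of Python. This is a hedged illustration, not the authors' SSS implementation: it scores each add/delete neighbor of the current model with BIC (a cheap stand-in for the marginal likelihoods the paper uses), jumps stochastically toward high-scoring neighbors, and remembers the best model visited. The names `bic_score` and `sss` are chosen here for illustration only.

```python
import math
import random

import numpy as np

def bic_score(X, y, subset):
    """BIC of an OLS fit on the given columns (smaller is better); used
    here as a cheap proxy for a model's marginal likelihood."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    return n * math.log(rss / n) + Xs.shape[1] * math.log(n)

def sss(X, y, iters=100, seed=0):
    """Shotgun-style stochastic search: score every add/delete neighbor
    of the current model, jump to one with probability proportional to
    exp(-BIC/2), and track the best model evaluated so far."""
    rng = random.Random(seed)
    p = X.shape[1]
    current = frozenset()
    best, best_score = current, bic_score(X, y, current)
    for _ in range(iters):
        neighbors = [current | {j} for j in range(p) if j not in current]
        neighbors += [current - {j} for j in current]
        scores = [bic_score(X, y, m) for m in neighbors]
        low = min(scores)
        weights = [math.exp(-(s - low) / 2) for s in scores]
        current = rng.choices(neighbors, weights=weights)[0]
        for m, s in zip(neighbors, scores):
            if s < best_score:
                best, best_score = m, s
    return sorted(best), best_score
```

On simulated data with a strong signal in a couple of columns, a search like this typically locks onto those columns within a handful of iterations; the real SSS additionally exploits swap moves and parallel evaluation.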
Variable selection and Bayesian model averaging in case-control studies
, 1998
Abstract

Cited by 35 (9 self)
Covariate and confounder selection in case-control studies is most commonly carried out using either a two-step method or a stepwise variable selection method in logistic regression. Inference is then carried out conditionally on the selected model, but this ignores the model uncertainty implicit in the variable selection process, and so underestimates uncertainty about relative risks. We report on a simulation study designed to be similar to actual case-control studies. This shows that p-values computed after variable selection can greatly overstate the strength of conclusions. For example, for our simulated case-control studies with 1,000 subjects, of variables declared to be "significant" with p-values between .01 and .05, only 49% actually were risk factors when stepwise variable selection was used. We propose Bayesian model averaging as a formal way of taking account of model uncertainty in case-control studies. This yields an easily interpreted summary, the posterior probability that a variable is a risk factor, and our simulation study indicates this to be reasonably well calibrated in the situations simulated. The methods are applied and compared ...
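For small p, the posterior inclusion probabilities this abstract describes can be approximated by enumerating all variable subsets and weighting each model by exp(-BIC/2), a standard approximation to a model's marginal likelihood. A minimal sketch, assuming linear rather than logistic regression for brevity (the paper's setting is logistic); `bma_inclusion_probs` is an illustrative name, not code from the paper.

```python
import itertools
import math

import numpy as np

def bma_inclusion_probs(X, y):
    """Enumerate all 2^p variable subsets, weight each model by
    exp(-BIC/2), and return each variable's posterior probability of
    being in the model (its inclusion probability)."""
    n, p = X.shape
    bics, members = [], []
    for r in range(p + 1):
        for subset in itertools.combinations(range(p), r):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(np.sum((y - Xs @ beta) ** 2))
            bics.append(n * math.log(rss / n) + Xs.shape[1] * math.log(n))
            members.append(set(subset))
    b0 = min(bics)                      # stabilise the exponentials
    weights = [math.exp(-(b - b0) / 2) for b in bics]
    total = sum(weights)
    return [sum(w for w, m in zip(weights, members) if j in m) / total
            for j in range(p)]
```

The returned list is the "easily interpreted summary" the abstract mentions: one probability per candidate variable, averaged over all models rather than conditional on a single selected one.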
Static Detection Of Deadlocks In Polynomial Time
, 1993
Abstract

Cited by 30 (1 self)
Parallel and distributed programming languages often include explicit synchronization primitives, such as rendezvous and semaphores. Such programs are subject to synchronization anomalies; the program behaves incorrectly because it has a faulty synchronization structure. A deadlock is an anomaly in which some subset of the active tasks of the program mutually wait on each other to advance; thus, the program cannot complete execution. In static anomaly detection, the source code of a program is automatically analyzed to determine if the program can ever exhibit a specific anomaly. Static anomaly detection has the unique advantage that it can certify programs to be free of the tested anomaly; dynamic testing cannot generally do this. Though exact static detection of deadlocks is NP-hard [Tay83a], many researchers have tried to detect deadlock by ...
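The deadlock notion used in this abstract (a set of tasks mutually waiting on each other) is often checked on a wait-for graph, where such a mutual wait is exactly a cycle. A minimal sketch of that check in Python; this is ordinary depth-first cycle detection on a given graph, not the paper's polynomial-time static analysis of source code.

```python
def has_deadlock(wait_for):
    """Return True if the wait-for graph {task: tasks it waits on}
    contains a cycle, i.e. a set of tasks that mutually wait on each
    other and can never advance."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {}

    def visit(task):
        color[task] = GRAY
        for succ in wait_for.get(task, ()):
            c = color.get(succ, WHITE)
            if c == GRAY:                 # back edge: cycle found
                return True
            if c == WHITE and visit(succ):
                return True
        color[task] = BLACK
        return False

    tasks = set(wait_for) | {t for succs in wait_for.values() for t in succs}
    return any(color.get(t, WHITE) == WHITE and visit(t) for t in tasks)
```

For example, `{"A": ["B"], "B": ["C"], "C": ["A"]}` deadlocks, while dropping the C-to-A edge makes the graph acyclic. The hard part the paper addresses is different: deciding statically which wait-for graphs a program can ever reach.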
Methods and criteria for model selection
 Journal of the American Statistical Association
Abstract

Cited by 29 (0 self)
Model selection is an important part of any statistical analysis, and indeed is central to the pursuit of science in general. Many authors have examined this question, from both frequentist and Bayesian perspectives, and many tools for selecting the “best model” have been suggested in the literature. This paper evaluates the various proposals from a decision-theoretic perspective, as a way of bringing coherence to a complex and central question in the field.
Bayesian Adaptive Sampling for Variable Selection and Model Averaging
Abstract

Cited by 24 (5 self)
For the problem of model choice in linear regression, we introduce a Bayesian adaptive sampling algorithm (BAS) that samples models without replacement from the space of models. For problems that permit enumeration of all models, BAS is guaranteed to enumerate the model space in 2^p iterations, where p is the number of potential variables under consideration. For larger problems where sampling is required, we provide conditions under which BAS provides perfect samples without replacement. When the sampling probabilities in the algorithm are the marginal variable inclusion probabilities, BAS may be viewed as sampling models “near” the median probability model of Barbieri and Berger. As marginal inclusion probabilities are not known in advance, we discuss several strategies to estimate adaptively the marginal inclusion probabilities within BAS. We illustrate the performance of the algorithm using simulated and real data and show that BAS can outperform Markov chain Monte Carlo methods. The algorithm is implemented in the R package BAS available at CRAN.
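The adaptive idea in this abstract can be illustrated with a toy sampler. This sketch is in the spirit of BAS but is not the package's algorithm: it draws distinct models from independent Bernoulli inclusion probabilities, skips duplicates to obtain sampling without replacement, and re-estimates the inclusion probabilities from BIC weights of the models seen so far (clamped away from 0 and 1 to keep exploring). All names here are illustrative.

```python
import math
import random

import numpy as np

def adaptive_model_sampler(X, y, n_models=50, seed=0):
    """Sample distinct regression models (tuples of 0/1 inclusion flags),
    adaptively re-estimating each variable's inclusion probability from
    BIC weights of the models sampled so far."""
    rng = random.Random(seed)
    n, p = X.shape

    def bic(model):
        cols = [j for j in range(p) if model[j]]
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        return n * math.log(rss / n) + Xs.shape[1] * math.log(n)

    probs = [0.5] * p
    seen = {}
    while len(seen) < min(n_models, 2 ** p):
        model = tuple(int(rng.random() < q) for q in probs)
        if model in seen:                 # without replacement: skip repeats
            continue
        seen[model] = bic(model)
        b0 = min(seen.values())
        ws = {m: math.exp(-(b - b0) / 2) for m, b in seen.items()}
        tot = sum(ws.values())
        # clamp so no variable's inclusion probability hits 0 or 1
        probs = [min(max(sum(w for m, w in ws.items() if m[j]) / tot, 0.2),
                     0.8) for j in range(p)]
    best = min(seen, key=seen.get)
    return [j for j in range(p) if best[j]], probs
```

When 2^p is small enough, the loop simply enumerates the full model space, mirroring the enumeration guarantee described in the abstract; the real BAS achieves without-replacement sampling by construction rather than by rejection.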
Likelihood-based Data Squashing: A Modeling Approach to Instance Construction.
, 2002
Abstract

Cited by 21 (1 self)
Squashing is a lossy data compression technique that preserves statistical information. Specifically, squashing compresses a massive dataset to a much smaller one so that outputs from statistical analyses carried out on the smaller (squashed) dataset reproduce outputs from the same statistical analyses carried out on the original dataset. Likelihood-based data squashing (LDS) differs from a previously published squashing algorithm insofar as it uses a statistical model to squash the data. The results show that LDS provides excellent squashing performance even when the target statistical analysis departs from the model used to squash the data.
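The idea of replacing a massive dataset with a few weighted pseudo-points can be illustrated with a toy squasher. This is a hedged stand-in, not the paper's LDS algorithm: it fits a working linear model, bins observations by fitted value (a crude likelihood-profile grouping), and keeps each bin's mean point with a weight equal to the bin count; `squash` is an illustrative name.

```python
import numpy as np

def squash(X, y, n_bins=20):
    """Compress (X, y) to at most n_bins weighted pseudo-points by
    binning observations on the fitted values of a working linear
    model and averaging within each bin."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    fitted = X1 @ beta
    edges = np.quantile(fitted, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, fitted, side="right") - 1,
                  0, n_bins - 1)
    Xs, ys, ws = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            Xs.append(X[mask].mean(axis=0))   # pseudo-point
            ys.append(y[mask].mean())
            ws.append(int(mask.sum()))        # its weight
    return np.array(Xs), np.array(ys), np.array(ws)
```

A weighted least-squares fit on the squashed points then approximates the full-data fit, which is exactly the reproduction property squashing is meant to preserve; LDS constructs its pseudo-points far more carefully, matching likelihood profiles rather than simple bin means.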
The Geography of Output Volatility
, 2005
Abstract

Cited by 18 (2 self)
This paper examines the structural determinants of output volatility in developing countries, and especially the roles of geography and institutions. We investigate the volatility effects of market access, climate variability, the geographic predisposition to trade, and various measures of institutional quality. We find an especially important role for market access: remote countries are more likely to have undiversified exports and to experience greater volatility in output growth. Our results are based on Bayesian methods that allow us to address formally the problem of model uncertainty and to examine robustness across a wide range of specifications.