Results 1–10 of 110
Making the most of statistical analyses: Improving interpretation and presentation
American Journal of Political Science, 2000
"... Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and express the appropriate degree of certainty about these quantities. ..."
Cited by 164 (18 self)
Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and express the appropriate degree of certainty about these quantities. In this article, we offer an approach, built on the technique of statistical simulation, to extract the currently overlooked information from any statistical method and to interpret and present it in a reader-friendly manner. Using this technique requires some expertise,
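The simulation approach this abstract describes can be sketched in a few lines: draw parameter vectors from the estimated sampling distribution of the coefficients, push each draw through to a substantive quantity of interest, and summarize the draws. The logit coefficients and covariance below are hypothetical, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and covariance from a fitted logit model
# (invented numbers, not taken from the article).
beta_hat = np.array([-1.0, 0.8])               # intercept, slope
vcov = np.array([[0.04, -0.01],
                 [-0.01, 0.02]])

# Step 1: simulate parameter vectors from their estimated sampling
# distribution, multivariate normal(beta_hat, vcov).
draws = rng.multivariate_normal(beta_hat, vcov, size=10_000)

# Step 2: convert every draw into the substantive quantity of interest,
# here Pr(y = 1) for a case with covariate value x = 1.
x = np.array([1.0, 1.0])                       # intercept term plus x = 1
probs = 1.0 / (1.0 + np.exp(-draws @ x))

# Step 3: report a point estimate together with its uncertainty.
point = probs.mean()
lo, hi = np.percentile(probs, [2.5, 97.5])
print(f"Pr(y=1 | x=1) = {point:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The same three steps apply to any estimator that reports coefficients and a covariance matrix, which is what makes the approach generic.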
Analyzing Incomplete Political Science Data: An Alternative Algorithm for Multiple Imputation
American Political Science Review, 2000
"... We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is a superior approach to the problem of missing data scattered through ..."
Cited by 141 (40 self)
We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is a superior approach to the problem of missing data scattered through one's explanatory and dependent variables than the methods currently used in applied data analysis. The reason for this discrepancy lies with the fact that the computational algorithms used to apply the best multiple imputation models have been slow, difficult to implement, impossible to run with existing commercial statistical packages, and demanding of considerable expertise. In this paper, we adapt an existing algorithm, and use it to implement a general-purpose, multiple imputation model for missing data. This algorithm is considerably faster and easier to use than the leading method recommended in the statistics literature. We also quantify the risks of current missing data practices, ...
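Whatever algorithm produces the imputations, the completed-data analyses are combined with Rubin's rules. A minimal sketch, using hypothetical coefficient estimates from m = 5 imputed datasets:

```python
import numpy as np

# Hypothetical estimates of one coefficient from m = 5 imputed datasets,
# each with its estimated squared standard error (invented numbers).
estimates = np.array([0.52, 0.48, 0.55, 0.50, 0.47])
variances = np.array([0.010, 0.012, 0.009, 0.011, 0.010])
m = len(estimates)

# Rubin's rules: the combined point estimate is the mean across
# imputations; the total variance adds the within-imputation variance
# and an inflated between-imputation variance.
q_bar = estimates.mean()                  # combined estimate
w_bar = variances.mean()                  # within-imputation variance
b = estimates.var(ddof=1)                 # between-imputation variance
total_var = w_bar + (1 + 1 / m) * b

print(f"estimate {q_bar:.3f}, std. error {np.sqrt(total_var):.3f}")
```

The between-imputation term is what carries the extra uncertainty due to missingness; listwise deletion and single imputation both omit it.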
Convergence of a stochastic approximation version of the EM algorithm
1997
"... The Expectation Maximization (EM) algorithm is a powerful computational technique for locating maxima of functions... ..."
Cited by 86 (8 self)
The Expectation Maximization (EM) algorithm is a powerful computational technique for locating maxima of functions...
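As a concrete illustration of the plain EM iteration that the paper's stochastic-approximation variant builds on, here is a minimal fit of a two-component Gaussian mixture with unit variances; the data and starting values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated Gaussian clusters with unit variance
# (invented; the paper itself studies a stochastic-approximation variant).
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])   # initial component means
pi = np.array([0.5, 0.5])    # initial mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and means from responsibilities.
    pi = resp.mean(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

print("means:", np.round(np.sort(mu), 2), "weights:", np.round(pi, 2))
```

The stochastic-approximation version replaces the exact E-step with simulated draws of the latent assignments, which matters when the E-step has no closed form.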
Knowledge Discovery from Telecommunication Network Alarm Databases
1996
"... A telecommunication network produces daily large amounts of alarm data. The data contains hidden valuable knowledge about the behavior of the network. This knowledge can be used in filtering redundant alarms, locating problems in the network, and possibly in predicting severe faults. We describe the ..."
Cited by 49 (9 self)
A telecommunication network produces large amounts of alarm data daily. The data contains valuable hidden knowledge about the behavior of the network. This knowledge can be used in filtering redundant alarms, locating problems in the network, and possibly in predicting severe faults. We describe the TASA (Telecommunication Network Alarm Sequence Analyzer) system for discovering and browsing knowledge from large alarm databases. The system is built on the view of knowledge discovery as an interactive and iterative process, containing data collection, pattern discovery, rule postprocessing, etc. The system uses a novel framework for locating frequently occurring episodes in sequential data. The TASA system offers a variety of selection and ordering criteria for episodes, and supports iterative retrieval from the discovered knowledge. This means that a large part of the iterative nature of the KDD process can be replaced by iteration in the rule postprocessing stage. The user i...
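The episode-discovery idea can be illustrated with a much-simplified sketch: slide a time window over the alarm log and count alarm types that co-occur within it. The alarm log, window width, and threshold below are all hypothetical, and real episode mining (as in TASA) handles ordered, multi-event episodes:

```python
from collections import Counter

# Hypothetical alarm log: (time, alarm_type) pairs in time order.
alarms = [(1, "A"), (2, "B"), (3, "A"), (6, "C"), (7, "A"), (8, "B"), (12, "C")]

def frequent_pairs(events, window, min_count):
    """Count unordered alarm-type pairs that co-occur within `window`
    time units; a much-simplified stand-in for episode discovery."""
    counts = Counter()
    for i, (t_i, a_i) in enumerate(events):
        for t_j, a_j in events[i + 1:]:
            if t_j - t_i > window:
                break                     # events are time-ordered
            if a_i != a_j:
                counts[frozenset((a_i, a_j))] += 1
    return {pair: n for pair, n in counts.items() if n >= min_count}

print(frequent_pairs(alarms, window=2, min_count=2))
```

On this toy log, only the pair {A, B} clears the frequency threshold; such frequent co-occurrences are the raw material for alarm-correlation rules.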
Practical maximum pseudolikelihood for spatial point patterns
Australian and New Zealand Journal of Statistics, 2000
"... This paper describes a technique for computing approximate maximum pseudolikelihood estimates of the parameters of a spatial point process. The method is an extension of Berman & Turner’s (1992) device for maximizing the likelihoods of inhomogeneous spatial Poisson processes. For a very wide class o ..."
Cited by 45 (7 self)
This paper describes a technique for computing approximate maximum pseudolikelihood estimates of the parameters of a spatial point process. The method is an extension of Berman & Turner’s (1992) device for maximizing the likelihoods of inhomogeneous spatial Poisson processes. For a very wide class of spatial point process models the likelihood is intractable, while the pseudolikelihood is known explicitly, except for the computation of an integral over the sampling region. Approximation of this integral by a finite sum in a special way yields an approximate pseudolikelihood which is formally equivalent to the (weighted) likelihood of a log-linear model with Poisson responses. This can be maximized using standard statistical software for generalized linear or additive models, provided the conditional intensity of the process takes an 'exponential family' form. Using this approach a wide variety of spatial point process models of Gibbs type can be fitted rapidly, incorporating spatial trends, interaction between points, dependence on spatial covariates, and mark information. Key words: area-interaction process; Berman–Turner device; Dirichlet tessellation; edge effects; generalized additive models; generalized linear models; Gibbs point processes; GLIM; hard core process; inhomogeneous point process; marked point processes; Markov spatial point processes; Ord’s process; pairwise interaction; profile pseudolikelihood; spatial clustering; soft core process; spatial trend; S-PLUS; Strauss process; Widom–Rowlinson model.
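A one-dimensional analogue of the Berman–Turner device can be sketched directly: merge the data points with a dummy grid, attach quadrature weights, and fit the resulting weighted Poisson regression by IRLS. The intensity, grid, and sample sizes below are invented for illustration, and the Poisson case shown is the simplest instance (no interaction terms):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 1-D inhomogeneous Poisson pattern on [0, 1] with intensity
# lambda(x) = exp(b0 + b1*x), by thinning; all values are illustrative.
b0, b1 = 3.0, 1.5
lam_max = np.exp(b0 + b1)
cand = rng.uniform(0, 1, rng.poisson(lam_max))
keep = rng.uniform(0, 1, cand.size) < np.exp(b0 + b1 * cand) / lam_max
data = cand[keep]

# Berman-Turner quadrature: merge the data points with a dummy grid and
# give each quadrature point a share of its grid cell's length.
dummy = np.linspace(0.005, 0.995, 100)
u = np.concatenate([data, dummy])
z = np.concatenate([np.ones(data.size), np.zeros(dummy.size)])
cell = np.floor(u * 100).astype(int)           # 100 cells of length 0.01
counts = np.bincount(cell, minlength=100)
w = 0.01 / counts[cell]                        # quadrature weights
y = z / w                                      # Berman-Turner pseudo-response

# Weighted Poisson GLM with log link, fitted by IRLS; maximizing this
# weighted likelihood approximates the (pseudo)likelihood.
X = np.column_stack([np.ones_like(u), u])
beta = np.array([np.log(max(data.size, 1)), 0.0])  # sensible start
for _ in range(50):
    eta = X @ beta
    mu = np.exp(eta)
    work = eta + (y - mu) / mu                 # working response
    W = w * mu                                 # IRLS weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * work))

print("estimated (b0, b1):", np.round(beta, 2))
```

In practice the same trick is what lets standard GLM software (GLIM, S-PLUS, R's spatstat) fit Gibbs models: the conditional intensity replaces the intensity in the design matrix.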
Physiological Pharmacokinetic Analysis Using Population Modeling and Informative Prior Distributions
Journal of the American Statistical Association, 1996
"... We describe a general approach using Bayesian analysis for the estimation of parameters in physiological pharmacokinetic models. The chief statistical difficulty in estimation with these models is that any physiological model that is even approximately realistic will have a large number of parameter ..."
Cited by 42 (13 self)
We describe a general approach using Bayesian analysis for the estimation of parameters in physiological pharmacokinetic models. The chief statistical difficulty in estimation with these models is that any physiological model that is even approximately realistic will have a large number of parameters, often comparable to the number of observations in a typical pharmacokinetic experiment (for example, 28 measurements and 15 parameters for each subject). In addition, the parameters are generally poorly identified, akin to the well-known ill-conditioned problem of estimating a mixture of declining exponentials. Our modeling includes (1) hierarchical population modeling as in Wakefield (1994), which allows partial pooling of information among different experimental subjects; (2) a pharmacokinetic model including compartments for well-perfused tissues, poorly perfused tissues, fat, and the liver; and (3) informative prior distributions for population parameters, which is possible be Sch...
Inference in long-horizon event studies: A Bayesian approach with an application to initial public offerings
Journal of Finance, 2000
"... Statistical inference in longhorizon event studies has been hampered by the fact that abnormal returns are neither normally distributed nor independent. This study presents a new approach to inference that overcomes these difficulties and dominates other popular testing methods. I illustrate the us ..."
Cited by 39 (3 self)
Statistical inference in long-horizon event studies has been hampered by the fact that abnormal returns are neither normally distributed nor independent. This study presents a new approach to inference that overcomes these difficulties and dominates other popular testing methods. I illustrate the use of the methodology by examining the long-horizon returns of initial public offerings (IPOs). I find that the Fama and French (1993) three-factor model is inconsistent with the observed long-horizon price performance of these IPOs, whereas a characteristic-based model cannot be rejected. Recent empirical studies in finance document systematic long-run abnormal price reactions subsequent to numerous corporate activities. Since these results imply that stock prices react with a long delay to publicly available information, they appear to be at odds with the Efficient Markets Hypothesis (EMH). Long-run event studies, however, are subject to serious statistical difficulties
Robust visual tracking by integrating multiple cues based on co-inference learning
International Journal of Computer Vision, 2004
"... Abstract. Visual tracking can be treated as a parameter estimation problem that infers target states based on image observations from video sequences. A richer target representation would incur better chances of successful tracking in cluttered and dynamic environments, and thus enhance the robustne ..."
Cited by 37 (2 self)
Visual tracking can be treated as a parameter estimation problem that infers target states based on image observations from video sequences. A richer target representation would incur better chances of successful tracking in cluttered and dynamic environments, and thus enhance the robustness. Richer representations can be constructed by either specifying a detailed model of a single cue or combining a set of rough models of multiple cues. Both approaches increase the dimensionality of the state space, which results in a dramatic increase of computation. To investigate the integration of rough models from multiple cues and to explore computationally efficient algorithms, this paper formulates the problem of multiple cue integration and tracking in a probabilistic framework based on a factorized graphical model. Structured variational analysis of such a graphical model factorizes different modalities and suggests a co-inference process among these modalities. Based on the importance sampling technique, a sequential Monte Carlo algorithm is proposed to provide an efficient simulation and approximation of the co-inference of multiple cues. This algorithm runs in real time at around 30 Hz. Our extensive experiments show that the proposed algorithm performs robustly in a large variety of tracking scenarios. The approach presented in this paper has the potential to solve other problems including sensor fusion problems.
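The sequential Monte Carlo idea, with the co-inference collapsed to a joint weighting by two cues' likelihoods, can be sketched as a bootstrap particle filter on a toy 1-D tracking problem (all noise levels are invented; the paper's algorithm instead factorizes the weighting across modalities):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D tracking problem: the latent position follows a random walk
# and two noisy "cues" (stand-ins for e.g. color and shape observations,
# with invented noise levels) each measure it independently.
T, N = 50, 500                                 # time steps, particles
true_x = np.cumsum(rng.normal(0, 0.5, T))
obs_a = true_x + rng.normal(0, 1.0, T)         # cue A, noisier
obs_b = true_x + rng.normal(0, 0.5, T)         # cue B, more precise

particles = rng.normal(0, 1, N)
estimates = np.empty(T)
for t in range(T):
    # Propagate particles through the motion model.
    particles = particles + rng.normal(0, 0.5, N)
    # Importance weights combine both cues' Gaussian likelihoods.
    w = (np.exp(-0.5 * (obs_a[t] - particles) ** 2 / 1.0 ** 2)
         * np.exp(-0.5 * (obs_b[t] - particles) ** 2 / 0.5 ** 2))
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]

rmse = np.sqrt(np.mean((estimates - true_x) ** 2))
print(f"tracking RMSE: {rmse:.3f}")
```

Combining the cues multiplicatively is what makes the joint state space expensive in general; the factorized co-inference in the paper is a way to keep this tractable as the number of modalities grows.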
Empirical Bayes Estimation in Wavelet Nonparametric Regression
"... Bayesian methods based on hierarchical mixture models have demonstrated excellent mean squared error properties in constructing data dependent shrinkage estimators in wavelets, however, subjective elicitation of the hyperparameters is challenging. In this chapter we use an Empirical Bayes approach t ..."
Cited by 34 (5 self)
Bayesian methods based on hierarchical mixture models have demonstrated excellent mean squared error properties in constructing data-dependent shrinkage estimators in wavelets; however, subjective elicitation of the hyperparameters is challenging. In this chapter we use an Empirical Bayes approach to estimate the hyperparameters for each level of the wavelet decomposition, bypassing the usual difficulty of hyperparameter specification in the hierarchical model. The EB approach is computationally competitive with standard methods and offers improved MSE performance over several Bayes and classical estimators in a wide variety of examples.
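For intuition, here is a much-simplified empirical Bayes shrinkage of one level of coefficients, substituting a single Gaussian prior (with its variance estimated from the level's marginal moments) for the chapter's hierarchical mixture prior; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for one level of wavelet coefficients: a sparse signal
# observed with Gaussian noise of known standard deviation sigma.
sigma = 1.0
theta = np.where(rng.uniform(size=256) < 0.1, rng.normal(0, 5, 256), 0.0)
d = theta + rng.normal(0, sigma, 256)

# Empirical Bayes with a N(0, tau^2) prior per level: estimate tau^2
# from the marginal second moment of the observed coefficients, then
# apply the resulting linear (posterior-mean) shrinkage.
tau2_hat = max(np.mean(d ** 2) - sigma ** 2, 0.0)
shrunk = d * tau2_hat / (tau2_hat + sigma ** 2)

mse_raw = np.mean((d - theta) ** 2)
mse_eb = np.mean((shrunk - theta) ** 2)
print(f"raw MSE {mse_raw:.3f} vs EB MSE {mse_eb:.3f}")
```

A mixture prior, as in the chapter, improves on this linear rule for sparse signals because it can shrink the many near-zero coefficients hard while leaving the few large ones nearly untouched.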
Geophysical inversion with a neighbourhood algorithm – I. Searching a parameter space
Geophys. J. Int., 1999
"... etc., are often used to explore a ¢nitedimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all `information ' collected in the search process. Searc ..."
Cited by 33 (8 self)
etc., are often used to explore a finite-dimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all 'information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble, previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new 'resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly nonlinear waveform inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and size of the ensemble. The method is highly parallel, and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform 'error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method. Key words: numerical techniques, receiver functions, waveform inversion.
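The appraisal step (reusing an existing ensemble without any further forward solves) can be caricatured by importance-resampling the ensemble according to its misfits; the quadratic toy misfit below stands in for a real forward problem, and the paper's actual algorithm resamples via the ensemble's Voronoi cells rather than directly:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ensemble from an earlier direct search: 2-parameter model
# vectors and their data misfits, with a misfit minimum near (1, -2).
models = rng.uniform(-5, 5, size=(5000, 2))
misfit = ((models[:, 0] - 1) ** 2 + (models[:, 1] + 2) ** 2) / 0.5

# Appraisal without new forward solves: weight the existing ensemble by
# exp(-misfit) and resample it, then read off posterior-style summaries
# (means, spreads, trade-offs) from the resampled ensemble.
w = np.exp(-misfit)
w /= w.sum()
idx = rng.choice(models.shape[0], size=20_000, p=w)
posterior = models[idx]

mean = posterior.mean(axis=0)
sd = posterior.std(axis=0)
print("posterior mean:", np.round(mean, 2), "sd:", np.round(sd, 2))
```

The key property this shares with the paper's method is that the misfit values are computed once by the search stage and then reused; only cheap bookkeeping happens during appraisal, which is why the approach scales to large ensembles.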