Results 1–10 of 56
On choosing and bounding probability metrics
 Internat. Statist. Rev.
, 2002
Abstract

Cited by 84 (2 self)
Abstract. When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists. Knowledge of other metrics can provide a means of deriving bounds for another one in an applied problem. Considering other metrics can also provide alternate insights. We also give examples that show that rates of convergence can strongly depend on the metric chosen. Careful consideration is necessary when choosing a metric. Abrégé (translated from French). The choice of probability metric is a very important decision when studying the convergence of measures. We provide a summary of several probability metrics/distances commonly used by statisticians and probabilists, along with some new results concerning their bounds. Knowing other metrics can provide a means of deriving bounds for another metric in an applied problem. Taking several metrics into consideration also makes it possible to approach problems in a different way. We likewise show that rates of convergence can depend strongly on the choice of metric. It is therefore important to consider everything when choosing a metric.
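Two of the metrics surveyed in this paper are the total variation and Kolmogorov distances. As a minimal illustration (not the paper's notation; helper names are hypothetical), here is a sketch for discrete distributions on a shared support, exhibiting the standard bound that the Kolmogorov distance never exceeds total variation:

```python
def total_variation(p, q):
    """Total variation distance: half the L1 distance between two pmfs."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kolmogorov(p, q):
    """Kolmogorov distance: sup-norm distance between the two CDFs."""
    cp = cq = 0.0
    d = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        d = max(d, abs(cp - cq))
    return d

# Illustrative distributions on a common 3-point support.
p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
# Kolmogorov distance is bounded above by total variation.
assert kolmogorov(p, q) <= total_variation(p, q) + 1e-12
```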
Toward Simplifying and Accurately Formulating Fragment Assembly
 Journal of Computational Biology
, 1995
Abstract

Cited by 37 (1 self)
The fragment assembly problem is that of reconstructing a DNA sequence from a collection of randomly sampled fragments. Traditionally the objective of this problem has been to produce the shortest string that contains all the fragments as substrings, but in the case of repetitive target sequences this objective produces answers that are overcompressed. In this paper, the problem is reformulated as one of finding a maximum-likelihood reconstruction with respect to the 2-sided Kolmogorov-Smirnov statistic, and it is argued that this is a better formulation of the problem. Next the fragment assembly problem is recast in graph-theoretic terms as one of finding a non-cyclic subgraph with certain properties, and the objectives of being shortest or maximally likely are also recast in this framework. Finally, a series of graph reduction transformations are given that dramatically reduce the size of the graph to be explored in practical instances of the problem. This reduction is ...
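The 2-sided Kolmogorov-Smirnov statistic used in the reformulated objective measures the largest gap between an empirical distribution and a hypothesized CDF. A minimal sketch (`ks_statistic` is an illustrative name, not the paper's code):

```python
def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov statistic D_n = sup_x |F_n(x) - F(x)|
    for a continuous hypothesized CDF. The supremum is attained at the
    order statistics, so it is checked just before and at each jump of F_n."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d

# Against the uniform(0, 1) CDF; the largest gap here is about 0.3.
d = ks_statistic([0.1, 0.4, 0.7], lambda x: x)
```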
Confidence Measures for Multimodal Identity Verification
, 2002
Abstract

Cited by 29 (9 self)
Multimodal fusion for identity verification has already shown great improvement compared to unimodal algorithms. In this paper, we propose to integrate confidence measures during the fusion process. We present a comparison of three different methods to generate such confidence information from unimodal identity verification systems. These methods can be used either to enhance the performance of a multimodal fusion algorithm or to obtain a confidence level on the decisions taken by the system. All the algorithms are compared on the same benchmark database, namely XM2VTS, which contains both speech and face information. Results show that some confidence measures yielded statistically significant performance improvements, while other measures produced reliable confidence levels for the fusion decisions.
Monte Carlo test methods in econometrics
 Companion to Theoretical Econometrics, Blackwell Companions to Contemporary Economics
, 2001
Abstract

Cited by 21 (13 self)
The authors thank three anonymous referees and the Editor Badi Baltagi for several useful comments. This work was supported by the Bank of Canada and by grants from the Canadian Network of Centres of Excellence [program on Mathematics
On Rank Correlation in Information Retrieval Evaluation
, 2007
Abstract

Cited by 7 (0 self)
Some methods for rank correlation in evaluation are examined and their relative advantages and disadvantages are discussed. In particular, it is suggested that different test statistics should be used to provide additional information about the experiments other than that provided by statistical significance testing. Kendall’s τ is often used for testing rank correlation, yet it is less appropriate if the objective of the test differs from what τ was designed for. In particular, attention should be paid to the null hypothesis. Other measures for rank correlation are described. If one test statistic suggests rejecting a hypothesis, other test statistics should be used to support or revise the decision. The paper then focuses on rank correlation between webpage lists ordered by PageRank, applying the general observations about these test statistics. An interpretation of PageRank behaviour is provided on the basis of the discussion of the test statistics for rank correlation.
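Kendall's τ discussed above can be computed directly from concordant and discordant pairs. A minimal tau-a sketch assuming untied rankings (O(n²), fine for short lists; `kendall_tau` is an illustrative name):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Assumes no ties in either ranking."""
    n = len(a)
    s = 0
    for i, j in combinations(range(n), 2):
        # A pair is concordant when both rankings order items i and j
        # the same way, discordant otherwise.
        s += 1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1
    return 2 * s / (n * (n - 1))

# Identical rankings give tau = 1; fully reversed rankings give tau = -1.
assert kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]) == 1.0
assert kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]) == -1.0
```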
A Nonlinear Filter That Extends to High Dimensional Systems
 J. of Geophys. Res. - Atmosphere
Abstract

Cited by 6 (0 self)
Many geophysical problems (e.g., numerical weather prediction) are characterized by high-dimensional, nonlinear systems and pose difficult challenges for real-time data assimilation (updating) and forecasting. This work builds on the ensemble Kalman filter (EnsKF) to produce ensemble filtering techniques applicable to non-Gaussian densities. These techniques also extend to high-dimensional systems.
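The EnsKF update that this work builds on can be sketched, for a directly observed scalar state, as follows. This is a toy perturbed-observation variant with hypothetical names (`enkf_update`, `obs_var`), not the paper's non-Gaussian extension:

```python
import random

def enkf_update(ensemble, obs, obs_var):
    """Perturbed-observation ensemble Kalman filter update for a scalar
    state that is observed directly. The gain is estimated from the
    ensemble spread; each member assimilates its own perturbed copy of
    the observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)  # Kalman gain from sample variance
    return [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

random.seed(0)
prior = [random.gauss(0.0, 1.0) for _ in range(200)]  # prior ensemble
post = enkf_update(prior, obs=1.0, obs_var=0.5)
```

After the update the ensemble mean moves toward the observation and the ensemble spread shrinks, as expected of a Kalman-type analysis step.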
Sharp Probability Estimates for Generalized Smirnov Statistics
Abstract

Cited by 6 (2 self)
Dedicated to the memory of Walter Philipp. We give sharp, uniform estimates for the probability that the empirical distribution function for n uniform[0, 1] random variables stays to one side of a given line.
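The event estimated in this abstract (the empirical distribution function staying to one side of a line) only needs to be checked at the jumps of F_n. A Monte Carlo sketch under the assumption a ≥ 0 and b ≥ 0, with illustrative function names, rather than the paper's exact estimates:

```python
import random

def stays_below(sample, a, b):
    """True if the empirical CDF F_n(t) <= a + b*t for every t in [0, 1],
    assuming a >= 0 and b >= 0. F_n only jumps at the order statistics,
    so it suffices to check F_n = i/n right at each jump."""
    xs = sorted(sample)
    n = len(xs)
    return all(i / n <= a + b * x for i, x in enumerate(xs, start=1))

def estimate(n, a, b, trials=2000, seed=1):
    """Monte Carlo estimate of the one-sided boundary probability for
    n uniform(0, 1) observations."""
    rng = random.Random(seed)
    hits = sum(stays_below([rng.random() for _ in range(n)], a, b)
               for _ in range(trials))
    return hits / trials

# P(F_n stays below the line 0.2 + t) for n = 10.
p = estimate(n=10, a=0.2, b=1.0)
```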
Quantifying software process improvement
, 2005
Abstract

Cited by 5 (1 self)
Many IT metrics display large variation, time dependencies and noise, making it seemingly impossible to draw conclusions from them. Most of the software engineering literature has proposed ways to stamp out this undesired behaviour, so that simple questions by management become simple to answer. In this paper we accepted that IT metrics misbehave; in fact, we argued that large variation, time dependencies, and considerable noise are inherent to many IT metrics. Many other fields know misbehaving metrics as well. These metrics range from the long-term temperature dynamics of beavers to the intra-tick graphs of the S&P 500, their behaviour sometimes being even worse than that of our IT metrics. We successfully applied the analysis methods common in other fields to software engineering questions. We illustrated our approach by solving a real-world problem: we answered the simple question by management of whether a software process improvement program affecting 1500 IT developers and business staff delivered its value. Moreover, we were able to predict the trends of important KPIs, like cost per function point, which enabled proactive steering and control. Our approach is not limited to this single question, but has a rich application potential for countless management and control issues concerning information technology.
How simulation gains acceptance as a manufacturing productivity improvement tool
 In Proceedings of the 11th European Simulation Multiconference, eds. Ali Riza Kaylan and Axel Lehmann, P3–P7
, 1997
Abstract

Cited by 4 (2 self)
Keywords: continuous simulation, discrete simulation, combined simulation. Simulation models, whether discrete, continuous, or a combination of both, are characteristically built to improve the understanding of a system and the processes operating within that system. Continuous simulation models study continuous variables, amenable to analysis via mathematical techniques such as differential and difference equations. Discrete-event process simulation models study integer-valued or binary variables requiring analysis via methods of discrete mathematics, statistics, and operations research. Additionally, random, stochastic variation is frequently both a significant provocation for undertaking a discrete process simulation study and a significant challenge within that study. After describing the similarities and differences between continuous and discrete-event process simulation, this paper discusses typical business motivations for the use of discrete simulation and presents a methodology and work plan for such studies in the context of example applications. Next, we describe data characteristically needed to drive a discrete-event process simulation model and the statistical concerns and methods pertinent to analyses of both input data and output results. We conclude with a generic description of computer software tools for building discrete models and a brief presentation of three case studies from manufacturing.
Logarithmic Pooling of Priors Linked by a Deterministic Simulation Model
 Journal of Computational and Graphical Statistics
, 1999
Abstract

Cited by 4 (1 self)
We consider Bayesian inference when priors and likelihoods are both available for inputs and outputs of a deterministic simulation model. This problem is fundamentally related to the issue of aggregating (i.e. pooling) expert opinion. We survey alternative strategies for aggregation, then describe computational approaches for implementing pooled inference for simulation models. Our approach (1) numerically transforms all priors to the same space, (2) uses log pooling to combine priors, and (3) then draws standard Bayesian inference. We use importance sampling methods, including an iterative, adaptive approach which is more flexible and has less bias in some instances than a simpler alternative. Our exploratory examples are the first steps toward extension of the approach for highly complex and even non-invertible models. Key Words: Prior Coherization, Adaptive Importance Sampling, Bayesian Statistics, Model Inversion.
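Step (2) of the approach, log pooling, forms a weighted geometric mean of the densities and renormalizes. A generic grid-based sketch (function names are illustrative; the paper's importance-sampling implementation differs):

```python
import math

def log_pool(densities, weights, grid):
    """Logarithmic (geometric) pooling on a common grid: the pooled
    density is proportional to prod_k p_k(x)^{w_k}, renormalized here
    with the trapezoidal rule."""
    pooled = [math.prod(p(x) ** w for p, w in zip(densities, weights))
              for x in grid]
    area = sum((pooled[i] + pooled[i + 1]) / 2 * (grid[i + 1] - grid[i])
               for i in range(len(grid) - 1))
    return [v / area for v in pooled]

def normal_pdf(mean):
    """Standard-deviation-1 normal density with the given mean."""
    return lambda x: math.exp(-(x - mean) ** 2 / 2) / math.sqrt(2 * math.pi)

# Equal-weight log pooling of N(0,1) and N(1,1) yields N(0.5,1),
# so the pooled density peaks at 0.5.
grid = [i * 0.01 - 5.0 for i in range(1001)]
pooled = log_pool([normal_pdf(0.0), normal_pdf(1.0)], [0.5, 0.5], grid)
```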