The development and comparison of robust methods for estimating the fundamental matrix
 International Journal of Computer Vision
, 1997
"... Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibrationfree representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, Mest ..."
Abstract

Cited by 220 (9 self)
 Add to MetaCart
Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory required to apply them to nonlinear orthogonal regression problems. Although a considerable amount of interest has focussed on the application of robust estimation in computer vision, the relative merits of the many individual methods are unknown, leaving the potential practitioner to guess at their value. The second goal is therefore to compare and judge the methods. Comparative tests are carried out using correspondences generated both synthetically in a statistically controlled fashion and from feature matching in real imagery. In contrast with previously reported methods the goodness of fit to the synthetic observations is judged not in terms of the fit to the observations per se but in terms of fit to the ground truth. A variety of error measures are examined. The experiments allow a statistically satisfying and quasi-optimal method to be synthesized, which is shown to be stable with up to 50 percent outlier contamination, and may still be used if there are more than 50 percent outliers. Performance bounds are established for the method, and a variety of robust methods to estimate the standard deviation of the error and covariance matrix of the parameters are examined. The results of the comparison have broad applicability to vision algorithms where the input data are corrupted not only by noise but also by gross outliers.
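The random-sampling family of robust estimators the paper draws on is easiest to see on a toy problem. The sketch below is an illustrative assumption, not the paper's fundamental-matrix estimator: it applies the same idea to robust line fitting, repeatedly fitting a minimal sample, scoring each candidate model by its inlier count, and keeping the best.

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Random-sampling robust fit of y = a*x + b: repeatedly fit a
    minimal sample (two points) and keep the model with most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample, skip it
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 exact inliers on y = 2x + 1 plus 4 gross outliers
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 40.0), (7, -30.0), (12, 99.0), (3, -50.0)]
(a, b), inliers = ransac_line(pts)
```

Because each candidate model is fit from a minimal sample, a single run touching only inliers recovers the model exactly, which is why the approach tolerates heavy outlier contamination.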
Thresholding for Change Detection
, 1998
"... Image differencing is used for many applications involving change detection. Although it is usually followed by a thresholding operation to isolate regions of change there are few methods available in the literature specific to (and appropriate for) change detection. We describe four different metho ..."
Abstract

Cited by 63 (2 self)
 Add to MetaCart
Image differencing is used for many applications involving change detection. Although it is usually followed by a thresholding operation to isolate regions of change there are few methods available in the literature specific to (and appropriate for) change detection. We describe four different methods for selecting thresholds that work on very different principles. Either the noise or the signal is modelled, and the model covers either the spatial or intensity distribution characteristics. The methods are: 1/ a Normal model is used for the noise intensity distribution, 2/ signal intensities are tested by making local intensity distribution comparisons in the two image frames (i.e. the difference map is not used), 3/ the spatial properties of the noise are modelled by a Poisson distribution, and 4/ the spatial properties of the signal are modelled as a stable number of regions (or stable Euler number).
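Method 1 (a Normal model for the noise intensity distribution) can be sketched in a few lines. Estimating the scale robustly via the median absolute deviation is my illustrative choice, not necessarily the paper's exact fitting procedure:

```python
import statistics

def normal_model_threshold(diff, k=3.0):
    """Flag change pixels whose difference value is improbable under a
    Normal noise model. Scale is estimated with the median absolute
    deviation so genuine changes do not inflate the noise estimate."""
    mu = statistics.median(diff)
    mad = statistics.median(abs(d - mu) for d in diff)
    sigma = 1.4826 * mad  # MAD -> standard deviation under a Normal model
    return [abs(d - mu) > k * sigma for d in diff]

# small noise everywhere except two genuinely changed pixels
diff = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 9.5, -0.3, 0.1, 8.7]
mask = normal_model_threshold(diff)
```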
Multi-Modal Identity Verification Using Expert Fusion
 Information Fusion
, 2000
"... The contribution of this paper is to compare paradigms coming from the classes of parametric, and nonparametric techniques to solve the decision fusion problem encountered in the design of a multimodal biometrical identity verification system. The multimodal identity verification system under con ..."
Abstract

Cited by 45 (0 self)
 Add to MetaCart
The contribution of this paper is to compare paradigms from the classes of parametric and nonparametric techniques to solve the decision fusion problem encountered in the design of a multimodal biometric identity verification system. The multimodal identity verification system under consideration is built of d modalities in parallel, each one delivering as output a scalar number, called a score, stating how well the claimed identity is verified. A decision fusion module receiving as input the d scores has to take a binary decision: accept or reject the claimed identity. We have solved this fusion problem using parametric and nonparametric classifiers. The performances of all these fusion modules have been evaluated and compared with other approaches on a multimodal database containing both vocal and visual biometric modalities. Keywords: multimodal identity verification, biometrics, decision fusion.
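As a toy illustration of the fusion module's job (the rule, weights, and scores below are invented for illustration and are not the paper's classifiers), even a weighted sum over the d scores already yields a binary accept/reject decision:

```python
def fuse_scores(scores, weights, threshold=0.5):
    """Toy decision fusion: accept the claimed identity iff the weighted
    sum of the d per-modality scores clears a threshold."""
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused >= threshold

# d = 2 modalities, e.g. a vocal score and a visual score (values invented)
accept = fuse_scores([0.9, 0.7], weights=[0.6, 0.4])  # strong match on both
reject = fuse_scores([0.2, 0.1], weights=[0.6, 0.4])  # weak match on both
```

The parametric and nonparametric classifiers studied in the paper replace this fixed linear rule with decision boundaries learned from the score distributions.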
An Empirical Comparison of Static Concurrency Analysis Techniques
, 1996
"... This paper reports the results of an empirical comparison of several static analysis tools for evaluating properties of concurrent software and also reports the results of our attempts to build predictive models for each of the tools based on program and property characteristics. Although this area ..."
Abstract

Cited by 31 (6 self)
 Add to MetaCart
This paper reports the results of an empirical comparison of several static analysis tools for evaluating properties of concurrent software and also reports the results of our attempts to build predictive models for each of the tools based on program and property characteristics. Although this area seems well suited to empirical investigation, we encountered a number of significant issues that make designing a sound and unbiased study surprisingly difficult. These experiment design issues are also discussed in this paper.
Hypothesis Tests for Evaluating Numerical Precipitation Forecasts
 Wea. Forecasting
, 1999
"... When evaluating differences between competing precipitation forecasts, formal hypothesis testing is rarely performed. This may be due to the difficulty in applying common tests given the spatial correlation of and nonnormality of errors. Possible ways around these difficulties are explored here. Two ..."
Abstract

Cited by 20 (5 self)
 Add to MetaCart
When evaluating differences between competing precipitation forecasts, formal hypothesis testing is rarely performed. This may be due to the difficulty in applying common tests given the spatial correlation and non-normality of errors. Possible ways around these difficulties are explored here. Two datasets of precipitation forecasts are evaluated: a set of two competing gridded precipitation forecasts from operational weather prediction models, and sets of competing probabilistic quantitative precipitation forecasts from model output statistics and from an ensemble of forecasts. For each test, data from each competing forecast are collected into one sample for each case day to avoid problems with spatial correlation. Next, several possible hypothesis test methods are evaluated: the paired t test, the nonparametric Wilcoxon signed-rank test, and two resampling tests. The more involved resampling test methodology is the most appropriate when testing threat scores from nonprobabilistic forecasts. The simpler paired t test or Wilcoxon test is appropriate for testing the skill of probabilistic forecasts evaluated with the ranked probability score.
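The resampling idea can be sketched as a paired permutation test on per-case-day scores. The scores and the sign-flipping scheme below are illustrative assumptions, not the paper's exact procedure:

```python
import random

def paired_permutation_test(a, b, n_perm=2000, seed=0):
    """Resampling test for paired scores: randomly flip the sign of each
    paired difference to build the null distribution of the mean
    difference, then report a two-sided p-value."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = sum(diffs) / len(diffs)
    hits = 0
    for _ in range(n_perm):
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped / len(diffs)) >= abs(observed):
            hits += 1
    return hits / n_perm

# invented threat scores for two competing forecasts over eight case days
model_a = [0.62, 0.58, 0.71, 0.66, 0.60, 0.64, 0.69, 0.63]
model_b = [0.41, 0.44, 0.39, 0.47, 0.42, 0.40, 0.45, 0.43]
p_value = paired_permutation_test(model_a, model_b)
```

Because the null distribution is built from the paired differences themselves, no normality assumption is needed, which is the appeal of resampling for forecast verification.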
A Nonparametric Approach to Noisy and Costly Optimization
, 2000
"... This paper describes PAIRWISE BISECTION: a nonparametric approach to optimizing a noisy function with few function evaluations. ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
This paper describes PAIRWISE BISECTION: a nonparametric approach to optimizing a noisy function with few function evaluations.
An Evaluation of LOLITA and Related Natural Language Processing Systems
, 1998
"... An Evaluation of LOLITA and related Natural Language Processing Systems Paul Callaghan Submitted to the University of Durham for the degree of Ph.D., August 1997  This research addresses the question, "how do we evaluate systems like LOLITA?" LOLITA is the Natural Language P ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
An Evaluation of LOLITA and Related Natural Language Processing Systems. Paul Callaghan. Submitted to the University of Durham for the degree of Ph.D., August 1997. This research addresses the question, "how do we evaluate systems like LOLITA?" LOLITA is the Natural Language Processing (NLP) system under development at the University of Durham. It is intended as a platform for building NL applications. We are therefore interested in questions of evaluation for such general NLP systems. The thesis has two parts.
Operator-Probability Adaptation in a Genetic-Algorithm/Heuristic Hybrid for Optical Network Wavelength Allocation
 In: IEEE Intl. Conf. on Evolutionary Computation (ICEC'98)
, 1998
"... Operatorprobability adaptation in a geneticalgorithm/ heuristic hybrid for minimum cost routing and wavelength allocation of multiwavelength alloptical transport networks is described. The hybrid algorithm uses an objectoriented representation of networks, and incorporates four operators: path ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
Operator-probability adaptation in a genetic-algorithm/heuristic hybrid for minimum cost routing and wavelength allocation of multi-wavelength all-optical transport networks is described. The hybrid algorithm uses an object-oriented representation of networks, and incorporates four operators: path mutation, single-point crossover, reroute and shift-out. The adaptation algorithm is based on that by Davis, but uses simplified operator accounting. Experimental results from three fifteen-node test networks, obtained using a tool for optical network optimisation, modelling and design (NOMaD), illustrate the interesting dynamic behaviour of the adaptation algorithm. They suggest that, in this application, with powerful problem-specific operators, the main benefits of operator-probability adaptation are in relieving the experimenter of the burden of setting initial probabilities and in the early performance of the hybrid, rather than in improvements of the final solution quality obtained.
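A minimal sketch of the adaptation idea follows. The update rule, learning rate, floor, and operator credits are assumptions for illustration (in the spirit of Davis-style adaptation with simplified accounting), not the NOMaD implementation:

```python
def adapt_operator_probs(probs, credits, learn_rate=0.2, floor=0.05):
    """Shift selection probability toward operators whose offspring
    recently improved fitness; keep a floor so no operator dies out,
    then renormalize."""
    total_credit = sum(credits.values()) or 1.0
    target = {op: credits.get(op, 0.0) / total_credit for op in probs}
    new = {op: (1 - learn_rate) * p + learn_rate * target[op]
           for op, p in probs.items()}
    new = {op: max(p, floor) for op, p in new.items()}
    z = sum(new.values())
    return {op: p / z for op, p in new.items()}

probs = {"path_mutation": 0.25, "crossover": 0.25, "reroute": 0.25, "shift_out": 0.25}
credits = {"path_mutation": 3.0, "crossover": 1.0, "reroute": 0.0, "shift_out": 0.0}
new_probs = adapt_operator_probs(probs, credits)
```

One virtue the abstract highlights carries over to the sketch: the initial probabilities matter less, because credit assignment moves them toward useful operators during the run.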
The Debate on Agriculture-Industry Terms of Trade in India, Working Paper No.
, 2002
"... In this paper, we focus on the vast literature that involved analysis of the agricultureindustry terms of trade in India. We first state the key policy issues that are found to be associated with changes in terms of trade variable, and subsequently discuss specific issues concerning the empirical e ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
In this paper, we focus on the vast literature analysing the agriculture-industry terms of trade in India. We first state the key policy issues associated with changes in the terms of trade variable, and subsequently discuss specific issues concerning the empirical estimation of agricultural terms of trade. We find that the barter terms of trade measure is subject not only to an aggregation error associated with the index number construction, but also to the aggregation problems of empirical estimation. We also undertake a set of statistical tests to examine the differences among various agricultural net barter terms of trade series for India. The results indicate that, in spite of the methodological differences, the alternate series reflect similar attributes over comparable time periods. JEL Classification: Q11, C14 and C43
Simulation and Bootstrapping for Teaching Statistics
"... Some key ideas in statistics and probability are hard for students, including sampling distributions. Computer simulation lets students gain experience with and intuition for these concepts. Bootstrapping can reinforce that learning, and provide a way for students (and future practitioners!) to esti ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Some key ideas in statistics and probability are hard for students, including sampling distributions. Computer simulation lets students gain experience with and intuition for these concepts. Bootstrapping can reinforce that learning, and provide a way for students (and future practitioners!) to estimate sampling distributions when they have data but do not know the underlying distribution. Bootstrapping also frees us from the requirement to teach inference only for statistics for which simple formulas are available; we can bootstrap robust statistics like the median as easily as the mean. For the promise of simulation and bootstrapping to be realized, they must be available and easy to use in general-purpose statistical software, complete with the exploratory data analysis and inferential capabilities required in teaching and practice. We discuss some of the available software for simulation and bootstrapping, in particular software built on S-Plus. Key words: bootstrap, resampling, simulation,
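A minimal bootstrap of the median, the example the abstract mentions, might look like the sketch below (the percentile interval and the sample data are illustrative choices):

```python
import random
import statistics

def bootstrap_median(data, n_boot=2000, seed=0):
    """Estimate the sampling distribution of the median by resampling
    the data with replacement; return the point estimate and a simple
    95% percentile interval."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    )
    lo = medians[int(0.025 * n_boot)]
    hi = medians[int(0.975 * n_boot)]
    return statistics.median(data), (lo, hi)

data = [12, 15, 9, 14, 20, 11, 13, 16, 10, 18, 12, 14]
est, (lo, hi) = bootstrap_median(data)
```

Swapping `statistics.median` for `statistics.mean` (or any other statistic) needs no new formula, which is exactly the pedagogical point the paper makes.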