Results 1–10 of 144
Robust parameter estimation in computer vision
 SIAM Review, 1999
Cited by 129 (10 self)
Abstract. Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, when estimating the parameters of a single population, these techniques should effectively ignore the outliers and treat measurements from other populations as outliers as well. Two frequently used techniques are least-median of ...
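The least-median-of-squares idea this abstract alludes to can be sketched in a few lines. This is a generic illustration on made-up points (an exhaustive pairwise line fit), not code from the paper:

```python
# Hedged sketch of the least-median-of-squares (LMedS) idea: fit a line
# y = a*x + b by minimizing the MEDIAN of squared residuals, so up to
# roughly half the data can be gross outliers without spoiling the fit.
import itertools
import statistics

def lmeds_line(points):
    """Exhaustively try the line through every pair of points and keep
    the candidate whose median squared residual over all points is smallest."""
    best, best_score = None, float("inf")
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        if x1 == x2:
            continue  # skip vertical candidate lines for simplicity
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        score = statistics.median((y - (a * x + b)) ** 2 for x, y in points)
        if score < best_score:
            best, best_score = (a, b), score
    return best

# Five inliers on y = 2x + 1 plus two gross outliers.
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (1, 40), (3, -30)]
print(lmeds_line(pts))  # -> (2.0, 1.0): the outliers are ignored
```

An ordinary least-squares fit on the same points would be pulled badly off the true line by the two outliers, which is the motivation for median-based criteria.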
Geometric Motion Segmentation and Model Selection
 Phil. Trans. Royal Society of London A, 1998
Cited by 105 (2 self)
In this paper we place the three problems into a common statistical framework, investigating the use of information criteria and robust mixture models as a principled way to perform motion segmentation of images. The final result is a general, fully automatic clustering algorithm that works in the presence of noise and outliers.
Comparing Dynamic Causal Models
 NeuroImage, 2004
Cited by 79 (33 self)
This article describes the use of Bayes factors for comparing Dynamic Causal Models (DCMs). DCMs are used to make inferences about effective connectivity from functional Magnetic Resonance Imaging (fMRI) data. These inferences, however, are contingent upon assumptions about model structure, that is, the connectivity pattern between the regions included in the model. Given the current lack of detailed knowledge on anatomical connectivity in the human brain, there are often considerable degrees of freedom when defining the connectional structure of DCMs. In addition, many plausible scientific hypotheses may exist about which connections are changed by experimental manipulation, and a formal procedure for directly comparing these competing hypotheses is highly desirable. In this article, we show how Bayes factors can be used to guide choices about model structure, both with regard to the intrinsic connectivity pattern and the contextual modulation of individual connections. The combined use of Bayes factors and DCM thus allows one to evaluate competing scientific theories about the architecture of large-scale neural networks and the neuronal interactions that mediate perception and cognition.
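For readers unfamiliar with Bayes factors, the comparison the abstract describes reduces to a ratio of model evidences. The sketch below uses invented log-evidence values purely for illustration, not results from the paper:

```python
# Hedged illustration of a Bayes-factor model comparison:
# B12 = p(y|m1) / p(y|m2), computed from log model evidences in log
# space for numerical stability. All numbers here are assumed.
import math

def bayes_factor(log_evidence_1, log_evidence_2):
    """Ratio of the two models' evidences."""
    return math.exp(log_evidence_1 - log_evidence_2)

# Assumed (made-up) log evidences for two candidate connectivity patterns.
log_ev_full, log_ev_reduced = -310.2, -313.4
b12 = bayes_factor(log_ev_full, log_ev_reduced)
print(round(b12, 2))  # -> 24.53
```

On the conventional interpretation scale, a Bayes factor between 20 and 150 is usually read as "strong" evidence for the first model.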
An Assessment of Information Criteria for Motion Model Selection
 In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1997
Cited by 53 (8 self)
Rigid motion imposes constraints on the motion of image points between the two images. The matched points must conform to one of several possible constraints, such as that given by the fundamental matrix or an image-to-image homography, and it is essential to know which model to fit to the data before recovery of structure, matching, or segmentation can be performed successfully. This paper compares several model selection methods, with particular emphasis on providing a method that will work fully automatically on real imagery. Robotic vision has its basis in geometric modelling of the world, and many vision algorithms attempt to estimate these geometric models from perceived data. Usually only one model is fitted to the data. But what if the data might have arisen from one of several possible models? In this case the fitting procedure needs to fit all the potential models and select which of these fits the data best. This is the task of robust model selection which, in spi...
Random-effects analysis, 2004
Cited by 48 (4 self)
... of the structural measures of flexibility and agility using a measurement-theoretical framework
Proactive Management of Software Aging, 2001
Cited by 40 (2 self)
This paper may be copied or distributed royalty-free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.
Key Concepts in Model Selection: Performance and Generalizability
 Journal of Mathematical Psychology, 2000
Cited by 40 (12 self)
What are the methods of model selection, and how do they work? Which methods perform better than others, and in what circumstances? These questions rest on a number of key concepts in a relatively underdeveloped field. The aim of this essay is to explain some background concepts, highlight some of the results in this special issue, and to add my own. The standard methods of model selection include classical hypothesis testing, maximum likelihood, Bayes method, minimum description length, cross-validation, and Akaike's information criterion. They all provide an implementation of Occam's razor, in which parsimony or simplicity is balanced against goodness-of-fit. These methods primarily take account of the sampling errors in parameter estimation, although their relative success at this task depends on the circumstances. However, the aim of model selection should also include the ability of a model to generalize to predictions in a different domain. Errors of extrapolation, or generalization, are different from errors of parameter estimation. So it seems that simplicity and parsimony may be an additional factor in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam's razor. William of Ockham (1285–1347/49) will always be remembered for his famous postulation of Ockham's razor (also spelled 'Occam'), which states that entities are not to be multiplied beyond necessity. In a similar vein, Sir Isaac Newton's first rule of hypothesizing instructs us that we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. While they ... (This paper is derived from a presentation at the Methods of Model Selection symposium at Indiana University.)
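The parsimony-versus-fit balance this essay describes is easy to see with two of the standard criteria it lists. The log-likelihoods below are assumed values for models of increasing complexity, chosen only to illustrate the trade-off:

```python
# Illustrative parsimony-vs-fit trade-off with two standard criteria:
# AIC = 2k - 2*lnL and BIC = k*ln(n) - 2*lnL (lower is better).
import math

def aic(k, log_lik):
    return 2 * k - 2 * log_lik

def bic(k, log_lik, n):
    return k * math.log(n) - 2 * log_lik

n = 50  # assumed sample size
# (parameter count k, maximized log-likelihood): richer models always fit
# at least as well, but here the improvement stalls after k = 3.
fits = [(2, -120.0), (3, -110.0), (4, -109.5), (5, -109.4)]
best_aic = min(fits, key=lambda f: aic(*f))
best_bic = min(fits, key=lambda f: bic(f[0], f[1], n))
print(best_aic[0], best_bic[0])  # -> 3 3: both criteria pick the k = 3 model
```

Both criteria penalize extra parameters, so the marginal fit gains from k = 4 and k = 5 do not justify their cost; this is Occam's razor implemented as arithmetic.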
Population Rule Learning in Symmetric Normal-Form Games: Theory and Evidence, 2001
Cited by 38 (6 self)
A model of population rule learning is formulated and estimated using experimental data. When predicting the population distribution of choices and accounting for the number of parameters, the population rule learning model is much better than an aggregation of individually estimated rule learning models. Further, rule learning is a statistically significant and important phenomenon even when focusing on population statistics, and is much better than one-rule learning dynamics. JEL classification: C15; C52; C72. Keywords: Rules; Learning; Games; Experimental; Testing. Recent learning research in one-shot games can be divided into two domains: (i) population learning or evolutionary dynamics, as typified by replicator dynamics, and (ii) individual learning. The first domain focuses on how the population distribution of play changes over time, while the second domain focuses on how an individual's behavior changes over...
Robust Detection of Degenerate Configurations whilst Estimating the Fundamental Matrix, 1998
Cited by 31 (3 self)
We present a new method for the detection of multiple solutions or degeneracy when estimating the Fundamental Matrix, with specific emphasis on robustness to data contamination (mismatches). The Fundamental Matrix encapsulates all the information on camera motion and internal parameters available from image feature correspondences between two views. It is often used as a first step in structure-from-motion algorithms. If the set of correspondences is degenerate, then this structure cannot be accurately recovered and many solutions explain the data equally well. It is essential that we are alerted to such eventualities. As current feature matchers are very prone to mismatching, the degeneracy detection method must also be robust to outliers. In this paper a definition of degeneracy is given, and all two-view non-degenerate and degenerate cases are catalogued in a logical way by introducing the language of varieties from algebraic geometry. It is then shown how each of the cases can be ro...
Bayesian model selection in structural equation models, 1993
Cited by 31 (10 self)
A Bayesian approach to model selection for structural equation models is outlined. This enables us to compare individual models, nested or non-nested, and also to search through the (perhaps vast) set of possible models for the best ones. The approach selects several models rather than just one, when appropriate, and so enables us to take account, both informally and formally, of uncertainty about model structure when making inferences about quantities of interest. The approach tends to select simpler models than strategies based on multiple P-value-based tests. It may thus help to overcome the criticism of structural ...