Results 1–10 of 34
Bundle adjustment – a modern synthesis
Vision Algorithms: Theory and Practice, LNCS, 2000
Cited by 386 (12 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
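The refinement the abstract describes is at its core sparse nonlinear least squares. As a hedged illustration of the Gauss–Newton machinery involved, here is a toy triangulation: one 3D point refined against reprojection errors from two fixed cameras. The pinhole model, camera positions, and all names are illustrative assumptions, not a real bundle adjuster.

```python
import numpy as np

def project(X, c):
    # Pinhole projection of 3D point X seen from a camera at position c
    # (identity rotation, unit focal length) -- a toy model only.
    d = X - c
    return d[:2] / d[2]

def gauss_newton_refine(X0, cams, obs, iters=20):
    """Refine the 3D point X by Gauss-Newton on the stacked reprojection error."""
    X = X0.astype(float).copy()
    for _ in range(iters):
        r = np.concatenate([project(X, c) - u for c, u in zip(cams, obs)])
        # Forward-difference Jacobian of the residual w.r.t. X
        J = np.zeros((len(r), 3))
        eps = 1e-6
        for k in range(3):
            dX = X.copy()
            dX[k] += eps
            rk = np.concatenate([project(dX, c) - u for c, u in zip(cams, obs)])
            J[:, k] = (rk - r) / eps
        X -= np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
    return X

cams = [np.array([0., 0., 0.]), np.array([1., 0., 0.])]
X_true = np.array([0.3, -0.2, 4.0])
obs = [project(X_true, c) for c in cams]
X_hat = gauss_newton_refine(np.array([0., 0., 2.]), cams, obs)
```

In a real bundle adjustment the camera parameters are refined jointly with many points, and the sparse structure of J is exploited rather than forming dense normal equations.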
From association to causation: Some remarks on the history of statistics
Statist. Sci., 1999
Cited by 23 (6 self)
The “numerical method” in medicine goes back to Pierre Louis’ study of pneumonia (1835), and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule’s study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance. Snow’s work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
Regression Analysis, Nonlinear or Nonnormal: Simple and accurate p-values from Likelihood Analysis
J. Amer. Statist. Assoc., 1999
Cited by 19 (12 self)
We develop simple approximations for the p-values to use with regression models having linear or nonlinear parameter structure and normal or non-normal error distribution; computer iteration then gives confidence intervals. Both frequentist and Bayesian versions are given. The approximations are derived from recent developments in likelihood analysis and have third-order accuracy. Even for very small and medium-sized samples the accuracy is typically high. The likelihood basis of the procedure seems to provide the grounds for this general accuracy. Examples are discussed and simulations record the distributional accuracy. Keywords: Asymptotics; Likelihood analysis; Nonlinear; Non-normal; p-values; Regression. 1. Introduction: Regression analysis is a central technique of statistical methodology, but a large part of the theory is organized around the special case with linear location and normal error. This case of course corresponds to mathematical simplicities and has a long histo...
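The approximations in question refine the signed likelihood root. As a hedged illustration of the underlying idea only, here is the familiar first-order likelihood-root p-value for an exponential-rate model of my own choosing; the paper's contribution is the third-order correction (r*-type formulas), which this sketch does not implement.

```python
import numpy as np

def loglik(theta, x):
    # Exponential(rate = theta) log-likelihood
    return len(x) * np.log(theta) - theta * np.sum(x)

def likelihood_root_pvalue(x, theta0):
    """Two-sided first-order p-value from the signed likelihood root r.
    Third-order methods adjust r before applying the normal CDF."""
    from math import erf
    theta_hat = 1.0 / np.mean(x)                      # MLE of the rate
    w = max(0.0, 2.0 * (loglik(theta_hat, x) - loglik(theta0, x)))
    r = np.sign(theta_hat - theta0) * np.sqrt(w)      # signed likelihood root
    Phi = lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0)))
    return 2.0 * (1.0 - Phi(abs(r)))

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=50)               # true rate = 1
```

At the MLE the root is zero and the two-sided p-value is 1; far from the MLE it shrinks toward zero, as one would expect.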
From association to causation via regression
Indiana: University of Notre Dame, 1997
Cited by 16 (6 self)
For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance.
Blind minimax estimation
EE Dept., Technion–Israel Institute of Technology; submitted to IEEE Trans. Signal Processing, 2005
Cited by 15 (14 self)
We consider the linear regression problem of estimating an unknown, deterministic parameter vector based on measurements corrupted by colored Gaussian noise. We present and analyze blind minimax estimators (BMEs), which consist of a bounded parameter set minimax estimator whose parameter set is itself estimated from measurements. Thus, our approach does not require any prior assumption or knowledge, and the proposed estimator can be applied to any linear regression problem. We demonstrate analytically that the BMEs strictly dominate the least-squares (LS) estimator, i.e., they achieve lower mean-squared error (MSE) for any value of the parameter vector. Both Stein’s estimator and its positive-part correction can be derived within the blind minimax framework. Furthermore, our approach can be readily extended to a wider class of estimation problems than Stein’s estimator, which is defined only for white noise and non-transformed measurements. We show through simulations that the BMEs generally outperform previous extensions of Stein’s technique. Index terms: Biased estimation, James–Stein estimation, minimax estimation, linear regression model.
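The dominance claim is easy to check by simulation in the classical white-noise special case, where blind minimax shrinkage coincides with Stein-type estimation. A sketch of positive-part James–Stein shrinkage (a toy setup, not the general colored-noise BME; the dimension, trial count, and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, trials = 10, 20000
theta = np.full(p, 0.5)                        # arbitrary true parameter vector

x = rng.normal(theta, 1.0, size=(trials, p))   # white-noise observations; LS = x
norms2 = np.sum(x ** 2, axis=1)

# Positive-part James-Stein shrinkage toward the origin
shrink = np.maximum(0.0, 1.0 - (p - 2) / norms2)
js = shrink[:, None] * x

mse_ls = np.mean(np.sum((x - theta) ** 2, axis=1))   # E = p for the LS estimator
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
```

For any p ≥ 3 the shrinkage estimator's MSE falls below p uniformly in theta, which is the phenomenon the blind-minimax framework re-derives and generalizes.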
Recursive estimation in econometrics
Computational Statistics and Data Analysis, 2003
Cited by 10 (2 self)
An account is given of recursive regression and Kalman filtering that gathers the important results and the ideas that lie behind them. It emphasises areas where econometricians have made contributions, including methods for handling the initial-value problem associated with nonstationary processes and algorithms for fixed-interval smoothing.
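A minimal sketch of the recursive-regression idea: recursive least squares processes one observation at a time and, with a diffuse start (a crude stand-in for the initial-value treatments the survey discusses), reproduces the batch OLS estimate. The prior scale 1e6 and the toy data are my own choices, not from the paper.

```python
import numpy as np

def recursive_ls(Phi, y):
    """Recursive least squares: sequentially update the coefficient
    estimate and the (scaled) inverse Gram matrix P."""
    n, k = Phi.shape
    beta = np.zeros(k)
    P = np.eye(k) * 1e6                      # diffuse start (large prior variance)
    for t in range(n):
        phi = Phi[t]
        gain = P @ phi / (1.0 + phi @ P @ phi)
        beta = beta + gain * (y[t] - phi @ beta)   # innovation update
        P = P - np.outer(gain, phi @ P)
    return beta

rng = np.random.default_rng(2)
Phi = np.column_stack([np.ones(200), rng.normal(size=200)])
y = Phi @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=200)

beta_rls = recursive_ls(Phi, y)
beta_ols, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

The Kalman filter generalizes this update to time-varying coefficients, which is where the smoothing algorithms mentioned in the abstract come in.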
Curve-Fitting, the Reliability of Inductive Inference and the Error-Statistical Approach
Forthcoming in Philosophy of Science, 2007
Cited by 4 (4 self)
The main aim of this paper is to revisit the curve-fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest’ curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve-fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the primary criterion for best, the mathematical approximation perspective undermines the reliability-of-inference objective by giving rise to selection rules which pay insufficient attention to ‘capturing the regularities in the data’. A more appropriate framework is offered by the error-statistical approach, where (i) statistical adequacy provides the criterion for assessing when a curve captures the regularities in the data adequately, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. Broadly speaking, the fittest (statistically adequate) curve is not determined by the smallness of its residuals, tempered by simplicity or other pragmatic criteria, but by the non-systematic (e.g. white-noise) nature of its residuals. The advocated error-statistical arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds.
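The white-noise residual criterion can be made concrete with a toy adequacy check: a straight line fitted to a quadratic signal leaves strongly autocorrelated (systematic) residuals, while the adequate quadratic fit leaves approximately white ones. The data, noise level, and lag-1 statistic here are illustrative choices, not the paper's examples.

```python
import numpy as np

def lag1_autocorr(r):
    """Lag-1 sample autocorrelation: near 0 for white residuals,
    near 1 for a smooth systematic residual curve."""
    r = r - r.mean()
    return np.sum(r[1:] * r[:-1]) / np.sum(r * r)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x + 3.0 * x ** 2 + 0.05 * rng.normal(size=100)

res_line = y - np.polyval(np.polyfit(x, y, 1), x)   # misspecified: degree 1
res_quad = y - np.polyval(np.polyfit(x, y, 2), x)   # adequate: degree 2

rho_line = lag1_autocorr(res_line)
rho_quad = lag1_autocorr(res_quad)
```

Both fits may look respectable on goodness-of-fit alone; the residual diagnostic is what separates the statistically adequate curve from the merely close one.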
Bayesian computation: a statistical revolution
2003
Cited by 4 (0 self)
The 1990s saw a statistical revolution sparked predominantly by the phenomenal advances in computing technology from the early 1980s onwards. These advances enabled the development of powerful new computational tools, which reignited interest in a philosophy of statistics that had lain almost dormant since the turn of the century. In this paper we briefly review the historic and philosophical foundations of the two schools of statistical thought, before examining the implications of the reascendance of the Bayesian paradigm for both current and future statistical practice.
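The "powerful new computational tools" alluded to are chiefly Markov chain Monte Carlo methods. A minimal random-walk Metropolis sketch on a conjugate toy posterior, chosen so the answer is known analytically; the step size, burn-in, and data are arbitrary assumptions:

```python
import numpy as np

def metropolis(logpost, x0, steps, scale, rng):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    x, lp = x0, logpost(x0)
    out = np.empty(steps)
    for i in range(steps):
        prop = x + scale * rng.normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy target: posterior of a normal mean (known unit variance, flat prior),
# for which the posterior is N(ybar, 1/n) -- so the sampler can be checked.
rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.0, size=100)
n, ybar = len(y), y.mean()
logpost = lambda mu: -0.5 * n * (mu - ybar) ** 2
draws = metropolis(logpost, x0=0.0, steps=20000, scale=0.3, rng=rng)[5000:]
```

On nontrivial models no closed-form check exists, which is exactly why samplers like this made previously intractable Bayesian analyses routine.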
The Curve-Fitting Problem, Akaike-type Model Selection, and the Error-Statistical Approach
Virginia Tech working paper, 2006
Cited by 3 (2 self)
The curve-fitting problem is often viewed as an exemplar which encapsulates the multitude of dimensions and issues associated with inductive inference, including underdetermination and the reliability of inference. The prevailing view is that the ‘fittest’ curve is one which provides the optimal tradeoff between goodness-of-fit and simplicity, with the Akaike Information Criterion (AIC) the preferred method. The paper argues that AIC-type procedures do not provide an adequate solution to the curve-fitting problem because (a) they have no criterion to assess when a curve captures the regularities in the data inadequately, and (b) they are prone to unreliable inferences. The thesis advocated is that for more satisfactory answers one needs to view the curve-fitting problem in the context of the error-statistical approach, where (i) statistical adequacy provides a criterion for selecting the fittest curve and (ii) the error probabilities can be used to calibrate the reliability of inductive inference. This thesis is illustrated by comparing the Kepler and Ptolemaic models in terms of statistical adequacy, showing that the latter does not ‘save the phenomena’ as often claimed. This calls into question the view concerning the pervasiveness of the problem of underdetermination; statistically adequate ‘fittest’ curves are rare, not common.
Gauss, Statistics, and Gaussian Elimination
Proceedings of Interface 94, 1994
Cited by 2 (0 self)
This report gives a historical survey of Gauss's work on the solution of linear systems. It is available by anonymous ftp from thales.cs.umd.edu in the directory pub/reports, and will appear in the proceedings of Interface 94. G. W. Stewart, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. 1. Introduction: Everyone knows that Gauss invented Gaussian elimination, and, excepting a quibble, everyone is right. What is less well known is that Gauss introduced the procedure as a mathematical tool to get at the precision of least squares estimates. In fact the computational component in the original description is so little visible that it takes some doing to see an algorithm in it. Gaussian elimination, therefore, was not conceived as a gene...
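The procedure in question, applied as Gauss applied it to the normal equations of a least-squares problem, can be sketched as follows. Partial pivoting is a modern safeguard, not part of Gauss's original scheme, and the toy data are my own.

```python
import numpy as np

def gaussian_elimination_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n):
        p = i + np.argmax(np.abs(A[i:, i]))        # partial pivoting
        A[[i, p]], b[[i, p]] = A[[p, i]], b[[p, i]]
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]
            A[j, i:] -= m * A[i, i:]               # eliminate column i below pivot
            b[j] -= m * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Least-squares use: eliminate on the normal equations X^T X beta = X^T y
rng = np.random.default_rng(6)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + 0.01 * rng.normal(size=30)
beta = gaussian_elimination_solve(X.T @ X, X.T @ y)
```

As the report notes, Gauss's interest was less in the solution itself than in what the elimination reveals about the precision of the estimates.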