Results 1–10 of 11
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 148 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
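Model averaging of the kind this abstract describes is often approximated with BIC-based posterior model weights. The sketch below uses that standard large-sample approximation, not the paper's exact method; the model names, BIC values, and forecasts are made up for illustration.

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    using p(M_k | data) proportional to exp(-BIC_k / 2)."""
    # Subtract the minimum BIC before exponentiating, for numerical stability.
    b0 = min(bics.values())
    raw = {m: math.exp(-(b - b0) / 2.0) for m, b in bics.items()}
    total = sum(raw.values())
    return {m: w / total for m, w in raw.items()}

def bma_prediction(predictions, weights):
    """Model-averaged point prediction: per-model predictions
    weighted by their posterior model probabilities."""
    return sum(weights[m] * predictions[m] for m in predictions)

# Hypothetical competing forecasting models (all values invented).
bics = {"AR(1)": 102.3, "AR(2)": 100.1, "random walk": 104.8}
preds = {"AR(1)": 21.0, "AR(2)": 22.5, "random walk": 20.0}
w = bma_weights(bics)
print(w)
print(bma_prediction(preds, w))
```

The averaged forecast lands between the individual model forecasts, weighted toward the model with the smallest BIC.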
Computing Maximum Likelihood Estimates in Loglinear Models
, 2006
Abstract

Cited by 13 (3 self)
We develop computational strategies for extended maximum likelihood estimation, as defined in Rinaldo (2006), for general classes of loglinear models in widespread use, under Poisson and product-multinomial sampling schemes. We derive numerically efficient procedures for generating and manipulating design matrices, and we propose various algorithms for computing the extended maximum likelihood estimates of the expectations of the cell counts. These algorithms allow us to identify the set of estimable cell means for any given observable table and can be used to modify traditional goodness-of-fit tests to accommodate a nonexistent MLE. We describe and take advantage of the connections between extended maximum likelihood
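The design-matrix machinery this abstract refers to can be illustrated for the simplest case, the independence model for an I × J table. This is a toy sketch under my own choice of (over-parameterized) indicator coding, not the paper's general implementation.

```python
def independence_design_matrix(I, J):
    """Design matrix for the loglinear independence model
    log m_ij = u + u1(i) + u2(j) on an I x J table.

    Cells are ordered row-major: (0,0), (0,1), ..., (I-1,J-1).
    Columns: intercept, then I row indicators, then J column indicators
    (an over-parameterized but convenient coding).
    """
    X = []
    for i in range(I):
        for j in range(J):
            row = [1]                                       # intercept
            row += [1 if r == i else 0 for r in range(I)]   # row effect
            row += [1 if c == j else 0 for c in range(J)]   # column effect
            X.append(row)
    return X

X = independence_design_matrix(2, 3)
print(len(X), len(X[0]))  # 6 cells, 1 + 2 + 3 = 6 columns
```

Every row has exactly three nonzero entries (intercept, one row indicator, one column indicator), which is what makes the margins the sufficient statistics for this model.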
Recovering Latent Time-Series from their Observed Sums: Network Tomography with Particle Filters
, 2004
Abstract

Cited by 7 (4 self)
Hidden variables, evolving over time, appear in multiple settings, where it is valuable to recover them, typically from observed sums. Our driving application is 'network tomography', where we need to estimate the origin-destination (OD) traffic flows to determine, e.g., who is communicating with whom in a local area network. This information allows network engineers and managers to solve problems in design, routing, configuration debugging, monitoring and pricing. Unfortunately, direct measurement of the OD traffic is usually difficult, or even impossible; instead, we can easily measure the loads on every link, that is, sums of the desired OD flows. In this paper we propose iFILTER, a method to solve this problem, which improves the state of the art by (a) introducing explicit time dependence, and by (b) using realistic, non-Gaussian marginals in the statistical models for the traffic flows, as never attempted before. We give experiments on real data, where iFILTER scales linearly with new observations and outperforms the best existing solutions in a wide variety of settings. Specifically, on real network traffic measured at CMU and at AT&T, iFILTER reduced the estimation errors by between 15% and 46% in all cases.
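The core difficulty the abstract describes is identifiability: link loads are sums of OD flows, so different flow vectors can produce identical observations. The sketch below demonstrates this on a made-up two-link network (it is not iFILTER itself, just the underdetermination that motivates adding temporal dynamics and priors).

```python
def link_loads(routes, flows):
    """Compute link loads as sums of the OD flows routed over each link.

    routes[k] is the list of link indices used by OD pair k;
    flows[k] is that pair's traffic volume.
    """
    loads = {}
    for k, path in enumerate(routes):
        for link in path:
            loads[link] = loads.get(link, 0) + flows[k]
    return loads

# Toy network (made up): OD pair 0 uses link 0, pair 1 uses link 1,
# and pair 2 takes a two-hop path over both links.
routes = [[0], [1], [0, 1]]

# Two different OD flow vectors...
f1 = [10, 20, 5]
f2 = [5, 15, 10]
# ...produce exactly the same observable link loads, so the loads alone
# cannot identify the flows; extra structure is needed.
print(link_loads(routes, f1))
print(link_loads(routes, f2))
```

Both flow vectors yield loads {0: 15, 1: 25}, even though the underlying OD traffic differs.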
Three Centuries of Categorical Data Analysis: Loglinear Models and Maximum Likelihood Estimation
Abstract

Cited by 7 (3 self)
The common view of the history of contingency tables is that it begins in 1900 with the work of Pearson and Yule, but it extends back at least into the 19th century. Moreover, it remains an active area of research today. In this paper we give an overview of this history, focusing on the development of loglinear models and their estimation via the method of maximum likelihood. S. N. Roy played a crucial role in this development with two papers co-authored with his students S. K. Mitra and Marvin Kastenbaum, appearing at roughly the temporal midpoint of this development. We then describe a problem that eluded Roy and his students: the implications of sampling zeros for the existence of maximum likelihood estimates for loglinear models. Understanding the problem of nonexistence is crucial to the analysis of large sparse contingency tables. We introduce some relevant results from the application of algebraic geometry to the study of this statistical problem.
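The nonexistence phenomenon the abstract raises can be seen in the simplest case, the independence model, where the fitted means are m_ij = (row total i)(column total j)/n: a zero margin forces zero fitted means, i.e. the MLE lies on the boundary and does not exist as a loglinear model. This is a minimal sketch of that fact, not the paper's algebraic-geometry machinery; the tables are made up.

```python
from fractions import Fraction

def independence_fit(table):
    """Fitted cell means for the independence loglinear model:
    m_ij = (row total i) * (column total j) / n, computed exactly."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    return [[Fraction(ri * cj, n) for cj in col_tot] for ri in row_tot]

# A sampling zero with positive margins: every fitted mean is still positive,
# so the MLE exists despite the zero cell.
ok = independence_fit([[5, 0], [3, 2]])

# A zero row margin: the fitted means in that row are 0, so the MLE sits
# on the boundary of the model and does not exist in the interior.
bad = independence_fit([[4, 6], [0, 0]])
print(ok)
print(bad)
```

This illustrates the key distinction the paper builds on: it is the pattern of zeros (here, a zero margin), not the mere presence of a sampling zero, that destroys existence.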
The geometry of statistical models for two-way contingency tables with fixed odds ratios. Rendiconti dell'Istituto di Matematica dell'Università di Trieste 37
, 2005
Abstract

Cited by 2 (0 self)
Abstract. We study the geometric structure of the statistical models for two-by-two contingency tables. One or two odds ratios are fixed, and the corresponding models are shown to be a portion of a ruled quadratic surface or a segment. Some pointers to the general case of two-way contingency tables are also given, and an application to case-control studies is presented.
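The quadratic surface in question is the locus p11·p22 = θ·p12·p21 intersected with the probability simplex. The sketch below parameterizes tables on that surface and checks that renormalization preserves the odds ratio; the parameterization (three free entries plus rescaling) is my own choice for illustration, not the paper's.

```python
def odds_ratio(p):
    """Odds ratio of a 2x2 probability table p = [[p11, p12], [p21, p22]]."""
    return (p[0][0] * p[1][1]) / (p[0][1] * p[1][0])

def table_with_fixed_or(theta, p11, p12, p21):
    """Complete a 2x2 table lying on the quadric p11*p22 = theta*p12*p21.

    Given theta and three entries, solve for p22 and rescale so the four
    entries sum to 1.  Rescaling leaves the odds ratio unchanged, which is
    why the model is a portion of a ruled quadratic surface in the simplex.
    """
    p22 = theta * p12 * p21 / p11
    s = p11 + p12 + p21 + p22
    return [[p11 / s, p12 / s], [p21 / s, p22 / s]]

t = table_with_fixed_or(2.0, 0.1, 0.2, 0.3)
print(odds_ratio(t))  # the odds ratio is unchanged by the rescaling
```

Varying the free entries traces out the portion of the quadric inside the simplex that the abstract describes.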
Analysis of the Binary Instrumental Variable Model
Abstract

Cited by 1 (0 self)
We give an explicit geometric characterization of the set of distributions over counterfactuals that are compatible with a given observed joint distribution for the observables in the binary instrumental variable model. This paper will appear as Chapter 25 in Heuristics, Probability and Causality: A Tribute to Judea Pearl. Pearl's seminal work on instrumental variables [Chickering and Pearl 1996; Balke and Pearl 1997] for discrete data represented a leap forward in terms of understanding: Pearl showed that, contrary to what many had supposed based on linear models, in the discrete case the assumption that a variable was an instrument could be subjected to empirical test. In
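The empirical test mentioned here is Pearl's instrumental inequality: for a discrete instrument Z, the observed distribution must satisfy, for every x, Σ_y max_z P(X = x, Y = y | Z = z) ≤ 1. A minimal checker (the distributions below are made up; this is the classical inequality, not the chapter's full geometric characterization):

```python
def instrumental_inequality_holds(p, tol=1e-9):
    """Check Pearl's instrumental inequality for a discrete IV model.

    p[z][x][y] = P(X=x, Y=y | Z=z).  The inequality requires, for every x,
        sum over y of ( max over z of p[z][x][y] ) <= 1.
    A violation empirically refutes the claim that Z is an instrument.
    """
    nz, nx, ny = len(p), len(p[0]), len(p[0][0])
    for x in range(nx):
        total = sum(max(p[z][x][y] for z in range(nz)) for y in range(ny))
        if total > 1 + tol:
            return False
    return True

# A distribution where (X, Y) is independent of Z passes trivially.
p_ok = [[[0.3, 0.2], [0.1, 0.4]],
        [[0.3, 0.2], [0.1, 0.4]]]

# A made-up distribution that violates the inequality at x = 0:
# the maxima over z sum to 0.9 + 0.9 = 1.8 > 1.
p_bad = [[[0.9, 0.0], [0.05, 0.05]],
         [[0.0, 0.9], [0.05, 0.05]]]
print(instrumental_inequality_holds(p_ok), instrumental_inequality_holds(p_bad))
```
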
William Kruskal: My Scholarly and Scientific Model
, 710
Abstract
When I arrived at the University of Chicago as an assistant professor in the summer of 1968, Bill Kruskal was department chair and he became a constant presence in my life, introducing me to new topics and people, gently advising me, encouraging me to look more deeply into almost everything we talked about. Many of the activities of my subsequent career, in statistics proper and at the interface with other fields, had their roots in my interactions with Bill during my time at Chicago. My arrival occurred just before the Democratic convention to pick a candidate for that year's presidential election. Over lunch one day I expressed to Bill an interest in the accuracy of public opinion polls and their scientific foundation. The next thing I knew, Bill had recommended me to the producers of a university television interview program that was about to air on a local station. A group of faculty ended up doing three successive panel discussion programs on polling. Norman Bradburn and Ken Prewitt were part of this effort, and I've continued to interact with both of them throughout my career. I also began to look carefully at the regular newspaper reports of the Chicago Sun–Times Poll, and Bill encouraged me to make a plan to assess its accuracy—this meant assembling a data set of predictions and of course election results. Before too long this became a working manuscript and Bill encouraged me to submit it to the Journal of the American Statistical Association (JASA) for publication.
Comparing Latent Structures of the Grade of Membership, Rasch, and Latent Class Models
DOI: 10.1007/s11336-001-0899-y