Results 1–10 of 95
Prior Probabilities
 IEEE Transactions on Systems Science and Cybernetics
, 1968
Abstract

Cited by 219 (4 self)
... the case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determined by the prior information, independently of the choice of parameters. In a certain class of problems, therefore, the prior distributions may now be claimed to be fully as "objective" as the sampling distributions. I. Background of the problem Since the time of Laplace, applications of probability theory have been hampered by difficulties in the treatment of prior information. In realistic problems of decision or inference, we often have prior information which is highly relevant to the question being asked; to fail to take it into account is to commit the most obvious inconsistency of reasoning and may lead to absurd or dangerously misleading results. As an extreme examp...
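As a minimal numerical sketch of the transformation-group argument for scale parameters (toy numbers, not taken from the paper): the invariant prior p(σ) ∝ 1/σ is the one that assigns equal mass to any two intervals related by the rescaling σ → kσ, which is what makes it parameterization-independent.

```python
import math

def log_uniform_mass(a, b):
    """Unnormalized mass of the scale-invariant prior p(sigma) ∝ 1/sigma
    on the interval [a, b]: the integral of dsigma/sigma is ln(b/a)."""
    return math.log(b / a)

# Rescaling sigma -> k*sigma maps [a, b] to [k*a, k*b]; the invariant
# prior assigns the same mass to both intervals, for any k > 0.
m1 = log_uniform_mass(1.0, 2.0)
m2 = log_uniform_mass(10.0, 20.0)   # the same interval rescaled by k = 10
print(m1, m2)  # both equal ln 2 ≈ 0.6931
```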
Some Impossibility Theorems In Econometrics With Applications To Instrumental Variables, Dynamic Models And Cointegration
 Econometrica
, 1995
Abstract

Cited by 202 (38 self)
General characterizations of valid confidence sets and tests in problems which involve locally almost unidentified (LAU) parameters are provided and applied to several econometric models. Two types of inference problems are studied: (1) inference about parameters which are not identifiable on certain subsets of the parameter space, and (2) inference about parameter transformations with singularities (discontinuities). When a LAU parameter or parametric function has an unbounded range, it is shown under general regularity conditions that any valid confidence set with level 1 − α for this parameter should be unbounded with probability close to 1 − α in the neighborhood of nonidentification subsets and should as well have a nonzero probability of being unbounded under any distribution compatible with the model: no valid confidence set that is bounded with probability one exists. These properties hold even if "identifying restrictions" are imposed. Similar results also ob...
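A toy illustration of why such confidence sets must be unbounded (hypothetical numbers, not from the paper): inverting a Fieller-type test for a ratio θ = μ1/μ2 over a grid of θ values yields a confidence set that accepts arbitrarily large |θ| whenever the denominator estimate is small relative to its standard error.

```python
def fieller_confidence_set(x1, x2, s1, s2, thetas, z=1.96):
    """Invert the test of H0: mu1 - theta*mu2 = 0 over a grid of theta.
    The accepted values form a confidence set for theta = mu1/mu2."""
    accepted = []
    for th in thetas:
        stat = (x1 - th * x2) ** 2 / (s1 ** 2 + th ** 2 * s2 ** 2)
        if stat <= z ** 2:
            accepted.append(th)
    return accepted

# Weakly identified denominator: x2 = 0.1 is small relative to s2 = 1.
grid = [t / 10.0 for t in range(-1000, 1001)]
cs = fieller_confidence_set(x1=5.0, x2=0.1, s1=1.0, s2=1.0, thetas=grid)
# Both tails of the grid are accepted while values near zero are rejected:
# the set is unbounded in both directions.
print(min(cs), max(cs))  # -100.0 100.0
```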
Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program
 Journal of the American Statistical Association
, 2010
Abstract

Cited by 127 (4 self)
Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We discuss the advantages of these methods and apply them to study the effects of Proposition 99, a large-scale tobacco control program that California implemented in 1988. We demonstrate that, following Proposition 99, tobacco consumption fell markedly in California relative to a comparable synthetic control region. We estimate that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99. Using new inferential methods proposed in this article, we demonstrate the significance of our estimates. Given that many policy interventions and events of interest in social sciences take place at an aggregate level (countries, regions, cities, etc.) and affect a small number of aggregate units, the potential applicability of synthetic control methods to comparative case studies is very large, especially in situations where traditional regression methods are not appropriate.
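A minimal sketch of the synthetic-control idea with made-up numbers (not the paper's data or implementation): choose nonnegative weights summing to one over control units so that the weighted combination matches the treated unit's pre-treatment path; with two controls this reduces to a one-dimensional grid search.

```python
# Toy pre-treatment outcome paths (hypothetical numbers, two control units).
treated_pre   = [10.0, 11.0, 12.0]
control_a_pre = [9.0, 10.0, 11.0]
control_b_pre = [12.0, 13.0, 14.0]

def pre_mse(w):
    """Pre-treatment MSE of the synthetic unit w*A + (1-w)*B vs. the treated unit."""
    return sum((t - (w * a + (1 - w) * b)) ** 2
               for t, a, b in zip(treated_pre, control_a_pre, control_b_pre)) / len(treated_pre)

# Grid search over the weight simplex (one-dimensional here: w in [0, 1]).
best_w = min((i / 1000.0 for i in range(1001)), key=pre_mse)
print(round(best_w, 3))  # 0.667
```

The fitted synthetic unit `best_w * A + (1 - best_w) * B` then serves as the counterfactual path in the post-treatment period.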
Identification, Weak Instruments, and Statistical Inference in Econometrics
 JOURNAL OF ECONOMICS
, 2003
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Abstract

Cited by 48 (19 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a metastatistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
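A sketch of one standard textbook form of the severity computation, for a one-sided Normal test of H0: μ ≤ μ0 (hypothetical numbers, not from the paper): the post-data severity with which the claim "μ > μ1" passes is the probability of observing a mean no larger than the one actually observed, were μ equal to μ1.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def severity(xbar, mu1, sigma, n):
    """Severity with which 'mu > mu1' passes given observed mean xbar:
    SEV = P(Xbar <= xbar; mu = mu1) for known sigma and sample size n."""
    return norm_cdf((xbar - mu1) * math.sqrt(n) / sigma)

# Hypothetical numbers: n = 25, sigma = 2, observed mean 0.6.
# The claim 'mu > 0' passes with high severity ...
print(severity(0.6, 0.0, 2.0, 25))   # ≈ 0.933
# ... while 'mu > 0.5' passes with severity barely above one half.
print(severity(0.6, 0.5, 2.0, 25))   # ≈ 0.599
```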
Projection-Based Statistical Inference in Linear Structural Models with Possibly Weak Instruments
, 2003
Monte Carlo test methods in econometrics
 Companion to Theoretical Econometrics’, Blackwell Companions to Contemporary Economics
, 2001
Abstract

Cited by 33 (23 self)
The authors thank three anonymous referees and the Editor Badi Baltagi for several useful comments. This work was supported by the Bank of Canada and by grants from the Canadian Network of Centres of Excellence [program on Mathematics
A robust procedure for Gaussian graphical model search from microarray data with p larger than n
 Journal of Machine Learning Research
, 2006
Abstract

Cited by 32 (4 self)
Learning of large-scale networks of interactions from microarray data is an important and challenging problem in bioinformatics. A widely used approach is to assume that the available data constitute a random sample from a multivariate distribution belonging to a Gaussian graphical model. As a consequence, the prime objects of inference are full-order partial correlations, which are partial correlations between two variables given all of the remaining ones. In the context of microarray data the number of variables exceeds the sample size, and this precludes the application of traditional structure learning procedures because a sampling version of full-order partial correlations does not exist. In this paper we consider limited-order partial correlations, which are partial correlations computed on marginal distributions of manageable size, and provide a set of rules that allow one to assess the usefulness of these quantities in deriving the independence structure of the underlying Gaussian graphical model. Furthermore, we introduce a novel structure learning procedure based on a quantity, obtained from limited-order partial correlations, that we call the non-rejection rate. The applicability and usefulness of the procedure are demonstrated by both simulated and real data.
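The building block of limited-order partial correlations is the standard first-order recursion, computable from pairwise correlations alone; a small sketch with made-up correlation values (not the paper's procedure, just the underlying quantity):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r_{xy.z}: the correlation of x and y
    after conditioning on a single variable z (standard recursion formula)."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# If x and y are correlated only through z (r_xy = r_xz * r_yz), the
# partial correlation vanishes, suggesting no direct edge between x and y.
print(partial_corr(0.35, 0.7, 0.5))   # 0.0
print(partial_corr(0.60, 0.7, 0.5))   # positive: a direct association remains
```

Unlike full-order partial correlations, this quantity only requires estimating a 3-variable marginal, which is why it remains usable when p exceeds n.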
Further results on projectionbased inference in IV regressions with weak, collinear or . . .
 JOURNAL OF ECONOMETRICS
, 2006
Network Routing
 Phil. Trans. R. Soc. Lond. A,337
, 1991
Abstract

Cited by 27 (2 self)
How should flows through a network be organized, so that the network responds sensibly to failures and overloads? The question is currently of considerable technological importance in connection with the development of computer and telecommunication networks, while in various other forms it has a long history in the fields of physics and economics. In all of these areas there is interest in how simple, local rules, often involving random actions, can produce coherent and purposeful behaviour at the macroscopic level. This paper describes some examples from these various fields, and indicates how analogies with fundamental concepts such as energy and price can provide powerful insights into the design of routing schemes for communication networks.
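A minimal sketch of the kind of behavior the question concerns (a toy graph and plain shortest-path routing, not any scheme from the paper): when a link fails, recomputing routes sends flow around the failure.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a dict-of-dicts weighted graph.
    Returns (path, total cost) from src to dst."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy network: a direct link A-C plus a longer detour via B.
net = {"A": {"B": 1.0, "C": 1.0}, "B": {"C": 1.0}, "C": {}}
print(shortest_path(net, "A", "C"))   # (['A', 'C'], 1.0)
del net["A"]["C"]                     # link A-C fails
print(shortest_path(net, "A", "C"))   # (['A', 'B', 'C'], 2.0): rerouted
```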