Causal Parameters and Policy Analysis in Economics: A Twentieth Century Retrospective
 Quarterly Journal of Economics 115 (February
Abstract

Cited by 57 (4 self)
JEL No. C10 The major contributions of twentieth century econometrics to knowledge were the definition of causal parameters when agents are constrained by resources and markets and causes are interrelated, the analysis of what is required to recover causal parameters from data (the identification problem), and clarification of the role of causal parameters in policy evaluation and in forecasting the effects of policies never previously experienced. This paper summarizes the development of those ideas by the Cowles Commission, the response to their work by structural econometricians and VAR econometricians, and the response to structural and VAR econometrics by calibrators, advocates of natural and social experiments, and by nonparametric econometricians and statisticians.
Models of integration given multiple sources of information
 Psychol. Rev
, 1990
Abstract

Cited by 49 (17 self)
Several models of information integration are developed and analyzed within the context of a prototypical pattern-recognition task. The central concerns are whether the models prescribe maximally efficient (optimal) integration and to what extent the models are psychologically valid. Evaluation, integration, and decision processes are specified for each model. Important features are whether evaluation is noisy, whether integration follows Bayes's theorem, and whether decision consists of a criterion rule or a relative goodness rule. Simulations of the models and predictions of the results by the same models are carried out to provide a measure of identifiability, or the extent to which the models can be distinguished from one another. The models are also contrasted against empirical results from tasks with 2 and 4 response alternatives and with graded responses.

Conceptual Framework. There is a growing consensus that behavior reflects the influence of multiple sources of information. Auditory and visual perception, reading and speech perception, and decision making and judgment are modulated by a wide variety of influences
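The optimal-integration case described in this abstract (noisy evaluation of each cue, integration via Bayes's theorem, then a criterion rule on the posterior) can be sketched for two independent cues and two response alternatives. The Gaussian noise model, the means, and the cue values below are illustrative assumptions, not values from the paper:

```python
# Bayesian integration of two independent noisy cues for a
# two-alternative decision -- a minimal sketch with hypothetical
# Gaussian evaluation noise (means and noise levels are assumptions).
import numpy as np

def posterior_A(x1, x2, mu_A=1.0, mu_B=-1.0, sigma1=1.0, sigma2=2.0):
    """P(alternative A | cues x1, x2), combining the two likelihoods
    by Bayes's theorem under a uniform prior."""
    def lik(x, mu, s):
        # Gaussian likelihood (normalizing constants cancel in the ratio)
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / s

    num = lik(x1, mu_A, sigma1) * lik(x2, mu_A, sigma2)
    den = num + lik(x1, mu_B, sigma1) * lik(x2, mu_B, sigma2)
    return num / den

# Criterion rule: respond "A" iff the posterior exceeds 0.5.
# A relative-goodness rule would instead respond "A" with
# probability equal to the posterior itself.
print(posterior_A(0.8, 0.2) > 0.5)
```

Note how the noisier cue (larger sigma) automatically receives less weight in the posterior, which is the sense in which Bayesian integration is maximally efficient.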
Multiresolution image classification by hierarchical modeling with two-dimensional hidden Markov models
 IEEE TRANS. INFORMATION THEORY
, 2000
Abstract

Cited by 49 (9 self)
This paper treats a multiresolution hidden Markov model for classifying images. Each image is represented by feature vectors at several resolutions, which are statistically dependent as modeled by the underlying state process, a multiscale Markov mesh. Unknowns in the model are estimated by maximum likelihood, in particular by employing the expectation-maximization algorithm. An image is classified by finding the optimal set of states with maximum a posteriori probability. States are then mapped into classes. The multiresolution model enables multiscale information about context to be incorporated into classification. Suboptimal algorithms based on the model provide progressive classification that is much faster than the algorithm based on single-resolution hidden Markov models.
Fishing for Exactness
 In Proceedings of the South-Central SAS Users Group Conference
, 1996
Abstract

Cited by 47 (5 self)
Statistical methods for automatically identifying dependent word pairs (i.e., dependent bigrams) in a corpus of natural language text have traditionally relied on asymptotic tests of significance. This paper suggests that Fisher's exact test is a more appropriate test, given the skewed and sparse data samples typical of this problem. Both theoretical and experimental comparisons between Fisher's exact test and a variety of asymptotic tests (the t-test, Pearson's chi-square test, and the likelihood-ratio chi-square test) are presented. These comparisons show that Fisher's exact test is more reliable in identifying dependent word pairs. The usefulness of Fisher's exact test extends to other problems in statistical natural language processing, as skewed and sparse data appear to be the rule in natural language. The experiment presented in this paper was performed using PROC FREQ of the SAS System.

Introduction. Due to advances in computing power and the increasing availability of l...
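The test this abstract advocates is straightforward to sketch. The paper ran it with PROC FREQ in SAS; what follows is a minimal pure-Python version of the one-sided hypergeometric computation on a 2x2 bigram contingency table, with hypothetical corpus counts:

```python
# Fisher's exact test (one-sided) for bigram dependence -- a minimal
# pure-Python sketch; the counts below are hypothetical, not from the paper.
from math import comb

def fisher_exact_greater(a, b, c, d):
    """P(observing >= a co-occurrences) under the hypergeometric null
    for the 2x2 table [[a, b], [c, d]]:
      a = count(w1 w2)      b = count(w1, not w2)
      c = count(not w1, w2) d = count(not w1, not w2)
    """
    r1 = a + b            # occurrences of w1
    c1 = a + c            # occurrences of w2
    n = a + b + c + d     # total bigrams in the corpus
    k_max = min(r1, c1)
    tail = sum(comb(r1, k) * comb(n - r1, c1 - k)
               for k in range(a, k_max + 1))
    return tail / comb(n, c1)

# A candidate bigram seen 30 times where independence predicts ~0.15:
p = fisher_exact_greater(30, 70, 120, 99780)
print(p < 0.05)  # a small p-value suggests the pair is dependent
```

Because the tail probability is computed exactly from the hypergeometric distribution, the result remains valid for the sparse, skewed tables where the asymptotic chi-square approximations break down.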
Constrained-Realization Monte-Carlo Method for Hypothesis Testing
 Physica D
Abstract

Cited by 42 (1 self)
We compare two theoretically distinct approaches to generating artificial (or "surrogate") data for testing hypotheses about a given data set. The first and more straightforward approach is to fit a single "best" model to the original data and then generate surrogate data sets that are "typical realizations" of that model. The second approach concentrates not on the model but directly on the original data; it attempts to constrain the surrogate data sets so that they exactly agree with the original data for a specified set of sample statistics. Examples of these two approaches are provided for two simple cases: a test for deviations from a Gaussian distribution, and a test for serial dependence in a time series. Additionally, we consider tests for nonlinearity in time series based on a Fourier transform (FT) method and on more conventional autoregressive moving-average (ARMA) fits to the data. The comparative performance of hypothesis testing schemes based on these two approaches...
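For the FT-based nonlinearity test mentioned in this abstract, the constrained-realization idea is commonly implemented by phase randomization: each surrogate is forced to agree exactly with the original series on the sample power spectrum (and hence the autocorrelation). A minimal NumPy sketch, where the example series and seed are illustrative assumptions:

```python
# Phase-randomized FT surrogates -- a minimal sketch of the
# constrained-realization approach: every surrogate exactly preserves
# the sample power spectrum of the original series.
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(512))  # an example time series

def ft_surrogate(x, rng):
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    phases[0] = 0.0          # keep the DC component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist component real
    # same amplitudes as the data, randomized phases
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

s = ft_surrogate(x, rng)
# the power spectra agree up to floating-point error
print(np.allclose(np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(x))))
```

A typical-realization surrogate, by contrast, would be a fresh simulation from a fitted model (e.g. an ARMA fit), matching the data's statistics only on average rather than exactly.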
Stochastic Plans for Robotic Manipulation
, 1990
Abstract

Cited by 35 (7 self)
Geometric uncertainty is unavoidable when programming robots for physical applications. We propose a stochastic framework for manipulation planning where plans are ranked on the basis of expected cost. That is, we express the desirability of states and actions with a cost function and describe uncertainty with probability distributions. We illustrate the approach with a new design for a programmable parts feeder, a mechanism that orients two-dimensional parts using a sequence of open-loop mechanical motions. We present a planning algorithm that accepts an n-sided polygonal part as input and, in time O(n²), generates a stochastically optimal plan for orienting the part.
Separate modifiability, mental modules, and the use of pure and composite measures to reveal them
 ACTA PSYCHOLOGICA
, 2001
"... ..."
An Extended Class of Instrumental Variables for the Estimation of Causal Effects
 UCSD DEPT. OF ECONOMICS DISCUSSION PAPER
, 1996
Abstract

Cited by 32 (13 self)
This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regressors and standard “valid” and “relevant” instrumental variables. We then build on this interpretation to characterize extended instrumental variables (EIV) methods, that is, methods that make use of variables that need not be valid instruments in the standard sense, but that are nevertheless instrumental in the recovery of causal effects of interest. After examining special cases of single and double EIV methods, we provide necessary and sufficient conditions for the identification of causal effects by means of EIV and provide consistent and asymptotically normal estimators for the effects of interest.
Free Distribution or Cost-Sharing? Evidence from a Randomized Malaria Prevention Experiment.
 QUARTERLY JOURNAL OF ECONOMICS
, 2010
Abstract

Cited by 32 (3 self)
It is often argued that cost-sharing (charging a subsidized, positive price) for a health product is necessary to avoid wasting resources on those who will not use or do not need the product. We explore this argument through a field experiment in Kenya, in which we randomized the price at which prenatal clinics could sell long-lasting antimalarial insecticide-treated bed nets (ITNs) to pregnant women. We find no evidence that cost-sharing reduces wastage on those who will not use the product: women who received free ITNs are no less likely to use them than those who paid subsidized positive prices. We also find no evidence that cost-sharing induces selection of women who need the net more: those who pay higher prices appear no sicker than the average prenatal client in the area in terms of measured anemia (an important indicator of malaria). Cost-sharing does, however, considerably dampen demand. We find that uptake drops by sixty percentage points when the price of ITNs increases from zero to $0.60 (i.e., from a 100% to a 90% subsidy), a price still $0.15 below the price at which ITNs are currently sold to pregnant women in Kenya. We combine our estimates in a cost-effectiveness analysis of the impact of ITN prices on child mortality that incorporates both private and social returns to ITN usage. Overall, our results suggest that free distribution of ITNs could save many more lives than cost-sharing programs have achieved so far and, given the large positive externality associated with widespread usage of ITNs, would likely do so at a lesser cost per life saved.