Results 1 - 10 of 66

Of time and space: The contemporary relevance of the Chicago School
 Social Forces 75(4)
, 1997
"... This essay argues that sociology's major current problems are intellectual. It traces these problems to the exhaustion of the current "variables paradigm " and considers the Chicago School's "contextualist paradigm " as an alternative. Examples of new methodologies foun ..."
Abstract

Cited by 56 (0 self)
This essay argues that sociology's major current problems are intellectual. It traces these problems to the exhaustion of the current "variables paradigm" and considers the Chicago School's "contextualist paradigm" as an alternative. Examples of new methodologies founded on contextual thinking are considered. Anniversaries are often valedictions. A centennial sometimes shows an association to be moribund, just as a diamond jubilee may reveal a queen's irrelevance and a golden anniversary finds many a marriage dead. By contrast, living social relations celebrate themselves daily. Anniversaries merely punctuate their excitement. What then are we to make of this centennial year of sociology at the University of Chicago? Is it simply a time for eulogy? After all, Chicago dominance of sociology is half a century gone. And while the Chicago tradition renewed itself after the war in Goffman, Becker, Janowitz, and their like, many of Chicago's most distinguished alumni since its dominant years belong more to the mainstream than to the Chicago tradition proper: methodologists like Stouffer and Duncan, demographers like Hauser and Keyfitz, macrosociologists like Bendix and Wilensky. Nonetheless, at the heart of the Chicago tradition lie insights central to the advancement of contemporary sociology. Therefore, I do not today eulogize the Chicago tradition. One eulogizes only the dead. This article sparked a lot of commentary. Surprisingly, helpful comments came not only from people I knew well, but also from relative strangers. I have therefore had more help with this article than with virtually anything else I have written. The following all ...
A Mixture Likelihood Approach for Generalized Linear Models
 Journal of Classification
, 1995
"... Abstract: A mixture model approach is developed that simultaneously estimates the posterior membership robabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some me ..."
Abstract

Cited by 46 (1 self)
Abstract: A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of the exponential family, to a set of specified covariates within each class. We demonstrate how this approach handles many of the existing latent class regression procedures as special cases, as well as a host of other parametric specifications in the exponential family heretofore not mentioned in the latent class literature. As such we generalize the McCullagh and Nelder approach to a latent class framework. The parameters are estimated using maximum likelihood, and an EM algorithm for estimation is provided. A Monte Carlo study of the performance of the algorithm for several distributions is provided, and the model is illustrated in two empirical applications.
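The EM idea behind latent class models of this kind can be sketched in a minimal special case: a two-component Poisson mixture, i.e. an intercept-only latent class GLM with no covariates. This is an illustrative sketch, not the paper's estimator; the function name and the toy data are invented for the example.

```python
import math

def em_poisson_mixture(y, iters=200):
    # Two-component Poisson mixture fit by EM.
    # E-step: posterior membership probabilities for each observation.
    # M-step: re-estimate the mixing weight and the two Poisson means.
    pi, lam = 0.5, [min(y) + 0.5, max(y) + 0.5]  # crude starting values
    for _ in range(iters):
        # E-step: P(component 0 | y_i); the y_i! term cancels in the ratio.
        r = []
        for yi in y:
            p0 = pi * math.exp(-lam[0]) * lam[0] ** yi
            p1 = (1 - pi) * math.exp(-lam[1]) * lam[1] ** yi
            r.append(p0 / (p0 + p1))
        # M-step: weighted updates using the posterior probabilities.
        n0 = sum(r)
        pi = n0 / len(y)
        lam[0] = sum(ri * yi for ri, yi in zip(r, y)) / n0
        lam[1] = sum((1 - ri) * yi for ri, yi in zip(r, y)) / (len(y) - n0)
    return pi, lam

# Toy data from two well-separated Poisson-like populations
# (means roughly 1.4 and 10, mixed half and half).
y = [1] * 30 + [2] * 20 + [9] * 25 + [11] * 25
pi, lam = em_poisson_mixture(y)
```

With well-separated components the posterior probabilities are nearly 0 or 1, so the M-step reduces to per-group sample means; a full latent class GLM replaces each mean by a weighted GLM fit within each class.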
Efficient estimation in the bivariate normal copula model: normal margins are least favorable
 BERNOULLI
, 1997
"... Consider semiparametric bivariate copula models in which the family of copula functions is parametrized by a Euclidean parameter of interest and in which the two unknown marginal distributions are the (infinite dimensional) nuisance parameters. The efficient score for can be characterized in terms ..."
Abstract

Cited by 43 (2 self)
Consider semiparametric bivariate copula models in which the family of copula functions is parametrized by a Euclidean parameter of interest and in which the two unknown marginal distributions are the (infinite-dimensional) nuisance parameters. The efficient score for the parameter of interest can be characterized in terms of the solutions of two coupled Sturm-Liouville equations. In case the family of copula functions corresponds to the normal distributions with mean 0, variance 1, and a given correlation, the solution of these equations is given, and we thereby show that the van der Waerden normal scores rank correlation coefficient is asymptotically efficient. We also show that the bivariate normal model with equal variances constitutes the least favorable parametric submodel. Finally, we discuss the interpretation of the absolute value of the correlation parameter in the normal copula model as the maximum (monotone) correlation coefficient.
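The van der Waerden statistic mentioned above is straightforward to compute: replace each observation by the standard-normal quantile of its rank, then take the Pearson correlation of the scores. A minimal sketch (assuming no ties in the data; the function name is invented for the example):

```python
from statistics import NormalDist

def normal_scores_correlation(x, y):
    # Van der Waerden normal-scores rank correlation:
    # map each value to Phi^{-1}(rank / (n + 1)), then compute the
    # ordinary Pearson correlation of the resulting scores.
    n = len(x)
    nd = NormalDist()
    def scores(v):
        ranks = {val: i + 1 for i, val in enumerate(sorted(v))}  # assumes no ties
        return [nd.inv_cdf(ranks[val] / (n + 1)) for val in v]
    sx, sy = scores(x), scores(y)
    mx, my = sum(sx) / n, sum(sy) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(sx, sy))
    vx = sum((a - mx) ** 2 for a in sx)
    vy = sum((b - my) ** 2 for b in sy)
    return cov / (vx * vy) ** 0.5

x = [1.2, 3.4, 2.2, 5.1, 4.0, 0.3]
y = [2.1, 6.0, 3.5, 9.9, 7.2, 1.0]  # same ranks as x, so the statistic is 1
r = normal_scores_correlation(x, y)
```

Because the statistic depends only on ranks, it is invariant to the unknown marginal transformations, which is what makes it a natural candidate for efficiency in the semiparametric copula model.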
From association to causation: Some remarks on the history of statistics
 Statist. Sci
, 1999
"... The “numerical method ” in medicine goes back to Pierre Louis ’ study of pneumonia (1835), and John Snow’s book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More ..."
Abstract

Cited by 36 (7 self)
The "numerical method" in medicine goes back to Pierre Louis' study of pneumonia (1835) and John Snow's book on the epidemiology of cholera (1855). Snow took advantage of natural experiments and used convergent lines of evidence to demonstrate that cholera is a waterborne infectious disease. More recently, investigators in the social and life sciences have used statistical models and significance tests to deduce cause-and-effect relationships from patterns of association; an early example is Yule's study on the causes of poverty (1899). In my view, this modeling enterprise has not been successful. Investigators tend to neglect the difficulties in establishing causal relations, and the mathematical complexities obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance. Snow's work on cholera will be contrasted with modern studies that depend on statistical models and tests of significance. The examples may help to clarify the limits of current statistical techniques for making causal inferences from patterns of association.
Statistical Inference and Data Mining
, 1996
"... es of probability distributions, estimation, hypothesis testing, model scoring, Gibb's sampling, rational decision making, causal inference, prediction, and model averaging. For a rigorous survey of statistics, the mathematically inclined reader should see [7]. Due to space limitations ..."
Abstract

Cited by 33 (3 self)
... es of probability distributions, estimation, hypothesis testing, model scoring, Gibbs sampling, rational decision making, causal inference, prediction, and model averaging. For a rigorous survey of statistics, the mathematically inclined reader should see [7]. Due to space limitations, we must also ignore a number of interesting topics, including time series analysis and meta-analysis. Probability Distributions. The statistical literature contains mathematical characterizations of a wealth of probability distributions, as well as properties of random variables (functions defined on the "events" to which a probability measure assigns values). Important relations among probability distributions include marginalization (summing over a subset of values) and conditionalization (forming a conditional probability measure from a probability measure on a sample space and some event of positive probability). Essential relations among random variables ...
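The two relations named in the abstract, marginalization and conditionalization, can be made concrete on a tiny discrete joint distribution. A minimal sketch (the variable names and the toy numbers are invented for the example):

```python
# Joint distribution of two binary variables A and B as a dict:
# keys are (a, b) outcomes, values are probabilities summing to 1.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def marginal_a(joint):
    # Marginalization: sum the joint probabilities over the values of B.
    m = {}
    for (a, b), p in joint.items():
        m[a] = m.get(a, 0.0) + p
    return m

def condition_on_b(joint, b_val):
    # Conditionalization: restrict to the event B = b_val (which must
    # have positive probability) and renormalize.
    event = {k: p for k, p in joint.items() if k[1] == b_val}
    z = sum(event.values())  # P(B = b_val)
    return {k: p / z for k, p in event.items()}

pa = marginal_a(joint)            # P(A): {0: 0.5, 1: 0.5}
cond = condition_on_b(joint, 1)   # P(A, B | B = 1)
```

The positive-probability requirement in the abstract shows up directly as the renormalizing constant z, which must be nonzero.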
From association to causation via regression
 Indiana: University of Notre Dame
, 1997
"... For nearly a century, investigators in the social sciences have used regression models to deduce causeandeffect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend ..."
Abstract

Cited by 31 (7 self)
For nearly a century, investigators in the social sciences have used regression models to deduce cause-and-effect relationships from patterns of association. Path models and automated search procedures are more recent developments. In my view, this enterprise has not been successful. The models tend to neglect the difficulties in establishing causal relations, and the mathematical complexities tend to obscure rather than clarify the assumptions on which the analysis is based. Formal statistical inference is, by its nature, conditional. If maintained hypotheses A, B, C, ... hold, then H can be tested against the data. However, if A, B, C, ... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work, a principle honored more often in the breach than the observance.
Applications of generalized method of moments estimation
 Journal of Economic Perspectives
, 2001
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 25 (0 self)
 Add to MetaCart
(Show Context)
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
The Design Argument
, 2004
"... The design argument is one of three main arguments for the existence of God; the others are the ontological argument and the cosmological argument. Unlike the ontological argument, the design argument and the cosmological argument are a posteriori. And whereas the cosmological argument could focus o ..."
Abstract

Cited by 17 (5 self)
The design argument is one of three main arguments for the existence of God; the others are the ontological argument and the cosmological argument. Unlike the ontological argument, the design argument and the cosmological argument are a posteriori. And whereas the cosmological argument could focus on any present event to get the ball rolling (arguing that it must trace back to a first cause, namely God), design theorists are usually more selective. Design arguments have typically been of two types – organismic and cosmic. Organismic design arguments start with the observation that organisms have features that adapt them to the environments in which they live and that exhibit a kind of delicacy. Consider, for example, the vertebrate eye. This organ helps organisms survive by permitting them to perceive objects in their environment. And were the parts of the eye even slightly different in their shape and assembly, the resulting organ would not allow us to see. Cosmic design arguments begin with an observation concerning features of the entire cosmos – the universe obeys simple laws, it has a kind of stability, its physical features permit life and intelligent life to exist. However, not all design arguments fit into these two neat compartments. Kepler, for example, thought that the face we see when we look at the moon requires explanation in terms of intelligent design. Still, the common thread is that design theorists
Recursive estimation in econometrics
 Computational Statistics and Data Analysis
, 2003
"... An account is given of recursive regression and Kalman filtering that gathers the important results and the ideas that lie behind them. It emphasises areas where econometricians have made contributions, including methods for handling the initialvalue problem associated with nonstationary processes ..."
Abstract

Cited by 15 (3 self)
An account is given of recursive regression and Kalman filtering that gathers the important results and the ideas that lie behind them. It emphasises areas where econometricians have made contributions, including methods for handling the initial-value problem associated with nonstationary processes and algorithms for fixed-interval smoothing.
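The core of recursive regression is the recursive least squares update, in which each new observation adjusts the current coefficient estimate by a gain times the prediction error. A minimal sketch for a two-parameter model (the function name, the diffuse prior 1e6, and the noiseless toy data are choices made for the example, and the initial-value problem discussed in the paper is sidestepped here by that large prior):

```python
def rls_update(theta, P, x, y):
    # One recursive least squares step: fold observation (x, y) into the
    # current estimate theta and its scaled "covariance" matrix P.
    # theta and x are length-2 lists; P is a 2x2 list of lists.
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = 1.0 + x[0] * Px[0] + x[1] * Px[1]
    K = [Px[0] / denom, Px[1] / denom]            # gain vector
    err = y - (x[0] * theta[0] + x[1] * theta[1])  # one-step prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    P = [[P[i][j] - K[i] * Px[j] for j in range(2)] for i in range(2)]
    return theta, P

# Recover y = 2 + 3t from noiseless data, starting from a diffuse prior.
theta, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for t in range(1, 11):
    theta, P = rls_update(theta, P, [1.0, float(t)], 2.0 + 3.0 * t)
```

The Kalman filter generalizes this recursion by letting the coefficients themselves evolve over time according to a state equation, and fixed-interval smoothing runs a backward pass over the same quantities.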
Exploiting Hidden Meanings Using Bilingual Text
 In A. Gelbukh (Ed.), Lecture Notes in Computer Science 2945: Computational Linguistics and Intelligent Text Processing: Fifth International Conference, CICLing 2004 Proceedings (pp. 283–299)
, 2004
"... The last decade has taught computational linguists that high performance on broadcoverage natural language processing tasks is best obtained using supervised learning techniques, which require annotation of large quantities of training data. But annotated text is hard to obtain. ..."
Abstract

Cited by 13 (0 self)
The last decade has taught computational linguists that high performance on broad-coverage natural language processing tasks is best obtained using supervised learning techniques, which require annotation of large quantities of training data. But annotated text is hard to obtain.