Results 1 – 10 of 164,179
Hierarchical model-based motion estimation
, 1992
"... This paper describes a hierarchical estimation framework for the computation of diverse representations of motion information. The key features of the resulting framework (or family of algorithms) are a global model that constrains the overall structure of the motion estimated, a local model that ..."
Abstract

Cited by 664 (15 self)
New results in linear filtering and prediction theory
 TRANS. ASME, SER. D, J. BASIC ENG
, 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Abstract

Cited by 607 (0 self)
in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest
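The variance equation referred to above can be illustrated in the scalar case. The sketch below is a minimal pure-Python illustration (parameter names and values are assumptions for the example, not taken from the paper): it iterates the discrete covariance recursion of a scalar Kalman filter and shows the error variance settling to a steady state regardless of its starting value.

```python
# Illustrative scalar Kalman filter variance recursion (Riccati-type):
# model x_{k+1} = a*x_k + w,  y_k = c*x_k + v,  with Var(w)=q, Var(v)=r.
def kalman_variance(a, c, q, r, p0, steps):
    """Iterate the covariance recursion; return the sequence of P values."""
    p = p0
    out = []
    for _ in range(steps):
        p_pred = a * a * p + q                 # predict: propagate covariance
        k = p_pred * c / (c * c * p_pred + r)  # optimal (Kalman) gain
        p = (1.0 - k * c) * p_pred             # update: measurement shrinks variance
        out.append(p)
    return out

ps = kalman_variance(a=0.9, c=1.0, q=0.1, r=0.5, p0=5.0, steps=50)
# For a stable stationary system, the iterates converge to a fixed point of
# the recursion, which fully specifies the steady-state optimal filter.
```

The convergence of this recursion to a unique positive fixed point is what lets the variance equation "completely specify" the filter in the stationary, infinite-interval case.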
Estimating Wealth Effects without Expenditure Data— or Tears
 Policy Research Working Paper 1980, The World
, 1998
"... Abstract: We use the National Family Health Survey (NFHS) data collected in Indian states in 1992 and 1993 to estimate the relationship between household wealth and the probability a child (aged 6 to 14) is enrolled in school. A methodological difficulty to overcome is that the NFHS, modeled closely ..."
Abstract

Cited by 871 (16 self)
Bayesian density estimation and inference using mixtures.
 J. Amer. Statist. Assoc.
, 1995
"... JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about J ..."
Abstract

Cited by 653 (18 self)
We describe and illustrate Bayesian inference in models for density estimation using mixtures of Dirichlet processes. These models provide natural settings for density estimation and are exemplified by special cases where data are modeled as a sample from
Pegasos: Primal Estimated subgradient solver for SVM
"... We describe and analyze a simple and effective stochastic subgradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Abstract

Cited by 542 (20 self)
runtime of our method is Õ(d/(λɛ)), where d is a bound on the number of nonzero features in each example. Since the runtime does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non
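The stochastic subgradient step described in this abstract can be sketched in a few lines. The following is a minimal pure-Python illustration of a Pegasos-style update (toy data and hyperparameters are assumptions for the example, not the authors' code): at step t, sample one example, use step size 1/(λt), shrink w by the regularizer's gradient, and add the hinge-loss subgradient on a margin violation.

```python
import random

def pegasos(data, lam=0.1, iters=2000, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear SVM."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    for t in range(1, iters + 1):
        x, y = rng.choice(data)               # sample a single example
        eta = 1.0 / (lam * t)                 # step size 1/(lambda * t)
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # shrink w (gradient of the L2 regularizer) ...
        w = [(1.0 - eta * lam) * wi for wi in w]
        # ... then, on a margin violation, step along the hinge subgradient
        if margin < 1.0:
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Toy linearly separable data: label is the sign of the first coordinate.
data = [([1.0, 0.2], 1), ([0.8, -0.1], 1), ([-1.0, 0.1], -1), ([-0.9, -0.3], -1)]
w = pegasos(data)
```

Because each iteration touches a single example, the per-iteration cost is independent of the training-set size, which is the property the abstract highlights.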
The file drawer problem and tolerance for null results
 Psychological Bulletin
, 1979
"... For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the "file drawer problem" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the stud ..."
Abstract

Cited by 497 (0 self)
% of the studies that show nonsignificant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed. Both behavioral researchers and statisticians have long suspected that the studies published in the behavioral
A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
 INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
, 1995
"... We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), te ..."
Abstract

Cited by 1283 (11 self)
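As a concrete illustration of the k-fold cross-validation the abstract compares, here is a minimal pure-Python sketch (the majority-class "model" is a placeholder stand-in for any classifier, not something from the paper): partition the indices into k folds, train on k−1 folds, score on the held-out fold, and average.

```python
def k_fold_indices(n, k):
    """Partition range(n) into k roughly equal folds (round-robin)."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def cross_val_accuracy(xs, ys, k, train, predict):
    """Average held-out accuracy over k folds."""
    folds = k_fold_indices(len(xs), k)
    accs = []
    for f in folds:
        held = set(f)
        train_x = [x for i, x in enumerate(xs) if i not in held]
        train_y = [y for i, y in enumerate(ys) if i not in held]
        model = train(train_x, train_y)
        correct = sum(predict(model, xs[i]) == ys[i] for i in f)
        accs.append(correct / len(f))
    return sum(accs) / k

# Trivial majority-class "classifier" used as the stand-in model.
def train_majority(xs, ys):
    return max(set(ys), key=ys.count)

def predict_majority(model, x):
    return model

xs = list(range(10))
ys = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
acc = cross_val_accuracy(xs, ys, k=5, train=train_majority, predict=predict_majority)
```

Because every example is held out exactly once, the averaged fold accuracy is the nearly-unbiased accuracy estimate the comparison in the paper is concerned with.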
Using Maimonides’ Rule to Estimate the Effect of Class Size on Scholastic Achievement
 QUARTERLY JOURNAL OF ECONOMICS
, 1999
"... The twelfth century rabbinic scholar Maimonides proposed a maximum class size of 40. This same maximum induces a nonlinear and nonmonotonic relationship between grade enrollment and class size in Israeli public schools today. Maimonides’ rule of 40 is used here to construct instrumental variables e ..."
Abstract

Cited by 582 (40 self)
estimates of effects of class size on test scores. The resulting identification strategy can be viewed as an application of Donald Campbell’s regression-discontinuity design to the class-size question. The estimates show that reducing class size induces a significant and substantial increase in test scores
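The instrumental-variables estimator behind this design can be illustrated in its simplest (single-instrument, Wald/2SLS) form. The sketch below is a pure-Python toy example with made-up numbers, not the paper's data or specification: the IV slope is cov(z, y) / cov(z, x), where z is the instrument, x the endogenous regressor, and y the outcome.

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Population covariance of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_slope(z, x, y):
    """Wald / single-instrument IV estimator: cov(z, y) / cov(z, x)."""
    return cov(z, y) / cov(z, x)

# Toy data: the instrument z shifts x, and y depends on x with slope 2,
# so the IV estimator should recover 2 exactly.
z = [0, 0, 1, 1, 0, 1]
x = [1, 2, 3, 4, 1, 5]
y = [2 * xi for xi in x]
slope = iv_slope(z, x, y)
```

In the paper's setting, the instrument is the predicted class size implied by Maimonides' rule of 40; here z is just an illustrative binary stand-in.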
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ ℝ^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract

Cited by 879 (14 self)
‖x̂ − x‖²_ℓ₂ ≤ C² · 2 log p · (σ² + Σᵢ min(x²ᵢ, σ²)). Our results are non-asymptotic and we give values for the constant C. In short, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information
How much should we trust differences-in-differences estimates?
, 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on femal ..."
Abstract

Cited by 828 (1 self)
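The DD estimator under discussion reduces, in the two-group two-period case, to a difference of differences in group means. The sketch below is a pure-Python toy illustration with made-up numbers (not the paper's data), showing the basic arithmetic the serial-correlation critique applies to.

```python
def dd_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-group, two-period differences-in-differences:
    (treated post - treated pre) - (control post - control pre)."""
    m = lambda v: sum(v) / len(v)
    return (m(treat_post) - m(treat_pre)) - (m(ctrl_post) - m(ctrl_pre))

# Treated group's mean rises by 5, control's by 2, so DD attributes
# an effect of 3 to the treatment.
effect = dd_estimate([10, 12], [15, 17], [8, 10], [10, 12])
```

With many periods per state, the paper's point is that the outcome series feeding these means are serially correlated, so naive standard errors for this estimate are too small.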