Results 1–7 of 7
Deconvoluting kernel density estimators
Statistics, 1990
Abstract

Cited by 63 (7 self)
This paper considers estimation of a continuous bounded probability density when observations from the density are contaminated by additive measurement errors having a known distribution. Properties of the estimator obtained by deconvolving a kernel estimator of the observed data are investigated. When the kernel used is sufficiently smooth, the deconvolved estimator is shown to be pointwise consistent and bounds on its integrated mean squared error are derived. Very weak assumptions are made on the measurement-error density, thereby permitting a comparison of the effects of different types of measurement error on the deconvolved estimator.
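The deconvolution step this abstract describes can be sketched numerically: divide the empirical characteristic function of the contaminated sample by the known error characteristic function, damp the result with the Fourier transform of a smooth kernel, and invert. The Gaussian error distribution, bandwidth, and specific kernel below are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def deconv_kde(x_grid, data, h, sigma_err):
    """Deconvoluting kernel density estimate on x_grid (sketch).

    Assumes additive N(0, sigma_err^2) measurement error (a hypothetical
    choice; the paper only requires a known error distribution) and a
    kernel whose Fourier transform is (1 - t^2)^3 on [-1, 1], which is
    smooth and compactly supported.
    """
    t = np.linspace(-1.0, 1.0, 801)
    dt = t[1] - t[0]
    phi_K = (1.0 - t**2) ** 3
    # empirical characteristic function of the contaminated sample at t/h
    ecf = np.exp(1j * np.outer(t / h, data)).mean(axis=1)
    # characteristic function of the Gaussian error at t/h (never zero,
    # so dividing by it is well defined)
    phi_err = np.exp(-0.5 * (sigma_err * t / h) ** 2)
    weights = phi_K * ecf / phi_err
    # Fourier inversion, approximated by a simple Riemann sum
    return np.array([
        np.real(np.sum(np.exp(-1j * (t / h) * x) * weights)) * dt
        for x in x_grid
    ]) / (2.0 * np.pi * h)

# contaminated sample: true density N(0, 1), error N(0, 0.5^2)
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 2000) + rng.normal(0.0, 0.5, 2000)
grid = np.linspace(-4.0, 4.0, 81)
fhat = deconv_kde(grid, obs, h=0.4, sigma_err=0.5)
```

Because the Gaussian error characteristic function decays quickly, the division inflates high-frequency noise; the compactly supported kernel transform is what keeps the integral finite, matching the abstract's emphasis on a sufficiently smooth kernel.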
Modeling for Optimal Probability Prediction
In Proceedings of the Nineteenth International Conference on Machine Learning, 2002
Abstract

Cited by 7 (0 self)
We present a general modeling method for optimal probability prediction over future observations, in which model dimensionality is determined as a natural byproduct. This new method yields several estimators, and we establish theoretically that they are optimal (either overall or under stated restrictions) when the number of free parameters is infinite.
A new approach to fitting linear models in high dimensional spaces, 2000
Abstract

Cited by 2 (0 self)
This thesis presents a new approach to fitting linear models, called “pace regression”, which also overcomes the dimensionality determination problem. Its optimality in minimizing the expected prediction loss is theoretically established when the number of free parameters is infinitely large. In this sense, pace regression outperforms existing procedures for fitting linear models. Dimensionality determination, a special case of fitting linear models, turns out to be a natural byproduct. A range of simulation studies is conducted; the results support the theoretical analysis. Throughout the thesis, a deeper understanding is gained of the problem of fitting linear models. Many key issues are discussed. Existing procedures, namely OLS, AIC, BIC, RIC, CIC, CV(d), BS(m), RIDGE, NN-GAROTTE and LASSO, are reviewed and compared, both theoretically and empirically, with the new methods. Estimating a mixing distribution is an indispensable part of pace regression. A measure-based minimum distance approach, including probability measures and nonnegative measures, is proposed, and strongly consistent estimators are produced. Of all minimum distance methods for estimating a mixing distribution, only the ...
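The measure-based minimum distance idea mentioned at the end of this abstract can be illustrated in a deliberately simplified form: place candidate support points on a fixed grid and choose nonnegative weights so that the fitted mixture CDF is as close as possible to the empirical CDF. The normal-means model, grid, and least-squares distance below are assumptions for illustration, not the thesis's exact construction.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def md_mixing(z, grid):
    """Minimum-distance estimate of a mixing distribution (sketch).

    Assumed model for illustration: z_i ~ N(theta_i, 1) with theta_i drawn
    from an unknown mixing distribution G. G is estimated by nonnegative
    weights on `grid` chosen so the mixture CDF matches the empirical CDF
    in least squares; normalizing the weights turns the fitted nonnegative
    measure into a probability measure.
    """
    z = np.sort(np.asarray(z, dtype=float))
    ecdf = (np.arange(len(z)) + 0.5) / len(z)
    # CDF of N(theta_j, 1) evaluated at each data point, one column per grid point
    A = norm.cdf(z[:, None] - grid[None, :])
    w, _ = nnls(A, ecdf)          # nonnegative least squares fit
    return w / w.sum()

rng = np.random.default_rng(1)
theta = rng.choice([0.0, 3.0], size=2000)      # true mixing distribution: two atoms
z = theta + rng.normal(size=2000)
grid = np.linspace(-2.0, 5.0, 29)
w = md_mixing(z, grid)
```

With well-separated atoms, the recovered weights concentrate near the true support points, which is the behavior the strong-consistency claims in the abstract concern.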
Pace Regression, 1999
Abstract

Cited by 2 (0 self)
This paper articulates a new method of linear regression, “pace regression,” that addresses many drawbacks of standard regression reported in the literature, particularly the subset selection problem. Pace regression improves on classical ordinary least squares (OLS) regression by evaluating the effect of each variable and using a clustering analysis to improve the statistical basis for estimating their contribution to the overall regression. As well as outperforming OLS, it also outperforms, in a remarkably general sense, other linear modeling techniques in the literature, including subset selection procedures, which seek a reduction in dimensionality that falls out as a natural byproduct of pace regression. The paper defines six procedures that share the fundamental idea of pace regression, all of which are theoretically justified in terms of asymptotic performance. Experiments confirm the performance improvement over other techniques. Keywords: Linear regression; subset model sele...
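One of the baseline families this abstract compares against, subset selection, can be sketched as an exhaustive search over predictor subsets scored by AIC. This is a classical baseline, not pace regression itself, and the simulated design below is hypothetical.

```python
import numpy as np
from itertools import combinations

def aic_best_subset(X, y):
    """Exhaustive best-subset selection scored by AIC (sketch).

    Fits OLS with an intercept on every subset of columns of X and
    returns the subset with the smallest AIC. Feasible only for small p.
    """
    n, p = X.shape
    best_aic, best_S = np.inf, ()
    for k in range(p + 1):
        for S in combinations(range(p), k):
            A = np.column_stack([np.ones(n)] + [X[:, j] for j in S])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            aic = n * np.log(rss / n) + 2 * (k + 1)  # Gaussian log-likelihood form
            if aic < best_aic:
                best_aic, best_S = aic, S
    return best_S

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=200)  # only two real effects
S = aic_best_subset(X, y)
```

The hard include/exclude decision this procedure makes for each variable is exactly what the clustering-based shrinkage in pace regression is designed to soften.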
Byung Soo Kim. Studies of Multinomial Mixture Models, 1984
Abstract
(Under the direction of Barry H. Margolin) We investigate certain inferential aspects of mixtures of multinomial distributions, in both nonparametric and parametric contexts. As a nonparametric mixture model we propose a k-population finite mixture of binomial distributions, which can be applied to the analysis of non-i.i.d. data generated from a series of toxicological experiments. A necessary and sufficient identifiability condition for the k-population finite mixture of binomials is obtained. The maximum likelihood estimates (MLEs) of the k-population finite mixture of binomials are computed via the EM algorithm (Dempster, Laird and Rubin, 1977), and the asymptotic properties of the MLEs are discussed. The identifiability condition is equivalent to the positive definiteness of the information matrix for the parameters. The MLEs and their sampling distributions, together with the data mentioned above, provide an empirical check of the statistical procedures proposed by Margolin, Kaplan and Zeiger (1981).