Results 1 - 6 of 6
GENERAL MAXIMUM LIKELIHOOD EMPIRICAL BAYES ESTIMATION OF NORMAL MEANS
, 2009
Abstract

Cited by 4 (0 self)
We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than (log n)^5 / n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓ_p balls when the order of the length-normalized norm of the unknown means is between (log n)^{κ1} / n ...
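The GMLEB idea in this abstract can be sketched numerically: estimate the prior over the unknown means by a (discretized) nonparametric maximum likelihood fit, then apply the resulting Bayes rule. The sketch below is an illustration only, not the paper's algorithm; the grid resolution, iteration count, and EM-style weight updates are my own simplifying assumptions, and unit-variance errors are assumed.

```python
import numpy as np

def gmleb_sketch(x, grid_size=200, n_iter=200):
    """Illustrative GMLEB-style estimate for x_i ~ N(theta_i, 1).

    Fits a discrete prior on a grid by EM (an approximation to the
    nonparametric MLE of the mixing distribution), then returns the
    posterior-mean estimate of each theta_i. grid_size and n_iter are
    illustrative choices, not values from the paper.
    """
    x = np.asarray(x, dtype=float)
    grid = np.linspace(x.min(), x.max(), grid_size)   # support points for the prior
    w = np.full(grid_size, 1.0 / grid_size)           # mixing weights, start uniform
    # likelihood matrix: normal density phi(x_i - u_j) for each obs / grid point
    lik = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)
    for _ in range(n_iter):                           # EM updates of the mixing weights
        post = lik * w                                # unnormalized posterior over grid
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)
    post = lik * w
    post /= post.sum(axis=1, keepdims=True)
    return post @ grid                                # posterior-mean estimates

# toy check: two clusters of true means; EB shrinkage toward the fitted
# prior should reduce the average squared error relative to the raw data
rng = np.random.default_rng(0)
theta = np.repeat([0.0, 5.0], 50)
x = theta + rng.standard_normal(100)
est = gmleb_sketch(x)
mse_raw = np.mean((x - theta) ** 2)
mse_eb = np.mean((est - theta) ** 2)
```

Because the fitted prior concentrates near the two cluster centers, each estimate is pulled toward the nearer cluster, which is exactly the separable-estimator behavior the abstract's oracle comparison is about.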
A new approach to fitting linear models in high dimensional spaces
, 2000
Abstract

Cited by 2 (0 self)
This thesis presents a new approach to fitting linear models, called “pace regression”, which also overcomes the dimensionality determination problem. Its optimality in minimizing the expected prediction loss is theoretically established, when the number of free parameters is infinitely large. In this sense, pace regression outperforms existing procedures for fitting linear models. Dimensionality determination, a special case of fitting linear models, turns out to be a natural byproduct. A range of simulation studies is conducted; the results support the theoretical analysis. Throughout the thesis, a deeper understanding is gained of the problem of fitting linear models. Many key issues are discussed. Existing procedures, namely OLS, AIC, BIC, RIC, CIC, CV(d), BS(m), RIDGE, NNGAROTTE and LASSO, are reviewed and compared, both theoretically and empirically, with the new methods. Estimating a mixing distribution is an indispensable part of pace regression. A measure-based minimum distance approach, including probability measures and non-negative measures, is proposed, and strongly consistent estimators are produced. Of all minimum distance methods for estimating a mixing distribution, only the ...
Shrinkage To Smooth Non-Convex Cone: Principal Component Analysis As Stein Estimation
 Commun. Statist. - Theory Meth. (in honor of N. Sugiura)
Abstract
In Kuriki and Takemura (1997a) we established a general theory of James-Stein type shrinkage to convex sets with smooth boundary. In this paper we show that our results can be generalized to the case where shrinkage is toward smooth non-convex cones. A primary example of this shrinkage is descriptive principal component analysis, where one shrinks small singular values of the data matrix. Here principal component analysis is interpreted as the problem of estimation of a matrix mean, and the shrinkage of the small singular values is regarded as shrinkage of the data matrix toward the manifold of matrices of smaller rank.

1. INTRODUCTION In Kuriki and Takemura (1997a) we established a general theory of James-Stein type shrinkage to convex sets with smooth boundary using techniques of differential geometry. Tools developed in Kuriki and Takemura (1997a) allow us to investigate shrinkage to much more general sets than the affine subspaces extensively studied in the existing literature on Stein e...
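The "shrinking small singular values" operation this abstract describes can be sketched concretely. The following is a minimal stand-in, not the paper's James-Stein-type rule: it simply soft-thresholds the singular values of the data matrix, pulling it toward the manifold of lower-rank matrices; the threshold `tau` is an arbitrary tuning choice I introduce for illustration, whereas the paper derives a data-driven shrinkage amount.

```python
import numpy as np

def shrink_singular_values(X, tau):
    """Soft-threshold the singular values of X by tau.

    Singular values below tau are set to zero, so the result lies on
    (or near) a manifold of matrices of smaller rank. This is an
    illustrative stand-in for the shrinkage discussed above, with tau
    a hypothetical tuning parameter.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # small singular values vanish entirely
    return U @ np.diag(s_shrunk) @ Vt

# toy data: a rank-1 signal matrix plus noise; shrinkage should push the
# noisy observation back toward a lower-rank matrix
rng = np.random.default_rng(1)
signal = 3.0 * np.outer(rng.standard_normal(20), rng.standard_normal(8))
X = signal + 0.5 * rng.standard_normal((20, 8))
Xs = shrink_singular_values(X, tau=4.0)
```

In the descriptive-PCA reading of the abstract, the retained large singular values correspond to the leading principal components, and zeroing the small ones is exactly the shrinkage toward matrices of smaller rank.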
Approved by:
Abstract
This dissertation generalizes Duncan's k-ratio methodology to include a covariate. The analysis assumes proportionality between the covariance matrix of the prior distribution for the true treatment means and the covariance matrix of the conditional distribution of the observations given the values of the true treatment means. When the covariance matrices are known, the analysis demonstrates that the power of the covariate k-ratio procedure exceeds the power of Duncan's k-ratio test by an increasing margin as the correlation between the variable under investigation and the covariate increases. The dissertation also investigates the effect of three nuisance parameters on the power function: ...