Results 11–20 of 62
Learning Large-Scale Graphical Gaussian Models from Genomic Data
In Science of Complex Networks: From Biology to the Internet and WWW, 2005
Cited by 8 (0 self)
The inference and modeling of network-like structures in genomic data is of prime importance in systems biology. Complex stochastic associations and interdependencies can very generally be described as a graphical model. However, the paucity of available samples in current high-throughput experiments renders learning graphical models from genome data, such as microarray expression profiles, a very challenging problem. Here we review several recently developed approaches to small-sample inference of graphical Gaussian models and discuss strategies to cope with the high dimensionality of functional genomics data. Particular emphasis is put on regularization methods and an empirical Bayes network inference procedure.
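The regularization strategy surveyed in this abstract can be illustrated with a small sketch: shrink the sample correlation matrix toward the identity so that it stays invertible even when there are fewer samples than variables, then read off partial correlations as candidate edges of the graphical Gaussian model. The fixed shrinkage weight `lam` and the function name are illustrative choices, not taken from the paper (which also discusses data-driven shrinkage):

```python
import numpy as np

def shrinkage_partial_correlations(X, lam=0.2):
    # Shrink the sample correlation matrix toward the identity, then
    # invert and rescale to obtain partial correlations. `lam` is a
    # fixed illustrative shrinkage weight, not a data-driven optimum.
    R = np.corrcoef(X, rowvar=False)
    R_shrunk = (1.0 - lam) * R + lam * np.eye(R.shape[0])
    P = np.linalg.inv(R_shrunk)            # regularized precision matrix
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)             # partial correlations
    np.fill_diagonal(pcor, 1.0)
    return pcor                            # near-zero entries ~ absent edges

# "small n, large p" regime typical of microarray data: 10 samples, 20 genes
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 20))
pcor = shrinkage_partial_correlations(X)
```

With only 10 samples the raw 20×20 correlation matrix is singular, but the shrunk version has all eigenvalues at least `lam`, so the inversion is always well defined.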
Modeling for Optimal Probability Prediction
In Proceedings of the Nineteenth International Conference on Machine Learning, 2002
Cited by 7 (0 self)
We present a general modeling method for optimal probability prediction over future observations, in which model dimensionality is determined as a natural byproduct. This new method yields several estimators, and we establish theoretically that they are optimal (either overall or under stated restrictions) when the number of free parameters is infinite.
Adaptive Training for Large Vocabulary Continuous Speech Recognition
, 2006
Cited by 6 (2 self)
Summary: In recent years, there has been a trend towards training large vocabulary continuous speech recognition (LVCSR) systems on large amounts of found data. Found data is recorded from spontaneous speech without careful control of the recording acoustic conditions, for example conversational telephone speech. Hence, it typically has greater variability in terms of speaker and acoustic conditions than specially collected data. Thus, in addition to the desired speech variability required to discriminate between words, it also includes various non-speech variabilities, for example the change of speakers or acoustic environments. The standard approach to handling this type of data is to train hidden Markov models (HMMs) on the whole data set as if all data came from a single acoustic condition. This is referred to as multi-style training, for example speaker-independent training. Effectively, the non-speech variabilities are ignored. Though good performance has been obtained with multi-style systems, these systems must implicitly account for all variabilities within a single model. Improvement may be obtained if the two types of variability in the found data are modelled separately. Adaptive training has been proposed for this purpose. In contrast to multi-style training, a set of transforms is used to represent the non-speech variabilities. A canonical ...
Empirical Bayes and compound estimation of normal means
Statistica Sinica, 1997
Cited by 6 (3 self)
Abstract: This article concerns the canonical empirical Bayes problem of estimating normal means under squared-error loss. General empirical Bayes estimators are derived which are asymptotically minimax and optimal. Uniform convergence and the speed of convergence are considered. The general empirical Bayes estimators are compared with the shrinkage estimators of Stein (1956) and James and Stein (1961). Estimation of the mixture density and its derivatives is also discussed.
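For context, the James-Stein shrinkage baseline that the article compares against can be sketched in a few lines. This is the classical positive-part variant for unit-variance normal observations, not the article's general empirical Bayes estimators:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    # Positive-part James-Stein shrinkage toward zero for
    # x ~ N(theta, sigma2 * I); requires dimension n >= 3.
    n = len(x)
    shrink = max(0.0, 1.0 - (n - 2) * sigma2 / float(np.sum(x ** 2)))
    return shrink * x

rng = np.random.default_rng(0)
theta = np.linspace(-0.5, 0.5, 50)           # true means, clustered near zero
x = theta + rng.standard_normal(50)          # one noisy observation per mean
mse_mle = float(np.mean((x - theta) ** 2))   # risk of the raw observations, ~1.0
mse_js = float(np.mean((james_stein(x) - theta) ** 2))
```

When the true means cluster near zero, as here, the shrinkage estimator has a far smaller average squared error than the raw observations.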
Weighted analysis of paired microarray experiments
 Statistical Applications in Genetics and Molecular Biology
Cited by 5 (3 self)
Bayesian multitask inverse reinforcement learning
Cited by 5 (2 self)
Abstract. We generalise the problem of inverse reinforcement learning to multiple tasks, from a set of demonstrations. Each demonstration may represent one expert trying to solve a different task. Alternatively, one may see each demonstration as given by a different expert trying to solve the same task. Our main technical contribution is to solve the problem by formalising it as statistical preference elicitation via a number of structured priors, whose form captures our biases about the relatedness of different tasks or expert policies. We show that our methodology allows us not only to learn efficiently from multiple experts but also to effectively differentiate between the goals of each. Possible applications include analysing the intrinsic motivations of subjects in behavioural experiments and imitation learning from multiple teachers.
Empirical Bayes least squares estimation without an explicit prior. NYU Courant Inst.
, 2007
Cited by 4 (4 self)
Bayesian estimators are commonly constructed using an explicit prior model. In many applications one does not have such a model, and it is difficult to learn one, since one does not have access to uncorrupted measurements of the variable being estimated. In many cases, however, including the case of contamination with additive Gaussian noise, the Bayesian least squares estimator can be formulated directly in terms of the distribution of the noisy measurements. We demonstrate the use of this formulation in removing noise from photographic images. We use a local approximation of the noisy measurement distribution by exponentials over adaptively chosen intervals, and derive an estimator from this approximate distribution. We demonstrate through simulations that this adaptive Bayesian estimator performs as well as or better than previously published estimators based on simple prior models.
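For additive Gaussian noise, the formulation described rests on the identity E[x | y] = y + σ² d/dy log p(y), where p is the density of the noisy measurements themselves. A one-dimensional sketch of this idea follows, using a plain Gaussian kernel density estimate in place of the paper's local exponential approximation; the function names are illustrative:

```python
import numpy as np

def kde_log_density_grad(y0, data, h):
    # Gradient of log p_hat(y) for a Gaussian kernel density estimate of p.
    d = (y0[:, None] - data[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return -(w * d).sum(axis=1) / (h * w.sum(axis=1))

def tweedie_denoise(y, sigma2):
    # Miyasawa/Tweedie identity for additive Gaussian noise:
    # E[x | y] = y + sigma2 * d/dy log p(y), with p the *noisy* density,
    # so no explicit prior model is ever needed.
    h = 1.06 * y.std() * len(y) ** (-0.2)      # Silverman bandwidth rule
    return y + sigma2 * kde_log_density_grad(y, y, h)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)                  # clean signal (never seen below)
y = x + rng.standard_normal(2000)              # additive Gaussian noise, sigma^2 = 1
x_hat = tweedie_denoise(y, sigma2=1.0)
mse_noisy = float(np.mean((y - x) ** 2))       # ~1.0
mse_denoised = float(np.mean((x_hat - x) ** 2))
```

The estimator only ever touches the noisy samples `y`, yet it approaches the Bayes least squares error for this problem.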
Empirical Bayes Adjustments for Multiple Results in Hypothesis-generating or Surveillance Studies
, 2000
Cited by 4 (0 self)
Traditional methods of adjustment for multiple comparisons (e.g., Bonferroni adjustments) have fallen into disuse in epidemiological studies. However, alternative kinds of adjustment for data with multiple comparisons may sometimes be advisable. When a large number of comparisons are made, and when there is a high cost to investigating false positive leads, empirical or semi-Bayes adjustments may help in the selection of the most promising leads. Here we offer an example of such adjustments in a large surveillance data set of occupation and cancer in Nordic countries, in which we used empirical Bayes (EB) adjustments to evaluate standardized incidence ratios (SIRs) for cancer and occupation among craftsmen and laborers. For men, ...
General Maximum Likelihood Empirical Bayes Estimation of Normal Means
, 908
Cited by 4 (0 self)
We propose a general maximum likelihood empirical Bayes (GMLEB) method for the estimation of a mean vector based on observations with i.i.d. normal errors. We prove that under mild moment conditions on the unknown means, the average mean squared error (MSE) of the GMLEB is within an infinitesimal fraction of the minimum average MSE among all separable estimators which use a single deterministic estimating function on individual observations, provided that the risk is of greater order than (log n)^5/n. We also prove that the GMLEB is uniformly approximately minimax in regular and weak ℓ_p balls when the order of the length-normalized norm of the unknown means is between (log n)^{κ1}/n ...
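The general idea, estimate the prior nonparametrically by maximum likelihood and then plug it into the posterior mean, can be sketched with a simple grid-based EM. This is an illustration of the approach under unit-variance Gaussian noise, not the paper's algorithm or its theoretical tuning; all names are mine:

```python
import numpy as np

def npmle_posterior_mean(x, grid_size=100, iters=300):
    # EM for the nonparametric MLE of the prior over a fixed grid of
    # support points, followed by the plug-in posterior mean.
    # Unit-variance Gaussian noise is assumed.
    grid = np.linspace(x.min(), x.max(), grid_size)
    w = np.full(grid_size, 1.0 / grid_size)                # prior weights
    lik = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) # N(x_i; u_j, 1), up to a constant
    for _ in range(iters):
        post = w * lik                                     # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                              # M-step: reweight support points
    post = w * lik
    post /= post.sum(axis=1, keepdims=True)
    return post @ grid                                     # posterior-mean estimates

rng = np.random.default_rng(2)
theta = rng.choice([-2.0, 0.0, 2.0], size=500)     # unknown means from a 3-point prior
x = theta + rng.standard_normal(500)
theta_hat = npmle_posterior_mean(x)
mse_raw = float(np.mean((x - theta) ** 2))         # ~1.0 for the raw observations
mse_gmleb = float(np.mean((theta_hat - theta) ** 2))
```

Because the estimated prior adapts to the clustering of the means, the plug-in rule substantially beats the raw observations on this example without being told the prior has three support points.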
Empirical Bayes Forecasts of One Time Series Using Many Predictors
, 2000
Cited by 4 (1 self)
We consider both frequentist and empirical Bayes forecasts of a single time series using a linear model with T observations and K orthonormal predictors. The frequentist formulation considers estimators that are equivariant under permutations (reorderings) of the regressors. The empirical Bayes formulation (both parametric and nonparametric) treats the coefficients as i.i.d. and estimates their prior. Asymptotically, when K is proportional to T the empirical Bayes estimator is shown to be: (i) optimal in Robbins' (1955, 1964) sense; (ii) the minimum risk equivariant estimator; and (iii) minimax in both the frequentist and Bayesian problems over a class of non-Gaussian error distributions. Also, the asymptotic frequentist risk of the minimum risk equivariant estimator is shown to equal the Bayes risk of the (infeasible, subjectivist) Bayes estimator in the Gaussian case, where the "prior" is the weak limit of the empirical cdf of the true parameter values. Monte Carlo results are encouraging ...