Results 1 - 3 of 3
GTM: The generative topographic mapping
 Neural Computation
, 1998
Abstract

Cited by 275 (5 self)
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of nonlinear latent variable model called the Generative Topographic Mapping (GTM), for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multiphase oil pipeline. Copyright © MIT Press (1998).
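The nonlinear mapping the abstract describes can be sketched in a few lines. This is a hypothetical, minimal illustration of the GTM-style forward map y(x; W) = W·φ(x) and the E-step responsibilities over a latent grid, not the paper's implementation: the 1-D latent grid, the three Gaussian RBF centers, the fixed 2×3 weight matrix W, the width, and the inverse noise variance β are all invented values for demonstration.

```python
import math

# Hypothetical sketch of the GTM forward mapping and E-step responsibilities.
# Assumes a 1-D latent space, Gaussian RBF basis functions, and a 2-D data
# space; all numeric values below are illustrative, not from the paper.

def rbf_basis(x, centers, width):
    """Gaussian RBF features phi(x) for a latent point x."""
    return [math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in centers]

def gtm_map(x, W, centers, width):
    """Nonlinear mapping y(x; W) = W phi(x) from latent to data space."""
    phi = rbf_basis(x, centers, width)
    return [sum(w * p for w, p in zip(row, phi)) for row in W]

def responsibilities(t, latent_grid, W, centers, width, beta):
    """E-step: posterior weight of each latent grid point given data point t."""
    weights = []
    for x in latent_grid:
        y = gtm_map(x, W, centers, width)
        sq_dist = sum((ti - yi) ** 2 for ti, yi in zip(t, y))
        weights.append(math.exp(-0.5 * beta * sq_dist))
    total = sum(weights)
    return [w / total for w in weights]

latent_grid = [i / 9 for i in range(10)]      # 10 latent points on [0, 1]
centers = [0.0, 0.5, 1.0]                      # 3 RBF centers (assumed)
W = [[1.0, -0.5, 0.3], [0.2, 0.8, -0.1]]       # fixed 2x3 weight matrix
r = responsibilities([0.5, 0.4], latent_grid, W, centers, width=0.3, beta=4.0)
print(round(sum(r), 6))                        # responsibilities sum to 1
```

In the full EM fit, these responsibilities would drive the M-step update of W; normalizing them is what makes each data point distribute its influence smoothly across the latent grid, which is the sense in which GTM is a principled counterpart to the SOM's hard winner-take-all assignment.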
Variance Functions and the Minimum Detectable Concentration in Assays
Abstract
Key Words and Phrases: weighted least squares, extended least squares. Assay data are often fit by a nonlinear regression model incorporating heterogeneity of variance, as in radioimmunoassay, for example. Typically, the standard deviation of the response is taken to be proportional to a power θ of the mean. There is considerable empirical evidence suggesting that for assays of a reasonable size, how one estimates the parameter θ does not greatly affect how well one estimates the mean regression function. An additional component of assay analysis is the estimation of auxiliary constructs such as the minimum detectable concentration, for which many definitions exist; we focus on one such definition. The minimum detectable concentration depends both on θ and on the mean regression function. We compare three standard methods of estimating the parameter θ, due to Rodbard (1978), Raab (1981a), and Carroll and Ruppert (1982b). When duplicate counts are taken at each concentration, the first method is only 20% efficient asymptotically in comparison to the third, and the resulting estimate of the minimum detectable concentration is asymptotically 3.3 times more variable for the first than for the third. Less dramatic results obtain for the second estimator compared to the third; this estimator is still not efficient, however. Simulation results and an example are supportive of the asymptotic theory.
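The power-of-the-mean variance model sd(response) = σ·meanθ can be fit, in the spirit of the Rodbard (1978) duplicate-based approach the abstract compares, by regressing log standard deviation on log mean across concentrations. The sketch below is a hypothetical illustration under that assumption: the per-concentration means and sds are constructed exactly from σ = 0.5 and θ = 0.8 so the fit recovers θ cleanly, whereas real duplicate-based sd estimates would be noisy (which is precisely the source of the efficiency losses the paper quantifies).

```python
import math

# Hypothetical sketch of a log-log variance-function fit, assuming the
# power-of-the-mean model sd = sigma * mean**theta. The data here are
# constructed exactly from the model, so the slope recovers theta;
# duplicate-based sd estimates from a real assay would be noisy.

def fit_power(means, sds):
    """Least-squares slope of log(sd) on log(mean) -> estimate of theta."""
    xs = [math.log(m) for m in means]
    ys = [math.log(s) for s in sds]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

sigma, theta = 0.5, 0.8                           # assumed true values
means = [5.0, 10.0, 25.0, 50.0, 100.0]            # per-concentration means
sds = [sigma * m ** theta for m in means]          # exact power-law sds
theta_hat = fit_power(means, sds)
print(round(theta_hat, 6))                        # recovers theta = 0.8
```

With only two replicates per concentration, each log(sd) in a real assay is a very noisy observation, which is why a regression on these summaries can be far less efficient than likelihood-type estimators that use the raw responses directly.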
Getting Bad News Out Early: Does it Really Help Stock Prices?
, 2003
Abstract
In this paper, we examine the stock price benefit of meeting or beating earnings expectations. Using a general methodology, we find no evidence that the timing of earnings news has any benefit for firms' stock returns. In fact, in many cases we find firms attempting to engineer positive earnings surprises by beating down expectations only to discover that their efforts are counterproductive. Our results appear to overturn the findings of previous authors who, using less general methodologies, have suggested that firms can boost their stock returns by getting bad news out early. Our results are robust across time periods, for different scaling factors on earnings revisions and surprises, when controlling for firm size and growth prospects, and when conditioned on past earnings news.