Results 1–10 of 185
Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models
, 1995
Prior Probabilities
 IEEE Transactions on Systems Science and Cybernetics
, 1968
Abstract

Cited by 166 (3 self)
e case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determined by the prior information, independently of the choice of parameters. In a certain class of problems, therefore, the prior distributions may now be claimed to be fully as "objective" as the sampling distributions. I. Background of the problem Since the time of Laplace, applications of probability theory have been hampered by difficulties in the treatment of prior information. In realistic problems of decision or inference, we often have prior information which is highly relevant to the question being asked; to fail to take it into account is to commit the most obvious inconsistency of reasoning and may lead to absurd or dangerously misleading results. As an extreme examp…
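The maximum-entropy construction this abstract invokes can be illustrated with Jaynes's classic loaded-die example (a minimal sketch, not from the paper itself; the constraint value 4.5 and the bisection bounds are illustrative assumptions): among all distributions on {1, …, 6} with a prescribed mean, the entropy maximizer has the exponential-family form p_k ∝ exp(λk), and λ can be found by bisection because the mean is monotone in λ.

```python
import math

def maxent_dice(target_mean, lo=-5.0, hi=5.0, iters=80):
    # Maximum-entropy distribution on {1, ..., 6} with E[X] = target_mean.
    # The maximizer has the form p_k proportional to exp(lam * k); the
    # Lagrange multiplier lam is found by bisection (mean is monotone in lam).
    def mean(lam):
        w = [math.exp(lam * k) for k in range(1, 7)]
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / sum(w)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * k) for k in range(1, 7)]
    z = sum(w)
    return [wk / z for wk in w]

# Jaynes's loaded-die example: observed mean 4.5 instead of the fair 3.5
p = maxent_dice(4.5)
```

The resulting probabilities increase geometrically in the face value, the unique distribution determined by the mean constraint alone.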
On Differential Variability of Expression Ratios: Improving . . .
 JOURNAL OF COMPUTATIONAL BIOLOGY
, 2001
Abstract

Cited by 165 (5 self)
We consider the problem of inferring fold changes in gene expression from cDNA microarray data. Standard procedures focus on the ratio of measured fluorescent intensities at each spot on the microarray, but to do so is to ignore the fact that the variation of such ratios is not constant. Estimates of gene expression changes are derived within a simple hierarchical model that accounts for measurement error and fluctuations in absolute gene expression levels. Significant gene expression changes are identified by deriving the posterior odds of change within a similar model. The methods are tested via simulation and are applied to a panel of Escherichia coli microarrays.
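The abstract's key observation, that the variability of intensity ratios is not constant but depends on the absolute intensity, is easy to reproduce in a toy simulation (illustrative only; the additive-noise model and all numbers below are our assumptions, not the paper's hierarchical model):

```python
import math
import random

rng = random.Random(1)

def logratio_sd(mu, sigma=50.0, n=20000):
    # Standard deviation of the log2 ratio of two channels that share the
    # same true expression level mu, each observed with additive Gaussian
    # measurement noise of sd sigma (an illustrative model, not the paper's).
    vals = []
    for _ in range(n):
        r = max(mu + rng.gauss(0.0, sigma), 1.0)  # clip to keep log defined
        g = max(mu + rng.gauss(0.0, sigma), 1.0)
        vals.append(math.log2(r / g))
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / n)

# the same measurement noise produces far noisier ratios at low intensity
assert logratio_sd(200.0) > 5 * logratio_sd(5000.0)
```

This is why ratio-based procedures that assume constant variance over-call changes among dim spots.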
Time-Changed Lévy Processes and Option Pricing
, 2002
Abstract

Cited by 89 (12 self)
As is well known, the classic Black-Scholes option pricing model assumes that returns follow Brownian motion. It is widely recognized that return processes differ from this benchmark in at least three important ways. First, asset prices jump, leading to non-normal return innovations. Second, return volatilities vary stochastically over time. Third, returns and their volatilities are correlated, often negatively for equities. We propose that time-changed Lévy processes be used to simultaneously address these three facets of the underlying asset return process. We show that our framework encompasses almost all of the models proposed in the option pricing literature. Despite the generality of our approach, we show that it is straightforward to select and test a particular option pricing model through the use of characteristic function technology.
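As a sketch of the "characteristic function technology" the abstract refers to, the snippet below prices a European call in the plain Black-Scholes benchmark by Gil-Pelaez inversion of the characteristic function of the log price, and cross-checks it against the closed form. This is a hedged illustration, not the paper's method; the time-changed Lévy models would only swap in a different characteristic function. Function names, the truncation bound umax, and the grid size n are our choices.

```python
import cmath
import math
from statistics import NormalDist

def bs_call_charfn(S0, K, r, sigma, T, umax=200.0, n=4000):
    # Price a European call by Gil-Pelaez inversion of the characteristic
    # function of ln S_T (here the Black-Scholes benchmark; a time-changed
    # Levy model would only replace phi).
    def phi(u):
        return cmath.exp(1j * u * (math.log(S0) + (r - 0.5 * sigma ** 2) * T)
                         - 0.5 * sigma ** 2 * T * u * u)
    k = math.log(K)
    def tail_prob(f):
        # P(ln S_T > k) = 1/2 + (1/pi) * int_0^inf Re[e^{-iuk} f(u)/(iu)] du
        du = umax / n
        total = 0.0
        for i in range(1, n + 1):
            u = i * du
            total += (cmath.exp(-1j * u * k) * f(u) / (1j * u)).real
        return 0.5 + total * du / math.pi
    p1 = tail_prob(lambda u: phi(u - 1j) / phi(-1j))  # share measure
    p2 = tail_prob(phi)                               # risk-neutral measure
    return S0 * p1 - K * math.exp(-r * T) * p2

def bs_call_closed(S0, K, r, sigma, T):
    # Black-Scholes closed form, used as a cross-check
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)
```

For an at-the-money call with S0 = K = 100, r = 5%, σ = 20%, T = 1 year, the two routes agree to well within a cent.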
Smoothing by Local Regression: Principles and Methods
Abstract

Cited by 88 (1 self)
In this paper we describe two adaptive procedures, one based on C_p and the other based on cross-validation. Still, when we have a final adaptive fit in hand, it is critical to subject it to graphical diagnostics to study its performance. The important implication of these statements is that the above choices must be tailored to each data set in practice; that is, the choices represent a modeling of the data. It is widely accepted that in global parametric regression there are a variety of choices that must be made (for example, the parametric family to be fitted and the form of the distribution of the response), and that we must rely on our knowledge of the mechanism generating the data, on model selection diagnostics, and on graphical diagnostic methods to make the choices. The same is true for smoothing. Cleveland (1993) presents many examples of this modeling process. For example, in one application, oxides of nitrogen from an automobile engine are fitted to the equivalence ratio, E, of the fuel and the compression ratio, C, of the engine. Coplots show that it is reasonable to use quadratics as the local parametric family but with the added assumption that given E the fitted f…
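One of the two adaptive procedures mentioned, cross-validation, can be sketched for the simplest local smoother, a local-constant (Nadaraya-Watson) fit with a Gaussian kernel. The kernel, candidate bandwidths, and test data below are illustrative assumptions, not choices made in the paper:

```python
import math

def nw_fit(x0, xs, ys, h, skip=None):
    # local-constant (Nadaraya-Watson) estimate at x0, Gaussian kernel,
    # optionally leaving observation `skip` out
    num = den = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        w = math.exp(-0.5 * ((x - x0) / h) ** 2)
        num += w * y
        den += w
    return num / den

def loocv_score(xs, ys, h):
    # leave-one-out cross-validation: mean squared prediction error
    return sum((ys[i] - nw_fit(xs[i], xs, ys, h, skip=i)) ** 2
               for i in range(len(xs))) / len(xs)

# illustrative data: a smooth curve sampled on a grid
xs = [i / 10.0 for i in range(30)]
ys = [math.sin(x) for x in xs]
best_h = min([0.05, 0.1, 0.2, 0.5, 1.0], key=lambda h: loocv_score(xs, ys, h))
```

On noiseless smooth data the cross-validated bandwidth is small, since there is no variance penalty to trade against bias; noisy data would push the selection toward larger bandwidths.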
Spanning and Derivative-Security Valuation
, 1999
Abstract

Cited by 57 (5 self)
This paper proposes a methodology for the valuation of contingent securities. In particular, it establishes how the characteristic function (of the future uncertainty) is basis augmenting and spans the payoff universe of most, if not all, derivative assets. In one specific application, from the characteristic function of the state-price density, it is possible to analytically price options on any arbitrary transformation of the underlying uncertainty. By differentiating (or translating) the characteristic function, limitless pricing and/or spanning opportunities can be designed. As made lucid via example contingent claims, by exploiting the unifying spanning concept, the valuation approach affords substantial analytical tractability. The strength and versatility of the methodology are evident when valuing (1) average-interest options; (2) correlation options; and (3) discretely-monitored knockout options. For each option-like security, the characteristic function is strikingly simple (although the corresponding density is unmanageable or indeterminate). This article provides the economic foundations for valuing derivative securities.
Efficient estimation in the bivariate normal copula model: normal margins are least favorable
 BERNOULLI
, 1997
Abstract

Cited by 23 (2 self)
Consider semiparametric bivariate copula models in which the family of copula functions is parametrized by a Euclidean parameter of interest and in which the two unknown marginal distributions are the (infinite-dimensional) nuisance parameters. The efficient score for the parameter of interest can be characterized in terms of the solutions of two coupled Sturm-Liouville equations. In case the family of copula functions corresponds to the normal distributions with mean 0, variance 1, and correlation given by the parameter, the solution of these equations is given, and we thereby show that the van der Waerden normal-scores rank correlation coefficient is asymptotically efficient. We also show that the bivariate normal model with equal variances constitutes the least favorable parametric submodel. Finally, we discuss the interpretation of the absolute value of the parameter in the normal copula model as the maximum (monotone) correlation coefficient.
An operational calculus for probability distributions via Laplace transforms
 ADVANCES IN APPLIED PROBABILITY
, 1996
Abstract

Cited by 21 (17 self)
In this paper we investigate operators that map one or more probability distributions on the positive real line into another via their Laplace-Stieltjes transforms. Our goal is to make it easier to construct new transforms by manipulating known transforms. We envision the results here assisting modelling in conjunction with numerical transform inversion software. We primarily focus on operators related to infinitely divisible distributions and Lévy processes, drawing upon Feller (1971). We give many concrete examples of infinitely divisible distributions. We consider a cumulant-moment-transfer operator that allows us to relate the cumulants of one distribution to the moments of another. We consider a power-mixture operator corresponding to an independently stopped Lévy process. The special case of exponential power mixtures is a continuous analog of geometric random sums. We introduce a further special case which is remarkably tractable, exponential mixtures of inverse Gaussian distributions (EMIGs). EMIGs arise naturally as approximations for busy periods in queues. We show that the steady-state waiting time in an M/G/1 queue is the difference of two EMIGs when the service-time distribution is an EMIG. We consider several transforms related to first passage times, e.g., for the M/M/1 queue, reflected Brownian motion and Lévy processes. Some of the associated probability density functions involve Bessel functions and theta functions. We describe properties of the operators, including how they transform moments.
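The operator viewpoint is easy to demonstrate with the geometric random sums the abstract mentions: the Laplace-Stieltjes transform of a geometric(p) random sum of iid terms with transform f is p·f(s)/(1 − (1 − p)·f(s)), and for exponential terms this recovers an exponential with rate pλ. A small numerical check (the rates and p below are arbitrary illustrative values, not from the paper):

```python
def lst_exp(rate):
    # Laplace-Stieltjes transform of an exponential(rate) distribution
    return lambda s: rate / (rate + s)

def geometric_sum(f, p):
    # transform operator: geometric(p) random sum of iid terms
    # with common transform f
    return lambda s: p * f(s) / (1.0 - (1.0 - p) * f(s))

g = geometric_sum(lst_exp(2.0), 0.25)   # geometric sum of exp(2) terms
h = lst_exp(0.5)                        # exponential with rate p * lambda
assert all(abs(g(s) - h(s)) < 1e-12 for s in (0.1, 1.0, 5.0))
```

Composing such closed-form operators and then inverting numerically is exactly the workflow the paper has in mind.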
Total Path Length for Random Recursive Trees
, 1998
Abstract

Cited by 19 (0 self)
Total path length, or search cost, for a rooted tree is defined as the sum of all root-to-node distances. Let T_n be the total path length for a random recursive tree of order n. Mahmoud (1991) showed that W_n := (T_n − E[T_n])/n converges almost surely and in L^2 to a nondegenerate limiting random variable W. Here we give recurrence relations for the moments of W_n and of W and show that W_n converges to W in L^p for each 0 < p < ∞. We confirm the conjecture that the distribution of W is not normal. We also show that the distribution of W is characterized among all distributions having zero mean and finite variance by the distributional identity W =_d U(1 + W) + (1 − U)W* − E(U), where E(x) := −x ln x − (1 − x) ln(1 − x) is the binary entropy function, U is a uniform(0, 1) random variable, W and W* have the same distribution, and U, W, and W* are mutually independent. Finally, we derive an approximation for the distribution of W usi…
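The quantity T_n is easy to simulate directly: in a random recursive tree, node i attaches uniformly to one of nodes 1, …, i − 1, and the total path length is the sum of node depths. A minimal sketch (function names and the seed are ours, for illustration):

```python
import random

def random_recursive_tree(n, seed=0):
    # node 1 is the root; each node i >= 2 attaches to a uniformly
    # chosen earlier node, giving a random recursive tree of order n
    rng = random.Random(seed)
    parent = {1: None}
    for i in range(2, n + 1):
        parent[i] = rng.randint(1, i - 1)
    return parent

def total_path_length(parent):
    # T_n: the sum of all root-to-node distances (node depths)
    depth = {}
    def d(v):
        if parent[v] is None:
            return 0
        if v not in depth:
            depth[v] = 1 + d(parent[v])
        return depth[v]
    return sum(d(v) for v in parent)

# fixed 4-node tree: 2 and 3 attach to the root, 4 attaches to 2
assert total_path_length({1: None, 2: 1, 3: 1, 4: 2}) == 4
```

Averaging total_path_length over many simulated trees gives an empirical handle on the distribution of W_n studied in the paper.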
Modeling Uncertainty: Predictive Accuracy as a Proxy for Predictive Confidence. Federal Reserve Bank of New York, Staff Reports No. 161
, 2003
Abstract

Cited by 18 (1 self)
This paper evaluates current strategies for the empirical modeling of forecast behavior. In particular, we focus on the reliability of using proxies from time series models of heteroskedasticity to describe changes in predictive confidence. We address this issue by examining the relationship between ex post forecast errors and ex ante measures of forecast uncertainty from data on inflation forecasts from the Survey of Professional Forecasters. The results provide little evidence of a strong link between observed heteroskedasticity in the consensus forecast errors and forecast uncertainty. Instead, the findings indicate a significant link between observed heteroskedasticity in the consensus forecast errors and forecast dispersion. We conclude that conventional model-based measures of uncertainty may be capturing not the degree of confidence that individuals attach to their forecasts but rather the degree of disagreement across individuals in their forecasts. We thank Nathan Barczi and Rema Hanna for excellent research assistance. The authors have benefited from the suggestions of participants in the Federal Reserve System Committee on Macroeconomics Conference in Minneapolis. We are also grateful to Michelle Barnes and Dean Croushore for valuable comments on earlier versions of this paper. The views expressed in this paper are those of the individual authors and do not necessarily reflect the position of the Federal…