Results 1-10 of 82
A closed-form solution for options with stochastic volatility with applications to bond and currency options
Review of Financial Studies, 1993
"... I use a new technique to derive a closedform solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spotasset returns. I introduce stochastic interest rates and show how to apply the model to bond option ..."
Abstract

Cited by 952 (4 self)
I use a new technique to derive a closed-form solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spot-asset returns. I introduce stochastic interest rates and show how to apply the model to bond options and foreign currency options. Simulations show that correlation between volatility and the spot asset's price is important for explaining return skewness and strike-price biases in the Black-Scholes (1973) model. The solution technique is based on characteristic functions and can be applied to other problems. Many plaudits have been aptly used to describe Black and Scholes' (1973) contribution to option pricing theory. Despite subsequent development of option theory, the original Black-Scholes formula for a European call option remains the most successful and widely used application. This formula is particularly useful because it relates the distribution of spot returns ...
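As a rough illustration of the stochastic-volatility dynamics this abstract describes (this is an Euler Monte Carlo sketch, not the paper's closed-form characteristic-function solution; the function name and parameter values are illustrative assumptions):

```python
import numpy as np

def heston_mc_call(s0, k, t, r, v0, kappa, theta, sigma, rho,
                   n_paths=20000, n_steps=200, seed=0):
    """European call under Heston-type dynamics by Euler simulation.

    dS = r S dt + sqrt(v) S dW1
    dv = kappa (theta - v) dt + sigma sqrt(v) dW2,  corr(dW1, dW2) = rho
    Variance is floored at zero ("full truncation") to keep sqrt(v) real.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2
    return np.exp(-r * t) * np.maximum(s - k, 0.0).mean()

# Illustrative parameters: negative rho induces the return skewness
# that the abstract says matters for Black-Scholes strike-price biases.
price = heston_mc_call(s0=100, k=100, t=1.0, r=0.02,
                       v0=0.04, kappa=1.5, theta=0.04, sigma=0.3, rho=-0.7)
```

Pricing across several strikes with this simulator would trace out the volatility smile the abstract attributes to spot-volatility correlation.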
Group reaction time distributions and an analysis of distribution statistics
Psychological Bulletin, 1979
"... A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 1020 reaction time observations per subject cell. Essentially, reaction times for each subj ..."
Abstract

Cited by 112 (23 self)
A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 10-20 reaction time observations per subject cell. Essentially, reaction times for each subject are organized in ascending order, and quantiles are calculated. The quantiles are then averaged over subjects to give group quantiles (cf. Vincent learning curves). From the group quantiles, a group reaction time distribution can be constructed. It is shown that this method of averaging is exact for certain distributions (i.e., the resulting distribution belongs to the same family as the individual distributions). Furthermore, Monte Carlo studies and application of the method to the combined data from three large experiments provide evidence that properties derived from the group reaction time distribution are much the same as average properties derived from the data of individual subjects. This article also examines how to quantitatively describe the shape of reaction time distributions. The use of moments and cumulants as sources of information about distribution shape is evaluated and rejected because of extreme dependence on long, outlier reaction times. As an alternative, the use of explicit distribution functions as approximations to reaction time distributions is considered. Despite the recent popularity of reaction time research, the use of reaction time distributions for both model testing and model development has been largely ignored. This is surprising in view of the fact that properties of distributions can prove decisive in discriminating among models (Sternberg, Note 1) and can falsify models that quite adequately describe the behavior of mean reaction time (Ratcliff & Murdock, 1976). Two methods have been used to obtain distributional or shape information. One ...
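The quantile-averaging procedure the abstract describes (sort each subject's reaction times, compute quantiles, then average quantile-by-quantile across subjects) can be sketched in a few lines; the function name and the quantile grid are illustrative assumptions:

```python
import numpy as np

def vincentize(rt_lists, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Group RT distribution by quantile averaging (Vincent averaging):
    compute each subject's quantiles, then average them across subjects."""
    per_subject = [np.quantile(np.asarray(rts, float), probs)
                   for rts in rt_lists]
    return np.mean(per_subject, axis=0)

# Two hypothetical subjects with few observations each (the regime the
# abstract targets); the group quantiles average the individual ones.
group_q = vincentize([[310, 420, 480, 550, 900],
                      [500, 610, 660, 720, 1100]])
```

Plotting `probs` against `group_q` gives the group cumulative distribution the abstract refers to.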
A developmental study of the relationship between geometry and kinematics in drawing movements
Journal of Experimental Psychology: Human Perception and Performance, 1991
"... Trajectory and kinematics of drawing movements are mutually constrained by functional relationships that reduce the degrees of freedom of the handarm system. Previous investigations of these relationships are extended here by considering their development in children between 5 and 12 years of age. ..."
Abstract

Cited by 31 (2 self)
Trajectory and kinematics of drawing movements are mutually constrained by functional relationships that reduce the degrees of freedom of the hand-arm system. Previous investigations of these relationships are extended here by considering their development in children between 5 and 12 years of age. Performances in a simple motor task—the continuous tracing of elliptic trajectories—demonstrate that both the phenomenon of isochrony (increase of the average movement velocity with the linear extent of the trajectory) and the so-called two-thirds power law (relation between tangential velocity and curvature) are qualitatively present already at the age of 5. The quantitative aspects of these regularities evolve with age, however, and steady-state adult performance is not attained even by the oldest children. The power-law formalism developed in previous reports is generalized to encompass these developmental aspects of the control of movement. Two general frameworks are currently available to conceptualize the motor-control problem. Broadly, the two frameworks differ in the answer that they give to the question "Where do form and structure come from?" According to the ...
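The two-thirds power law relating tangential velocity and curvature can be checked numerically: an ellipse traced at a constant parameter rate satisfies v = (ab)^(1/3) C^(-1/3) exactly, so a log-log fit of velocity against curvature recovers a slope of -1/3. A small sketch (all names illustrative, unrelated to the article's data):

```python
import numpy as np

# Ellipse x = a cos(t), y = b sin(t), traced at constant parameter rate.
a, b = 3.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dx, dy = -a * np.sin(t), b * np.cos(t)        # velocity components
ddx, ddy = -a * np.cos(t), -b * np.sin(t)     # acceleration components
v = np.hypot(dx, dy)                          # tangential velocity
curv = np.abs(dx * ddy - dy * ddx) / v**3     # curvature of a plane curve

# The power law in this form predicts: log v = -(1/3) log C + const.
slope = np.polyfit(np.log(curv), np.log(v), 1)[0]
```

For real drawing data the fitted exponent only approximates -1/3, which is exactly the kind of quantitative deviation the abstract says evolves with age.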
The role of preparation in tuning anticipatory and reflex responses during catching
J Neurosci, 1989
"... The pattern of muscle responses associated with catching a ball in the presence of vision was investigated by inciependently varying the height of the drop and the mass of the ball. It was found that the anticipatory EMG responses comprised early and late components. The early components were produc ..."
Abstract

Cited by 25 (4 self)
The pattern of muscle responses associated with catching a ball in the presence of vision was investigated by independently varying the height of the drop and the mass of the ball. It was found that the anticipatory EMG responses comprised early and late components. The early components were produced at a roughly constant latency (about 130 msec) from the time of ball release. Their mean amplitude decreased with increasing height of fall. Late components represented the major buildup of muscle activity preceding the impact and were accompanied by limb flexion. Their onset time was roughly constant (about 100 msec) with respect to the time of impact (except in wrist extensors). This indicates that the timing of these responses was based on an accurate estimate of the instantaneous values of the time-to-contact (time remaining before impact).
Batch Means and Spectral Variance Estimation in Markov Chain Monte Carlo
2009
"... Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch ..."
Abstract

Cited by 14 (3 self)
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense, which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
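The non-overlapping batch means estimator the abstract studies can be sketched as follows; the default batch size b = floor(sqrt(n)) and the function name are illustrative assumptions, not the paper's optimal choice:

```python
import numpy as np

def batch_means_mcse(chain, batch_size=None):
    """MCSE of the chain mean via non-overlapping batch means.

    The chain is cut into a = n // b batches of length b; the sample
    variance of the batch means, scaled by b, estimates the asymptotic
    variance sigma^2, and MCSE = sqrt(sigma^2 / n).
    """
    x = np.asarray(chain, float)
    n = x.size
    b = batch_size or int(np.sqrt(n))   # b ~ sqrt(n): a common default
    a = n // b                          # number of full batches
    means = x[:a * b].reshape(a, b).mean(axis=1)
    sigma2_hat = b * means.var(ddof=1)  # estimates the asymptotic variance
    return np.sqrt(sigma2_hat / n)
```

For an i.i.d. chain the estimate should be close to the usual standard error of the mean; for a correlated MCMC chain it is larger, which is the point of the method.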
Heterogeneity of Variance in Experimental Studies: A Challenge to Conventional Interpretations
Psychological Bulletin, 1988
"... The presence of heterogeneity of variance across groups indicates that the standard statistical model for treatment effects no longer applies. Specifically, the assumption that treatments add a constant to each subject's development fails. An alternative model is required to represent how treat ..."
Abstract

Cited by 11 (0 self)
The presence of heterogeneity of variance across groups indicates that the standard statistical model for treatment effects no longer applies. Specifically, the assumption that treatments add a constant to each subject's development fails. An alternative model is required to represent how treatment effects are distributed across individuals. We develop in this article a simple statistical model to demonstrate the link between heterogeneity of variance and random treatment effects. Next, we illustrate with results from two previously published studies how a failure to recognize the substantive importance of heterogeneity of variance obscured significant results present in these data. The article concludes with a review and synthesis of techniques for modeling variances. Although these methods have been well established in the statistical literature, they are not widely known by social and behavioral scientists. Psychological researchers have tended historically to view heterogeneity of variance as a methodological nuisance, an unwelcome obstacle in the pursuit of inferences about the effects of treatments on means. In their discussion of variance heterogeneity, standard texts concentrate on identifying conditions under which such heterogeneity can safely be ignored so that standard analyses of means may proceed. It is usually argued that heterogeneity can be ignored when statistical tests for means are robust to violation of the homogeneity assumption (Glass & Hopkins, 1984, pp. 238-240; Hays, 1981, p. 287; Winer, 1971, pp. 37-39). When such violations cannot be ignored, analysts tend to assume heterogeneity must be eliminated. The primary strategy for eliminating heterogeneity is to find a transformation of the dependent variable that stabilizes treatment group variances, enabling retention of the homogeneity ...
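As a minimal illustration of detecting the variance heterogeneity discussed above, here is the Brown-Forsythe statistic (a one-way ANOVA F on absolute deviations from group medians); this is a standard diagnostic, not a method from the article itself, and the function name is an assumption:

```python
import numpy as np

def brown_forsythe(*groups):
    """Brown-Forsythe statistic for equality of group variances:
    one-way ANOVA F on absolute deviations from each group's median."""
    z = [np.abs(np.asarray(g, float) - np.median(g)) for g in groups]
    k = len(z)
    n = sum(zi.size for zi in z)
    grand = np.concatenate(z).mean()
    ss_between = sum(zi.size * (zi.mean() - grand) ** 2 for zi in z)
    ss_within = sum(((zi - zi.mean()) ** 2).sum() for zi in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F signals unequal variances; in the article's framework, that heterogeneity is itself evidence about how treatment effects vary across individuals rather than a mere nuisance.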
Simultaneous Modelling of the Cholesky Decomposition of Several Covariance Matrices
J. Multivar. Anal, 2006
"... A method for simultaneous modelling of the Cholesky decomposition of several covariance matrices is presented. We highlight the conceptual and computational advantages of the unconstrained parameterization of the Cholesky decomposition and compare the results with those obtained using the classical ..."
Abstract

Cited by 9 (3 self)
A method for simultaneous modelling of the Cholesky decomposition of several covariance matrices is presented. We highlight the conceptual and computational advantages of the unconstrained parameterization of the Cholesky decomposition and compare the results with those obtained using the classical spectral (eigenvalue) and variance-correlation decompositions. All these methods amount to decomposing complicated covariance matrices into “dependence” and “variance” components, and then modelling them virtually separately using regression techniques. The entries of the “dependence” component of the Cholesky decomposition have the unique advantage of being unconstrained, so that further reduction of the dimension of its parameter space is fairly simple. Normal theory maximum likelihood estimates for complete and incomplete data are presented using iterative methods such as the EM (Expectation-Maximization) algorithm and its improvements. These procedures are illustrated using a dataset from a growth hormone longitudinal clinical trial.
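A minimal sketch of the unconstrained parameterization the abstract highlights: the log of the Cholesky factor's diagonal plus its free lower-triangular entries. Any real values map back to a valid covariance matrix, which is what makes regression-style modelling of these parameters simple. Function names are hypothetical:

```python
import numpy as np

def cholesky_params(cov):
    """Map a covariance matrix to unconstrained parameters: log of the
    Cholesky factor's diagonal plus its strictly lower-triangular entries."""
    L = np.linalg.cholesky(np.asarray(cov, float))
    i, j = np.tril_indices_from(L, k=-1)
    return np.log(np.diag(L)), L[i, j]

def cov_from_params(log_diag, lower):
    """Inverse map: any real-valued parameters yield a positive-definite
    covariance matrix, so no constraints are needed during estimation."""
    d = log_diag.size
    L = np.zeros((d, d))
    L[np.diag_indices(d)] = np.exp(log_diag)   # diagonal forced positive
    L[np.tril_indices(d, k=-1)] = lower        # same row-major ordering
    return L @ L.T
```

The round trip is exact, and setting some `lower` entries to zero is the kind of dimension reduction the abstract calls "fairly simple."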
Univariate and Bivariate Log-linear Models for Discrete Test Score Distributions
2000
"... The welldeveloped theory of exponential families of distributions is applied to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. These models are powerful tools for many forms of parametric data smoothi ..."
Abstract

Cited by 9 (2 self)
The well-developed theory of exponential families of distributions is applied to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. These models are powerful tools for many forms of parametric data smoothing and are particularly well-suited to problems in which there is little or no theory to guide a choice of probability models, e.g., smoothing a distribution to eliminate roughness and zero frequencies in order to equate scores from different tests. Attention is given to efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and to computationally efficient methods for obtaining the asymptotic standard errors of the fitted frequencies and proportions. We discuss tools that can be used to diagnose the quality of the fitted frequencies for both the univariate and the bivariate cases. Five examples, using real data, are used to illustrate the methods of this paper.
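A sketch of the univariate case in the spirit of (but not taken from) the paper: a polynomial log-linear model fitted to a score histogram by Newton's method on the multinomial log-likelihood. The design, default degree, ridge term, and function name are all assumptions:

```python
import numpy as np

def loglinear_fit(counts, degree=2, n_iter=25):
    """Fit p_j proportional to exp(sum_k beta_k x_j^k) to a histogram by
    Newton's method (Fisher scoring), where x_j indexes the score values."""
    counts = np.asarray(counts, float)
    n = counts.sum()
    x = np.arange(counts.size, dtype=float)
    xs = (x - x.mean()) / x.std()                 # standardize for stability
    B = np.vstack([xs ** k for k in range(1, degree + 1)]).T
    beta = np.zeros(degree)
    for _ in range(n_iter):
        eta = B @ beta
        p = np.exp(eta - eta.max())
        p /= p.sum()
        m = n * p                                 # fitted frequencies
        grad = B.T @ (counts - m)                 # score vector
        W = np.diag(m) - np.outer(m, m) / n       # multinomial covariance
        H = B.T @ W @ B + 1e-10 * np.eye(degree)  # Fisher info (+ tiny ridge)
        beta += np.linalg.solve(H, grad)
    eta = B @ beta
    p = np.exp(eta - eta.max())
    p /= p.sum()
    return n * p, beta
```

The fitted frequencies are strictly positive by construction, which is what eliminates the zero-frequency problem the abstract mentions in the equating context.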
The impedance of frog skeletal muscle fibers in various solutions
1974
"... ABSTRACT The linear circuit parameters of 140 muscle fibers in nine solutions are determined from phase measurements fitted with three circuit models: the disk model, in which the resistance to radial current flow is in the lumen of the tubules; the lumped model, in which the resistance is at the mo ..."
Abstract

Cited by 7 (2 self)
ABSTRACT The linear circuit parameters of 140 muscle fibers in nine solutions are determined from phase measurements fitted with three circuit models: the disk model, in which the resistance to radial current flow is in the lumen of the tubules; the lumped model, in which the resistance is at the mouth of the tubules; and the hybrid model, in which it is in both places. The lumped model fails to fit the data. The disk and hybrid models fit the data, but the optimal circuit values of the hybrid model seem more reasonable. The circuit values depend on sarcomere length. The conductivity of the lumen of the tubules is less than, and varies in a nonlinear manner with, the conductivity of the bathing solution, suggesting that the tubules are partially occluded by some material like basement membrane which restricts the mobility of ions and has fixed charge. The ×2.5 hypertonic sucrose solution used in many voltage clamp experiments produces a large increase in the radial resistance, suggesting that control of the potential across the tubular membranes would be difficult to achieve. Glycerol-treated fibers have 90% of their tubular system ...