Results 1–10 of 45
A closed-form solution for options with stochastic volatility with applications to bond and currency options
Review of Financial Studies, 1993
Abstract

Cited by 704 (4 self)
I use a new technique to derive a closed-form solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spot-asset returns. I introduce stochastic interest rates and show how to apply the model to bond options and foreign currency options. Simulations show that correlation between volatility and the spot asset's price is important for explaining return skewness and strike-price biases in the Black-Scholes (1973) model. The solution technique is based on characteristic functions and can be applied to other problems. Many plaudits have been aptly used to describe Black and Scholes' (1973) contribution to option pricing theory. Despite subsequent development of option theory, the original Black-Scholes formula for a European call option remains the most successful and widely used application. This formula is particularly useful because it relates the distribution of spot returns ... I thank Hans Knoch for computational assistance. I am grateful for the suggestions of Hyeng Keun (the referee) and for comments by participants ...
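The characteristic-function technique the abstract describes can be sketched compactly. Heston's own characteristic function requires his variance dynamics, so as a simplified illustration (an assumption, not the paper's model) this sketch plugs the lognormal (Black-Scholes) characteristic function into the same two-probability inversion C = S·P1 − K·e^(−rT)·P2, which makes the output checkable against the Black-Scholes formula:

```python
import numpy as np

def call_via_char_fn(S, K, r, T, sigma):
    """Price a European call from the characteristic function of ln(S_T)
    via C = S*P1 - K*exp(-r*T)*P2 and Gil-Pelaez inversion.  For
    illustration the lognormal (Black-Scholes) characteristic function
    is used in place of a stochastic-volatility one."""
    m = np.log(S) + (r - 0.5 * sigma**2) * T          # mean of ln(S_T)
    phi = lambda u: np.exp(1j * u * m - 0.5 * u**2 * sigma**2 * T)
    k = np.log(K)
    u = np.linspace(1e-6, 50.0, 20001)                # truncated integration grid

    def gil_pelaez(f):
        # Pr(ln S_T > k) = 1/2 + (1/pi) * Int_0^inf Re[e^{-iuk} f(u)/(iu)] du
        vals = (np.exp(-1j * u * k) * f(u) / (1j * u)).real
        return 0.5 + np.sum((vals[1:] + vals[:-1]) * np.diff(u)) / 2.0 / np.pi

    P1 = gil_pelaez(lambda v: phi(v - 1j) / phi(-1j))  # share-measure probability
    P2 = gil_pelaez(phi)                               # risk-neutral probability
    return S * P1 - K * np.exp(-r * T) * P2

price = call_via_char_fn(100.0, 100.0, 0.05, 1.0, 0.2)
```

Swapping in a stochastic-volatility characteristic function changes only `phi`; the inversion step is untouched, which is the sense in which the technique "can be applied to other problems".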
Group reaction time distributions and an analysis of distribution statistics
Psychological Bulletin, 1979
Abstract

Cited by 77 (22 self)
A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 10-20 reaction time observations per subject cell. Essentially, reaction times for each subject are organized in ascending order, and quantiles are calculated. The quantiles are then averaged over subjects to give group quantiles (cf. Vincent learning curves). From the group quantiles, a group reaction time distribution can be constructed. It is shown that this method of averaging is exact for certain distributions (i.e., the resulting distribution belongs to the same family as the individual distributions). Furthermore, Monte Carlo studies and application of the method to the combined data from three large experiments provide evidence that properties derived from the group reaction time distribution are much the same as average properties derived from the data of individual subjects. This article also examines how to quantitatively describe the shape of reaction time distributions. The use of moments and cumulants as sources of information about distribution shape is evaluated and rejected because of extreme dependence on long, outlier reaction times. As an alternative, the use of explicit distribution functions as approximations to reaction time distributions is considered. Despite the recent popularity of reaction time research, the use of reaction time distributions for both model testing and model development has been largely ignored. This is surprising in view of the fact that properties of distributions can prove decisive in discriminating among models (Sternberg, Note 1) and can falsify models that quite adequately describe the behavior of mean reaction time (Ratcliff & Murdock, 1976). Two methods have been used to obtain distributional or shape information. One ...
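The quantile-averaging ("Vincentizing") procedure just described is simple to sketch. The per-subject reaction times below are synthetic (my assumption), with roughly the 10-20 observations per cell the abstract targets:

```python
import numpy as np

def vincentize(rt_lists, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Build group quantiles by averaging per-subject quantiles
    (Vincent averaging).  rt_lists holds one array of reaction
    times per subject."""
    q = np.asarray(probs)
    per_subject = np.array([np.quantile(rts, q) for rts in rt_lists])
    return per_subject.mean(axis=0)     # average each quantile over subjects

# synthetic data: three subjects, ~15 observations per cell (in ms)
rng = np.random.default_rng(0)
subjects = [300.0 + rng.exponential(100.0, size=15) for _ in range(3)]
group_q = vincentize(subjects)          # increasing group quantiles
```

A group distribution is then read off by treating each averaged quantile as the value at its cumulative probability.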
A developmental study of the relationship between geometry and kinematics in drawing movements
Journal of Experimental Psychology: Human Perception and Performance, 1991
Abstract

Cited by 20 (2 self)
Trajectory and kinematics of drawing movements are mutually constrained by functional relationships that reduce the degrees of freedom of the hand-arm system. Previous investigations of these relationships are extended here by considering their development in children between 5 and 12 years of age. Performances in a simple motor task, the continuous tracing of elliptic trajectories, demonstrate that both the phenomenon of isochrony (increase of the average movement velocity with the linear extent of the trajectory) and the so-called two-thirds power law (relation between tangential velocity and curvature) are qualitatively present already at the age of 5. The quantitative aspects of these regularities evolve with age, however, and steady-state adult performance is not attained even by the oldest children. The power-law formalism developed in previous reports is generalized to encompass these developmental aspects of the control of movement. Two general frameworks are currently available to conceptualize the motor-control problem. Broadly, the two frameworks differ in the answer that they give to the question "Where do form and structure come from?" According to the ...
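The two-thirds power law referenced above links tangential velocity v to trajectory curvature c as v = k·c^(−1/3). For an ellipse traced with harmonic motion the relation holds exactly, which this sketch (a synthetic trajectory, not the children's data) verifies numerically:

```python
import numpy as np

# Ellipse traced with harmonic motion: x = A*cos(wt), y = B*sin(wt).
# For this movement the two-thirds power law holds exactly:
# tangential velocity v = k * c**(-1/3), with c the curvature
# (equivalently, angular velocity is proportional to c**(2/3),
# hence the name).
A, B, w = 2.0, 1.0, 2.0 * np.pi
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

dx, dy = -A * w * np.sin(w * t), B * w * np.cos(w * t)            # velocity
ddx, ddy = -A * w**2 * np.cos(w * t), -B * w**2 * np.sin(w * t)   # acceleration

v = np.hypot(dx, dy)                        # tangential velocity
c = np.abs(dx * ddy - dy * ddx) / v**3      # curvature
k = v * c ** (1.0 / 3.0)                    # velocity gain factor
# k is constant along the whole trajectory: k = (A*B)**(1/3) * w
```

For real drawing data the same check is done by regressing log v on log c and testing whether the slope is close to −1/3.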
The role of preparation in tuning anticipatory and reflex responses during catching
J Neurosci, 1989
Abstract

Cited by 15 (4 self)
The pattern of muscle responses associated with catching a ball in the presence of vision was investigated by independently varying the height of the drop and the mass of the ball. It was found that the anticipatory EMG responses comprised early and late components. The early components were produced at a roughly constant latency (about 130 msec) from the time of ball release. Their mean amplitude decreased with increasing height of fall. Late components represented the major buildup of muscle activity preceding the impact and were accompanied by limb flexion. Their onset time was roughly constant (about 100 msec) with respect to the time of impact (except in wrist extensors). This indicates that the timing of these responses was based on an accurate estimate of the instantaneous values of the time-to-contact (time remaining before impact).
Simultaneous Modelling of the Cholesky Decomposition of Several Covariance Matrices
J. Multivar. Anal., 2006
Abstract

Cited by 8 (3 self)
A method for simultaneous modelling of the Cholesky decomposition of several covariance matrices is presented. We highlight the conceptual and computational advantages of the unconstrained parameterization of the Cholesky decomposition and compare the results with those obtained using the classical spectral (eigenvalue) and variance-correlation decompositions. All these methods amount to decomposing complicated covariance matrices into "dependence" and "variance" components, and then modelling them virtually separately using regression techniques. The entries of the "dependence" component of the Cholesky decomposition have the unique advantage of being unconstrained, so that further reduction of the dimension of its parameter space is fairly simple. Normal theory maximum likelihood estimates for complete and incomplete data are presented using iterative methods such as the EM (Expectation-Maximization) algorithm and its improvements. These procedures are illustrated using a dataset from a growth hormone longitudinal clinical trial.
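The split into unconstrained "dependence" and "variance" components can be sketched with the modified Cholesky factorization T Σ Tᵀ = D, with T unit lower triangular and D diagonal; the covariance matrix below is a made-up example:

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as T @ sigma @ T.T = D, with T unit
    lower triangular ("dependence" component, unconstrained strictly-lower
    entries) and D diagonal ("variance" component, positive entries)."""
    L = np.linalg.cholesky(sigma)   # sigma = L @ L.T
    d = np.diag(L)
    C = L / d                       # unit lower triangular: L = C @ diag(d)
    T = np.linalg.inv(C)            # so T @ sigma @ T.T = diag(d**2)
    return T, d**2

# made-up 3x3 covariance matrix for illustration
sigma = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])
T, dvar = modified_cholesky(sigma)
```

Because the strictly-lower entries of T may take any real values and the diagonal of D any positive values, they can be modelled directly with regression without risking an invalid covariance matrix, which is the advantage the abstract highlights.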
The impedance of frog skeletal muscle fibers in various solutions
1974
Abstract

Cited by 6 (2 self)
The linear circuit parameters of 140 muscle fibers in nine solutions are determined from phase measurements fitted with three circuit models: the disk model, in which the resistance to radial current flow is in the lumen of the tubules; the lumped model, in which the resistance is at the mouth of the tubules; and the hybrid model, in which it is in both places. The lumped model fails to fit the data. The disk and hybrid models fit the data, but the optimal circuit values of the hybrid model seem more reasonable. The circuit values depend on sarcomere length. The conductivity of the lumen of the tubules is less than, and varies in a nonlinear manner with, the conductivity of the bathing solution, suggesting that the tubules are partially occluded by some material like basement membrane which restricts the mobility of ions and has fixed charge. The ×2.5 hypertonic sucrose solution used in many voltage clamp experiments produces a large increase in the radial resistance, suggesting that control of the potential across the tubular membranes would be difficult to achieve. Glycerol-treated fibers have 90% of their tubular system ...
Univariate and Bivariate Log-linear Models for Discrete Test Score Distributions
2000
Abstract

Cited by 5 (0 self)
The well-developed theory of exponential families of distributions is applied to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. These models are powerful tools for many forms of parametric data smoothing and are particularly well-suited to problems in which there is little or no theory to guide a choice of probability models, e.g., smoothing a distribution to eliminate roughness and zero frequencies in order to equate scores from different tests. Attention is given to efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and to computationally efficient methods for obtaining the asymptotic standard errors of the fitted frequencies and proportions. We discuss tools that can be used to diagnose the quality of the fitted frequencies for both the univariate and the bivariate cases. Five examples, using real data, are used to illustrate the methods of this paper.
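A univariate version of the fitting procedure can be sketched with a polynomial log-linear model and Newton's method; the histogram below is invented, and the quadratic-in-the-score model is one simple member of the exponential family the paper considers:

```python
import numpy as np

def loglinear_smooth(counts, degree=2, iters=50):
    """Fit a polynomial log-linear (exponential-family) model to a discrete
    score distribution by Newton's method: p_j proportional to
    exp(sum_k beta_k * j**k).  The fitted frequencies are smooth and
    strictly positive, removing roughness and zero cells."""
    x = np.arange(len(counts), dtype=float)
    B = np.column_stack([x**k for k in range(1, degree + 1)])
    B = (B - B.mean(axis=0)) / B.std(axis=0)   # standardize for stable steps
    n = np.asarray(counts, dtype=float)
    N, beta = n.sum(), np.zeros(degree)
    for _ in range(iters):
        eta = B @ beta
        p = np.exp(eta - eta.max())
        p /= p.sum()
        score = B.T @ (n - N * p)              # multinomial log-lik gradient
        W = N * (B.T @ (p[:, None] * B) - np.outer(B.T @ p, B.T @ p))
        beta += np.linalg.solve(W, score)      # Newton step
    return p

# invented histogram of test scores 0..9 with zero cells at both ends
counts = np.array([0, 2, 5, 12, 20, 18, 9, 3, 1, 0])
p_hat = loglinear_smooth(counts)               # smoothed probabilities
```

At the maximum likelihood estimate a quadratic model reproduces the data's mean and variance exactly, the moment-matching property of exponential families.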
Unsupervised Learning in Neural Computation
2002
Abstract

Cited by 5 (0 self)
In this article we consider unsupervised learning from the point of view of applying neural computation to signal and data analysis problems. The article is an introductory survey, concentrating on the main principles and categories of unsupervised learning. In neural computation, there are two classical categories of unsupervised learning methods and models: first, extensions of principal component analysis and factor analysis, and second, learning vector coding or clustering methods that are based on competitive learning. These are covered in this article. The more recent trend in unsupervised learning is to consider this problem in the framework of probabilistic generative models. If it is possible to build and estimate a model that explains the data in terms of some latent variables, key insights may be obtained into the true nature and structure of the data. This approach is also briefly reviewed. © 2002 Elsevier Science B.V. All rights reserved. 1. Introduction. Unsupervised learning is a deep concept that can be approached from very different perspectives, from psychology and cognitive science to engineering. It is often called "learning without a teacher". This implies that a learning agent, be it an animal or an artificial system, observes its surroundings and, based on these observations, adapts its behavior without being told to associate given observations with given desired responses (supervised learning), or without even being given any hints about the goodness of a given response (reinforcement learning). Usually the result of unsupervised learning is a new explanation or representation of the observation data, which will then lead to improved future responses or decisions. In machine learning and artificial intelligence, such a representation is a set of concepts and rules between these concepts, which give a symbolic explanation for the data. In artificial neural networks, ...
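As a minimal sketch of the "extensions of principal component analysis" category, Oja's single-neuron rule (my choice of illustration, not necessarily the article's own example) extracts the first principal component of some synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 2-D data with one dominant direction of variance
A = np.array([[1.0, 0.9],
              [0.0, 0.15]])
X = rng.normal(size=(5000, 2)) @ A
X -= X.mean(axis=0)

# Oja's rule: a single linear neuron y = w.x with the Hebbian-plus-decay
# update w <- w + eta * y * (x - y * w) converges to the first
# principal component of the inputs, with |w| -> 1.
w = np.array([1.0, 0.0])
eta = 0.02
for x in X:
    y = w @ x
    w = w + eta * y * (x - y * w)

# reference answer from batch eigendecomposition of the covariance
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, -1]      # leading eigenvector
```

The decay term −η·y²·w is what keeps the Hebbian update bounded; without it the weight vector would grow without limit.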
Pulsar timing and the upper limits on a gravitational wave background: a Bayesian approach
1996
Abstract

Cited by 4 (0 self)
Stringent limits on Ω, the energy density in a gravitational wave background per logarithmic frequency interval in units of the closure density, have recently been suggested by Thorsett and Dewey using observational data of PSR B1855+09. We show that their use of the Neyman-Pearson test of hypotheses cannot, in the general case, provide reliable upper limits on an unknown parameter. The alternative presented here is the calculation of the probability distribution and repartition function for Ω using a Bayesian formalism. A prior distribution must be specified, and the choice of "Jeffreys' prior" is justified on the grounds that it best represents a total lack of prior knowledge about the parameter. The Bayesian approach yields an upper limit at 95% confidence of 9.3 × 10⁻⁸ for Ωh². This limit is less stringent by a factor ...