Results 1–10 of 136
A closed-form solution for options with stochastic volatility with applications to bond and currency options
Review of Financial Studies, 1993
Abstract
Cited by 1470 (5 self)
I use a new technique to derive a closed-form solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spot-asset returns. I introduce stochastic interest rates and show how to apply the model to bond options and foreign currency options. Simulations show that correlation between volatility and the spot asset's price is important for explaining return skewness and strike-price biases in the Black-Scholes (1973) model. The solution technique is based on characteristic functions and can be applied to other problems. Many plaudits have been aptly used to describe Black and Scholes' (1973) contribution to option pricing theory. Despite subsequent development of option theory, the original Black-Scholes formula for a European call option remains the most successful and widely used application. This formula is particularly useful because it relates the distribution of spot returns ...
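The paper's solution is analytic, but the model's dynamics are easy to explore numerically. Below is a minimal sketch (not from the paper) that prices a European call by Monte Carlo under Euler-discretized Heston dynamics; all parameter values are illustrative assumptions.

```python
import numpy as np

def heston_call_mc(s0=100.0, k=100.0, v0=0.04, kappa=2.0, theta=0.04,
                   sigma=0.3, rho=-0.7, r=0.0, t=1.0,
                   n_steps=252, n_paths=20000, seed=0):
    """Monte Carlo price of a European call under Euler-discretized Heston
    dynamics (the full-truncation scheme keeps the variance nonnegative)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                  # truncated variance
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2
    return np.exp(-r * t) * np.mean(np.maximum(s - k, 0.0))

price = heston_call_mc()   # roughly comparable to Black-Scholes at 20% vol
```

With v0 = theta = 0.04 the long-run volatility is 20%, so the at-the-money price lands near the corresponding Black-Scholes value; the negative rho induces the return skewness the abstract discusses.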
Group reaction time distributions and an analysis of distribution statistics
Psychological Bulletin, 1979
Abstract
Cited by 180 (24 self)
A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 10–20 reaction time observations per subject cell. Essentially, reaction times for each subject are organized in ascending order, and quantiles are calculated. The quantiles are then averaged over subjects to give group quantiles (cf. Vincent learning curves). From the group quantiles, a group reaction time distribution can be constructed. It is shown that this method of averaging is exact for certain distributions (i.e., the resulting distribution belongs to the same family as the individual distributions). Furthermore, Monte Carlo studies and application of the method to the combined data from three large experiments provide evidence that properties derived from the group reaction time distribution are much the same as average properties derived from the data of individual subjects. This article also examines how to quantitatively describe the shape of reaction time distributions. The use of moments and cumulants as sources of information about distribution shape is evaluated and rejected because of extreme dependence on long, outlier reaction times. As an alternative, the use of explicit distribution functions as approximations to reaction time distributions is considered. Despite the recent popularity of reaction time research, the use of reaction time distributions for both model testing and model development has been largely ignored. This is surprising in view of the fact that properties of distributions can prove decisive in discriminating among models (Sternberg, Note 1) and can falsify models that quite adequately describe the behavior of mean reaction time (Ratcliff & Murdock, 1976). Two methods have been used to obtain distributional or shape information. One ...
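The averaging procedure described above (sort each subject's reaction times, compute quantiles, average the quantiles across subjects) can be sketched in a few lines. The shifted-exponential data below are an illustrative stand-in for real RTs, not from the article.

```python
import numpy as np

def group_quantile_average(rts_per_subject, probs=np.arange(0.1, 1.0, 0.1)):
    """Vincentize: compute each subject's RT quantiles, then average them
    across subjects to obtain the group quantiles."""
    q = np.array([np.quantile(rts, probs) for rts in rts_per_subject])
    return q.mean(axis=0)

# Demo: 10 subjects, 15 RTs each (a 0.4 s base plus exponential tail)
rng = np.random.default_rng(0)
subjects = [0.4 + rng.exponential(0.3, size=15) for _ in range(10)]
group_q = group_quantile_average(subjects)   # nine increasing group quantiles
```

The group distribution is then read off from these quantiles; as the abstract notes, this is exact for some families (e.g. location-scale shifts of a common shape) and a close approximation in practice.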
Consequences of failure to meet assumptions underlying the analysis of variance and covariance
Review of Educational Research, 1972
Abstract
Cited by 107 (0 self)
The effects of violating the assumptions underlying the fixed-effects analyses of variance (ANOVA) and covariance (ANCOVA) on Type I and Type II error rates have been of great concern to researchers and statisticians. The major effects of violation of assumptions are now well known, after nearly four decades of ...
Unrestricted Statistical Inference with Lorenz Curves and Income Shares
Dept. of Economics, Queen's University Discussion Paper, 1982
Abstract
Cited by 85 (5 self)
The paper considers the problem of statistical inference with estimated Lorenz curves and income shares. The full variance-covariance structure of the (asymptotic) normal distribution of a vector of Lorenz curve ordinates is derived and shown to depend only on conditional first and second moments that can be estimated consistently without prior specification of the population density underlying the sample data. Lorenz curves and income shares can thus be used as tools for statistical inference instead of simply as descriptive statistics.
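The Lorenz curve ordinates in question are simple sample functionals: L(p) is the share of total income held by the poorest fraction p. A minimal empirical sketch (illustrative only, not the paper's estimator or its asymptotic machinery):

```python
import numpy as np

def lorenz_ordinates(incomes, fractions=(0.2, 0.4, 0.6, 0.8)):
    """Empirical Lorenz curve ordinates L(p): the share of total income
    held by the poorest fraction p of the sample."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum_share = np.cumsum(x) / x.sum()
    n = len(x)
    return np.array([cum_share[int(p * n) - 1] for p in fractions])

equal = lorenz_ordinates(np.ones(100))   # perfect equality gives L(p) = p
```

Any inequality pulls the ordinates strictly below the diagonal, which is what makes their joint asymptotic distribution useful for inference about inequality comparisons.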
A developmental study of the relationship between geometry and kinematics in drawing movements
Journal of Experimental Psychology: Human Perception and Performance, 1991
Abstract
Cited by 43 (2 self)
Trajectory and kinematics of drawing movements are mutually constrained by functional relationships that reduce the degrees of freedom of the hand-arm system. Previous investigations of these relationships are extended here by considering their development in children between 5 and 12 years of age. Performances in a simple motor task (the continuous tracing of elliptic trajectories) demonstrate that both the phenomenon of isochrony (increase of the average movement velocity with the linear extent of the trajectory) and the so-called two-thirds power law (relation between tangential velocity and curvature) are qualitatively present already at the age of 5. The quantitative aspects of these regularities evolve with age, however, and steady-state adult performance is not attained even by the oldest children. The power-law formalism developed in previous reports is generalized to encompass these developmental aspects of the control of movement. Two general frameworks are currently available to conceptualize the motor-control problem. Broadly, the two frameworks differ in the answer that they give to the question "Where do form and structure come from?" According to the ...
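The two-thirds power law relates tangential velocity v to curvature C as v = K · C^(-1/3). A quick numerical check (illustrative, not from the paper): harmonic tracing of an ellipse satisfies the law exactly, so a log-log regression of velocity on curvature recovers the -1/3 exponent.

```python
import numpy as np

# Harmonic tracing of an ellipse x = A*cos(t), y = B*sin(t) satisfies the
# two-thirds power law exactly: v = K * curvature**(-1/3).
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
A, B = 2.0, 1.0
dx, dy = -A * np.sin(t), B * np.cos(t)        # velocity components
ddx, ddy = -A * np.cos(t), -B * np.sin(t)     # acceleration components
v = np.hypot(dx, dy)                          # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / v**3
slope = np.polyfit(np.log(curvature), np.log(v), 1)[0]   # expect -1/3
```

(The name "two-thirds" comes from the equivalent statement in terms of angular velocity, which scales as curvature to the power 2/3.)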
The role of preparation in tuning anticipatory and reflex responses during catching
J Neurosci, 1989
Abstract
Cited by 41 (6 self)
The pattern of muscle responses associated with catching a ball in the presence of vision was investigated by independently varying the height of the drop and the mass of the ball. It was found that the anticipatory EMG responses comprised early and late components. The early components were produced at a roughly constant latency (about 130 msec) from the time of ball release. Their mean amplitude decreased with increasing height of fall. Late components represented the major build-up of muscle activity preceding the impact and were accompanied by limb flexion. Their onset time was roughly constant (about 100 msec) with respect to the time of impact (except in wrist extensors). This indicates that the timing of these responses was based on an accurate estimate of the instantaneous values of the time-to-contact (the time remaining before impact).
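Under constant gravity the time-to-contact that the abstract refers to has a closed form. The sketch below is an illustrative calculation (not the authors' analysis): the heights are made-up numbers for a dropped ball.

```python
import math

def time_to_contact(height, speed, g=9.81):
    """Remaining fall time for a ball `height` m above the hand, currently
    moving downward at `speed` m/s (constant gravity, no air resistance)."""
    # Solve height = speed*t + 0.5*g*t**2 for the nonnegative root t
    return (-speed + math.sqrt(speed**2 + 2.0 * g * height)) / g

# Ball dropped from rest; after 0.5 m of fall it is 0.3 m above the hand
speed = math.sqrt(2.0 * 9.81 * 0.5)
tau = time_to_contact(0.3, speed)   # about 85 msec remaining
```

Because this remaining time shrinks nonlinearly as the ball accelerates, timing the late EMG components at a fixed interval before impact requires an estimate of time-to-contact rather than of elapsed time since release.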
Fitting a polytomous item response model to Likerttype data
Applied Psychological Measurement, 1990
Abstract
Cited by 36 (1 self)
... algorithm to the parameter estimation problems of the normal ogive and logistic polytomous response models for Likert-type items. A rating-scale model was developed based on Samejima's (1969) graded response model. The graded response model includes a separate slope parameter for each item and an item response parameter. In the rating-scale model, the item response parameter is resolved into two parameters: the item location parameter, and the category threshold parameter characterizing the boundary between response categories. For a Likert-type questionnaire, where a single scale is employed to elicit different responses to the items, this item response model is expected to be more useful for analysis because the item parameters can be estimated separately from the threshold parameters associated with the points on a single Likert scale. The advantages of this type of model are shown by analyzing simulated data and data from the General Social Surveys. Index terms: EM algorithm, General Social Surveys, graded response model, item response model, Likert scale, marginal ...
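In Samejima's graded response model, each ordered category's probability is the difference of adjacent "at or above this category" logistic curves. A minimal sketch (parameter values are illustrative assumptions, not estimates from the article):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima's graded response model: probability of each of the
    len(b) + 1 ordered response categories for ability theta.
    a is the item slope; b is an increasing array of category thresholds
    (in the rating-scale variant, an item location plus shared offsets)."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(response >= k)
    upper = np.concatenate(([1.0], p_star))
    lower = np.concatenate((p_star, [0.0]))
    return upper - lower                              # P(response == k)

probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```

The rating-scale constraint described in the abstract amounts to writing each item's thresholds as a single item location plus category offsets shared across items, so the offsets characterize the Likert scale itself.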
Truth and consequences of ordinal differences in statistical distributions: Toward a theory of hierarchical inference
Psychological Bulletin, 1990
Abstract
Cited by 31 (8 self)
A theory is presented that establishes a dominance hierarchy of potential distinctions (order relations) between two distributions. It is proposed that it is worthwhile for researchers to ascertain the strongest possible distinction, because all weaker distinctions are logically implied. Implications of the theory for hypothesis testing, theory construction, and scales of measurement are considered. Open problems for future research are outlined. There are many occasions in psychological research when experimenters are concerned with two or more treatments and their effects on experimental groups. In most instances, the measurements assessing the treatments are assumed to be on at least an ordinal scale. The measurements typically form a sample probability distribution. The investigator may then ask the question as to whether the two distributions differ from one another in some way. The way chosen is often through the testing of summary statistics, particularly sample means, but there are techniques that test entire distributions against one another. In this article I develop a theory that shows that some differences between the distributions are stronger than others, in that the stronger ones imply the weaker ones but not vice versa. In fact, a dominance hierarchy of distinctions between two arbitrary distributions is developed: A distinction may imply another, be implied by another, be equivalent to another, or none of these. The dominance hierarchy is put into the form of an implication graph. The power of the results is that they are, except for special cases to be specified later, general across distributions. That is, most of the relations do not depend on a particular type of distribution such as the normal distribution. Furthermore, as long as the measurements are on at least an ordinal scale, again with the exception of the special cases, they ...
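One of the order relations such a hierarchy covers is first-order stochastic dominance, which implies the weaker mean difference but not vice versa. As an illustration (not from the article), dominance between two samples can be checked by comparing empirical CDFs on a common grid:

```python
import numpy as np

def first_order_dominates(x, y, grid_size=200):
    """True if sample x first-order stochastically dominates sample y,
    i.e. x's empirical CDF lies at or below y's everywhere."""
    x, y = np.sort(x), np.sort(y)
    grid = np.linspace(min(x[0], y[0]), max(x[-1], y[-1]), grid_size)
    fx = np.searchsorted(x, grid, side="right") / len(x)
    fy = np.searchsorted(y, grid, side="right") / len(y)
    return bool(np.all(fx <= fy))
```

Shifting an entire sample to the right produces dominance, whereas two samples whose CDFs cross satisfy neither direction, so only weaker distinctions (e.g. between means) remain available.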
Batch Means and Spectral Variance Estimation in Markov Chain Monte Carlo
2009
Abstract
Cited by 28 (10 self)
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense, which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
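The non-overlapping batch means estimator splits the chain into a batches of size b and scales the sample variance of the batch means up to an estimate of the asymptotic variance. A minimal sketch; the square-root batch size below is a common default and an assumption here, not the paper's recommendation:

```python
import numpy as np

def bm_mcse(chain, batch_size=None):
    """Non-overlapping batch means estimate of the Monte Carlo standard
    error of the chain's sample mean."""
    x = np.asarray(chain, dtype=float)
    n = len(x)
    if batch_size is None:
        batch_size = int(np.sqrt(n))          # common default, not optimal
    a = n // batch_size                       # number of full batches
    means = x[: a * batch_size].reshape(a, batch_size).mean(axis=1)
    var_hat = batch_size * means.var(ddof=1)  # estimates asymptotic variance
    return float(np.sqrt(var_hat / n))

# Example: MCSE for the mean of a correlated AR(1) chain
rng = np.random.default_rng(0)
chain = np.empty(20000)
chain[0] = 0.0
for i in range(1, len(chain)):
    chain[i] = 0.7 * chain[i - 1] + rng.standard_normal()
se = bm_mcse(chain)   # larger than the naive iid standard error
```

Because positive autocorrelation inflates the asymptotic variance, the batch means MCSE correctly exceeds the naive standard deviation divided by the square root of n; the paper's contribution is establishing when such estimators are consistent and how the batch size should grow.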