Results 1–10 of 153
Nonparametric Estimation of State-Price Densities Implicit in Financial Asset Prices
 JOURNAL OF FINANCE, 1997
Abstract

Cited by 334 (5 self)
Implicit in the prices of traded financial assets are Arrow-Debreu prices or, with continuous states, the state-price density (SPD). We construct a nonparametric estimator for the SPD implicit in option prices and derive its asymptotic sampling theory. This estimator provides an arbitrage-free method of pricing new, complex, or illiquid securities while capturing those features of the data that are most relevant from an asset-pricing perspective, e.g., negative skewness and excess kurtosis for asset returns, volatility "smiles" for option prices. We perform Monte Carlo experiments and extract the SPD from actual S&P 500 option prices.
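The estimator described above builds on the Breeden-Litzenberger relation: the SPD is the discounted second derivative of the call-pricing function with respect to the strike. A minimal sketch of that idea, smoothing synthetic Black-Scholes call prices with a Nadaraya-Watson kernel regression and differentiating numerically; the paper's actual estimator and bandwidth choices differ, and every parameter value below is illustrative.

```python
import math

import numpy as np

# Hedged sketch: estimate the state-price density (SPD) as the discounted
# second strike-derivative of a kernel-smoothed call-pricing function.
# The "observed" prices are synthetic Black-Scholes calls, so the true SPD
# is lognormal; all parameter values below are illustrative.

norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))

def kernel_smooth(strikes, prices, grid, h):
    """Nadaraya-Watson regression of call prices on strike with bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - strikes[None, :]) / h) ** 2)
    return (w * prices[None, :]).sum(axis=1) / w.sum(axis=1)

S0, r, sigma, tau = 100.0, 0.05, 0.2, 0.5
strikes = np.linspace(60.0, 140.0, 81)
d1 = (np.log(S0 / strikes) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
calls = S0 * norm_cdf(d1) - strikes * np.exp(-r * tau) * norm_cdf(d2)

grid = np.linspace(70.0, 130.0, 121)
smoothed = kernel_smooth(strikes, calls, grid, h=2.0)
spd = np.exp(r * tau) * np.gradient(np.gradient(smoothed, grid), grid)
mass = np.sum(0.5 * (spd[1:] + spd[:-1]) * np.diff(grid))  # ~ P(70 < S_T < 130)
```

The recovered density should be close to the lognormal implied by the synthetic prices, with total mass near one over the strike range.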
Large Sample Sieve Estimation of Semi-Nonparametric Models
 Handbook of Econometrics, 2007
Abstract

Cited by 181 (19 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite dimensional parameters. Examples are used to illustrate the general results.
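The sieve idea can be illustrated in its simplest form: least-squares regression over a polynomial sieve whose dimension grows slowly with the sample size, so an infinite-dimensional function space is replaced by a sequence of finite-dimensional approximating spaces. A hedged sketch on synthetic data; the n^(1/3) growth rate and all other choices here are illustrative, not the chapter's recommendations.

```python
import numpy as np

# Hedged sketch of a sieve least-squares estimator: the unknown regression
# function is approximated in a polynomial space whose dimension J grows
# slowly with the sample size n. Data, rate, and basis are illustrative.

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)  # "unknown" true function

J = int(np.ceil(n ** (1.0 / 3.0)))            # sieve dimension grows like n^(1/3)
basis = np.vander(x, J + 1, increasing=True)  # 1, x, ..., x^J
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

grid = np.linspace(-0.9, 0.9, 50)
fit = np.vander(grid, J + 1, increasing=True) @ coef
sup_err = np.max(np.abs(fit - np.sin(np.pi * grid)))
```

As n grows, J grows with it and the approximation error of the sieve shrinks alongside the estimation error, which is the mechanism behind the convergence-rate results the chapter surveys.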
The mathematics of learning: Dealing with data
 Notices of the American Mathematical Society, 2003
Abstract

Cited by 167 (18 self)
Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe one of its key algorithms.
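A central algorithm in this line of work is regularized least squares in a reproducing-kernel space: fit f(x) = Σᵢ cᵢ K(x, xᵢ) with coefficients solving (K + nλI)c = y. A minimal sketch on synthetic data, assuming a Gaussian kernel and illustrative hyperparameters.

```python
import numpy as np

# Hedged sketch of regularized least squares with a Gaussian kernel:
# fit f(x) = sum_i c_i K(x, x_i) by solving (K + n*lam*I) c = y.
# The kernel width and regularization parameter are illustrative.

rng = np.random.default_rng(1)
n = 200
x_train = rng.uniform(0.0, 1.0, n)
y_train = np.cos(2.0 * np.pi * x_train) + 0.1 * rng.standard_normal(n)

def gram(a, b, width=0.1):
    """Gaussian kernel matrix between point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width**2))

lam = 1e-4
K = gram(x_train, x_train)
c = np.linalg.solve(K + n * lam * np.eye(n), y_train)

x_test = np.linspace(0.1, 0.9, 30)
pred = gram(x_test, x_train) @ c
test_mse = np.mean((pred - np.cos(2.0 * np.pi * x_test)) ** 2)
```

The regularization term λ trades data fit against smoothness of the learned function, which is the stability-versus-approximation balance at the heart of learning theory.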
Pricing American options: a duality approach
 Operations Research, 2001
Abstract

Cited by 152 (5 self)
We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest. ∗An earlier draft of this paper was titled Pricing High-Dimensional American Options: A Duality
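The lower-bound half of the method can be sketched concretely: simulating any (possibly suboptimal) exercise rule forward yields a valid lower bound on the American price. Below, a fixed exercise boundary stands in for the rule implied by an approximate option price; the boundary level and all market parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the lower-bound side of the duality approach: simulate a
# (suboptimal) exercise rule forward; its value is a lower bound on the true
# American put price. The fixed boundary b stands in for a rule implied by an
# approximate option price; all parameters are illustrative.

rng = np.random.default_rng(7)
S0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 20_000, 250
dt = T / n_steps
b = 90.0  # crude exercise boundary (an assumption, not an optimized rule)

z = rng.standard_normal((n_paths, n_steps))
log_s = np.log(S0) + np.cumsum(
    (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1
)
s = np.exp(log_s)

hit = (s <= b).any(axis=1)                    # paths that reach the boundary
first = (s <= b).argmax(axis=1)               # first boundary-crossing index
t_hit = (first + 1) * dt
pay_exercise = np.exp(-r * t_hit) * (strike - s[np.arange(n_paths), first])
pay_hold = np.exp(-r * T) * np.maximum(strike - s[:, -1], 0.0)
payoff = np.where(hit, pay_exercise, pay_hold)
lower_bound = payoff.mean()                   # valid lower bound on the price
```

The upper bound of the paper requires the dual martingale representation and is not reproduced here; the point of the sketch is only that a forward-simulated stopping rule can never overstate the price.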
Option-Implied Risk-Neutral Distributions and Implied Binomial Trees: A Literature Review
 JOURNAL OF DERIVATIVES, 1999
Abstract

Cited by 73 (3 self)
In this partial and selective literature review of option-implied risk-neutral distributions and of implied binomial trees, we start by observing that in efficient markets, there is information contained in option prices, which might help us to design option pricing models. To this end, we review the numerous methods of recovering risk-neutral probability distributions from option prices at one particular time-to-expiration and their applications. Next, we extend our attention beyond one time-to-expiration to the construction of implied binomial trees, which model the stochastic process of the underlying asset. Finally, we describe extensions of implied binomial trees, which incorporate stochastic volatility, as well as other nonparametric methods.
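One of the simplest recovery methods in this literature prices butterfly spreads: with calls at strikes ΔK apart, e^(rτ)(Cᵢ₋₁ − 2Cᵢ + Cᵢ₊₁)/ΔK approximates the risk-neutral probability mass near Kᵢ. A hedged sketch on synthetic Black-Scholes prices, where the implied probabilities should recover the lognormal distribution; the parameter values are illustrative.

```python
import math

import numpy as np

# Hedged sketch: recover discrete risk-neutral probabilities from call prices
# via butterfly spreads. Prices are synthetic Black-Scholes calls, so the
# recovered masses should be nonnegative and sum to roughly one over the
# strike range; all parameters are illustrative.

norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))

S0, r, sigma, tau = 100.0, 0.05, 0.2, 0.5
dK = 5.0
strikes = np.arange(50.0, 160.0 + dK, dK)
d1 = (np.log(S0 / strikes) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
calls = S0 * norm_cdf(d1) - strikes * np.exp(-r * tau) * norm_cdf(d2)

# A butterfly centered at K_i costs C_{i-1} - 2 C_i + C_{i+1}; discounted and
# scaled by the strike spacing, it approximates the probability mass near K_i.
prob = np.exp(r * tau) * (calls[:-2] - 2.0 * calls[1:-1] + calls[2:]) / dK
total_mass = prob.sum()
```

With real quotes the raw butterfly probabilities can go negative or fail to sum to one, which is what motivates the smoothing, parametric, and tree-based refinements the review covers.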
Pricing and hedging derivative securities with neural networks and a homogeneity hint
 J. Econometrics, 2000
Abstract

Cited by 60 (10 self)
Abstract—We study the effectiveness of cross-validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta-hedging. Furthermore, the standard deviation of the MSPE of bagging is far less than that of the baseline model in all six years, and the standard deviation of the AHE of bagging is far less than that of the baseline model in five out of six years. Since we find that these regularization methods generally work as effectively as the homogeneity hint, we suggest they be used at least in cases when no appropriate hints are available. Index Terms—Bagging, Bayesian regularization, early stopping, hedging error, neural networks (NNs), option price.
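Of the techniques compared, bagging is the easiest to sketch in isolation: fit the base learner on bootstrap resamples and average the predictions, which reduces variance for unstable learners. The sketch below substitutes a deliberately unstable 1-nearest-neighbor regressor for the paper's neural networks, so it illustrates only the mechanism, on synthetic data.

```python
import numpy as np

# Hedged sketch of bagging: average predictions of a base learner trained on
# bootstrap resamples. A 1-nearest-neighbor regressor stands in for the
# paper's neural networks; data and sizes are illustrative.

rng = np.random.default_rng(3)
n = 300
x_tr = rng.uniform(0.0, 1.0, n)
y_tr = np.sin(2.0 * np.pi * x_tr) + 0.3 * rng.standard_normal(n)
x_te = np.linspace(0.05, 0.95, 200)
y_te = np.sin(2.0 * np.pi * x_te)      # noise-free target for evaluation

def one_nn(xq, xs, ys):
    """1-nearest-neighbor regression prediction at query points xq."""
    return ys[np.abs(xq[:, None] - xs[None, :]).argmin(axis=1)]

single = one_nn(x_te, x_tr, y_tr)      # one model on the full sample

B = 50
preds = np.zeros((B, x_te.size))
for j in range(B):
    idx = rng.integers(0, n, n)        # bootstrap resample with replacement
    preds[j] = one_nn(x_te, x_tr[idx], y_tr[idx])
bagged = preds.mean(axis=0)            # average over the ensemble

single_mse = np.mean((single - y_te) ** 2)
bagged_mse = np.mean((bagged - y_te) ** 2)
```

The variance reduction seen here is the same effect the paper reports for the standard deviations of the pricing and hedging errors of bagged networks.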
Support vector machine with adaptive parameters in financial time series forecasting
 IEEE Transactions on Neural Networks, 2003
Abstract

Cited by 58 (1 self)
Abstract—A novel type of learning machine called the support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation, due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer backpropagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there is comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting. Index Terms—Backpropagation (BP) neural network, nonstationarity, regularized radial basis function (RBF) neural network, support vector machine (SVM).
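The adaptive-parameters idea, weighting recent observations more heavily so the fit tracks nonstationarity, can be sketched without a full SVM implementation. An exponentially weighted ridge regression on lagged values stands in here for the paper's adaptive SVM; the decay factor, lag order, and synthetic series are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of adapting a forecaster to nonstationary data by weighting
# recent observations more heavily. A weighted ridge regression on lagged
# values stands in for the paper's adaptive SVM; decay factor, lag order,
# and data are illustrative assumptions.

rng = np.random.default_rng(5)
T = 400
t = np.arange(T)
series = np.sin(0.05 * t) * (1.0 + 0.002 * t) + 0.05 * rng.standard_normal(T)

p = 4                                        # number of lags used as inputs
X = np.column_stack([series[i:T - p + i] for i in range(p)])
y = series[p:]

weights = 0.99 ** np.arange(len(y))[::-1]    # newest observation weighted ~1
lam = 1e-3
Xw = X * weights[:, None]                    # row-weighted design matrix
beta = np.linalg.solve(X.T @ Xw + lam * np.eye(p), Xw.T @ y)

next_pred = series[-p:] @ beta               # one-step-ahead forecast
rmse = np.sqrt(np.mean((y - X @ beta) ** 2))
```

In the paper the analogous adaptation is made inside the SVM itself, by letting the regularization constant and tube width vary with the age of each training sample.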
Financial time-series prediction using least squares support vector machines within the evidence framework
 IEEE Transactions on Neural Networks, 2001
Abstract

Cited by 54 (7 self)
Abstract—For financial time series, the generation of error bars on the point prediction is important in order to estimate the corresponding risk. The Bayesian evidence framework, already successfully applied to the design of multilayer perceptrons, is applied in this paper to least squares support vector machine (LS-SVM) regression in order to infer nonlinear models for predicting a time series and the related volatility. On the first level of inference, a statistical framework is related to the LS-SVM formulation, which allows one to include the time-varying volatility of the market by an appropriate choice of several hyperparameters. By the use of equality constraints and a 2-norm, the model parameters of the LS-SVM are obtained from a linear Karush-Kuhn-Tucker system in the dual space. Error bars on the model predictions are obtained by marginalizing over the model parameters. The hyperparameters of the model are inferred on the second level of inference. The inferred hyperparameters, related to the volatility, are used to construct a volatility model within the evidence framework. Model comparison is performed on the third level of inference in order to automatically tune the parameters of the kernel function and to select the relevant inputs. The LS-SVM formulation allows one to derive analytic expressions in the feature space, and practical expressions are obtained in the dual space by replacing the inner product with the related kernel function using Mercer's theorem. The one-step-ahead prediction performance obtained on the weekly 90-day T-bill rate and the daily DAX30 closing prices shows that significant out-of-sample sign predictions can be made with respect to the Pesaran-Timmermann test statistic. Index Terms—Bayesian inference, financial time series prediction, hyperparameter selection, least squares support vector machines (LS-SVMs), model comparison, volatility modeling.
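The linear Karush-Kuhn-Tucker system mentioned above is small enough to write out: for LS-SVM regression with kernel matrix K and regularization γ, the bias b and dual coefficients α solve [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y], and predictions are f(x) = Σᵢ αᵢ k(x, xᵢ) + b. A hedged sketch with an RBF kernel; the hyperparameter values here are illustrative, whereas the paper infers them on the higher levels of inference.

```python
import numpy as np

# Hedged sketch of LS-SVM regression via its dual KKT system:
# [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
# RBF kernel; gamma and the kernel width are illustrative, not inferred.

rng = np.random.default_rng(2)
n = 150
x_train = rng.uniform(-1.0, 1.0, n)
y_train = x_train**2 + 0.05 * rng.standard_normal(n)

def rbf(a, b, width=0.3):
    """RBF (Gaussian) kernel matrix between point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width**2))

gamma = 100.0
K = rbf(x_train, x_train)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0                    # first row enforces sum(alpha) = 0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y_train))
solution = np.linalg.solve(A, rhs)
bias, alpha = solution[0], solution[1:]

x_new = np.linspace(-0.8, 0.8, 20)
f_new = rbf(x_new, x_train) @ alpha + bias   # f(x) = sum_i alpha_i k(x,x_i) + b
max_err = np.max(np.abs(f_new - x_new**2))
```

Because the 2-norm loss and equality constraints turn the usual SVM quadratic program into this single linear solve, training is a one-shot operation, which is what makes the repeated refits of the evidence framework tractable.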
Sequential Monte Carlo Methods to Train Neural Network Models
2000
Abstract

Cited by 45 (7 self)
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent / sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving online, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
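The flavor of the hybrid algorithm can be conveyed with a toy model: track a single weight w in y = w·x + noise with sampling-importance-resampling particles, nudging each particle with a gradient step before reweighting. Everything here (noise levels, learning rate, jitter) is an illustrative assumption; the actual HySIR operates on full network weight vectors.

```python
import numpy as np

# Hedged toy version of hybrid gradient-descent / SIR training: particles
# track a single weight w in y = w * x + noise. Each step takes a gradient
# move, reweights particles by the Gaussian likelihood, then resamples.
# Noise levels, learning rate, and jitter are illustrative assumptions.

rng = np.random.default_rng(4)
w_true, obs_noise = 2.0, 0.1
n_particles, n_steps, lr = 200, 60, 0.05

particles = rng.normal(0.0, 2.0, n_particles)   # prior over the weight
for _ in range(n_steps):
    x = rng.uniform(-1.0, 1.0)
    y = w_true * x + obs_noise * rng.standard_normal()
    # gradient step on the squared error (the hybrid "descent" component)
    particles += lr * (y - particles * x) * x
    # importance weights from the Gaussian likelihood, then resample
    log_w = -0.5 * ((y - particles * x) / obs_noise) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx] + 0.01 * rng.standard_normal(n_particles)

w_hat = particles.mean()   # posterior-mean estimate of the weight
```

The surviving particle cloud approximates the posterior over the weight, which is what allows the method to report predictive densities, not just point forecasts, for option prices.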