Nonparametric Estimation of State-Price Densities Implicit in Financial Asset Prices
Journal of Finance, 1997
Cited by 191 (3 self)
Implicit in the prices of traded financial assets are Arrow-Debreu prices or, with continuous states, the state-price density (SPD). We construct a nonparametric estimator for the SPD implicit in option prices and derive its asymptotic sampling theory. This estimator provides an arbitrage-free method of pricing new, complex, or illiquid securities while capturing those features of the data that are most relevant from an asset-pricing perspective, e.g., negative skewness and excess kurtosis for asset returns, and volatility "smiles" for option prices. We perform Monte Carlo experiments and extract the SPD from actual S&P 500 option prices.
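The identity behind such estimators is the Breeden-Litzenberger relation: the SPD is the discounted second derivative of the call-price function with respect to the strike. A minimal sketch, which differences Black-Scholes prices for illustration (the paper instead smooths observed option prices nonparametrically before differencing):

```python
import math

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes European call price (used here only to generate prices)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * tau) * N(d2)

# Breeden-Litzenberger: SPD(K) = exp(r*tau) * d^2 C / dK^2,
# approximated by a central second difference across strikes.
S, r, sigma, tau = 100.0, 0.05, 0.2, 0.5
dK = 0.1
strikes = [60.0 + i * dK for i in range(int(80.0 / dK) + 1)]
calls = [bs_call(S, K, r, sigma, tau) for K in strikes]
spd = [math.exp(r * tau) * (calls[i - 1] - 2 * calls[i] + calls[i + 1]) / dK**2
       for i in range(1, len(calls) - 1)]

mass = sum(spd) * dK   # close to 1 over this wide strike range
```

Under Black-Scholes inputs the recovered density is the risk-neutral lognormal; applied to market prices, the same differencing exposes skewness and smile effects directly.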
The mathematics of learning: Dealing with data
Notices of the American Mathematical Society, 2003
Cited by 106 (15 self)
Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe one of its key algorithms.
Large Sample Sieve Estimation of Semi-Nonparametric Models
Handbook of Econometrics, 2007
Cited by 89 (13 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications, such as the introduction of infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models, with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion, and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large-sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, and root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
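A minimal illustration of the idea, assuming a simple power-series sieve for a univariate regression function; the chapter covers far more general sieves, criteria, and constraints:

```python
import numpy as np

rng = np.random.default_rng(0)

def sieve_fit(x, y, dim):
    """Least-squares fit over a polynomial sieve space of dimension `dim`."""
    X = np.vander(x, dim, increasing=True)   # basis 1, x, ..., x^(dim-1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

n = 2000
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)   # unknown smooth target

# The sieve dimension grows slowly with the sample size; the rule below
# is an illustrative choice, not the chapter's recommendation.
dim = max(6, int(round(n ** 0.2)))
coef = sieve_fit(x, y, dim)

grid = np.linspace(-0.9, 0.9, 50)
fit = np.vander(grid, dim, increasing=True) @ coef
err = float(np.max(np.abs(fit - np.sin(3 * grid))))
```

The key point is that each sieve space is finite-dimensional (so estimation is an ordinary least-squares problem), while letting the dimension grow with n recovers the nonparametric target.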
Sequential Monte Carlo Methods to Train Neural Network Models
2000
Cited by 34 (8 self)
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent / sampling-importance-resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving online, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
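A toy version of the idea, shrunk to a one-weight "network" so the sampling-importance-resampling step stays visible. HySIR additionally moves each particle by a gradient step, which is omitted here; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Track the posterior over a single network weight w in y = tanh(w*x) + noise
# with a sequential sampling-importance-resampling particle filter.
w_true, sig = 1.5, 0.2
n_part = 500
particles = rng.normal(0.0, 2.0, n_part)      # prior over the weight

for t in range(200):
    x = rng.uniform(-2, 2)
    y = np.tanh(w_true * x) + sig * rng.standard_normal()
    particles += 0.02 * rng.standard_normal(n_part)      # random-walk jitter
    w_like = np.exp(-0.5 * ((y - np.tanh(particles * x)) / sig) ** 2)
    w_like /= w_like.sum()
    idx = rng.choice(n_part, n_part, p=w_like)           # resampling step
    particles = particles[idx]

w_hat = float(particles.mean())   # posterior mean concentrates near w_true
```

The particle cloud is a sequential approximation to the weight posterior, which is what lets the method deliver predictive densities rather than point estimates.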
Financial time-series prediction using least squares support vector machines within the evidence framework
IEEE Transactions on Neural Networks, 2001
Cited by 33 (5 self)
For financial time series, the generation of error bars on the point prediction is important in order to estimate the corresponding risk. The Bayesian evidence framework, already successfully applied to the design of multilayer perceptrons, is applied in this paper to least squares support vector machine (LS-SVM) regression in order to infer nonlinear models for predicting a time series and the related volatility. On the first level of inference, a statistical framework is related to the LS-SVM formulation, which makes it possible to include the time-varying volatility of the market through an appropriate choice of several hyperparameters. By the use of equality constraints and a 2-norm, the model parameters of the LS-SVM are obtained from a linear Karush-Kuhn-Tucker system in the dual space. Error bars on the model predictions are obtained by marginalizing over the model parameters. The hyperparameters of the model are inferred on the second level of inference. The inferred hyperparameters, related to the volatility, are used to construct a volatility model within the evidence framework. Model comparison is performed on the third level of inference in order to automatically tune the parameters of the kernel function and to select the relevant inputs. The LS-SVM formulation allows analytic expressions to be derived in the feature space, and practical expressions are obtained in the dual space by replacing the inner product with the related kernel function using Mercer's theorem. The one-step-ahead prediction performance obtained on the weekly 90-day T-bill rate and the daily DAX30 closing prices shows that significant out-of-sample sign predictions can be made with respect to the Pesaran-Timmermann test statistic.
Index Terms: Bayesian inference, financial time-series prediction, hyperparameter selection, least squares support vector machines (LS-SVMs), model comparison, volatility modeling.
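The linear Karush-Kuhn-Tucker system mentioned above is small enough to sketch directly. Assuming the standard LS-SVM regression formulation with an RBF kernel (kernel width and regularization constant below are illustrative, not the paper's inferred hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, width=0.5):
    """Gaussian RBF kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

# LS-SVM regression: equality constraints and a 2-norm reduce training to
#   [ 0      1^T        ] [b    ]   [0]
#   [ 1      K + I/gam  ] [alpha] = [y]
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + 0.05 * rng.standard_normal(60)
gam = 100.0
n = len(x)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = rbf(x, x) + np.eye(n) / gam
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

# Prediction uses the kernel expansion f(x*) = sum_i alpha_i k(x*, x_i) + b.
grid = np.linspace(-2.5, 2.5, 41)
pred = rbf(grid, x) @ alpha + b
err = float(np.max(np.abs(pred - np.sin(grid))))
```

Because training is one linear solve rather than a quadratic program, the Bayesian levels of inference in the paper can wrap around it cheaply.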
Pricing and hedging derivative securities with neural networks and a homogeneity hint
Journal of Econometrics, 2000
Cited by 32 (8 self)
We study the effectiveness of cross-validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta-hedging. Furthermore, the standard deviation of the MSPE of bagging is far less than that of the baseline model in all six years, and the standard deviation of the AHE of bagging is far less than that of the baseline model in five out of six years. Since we find that in general these regularization methods work as effectively as the homogeneity hint, we suggest they be used at least when no appropriate hints are available.
Index Terms: Bagging, Bayesian regularization, early stopping, hedging error, neural networks (NNs), option price.
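Bagging itself is easy to sketch. Here a cubic-polynomial base learner stands in for the paper's neural-network pricing models; the mechanics of averaging bootstrap-resampled fits are the same:

```python
import numpy as np

rng = np.random.default_rng(3)

# Bagging: fit the base regressor on bootstrap resamples and average.
x = rng.uniform(-1, 1, 200)
y = np.sin(2 * x) + 0.2 * rng.standard_normal(200)

def fit_predict(xb, yb, grid, deg=3):
    """Base learner: ordinary least-squares polynomial fit."""
    return np.polyval(np.polyfit(xb, yb, deg), grid)

grid = np.linspace(-0.9, 0.9, 25)
preds = []
for _ in range(50):                       # 50 bootstrap replicates
    idx = rng.integers(0, len(x), len(x))
    preds.append(fit_predict(x[idx], y[idx], grid))
bagged = np.mean(preds, axis=0)
err = float(np.mean((bagged - np.sin(2 * grid)) ** 2))
```

Averaging across resamples mostly reduces the variance of the base learner, which is consistent with the paper's finding that bagging shrinks the standard deviation of the pricing and hedging errors.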
Memory-Universal Prediction of Stationary Random Processes
IEEE Transactions on Information Theory, 1998
Cited by 26 (1 self)
We consider the problem of one-step-ahead prediction of a real-valued, stationary, strongly mixing random process {X_i}_{i=-∞}^{∞}. The best mean-square predictor of X_0 is its conditional mean given the entire infinite past {X_i}_{i=-∞}^{-1}. Given a sequence of observations X_1, X_2, ..., X_N, we propose estimators for the conditional mean based on sequences of parametric models of increasing memory and of increasing dimension, for example, neural networks and Legendre polynomials. The proposed estimators select both the model memory and the model dimension, in a data-driven fashion, by minimizing certain complexity-regularized least squares criteria. When the underlying predictor function has a finite memory, we establish that the proposed estimators are memory-universal: the proposed estimators, which do not know the true memory, deliver the same statistical performance (rates of integrated mean-squared error) as that delivered by estimators that know the true memory. Furthermore, when the underlying predictor function does not have a finite memory, we establish that the estimator based on Legendre polynomials is consistent.
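A stripped-down version of data-driven memory selection: fit linear predictors of memory m and pick the m that minimizes a complexity-regularized least-squares criterion. The BIC-style penalty below is an illustrative stand-in for the paper's complexity term:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(2) process, so the true predictor memory is 2.
N = 3000
X = np.zeros(N)
for t in range(2, N):
    X[t] = 0.5 * X[t - 1] - 0.3 * X[t - 2] + rng.standard_normal()

def penalized_risk(m):
    """Empirical squared error of the best memory-m linear predictor,
    plus a BIC-style complexity penalty (illustrative choice)."""
    D = np.column_stack([X[m - k - 1:N - k - 1] for k in range(m)])
    target = X[m:]
    coef, *_ = np.linalg.lstsq(D, target, rcond=None)
    resid = target - D @ coef
    return float(np.mean(resid ** 2)) + 2.0 * m * np.log(N) / N

best_m = min(range(1, 9), key=penalized_risk)   # selects the true memory
```

The estimator never sees the true memory; the penalty stops the empirical risk's small spurious decreases at m > 2 from being rewarded, which is the mechanism behind memory-universality.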
Support vector machine with adaptive parameters in financial time series forecasting
IEEE Transactions on Neural Networks, 2003
Cited by 25 (1 self)
A novel type of learning machine called the support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation, due to its remarkable generalization performance. This paper deals with the application of SVM to financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and the generalization performance of SVM and the regularized RBF neural network is comparable. Furthermore, the free parameters of SVM have a great effect on generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.
Index Terms: Back-propagation (BP) neural network, nonstationarity, regularized radial basis function (RBF) neural network, support vector machine (SVM).
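The adaptive-parameter idea, weighting recent observations more heavily because the series is nonstationary, can be sketched without a full SVR implementation. Exponentially weighted least squares on a regime-switching AR(1) stands in here for the paper's sample-dependent regularization; the decay constant is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(1) series whose coefficient shifts mid-sample (nonstationarity).
N = 2000
a = np.where(np.arange(N) < N // 2, 0.2, 0.8)   # regime change at t = N/2
X = np.zeros(N)
for t in range(1, N):
    X[t] = a[t] * X[t - 1] + 0.1 * rng.standard_normal()

lags, target = X[:-1], X[1:]
w = 0.995 ** np.arange(N - 1)[::-1]             # recent samples weigh most

# Weighted least squares tracks the current regime; the flat fit is a
# compromise between the two regimes.
a_weighted = float(np.sum(w * lags * target) / np.sum(w * lags ** 2))
a_flat = float(np.sum(lags * target) / np.sum(lags ** 2))
```

The weighted estimate lands near the current coefficient 0.8, while the unweighted one is pulled toward a blend of both regimes, which is the failure mode the paper's adaptive parameters address.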
Evaluating Hedging Errors: An Asymptotic Approach
Mathematical Finance, 2005
Cited by 17 (3 self)
We propose a methodology for evaluating the hedging errors of derivative securities due to the discreteness of trading times or of the observation times of market prices, or both. Utilizing a weak convergence approach, we derive the asymptotic distributions of the hedging errors as the discreteness disappears in several situations. First, we examine the hedging error due to discrete-time trading when the true strategy is known, which generalizes the result of Bertsimas, Kogan, and Lo (2000) to continuous Itô processes. Then we consider a data-driven strategy, when the true strategy is unknown. This strategy is free of parametric model assumptions and is therefore expected to serve as a benchmark for the evaluation of parametric strategies. Finally, we consider a case study of the Black-Scholes delta-hedging strategy when the volatility is unknown in the proposed framework. The results obtained open the prospect of further development of the framework, under which various parametric strategies could be compared in a unified manner.
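A simulation of the first case, Black-Scholes delta hedging at discretely spaced trading dates under geometric Brownian motion, illustrates the quantity being studied: the replication error shrinks as the trading grid is refined (all market parameters below are illustrative):

```python
import math, random

random.seed(6)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_delta(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

def bs_call(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def hedge_error(n_steps, n_paths=2000):
    """RMS discrete-hedging error for an at-the-money call under GBM (r = 0)."""
    S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0
    dt = T / n_steps
    errs = []
    for _ in range(n_paths):
        S, cash, pos = S0, bs_call(S0, K, r, sigma, T), 0.0
        for i in range(n_steps):
            tau = T - i * dt
            new_pos = bs_delta(S, K, r, sigma, tau)   # rebalance to new delta
            cash -= (new_pos - pos) * S
            pos = new_pos
            S *= math.exp((r - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
        errs.append(cash + pos * S - max(S - K, 0.0))  # terminal P&L vs payoff
    return (sum(e * e for e in errs) / n_paths) ** 0.5

rms_coarse = hedge_error(10)
rms_fine = hedge_error(100)   # finer trading grid, smaller error
```

The paper characterizes the limiting distribution of exactly this kind of error as the discreteness vanishes, rather than estimating it by simulation.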