Results 1–10 of 31
A nonparametric approach to pricing and hedging derivative securities via learning networks
Journal of Finance, 1994
"... http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, noncom ..."
Abstract

Cited by 104 (4 self)
 Add to MetaCart
http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, noncommercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at
A review of dimension reduction techniques
1997
"... The problem of dimension reduction is introduced as a way to overcome the curse of the dimensionality when dealing with vector data in highdimensional spaces and as a modelling tool for such data. It is defined as the search for a lowdimensional manifold that embeds the highdimensional data. A cl ..."
Abstract

Cited by 30 (4 self)
 Add to MetaCart
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen’s maps or the generalised topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
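A minimal sketch of the core dimension-reduction idea, using principal component analysis (one of the techniques surveyed above) via NumPy; the function name and random data are illustrative only:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of an (n x d) data matrix X onto the top-k
    principal components, yielding an (n x k) matrix of scores."""
    Xc = X - X.mean(axis=0)                      # center the data
    # Rows of Vt are principal directions, ordered by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # low-dimensional scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # toy high-dimensional data
Z = pca_reduce(X, 2)
print(Z.shape)  # (100, 2)
```

Here the "low-dimensional manifold" is the linear subspace spanned by the top principal directions; the nonlinear methods surveyed (principal curves, topographic maps) relax that linearity.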
Uniqueness Of Weights For Neural Networks
In Artificial Neural Networks with Applications in Speech and Vision, 1993
"... Introduction In most applications dealing with learning and pattern recognition, neural nets are employed as models whose parameters, or "weights," must be fit to training data. Gradient descent and other algorithms are used in order to minimize an error functional, which penalizes mismatches betwe ..."
Abstract

Cited by 23 (8 self)
 Add to MetaCart
Introduction. In most applications dealing with learning and pattern recognition, neural nets are employed as models whose parameters, or "weights," must be fit to training data. Gradient descent and other algorithms are used to minimize an error functional, which penalizes mismatches between the desired outputs and those produced by a candidate net with a fixed architecture and varying weights. Many numerical issues arise naturally with such a design approach, in particular: (i) the possibility of local minima which are not globally optimal, and (ii) the possibility of multiple global minimizers. The first question has been dealt with by many different authors, see for instance [5, 13, 14], and will not be reviewed here. Regarding point (ii), observe that there are obvious transformations that leave the behavior of a network invariant, such as interchanges of all incoming and outgoing weights between two neurons, that is, the relabeling of neurons ...
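The behavior-preserving transformations mentioned above (relabeling hidden units, and sign-flipping the weights around an odd activation such as tanh) are easy to verify numerically. An illustrative sketch, not code from the paper:

```python
import numpy as np

def mlp(x, W1, b1, W2):
    """One-hidden-layer net with tanh activations."""
    return np.tanh(x @ W1 + b1) @ W2

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 5))
b1 = rng.normal(size=5)
W2 = rng.normal(size=(5, 1))
y = mlp(x, W1, b1, W2)

# Symmetry 1: relabel hidden units 0 and 1 (swap their incoming
# weights, biases, and outgoing weights together).
perm = [1, 0, 2, 3, 4]
assert np.allclose(y, mlp(x, W1[:, perm], b1[perm], W2[perm]))

# Symmetry 2: flip the sign of one unit's incoming and outgoing
# weights; tanh is odd, so the network's output is unchanged.
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[:, 2] *= -1.0
b1f[2] *= -1.0
W2f[2] *= -1.0
assert np.allclose(y, mlp(x, W1f, b1f, W2f))
```

Both transformed weight vectors are distinct global minimizers of any error functional the original weights minimize, which is why uniqueness can only hold up to these symmetries.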
Nonlinear Partial Least Squares
1995
"... We propose a new nonparametric regression method for highdimensional data, nonlinear partial least squares (NLPLS). NLPLS is motivated by projectionbased regression methods, e.g., partial least squares (PLS), projection pursuit (PPR), and feedforward neural networks. The model takes the form of a ..."
Abstract

Cited by 16 (0 self)
 Add to MetaCart
We propose a new nonparametric regression method for high-dimensional data, nonlinear partial least squares (NLPLS). NLPLS is motivated by projection-based regression methods, e.g., partial least squares (PLS), projection pursuit regression (PPR), and feedforward neural networks. The model takes the form of a composition of two functions. The first function in the composition projects the predictor variables onto a lower-dimensional curve or surface, yielding scores, and the second predicts the response variable from the scores. We implement NLPLS with feedforward neural networks. NLPLS will often produce a more parsimonious model (fewer score vectors) than projection-based methods, and the model is well suited for detecting outliers and future covariates requiring extrapolation. The scores are also shown to have useful interpretations. We also extend the model to multiple response variables and discuss situations when multiple response variables ...
Implementing Projection Pursuit Learning
1996
"... This paper examines the implementation of projection pursuit regression (PPR) in the context of machine learning and neural networks. We propose a parametric PPR with direct training which achieves improved training speed and accuracy when compared with nonparametric PPR. Analysis and simulations ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
This paper examines the implementation of projection pursuit regression (PPR) in the context of machine learning and neural networks. We propose a parametric PPR with direct training which achieves improved training speed and accuracy when compared with nonparametric PPR. Analysis and simulations are done for heuristics to choose good initial projection directions. A comparison of a projection pursuit learning network with a one hidden layer sigmoidal neural network shows why grouping hidden units in a projection pursuit learning network is useful. Learning robot arm inverse dynamics is used as an example problem.
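As an illustration of the direction-search step that PPR-style methods build on (not the heuristics of the paper itself), one crude strategy is to score a set of candidate projection directions by how well a low-degree polynomial ridge fit along each direction explains the response:

```python
import numpy as np

def best_ridge_direction(X, y, candidates, deg=3):
    """Return (mse, direction, poly_coeffs) for the candidate direction
    whose polynomial ridge fit g(a^T x) best explains y."""
    best = None
    for a in candidates:
        a = a / np.linalg.norm(a)          # projection directions are unit norm
        t = X @ a                          # one-dimensional projections
        coef = np.polyfit(t, y, deg)       # fit a polynomial g along the projection
        mse = np.mean((np.polyval(coef, t) - y) ** 2)
        if best is None or mse < best[0]:
            best = (mse, a, coef)
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
a_true = np.array([0.6, 0.8])              # hidden ridge direction
y = np.sin(X @ a_true)                     # response depends on one projection only
mse, a_hat, _ = best_ridge_direction(X, y, rng.normal(size=(50, 2)))
```

In a full PPR fit this direction search would alternate with re-smoothing and with fitting further ridge terms to the residuals.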
Neural Networks in System Identification
1994
"... . Neural Networks are nonlinear blackbox model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable nonlinear systems. When estimating the parameters in these structures, there is also good adaptability to conce ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
Neural Networks are nonlinear black-box model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable nonlinear systems. When estimating the parameters in these structures, there is also good adaptability: the estimation concentrates on those parameters that are most important for the particular data set.

Key Words: Neural Networks, Parameter Estimation, Model Structures, Nonlinear Systems.

1. Executive Summary

1.1. Purpose. The purpose of this tutorial is to explain how Artificial Neural Networks (NN) can be used to solve problems in System Identification, to focus on some key problems and algorithmic questions, and to point to the relationships with more traditional estimation techniques. We also try to remove some of the "mystique" that has sometimes accompanied the Neural Network approach.

1.2. What's the problem? The identification problem is to infer relationships between past inputs ...
On Kolmogorov's Representation of Functions of Several Variables by Functions of One Variable
2003
"... This paper proposes a nonparametric estimator for general regression functions with multiple regressors. The method used here is motivated by a remarkable result derived by Kolmogorov (1957) and later tightened by Lorentz (1966). In short, any continuous function f(x 1 ; : : : ; x d ) has the repres ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
This paper proposes a nonparametric estimator for general regression functions with multiple regressors. The method used here is motivated by a remarkable result derived by Kolmogorov (1957) and later tightened by Lorentz (1966). In short, any continuous function $f(x_1, \ldots, x_d)$ has the representation $\sum_{k=1}^{2d+1} \tilde{g}(\lambda_1 \tilde{\phi}_k(x_1) + \cdots + \lambda_d \tilde{\phi}_k(x_d))$, where $\tilde{g}(\cdot)$ is a continuous function, each $\tilde{\phi}_k(\cdot)$, $k = 1, \ldots, 2d+1$, is Lipschitz of order one and strictly increasing, and each $\lambda_j$, $j = 1, \ldots, d$, is some constant. Extending this result to the case of smoother functions, we restrict $f(\cdot)$ to be of the form $\sum_{k=1}^{t} g_k(\lambda_{k,1} \phi_k(x_1) + \cdots + \lambda_{k,d} \phi_k(x_d))$, $1 \le t < \infty$, where both $g_k(\cdot)$ and $\phi_k(\cdot)$ are three times continuously differentiable and $\phi_k(\cdot)$ is nondecreasing. These functions are estimated using regression cubic B-splines, which have excellent numerical and approximating properties. One of the main contributions of this paper is that we develop a method for imposing monotonicity on the cubic B-splines, a priori, such that the estimator is dense in the set of all monotonic cubic B-splines. The method requires only $2(r+1)+1$ restrictions per each $\phi_k(\cdot)$, where $r$ is the number of interior knots. Rates of convergence in $L_2$ are the same as the optimal rate for the one-dimensional case. A simulation experiment shows that the estimator works well when $t$ is small, $f(\cdot)$ does not belong to this class of linear superpositions, and optimization is performed using the backfitting algorithm. The monotonic restriction has many other applications besides the one presented here, such as estimating a demand function. With only $r+2$ more constraints, it is also possible to impose ...
Ridge Functions, Sigmoidal Functions and Neural Networks
In Approximation Theory VII, 1993
"... . This paper considers mainly approximation by ridge functions. Fix a point a 2 IR n and a function g : IR ! IR. Then the function f : IR n ! IR defined by f(x) = g(ax), x 2 IR n , is a ridge or plane wave function. A sigmoidal function is a particular example of the function g which closely ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
This paper considers mainly approximation by ridge functions. Fix a point $a \in \mathbb{R}^n$ and a function $g : \mathbb{R} \to \mathbb{R}$. Then the function $f : \mathbb{R}^n \to \mathbb{R}$ defined by $f(x) = g(a \cdot x)$, $x \in \mathbb{R}^n$, is a ridge or plane wave function. A sigmoidal function is a particular example of such a $g$, one which closely resembles 1 at $\infty$ and 0 at $-\infty$. This paper discusses approximation problems involving general ridge functions and specific research connected with sigmoidal functions. The type of problems discussed leads naturally to a consideration of neural networks, particularly multilayered feedforward networks. Most important is the existence of constructive proofs of the fact that networks of this type can approximate a given continuous function to any desired accuracy. A mathematician's view of these networks may be found in Section 5.

§1. Introduction. There has been much attention paid recently to the development of simple strategies for approximating a function $f : D \to \mathbb{R}$, where $D$ is some suitable ...
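The definitions above are easy to state in code. A sketch (illustrative names, standard logistic sigmoid) of a ridge function, which is constant along hyperplanes orthogonal to the direction $a$:

```python
import numpy as np

def sigmoid(t):
    """Sigmoidal profile: tends to 1 as t -> +inf and to 0 as t -> -inf."""
    return 1.0 / (1.0 + np.exp(-t))

def ridge_fn(x, a, g=sigmoid):
    """f(x) = g(a . x): a ridge (plane wave) function in direction a."""
    return g(x @ a)

a = np.array([1.0, -2.0])
x1 = np.array([2.0, 1.0])            # a . x1 = 0
x2 = x1 + np.array([2.0, 1.0])       # [2, 1] is orthogonal to a, so a . x2 = 0
# A ridge function cannot distinguish points on the same hyperplane a . x = c:
assert np.isclose(ridge_fn(x1, a), ridge_fn(x2, a))
print(ridge_fn(x1, a))  # 0.5, since sigmoid(0) = 0.5
```

A one-hidden-layer feedforward network is exactly a linear combination of such sigmoidal ridge functions, which is the connection the paper develops.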
Classification and multiple regression through projection pursuit
Stanford University, 1985
"... Projection pursuit regression is generalized to multivariate responses. By viewing classification as a special case, this generalization serves to extend classification and discriminant analysis via the projection pursuit approach. ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Projection pursuit regression is generalized to multivariate responses. By viewing classification as a special case, this generalization serves to extend classification and discriminant analysis via the projection pursuit approach.
Logistic Response Projection Pursuit
1993
"... A highly flexible nonparametric regression model for predicting a response y given covariates x is the projection pursuit regression (PPR) model y = h(x) = fi 0 + P j fi j f j (ff T j x), where the f j are general smooth functions with mean zero and norm one, and P d k=1 ff 2 kj = 1. With a ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
A highly flexible nonparametric regression model for predicting a response $y$ given covariates $x$ is the projection pursuit regression (PPR) model $y = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x)$, where the $f_j$ are general smooth functions with mean zero and norm one, and $\sum_{k=1}^{d} \alpha_{kj}^2 = 1$. With a binary response $y$, the common approach to fitting a PPR model is to fit $y$ to minimize average squared error without explicitly considering the binary nature of the response. We develop an alternative logistic response projection pursuit model, in which $y$ is taken to be binomial($p$), where $\log(\frac{p}{1-p}) = h(x)$. This may be fit by minimizing either binomial deviance or average squared error. We compare the logistic response models to the linear model on simulated data. In addition, we develop a generalized projection pursuit framework for exponential family models. We also present a smoothing spline based PPR algorithm and compare it to supersmoother and polynomial based PPR algorithms...
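A minimal sketch of the prediction step of the logistic response model described above (illustrative code, not the authors' implementation): the linear predictor is the PPR sum $h(x)$, and $p$ is recovered through the inverse logit.

```python
import numpy as np

def logistic_ppr_prob(X, beta0, betas, alphas, fs):
    """p(y=1|x) under log(p/(1-p)) = beta0 + sum_j beta_j f_j(alpha_j^T x)."""
    h = beta0 + sum(b * f(X @ a) for b, f, a in zip(betas, fs, alphas))
    return 1.0 / (1.0 + np.exp(-h))          # inverse logit maps h into (0, 1)

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 3))
alphas = [np.array([1.0, 0.0, 0.0]),         # unit-norm projection directions
          np.array([0.0, 0.6, 0.8])]
betas = [1.5, -0.7]
fs = [np.tanh, np.sin]                        # stand-ins for the fitted smooth f_j
p = logistic_ppr_prob(X, 0.2, betas, alphas, fs)
```

Unlike squared-error PPR fit directly to a 0/1 response, the output here is always a valid probability, which is the point of routing $h(x)$ through the logit link.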