Results 1 - 9 of 9
Consistent Specification Testing With Nuisance Parameters Present Only Under The Alternative
, 1995
"... . The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly ex ..."
Abstract

Cited by 83 (13 self)
 Add to MetaCart
The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works and how it can be extended bears closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach Central Limit Theorem and Law of Iterated Logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.
1. Introduction. In testing whether or not a parametric statistical model is correctly specified, there are a number of apparently distinct approaches one might take. T...
Forecast Combining with Neural Networks
 Journal of Forecasting
, 1989
"... This paper investigates the use of Artificial Neural Networks (ANNs) to combine time series forecasts of stock market volatility from the USA, Canada, Japan and the UK. We demonstrate that combining with nonlinear ANNs generally produces forecasts which, on the basis of outofsample forecast encomp ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
This paper investigates the use of Artificial Neural Networks (ANNs) to combine time series forecasts of stock market volatility from the USA, Canada, Japan and the UK. We demonstrate that combining with nonlinear ANNs generally produces forecasts which, on the basis of out-of-sample forecast encompassing tests and mean squared error comparisons, routinely dominate forecasts from traditional linear combining procedures. Superiority of the ANN arises because of its flexibility to account for potentially complex nonlinear relationships not easily captured by traditional linear models. KEY WORDS forecast combining; artificial neural network; encompassing test. When combining n individual forecasts f_1, ..., f_n, the single combined forecast F is traditionally obtained by selecting β weights in the linear model F = β_0 + Σ_i β_i f_i, a popular example being the simple average across forecasts (i.e. β_0 = 0, β_i = 1/n for all i). However, a linear combination
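The linear combining scheme this abstract describes (F = β_0 + Σ_i β_i f_i, with the simple average as the special case β_0 = 0, β_i = 1/n) can be sketched as follows. All forecast values here are invented for illustration, and the least-squares weights stand in for the "traditional linear combining procedures" the paper compares against:

```python
import numpy as np

# Hypothetical volatility forecasts from n = 3 individual models,
# one row per time period (all numbers are illustrative only).
forecasts = np.array([
    [0.10, 0.12, 0.09],
    [0.15, 0.14, 0.16],
    [0.11, 0.10, 0.12],
])
actual = np.array([0.11, 0.15, 0.10])

# Simple average: beta_0 = 0, beta_i = 1/n for all i.
avg_combined = forecasts.mean(axis=1)

# Least-squares combination: regress the actual series on the
# forecasts (with an intercept) to estimate beta_0, ..., beta_n.
X = np.column_stack([np.ones(len(actual)), forecasts])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
ls_combined = X @ beta
```

A nonlinear ANN combiner would replace the linear map X @ beta with a learned nonlinear function of the individual forecasts, which is the flexibility the abstract credits for the ANN's superiority.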
Using artificial neural networks to combine financial forecasts
 J. Forecasting
, 1996
"... Abstract—We conduct evolutionary programming experiments to evolve artificial neural networks for forecast combination. Using stock price volatility forecast data we find evolved networks compare favorably with a naı̈ve average combination, a least squares method, and a Kernel method on outofsampl ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
(Show Context)
Abstract—We conduct evolutionary programming experiments to evolve artificial neural networks for forecast combination. Using stock price volatility forecast data we find evolved networks compare favorably with a naïve average combination, a least squares method, and a kernel method on out-of-sample forecasting ability—the best evolved network showed strong superiority in statistical tests of encompassing. Further, we find that the result is not sensitive to the nature of the randomness inherent in the evolutionary optimization process. Index Terms—Evolutionary programming, financial forecasting, forecast combination, neural networks, self-adaptive evolutionary programming.
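The evolutionary approach to choosing combination weights can be illustrated with a minimal (mu + lambda) loop. This is a sketch, not the paper's method: the data, population size, and mutation scale are invented, and the paper's self-adaptive mutation is replaced by a fixed Gaussian step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: an actual series and three individual forecasts
# (all numbers invented for this sketch).
actual = np.array([0.11, 0.15, 0.10, 0.13, 0.12])
forecasts = np.array([
    [0.10, 0.12, 0.09],
    [0.15, 0.14, 0.16],
    [0.11, 0.10, 0.12],
    [0.14, 0.12, 0.13],
    [0.12, 0.13, 0.11],
])

def mse(w):
    """Mean squared error of the combined forecast with weights w."""
    return float(np.mean((forecasts @ w - actual) ** 2))

# Seed the population with the naive-average weights plus random
# perturbations; mutate with Gaussian noise and keep the 20 fittest.
pop = [np.full(3, 1 / 3)] + [rng.normal(1 / 3, 0.1, size=3) for _ in range(19)]
for _ in range(200):
    children = [w + rng.normal(0, 0.02, size=3) for w in pop]
    pop = sorted(pop + children, key=mse)[:20]

best = pop[0]
```

Because the naive-average weights are in the initial population and selection is elitist, the evolved weights can never do worse in-sample than the naive average; the paper's finding is the stronger out-of-sample claim.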
SOME GENERICITY ANALYSES IN NONPARAMETRIC STATISTICS
, 2002
"... Abstract. Many nonparametric estimators and tests are naturally set in infinite dimensional contexts. Prevalence is the infinite dimensional analogue of full Lebesgue measure, shyness the analogue of being a Lebesgue null set. A prevalent set of prior distributions lead to wildly inconsistent Bayesi ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract. Many nonparametric estimators and tests are naturally set in infinite dimensional contexts. Prevalence is the infinite dimensional analogue of full Lebesgue measure, shyness the analogue of being a Lebesgue null set. A prevalent set of prior distributions leads to wildly inconsistent Bayesian updating when independent and identically distributed observations arise in a class of infinite spaces that includes R^n and N. For any rate of convergence, no matter how slow, only a shy set of target functions can be approximated by consistent nonparametric regression schemes in a class that includes series approximations, kernels and other locally weighted regressions, splines, and artificial neural networks. When the instruments allow for the existence of an instrumental regression, the regression function only exists for a shy set of dependent variables. The instruments allow for existence in a counterintuitively dense set of cases; whether this set is shy is an open question. A prevalent set of integrated conditional moment (ICM) specification tests is consistent, and a dense subset of the finitely parametrized ICM tests is consistent; whether that subset is prevalent is an open question.
Nonlinearities in the Money Demand: A Neural Network Approach
www.ilades.cl/economia/publi.htm
"... A crucial element when undertaking monetary policies is to count on reliable projections regarding the likely effects of changes in income, interest rates, and other macroeconomic variables on monetary aggregates. Understandably, the estimation of money demand functions has been a dynamic field of e ..."
Abstract
 Add to MetaCart
A crucial element when undertaking monetary policies is to count on reliable projections regarding the likely effects of changes in income, interest rates, and other macroeconomic variables on monetary aggregates. Understandably, the estimation of money demand functions has been a dynamic field of econometric analysis. The frequently observed
PhD Thesis Financial Risk Management and Portfolio Optimization Using Artificial Neural Networks and Extreme Value Theory
, 2002
"... 1 ..."
Technical Solutions for Agent Movement Equation and Localization Problems
"... This paper provides technical solutions for two problems selected from Soccer Simulation 3D domain. The agent movement equation problem and the localization problem are selected from this domain and investigated using several machine learning methods, including neural networks, evolutionary learning ..."
Abstract
 Add to MetaCart
(Show Context)
This paper provides technical solutions for two problems selected from the Soccer Simulation 3D domain. The agent movement equation problem and the localization problem are selected from this domain and investigated using several machine learning methods, including neural networks, evolutionary learning and statistical learning. Results show the remarkable advantage of reinforcing ordinary multilayer perceptron neural networks with evolutionary algorithms. They also confirm the superior performance of support vector machines for regression tasks when the underlying system is neither dynamic nor chaotic.
unknown title
"... Our purpose here is to unify the apparently disparate nonparametric and nuisance parameter approaches to testing models consistently for arbitrary misspecification (i.e., with power approach one asymptotically for all deviations from the null). The insight providing this unification is that, fundame ..."
Abstract
 Add to MetaCart
Our purpose here is to unify the apparently disparate nonparametric and nuisance parameter approaches to testing models consistently for arbitrary misspecification (i.e., with power approaching one asymptotically for all deviations from the null). The insight providing this unification is that, fundamentally, all the different tests, and in particular the two of direct interest to us, are based on estimates of topological "distances" between a restricted (e.g., parametric) model and an unrestricted model. In this context, the notion of weak denseness, or weak denseness of a span, in the space containing the object of interest plays the central role. Further, verifying weak denseness is often quite easy. As we shall see, the two forms of the tests are distinct because one estimates the topological distance directly in the nonparametric approach and indirectly in the nuisance parameter approach. By identifying the topological basis for a test and applying the notion of weak denseness appropriately, the fundamental relations between many of the different specification testing approaches can be appreciated. As just one example, Eubank
unknown title
"... 270 T.H. Lee et al.. Neural netll.ork test.for neglected nonlinearit}. economic series or group of series appears to be generated by a linear model against the alternative that they are nonlinearly related. There are many tests presently available to do this. This paper considers a 'neural net ..."
Abstract
 Add to MetaCart
T.H. Lee et al., Neural network test for neglected nonlinearity
... economic series or group of series appears to be generated by a linear model against the alternative that they are nonlinearly related. There are many tests presently available to do this. This paper considers a 'neural network' test recently proposed by White (1989b), and compares its performance with several alternative tests using a Monte Carlo study. It is important to be precise about the meaning of the word 'linearity'. Throughout, we focus on a property best described as 'linearity in conditional mean'. Let {Z_t} be a stochastic process, and partition Z_t as Z_t = (y_t, X_t')', where (for simplicity) y_t is a scalar and X_t is a k x 1 vector. X_t may (but need not necessarily) contain a constant and lagged values of y_t. The process {y_t} is linear in mean conditional on X_t if P[E(y_t | X_t) = X_t'θ*] = 1 for some θ* ∈ R^k. Thus, a process exhibiting autoregressive conditional heteroskedasticity (ARCH) [Engle (1982)] may nevertheless exhibit linearity of this sort, because ARCH does not refer to the conditional mean. Our focus is appropriate whenever we are concerned with the adequacy of linear models for forecasting. The alternative of interest is that y_t is not linear in mean conditional on X_t, so that P[E(y_t | X_t) = X_t'θ] < 1 for all
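The neural network test for neglected nonlinearity that this excerpt introduces can be sketched roughly as follows. This is a simplified illustration, not the published procedure: the hidden-unit directions are drawn at random and used as-is, whereas the published test additionally handles collinearity among the activations, and the data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data in which y_t is nonlinear in mean given X_t,
# so the test is expected to reject linearity.
n = 500
x = rng.normal(size=n)
y = np.sin(2 * x) + 0.1 * rng.normal(size=n)

# Step 1: fit the linear model y = X'theta and keep the residuals.
X = np.column_stack([np.ones(n), x])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta

# Step 2: form q hidden-unit activations tanh(X'gamma_j) with
# randomly drawn direction vectors gamma_j.
q = 3
gammas = rng.uniform(-2, 2, size=(2, q))
psi = np.tanh(X @ gammas)

# Step 3: regress the residuals on X and the activations. Under
# linearity in conditional mean the statistic n * R^2 is
# asymptotically chi-squared with q degrees of freedom; compare
# it with the chi2(q) critical value (about 7.81 at 5% for q = 3).
Z = np.column_stack([X, psi])
b, *_ = np.linalg.lstsq(Z, resid, rcond=None)
r2 = (Z @ b).var() / resid.var()
stat = n * r2
```

The intuition matches the excerpt: if the conditional mean is truly X_t'θ*, the hidden-unit activations should have no explanatory power for the linear model's residuals.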