How Effective are Neural Networks at Forecasting and Prediction? A Review and Evaluation
 Journal of Forecasting, 17, 481–495 (1998)
, 1998
Abstract

Cited by 39 (0 self)
Despite increasing applications of artificial neural networks (NNs) to forecasting over the past decade, opinions regarding their contribution are mixed. Evaluating research in this area has been difficult, due to lack of clear criteria. We identified eleven guidelines that could be used in evaluating this literature. Using these, we examined applications of NNs to business forecasting and prediction. We located 48 studies done between 1988 and 1994. For each, we evaluated how effectively the proposed technique was compared with alternatives (effectiveness of validation) and how well the technique was implemented (effectiveness of implementation). We found that eleven of the studies were both effectively validated and implemented. Another eleven studies were effectively validated and produced positive results, even though there were some problems with respect to the quality of their NN implementations. Of these 22 studies, 18 supported the potential of NNs for forecasting and prediction.
Dynamical Recurrent Neural Networks: Towards Environmental Time Series Prediction
, 1995
Abstract

Cited by 26 (8 self)
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1994) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal recurrent backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided with the model will allow the modern telescopes to be preset, a few hou...
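The building block this abstract describes — a synapse modeled as an autoregressive (IIR) filter rather than a single weight — can be sketched in a few lines. This is an illustrative toy, not the DRNN training procedure: the coefficient tuples `a` (feedback) and `b` (feedforward) are invented values, and the filter recursion y(t) = Σᵢ b[i]·x(t−i) + Σⱼ a[j]·y(t−1−j) is the standard AR-filter form the paper builds on.

```python
# Illustrative sketch: a synapse as an autoregressive (IIR) filter.
# Coefficients are invented; a real DRNN would learn them by
# temporal recurrent backpropagation.
def ar_synapse(inputs, a=(0.5,), b=(1.0,)):
    """y(t) = sum_i b[i]*x(t-i) + sum_j a[j]*y(t-1-j)."""
    y = []
    for t, _ in enumerate(inputs):
        feedforward = sum(bi * inputs[t - i]
                          for i, bi in enumerate(b) if t - i >= 0)
        feedback = sum(aj * y[t - 1 - j]
                       for j, aj in enumerate(a) if t - 1 - j >= 0)
        y.append(feedforward + feedback)
    return y

# A unit impulse decays geometrically through the feedback path,
# giving the synapse its internal memory of past activity.
print(ar_synapse([1, 0, 0, 0]))  # [1.0, 0.5, 0.25, 0.125]
```

The feedback term is what gives each synapse its own internal state, which is why the networks need no explicit external memory of past inputs.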
Parallel Computer Workload Modeling with Markov Chains
 Proc. of the 10th Job Scheduling Strategies for Parallel Processing (JSSPP), volume 3277 of Lecture Notes in Computer Science
, 2004
Abstract

Cited by 26 (3 self)
In order to evaluate different scheduling strategies for parallel computers, simulations are often executed. As the scheduling quality highly depends on the workload that is served on the parallel machine, a representative workload model is required. Common approaches such as using a probability distribution model can capture the static feature of real workloads, but they do not consider the temporal relation in the traces. In this paper, a workload model is presented which uses Markov chains for modeling job parameters. In order to consider the interdependence of individual parameters without requiring large scale Markov chains, a novel method for transforming the states in different Markov chains is presented. The results show that the model yields closer results to the real workloads than other common approaches.
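The core idea — sampling job parameters from a Markov chain so that consecutive jobs remain correlated, instead of drawing them i.i.d. from a distribution — can be sketched minimally. The states and transition probabilities below are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical sketch: a Markov chain over discretized job-size states.
# Unlike i.i.d. sampling from a probability distribution, the next job
# depends on the current one, preserving temporal correlation in traces.
STATES = ["small", "medium", "large"]   # discretized job parameter
TRANSITIONS = {                          # P(next state | current state)
    "small":  [0.7, 0.2, 0.1],
    "medium": [0.2, 0.6, 0.2],
    "large":  [0.1, 0.2, 0.7],
}

def generate_jobs(n, start="small", seed=0):
    """Sample a sequence of n job-size states from the Markov chain."""
    rng = random.Random(seed)
    state, trace = start, []
    for _ in range(n):
        trace.append(state)
        state = rng.choices(STATES, weights=TRANSITIONS[state])[0]
    return trace
```

With self-transition probabilities above 0.5, runs of similar jobs appear in the synthetic trace, mimicking the bursty structure of real workload logs that a static distribution model would miss.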
Learning a Class of Large Finite State Machines with a Recurrent Neural Network
, 1995
Abstract

Cited by 20 (11 self)
One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in the sense that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1, 427) which has feedback but no hidden state neurons can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine,
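The defining property of a finite memory machine is that its output depends only on a bounded window of recent inputs (and outputs), so the "state" is just a sliding window. A toy sketch, with an invented parity-of-window output function purely for illustration:

```python
from collections import deque

# Hypothetical FMM sketch: output depends only on the last n inputs.
# With n = 3 binary inputs the machine has 2^3 = 8 window states, yet
# the update logic is trivial -- which is why FMMs can have many states
# but small depth and little logic as sequential machines.
def run_fmm(bits, n=3):
    window = deque([0] * n, maxlen=n)   # last n inputs (zero-initialized)
    out = []
    for b in bits:
        window.append(b)                # deque drops the oldest input
        out.append(sum(window) % 2)     # toy output: parity of the window
    return out

print(run_fmm([1, 0, 1, 1]))  # [1, 1, 0, 0]
```

A window of n binary inputs yields up to 2^n distinct states, so even a 512-state FMM (n = 9) is driven by a very shallow update rule, matching the abstract's point that state count alone need not imply complex logic.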
Neural Dual Extended Kalman Filtering: Applications In Speech Enhancement And Monaural Blind Signal Separation
 in Proc. of IEEE Workshop on Neural Networks and Signal Processing
, 1997
Abstract

Cited by 12 (3 self)
The removal of noise from speech signals has applications ranging from speech enhancement for cellular communications, to front ends for speech recognition systems. A nonlinear time-domain method called dual extended Kalman filtering (DEKF) is presented for removing nonstationary and colored noise from speech. We further generalize the algorithm to perform the blind separation of two speech signals from a single recording.

INTRODUCTION

Traditional approaches to noise removal in speech involve spectral techniques, which frequently result in audible distortion of the signal. Recent time-domain nonlinear filtering methods utilize data sets where the clean speech is available as a target signal to train a neural network. Such methods are often effective within the training set, but tend to generalize poorly for actual sources with varying signal and noise levels. Furthermore, the network models in these methods do not fully take into account the nonstationary nature of speech. In the approa...
Dual Kalman Filtering Methods for Nonlinear Prediction, Smoothing, and Estimation
 In Advances in Neural Information Processing Systems 9
, 1997
Abstract

Cited by 12 (5 self)
Prediction, estimation, and smoothing are fundamental to signal processing. To perform these interrelated tasks given noisy data, we form a time series model of the process that generates the data. Taking noise in the system explicitly into account, maximum-likelihood and Kalman frameworks are discussed which involve the dual process of estimating both the model parameters and the underlying state of the system. We review several established methods in the linear case, and propose several extensions utilizing dual Kalman filters (DKF) and forward-backward (FB) filters that are applicable to neural networks. Methods are compared on several simulations of noisy time series. We also include an example of nonlinear noise reduction in speech.

1 INTRODUCTION

Consider the general autoregressive model of a noisy time series with both process and additive observation noise:

x(k) = f(x(k-1), ..., x(k-M); w) + v(k-1)   (1)
y(k) = x(k) + r(k)                          (2)

where x(k) corresponds to the ...
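The noisy autoregressive model of Eqs. (1)-(2) is easy to simulate directly, which is how test data for the dual estimation methods can be produced. The sketch below assumes a linear f(.) with M = 2 lags for concreteness; the coefficients and noise levels are illustrative, not values from the paper.

```python
import random

# Simulate x(k) = f(x(k-1), ..., x(k-M); w) + v(k-1),  y(k) = x(k) + r(k)
# with a linear f and M = 2 lags (illustrative assumption). v is the
# process noise, r the additive observation noise.
def simulate(n, w=(0.6, 0.3), process_std=0.1, obs_std=0.2, seed=1):
    rng = random.Random(seed)
    x = [0.0, 0.0]                       # initial clean states x(0), x(1)
    for k in range(2, n + 2):
        clean = w[0] * x[k - 1] + w[1] * x[k - 2]   # f(.; w)
        x.append(clean + rng.gauss(0.0, process_std))  # + v(k-1)
    y = [xi + rng.gauss(0.0, obs_std) for xi in x]     # y(k) = x(k) + r(k)
    return x, y
```

The dual estimation problem the abstract describes is then to recover both the weights w and the clean sequence x from the observed sequence y alone.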
A Two-Observation Kalman Framework for Maximum-Likelihood Modeling of Noisy Time Series
 In Proceedings of International Joint Conference on Neural Networks. IEEE,INNS
, 1998
Abstract

Cited by 9 (2 self)
Modeling a noisy time series requires the dual estimation of both the model parameters and the underlying clean time series. Most approaches estimate the model parameters by minimizing the mean squared prediction error, but estimate the time series by minimizing another cost function. We justify the use of the same maximum-likelihood cost for both parameter and time series estimation, and present a new weight update procedure for recursive minimization of this cost. This learning algorithm uses a two-observation form of the extended Kalman filter, and provides a natural extension of the Dual Extended Kalman Filter procedure previously developed by the authors.

I. Introduction

The estimation and prediction of noisy time series have applications ranging from speech enhancement to financial and economic forecasting. While a model of the time series can be used for performing these tasks, it is well known that the presence of noise will cause standard regression techniques to produce a bi...
Black-Box Modeling with State-Space Neural Networks
 in Neural Adaptive Control Technology I
, 1996
Abstract

Cited by 9 (5 self)
Neural network black-box modeling is usually performed using nonlinear input-output models. The goal of this paper is to show that there are advantages in using nonlinear state-space models, which constitute a larger class of nonlinear dynamical models, and their corresponding state-space neural predictors. We recall the fundamentals of both input-output and state-space black-box modeling, and show the state-space neural networks to be potentially more efficient and more parsimonious than their conventional input-output counterparts. This is exemplified on simulated processes as well as on a real one, the hydraulic actuator of a robot arm.

1. Introduction

During the past few years, several authors [Narendra and Parthasarathy 1990, Nerrand et al. 1994] have suggested the use of neural networks for the black-box modeling of nonlinear dynamical systems. The problem of designing a mathematical model of a process using only observed data has attracted much attention, both from an academic a...
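The structural contrast the abstract draws can be made concrete with two schematic predictors. The tanh units below are fixed-weight stand-ins for trained networks, and all weights are invented for illustration: the input-output model predicts from past observations, while the state-space model carries hidden state x between steps.

```python
import math

# Schematic contrast between the two black-box predictor families.
# Fixed tanh units with invented weights stand in for trained networks.

def io_predictor(y_hist, u_hist):
    """Input-output form: y(k+1) = g(past outputs, past inputs)."""
    return math.tanh(0.5 * y_hist[-1] + 0.3 * u_hist[-1])

def state_space_predictor(x, u):
    """State-space form: x(k+1) = f(x(k), u(k)); y(k+1) = h(x(k+1))."""
    x_next = [math.tanh(0.8 * x[0] + 0.2 * u),
              math.tanh(0.5 * x[1] + 0.4 * u)]
    y = x_next[0] + x_next[1]           # output read off the state
    return x_next, y
```

The state vector x can be of lower dimension than the window of past inputs and outputs an input-output model needs, which is the parsimony argument the paper develops.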
An Adaptable Neural-Network Model for Recursive Nonlinear Traffic Prediction and Modeling of MPEG Video Sources
 IEEE Trans. on Neural Networks
, 2003
Abstract

Cited by 8 (0 self)
Multimedia services, and especially digital video, are expected to be the major traffic component transmitted over communication networks [such as internet protocol (IP)-based networks]. For this reason, traffic characterization and modeling of such services are required for an efficient network operation. The generated models can be used as traffic rate predictors, during the network operation phase (online traffic modeling), or as video generators for estimating the network resources, during the network design phase (offline traffic modeling). In this paper, an adaptable neural-network architecture is proposed covering both cases. The scheme is based on an efficient recursive weight estimation algorithm, which adapts the network response to current conditions. In particular, the algorithm updates the network weights so that 1) the network output, after the adaptation, is approximately equal to current bit rates (current traffic statistics) and 2) a minimal degradation over the obtained network knowledge is provided. It can be shown that the proposed adaptable neural-network architecture simulates a recursive nonlinear autoregressive model (RNAR), similar to the notation used in the linear case. The algorithm presents low computational complexity and high efficiency in tracking traffic rates, in contrast to conventional retraining schemes. Furthermore, for the problem of offline traffic modeling, a novel correlation mechanism is proposed for capturing the burstiness of the actual MPEG video traffic. The performance of the model is evaluated using several real-life MPEG coded video sources of long duration, and compared with other linear/nonlinear techniques used for both cases. The results indicate that the proposed adaptable neural-network architecture presents better perform...