Results 1 – 9 of 9
D (2007) Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications. ACM Transactions on Modeling and Computer Simulation 17:1–30
Abstract

Cited by 7 (1 self)
Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modeling in distributed interactive applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the distribution of instantaneous derivative information, dead reckoning trades remote extrapolation accuracy for low computational complexity and ease of implementation. In this article, we present a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of instantaneous velocity information with predictive velocity information in order to improve the accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning approach, each controlling host employs a bank of neural network predictors trained to estimate future changes in entity velocity up to and including some maximum prediction horizon. The effect of each estimated change in velocity on the current entity position is simulated to produce an estimate for the likely position of the entity over some short time span. Upon detecting an error threshold violation, the controlling host transmits a predictive velocity vector that extrapolates through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such an approach succeeds in reducing the spatial error associated with remote extrapolation of entity …
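The contrast between plain dead reckoning and the predictive-velocity idea in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions: the function names, the fixed per-step velocity changes (standing in for the neural predictor bank's outputs), and the derivation of a single predictive velocity vector are all hypothetical, not the paper's actual method.

```python
def dead_reckon(pos, vel, dt):
    """First-order dead reckoning: extrapolate position from the last
    transmitted position using the instantaneous velocity vector."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def predictive_velocity(pos, vel, predicted_dv, dt_step):
    """Hypothetical sketch of the neuro-reckoning idea: apply a sequence
    of predicted velocity changes (one per step, here assumed given by
    some predictor bank), simulate the entity forward, and return the
    constant velocity vector that passes through the estimated position."""
    est, v = list(pos), list(vel)
    for dv in predicted_dv:
        v = [vi + dvi for vi, dvi in zip(v, dv)]
        est = [pi + vi * dt_step for pi, vi in zip(est, v)]
    horizon = dt_step * len(predicted_dv)
    return [(e - p) / horizon for e, p in zip(est, pos)]

# The controlling host transmits this vector instead of the
# instantaneous velocity when an error threshold is violated.
pv = predictive_velocity((0.0, 0.0), (1.0, 0.0), [(0.1, 0.0), (0.1, 0.0)], 0.5)
```

A remote host would then extrapolate with `dead_reckon(pos, pv, dt)`, which tends to track an accelerating entity more closely over the prediction horizon than the instantaneous heading does.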
On the Prediction Methods Using Neural Networks
Abstract

Cited by 1 (0 self)
Abstract: The paper aims to present the main ideas related to the prediction of time series using neural networks. It begins with an overview of the main concepts involved in neural networks and attempts to identify the similarities between the biological nerve cell and its mathematical model. The main advantages and disadvantages of using neural networks are presented, together with a general overview of the methods used for multi-step-ahead prediction.
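The "methods used for multi-step-ahead prediction" that such surveys cover usually reduce to two standard strategies, which can be sketched as follows (the toy one-step models and function names are illustrative assumptions, not from the paper):

```python
def iterated_forecast(model, history, steps):
    """Recursive strategy: a single one-step model is applied repeatedly,
    feeding each prediction back in as input (errors can accumulate)."""
    window, out = list(history), []
    for _ in range(steps):
        y = model(window)
        out.append(y)
        window = window[1:] + [y]
    return out

def direct_forecast(models, history):
    """Direct strategy: a separate model per horizon h, each mapping the
    same input window straight to y[t+h] (no feedback of predictions)."""
    return [m(list(history)) for m in models]

# Toy stand-ins for trained networks on the series 2, 4, 6, ...
one_step = lambda w: w[-1] + 2
per_horizon = [lambda w: w[-1] + 2, lambda w: w[-1] + 4]
```

The recursive strategy needs only one model but compounds its own mistakes; the direct strategy avoids feedback at the cost of training one model per horizon.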
A MULTI-STEP PREDICTION MODEL BASED ON INTERPOLATION AND ADAPTIVE TIME DELAY NEURAL NETWORK FOR TIME SERIES
Abstract
The drawback of indirect multi-step-ahead prediction is error accumulation. In order to tackle this problem and improve the predictive capacity of the adaptive time delay neural network (ATNN), a three-stage prediction model, SATNN, based on spline interpolation and ATNN is presented. With spline interpolation and ATNN, the impact of previous prediction errors that would otherwise be iterated into the model for the next prediction step is decreased, and better predictions can therefore be obtained. The annual sunspot series, regarded as a benchmark chaotic nonlinear system, is selected to test the multi-step prediction model. Validation studies indicate that the proposed model is quite effective in multi-step prediction.
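The interpolation stage can be illustrated with a deliberately simplified stand-in: densifying the series before prediction so that each iterated step spans a smaller interval, limiting how far one step's error is carried into the next. This sketch uses linear rather than spline interpolation purely to stay dependency-free; it is not the SATNN implementation.

```python
def densify(series, factor):
    """Insert factor - 1 linearly interpolated points between consecutive
    samples (a stand-in for the spline stage; cubic splines are smoother)."""
    out = []
    for a, b in zip(series, series[1:]):
        out.extend(a + (b - a) * k / factor for k in range(factor))
    out.append(series[-1])
    return out

dense = densify([0.0, 2.0, 4.0], 2)   # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

A predictor trained on the densified series takes smaller extrapolation steps per original sampling interval, which is one way to read the paper's claim about reducing iterated error.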
Simple algorithm for recurrent neural networks that can learn sequence completion
Abstract
We can memorize long sequences like melodies or poems, and it is intriguing to develop efficient connectionist representations for this problem. Recurrent neural networks have proved to offer a reasonable approach here. We start from a few axiomatic assumptions and provide a simple mathematical framework that encapsulates the problem. A gradient-descent based algorithm is derived in this framework. Demonstrations on a benchmark problem show the applicability of our approach.
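As a toy illustration of deriving a gradient-descent update from a sequence-completion loss (not the paper's algorithm, and with a single scalar parameter standing in for a recurrent network's weights):

```python
import random

def train_completer(seq, epochs=2000, lr=0.01):
    """Fit a one-parameter linear recurrence x[t+1] ~ a * x[t] by
    gradient descent on the squared one-step prediction error. A real
    recurrent network carries hidden state; this keeps only the core
    idea of turning sequence completion into a differentiable loss."""
    a = random.random()
    n = len(seq) - 1
    for _ in range(epochs):
        grad = sum(2 * (a * x - y) * x for x, y in zip(seq, seq[1:]))
        a -= lr * grad / n
    return a

def complete(a, last, steps):
    """Roll the learned recurrence forward to continue the sequence."""
    out = []
    for _ in range(steps):
        last = a * last
        out.append(last)
    return out

a = train_completer([1.0, 2.0, 4.0, 8.0])   # geometric sequence, ratio 2
```

After training, `complete(a, 8.0, 2)` continues the sequence with the learned dynamics, which is the essence of sequence completion by a trained recurrent model.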
APPLYING POLICY ITERATION FOR TRAINING RECURRENT NEURAL NETWORKS
, 2004
Abstract
Abstract. Recurrent neural networks are often used for learning time-series data. Based on a few assumptions, we model this learning task as a minimization problem of a nonlinear least-squares cost function. The special structure of the cost function allows us to build a connection to reinforcement learning. We exploit this connection and derive a convergent, policy iteration-based algorithm. Furthermore, we argue that RNN training can be fit naturally into the reinforcement learning framework. Keywords: recurrent neural networks, policy iteration, sequence learning, reinforcement learning.
PIRANHA: Policy Iteration for Recurrent Artificial Neural Networks with Hidden Activities
Abstract
It is an intriguing task to develop efficient connectionist representations for learning long time series. Recurrent neural networks hold great promise here. We model the learning task as a minimization problem of a nonlinear least-squares cost function that takes into account both one-step and multi-step prediction errors. The special structure of the cost function is constructed to build a bridge to reinforcement learning. We exploit this connection and derive a convergent, policy iteration-based algorithm, and show that RNN training can be made to fit the reinforcement learning framework in a natural fashion. The relevance of this connection is discussed. We also present experimental results, which demonstrate the appealing properties of the unique parameter structure prescribed by reinforcement learning. Experiments cover both sequence learning and long-term prediction. Key words: recurrent neural networks, policy iteration, sequence learning, multi-step prediction
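The shape of such a cost function, penalising one-step and multi-step prediction errors jointly, might be sketched as follows (the function name, weighting scheme, and iterated-rollout evaluation are assumptions for illustration, not the paper's formulation):

```python
def mixed_horizon_cost(model, series, window, horizons, weights):
    """Sum of weighted squared prediction errors at several horizons.
    The one-step model is rolled out h times to produce the h-step
    prediction, so both short- and long-range errors enter the cost."""
    total = 0.0
    for h, w in zip(horizons, weights):
        for t in range(len(series) - window - h + 1):
            inp = series[t:t + window]
            for _ in range(h):          # iterate the one-step model
                inp = inp[1:] + [model(inp)]
            total += w * (inp[-1] - series[t + window + h - 1]) ** 2
    return total

# A model that matches the series 1, 2, 3, ... exactly incurs zero cost.
exact = lambda win: win[-1] + 1
cost = mixed_horizon_cost(exact, [1, 2, 3, 4, 5], 2, [1, 2], [1.0, 0.5])
```

Minimising this cost over the model's parameters is the nonlinear least-squares problem the abstract refers to; the multi-step terms are what make the optimisation resemble evaluating a multi-step policy.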
EVENT PREDICTION FOR MODELING MENTAL SIMULATION IN NATURALISTIC DECISION MAKING
, 2005
Abstract
Event prediction for modeling mental simulation in naturalistic decision making