
## Multiple-Output Modelling for Multi-Step-Ahead Forecasting

### Citations

596 | Locally weighted learning
- Atkeson, Moore, et al.
- 1997
Citation Context: ... It is well-known that the adoption of a local approach requires the choice of a set of model parameters (e.g. the number k of neighbors, the kernel function, the distance metric) [2]. In this paper, we will adopt a constant kernel, a Euclidean distance, and we will select the number of neighbors by a Leave-One-Out (LOO) criterion. A computationally efficient way to perform LOO ...
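The constant-kernel, Euclidean-distance local model mentioned in this context can be sketched as below; the function name and array layout are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def local_constant_forecast(X, y, query, k):
    """Local constant model: average the outputs of the k
    Euclidean-nearest rows of X (uniform/constant kernel)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.linalg.norm(X - np.asarray(query, dtype=float), axis=1)  # Euclidean distances
    idx = np.argsort(d)[:k]                                         # k nearest neighbors
    return float(y[idx].mean())                                     # constant-kernel estimate
```

With a constant kernel every neighbor contributes equally; the only free parameter left is k, which the paper selects by the LOO criterion discussed in the other contexts.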

525 | A learning algorithm for continually running fully recurrent neural networks
- Williams, Zipser
- 1989
Citation Context: ... the predictor takes estimated values as inputs, instead of actual observations, with evident negative consequences in terms of error propagation. Examples of iterated approaches are recurrent neural networks [25] or local learning iterated techniques [13, 18]. Direct methods perform H-step-ahead forecasting by estimating a set of H prediction models, each returning a direct forecast of ϕ_{N+h} with h ∈ {1, ..., H} ...
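The iterated/direct contrast described in this context can be illustrated with a generic one-step learner; the code below is a sketch under the assumption that each model is a plain callable on a lag window, not the cited systems themselves:

```python
def iterated_forecast(model, series, d, H):
    """Iterated strategy: one model applied H times, each estimate
    fed back as an input (the source of error propagation)."""
    window = list(series[-d:][::-1])      # most recent value first
    out = []
    for _ in range(H):
        yhat = model(window)
        out.append(yhat)
        window = [yhat] + window[:-1]     # estimated value replaces the oldest input
    return out

def direct_forecast(models, series, d):
    """Direct strategy: H separate models; models[h-1] returns
    the forecast of phi_{N+h} directly from observed values only."""
    window = list(series[-d:][::-1])
    return [m(window) for m in models]
```

The direct strategy never feeds estimates back, at the cost of training H models instead of one.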

328 | Time Series Prediction: Forecasting the Future and Understanding the Past
- Weigend, Gershenfeld
- 1994

231 | The Relationship between Variable Selection and Data Augmentation and a Method for Prediction
- Allen
- 1974
Citation Context: ... criterion. A computationally efficient way to perform LOO cross-validation and to assess the generalization performance of local linear models is the PRESS statistic, proposed in 1974 by Allen [1]. By assessing the performance of each local model, alternative configurations can be tested and compared in order to select the best one in terms of expected prediction. The idea consists in associat...

221 | Predicting chaotic time series
- Farmer, Sidorowich
- 1987
Citation Context: ... instead of actual observations, with evident negative consequences in terms of error propagation. Examples of iterated approaches are recurrent neural networks [25] or local learning iterated techniques [13, 18]. Direct methods perform H-step-ahead forecasting by estimating a set of H prediction models, each returning a direct forecast of ϕ_{N+h} with h ∈ {1, ..., H} [21]. Direct methods often require higher funct...

104 | Exploiting chaos to predict the future and reduce noise
- Farmer, Sidorowich
- 1988
Citation Context: ... consider only local learning approaches. The use of local learning approaches in the forecasting literature dates back to the seminal work of Lorenz [17] on chaotic series. Other classical references are [13, 14, 15]. Here we will consider a local learning approach where the problem of adjusting the size of the neighborhood is solved by a Lazy Learning (LL) algorithm [7]. This algorithm selects, on a query-by-query...

96 | Atmospheric predictability as revealed by naturally occurring analogues
- Lorenz
- 1969
Citation Context: ... the function f in Equation (1)). In this work, we will consider only local learning approaches. The use of local learning approaches in the forecasting literature dates back to the seminal work of Lorenz [17] on chaotic series. Other classical references are [13, 14, 15]. Here we will consider a local learning approach where the problem of adjusting the size of the neighborhood is solved by a Lazy Learning...

90 | State Space Reconstruction in the Presence of Noise
- Casdagli, Eubank, et al.
- 1991
Citation Context: ... demands to fit from historical data the Nonlinear Auto Regressive (NAR) dependency [12] ϕ_{t+1} = f(ϕ_t, ϕ_{t−1}, ..., ϕ_{t−d+1}) + w, (1) where w is the scalar zero-mean noise term and d is the embedding dimension [9], that is, the number of past values taken into consideration to predict one future value. After the learning process, the estimation of the H next values is returned by ϕ̂_{N+h} = f̂(ϕ_N, ..., ϕ_{N−d+1}) if ...
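The NAR dependency of Equation (1) becomes a supervised learning problem once the series is unfolded into delay-coordinate (embedding) vectors. A minimal sketch, with illustrative names, of how such a training set could be built:

```python
import numpy as np

def nar_training_set(series, d):
    """Return (X, y) where X[i] = (phi_t, ..., phi_{t-d+1}) is a
    d-length embedding vector and y[i] = phi_{t+1} is the next value."""
    s = np.asarray(series, dtype=float)
    X = np.array([s[t - d + 1:t + 1][::-1] for t in range(d - 1, len(s) - 1)])
    y = s[d:]                                  # one-step-ahead targets
    return X, y
```

Any regressor fitted on (X, y) then estimates f, and the embedding dimension d controls how much history each input vector carries.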

58 | Time series prediction by using delay coordinate embedding, in: Weigend A.S., Gershenfeld N.A. (Eds.), Time Series Prediction: Forecasting the Future and Understanding the Past
- Sauer
- 1993
Citation Context: ... dependency between two series values at two distant instants. In the computational intelligence literature, there are also examples of research works where the two approaches have been successfully combined [20]. In spite of their diversity, iterated and direct techniques for multi-step-ahead prediction share a common feature: they model from data a multiple-input single-output mapping, where the outputs ar...

53 | Nonlinear Time Series
- Fan, Yao
- 2003
Citation Context: ... once a one-step-ahead prediction is computed, the value is fed back as an input to the following step. The iterated method demands to fit from historical data the Nonlinear Auto Regressive (NAR) dependency [12] ϕ_{t+1} = f(ϕ_t, ϕ_{t−1}, ..., ϕ_{t−d+1}) + w, (1) where w is the scalar zero-mean noise term and d is the embedding dimension [9], that is, the number of past values taken into consideration to predict one futur...

39 | Methodology for long-term prediction of time series, Neurocomputing
- Sorjamaa, Hao, et al.
- 2007
Citation Context: ... {ϕ_1, ..., ϕ_N} over a long horizon H is still an open problem in forecasting [24]. Currently, the most common approaches to long-term forecasting rely either on iterated or direct prediction techniques [16, 21]. (Preprint submitted to Neurocomputing, July 4, 2009.) In iterated methods an H-step-ahead prediction problem is tackled by iterating, H times, a one-step-ahead predictor. Once the predictor has estimate...

29 | Lazy Learning Meets the Recursive Least Squares Algorithm
- Birattari, Bontempi, et al.
- 1999
Citation Context: ... testing them on 111 series of the NN3 international competition benchmark. Note that, in order to set up a common and reliable benchmark, we adopt here the same local learning algorithm, Lazy Learning [3, 7], to implement the different prediction strategies. As shown in the previous contributions, the Lazy Learning algorithm returns accurate predictions both in a single-output [8, 6] and a multiple-outpu...

28 | Lazy learning for modeling and control design
- Bontempi, Birattari, et al.
- 1999
Citation Context: ... testing them on 111 series of the NN3 international competition benchmark. Note that, in order to set up a common and reliable benchmark, we adopt here the same local learning algorithm, Lazy Learning [3, 7], to implement the different prediction strategies. As shown in the previous contributions, the Lazy Learning algorithm returns accurate predictions both in a single-output [8, 6] and a multiple-outpu...

26 | Local Learning Techniques for Modeling, Prediction and Control
- Bontempi
- 1999
Citation Context: ... prediction. The idea consists in associating a LOO error e_LOO(k) to the estimation ŷ_k = (1/k) ∑_{j=1}^{k} y[j] (11) returned by k neighbors. In the case of a constant model, the LOO term can be derived as follows [4]: e_LOO(k) = (1/k) ∑_{j=1}^{k} (e_j(k))² (12), where e_j(k) = y[j] − (∑_{i=1, i≠j}^{k} y[i]) / (k − 1) = (k / (k − 1)) (y[j] − ŷ_k) (13). The best number of neighbors is then defined as k* = arg min_{k ∈ {2, ..., K}} ...
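Because Equation (13) expresses each leave-one-out residual of the constant model in closed form, the LOO error of k neighbors costs nothing beyond the fit itself. A sketch with illustrative names (assuming the neighbor outputs are already sorted by distance):

```python
import numpy as np

def loo_error_constant(y_neighbors):
    """e_LOO(k) for the constant model on k neighbor outputs,
    using the closed form e_j(k) = k/(k-1) * (y[j] - yhat_k)."""
    y = np.asarray(y_neighbors, dtype=float)
    k = len(y)
    yhat = y.mean()                            # Equation (11)
    e = k / (k - 1) * (y - yhat)               # Equation (13)
    return float(np.mean(e ** 2))              # Equation (12)

def best_k(y_sorted_by_distance, K):
    """k* = argmin over k in {2,...,K} of e_LOO(k)."""
    return min(range(2, K + 1),
               key=lambda k: loo_error_constant(list(y_sorted_by_distance[:k])))
```

The closed form avoids refitting the model k times, which is the computational point of the PRESS-style criterion discussed in the contexts above.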

26 | A nearest trajectory strategy for time series prediction
- McNames
- 1998
Citation Context: ... instead of actual observations, with evident negative consequences in terms of error propagation. Examples of iterated approaches are recurrent neural networks [25] or local learning iterated techniques [13, 18]. Direct methods perform H-step-ahead forecasting by estimating a set of H prediction models, each returning a direct forecast of ϕ_{N+h} with h ∈ {1, ..., H} [21]. Direct methods often require higher funct...

20 | Local learning for iterated time-series prediction
- Bontempi, Birattari, et al.
- 1999
Citation Context: ... algorithm, Lazy Learning [3, 7], to implement the different prediction strategies. As shown in the previous contributions, the Lazy Learning algorithm returns accurate predictions both in a single-output [8, 6] and a multiple-output context [5, 23]. Lazy Learning (LL) is a local modeling technique, which is query-based in the sense that the whole learning procedure (i.e. structural and parametric identifica...

20 | Using the Delta test for variable selection
- Eirola, Liitiäinen, et al.
- 2008
Citation Context: ... process adopted a 10-fold cross-validation scheme to select the value of the parameter s. The input selection was performed by means of the Delta test for the methods based on the single-output strategy [11] and an extension of the Delta test for the methods based on the multiple-output strategy [23] (maximum embedding order d equal to 12). [Figure: Time Series n° 56]
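The Delta test referenced here estimates the residual noise variance of y given a candidate input set as half the mean squared output difference between nearest-neighbor inputs; the subset minimizing this estimate is preferred. A hedged sketch of the standard formulation, not the authors' code:

```python
import numpy as np

def delta_test(X, y):
    """Noise-variance estimate: 0.5 * mean of (y[NN(i)] - y[i])^2,
    where NN(i) is the Euclidean nearest neighbor of X[i]."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                # exclude the point itself
    nn = D.argmin(axis=1)                                      # nearest-neighbor indices
    return float(0.5 * np.mean((y[nn] - y) ** 2))
```

A lower value suggests the candidate inputs explain y well; variable selection simply compares this score across input subsets.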

8 | Long-term prediction of time series by combining direct and MIMO strategies
- Taieb, Bontempi, et al.
- 2009
Citation Context: ... the original task into n = H/s prediction tasks, each with multiple outputs of size s, where s ∈ {1, ..., H}. This approach, called Multiple-Input Several Multiple-Outputs (MISMO), was first introduced in [23] and trades off the property of preserving the stochastic dependency between future values against a greater flexibility of the predictor. For instance, the fact of having n > 1 different models allows th...
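The MISMO decomposition described above is just a partition of the horizon: H steps are split into n = H/s consecutive blocks, each assigned to one multi-output model. A minimal sketch with a hypothetical helper name:

```python
def mismo_blocks(H, s):
    """Partition the horizons {1, ..., H} into n = H/s blocks of size s;
    s = 1 recovers the direct strategy, s = H the single MIMO model."""
    if H % s != 0:
        raise ValueError("s must divide H")
    return [list(range(b * s + 1, (b + 1) * s + 1)) for b in range(H // s)]
```

Sweeping s between 1 and H is what lets MISMO trade off stochastic-dependency preservation (large s) against predictor flexibility (small s).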

7 | Prediction of chaotic time series with noise
- Ikeguchi, Aihara
- 1995
Citation Context: ... consider only local learning approaches. The use of local learning approaches in the forecasting literature dates back to the seminal work of Lorenz [17] on chaotic series. Other classical references are [13, 14, 15]. Here we will consider a local learning approach where the problem of adjusting the size of the neighborhood is solved by a Lazy Learning (LL) algorithm [7]. This algorithm selects, on a query-by-query...

5 | Long term time series prediction with multi-input multi-output local learning
- Bontempi
- 2008
Citation Context: ... a common feature: they model from data a multiple-input single-output mapping, where the outputs are the variable ϕ_{N+1} in the iterated case and the variable ϕ_{N+h} in the direct case, respectively. In [5], the author proposed a MIMO approach for multi-step-ahead time series prediction, where the predicted value is not a scalar quantity but a vector of future values {ϕ_{N+1}, ..., ϕ_{N+H}} of the time series ϕ...
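A minimal multiple-output analogue of the local constant model can illustrate the MIMO idea: each of the k nearest d-length histories contributes the H values that followed it, and the forecast vector is their element-wise mean. This is a sketch under those assumptions, not the cited algorithm:

```python
import numpy as np

def mimo_knn_forecast(series, d, H, k):
    """Forecast the vector (phi_{N+1}, ..., phi_{N+H}) by averaging the
    H-step continuations of the k nearest d-length histories."""
    s = np.asarray(series, dtype=float)
    query = s[-d:]                                   # the most recent d values
    # candidate histories whose H-step continuation is fully observed
    cands = [(float(np.linalg.norm(s[t:t + d] - query)), t)
             for t in range(len(s) - d - H + 1)]
    nearest = sorted(cands)[:k]                      # k smallest distances
    return np.mean([s[t + d:t + d + H] for _, t in nearest], axis=0)
```

Predicting the whole vector at once is what preserves the stochastic dependency between future values that single-output strategies discard.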

4 | Direct and recursive prediction of time series using mutual information selection
- Ji, Hao, Reyhani, Lendasse
Citation Context: ... {ϕ_1, ..., ϕ_N} over a long horizon H is still an open problem in forecasting [24]. Currently, the most common approaches to long-term forecasting rely either on iterated or direct prediction techniques [16, 21]. In iterated methods an H-step-ahead prediction problem is tackled by iterating, H times, a one-step-ahead predictor. Once the predictor has estimate...

4 | Methodology for long-term prediction of time series, Neurocomputing
- Sorjamaa, Hao, Reyhani, Ji, Lendasse
- 2007
Citation Context: ... sections we will provide a common notation to introduce and discuss these two approaches. 2.1.1. Iterated method. The iterated method is the oldest technique to carry out multi-step-ahead prediction [22]. In this method, once a one-step-ahead prediction is computed, the value is fed back as an input to the following step. The iterated method demands to fit from historical data the Nonlinear Auto Regr...

1 | Competition NN3. http://www.neural-forecasting-competition.com/NN3/index.htm. Last updated 19/02/2008; accessed 2/05/2009
- Crone
Citation Context: ... 4. Experiments. This section presents a comparison of single-output and multiple-output techniques for multi-step-ahead prediction. In order to provide enough experimental evidence we focused on NN3 [10], an international competition based on a large number of real time series. 4.1. Datasets and Preprocessing. The NN3 Dataset is made of 111 monthly time series starting at January, each containing 50 t...
Citation Context ...134. Experiments This section presents a comparison of single-output and multiple-output techniques for multistep-ahead prediction. In order to provide enough experimental evidence we focused on NN3 =-=[10]-=-, an international competition based on a large number of real time series. 4.1. Datasets and Preprocessing The NN3 Dataset is made of 111 monthly time series starting at January, each containing 50 t... |