Results 1–10 of 19
A Monte Carlo Approach to Nonnormal and Nonlinear State-Space Modeling
, 1992
"... this article then is to develop methodology for modeling the nonnormality of the ut, the vt, or both. A second departure from the model specification ( 1 ) is to allow for unknown variances in the state or observational equation, as well as for unknown parameters in the transition matrices Ft and Ht ..."
Abstract

Cited by 126 (13 self)
... this article, then, is to develop methodology for modeling the nonnormality of the ut, the vt, or both. A second departure from the model specification (1) is to allow for unknown variances in the state or observational equation, as well as for unknown parameters in the transition matrices Ft and Ht. As a third generalization we allow for nonlinear model structures; that is, xt = ft(xt-1) + ut, and yt = ht(xt) + vt, t = 1, ..., n, (2), where ft(·) and ht(·) are given, but perhaps also depend on some unknown parameters. The experimenter may wish to entertain a variety of error distributions. Our goal throughout the article is an analysis for general state-space models that does not resort to convenient assumptions at the expense of model adequacy.
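The nonlinear specification (2) can be sketched directly in code. This is a minimal simulation sketch, not the article's method; the particular maps f and h, noise scales, and seed below are illustrative assumptions:

```python
import math
import random

def simulate_state_space(f, h, n, sigma_u=1.0, sigma_v=1.0, x0=0.0, seed=0):
    """Simulate xt = f(xt-1) + ut and yt = h(xt) + vt with Gaussian noise;
    f and h may be nonlinear, as in specification (2)."""
    rng = random.Random(seed)
    xs, ys = [], []
    x = x0
    for _ in range(n):
        x = f(x) + rng.gauss(0.0, sigma_u)         # state transition
        xs.append(x)
        ys.append(h(x) + rng.gauss(0.0, sigma_v))  # observation
    return xs, ys

# Hypothetical nonlinear choices for f and h.
xs, ys = simulate_state_space(f=lambda x: 0.5 * x + math.sin(x),
                              h=lambda x: 0.5 * x * x, n=200)
```

Replacing `rng.gauss` with a heavier-tailed draw is the hook for the nonnormal errors the abstract discusses.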
Bayesian Forecasting
, 1996
"... rapolation techniques, especially exponential smoothing and exponentially weighted moving average methods ([20, 71]). Developments of smoothing and discounting techniques in stock control and production planning areas led to formalisms in terms of linear, statespace models for time series with time ..."
Abstract

Cited by 58 (2 self)
... extrapolation techniques, especially exponential smoothing and exponentially weighted moving average methods ([20, 71]). Developments of smoothing and discounting techniques in stock control and production planning areas led to formalisms in terms of linear, state-space models for time series with time-varying trends and seasonal patterns, and eventually to the associated Bayesian formalism of methods of inference and prediction. From the early 1960s, practical Bayesian forecasting systems in this context involved the combination of formal time series models and historical data analysis, together with methods for subjective intervention and forecast monitoring, so that complete forecasting systems, rather than just routine and automatic data analysis and extrapolation, were in use at that time ([19, 22]). Methods developed in those early days are still in use now in some companies in sales forecasting and stock control areas. There have been major developments in models and methods since t ...
Architectures for Efficient Implementation of Particle Filters
, 2004
"... Particle filters are sequential Monte Carlo methods that are used in numerous problems where timevarying signals must be presented in real time and where the objective is to estimate various unknowns of the signal and/or detect events described by the signals. The standard solutions of such proble ..."
Abstract

Cited by 18 (0 self)
Particle filters are sequential Monte Carlo methods used in numerous problems where time-varying signals must be presented in real time and where the objective is to estimate various unknowns of the signal and/or detect events described by the signals. The standard solutions of such problems in many applications are based on the Kalman filter or extended Kalman filters. In situations where the problems are nonlinear or the noise that distorts the signals is non-Gaussian, the Kalman filters provide a solution that may be far from optimal. Particle filters are an intriguing alternative to the Kalman filters due to their excellent performance in very difficult problems, including communications, signal processing, navigation, and computer vision. Hence, particle filters have been the focus of wide research recently, and an immense literature can be found on their theory. Most of these works recognize the complexity and computational intensity of these filters, but there has been no effort directed toward the implementation of these filters in hardware. The objective of this dissertation is to develop, design, and build efficient hardware for particle filters, and thereby bring them closer to practical applications. The fact that particle filters outperform most of the traditional filtering methods in many complex practical scenarios, coupled with the challenges related to decreasing their computational complexity and improving real-time performance, makes this work worthwhile. The main ...
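As a minimal software illustration of the sequential Monte Carlo idea the abstract describes (not the hardware architectures of the dissertation), a bootstrap particle filter for a scalar model xt = f(xt-1) + ut, yt = h(xt) + vt might look as follows; all names, noise levels, and the test signal are assumptions:

```python
import math
import random

def bootstrap_particle_filter(ys, f, h, sigma_u, sigma_v,
                              n_particles=500, seed=0):
    """Propagate-weight-resample loop of a bootstrap particle filter;
    returns the posterior-mean state estimate at each time step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in ys:
        # 1. Propagate each particle through the (possibly nonlinear) transition.
        particles = [f(x) + rng.gauss(0.0, sigma_u) for x in particles]
        # 2. Weight by the Gaussian observation likelihood.
        w = [math.exp(-0.5 * ((y - h(x)) / sigma_v) ** 2) for x in particles]
        total = sum(w) or 1.0  # guard against all-zero weights
        w = [wi / total for wi in w]
        estimates.append(sum(wi * x for wi, x in zip(w, particles)))
        # 3. Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates

# Track a slowly varying signal observed in noise.
truth = [math.sin(0.1 * t) for t in range(100)]
obs_rng = random.Random(1)
obs = [x + obs_rng.gauss(0.0, 0.3) for x in truth]
est = bootstrap_particle_filter(obs, f=lambda x: x, h=lambda x: x,
                                sigma_u=0.1, sigma_v=0.3)
```

The three steps inside the loop are exactly the operations a hardware implementation must pipeline, with resampling the usual bottleneck.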
Nonlinear and Non-Gaussian State-Space Modeling with Monte Carlo Techniques: A Survey and Comparative Study
 In Rao, C., & Shanbhag, D. (Eds.), Handbook of Statistics
, 2000
"... Since Kitagawa (1987) and Kramer and Sorenson (1988) proposed the filter and smoother using numerical integration, nonlinear and/or nonGaussian state estimation problems have been developed. Numerical integration becomes extremely computerintensive in the higher dimensional cases of the state vect ..."
Abstract

Cited by 16 (4 self)
Since Kitagawa (1987) and Kramer and Sorenson (1988) proposed the filter and smoother using numerical integration, nonlinear and/or non-Gaussian state estimation problems have been developed. Numerical integration becomes extremely computer-intensive for higher-dimensional state vectors. Therefore, to improve on this problem, sampling techniques such as Monte Carlo integration with importance sampling, resampling, rejection sampling, Markov chain Monte Carlo and so on are utilized, which can easily be applied to multi-dimensional cases. Thus, in the last decade, several kinds of nonlinear and non-Gaussian filters and smoothers have been proposed using various computational techniques. The objective of this paper is to introduce the nonlinear and non-Gaussian filters and smoothers which can be applied to any nonlinear and/or non-Gaussian cases. Moreover, by Monte Carlo studies, each procedure is compared by the root mean square error criterion.
Prediction Of Final Data With Use Of Preliminary And/or Revised Data
 Journal of Forecasting
, 1995
"... : In the case of U.S. national accounts, the data are revised for the first few years and every decade, which implies that we do not really have the final data. In this paper, we aim to predict the final data, using the preliminary data and/or the revised data. The following predictors are introduce ..."
Abstract

Cited by 12 (4 self)
In the case of U.S. national accounts, the data are revised for the first few years and every decade, which implies that we do not really have the final data. In this paper, we aim to predict the final data, using the preliminary data and/or the revised data. The following predictors are introduced and derived in the context of the nonlinear filtering or smoothing problem: (i) prediction of the final data of time t given the preliminary data up to time t ...
Penalized Likelihood Smoothing in Robust State Space Models
 Metrika
, 1998
"... In likelihoodbased approaches to robustify state space models, Gaussian error distributions are replaced by nonnormal alternatives with heavier tails. Robustified observation models are appropriate for time series with additive outliers, while state or transition equations with heavytailed error d ..."
Abstract

Cited by 11 (3 self)
In likelihood-based approaches to robustify state space models, Gaussian error distributions are replaced by non-normal alternatives with heavier tails. Robustified observation models are appropriate for time series with additive outliers, while state or transition equations with heavy-tailed error distributions lead to filters and smoothers that can cope with structural changes in trend or slope caused by innovation outliers. As a consequence, however, conditional filtering and smoothing densities become analytically intractable. Various attempts have been made to deal with this problem, ranging from approximate conditional mean type estimation to fully Bayesian analysis using MCMC simulation. In this article we consider penalized likelihood smoothers, that is, estimators which maximize penalized likelihoods or, equivalently, posterior densities. Filtering and smoothing for additive and innovation outlier models can be carried out by computationally efficient Fisher scoring steps ...
Robust Bayesian nonparametric regression
"... this paper satisfies all three goals. Our approach uses Markov chain Monte Carlo methods to perform a Bayesian analysis of conditionally Gaussian state space models. The use of Gaussian state space models for nonparametric regression using spline smoothing is well known; see, for example, Wecker and ..."
Abstract

Cited by 6 (0 self)
... this paper satisfies all three goals. Our approach uses Markov chain Monte Carlo methods to perform a Bayesian analysis of conditionally Gaussian state space models. The use of Gaussian state space models for nonparametric regression using spline smoothing is well known; see, for example, Wecker and Ansley (1983) and the references therein. In this approach, the smoothing parameter is estimated either by generalised cross validation or by marginal likelihood, and the regression function is estimated using the Kalman filter and a state space smoothing algorithm. However, it seems computationally difficult to extend the approach in Wecker and Ansley (1983) to allow for outliers in the observations or discontinuities in the regression function. Recent developments in Markov chain Monte Carlo methods have made it possible to perform a Bayesian analysis of conditionally Gaussian state space models; see, for example, Carter and Kohn (1994a, b) and Shephard (1994). In these models, the observation and state transition errors are assumed to be mixtures of normals, so the model is Gaussian conditionally on the mixture indicator variables. In this paper, we present several examples of the use of conditionally Gaussian state space models for robust nonparametric regression. For each example, we discuss the possible Markov chain Monte Carlo sampling schemes and show empirically that there exist sampling schemes which converge rapidly to the posterior distribution. The paper has two aims. The first is to acquaint the reader with the Bayesian approach to spline smoothing and its implementation by Markov chain Monte Carlo. The second aim is to show that sampling schemes to carry out Markov chain Monte Carlo can have very different rates of convergence. The best schemes converge rapidly ...
Simulation-Based Estimation of a Nonlinear, Latent Factor Aggregate Production Function
 in Simulation-Based Inference in Econometrics: Theory and Applications (edited by
, 1997
"... In this paper, we analyze in detail econometric issues associated with the specification and estimation of an aggregate, nonlinear production function with latent variables. The production function has been used to assess the extent to which different substitution possibilities between capital, u ..."
Abstract

Cited by 3 (1 self)
In this paper, we analyze in detail econometric issues associated with the specification and estimation of an aggregate, nonlinear production function with latent variables. The production function has been used to assess the extent to which different substitution possibilities between capital, unskilled labor, and skilled labor can account for the recent increase in wage inequality. We use Monte Carlo methods to evaluate the performance in our environment of three different simulation-based estimation procedures (Stochastic Integration, Extended Kalman Filter with Indirect Inference correction, and Simulated Pseudo-ML), with a focus on how reliable these techniques are in small samples and when the latent variables have trends. We find that when the unobservable states are modeled as trend-stationary processes, the estimators perform much better than when the states are specified as I(1) series. For the trend-stationary specification, the simulated MLE was judged to be the best ...
Dynamic Neural Regression Models
, 2000
"... We consider sequential or online learning in dynamic neural regression models. By using a state space representation for the neural network's parameter evolution in time we obtain approximations to the unknown posterior by either deriving posterior modes via the Fisher scoring algorithm or by derivi ..."
Abstract

Cited by 2 (2 self)
We consider sequential or online learning in dynamic neural regression models. By using a state space representation for the neural network's parameter evolution in time, we obtain approximations to the unknown posterior by either deriving posterior modes via the Fisher scoring algorithm or by deriving approximate posterior means with the importance sampling method. Furthermore, we replace the commonly used Gaussian noise assumption in the neural regression model by a more flexible noise model based on the Student t-density. Since the t-density can be interpreted as an infinite mixture of Gaussians, hyperparameters such as the degrees of freedom of the t-density can be learned from the data based on an online EM-type algorithm. We show experimentally that our novel methods outperform state-of-the-art neural network online learning algorithms, such as the extended Kalman filter method, for both situations with standard Gaussian noise terms and situations with measurement outliers.
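The scale-mixture interpretation mentioned in the abstract can be sketched directly: drawing a Gamma-distributed precision and then a conditional Gaussian yields exact Student-t samples. The function name and the choice nu = 5 below are illustrative assumptions:

```python
import math
import random

def student_t_as_gaussian_mixture(nu, rng):
    """Draw one Student-t(nu) sample via the mixture-of-Gaussians
    representation: precision lam ~ Gamma(shape nu/2, rate nu/2),
    then x | lam ~ N(0, 1/lam)."""
    lam = rng.gammavariate(nu / 2.0, 2.0 / nu)  # gammavariate takes shape, scale
    return rng.gauss(0.0, 1.0 / math.sqrt(lam))

rng = random.Random(0)
samples = [student_t_as_gaussian_mixture(5.0, rng) for _ in range(50000)]
# For nu > 2 the t-density has variance nu / (nu - 2); nu = 5 gives 5/3.
var = sum(s * s for s in samples) / len(samples)
```

Treating the latent precisions as missing data is what makes the EM-type updates for the degrees of freedom tractable.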
Learning an Outlier-Robust Kalman Filter
"... Abstract. We introduce a modified Kalman filter that performs robust, realtime outlier detection, without the need for manual parameter tuning by the user. Systems that rely on high quality sensory data (for instance, robotic systems) can be sensitive to data containing outliers. The standard Kalma ..."
Abstract

Cited by 2 (0 self)
We introduce a modified Kalman filter that performs robust, real-time outlier detection, without the need for manual parameter tuning by the user. Systems that rely on high-quality sensory data (for instance, robotic systems) can be sensitive to data containing outliers. The standard Kalman filter is not robust to outliers, and other variations of the Kalman filter have been proposed to overcome this issue. However, these methods may require manual parameter tuning, use of heuristics, or complicated parameter estimation procedures. Our Kalman filter uses a weighted least-squares-like approach by introducing weights for each data sample. A data sample with a smaller weight has a weaker contribution when estimating the current time step's state. Using an incremental variational Expectation-Maximization framework, we learn the weights and system dynamics. We evaluate our Kalman filter algorithm on data from a robotic dog.
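A toy scalar version of the weighting idea (not the paper's variational EM procedure for learning the weights) can be written as a standard Kalman step in which the measurement precision is scaled by the sample's weight, so a weight near zero effectively ignores a suspected outlier. All symbols and values below are assumptions:

```python
def weighted_kalman_step(x, P, y, a, c, q, r, w):
    """One scalar predict/update step; the per-sample weight w in (0, 1]
    scales the measurement precision (effective noise variance r / w)."""
    # Predict.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update with the down-weighted measurement.
    K = P_pred * c / (c * P_pred * c + r / w)
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1.0 - K * c) * P_pred
    return x_new, P_new

# With full weight this is the ordinary Kalman update; a tiny weight keeps
# the estimate near the prediction, shielding it from an outlying y.
x_full, _ = weighted_kalman_step(0.0, 1.0, 10.0, 1.0, 1.0, 0.0, 1.0, 1.0)
x_down, _ = weighted_kalman_step(0.0, 1.0, 10.0, 1.0, 1.0, 0.0, 1.0, 0.01)
```

At w = 1 the gain is K = 0.5 here, pulling the estimate halfway to the measurement; at w = 0.01 the gain collapses and the measurement is largely ignored.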