Results 1–10 of 26
Bayesian Forecasting
, 1996
"... rapolation techniques, especially exponential smoothing and exponentially weighted moving average methods ([20, 71]). Developments of smoothing and discounting techniques in stock control and production planning areas led to formalisms in terms of linear, statespace models for time series with time ..."
Abstract

Cited by 58 (2 self)
... extrapolation techniques, especially exponential smoothing and exponentially weighted moving average methods ([20, 71]). Developments of smoothing and discounting techniques in stock control and production planning areas led to formalisms in terms of linear, state-space models for time series with time-varying trends and seasonal patterns, and eventually to the associated Bayesian formalism of methods of inference and prediction. From the early 1960s, practical Bayesian forecasting systems in this context involved the combination of formal time series models and historical data analysis together with methods for subjective intervention and forecast monitoring, so that complete forecasting systems, rather than just routine and automatic data analysis and extrapolation, were in use at that time ([19, 22]). Methods developed in those early days are still in use now in some companies in sales forecasting and stock control areas. There have been major developments in models and methods since t ...
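The exponentially weighted moving average idea mentioned in this abstract can be sketched in a few lines. This is a generic illustration, not the paper's own method; the function name and smoothing constant are illustrative:

```python
def ewma_forecast(series, alpha=0.5):
    """One-step-ahead forecasts by simple exponential smoothing:
    s_t = alpha * y_t + (1 - alpha) * s_{t-1}."""
    s = series[0]        # initialize the level with the first observation
    forecasts = [s]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s   # discount old data geometrically
        forecasts.append(s)
    return forecasts
```

Larger alpha discounts history faster, which is the "discounting" notion the abstract refers to.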
Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting
 Psychological Review
, 2006
"... A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to backpropagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probab ..."
Abstract

Cited by 26 (7 self)
A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to backpropagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probability of the next component’s target. Each layer then does locally Bayesian learning. The approach assumes online, trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The backpropagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in magnitude of forward and backward blocking, which have been challenging for Bayesian learning models.
Bayesian approaches to associative learning: From passive to active learning
 Learning & Behavior
, 2008
"... Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist mode ..."
Abstract

Cited by 18 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional ...
A Two-Stage Ensemble Kalman Filter for Smooth Data Assimilation
, 2005
"... The ensemble Kalman Filter (EnKF) and variants derived therefrom have become important techniques in data assimilation problems. One breakdown of the EnKF method is that in the case of sparsely observed, accurate data, least squares properties of the EnKF create posterior ensemble members that are n ..."
Abstract

Cited by 17 (9 self)
The ensemble Kalman Filter (EnKF) and variants derived therefrom have become important techniques in data assimilation problems. One breakdown of the EnKF method is that in the case of sparsely observed, accurate data, least squares properties of the EnKF create posterior ensemble members that are not compatible with the dynamic model. We propose a modification of Kalman and EnKF filters by imposing a constraint either by projection or by a penalty. A two-step ensemble Kalman Filter is proposed that imposes smoothness as a penalized constraint. The smoothing step consists of another application of the same EnKF code with the smoothness constraint as an independent observation. The utility of the method is demonstrated on a nonlinear dynamic model of wildfire.
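The generic EnKF analysis step that this method builds on can be sketched as follows. This shows only a standard stochastic (perturbed-observation) update, not the paper's smoothness-penalized variant; all names are illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n, N) array of N state members; y: (m,) observation;
    H: (m, n) linear observation operator; R: (m, m) obs-error covariance."""
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = X @ X.T / (N - 1)                                 # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    # perturb the observation independently for each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - H @ ensemble)              # analyzed ensemble
```

In the two-step scheme described above, a second call of the same analysis code would then apply the smoothness constraint as an additional, independent observation.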
THE ELECTROENCEPHALOGRAM AND THE ADAPTIVE AUTOREGRESSIVE MODEL: THEORY AND APPLICATIONS
, 2000
"... ..."
Hierarchical Bayesian Time Series Models
, 1996
"... Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. An brief overview of generalizations of th ..."
Abstract

Cited by 11 (4 self)
Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. A brief overview of generalizations of the fundamental hierarchical time series model concludes the article.
NONLINEAR DYNAMICAL SYSTEM IDENTIFICATION FROM UNCERTAIN AND INDIRECT MEASUREMENTS
, 2002
"... We review the problem of estimating parameters and unobserved trajectory components from noisy time series measurements of continuous nonlinear dynamical systems. It is first shown that in parameter estimation techniques that do not take the measurement errors explicitly into account, like regressio ..."
Abstract

Cited by 10 (0 self)
We review the problem of estimating parameters and unobserved trajectory components from noisy time series measurements of continuous nonlinear dynamical systems. It is first shown that in parameter estimation techniques that do not take the measurement errors explicitly into account, like regression approaches, noisy measurements can produce inaccurate parameter estimates. Another problem is that for chaotic systems the cost functions that have to be minimized to estimate states and parameters are so complex that common optimization routines may fail. We show that the inclusion of information about the time-continuous nature of the underlying trajectories can improve parameter estimation considerably. Two approaches, which take into account both the errors-in-variables problem and the problem of complex cost functions, are described in detail: shooting approaches and recursive estimation techniques. Both are demonstrated on numerical examples.
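The errors-in-variables effect this abstract describes, where ignoring measurement noise biases regression-based parameter estimates, can be illustrated with a toy linear example (all names and values here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 2.0
x = rng.normal(0.0, 1.0, 5000)            # latent (unobserved) trajectory values
y = a_true * x                            # noise-free linear dynamics y = a*x
x_noisy = x + rng.normal(0.0, 1.0, 5000)  # measurements with unit noise variance

# naive least-squares slope using the noisy regressor
a_hat = (x_noisy @ y) / (x_noisy @ x_noisy)
# classic attenuation: E[a_hat] = a_true * var(x) / (var(x) + var(noise)) = 1.0
```

The fitted slope is biased toward zero by roughly the ratio of signal to total variance, which is why methods that model the measurement error explicitly (shooting, recursive estimation) do better.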
Locally Bayesian Learning
"... This article is concerned with trialbytrial, online learning of cueoutcome mappings. In models structured as successions of component functions, an external target can be backpropagated such that the lower layer’s target is the input to the higher layer that maximizes the probability of the highe ..."
Abstract

Cited by 7 (6 self)
This article is concerned with trial-by-trial, online learning of cue-outcome mappings. In models structured as successions of component functions, an external target can be backpropagated such that the lower layer’s target is the input to the higher layer that maximizes the probability of the higher layer’s target. Each layer then does locally Bayesian learning. The resulting parameter updating is not globally Bayesian, but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs, and then maps attentionally filtered inputs to outputs. The model is applied to the human-learning phenomenon called highlighting, which is challenging to other extant Bayesian models, including the rational model of Anderson, the Kalman filter model of Dayan and ...
Bayesian Estimation and the Kalman Filter
 Computers Math. Applic
, 1994
"... In this tutorial article we give a Bayesian derivation of a basic state estimation result for discretetime Markov process models with independent process and measurement noise and measurements not affecting the state. We then list some properties of Gaussian random vectors and show how the Kalman f ..."
Abstract

Cited by 5 (0 self)
In this tutorial article we give a Bayesian derivation of a basic state estimation result for discrete-time Markov process models with independent process and measurement noise and measurements not affecting the state. We then list some properties of Gaussian random vectors and show how the Kalman filtering algorithm follows from the general state estimation result and a linear-Gaussian model definition. We give some illustrative examples including a probabilistic Turing machine, dynamic classification, and tracking a moving object.
1 Introduction
The goal of this paper is to provide a relatively self-contained derivation of some Bayesian estimation results leading to the Kalman filter, with emphasis on conceptual simplicity. The results we present are really just a repackaging of standard results in optimal estimation theory and Bayesian analysis, following mainly from references [Med69, JH69, Sal89, Ber85]. We hope, though, that this paper will provide useful results which can be ...
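The algorithm this tutorial derives can be summarized in one predict/update cycle of the discrete-time Kalman filter for a linear-Gaussian model. This is the generic textbook form, not the paper's own derivation, and the names are illustrative:

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle: x_{t+1} = F x_t + w, w ~ N(0, Q);
    y_t = H x_t + v, v ~ N(0, R)."""
    # predict: propagate the Gaussian posterior through the linear dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update: condition the predicted Gaussian on the new measurement
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)     # posterior mean
    P_new = (np.eye(len(m)) - K @ H) @ P_pred # posterior covariance
    return m_new, P_new
```

In the Bayesian reading emphasized by the article, the update step is exactly conditioning a joint Gaussian on the observed component.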
A TRANSFORMATIONBASED DERIVATION OF THE KALMAN FILTER AND AN EXTENSIVE UNSCENTED TRANSFORM
"... In the unscented Kalman filter (UKF), the state vector is typically augmented with process and measurement noise in order to approximate the joint predictive distribution of state and observation. For that, the unscented transform is used. As its point selection mechanism changes the higher order mo ..."
Abstract

Cited by 4 (4 self)
In the unscented Kalman filter (UKF), the state vector is typically augmented with process and measurement noise in order to approximate the joint predictive distribution of state and observation. For that, the unscented transform is used. As its point selection mechanism changes the higher-order moments between the random variables, statistical independence is not preserved. In this work, we show how statistical independence can be preserved by representing independent variables by separate point sets. In addition to that, we show how the Kalman filter (KF) can be derived based on a particular type of linear transform that allows for a more uniform treatment of KF and UKF.
Index Terms — Kalman filter, conditional Gaussian distribution, unscented transform
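A basic, non-augmented unscented transform can be sketched as follows to show the sigma-point mechanism the abstract discusses. This is a minimal textbook form with illustrative names, not the extensive transform the paper proposes:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f
    using 2n+1 symmetrically placed sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                  # weight of the central point
    ys = np.array([f(s) for s in sigma])        # transformed sigma points
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, ys))
    return y_mean, y_cov
```

For a linear f the transform reproduces the exact mean and covariance; the paper's point is that the joint point selection over augmented states does not preserve independence, motivating separate point sets per independent variable.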