Results 1–10 of 114
Functional-coefficient Regression Models for Nonlinear Time Series
 Journal of the American Statistical Association
, 1998
Abstract

Cited by 43 (11 self)
We apply the local linear regression technique for estimation of functional-coefficient regression models for time series data. The models include threshold autoregressive models (Tong 1990) and functional-coefficient autoregressive models (Chen and Tsay 1993) as special cases, but with added advantages such as depicting finer structure of the underlying dynamics and better post-sample forecasting performance. We have also proposed a new bootstrap test for the goodness of fit of models and a bandwidth selector based on newly defined cross-validatory estimation for the expected forecasting errors. The proposed methodology is data-analytic and is of appreciable flexibility to analyze complex and multivariate nonlinear structures without suffering from the "curse of dimensionality". The asymptotic properties of the proposed estimators are investigated under the α-mixing condition. Both simulated and real data examples are used for illustration. Key Words: α-mixing; Asymptotic normali...
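To make the local linear idea concrete, here is a small sketch: it simulates a functional-coefficient AR(2) model with made-up coefficient functions and estimates them at a point by kernel-weighted least squares. All settings (`a1`, `a2`, the bandwidth `h`, the noise level) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a functional-coefficient AR(2) model (hypothetical coefficient
# functions, chosen only for illustration):
#   x_t = a1(x_{t-1}) * x_{t-1} + a2(x_{t-1}) * x_{t-2} + eps_t
a1 = lambda u: 0.5 * np.sin(u)
a2 = lambda u: 0.3 * np.cos(u)

n = 500
x = np.zeros(n)
for t in range(2, n):
    u = x[t - 1]
    x[t] = a1(u) * x[t - 1] + a2(u) * x[t - 2] + 0.2 * rng.standard_normal()

def local_linear(x, u0, h=0.3):
    """Local linear estimate of a1(u0), a2(u0) by kernel-weighted
    least squares with a Gaussian kernel."""
    y = x[2:]
    u = x[1:-1]                    # smoothing variable u_t = x_{t-1}
    X1, X2 = x[1:-1], x[:-2]       # lags x_{t-1}, x_{t-2}
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    # Design [x1, x2, x1*(u-u0), x2*(u-u0)]: the first two coefficients
    # of the weighted fit are the local estimates of a1(u0), a2(u0).
    D = np.column_stack([X1, X2, X1 * (u - u0), X2 * (u - u0)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * D, sw * y, rcond=None)
    return beta[0], beta[1]

a1_hat, a2_hat = local_linear(x, u0=0.0)
```

Evaluating at a grid of `u0` values traces out the whole coefficient curves; the paper's bandwidth selector would replace the fixed `h` here.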
Recursive Monte Carlo filters: Algorithms and theoretical analysis
, 2003
Abstract

Cited by 40 (0 self)
powerful tool to perform computations in general state space models. We discuss and compare the accept–reject version with the more common sampling importance resampling version of the algorithm. In particular, we show how auxiliary variable methods and stratification can be used in the accept–reject version, and we compare different resampling techniques. In a second part, we show laws of large numbers and a central limit theorem for these Monte Carlo filters by simple induction arguments that need only weak conditions. We also show that, under stronger conditions, the required sample size is independent of the length of the observed series. 1. State space and hidden Markov models. A general state space or hidden Markov model consists of an unobserved state sequence (Xt) and an observation sequence (Yt) with the following properties: State evolution: X0, X1, X2, ... is a Markov chain with X0 ∼ a0(x) dµ(x) and Xt | Xt−1 = xt−1 ∼ at(xt−1, x) dµ(x). Generation of observations: Conditionally on (Xt), the Yt's are independent and Yt depends on Xt only, with Yt | Xt = xt ∼ bt(xt, y) dν(y). These models occur in a variety of applications. Linear state space models are equivalent to ARMA models (see, e.g., [16]) and have become popular ... Received January 2003; revised August 2004. AMS 2000 subject classifications. Primary 62M09; secondary 60G35, 60J22, 65C05. Key words and phrases: state space models, hidden Markov models, filtering and smoothing, particle filters, auxiliary variables, sampling importance resampling, central limit theorem.
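The sampling importance resampling version discussed above can be sketched for a toy scalar linear-Gaussian instance of this state-space setup; the coefficients, noise variances, and particle count below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian state-space model (illustrative values):
#   X_t = 0.9 X_{t-1} + V_t,  V_t ~ N(0, 1)
#   Y_t = X_t + W_t,          W_t ~ N(0, 1)
T, N = 50, 2000
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.standard_normal()
    y[t] = x_true[t] + rng.standard_normal()

# Bootstrap (sampling-importance-resampling) particle filter with
# multinomial resampling at every step.
particles = rng.standard_normal(N)
means = []
for t in range(1, T):
    particles = 0.9 * particles + rng.standard_normal(N)   # propagate
    logw = -0.5 * (y[t] - particles) ** 2                  # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means.append(np.sum(w * particles))                    # filtering mean
    idx = rng.choice(N, size=N, p=w)                       # multinomial resampling
    particles = particles[idx]
```

The stratified and residual resampling schemes the paper compares would replace the `rng.choice` line; the accept–reject variant would instead draw each particle until a uniform variate falls under the likelihood ratio.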
Survey of decision field theory
, 2002
Abstract

Cited by 28 (5 self)
This article summarizes the cumulative progress of a cognitive-dynamical approach to decision making and preferential choice called decision field theory. This review includes applications to (a) binary decisions among risky and uncertain actions, (b) multi-attribute preferential choice, (c) multi-alternative preferential choice, and (d) certainty equivalents such as prices. The theory provides natural explanations for violations of choice principles including strong stochastic transitivity, independence of irrelevant alternatives, and regularity. The theory also accounts for the relation between choice and decision time, preference reversals between choice and certainty equivalents, and preference reversals under time pressure. Comparisons with other dynamic models of decision making and other random utility models of preference are discussed.
Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models
 Journal of Econometrics
, 2001
Abstract

Cited by 28 (0 self)
Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models
Robust maximum-likelihood estimation of multivariable dynamic systems
 Automatica
, 2005
Abstract

Cited by 27 (12 self)
This paper examines the problem of estimating linear time-invariant state-space system models. In particular it addresses the parametrization and numerical robustness concerns that arise in the multivariable case. These difficulties are well recognised in the literature, resulting (for example) in extensive study of subspace-based techniques, as well as recent interest in “data driven” local coordinate approaches to gradient search solutions. The paper here proposes a different strategy that employs the Expectation-Maximisation (EM) technique. The consequence is an algorithm that is iterative and locally convergent to stationary points of the (Gaussian) likelihood function. Furthermore, theoretical and empirical evidence presented here establishes additional attractive properties such as numerical robustness, avoidance of difficult parametrization choices, the ability to estimate unstable systems, the ability to naturally and easily estimate nonzero initial conditions, and moderate computational cost. Moreover, since the methods here are Maximum-Likelihood based, they have associated known and asymptotically optimal statistical properties.
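A minimal sketch of the EM idea for a scalar special case, estimating only the transition coefficient with the noise variances `q` and `r` treated as known: this is not the paper's multivariable algorithm, just an illustration of the E-step (Kalman filter plus RTS smoother) and closed-form M-step on assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar state-space model (all values here are illustrative assumptions):
#   x_t = a x_{t-1} + v_t, v_t ~ N(0, q);   y_t = x_t + w_t, w_t ~ N(0, r)
a_true, q, r, T = 0.8, 0.3, 0.5, 400
x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.standard_normal()
y[0] = x[0] + np.sqrt(r) * rng.standard_normal()
for t in range(1, T):
    x[t] = a_true * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(r) * rng.standard_normal()

a = 0.1                                      # initial guess
for _ in range(50):
    # E-step, forward pass: Kalman filter with prior x_0 ~ N(0, 1)
    mu, P = 0.0, 1.0
    mus, Ps, mups, Pps = [], [], [], []
    for t in range(T):
        mup, Pp = (a * mu, a * a * P + q) if t > 0 else (0.0, 1.0)
        K = Pp / (Pp + r)
        mu, P = mup + K * (y[t] - mup), (1 - K) * Pp
        mus.append(mu); Ps.append(P); mups.append(mup); Pps.append(Pp)
    # E-step, backward pass: RTS smoother, accumulating sufficient statistics
    ms, Vs = mus[-1], Ps[-1]
    num = den = 0.0
    for t in range(T - 2, -1, -1):
        J = Ps[t] * a / Pps[t + 1]
        ms_next, Vs_next = ms, Vs
        ms = mus[t] + J * (ms_next - mups[t + 1])
        Vs = Ps[t] + J * J * (Vs_next - Pps[t + 1])
        num += J * Vs_next + ms_next * ms    # E[x_{t+1} x_t | y]
        den += Vs + ms * ms                  # E[x_t^2 | y]
    # M-step: closed-form update of the transition coefficient
    a = num / den
```

Each iteration provably does not decrease the Gaussian likelihood, which is the locally convergent behaviour the abstract describes; the multivariable case updates whole system matrices with the same smoothed moments.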
Analysis of the asymptotic properties of the MOESP type of subspace algorithms
, 2000
Abstract

Cited by 24 (2 self)
The MOESP type of subspace algorithms are used for the identification of linear, discrete time, finite dimensional state space systems. They are based on the geometric structure of covariance matrices and exploit the properties of the state vector extensively. In this paper the asymptotic properties of the algorithms are examined. The main results include consistency and asymptotic normality for the estimates of the system matrices, under suitable assumptions on the noise sequence, the input process and the underlying true system.
Nonlinear and Non-Gaussian State-Space Modeling with Monte Carlo Techniques: A Survey and Comparative Study
 In Rao, C., & Shanbhag, D. (Eds.), Handbook of Statistics
, 2000
Abstract

Cited by 16 (4 self)
Since Kitagawa (1987) and Kramer and Sorenson (1988) proposed the filter and smoother using numerical integration, nonlinear and/or non-Gaussian state estimation problems have been developed. Numerical integration becomes extremely computer-intensive in the higher-dimensional cases of the state vector. Therefore, to alleviate this problem, sampling techniques such as Monte Carlo integration with importance sampling, resampling, rejection sampling, Markov chain Monte Carlo and so on are utilized, which can easily be applied to multidimensional cases. Thus, in the last decade, several kinds of nonlinear and non-Gaussian filters and smoothers have been proposed using various computational techniques. The objective of this paper is to introduce the nonlinear and non-Gaussian filters and smoothers which can be applied to any nonlinear and/or non-Gaussian cases. Moreover, by Monte Carlo studies, each procedure is compared by the root mean square error criterion.
Experimental Evidence Showing That Stochastic Subspace Identification Methods May Fail
 Systems and Control Letters
, 1998
Abstract

Cited by 15 (5 self)
It is known that certain popular stochastic subspace identification methods may fail for theoretical reasons related to positive realness. In fact, these algorithms are implicitly based on the assumption that the positive and algebraic degrees of a certain estimated covariance sequence coincide. In this paper, we describe how to generate data with the property that this condition is not satisfied. Using these data we show through simulations that several subspace identification algorithms exhibit massive failure. Key words: subspace identification, positive degree, algebraic degree, partial realization, positive realness, model reduction.
Numerical Algorithms For Subspace State Space System Identification (n4sid)
 in Applied and Computational Control, Signals and Circuits
, 1997
Abstract

Cited by 14 (7 self)
We present the basic notions on subspace identification algorithms for linear systems. These methods estimate state sequences or extended observability matrices directly from the given data, through an orthogonal or oblique projection of the row spaces of certain block Hankel matrices into the row spaces of others. The extraction of the state space model is then achieved through the solution of a least squares problem. These algorithms can be elegantly implemented using well-known numerical linear algebra algorithms such as the LQ and singular value decompositions. The paper aims at giving an overview of the methodologies used in time domain subspace identification. A short overview of frequency domain subspace identification results is also presented. 1 INTRODUCTION While at first sight the class of linear time-invariant systems with lumped parameters seems to be rather restricted, it turns out that the input-output behavior of many real-life industrial processes, for most practica...
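As a pocket illustration of the Hankel-matrix/SVD machinery these methods rest on, the sketch below runs a Ho-Kalman-style realization from known Markov parameters of an assumed toy system. Actual subspace algorithms such as N4SID work from measured input-output data rather than Markov parameters, so treat this only as the shift-invariance extraction step.

```python
import numpy as np

# Assumed toy state-space system (for illustration only):
#   x_{t+1} = A x_t + B u_t,   y_t = C x_t
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters C A^i B of the system
k = 10
markov = [C @ np.linalg.matrix_power(A, i) @ B for i in range(2 * k)]

# Block Hankel matrix H[i, j] = C A^{i+j} B, which factors as
# (extended observability matrix) x (extended controllability matrix)
H = np.block([[markov[i + j] for j in range(k)] for i in range(k)])

U, s, Vt = np.linalg.svd(H)
n = int(np.sum(s > 1e-8))          # system order from the numerical rank
O = U[:, :n] * np.sqrt(s[:n])      # extended observability matrix (up to similarity)
# Shift invariance: O[:-1] @ A_hat = O[1:], solved in least squares
A_hat, *_ = np.linalg.lstsq(O[:-1], O[1:], rcond=None)
eig_hat = np.sort(np.linalg.eigvals(A_hat).real)
```

Here `eig_hat` recovers the eigenvalues {0.5, 0.8} of `A` up to numerical precision, since the realization is only determined up to a similarity transform; the LQ-based projections mentioned above are what replace the exact Markov parameters when only noisy input-output data are available.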