Results 1–10 of 21
Switching Kalman Filters
, 1998
Abstract
Cited by 58 (3 self)
We show how many different variants of Switching Kalman Filter models can be represented in a unified way, leading to a single, general-purpose inference algorithm. We then show how to find approximate Maximum Likelihood Estimates of the parameters using the EM algorithm, extending previous results on learning using EM in the non-switching case [DRO93, GH96a] and in the switching, but fully observed, case [Ham90]. 1 Introduction Dynamical systems are often assumed to be linear and subject to Gaussian noise. This model, called the Linear Dynamical System (LDS) model, can be defined as x_t = A_t x_{t-1} + v_t, y_t = C_t x_t + w_t, where x_t is the hidden state variable at time t, y_t is the observation at time t, and v_t ~ N(0, Q_t) and w_t ~ N(0, R_t) are independent Gaussian noise sources. Typically the parameters of the model Θ = {(A_t, C_t, Q_t, R_t)} are assumed to be time-invariant, so that they can be estimated from data using, e.g., EM [GH96a]. One of the main adva...
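The LDS equations above can be sketched with a plain (non-switching) Kalman filter in the scalar case. This is an illustrative sketch, not the paper's switching algorithm: the parameter values a, c, q, r and the simulated data are assumptions chosen for the example.

```python
import random

# Scalar LDS: x_t = a*x_{t-1} + v_t,  y_t = c*x_t + w_t,
# with v_t ~ N(0, q), w_t ~ N(0, r). Illustrative parameter values.
a, c, q, r = 0.9, 1.0, 0.1, 0.5

def simulate(T, seed=0):
    rng = random.Random(seed)
    x, xs, ys = 0.0, [], []
    for _ in range(T):
        x = a * x + rng.gauss(0.0, q ** 0.5)   # state transition
        y = c * x + rng.gauss(0.0, r ** 0.5)   # noisy observation
        xs.append(x); ys.append(y)
    return xs, ys

def kalman_filter(ys, m0=0.0, p0=1.0):
    """Return posterior means E[x_t | y_1..y_t] for each t."""
    m, p, means = m0, p0, []
    for y in ys:
        # Predict step.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update step with observation y.
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return means

xs, ys = simulate(200)
est = kalman_filter(ys)
mse = sum((e - x) ** 2 for e, x in zip(est, xs)) / len(xs)
raw = sum((y / c - x) ** 2 for y, x in zip(ys, xs)) / len(xs)
print(mse < raw)  # filtering should beat using the raw observations
```

The switching models the paper unifies would additionally mix several such (A, C, Q, R) regimes with a hidden discrete switch variable.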
On adaptive decision rules and decision parameter adaptation for automatic speech recognition
 Proc. IEEE
, 2000
Abstract
Cited by 27 (4 self)
Recent advances in automatic speech recognition are accomplished by designing a plug-in maximum a posteriori decision rule such that the forms of the acoustic and language model distributions are specified and the parameters of the assumed distributions are estimated from a collection of speech and language training corpora. Maximum-likelihood point estimation is by far the most prevalent training method. However, due to the problems of unknown speech distributions, sparse training data, high spectral and temporal variabilities in speech, and possible mismatch between training and testing conditions, a dynamic training strategy is needed. To cope with the changing speakers and speaking conditions in real operational conditions for high-performance speech recognition, such paradigms incorporate a small amount of speaker- and environment-specific adaptation data into the training process. Bayesian adaptive learning is an optimal way to combine
Value At Risk When Daily Changes In Market Variables Are Not Normally Distributed
 Journal of Derivatives
, 1998
Abstract
Cited by 24 (1 self)
This paper proposes a new model for calculating VaR where the user is free to choose any probability distributions for daily changes in the market variables and parameters of the probability distributions are subject to updating schemes such as GARCH. Transformations of the probability distributions are assumed to be multivariate normal. The model is appealing in that the calculation of VaR is relatively straightforward and can make use of the RiskMetrics or a similar database. We test a version of the model using nine years of daily data on 12 different exchange rates. When the first half of the data is used to estimate the model's parameters, we find that it provides a good prediction of the distribution of daily changes in the second half of the data. * Faculty of Management, University of Toronto, 105 St. George Street, Toronto, Ontario, Canada M5S 3E6. We are grateful to Tom McCurdy for comments and helpful suggestions. An earlier version of this paper was entitled "Taking account of the kurtosis in market variables when calculating value at risk."
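As a sketch of the kind of GARCH updating scheme the abstract mentions, the snippet below runs a GARCH(1,1) variance recursion and converts each day's variance into a 99% VaR under a normal assumption. All parameter values and returns here are illustrative assumptions, not estimates from the paper's exchange-rate data.

```python
import math

# GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}.
# Illustrative parameter values.
omega, alpha, beta = 0.00001, 0.08, 0.90

def garch_variances(returns, sigma2_0):
    """Return the conditional variance in force on each day."""
    sig2, out = sigma2_0, []
    for r in returns:
        out.append(sig2)
        sig2 = omega + alpha * r * r + beta * sig2  # update after observing r
    return out

def var_99(sigma2):
    # One-day 99% VaR for a unit position under normal (or transformed-to-
    # normal) daily changes; 2.3263 is the standard-normal 99% quantile.
    return 2.3263 * math.sqrt(sigma2)

returns = [0.01, -0.02, 0.005, -0.03, 0.015]
for s2 in garch_variances(returns, sigma2_0=0.0001):
    print(round(var_99(s2), 4))
```

Large returns feed back into the variance, so the VaR estimate widens after volatile days, which is the point of a dynamic updating scheme.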
Nonlinear and Non-Gaussian State-Space Modeling with Monte Carlo Techniques: A Survey and Comparative Study
 In Rao, C., & Shanbhag, D. (Eds.), Handbook of Statistics
, 2000
Abstract
Cited by 16 (4 self)
Since Kitagawa (1987) and Kramer and Sorenson (1988) proposed the filter and smoother using numerical integration, nonlinear and/or non-Gaussian state estimation problems have been studied extensively. Numerical integration becomes extremely computer-intensive in the higher-dimensional cases of the state vector. Therefore, to address this problem, sampling techniques such as Monte Carlo integration with importance sampling, resampling, rejection sampling, Markov chain Monte Carlo and so on are utilized, which can be easily applied to multidimensional cases. Thus, in the last decade, several kinds of nonlinear and non-Gaussian filters and smoothers have been proposed using various computational techniques. The objective of this paper is to introduce the nonlinear and non-Gaussian filters and smoothers which can be applied to any nonlinear and/or non-Gaussian cases. Moreover, by Monte Carlo studies, each procedure is compared by the root mean square error criterion.
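One of the sampling techniques the survey covers, importance sampling with resampling, can be sketched as a bootstrap particle filter. The transition function, noise levels, and observations below are illustrative assumptions, not the survey's test models.

```python
import math, random

rng = random.Random(42)

# Generic nonlinear state-space model: x_t = f(x_{t-1}) + noise, y_t ~ p(y|x_t).
# The cubic drift and Gaussian noises below are purely illustrative.
def f(x):
    return 0.5 * x + 0.1 * x ** 3 / (1 + x * x)

def loglik(y, x):
    # Gaussian observation density with unit variance (up to a constant).
    return -0.5 * (y - x) ** 2

def particle_filter(ys, n=500):
    """Return filtered means E[x_t | y_1..y_t] estimated from n particles."""
    parts = [rng.gauss(0, 1) for _ in range(n)]
    means = []
    for y in ys:
        # Propagate particles through the transition density.
        parts = [f(x) + rng.gauss(0, 0.5) for x in parts]
        # Weight by the observation likelihood (importance sampling step).
        w = [math.exp(loglik(y, x)) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, parts)))
        # Multinomial resampling to avoid weight degeneracy.
        parts = rng.choices(parts, weights=w, k=n)
    return means

print(particle_filter([0.2, 0.5, -0.1, 0.0, 0.3]))
```

Nothing here requires linearity or Gaussianity: swapping f and loglik changes the model, which is why these methods apply "to any nonlinear and/or non-Gaussian cases" as the abstract says.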
Value at risk for a mixture of normal distributions: The use of quasi-Bayesian estimation techniques
, 1997
Abstract
Cited by 14 (0 self)
this article, I examine one such alternative assumption that simultaneously allows for asset returns that are fat-tailed and for tractable estimations of VaR. This distribution, based on a mixture of normal densities, has also been proposed by Zangari (1996). First, I relate the mixture-of-distributions approach to alternatives that have been presented in the academic literature on the stochastic processes governing asset returns. Second, I use an estimation technique for the parameters of the mixture of distributions that is computationally simpler than the techniques suggested by Zangari: the quasi-Bayesian maximum likelihood estimation (QBMLE) approach (first suggested by Hamilton, 1991).
357 “Seasonal adjustment and the detection of business cycle phases” by
, 2004
Abstract
Cited by 9 (0 self)
In 2004 all publications will carry a motif taken from the €100 banknote. This paper can be downloaded without charge from
Three approaches to probability model selection
 In de Mantaras and Poole [160
, 1994
Abstract
Cited by 4 (0 self)
Keywords: relative entropy, EM algorithm. This paper compares three approaches to the problem of selecting among probability models to fit data: (1) use of statistical criteria such as Akaike's information criterion and Schwarz's "Bayesian information criterion," (2) maximization of the posterior probability of the model, and (3) maximization of an "effectiveness ratio" trading off accuracy and computational cost. The unifying characteristic of the approaches is that all can be viewed as maximizing a penalized likelihood function. The second approach with suitable prior distributions has been shown to reduce to the first. This paper shows that the third approach reduces to the second for a particular form of the effectiveness ratio, and illustrates all three approaches with the problem of selecting the number of components in a mixture of Gaussian distributions. Unlike the first two approaches, the third can be used even when the candidate models are chosen for computational efficiency, without regard to physical interpretation, so that the likelihoods and the prior distribution over models cannot be interpreted literally. As the most general and computationally oriented of the approaches, it is especially useful for artificial intelligence applications.
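The first approach, penalized likelihood via AIC and BIC, can be sketched directly. The fitted log-likelihoods and parameter counts below are made-up numbers for illustration, not results from the paper's mixture experiments.

```python
import math

# Penalized-likelihood scores: AIC = -2*logL + 2*k,  BIC = -2*logL + k*ln(n),
# where k is the number of free parameters and n the sample size.
def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * math.log(n)

n = 500  # sample size (illustrative)
# (number of mixture components, free-parameter count, fitted log-likelihood)
candidates = [(1, 2, -760.0), (2, 5, -741.0), (3, 8, -736.0)]

best_aic = min(candidates, key=lambda c: aic(c[2], c[1]))
best_bic = min(candidates, key=lambda c: bic(c[2], c[1], n))
# BIC's heavier complexity penalty (ln(500) > 2) can pick a smaller model.
print(best_aic[0], best_bic[0])
```

Both criteria are instances of the paper's unifying view: a likelihood term plus a penalty, differing only in how the penalty scales with model size and sample size.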
Value-at-Risk Analysis for Heavy-Tailed Financial Returns
, 2001
Abstract
Cited by 1 (0 self)
Abstract of Master's Thesis, Department of Engineering Physics and Mathematics, P.O. Box 2200, FIN-02015 HUT, FINLAND.
Author: Samppa Nylund. Department: Department of Engineering Physics and Mathematics. Major subject: Systems Analysis and Operations Research. Minor subject: Business Finance and Financial Economics. English title: Value-at-Risk Analysis for Heavy-Tailed Financial Returns. Finnish title: Positiivisesti huipukkaiden osaketuottojen Value-at-Risk-analyysi. Number of pages: 77. Chair: Mat-2 Applied Mathematics. Supervisor: Ahti Salo. Instructor: Ahti Salo.
Abstract: Market risk refers to the risks of financial losses arising from adverse movements in market prices and rates. Sources of market risk include equity prices, interest rates, foreign exchange rates and commodity prices. The interest in managing market risk has grown during the last two decades among financial institutions and other market participants. A market risk measure called Value-at-Risk (VaR) has been adopted as the most important measure of market risk by most financial institutions. VaR-based risk management is also required by many regulatory bodies. VaR measures the maximum expected loss in monetary terms at a given confidence level over a given forecast period. There are several alternative techniques to estimate VaR measures. The RiskMetrics system, introduced by investment bank J.P. Morgan in 1994, has become the de facto standard within the industry. This system is based on the assumption that the returns of financial instruments follow the normal distribution. Yet the results of many empirical studies suggest that at high confidence levels (at confidence levels over 95%) extreme losses are more frequent than predicted by the RiskMetrics technique. This is be...
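The RiskMetrics-style baseline the thesis compares against can be sketched as an EWMA variance recursion combined with a normal quantile. The decay factor 0.94 is the standard RiskMetrics choice for daily data; the returns and initial variance are illustrative assumptions.

```python
import math

# RiskMetrics-style one-day VaR: EWMA variance with decay lambda = 0.94,
# then a standard-normal quantile. Illustrative inputs.
LAM = 0.94

def ewma_variance(returns, sigma2_0):
    """sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2, rolled over all returns."""
    sig2 = sigma2_0
    for r in returns:
        sig2 = LAM * sig2 + (1.0 - LAM) * r * r
    return sig2

def normal_var(sigma2, level_z=1.6449, position=1.0):
    # level_z = 1.6449 is the standard-normal 95% quantile.
    return position * level_z * math.sqrt(sigma2)

returns = [0.004, -0.012, 0.007, -0.021, 0.003]
s2 = ewma_variance(returns, sigma2_0=0.0001)
print(round(normal_var(s2), 4))
```

The thesis's point is that for heavy-tailed returns this normal quantile understates losses at confidence levels above 95%, where a fatter-tailed model is needed.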
Measuring DAX Market Risk: A Neural Network Volatility Mixture Approach
 the FFM2000 Conference, London, 31 May2
, 2000
Abstract
Cited by 1 (0 self)
In this paper we propose a framework for estimation and quality control of conditional neural network volatility models for market risk management. In a first step, we derive a conditional volatility model based on Gaussian mixture densities that can be used with linear or neural regression models (extendable even to rule systems or decision trees). In a second step, we introduce performance measures that quantify two different properties of the models' volatility forecasts important to risk management. The proposed framework is tested on daily DAX (German stock index) data. Results show that the neural network volatility mixture approach outperforms GARCH models. 1 Introduction One of the fundamental assumptions used today by risk measurement systems is that the underlying returns on financial processes are distributed according to a conditional normal distribution (e.g., JP Morgan's RiskMetrics). This assumption has been a focal point of today's research on risk measur...
Market Power and Price Movements over the Business Cycle
, 2001
Abstract
Cited by 1 (0 self)
This paper develops and tests implications of an oligopoly-pricing model. The model predicts that during a demand expansion the short-run competitive price is a pure-strategy Nash equilibrium, but in a recession firms set prices above the competitive price. Thus, price markups over the competitive price are countercyclical. Prices set during a recession are more variable than prices set in expansions, because firms employ mixed-strategy pricing in recessions. The empirical analysis utilizes Hamilton's time-series switching-regime filter to test the unique predictions of the model. Fourteen out of fifteen industries have fluctuations consistent with this oligopoly-pricing model. * Wilson acknowledges research support from the John M. Olin Foundation and the National Science Foundation. Doc Ghose, Joe Harrington, Mark Walker, and John Wooders provided helpful suggestions. This paper has also benefited from comments by seminar participants at the University of Arizona, the University of Houston, Johns Hopkins University, the University of Kansas, the University of Toronto, and Virginia Polytechnic Institute and State University. Recent research has focused on firms' market power and how this power may be exercised differentially over the business cycle. For example, studies by Greenwald et al. (1984), Gottfries (1991), Klemperer (1995), and Chevalier and Scharfstein (1996) have emphasized the role of capital-market imperfections. When capital-market imperfections exist, the incentives for firms to make investments may be reduced because firms may not reap the profits associated with the investment. One form of investment is a low price that builds a firm's market share by attracting more customers in the future. During a recession, firms may raise prices, forgoing a...
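The Hamilton switching-regime filter used in the empirical analysis can be sketched for two regimes: a filtered probability of being in each regime is propagated through a Markov transition matrix and reweighted by each observation's regime-conditional likelihood. The regime means, variances, and transition probabilities below are illustrative assumptions, not estimates from the paper's industry data.

```python
import math

# Two-regime Hamilton filter sketch: y_t ~ N(mu[s], sigma[s]^2) given regime
# s in {0, 1}, with Markov transition matrix P. Illustrative parameters.
mu = [0.0, 2.0]
sigma = [1.0, 1.0]
P = [[0.95, 0.05],
     [0.10, 0.90]]

def norm_pdf(y, m, s):
    return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def hamilton_filter(ys):
    """Return P(s_t = 1 | y_1..y_t) for each t."""
    prob = [0.5, 0.5]  # prior regime probabilities
    out = []
    for y in ys:
        # Predict: propagate regime probabilities one step ahead.
        pred = [sum(prob[i] * P[i][j] for i in range(2)) for j in range(2)]
        # Update: weight by the regime-conditional likelihood of y.
        lik = [pred[j] * norm_pdf(y, mu[j], sigma[j]) for j in range(2)]
        z = sum(lik)
        prob = [l / z for l in lik]
        out.append(prob[1])
    return out

ys = [0.1, -0.3, 0.2, 2.1, 1.8, 2.4]
probs = hamilton_filter(ys)
print(probs[0] < 0.5 < probs[-1])  # regime-1 probability rises in the high-mean run
```

In the paper's application the regimes would correspond to expansion and recession pricing behavior, with the filtered probabilities dating the switches.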