Results 1–10 of 96
Decision-Theoretic Planning: Structural Assumptions and Computational Leverage
Journal of Artificial Intelligence Research, 1999
Cited by 415 (4 self)
Abstract: Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to de...
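The dynamic-programming methods this survey covers can be illustrated with value iteration, the classic algorithm for constructing optimal MDP policies. A minimal sketch on a hypothetical two-state, two-action MDP (all transition probabilities and rewards below are invented for illustration):

```python
import numpy as np

# Hypothetical two-state, two-action MDP (numbers invented for illustration).
# P[a][s, s2] = Pr(next state s2 | state s, action a); R[s, a] = expected reward.
P = [np.array([[0.9, 0.1],
               [0.2, 0.8]]),   # action 0
     np.array([[0.1, 0.9],
               [0.7, 0.3]])]   # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9                     # discount factor

V = np.zeros(2)
for _ in range(10_000):
    # Bellman optimality backup: V(s) = max_a [ R(s,a) + gamma * E[V(s')] ]
    Q = np.stack([R[:, a] + gamma * (P[a] @ V) for a in range(2)], axis=1)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)       # greedy policy at convergence
```

The contraction property of the discounted backup guarantees convergence to the unique fixed point of the Bellman equation; the structural results the survey discusses are about exploiting problem structure so that this backup need not be computed state by state.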
Computing optimal policies for partially observable decision processes using compact representations
In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 1996
Cited by 112 (15 self)
Abstract: Partially observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of action to be determined under conditions of uncertainty, and incorporating partial observations made by an agent. Dynamic programming algorithms based on the information or belief state of an agent can be used to construct optimal policies without explicit consideration of past history, but at high computational cost. In this paper, we discuss how structured representations of the system dynamics can be incorporated in classic POMDP solution algorithms. We use Bayesian networks with structured conditional probability matrices to represent POMDPs, and use this representation to structure the belief space for POMDP algorithms. This allows irrelevant distinctions to be ignored. Apart from speeding up optimal policy construction, we suggest that such representations can be exploited to a great extent in the development of useful approximation methods. We also briefly discuss the difference in perspective adopted by influence diagram solution methods vis-à-vis POMDP techniques.
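The belief (information) state on which these dynamic programming algorithms operate evolves by Bayesian filtering: b'(s') ∝ O(o | s') Σ_s P(s' | s, a) b(s). A minimal sketch of that update on a hypothetical two-state POMDP (all numbers invented for illustration; the paper's contribution is the structured representation of these matrices, not this update rule itself):

```python
import numpy as np

def belief_update(b, P_a, O_o):
    """One Bayesian belief update. b: current belief over states;
    P_a: transition matrix for the action taken; O_o: observation
    likelihoods O(o | s') for the observation actually received."""
    b_pred = b @ P_a               # predict: push belief through the dynamics
    b_new = O_o * b_pred           # correct: weight by observation likelihood
    return b_new / b_new.sum()     # renormalize to a probability vector

# Hypothetical numbers: uninformative prior, sticky dynamics,
# an observation that favors state 0.
b = np.array([0.5, 0.5])
P_a = np.array([[0.9, 0.1],
                [0.1, 0.9]])
O_o = np.array([0.85, 0.15])
b_post = belief_update(b, P_a, O_o)
```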
Learning over Sets using Kernel Principal Angles
Journal of Machine Learning Research, 2003
Cited by 79 (2 self)
Abstract: We consider the problem of learning with instances defined over a space of sets of vectors. We derive a new positive definite kernel f(A,B) defined over pairs of matrices A, B based on the concept of principal angles between two linear subspaces. We show that the principal angles can be recovered using only inner products between pairs of column vectors of the input matrices, thereby allowing the original column vectors of A, B to be mapped onto arbitrarily high-dimensional feature spaces.
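The inner-product route to principal angles can be sketched as follows: the cosines of the principal angles between span(A) and span(B) are the singular values of (AᵀA)^(-1/2) (AᵀB) (BᵀB)^(-1/2), which depends on the columns of A and B only through inner products and is therefore kernelizable. A small illustration (the example matrices are hypothetical, and this is a sketch of the underlying linear algebra, not the paper's own algorithm):

```python
import numpy as np

def gram_inv_sqrt(G):
    # Inverse square root of a symmetric positive-definite Gram matrix.
    w, U = np.linalg.eigh(G)
    return U @ np.diag(1.0 / np.sqrt(w)) @ U.T

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between span(A) and span(B),
    computed from Gram (inner-product) matrices only."""
    Gaa, Gbb, Gab = A.T @ A, B.T @ B, A.T @ B
    M = gram_inv_sqrt(Gaa) @ Gab @ gram_inv_sqrt(Gbb)
    s = np.linalg.svd(M, compute_uv=False)
    return np.clip(s, 0.0, 1.0)   # clip numerical noise outside [0, 1]

# span(A) is the xy-plane, span(B) is the xz-plane: they share the x-axis
# (angle 0) and are otherwise orthogonal (angle 90 degrees).
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
cos = principal_angle_cosines(A, B)
```

Because only Gaa, Gbb, and Gab enter the computation, each inner product can be replaced by a kernel evaluation, which is what lets the column vectors live in an arbitrarily high-dimensional feature space.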
On the Mechanics of Forming and Estimating Dynamic Linear Economies
Cited by 51 (14 self)
Abstract: This paper catalogues formulas that are useful for estimating dynamic linear economic models. We describe algorithms for computing equilibria of an economic model and for recursively computing a Gaussian likelihood function and its gradient with respect to parameters. We apply these methods to several example economies.
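Recursive computation of a Gaussian likelihood is, in essence, the Kalman filter's prediction-error decomposition. A minimal sketch for a hypothetical scalar local-level model (parameter names and default values are invented for illustration):

```python
import numpy as np

def gaussian_loglik(y, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Log-likelihood of observations y under a hypothetical scalar
    local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t with
    Var(w) = q, Var(v) = r, computed recursively by the Kalman filter."""
    x, p, ll = x0, p0, 0.0
    for yt in y:
        p_pred = p + q                    # predict the state variance
        s = p_pred + r                    # innovation variance
        e = yt - x                        # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * s) + e * e / s)
        k = p_pred / s                    # Kalman gain
        x = x + k * e                     # measurement update
        p = (1 - k) * p_pred
    return ll
```

Each observation contributes one Gaussian term in its innovation, so the likelihood of the whole sample is accumulated in a single forward pass; the gradient can be propagated through the same recursion.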
Computational mechanics: Pattern and prediction, structure and simplicity
Journal of Statistical Physics, 1999
Cited by 43 (8 self)
Abstract: Computational mechanics, an approach to structural complexity, defines a process’s causal states and gives a procedure for finding them. We show that the causal-state representation—an ε-machine—is the minimal one consistent with ...
A Lyapunov Bound for Solutions of Poisson's Equation
Annals of Probability, 1996
Cited by 43 (25 self)
Abstract: In this paper we consider ψ-irreducible Markov processes evolving in discrete or continuous time, on a general state space. We develop a Lyapunov function criterion that permits one to obtain explicit bounds on the solution to Poisson's equation and, in particular, obtain conditions under which the solution is square integrable. These results are applied to obtain sufficient conditions that guarantee the validity of a functional central limit theorem for the Markov process. As a second consequence of the bounds obtained, a perturbation theory for Markov processes is developed which gives conditions under which both the solution to Poisson's equation and the invariant probability for the process are continuous functions of its transition kernel. The techniques are illustrated with applications to queueing theory and autoregressive processes.
AMS subject classifications: 68M20, 60J10
Keywords: Markov chain, Markov process, Poisson's equation, Lyapunov f...
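For a finite-state chain, Poisson's equation (I - P)h = f - π(f)·1, pinned down by the normalization π(h) = 0, can be solved directly, which makes the objects in the abstract concrete. A toy sketch (the three-state chain below is hypothetical; the paper itself treats general state spaces, where such direct solves are unavailable and Lyapunov bounds are needed instead):

```python
import numpy as np

# Hypothetical three-state birth-death chain and cost function f.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
f = np.array([1.0, 0.0, 2.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                      # normalize (also fixes the sign)

fbar = pi @ f                           # pi(f), the long-run average cost

# Solve (I - P) h = f - pi(f) 1 together with the normalization pi(h) = 0.
# The stacked system is consistent, so least squares returns the exact h.
Aeq = np.vstack([np.eye(3) - P, pi])
b = np.concatenate([f - fbar, [0.0]])
h, *_ = np.linalg.lstsq(Aeq, b, rcond=None)
```

The solution h is the quantity whose square-integrability and stability under perturbations of P the paper's Lyapunov criterion controls, and it is exactly the term that appears in the martingale decomposition behind the functional central limit theorem.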
Kernel Principal Angles for Classification Machines with Applications to Image Sequence Interpretation
2002
Cited by 37 (6 self)
Abstract: We consider the problem of learning with instances defined over a space of sets of vectors. We derive a new positive definite kernel f(A,B) defined over pairs of matrices A, B based on the concept of principal angles between two linear subspaces. We show that the principal angles can be recovered using only inner products between pairs of column vectors of the input matrices, thereby allowing the original column vectors of A, B to be mapped onto arbitrarily high-dimensional feature spaces.
Conditional Forecasts in Dynamic Multivariate Models
Review of Economics and Statistics, 1998
Cited by 27 (1 self)
Abstract: In the existing literature, conditional forecasts in the vector autoregressive (VAR) framework have not been commonly presented with probability distributions or error bands. This paper develops Bayesian methods for computing such distributions or bands. It broadens the class of conditional forecasts to which the methods can be applied. The methods work for both structural and reduced-form VAR models and, in contrast to common practices, account for the parameter uncertainty in small samples. Empirical examples under the flat prior and under the reference prior of Sims and Zha (1998) are provided to show the use of these methods.
JEL classification: C32, E17, C53
Key words: conditional forecasts, hard and soft conditions, Bayesian methods, probability distribution, error bands, likelihood
A unified model of qualitative belief change: a dynamical systems perspective
Artificial Intelligence, 1998
Cited by 26 (1 self)
Abstract: Belief revision and belief update have been proposed as two types of belief change serving different purposes: revision is intended to capture changes in belief state reflecting new information about a static world, and update is intended to capture changes of belief in response to a changing world. We argue that routine belief change involves elements of both, and present a model of generalized update that allows updates in response to external changes to inform an agent about its prior beliefs. This model of update combines aspects of revision and update, providing a more realistic characterization of belief change. We show that, under certain assumptions, the original update postulates are satisfied. We also demonstrate that plain revision and plain update are special cases of our model. We also draw parallels to models of stochastic dynamical systems, and use this to develop a model that deals with iterated update and noisy observations in qualitative settings that is analogous to Bayesian updating in a quantitative setting. Some parts of this report appeared in preliminary form in “Generalized Update: Belief Change in Dynamic Settings,” Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, pp. 1550–1556 (1995).
Robust maximum-likelihood estimation of multivariable dynamic systems
Automatica, 2005
Cited by 24 (12 self)
Abstract: This paper examines the problem of estimating linear time-invariant state-space system models. In particular it addresses the parametrization and numerical robustness concerns that arise in the multivariable case. These difficulties are well recognised in the literature, resulting (for example) in extensive study of subspace-based techniques, as well as recent interest in “data-driven” local coordinate approaches to gradient search solutions. The paper here proposes a different strategy that employs the Expectation Maximisation (EM) technique. The consequence is an algorithm that is iterative, and locally convergent to stationary points of the (Gaussian) likelihood function. Furthermore, theoretical and empirical evidence presented here establishes additional attractive properties such as numerical robustness, avoidance of difficult parametrization choices, the ability to estimate unstable systems, the ability to naturally and easily estimate nonzero initial conditions, and moderate computational cost. Moreover, since the methods here are maximum-likelihood based, they have associated known and asymptotically optimal statistical properties.