Results 1 - 10 of 26
Risk Sensitive Control of Finite State Machines on an Infinite Horizon I
Abstract

Cited by 29 (5 self)
In this paper we consider robust and risk sensitive control of discrete time finite state systems on an infinite horizon. The solution of the state feedback robust control problem is characterized in terms of the value of an average cost dynamic game. The risk sensitive stochastic optimal control problem is solved using the policy iteration algorithm, and the optimal rate is expressed in terms of the value of a stochastic dynamic game with average cost per unit time criterion. By taking a small noise limit a deterministic dynamic game is obtained, which is closely related to the robust control problem. 1 Introduction. There are various approaches to treating disturbances in control systems. In stochastic control, disturbances are modelled as stochastic processes (random noise). On the other hand, in H∞/robust control theory disturbances are modelled deterministically. The theory of risk sensitive optimal control provides a link between stochastic and deterministic approaches. The l...
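The exponential-of-sum criterion underlying this line of work admits a simple multiplicative dynamic programming recursion. The sketch below illustrates it for a finite-horizon, finite-state MDP (the paper above treats the infinite-horizon average-cost case via policy iteration, which this does not reproduce); the transition tensor `P`, cost matrix `c`, risk parameter `theta`, and the toy numbers are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def risk_sensitive_dp(P, c, theta, T):
    """Finite-horizon risk-sensitive DP for a finite MDP.

    P: (A, S, S) transition matrices, c: (S, A) nonnegative stage costs,
    theta > 0: risk-aversion parameter, T: horizon.
    Works in the multiplicative ("exponentiated") form
        W_t(x) = min_a exp(theta * c(x, a)) * sum_y P(y | x, a) W_{t+1}(y)
    and returns V_0 = (1/theta) log W_0 plus a greedy first-stage policy.
    """
    W = np.ones(P.shape[1])                  # terminal value: exp(theta * 0)
    for _ in range(T):
        Q = np.exp(theta * c.T) * (P @ W)    # (A, S): exponentiated Q-values
        policy = Q.argmin(axis=0)            # greedy action per state
        W = Q.min(axis=0)
    return np.log(W) / theta, policy

# two-state, two-action toy problem (made-up numbers)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
c = np.array([[1.0, 0.0],    # costs of the two actions in state 0
              [2.0, 0.5]])   # costs of the two actions in state 1
V, pi = risk_sensitive_dp(P, c, theta=0.5, T=50)
```

Dividing `log W` by `theta` recovers the additive-cost scale; as `theta` shrinks toward zero the recursion approaches ordinary risk-neutral value iteration.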
Risk Sensitive Control of Markov Processes in Countable State Space
Abstract

Cited by 28 (6 self)
In this paper we consider infinite horizon risk-sensitive control of Markov processes with discrete time and denumerable state space. This problem is solved by proving, under suitable conditions, that there exists a bounded solution to the dynamic programming equation. The dynamic programming equation is transformed into an Isaacs equation for a stochastic game, and the vanishing discount method is used to study its solution. In addition, we prove that the existence conditions are also necessary.
Risk Sensitive Markov Decision Processes
, 1997
Abstract

Cited by 27 (3 self)
This paper summarizes some contributions to the first of these objectives. For linear systems and exponential of the sum of quadratic costs, the problem has been studied by [26] in the fully observed setting. Extensions to the partially observed setting are due to [3] and [34]. A somewhat surprising result is that the conditional distribution of the state given past observations does not constitute an information state. The equivalence, in the large risk limit, to a differential game arising in H
Feedback Control Applied to Survivability: A Host-Based Autonomic Defense System
 IEEE Trans. Reliability, Mar
Risk-Aware Decision Making and Dynamic Programming
Abstract

Cited by 13 (3 self)
This paper considers sequential decision making problems under uncertainty, the tradeoff between the expected return and the risk of high loss, and methods that use dynamic programming to find optimal policies. It is argued that the Bellman Principle determines how risk considerations on the return can be incorporated. The discussion centers around returns generated by Markov Decision Processes, and the conclusions concern a large class of methods in Reinforcement Learning.
Sensor planning with nonlinear utility functions
 In Proceedings of the Fifth European Conference on Planning (ECP99)
, 1999
Abstract

Cited by 7 (4 self)
Abstract. Sensor planning is concerned with when to sense and what to sense. We study sensor planning in the context of planning objectives that trade off between minimizing the worst-case, expected, and best-case plan-execution costs. Sensor planning with these planning objectives is interesting because they are realistic and because the frequency of sensing changes with the planning objective: more pessimistic decision makers tend to sense more frequently. We perform sensor planning by combining one of our techniques for planning with nonlinear utility functions with an existing sensor-planning method. The resulting sensor-planning method is not only as easy to implement as the sensor-planning method that it extends but also (almost) as efficient. We demonstrate empirically how sensor plans change as the planning objective changes, using a common testbed for sensor planning.
Risk-Sensitive and Minimax Control of Discrete-Time, Finite-State Markov Decision Processes
 AUTOMATICA
, 1999
Abstract

Cited by 7 (0 self)
This paper analyzes a connection between risk-sensitive and minimax criteria for discrete-time, finite-state Markov Decision Processes (MDPs). We synthesize optimal policies with respect to both criteria, both for the finite horizon and the discounted infinite horizon problem. A generalized decision-making framework is introduced, which includes as special cases a number of approaches that have been considered in the literature. The framework allows for discounted risk-sensitive and minimax formulations leading to stationary optimal policies on the infinite horizon. We illustrate our results with a simple machine replacement problem.
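The risk-sensitive/minimax connection analyzed above can be previewed on a single random cost: the entropic evaluation (1/θ) log E[exp(θC)] interpolates between the expected cost as θ → 0 and the worst-case (minimax) cost as θ → ∞. A small numerical sketch; the cost distribution below is made up for illustration and is not from the paper.

```python
import numpy as np

def entropic_cost(costs, probs, theta):
    """Risk-sensitive ("entropic") evaluation (1/theta) * log E[exp(theta * C)].

    Note: for very large theta this direct form can overflow;
    scipy.special.logsumexp is the numerically stable alternative.
    """
    return np.log(probs @ np.exp(theta * costs)) / theta

# illustrative discrete cost distribution
costs = np.array([0.0, 1.0, 4.0])
probs = np.array([0.7, 0.2, 0.1])

mean_cost = probs @ costs                      # risk-neutral value: 0.6
near_mean = entropic_cost(costs, probs, 1e-3)  # ~ mean cost for small theta
near_max = entropic_cost(costs, probs, 100.0)  # ~ max(costs) for large theta
```

For any intermediate θ the evaluation lies between these two extremes and is nondecreasing in θ, which is one way to read the paper's claim that the minimax criterion arises as a limit of the risk-sensitive one.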
Existence of Risk Sensitive Optimal Stationary Policies for Controlled Markov Processes
 Applied Mathematics and Optimization
, 1997
Abstract

Cited by 7 (3 self)
In this paper we are concerned with the existence of optimal stationary policies for infinite horizon risk sensitive Markov control processes with denumerable state space, unbounded cost function, and long run average cost. Introducing a discounted cost dynamic game, we prove that its value function satisfies an Isaacs equation, and its relationship with the risk sensitive control problem is studied. Using the vanishing discount approach, we prove that the risk sensitive dynamic programming inequality holds, and derive an optimal stationary policy. Key Words. Risk sensitive stochastic control, dynamic games, Isaacs equation, optimal stationary policies. Mathematics Subject Classifications (1991). 90C40 (93E20). Running Head. Risk Sensitive Controlled Markov Processes. 1 Supported in part by the National Science Foundation under grant EEC 9402384 2 Institute for Systems Research, University of Maryland, College Park, Maryland 20742. On leave from Department of Mathematics, CINVEST...
Finite Time-Horizon Risk Sensitive Control and the Robust Limit under a Quadratic Growth Assumption
, 1998
Abstract

Cited by 5 (2 self)
The finite time-horizon risk sensitive limit problem for continuous, nonlinear systems is considered. Previous results are extended to cover more typical examples. In particular, the cost may grow quadratically, and the diffusion coefficient may depend on the state. It is shown that the risk sensitive value function is the solution of the corresponding dynamic programming equation. It is also shown that this value converges to the value of the robust control problem as the cost becomes infinitely risk averse, with corresponding scaling of the diffusion coefficient. Key Words: risk-sensitive control, robust control, H∞, viscosity solutions, nonlinear HJB equations, nonlinear Isaacs equations. AMS subject classifications: 35B37, 49L25, 90D25, 93B36, 93C10, 93E05, 93E20. 1 Introduction The nonlinear, finite time-horizon risk sensitive limit problem is considered. It is, by now, well known that the value functions of risk-sensitive stochastic control problems tend to converge to the value functio...
Risk-Sensitive Control of Markov Decision Processes
 In Proc. 30th Conference on Information Sciences and Systems
, 1996
Abstract

Cited by 4 (3 self)
This paper introduces an algorithm to determine near-optimal control laws for Markov Decision Processes with a risk-sensitive criterion. Both the fully observed and the partially observed settings are considered, for finite and infinite horizon formulations. Dynamic programming equations are introduced which characterize the value function for the partially observed, infinite horizon, discounted-cost formulation. An alternative risk-sensitive formulation is examined, for which there exists a stationary infinite horizon optimal policy. Policy and value iteration algorithms are used to determine such a policy. Finally, the alternative formulation is extended in a natural way to the partially observed setting. 1 Introduction Risk-sensitive control is an area of continuing interest in stochastic control theory. It is a generalization of the classical, risk-neutral approach, whereby we seek to minimize an expression that depends not only on the total expected cost, but on h...