Results 1–10 of 113
Optimal execution of portfolio transactions
 Journal of Risk
Abstract

Cited by 87 (8 self)
We consider the execution of portfolio transactions with the aim of minimizing a combination of volatility risk and transaction costs arising from permanent and temporary market impact. For a simple linear cost model, we explicitly construct the efficient frontier in the space of time-dependent liquidation strategies, which have minimum expected cost for a given level of uncertainty. We may then select optimal strategies either by minimizing a quadratic utility function or by minimizing Value at Risk. The latter choice leads to the concept of Liquidity-adjusted VaR, or L-VaR, that explicitly considers the best ...
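In the continuous-time limit of the linear-impact model described above, the efficient-frontier strategies have a well-known hyperbolic-sine form: expected holdings decay from the initial position to zero over the horizon, with a curvature set by risk aversion. A minimal numerical sketch of such a time-dependent liquidation trajectory (all parameter values below are hypothetical, chosen only for illustration):

```python
import numpy as np

def liquidation_trajectory(X, T, N, sigma, eta, lam):
    """Expected optimal holdings x_k along an efficient-frontier
    liquidation strategy for a linear temporary-impact model
    (continuous-time limit; parameters are illustrative assumptions).

    X     : initial number of shares to liquidate
    T     : liquidation horizon
    N     : number of trading intervals
    sigma : volatility per unit time
    eta   : temporary-impact coefficient (linear impact assumed)
    lam   : risk aversion, selecting a point on the frontier
    """
    # kappa^2 = lam * sigma^2 / eta in the small-interval limit
    kappa = np.sqrt(lam * sigma**2 / eta)
    t = np.linspace(0.0, T, N + 1)
    # holdings decay from X at t=0 to 0 at t=T
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

traj = liquidation_trajectory(X=1e6, T=5.0, N=50,
                              sigma=0.3, eta=1e-6, lam=1e-6)
```

As lam grows the trajectory front-loads the selling (less variance, more impact cost); as lam approaches zero it tends to the straight-line schedule that minimizes expected cost alone.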
Investment, capacity utilization, and the real business cycle
 American Economic Review
, 1988
Abstract

Cited by 81 (4 self)
This paper adopts Keynes' view that shocks to the marginal efficiency of investment are important for business fluctuations, but incorporates it in a neoclassical framework with endogenous capacity utilization. Increases in the efficiency of newly produced investment goods stimulate the formation of "new" capital and more intensive utilization and accelerated depreciation of "old" capital. Theoretical and quantitative analysis suggests that the shocks and transmission mechanism studied here may be important elements of business cycles. In the real-business-cycle models of the type developed by Finn Kydland and Edward Prescott (1982), and John Long and Charles Plosser (1983), the cycles are generated by exogenous shocks to the production function. A stylized version of the main mechanism working in these models can be described as follows. Dynamic optimizing behavior on the part of agents in the economy implies that both consumption and investment react positively to these direct shocks to output. Since the marginal productivity of labor is directly affected, employment is also procyclical. The resulting capital accumulation provides a channel of persistence, even if the technology shocks are serially uncorrelated. Hence, these productivity shocks are able to generate, from a neoclassical framework, comovements of macroeconomic variables and persistence of fluctuations that conform to those typically observed during business cycles. In contrast with the mechanism described above, where investment reacts to changes in output, the present paper adopts John Maynard Keynes' (1936) view that it is shocks to the marginal efficiency of investment that are important for generating out ...
Real business cycles in a small open economy
 American Economic Review
, 1991
Abstract

Cited by 76 (5 self)
Approximate Solutions to Markov Decision Processes
, 1999
Abstract

Cited by 66 (9 self)
One of the basic problems of machine learning is deciding how to act in an uncertain world. For example, if I want my robot to bring me a cup of coffee, it must be able to compute the correct sequence of electrical impulses to send to its motors to navigate from the coffee pot to my office. In fact, since the results of its actions are not completely predictable, it is not enough just to compute the correct sequence; instead, the robot must sense and correct for deviations from its intended path. In order for any machine learner to act reasonably in an uncertain environment, it must solve problems like the above one quickly and reliably. Unfortunately, the world is often so complicated that it is difficult or impossible to find the optimal sequence of actions to achieve a given goal. So, in order to scale our learners up to real-world problems, we usually must settle for approximate solutions. One representation for a learner's environment and goals is a Markov decision process, or MDP. ...
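Before turning to approximation, the exact baseline for small MDPs is standard value iteration on the Bellman equation. A self-contained sketch on a toy problem (the transition and reward numbers are invented, not from the thesis):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a small MDP exactly by value iteration.
    P[a] is the |S| x |S| transition matrix under action a,
    R[a] the length-|S| expected-reward vector under action a.
    Returns the optimal value function and a greedy policy."""
    n_actions = len(P)
    V = np.zeros(P[0].shape[0])
    while True:
        # one Bellman backup per action, then maximize over actions
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy 2-state, 2-action problem (hypothetical numbers)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0], [2.0, -1.0]])
V, policy = value_iteration(P, R)
```

The per-iteration cost is O(|A| |S|^2), which is exactly the scaling that motivates the approximate methods the abstract describes once the state space grows.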
Linear-Quadratic Approximation of Optimal Policy Problems
, 2006
Abstract

Cited by 49 (9 self)
We consider a general class of nonlinear optimal policy problems involving forward-looking constraints (such as the Euler equations that are typically present as structural equations in DSGE models), and show that it is possible, under regularity conditions that are straightforward to check, to derive a problem with linear constraints and a quadratic objective that approximates the exact problem. The LQ approximate problem is computationally simple to solve, even in the case of moderately large state spaces and flexibly parameterized disturbance processes, and its solution represents a local linear approximation to the optimal policy for the exact model in the case that stochastic disturbances are small enough. We derive the second-order conditions that must be satisfied in order for the LQ problem to have a solution, and show that these are stronger, in general, than those required for LQ problems without forward-looking constraints. We also show how the same linear approximations to the model structural equations and quadratic approximation to the exact welfare measure can be used to correctly rank alternative simple policy rules, again in the case of small enough shocks.
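In the simpler case the abstract contrasts against, an LQ problem without forward-looking constraints, the solution is the textbook finite-horizon LQR, computed by a backward Riccati recursion. A sketch under that simplifying assumption (the system matrices below are illustrative, not from the paper):

```python
import numpy as np

def lqr_backward(A, B, Q, R, Qf, T):
    """Finite-horizon discrete-time LQR via the backward Riccati
    recursion.  Returns the time-varying gains K_t, with the optimal
    control u_t = -K_t x_t.  (Standard textbook recursion; this is
    not the paper's method for forward-looking constraints.)"""
    P = Qf
    gains = []
    for _ in range(T):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        # Riccati update: P <- Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]          # gains in forward time order

# illustrative double-integrator system
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Qf = 10.0 * np.eye(2)
K = lqr_backward(A, B, Q, R, Qf, T=20)
```

The paper's point is that with forward-looking (Euler-equation) constraints, the existence of a solution requires strictly stronger second-order conditions than this unconstrained recursion does.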
External control in Markovian genetic regulatory networks: the imperfect information case
 Machine Learning
, 2004
Abstract

Cited by 46 (17 self)
Probabilistic Boolean Networks, which form a subclass of Markovian Genetic Regulatory Networks, have recently been introduced as a rule-based paradigm for modeling gene regulatory networks. In an earlier paper, we introduced external control into Markovian Genetic Regulatory Networks. More precisely, given a Markovian genetic regulatory network whose state transition probabilities depend on an external (control) variable, a Dynamic Programming-based procedure was developed by which one could choose the sequence of control actions that minimized a given performance index over a finite number of steps. The control algorithm of that paper, however, could be implemented only when one had perfect knowledge of the states of the Markov chain. This paper presents a control strategy that can be implemented in the imperfect information case, and makes use of the available measurements, which are assumed to be probabilistically related to the states of the underlying Markov chain.
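The perfect-information procedure summarized above is finite-horizon dynamic programming over a controlled Markov chain: sweep backward from the terminal cost, choosing at each step the control that minimizes immediate cost plus expected cost-to-go. A minimal sketch on a hypothetical two-state network with invented costs:

```python
import numpy as np

def finite_horizon_control(P, cost, terminal, N):
    """Backward dynamic programming for a controlled Markov chain.
    P[u] is the transition matrix under control u, cost[u] the per-step
    cost vector, terminal the terminal cost.  Returns the optimal policy
    policy[k][s] for each step k and the optimal cost-to-go from step 0.
    (Perfect state observation is assumed, as in the earlier paper.)"""
    J = terminal.copy()
    policy = []
    for _ in range(N):
        # Q[u, s] = immediate cost + expected future cost under control u
        Q = np.array([cost[u] + P[u] @ J for u in range(len(P))])
        policy.append(Q.argmin(axis=0))
        J = Q.min(axis=0)
    return policy[::-1], J

# hypothetical 2-state network: state 1 is the undesirable state
P = np.array([[[0.5, 0.5], [0.3, 0.7]],    # u = 0: no intervention
              [[0.9, 0.1], [0.8, 0.2]]])   # u = 1: intervene
cost = np.array([[0.0, 1.0], [0.2, 1.2]])  # intervention adds 0.2 per step
terminal = np.array([0.0, 5.0])            # heavy penalty on ending badly
policy, J = finite_horizon_control(P, cost, terminal, N=5)
```

The imperfect-information extension the paper develops replaces the observed state with a belief (a probability distribution over states) updated from the noisy measurements, and runs the same kind of recursion over beliefs.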
Hidden State and Reinforcement Learning with Instance-Based State Identification
 IEEE Transactions on Systems, Man, and Cybernetics
Abstract

Cited by 35 (1 self)
Real robots with real sensors are not omniscient. When a robot's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view, and limited attention, we say the robot suffers from the hidden state problem. State identification techniques use history information to uncover hidden state. Some previous approaches to encoding history include finite state machines [12, 28], recurrent neural networks [25], and genetic programming with indexed memory [49]. A chief disadvantage of all these techniques is their long training time. This paper presents instance-based state identification, a new approach to reinforcement learning with state identification that learns with far fewer training steps. Noting that learning with history and learning in continuous spaces both share the property that they begin without knowing the granularity of the state space, the approach applies instance-based (or "memory-ba...
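One concrete way to read "instance-based state identification": store windows of recent (action, observation) pairs as instances, and identify the current hidden state with the stored instance sharing the longest suffix of the current history. The class below is a hypothetical sketch of that idea only, not the paper's algorithm (which additionally learns values over matched instances):

```python
from collections import deque

class InstanceBasedIdentifier:
    """Illustrative sketch: disambiguate hidden state by matching the
    current history window of (action, observation) pairs against stored
    instances, nearest-neighbour by longest shared suffix.  All names
    and the match metric here are assumptions for illustration."""

    def __init__(self, window=3):
        self.history = deque(maxlen=window)   # rolling history window
        self.instances = {}                   # history tuple -> state label

    def step(self, action, observation):
        """Record one interaction with the environment."""
        self.history.append((action, observation))

    def remember(self, state_label):
        """Store the current window as an instance of a known state."""
        self.instances[tuple(self.history)] = state_label

    def identify(self):
        """Return the label of the stored instance sharing the longest
        suffix with the current history (None if nothing is stored)."""
        cur = tuple(self.history)
        best, best_len = None, -1
        for inst, label in self.instances.items():
            n = 0
            for a, b in zip(reversed(inst), reversed(cur)):
                if a != b:
                    break
                n += 1
            if n > best_len:
                best, best_len = label, n
        return best
```

Because each new experience is simply stored rather than folded into slowly trained weights, this style of learner needs far fewer training steps than the recurrent-network or finite-state-machine approaches cited above, which is the abstract's central claim.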
Generalized linear-quadratic problems of deterministic and stochastic optimal control in discrete time
 SIAM J. Control Opt
, 1990
Abstract

Cited by 32 (7 self)
Two fundamental classes of problems in large-scale linear and quadratic programming are described. Multistage problems covering a wide variety of models in dynamic programming and stochastic programming are represented in a new way. Strong properties of duality are revealed which support the development of iterative approximate techniques of solution in terms of saddle points. Optimality conditions are derived in a form that emphasizes the possibilities of decomposition.
Spline Approximations to Value Functions: A Linear Programming Approach
 Macroeconomic Dynamics
, 1995
Abstract

Cited by 28 (0 self)
We review the properties of algorithms that characterize the solution of the Bellman equation of a stochastic dynamic program as the solution to a linear program. The variables in this problem are the ordinates of the value function; hence, the number of variables grows with the state space. For situations when this size becomes computationally burdensome, we suggest the use of low-dimensional cubic-spline approximations to the value function. We show that fitting this approximation through linear programming provides upper and lower bounds on the solution to the original "large" problem. The information contained in these bounds leads to inexpensive improvements in the accuracy of approximate solutions. ...
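The LP characterization referred to here is the classical one: the value function is the smallest vector v satisfying every Bellman inequality, so minimizing the sum of its ordinates subject to those inequalities recovers it exactly. A small sketch of that exact LP, before any spline approximation enters (the transition and reward numbers are invented):

```python
import numpy as np
from scipy.optimize import linprog

def bellman_lp(P, R, beta=0.9):
    """Solve the Bellman equation of a small discounted MDP as an LP:
        minimize  sum_s v(s)
        s.t.      v(s) >= R[a][s] + beta * sum_s' P[a][s, s'] v(s')
    for every state s and action a.  The decision variables are the
    ordinates of the value function, as in the text."""
    n_actions, n_states = len(P), P[0].shape[0]
    I = np.eye(n_states)
    # rewrite each inequality as (beta * P[a] - I) v <= -R[a]
    A_ub = np.vstack([beta * P[a] - I for a in range(n_actions)])
    b_ub = np.concatenate([-R[a] for a in range(n_actions)])
    res = linprog(c=np.ones(n_states), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_states)
    return res.x

# toy 2-state, 2-action MDP (hypothetical numbers)
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.4, 0.6], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.5, 0.8]])
v = bellman_lp(P, R)
```

With one variable per state this LP becomes unwieldy exactly as the abstract warns; replacing v by a spline in a few coefficients keeps the same inequality structure while shrinking the variable count, which is what yields the paper's upper and lower bounds.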
Intervention in context-sensitive probabilistic Boolean networks
, 2005
Abstract

Cited by 26 (9 self)
Motivation: Intervention in a gene regulatory network is used to help it avoid undesirable states, such as those associated with a disease. Several types of intervention have been studied in the framework of a probabilistic Boolean network (PBN), which is essentially a finite collection of Boolean networks in which, at any discrete time point, the gene state vector transitions according to the rules of one of the constituent networks. For an instantaneously random PBN, the governing Boolean network is randomly chosen at each time point. For a context-sensitive PBN, the governing Boolean network remains fixed for an interval of time until a binary random variable determines a switch. The theory of automatic control has previously been applied to find optimal strategies for manipulating external (control) variables that affect the transition probabilities of an instantaneously random PBN so as to desirably affect its dynamic evolution over a finite time horizon. This paper extends the methods of external control to context-sensitive PBNs.