Results 1 - 10 of 697
PAC-inspired Option Discovery in Lifelong Reinforcement Learning
"... A key goal of AI is to create lifelong learn-ing agents that can leverage prior experience to improve performance on later tasks. In reinforcement-learning problems, one way to summarize prior experience for future use is through options, which are temporally extended actions (subpolicies) for how t ..."
… prior empirical results on when and how options may accelerate learning. We then quantify the benefit of options in reducing sample complexity of a lifelong learning agent. Finally, the new theoretical insights inspire a novel option-discovery algorithm that aims at minimizing overall sample complexity …
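The results above use the standard options formalism: an option couples an initiation set, an intra-option policy, and a termination condition. The sketch below is background illustration of that formalism only, not the PAC-inspired discovery algorithm of the paper; the toy corridor domain and every name in it are invented for illustration.

```python
# Illustrative sketch of the standard "options" formalism (temporally extended
# actions / subpolicies): initiation set I, intra-option policy pi, termination
# condition beta. Background only; not any listed paper's algorithm.
from dataclasses import dataclass
from typing import Callable, Set
import random

State = int
Action = int

@dataclass
class Option:
    initiation_set: Set[State]             # states where the option may be invoked
    policy: Callable[[State], Action]      # intra-option (sub)policy
    termination: Callable[[State], float]  # beta(s): probability of terminating in s

def run_option(option: Option, state: State,
               step: Callable[[State, Action], State]) -> State:
    """Execute an option until its termination condition fires."""
    assert state in option.initiation_set
    while True:
        state = step(state, option.policy(state))
        if random.random() < option.termination(state):
            return state

if __name__ == "__main__":
    # Toy 1-D corridor: states 0..10, actions +1/-1; a hypothetical "go to state 5" option.
    step = lambda s, a: max(0, min(10, s + a))
    go_to_5 = Option(
        initiation_set=set(range(11)),
        policy=lambda s: 1 if s < 5 else -1,
        termination=lambda s: 1.0 if s == 5 else 0.0,
    )
    print(run_option(go_to_5, state=0, step=step))  # -> 5
```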
Learning by Automatic Option Discovery from Conditionally Terminating Sequences
"... Abstract. This paper proposes a novel approach to discover options in the form of conditionally terminating sequences, and shows how they can be integrated into reinforcement learning framework to improve the learning performance. The method utilizes stored histories of possible optimal policies and ..."
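As a rough illustration of what a conditionally terminating sequence is (a fixed sequence of actions in which each step carries a continuation condition, and execution stops as soon as a condition fails), here is a minimal sketch; the class and the toy domain are assumptions for illustration, not the paper's construction.

```python
# Hypothetical sketch of a conditionally terminating sequence: (condition, action)
# pairs executed in order, aborting once a continuation condition fails.
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = int
Action = str

@dataclass
class ConditionallyTerminatingSequence:
    steps: List[Tuple[Callable[[State], bool], Action]]  # (continuation condition, action)

    def execute(self, state: State, step: Callable[[State, Action], State]) -> State:
        for condition, action in self.steps:
            if not condition(state):   # condition failed: terminate early
                break
            state = step(state, action)
        return state

if __name__ == "__main__":
    # Toy domain: state is a position on a line; "right" moves +1; continue while pos < 3.
    cts = ConditionallyTerminatingSequence(steps=[(lambda s: s < 3, "right")] * 5)
    print(cts.execute(0, lambda s, a: s + 1 if a == "right" else s))  # -> 3
```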
Option Discovery in Hierarchical Reinforcement Learning for Training Large Factor Graphs for Information Extraction
, 2009
"... Since exact training and inference is not possible for most factor graphs, a number of tech-niques have been proposed to train models approximately, but they do not scale to large factor graphs used in recent work on joint inference on multiple information extraction tasks. Sam-pleRank is an MCMC ba ..."
… MAP inference in factor graphs is reframed as hierarchical reinforcement learning, and a novel method for discovering options fast is introduced. Sample trajectories are analyzed to detect dependencies between primitive actions. These dependencies are exploited to extract the commonly occurring …
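The preview above outlines the general recipe: analyze sample trajectories for dependencies between primitive actions and extract commonly occurring sequences as options. A deliberately simplified, hypothetical version of that idea is to count frequent fixed-length action subsequences; the window length and threshold below are illustrative assumptions, not the paper's dependency analysis.

```python
# Simplified, hypothetical option-candidate extraction: count fixed-length action
# subsequences across sample trajectories and keep the ones that recur often.
from collections import Counter
from typing import List, Sequence, Tuple

def frequent_action_sequences(trajectories: List[Sequence[str]],
                              length: int = 3,
                              min_count: int = 2) -> List[Tuple[Tuple[str, ...], int]]:
    counts: Counter = Counter()
    for traj in trajectories:
        for i in range(len(traj) - length + 1):
            counts[tuple(traj[i:i + length])] += 1
    return [(seq, n) for seq, n in counts.most_common() if n >= min_count]

if __name__ == "__main__":
    trajs = [list("ABCXABCY"), list("ZABCABC")]
    print(frequent_action_sequences(trajs))  # ('A', 'B', 'C') occurs 4 times
```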
IP MTU discovery options
, 1988
"... A pair of IP options that can be used to learn the minimum MTU of a path through an internet is described, along with its possible uses. This is a proposal for an Experimental protocol. Distribution of this memo is unlimited. ..."
Cited by 1 (0 self)
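Conceptually, a probe-style MTU option carries a value that each hop lowers to its outgoing link MTU when that link's MTU is smaller, so the far end learns the minimum MTU of the whole path. The sketch below simulates only that accumulation; it is not the RFC's option encoding, and the hop list is made up.

```python
# Conceptual simulation of min-MTU accumulation along a path (not the wire format).
from typing import List

def path_min_mtu(probe_initial_mtu: int, hop_link_mtus: List[int]) -> int:
    """Value a probe-style MTU option would hold after traversing each hop."""
    mtu = probe_initial_mtu
    for link_mtu in hop_link_mtus:
        mtu = min(mtu, link_mtu)   # each router lowers the carried value if needed
    return mtu

if __name__ == "__main__":
    # e.g. a 1500-byte Ethernet origin crossing a 1280-byte tunnel link
    print(path_min_mtu(1500, [1500, 1280, 1500]))  # -> 1280
```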
Improved automatic discovery of subgoals for options in hierarchical reinforcement learning
- Journal of Computer Science and Technology
, 2003
"... Abstract Options have been shown to be a key step in extending reinforcement learning beyond low-level reactionary systems to higher-level, planning systems. Most of the options research involves hand-crafted options; there has been only very limited work in the automated discovery of options. We e ..."
Cited by 5 (0 self)
Automated Discovery of Options in Reinforcement Learning
, 2004
"... AI planning benefits greatly from the use of temporally-extended or
macro-actions. Macro-actions allow for faster and more efficient
planning as well as the reuse of knowledge from previous solutions.
In recent years, a significant amount of research has been devoted
to incorporating macro-actio ..."
Cited by 4 (0 self)
… macro-actions in learned controllers, particularly in the context of Reinforcement Learning. One general approach is the use of options (temporally-extended actions) in Reinforcement Learning. While the properties of options are well understood, it is not clear how to find new options automatically. In this thesis we …
Crash Discovery in Stock and Option Markets
, 1999
"... This article investigates, both theoretically and empirically, the economics of stock market crashes. Using more than 100 years of daily data on the DJIA (and shorter series on NASDAQ, IBM, and Caterpillar), we first document empirically that (a) the probability of a daily stock market decline in ex ..."
Cited by 2 (0 self)
… implementation methods are sufficiently versatile to discover crash/rally information embedded in option markets. Exploiting more than 17,000 out-of-money option prices, the framework quantifies three dimensions of crash discovery: (i) time-variations in Arrow-Debreu security price on the extre…
Informed trading in stock and option markets
- Journal of Finance
, 2004
"... We investigate the contribution of option markets to price discovery, using a modification of Hasbrouck’s (1995) “information share ” approach. Based on five years of stock and options data for 60 firms, we estimate the option market’s contribution to price discovery to be about 17 percent on averag ..."
Cited by 64 (3 self)
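For orientation, Hasbrouck-style information shares attribute to each market a fraction of the variance of the efficient-price innovation. Below is a minimal sketch, assuming the common-trend weight vector psi and the innovation covariance omega have already been estimated (e.g., from a vector error-correction model) and using a Cholesky factor, so the shares depend on the chosen market ordering; the numbers in the example are hypothetical, and this is not the paper's modified estimator.

```python
# Minimal sketch of Hasbrouck-style information shares, given estimated inputs.
import numpy as np

def information_shares(psi: np.ndarray, omega: np.ndarray) -> np.ndarray:
    """Information shares from common-trend weights psi and innovation covariance omega.
    The lower-triangular Cholesky factor makes the result depend on market ordering."""
    f = np.linalg.cholesky(omega)          # omega = f @ f.T
    contrib = (psi @ f) ** 2               # squared contribution of each market
    return contrib / (psi @ omega @ psi)   # normalize by total efficient-price variance

if __name__ == "__main__":
    psi = np.array([0.8, 0.2])                      # hypothetical common-trend weights
    omega = np.array([[1.0, 0.3], [0.3, 0.5]])      # hypothetical innovation covariance
    print(information_shares(psi, omega))           # shares sum to 1
```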
Do options contribute to price discovery in emerging markets?
"... 1 Do options contribute to price discovery in emerging markets? ..."