Results 1–10 of 36
Apprenticeship learning using inverse reinforcement learning and gradient methods
Proc. UAI, 2007
Abstract

Cited by 44 (1 self)
In this paper we propose a novel gradient algorithm to learn a policy from an expert’s observed behavior, assuming that the expert behaves optimally with respect to some unknown reward function of a Markovian Decision Problem. The algorithm aims to find a reward function such that the resulting optimal policy matches the expert’s observed behavior well. The main difficulty is that the mapping from the parameters to policies is both nonsmooth and highly redundant. Resorting to subdifferentials solves the first difficulty, while the second one is overcome by computing natural gradients. We tested the proposed method in two artificial domains and found it to be more reliable and efficient than some previous methods.
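As a much cruder illustration of the reward-fitting idea in this abstract (not the authors' subdifferential/natural-gradient algorithm), the sketch below fits a per-state reward on a hypothetical 3-state chain MDP by finite-difference descent on the negative log-likelihood of the expert's actions under a softmax policy; the MDP, the softmax likelihood, and all constants are assumptions made here for illustration.

```python
import numpy as np

# Toy 3-state chain MDP: action 0 moves left, action 1 moves right (clipped).
# The reward is a free parameter per state; we fit it so that a policy that is
# (soft-)optimal under the fitted reward reproduces the expert's actions.
def q_values(theta, gamma=0.9, iters=100):
    n = len(theta)
    V = np.zeros(n)
    for _ in range(iters):                     # value iteration
        Q = np.empty((n, 2))
        for s in range(n):
            for a in (0, 1):
                s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
                Q[s, a] = theta[s2] + gamma * V[s2]
        V = Q.max(axis=1)
    return Q

def nll(theta, demos):
    # negative log-likelihood of the expert's actions under a softmax policy
    Q = q_values(theta)
    loss = 0.0
    for s, a in demos:
        z = np.exp(Q[s] - Q[s].max())
        loss -= np.log(z[a] / z.sum())
    return loss

demos = [(0, 1), (1, 1)]          # expert always moves right, toward state 2
theta = np.zeros(3)
for _ in range(100):              # plain finite-difference gradient descent
    grad = np.array([(nll(theta + 1e-4 * e, demos) -
                      nll(theta - 1e-4 * e, demos)) / 2e-4 for e in np.eye(3)])
    theta -= 0.5 * grad
Q = q_values(theta)
```

After fitting, the greedy policy under the learned reward moves right from every non-terminal state, matching the expert, which is exactly the success criterion the abstract describes.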
Operator Adaptation in Evolutionary Computation and its Application to Structure Optimization of Neural Networks
2001
Abstract

Cited by 14 (6 self)
In this study, we give a brief overview of search strategy adaptation in evolutionary computation. The …
Neural network regularization and ensembling using multiobjective evolutionary algorithms
In: Congress on Evolutionary Computation (CEC’04), IEEE, 2004
Abstract

Cited by 14 (2 self)
Regularization is an essential technique to improve the generalization of neural networks. Traditionally, regularization is conducted by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter determining to what extent the regularization influences the learning algorithm must be chosen beforehand. This paper addresses the neural network regularization problem from a multiobjective optimization point of view. During the optimization, both the structure and the parameters of the neural network are optimized. Slightly modified versions of two multiobjective optimization algorithms, the dynamic weighted aggregation (DWA) method and the elitist nondominated sorting genetic algorithm (NSGA-II), are used and compared. An evolutionary multiobjective approach to neural network regularization has a number of advantages over traditional methods. First, a number of models spanning a spectrum of model complexity can be obtained in one optimization run instead of only a single solution. Second, an efficient new regularization term can be introduced that is not applicable to gradient-based learning algorithms. As a natural byproduct of the multiobjective approach, neural network ensembles can easily be constructed from the obtained networks with different levels of model complexity. Thus, the model complexity of the ensemble can be adjusted by adjusting the weight of each member network in the ensemble. Simulations on a test function illustrate the feasibility of the proposed ideas.
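The core trade-off this abstract describes, error versus structural complexity with a whole front of models from one run, can be sketched without any evolutionary machinery. Below, polynomials of increasing degree stand in for networks of increasing complexity (an assumption for illustration, not the paper's setup), and a brute-force filter extracts the nondominated (complexity, training-error) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(30)

# candidate models: polynomials of increasing degree, standing in for
# networks of increasing structural complexity
candidates = []
for deg in range(1, 9):
    coef = np.polyfit(x, y, deg)
    mse = float(np.mean((np.polyval(coef, x) - y) ** 2))
    candidates.append((deg + 1, mse))   # (number of parameters, training error)

def pareto_front(points):
    # keep the points not dominated in (complexity, error)
    return [(c, e) for c, e in points
            if not any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                       for c2, e2 in points)]

front = sorted(pareto_front(candidates))
```

The resulting front is exactly the "spectrum of model complexity" the abstract mentions: walking along it, complexity rises while training error falls, and any weighted average of front members gives an ensemble whose effective complexity is tuned by the member weights.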
M.N. Vrahatis, Financial forecasting through unsupervised clustering and evolutionary trained neural networks
In: Congress on Evolutionary Computation, 2003
Abstract

Cited by 14 (8 self)
In this paper, we review our work on a time series forecasting methodology based on the combination of unsupervised clustering and artificial neural networks. To address noise and nonstationarity, a common approach is to combine a method for the partitioning of the input space into a number of subspaces with a local approximation scheme for each subspace. Unsupervised clustering algorithms have the desirable property of deciding on the number of partitions required to accurately segment the input space during the clustering process, thus relieving the user from making this ad hoc choice. Artificial neural networks, on the other hand, are powerful computational models that have proved their capabilities on numerous hard real-world problems. The time series that we consider are all daily spot foreign exchange rates of major currencies. The experimental results reported suggest that predictability varies across different regions of the input space, irrespective of clustering algorithm. In all cases, there are regions that are associated with a particularly high forecasting performance. Evaluating the performance of the proposed methodology with respect to its profit generating capability indicates that it compares favorably with that of two other established approaches. Moving from the task of one-step-ahead to multiple-step-ahead prediction, performance deteriorates rapidly.
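The partition-then-approximate scheme in this abstract can be sketched in miniature: cluster the lagged input windows, then fit one local predictor per cluster. Everything below is a stand-in, a synthetic regime-switching series instead of FX rates, k-means with a fixed k instead of an unsupervised choice of partitions, and local linear models instead of neural networks.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic regime-switching AR(1) series standing in for an FX rate series
n = 600
coef = np.where(np.arange(n) % 200 < 100, 0.8, -0.8)
x = np.zeros(n)
for i in range(1, n):
    x[i] = coef[i] * x[i - 1] + 0.1 * rng.standard_normal()

# embedding: predict x[i] from the window (x[i-3], x[i-2], x[i-1])
X = np.stack([x[i - 3:i] for i in range(3, n)])
y = x[3:n]

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lbl == j):
                centers[j] = X[lbl == j].mean(0)
    return centers, lbl

centers, lbl = kmeans(X, 4)

# one local linear model per cluster (the local approximation scheme;
# the reviewed work uses neural networks here)
models = {}
for j in np.unique(lbl):
    mask = lbl == j
    A = np.c_[X[mask], np.ones(mask.sum())]
    models[int(j)], *_ = np.linalg.lstsq(A, y[mask], rcond=None)

preds = np.array([np.r_[X[i], 1.0] @ models[int(lbl[i])] for i in range(len(X))])
```

Because each local fit is a least-squares optimum over its own region, the partitioned model's in-sample error can never exceed that of a single global linear model, which is the basic motivation for segmenting the input space before approximating.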
Novel Hybrid NN/HMM Modelling Techniques for On-Line Handwriting Recognition
Proc. of the Int. Workshop on Frontiers in Handwriting Rec., pp. 619–623, 2006
Abstract

Cited by 12 (7 self)
In this work we propose two hybrid NN/HMM systems for handwriting recognition. The tied-posterior model approximates the output probability density function of a Hidden Markov Model (HMM) with a neural net (NN), which allows discriminative training of the model. The second system is the tandem approach: an NN is used as part of the feature extraction, and a standard HMM approach is then applied; this adds more discrimination to the features. In an experimental section we compare the two proposed models with a baseline standard HMM system. We show that enhancing the feature vector has only a limited effect on the standard HMMs but a significant influence on the hybrid systems. With an enhanced feature vector the two hybrid models clearly outperform all baseline models. The tandem approach improves recognition performance by 4.6% absolute (52.9% relative error reduction) compared to the best baseline HMM.
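The tied-posterior idea, replacing HMM emission densities with NN state posteriors divided by state priors (the scaled-likelihood trick), can be sketched as below. Random Dirichlet vectors stand in for real NN outputs, and the sticky transition matrix is an assumption for illustration; only the posterior-to-likelihood conversion and the Viterbi decode reflect the technique named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
T, S = 6, 3
# stand-in for NN outputs: per-frame state posteriors p(s | x_t)
post = rng.dirichlet(np.ones(S), size=T)
prior = np.full(S, 1.0 / S)             # state priors p(s)
# scaled likelihoods: log p(x_t | s) = log p(s | x_t) - log p(s) + const
loglik = np.log(post) - np.log(prior)

logA = np.log(np.full((S, S), 0.1) + 0.7 * np.eye(S))  # sticky transitions

# Viterbi decode over the scaled likelihoods
delta = np.log(prior) + loglik[0]
back = np.zeros((T, S), dtype=int)
for t in range(1, T):
    scores = delta[:, None] + logA      # scores[i, j]: best path ending i -> j
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + loglik[t]
path = [int(delta.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
```

The point of the scaling is that the NN is trained discriminatively on posteriors, yet the decoder still consumes quantities proportional to likelihoods, so the standard HMM machinery applies unchanged.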
Task-Dependent Evolution of Modularity in Neural Networks
Connection Science, 2002
Abstract

Cited by 10 (4 self)
There exist many ideas and assumptions concerning the development and meaning of modularity in biological and technical neural systems. Nevertheless, this wide field is far from being understood; quantitative simulations and investigations are rare. In our contribution, we empirically study the development of connectionist models in the context of the evolution of artificial neural networks for highly modular problems. We define two measures for the degree of modularity and monitor their values during the evolutionary process. We identify two different reasons for the development of modular structures: the modularity of the task is reflected in the modularity of the adapted structure, and the demand for fast-learning structures increases the selective pressure towards modularity. However, learning can also counterbalance some imperfections of the underlying structure.
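The abstract does not spell out its two modularity measures, but one simple measure in this spirit (an illustrative assumption, not necessarily either of the paper's) is the fraction of total absolute connection weight that stays inside predefined modules:

```python
import numpy as np

def within_module_fraction(W, modules):
    # W[i, j]: connection weight from neuron i to neuron j;
    # modules: module id per neuron; returns the share of |weight|
    # carried by within-module connections (1.0 = perfectly modular)
    modules = np.asarray(modules)
    same = modules[:, None] == modules[None, :]
    return float(np.abs(W)[same].sum() / np.abs(W).sum())

# a nearly modular 4-neuron net: two 2-neuron modules, weak cross-links
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
score = within_module_fraction(W, [0, 0, 1, 1])
```

Monitoring such a score over evolutionary generations, as the abstract describes, would show whether selective pressure drives the ratio toward 1.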
Time Series Prediction with Ensemble Models
2004
Abstract

Cited by 10 (3 self)
We describe the use of ensemble methods to build proper models for time series prediction. Our approach extends the classical ensemble methods for neural networks by using several different model architectures. We further suggest an iterated prediction procedure to select the final ensemble members.
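The two ingredients named in this abstract, an ensemble of structurally different models and iterated (fed-back) multi-step prediction, can be sketched as follows. Linear predictors on different lag subsets stand in for the "several different model architectures", and the ensemble is a plain average; both are assumptions for illustration, as is skipping the member-selection step.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
X = np.stack([x[i - 4:i] for i in range(4, 200)])
y = x[4:200]

# "different architectures" stand-in: linear predictors on different lag subsets
members = []
for lags in ([0, 1, 2, 3], [2, 3], [0, 3]):
    A = np.c_[X[:, lags], np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    members.append((lags, w))

def iterate(window, steps):
    # iterated multi-step prediction: ensemble outputs are fed back as inputs
    w = list(window)
    out = []
    for _ in range(steps):
        preds = [np.r_[np.asarray(w)[lags], 1.0] @ coef for lags, coef in members]
        out.append(float(np.mean(preds)))   # simple ensemble average
        w = w[1:] + [out[-1]]
    return out

future = iterate(list(x[-4:]), steps=5)
```

The iterated procedure is also what the abstract proposes to use for selecting the final members: a model that looks good one step ahead may still drift badly once its own outputs are fed back.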
Training Parsers by Inverse Reinforcement Learning
Machine Learning, 2009
Abstract

Cited by 7 (0 self)
One major idea in structured prediction is to assume that the predictor computes its output by finding the maximum of a score function. The training of such a predictor can then be cast as the problem of finding weights of the score function so that the output of the predictor on the inputs matches the corresponding structured labels on the training set. A similar problem is studied in inverse reinforcement learning (IRL) where one is given an environment and a set of trajectories and the problem is to find a reward function such that an agent acting optimally with respect to the reward function would follow trajectories that match those in the training set. In this paper we show how IRL algorithms can be applied to structured prediction, in particular to parser training. We present a number of recent incremental IRL algorithms in a unified framework and map them to parser training algorithms. This allows us to recover some existing parser training algorithms, as well as to obtain a new one. The resulting algorithms are compared in terms of their sensitivity to the choice of various parameters and generalization ability on the Penn Treebank WSJ corpus.
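One well-known member of the incremental algorithm family this abstract unifies is the structured perceptron: predict the argmax-scoring candidate, and on a mistake move the weights toward the gold candidate's features and away from the prediction's. The toy candidate sets and feature vectors below are invented for illustration; real parser training would enumerate parses and use rich features.

```python
import numpy as np

# toy structured prediction: each input comes with a few candidate "parses",
# each represented by a feature vector; the predictor picks the argmax score
def features(cand):
    return np.array(cand, dtype=float)

data = [  # (candidate feature tuples, index of the gold candidate)
    ([(1, 0, 1), (0, 1, 1), (1, 1, 0)], 0),
    ([(0, 0, 1), (1, 1, 1), (0, 1, 0)], 1),
]

w = np.zeros(3)
for _ in range(10):                      # structured-perceptron epochs
    for cands, gold in data:
        pred = int(np.argmax([w @ features(c) for c in cands]))
        if pred != gold:
            # mistake-driven update toward the gold structure
            w += features(cands[gold]) - features(cands[pred])
```

Viewed through the IRL lens of the abstract, the score function plays the role of the reward and the gold parses play the role of the expert trajectories; the update rule is what varies across the unified algorithms.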
Optimization for Problem Classes – Neural Networks that Learn to Learn
2000
Abstract

Cited by 5 (2 self)
The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure in order to reduce model complexity and minimize model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks: the ability of a network to efficiently solve related problems, denoted as a class of problems. In a more theoretical framework, the aim is to develop neural networks for adaptability: networks that learn (during evolution) to learn (during operation). Evolutionary algorithms have turned out to be a robust method for the optimization of neural networks. As this process is time consuming, it is also desirable from the perspective of efficiency to design structures that are applicable to many related problems. In this paper, two different approaches to this problem are studied, called the ensemble method and the generation method. We empirically show that an averaged Lamarcki...