Results 1–10 of 125
Popular ensemble methods: an empirical study
 Journal of Artificial Intelligence Research
, 1999
Abstract

Cited by 181 (3 self)
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods depends on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes from the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
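The bagging procedure described in this abstract (train each base classifier on a bootstrap resample of the data, then combine predictions by majority vote) can be sketched in a few lines. This is a minimal illustration only: the 1-nearest-neighbour base learner and all names below are assumptions for the example, not the paper's neural-network or decision-tree setup.

```python
import random
from collections import Counter

def bagged_predict(train, x, fit, n_estimators=25, seed=0):
    """Bagging sketch: each base classifier is trained on a bootstrap
    resample (sampling with replacement), and the ensemble prediction
    is the majority vote. `fit(sample)` must return a callable classifier."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        votes.append(fit(boot)(x))
    return Counter(votes).most_common(1)[0][0]

# Toy base learner (an assumption for illustration): 1-nearest neighbour
# on a single feature; each training point is a (feature, label) pair.
def fit_1nn(sample):
    return lambda x: min(sample, key=lambda p: abs(p[0] - x))[1]

train = [(0.0, "a"), (0.2, "a"), (0.9, "b"), (1.1, "b")]
print(bagged_predict(train, 1.0, fit_1nn))
```

The default of 25 estimators echoes the abstract's observation that gains from Boosting decision trees can continue up to about 25 classifiers.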
Intrinsic motivation systems for autonomous mental development
 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
, 2007
Abstract

Cited by 130 (39 self)
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without
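The learning-progress drive this abstract describes (focus on situations where prediction error is decreasing fastest, avoiding both the too-predictable and the too-unpredictable) can be captured in a small selector. This is a hedged sketch: the window size, the "region" abstraction, and all names are assumptions for illustration, not the paper's Intelligent Adaptive Curiosity implementation.

```python
from collections import deque

class CuriosityPicker:
    """Learning-progress sketch: track recent prediction errors per
    activity region and prefer the region whose error has recently
    decreased the most (largest drop between the older and newer half
    of a sliding window)."""
    def __init__(self, regions, window=4):
        self.window = window
        self.errors = {r: deque(maxlen=2 * window) for r in regions}

    def record(self, region, error):
        self.errors[region].append(error)

    def progress(self, region):
        e = list(self.errors[region])
        if len(e) < 2 * self.window:
            return 0.0  # not enough history yet
        old = sum(e[:self.window]) / self.window
        new = sum(e[self.window:]) / self.window
        return old - new  # positive when error is dropping

    def pick(self):
        return max(self.errors, key=self.progress)
```

A region with already-low flat error (mastered) and one with high flat error (unlearnable noise) both show zero progress, so the selector prefers the region where error is actually dropping, which matches the "neither too predictable nor too unpredictable" behaviour described above.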
A Double-Loop Algorithm to Minimize the Bethe and Kikuchi Free Energies
 NEURAL COMPUTATION
, 2001
Abstract

Cited by 108 (4 self)
Recent work (Yedidia, Freeman, Weiss [22]) has shown that stable points of belief propagation (BP) algorithms [12] for graphs with loops correspond to extrema of the Bethe free energy [3]. These BP algorithms have been used to obtain good solutions to problems for which alternative algorithms fail to work [4], [5], [10], [11]. In this paper we first obtain the dual energy of the Bethe free energy, which throws light on the BP algorithm. Next we introduce a discrete iterative algorithm which we prove is guaranteed to converge to a minimum of the Bethe free energy. We call this the double-loop algorithm because it contains an inner and an outer loop. It extends a class of mean field theory algorithms developed by [7], [8] and, in particular, [13]. Moreover, the double-loop algorithm is formally very similar to BP, which may help in understanding when BP converges. Finally, we extend all our results to the Kikuchi approximation, which includes the Bethe free energy as a special case [3]. (Yedidia et al. [22] showed that a "generalized belief propagation" algorithm also has its fixed points at extrema of the Kikuchi free energy.) We not only obtain a dual formulation for Kikuchi but also a double-loop discrete iterative algorithm that is guaranteed to converge to a minimum of the Kikuchi free energy. It is anticipated that these double-loop algorithms will be useful for solving optimization problems in computer vision and other applications.
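For concreteness, the BP algorithms whose stable points this abstract analyzes can be sketched as synchronous sum-product message passing on a pairwise model. A minimal illustration (the data layout and names are assumptions for the example); on a tree the resulting beliefs are exact marginals, while on loopy graphs stable points correspond to extrema of the Bethe free energy.

```python
def bp_marginals(phi, psi, iters=50):
    """Synchronous sum-product belief propagation on a pairwise binary
    model. phi[i][xi] is the unary potential of node i; psi[(i, j)][xi][xj]
    is the pairwise potential on edge (i, j). Returns normalized beliefs
    (approximate marginals) per node."""
    nbrs = {i: set() for i in phi}
    for i, j in psi:
        nbrs[i].add(j)
        nbrs[j].add(i)

    def pot(i, j, xi, xj):  # pairwise potential, looked up in either direction
        return psi[(i, j)][xi][xj] if (i, j) in psi else psi[(j, i)][xj][xi]

    msg = {(i, j): [1.0, 1.0] for i in phi for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            m = [0.0, 0.0]
            for xj in (0, 1):
                for xi in (0, 1):
                    prod = phi[i][xi] * pot(i, j, xi, xj)
                    for k in nbrs[i] - {j}:  # all incoming messages except from j
                        prod *= msg[(k, i)][xi]
                    m[xj] += prod
            z = m[0] + m[1]
            new[(i, j)] = [m[0] / z, m[1] / z]  # normalize for stability
        msg = new

    beliefs = {}
    for i in phi:
        b = [phi[i][0], phi[i][1]]
        for k in nbrs[i]:
            b = [b[0] * msg[(k, i)][0], b[1] * msg[(k, i)][1]]
        z = b[0] + b[1]
        beliefs[i] = [b[0] / z, b[1] / z]
    return beliefs
```

On a small chain this reproduces the brute-force marginals exactly, which is the regime where BP is classically justified; the double-loop algorithm above is motivated by the loopy case, where these updates need not converge.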
Computational nature of human adaptive control during learning of reaching movements in force fields
 Biological Cybernetics 81: 39–60
, 1999
Abstract

Cited by 53 (6 self)
Learning to make reaching movements in force fields was used as a paradigm to explore the system architecture of the biological adaptive controller. We compared the performance of a number of candidate control systems that acted on a model of the neuromuscular system of the human arm and asked how well the dynamics of the candidate system compared with the movement characteristics of 16 subjects. We found that control via a supraspinal system that utilized an adaptive inverse model resulted in dynamics that were similar to those observed in our subjects, but lacked essential characteristics. These characteristics pointed to a different architecture where descending commands were influenced by an adaptive forward model. However, we found that control via a forward model alone also resulted in dynamics that did not match the behavior of the human arm. We considered a third control architecture where a forward model was used in conjunction with an inverse model and found that the resulting dynamics were remarkably similar to those observed in the experimental data. The essential property of this control architecture was that it predicted a complex pattern of near-discontinuities in hand trajectory in the novel force field. A nearly identical pattern was observed in our subjects, suggesting that generation of descending motor commands was likely through a control system architecture that included both adaptive forward and inverse models. We found that as subjects learned to make reaching movements, adaptation rates for the forward and inverse models could be independently estimated and the resulting changes in performance of subjects from movement to movement could be accurately accounted for. Results suggested that the adaptation of the forward model played a dominant role in the motor learning of subjects. After a period of
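The paired forward/inverse architecture this abstract favours can be illustrated on a deliberately trivial plant. In this sketch, all names, gains, and the scalar plant y = a*u are assumptions for the example, far simpler than the paper's neuromuscular arm model: the inverse model turns a target into a command, the forward model predicts the outcome, and each model adapts from its own error signal.

```python
def adaptive_reach(targets, a=2.0, lr=0.5):
    """Sketch of combined forward/inverse adaptive control of a scalar
    plant y = a * u, with gain a unknown to the controller. a_inv maps
    a desired outcome to a command; a_fwd predicts the outcome of that
    command; prediction error adapts the forward model and task error
    adapts the inverse model."""
    a_fwd, a_inv = 1.0, 1.0  # internal estimates of the plant gain a
    errors = []
    for y_star in targets:
        u = y_star / a_inv        # inverse model: command for the target
        y_hat = a_fwd * u         # forward model: predicted outcome
        y = a * u                 # actual plant outcome
        errors.append(abs(y - y_star))
        a_fwd += lr * (y - y_hat) * u / (1 + u * u)   # forward-model update
        a_inv += lr * (y - y_star) * u / (1 + u * u)  # inverse-model update
    return errors

errs = adaptive_reach([1.0] * 50)
print(errs[0], errs[-1])  # task error shrinks as both models adapt
```

The normalized gradient-style updates are one assumed adaptation rule among many; the point of the sketch is only the wiring: two internal models adapted in parallel, as in the architecture the study identifies.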
DENFIS: Dynamic Evolving Neural-Fuzzy Inference System and Its Application for Time-Series Prediction
, 2001
Abstract

Cited by 49 (13 self)
This paper introduces a new type of fuzzy inference system, denoted DENFIS (dynamic evolving neural-fuzzy inference system), for adaptive online and offline learning, and its application to dynamic time-series prediction. DENFIS evolves through incremental, hybrid (supervised/unsupervised) learning and accommodates new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment the output of DENFIS is calculated through a fuzzy inference system based on the m most-activated fuzzy rules, which are dynamically chosen from a fuzzy rule set. Two approaches are proposed: (1) dynamic creation of a first-order Takagi-Sugeno-type fuzzy rule set for a DENFIS online model; (2) creation of a first-order Takagi-Sugeno-type fuzzy rule set, or an expanded high-order one, for a DENFIS offline model. A set of fuzzy rules can be inserted into DENFIS before or during its learning process. Fuzzy rules can also be extracted during the learning process or after it. An evolving clustering method (ECM), which is employed in both online and offline DENFIS models, is also introduced. It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some well-known existing models.
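The one-pass, threshold-driven flavour of evolving clustering can be sketched as follows. This is a simplified illustration, not the paper's ECM: the parameter name dthr is an assumption, the data are one-dimensional, and ECM's cluster-radius bookkeeping is replaced here by a fixed distance threshold and a running-mean centre update.

```python
def evolving_clusters(stream, dthr=0.5):
    """Evolving-clustering sketch: process samples one at a time.
    If a sample is within dthr of its nearest cluster centre, that
    cluster absorbs it (running-mean update); otherwise a new cluster
    is created on the spot. Returns the final cluster centres."""
    centers = []
    counts = []
    for x in stream:
        if centers:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            if abs(x - centers[j]) <= dthr:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]  # running mean
                continue
        centers.append(x)  # sample too far from everything: new cluster
        counts.append(1)
    return centers

print(evolving_clusters([0.0, 0.1, 0.2, 5.0, 5.1, -3.0]))
```

Because clusters are created and updated online, the structure grows with the data, mirroring how DENFIS creates and tunes fuzzy rules during operation rather than fixing them in advance.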
The Challenges of Joint Attention
 Interaction Studies
, 2004
Abstract

Cited by 36 (6 self)
This paper discusses the concept of joint attention and the different skills underlying its development. We argue that joint attention is much more than gaze following or simultaneous looking because it implies a shared intentional relation to the world. The current state of the art in robotic and computational models of the different prerequisites of joint attention is discussed in relation to a developmental timeline drawn from results in child studies.
On the Computational Power of Winner-Take-All
, 2000
Abstract

Cited by 33 (7 self)
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (i.e., McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our
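The two flavours of winner-take-all the article discusses can be written down directly. A minimal sketch, in which the softmax form of the soft gate is an assumed concrete choice for illustration rather than the article's exact definition:

```python
import math

def hard_wta(inputs):
    """Hard winner-take-all: output 1 for the (first) largest input,
    0 for every other unit."""
    w = inputs.index(max(inputs))
    return [1.0 if i == w else 0.0 for i in range(len(inputs))]

def soft_wta(inputs, temperature=1.0):
    """Soft winner-take-all sketch: each unit's output reflects how
    strongly it wins the competition, here via a temperature-scaled
    softmax (subtracting the max first for numerical stability).
    As temperature -> 0 this approaches the hard gate."""
    m = max(inputs)
    exps = [math.exp((v - m) / temperature) for v in inputs]
    z = sum(exps)
    return [e / z for e in exps]

print(hard_wta([0.2, 0.9, 0.1]))
print(soft_wta([0.2, 0.9, 0.1], temperature=0.01))
```

The soft gate's graded outputs are what make it usable as the single nonlinearity in the function-approximation result mentioned above, whereas the hard gate only reports the identity of the winner.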
Impact of Active Dendrites and Structural Plasticity on the Memory Capacity of Neural Tissue
, 2001
Abstract

Cited by 28 (3 self)
…values averaged over longer timescales. Nevertheless, to the extent that short-term synaptic dynamics are a pervasive phenomenon in vivo, involving substantial changes in synaptic efficacy from moment to moment based on the recent activation history of the synapse, the straightforward mapping of stable numerical weights from a connectionist learning system onto synapses in the brain becomes more strained (Liaw and Berger, 1996; Abbott et al. … …age capacities for cells with nonlinear subunits and show that this capacity is accessible to a structural learning rule that combines random synapse formation with activity-dependent stabilization/elimination. In a departure from the common view that memories are encoded in the overall connection strengths between neurons, our results suggest that long-term information storage in neural tissue could reside primar…
Learning from examples as an inverse problem
 Journal of Machine Learning Research
, 2005
Abstract

Cited by 28 (14 self)
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless, the connection with inverse problems was considered only for the discrete (finite-sample) problem, and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows us to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.
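The regularized least-squares scheme the abstract refers to has a closed form in the simplest case. As a minimal sketch (the one-dimensional linear setting is an assumption for illustration; the paper works with the operator/matrix analogue), minimizing (1/n) * sum((w*x_i - y_i)**2) + lam * w**2 over scalar w gives w = sum(x_i * y_i) / (sum(x_i**2) + lam * n).

```python
def ridge_1d(xs, ys, lam):
    """Tikhonov-regularized least squares in one dimension: the unique
    minimizer of (1/n) * sum((w*x - y)**2) + lam * w**2 over scalar w.
    Setting lam = 0 recovers ordinary least squares; larger lam shrinks
    the solution towards 0, trading data fit for stability."""
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam * n)

print(ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 0.0))  # exact fit, w = 2
print(ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 1.0))  # shrunk below 2
```

The lam * n term in the denominator is exactly the regularizing perturbation that keeps the (here trivial) inversion stable, which is the discrete shadow of the stability property for ill-posed inverse problems discussed above.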
The Many Facets of Natural Computing
Abstract

Cited by 27 (1 self)
"… related. I am confident that at their interface great discoveries await those who seek them." (L. Adleman, [3])
1. FOREWORD
Natural computing is the field of research that investigates models and computational techniques inspired by nature and, dually, attempts to understand the world around us in terms of information processing. It is a highly interdisciplinary field that connects the natural sciences with computing science, both at the level of information technology and at the level of fundamental research [98]. As a matter of fact, natural computing areas and topics come in many flavours, including pure theoretical research, algorithms and software applications, as well as biology, chemistry and physics experimental laboratory research. In this review we describe computing paradigms abstracted