Results 1–10 of 13
The Use of Entropy for Analysis and Control of Cognitive Models
Proceedings of the Fifth International Conference on Cognitive Modeling, 2003
Cited by 22 (12 self)
Abstract:
Measures of entropy are useful for explaining the behaviour of cognitive models. We demonstrate that entropy can not only help to analyse the performance of a model, but can also be used to control model parameters and improve the match between the model and data. We present a cognitive model that uses local computations of entropy to moderate its own behaviour and matches the data fairly well.
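The entropy-based control this abstract describes can be illustrated with a minimal sketch (the function names, thresholds, and step size below are hypothetical illustrations, not the paper's mechanism): compute the Shannon entropy of the model's response distribution and use it as a feedback signal to nudge a free parameter.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def update_parameter(param, probs, high=1.5, low=0.5, step=0.05):
    """Hypothetical feedback rule: a near-uniform (high-entropy) response
    distribution raises an exploration parameter; a peaked (low-entropy)
    one lowers it."""
    h = shannon_entropy(probs)
    if h > high:
        return param + step  # still uncertain: explore more
    if h < low:
        return param - step  # confident: exploit
    return param
```

The thresholds `high` and `low` would in practice be tuned against the data the model is meant to match.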
On relation between emotion and entropy
In: Proceedings of the AISB’04 Symposium on Emotion, Cognition and Affective Computing, 2004
Cited by 8 (1 self)
Abstract:
The ways of modelling some of the most profound effects of emotion and arousal on cognition are discussed. Entropy reduction is used to measure quantitatively the learning speed in a cognitive model under different parameter conditions. It is noticed that some settings facilitate learning in particular stages of problem solving more than others. The entropy feedback is used to control these parameters and the strategy, which in turn greatly improves learning in the model as well as the model's match with the data. This result may explain the reasons behind some of the neurobiological changes associated with emotion and its control of decision-making strategy and behaviour.
The Emergence of Rules in Cell–Assemblies of fLIF Neurons
Cited by 7 (6 self)
Abstract:
There are many examples of intelligent and learning systems that are based either on the connectionist or the symbolic approach. Although the latter can be successfully combined with statistical learning to create a hybrid system, it is not so clear how symbolic processing can emerge from a connectionist system. The human mind is living proof that such a transition must be possible. Inspired by biological cognition, our project explores the ways symbolic processing can emerge in a system of neural cell–assemblies (CAs). Here, we present the meta–process that regulates learning of associations between the CAs. The process is compared with stochastic learning theory, and its outcome is a set of optimal rules. The paper concludes with an example of a working system and a discussion of its biological plausibility.
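A rough sketch of Hebbian association learning between assemblies, of the kind this line of work builds on (the gating function is a hypothetical stand-in for the paper's meta–process, shown only to make the idea concrete):

```python
import numpy as np

def hebbian_step(w, pre, post, eta=0.1):
    """Plain Hebbian update: strengthen weights between co-active units."""
    return w + eta * np.outer(post, pre)

def gated_step(w, pre, post, gate, eta=0.1):
    """Hypothetical meta-controlled update: a scalar gate in [0, 1],
    produced by some regulatory process, scales the learning rate.
    With gate = 0, no association is learned at all."""
    return w + gate * eta * np.outer(post, pre)
```

The point of the comparison is that the bare `hebbian_step` strengthens every co-activation indiscriminately, whereas a meta-level signal can decide when association learning should happen.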
A Model of Probability Matching in a Two-Choice Task Based on Stochastic Control of Learning in Neural Cell-Assemblies
Cited by 3 (2 self)
Abstract:
Donald Hebb proposed a hypothesis that specialised groups of neurons, called cell-assemblies (CAs), form the basis for neural encoding of symbols in the human mind. It is not clear, however, how CAs can be reused and combined to form new representations as in classical symbolic systems. We demonstrate that Hebbian learning of synaptic weights alone is not adequate for the task, and that an additional meta-control process should be involved. We describe an architecture, proposed earlier, that implements such a process, and then evaluate it by modelling the probability matching phenomenon in a classical two-choice task. The model and its results are discussed in view of the mathematical theory of learning, existing cognitive architectures, and some hypotheses about neural functioning in the brain.
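The probability matching phenomenon referred to here can be reproduced with a classic Bush–Mosteller linear learning rule (a standard textbook model from stochastic learning theory, shown for orientation only; it is not the cell-assembly model of the paper). Under this rule the expected choice probability drifts toward the reward probability, so the learner "matches" rather than maximises.

```python
import random

def two_choice_matching(pi=0.7, alpha=0.01, trials=30000, seed=1):
    """Bush-Mosteller-style linear learning in a two-choice task.

    pi: probability that choice A is rewarded (B is rewarded otherwise).
    Returns the final probability of choosing A, which under this rule
    fluctuates near pi rather than converging to the better option.
    """
    random.seed(seed)
    p = 0.5  # probability of choosing A
    for _ in range(trials):
        choice_a = random.random() < p
        rewarded = random.random() < (pi if choice_a else 1 - pi)
        if rewarded:
            # move toward the rewarded choice
            p += alpha * ((1 - p) if choice_a else -p)
        else:
            # move toward the alternative
            p += alpha * (-p if choice_a else (1 - p))
    return p
```

At equilibrium the expected update is proportional to `pi - p`, which is why the stationary choice probability sits at the matching point `p = pi`.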
Conflict resolution by random estimated costs
In D. Al-Dabass (Ed.), Proceedings of the 17th European Simulation Multiconference, 2003
Cited by 3 (0 self)
Abstract:
Conflict resolution is an important part of many intelligent systems, such as production systems, planning tools and cognitive architectures. For example, the ACT–R cognitive architecture [Anderson and Lebiere, 1998] uses a powerful conflict resolution theory that has allowed the modelling of many characteristics of human decision making. The results of more recent work, however, pointed to the need to revisit the conflict resolution theory of ACT–R to incorporate more dynamics. In the proposed theory, the conflict is resolved using estimates of the expected costs of production rules. The method has been implemented as a stand-alone search program as well as an add-on to the ACT–R architecture, replacing the standard mechanism. The method expresses more dynamic and adaptive behaviour, and the performance of the algorithm shows that it can be successfully used as a search and optimisation technique.
Keywords: conflict resolution, decision making, search, optimisation, rule–based systems, cognitive modelling.
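A minimal sketch of conflict resolution by random estimated costs (the names, the Gaussian noise model, and the update rate are illustrative assumptions, not the paper's exact mechanism): each candidate rule carries a running estimate of its expected cost, selection picks the lowest noisy estimate, and the estimate is corrected from the cost actually incurred.

```python
import random

def resolve(rules, costs, temperature=0.5, seed=None):
    """Pick the rule whose estimated cost, perturbed by Gaussian noise,
    is lowest. The noise keeps selection stochastic: cheaper rules are
    preferred, but alternatives still get explored."""
    rng = random.Random(seed)
    return min(rules, key=lambda r: costs[r] + rng.gauss(0, temperature))

def update_cost(costs, rule, observed, rate=0.2):
    """Move the rule's estimate toward the cost actually observed."""
    costs[rule] += rate * (observed - costs[rule])
```

Raising `temperature` makes selection more exploratory; letting it fall as estimates stabilise recovers greedy, cost-minimising behaviour.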
Towards an Integrated Cognitive Architecture for
Abstract:
We outline the cognitive model CASS (Cognitive-Affective State System). As the name suggests, it is a cognitive model that also takes human affect into account. CASS combines Dynamic Bayesian Networks (DBNs) and an ACT-R model. The DBN model (RBARS, the Rensselaer Bayesian Affect Recognition System) determines the user's most likely affective states using both current and stored sensory data. The affective-cognitive model integrates RBARS with ACT-R to play two roles: (1) the use of model tracing to determine the impact of affective state on cognitive processing, and (2) linking changes in affective state to changes in the value of ACT-R's parameters so as to directly generate (i.e., predict) the influence of affect on cognition. The cognitive implications of the user's affective state are determined by analyzing the deviation of user behavior from the optimal path determined by the model.
Chapter 2 Information Trajectory of Optimal Learning
Abstract:
Summary: The paper outlines some basic principles of a geometric and non-asymptotic theory of learning systems. An evolution of such a system is represented by points on a statistical manifold, and a topology related to information dynamics is introduced to define trajectories continuous in information. It is shown that optimization of learning with respect to a given utility function leads to an evolution described by a continuous trajectory. Path integrals along the trajectory define the optimal utility and information bounds. Closed-form expressions are derived for two important types of utility functions. The presented approach is a generalization of the use of Orlicz spaces in information geometry, and it gives a new, geometric interpretation of the classical information value theory and statistical mechanics. In addition, theoretical predictions are evaluated experimentally by comparing the performance of agents learning in a non-stationary stochastic environment. The ability to learn and adapt behavior with respect to changes in the environment is arguably one of the most important characteristics of intelligent systems.
The Emergence of Rules in Cell Assemblies of fLIF Neurons
Abstract:
There are many examples of intelligent and learning systems that are based either on the connectionist or the symbolic approach. Although the latter has been successfully combined with statistical learning to create a hybrid system, it is not clear how symbolic processing can emerge from a connectionist system. The human mind is living proof that such a transition must be possible. Inspired by biological cognition, our project explores the ways symbolic processing can emerge in a system of neural cell assemblies (CAs). Here, we present a meta–process that regulates learning of associations between the CAs. The process is compared with stochastic learning theory, and its outcome is a set of optimal rules implemented in simulated neurons and learned by Hebbian adaptation of synaptic weights. A neural simulation shows that the rules can be learned.
The Duality of Utility and Information in Optimally Learning Systems
In: 7th IEEE International Conference on Cybernetic Intelligent Systems, 2008
Abstract:
The paper considers learning systems as optimisation systems with dynamical information constraints, and general optimality conditions are derived using the duality between the space of utility functions and the space of probability measures. The increasing dynamics of the constraints is used to parametrise the optimal solutions, which form a trajectory in the space of probability measures. Stochastic processes following such trajectories describe systems achieving the maximum possible utility gain with respect to given information. The theory is discussed with examples for finite and uncountable sets and in relation to existing applications and cognitive models of learning.
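The variational form of this utility–information duality is standard in information theory and can be sketched as follows (a textbook result shown for orientation, not the paper's own derivation): maximising expected utility under a Kullback–Leibler constraint relative to a prior measure q yields an exponential family of optimal solutions.

```latex
% Constrained problem: maximise expected utility subject to an
% information bound relative to a reference measure q.
\max_{p}\; \mathbb{E}_{p}[u(x)]
\quad \text{subject to} \quad
D_{\mathrm{KL}}(p \,\|\, q) \le \lambda .

% Lagrangian duality gives the Gibbs (exponential-family) solution,
% parametrised by the multiplier \beta \ge 0:
p_{\beta}(x) \;=\; \frac{q(x)\, e^{\beta u(x)}}{\sum_{x'} q(x')\, e^{\beta u(x')}} .
```

As the information bound λ (equivalently the multiplier β) grows, these solutions trace a curve in the space of probability measures, consistent with the trajectory of optimal solutions the abstract describes.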
Ji, Q., Gray, W. D., Guhe, M., Schoelles, M. J. (March 2004). Towards an integrated cognitive
Abstract:
We outline the cognitive model CASS (Cognitive-Affective State System). As the name suggests, it is a cognitive model that also takes human affect into account. CASS combines Dynamic Bayesian Networks (DBNs) and an ACT-R model. The DBN model (RBARS, the Rensselaer Bayesian Affect Recognition System) determines the user's most likely affective states using both current and stored sensory data. The affective-cognitive model integrates RBARS with ACT-R to play two roles: (1) the use of model tracing to determine the impact of affective state on cognitive processing, and (2) linking changes in affective state to changes in the value of ACT-R's parameters so as to directly generate (i.e., predict) the influence of affect on cognition. The cognitive implications of the user's affective state are determined by analyzing the deviation of user behavior from the optimal path determined by the model.