Results 1 – 3 of 3
Rate distortion manifolds as model spaces for cognitive information
In preparation, 2007
Abstract
Cited by 6 (3 self)
The rate distortion manifold is considered as a carrier for elements of the theory of information proposed by C. E. Shannon, combined with the semantic precepts of F. Dretske's theory of communication. This type of information space was suggested by R. Wallace as a possible geometric–topological descriptive model for incorporating a dynamic, information-based treatment of the Global Workspace theory of B. Baars. We outline a more formal mathematical description for this class of information space and further clarify its structural content and overall interpretation within a prospectively broad range of cognitive situations that apply to individuals, human institutions, distributed cognition and massively parallel intelligent machine design. Summary (translated from Slovenian): A formal definition of a space for describing cognitive processes is presented.
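For readers unfamiliar with the Shannon rate distortion function underlying the manifold construction, a standard textbook instance (not taken from the paper itself) is the Bernoulli(p) source under Hamming distortion, for which R(D) = H(p) − H(D) when 0 ≤ D ≤ min(p, 1 − p), and R(D) = 0 otherwise. A minimal sketch:

```python
import math

def binary_entropy(x: float) -> float:
    """Binary entropy H(x) in bits, with H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_bernoulli(p: float, d: float) -> float:
    """R(D) = H(p) - H(D) for a Bernoulli(p) source under Hamming distortion.

    Above the critical distortion min(p, 1 - p) the rate is zero: the
    reconstruction can simply ignore the source.
    """
    if d >= min(p, 1 - p):
        return 0.0
    return binary_entropy(p) - binary_entropy(d)

# A fair coin compressed with 11% tolerated bit-error rate needs about
# half a bit per symbol: R(0.11) = 1 - H(0.11) ≈ 0.5.
print(rate_distortion_bernoulli(0.5, 0.11))
```

The curve R(D) traced out as D varies is the kind of object the paper's manifolds are built over; the closed form above is only the simplest special case.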
Computation of normal logic programs by fibring neural networks
In Proceedings of the Seventh International Workshop on First-Order Theorem Proving (FTP'05), 2005
Abstract
Cited by 2 (0 self)
Abstract. In this paper, we develop a theory of the integration of fibring neural networks (a generalization of conventional neural networks) into model-theoretic semantics for logic programming. We present some ideas and results about the approximate computation by fibring neural networks of the semantic immediate consequence operators T_P and T̃_P, where T̃_P denotes a generalization of T_P relative to a many-valued logic analogous to Kleene's strong logic. We establish a minimal-fixed-point semantics for normal logic programs somewhat analogous to the least-fixed-point semantics for definite logic programs. We argue that the class of logic programs for which the approximation by fibring neural networks may be employed to compute minimal fixed points of T_P and of T̃_P is the class of normal programs. Our theorems on the approximation of T_P and T̃_P for normal programs extend recent results on the approximation of these operators for definite programs by conventional neural networks.
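As a rough illustration of the operator being approximated, here is a minimal sketch of the classical immediate consequence operator T_P for a propositional definite program, iterated to its least fixed point. The paper's setting (normal programs, many-valued logic, neural approximation) is considerably more general; all names below are illustrative:

```python
# A program is a list of rules (head, [body atoms]); an interpretation
# is the set of atoms currently taken to be true. This covers only
# definite propositional programs, the simplest case the paper extends.

def t_p(program, interpretation):
    """One application of T_P: derive every head whose body is satisfied."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixed_point(program):
    """Iterate T_P from the empty interpretation to its least fixed point.

    For definite programs T_P is monotone, so this naive iteration
    terminates at the least model of the program.
    """
    interpretation = set()
    while True:
        nxt = t_p(program, interpretation)
        if nxt == interpretation:
            return interpretation
        interpretation = nxt

# Example program:   p.    q :- p.    r :- q, s.
program = [("p", []), ("q", ["p"]), ("r", ["q", "s"])]
print(least_fixed_point(program))  # {'p', 'q'}  (r fails: s is never derived)
```

The neural-network results summarized in the abstract concern approximating exactly this kind of operator, and its many-valued generalization, by the input–output behavior of a (fibring) network.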
ONTOLOGIES AND WORLDS IN CATEGORY THEORY: IMPLICATIONS FOR NEURAL SYSTEMS
Abstract
ABSTRACT. We propose category theory, the mathematical theory of structure, as a vehicle for defining ontologies in an unambiguous language with analytical and constructive features. Specifically, we apply categorical logic and model theory, based upon viewing an ontology as a subcategory of a category of theories expressed in a formal logic. In addition to providing mathematical rigor, this approach has several advantages. It allows the incremental analysis of ontologies by basing them in an interconnected hierarchy of theories, with an operation on the hierarchy that expresses the formation of complex theories from simple theories that express first principles. Another operation forms abstractions expressing the shared concepts in an array of theories. The use of categorical model theory makes possible the incremental analysis of possible worlds, or instances, for the theories, and the mapping of instances of a theory to instances of its more abstract parts. We describe the theoretical approach by applying it to the semantics of neural networks.
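The abstract's picture can be caricatured in code: theories as objects, theory inclusion as morphisms, a combination operation building complex theories from simple ones, and instances of a theory restricted to instances of its more abstract parts. The following toy sketch is purely illustrative (real categorical model theory works with formal signatures, axioms, and functors, not string sets); every name here is an assumption of the sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Theory:
    """A toy theory: a signature of symbols plus axioms (opaque strings)."""
    signature: frozenset
    axioms: frozenset

def includes(sub: Theory, sup: Theory) -> bool:
    """Trivial morphism test: sub is an abstract part of sup."""
    return sub.signature <= sup.signature and sub.axioms <= sup.axioms

def combine(t1: Theory, t2: Theory) -> Theory:
    """Form a complex theory from simpler ones by union of symbols and axioms."""
    return Theory(t1.signature | t2.signature, t1.axioms | t2.axioms)

def restrict_instance(instance: dict, abstract: Theory) -> dict:
    """Map an instance (model) of a theory to an instance of an abstract part."""
    return {sym: val for sym, val in instance.items()
            if sym in abstract.signature}

# Hypothetical first-principles theories for a neural-network ontology.
neuron = Theory(frozenset({"activation"}), frozenset({"activation is bounded"}))
synapse = Theory(frozenset({"weight"}), frozenset({"weight is real"}))
network = combine(neuron, synapse)

model = {"activation": 0.7, "weight": -1.2}
print(restrict_instance(model, neuron))  # {'activation': 0.7}
```

The `restrict_instance` step mirrors the abstract's "mapping of instances of a theory to instances of its more abstract parts"; in the categorical treatment this is a functorial operation rather than a dictionary filter.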