Results 1-10 of 27
Self-Organizing Maps in Natural Language Processing
, 1997
"... Kohonen's SelfOrganizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall i ..."
Abstract

Cited by 47 (3 self)
Kohonen's Self-Organizing Map (SOM) is one of the most popular artificial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. The approach based on word category maps is compared with methods widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining, for which a specific SOM-based method called the WEBSOM has been developed. The WEBSOM metho...
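As a toy illustration of the word-category-map idea described above (not the thesis's WEBSOM implementation), the sketch below builds averaged short-context vectors for a tiny corpus and trains a small 1-D SOM on them; the corpus, the context-vector scheme, and every parameter are illustrative assumptions:

```python
import numpy as np

corpus = ("the cat sat on the mat the dog sat on the rug "
          "a cat ate the fish a dog ate the bone").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

def context_vectors(tokens):
    """Average one-hot vectors of each word's immediate left/right neighbours."""
    V = len(vocab)
    vecs = {w: np.zeros(2 * V) for w in vocab}
    counts = {w: 0 for w in vocab}
    for t, w in enumerate(tokens):
        v = np.zeros(2 * V)
        if t > 0:
            v[idx[tokens[t - 1]]] = 1.0          # left-context slot
        if t < len(tokens) - 1:
            v[V + idx[tokens[t + 1]]] = 1.0      # right-context slot
        vecs[w] += v
        counts[w] += 1
    return {w: vecs[w] / counts[w] for w in vocab}

def train_som(data, nodes=6, epochs=200, seed=0):
    """Plain sequential SOM training on a 1-D grid of `nodes` units."""
    rng = np.random.default_rng(seed)
    W = rng.random((nodes, data.shape[1]))
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)                   # decaying learning rate
        radius = max(1.0, nodes / 2 * (1 - e / epochs))
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.abs(np.arange(nodes) - bmu)
            h = np.exp(-(d ** 2) / (2 * radius ** 2))     # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
    return W

vecs = context_vectors(corpus)
data = np.array([vecs[w] for w in vocab])
W = train_som(data)
node_of = {w: int(np.argmin(((W - vecs[w]) ** 2).sum(axis=1))) for w in vocab}
```

In this toy corpus "cat" and "dog" occur in identical contexts, so they receive identical context vectors and necessarily map to the same node; more generally, words with similar contexts land on the same or neighboring nodes, which is the organizing principle the abstract describes.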
Tenacious Tortoises: A formalism for argument over rules of inference
 Computational Dialectics (ECAI 2000 Workshop)
, 2000
"... As multiagent systems proliferate and employ different and more sophisticated formal logics, it is increasingly likely that agents will be reasoning with different rules of inference. Hence, an agent seeking to convince another of some proposition may first have to convince the latter to use a rule ..."
Abstract

Cited by 17 (11 self)
As multiagent systems proliferate and employ different and more sophisticated formal logics, it is increasingly likely that agents will be reasoning with different rules of inference. Hence, an agent seeking to convince another of some proposition may first have to convince the latter to use a rule of inference which it has not thus far adopted. We define a formalism to represent degrees of acceptability or validity of rules of inference, to enable autonomous agents to undertake dialogue concerning inference rules. Even when they disagree over the acceptability of a rule, two agents may still use the proposed formalism to reason collaboratively.
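A hypothetical sketch of the idea, not the paper's formalism: each agent assigns a degree of acceptability in [0, 1] to named inference rules, and two agents reason together using only rules whose joint acceptability clears a threshold. The rule names, the min-combination, and the threshold value are all illustrative assumptions:

```python
# Each agent's stance on inference rules, as acceptability degrees in [0, 1].
agent_a = {"modus_ponens": 1.0, "modus_tollens": 0.9, "abduction": 0.3}
agent_b = {"modus_ponens": 1.0, "modus_tollens": 0.6, "abduction": 0.8}

def jointly_acceptable(a, b, threshold=0.5):
    """Rules both agents accept to at least `threshold` (min combination)."""
    shared = set(a) & set(b)
    return {r for r in shared if min(a[r], b[r]) >= threshold}

rules = jointly_acceptable(agent_a, agent_b)
```

Under the min combination the agents can still collaborate despite disagreeing on individual rules: here they disagree sharply about abduction, yet share a usable common core of rules.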
Reason and Rationality
"... Over the past few decades, reasoning and rationality have been the focus of enormous interdisciplinary attention, attracting interest from philosophers, psychologists, economists, statisticians and anthropologists, among others. The widespread interest in the topic reflects the central status of rea ..."
Abstract

Cited by 9 (2 self)
Over the past few decades, reasoning and rationality have been the focus of enormous interdisciplinary attention, attracting interest from philosophers, psychologists, economists, statisticians and anthropologists, among others. The widespread interest in the topic reflects the central status of reasoning in human affairs. But it also suggests that ...
The philosophical significance of Cox’s theorem
 International Journal of Approximate Reasoning
, 2004
"... Cox’s theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of par ..."
Abstract

Cited by 5 (2 self)
Cox’s theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox’s theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox’s theorem.
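For orientation, the theorem's setting can be stated in a standard textbook form (a common formulation, not necessarily the paper's exact one): plausibilities are real numbers, and there exist functions F and S such that

```latex
(A \wedge B \mid C) = F\big((A \mid C),\ (B \mid A \wedge C)\big),
\qquad
(\neg A \mid C) = S\big((A \mid C)\big).
```

Given continuity and consistency requirements on F and S, Cox concluded that any such measure of belief can be rescaled into a probability measure; the paper's question is how much those requirements themselves carry the philosophical weight.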
Fuzzy Epistemicism
, 2007
"... Please do not cite or quote without permission. It is taken for granted in much of the literature on vagueness that semantic and epistemic approaches to vagueness are fundamentally at odds. If we can analyze borderline cases and the sorites paradox in terms of degrees of truth, then we don’t need an ..."
Abstract

Cited by 3 (0 self)
Please do not cite or quote without permission. It is taken for granted in much of the literature on vagueness that semantic and epistemic approaches to vagueness are fundamentally at odds. If we can analyze borderline cases and the sorites paradox in terms of degrees of truth, then we don’t need an epistemic explanation. Conversely, if an epistemic explanation suffices, then there is no reason to depart from the familiar simplicity of classical bivalent semantics. I question this assumption, showing that there is an intelligible motivation for adopting a many-valued semantics even if one accepts a form of epistemicism. The resulting hybrid view has advantages over both classical epistemicism and traditional many-valued approaches.
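The degrees-of-truth treatment of the sorites mentioned above can be made concrete with a small sketch; the predicate, the linear degree assignment, and the use of Łukasiewicz implication are illustrative assumptions, not the paper's own machinery:

```python
def lukasiewicz_implies(vp, vq):
    """Lukasiewicz implication: v(p -> q) = min(1, 1 - v(p) + v(q))."""
    return min(1.0, 1.0 - vp + vq)

def heap(n, lo=0, hi=10_000):
    """Degree to which 'n grains make a heap' is true: fully true at 10,000
    grains, fully false at 0, dropping linearly in between (an assumption)."""
    return max(0.0, min(1.0, (n - lo) / (hi - lo)))

# Each sorites step "heap(n) -> heap(n-1)" is almost perfectly true...
step_values = [lukasiewicz_implies(heap(n), heap(n - 1))
               for n in range(10_000, 0, -1)]
# ...yet the tiny shortfalls accumulate, so the conclusion heap(0) is
# outright false even though no single step looks objectionable.
```

This is the standard degree-theoretic diagnosis of the paradox: every conditional premise has a truth degree just below 1, and chaining thousands of them drains the conclusion's degree to 0.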
Fuzzy Concepts and Formal Methods: Some Illustrative Examples
 In Proceedings of the Seventh Asia-Pacific Software Engineering Conference
, 1999
"... It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, ..."
Abstract

Cited by 3 (3 self)
It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms.
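As a toy contrast of the kind such examples address, compare a crisp requirement predicate with a fuzzy membership function for an imprecise property like "fast response"; the breakpoints and the linear shape are assumptions for illustration only:

```python
def crisp_fast(ms):
    """Crisp requirement: response time strictly under 2000 ms."""
    return ms < 2000

def fuzzy_fast(ms, full=500, none=3000):
    """Degree in [0, 1] to which a response time counts as 'fast':
    fully fast up to `full` ms, not fast at all from `none` ms,
    linearly graded in between."""
    if ms <= full:
        return 1.0
    if ms >= none:
        return 0.0
    return (none - ms) / (none - full)
```

The crisp predicate draws an arbitrary line (1999 ms passes, 2001 ms fails), whereas the fuzzy membership function records partial satisfaction, which is closer to how such imprecise IS requirements are actually stated.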
The Universal Generalization Problem
 Logique & Analyse
, 2009
"... Gentzen gave different justifications of universal generalization. In particular, Gentzen’s justification is the one currently used in most logic textbooks. In this paper I argue that all such justifications are problematic, and propose an alternative justification which is related to the approach t ..."
Abstract

Cited by 1 (1 self)
Gentzen gave different justifications of universal generalization. In particular, Gentzen’s justification is the one currently used in most logic textbooks. In this paper I argue that all such justifications are problematic, and propose an alternative justification which is related to the approach to generality of Greek mathematics.
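For reference, the rule in question, universal generalization (∀-introduction), is standardly stated as follows (a textbook formulation, not necessarily the paper's notation):

```latex
\frac{\Gamma \vdash A(a)}{\Gamma \vdash \forall x\, A(x)}
\qquad \text{provided the eigenvariable } a \text{ does not occur free in } \Gamma .
```

The eigenvariable condition is precisely what the competing justifications try to motivate: why a derivation about an "arbitrary" individual a should license a claim about all individuals.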
Integrated Qualitativeness in Design by Multi-Objective Optimization and Interactive Evolutionary Computation
"... Abstract The concept of qualitativeness in design is an important one, and needs to be incorporated in the optimization process for a number of reasons outlined in this paper. Interactive Evolutionary Computation and Fuzzy Systems are two of the widely used approaches for handling qualitativeness i ..."
Abstract

Cited by 1 (1 self)
The concept of qualitativeness in design is an important one, and needs to be incorporated in the optimization process for a number of reasons outlined in this paper. Interactive Evolutionary Computation and Fuzzy Systems are two of the widely used approaches for handling qualitativeness in design optimization. This paper classifies the types of qualitativeness observed in design optimization, makes the case for their necessity, and proposes a novel framework for handling them, combining the two approaches in an evolutionary multi-objective optimization platform. Two components of the framework are tested using the floorplanning problem, and observations are reported. Future work on the development of the framework is outlined.
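The evolutionary multi-objective side of such a framework rests on Pareto dominance; the sketch below is a standard minimization-order check and non-dominated filter, not the paper's implementation:

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives
    minimized): `a` is no worse everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """The non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the area/wirelength-style pairs (1, 5), (2, 3), (3, 4), (4, 1), the point (3, 4) is dominated by (2, 3) and drops out, leaving the other three as the Pareto front.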
F.J.: Fregean algebraic tableaux: Automating inferences in fuzzy propositional logic
 In: Proceedings of the 12th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR’05), Springer-Verlag
, 2005
"... Abstract. We develop a tableau procedure for finding theorems and consequence relations of RPL △ (i.e., ̷L ℵ extended with constants and a determinacy operator). RPL △ includes a large number of proposed truthfunctions for fuzzy logic. Our procedure simplifies tableaux for infinitevalued systems by ..."
Abstract

Cited by 1 (0 self)
We develop a tableau procedure for finding theorems and consequence relations of RPL△ (i.e., Łℵ extended with constants and a determinacy operator). RPL△ includes a large number of proposed truth-functions for fuzzy logic. Our procedure simplifies tableaux for infinite-valued systems by incorporating an insight of Frege’s. We take formulas of the language to be names for their truth-values, which permits them to be manipulated in the tableaux as if they were algebraic variables. Hence, we call our system FAT, for Fregean Algebraic Tableaux. We have additionally developed an automated procedure for proving theorems using FAT, which we will briefly describe.
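For concreteness, the standard Łukasiewicz truth-functions on [0, 1], together with the determinacy operator △, can be written down directly; this is a textbook presentation of the connectives, and the function names are assumptions, not the paper's notation:

```python
def neg(x):
    """Lukasiewicz negation."""
    return 1.0 - x

def implies(x, y):
    """Lukasiewicz implication: min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)

def strong_and(x, y):
    """Lukasiewicz t-norm (strong conjunction): max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def delta(x):
    """Determinacy operator: fully true iff its argument is fully true."""
    return 1.0 if x == 1.0 else 0.0
```

Treating formulas as names for such truth-values, as the abstract describes, is what lets a tableau manipulate them like algebraic variables rather than branching over infinitely many values.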