Results 1-10 of 302
Authorship Attribution with Support Vector Machines
 APPLIED INTELLIGENCE
, 2000
"... In this paper we explore the use of textmining methods for the identification of the author of a text. For the first time we apply the support vector machine (SVM) to this problem. As it is able to cope with half a million of inputs it requires no feature selection and can process the frequency v ..."
Abstract

Cited by 90 (0 self)
In this paper we explore the use of text-mining methods for identifying the author of a text. For the first time we apply the support vector machine (SVM) to this problem. Because it can cope with half a million inputs, it requires no feature selection and can process the frequency vector of all words of a text. We performed a number of experiments with texts from a German newspaper. With nearly perfect reliability the SVM was able to reject other authors, and it detected the target author in 60-80% of the cases. In a second experiment we ignored nouns, verbs and adjectives and replaced them by grammatical tags and bigrams. This resulted in slightly reduced performance. Author detection with SVM on full word forms was remarkably robust even when the author wrote about different topics.
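As a rough illustration of the pipeline this abstract describes, classifying texts by the frequency vector of their words, the sketch below trains a linear classifier on toy word-frequency vectors. The two-author corpus and the plain perceptron are stand-ins for the paper's German newspaper data and SVM; all names and texts here are illustrative.

```python
from collections import Counter

# Toy corpus: texts by two hypothetical authors (a stand-in for the
# paper's German newspaper data).
texts = [
    ("A", "the ship sailed the cold sea and the wind rose"),
    ("A", "the sea was cold and the ship turned north"),
    ("B", "markets fell sharply as investors sold shares today"),
    ("B", "investors watched as markets and shares fell again"),
]

vocab = sorted({w for _, t in texts for w in t.split()})

def freq_vector(text):
    """Frequency vector over the full vocabulary -- no feature selection,
    mirroring the SVM's ability to take all word frequencies as input."""
    c = Counter(text.split())
    n = sum(c.values())
    return [c[w] / n for w in vocab]

# A plain perceptron stands in for the linear SVM here: both learn a
# separating hyperplane, but the SVM additionally maximizes the margin.
w = [0.0] * len(vocab)
b = 0.0
for _ in range(100):
    for label, text in texts:
        x = freq_vector(text)
        y = 1 if label == "A" else -1
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

def predict(text):
    x = freq_vector(text)
    return "A" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "B"

print(predict("the cold wind and the sea"))       # nautical vocabulary
print(predict("shares fell as investors sold"))   # financial vocabulary
```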
Logic Programs and Connectionist Networks
 Journal of Applied Logic
, 2004
"... One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic p ..."
Abstract

Cited by 63 (22 self)
One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones also allows one to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also of what means of approximation are appropriate and significant.
Approximating the Semantics of Logic Programs by Recurrent Neural Networks
"... In [18] we have shown how to construct a 3layered recurrent neural network that computes the fixed point of the meaning function TP of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first order case. We define a no ..."
Abstract

Cited by 62 (10 self)
In [18] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feedforward neural network that approximates the calculation of T_P arbitrarily well for a given first-order acyclic logic program P with an injective level mapping. Extending the feedforward network by recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. This result is proven by exploiting the fact that for acyclic logic programs the function T_P is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space ℝ with Euclidean distance, a real-valued function f_P can be defined which corresponds to T_P and is continuous as well as a contraction. Consequently it can be approximated by an appropriately chosen class of feedforward neural networks.
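For the propositional case that this abstract refers back to, the operator T_P and its fixed-point iteration are easy to state directly. The sketch below computes T_P as a set transformer and iterates it from the empty interpretation to a fixed point; the example program is illustrative, not taken from the article.

```python
# A propositional definite logic program as (head, body) pairs.
# Illustrative program, not one from the article.
program = [
    ("q", []),            # q.
    ("r", ["q"]),         # r :- q.
    ("s", ["q", "r"]),    # s :- q, r.
]

def T_P(interpretation):
    """Immediate-consequence operator: the heads of all rules whose
    bodies are true in the given interpretation."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

# Iterate from the empty interpretation until a fixed point is reached;
# for a definite program this yields the least model.
I = set()
while True:
    J = T_P(I)
    if J == I:
        break
    I = J

print(sorted(I))  # -> ['q', 'r', 's']
```

Each iteration adds the atoms that have become derivable, so the fixed point is reached here after three steps.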
A new methodology of extraction, optimization and application of crisp and fuzzy logical rules
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2001
"... A new methodology of extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local, or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during application of logical ..."
Abstract

Cited by 60 (24 self)
A new methodology for the extraction, optimization, and application of sets of logical rules is described. Neural networks are used for initial rule extraction, local or global minimization procedures for optimization, and Gaussian uncertainties of measurements are assumed during the application of logical rules. Algorithms for extracting logical rules from data with real-valued features require the determination of linguistic variables or membership functions. Context-dependent membership functions for crisp and fuzzy linguistic variables are introduced, and methods for their determination are described. Several neural and machine-learning methods of logical rule extraction that generate initial rules are described, based on constrained multilayer perceptrons, on networks with localized transfer functions, or on separability criteria for the determination of linguistic variables. A trade-off between accuracy and simplicity is explored at the rule-extraction stage, and between rejection and error level at the optimization stage. Gaussian uncertainties of measurements are assumed during the application of crisp logical rules, leading to “soft trapezoidal” membership functions and allowing the linguistic variables to be optimized using gradient procedures. Numerous applications of this methodology to benchmark and real-life problems are reported, and very simple crisp logical rules are provided for many datasets.
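The “soft trapezoidal” membership functions mentioned above arise when a crisp interval condition a <= x <= b is smoothed by measurement noise. A common closed-form stand-in is a difference of two logistic sigmoids, which is what the sketch below uses; this logistic form is our assumption, since the paper derives the exact shape from Gaussian uncertainties.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def soft_trapezoid(x, a, b, beta):
    """Softened membership for the crisp rule 'a <= x <= b'.
    beta controls the slope: large beta gives a nearly crisp window,
    small beta gives heavily smoothed edges (large measurement noise)."""
    return sigmoid(beta * (x - a)) - sigmoid(beta * (x - b))

# Well inside the interval the membership is close to 1, far outside
# it is close to 0, and the edges are graded -- which is what makes
# gradient-based optimization of the linguistic variables a, b possible.
print(round(soft_trapezoid(5.0, 2.0, 8.0, beta=4.0), 3))   # -> 1.0
print(round(soft_trapezoid(20.0, 2.0, 8.0, beta=4.0), 3))  # -> 0.0
```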
Symbolic knowledge extraction from trained neural networks: A sound approach
, 2001
"... Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of ..."
Abstract

Cited by 57 (10 self)
Although neural networks have shown very good performance in many application domains, one of their main drawbacks is their inability to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new extraction method that captures non-monotonic rules encoded in the network, and prove that the method is sound. We start by discussing some of the main problems of knowledge extraction methods and how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted rule set. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We then extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks, so the extraction method for regular networks can still be applied, but now in a decompositional fashion. To combine the rule sets extracted from each subnetwork into the final rule set, we use a method that preserves the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system on traditional examples and real-world application problems. The results show that a very high fidelity between the extracted rule set and the network can be achieved.
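As a toy illustration of the pedagogical side of this approach, querying the network as a black box over a partially ordered set of input vectors and keeping only the minimal positive ones, the sketch below extracts a rule set from a stand-in monotone "network". The network and its inputs are illustrative; the paper's method additionally exploits regularities in subnetworks.

```python
from itertools import product

def net(v):
    """Stand-in for a trained network queried as a black box on a
    Boolean input vector; here a monotone threshold unit (illustrative)."""
    weights = [2, 1, 1]
    return sum(w * x for w, x in zip(weights, v)) >= 2

def leq(u, v):
    """Component-wise partial ordering on input vectors."""
    return all(a <= b for a, b in zip(u, v))

inputs = list(product([0, 1], repeat=3))
positive = [v for v in inputs if net(v)]

# Keep only the minimal positive vectors: each yields one rule
# "if these inputs are on, the network fires".  For a monotone
# network this rule set is sound and complete.
minimal = [v for v in positive
           if not any(leq(u, v) and u != v for u in positive)]

rules = [tuple(i for i, x in enumerate(v) if x) for v in minimal]
print(sorted(rules))  # -> [(0,), (1, 2)]
```

Each tuple lists the input indices a rule requires: input 0 alone suffices, or inputs 1 and 2 together.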
Similarity and rules: Distinct? Exhaustive? Empirically distinguishable?
 Cognition
, 1998
"... The distinction between rulebased and similaritybased processes in cognition is of fundamental importance for cognitive science, and has been the focus of a large body of empirical research. However, intuitive uses of the distinction are subject to theoretical difficulties and their relation to em ..."
Abstract

Cited by 57 (7 self)
The distinction between rule-based and similarity-based processes in cognition is of fundamental importance for cognitive science, and has been the focus of a large body of empirical research. However, intuitive uses of the distinction are subject to theoretical difficulties, and their relation to empirical evidence is not clear. We propose a ‘core’ distinction between rule- and similarity-based processes, in terms of the way representations of stored information are ‘matched’ with the representation of a novel item. This explication captures the intuitively clear-cut cases of processes of each type, and resolves apparent problems with the rule/similarity distinction. Moreover, it provides a clear target for assessing the psychological and AI literatures. We show that many lines of psychological evidence are less conclusive than sometimes assumed, but suggest that converging lines of evidence may be persuasive. We then argue that the AI literature suggests that approaches which combine rules and similarity are an important new focus for empirical work. © 1998 Elsevier Science B.V. Keywords: Similarity-based process; Rule-based process.
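The proposed ‘core’ distinction, strict satisfaction of a stored condition versus graded match to a stored exemplar, can be made concrete in a few lines. Both matchers below are illustrative toys, not models from the paper.

```python
# A novel item represented as a dict of Boolean features.
item = {"has_wings": 1, "flies": 0, "lays_eggs": 1}

# Rule-based matching: the stored representation is a condition that
# the novel item either strictly satisfies or does not.
def rule_match(item):
    # hypothetical "is a bird" rule
    return item["has_wings"] == 1 and item["lays_eggs"] == 1

# Similarity-based matching: the stored representation is an exemplar,
# and matching is graded -- here, the fraction of shared feature values.
exemplar = {"has_wings": 1, "flies": 1, "lays_eggs": 1}  # a remembered sparrow
def similarity(item):
    shared = sum(item[f] == exemplar[f] for f in exemplar)
    return shared / len(exemplar)

print(rule_match(item))   # strict: True or False, no degrees
print(similarity(item))   # graded: a value between 0 and 1
```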
Hybrid ruleextraction from support vector machines
 in Proc. of IEEE conference on cybernetics and intelligent systems
, 2004
"... Abstract — Support vector machines (SVMs) have shown superior performance compared to other machine learning techniques, especially in classification problems. Yet one limitation of SVMs is the lack of an explanation capability which is crucial in some applications, e.g. in the medical and security ..."
Abstract

Cited by 56 (0 self)
Support vector machines (SVMs) have shown superior performance compared to other machine-learning techniques, especially in classification problems. Yet one limitation of SVMs is the lack of an explanation capability, which is crucial in some applications, e.g. in the medical and security domains. In this paper, a novel approach for eclectic rule extraction from support vector machines is presented. This approach utilizes the knowledge acquired by the SVM and represented in its support vectors, as well as the parameters associated with them. The approach comprises three stages: training, propositional rule extraction, and rule-quality evaluation. Results from four different experiments demonstrate the value of the approach for extracting comprehensible rules of high accuracy and fidelity.
Hybrid Neural Systems
, 2000
"... This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe rece ..."
Abstract

Cited by 55 (11 self)
This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe recent results on hybrid neural systems. We will give a brief overview of the main methods used, outline the work presented here, and provide additional references. We will also highlight some important general issues and trends.
Using Neural Network Rule Extraction and Decision Tables for Credit-Risk Evaluation
 Management Science
, 2003
"... Creditrisk evaluation is a very challenging and important management science problemin the domain of financial analysis. Many classification methods have been suggested in the literature to tackle this problem. Neural networks, especially, have received a lot of attention because of their universal ..."
Abstract

Cited by 50 (9 self)
Credit-risk evaluation is a very challenging and important management-science problem in the domain of financial analysis. Many classification methods have been suggested in the literature to tackle this problem. Neural networks, especially, have received a lot of attention because of their universal approximation property. However, a major drawback associated with the use of neural networks for decision making is their lack of explanation capability. While they can achieve a high predictive accuracy rate, the reasoning behind how they reach their decisions is not readily available. In this paper, we present the results from analysing three real-life credit-risk data sets using neural network rule extraction techniques. Clarifying the neural network decisions by explanatory rules that capture the learned knowledge embedded in the networks can help the credit-risk manager in explaining why a particular applicant is classified as either bad or good. Furthermore, we also discuss how these rules can be visualized as a decision table in a compact and intuitive graphical format that facilitates easy consultation. It is concluded that neural network rule extraction and decision tables are powerful management tools that allow us to build advanced and user-friendly decision-support systems for credit-risk evaluation.
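A decision table of the kind advocated here is essentially a total mapping from condition combinations to outcomes, consulted by lookup rather than by tracing rules one at a time. The table, features, and thresholds below are hypothetical; the paper's tables are built from rules extracted from networks trained on real credit data.

```python
# Hypothetical extracted rules, recast as a decision table:
# (income_high, has_arrears) -> credit decision.  Every condition
# combination is covered, so consultation is a single lookup.
decision_table = {
    (True,  False): "good",
    (True,  True):  "bad",
    (False, False): "bad",
    (False, True):  "bad",
}

def classify(applicant):
    # Condition entries: illustrative thresholds, not from the paper.
    key = (applicant["income"] > 50_000, applicant["arrears"] > 0)
    return decision_table[key]

applicant = {"income": 62_000, "arrears": 0}
print(classify(applicant))  # -> good
```

Because the table is total and the conditions are mutually exclusive, the credit-risk manager can read off exactly why an applicant was classified as good or bad.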