Results 11–20 of 95
NeuroRule: a connectionist approach to data mining
 In Proceedings of the International Conference on Very Large Databases (VLDB '95), 1995
Abstract

Cited by 35 (6 self)
Classification, which involves finding rules that partition a given data set into disjoint groups, is one class of data mining problems. Approaches proposed so far for mining classification rules for large databases are mainly decision-tree-based symbolic learning methods. The connectionist approach based on neural networks has been thought not well suited for data mining. One of the major reasons cited is that knowledge generated by neural networks is not explicitly represented in the form of rules suitable for verification or interpretation by humans. This paper examines this issue. With our newly developed algorithms, rules which are similar to, or more concise than, those generated by the symbolic methods can be extracted from neural networks. The data mining process using neural networks, with an emphasis on rule extraction, is described. Experimental results and comparisons with previously published work are presented.
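As a toy illustration of the rule-extraction idea (a brute-force enumeration, not the clustering-based NeuroRule algorithm itself), the sketch below lists every binary input pattern that activates a single trained threshold unit and emits a symbolic IF-THEN rule for each; the weights and bias are hand-chosen assumptions rather than learned values:

```python
from itertools import product

def extract_rules(weights, bias):
    # enumerate all binary input patterns and emit a rule for each one
    # that drives the threshold unit's activation above zero
    rules = []
    n = len(weights)
    for bits in product([0, 1], repeat=n):
        if sum(w * b for w, b in zip(weights, bits)) + bias > 0:
            cond = " AND ".join(f"x{i}={b}" for i, b in enumerate(bits))
            rules.append(f"IF {cond} THEN class=1")
    return rules

# hand-chosen unit computing x0 AND x1 (illustrative, not learned)
rules = extract_rules([1.0, 1.0], -1.5)
```

For realistic networks, exhaustive enumeration is intractable; the appeal of the paper's approach is producing concise rules without visiting every input combination.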
Accelerated Learning By Active Example Selection
 International Journal of Neural Systems
, 1994
Abstract

Cited by 32 (10 self)
Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of the given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. The method can be used in conjunction with other variations of gradient descent algorithms.
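A minimal sketch of the active-selection idea on a toy logistic unit: training starts from a tiny active set and, after each epoch, the most "critical" pool example is moved in. Plain prediction error stands in here for the criticality measure derived in the paper; the data and constants are assumptions for illustration:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(w[0] + w[1] * x[0] + w[2] * x[1])

def train_active(data, rounds=200, lr=0.5):
    # incremental learning: grow the active set one critical example at a time
    w = [0.0, 0.0, 0.0]
    active, pool = data[:2], data[2:]
    for _ in range(rounds):
        for x, y in active:               # one epoch on the active set only
            g = predict(w, x) - y         # log-loss gradient wrt the logit
            w[0] -= lr * g
            w[1] -= lr * g * x[0]
            w[2] -= lr * g * x[1]
        if pool:                          # criticality = current prediction error
            i = max(range(len(pool)),
                    key=lambda j: abs(predict(w, pool[j][0]) - pool[j][1]))
            active.append(pool.pop(i))
    return w

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(40)]
data = [(x, 1 if x[0] + x[1] > 1.0 else 0) for x in pts]
w = train_active(data)
acc = sum((predict(w, x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

Because boundary examples tend to have the largest error, the active set naturally concentrates near the decision surface, which is the intuition behind training on a critical subset.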
Training Neural Nets with the Reactive Tabu Search
Abstract

Cited by 32 (7 self)
In this paper the task of training subsymbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the Reactive Tabu Search. An iterative optimization process based on a "modified greedy search" component is complemented with a metastrategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory to a limited portion of the search space. Possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse those applied in the most recent part of the search, for a prohibition period that is adapted in an automated way. Confinement is avoided, and proper exploration obtained, by activating a diversification strategy when too many configurations are repeated too often. The RTS method is applicable to non-differentiable functions, robust with respect to random initialization, and effective in continuing the search after local minima. Three tests of the technique on feedforward and feedback systems are presented.
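A compact sketch of a Reactive Tabu Search over bit strings: a move flips one bit, recently flipped bits are tabu, and the prohibition period grows when previously visited configurations reappear (the "reactive" part). The bit-string encoding, toy objective, and adaptation constants are assumptions for illustration, not the paper's setup:

```python
import random

def reactive_tabu_search(f, n_bits=12, iters=300):
    random.seed(1)
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tenure = 2                             # prohibition period, adapted below
    last_flip = [-10**9] * n_bits          # iteration each bit was last flipped
    seen = {}                              # configuration -> visit count
    for t in range(iters):
        key = tuple(x)
        seen[key] = seen.get(key, 0) + 1
        if seen[key] > 2:                  # repetition detected: react
            tenure = min(tenure + 1, n_bits - 1)
        cands = []
        for i in range(n_bits):            # evaluate all single-bit flips
            x[i] ^= 1
            v = f(x)
            # allow non-tabu moves, or tabu moves that beat the best (aspiration)
            if t - last_flip[i] > tenure or v < best_val:
                cands.append((v, i))
            x[i] ^= 1
        if not cands:                      # everything tabu: relax the period
            tenure = max(1, tenure - 1)
            continue
        v, i = min(cands)                  # best admissible move, even if worsening
        x[i] ^= 1
        last_flip[i] = t
        if v < best_val:
            best, best_val = x[:], v
    return best, best_val

# toy objective: squared error between popcount and a target count
best, val = reactive_tabu_search(lambda b: (sum(b) - 9) ** 2)
```

Accepting the best admissible move even when it worsens the objective is what lets the search continue past local minima, and the adaptive tenure is what prevents it from cycling back.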
On-Line Learning Processes in Artificial Neural Networks
, 1993
Abstract

Cited by 31 (4 self)
We study on-line learning processes in artificial neural networks from a general point of view. On-line learning means that a learning step takes place at each presentation of a randomly drawn training pattern. It can be viewed as a stochastic process governed by a continuous-time master equation. On-line learning is necessary if not all training patterns are available all the time. This occurs in many applications when the training patterns are drawn from a time-dependent environmental distribution. Studying learning in a changing environment, we encounter a conflict between the adaptability and the confidence of the network's representation. Minimization of a criterion incorporating both effects yields an algorithm for on-line adaptation of the learning parameter. The inherent noise of on-line learning makes it possible to escape from undesired local minima of the error potential on which the learning rule performs (stochastic) gradient descent. We try to quantify these often made cl...
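The adaptability/confidence trade-off can be illustrated with a one-parameter sketch: an on-line estimator tracks a drifting mean, and its learning rate shrinks only when successive errors change sign (the classical Kesten rule, standing in for the criterion minimized in the paper; the drift and constants are our assumptions):

```python
import random

def track_drifting_mean(steps=2000):
    random.seed(0)
    w, eta = 0.0, 0.5
    prev_err = 0.0
    abs_lag = []
    for t in range(steps):
        target = 0.001 * t                 # slowly drifting environment
        sample = target + random.gauss(0.0, 0.1)
        err = sample - w
        w += eta * err                     # on-line (per-pattern) update
        if err * prev_err < 0:             # sign flip: near the optimum,
            eta = max(0.02, eta * 0.99)    # trade adaptability for confidence
        prev_err = err
        abs_lag.append(abs(target - w))
    return sum(abs_lag[-200:]) / 200       # mean tracking error at the end

final_err = track_drifting_mean()
```

A large rate tracks the drift but amplifies sample noise; a small rate averages out the noise but lags behind the moving target. The floor on `eta` keeps the estimator adaptable in a non-stationary environment.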
Location-Aware Computing: A Neural Network Model For Determining Location In Wireless LANs
, 2002
Abstract

Cited by 22 (1 self)
The strengths of the RF signals arriving from multiple access points in a wireless LAN are related to the position of the mobile terminal and can be used to derive the location of the user. In a ...
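A fingerprinting baseline conveys the RSS-to-position idea: the sketch below averages the k nearest entries of a hypothetical fingerprint table in signal space, rather than training the paper's neural network. All RSS values and positions are illustrative assumptions:

```python
import math

# Hypothetical fingerprint table: RSS (dBm) from three access points,
# measured at known (x, y) positions. Values are illustrative only.
fingerprints = [
    ((-40, -70, -80), (0.0, 0.0)),
    ((-70, -40, -80), (10.0, 0.0)),
    ((-80, -70, -40), (5.0, 8.0)),
    ((-55, -55, -75), (5.0, 0.0)),
    ((-60, -70, -55), (3.0, 5.0)),
]

def locate(rss, k=2):
    # estimate position as the centroid of the k fingerprints whose RSS
    # vectors are closest to the observed one (k-NN in signal space)
    ranked = sorted(fingerprints, key=lambda fp: math.dist(fp[0], rss))
    nearest = [pos for _, pos in ranked[:k]]
    return (sum(x for x, _ in nearest) / k,
            sum(y for _, y in nearest) / k)

pos = locate((-50, -58, -78))
```

A trained network replaces the table lookup with a smooth learned map from RSS vectors to coordinates, which interpolates between survey points and tolerates noisy readings.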
Image Recognition and Neuronal Networks: Intelligent Systems for the Improvement of Imaging Information
, 2000
Abstract

Cited by 21 (13 self)
In this paper we have concentrated on describing issues related to the development and use of artificial neural-network-based intelligent systems for medical image interpretation. Research in intelligent systems to date remains centred on technological issues and is mostly application driven. However, previous research and experience suggest that the successful implementation of computerised systems (e.g., [34] [35]), and decision support systems in particular (e.g., [36]), in the area of healthcare relies on the successful integration of the technology with the organisational and social context within which it is applied. Therefore, the successful implementation of intelligent medical image interpretation systems should rely not only on their technical feasibility and effectiveness but also on the organisational and social aspects that may arise from their application, as clinical information is acquired, processed, used and exchanged between professionals. All these issues are critical in healthcare applications because they ultimately reflect on the quality of care provided.
Design of Neural Network Filters
 Electronics Institute, Technical University of Denmark
, 1993
Abstract

Cited by 21 (12 self)
The subject of the present licentiate thesis is the design of neural network filters. Filters based on neural networks can be seen as extensions of the classical linear adaptive filter, aimed at modelling nonlinear relationships. The main emphasis is on a neural network implementation of the non-recursive, nonlinear adaptive model with additive noise. The aim is to clarify a number of phases involved in the design of neural network architectures for carrying out various "black-box" modelling tasks such as system identification, inverse modelling, and time-series prediction. The principal contributions include the formulation of a neural-network-based canonical filter representation, which forms the basis for the development of an architecture classification scheme. Essentially, this concerns a distinction between global and local models. This allows a number of known neural network architectures to be classified and, furthermore, opens the possibility of developing entirely new structures. In this connection, a review of a number of well-known architectures is given; particular emphasis is placed on the treatment of the multilayer perceptron neural network.
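The non-recursive nonlinear adaptive model can be sketched as a tapped delay line feeding a fixed nonlinear feature expansion trained by normalized LMS. The quadratic feature map below stands in for the thesis's multilayer perceptron, and the toy system being identified is our own assumption:

```python
import random

def features(u):
    # quadratic feature map of the delay vector: bias, linear terms, products
    phi = [1.0] + list(u)
    for i in range(len(u)):
        for j in range(i, len(u)):
            phi.append(u[i] * u[j])
    return phi

def identify(x, d, taps=3, mu=0.5):
    # normalized-LMS system identification: adapt weights so the filter
    # output on the delay line of x matches the desired signal d
    w = [0.0] * (1 + taps + taps * (taps + 1) // 2)
    errs = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # tapped delay line, newest first
        phi = features(u)
        y = sum(wi * pi for wi, pi in zip(w, phi))
        e = d[n] - y
        norm = sum(p * p for p in phi) + 1e-8
        for i in range(len(w)):
            w[i] += mu * e * phi[i] / norm
        errs.append(e * e)
    return w, errs

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(3000)]
# toy "unknown" nonlinear system to identify (our assumption)
d = [0.0, 0.0] + [x[n] - 0.5 * x[n - 1] * x[n - 2] for n in range(2, len(x))]
w, errs = identify(x, d)
tail_mse = sum(errs[-200:]) / 200
```

Because the toy system lies in the span of the quadratic features, the squared error decays essentially to zero; a multilayer perceptron plays the same role for relationships no fixed expansion captures.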
Discriminative Training of Hidden Markov Models
, 1998
Abstract

Cited by 20 (0 self)
Abbreviations vii
Notation viii
1 Introduction 1
2 Hidden Markov Models 4
2.1 Definition 4
2.2 HMM Modelling Assumptions 6
2.3 HMM Topology 7
2.4 Finding the Best Transcription 7
2.5 Setting the Parameters 10
2.6 Summary 18
3 Objective Functions 19
3.1 Properties of Maximum Likelihood Estimators 19
3.2 Maximum Likelihood 24
3.3 Maximum Mutual Information 25
3.4 Frame Discrimination ...
Efficient Algorithms for Function Approximation with Piecewise Linear Sigmoidal Networks
, 1998
Abstract

Cited by 19 (1 self)
This paper presents a computationally efficient algorithm for function approximation with piecewise-linear sigmoidal nodes. A one-hidden-layer network is constructed one node at a time using the well-known method of fitting the residual. The task of fitting an individual node is accomplished using a new algorithm that searches for the best fit by solving a sequence of Quadratic Programming problems. This approach offers significant advantages over derivative-based search algorithms (e.g., backpropagation and its extensions). Unique characteristics of this algorithm include: finite-step convergence, a simple stopping criterion, solutions that are independent of initial conditions, good scaling properties and a robust numerical implementation. Empirical results are included to illustrate these characteristics.
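A stripped-down version of constructive residual fitting: each added node here is a single ramp whose breakpoint is found by grid search over the data and whose amplitude is set by least squares, a deliberate simplification of the paper's QP-based node fitting. The target function and node count are assumptions for illustration:

```python
def ramp(z):
    # piecewise-linear "sigmoidal" piece: zero below the breakpoint
    return z if z > 0 else 0.0

def fit_residual(xs, ys, n_nodes=6):
    # construct the network one node at a time, each fit to the residual
    residual = list(ys)
    nodes = []                            # list of (breakpoint, amplitude)
    for _ in range(n_nodes):
        best = None
        for b in xs:                      # candidate breakpoints at the data
            feats = [ramp(x - b) for x in xs]
            denom = sum(f * f for f in feats)
            if denom == 0.0:
                continue
            a = sum(f * r for f, r in zip(feats, residual)) / denom
            sse = sum((r - a * f) ** 2 for f, r in zip(feats, residual))
            if best is None or sse < best[0]:
                best = (sse, b, a)
        _, b, a = best
        nodes.append((b, a))
        residual = [r - a * ramp(x - b) for x, r in zip(xs, residual)]
    return nodes

def net(nodes, x):
    return sum(a * ramp(x - b) for b, a in nodes)

# approximate a convex target with a few ramps
xs = [i / 10 for i in range(21)]          # grid on [0, 2]
ys = [x * x for x in xs]
nodes = fit_residual(xs, ys)
err = max(abs(net(nodes, x) - y) for x, y in zip(xs, ys))
```

Each node-fitting subproblem is solved exactly for the chosen breakpoint, so the training error decreases monotonically as nodes are added, mirroring the finite-step behaviour the abstract claims for the full algorithm.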
On Langevin Updating in Multilayer Perceptrons
 Neural Computation
, 1993
Abstract

Cited by 18 (1 self)
The Langevin updating rule, in which noise is added to the weights during learning, is presented and analyzed. It is well controlled and, being a natural extension of standard backpropagation learning, is easily combined with other modifications of backpropagation. If the Hessian matrix is numerically ill-conditioned, Langevin updating converges faster than backpropagation and, probably, also than higher-order algorithms. This is particularly important for multilayer perceptrons with many hidden layers, which tend to have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect to Langevin updating.
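A one-weight sketch of the Langevin rule on a tilted double well: each step is a plain gradient step plus zero-mean Gaussian noise on the weight. The linear annealing schedule and the clamp are our assumptions, not the paper's prescription:

```python
import random

def grad(w):
    # gradient of the tilted double well f(w) = (w**2 - 1)**2 + 0.3*w,
    # whose deeper minimum lies near w = -1
    return 4.0 * w * (w * w - 1.0) + 0.3

def langevin_descent(w0, eta=0.05, sigma=0.3, steps=3000):
    random.seed(3)
    w = w0
    for t in range(steps):
        amp = sigma * (1.0 - t / steps)     # anneal the noise amplitude to zero
        w += -eta * grad(w) + random.gauss(0.0, amp)
        w = max(-3.0, min(3.0, w))          # clamp for numerical safety (our addition)
    return w

# start in the shallower right-hand well; weight noise allows barrier crossings
w = langevin_descent(1.0)
```

With the noise annealed away, the final iterations are plain gradient descent, so the weight settles into one of the two minima near w = ±1; early on, the noise is what lets it hop out of the shallower well, which is the local-minimum-escape effect the abstract describes.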