Results 1 - 7 of 7
Causality detection based on information-theoretic approaches in time series analysis
, 2007
Entropy Optimization: Application To Blind Source Separation
 In ICANN 97
, 1997
Abstract

Cited by 12 (8 self)
This paper proposes an approach for entropy optimization by neural networks. A brief introduction to this problem is given. A simple neural algorithm based upon MSE minimization is provided. Validation of this algorithm is given by an application to the Source Separation problem.
Gradient-based manipulation of nonparametric entropy estimates
 IEEE Trans. Neural Networks
, 2004
Abstract

Cited by 9 (0 self)
Abstract — This paper derives a family of differential learning rules that optimize the Shannon entropy at the output of an adaptive system via kernel density estimation. In contrast to parametric formulations of entropy, this nonparametric approach assumes no particular functional form of the output density. We address problems associated with quantized data and finite sample size, and implement efficient maximum likelihood techniques for optimizing the regularizer. We also develop a normalized entropy estimate that is invariant with respect to affine transformations, facilitating optimization of the shape, rather than the scale, of the output density. Kernel density estimates are smooth and differentiable; this makes the derived entropy estimates amenable to manipulation by gradient descent. The resulting weight updates are surprisingly simple and efficient learning rules that operate on pairs of input samples. They can be tuned for data-limited or memory-limited situations, or modified to give a fully online implementation. Index Terms — affine-invariant entropy, entropy manipulation, expectation-maximization, kernel density, maximum likelihood kernel, overrelaxation, Parzen windows, step size adaptation.
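The abstract above describes adapting a system by gradient descent on a Parzen-window (kernel density) estimate of the output entropy, with updates that operate on pairs of samples. A minimal sketch of that idea for a linear map y = Xw with a Gaussian kernel — the leave-one-out form, the fixed bandwidth h, and the linear model are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def kde_entropy(y, h):
    """Leave-one-out Parzen estimate of Shannon entropy for 1-D samples y."""
    n = len(y)
    d = y[:, None] - y[None, :]                       # pairwise differences y_i - y_j
    K = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                          # leave each sample out of its own estimate
    p = K.sum(axis=1) / (n - 1)                       # density estimate at each sample
    return -np.mean(np.log(p))                        # H ≈ -E[log p(y)]

def entropy_grad(w, X, h):
    """Gradient of the KDE entropy of y = X @ w with respect to w."""
    y = X @ w
    n = len(y)
    d = y[:, None] - y[None, :]
    K = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)
    p = K.sum(axis=1) / (n - 1)
    # dK_ij/dw = K_ij * (-(y_i - y_j)/h^2) * (x_i - x_j): a pairwise update rule
    C = K * (-d / h**2)
    dX = X[:, None, :] - X[None, :, :]                # pairwise x_i - x_j
    dp = np.einsum('ij,ijk->ik', C, dX) / (n - 1)     # dp(y_i)/dw
    return -np.mean(dp / p[:, None], axis=0)
```

A step `w -= lr * entropy_grad(w, X, h)` then decreases the estimated output entropy; flipping the sign maximizes it instead.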
Unsupervised Learning in LSTM Recurrent Neural Networks
 In
, 2001
Abstract

Cited by 5 (2 self)
While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BINGO) and Nonparametric Entropy Optimization (NEO). LSTM learns to discriminate different types of temporal sequences and group them according to a variety of features.
Unsupervised Learning in Recurrent Neural Networks
Abstract

Cited by 2 (0 self)
While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BINGO) and Nonparametric Entropy Optimization (NEO). LSTM learns to discriminate different types of temporal sequences and group them according to a variety of features.
unknown title
Abstract
Causality detection based on information-theoretic approaches in time series analysis
Call Pattern Analysis with Unsupervised Neural Networks
, 2005
Abstract
Huge amounts of data are being collected as a result of the increased use of mobile telecommunications. Insight into information and knowledge derived from these databases can give operators a competitive edge in terms of customer care and retention, marketing and fraud detection. One of the strategies for fraud detection checks for signs of questionable changes in user behavior. Although the intentions of the mobile phone users cannot be observed, their intentions are reflected in the call data, which define usage patterns. Over a period of time, an individual phone generates a large pattern of use. While call data are recorded for subscribers for billing purposes, we make no prior assumptions about which data are indicative of fraudulent call patterns, i.e. the calls recorded for billing purposes are unlabeled. Further analysis is thus required to isolate fraudulent usage. An unsupervised learning algorithm can analyse and cluster call patterns for each subscriber in order to facilitate the fraud detection process. This research investigates the unsupervised learning potentials of two neural net
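The clustering step this abstract describes can be illustrated with a small self-organizing map, a common unsupervised neural network for grouping usage patterns. Everything below — the feature encoding, the map size, and the learning-rate/neighborhood schedules — is an illustrative assumption, not the setup used in the paper:

```python
import numpy as np

def train_som(X, n_units=4, epochs=20, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny 1-D self-organizing map on pattern vectors X.
    Each row of X might encode, e.g., call counts, mean call duration,
    and share of international calls for one subscriber."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)  # init from data
    pos = np.arange(n_units)                          # unit coordinates on a line
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.2)   # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))    # best-matching unit
            h = np.exp(-(pos - bmu) ** 2 / (2 * sigma**2))    # neighborhood function
            W += lr * h[:, None] * (x - W)            # pull BMU and neighbors toward x
    return W

def assign(X, W):
    """Map each pattern to its nearest SOM unit (its cluster label)."""
    return np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
```

After training, subscribers whose call patterns land on the same unit form a cluster, and a pattern that suddenly jumps to a distant unit is a candidate for closer fraud inspection.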