Results 1–10 of 13
An information-maximization approach to blind separation and blind deconvolution
Neural Computation, 1995
Speaker association with signal-level audio-visual fusion
IEEE Transactions on Multimedia, 2004
Cited by 70 (0 self)
Abstract—Audio and visual signals arriving from a common source are detected using a signal-level fusion technique. A probabilistic multimodal generation model is introduced and used to derive an information-theoretic measure of cross-modal correspondence. Nonparametric statistical density modeling techniques can characterize the mutual information between signals from different domains. By comparing the mutual information between different pairs of signals, it is possible to identify which person is speaking a given utterance and to discount errant motion or audio from other utterances or non-speech events. Index Terms—Audio-visual correspondence, multimodal data association, mutual information.
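The comparison of mutual information described in this abstract can be sketched with a simple histogram-based estimator; this is a stand-in for the paper's nonparametric density models, and the signals and names below are invented for illustration.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of I(X; Y) in nats for two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
audio = rng.normal(size=5000)
lips_same = audio + 0.3 * rng.normal(size=5000)  # visual signal of the true speaker
lips_other = rng.normal(size=5000)               # independent distractor motion

# The true speaker's visual signal shares far more information with the audio,
# which is the basis for deciding who is speaking.
assert mutual_information(audio, lips_same) > mutual_information(audio, lips_other)
```

Comparing the two estimates is what lets the method discount errant motion: the distractor pair's mutual information sits near zero while the matched pair's does not.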
Unsupervised Neural Network Learning Procedures . . .
1996
Cited by 31 (2 self)
In this article, we review unsupervised neural network learning procedures that can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: information-preserving methods, density estimation methods, and feature extraction methods. Each of these major sections concludes with a discussion of successful applications of the methods to real-world problems.
Lyapunov functions for convergence of principal component algorithms
Neural Networks, 1995
Cited by 17 (11 self)
Recent theoretical analyses of a class of unsupervised Hebbian principal component algorithms have identified its local stability conditions: the only locally stable solution for the subspace P extracted by the network is the principal component subspace. In this paper we use the Lyapunov function approach to discover the global stability characteristics of this class of algorithms. The subspace projection error, least mean squared projection error, and mutual information I are all Lyapunov functions for convergence to the principal subspace, although the various domains of convergence indicated by these Lyapunov functions leave some of P-space uncovered. A modification to I yields a 'principal subspace information' Lyapunov function I′ with a domain of convergence that covers almost all of P-space. This shows that this class of algorithms converges to the principal subspace from almost everywhere.
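The class of Hebbian principal component algorithms analysed here includes Oja's subspace rule; a minimal stochastic simulation can exhibit the convergence to the principal subspace that the abstract establishes. The data, dimensions, and learning rate below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data with a dominant 2-D principal subspace inside R^5:
# the first two coordinates carry almost all of the variance.
C = np.diag([5.0, 4.0, 0.1, 0.1, 0.1])
X = rng.multivariate_normal(np.zeros(5), C, size=20000)

W = rng.normal(size=(5, 2)) * 0.1      # weights: 5 inputs -> 2 outputs
eta = 0.001                            # assumed learning rate
for x in X:
    y = W.T @ x                                        # network outputs
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))   # Oja's subspace rule

# W's columns should now span the top-2 eigenvector subspace (the first two
# axes here), so their components along the minor axes are nearly zero,
# and W should be approximately orthonormal.
residual = np.linalg.norm(W[2:, :])
print(residual)
```

A small `residual` from a generic random start is consistent with the paper's claim of convergence from almost everywhere in P-space, not just local stability.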
A Hebbian/anti-Hebbian Network Which Optimizes Information Capacity By Orthonormalizing The Principal Subspace
in Proc. IEE Conf. on Artificial Neural Networks, 1993
Cited by 15 (6 self)
In this paper we extend this work to develop an algorithm for the case of both input and output noise, with an output power constraint. We find that it is possible to simplify the obvious algorithm obtained by concatenating the two previous solutions.
Segmentation And Classification Of Hand-Drawn Pictograms In Cluttered Scenes – An Integrated Approach
1999
Cited by 5 (3 self)
In this paper, a new approach to the identification of handwritten symbols in arbitrarily complex environments is presented. Twenty different pictograms drawn on different backgrounds can be identified with a recognition accuracy of 90%. To perform this challenging task, we use pattern-spotting techniques based on pseudo-2D Hidden Markov Models (P2DHMMs). Practical applications of our approach can be found in many typical multimedia document processing tasks, such as localization and recognition of non-rigid objects in image databases, detection of objects in complex scenes, finding trademarks in the presence of clutter within videos, processing distorted document images in digital libraries, and content-based image retrieval based on handwritten query symbols.
Probabilistic models and informative subspaces for audio-visual correspondence
in European Conf. on Computer Vision
Information Theory and Neural Networks
1993
Cited by 3 (0 self)
Ever since Shannon's "Mathematical Theory of Communication" [40] first appeared, information theory has been of interest to psychologists and physiologists trying to explain the process of perception. Attneave [8] proposed that visual perception is the construction of an economical description of a scene from a very redundant initial representation. Barlow [9] suggested that lateral inhibition in the visual pathway may reduce the redundancy of an image, so that information can be represented more efficiently. More recently, Linsker with his 'Infomax' principle [21, 22], Atick and Redlich [5, 6], and Plumbley and Fallside [30, 32] have continued this approach with considerable success. There have also been important advances in data compression techniques associated with principal component analysis. The original work of Oja [23] has now been extended to the analysis of higher-order statistics by Taylor and Coombes [45], and these techniques are presently ...
Information Theory and Neural Network Learning Algorithms
1992
Cited by 2 (0 self)
There have been a number of recent papers on information theory and neural networks, especially in perceptual systems such as vision. Some of these approaches are examined, and their implications for neural network learning algorithms are considered. Existing supervised learning algorithms, such as Back Propagation minimizing mean squared error, can be viewed as attempting to minimize an upper bound on information loss. By assuming noise either at the input or the output of the system, unsupervised learning algorithms such as those based on Hebbian (principal component analysing) or anti-Hebbian (decorrelating) approaches can also be viewed in a similar light. The optimization of information by the use of interneurons to decorrelate output units suggests a role for inhibitory interneurons and cortical loops in biological sensory systems. 1. Introduction: Almost as soon as Shannon first formulated his 'Mathematical Theory of Communication' [1], psychologists and physiol...