Results 11–20 of 3,816
Adaptive On-Line Learning Algorithms for Blind Separation: Maximum Entropy and Minimum Mutual Information
 Neural Computation
, 1997
"... There are two major approaches for blind separation: Maximum Entropy (ME) and Minimum Mutual Information (MMI). Both can be implemented by the stochastic gradient descent method for obtaining the demixing matrix. The MI is the contrast function for blind separation while the entropy is not. To just ..."
Abstract

Cited by 133 (16 self)
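The stochastic-gradient demixing update this abstract refers to can be sketched with a natural-gradient rule; everything below (the cubic score function, the mixing matrix, the learning rate, the sample sizes) is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np

def natural_gradient_ica(X, n_iter=2000, lr=0.01, seed=0):
    """One ME/MMI-style batch rule for the demixing matrix W:
    W <- W + lr * (I - phi(Y) Y^T / T) W, with Y = W X."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        Y = W @ X
        phi = Y ** 3  # cubic score: a common choice for sub-Gaussian sources
        W += lr * (np.eye(n) - phi @ Y.T / T) @ W
    return W

# Toy demo: two independent uniform (sub-Gaussian) sources, linearly mixed.
rng = np.random.default_rng(1)
S = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(2, 5000))  # unit variance
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
W = natural_gradient_ica(A @ S)
P = W @ A  # approaches a scaled permutation when separation succeeds
```

When separation succeeds, each row of P is dominated by a single entry, i.e. the recovered signals match the sources up to ordering and scale.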
Entanglement entropy and conformal field theory
, 2009
"... We review the conformal field theory approach to entanglement entropy. We show how to apply these methods to the calculation of the entanglement entropy of a single interval, and the generalization to different situations such as finite size, systems with boundaries, and the case of several disjoint ..."
Abstract

Cited by 92 (11 self)
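For reference, the single-interval results reviewed in this paper are the standard Calabrese–Cardy formulas (c is the central charge, ℓ the interval length, ε a UV cutoff, L the system size):

```latex
% interval of length \ell in an infinite system
S_A = \frac{c}{3}\log\frac{\ell}{\epsilon} + c'_1
% the same interval in a periodic system of finite length L
S_A = \frac{c}{3}\log\!\left[\frac{L}{\pi\epsilon}\,\sin\frac{\pi\ell}{L}\right] + c'_1
```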
matrix
, 2003
"... entropy principle and quantum statistical information functional in random ..."
Abstract
Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances
, 1993
"... Let G be a finite graph or an infinite graph on which Z d acts with finite fundamental domain. If G is finite, let T be a random spanning tree chosen uniformly from all spanning trees of G; if G is infinite, methods from [Pem] show that this still makes sense, producing a random essential spanning f ..."
Abstract

Cited by 112 (2 self)
forest of G. A method for calculating local characteristics (i.e. finite-dimensional marginals) of T from the transfer-impedance matrix is presented. This differs from the classical matrix-tree theorem in that only small pieces of the matrix (n-dimensional minors) are needed to compute small (n
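For contrast with the transfer-impedance approach, the classical matrix-tree theorem mentioned here is easy to state in code: the spanning-tree count of a connected graph is any cofactor of its Laplacian. A minimal sketch (the example graphs are illustrative):

```python
import numpy as np

def count_spanning_trees(adj):
    """Kirchhoff's matrix-tree theorem: for a connected graph with
    adjacency matrix A, the number of spanning trees equals any cofactor
    of the Laplacian L = D - A, e.g. det(L) with row/column 0 deleted."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return int(round(np.linalg.det(L[1:, 1:])))

# Complete graph K4: Cayley's formula gives 4**(4-2) = 16 spanning trees.
K4 = np.ones((4, 4)) - np.eye(4)
trees_K4 = count_spanning_trees(K4)  # -> 16
```

Note the difference the abstract draws: this cofactor is one global determinant over the whole graph, whereas the transfer-impedance method needs only small minors to read off local marginals of the random tree.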
The emergence of classical properties through interaction with the environment
 Zeitschrift für Physik B
, 1985
"... The dependence of macroscoprc systems uporr their environmertt is studied under the assumption that quantum theory is univcrsally valid. ln particular scattering of photons antl molecules turns out to be essential even in intergalactic space in restricting the observable properties by locally destr ..."
Abstract

Cited by 154 (3 self)
ion between destruction of coherence by the interaction and dispersion of the wave packet by the internal dynamics. A non-phenomenological Boltzmann-type master equation is derived for the density matrix of the center of mass. Its solutions show that the much-discussed dispersion hardly ever shows up
Geodesic entropic graphs for dimension and entropy estimation in manifold learning
 IEEE TRANS. ON SIGNAL PROCESSING
, 2004
"... In the manifold learning problem, one seeks to discover a smooth low dimensional surface, i.e., a manifold embedded in a higher dimensional linear vector space, based on a set of measured sample points on the surface. In this paper, we consider the closely related problem of estimating the manifold ..."
Abstract

Cited by 99 (5 self)
not require reconstruction of the manifold or estimation of the multivariate density of the samples. The GMST method simply constructs a minimal spanning tree (MST) sequence using a geodesic edge matrix and uses the overall lengths of the MSTs to simultaneously estimate manifold dimension and entropy. We
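A rough sketch of the length-based idea (not the authors' exact GMST estimator): the length of an MST over n points drawn from a d-dimensional manifold grows like n^((d-1)/d), so the slope of log-length versus log-n yields a dimension estimate. All sample sizes and data below are illustrative, and Euclidean distances stand in for the geodesic ones used in the paper.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree."""
    D = distance_matrix(points, points)
    return minimum_spanning_tree(D).sum()

def estimate_dimension(X, sizes, rng, repeats=4):
    """Fit log L(n) ~ a log n; since L(n) ~ n**((d-1)/d), d = 1/(1-a)."""
    log_n, log_L = [], []
    for n in sizes:
        lengths = [mst_length(X[rng.choice(len(X), n, replace=False)])
                   for _ in range(repeats)]
        log_n.append(np.log(n))
        log_L.append(np.log(np.mean(lengths)))
    a = np.polyfit(log_n, log_L, 1)[0]  # slope of the log-log fit
    return 1.0 / (1.0 - a)

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 2))  # samples filling a 2-D manifold
d_hat = estimate_dimension(X, [100, 200, 400, 800, 1600], rng)
```

For this 2-D example the estimate lands near 2; on curved manifolds, replacing the Euclidean distance matrix with geodesic distances is what makes the estimator intrinsic.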
Information-theoretic asymptotics of Bayes methods
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1990
"... In the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D,, between the true density and the Bayesian density and sh ..."
Abstract

Cited by 144 (13 self)
and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector. Therefore, the relative entropy rate D_n/n converges to zero at rate (log n)/n. The constant c, which we explicitly identify, depends only on the prior density function and the Fisher information matrix
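In display form, the asymptotics quoted in this abstract are:

```latex
D_n = \frac{d}{2}\log n + c + o(1),
\qquad
\frac{D_n}{n} = O\!\left(\frac{\log n}{n}\right) \xrightarrow[n\to\infty]{} 0,
```

where d is the parameter dimension and the constant c depends only on the prior density and the Fisher information matrix.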
Matrix Algebraic Properties of the Fisher Information Matrix of Stationary Processes
, 2014
"... entropy ..."
On the Entropy of Matrix Black Holes
, 1997
"... Matrix theory [1] appears to provide a nonperturbative definition of quantum gravity. As such, it ought to resolve the conceptual issues surrounding the quantum mechanics of black holes [2]. Four and fivedimensional black holes with four and three charges Qi, respectively, appear to be ideal testb ..."
Abstract