Results 1–7 of 7
Ensemble Learning For Independent Component Analysis
, 1999
Abstract

Cited by 50 (4 self)
In this paper, a recently developed Bayesian method called ensemble learning is applied to independent component analysis (ICA). Ensemble learning is a computationally efficient approximation for exact Bayesian analysis. In general, the posterior probability density function (pdf) is a complex high-dimensional function whose exact treatment is difficult. In ensemble learning, the posterior pdf is approximated by a simpler function, and the Kullback-Leibler information is used as the criterion for minimising the misfit between the actual posterior pdf and its parametric approximation. In this paper, the posterior pdf is approximated by a diagonal Gaussian pdf. According to the ICA model used in this paper, the measurements are generated by a linear mapping from mutually independent source signals whose distributions are mixtures of Gaussians. The measurements are also assumed to have additive Gaussian noise with diagonal covariance. The model structure and all parameters of the distribution...
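The misfit criterion described in the abstract can be illustrated with a minimal sketch: for a diagonal Gaussian approximation q of a posterior p (here also taken as diagonal Gaussian, so the Kullback-Leibler information has a closed form). The function name and the toy numbers are illustrative, not from the paper.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between two diagonal Gaussian pdfs.

    In ensemble learning this kind of quantity measures the misfit
    between the factorial Gaussian approximation q and the posterior p.
    """
    mu_q, var_q = np.asarray(mu_q, float), np.asarray(var_q, float)
    mu_p, var_p = np.asarray(mu_p, float), np.asarray(var_p, float)
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# The misfit vanishes exactly when q matches p
print(kl_diag_gaussians([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0]))  # → 0.0
```

Minimising this quantity over the means and variances of q is what fits the approximation to the true posterior.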
Choice of Basis for Laplace Approximation
 Machine Learning
, 1998
Abstract

Cited by 35 (1 self)
Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the 'softmax' basis.
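A minimal sketch of the basis change discussed above, assuming the 'softmax' basis means unconstrained log-odds parameters mapped back to the simplex by the softmax function; the helper names are hypothetical.

```python
import numpy as np

def to_softmax_basis(p):
    """Map probabilities on the simplex to unconstrained log parameters
    (the 'softmax' basis); shifted to sum to zero to fix the gauge."""
    a = np.log(np.asarray(p, float))
    return a - a.mean()

def to_simplex(a):
    """Inverse map: softmax turns unconstrained parameters back
    into probabilities that sum to one."""
    e = np.exp(a - np.max(a))  # subtract max for numerical stability
    return e / e.sum()

p = np.array([0.2, 0.3, 0.5])
print(np.allclose(to_simplex(to_softmax_basis(p)), p))  # → True
```

Working in the unconstrained basis removes the boundary of the simplex, which is what makes Gaussian-shaped approximations such as Laplace's behave better there.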
Accelerating cyclic update algorithms for parameter estimation by pattern searches
 Neural Processing Letters
Abstract

Cited by 17 (9 self)
Abstract. A popular strategy for dealing with large parameter estimation problems is to split the problem into manageable subproblems and solve them cyclically one by one until convergence. A well-known drawback of this strategy is slow convergence in low-noise conditions. We propose using so-called pattern searches, which consist of an exploratory phase followed by a line search. During the exploratory phase, a search direction is determined by combining the individual updates of all subproblems. The approach can be used to speed up several well-known learning methods, such as variational Bayesian learning (ensemble learning) and the expectation-maximization algorithm, with modest algorithmic modifications. Experimental results show that the proposed method is able to reduce the required convergence time by 60–85% in realistic variational Bayesian learning problems.
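As a rough illustration of the idea, the sketch below uses cyclic coordinate updates on an ill-conditioned quadratic as a toy stand-in for the subproblem solvers; this is not the paper's algorithm, only the exploratory-phase-plus-line-search pattern it describes.

```python
import numpy as np

A = np.array([[1.0, 0.99], [0.99, 1.0]])  # nearly singular -> slow cyclic updates
f = lambda x: 0.5 * x @ A @ x             # toy cost with minimum at the origin

def cyclic_update(x):
    """Exploratory phase: minimise f over each coordinate in turn."""
    x = x.copy()
    for i in range(len(x)):
        x[i] -= (A[i] @ x) / A[i, i]      # exact coordinate minimiser
    return x

def pattern_search(x, n_iter=20):
    for _ in range(n_iter):
        x_new = cyclic_update(x)
        d = x_new - x                     # combined update of all subproblems
        dAd = d @ A @ d
        # exact line search along d (closed form for a quadratic cost)
        step = -((A @ x_new) @ d) / dAd if dAd > 0 else 0.0
        x = x_new + step * d
    return x

x = pattern_search(np.array([1.0, -0.5]))
print(f(x))  # far smaller than plain cyclic updates would reach in 20 sweeps
```

The combined direction d eventually aligns with the narrow valley of the cost, so the line search takes the long step that individual coordinate updates cannot.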
Building Blocks For Variational Bayesian Learning Of Latent Variable Models
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2006
Abstract

Cited by 11 (8 self)
We introduce standardised building blocks designed to be used with variational Bayesian learning. The blocks include Gaussian variables, summation, multiplication, nonlinearity, and delay. A large variety of latent variable models can be constructed from these blocks, including variance models and nonlinear modelling, which are lacking from most existing variational systems. The introduced blocks are designed to fit together and to yield efficient update rules. Practical implementation of various models is easy thanks to an associated software package which derives the learning formulas automatically once a specific model structure has been fixed. Variational Bayesian learning provides a cost function which is used both for updating the variables of the model and for optimising the model structure. All the computations can be carried out locally, resulting in linear computational complexity. We present...
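One way to picture how such blocks "fit together" is that each block propagates a mean and a variance, so blocks compose freely. The classes and moment formulas below are a hypothetical sketch of that idea, not the package's actual interface.

```python
class Gaussian:
    """A variable summarised by its posterior mean and variance."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var

def add(x, y):
    """Summation block: for independent variables, means and variances add."""
    return Gaussian(x.mean + y.mean, x.var + y.var)

def mul(x, y):
    """Multiplication block for independent variables:
    E[xy] = E[x]E[y];  Var(xy) = (m_x^2 + v_x)(m_y^2 + v_y) - m_x^2 m_y^2."""
    m = x.mean * y.mean
    v = (x.mean ** 2 + x.var) * (y.mean ** 2 + y.var) - m ** 2
    return Gaussian(m, v)

# Blocks chain into larger models because each emits the same summary
s = add(mul(Gaussian(2.0, 0.1), Gaussian(3.0, 0.2)), Gaussian(0.0, 1.0))
print(s.mean, s.var)  # mean 6.0, variance ≈ 2.72
```

Because every block consumes and produces the same (mean, variance) summary, update rules stay local to each block, which is what gives the linear computational complexity mentioned in the abstract.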
Speeding up cyclic update schemes by pattern searches
 In Proc. of the 9th Int. Conf. on Neural Information Processing (ICONIP’02
, 2002
Abstract

Cited by 6 (0 self)
A popular strategy for dealing with large parameter estimation problems is to split the problem into manageable subproblems and solve them cyclically one by one until convergence. We address a well-known problem with this strategy, namely slow convergence under low noise. We propose using so-called pattern searches, which consist of a parameterwise update phase followed by a line search. The search direction of the line search is computed by combining the individual updates of all subproblems. The approach can be used to accelerate learning of several methods proposed in the literature without the need for large algorithmic modifications such as evaluation of global gradients. The proposed modification is shown to reduce the convergence time in a realistic independent component analysis (ICA) problem by more than 85%.
A Dynamic Navigation Model for Unmanned Aircraft Systems and an Application to Autonomous Front-On Environmental Sensing and Photography Using Low-Cost Sensor Systems
, 2015
Title in English: Nonlinear Switching State-Space Models. Code and name of professorship: Tik-61 Informaatiotekniikka (Information Technology)
, 2001
Abstract
Abstract: The switching nonlinear state-space model (switching NSSM) is a combination of two dynamical models. The nonlinear state-space model (NSSM) is continuous, while the hidden Markov model (HMM) is discrete. In the switching model, the NSSM models the short-term dynamics of the data. The HMM describes longer-term changes and controls the NSSM. In this work, a switching NSSM and a learning algorithm for its parameters are developed. The learning algorithm is based on Bayesian ensemble learning, in which the true posterior distribution is approximated with a more tractable one. The fit is based on probability mass in order to avoid overfitting. The implementation of the algorithm builds on Dr. Harri Valpola's earlier NSSM algorithm, which uses multilayer perceptron networks to model the nonlinear mappings of the NSSM. The computational complexity of the NSSM algorithm restricts the structure of the switching model: only a single dynamical model can be used, in which case the HMM only models the prediction errors of the NSSM. This approach is computationally light but makes only limited use of the HMM. The algorithm is tested on real speech data. The switching NSSM proves to be...
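The light-weight scheme in which the HMM models only the prediction errors of the continuous model can be sketched, hypothetically, as a Gaussian-emission HMM over scalar errors scored with the standard forward algorithm; all names and numbers below are illustrative, not from the thesis.

```python
import numpy as np

def forward_loglik(errors, trans, start, variances):
    """Log-likelihood of prediction errors under a Gaussian-emission HMM,
    computed with the forward algorithm and per-step normalisation."""
    loglik = 0.0
    alpha = np.asarray(start, float)
    for t, e in enumerate(np.asarray(errors, float)):
        # per-state Gaussian density of this prediction error
        emit = np.exp(-0.5 * e ** 2 / variances) / np.sqrt(2 * np.pi * variances)
        alpha = emit * (alpha if t == 0 else alpha @ trans)
        norm = alpha.sum()
        loglik += np.log(norm)
        alpha = alpha / norm
    return loglik

trans = np.array([[0.95, 0.05], [0.10, 0.90]])  # sticky regime switching
start = np.array([0.5, 0.5])
variances = np.array([0.01, 1.0])               # "quiet" vs "noisy" regime
print(forward_loglik([0.02, -0.05, 1.3, -0.9], trans, start, variances))
```

Here the continuous model's role is hidden inside the error sequence: the HMM only decides, per time step, which error-variance regime is active.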