Results 1 – 10 of 234
Model Multiple Heterogeneity via Hierarchical Multi-Latent Space Learning
"... In many real-world applications such as satellite image analysis, gene function prediction, and insider threat detection, the data collected from heterogeneous sources often exhibit multiple types of heterogeneity, such as task heterogeneity, view heterogeneity, and label heterogeneity. To addres ..."
Cited by 1 (0 self)
. To address this problem, we propose a Hierarchical Multi-Latent Space (HiMLS) learning approach to jointly model the triple types of heterogeneity. The basic idea is to learn a hierarchical multi-latent space by which we can simultaneously leverage the task relatedness, view consistency and the label
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Cited by 770 (3 self)
been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs
and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete
Parallel Networks that Learn to Pronounce English Text
 Complex Systems
, 1987
"... This paper describes NETtalk, a class of massively parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed h ..."
Cited by 549 (5 self)
is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular
Features of similarity.
 Psychological Review
, 1977
"... Similarity plays a fundamental role in theories of knowledge and behavior. It serves as an organizing principle by which individuals classify objects, form concepts, and make generalizations. Indeed, the concept of similarity is ubiquitous in psychological theory. It underlies the accounts of stimu ..."
Cited by 1455 (2 self)
. These models represent objects as points in some coordinate space such that the observed dissimilarities between objects correspond to the metric distances between the respective points. Practically all analyses of proximity data have been metric in nature, although some (e.g., hierarchical clustering) yield
Flexible Latent Variable Models for Multi-Task Learning
"... Summary. Given multiple prediction problems such as regression and classification, we are interested in a joint inference framework which can effectively borrow information among tasks to improve the prediction accuracy, especially when the number of training examples per problem is small. In this p ..."
Cited by 26 (1 self)
. In this paper we propose a probabilistic framework which can support a set of latent variable models for different multi-task learning scenarios. We show that the framework is a generalization of standard learning methods for single prediction problems and it can effectively model the shared structure among
Hierarchical probabilistic neural network language model
 In AISTATS
, 2005
"... In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbo ..."
Cited by 101 (4 self)
symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition
Hierarchical models in the brain
 PLoS Computational Biology
, 2008
"... This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of a ..."
Cited by 46 (9 self)
Learning Hierarchical Relationships among Partially Ordered Objects with Heterogeneous Attributes and Links
"... Objects linking with many other objects in an information network may imply various semantic relationships. Uncovering such knowledge is essential for role discovery, data cleaning, and better organization of information networks, especially when the semantically meaningful relationships are hidden ..."
Cited by 4 (1 self)
or mingled with noisy links and attributes. In this paper we study a generic form of relationship along which objects can form a tree-like structure, a pervasive structure in various domains. We formalize the problem of uncovering hierarchical relationships in a supervised setting. In general, local features
Relational Markov Models and their Application to Adaptive Web Navigation
, 2002
"... Relational Markov models (RMMs) are a generalization of Markov models where states can be of different types, with each type described by a different set of variables. The domain of each variable can be hierarchically structured, and shrinkage is carried out over the cross product of these hierarchi ..."
Cited by 90 (9 self)
Algebraic Geometrical Methods for Hierarchical Learning Machines
, 2001
"... Hierarchical learning machines such as layered perceptrons, radial basis functions, and Gaussian mixtures are non-identifiable learning machines, whose Fisher information matrices are not positive definite. This fact shows that conventional statistical asymptotic theory cannot be applied to the neural ..."
Cited by 23 (13 self)
between the learning curve of a hierarchical learning machine and the algebraic geometrical structure of the parameter space. We establish an algorithm to calculate the Bayesian stochastic complexity based on blowing-up technology in algebraic geometry and prove that the Bayesian generalization error of a