Results 1–10 of 18
Hidden process models
In International Conference on Machine Learning (ICML), 2006
"... We introduce Hidden Process Models (HPMs), a class of probabilistic models for multivariate time series data. The design of HPMs has been motivated by the challenges of modeling hidden cognitive processes in the brain, given functional Magnetic Resonance Imaging (fMRI) data. fMRI data is sparse, hig ..."
Abstract

Cited by 9 (4 self)
 Add to MetaCart
We introduce Hidden Process Models (HPMs), a class of probabilistic models for multivariate time series data. The design of HPMs has been motivated by the challenges of modeling hidden cognitive processes in the brain, given functional Magnetic Resonance Imaging (fMRI) data. fMRI data is sparse, high-dimensional, non-Markovian, and often involves prior knowledge of the form “hidden event A occurs n times within the interval [t, t′].” HPMs provide a generalization of the widely used General Linear Model approaches to fMRI analysis, and HPMs can also be viewed as a subclass of Dynamic Bayes Networks.
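The GLM baseline that HPMs generalize can be illustrated with a minimal sketch: a stimulus train convolved with a toy hemodynamic response forms a regressor, and per-voxel weights are fit by least squares. The HRF shape, onsets, and noise level below are made up for illustration; this shows the standard GLM, not the HPM itself.

```python
import numpy as np

def hrf(length=8):
    """Toy hemodynamic response: rises then decays (crude gamma-like shape)."""
    t = np.arange(length)
    return t * np.exp(-t)

rng = np.random.default_rng(0)
T = 100
stimulus = np.zeros(T)
stimulus[[10, 40, 70]] = 1.0              # event onsets (known in the GLM setting)

# Design matrix: stimulus convolved with the HRF, plus an intercept column.
regressor = np.convolve(stimulus, hrf())[:T]
X = np.column_stack([regressor, np.ones(T)])

# Synthetic single-voxel signal: true amplitude 2.0, baseline 0.5, Gaussian noise.
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(T)

# Ordinary least squares, as in standard GLM analysis.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

HPMs relax exactly the assumption baked into `stimulus` above: that the timing of each hidden event is known in advance.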
Classification in Very High Dimensional Problems with Handfuls of Examples
"... Abstract. Modern classification techniques perform well when the number of training examples exceed the number of features. If, however, the number of features greatly exceed the number of training examples, then these same techniques can fail. To address this problem, we present a hierarchical Baye ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
Modern classification techniques perform well when the number of training examples exceeds the number of features. If, however, the number of features greatly exceeds the number of training examples, then these same techniques can fail. To address this problem, we present a hierarchical Bayesian framework that shares information between features by modeling similarities between their parameters. We believe this approach is applicable to many sparse, high-dimensional problems and is especially relevant to those with both spatial and temporal components. One such problem is fMRI time series, and we present a case study showing how we can successfully classify in this domain with 80,000 original features and only 2 training examples per class.
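The flavor of sharing information between features can be sketched as classic shrinkage of noisy per-feature estimates toward a shared prior mean. This is an assumed stand-in for the paper's hierarchical model, not its actual construction; the data and variances below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_per_class = 1000, 2        # many features, only 2 examples per class
true_diff = 1.0                          # every feature separates the classes by 1.0

class_a = true_diff / 2 + rng.standard_normal((n_per_class, n_features))
class_b = -true_diff / 2 + rng.standard_normal((n_per_class, n_features))

# Noisy per-feature estimates of the class-mean difference (noise variance = 1.0).
raw = class_a.mean(axis=0) - class_b.mean(axis=0)

# Empirical-Bayes shrinkage: pool across features to estimate a shared prior.
prior_mean = raw.mean()
tau2 = max(raw.var() - 1.0, 1e-6)        # between-feature variance, noise removed
shrink = tau2 / (tau2 + 1.0)
shrunk = prior_mean + shrink * (raw - prior_mean)

mse_raw = np.mean((raw - true_diff) ** 2)
mse_shrunk = np.mean((shrunk - true_diff) ** 2)
```

Because the features really are similar here, borrowing strength across all 1000 of them beats estimating each one from its two examples alone.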
Evaluation Results for a Query-Based Diagnostics Application
"... QueryBased Diagnostics refers to the simultaneous building and use of Bayes network diagnostic models, removing the distinction between elicitation and inference phases. In this paper we describe a successful eld trial of such a system in manufacturing. The detailed session logs that are collected ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
Query-Based Diagnostics refers to the simultaneous building and use of Bayes network diagnostic models, removing the distinction between elicitation and inference phases. In this paper we describe a successful field trial of such a system in manufacturing. The detailed session logs collected during use of the system reveal how the model evolved in use, and pose challenging questions on how the models can be adapted to session outcomes.
Combining subjective probabilities and data in training Markov logic networks
In Lecture Notes in Computer Science, 2012
"... Abstract. Markov logic is a rich language that allows one to specify a knowledge base as a set of weighted firstorder logic formulas, and to define a probability distribution over truth assignments to ground atoms using this knowledge base. Usually, the weight of a formula cannot be related to the ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
Markov logic is a rich language that allows one to specify a knowledge base as a set of weighted first-order logic formulas, and to define a probability distribution over truth assignments to ground atoms using this knowledge base. Usually, the weight of a formula cannot be related to the probability of the formula without taking into account the weights of the other formulas. In general, this is not an issue, since the weights are learned from training data. However, in many domains (e.g. healthcare, dependable systems, etc.), little or no training data may be available, but one has access to a domain expert whose knowledge is available in the form of subjective probabilities. Within the framework of Bayesian statistics, we present a formalism for using a domain expert’s knowledge for weight learning. Our approach defines priors that are different from and more general than previously used Gaussian priors over weights. We show how one can learn weights in an MLN by combining subjective probabilities and training data, without requiring that the domain expert provide consistent knowledge. Additionally, we provide a formalism for capturing conditional subjective probabilities, which are often easier to obtain and more reliable than non-conditional probabilities. We demonstrate the effectiveness of our approach by extensive experiments in a domain that models failure dependencies in a cyber-physical system. Moreover, we demonstrate the advantages of using our proposed prior over nonzero-mean Gaussian priors in a commonly cited social network MLN testbed.
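A much-simplified stand-in for combining an expert's subjective probability with data: encode the belief as a Beta prior on a single Bernoulli parameter, with an assumed "strength" knob for how many observations the expert's opinion is worth. The paper does this for MLN formula weights, which is considerably more involved; this only shows the Bayesian flavor.

```python
def combine(expert_p, expert_strength, successes, failures):
    """Expert belief encoded as Beta(expert_p * s, (1 - expert_p) * s);
    return the posterior mean after observing the data."""
    a = expert_p * expert_strength + successes
    b = (1 - expert_p) * expert_strength + failures
    return a / (a + b)

# Expert says 0.9, with confidence worth 10 observations; data then shows
# 8 successes and 2 failures in 10 trials.
posterior = combine(0.9, 10, successes=8, failures=2)
```

The posterior mean, (9 + 8) / 20 = 0.85, sits between the expert's 0.9 and the empirical 0.8, weighted by the assumed strength.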
Improving Bayesian Network Parameter Learning using Constraints
"... This paper describes a new approach to unify constraints on parameters with training data to perform parameter estimation in Bayesian networks of known structure. The method is general in the sense that any convex constraint is allowed, which includes many proposals in the literature. Driven by a ma ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
This paper describes a new approach to unifying constraints on parameters with training data to perform parameter estimation in Bayesian networks of known structure. The method is general in the sense that any convex constraint is allowed, which includes many proposals in the literature. Driven by a maximum entropy criterion and the Imprecise Dirichlet Model, we present a constrained convex optimization formulation to combine priors, constraints and data. Experiments indicate the benefits of this framework.
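One concrete instance of parameter estimation under a convex constraint, as a toy stand-in for the paper's general framework: a multinomial distribution with a lower bound on one probability. For this particular constraint the solution follows in closed form from the KKT conditions, so no general-purpose solver is needed.

```python
import numpy as np

def constrained_mle(counts, lower_bound):
    """Maximum-likelihood multinomial estimate subject to theta[0] >= lower_bound.
    By the KKT conditions: if the unconstrained MLE already satisfies the bound,
    keep it; otherwise the constraint is active, so pin theta[0] at the bound
    and distribute the remaining mass proportionally to the other counts."""
    counts = np.asarray(counts, dtype=float)
    theta = counts / counts.sum()            # unconstrained MLE
    if theta[0] >= lower_bound:
        return theta
    rest = counts[1:] / counts[1:].sum()
    out = np.empty_like(theta)
    out[0] = lower_bound
    out[1:] = (1.0 - lower_bound) * rest
    return out

# Counts give MLE (0.2, 0.5, 0.3); the bound theta[0] >= 0.5 is violated,
# so the constrained estimate becomes (0.5, 0.3125, 0.1875).
theta = constrained_mle([2, 5, 3], lower_bound=0.5)
```

General convex constraints, priors, and the Imprecise Dirichlet Model require the full optimization formulation the paper develops; this example only shows the shape of the problem.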
Automated Refinement of Bayes Networks’ Parameters based on Test Ordering Constraints
"... In this paper, we derive a method to refine a Bayes network diagnostic model by exploiting constraints implied by expert decisions on test ordering. At each step, the expert executes an evidence gathering test, which suggests the test’s relative diagnostic value. We demonstrate that consistency with ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
In this paper, we derive a method to refine a Bayes network diagnostic model by exploiting constraints implied by expert decisions on test ordering. At each step, the expert executes an evidence-gathering test, which suggests the test’s relative diagnostic value. We demonstrate that consistency with an expert’s test selection leads to non-convex constraints on the model parameters. We incorporate these constraints by augmenting the network with nodes that represent the constraint likelihoods. Gibbs sampling, stochastic hill climbing and greedy search algorithms are proposed to find a MAP estimate that takes test ordering constraints and any available data into account. We demonstrate our approach on diagnostic sessions from a manufacturing scenario.
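Stochastic hill climbing on a penalized objective can be sketched on a toy one-parameter problem. The penalty value, step size, and Bernoulli likelihood are all illustrative assumptions, and the toy constraint here happens to be a simple interval; the same search applies unchanged when the constraints are non-convex, which is why the paper reaches for it.

```python
import math
import random

def objective(p, data_ll, constraint_ok):
    # Log-likelihood plus a large penalty when a constraint is violated.
    return data_ll(p) + (0.0 if constraint_ok(p) else -1e6)

def hill_climb(data_ll, constraint_ok, start=0.5, steps=2000, seed=0):
    """Propose Gaussian perturbations; accept any non-worsening move."""
    rng = random.Random(seed)
    p = start
    best = objective(p, data_ll, constraint_ok)
    for _ in range(steps):
        cand = min(max(p + rng.gauss(0, 0.05), 1e-6), 1 - 1e-6)
        score = objective(cand, data_ll, constraint_ok)
        if score >= best:
            p, best = cand, score
    return p

# Toy problem: Bernoulli with 7 successes / 3 failures (MLE 0.7), but an
# expert-derived constraint p <= 0.6 rules the MLE out; the search settles
# just inside the constraint boundary.
ll = lambda p: 7 * math.log(p) + 3 * math.log(1 - p)
ok = lambda p: p <= 0.6
p_hat = hill_climb(ll, ok)
```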
Modeling fMRI data generated by overlapping cognitive processes with unknown onsets using Hidden Process Models
"... We present a new method for modeling fMRI time series data called Hidden Process Models (HPMs). Like several earlier models for fMRI analysis, Hidden Process Models assume the observed data is generated by a sequence of underlying mental processes that may be triggered by stimuli. HPMs go beyond the ..."
Abstract
 Add to MetaCart
We present a new method for modeling fMRI time series data called Hidden Process Models (HPMs). Like several earlier models for fMRI analysis, Hidden Process Models assume the observed data is generated by a sequence of underlying mental processes that may be triggered by stimuli. HPMs go beyond these earlier models by allowing for processes whose timing may be unknown, and that might not be directly tied to specific stimuli. HPMs provide a principled, probabilistic framework for simultaneously learning the contribution of each process to the observed data, as well as the timing and identities of each instantiated process. They also provide a framework for evaluating and selecting among competing models that assume different numbers and types of underlying mental processes. We describe the HPM framework and its learning and inference algorithms, and present experimental results demonstrating its use on simulated and real fMRI data. Our experiments compare several models of the data using cross-validated data log-likelihood in an fMRI study involving overlapping mental processes whose timings are not fully known. Key words: functional magnetic resonance imaging, statistical methods, machine learning, hemodynamic response, mental chronometry
Refining Diagnostic POMDPs with User Feedback
"... Bayesian networks have been widely used for diagnostics. These models can be extended to POMDPs to select the best action. This allows modeling partial observability due to causes and the utility of executing various tests. We describe the problem of refining diagnostic POMDPs when user feedback is ..."
Abstract
 Add to MetaCart
Bayesian networks have been widely used for diagnostics. These models can be extended to POMDPs to select the best action, allowing one to model the partial observability of causes and the utility of executing various tests. We describe the problem of refining diagnostic POMDPs when user feedback is available. We propose utilizing user feedback to pose constraints on the model, i.e., the transition, observation and reward functions. These constraints can then be used to efficiently learn the POMDP model and incorporate expert knowledge about the problem.
Decoding Brain Activity Using the Zero-Shot Learning Model
, 2011
"... Machine learning algorithms have been successfully applied to learning classifiers in many domains such as computer vision, fraud detection, and brain image analysis. Typically, classifiers are trained to predict a class value given a set of labeled training data that includes all possible class val ..."
Abstract
 Add to MetaCart
Machine learning algorithms have been successfully applied to learning classifiers in many domains such as computer vision, fraud detection, and brain image analysis. Typically, classifiers are trained to predict a class value given a set of labeled training data that includes all possible class values, and sometimes additional unlabeled training data. Little research has been performed where the possible values for the class variable include values that have been omitted from the training examples. This is an important problem setting, especially in domains where the class value can take on many values, and the cost of obtaining labeled examples for all values is high. We show that the key to addressing this problem is not predicting the held-out classes directly, but rather recognizing the semantic properties of the classes such as their physical or functional attributes. We formalize this method as zero-shot learning and show that by utilizing semantic knowledge mined from large text corpora …
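Attribute-based zero-shot prediction can be sketched as: map the input to a semantic attribute vector, then choose the class whose attribute signature is nearest, including classes with no training examples. The class names and attribute codes below are invented for illustration, and the attribute detector's output is simply given rather than learned.

```python
import numpy as np

# Hypothetical semantic signatures: [furry, large, striped].
class_attributes = {
    "bear":  np.array([1, 1, 0]),
    "zebra": np.array([1, 1, 1]),
    "cat":   np.array([1, 0, 0]),   # held out: no training examples for "cat"
}

def zero_shot_predict(predicted_attrs):
    """Return the class whose signature is nearest in Hamming distance."""
    return min(class_attributes,
               key=lambda c: int(np.sum(class_attributes[c] != predicted_attrs)))

# Suppose an attribute detector, trained only on bears and zebras, outputs
# [furry, not large, not striped] for a new image: "cat" is recovered even
# though no cat was ever seen in training.
pred = zero_shot_predict(np.array([1, 0, 0]))
```

The work decoding brain activity applies this idea with attribute detectors trained on fMRI data and semantic codes mined from text corpora.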