Results 1–6 of 6
Structure learning in random fields for heart motion abnormality detection
In CVPR, 2008
"... Coronary Heart Disease can be diagnosed by assessing the regional motion of the heart walls in ultrasound images of the left ventricle. Even for experts, ultrasound images are difficult to interpret leading to high intraobserver variability. Previous work indicates that in order to approach this pr ..."
Abstract

Cited by 40 (5 self)
 Add to MetaCart
Coronary Heart Disease can be diagnosed by assessing the regional motion of the heart walls in ultrasound images of the left ventricle. Even for experts, ultrasound images are difficult to interpret, leading to high intra-observer variability. Previous work indicates that, to approach this problem, the interactions between the different heart regions and their overall influence on the clinical condition of the heart need to be considered. To do this, we propose a method for jointly learning the structure and parameters of conditional random fields, formulating these tasks as a convex optimization problem. We consider block-ℓ1 regularization for each set of features associated with an edge, and formalize an efficient projection method to find the globally optimal penalized maximum-likelihood solution. We perform extensive numerical experiments comparing the presented method with related methods that approach the structure-learning problem differently. We verify the robustness of our method on echocardiograms collected in routine clinical practice at one hospital.
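The block-ℓ1 penalty described in this abstract zeroes out all features of an edge as a group, so an edge either enters or leaves the learned structure as a whole. A minimal sketch of the block soft-thresholding step that such penalized solvers rely on (the grouping, step size, and penalty value here are illustrative assumptions, not the paper's exact projection method):

```python
import numpy as np

def block_soft_threshold(w, step, lam):
    """Proximal step for a block-l1 (group-lasso) penalty: shrinks an
    edge's whole feature vector toward zero as a group, and removes
    the edge entirely when its norm falls below step * lam."""
    norm = np.linalg.norm(w)
    if norm <= step * lam:
        return np.zeros_like(w)          # edge pruned from the structure
    return (1.0 - step * lam / norm) * w  # edge kept, uniformly shrunk

# Toy example: feature weights for two candidate edges.
strong_edge = np.array([1.5, -2.0, 0.5])
weak_edge = np.array([0.05, -0.02, 0.01])

kept = block_soft_threshold(strong_edge, step=0.1, lam=1.0)
pruned = block_soft_threshold(weak_edge, step=0.1, lam=1.0)
```

Because the threshold acts on the group norm rather than on individual coefficients, a weakly supported edge is dropped with all of its features at once.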
Learning Bayesian Network Structure using LP Relaxations
"... We propose to solve the combinatorial problem of finding the highest scoring Bayesian network structure from data. This structure learning problem can be viewed as an inference problem where the variables specify the choice of parents for each node in the graph. The key combinatorial difficulty aris ..."
Abstract

Cited by 19 (2 self)
 Add to MetaCart
We propose to solve the combinatorial problem of finding the highest-scoring Bayesian network structure from data. This structure-learning problem can be viewed as an inference problem where the variables specify the choice of parents for each node in the graph. The key combinatorial difficulty arises from the global constraint that the graph structure has to be acyclic. We cast the structure-learning problem as a linear program over the polytope defined by valid acyclic structures. In relaxing this problem, we maintain an outer-bound approximation to the polytope and iteratively tighten it by searching over a new class of valid constraints. If an integral solution is found, it is guaranteed to be the optimal Bayesian network. When the relaxation is not tight, the fast dual algorithms we develop remain useful in combination with a branch-and-bound method. Empirical results suggest that the method is competitive with, or faster than, alternative exact methods based on dynamic programming.
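The LP-over-the-acyclic-polytope idea above can be sketched on a tiny instance. This sketch (assuming SciPy) uses random stand-in local scores, a 3-node graph, and adds every "cluster" constraint up front (each subset of nodes must contain a node whose parents all lie outside it) rather than the paper's iterative tightening; the LP optimum is then an upper bound on the best DAG score:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

n = 3
nodes = range(n)

# One 0/1 variable per (node, candidate parent set) choice.
choices = []
for i in nodes:
    others = [j for j in nodes if j != i]
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            choices.append((i, frozenset(S)))

rng = np.random.default_rng(0)
score = {c: rng.normal() for c in choices}   # toy local scores
idx = {c: k for k, c in enumerate(choices)}
c_vec = np.array([-score[c] for c in choices])  # linprog minimizes

# Exactly one parent set per node.
A_eq = np.zeros((n, len(choices)))
for (i, S), k in idx.items():
    A_eq[i, k] = 1.0
b_eq = np.ones(n)

# Cluster (acyclicity) constraints: every cluster C of size >= 2 must
# contain at least one node whose chosen parents all lie outside C.
A_ub, b_ub = [], []
for r in range(2, n + 1):
    for C in itertools.combinations(nodes, r):
        Cset, row = set(C), np.zeros(len(choices))
        for (i, S), k in idx.items():
            if i in Cset and not (S & Cset):
                row[k] = -1.0                 # ">= 1" written as "<= -1"
        A_ub.append(row)
        b_ub.append(-1.0)

res = linprog(c_vec, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
lp_bound = -res.fun                           # outer bound on the best DAG score

# Brute-force exact optimum for comparison (feasible only at this size).
def acyclic(parent_sets):
    remaining = set(nodes)
    while remaining:
        free = [i for i in remaining if not (parent_sets[i] & remaining)]
        if not free:
            return False
        remaining -= set(free)
    return True

best = max(sum(score[(i, ps[i])] for i in nodes)
           for ps in itertools.product(
               *[[S for (j, S) in choices if j == i] for i in nodes])
           if acyclic(ps))
```

When the LP solution happens to be integral it is itself the optimal network; otherwise `lp_bound` still upper-bounds `best`, which is what makes the relaxation usable inside branch-and-bound.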
Learning gene regulatory networks via globally regularized risk minimization
In Proceedings of the Fifth Annual RECOMB Satellite Workshop on Comparative Genomics, 2007
"... Abstract. Learning the structure of a gene regulatory network from timeseries gene expression data is a significant challenge. Most approaches proposed in the literature to date attempt to predict the regulators of each target gene individually, but fail to share regulatory information between rela ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
Learning the structure of a gene regulatory network from time-series gene expression data is a significant challenge. Most approaches proposed in the literature to date attempt to predict the regulators of each target gene individually, but fail to share regulatory information between related genes. In this paper, we propose a new globally regularized risk minimization approach to address this problem. Our approach first clusters genes according to their time-series expression profiles, identifying related groups of genes. Given a clustering, we then develop a simple technique that exploits the assumption that genes with similar expression patterns are likely to be co-regulated, by encouraging the genes in the same group to share common regulators. Our experiments on both synthetic and real gene expression data suggest that our new approach is more effective at identifying important transcription-factor-based regulatory mechanisms than the standard independent approach and a prototype-based approach.
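One common way to encourage genes in a cluster to share regulators is an ℓ1/ℓ2 (group) penalty on each regulator's row of coefficients across all genes in the cluster, so a regulator is selected or dropped for the whole group at once. A sketch under that assumption (the solver, penalty weight, and synthetic data are illustrative, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
T, R, G = 40, 6, 3       # time points, candidate regulators, genes in one cluster
X = rng.normal(size=(T, R))               # regulator expression at time t
true_W = np.zeros((R, G))
true_W[0] = [2.0, -1.5, 1.0]              # regulator 0 drives the whole cluster
Y = X @ true_W + 0.05 * rng.normal(size=(T, G))   # targets at time t+1

def fit_shared_regulators(X, Y, lam=2.0, iters=300):
    """Proximal-gradient solver: squared loss plus an l1/l2 penalty on
    each regulator's row, so regulators are chosen jointly for every
    gene in the cluster rather than per target gene."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(iters):
        W = W - step * (X.T @ (X @ W - Y))           # gradient step
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W = W * np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    return W

W_hat = fit_shared_regulators(X, Y)
row_norms = np.linalg.norm(W_hat, axis=1)   # one norm per candidate regulator
```

The row-wise shrinkage is what ties the genes together: a regulator survives only if it helps predict the cluster as a whole, which is the "shared regulators" assumption in the abstract.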
Lipschitz Parametrization of Probabilistic Graphical Models
"... We show that the loglikelihood of several probabilistic graphical models is Lipschitz continuous with respect to the ℓpnorm of the parameters. We discuss several implications of Lipschitz parametrization. We present an upper bound of the KullbackLeibler divergence that allows understanding method ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We show that the log-likelihood of several probabilistic graphical models is Lipschitz continuous with respect to the ℓp-norm of the parameters. We discuss several implications of Lipschitz parametrization. We present an upper bound on the Kullback-Leibler divergence that allows methods that penalize the ℓp-norm of differences of parameters to be understood as minimizing that upper bound. The expected log-likelihood is lower-bounded by the negative ℓp-norm, which sheds light on the generalization ability of probabilistic models. The exponential of the negative ℓp-norm appears in a lower bound on the Bayes error rate, which shows that it is reasonable to use parameters as features in algorithms that rely on metric spaces (e.g., classification, dimensionality reduction, clustering). Our results do not rely on specific algorithms for learning the structure or parameters. We show preliminary results for activity recognition and temporal segmentation.
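A KL bound of the kind described above follows quickly from the Lipschitz property; a schematic derivation (the constant K and the exact conditions on the model family are simplified here relative to the paper):

```latex
% Assume \theta \mapsto \log p_\theta(x) is K-Lipschitz in \|\cdot\|_p for every x:
%   |\log p_{\theta_1}(x) - \log p_{\theta_2}(x)| \le K \,\lVert \theta_1 - \theta_2 \rVert_p
% Taking the expectation under p_{\theta_1}:
\mathrm{KL}\!\left(p_{\theta_1} \,\middle\|\, p_{\theta_2}\right)
  = \mathbb{E}_{x \sim p_{\theta_1}}\!\left[\log p_{\theta_1}(x) - \log p_{\theta_2}(x)\right]
  \le K \,\lVert \theta_1 - \theta_2 \rVert_p .
```

This is the sense in which penalizing ℓp-norms of parameter differences can be read as minimizing an upper bound on the divergence between the corresponding distributions.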
Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data
"... We formalize and study the problem of learning the structure and parameters of graphical games from strictly behavioral data. We cast the problem as a maximum likelihood estimation (MLE) based on a generative model defined by the purestrategy Nash equilibria (PSNE) of the game. The formulation brin ..."
Abstract
 Add to MetaCart
We formalize and study the problem of learning the structure and parameters of graphical games from strictly behavioral data. We cast the problem as maximum likelihood estimation (MLE) based on a generative model defined by the pure-strategy Nash equilibria (PSNE) of the game. The formulation brings out the interplay between goodness-of-fit and model complexity: good models capture the equilibrium behavior represented in the data while controlling the true number of PSNE, including those potentially unobserved. We provide a generalization bound for the MLE. We discuss several optimization algorithms, including convex loss minimization (CLM), sigmoidal approximations, and exhaustive search. We formally prove that games in our hypothesis space have a small true number of PSNE with high probability; thus, CLM is sound. We illustrate our approach and discuss promising results on synthetic data and U.S. congressional voting records.
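The PSNE set that supports the generative model above can be enumerated directly for a small linear influence game: a joint action is an equilibrium when no single player gains by flipping their action. A sketch (the 3-player weights below are assumed for illustration, not taken from the paper):

```python
import itertools
import numpy as np

def psne(W, b):
    """Enumerate the pure-strategy Nash equilibria of a linear influence
    game: x in {-1,+1}^n is a PSNE iff each player's action agrees in
    sign with its local influence, x_i * (W[i] @ x - b[i]) >= 0."""
    n = len(b)
    out = []
    for x in itertools.product([-1, 1], repeat=n):
        xa = np.array(x)
        if all(xa[i] * (W[i] @ xa - b[i]) >= 0 for i in range(n)):
            out.append(x)
    return out

# Illustrative 3-player game: symmetric positive influences, no biases.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
b = np.zeros(3)
equilibria = psne(W, b)   # the generative model is supported on this set
```

With all-positive influences the only stable profiles are the two consensus outcomes, which is the kind of small PSNE set the paper's soundness argument for CLM relies on.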
On the Identifiability of Games
"... Several games with different coefficients can lead to the same PSNE set. As a simple example that illustrates the issue of identifiability, consider the three following LIGs with the same PSNE sets, i.e. N E(Wk, 0) = {(−1, −1, −1), (+1, +1, +1)} for k = 1, 2, 3: ..."
Abstract
 Add to MetaCart
Several games with different coefficients can lead to the same PSNE set. As a simple example that illustrates the issue of identifiability, consider the following three linear influence games (LIGs) with the same PSNE sets, i.e., NE(W_k, 0) = {(−1, −1, −1), (+1, +1, +1)} for k = 1, 2, 3:
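The non-identifiability claim is easy to check computationally: since a PSNE only constrains the sign of each player's local influence, rescaling the weights (with zero biases) leaves the equilibrium set unchanged. A small self-contained check with two assumed weight matrices (not the paper's three examples):

```python
import itertools
import numpy as np

def psne_set(W, b):
    # x is a PSNE of the linear influence game (W, b) iff no player can
    # gain by flipping: x_i * (W[i] @ x - b[i]) >= 0 for every player i.
    n = len(b)
    return {x for x in itertools.product([-1, 1], repeat=n)
            if all(x[i] * (W[i] @ np.array(x) - b[i]) >= 0
                   for i in range(n))}

# Two games with different coefficients but identical equilibria.
W1 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
W2 = 2.0 * W1              # same signs of influence, different magnitudes
b = np.zeros(3)
```

Because the data only reveal which joint actions are equilibria, `W1` and `W2` are indistinguishable from behavioral observations alone, which is exactly the identifiability issue the abstract raises.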