
## Factorial Markov Random Fields (2002)


### Download Links

- [mri.med.cornell.edu]
- [www.cs.cornell.edu]
- DBLP

### Other Repositories/Bibliography

Venue: In European Conference on Computer Vision (ECCV)

Citations: 10 (0 self)

### Citations

11970 | Maximum likelihood from incomplete data via the em algorithm
- Dempster, Laird, et al.
- 1977
Citation Context: …age, i). In some other cases, we may want to infer the single most probable state of the hidden variables. The parameters of a factorial MRF can be estimated via the Expectation Maximization algorithm [4], which iterates between assuming the current parameters to compute posterior probabilities over the hidden states (E-step), and using these probabilities to maximize the expected log likelihood of th…
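The EM iteration the context describes, alternating an E-step (posterior probabilities under the current parameters) and an M-step (maximizing the expected log likelihood), can be sketched on a toy model. This is a generic two-component 1-D Gaussian mixture, not the paper's factorial-MRF E-step; the initialization and iteration count are arbitrary illustrative choices:

```python
import math

def em_gmm(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch).

    E-step: posterior responsibility of each component for each point.
    M-step: closed-form updates maximizing the expected log likelihood.
    """
    # crude initialization (an assumption, not from the paper)
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities under current parameters
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: responsibility-weighted means, variances, mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
            pi[k] = nk / len(data)
    return mu, var, pi
```

With two well-separated clusters, the recovered means land near the cluster centers after a few iterations.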

2087 | Monte Carlo sampling methods using Markov chains and their applications. Biometrika
- Hastings
- 1970
Citation Context: …ep is not exact even in a single-layer MRF, as opposed to the exact E step in a single HMM chain used in factorial HMM's. We can use Monte Carlo sampling methods using Markov chains for each MRF layer [8], which offers the theoretical assurance that the sampling procedure will ultimately converge to the correct posterior distribution. It is not a particularly attractive approach, though, since inference…
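The Markov-chain Monte Carlo approach of [8] can be illustrated with a single-site Metropolis-Hastings sampler on one binary MRF layer under an Ising-style smoothness prior. The grid size, coupling strength, and flip-one-pixel proposal below are illustrative assumptions, not the paper's setup; the point is that the accept/reject rule leaves the target distribution invariant, which is the convergence guarantee the context mentions:

```python
import math
import random

def mh_ising(n=8, beta=0.8, steps=5000, seed=0):
    """Metropolis-Hastings on an n x n binary MRF with an Ising prior.

    Proposal: flip one pixel. Acceptance: min(1, exp(-beta * delta)),
    where delta is the change in neighbor disagreements, so the chain's
    stationary distribution is the Gibbs distribution of the prior.
    """
    rng = random.Random(seed)
    f = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)

        def disagree(v):
            # count 4-neighbors disagreeing with value v at site (i, j)
            d = 0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n and f[ni][nj] != v:
                    d += 1
            return d

        new = 1 - f[i][j]
        delta = disagree(new) - disagree(f[i][j])
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            f[i][j] = new
    return f
```

As the context notes, such sampling is correct in the limit but slow in practice, which motivates the paper's graph-cut-based alternative.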

1315 | An experimental comparison of min-cut/maxflow algorithms for energy minimization in vision.
- Boykov, Kolmogorov
- 2004
Citation Context: … rapidly obtained via graph cuts [7]. Parallel to the case of factorial HMM's, we find the MAP estimates considering the h^l_p as the observables. In our experiments, we used the graph cut algorithm of [2], which is specifically designed for the kind of graphs that arise in computer vision problems. Expectation: Given the MAP estimates f* obtained above, we can approximate f̄ as: f̄^l_p = Σ f^l_p P(f^l_…
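The energy minimized in such a MAP step, unary data costs plus pairwise smoothness penalties, can be made explicit on a toy 1-D binary MRF. Brute-force enumeration is used here purely for clarity; min-cut/max-flow (the algorithm cited as [2]) finds the same minimizer efficiently on the grid graphs arising in vision. All costs below are made-up numbers:

```python
from itertools import product

def map_binary_mrf(unary, lam=1.0):
    """Exhaustive MAP for a tiny 1-D binary MRF (illustration only).

    Energy: sum_p unary[p][f_p] + lam * sum_{adjacent p,q} [f_p != f_q].
    Graph cuts minimize exactly this class of energies in polynomial
    time; enumeration just keeps the objective explicit.
    """
    n = len(unary)
    best, best_e = None, float("inf")
    for f in product((0, 1), repeat=n):
        e = sum(unary[p][f[p]] for p in range(n))
        e += lam * sum(f[p] != f[p + 1] for p in range(n - 1))
        if e < best_e:
            best, best_e = list(f), e
    return best, best_e

# hypothetical costs: sites 0-1 prefer label 1, sites 2-3 prefer label 0
labels, energy = map_binary_mrf([(2, 0), (2, 0), (1, 2), (0, 2)], lam=1.5)
```

Here the smoothness term keeps the labeling piecewise constant, yielding `[1, 1, 0, 0]` with energy 2.5.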

1128 | An introduction to variational methods for graphical models
- Jordan, Jaakkola, et al.
Citation Context: …represents a hidden variable. 2 Related work Inference problems on probabilistic models are frequently encountered in computer vision and image processing. In the structured variational approximation [10], exact algorithms for probability computation on tractable substructures are combined with variational methods to handle the interaction between the substructures, which makes the system as a whole int…

637 | Factorial hidden Markov models.
- Ghahramani, Jordan
- 1997
Citation Context: … graphical model. We extend the standard MRF model to several layers of MRF's, which is analogous to the extension from Hidden Markov Models (HMM's) to Factorial HMM's (see Figure (1) and Figure (2)) [6]. A Factorial HMM has the structure shown in Figure (1), and consists of i) a set of hidden variables, which are a priori independent, and ii) a set of observable variables, whose state depends on the…

542 | Representing moving images with layers
- Wang, Adelson
- 1994
Citation Context: …nterest (intensity, disparity, etc.) [12]. However, MRF's cannot effectively combine information over disconnected spatial regions. Layer representations are a popular way of addressing this limitation [16, 14, 1]. The main contribution of this paper is to propose a new graphical model that can represent image layers, and to develop an efficient algorithm for inference on this graphical model. We extend the stand…

531 | Image inpainting
- Bertalmio, Sapiro, et al.
- 2000
Citation Context: …er occupancy, but not layer texture, as shown in Table (3) (c) and (e), and Table (4) (d) and (f). We can either (1) incorporate texture in our formulation, or (2) use an image inpainting method such as [13]. Table 3. Layer analysis result on the flower garden sequence. (a) Left image. (b) Disparity map adopted from [11]. We run our algorithm on this disparity map, not on the or…

516 | Markov Random Field Modeling in Computer Vision,
- Li
- 1995
Citation Context: …troduction Markov Random Fields (MRF's) have been extensively used in low level vision because they can naturally incorporate the spatial coherence of measures of interest (intensity, disparity, etc.) [12]. However, MRF's cannot effectively combine information over disconnected spatial regions. Layer representations are a popular way of addressing this limitation [16, 14, 1]. The main contribution of th…

501 | Coupled hidden markov models for complex action recognition,” in CVPR,
- Brand, Oliver, et al.
- 1997
Citation Context: …eo segments and form a video summary in an unsupervised fashion. Besides the Factorial HMM, researchers have proposed other extensions such as Coupled HMM's for Chinese martial art action recognition [3], or Parallel HMM's for American sign language recognition [15]. Layer representations are known to be able to precisely segment and estimate motion for multiple objects, and to provide compact and co…

429 | Exact maximum a posteriori estimation for binary images.
- Greig, Porteous, et al.
- 1989
Citation Context: … Appendix 8.1 M Step Opaque layer: We start by expanding Q:

$$
Q = -\frac{1}{2}\sum_p \left[ i_p' C(f_p)^{-1} i_p - 2\, i_p' C(f_p)^{-1} F(f_p) + F(f_p)' C(f_p)^{-1} F(f_p) \right] - \sum_l \sum_{p,q \in \mathcal{N}} \left[ f_p^l \neq f_q^l \right] - \log Z \quad (7)
$$

From the model of opaque layers we have:

$$
P(i_p \mid f_p) = \mathcal{N}\!\left(i_p;\, F(f_p),\, C(f_p)^{-1}\right), \quad
F(f_p) = \sum_l W^l M_p^l, \quad
C(f_p)^{-1} = \sum_l C_l^{-1} M_p^l, \quad
M_p^l = f_p^l \prod_{l'=l+1}^{L} \left(1 - f_p^{l'}\right)
$$

The average value…
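The mask definition M^l_p = f^l_p ∏_{l'>l} (1 − f^{l'}_p) in the opaque-layer model above is 1 only for the frontmost occupied layer at a pixel, so F(f_p) = Σ_l W^l M^l_p picks out that layer's appearance. A minimal per-pixel sketch, using scalar appearances for simplicity (the paper's W^l would be images):

```python
def composite(f, w):
    """Opaque-layer compositing for one pixel.

    f: binary layer indicators f^l_p, with the last entry frontmost.
    w: per-layer appearance values W^l (scalars here for illustration).
    Returns F(f_p) = sum_l w[l] * M^l_p, where the mask
    M^l_p = f[l] * prod_{l' > l} (1 - f[l']) zeroes out occluded layers.
    """
    L = len(f)
    F = 0.0
    for l in range(L):
        m = f[l]
        for lp in range(l + 1, L):
            m *= (1 - f[lp])  # any occupied layer in front occludes l
        F += w[l] * m
    return F
```

For example, with layers 0 and 2 occupied, layer 2 occludes layer 0 and its appearance is returned; with no layer occupied the result is 0 (the background term would be handled separately).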

365 | Computing visual correspondence with occlusions using graph cuts.
- Kolmogorov, Zabih
- 2001
Citation Context: …on, as shown in (c) of Table (2). 5.2 Real image We used the disparity maps of the Garden-flower images and Tsukuba stereo images from a recent algorithm for computing motion and stereo with occlusions [11], as shown in Table (3) and Table (4) respectively. Notice that the result depends on the number of layers (which the user must specify), as shown in Table (3). Although both of them give reasonable layer decom…

204 | Layered Representation of Motion Video Using Robust Maximum Likelihood Estimation of Mixture Models and MDL Encoding,”
- Ayer, Sawhney
- 1995
Citation Context: …nterest (intensity, disparity, etc.) [12]. However, MRF's cannot effectively combine information over disconnected spatial regions. Layer representations are a popular way of addressing this limitation [16, 14, 1]. The main contribution of this paper is to propose a new graphical model that can represent image layers, and to develop an efficient algorithm for inference on this graphical model. We extend the stand…

172 | Smoothness in layers: Motion segmentation using nonparametric mixture estimation.
- Weiss
- 1997
Citation Context: …gorithm for robust maximum-likelihood estimation of the multiple models and their layers of support. They also applied the minimum description length principle to estimate the number of models. Weiss [17] presented an EM algorithm that can segment image sequences by fitting multiple smooth flow fields to the spatiotemporal data. He showed how to estimate a single smooth flow field, which eventually leads to the…

124 | An integrated Bayesian approach to layer extraction from image sequences,
- Torr, Szeliski, et al.
- 2001
Citation Context: …nterest (intensity, disparity, etc.) [12]. However, MRF's cannot effectively combine information over disconnected spatial regions. Layer representations are a popular way of addressing this limitation [16, 14, 1]. The main contribution of this paper is to propose a new graphical model that can represent image layers, and to develop an efficient algorithm for inference on this graphical model. We extend the stand…

63 | Parallel Hidden Markov Models for American Sign Language Recognition”, Gesture Workshop,
- Vogler, Metaxas
- 1999
Citation Context: …n. Besides the Factorial HMM, researchers have proposed other extensions such as Coupled HMM's for Chinese martial art action recognition [3], or Parallel HMM's for American sign language recognition [15]. Layer representations are known to be able to precisely segment and estimate motion for multiple objects, and to provide compact and comprehensive representations. Wang and Adelson [16] approached t…

43 | Transformed hidden Markov models: estimating mixture models of images and inferring spatial transformations in video sequences
- Jojic, Petrovic, et al.
- 2000
Citation Context: …g for legibility the Hidden Markov decision trees, there are two natural tractable substructures, the "forest of chains approximation" and the "forest of trees approximation" [10]. Transformed HMM's [9] can be considered as Factorial HMM's with two hidden variables, i.e., a transformation variable and a class variable. They use the model to cluster unlabeled video segments and form a video summary in an…

13 | Filling in scenes by propagating probabilities through layers into appearance models
- Frey
Citation Context: …consists of approximately planar layers that have arbitrary 3D positions and orientations. 3D layer representations can naturally handle parallax effects on the layer, as opposed to 2D approaches. Frey [5] recently proposed a Bayesian network for appearance-based layered vision which describes the occlusion process, and developed iterative probability propagation to recover the identity and position of…

