## Learning CRFs using Graph Cuts

Citations: 76 (6 self)

### BibTeX

@MISC{Szummer_learningcrfs,
  author = {Martin Szummer and Pushmeet Kohli and Derek Hoiem},
  title = {Learning CRFs using Graph Cuts},
  year = {}
}

### Abstract

Many computer vision problems are naturally formulated as random fields, specifically MRFs or CRFs. The introduction of graph cuts has enabled efficient and optimal inference in associative random fields, greatly advancing applications such as segmentation, stereo reconstruction and many others. However, while fast inference is now widespread, parameter learning in random fields has remained an intractable problem. This paper shows how to apply fast inference algorithms, in particular graph cuts, to learn parameters of random fields with similar efficiency. We find optimal parameter values under standard regularized objective functions that ensure good generalization. Our algorithm enables learning of many parameters in reasonable time, and we explore further speedup techniques. We also discuss extensions to non-associative and multi-class problems. We evaluate the method on image segmentation and geometry recognition.

### Citations

1399 | Fast approximate energy minimization via graph cuts
- Boykov, Veksler, et al.
- 1999
Citation Context: ...he method on image segmentation and geometry recognition. 1 Introduction The availability of efficient and provably optimal inference algorithms, such as graph cuts [1] and its approximate extensions [2], has inspired progress in many areas of computer vision. For example, the efficiency of graph cut based algorithms enables interactive image [3,4] and real-time video segmentation tasks. The optimali...

705 | What energy functions can be minimized via graph cuts
- Kolmogorov, Zabih
- 2004
Citation Context: ...multi-class problems. We evaluate the method on image segmentation and geometry recognition. 1 Introduction The availability of efficient and provably optimal inference algorithms, such as graph cuts [1] and its approximate extensions [2], has inspired progress in many areas of computer vision. For example, the efficiency of graph cut based algorithms enables interactive image [3,4] and real-time vid...

667 | Grabcut: Interactive Foreground Extraction using Iterated Graph Cuts
- Rother, Kolmogorov, et al.
- 2004
Citation Context: ... such as graph cuts [1] and its approximate extensions [2], has inspired progress in many areas of computer vision. For example, the efficiency of graph cut based algorithms enables interactive image [3,4] and real-time video segmentation tasks. The optimality guarantees of these algorithms allow computing the maximum a posteriori (MAP) solution of the model distribution. The ability to compute the min...

664 | Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in n-d Images
- Boykov, Jolly
- 2001
Citation Context: ... such as graph cuts [1] and its approximate extensions [2], has inspired progress in many areas of computer vision. For example, the efficiency of graph cut based algorithms enables interactive image [3,4] and real-time video segmentation tasks. The optimality guarantees of these algorithms allow computing the maximum a posteriori (MAP) solution of the model distribution. The ability to compute the min...

489 | Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms
- Collins
- 2002
Citation Context: ... the nonsubmodular Potts model (although our model is more complex). Relation to Previous Work This work presents a method for max-margin learning in loopy graphs such as grids (images). In contrast, [15] uses Viterbi on a linear chain, an easier problem not applicable to images. Tsochantaridis et al. [8] and Taskar et al. [10] rely on MAP inference but do not use graph cut for learning. Anguelov et al...

376 | Large margin methods for structured and interdependent output variables
- Tsochantaridis, Joachims, et al.
- 2005
Citation Context: ...e that is capable of learning dozens of parameters from millions of pixels in minutes. Our algorithm is based on the structured support vector machine (SVMstruct) framework of Tsochantaridis et al. [8] and the maximum-margin network learning of Taskar et al. [9,10]. These works were not focused on computer vision problems. They did not explore graph cuts for learning, and instead chose approximate ...

246 | A comparative study of energy minimization methods for Markov random fields with smoothnessbased priors
- Szeliski, Zabih, et al.
- 2008
Citation Context: ...hat simple energy models (e.g., with one unary term and an isotropic smoothing penalty in grid labeling problems) are insufficient to model the complex structures inherent in computer vision problems [5,6]. Despite this knowledge, overly simplistic hand-tuned random field models continue to be common practice. We believe that the continued use of such impoverished models reflects, not a belief in the s...

169 | Geometric context from a single image
- Hoiem, Efros, et al.
- 2005
Citation Context: ...on involve non-submodular energy functions that are NP-hard and cannot generally be exactly minimized in polynomial time. Examples include multi-label problems such as multi-class geometric labelling [12] and stereo [2]. In these cases, efficient approximate inference can be used in Step 3 of the cutting plane algorithm in Figure 1. The choice of the optimization technique affects the convergence and ...

164 | Learning Structured Prediction Models: A Large Margin Approach
- Taskar
- 2004
Citation Context: ...ons of pixels in minutes. Our algorithm is based on the structured support vector machine (SVMstruct) framework of Tsochantaridis et al. [8] and the maximum-margin network learning of Taskar et al. [9,10]. These works were not focused on computer vision problems. They did not explore graph cuts for learning, and instead chose approximate inference algorithms (such as sum-product loopy BP) which do not...

109 | Discriminative learning of Markov Random Fields for segmentation of 3D scan data
- Anguelov, Taskar, et al.
Citation Context: ...variables ξ_n. Thus, the individual margins may be smaller, but that is discouraged through a slack penalty in the objective, regulated by the slack penalty parameter C.

min_w (1/2)‖w‖² + (C/N) Σ_n ξ_n   s.t.   (9)
E(y, x^(n); w) − E(y^(n), x^(n); w) ≥ 1 − ξ_n   ∀y ≠ y^(n), ∀n
ξ_n ≥ 0   ∀n

These are all standard large margin formulations. However, in the context of optimization over joint labelings y, we now c...
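The slack-rescaled constraints of eq. (9) can be illustrated on a toy problem. The sketch below is not from the paper: the chain energy and all names are hypothetical, and where the paper computes the argmin with a graph cut, this illustration simply enumerates every labeling of a tiny binary chain to find the most violated constraint and its hinge slack.

```python
import itertools

def energy(w, x, y):
    # Toy linear energy: unary cost w[0]*x_i for each node labeled 1,
    # plus a smoothness penalty w[1] for each disagreeing neighbor pair.
    unary = sum(w[0] * xi for xi, yi in zip(x, y) if yi == 1)
    pairwise = sum(w[1] for a, b in zip(y, y[1:]) if a != b)
    return unary + pairwise

def most_violated(w, x, y_true):
    # The labeling (other than the ground truth) with the lowest energy;
    # in the paper this argmin is a graph cut, here brute force suffices.
    n = len(x)
    candidates = [y for y in itertools.product((0, 1), repeat=n)
                  if list(y) != list(y_true)]
    return min(candidates, key=lambda y: energy(w, x, y))

def slack(w, x, y_true):
    # xi_n = max(0, 1 - (E(y_hat) - E(y_true))): the hinge slack of eq. (9).
    y_hat = most_violated(w, x, y_true)
    return max(0.0, 1.0 - (energy(w, x, y_hat) - energy(w, x, y_true)))
```

With `w = [1.0, 0.5]`, `x = [-1, -1, 1]` and ground truth `(1, 1, 0)`, the ground truth is the global minimum at energy -1.5, the runner-up is `(1, 1, 1)` at -1.0, and the slack is 0.5 because the margin of 0.5 falls short of the required 1.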

108 | Pseudo-boolean optimization
- Boros, Hammer
- 2002
Citation Context: ... that the MAP labeling in Step 1 can be translated to a graph cut problem for many vision tasks. Graph cut algorithms can quickly find global optima for a large class of so-called submodular energies [1,11]. Step 3 is written as a general optimization for clarity, but is in fact just a quadratic program (given in Section 3). Thus it is convex and is free from local minima. The overall procedure converge...
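The excerpt refers to the class of submodular energies that graph cuts minimize exactly. For a binary pairwise term θ, submodularity is the condition θ(0,0) + θ(1,1) ≤ θ(0,1) + θ(1,0). A minimal checker, using a hypothetical dictionary representation of the pairwise table (an illustration, not code from the paper):

```python
def is_submodular(theta):
    # theta[(a, b)] is the pairwise cost of assigning labels (a, b) in {0,1}^2.
    # The term is submodular (graph-cut representable) iff
    #   theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0).
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

# A Potts-style smoothness penalty is submodular:
potts = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}

# Rewarding disagreement instead is not:
anti_potts = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
```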

84 | Training structural SVMs when exact inference is intractable
- Finley, Joachims
- 2008
Citation Context: ...ient approximate inference can be used in Step 3 of the cutting plane algorithm in Figure 1. The choice of the optimization technique affects the convergence and correctness guarantees of this method [13]. The approximate algorithms commonly used in computer vision include (1) message passing algorithms such as tree-reweighted message passing (TRW) and loopy belief propagation (LBP), and (2) move-makin...

74 | Learning associative markov networks
- Taskar, Chatalbashev, et al.
- 2004
Citation Context: ..., y). For example, an appropriate loss may be the Hamming loss ∆(y^(n), y) = Σ_i δ(y_i^(n), y_i). For a 0-1 loss function ∆(y^(n), y) = δ(y^(n), y) we reduce to the previous formulation. Taskar et al. [16] proposed to rescale the margin to enforce the constraint

E(y, x^(n); w) − E(y^(n), x^(n); w) ≥ ∆(y^(n), y) − ξ_n   (10)

for all y ≠ y^(n) and all n. To construct a learning algorithm that takes los...
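Margin rescaling, as in constraint (10), replaces the fixed margin of 1 with the loss ∆(y^(n), y). A minimal sketch (helper names are hypothetical; the Hamming loss here counts disagreeing nodes, matching the per-node sum in the excerpt):

```python
def hamming_loss(y_true, y):
    # Delta(y_true, y): the number of nodes whose labels differ.
    return sum(a != b for a, b in zip(y_true, y))

def margin_violation(e_y, e_true, y, y_true):
    # Slack required by the margin-rescaled constraint (10):
    #   E(y) - E(y_true) >= Delta(y_true, y) - xi
    # Returns 0 when the constraint already holds with xi = 0.
    return max(0.0, hamming_loss(y_true, y) - (e_y - e_true))
```

For example, a labeling two nodes away from the truth must have energy at least 2 above the ground truth, whereas under the 0-1 loss a margin of 1 would suffice.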

53 | Generalizing swendsen-wang to sampling arbitrary posterior probabilities
- Barbu, Zhu
- 2007
Citation Context: ...hat simple energy models (e.g., with one unary term and an isotropic smoothing penalty in grid labeling problems) are insufficient to model the complex structures inherent in computer vision problems [5,6]. Despite this knowledge, overly simplistic hand-tuned random field models continue to be common practice. We believe that the continued use of such impoverished models reflects, not a belief in the s...

38 | approximately optimal solutions for single and dynamic MRFs
- Komodakis, Tziritas, et al.
Citation Context: ...ter vision include (1) message passing algorithms such as tree-reweighted message passing (TRW) and loopy belief propagation (LBP), and (2) move-making algorithms such as alpha-expansion [2] and Fast-PD [14]. Algorithms such as TRW and Fast-PD work by formulating the energy minimization problem as an integer programming problem and then solving its linear relaxation. These algorithms in addition to givin...

36 | Measuring uncertainty in graph cut solutions
- Kohli, Torr
Citation Context: ... This problem can be overcome by adding multiple low-energy labelings at every iteration, instead of just one. The global minima of a submodular function can be found using graph cuts. Kohli and Torr [17] showed how exact min-marginal energies can be efficiently computed for submodular functions. We use these min-marginals to compute the N-best solutions for the energy minimization problem. The time t...
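The excerpt uses min-marginals to extract the N-best labelings. As a sketch of the quantity involved: the min-marginal ψ_i(a) is the lowest energy over all labelings that fix node i to label a. Kohli and Torr [17] compute these efficiently with graph cuts; the brute-force version below (hypothetical `energy_fn`, tiny binary field) is purely illustrative.

```python
import itertools

def min_marginal(energy_fn, n_nodes, i, a):
    # psi_i(a): minimum energy over all binary labelings y with y[i] == a.
    # Feasible only for tiny fields; graph cuts make this exact and fast
    # for submodular energies of realistic size.
    return min(energy_fn(y)
               for y in itertools.product((0, 1), repeat=n_nodes)
               if y[i] == a)
```

For a toy chain energy `E(y) = sum(y) + #disagreements`, fixing node 0 to label 1 already forces an energy of at least 2 on a 3-node chain, while fixing it to 0 leaves the all-zero minimum reachable.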

25 | Exploiting inference for approximate parameter learning in discriminative fields: An empirical study
- Kumar, August, et al.
- 2005
Citation Context: ...ence of tractable machine learning techniques for large MRF and CRF problems. Currently, the most widely used learning algorithms include cross-validation and simple partition function approximations [7]. However, many works do not perform learning at all and rely on hand-tuned parameters. In this paper, we describe an efficient and easy-to-implement technique that is capable of learning dozens of pa...