Fast approximate energy minimization with label costs. CVPR, 2010.
Abstract
The α-expansion algorithm has had a significant impact in computer vision due to its generality, effectiveness, and speed. Thus far it can only minimize energies that involve unary, pairwise, and specialized higher-order terms. Our main contribution is to extend α-expansion so that it can simultaneously optimize “label costs” as well. An energy with label costs can penalize a solution based on the set of labels that appear in it. The simplest special case is to penalize the number of labels in the solution. Our energy is quite general, and we prove optimality bounds for our algorithm. A natural application of label costs is multi-model fitting, and we demonstrate several such applications in vision: homography detection, motion segmentation, and unsupervised image segmentation. Our C++/MATLAB implementation is publicly available.

1. Some Useful Regularization Energies

In a labeling problem we are given a set of observations P (pixels, features, data points) and a set of labels L (categories, geometric models, disparities). The goal is to assign each observation p ∈ P a label fp ∈ L such that the joint labeling f minimizes some objective function E(f). Most labeling problems in computer vision are ill-posed and in need of regularization, but the most useful regularizers often make the problem NP-hard. Our work is about how to effectively optimize two such regularizers: a preference for fewer labels in the solution, and a preference for spatial smoothness. Figure 1 suggests how these criteria cooperate to give clean results. Surprisingly, there is no good algorithm to optimize their combination. Our main contribution is a way to simultaneously optimize both of these criteria inside the powerful α-expansion algorithm.

Label costs. Start from a basic (unregularized) energy E(f) = ∑p Dp(fp), where each optimal fp can be determined independently from the ‘data costs’. Suppose, however, that we wish to explain the observations using as few unique labels as necessary.
We can introduce label costs into E(f) to penalize each unique label that appears in f:

E(f) = ∑p Dp(fp) + ∑l∈L hl · δl(f),

where hl ≥ 0 is the cost of using label l, and the indicator δl(f) = 1 if some p has fp = l, and 0 otherwise.
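To make the effect of label costs concrete, here is a minimal brute-force sketch in Python. The data costs below are hypothetical toy values chosen for illustration, and exhaustive search stands in for the paper's actual optimizer (α-expansion); it only demonstrates how adding per-label costs hl shifts the optimum toward labelings that use fewer unique labels.

```python
import itertools

def energy(f, data_cost, label_cost):
    """E(f) = sum_p D_p(f_p) + sum of h_l over the unique labels l in f."""
    e = sum(data_cost[p][fp] for p, fp in enumerate(f))
    e += sum(label_cost[l] for l in set(f))
    return e

# Hypothetical toy problem: 4 observations, 3 candidate labels.
data_cost = [
    [0.0, 0.1, 5.0],
    [0.1, 0.0, 5.0],
    [5.0, 0.2, 0.0],
    [5.0, 0.2, 0.1],
]

# With zero label costs, each f_p is chosen independently from its data
# costs, so the minimizer happily uses three different labels.
f_free = min(itertools.product(range(3), repeat=4),
             key=lambda f: energy(f, data_cost, [0.0, 0.0, 0.0]))

# Charging h_l = 1 for each label that appears pushes the optimum toward
# an explanation that reuses a single label for all observations.
f_reg = min(itertools.product(range(3), repeat=4),
            key=lambda f: energy(f, data_cost, [1.0, 1.0, 1.0]))

print(f_free, len(set(f_free)))
print(f_reg, len(set(f_reg)))
```

Brute force is exponential in |P| and only feasible for toy sizes; the point of the paper is precisely that this regularizer can instead be optimized inside α-expansion with provable bounds.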