Results 1–8 of 8
Fast approximate energy minimization via graph cuts
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
Abstract

Cited by 1384 (52 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second …
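The energy and the α-β swap move space described in this abstract can be illustrated with a toy sketch. All names below are hypothetical, and the brute-force enumeration stands in for the single graph cut the paper uses to find the optimal swap move; it only shows what the move space is on a tiny instance:

```python
import itertools

def energy(labels, data_cost, pairs, smooth_cost):
    """E(f) = sum_p D_p(f_p) + sum_{(p,q)} V(f_p, f_q)."""
    e = sum(data_cost[p][l] for p, l in labels.items())
    e += sum(smooth_cost(labels[p], labels[q]) for p, q in pairs)
    return e

def best_swap(labels, a, b, data_cost, pairs, smooth_cost):
    """Brute-force α-β swap: try every reassignment of labels a and b over
    the pixels currently labeled a or b, keeping all other pixels fixed.
    (The paper computes the optimal swap with one graph cut; exhaustive
    search here is purely illustrative.)"""
    movable = [p for p, l in labels.items() if l in (a, b)]
    best, best_e = dict(labels), energy(labels, data_cost, pairs, smooth_cost)
    for assign in itertools.product((a, b), repeat=len(movable)):
        cand = dict(labels)
        cand.update(zip(movable, assign))
        e = energy(cand, data_cost, pairs, smooth_cost)
        if e < best_e:
            best, best_e = cand, e
    return best, best_e
```

On a 3-pixel chain where every pixel prefers label 0 but starts at label 1, the best swap relabels all three pixels and drives the energy to its minimum.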
Learning from Labeled and Unlabeled Data using Graph Mincuts
, 2001
Abstract

Cited by 268 (5 self)
Many application domains suffer from not having enough labeled training data for learning. However, large amounts of unlabeled examples can often be gathered cheaply. As a result, there has been a great deal of work in recent years on how unlabeled data can be used to aid classification. We consider an algorithm based on finding minimum cuts in graphs that uses pairwise relationships among the examples in order to learn from both labeled and unlabeled data. Our algorithm …
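The mincut construction this abstract alludes to can be sketched on a toy graph: connect a source to the positively labeled examples and a sink to the negatively labeled ones with infinite-capacity edges, weight the example-to-example edges by similarity, and read the classification off a minimum cut. This is a hedged sketch; `min_cut_classify` and its Edmonds–Karp max-flow are illustrative stand-ins, not the authors' implementation:

```python
from collections import defaultdict, deque

def min_cut_classify(n, edges, positives, negatives):
    """Mincut-style semi-supervised classification sketch for tiny graphs.
    edges: (u, v, weight) similarity edges over examples 0..n-1."""
    INF = float("inf")
    s, t = n, n + 1
    cap = defaultdict(lambda: defaultdict(float))
    for u, v, w in edges:
        cap[u][v] += w; cap[v][u] += w          # undirected similarity edge
    for p in positives: cap[s][p] = INF          # clamp labeled examples
    for q in negatives: cap[q][t] = INF
    while True:                                  # Edmonds–Karp: augment along
        parent = {s: None}                       # shortest augmenting paths
        dq = deque([s])
        while dq and t not in parent:
            u = dq.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u; dq.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        f = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= f; cap[v][u] += f
    seen = {s}; dq = deque([s])                  # s-side of the minimum cut
    while dq:
        u = dq.popleft()
        for v in cap[u]:
            if v not in seen and cap[u][v] > 0:
                seen.add(v); dq.append(v)
    return [i in seen for i in range(n)]
```

On a 4-node chain with one weak edge in the middle, the cut severs the weak edge and the two labeled endpoints each pull their strongly connected neighbor into their class.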
Efficient Graph-Based Energy Minimization Methods In Computer Vision
, 1999
Abstract

Cited by 83 (5 self)
…ms (we show that exact minimization is NP-hard in these cases). These algorithms produce a local minimum in interesting large move spaces. Furthermore, one of them finds a solution within a known factor from the optimum. The algorithms are iterative and compute several graph cuts at each iteration. The running time at each iteration is effectively linear due to the special graph structure. In practice it takes just a few iterations to converge. Moreover, most of the progress happens during the first iteration. For a certain piecewise constant prior we adapt the algorithms developed for the piecewise smooth prior. One of them finds a solution within a factor of two from the optimum. In addition we develop a third algorithm which finds a local minimum in yet another move space. We demonstrate the effectiveness of our approach on image restoration, stereo, and motion. For the data with ground truth, our methods significantly outperform standard methods.
Geometric Algorithms for Online Optimization
 Journal of Computer and System Sciences
, 2002
Abstract

Cited by 35 (1 self)
In this paper, we consider a natural online version of linear optimization: the problem has to be solved repeatedly over a sequence of periods, where the objective direction for the upcoming period is unknown. This models online versions of optimization problems, such as max-cut, variants of clustering, and also the classic online binary search tree problem. We present algorithms for this problem that are (1 + ε)-competitive with the optimal static solution chosen in hindsight. Our algorithms and proofs are motivated by geometric considerations.
Comparing and unifying search-based and similarity-based approaches to semi-supervised clustering
 In Proceedings of the ICML-2003 Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining
, 2003
Abstract

Cited by 20 (4 self)
Semi-supervised clustering employs a small amount of labeled data to aid unsupervised learning. Previous work in the area has employed one of two approaches: 1) Search-based methods that utilize supervised data to guide the search for the best clustering, and 2) Similarity-based methods that use supervised data to adapt the underlying similarity metric used by the clustering algorithm. This paper presents a unified approach based on the K-Means clustering algorithm that incorporates both of these techniques. Experimental results demonstrate that the combined approach generally produces better clusters than either of the individual approaches.
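As one hedged illustration of the search-based half of this approach, a seeded K-Means sketch initializes centroids from the labeled points and keeps those points clamped to their supervised classes. `seeded_kmeans` is a hypothetical name, the 1-D coordinates are for brevity, and the paper's unified method also adapts the similarity metric, which is omitted here:

```python
def seeded_kmeans(points, seeds, k, iters=20):
    # points: list of 1-D coordinates; seeds: {index: class} for labeled points
    centroids = []
    for c in range(k):
        members = [points[i] for i, lab in seeds.items() if lab == c]
        centroids.append(sum(members) / len(members))   # seed-based init
    assign = dict(seeds)
    for _ in range(iters):
        # assignment step: unlabeled points go to the nearest centroid;
        # labeled points stay clamped to their supervised class
        for i, x in enumerate(points):
            if i not in seeds:
                assign[i] = min(range(k), key=lambda c: (x - centroids[c]) ** 2)
        # update step: recompute each centroid as the mean of its cluster
        for c in range(k):
            members = [points[i] for i, lab in assign.items() if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return assign, centroids
```

With two labeled seeds at opposite ends of four points, the two unlabeled points fall in with their nearer seed and the centroids settle at the cluster means.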
Approximate Classification via Earthmover Metrics
 In SODA ’04: Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms
, 2004
Abstract

Cited by 17 (3 self)
Given a metric space (X, d), a natural distance measure on probability distributions over X is the earthmover metric. We use randomized rounding of earthmover metrics to devise new approximation algorithms for two well-known classification problems, namely, metric labeling and 0-extension.
Quadratic Minimization for Labeling Problems
, 2002
Abstract

Cited by 1 (0 self)
Many tasks in Computer Vision can be formulated in the framework of Labeling Problems, in which we are asked to assign labels to objects. The assignment is based on a prior model for the observations in the field of view and on a posteriori data. The labeling to compute is the one that minimizes ambiguities in the measurements. This computation involves an appropriate functional over objects and labels, which defines a notion of consistency.
Submitted for Publication
Abstract
Semi-supervised clustering uses a small amount of supervised data to aid unsupervised learning. One typical approach specifies a limited number of must-link and cannot-link constraints between pairs of examples.