Results 1 – 10 of 37
An Efficient Cost Scaling Algorithm for the Assignment Problem
 Math. Program.
, 1995
Abstract

Cited by 38 (1 self)
The cost-scaling push-relabel method has been shown to be efficient for solving minimum-cost flow problems. In this paper we apply the method to the assignment problem and investigate implementations of the method that take advantage of the assignment problem's special structure. The results show that the method is very promising for practical use.
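The paper's cost-scaling push-relabel implementation is involved; as a toy illustration of the assignment problem it solves (hypothetical cost data, exhaustive search rather than the paper's method), a minimal sketch:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Exhaustively find a minimum-cost perfect assignment.

    cost[i][j] is the cost of assigning worker i to job j.
    Exponential time -- only for tiny instances; the paper's
    cost-scaling push-relabel method targets large ones.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical 3x3 cost matrix
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = brute_force_assignment(cost)
# assignment maps worker i -> job assignment[i]
```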
Augment or Push? A computational study of Bipartite Matching and Unit Capacity Flow Algorithms
 ACM J. Exp. Algorithmics
, 1998
Abstract

Cited by 30 (1 self)
We conduct a computational study of unit capacity flow and bipartite matching algorithms. Our goal is to determine which variant of the push-relabel method is most efficient in practice and to compare push-relabel algorithms with augmenting path algorithms. We have implemented and compared three push-relabel algorithms, three augmenting path algorithms (one of which is new), and one augment-relabel algorithm. The depth-first search augmenting path algorithm was thought to be a good choice for the bipartite matching problem, but our study shows that it is not robust. For the problems we study, our implementations of the FIFO and lowest-level selection push-relabel algorithms have the most robust asymptotic rate of growth and work best overall. Augmenting path algorithms, although not as robust, are faster by a moderate constant factor on some problem classes. Our study includes several new problem families and input graphs with as many as 5 × 10^5 vertices.
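The DFS augmenting-path approach the study compares against push-relabel can be sketched compactly (Kuhn's algorithm; the adjacency data below is a hypothetical example, not one of the study's problem families):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Depth-first-search augmenting-path matching (Kuhn's algorithm).

    adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns the size of a maximum matching. O(V * E) worst case --
    the variant the study found less robust than push-relabel.
    """
    match_right = [-1] * n_right  # match_right[v] = left partner of v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# Hypothetical bipartite graph: 3 left vertices, 3 right vertices
adj = [[0, 1], [0], [1, 2]]
size = max_bipartite_matching(adj, 3, 3)
```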
Level of repair analysis and minimum cost homomorphisms of graphs
 Discrete Appl. Math
Abstract

Cited by 27 (11 self)
This paper is dedicated to the memory of Lillian Barros. Level of Repair Analysis (LORA) is a prescribed procedure for defence logistics support planning. For a complex engineering system containing perhaps thousands of assemblies, subassemblies, components, etc., organized into several levels of indenture and with a number of possible repair decisions, LORA seeks to determine an optimal provision of repair and maintenance facilities to minimize overall life-cycle costs. For a LORA problem with two levels of indenture and three possible repair decisions, which is of interest to the UK and US military and which we call LORA-BR, Barros (1998) and Barros and Riley (2001) developed certain branch-and-bound heuristics. The surprising result of this paper is that LORA-BR is, in fact, polynomial-time solvable. To obtain this result, we formulate the general LORA problem as an optimization homomorphism problem on bipartite graphs, and reduce a generalization of LORA-BR, LORA-M, to the maximum weight independent set problem on a bipartite graph. We prove that the general LORA problem is NP-hard by using an important result on list homomorphisms of graphs. We introduce the minimum cost graph homomorphism problem, provide partial results, and pose an open problem. Finally, we show that our result for LORA-BR can be applied to prove that an extension of the maximum weight independent set problem on bipartite graphs is polynomial-time solvable.
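The reduction's workhorse, maximum weight independent set on a bipartite graph, is classically computable as total weight minus a minimum-weight vertex cover, which a minimum s-t cut delivers. A small self-contained sketch of that standard technique (hypothetical weights; a plain Edmonds-Karp max-flow, not the paper's algorithm):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS augmenting paths on a capacity matrix."""
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # find the bottleneck along the path, then push flow along it
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def bipartite_max_weight_independent_set(left_w, right_w, edges):
    """Total vertex weight minus a min-weight vertex cover (min cut)."""
    nl, nr = len(left_w), len(right_w)
    n = nl + nr + 2
    s, t = n - 2, n - 1
    cap = [[0] * n for _ in range(n)]
    for i, w in enumerate(left_w):
        cap[s][i] = w                  # cutting s->i = putting left i in the cover
    for j, w in enumerate(right_w):
        cap[nl + j][t] = w             # cutting j->t = putting right j in the cover
    for i, j in edges:
        cap[i][nl + j] = float("inf")  # every edge must be covered on some side
    cover = max_flow(cap, s, t)
    return sum(left_w) + sum(right_w) - cover

# Hypothetical instance: 2 left vertices (weights 3, 2), 2 right (4, 1)
best = bipartite_max_weight_independent_set([3, 2], [4, 1], [(0, 0), (1, 0)])
```

Here the cheapest cover is the single right vertex of weight 4, so the maximum weight independent set has weight 10 − 4 = 6.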
A global perspective on MAP inference for low-level vision
 In Microsoft Research Technical Report
, 2009
Abstract

Cited by 24 (3 self)
In recent years the Markov Random Field (MRF) has become the de facto probabilistic model for low-level vision applications. However, in a maximum a posteriori (MAP) framework, MRFs inherently encourage delta-function marginal statistics. By contrast, many low-level vision problems have heavy-tailed marginal statistics, making the MRF model unsuitable. In this paper we introduce a more general Marginal Probability Field (MPF), of which the MRF is a special, linear case, and show that convex-energy MPFs can be used to encourage arbitrary marginal statistics. We introduce a flexible, extensible framework for effectively optimizing the resulting NP-hard MAP problem, based around dual decomposition and a modified min-cost flow algorithm, which achieves global optimality in some instances. We use a range of applications, including image denoising and texture synthesis, to demonstrate the benefits of this class of MPF over MRFs.
Convex Combinatorial Optimization
, 2004
Abstract

Cited by 14 (7 self)
We introduce the convex combinatorial optimization problem, a far-reaching generalization of the standard linear combinatorial optimization problem. We show that it is strongly polynomial-time solvable over any edge-guaranteed family, and discuss several applications.
An even faster and more unifying algorithm for comparing trees via unbalanced bipartite matchings
 Journal of Algorithms
Abstract

Cited by 12 (5 self)
A widely used method for determining the similarity of two labeled trees is to compute a maximum agreement subtree of the two trees. Previous work on this similarity measure is only concerned with the comparison of labeled trees of two special kinds, namely, uniformly labeled trees (i.e., trees with all their nodes labeled by the same symbol) and evolutionary trees (i.e., leaf-labeled trees with distinct symbols for distinct leaves). This paper presents an algorithm for comparing trees that are labeled in an arbitrary manner. In addition to this generality, this algorithm is faster than the previous algorithms. Another contribution of this paper is on maximum weight bipartite matchings. We show how to speed up the best known matching algorithms when the input graphs are node-unbalanced or weight-unbalanced. Based on these enhancements, we obtain an efficient algorithm for a new matching problem called the hierarchical bipartite matching problem, which is at the core of our maximum agreement subtree algorithm.
Flow faster: Efficient decision algorithms for probabilistic simulations
 13th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 4424 of LNCS
Abstract

Cited by 12 (1 self)
Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata, are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterparts, we retain the same complexity for both strong and strong probabilistic simulations.
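The flow formulation underlying such decision procedures is the classical one-step check: a distribution mu is simulated by nu under a candidate relation iff mu's mass can be routed onto nu along related pairs, i.e. iff a certain max flow reaches 1. A minimal exact-arithmetic sketch of that standard check (hypothetical distributions; plain Ford-Fulkerson, not the paper's parametric technique):

```python
from fractions import Fraction as F

def simulates_step(mu, nu, relation):
    """One-step strong-simulation check as max-flow feasibility.

    mu, nu: dicts mapping successor states to Fraction probabilities.
    relation: set of (u, v) pairs meaning u may be matched to v.
    Network: 's' -> u (cap mu[u]) -> v for (u, v) in relation -> 't'
    (cap nu[v]); feasible iff the max flow equals 1.
    """
    cap = {}
    for u, p in mu.items():
        cap[('s', u)] = p
    for v, p in nu.items():
        cap[(v, 't')] = p
    for (u, v) in relation:
        if u in mu and v in nu:
            cap[(u, v)] = F(1)

    def augment(node, limit, seen):
        # DFS for an augmenting path in the residual network
        if node == 't':
            return limit
        seen.add(node)
        for (a, b), c in list(cap.items()):
            if a == node and b not in seen and c > 0:
                pushed = augment(b, min(limit, c), seen)
                if pushed > 0:
                    cap[(a, b)] -= pushed
                    cap[(b, a)] = cap.get((b, a), F(0)) + pushed
                    return pushed
        return F(0)

    total = F(0)
    while True:
        pushed = augment('s', F(1), set())
        if pushed == 0:
            break
        total += pushed
    return total == 1

# Hypothetical successor distributions of two states
mu = {'a': F(1, 2), 'b': F(1, 2)}
nu = {'x': F(1, 3), 'y': F(2, 3)}
ok = simulates_step(mu, nu, {('a', 'x'), ('a', 'y'), ('b', 'y')})
```

Fractions keep the flow computation exact, so feasibility is decided without floating-point tolerance.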
Polynomial approximation algorithms for belief matrix maintenance in identity management
 In 43rd IEEE Conference on Decision and Control
, 2004
Abstract

Cited by 11 (2 self)
Updating probabilistic belief matrices as new observations arrive, in the presence of noise, is a critical part of many algorithms for target tracking in sensor networks. These updates have to be carried out while preserving sum constraints, arising, for example, from probabilities. This paper addresses the problem of updating belief matrices to satisfy sum constraints using scaling algorithms. We show that the convergence behavior of the Sinkhorn scaling process, used for scaling belief matrices, can vary dramatically depending on whether the prior unscaled matrix is exactly scalable or only almost scalable. We give an efficient polynomial-time algorithm based on the maximum-flow algorithm that determines whether a given matrix is exactly scalable, thus determining the convergence properties of the Sinkhorn scaling process. We prove that the Sinkhorn scaling process always provides a solution to the problem of minimizing the Kullback-Leibler distance of the physically feasible scaled matrix from the prior constraint-violating matrix, even when the matrices are not exactly scalable. We pose the scaling process as a linearly constrained convex optimization problem, and solve it using an interior-point method. We prove that even in cases in which the matrices are not exactly scalable, the problem can be solved to ε-optimality in strongly polynomial time, improving the best known bound for the problem of scaling arbitrary nonnegative rectangular matrices to prescribed row and column sums.
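The Sinkhorn scaling process the abstract analyzes is simple to state: alternately rescale rows and columns toward the prescribed sums. A minimal sketch with a hypothetical 2x2 belief matrix (fixed iteration count for brevity; a practical implementation would test convergence, which is exactly what the paper shows can vary):

```python
def sinkhorn_scale(matrix, row_sums, col_sums, iters=500):
    """Alternately normalize rows, then columns, toward prescribed sums.

    Converges when the input matrix is (almost) scalable; the paper
    shows convergence behavior differs sharply between exactly
    scalable and merely almost scalable matrices.
    """
    m = [row[:] for row in matrix]
    for _ in range(iters):
        for i, target in enumerate(row_sums):
            s = sum(m[i])
            if s > 0:
                m[i] = [x * target / s for x in m[i]]
        for j, target in enumerate(col_sums):
            s = sum(row[j] for row in m)
            if s > 0:
                for row in m:
                    row[j] *= target / s
    return m

# Hypothetical positive belief matrix, scaled to doubly stochastic
m = sinkhorn_scale([[1.0, 2.0], [3.0, 4.0]], [1, 1], [1, 1])
```

For a strictly positive matrix like this one, both row and column sums approach the targets geometrically.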
Efficient Algorithms for Robustness in Matroid Optimization
 Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms
, 1996
Abstract

Cited by 8 (1 self)
The robustness function of a matroid measures the maximum increase in the weight of its minimum weight bases that can be produced by increases of a given total cost on the weights of its elements. We present an algorithm for computing this function that runs in strongly polynomial time for matroids in which independence can be tested in strongly polynomial time. We identify key properties of transversal, scheduling, and partition matroids, and exploit them to design robustness algorithms that are more efficient than our general algorithm.