Results 1–10 of 19
Improved Algorithms For Bipartite Network Flow
, 1994
Abstract

Cited by 45 (4 self)
In this paper, we study network flow algorithms for bipartite networks. A network G = (V, E) is called bipartite if its vertex set V can be partitioned into two subsets V₁ and V₂ such that all edges have one endpoint in V₁ and the other in V₂. Let n = |V|, n₁ = |V₁|, n₂ = |V₂|, m = |E|, and assume without loss of generality that n₁ ≤ n₂. We call a bipartite network unbalanced if n₁ ≪ n₂ and balanced otherwise. (This notion is necessarily imprecise.) We show that several maximum flow algorithms can be substantially sped up when applied to unbalanced networks. The basic idea in these improvements is a two-edge push rule that allows us to "charge" most computation to vertices in V₁, and hence develop algorithms whose running times depend on n₁ rather than n. For example, we show that the two-edge push version of Goldberg and Tarjan's FIFO preflow push algorithm runs in O(n₁m + n₁³) time and that the analogous version of Ahuja and Orlin's excess scaling algori...
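The setting described above can be illustrated with a standard max-flow sketch (plain Edmonds–Karp, not the paper's two-edge push variant; the tiny unbalanced network and its capacities below are made up for illustration):

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # collect the path, find its bottleneck, then push flow along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity[v][u] += bottleneck  # residual (reverse) capacity
        flow += bottleneck

# Unbalanced bipartite instance: V1 = {a}, V2 = {x, y, z},
# with source s feeding V1 and V2 draining into sink t.
cap = defaultdict(lambda: defaultdict(int))
for u, v, c in [("s", "a", 2), ("a", "x", 1), ("a", "y", 1), ("a", "z", 1),
                ("x", "t", 1), ("y", "t", 1), ("z", "t", 1)]:
    cap[u][v] = c
print(max_flow(cap, "s", "t"))  # 2
```

On this instance the bottleneck is the single V₁ vertex, so the flow value stays 2 no matter how large V₂ grows – the kind of imbalance the paper exploits.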
An even faster and more unifying algorithm for comparing trees via unbalanced bipartite matchings
 Journal of Algorithms
Abstract

Cited by 16 (6 self)
A widely used method for determining the similarity of two labeled trees is to compute a maximum agreement subtree of the two trees. Previous work on this similarity measure is only concerned with the comparison of labeled trees of two special kinds, namely, uniformly labeled trees (i.e., trees with all their nodes labeled by the same symbol) and evolutionary trees (i.e., leaf-labeled trees with distinct symbols for distinct leaves). This paper presents an algorithm for comparing trees that are labeled in an arbitrary manner. In addition to this generality, this algorithm is faster than the previous algorithms. Another contribution of this paper is on maximum weight bipartite matchings. We show how to speed up the best known matching algorithms when the input graphs are node-unbalanced or weight-unbalanced. Based on these enhancements, we obtain an efficient algorithm for a new matching problem called the hierarchical bipartite matching problem, which is at the core of our maximum agreement subtree algorithm.
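For very small instances, the maximum-weight bipartite matching at the heart of the abstract can be checked by brute force over injective assignments – a naive baseline, not the sped-up algorithms the paper develops (the weight matrix below is invented):

```python
from itertools import permutations

def max_weight_matching(weights):
    """Brute-force maximum-weight bipartite matching for a tiny
    node-unbalanced instance; rows are assumed to be the smaller side."""
    n_rows, n_cols = len(weights), len(weights[0])
    best = float("-inf")
    # try every injective assignment of rows to distinct columns
    for cols in permutations(range(n_cols), n_rows):
        best = max(best, sum(weights[i][c] for i, c in enumerate(cols)))
    return best

# 2 nodes on one side, 4 on the other (node-unbalanced)
w = [[3, 1, 0, 2],
     [2, 4, 1, 0]]
print(max_weight_matching(w))  # 7: row 0 -> col 0 (3) plus row 1 -> col 1 (4)
```

This enumerates O(n_cols! / (n_cols − n_rows)!) assignments, which is exactly the cost the paper's node-unbalanced speedups are designed to avoid.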
The Maximum Traveling Salesman Problem under Polyhedral Norms
, 1998
Abstract

Cited by 15 (2 self)
We consider the traveling salesman problem when the cities are points in R^d for some fixed d and distances are computed according to a polyhedral norm. We show that for any such norm, the problem of finding a tour of maximum length can be solved in polynomial time. If arithmetic operations are assumed to take unit time, our algorithms run in time O(n^{f-2} log n), where f is the number of facets of the polyhedron determining the polyhedral norm. Thus, for example, we have O(n^2 log n) algorithms for the cases of points in the plane under the Rectilinear and Sup norms. This is in contrast to the fact that finding a minimum length tour in each case is NP-hard.

1 Introduction

In the Traveling Salesman Problem (TSP), the input consists of a set C of cities together with the distances d(c, c′) between every pair of distinct cities c, c′ ∈ C. The goal is to find an ordering or tour of the cities that minimizes (Minimum TSP) or maximizes (Maximum TSP) the total tour leng...
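The phrase "polyhedral norm" has a concrete computational reading: the norm is the maximum of finitely many linear functionals, one per facet normal of the unit ball. A minimal sketch for the two planar examples mentioned above (the facet-normal lists are the standard ones for these norms):

```python
def polyhedral_norm(x, facet_normals):
    """A polyhedral norm evaluated as the maximum of the linear
    functionals given by the facet normals of its unit ball."""
    return max(sum(a_i * x_i for a_i, x_i in zip(a, x)) for a in facet_normals)

# Rectilinear (L1) norm in the plane: the unit ball is a diamond, 4 facets
L1 = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
# Sup (L-infinity) norm in the plane: the unit ball is a square, 4 facets
SUP = [(1, 0), (-1, 0), (0, 1), (0, -1)]

p = (3, -4)
print(polyhedral_norm(p, L1))   # 7  == |3| + |-4|
print(polyhedral_norm(p, SUP))  # 4  == max(|3|, |-4|)
```

Both planar examples have f = 4 facets, which is why the O(n^{f-2} log n) bound specializes to O(n^2 log n) there.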
The Geometric Maximum Traveling Salesman Problem
, 1999
Abstract

Cited by 8 (3 self)
We consider the traveling salesman problem when the cities are points in R^d for some fixed d and distances are computed according to geometric distances, determined by some norm. We show that for any polyhedral norm, the problem of finding a tour of maximum length can be solved in polynomial time. If arithmetic operations are assumed to take unit time, our algorithms run in time O(n^{f-2} log n), where f is the number of facets of the polyhedron determining the polyhedral norm. Thus, for example, we have O(n^2 log n) algorithms for the cases of points in the plane under the Rectilinear and Sup norms. This is in contrast to the fact that finding a minimum length tour in each case is NP-hard. Our approach can be extended to the more general case of quasi-norms with a not necessarily symmetric unit ball, where we get a complexity of O(n^{2f-2} log n).
Using combinatorial optimization in model–based trimmed clustering with cardinality constraints
Abstract

Cited by 3 (1 self)
Statistical clustering criteria with free scale parameters and unknown cluster sizes are inclined to create small, spurious clusters. To mitigate this tendency, a statistical model for cardinality–constrained clustering of data with gross outliers is established, its maximum likelihood and maximum a posteriori clustering criteria are derived, and their consistency and robustness are analyzed. The criteria lead to constrained optimization problems that can be solved by iterative, alternating trimming algorithms of k–means type. Each step in the algorithms requires the solution to a λ–assignment problem known from combinatorial optimization. The method makes it possible to estimate the numbers of clusters and outliers. It is illustrated with a synthetic and a real data set. Key words: model–based clustering; classification model; outliers; size constraints; combinatorial optimization; λ–assignment problem; model selection
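A heavily simplified sketch of one alternating trimming step of k-means type: assign points to nearest centers, discard the worst-fitting points as outliers, and re-estimate the centers. (Illustration only – the paper enforces cardinality constraints by solving a λ–assignment problem rather than this greedy trimming, and the 1-D data below are made up.)

```python
def trimmed_kmeans_step(points, centers, n_outliers):
    """One alternating step of a k-means-type trimmed clustering
    on 1-D data: nearest-center assignment, greedy trimming of the
    n_outliers worst-fitting points, then mean update per cluster."""
    # assign each point to its nearest center, recording the squared distance
    fits = []
    for p in points:
        d, idx = min((abs(p - c) ** 2, i) for i, c in enumerate(centers))
        fits.append((d, p, idx))
    # trim: drop the n_outliers points with the largest distances
    fits.sort()
    kept = fits[: len(fits) - n_outliers]
    # update each center to the mean of its kept points
    new_centers = []
    for i in range(len(centers)):
        members = [p for _, p, idx in kept if idx == i]
        new_centers.append(sum(members) / len(members) if members else centers[i])
    return new_centers

data = [0.9, 1.1, 1.0, 5.0, 5.2, 99.0]  # 99.0 plays the gross outlier
print(trimmed_kmeans_step(data, [1.0, 5.0], n_outliers=1))
```

With the outlier trimmed, the centers settle near 1.0 and 5.1 instead of being dragged toward 99.0.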
A Linear Time Algorithm for the Hitchcock Transportation Problem with Fixed Number of Supply Points
 Cooperative Research Report 35 (1992), The Institute of Statistical Mathematics, MinamiAzabu, Minatoku
, 1993
Abstract

Cited by 3 (0 self)
In this paper, we propose an O(n) time algorithm for the Hitchcock transportation problem with n demand points and a fixed number of supply points. When the number of supply points is very small and the number of demand points is much larger, our algorithm is efficient. If we have m supply points, the time complexity of our algorithm is bounded by O((m!)^2 n).

Keywords: transportation problem, median finding problem, linear time algorithm.
AMS(MOS) subject classifications: 68Q25, 68R10, 68U05

1 Introduction

This paper deals with the Hitchcock transportation problem. The problem is formulated as a linear programming problem in the following way:

minimize   Σ_{i∈U} Σ_{j∈V} w_ij x_ij
subject to Σ_{j∈V} x_ij = a_i  ∀i ∈ U,
           Σ_{i∈U} x_ij = b_j  ∀j ∈ V,
           x_ij ≥ 0  ∀(i, j) ∈ U × V,

where U = {1, 2, …, n} and V = {1, 2, …, m}. Each element in U is called a demand point and each element in V is called a supply point. The classical example of this problem is th...
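For a tiny instance, the transportation LP above can be solved by exhaustive enumeration over integer shipments – an illustrative baseline, not the paper's O(n) method (costs, supplies, and demands below are invented):

```python
from itertools import product

def transport_min_cost(w, supply, demand):
    """Brute-force minimum-cost transportation for a tiny instance with
    exactly two supply points: enumerate supply 0's integer shipments;
    supply 1's shipments are then forced by the demand constraints."""
    best = None
    for x0 in product(*(range(d + 1) for d in demand)):
        x1 = [d - s for d, s in zip(demand, x0)]  # remainder from supply 1
        # keep only allocations that exhaust both supplies exactly
        if sum(x0) == supply[0] and sum(x1) == supply[1]:
            cost = sum(w[0][j] * x0[j] + w[1][j] * x1[j]
                       for j in range(len(demand)))
            best = cost if best is None else min(best, cost)
    return best

w = [[4, 6, 9],   # unit costs from supply point 0 to demands 0, 1, 2
     [5, 7, 3]]   # unit costs from supply point 1
print(transport_min_cost(w, supply=[5, 5], demand=[3, 3, 4]))  # 43
```

Fixing supply 0's row determines supply 1's, so only one row is actually searched – a faint echo of why a small, fixed number of supply points makes the problem so much easier.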
Fast fusion moves for multimodel estimation
 IN: PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION
, 2012
Abstract

Cited by 3 (1 self)
We develop a fast, effective algorithm for minimizing a well-known objective function for robust multi-model estimation. Our work introduces a combinatorial step belonging to a family of powerful move-making methods like α-expansion and fusion. We also show that our subproblem can be quickly transformed into a comparatively small instance of minimum-weight vertex cover. In practice, these vertex-cover subproblems are almost always bipartite and can be solved exactly by specialized network flow algorithms. Experiments indicate that our approach achieves the robustness of methods like affinity propagation, whilst providing the speed of fast greedy heuristics.
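The bipartite vertex-cover-via-matching connection mentioned above can be sketched with König's theorem: a maximum matching (here via Kuhn's augmenting paths) yields a minimum vertex cover. (Unweighted sketch only; the paper's subproblems are weighted, and the example graph is made up.)

```python
def min_vertex_cover_bipartite(n_left, n_right, edges):
    """Minimum vertex cover of a bipartite graph via Koenig's theorem:
    maximum matching first, then an alternating-path reachability pass."""
    adj = [[] for _ in range(n_left)]
    for u, v in edges:
        adj[u].append(v)
    match_r = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    for u in range(n_left):
        try_augment(u, set())

    matched_l = set(match_r) - {-1}
    # Koenig: alternating reachability from unmatched left vertices
    visited_l, visited_r = set(), set()
    stack = [u for u in range(n_left) if u not in matched_l]
    visited_l.update(stack)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in visited_r:
                visited_r.add(v)
                w = match_r[v]
                if w != -1 and w not in visited_l:
                    visited_l.add(w)
                    stack.append(w)
    # cover = (left vertices not reached) + (right vertices reached)
    cover_l = [u for u in range(n_left) if u not in visited_l]
    cover_r = [v for v in range(n_right) if v in visited_r]
    return cover_l, cover_r

cover_l, cover_r = min_vertex_cover_bipartite(
    3, 3, [(0, 0), (0, 1), (1, 0), (2, 2)])
print(len(cover_l) + len(cover_r))  # 3, equal to the maximum matching size
```

The cover size equaling the matching size is exactly König's theorem, and it is why the bipartite case can be solved exactly by flow methods while general weighted vertex cover is NP-hard.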
Designing Communication Networks with Fixed or Nonblocking Traffic Requirements
, 1992
Abstract

Cited by 2 (2 self)
A general framework for specifying communication network design problems is given. We analyze the computational complexity of several specific problems within this framework. For fixed multirate traffic requirements, we prove that a particular network analysis problem is NP-complete, although several related network design problems are either efficiently solvable or have good approximation algorithms. For the case when we wish the network to operate without blocking any connection requests, we give efficient algorithms for dimensioning the link capacities of the network. This work is supported by the National Science Foundation, Bell Communications Research, Bell Northern Research, Digital Equipment Corporation, Italtel SIT, NEC, NTT, and SynOptics.

1. Introduction

Much work has been done on the computational problem of designing low-cost communication networks (see [GN89, GTD+89, GW90, GK90, KKG91, AKR91] and references therein). The general problem is: given a collection of no...
Baseball, Optimization and the World Wide Web
, 1999
Abstract

Cited by 2 (0 self)
The competition for baseball playoff spots – the fabled "pennant race" – is one of the most closely watched American sports traditions. Baseball fans, known for their love of statistics, check newspapers and web sites daily looking for measures of their team's progress (or lack thereof!). While traditionally reported playoff race statistics such as games back and "magic number" are informative, they are overly conservative and ignore the remaining schedule of games. By using optimization techniques, one can model schedule effects explicitly and determine when a team has locked up a playoff spot or is truly "mathematically eliminated" from contention. This paper describes the Baseball Playoff Races web site, a popular site developed at Berkeley that provides automatic daily updates of new, optimization-based playoff race statistics. During the development of the site, it was found that the first-place elimination status of all teams in a division can be determined using a single linear prog...
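The elimination question above has a compact combinatorial certificate (the one a max-flow or LP formulation produces): a team is mathematically eliminated exactly when some set of rivals, counting their current wins plus the games they must still play among themselves, must average more wins than the team can possibly reach. A brute-force check over rival subsets, with invented standings (fine for a small division, unlike the site's single-LP approach):

```python
from itertools import combinations

def eliminated(team, wins, left, schedule):
    """Subset-averaging elimination test: `wins` maps team -> wins so far,
    `left` maps team -> games remaining, `schedule` maps a frozenset pair
    of teams to the games they still play against each other."""
    cap = wins[team] + left[team]          # team's best-case win total
    others = [t for t in wins if t != team]
    if any(wins[t] > cap for t in others):
        return True                        # trivially eliminated
    for k in range(2, len(others) + 1):
        for S in combinations(others, k):
            w = sum(wins[t] for t in S)
            g = sum(schedule.get(frozenset(p), 0) for p in combinations(S, 2))
            # the k teams in S must share w + g wins; if that forces an
            # average above cap, some team in S must finish ahead of `team`
            if w + g > cap * k:
                return True
    return False

wins = {"ATL": 83, "PHI": 80, "NYM": 78, "MON": 77}
left = {"ATL": 8, "PHI": 3, "NYM": 6, "MON": 3}
schedule = {frozenset(p): g for p, g in [
    (("ATL", "PHI"), 1), (("ATL", "NYM"), 6), (("ATL", "MON"), 1),
    (("NYM", "MON"), 0), (("PHI", "MON"), 2), (("PHI", "NYM"), 0)]}
print(eliminated("MON", wins, left, schedule))  # True (trivially: ATL > 80)
print(eliminated("PHI", wins, left, schedule))  # True ({ATL, NYM} must win 167 > 2*83)
```

Note the second case: PHI can still match ATL's win total, yet is eliminated by schedule effects alone – exactly the situation "games back" statistics miss.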