Results 1–10 of 14
Modular Decomposition and Transitive Orientation
, 1999
"... A module of an undirected graph is a set X of nodes such for each node x not in X, either every member of X is adjacent to x, or no member of X is adjacent to x. There is a canonical linearspace representation for the modules of a graph, called the modular decomposition. Closely related to modular ..."
Abstract

Cited by 111 (12 self)
A module of an undirected graph is a set X of nodes such that for each node x not in X, either every member of X is adjacent to x, or no member of X is adjacent to x. There is a canonical linear-space representation for the modules of a graph, called the modular decomposition. Closely related to modular decomposition is the transitive orientation problem: assigning a direction to each edge of a graph so that the resulting digraph is transitive. A graph is a comparability graph if such an assignment is possible. We give O(n + m) algorithms for modular decomposition and transitive orientation, where n and m are the numbers of vertices and edges of the graph. This gives linear time bounds for recognizing permutation graphs, for maximum clique and minimum vertex coloring on comparability graphs, and for other combinatorial problems on comparability graphs and their complements.
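The module property from this abstract can be checked directly from its definition. The following is a minimal illustrative sketch (a brute-force test, not the paper's linear-time algorithm); the graph representation and names are assumptions for the example.

```python
def is_module(adj, X):
    """X is a module iff every vertex outside X is adjacent to all of X
    or to none of X.  adj maps each vertex to its set of neighbours."""
    X = set(X)
    for v in adj:
        if v in X:
            continue
        hits = {u in adj[v] for u in X}  # adjacency of v to each member of X
        if len(hits) > 1:                # v sees some, but not all, of X
            return False
    return True

# Path a-b-c: {a, c} is a module (b sees both), {a, b} is not (c sees only b).
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_module(adj, {"a", "c"}), is_module(adj, {"a", "b"}))  # True False
```

The singletons and the full vertex set are always modules; the modular decomposition tree encodes all modules despite there being potentially exponentially many.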
Efficient and practical algorithms for sequential modular decomposition
, 1999
"... A module of an undirected graph G = (V, E) is a set X of vertices that have the same set of neighbors in V \ X. The modular decomposition is a unique decomposition of the vertices into nested modules. We give a practical algorithm with an O(n + m(m;n)) time bound and a variant with a linear time bou ..."
Abstract

Cited by 36 (1 self)
A module of an undirected graph G = (V, E) is a set X of vertices that all have the same set of neighbors in V \ X. The modular decomposition is a unique decomposition of the vertices into nested modules. We give a practical algorithm with an O(n + m α(m, n)) time bound, where α is the inverse Ackermann function, and a variant with a linear time bound.
Task Graph Performance Bounds Through Comparison Methods
, 2001
"... When a parallel computation is represented in a formalism that imposes seriesparallel structure on its task graph, it becomes amenable to automated analysis and scheduling. Unfortunately, its execution time will usually also increase as precedence constraints are added to ensure seriesparallel str ..."
Abstract

Cited by 6 (1 self)
When a parallel computation is represented in a formalism that imposes series-parallel structure on its task graph, it becomes amenable to automated analysis and scheduling. Unfortunately, its execution time will usually also increase as precedence constraints are added to ensure series-parallel structure. Bounding the slowdown ratio would allow an informed tradeoff between the benefits of a restrictive formalism and its cost in lost performance. This dissertation deals with series-parallelising task graphs: adding precedence constraints to a task graph so that the resulting task graph is series-parallel. The weak bounded-slowdown conjecture for series-parallelising task graphs is introduced. It states that the slowdown is bounded if information about the workload can be used to guide the selection of which precedence constraints to add. A theory of best series-parallelisations is developed to investigate this conjecture. Partial evidence is presented that the weak slowdown bound is likely to be 4/3, and this bound is shown to be tight.
A supernodal formulation of vertex colouring with applications in course timetabling
, 2009
"... For many problems in Scheduling and Timetabling the choice of an mathematical programming formulation is determined by the formulation of the graph colouring component. This paper briefly surveys seven known integer programming formulations of vertex colouring and introduces a new formulation using ..."
Abstract

Cited by 6 (2 self)
For many problems in Scheduling and Timetabling, the choice of a mathematical programming formulation is determined by the formulation of the graph colouring component. This paper briefly surveys seven known integer programming formulations of vertex colouring and introduces a new formulation using “supernodes”. In the definition of George and McIntyre [SIAM J. Numer. Anal. 15 (1978), no. 1, 90–112], a “supernode” is a complete subgraph in which each pair of vertices has the same neighbourhood outside the subgraph. Seen another way, the algorithm we give for obtaining the best possible partition of an arbitrary graph into supernodes, which we show to be polynomial-time, makes it possible to use any formulation of vertex multicolouring to encode vertex colouring. The power of this approach is shown on the benchmark problem of Udine Course Timetabling. Results from empirical tests on DIMACS colouring instances, as well as on instances from other timetabling applications, are also provided and discussed.
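Under the definition quoted above, two vertices can share a supernode exactly when their closed neighbourhoods N[v] coincide (they are "true twins": adjacent to each other and to the same vertices elsewhere). A minimal sketch of partitioning on that reading (an illustrative grouping, not the paper's algorithm or formulation):

```python
from collections import defaultdict

def supernode_partition(adj):
    """Group vertices by closed neighbourhood N[v].  Each class is a
    clique whose members have identical neighbours outside the class,
    i.e. a supernode in the George-McIntyre sense."""
    classes = defaultdict(list)
    for v, nbrs in adj.items():
        classes[frozenset(nbrs | {v})].append(v)  # key on N[v]
    return sorted(sorted(c) for c in classes.values())

# a and b are true twins (both see exactly {a, b, c}); c and d are alone.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(supernode_partition(adj))  # [['a', 'b'], ['c'], ['d']]
```

Colouring the quotient graph with supernode sizes as multicolouring demands then encodes the original vertex colouring instance.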
On the P4-components of Graphs
 Discrete Appl. Math
, 1997
"... Two edges are called P 4 adjacent if they belong to the same P 4 (chordless path on 4 vertices). P 4 components, in our terminology, are the equivalence classes of the transitive closure of the P 4 adjacency relation. In this paper, new results on the structure of P 4 components are obtained. On ..."
Abstract

Cited by 2 (0 self)
Two edges are called P4-adjacent if they belong to the same P4 (chordless path on 4 vertices). P4-components, in our terminology, are the equivalence classes of the transitive closure of the P4-adjacency relation. In this paper, new results on the structure of P4-components are obtained. On the one hand, these results allow us to improve the complexity of the recognition and orientation algorithms for P4-comparability and P4-indifference graphs from O(n^5) to O(n^2 m) and from O(n^6) to O(n^2 m), respectively. On the other hand, by combining the modular decomposition with the substitution of P4-components, a new unique tree representation for arbitrary graphs is derived which generalizes the homogeneous decomposition introduced by Jamison and Olariu [JO95]. 1 Introduction. A Pk (Ck) is a chordless path (cycle) on k vertices. By the P4 abcd, we denote the P4 with vertices a, b, c, d and edges ab, bc and cd. An orientation U of a graph G is the antisymmet...
Betweenness and comparability obtained from binary relations
, 2006
"... www.cosc.brocku.ca Betweenness and comparability obtained from binary relations ..."
Abstract

Cited by 1 (0 self)
The Communication Complexity of Distributed Set-Joins with Applications to Matrix Multiplication
"... Given a setcomparison predicateP and given two lists of setsA = (A1,..., Am) and B = (B1,..., Bm), with all Ai, Bj ⊆ [n], the Pset join A./P B is defined to be the set {(i, j) ∈ [n] × [n]  P(Ai, Bj)} ([n] denotes {1, 2,..., n}). When P(Ai, Bj) is the condition “Ai ∩ Bj 6 = ∅ ” we call this the ..."
Abstract
Given a set-comparison predicate P and two lists of sets A = (A1, ..., Am) and B = (B1, ..., Bm), with all Ai, Bj ⊆ [n], the P-set join A ⋈P B is defined to be the set {(i, j) ∈ [m] × [m] | P(Ai, Bj)} ([n] denotes {1, 2, ..., n}). When P(Ai, Bj) is the condition “Ai ∩ Bj ≠ ∅” we call this the set-intersection-not-empty join (a.k.a. the composition of A and B); when P(Ai, Bj) is “Ai ∩ Bj = ∅” we call it the set-disjointness join; when P(Ai, Bj) is “Ai = Bj” we call it the set-equality join; when P(Ai, Bj) is “|Ai ∩ Bj| ≥ T” for a given threshold T, we call it the set-intersection threshold join. Assuming A and B are stored at two different sites in a distributed environment, we study the (randomized) communication complexity of computing these, and related, set-joins A ⋈P B, as well as the (randomized) communication complexity of computing the exact and approximate value of their size k = |A ⋈P B|. Combined, our analyses shed new insights into the quantitative differences between these different set-joins. Furthermore, given the close affinity of the natural join and the set-intersection-not-empty join, our results also yield communication complexity results for computing the natural join in a distributed environment. Additionally, we obtain new algorithms for computing the distributed set-intersection-not-empty join when the input and/or output is sparse. For instance, when the output is k-sparse, we improve an Õ(kn) communication algorithm of Williams and Yu (SODA 2014). Observing that the set-intersection-not-empty join is isomorphic to Boolean matrix multiplication (BMM), our results imply new algorithms for fundamental graph-theoretic problems related to BMM. For example, we show how to compute the transitive closure of a directed graph in Õ(k^(3/2)) time, when the transitive closure contains at most k edges.
When k = O(n), we obtain a (practical) Õ(n^(3/2)) time algorithm, improving a recent Õ(n · n^((ω+1)/4)) time algorithm (Borassi, Crescenzi, and Habib, arXiv 2014) based on (impractical) fast matrix multiplication, where ω ≥ 2 is the exponent for matrix multiplication.
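The set-intersection-not-empty join described above, and its correspondence to Boolean matrix multiplication, can be illustrated with a naive quadratic sketch (this ignores the paper's distributed/communication setting entirely; names are illustrative):

```python
def intersect_join(A, B):
    """Set-intersection-not-empty join: all index pairs (i, j) with
    A[i] ∩ B[j] != ∅.  Viewing A as a Boolean matrix whose rows are the
    characteristic vectors of the Ai, and B likewise, the join is exactly
    the support of the Boolean product A · Bᵀ."""
    return {(i, j)
            for i, Ai in enumerate(A)
            for j, Bj in enumerate(B)
            if Ai & Bj}  # non-empty intersection

A = [{1, 2}, {3}]
B = [{2}, {3, 4}]
print(sorted(intersect_join(A, B)))  # [(0, 0), (1, 1)]
```

The BMM view is what lets sparse-output join algorithms transfer to graph problems such as transitive closure, as claimed in the abstract.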
EXPLOITING STRUCTURE IN INTEGER PROGRAMS
, 2011
"... This dissertation argues the case for exploiting certain structures in integer linear programs. Integer linear programming is a wellknown optimisation problem, which seeks the optimum of a linear function of variables, whose values are required to be integral as well as to satisfy certain linear eq ..."
Abstract
This dissertation argues the case for exploiting certain structures in integer linear programs. Integer linear programming is a well-known optimisation problem, which seeks the optimum of a linear function of variables whose values are required to be integral as well as to satisfy certain linear equalities and inequalities. The state of the art in solvers for this problem is the “branch and bound” approach. The performance of such solvers depends crucially on four types of in-built heuristics: primal, improvement, branching, and cut-separation or, more generally, bounding heuristics. Such heuristics in general-purpose solvers have not, until recently, exploited structure in integer linear programs beyond the recognition of certain types of single-row constraints. Many alternative approaches to integer linear programming can be cast in the following, novel framework. “Structure” in any integer linear program