Results 1–10 of 69
Introduction to Algorithms, second edition
 BOOK, 2001
"... This part will get you started in thinking about designing and analyzing algorithms.
It is intended to be a gentle introduction to how we specify algorithms, some of the
design strategies we will use throughout this book, and many of the fundamental
ideas used in algorithm analysis. Later parts of t ..."
Abstract

Cited by 707 (3 self)
This part will get you started in thinking about designing and analyzing algorithms.
It is intended to be a gentle introduction to how we specify algorithms, some of the
design strategies we will use throughout this book, and many of the fundamental
ideas used in algorithm analysis. Later parts of this book will build upon this base.
Chapter 1 is an overview of algorithms and their place in modern computing
systems. This chapter defines what an algorithm is and lists some examples. It also
makes a case that algorithms are a technology, just as are fast hardware, graphical
user interfaces, object-oriented systems, and networks.
In Chapter 2, we see our first algorithms, which solve the problem of sorting
a sequence of n numbers. They are written in a pseudocode which, although not
directly translatable to any conventional programming language, conveys the structure
of the algorithm clearly enough that a competent programmer can implement
it in the language of his choice. The sorting algorithms we examine are insertion
sort, which uses an incremental approach, and merge sort, which uses a recursive
technique known as “divide and conquer.” Although the time each requires increases
with the value of n, the rate of increase differs between the two algorithms.
We determine these running times in Chapter 2, and we develop a useful notation
to express them.
Chapter 3 precisely defines this notation, which we call asymptotic notation. It
starts by defining several asymptotic notations, which we use for bounding algorithm
running times from above and/or below. The rest of Chapter 3 is primarily a
presentation of mathematical notation. Its purpose is more to ensure that your use
of notation matches that in this book than to teach you new mathematical concepts.
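The two sorting strategies this abstract contrasts, the incremental approach of insertion sort and the divide-and-conquer approach of merge sort, can be sketched in Python. This is a generic illustration of the two techniques, not the book's pseudocode:

```python
def insertion_sort(a):
    """Incremental approach: grow a sorted prefix one element at a time."""
    a = list(a)
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

def merge_sort(a):
    """Divide and conquer: sort each half recursively, then merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
print(merge_sort([5, 2, 4, 6, 1, 3]))      # [1, 2, 3, 4, 5, 6]
```

Both produce the same output; the difference the book develops is their growth rate in n, roughly n^2 comparisons for insertion sort in the worst case against n log n for merge sort.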
Algorithms in Discrete Convex Analysis
 Math. Programming, 2000
"... this paper is to describe the f#eA damental results on M and Lconvex f#24L2A+ with special emphasis on algorithmic aspects. ..."
Abstract

Cited by 96 (21 self)
this paper is to describe the fundamental results on M-convex and L-convex functions, with special emphasis on algorithmic aspects.
The excluded minors for GF(4)-representable matroids
1997
"... There are exactly seven excluded minors for the class of GF(4)representable matroids. 1 Introduction We prove the following theorem. Theorem 1.1 A matroid M is GF(4)representable if and only if M has no minor isomorphic to any of U 2;6 , U 4;6 , P 6 , F \Gamma 7 , F \Gamma 7 , P 8 , and ..."
Abstract

Cited by 31 (8 self)
There are exactly seven excluded minors for the class of GF(4)-representable matroids. 1 Introduction. We prove the following theorem. Theorem 1.1. A matroid M is GF(4)-representable if and only if M has no minor isomorphic to any of U_{2,6}, U_{4,6}, P_6, F_7^-, (F_7^-)^*, P_8, and P_8''. The definitions of these matroids, with a summary of their interesting properties, can be found in the Appendix. Other than P_8'', they were all known to be excluded minors for GF(4)-representability (see Oxley [13,15]). The matroid P_8'' is obtained by relaxing the unique pair of disjoint circuit-hyperplanes of P_8. Ever since Whitney's introductory paper [24] on matroid theory, researchers have sought ways to distinguish the representable matroids. For any field F, the class of F-representable matroids is closed under taking minors. Thus, it is natural to characterize the minor-minimal matroids that are not F-representable; we refer to such matroids as excluded ...
Geometric optimization of the evaluation of finite element matrices
 SIAM J. Sci. Comput
"... Abstract. Assembling stiffness matrices represents a significant cost in many finite element computations. We address the question of optimizing the evaluation of these matrices. By finding redundant computations, we are able to significantly reduce the cost of building local stiffness matrices for ..."
Abstract

Cited by 16 (13 self)
Assembling stiffness matrices represents a significant cost in many finite element computations. We address the question of optimizing the evaluation of these matrices. By finding redundant computations, we are able to significantly reduce the cost of building local stiffness matrices for the Laplace operator and for the trilinear form for Navier-Stokes. For the Laplace operator in two space dimensions, we have developed a heuristic graph algorithm that searches for such redundancies and generates code for computing the local stiffness matrices. Up to cubics, we are able to build the stiffness matrix on any triangle in less than one multiply-add pair per entry. Up to sixth degree, we can do it in less than about two. Preliminary low-degree results for Poisson and Navier-Stokes operators in three dimensions are also promising.
Determinantal probability measures
2002
"... Abstract. Determinantal point processes have arisen in diverse settings in recent years and have been investigated intensively. We initiate a detailed study of the discrete analogue, the most prominent example of which has been the uniform spanning tree measure. Our main results concern relationship ..."
Abstract

Cited by 16 (3 self)
Determinantal point processes have arisen in diverse settings in recent years and have been investigated intensively. We initiate a detailed study of the discrete analogue, the most prominent example of which has been the uniform spanning tree measure. Our main results concern relationships with matroids, stochastic domination, negative association, completeness for infinite matroids, tail triviality, and a method for extension of results from orthogonal projections to positive contractions. We also present several new avenues for further investigation, involving Hilbert spaces, combinatorics, homology, ...
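A discrete determinantal measure is specified by a kernel matrix K (in the simplest case an orthogonal projection, which this abstract generalizes to positive contractions), with the defining property that the probability that a fixed set S is contained in the random subset equals the determinant of K restricted to S. A minimal numerical sketch of that defining property, as my own illustration rather than anything from the paper:

```python
import numpy as np

# A determinantal probability measure on subsets of {0, ..., n-1} is
# given by an n x n kernel K, with
#   P(S is contained in the random subset) = det(K[S, S]).

def inclusion_probability(K, S):
    """Determinant of the principal submatrix of K indexed by S."""
    return np.linalg.det(K[np.ix_(S, S)])

# Example: orthogonal projection onto the span of (1,1,1)/sqrt(3).
# A rank-1 projection yields a random set with exactly one element.
v = np.ones((3, 1)) / np.sqrt(3)
K = v @ v.T

for i in range(3):
    print(inclusion_probability(K, [i]))      # each singleton: 1/3
print(inclusion_probability(K, [0, 1]))       # two points: 0 (rank 1)
```

For an orthogonal projection, the random set almost surely has size equal to the rank, which is why the two-point inclusion probability above vanishes.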
Greedy in Approximation Algorithms
 PROC. OF ESA, 2006
"... The objective of this paper is to characterize classes of problems for which a greedy algorithm finds solutions provably close to optimum. To that end, we introduce the notion of kextendible systems, a natural generalization of matroids, and show that a greedy algorithm is a 1factor approximatio ..."
Abstract

Cited by 13 (1 self)
The objective of this paper is to characterize classes of problems for which a greedy algorithm finds solutions provably close to optimum. To that end, we introduce the notion of k-extendible systems, a natural generalization of matroids, and show that a greedy algorithm is a 1/k-factor approximation for these systems. Many seemingly unrelated problems fit in our framework, e.g.: b-matching, maximum profit scheduling and maximum asymmetric TSP. In the second half of the paper we focus on the maximum weight b-matching problem. The problem forms a 2-extendible system, so greedy gives us a 1/2-factor solution which runs in O(m log n) time. We improve this by providing two linear time approximation algorithms for the problem: a 1/2-factor algorithm that runs in O(bm) time, and a (2/3 − ε)-factor algorithm which runs in expected O(bm log(1/ε)) time.
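The greedy rule analyzed here is the classical one: scan elements in decreasing weight and keep each element that preserves independence. A generic sketch, where the independence oracle and the toy partition-matroid example are my illustration, not the paper's code:

```python
def greedy(elements, weight, is_independent):
    """Greedy for a weighted independence system: take elements in
    decreasing weight, keeping each one that stays independent.
    On a matroid this is optimal; on a k-extendible system it is a
    1/k-factor approximation (the abstract's main claim)."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

# Toy partition matroid: at most one item per group (group = first
# letter of the item's name), maximizing total weight.
items = [("a1", 1), ("a2", 5), ("b1", 3), ("b2", 2)]

def indep(S):
    groups = [name[0] for name, _ in S]
    return len(groups) == len(set(groups))

picked = greedy(items, weight=lambda it: it[1], is_independent=indep)
print(picked)  # [('a2', 5), ('b1', 3)]
```

On this matroid instance greedy is exact: it takes the heaviest item from each group.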
Algebraic Algorithms for Matching and Matroid Problems
 SIAM JOURNAL ON COMPUTING, 2009
"... We present new algebraic approaches for two wellknown combinatorial problems: nonbipartite matching and matroid intersection. Our work yields new randomized algorithms that exceed or match the efficiency of existing algorithms. For nonbipartite matching, we obtain a simple, purely algebraic algori ..."
Abstract

Cited by 11 (0 self)
We present new algebraic approaches for two well-known combinatorial problems: nonbipartite matching and matroid intersection. Our work yields new randomized algorithms that exceed or match the efficiency of existing algorithms. For nonbipartite matching, we obtain a simple, purely algebraic algorithm with running time O(n^ω), where n is the number of vertices and ω is the matrix multiplication exponent. This resolves the central open problem of Mucha and Sankowski (2004). For matroid intersection, our algorithm has running time O(nr^(ω−1)) for matroids with n elements and rank r that satisfy some natural conditions.
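The algebraic approach to nonbipartite matching rests on Tutte's theorem: a graph has a perfect matching iff its Tutte matrix is nonsingular, which Lovász's randomized test checks by substituting random values modulo a prime and computing the rank. The sketch below shows that basic test, assuming a simple undirected graph; it is not the paper's O(n^ω) matching algorithm:

```python
import random

def has_perfect_matching(n, edges, p=1_000_003, trials=5):
    """Randomized Tutte-matrix test: substitute random values mod the
    prime p into the skew-symmetric Tutte matrix; full rank certifies
    a perfect matching, and by Schwartz-Zippel a graph with a perfect
    matching yields full rank except with probability <= n/p per trial."""
    def full_rank_once():
        T = [[0] * n for _ in range(n)]
        for u, v in edges:
            x = random.randrange(1, p)
            T[u][v] = x
            T[v][u] = (-x) % p
        # Gaussian elimination mod p to decide nonsingularity.
        rank = 0
        for col in range(n):
            piv = next((r for r in range(rank, n) if T[r][col]), None)
            if piv is None:
                return False          # a pivot-free column: singular
            T[rank], T[piv] = T[piv], T[rank]
            inv = pow(T[rank][col], p - 2, p)   # inverse mod prime p
            for r in range(rank + 1, n):
                f = T[r][col] * inv % p
                if f:
                    for c in range(col, n):
                        T[r][c] = (T[r][c] - f * T[rank][c]) % p
            rank += 1
        return True                   # full rank: perfect matching exists
    return any(full_rank_once() for _ in range(trials))

# Triangle plus a pendant vertex: matching {(0,1), (2,3)} exists.
print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # True
# A path on 3 vertices has odd order, hence no perfect matching.
print(has_perfect_matching(3, [(0, 1), (1, 2)]))                  # False
```

The error is one-sided: a full-rank trial is a proof that a matching exists, while a graph with no perfect matching has an identically singular Tutte matrix and is always rejected.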