Results 1 - 10 of 39
Perspectives of Monge Properties in Optimization
, 1995
Cited by 70 (2 self)
An m × n matrix C is called a Monge matrix if c_ij + c_rs <= c_is + c_rj for all 1 <= i < r <= m, 1 <= j < s <= n. In this paper we present a survey on Monge matrices and related Monge properties and their role in combinatorial optimization. Specifically, we deal with the following three main topics: (i) fundamental combinatorial properties of Monge structures, (ii) applications of Monge properties to optimization problems and (iii) recognition of Monge properties.
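The Monge condition above is simple enough to test directly. As a minimal illustration (not taken from the survey), a brute-force check over all index quadruples:

```python
from itertools import combinations

def is_monge(C):
    """Brute-force O(m^2 n^2) check of the Monge condition
    C[i][j] + C[r][s] <= C[i][s] + C[r][j] for all i < r and j < s
    (0-based indices here)."""
    m, n = len(C), len(C[0])
    return all(C[i][j] + C[r][s] <= C[i][s] + C[r][j]
               for i, r in combinations(range(m), 2)
               for j, s in combinations(range(n), 2))

# C[i][j] = (i - j)^2 satisfies the inequality; C[i][j] = i * j violates it.
print(is_monge([[(i - j) ** 2 for j in range(4)] for i in range(4)]))  # True
print(is_monge([[i * j for j in range(4)] for i in range(4)]))         # False
```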
Sparse dynamic programming I: Linear cost functions
 J. Assoc. Comp. Mach.
, 1992
Cited by 55 (3 self)
We consider dynamic programming solutions to a number of different recurrences for sequence comparison and for RNA secondary structure prediction. These recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set matters for the result. We give efficient algorithms for these problems when the weight functions used in the recurrences are taken to be linear. Our algorithms reduce the best known bounds by a factor almost linear in the density of the problems; when the problems are sparse this results in a substantial speedup. Introduction: Sparsity is a phenomenon that has long been exploited for efficient algorithms. For instance, most of the best known graph algorithms take time bounded by a function of the number of actual edges in the graph, rather than the maximum possible number of edges. The algorithms we study in this paper perform various kinds of sequence analysis, which are typically solved by dynamic programming in a matrix indexed by positions in the input sequences. Only two such problems are already known to be solved by algorithms taking advantage of ...
Improved complexity bounds for location problems on the real line
 Operations Research Letters
, 1991
Sequence Comparison with Mixed Convex and Concave Costs
, 1989
Cited by 26 (1 self)
Recently a number of algorithms have been developed for solving the minimum-weight edit sequence problem with nonlinear costs for multiple insertions and deletions. We extend these algorithms to cost functions that are neither convex nor concave, but a mixture of both. We also apply this technique ...
Speeding up Dynamic Programming
 In Proc. 29th Symp. Foundations of Computer Science
, 1988
Cited by 22 (0 self)
In this paper we consider the problem of computing two similar recurrences: the one-dimensional case ...
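The one-dimensional recurrence referred to here has the form E[j] = min over 0 <= i < j of E[i] + w(i, j). As a baseline sketch (the paper's contribution is speeding this up when w is convex or concave, which this naive version ignores):

```python
def least_weight_subsequence(n, w):
    """Naive O(n^2) evaluation of the one-dimensional recurrence
    E[j] = min_{0 <= i < j} (E[i] + w(i, j)), with E[0] = 0.
    Serves only as the quadratic baseline that the cited
    techniques improve upon for convex/concave w."""
    E = [0] * (n + 1)
    for j in range(1, n + 1):
        E[j] = min(E[i] + w(i, j) for i in range(j))
    return E

# With w(i, j) = (j - i)^2, taking unit steps is optimal: E[j] = j.
print(least_weight_subsequence(4, lambda i, j: (j - i) ** 2))  # [0, 1, 2, 3, 4]
```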
Linear and O(n log n) Time Minimum-Cost Matching Algorithms for Quasiconvex Tours (Extended Abstract)
Cited by 14 (3 self)
Samuel R. Buss and Peter N. Yianilos. Let G be a complete, weighted, undirected, bipartite graph with n red nodes, n′ blue nodes, and symmetric cost function c(x, y). A maximum matching for G consists of min{n, n′} edges from distinct red nodes to distinct blue nodes. Our objective is to find a minimum-cost maximum matching, i.e. one for which the sum of the edge costs has minimal value. This is the weighted bipartite matching problem, or, as it is sometimes called, the assignment problem.
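For intuition only, the assignment problem on a small square cost matrix can be solved by brute force over permutations; this sketch deliberately ignores the quasiconvex-tour structure that lets the cited paper reach O(n log n) or linear time:

```python
from itertools import permutations

def assignment_brute_force(cost):
    """Minimum-cost perfect matching of an n x n cost matrix by
    enumerating all n! assignments of red node i to blue node p[i].
    Exponential time, so suitable for tiny n only; shown purely
    to make the problem statement concrete."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(assignment_brute_force([[4, 1], [2, 3]]))  # 3 (pair 0-1 and 1-0)
```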
Constructing Huffman Trees in Parallel
 SIAM JOURNAL ON COMPUTING
, 1995
Cited by 9 (0 self)
We present a parallel algorithm for the Huffman Coding problem. We reduce the Huffman Coding problem to the Concave Least Weight Subsequence problem and give a parallel algorithm that solves the latter problem in O(√n log n) time with n processors on a CREW PRAM. This leads to the first sublinear-time, o(n²)-total-work parallel algorithm for Huffman Coding. This reduction of the Huffman Coding problem to the Concave Least Weight Subsequence problem also yields an alternative O(n log n)-time (or linear-time, for a sorted input sequence) algorithm for Huffman Coding.
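The O(n log n) sequential bound mentioned above matches the classical binary-heap construction. A minimal sketch of that standard algorithm, computing the cost of an optimal Huffman tree (this is the textbook method, not the paper's reduction):

```python
import heapq

def huffman_cost(weights):
    """Total weighted path length of an optimal Huffman tree,
    via the classical O(n log n) binary-heap algorithm: repeatedly
    merge the two lightest subtrees.  (For weights given in sorted
    order, a two-queue variant achieves linear time.)"""
    heap = list(weights)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

print(huffman_cost([1, 1, 2, 2]))  # 12
```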
Approximate Regular Expression Pattern Matching with Concave Gap Penalties
 ALGORITHMICA
, 1992
Cited by 9 (0 self)
Given a sequence A of length M and a regular expression R of length P, an approximate regular expression pattern matching algorithm computes the score of the optimal alignment between A and one of the sequences B exactly matched by R. An alignment between sequences A = a_1 a_2 ... a_M and B = b_1 b_2 ... b_N is a list of ordered pairs <(i_1, j_1), (i_2, j_2), ..., (i_t, j_t)> such that i_k < i_{k+1} and j_k < j_{k+1}. In this case, the alignment aligns symbols a_{i_k} and b_{j_k}, and leaves blocks of unaligned symbols, or gaps, between them. A scoring scheme S associates costs with each aligned symbol pair and each gap. The alignment's score is the sum of the associated costs, and an optimal alignment is one of minimal score. There are a variety of schemes for scoring alignments. In a concave gap-penalty scoring scheme S = {δ, w}, a function δ(a, b) gives the score of each aligned pair of symbols a and b, and a concave function w(k) gives the score of a gap of length ...
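To make the scoring definition concrete, here is a small scorer for a fixed alignment under a scheme of per-pair costs delta and gap costs w. Treating unaligned blocks in each sequence as separate gaps is a modeling assumption of this sketch, not something taken from the paper:

```python
def alignment_score(A, B, pairs, delta, w):
    """Score an alignment of A and B.  `pairs` lists the aligned
    0-based positions (i_1, j_1), ..., (i_t, j_t), strictly
    increasing in both coordinates.  Each aligned pair costs
    delta(a, b); each maximal block of k unaligned symbols costs
    w(k), scored per sequence here (an assumption of this sketch)."""
    score = sum(delta(A[i], B[j]) for i, j in pairs)
    bounds_a = [-1] + [i for i, _ in pairs] + [len(A)]
    bounds_b = [-1] + [j for _, j in pairs] + [len(B)]
    for bounds in (bounds_a, bounds_b):
        for lo, hi in zip(bounds, bounds[1:]):
            k = hi - lo - 1  # unaligned symbols between consecutive pairs
            if k > 0:
                score += w(k)
    return score

# Aligning "axb" with "ab" on (a,a) and (b,b) leaves one gap of length 1.
delta = lambda a, b: 0 if a == b else 1
print(alignment_score("axb", "ab", [(0, 0), (2, 1)], delta, lambda k: 1 + k))  # 2
```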
Parallel Construction Of Optimal Alphabetic Trees
 PROCEEDINGS OF THE 5 TH ACM SYMPOSIUM ON PARALLEL ALGORITHMS AND ARCHITECTURES
, 1993
Cited by 8 (3 self)
A parallel algorithm is given which constructs an optimal alphabetic tree in O(log³ n) time with n² log n processors. The construction is basically a parallelization of the Garsia-Wachs version [5] of the Hu-Tucker algorithm [8]. The best previous NC algorithm for the problem uses n^6 / log^{O(1)} n processors [15]. Our method is an extension of techniques used first in [3] and later in [13] for the Huffman coding problem, which can be viewed as the alphabetic tree problem for the special case of a monotone weight sequence. In this paper, we extend to the case of certain "almost monotone" sequences, which we call "sorted regular valleys." The processing of such subsequences depends on a quadrangle inequality, while the total number of global iterations depends on a kind of tree contraction. Altogether we can view our algorithmic approach as (quadrangle inequality + tree contraction). An optimal alphabetic tree is a special case of an optimal binary search tree where all ...
Approximation of Staircases By Staircases
, 1992
Cited by 7 (3 self)
The simplest nontrivial monotone functions are "staircases." The problem arises: what is the best approximation of some monotone function f(x) by a staircase with M jumps? In particular: what if f(x) is itself a staircase with N, N > M, steps? This paper considers algorithms for solving, and theorems relating to, this problem. All of the algorithms we propose are space-optimal up to a constant factor and also runtime-optimal except for at most a logarithmic factor. One application of our results is to "data compression" of probability distributions. We find yet another remarkable property of Monge's inequality, called the "concave cost as a function of zigzag number" theorem. This property leads to new ways to get speedups in certain 1-dimensional dynamic programming problems satisfying this inequality. Keywords: Histograms, data compression, cumulative distribution functions, approximation, monotone functions, dynamic programming, Monge's quadrangle inequality, concave cost ...