Results 1–10 of 17
Mesh Generation And Optimal Triangulation
, 1992
Abstract
Cited by 180 (8 self)
We survey the computational geometry relevant to finite element mesh generation. We especially focus on optimal triangulations of geometric domains in two and three dimensions. An optimal triangulation is a partition of the domain into triangles or tetrahedra that is best according to some criterion that measures the size, shape, or number of triangles. We discuss algorithms both for the optimization of triangulations on a fixed set of vertices and for the placement of new vertices (Steiner points). We briefly survey the heuristic algorithms used in some practical mesh generators.
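One concrete shape criterion from this literature is the max-min-angle rule: among candidate triangulations, prefer the one whose smallest angle is largest. For a convex quadrilateral there are exactly two triangulations, one per diagonal, so the rule reduces to a simple comparison. The sketch below is illustrative and not from the paper; the point format (2D `(x, y)` tuples) and function names are assumptions.

```python
import math

def _angle(p, q, r):
    """Angle at vertex p of triangle p, q, r, in radians."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def min_angle(a, b, c):
    """Smallest of the three interior angles of triangle a, b, c."""
    return min(_angle(a, b, c), _angle(b, a, c), _angle(c, a, b))

def better_diagonal(p0, p1, p2, p3):
    """For a convex quad p0 p1 p2 p3 (in order), pick the diagonal whose
    two triangles have the larger smallest angle (max-min-angle rule)."""
    score02 = min(min_angle(p0, p1, p2), min_angle(p0, p2, p3))
    score13 = min(min_angle(p1, p2, p3), min_angle(p1, p3, p0))
    return "p0p2" if score02 >= score13 else "p1p3"
```

Repeatedly flipping diagonals that fail this test is the classical local-improvement route to the Delaunay triangulation, one of the optimal triangulations the survey discusses.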
Perspectives of Monge Properties in Optimization
, 1995
Abstract
Cited by 54 (3 self)
An m × n matrix C is called a Monge matrix if c_ij + c_rs ≤ c_is + c_rj holds for all 1 ≤ i < r ≤ m and 1 ≤ j < s ≤ n. In this paper we present a survey on Monge matrices and related Monge properties and their role in combinatorial optimization. Specifically, we deal with the following three main topics: (i) fundamental combinatorial properties of Monge structures, (ii) applications of Monge properties to optimization problems and (iii) recognition of Monge properties.
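The defining inequality can be checked directly. A brute-force sketch (illustrative, not from the paper) that tests every pair of rows and columns:

```python
# Brute-force check of the Monge property:
# c[i][j] + c[r][s] <= c[i][s] + c[r][j] for all i < r and j < s.
def is_monge(c):
    """Return True iff matrix c (a list of equal-length rows) is Monge."""
    m, n = len(c), len(c[0])
    return all(
        c[i][j] + c[r][s] <= c[i][s] + c[r][j]
        for i in range(m) for r in range(i + 1, m)
        for j in range(n) for s in range(j + 1, n)
    )
```

A standard fact from this literature is that it suffices to check adjacent rows and columns (r = i + 1, s = j + 1), which drops the cost from O(m²n²) to O(mn); the exhaustive version above is kept only for clarity.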
Speeding up Dynamic Programming
 In Proc. 29th Symp. Foundations of Computer Science
, 1988
Abstract
Cited by 19 (0 self)
In this paper we consider the problem of computing two similar recurrences: the one-dimensional case ...
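The one-dimensional recurrences in this line of work typically have the shape E[j] = min over i < j of (E[i] + w(i, j)). A naive O(n²) evaluation, sketched below, is the baseline that such papers speed up; the quadratic weight function in the test is a hypothetical example, and the speedups apply when w satisfies structural conditions such as concavity or the Monge (quadrangle) inequality.

```python
# Naive O(n^2) evaluation of the one-dimensional recurrence
#   E[j] = min over 0 <= i < j of (E[i] + w(i, j)),  E[0] = 0.
def solve_recurrence(n, w):
    E = [0.0] * (n + 1)
    for j in range(1, n + 1):
        E[j] = min(E[i] + w(i, j) for i in range(j))
    return E
```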
Parallel Algorithm for the Matrix Chain Product and the Optimal Triangulation Problems (Extended Abstract)
, 1993
Abstract
Cited by 17 (2 self)
This paper considers the problem of finding an optimal order for a matrix multiplication chain and the problem of finding an optimal triangulation of a convex polygon. For both of these problems the best sequential algorithms run in Θ(n log n) time. All known parallel algorithms use the dynamic programming paradigm and run in polylogarithmic time using, in the best case, O(n^6 / log^k n) processors for a constant k. We give a new algorithm which uses a different approach and reduces the problem to computing a certain recurrence in a tree. We show that this recurrence can be solved optimally, which enables us to improve the parallel bound by a few factors. Our algorithm runs in O(log^3 n) time using n^2 / log^3 n processors on a CREW PRAM. We also consider the problem of finding an optimal triangulation of a monotone polygon. An O(log^2 n)-time, n-processor algorithm on a CREW PRAM is given. Keywords: parallel algorithms, computational geometry, dynamic programming ...
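For reference, the standard sequential dynamic program for the matrix chain product problem, which the parallel algorithms above compete against, is the textbook O(n³) recurrence (a sketch, not the paper's algorithm):

```python
# Classic matrix chain ordering DP. Matrix i has dimensions
# dims[i] x dims[i+1]; cost[i][j] is the cheapest number of scalar
# multiplications needed to compute the product A_i ... A_j.
def matrix_chain_cost(dims):
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # split point
            )
    return cost[0][n - 1]
```

For example, with chains of shape 10×20, 20×5, 5×30, multiplying the first pair first costs 10·20·5 + 10·5·30 = 2500 scalar multiplications, versus 9000 for the other order.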
Linear and O(n log n) Time Minimum-Cost Matching Algorithms for Quasiconvex Tours (Extended Abstract)
Abstract
Cited by 17 (3 self)
Samuel R. Buss and Peter N. Yianilos. Let G be a complete, weighted, undirected, bipartite graph with n red nodes, n′ blue nodes, and symmetric cost function c(x, y). A maximum matching for G consists of min{n, n′} edges from distinct red nodes to distinct blue nodes. Our objective is to find a minimum-cost maximum matching, i.e. one for which the sum of the edge costs has minimal value. This is the weighted bipartite matching problem or, as it is sometimes called, the assignment problem.
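A minimal brute-force sketch of the assignment problem as stated above, for the square case n = n′: enumerate every way to pair red with blue nodes and keep the cheapest. This is O(n!) and usable only for tiny instances; the paper's contribution is that quasiconvex cost structure admits O(n) or O(n log n) algorithms instead.

```python
from itertools import permutations

def min_cost_matching(cost):
    """cost[r][b] = cost of the edge between red node r and blue node b
    (square matrix). Returns the cost of a minimum-cost perfect matching."""
    n = len(cost)
    return min(
        sum(cost[r][b] for r, b in enumerate(perm))
        for perm in permutations(range(n))
    )
```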
A Generic Program for Sequential Decision Processes
 Programming Languages: Implementations, Logics, and Programs
, 1995
Abstract
Cited by 13 (2 self)
This paper is an attempt to persuade you of my viewpoint by presenting a novel generic program for a certain class of optimisation problems, named sequential decision processes. This class was originally identified by Richard Bellman in his pioneering work on dynamic programming [4]. It is a perfect example of a class of problems which are very much alike, but which have until now escaped solution by a single program. Those readers who have followed some of the work that Richard Bird and I have been doing over the last five years [6, 7] will recognise many individual examples: all of these have now been unified. The point of this observation is that even when you are on the lookout for generic programs, it can take a rather long time to discover them. The presentation below will follow that earlier work, by referring to the calculus of relations and the relational theory of data types. I shall however attempt to be light on the formalism, as I do not regard it as essential to the main thesis of this paper. Undoubtedly there are other (perhaps more convenient) notations in which the same ideas could be developed. This paper does assume some degree of familiarity with a lazy functional programming language such as Haskell, Hope, Miranda ...
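To convey the flavour of a single program covering a whole class of problems, here is a hypothetical sketch in Python rather than the paper's relational setting: one memoized solver parameterized only by the problem's terminal test and its cost-labelled moves. The names and interface are assumptions for illustration, not the paper's construction.

```python
from functools import lru_cache

def solve_sdp(start, is_final, moves):
    """Generic sequential decision process solver.
    moves(state) yields (cost, next_state) pairs; states must be hashable
    and every move sequence must eventually reach a final state.
    Returns the cheapest total cost from start to any final state."""
    @lru_cache(maxsize=None)
    def best(state):
        if is_final(state):
            return 0
        return min(c + best(s) for c, s in moves(state))
    return best(start)
```

Instantiating `is_final` and `moves` differently yields shortest paths, chain products, segmentations, and the other examples the paper unifies.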
Efficient Matrix Chain Ordering in Polylog Time
 In Proc. of Int'l Parallel Processing Symp.
, 1998
Abstract
Cited by 10 (3 self)
The matrix chain ordering problem is to find the cheapest way to multiply a chain of n matrices, where the matrices are pairwise compatible but of varying dimensions. Here we give several new parallel algorithms, including O(lg^3 n)-time, n/lg n-processor algorithms for solving the matrix chain ordering problem and for solving an optimal triangulation problem of convex polygons on the common CRCW PRAM model. Next, by using efficient algorithms for computing the row minima of totally monotone matrices, this complexity is improved to O(lg^2 n) time with n processors on the EREW PRAM and to O(lg^2 n lg lg n) time with n/lg lg n processors on a common CRCW PRAM. A new algorithm for computing the row minima of totally monotone matrices improves our parallel MCOP algorithm to O(n lg^1.5 n) work and polylog time on a CREW PRAM. Optimal log-time algorithms for computing the row minima of totally monotone matrices would improve our algorithm and enable it to have the same work as the sequential algorithm of Hu and ...
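The optimal triangulation problem mentioned alongside MCOP admits the same classical sequential dynamic program, sketched here under the standard reduction in which a triangle on polygon vertices i, k, j costs w[i]·w[k]·w[j] for given vertex weights w (an illustrative baseline, not the parallel algorithm of the paper):

```python
# Minimum-cost triangulation of a convex polygon with vertex weights w;
# cost[i][j] is the cheapest triangulation of the sub-polygon i..j.
def min_triangulation(w):
    n = len(w)
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n):
        for i in range(n - length):
            j = i + length
            cost[i][j] = min(
                cost[i][k] + cost[k][j] + w[i] * w[k] * w[j]
                for k in range(i + 1, j)
            )
    return cost[0][n - 1]
```

Under this reduction the two problems coincide: vertex weights [10, 20, 5, 30] reproduce the cost of the best ordering for the chain of 10×20, 20×5, 5×30 matrices.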
Optimum Binary Search Trees On The Hierarchical Memory Model
, 2001
Abstract
Cited by 8 (1 self)
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a nonuniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by μ(a), where μ : N → N is the memory cost function, and the h distinct values of μ model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h = 2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter for the HMM inc...
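The well-studied RAM-model baseline that this paper generalizes is the classical optimum-BST dynamic program. A sketch with uniform access cost, success probabilities p only (unsuccessful-search probabilities omitted for brevity):

```python
# Classical O(n^3) DP for an optimum BST on the uniform-cost RAM model.
# cost[i][j] is the expected search cost over keys i..j-1, where each
# comparison adds the total probability mass of the subtree searched.
def optimal_bst_cost(p):
    n = len(p)
    pre = [0.0] * (n + 1)                   # prefix sums of probabilities
    for i, x in enumerate(p):
        pre[i + 1] = pre[i] + x
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cost[i][j] = (pre[j] - pre[i]) + min(
                cost[i][r] + cost[r + 1][j] for r in range(i, j)  # r = root
            )
    return cost[0][n]
```

On the HMM, the per-node access cost would additionally depend on where each tree node is stored, via the memory cost function; that coupling of construction and storage is what the paper studies.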
Efficient Parallel Dynamic Programming
, 1994
Abstract
Cited by 6 (1 self)
In 1983, Valiant, Skyum, Berkowitz and Rackoff showed that many problems with simple O(n^3) sequential dynamic programming solutions are in the class NC. They used straight-line programs to show that these problems can be solved in O(lg^2 n) time with n^9 processors. In 1988, Rytter used pebbling games to show that these same problems can be solved on a CREW PRAM in O(lg^2 n) time with n^6 / lg n processors. Recently, Huang, Liu and Viswanathan [23] and Galil and Park [15] gave algorithms that improve this processor complexity by polylog factors. Using a graph structure that is analogous to the classical dynamic programming table, this paper improves these results. First, this graph characterization leads to a polylog-time and n^6 / lg n-processor algorithm that solves these problems. Second, there follows a sub-polylog-time and sublinear-processor parallel approximation algorithm for the matrix chain ordering problem. Finally, this paper presents an n^3 / lg n-processor and O...
Approximation of Staircases By Staircases
, 1992
Abstract
Cited by 5 (3 self)
The simplest nontrivial monotone functions are "staircases." The problem arises: what is the best approximation of some monotone function f(x) by a staircase with M jumps? In particular: what if f(x) is itself a staircase with N steps, N > M? This paper considers algorithms for solving, and theorems relating to, this problem. All of the algorithms we propose are space-optimal up to a constant factor, and also runtime-optimal except for at most a logarithmic factor. One application of our results is to "data compression" of probability distributions. We find yet another remarkable property of Monge's inequality, called the "concave cost as a function of zigzag number" theorem. This property leads to new ways to get speedups in certain 1-dimensional dynamic programming problems satisfying this inequality. Keywords: histograms, data compression, cumulative distribution functions, approximation, monotone functions, dynamic programming, Monge's quadrangle inequality, concave cost ...
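A hedged sketch of the underlying computation, under the assumption that the staircase is given as a step sequence and error is measured in the L1 norm (the concrete cost model here is an illustration, not the paper's): partition the N steps into M blocks, approximate each block by its best constant, and minimize total error by dynamic programming.

```python
# Naive O(M * N^2) DP (ignoring the cost of block_cost): approximate a
# monotone step sequence y[0..N-1] by a staircase with M levels. The
# papers above speed this kind of DP up by exploiting Monge-type
# structure in the block-cost function.
def best_staircase(y, M):
    N = len(y)
    def block_cost(i, j):
        """L1 error of approximating y[i:j] by one constant (a median)."""
        med = sorted(y[i:j])[(j - i) // 2]
        return sum(abs(v - med) for v in y[i:j])
    INF = float("inf")
    D = [[INF] * (M + 1) for _ in range(N + 1)]   # D[j][m]: first j steps, m levels
    D[0][0] = 0.0
    for j in range(1, N + 1):
        for m in range(1, M + 1):
            D[j][m] = min(D[i][m - 1] + block_cost(i, j) for i in range(j))
    return D[N][M]
```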