Results 1–10 of 12
A Linear Time Algorithm for the k Maximal Sums Problem
Cited by 6 (2 self)
Abstract. Finding the subvector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k subvectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d-1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d-1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
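For reference, the k maximal sums objective described above can be sketched with a simple brute-force baseline over prefix sums. This enumerates all O(n^2) subvector sums and is nothing like the paper's optimal O(n + k) algorithm; the function name is ours.

```python
import heapq

def k_maximal_sums(a, k):
    """Return the k largest subvector (contiguous subarray) sums of a.

    Illustrative O(n^2) enumeration via prefix sums -- a correctness
    baseline, not the paper's optimal O(n + k) algorithm.
    """
    n = len(a)
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    # Sum of a[i:j] equals prefix[j] - prefix[i] for 0 <= i < j <= n.
    sums = (prefix[j] - prefix[i]
            for i in range(n) for j in range(i + 1, n + 1))
    return heapq.nlargest(k, sums)
```

For example, `k_maximal_sums([1, -2, 3, 4, -5], 3)` returns `[7, 6, 5]`: the subvectors [3, 4], [1, -2, 3, 4], and [-2, 3, 4].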
Computing maximum-scoring segments in almost linear time
In Proceedings of the 12th Annual International Computing and Combinatorics Conference, volume 4112 of LNCS, 2006
Cited by 3 (1 self)
Given a sequence, the problem studied in this paper is to find a set of k disjoint continuous subsequences such that the total sum of all elements in the set is maximized. This problem arises naturally in the analysis of DNA sequences. The previous best known algorithm requires Θ(n log n) time in the worst case. For a given sequence of length n, we present an almost linear-time algorithm for this problem. Our algorithm uses a disjoint-set data structure and requires O(nα(n, n)) time in the worst case, where α(n, n) is the inverse Ackermann function.
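The disjoint-segments objective can be checked against a textbook O(nk) dynamic program. This is only a baseline for the problem statement, not the paper's O(nα(n, n)) disjoint-set algorithm; names are illustrative.

```python
def max_k_disjoint_segments(a, k):
    """Maximum total sum of at most k disjoint contiguous segments of a.

    Simple O(n*k) dynamic program -- a baseline, not the paper's
    near-linear disjoint-set algorithm.
    """
    NEG = float("-inf")
    # ends[j]: best total using j segments, the last ending at current x
    # best[j]: best total using at most j segments over the prefix so far
    ends = [NEG] * (k + 1)
    best = [0] * (k + 1)  # zero segments contribute sum 0
    for x in a:
        for j in range(k, 0, -1):
            # Either extend the j-th segment or start it at x.
            ends[j] = max(ends[j], best[j - 1]) + x
        for j in range(1, k + 1):
            best[j] = max(best[j], ends[j])
    return best[k]
```

For example, on [2, -1, 2, -5, 3] with k = 2 the optimum is 6, taking the segments [2, -1, 2] and [3].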
Algorithms for Finding the Weight-Constrained k Longest Paths in a Tree and the Length-Constrained k Maximum-Sum Segments of a Sequence
, 2008
Cited by 2 (0 self)
In this work, we obtain the following new results:
– Given a tree T = (V, E) with a length function ℓ: E → R and a weight function w: E → R, a positive integer k, and an interval [L, U], the Weight-Constrained k Longest Paths problem is to find the k longest paths among all paths in T with weights in the interval [L, U]. We show that the Weight-Constrained k Longest Paths problem has a lower bound Ω(V log V + k) in the algebraic computation tree model and give an O(V log V + k)-time algorithm for it.
– Given a sequence A = (a_1, a_2, ..., a_n) of numbers and an interval [L, U], we define the sum and length of a segment A[i, j] to be a_i + a_{i+1} + ··· + a_j and j − i + 1, respectively. The Length-Constrained k Maximum-Sum Segments problem is to find the k maximum-sum segments among all segments of A with lengths in the interval [L, U]. We show that the Length-Constrained k Maximum-Sum Segments problem can be solved in O(n + k) time.
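The length-constrained segment problem can be stated concretely with a brute-force enumeration over prefix sums. This O(n·(U−L+1)) sketch is only a reference for the problem definition, not the paper's O(n + k) algorithm; the function name is ours.

```python
import heapq

def k_max_segments_len_constrained(a, k, L, U):
    """k largest segment sums among segments of a with length in [L, U].

    Brute-force enumeration via prefix sums -- a baseline for the
    problem definition, not the paper's O(n + k) algorithm.
    """
    n = len(a)
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    # A segment of length m starting at index i has sum
    # prefix[i + m] - prefix[i].
    sums = (prefix[i + m] - prefix[i]
            for m in range(L, U + 1)
            for i in range(n - m + 1))
    return heapq.nlargest(k, sums)
```

For example, with a = [5, -2, 3] and length exactly 2 (L = U = 2), the two candidate segments have sums 3 and 1.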
Ranking Cartesian Sums and K-maximum subarrays
, 2006
Cited by 1 (1 self)
We design a simple algorithm that ranks the K largest values in the Cartesian sum X + Y in O(m + K log K) time. Based on this, K-maximum subarrays can be computed in O(n + K log K) time (1D) and O(n^3 + K log K) time (2D) for input arrays of size n and n × n, respectively.
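The Cartesian-sum ranking task itself can be illustrated with the standard best-first heap search over sorted copies of X and Y. This textbook approach runs in O(K log K) after sorting and only illustrates the task; it is not the paper's algorithm.

```python
import heapq

def k_largest_cartesian_sums(X, Y, K):
    """K largest values among {x + y : x in X, y in Y}, with repeats.

    Textbook best-first search with a max-heap and a visited set --
    an illustration of the ranking task, not the paper's algorithm.
    """
    X = sorted(X, reverse=True)
    Y = sorted(Y, reverse=True)
    if not X or not Y or K <= 0:
        return []
    heap = [(-(X[0] + Y[0]), 0, 0)]  # negate sums for a max-heap
    seen = {(0, 0)}
    out = []
    while heap and len(out) < K:
        s, i, j = heapq.heappop(heap)
        out.append(-s)
        # The next candidates dominated only by (i, j) are its neighbors.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(X) and nj < len(Y) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-(X[ni] + Y[nj]), ni, nj))
    return out
```

For example, with X = [1, 3] and Y = [2, 4], the three largest sums are 7, 5, and 5.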
A Subcubic Time Algorithm for the k-Maximum Subarray Problem
Cited by 1 (0 self)
Abstract. We design a faster algorithm for the k-maximum subarray problem under the conventional RAM model, based on distance matrix multiplication (DMM). Specifically we achieve O(n^3 √(log log n / log n) + k log n) time for a general problem where overlapping is allowed for solution arrays. This complexity is subcubic when k = o(n^3 / log n). The best known complexities of this problem are O(n^3 + k log n), which is cubic when k = O(n^3 / log n), and O(kn^3 √(log log n / log n)), which is subcubic when k = o(√(log n / log log n)).
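For the 2D problem with overlapping allowed, a naive baseline reduces each pair of rows to a 1D strip of column sums and enumerates all subarray sums via prefix sums. This O(n^4) sketch is far from the subcubic DMM-based algorithm above; it only fixes the problem definition, and the function name is ours.

```python
import heapq

def k_maximum_subarrays_2d(M, k):
    """k largest sums over all axis-aligned subarrays of matrix M.

    Naive O(n^4) enumeration with overlapping allowed -- a reference
    baseline, not the paper's subcubic DMM-based algorithm.
    """
    rows, cols = len(M), len(M[0])
    candidates = []
    for top in range(rows):
        col = [0] * cols  # column sums of the strip M[top..bottom]
        for bottom in range(top, rows):
            for c in range(cols):
                col[c] += M[bottom][c]
            # 1D reduction: all subarray sums of the strip via prefix sums.
            prefix = [0]
            for v in col:
                prefix.append(prefix[-1] + v)
            for i in range(cols):
                for j in range(i + 1, cols + 1):
                    candidates.append(prefix[j] - prefix[i])
    return heapq.nlargest(k, candidates)
```

For example, on the matrix [[1, -2], [3, 4]] the two largest subarray sums are 7 (the row [3, 4]) and 6 (the full matrix).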
Effect of Corner Information . . .
, 2010
We consider the optimization problem of finding k non-intersecting rectangles and tableaux in an n × n pixel plane where each pixel has a real-valued weight. We discuss the existence of efficient algorithms when a corner point of each rectangle/tableau is specified.
Towards Concurrent Hoare Logic
, 2012
How can we rigorously prove that an algorithm does what we think it does? Logically verifying programs is very important to industry. Floyd-Hoare Logic (or Hoare Logic for short) is a set of rules that describe a type of valid reasoning for sequential program verification. Many different attempts have been made to extend Hoare Logic for concurrent program verification. We combine ideas from a few of these extensions to formalise a verification framework for specific classes of parallel programs. A new proof rule to deal with the semantics of mesh algorithms is proposed within the verification framework. We use the framework and mesh proof rule to verify the correctness of Sung Bae’s parallel