## A Linear Time Algorithm for the k Maximal Sums Problem

Citations: 6 (2 self)

### BibTeX

@MISC{Brodal_alinear,
  author = {Gerth Stølting Brodal and Allan Grønlund Jørgensen},
  title = {A Linear Time Algorithm for the k Maximal Sums Problem},
  year = {}
}


### Abstract

Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m²·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
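The problem the abstract defines can be pinned down with a brute-force baseline. The sketch below (a hypothetical helper, not the paper's algorithm) enumerates all (n choose 2) + n sub-vector sums and keeps the k largest; at O(n² log n) it is far from the paper's O(n + k) bound, but it makes the output format concrete.

```python
def k_maximal_sums_naive(A, k):
    """Return the k largest sub-vector sums of A as (i, j, sum) triples,
    by enumerating all (n choose 2) + n sub-vectors.  Brute-force
    baseline only: O(n^2 log n) time, O(n^2) space."""
    n = len(A)
    sums = []
    for i in range(n):
        s = 0
        for j in range(i, n):
            s += A[j]                 # sum of A[i..j], built incrementally
            sums.append((s, i, j))
    sums.sort(reverse=True)           # largest sums first
    return [(i, j, s) for (s, i, j) in sums[:k]]
```

For example, on A = [2, −1, 3, −4, 2] the three largest sub-vector sums are 4 (A[0..2]), 3 (A[2..2]), and 2.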

### Citations

371 | Time bounds for selection
- Blum, Floyd, et al.
- 1973
Citation Context: ...method for locating an element, e, with k ≤ rank(e) ≤ ck for some constant c. After this element is found, the input heap is traversed and all elements larger than e are extracted. Standard selection [21] is then used to obtain the k largest elements from the O(k) extracted elements. To find e, elements in the heap are organized into appropriately sized groups named clans. Clans are represented by the...
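The context above outlines Frederickson's heap-selection approach: find a pivot e whose rank is between k and ck, extract the larger elements, then apply standard selection. His O(k) refinement via clans is intricate; the simpler O(k log k) frontier search below (function name ours, not the paper's code) shows the underlying idea of exploring a binary max-heap top-down while visiting only O(k) nodes.

```python
import heapq

def k_largest_from_max_heap(heap, k):
    """Return the k largest elements of an array-embedded max-heap.
    Classic O(k log k) frontier search; Frederickson's algorithm [16]
    sharpens this to O(k) by grouping candidates into clans, which is
    not reproduced here."""
    if not heap or k <= 0:
        return []
    out = []
    # Candidate frontier, ordered by value (negated for Python's min-heap).
    frontier = [(-heap[0], 0)]
    while frontier and len(out) < k:
        val, i = heapq.heappop(frontier)
        out.append(-val)
        # A node's children are the only new candidates it exposes.
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(heap):
                heapq.heappush(frontier, (-heap[child], child))
    return out
```

Only elements popped from or pushed onto the frontier are ever touched, so at most O(k) heap nodes are inspected regardless of the heap's size.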

290 | Finding the k shortest paths
- Eppstein
Citation Context: ...h m ≤ n. This improves the previous best result [13], which was an O(m² · n + k log k) time algorithm. (Footnote 1: The k maximal sums problem can also be solved in O(n + k) time by a reduction to Eppstein's solution for the k shortest paths problem [15], which also makes essential use of Frederickson's heap selection algorithm. This reduction was observed independently by Hsiao-Fei Liu and Kun-Mao Chao [14].) This...

246 | Making data structures persistent
- Driscoll, Sarnak, et al.
- 1989
Citation Context: ...the (n choose 2) + n sums in O(n) time. The k largest sums from the heap are then selected in O(n + k) time using the heap selection algorithm of Frederickson [16]. The heap is built using partial persistence [17]. The space is reduced by only processing k elements at a time. The resulting algorithm can be viewed as a natural extension of Kadane's linear time algorithm for solving the maximum sum problem introdu...

148 | ACM algorithm 232: Heap sort
- Williams
- 1964
Citation Context: ...ted elements. To find e, elements in the heap are organized into appropriately sized groups named clans. Clans are represented by their smallest element, and these are managed in classic binary heaps [22]. By fixing the size of clans to log k one can obtain an O(k log log k) time algorithm as follows. Construct the first clan by locating the ⌊log k⌋ largest elements and initialize a clan-heap with the...

46 | Self-adjusting heaps
- Sleator, Tarjan
- 1986
Citation Context: ...ity queues represented as heap-ordered binary trees supporting insertions in constant time already exist. One such data structure is the self-adjusting binary heap of Sleator and Tarjan [18], called the Skew Heap. The Skew heap is a data structure reminiscent of Leftist heaps [19,20]. Even though the Skew heap would suffice for our algorithm, it is able to do much more than we require. Theref...
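The Skew Heap mentioned in the context above implements insertion as a meld with a singleton tree. A minimal top-down skew-heap sketch (max-oriented here to match selecting largest sums; names ours, and the paper's own simpler Iheap is not reproduced) might look like:

```python
class Node:
    """Heap-ordered binary tree node for a max-oriented skew heap."""
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def meld(a, b):
    """Top-down skew-heap meld: keep the larger root, meld the other heap
    into its right subtree, then swap children.  O(log n) amortized per
    operation (Sleator & Tarjan [18])."""
    if a is None:
        return b
    if b is None:
        return a
    if a.key < b.key:
        a, b = b, a                       # a holds the larger root
    a.left, a.right = meld(a.right, b), a.left   # meld right, then swap
    return a

def insert(heap, key):
    """Insertion is just a meld with a one-node heap."""
    return meld(heap, Node(key))
```

Deleting the maximum is the same primitive again: replace the root by the meld of its two subtrees.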

43 | A new upper bound on the complexity of the all pairs shortest path problem
- Takaoka
- 1992
Citation Context: ...[6] asymptotically faster algorithms for the two-dimensional problem are described. In [6] Takaoka designed an O(m²n√(log log m / log m)) time algorithm by a reduction to (min,+) matrix multiplication [7]. A simple extension of the maximum sum problem is to compute the k largest sub-vectors for 1 ≤ k ≤ (n choose 2) + n. The sub-vectors are allowed to overlap, and the output is k triples of the form (i, j, ...

35 | Programming pearls: algorithm design techniques
- Bentley
- 1984
Citation Context: ...n numbers. The maximal sub-vector of A is the sub-vector A[i, ..., j] maximizing Σ_{s=i..j} A[s]. The problem originates from Ulf Grenander, who defined the problem in the setting of pattern recognition [1]. Solutions to the problem also have applications in areas such as Data Mining [2] and Bioinformatics [3]. The problem, and an optimal linear time algorithm credited to Jay Kadane, are described by Be...

33 | The art of computer programming, volume 3: Sorting and searching (2nd ed.)
- Knuth
- 1998
Citation Context: ...ime already exist. One such data structure is the self-adjusting binary heap of Sleator and Tarjan [18], called the Skew Heap. The Skew heap is a data structure reminiscent of Leftist heaps [19,20]. Even though the Skew heap would suffice for our algorithm, it is able to do much more than we require. Therefore, we design a simpler heap which we will name Iheap. The essential properties of the Ih...

23 | Linear lists and priority queues as balanced binary trees
- Crane
- 1972
Citation Context: ...ime already exist. One such data structure is the self-adjusting binary heap of Sleator and Tarjan [18], called the Skew Heap. The Skew heap is a data structure reminiscent of Leftist heaps [19,20]. Even though the Skew heap would suffice for our algorithm, it is able to do much more than we require. Therefore, we design a simpler heap which we will name Iheap. The essential properties of the Ih...

22 | A note on a standard strategy for developing loop invariants and loops
- Gries
- 1984
Citation Context: ...roblem also have applications in areas such as Data Mining [2] and Bioinformatics [3]. The problem, and an optimal linear time algorithm credited to Jay Kadane, are described by Bentley [1] and Gries [4]. The algorithm they describe is a scanning algorithm which remembers the best solution, max_{1≤i≤j≤t} Σ_{s=i..j} A[s], and the best suffix solution, max_{1≤i≤t} Σ_{s=i..t} A[s], in the part of the input array, A[1...
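The scanning algorithm the context describes is Kadane's: one left-to-right pass maintaining the best sum seen so far and the best suffix sum ending at the current position. A direct Python transcription of that invariant (function name ours):

```python
def kadane(A):
    """Kadane's linear scan, as described by Bentley [1] and Gries [4].
    After processing A[0..t] it holds:
      best   = max over 1 <= i <= j <= t of sum(A[i..j])
      suffix = max over 1 <= i <= t of sum(A[i..t])."""
    best = suffix = A[0]
    for x in A[1:]:
        suffix = max(x, suffix + x)   # extend the best suffix, or restart at x
        best = max(best, suffix)      # fold it into the global optimum
    return best
```

Each element is inspected once with O(1) work, giving the optimal O(n) time and O(1) extra space; this is the algorithm the paper's O(n + k) result generalizes.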

22 | Efficient algorithms for the maximum subarray problem by distance matrix multiplication
- Takaoka
Citation Context: ...O(m² · n) time algorithm. The same reduction technique can be applied iteratively to solve the problem in any dimension. But unlike the one-dimensional case these algorithms are not optimal. In [5] and [6] asymptotically faster algorithms for the two-dimensional problem are described. In [6] Takaoka designed an O(m²n√(log log m / log m)) time algorithm by a reduction to (min,+) matrix multiplication [7]...

19 | An optimal algorithm for selection in a min-heap
- Frederickson
- 1993
Citation Context: ...nerating a binary heap that implicitly contains the (n choose 2) + n sums in O(n) time. The k largest sums from the heap are then selected in O(n + k) time using the heap selection algorithm of Frederickson [16]. The heap is built using partial persistence [17]. The space is reduced by only processing k elements at a time. The resulting algorithm can be viewed as a natural extension of Kadane's linear time alg...

18 | Longest biased interval and longest non-negative sum interval
- Allison
Citation Context: ...em originates from Ulf Grenander, who defined the problem in the setting of pattern recognition [1]. Solutions to the problem also have applications in areas such as Data Mining [2] and Bioinformatics [3]. The problem, and an optimal linear time algorithm credited to Jay Kadane, are described by Bentley [1] and Gries [4]. The algorithm they describe is a scanning algorithm which remembers the best ...

18 | Algorithms for the maximum subarray problem based on matrix multiplication
- Tamaki, Tokuyama
- 1998
Citation Context: ...in an O(m² · n) time algorithm. The same reduction technique can be applied iteratively to solve the problem in any dimension. But unlike the one-dimensional case these algorithms are not optimal. In [5] and [6] asymptotically faster algorithms for the two-dimensional problem are described. In [6] Takaoka designed an O(m²n√(log log m / log m)) time algorithm by a reduction to (min,+) matrix multiplica...
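The row-pair reduction the context refers to can be sketched directly: collapse every strip of rows into a vector of column sums and run the one-dimensional scan on it, which yields O(m² · n) for an m × n matrix. A hypothetical Python sketch, not the authors' code:

```python
def max_subarray_2d(M):
    """Maximum-sum rectangular subarray of matrix M via the textbook
    reduction to 1-D: for each pair (top, bottom) of rows, collapse the
    strip into per-column sums and run Kadane's scan.  O(m^2 * n) time
    for m rows and n columns, O(n) extra space."""
    m, n = len(M), len(M[0])
    best = M[0][0]
    for top in range(m):
        col = [0] * n
        for bottom in range(top, m):
            # Extend each column sum by one row, so the strip collapse
            # is incremental across the inner loop.
            for j in range(n):
                col[j] += M[bottom][j]
            # 1-D Kadane scan on the collapsed strip.
            suffix = cur_best = col[0]
            for x in col[1:]:
                suffix = max(x, suffix + x)
                cur_best = max(cur_best, suffix)
            best = max(best, cur_best)
    return best
```

Applying the same collapse dimension by dimension gives the iterative d-dimensional reduction mentioned in the context, though, as noted, those algorithms are no longer optimal.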

17 | Data mining with optimized two-dimensional association rules
- Fukuda, Morimoto, et al.
- 2001
Citation Context: ...Σ_{s=i..j} A[s]. The problem originates from Ulf Grenander, who defined the problem in the setting of pattern recognition [1]. Solutions to the problem also have applications in areas such as Data Mining [2] and Bioinformatics [3]. The problem, and an optimal linear time algorithm credited to Jay Kadane, are described by Bentley [1] and Gries [4]. The algorithm they describe is a scanning algorithm wh...

17 | Algorithms for the problem of k maximum sums and a VLSI algorithm for the k maximum subarrays problem
- Bae, Takaoka
- 2004
Citation Context: ...Center for Massive Data Algorithmics, a Center of the Danish National Research Foundation. Table 1 (previous and new results for the k maximal sums problem): Bae & Takaoka [8]: O(n · k); Bengtsson & Chen [9]: O(min{k + n log² n, n√k}); Bae & Takaoka [10]: O(n log k + k²); Bae & Takaoka [11]: O((n + k) log k); Lin & Lee [12]: O(n log n + k) expected; Cheng et al. [13]: O(n + k log ...

17 | Efficient algorithms for k maximum sums
- Bengtsson, Chen
- 2004
Citation Context: ...ta Algorithmics, a Center of the Danish National Research Foundation. Table 1 (previous and new results for the k maximal sums problem): Bae & Takaoka [8]: O(n · k); Bengtsson & Chen [9]: O(min{k + n log² n, n√k}); Bae & Takaoka [10]: O(n log k + k²); Bae & Takaoka [11]: O((n + k) log k); Lin & Lee [12]: O(n log n + k) expected; Cheng et al. [13]: O(n + k log k); Liu & Chao [14]: O(n + k)...

15 | Improved algorithms for the k-maximum subarray problem
- Bae, Takaoka
Citation Context: ...al Research Foundation. Table 1 (previous and new results for the k maximal sums problem): Bae & Takaoka [8]: O(n · k); Bengtsson & Chen [9]: O(min{k + n log² n, n√k}); Bae & Takaoka [10]: O(n log k + k²); Bae & Takaoka [11]: O((n + k) log k); Lin & Lee [12]: O(n log n + k) expected; Cheng et al. [13]: O(n + k log k); Liu & Chao [14]: O(n + k); this paper: O(n + k). ...was the original problem, ...

11 | Randomized algorithm for the sum selection problem
- Lin, Lee
- 2007
Citation Context: ...aximal sums problem: Bae & Takaoka [8]: O(n · k); Bengtsson & Chen [9]: O(min{k + n log² n, n√k}); Bae & Takaoka [10]: O(n log k + k²); Bae & Takaoka [11]: O((n + k) log k); Lin & Lee [12]: O(n log n + k) expected; Cheng et al. [13]: O(n + k log k); Liu & Chao [14]: O(n + k); this paper: O(n + k). ...was the original problem, introduced as a method for maximum likelihood estimations of patterns...

10 | Improved algorithms for the k maximum-sums problems (Theoretical Computer Science)
- Cheng, Chen, et al.
- 2006
Citation Context: ...Bae & Takaoka [8]: O(n · k); Bengtsson & Chen [9]: O(min{k + n log² n, n√k}); Bae & Takaoka [10]: O(n log k + k²); Bae & Takaoka [11]: O((n + k) log k); Lin & Lee [12]: O(n log n + k) expected; Cheng et al. [13]: O(n + k log k); Liu & Chao [14]: O(n + k); this paper: O(n + k). ...was the original problem, introduced as a method for maximum likelihood estimations of patterns in digitized images [1]. With m ≤ n, the ...