## Space Efficient Parallel Buddy Memory Management (1992)

Citations: 5 (0 self)

### BibTeX

```bibtex
@techreport{Johnson92spaceefficient,
  author      = {Theodore Johnson and Tim Davis},
  title       = {Space Efficient Parallel Buddy Memory Management},
  institution = {},
  year        = {1992}
}
```

### Abstract

Shared memory multiprocessor systems need efficient dynamic storage allocators, both for system purposes and to support parallel programs. Memory managers are often based on the buddy system, which provides fast allocation and release. Previous parallel buddy memory managers made no attempt to coordinate the allocation, splitting, and release of blocks, and as a result needlessly fragment memory. We present a fast and simple parallel buddy memory manager that is also as space efficient as a serial buddy memory manager. We test our algorithms using memory allocation/deallocation traces collected from a parallel sparse matrix algorithm.

Keywords: memory management, concurrent data structure, buddy system, parallel algorithm.

1 Introduction: A memory manager accepts two kinds of operations: requests to allocate and requests to release blocks of memory, which may be of arbitrary size. For example, the UNIX system calls malloc() and free() are requests to a memory manager. A concurrent ...

### Citations

565 | Direct Methods for Sparse Matrices - Duff, Erisman, et al. - 1986

Citation context: ...ng time and storage can be achieved by taking advantage of those entries. One common data structure stores a matrix as a set of compressed sparse vectors, one for each row and/or column of the matrix [6]: a compressed vector y(1...n_nz) holds the n_nz nonzero values of a full vector x(1...n), and a corresponding integer index vector i(1...n_nz) holds the locations of the nonzero entries ...

293 | Sparse matrix test problems - Duff, Grimes, et al. - 1989

Citation context: ...just large enough to hold it. The compressed rows and columns are then released. We generated traces from the D2 algorithm for a set of matrices from the Harwell/Boeing sparse matrix test collection [5]. These matrices come from a wide range of real problems in scientific computing. The two matrices we use are gemat11, an order 4929 matrix with 33108 nonzeros, and gre-1107, an order 1107 matrix with ...

18 | A nondeterministic parallel algorithm for general unsymmetric sparse LU factorization - Davis, Yew - 1990

Citation context: ...ent for a multiprogrammed uniprocessor, it can create a serial bottleneck in a parallel processor. Example applications of parallel memory managers are parallel sparse matrix factorization algorithms [2, 4], and buffers for message passing in clustered parallel computers [3]. Most heap memory management algorithms use one of two main methods: free lists and buddy systems. In a free list algorithm [12], ...

12 | Multiprocessing a sparse matrix code on the Alliant FX/8 - Duff - 1989

Citation context: ...ent for a multiprogrammed uniprocessor, it can create a serial bottleneck in a parallel processor. Example applications of parallel memory managers are parallel sparse matrix factorization algorithms [2, 4], and buffers for message passing in clustered parallel computers [3]. Most heap memory management algorithms use one of two main methods: free lists and buddy systems. In a free list algorithm [12], ...

10 | Parallel dynamic storage allocation - Bigler, Allan, et al. - 1985

Citation context: ...hods: free lists and buddy systems. In a free list algorithm [12], the free blocks are linked together in a list, ordered by starting address. Several parallel free list algorithms have been proposed [16, 1, 7, 8]. Free list algorithms are much slower than buddy algorithms, which are the subject of this paper. Another concurrent memory manager [11] has been proposed, based on the fast fits memory manager [14, ...

8 | Parallelizing the usual buddy algorithm - Gottlieb, Wilson - 1982

Citation context: ...unt of the number of blocks of each size that are contained in the subtree rooted at a node is stored at each node. Concurrent allocators use this information to navigate the tree. Their second algorithm [10, 17] is a concurrent version of the commonly described buddy algorithm. In both of these buddy algorithms, an allocate operation performs a split every time it fails to allocate a block, even if the alloc...

7 | Using the buddy system for concurrent memory allocation - Gottlieb, Wilson - 1981

Citation context: ...ithms, and are faster but less space efficient than free list algorithms. Gottlieb and Wilson developed concurrent buddy systems that use fetch-and-add to coordinate processors. Their first algorithm [9, 17] considers a buddy system to be organized as a tree. A count of the number of blocks of each size that are contained in the subtree rooted at a node is stored at each node. Concurrent allocators use this ...

2 | Concurrent Dynamic Storage Allocation - Ellis, Olson - 1987

Citation context: ...hods: free lists and buddy systems. In a free list algorithm [12], the free blocks are linked together in a list, ordered by starting address. Several parallel free list algorithms have been proposed [16, 1, 7, 8]. Free list algorithms are much slower than buddy algorithms, which are the subject of this paper. Another concurrent memory manager [11] has been proposed, based on the fast fits memory manager [14, ...

2 | Concurrent algorithms for real time memory management - Ford - 1988

Citation context: ...hods: free lists and buddy systems. In a free list algorithm [12], the free blocks are linked together in a list, ordered by starting address. Several parallel free list algorithms have been proposed [16, 1, 7, 8]. Free list algorithms are much slower than buddy algorithms, which are the subject of this paper. Another concurrent memory manager [11] has been proposed, based on the fast fits memory manager [14, ...

1 | Raytheon Submarine Signals Division - Roeber - 1991

Citation context: ...ck in a parallel processor. Example applications of parallel memory managers are parallel sparse matrix factorization algorithms [2, 4], and buffers for message passing in clustered parallel computers [3]. Most heap memory management algorithms use one of two main methods: free lists and buddy systems. In a free list algorithm [12], the free blocks are linked together in a list, ordered by starting ...