Results 1 - 9 of 9
Generic Gram-Schmidt Orthogonalization by Exact Division
, 1996
Abstract

Cited by 4 (3 self)
Given a vector space basis with integral domain coefficients, a variant of the Gram-Schmidt process produces an orthogonal basis using exact divisions, so that all arithmetic is within the integral domain. Division by zero is avoided by the assumption that in the domain a sum of squares of nonzero elements is always nonzero. In this paper we fully develop this method and use it to illustrate and compare a variety of means for implementing generic algorithms. Previous generic programming methods have been limited to one of compile-time, link-time, or run-time instantiation of type parameters, such as the integral domain of this algorithm, but we show how to express generic algorithms in C++ so that all three possibilities are available using a single source code. Finally, we take advantage of the genericity to test and time the algorithm using different arithmetics, including three huge-integer arithmetic packages. 1 Introduction Given a basis B = {b1, ..., bn} for R^n the Gram-S...
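The exact-division recurrence the abstract alludes to can be sketched as follows. This is our own reconstruction patterned after fraction-free elimination, not the paper's code, and all names are ours; it is written as a C++ template so that the coefficient domain stays a type parameter, assuming T is an integral domain in which a sum of squares of nonzero elements is nonzero (e.g. the integers).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Inner product over the coefficient domain T.
template <typename T>
T dot(const std::vector<T>& a, const std::vector<T>& b) {
    T s = T(0);
    for (std::size_t i = 0; i < a.size(); ++i)
        s = s + a[i] * b[i];
    return s;
}

// Exact-division Gram-Schmidt sketch: orthogonalize the rows of `basis`
// in place. With d[j] = <c_j, c_j> (and d[-1] taken to be 1), each update
// t <- (d[j] * t - <t, c_j> * c_j) / d[j-1] divides exactly, so every
// intermediate value stays inside the domain T.
template <typename T>
void exact_gram_schmidt(std::vector<std::vector<T>>& basis) {
    std::vector<T> d;  // d[j] = <c_j, c_j>
    for (std::size_t k = 0; k < basis.size(); ++k) {
        std::vector<T>& t = basis[k];
        for (std::size_t j = 0; j < k; ++j) {
            const std::vector<T>& c = basis[j];
            const T num = dot(t, c);
            const T den = (j == 0) ? T(1) : d[j - 1];
            for (std::size_t i = 0; i < t.size(); ++i)
                t[i] = (d[j] * t[i] - num * c[i]) / den;  // exact division
        }
        d.push_back(dot(t, t));
    }
}
```

On the integer basis {(1,1,0), (1,0,1), (0,1,1)} this yields {(1,1,0), (1,-1,2), (-4,4,4)}, pairwise orthogonal, with each intermediate division exact.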
The GNU Scientific Software Library
, 1996
Abstract

Cited by 2 (0 self)
Contents: 1 Overview; 2 Things to do; 3 Copying; 4 Programming Notes (4.1 Instantiation; 4.2 Compilation in multiple source files; 4.3 Member templates); 5 C-Language structures; 6 Linear algebra (6.1 Gauss-Jordan elimination; 6.2 LU Decomposition; 6.2.1 The determinant; 6.3 Pivoting utilities; 6.4 Reductions ...
The boost C++ metaprogramming library
, 2002
Abstract

Cited by 1 (0 self)
This paper describes the Boost C++ template metaprogramming library (MPL), an extensible compile-time framework of algorithms, sequences and metafunction classes. The library brings together important abstractions from the generic and functional programming worlds to build a powerful and easy-to-use toolset which makes template metaprogramming practical enough for real-world environments. The MPL is heavily influenced by its runtime equivalent, the Standard Template Library (STL), a part of the C++ standard library [STL94], [ISO98]. Like the STL, it defines an open conceptual and implementation framework which can serve as a foundation for future contributions in the domain. The library's fundamental concepts and idioms enable the user to focus on solutions without navigating the universe of possible ad hoc approaches to a given metaprogramming problem, even if no actual MPL code is used. The library also provides a compile-time lambda expression facility enabling arbitrary currying and composition of class templates, a feature whose runtime counterpart is often cited as missing from the STL. This paper explains the motivation, usage, design, and implementation of the MPL with examples of its real-life applications, and offers some lessons learned about C++ template metaprogramming.
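The sequence-plus-metafunction idiom described above can be illustrated without depending on Boost itself. The following is a minimal modern-C++ analogue (names are ours, not the MPL API): a compile-time type sequence and a metafunction mapped across it, corresponding in spirit to what MPL calls a sequence and transform.

```cpp
#include <cassert>
#include <type_traits>

// A compile-time sequence of types, analogous in spirit to an MPL sequence.
template <typename... Ts> struct type_list {};

// Map a metafunction F over every element of a type_list.
template <template <typename> class F, typename List> struct transform;
template <template <typename> class F, typename... Ts>
struct transform<F, type_list<Ts...>> {
    using type = type_list<typename F<Ts>::type...>;
};

// An example metafunction: T -> T*.
template <typename T> struct add_ptr { using type = T*; };

using input  = type_list<int, char>;
using output = transform<add_ptr, input>::type;

// The mapping happens entirely at compile time.
static_assert(std::is_same<output, type_list<int*, char*>>::value,
              "each element is mapped through the metafunction");
```

Everything here is resolved by the compiler; the runtime program carries no trace of the computation, which is the point of the compile-time framework the abstract describes.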
Exploiting Data Locality For Multiprocessor Query Scheduling
 International Conference on Parallel Processing
, 1996
Abstract

Cited by 1 (1 self)
We analyze the scheduling aspects of database queries submitted to an abstract model of a very large distributed system. The essential elements of this model are (a) a finite number of identical processing nodes with limited storage capacity, (b) a finite number of queries to be serviced, (c) a very large read-only data set that is shared by all queries and (d) a fixed inter-node communication latency. This framework models an important class of applications that use distributed processing of very large data sets. Examples of these applications exist in the very large database and multimedia problem domains. To meet the objective of minimizing the flow time of queries while exploiting inter-query locality, various heuristics are proposed and evaluated through extensive simulation. 1 Introduction 1.1 Motivation for the Problem In recent years, increasing processor speeds and distributed computing are making query processing over very large databases feasible. The size of the database in s...
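As a purely illustrative instance of a locality-exploiting policy (not one of the paper's heuristics, whose details the abstract does not give; all names below are our own simplification of the model), a greedy scheduler might route each query to a node that already caches the query's data partition when one exists, and otherwise to the least-loaded node:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified node: the partitions it currently caches and its query load.
struct Node {
    std::vector<int> cached;
    int load = 0;
};

// Greedy, locality-first assignment: prefer the least-loaded node that
// caches `partition`; if none does, fall back to the least-loaded node.
int schedule(std::vector<Node>& nodes, int partition) {
    int best = -1;
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        bool has = std::find(nodes[i].cached.begin(), nodes[i].cached.end(),
                             partition) != nodes[i].cached.end();
        if (has && (best < 0 || nodes[i].load < nodes[best].load))
            best = static_cast<int>(i);
    }
    if (best < 0)  // no locality hit anywhere: balance load instead
        for (std::size_t i = 0; i < nodes.size(); ++i)
            if (best < 0 || nodes[i].load < nodes[best].load)
                best = static_cast<int>(i);
    nodes[best].load++;
    return best;
}
```

The tension this toy policy exposes, serving a query where its data already resides versus spreading load evenly, is exactly the trade-off the paper's simulations evaluate.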
Segmented Iterators and Hierarchical Algorithms
Abstract

Cited by 1 (0 self)
Abstract. Many data structures are naturally segmented. Generic algorithms that ignore that feature, and that treat every data structure as a uniform range of elements, are unnecessarily inefficient. A new kind of iterator abstraction, in which segmentation is explicit, makes it possible to write hierarchical algorithms that exploit segmentation.
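The gain the abstract points to can be seen in miniature with a two-level structure such as a vector of vectors: a generic algorithm that sees only a flat range must advance element by element and repeatedly test for segment boundaries, whereas a hierarchical version runs a tight loop within each segment. The sketch below hard-codes the container for brevity; the paper develops a proper segmented-iterator abstraction that exposes the same structure generically.

```cpp
#include <cassert>
#include <vector>

// Hierarchical traversal: an outer loop that steps once per segment and
// an inner loop over each segment's contiguous elements, with no
// per-element boundary checks. (Names are ours; a segmented iterator
// would replace the hard-coded vector-of-vectors.)
template <typename T, typename F>
void hierarchical_for_each(std::vector<std::vector<T>>& seq, F f) {
    for (auto& segment : seq)    // outer loop: one step per segment
        for (auto& x : segment)  // inner loop: fast, contiguous
            f(x);
}
```

A flat iterator over the same structure would have to check "did I reach the end of this segment?" on every increment; hoisting that test into the outer loop is the efficiency the paper's hierarchical algorithms recover.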
Hierarchical Collections: An Efficient Scheme to Build an Object-Oriented Distributed Class Library for Massively Parallel Computation
Abstract
Separation of parallelism and distribution is one of the major concerns of efficient massively parallel computation. The details of distribution should be hidden from users of parallel/distributed class frameworks, while they should be easily modifiable by (library) programmers who are builders of the framework. We propose a new scheme to build an object-oriented parallel distributed class framework based on a simple but mathematically disciplined model called a hierarchy of collections. Based on the model, classes can be easily derived to achieve high-performance massively parallel computation on a variety of physical platforms. We have examined the descriptive power of our proposal with various specialized distributions, including the recently proposed Twisted Data Layout, on the Fujitsu AP1000 parallel computer. 1 Introduction Massively parallel computation involving a large number of data elements naturally utilizes underlying structures among the elements (arrays, lists, or trees) whi...
Numerous Small STL Changes
Abstract
Introduction This paper contains a number of relatively small corrections, modifications and additions to various parts of the STL. Most of these changes were identified by Alex Stepanov and Meng Lee, the original authors of the STL. The primary goal was to correct obvious errors and omissions; however, these changes also bring the WP descriptions of the STL components more in line with the recent versions of the STL that Alex and Meng have made publicly available. To the extent possible, I have tried to separate the substantive issues (such as new components or changes in existing behavior) from editorial issues (typos, minor wording changes, etc.). The clause numbers referenced below are from the pre-Valley Forge version of the WP [Koenig94], which in turn was based on the version of the STL described in [Stepanov94]; most of the corrections are from [Stepanov95]. 1.0 Substantive Changes 1.1 Clause 17 (Library Introduction) 1.1.1 Summary: Add additional cont
Gutachter – Reviewers
Abstract
The notion of graph traversal is of fundamental importance to solving many computational problems. In many modern applications involving graph traversal, such as those arising in the domains of social networks, Internet-based services, and fraud detection in telephone calls, the underlying graph is very large and dynamically evolving. This thesis deals with the design and engineering of traversal algorithms for such graphs. We engineer various I/O-efficient Breadth-First Search (BFS) algorithms for massive sparse undirected graphs. Our pipelined implementations with low constant factors, together with some heuristics preserving the worst-case guarantees, make BFS viable on massive graphs. We perform an extensive set of experiments to study the effect of various graph properties such as diameter, initial disk layouts, tuning parameters, disk parallelism, cache-obliviousness, etc. on the relative performance of these algorithms. We characterize the performance of NAND-flash-based storage devices, including many solid-state disks. We show that despite the similarities between flash memory
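The level-by-level structure that makes BFS amenable to I/O-efficient implementation can be sketched in memory: for an undirected graph, the next frontier is the sorted neighbourhood of the current frontier minus the two preceding levels, so duplicate elimination needs only sorts and scans over whole levels rather than random per-vertex lookups (this is the shape of the classic Munagala-Ranade approach; the thesis's external-memory versions replace the in-memory containers below with disk-based sorting and scanning, and the names here are our own).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Level-synchronized BFS over an undirected graph, organised so that each
// round touches whole levels only: next = N(cur) \ (cur union prev).
// Returns level[v] for every vertex, or -1 if v is unreachable from src.
std::vector<int> bfs_levels(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<int> level(adj.size(), -1);
    std::vector<int> prev, cur{src};
    level[src] = 0;
    for (int d = 1; !cur.empty(); ++d) {
        // Gather, sort, and deduplicate the neighbourhood of the frontier.
        std::vector<int> nbrs;
        for (int u : cur)
            nbrs.insert(nbrs.end(), adj[u].begin(), adj[u].end());
        std::sort(nbrs.begin(), nbrs.end());
        nbrs.erase(std::unique(nbrs.begin(), nbrs.end()), nbrs.end());
        // In an undirected graph every already-visited neighbour lies in
        // the current or previous level, so two set differences suffice.
        std::vector<int> next;
        for (int v : nbrs)
            if (!std::binary_search(cur.begin(), cur.end(), v) &&
                !std::binary_search(prev.begin(), prev.end(), v)) {
                next.push_back(v);
                level[v] = d;
            }
        prev = std::move(cur);
        cur = std::move(next);
    }
    return level;
}
```

Because visited status is decided by comparing whole sorted levels instead of probing a per-vertex marker array, the same control flow maps onto external-memory sort/scan primitives, which is what keeps the I/O cost bounded on massive graphs.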