Results 1 - 7 of 7
Best-Effort Cache Synchronization with Source Cooperation
In SIGMOD, 2002
Abstract

Cited by 65 (3 self)
In environments where exact synchronization between source data objects and cached copies is not achievable due to bandwidth or other resource constraints, stale (out-of-date) copies are permitted. It is desirable to minimize the overall divergence between source objects and cached copies by selectively refreshing modified objects. We call the online process of selecting which objects to refresh in order to minimize divergence best-effort synchronization. In most approaches to best-effort synchronization, the cache coordinates the process and selects objects to refresh. In this paper, we propose a best-effort synchronization scheduling policy that exploits cooperation between data sources and the cache. We also propose an implementation of our policy that incurs low communication overhead even in environments with very large numbers of sources. Our algorithm is adaptive to wide fluctuations in available resources and data update rates. Through experimental simulation over synthetic and real-world data, we demonstrate the effectiveness of our algorithm, and we quantify the significant decrease in divergence achievable with source cooperation.
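The core scheduling decision this abstract describes, picking which stale objects to refresh under a resource budget, can be caricatured as a greedy divergence-first policy. A minimal sketch, assuming per-object divergence estimates are already available; the function name is hypothetical, and the paper's actual policy additionally relies on source cooperation to obtain such estimates cheaply:

```python
import heapq

def select_refreshes(divergence, budget):
    """Greedy best-effort refresh selection (illustrative sketch).

    divergence: dict mapping object id -> estimated divergence from source
    budget: number of refreshes affordable this round

    Returns the ids whose refresh removes the most divergence,
    largest first.
    """
    return heapq.nlargest(budget, divergence, key=divergence.get)
```

For example, with divergences `{'a': 5.0, 'b': 1.0, 'c': 3.0}` and a budget of 2, the policy refreshes `a` and `c`.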
A Parallelization of Dijkstra's Shortest Path Algorithm
In Proc. 23rd MFCS'98, Lecture Notes in Computer Science, 1998
Abstract

Cited by 26 (6 self)
The single source shortest path (SSSP) problem lacks parallel solutions which are fast and simultaneously work-efficient. We propose simple criteria which divide Dijkstra's sequential SSSP algorithm into a number of phases, such that the operations within a phase can be done in parallel. We give a PRAM algorithm based on these criteria and analyze its performance on random digraphs with random edge weights uniformly distributed in [0, 1]. We use …
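One simple phase criterion of the kind this abstract alludes to: with strictly positive edge weights, every queued node whose tentative distance lies within the global minimum edge weight of the smallest tentative distance is provably settled, so the whole batch can be relaxed at once. A sequential sketch of that phasing (illustrative only; the paper's criteria and PRAM analysis are more refined):

```python
def phased_dijkstra(adj, source):
    """Phase-based Dijkstra sketch.

    adj: {u: [(v, w), ...]} with strictly positive edge weights w.
    Returns (distance dict, number of phases). All relaxations inside
    one phase are independent and could run in parallel.
    """
    min_w = min(w for u in adj for _, w in adj[u])
    dist = {source: 0.0}
    frontier = {source}
    settled = set()
    phases = 0
    while frontier:
        threshold = min(dist[v] for v in frontier) + min_w
        # Nodes within min_w of the minimum cannot be improved by any
        # remaining queued node, so the whole batch is settled.
        phase = {v for v in frontier if dist[v] <= threshold}
        frontier -= phase
        settled |= phase
        phases += 1
        for u in phase:  # independent relaxations -> parallelizable
            for v, w in adj.get(u, []):
                if v not in settled and dist[u] + w < dist.get(v, float('inf')):
                    dist[v] = dist[u] + w
                    frontier.add(v)
    return dist, phases
```

On a four-node example the schedule collapses to one node per phase, but on wide random digraphs many nodes share a phase, which is where the parallel speedup comes from.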
Parallelizing NP-Complete Problems Using Tree-Shaped Computations
, 1999
Abstract

Cited by 1 (0 self)
We explain how the parallelization aspects of a large class of applications can be modeled as tree-shaped computations. This model is particularly suited for NP-complete problems. One reason for this is that any computation on a non-deterministic machine can be emulated on a deterministic machine using a tree-shaped computation. We then proceed to a particular example, the knapsack problem. It turns out that a parallel depth-first branch-and-bound algorithm based on tree-shaped computations yields superlinear average speedup using 1024 processors.
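The tree shape is easy to see in a sequential depth-first branch-and-bound for 0/1 knapsack: each search-tree node branches on taking or skipping the next item, and disjoint subtrees are exactly the independent units a parallel scheme can hand to different processors. A toy sequential sketch, not the paper's parallel algorithm:

```python
def knapsack_bb(items, capacity):
    """Depth-first branch-and-bound for 0/1 knapsack (sketch).

    items: list of (weight, value) pairs; returns the best total value.
    """
    # Sort by value density so the fractional relaxation bound is tight.
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    best = 0

    def bound(i, cap, val):
        # Fractional (linear-relaxation) upper bound on items[i:].
        for w, v in items[i:]:
            if w <= cap:
                cap -= w
                val += v
            else:
                return val + v * cap / w
        return val

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(items) or bound(i, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        w, v = items[i]
        if w <= cap:
            dfs(i + 1, cap - w, val + v)  # subtree: take item i
        dfs(i + 1, cap, val)              # subtree: skip item i

    dfs(0, capacity, 0)
    return best
```

In a parallel version the two recursive calls become independent subproblems; load balancing them across processors is the hard part the tree-shaped-computation model addresses.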
Probabilistic Reasoning
, 2012
Abstract
Scalable probabilistic reasoning is the key to unlocking the full potential of the age of big data. From untangling the biological processes that govern cancer to effectively targeting products and advertisements, probabilistic reasoning is how we make sense of noisy data and turn information into understanding and action. Unfortunately, the algorithms and tools for sophisticated structured probabilistic reasoning were developed for the sequential Von Neumann architecture and have therefore been unable to scale with big data. In this thesis we propose a simple set of design principles to guide the development of new parallel and distributed algorithms and systems for scalable probabilistic reasoning. We then apply these design principles to develop a series of new algorithms for inference in probabilistic graphical models and derive theoretical tools to characterize the parallel properties of statistical inference. We implement and assess the efficiency and scalability of the new inference algorithms in the multicore and distributed settings, demonstrating the substantial gains from applying the thesis methodology to real-world probabilistic reasoning. Based on the lessons learned in statistical inference, we introduce the GraphLab parallel abstraction, which generalizes the thesis methodology and enables the rapid development of …
Invasive Computing—An Overview
Abstract
A novel paradigm for designing and programming future parallel computing systems, called invasive computing, is proposed. The main idea and novelty of invasive computing is to introduce resource-aware programming support, in the sense that a given program gets the ability to explore and dynamically spread its computations to neighbour processors in a phase called invasion, then to execute portions of code with a high degree of parallelism in parallel, based on the available invasible region of a given multiprocessor architecture. Afterwards, once the program terminates or if the degree of parallelism should become lower again, the program may enter a retreat phase, deallocate resources and resume execution, for example, sequentially on a single processor. In order to support this idea of self-adaptive and resource-aware programming, not only are new programming concepts, languages, compilers and operating systems necessary, but revolutionary architectural changes in the design of MPSoCs (Multi-Processor Systems-on-a-Chip) must also be provided so as to efficiently support invasion, infection and retreat operations, involving concepts for dynamic processor, interconnect and memory reconfiguration. This …
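The invade/infect/retreat life cycle the abstract outlines can be caricatured with a toy resource manager. The class and method names below are entirely hypothetical; the real proposal targets hardware-supported MPSoC reconfiguration, not a Python object:

```python
class InvasiveProgram:
    """Toy model of the invade -> infect -> retreat phases."""

    def __init__(self, total_pes):
        self.free = total_pes   # idle processing elements (PEs)
        self.claimed = 0        # PEs currently held by this program

    def invade(self, requested):
        # Claim up to `requested` PEs from the currently invasible region.
        granted = min(requested, self.free)
        self.free -= granted
        self.claimed += granted
        return granted

    def infect(self, work):
        # Spread the parallel section across the claimed PEs by
        # splitting the work into one chunk per PE (ceiling division).
        chunk = -(-len(work) // max(self.claimed, 1))
        return [work[i:i + chunk] for i in range(0, len(work), chunk)]

    def retreat(self):
        # Deallocate all claimed PEs; execution may continue sequentially.
        self.free += self.claimed
        self.claimed = 0
```

The point of the sketch is only the protocol: resources are claimed before the parallel section, the section is shaped to whatever was actually granted, and everything is released afterwards.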
Parallel Splash Belief Propagation
Abstract
As computer architectures transition towards exponentially increasing parallelism, we are forced to adopt parallelism at a fundamental level in the design of machine learning algorithms. In this paper we focus on parallel graphical model inference. We demonstrate that the natural, synchronous parallelization of belief propagation is highly inefficient. By bounding the achievable parallel performance on chain graphical models, we develop a theoretical understanding of the parallel limitations of belief propagation. We then provide a new parallel belief propagation algorithm which achieves optimal performance. Using several challenging real-world tasks, we empirically evaluate the performance of our algorithm on large cyclic graphical models, where we achieve near-linear parallel scaling and outperform alternative algorithms.
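The chain-graph inefficiency this abstract bounds can be quantified with back-of-the-envelope counting: synchronous BP recomputes every message each round yet moves information only one hop per round, whereas a sequential forward-backward sweep over the same chain sends each message exactly once. Illustrative arithmetic only; the paper's splash schedule and bounds are more general:

```python
def chain_bp_work(n):
    """Message updates to converge BP on a chain of n nodes (sketch).

    A chain has 2*(n-1) directed messages. Synchronous BP needs n-1
    rounds for information at one end to reach the other, recomputing
    every message each round: O(n^2) updates. A forward pass followed
    by a backward pass computes each message once: O(n) updates.
    """
    rounds = n - 1
    sync_updates = rounds * 2 * (n - 1)   # synchronous schedule
    sweep_updates = 2 * (n - 1)           # forward-backward sweep
    return sync_updates, sweep_updates
```

Already at n = 10 the synchronous schedule does 162 updates where a sweep does 18, and the gap grows linearly with chain length.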