CiteSeerX search results: 1-10 of 6,037

Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks

by Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, Dennis Fetterly - In EuroSys, 2007
Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational “vertices” with communication “channels” to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run ...
Cited by 762 (27 self)
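
As a rough illustration of the vertices-and-channels model (a sketch only; Dryad's actual API, channel types, and scheduler are far richer), the following Python fragment runs two sequential vertices concurrently, connected by queue-based channels. Each vertex is a plain sequential function; concurrency comes entirely from the runtime running vertices on separate threads, mirroring the abstract's point.

    import queue
    import threading

    SENTINEL = object()  # marks end-of-stream on a channel

    def run_vertex(fn, inbox, outbox):
        """A sequential vertex: apply fn to each record arriving on the input channel."""
        while True:
            item = inbox.get()
            if item is SENTINEL:
                outbox.put(SENTINEL)
                return
            outbox.put(fn(item))

    # A two-vertex pipeline (parse -> square) connected by queue-based channels.
    ch_in, ch_mid, ch_out = queue.Queue(), queue.Queue(), queue.Queue()
    t1 = threading.Thread(target=run_vertex, args=(int, ch_in, ch_mid))
    t2 = threading.Thread(target=run_vertex, args=(lambda n: n * n, ch_mid, ch_out))
    t1.start(); t2.start()

    for token in ["1", "2", "3"]:
        ch_in.put(token)
    ch_in.put(SENTINEL)
    t1.join(); t2.join()

    results = []
    while (item := ch_out.get()) is not SENTINEL:
        results.append(item)
    print(results)  # [1, 4, 9]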

PVM: A Framework for Parallel Distributed Computing

by V. S. Sunderam - Concurrency: Practice and Experience, 1990
The PVM system is a programming environment for the development and execution of large concurrent or parallel applications that consist of many interacting, but relatively independent, components. It is intended to operate on a collection of heterogeneous computing elements interconnected by one or ...
Cited by 788 (27 self)

Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism

by Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, Henry M. Levy - ACM Transactions on Computer Systems, 1992
Threads are the vehicle for concurrency in many approaches to parallel programming. Threads separate the notion of a sequential execution stream from the other aspects of traditional UNIX-like processes, such as address spaces and I/O descriptors. The objective of this separation is to make the expression ...
Cited by 475 (21 self)

Fast Folding and Comparison of RNA Secondary Structures (The Vienna RNA Package)

by Ivo L. Hofacker, Walter Fontana, Peter F. Stadler, L. Sebastian Bonhoeffer, Manfred Tacker, Peter Schuster
"... Computer codes for computation and comparison of RNA secondary structures, the Vienna RNA package, are presented, that are based on dynamic programming algorithms and aim at predictions of structures with minimum free energies as well as at computations of the equilibrium partition functions and bas ..."
Abstract - Cited by 809 (117 self) - Add to MetaCart
and base pairing probabilities. An efficient heuristic for the inverse folding problem of RNA is introduced. In addition we present compact and efficient programs for the comparison of RNA secondary structures based on tree editing and alignment. All computer codes are written in ANSI C. They include
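
The package's folding algorithms are thermodynamic dynamic programs; as a simplified illustration of the same DP structure, here is the classic Nussinov base-pair maximization recursion in Python. It counts pairs rather than minimizing free energy, so it is a relative of, not a substitute for, the package's algorithms.

    def nussinov(seq, min_loop=3):
        """Maximum number of base pairs in seq, with a minimum hairpin loop size."""
        pairs = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]   # dp[i][j]: best pairing of seq[i..j]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]        # case 1: j left unpaired
                for k in range(i, j - min_loop):  # case 2: j pairs with some k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov("GGGAAAUCC"))  # 3: the toy hairpin pairs G-C, G-C, G-U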

MULTILISP: a language for concurrent symbolic computation

by Robert H. Halstead - ACM Transactions on Programming Languages and Systems, 1985
Multilisp is a version of the Lisp dialect Scheme extended with constructs for parallel execution. Like Scheme, Multilisp is oriented toward symbolic computation. Unlike some parallel programming languages, Multilisp incorporates constructs for causing side effects and for explicitly introducing parallelism ...
Cited by 529 (1 self)
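
Multilisp's principal construct for introducing parallelism is the future. Python's concurrent.futures offers a loose analogue (an illustration, not Multilisp semantics: Multilisp futures resolve implicitly when touched, while Python requires an explicit result() call):

    from concurrent.futures import ThreadPoolExecutor

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    with ThreadPoolExecutor() as pool:
        f1 = pool.submit(fib, 25)          # roughly (future (fib 25))
        f2 = pool.submit(fib, 26)          # evaluated concurrently with f1
        total = f1.result() + f2.result()  # the "touch": block until each resolves
    print(total)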

Treadmarks: Shared memory computing on networks of workstations

by Cristiana Amza, Alan L. Cox, Sandhya Dwarkadas, Pete Keleher, Honghui Lu, Ramakrishnan Rajamony, Weimin Yu, Willy Zwaenepoel - Computer, 1996
TreadMarks supports parallel computing on networks of workstations by providing the application with a shared memory abstraction. Shared memory facilitates the transition from sequential to parallel programs. After identifying possible sources of parallelism in the code, most of the data structures ...
Cited by 487 (37 self)
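
As a single-machine stand-in for the abstraction (TreadMarks itself implements shared memory in software across a network of workstations, which this sketch does not attempt), Python's multiprocessing.shared_memory lets separate processes update one address space:

    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    def fill_slice(name, start, stop):
        shm = SharedMemory(name=name)      # attach to the shared segment by name
        for i in range(start, stop):
            shm.buf[i] = (i * i) % 256     # each worker writes only its own slice
        shm.close()

    if __name__ == "__main__":
        shm = SharedMemory(create=True, size=16)
        workers = [Process(target=fill_slice, args=(shm.name, 0, 8)),
                   Process(target=fill_slice, args=(shm.name, 8, 16))]
        for p in workers: p.start()
        for p in workers: p.join()
        print(list(shm.buf))               # [0, 1, 4, 9, 16, 25, ...]
        shm.close(); shm.unlink()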

Synchronous data flow

by Edward A. Lee, et al., 1987
Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case ...
Cited by 622 (45 self)
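
Because token production and consumption rates in SDF are specified a priori, a graph's firing counts can be derived at compile time from the balance equations: for each arc from a to b producing p and consuming c tokens per firing, q(a)*p = q(b)*c. A small sketch with exact rational arithmetic (the chain-structured graph here is illustrative; general graphs need a proper solver and a consistency check):

    from fractions import Fraction
    from math import lcm

    # arcs: (producer, consumer, tokens produced per firing, tokens consumed per firing)
    arcs = [("A", "B", 2, 3), ("B", "C", 1, 2)]

    q = {"A": Fraction(1)}             # fix one actor's rate and propagate
    for src, dst, prod, cons in arcs:  # assumes arcs are listed in connected order
        q[dst] = q[src] * prod / cons

    # Scale to the smallest integer repetitions vector.
    scale = lcm(*(f.denominator for f in q.values()))
    repetitions = {actor: int(f * scale) for actor, f in q.items()}
    print(repetitions)  # {'A': 3, 'B': 2, 'C': 1}: 3*2 tokens produced = 2*3 consumed on A->B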

Trace Scheduling: A Technique for Global Microcode Compaction

by Joseph A. Fisher - IEEE Transactions on Computers, 1981
Microcode compaction is the conversion of sequential microcode into efficient parallel (horizontal) microcode. Local compaction techniques are those whose domain is basic blocks of code, while global methods attack code with a general flow control. Compilation of high-level microcode languages into ...
Cited by 683 (5 self)
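
Trace scheduling's first step is selecting a likely execution path (a trace) through the flow graph, which is then compacted as if it were one long basic block. A sketch of greedy forward trace selection from block frequencies (real trace schedulers also grow traces backward and insert compensation code, none of which is shown):

    def pick_trace(freq, succs):
        """Greedy forward trace growth from the most frequently executed block."""
        trace = [max(freq, key=freq.get)]
        visited = set(trace)
        while True:
            candidates = [b for b in succs.get(trace[-1], []) if b not in visited]
            if not candidates:
                return trace
            nxt = max(candidates, key=freq.get)  # follow the likeliest successor
            trace.append(nxt)
            visited.add(nxt)

    freq = {"entry": 100, "loop": 900, "body": 850, "exit": 100}
    succs = {"entry": ["loop"], "loop": ["body", "exit"], "body": ["loop"]}
    print(pick_trace(freq, succs))  # ['loop', 'body']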

Scalable molecular dynamics with NAMD.

by James C Phillips, Rosemary Braun, Wei Wang, James Gumbart, Emad Tajkhorshid, Elizabeth Villa, Christophe Chipot, Robert D Skeel, Laxmikant Kalé, Klaus Schulten - J Comput Chem, 2005
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion ...
Cited by 849 (63 self)
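
The time-stepping core of classical molecular dynamics is an integrator for Newton's equations of motion. A one-particle velocity Verlet sketch conveys the idea (velocity Verlet is the standard MD integrator; NAMD's force field, multiple time stepping, and parallel decomposition are far more elaborate):

    def force(x, k=1.0):
        return -k * x                          # F = -dU/dx for a harmonic well U = k*x^2/2

    def velocity_verlet(x, v, dt, steps, m=1.0):
        a = force(x) / m
        for _ in range(steps):
            x += v * dt + 0.5 * a * dt * dt    # advance position
            a_new = force(x) / m               # recompute force at the new position
            v += 0.5 * (a + a_new) * dt        # advance velocity with averaged acceleration
            a = a_new
        return x, v

    x, v = velocity_verlet(x=1.0, v=0.0, dt=0.01, steps=628)  # roughly one oscillation period
    print(x, v)  # near the starting state (1.0, 0.0): the integrator is stable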

Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors

by John M. Mellor-Crummey, Michael L. Scott - ACM Transactions on Computer Systems, 1991
Busy-wait techniques are heavily used for mutual exclusion and barrier synchronization in shared-memory parallel programs. Unfortunately, typical implementations of busy-waiting tend to produce large amounts of memory and interconnect contention, introducing performance bottlenecks that become markedly ...
Cited by 573 (32 self)
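
The paper's best-known contribution is the MCS queue lock, in which each waiter spins only on its own queue node, eliminating the global contention of naive busy-waiting. A Python sketch of the structure (illustrative only: the tail update must be an atomic fetch-and-store, which Python lacks, so a tiny guard lock stands in for it here; real implementations use hardware atomics):

    import threading

    class MCSNode:
        __slots__ = ("locked", "next")
        def __init__(self):
            self.locked = False
            self.next = None

    class MCSLock:
        def __init__(self):
            self.tail = None
            self._guard = threading.Lock()   # stand-in for atomic fetch-and-store/CAS

        def _swap_tail(self, node):
            with self._guard:                # real MCS: a single atomic swap instruction
                prev, self.tail = self.tail, node
                return prev

        def acquire(self, node):
            node.locked = True
            node.next = None
            prev = self._swap_tail(node)     # join the tail of the waiter queue
            if prev is not None:
                prev.next = node
                while node.locked:           # spin on our own node only (local spinning)
                    pass

        def release(self, node):
            if node.next is None:
                with self._guard:            # real MCS: compare-and-swap(tail, node, None)
                    if self.tail is node:
                        self.tail = None     # no waiters: the lock is now free
                        return
                while node.next is None:     # a successor is mid-enqueue; wait for the link
                    pass
            node.next.locked = False         # hand the lock directly to the next waiter

    # Usage: four threads increment a shared counter under the lock.
    lock, counter = MCSLock(), 0

    def worker():
        global counter
        node = MCSNode()                     # each thread spins on its own node
        for _ in range(1000):
            lock.acquire(node)
            counter += 1
            lock.release(node)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                           # 4000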