Results 1 - 10 of 122,333

Distributed-shared-memory support on the Simultaneous Optical Multiprocessor Exchange Bus

by Constantine Katsinis - 9th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'98) , 1998
"... This paper examines the performance of distributed-shared-memory systems based on the Simultaneous Optical Multiprocessor Exchange Bus (SOME-Bus) using queueing network models and develops theoretical results which predict processor utilization, message latency and other useful measures. It also pre ..."
Abstract - Cited by 2 (2 self)
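
The snippet above refers to queueing network models that predict processor utilization and message latency. As background only, the following minimal C sketch computes those two quantities from textbook M/M/1 formulas; the arrival rate and service rate are assumed, illustrative values, and the paper's actual SOME-Bus queueing-network model is considerably more detailed.

/* Minimal sketch of the kind of quantity a queueing model predicts.
 * Textbook M/M/1 formulas stand in for the paper's SOME-Bus model;
 * lambda and mu below are assumed, illustrative inputs. */
#include <stdio.h>

/* Fraction of time the server (channel) is busy. */
static double utilization(double lambda, double mu) {
    return lambda / mu;
}

/* Mean time a message spends queued plus in service (M/M/1). */
static double mean_latency(double lambda, double mu) {
    return 1.0 / (mu - lambda);   /* valid only while lambda < mu */
}

int main(void) {
    double lambda = 0.6;  /* message arrival rate, assumed */
    double mu = 1.0;      /* channel service rate, assumed */
    printf("utilization  = %.2f\n", utilization(lambda, mu));
    printf("mean latency = %.2f time units\n", mean_latency(lambda, mu));
    return 0;
}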

Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors

by John M. Mellor-Crummey, Michael L. Scott - ACM Transactions on Computer Systems , 1991
"... Busy-wait techniques are heavily used for mutual exclusion and barrier synchronization in shared-memory parallel programs. Unfortunately, typical implementations of busy-waiting tend to produce large amounts of memory and interconnect contention, introducing performance bottlenecks that become marke ..."
Abstract - Cited by 567 (32 self)
-accessible flag variables, and for some other processor to terminate the spin with a single remote write operation at an appropriate time. Flag variables may be locally-accessible as a result of coherent caching, or by virtue of allocation in the local portion of physically distributed shared memory. We present a
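
The excerpt describes waiters spinning on locally-accessible flag variables, with the spin ended by a single remote write. The C11 sketch below reconstructs a queue-based spin lock in that spirit (the widely known MCS idea); it is an illustration under that reading, not code taken from the paper, and the type and function names are invented for the example.

/* Queue-based spin lock sketch: each waiter spins on its own node's flag,
 * and the lock holder releases it with a single write to that flag. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct qnode {
    _Atomic(struct qnode *) next;
    atomic_bool locked;           /* true while this waiter must spin */
} qnode_t;

typedef struct {
    _Atomic(qnode_t *) tail;      /* last node in the queue, or NULL if free */
} qlock_t;

void qlock_acquire(qlock_t *lock, qnode_t *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    qnode_t *prev = atomic_exchange(&lock->tail, me);
    if (prev != NULL) {
        atomic_store(&prev->next, me);
        /* Spin only on our own flag (locally accessible in a NUMA/DSM setting). */
        while (atomic_load(&me->locked))
            ;
    }
}

void qlock_release(qlock_t *lock, qnode_t *me) {
    qnode_t *succ = atomic_load(&me->next);
    if (succ == NULL) {
        qnode_t *expected = me;
        if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
            return;               /* no one was waiting */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;                     /* successor is still linking itself in */
    }
    atomic_store(&succ->locked, false);  /* single remote write ends the spin */
}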

Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors

by Kourosh Gharachorloo, Daniel Lenoski, James Laudon, Phillip Gibbons, Anoop Gupta, John Hennessy - In Proceedings of the 17th Annual International Symposium on Computer Architecture , 1990
"... Scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. In addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the f ..."
Abstract - Cited by 735 (18 self)
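
Because accesses are buffered and pipelined, program order alone no longer guarantees that a data write becomes visible before the flag that announces it. The sketch below shows the standard flag-passing idiom in C11, where release/acquire ordering restores the needed guarantee; it is a textbook example, not code from the paper, which predates C11.

/* Classic flag-passing example: with buffered/pipelined memory systems the
 * two stores could become visible out of order, so the flag carries
 * release/acquire ordering. Standard C11 idiom. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int payload;                 /* ordinary shared data */
static atomic_int ready = 0;        /* synchronization flag */

static int producer(void *arg) {
    (void)arg;
    payload = 42;                                   /* 1: write the data   */
    atomic_store_explicit(&ready, 1,
                          memory_order_release);    /* 2: publish the flag */
    return 0;
}

static int consumer(void *arg) {
    (void)arg;
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                           /* wait for the flag   */
    printf("payload = %d\n", payload);              /* guaranteed to see 42 */
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}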

TreadMarks: Distributed Shared Memory on Standard Workstations and Operating Systems

by Pete Keleher, Alan L. Cox, Sandhya Dwarkadas, Willy Zwaenepoel - In Proceedings of the 1994 Winter USENIX Conference , 1994
"... TreadMarks is a distributed shared memory (DSM) system for standard Unix systems such as SunOS and Ultrix. This paper presents a performance evaluation of TreadMarks running on Ultrix using DECstation-5000/240's that are connected by a 100-Mbps switch-based ATM LAN and a 10-Mbps Ethernet. Ou ..."
Abstract - Cited by 527 (17 self)

Treadmarks: Shared memory computing on networks of workstations

by Cristiana Amza, Alan L. Cox, Sandhya Dwarkadas, Pete Keleher, Honghui Lu, Ramakrishnan Rajamony, Weimin Yu, Willy Zwaenepoel - IEEE Computer , 1996
"... TreadMarks supports parallel computing on networks of workstations by providing the application with a shared memory abstraction. Shared memory facilitates the transition from sequential to parallel programs. After identifying possible sources of parallelism in the code, most of the data structures ..."
Abstract - Cited by 484 (37 self)
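
The snippet notes that a shared-memory abstraction lets a sequential program be parallelized by dividing its existing data structures among threads of execution. The sketch below illustrates that transition using plain POSIX threads on a hardware shared-memory machine as a stand-in; it does not use TreadMarks' own API, and the array size and thread count are arbitrary.

/* Sequential-to-parallel transition: the same shared array is simply
 * divided among threads; no data structures need to be repacked. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double a[N];                 /* shared data structure */

struct range { int lo, hi; };

static void *scale_range(void *arg) {
    struct range *r = arg;
    for (int i = r->lo; i < r->hi; i++)
        a[i] = a[i] * 2.0;          /* each thread owns a disjoint slice */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        a[i] = i;                   /* sequential initialization */

    pthread_t tid[NTHREADS];
    struct range part[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        part[t].lo = t * (N / NTHREADS);
        part[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, scale_range, &part[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}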

Fault-tolerant distributed-shared-memory on a broadcast-based architecture

by Diana Hecht, Constantine Katsinis - Transactions on Parallel and Distributed Systems
"... Abstract. The Simultaneous Optical Multiprocessor Exchange Bus (SOME-Bus) is a low-latency, high-bandwidth interconnection network which directly links arbitrary pairs of processor nodes without contention, and can efficiently interconnect over one hundred nodes. Each node has a dedicated output cha ..."
Abstract - Cited by 1 (0 self)

Sorting networks and their applications

by K. E. Batcher , 1968
"... To achieve high throughput rates today's computers perform several operations simultaneously. Not only are I/O operations performed concurrently with computing, but also, in multiprocessors, several computing ..."
Abstract - Cited by 660 (0 self)
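
Batcher's paper builds sorters from fixed, data-independent compare-exchange elements. The sketch below gives the standard iterative formulation of his bitonic sorting network for power-of-two input sizes; the presentation is the common textbook one rather than the paper's original notation.

/* Bitonic sorting network: a fixed, data-independent pattern of
 * compare-exchange operations, as introduced by Batcher. */
#include <stdio.h>

static void compare_exchange(int *a, int i, int j, int ascending) {
    if ((a[i] > a[j]) == ascending) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

/* k is the size of the subsequences being merged, j the compare-exchange
 * distance within each merge stage. n must be a power of two. */
static void bitonic_sort(int *a, int n) {
    for (int k = 2; k <= n; k *= 2)
        for (int j = k / 2; j > 0; j /= 2)
            for (int i = 0; i < n; i++) {
                int partner = i ^ j;
                if (partner > i)
                    compare_exchange(a, i, partner,
                                     (i & k) == 0);  /* direction of this block */
            }
}

int main(void) {
    int a[8] = { 5, 1, 7, 3, 8, 2, 6, 4 };
    bitonic_sort(a, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);                        /* prints 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}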

Language Support for Lightweight Transactions

by Tim Harris, Keir Fraser , 2003
"... Concurrent programming is notoriously di#cult. Current abstractions are intricate and make it hard to design computer systems that are reliable and scalable. We argue that these problems can be addressed by moving to a declarative style of concurrency control in which programmers directly indicate t ..."
Abstract - Cited by 479 (16 self) - Add to MetaCart
Concurrent programming is notoriously di#cult. Current abstractions are intricate and make it hard to design computer systems that are reliable and scalable. We argue that these problems can be addressed by moving to a declarative style of concurrency control in which programmers directly indicate the safety properties that they require.
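
The excerpt argues for a declarative style in which the programmer states what must execute atomically rather than how to lock it. Purely to illustrate that interface shape, the sketch below emulates an atomic block with a single global mutex; the paper's actual mechanism is a software transactional memory, not a global lock, and the macro names here are invented for the example.

/* Declarative style, crudely emulated: the programmer marks the region
 * that must be atomic; the runtime (here, one global mutex) enforces it. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t global_atomic_lock = PTHREAD_MUTEX_INITIALIZER;
#define ATOMIC_BLOCK_BEGIN() pthread_mutex_lock(&global_atomic_lock)
#define ATOMIC_BLOCK_END()   pthread_mutex_unlock(&global_atomic_lock)

static long from_balance = 100, to_balance = 0;

/* State what must happen atomically, not which locks to take. */
static void transfer(long amount) {
    ATOMIC_BLOCK_BEGIN();
    from_balance -= amount;
    to_balance   += amount;
    ATOMIC_BLOCK_END();
}

int main(void) {
    transfer(25);
    printf("from=%ld to=%ld\n", from_balance, to_balance);
    return 0;
}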

The Stanford DASH multiprocessor

by Daniel Lenoski, James Laudon, Kourosh Gharachorloo, Wolf-Dietrich Weber, Anoop Gupta, John Hennessy, Mark Horowitz, Monica S. Lam - IEEE Computer , 1992
"... cache coherence gives ..."
Abstract - Cited by 404 (5 self)

Shared memory consistency models: A tutorial

by Sarita V. Adve, Kourosh Gharachorloo - IEEE Computer , 1996
"... Parallel systems that support the shared memory abstraction are becoming widely accepted in many areas of computing. Writing correct and efficient programs for such systems requires a formal specification of memory semantics, called a memory consistency model. The most intuitive model—sequential con ..."
Abstract - Cited by 435 (11 self)
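
The snippet introduces memory consistency models, with sequential consistency as the most intuitive one. The C11 sketch below is the classic two-thread litmus test used in such tutorials: with relaxed ordering both loads may return 0, an outcome sequential consistency forbids. It is a standard example, not one reproduced from the paper.

/* Store-buffering litmus test: under sequential consistency at least one
 * of r1, r2 must be 1; with relaxed ordering (below) both may be 0. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_int x = 0, y = 0;
static int r1, r2;

static int t1(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
    return 0;
}

static int t2(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
    return 0;
}

int main(void) {
    thrd_t a, b;
    thrd_create(&a, t1, NULL);
    thrd_create(&b, t2, NULL);
    thrd_join(a, NULL);
    thrd_join(b, NULL);
    /* memory_order_seq_cst throughout would rule out r1 == 0 && r2 == 0. */
    printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}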