First-Class User-Level Threads (1991)

by Brian D. Marsh, Michael L. Scott, Thomas J. LeBlanc, and Evangelos P. Markatos
Venue: Proceedings of the Thirteenth ACM Symposium on Operating Systems Principles (SOSP)
Citations: 124 (12 self)

BibTeX

@INPROCEEDINGS{Marsh91first-classuser-level,
    author = {Brian D. Marsh and Michael L. Scott and Thomas J. LeBlanc and Evangelos P. Markatos},
    title = {First-Class User-Level Threads},
    booktitle = {Proceedings of the Thirteenth ACM Symposium on Operating Systems Principles},
    year = {1991},
    pages = {110--121}
}


Abstract

It is often desirable, for reasons of clarity, portability, and efficiency, to write parallel programs in which the number of processes is independent of the number of available processors. Several modern operating systems support more than one process in an address space, but the overhead of creating and synchronizing kernel processes can be high. Many runtime environments implement lightweight processes (threads) in user space, but this approach usually results in second-class status for threads, making it difficult or impossible to perform scheduling operations at appropriate times (e.g., when the current thread blocks in the kernel). In addition, a lack of common assumptions may make it difficult for parallel programs or library routines that use dissimilar thread packages to communicate with each other, or to synchronize access to shared data. We describe a set of kernel mechanisms and conventions designed to accord first-class status to user-level threads, allowing them to be used in any reasonable way that traditional kernel-provided processes can be used, while leaving the details of their implementation to user-level code. The key features of our approach are (1) shared memory for asynchronous communication between the kernel and the user, (2) software interrupts for events that might require action on the part of a user-level scheduler, and (3) a scheduler interface convention that facilitates interactions in user space between dissimilar kinds of threads. We have incorporated these mechanisms in the Psyche parallel operating system, and have used them to implement several different kinds of user-level threads. We argue for our approach in terms of both flexibility and performance.
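
The three mechanisms named in the abstract can be pictured concretely. The C sketch below is illustrative only and uses hypothetical names (shared_area, scheduler_ops, software_interrupt_handler); it is not the actual Psyche kernel interface. It shows (1) a data area shared between the kernel and an address space, (2) a software-interrupt handler through which the kernel asks the user-level scheduler to act, and (3) a minimal scheduler interface convention that dissimilar thread packages in the same address space could all export.

/* Illustrative sketch only -- all names are hypothetical, not the Psyche API. */
#include <stdint.h>
#include <stdbool.h>

/* (1) A region mapped into both the kernel and the address space; each side
 *     reads and writes it asynchronously, avoiding a kernel call on common paths. */
struct shared_area {
    volatile bool     interrupts_disabled;   /* user scheduler masks software interrupts */
    volatile uint32_t pending_events;        /* bitmap of events posted by the kernel */
};

/* (3) Scheduler interface convention: each thread package exports the same
 *     small set of entry points, so dissimilar packages can block and unblock
 *     one another's threads and synchronize on shared data. */
struct scheduler_ops {
    void (*block_current)(void *sync_object);   /* deschedule the running thread */
    void (*unblock)(void *thread_handle);       /* make a blocked thread runnable */
};

static struct shared_area  area;      /* in reality, mapped by the kernel */
static struct scheduler_ops my_sched; /* filled in by the local thread package */

/* (2) Software interrupt: an upcall delivered by the kernel when an event may
 *     require action by the user-level scheduler, e.g. the current thread has
 *     blocked in the kernel and another ready thread should run instead. */
static void software_interrupt_handler(void)
{
    area.interrupts_disabled = true;          /* keep the handler from nesting */
    if (area.pending_events != 0) {
        area.pending_events = 0;
        my_sched.block_current(NULL);         /* hand the processor to another thread */
    }
    area.interrupts_disabled = false;
}

In this style of interface, checking a flag in shared memory keeps the common scheduling path free of kernel calls, while the software interrupt covers the cases where the program is not already executing scheduler code.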

Keyphrases

first-class user-level thread, user space, user-level thread, parallel program, system support, kernel process, key feature, dissimilar kind, current thread block, second-class status, kernel mechanism, library routine, asynchronous communication, first-class status, several different kind, appropriate time, scheduler interface convention, address space, Psyche parallel, dissimilar thread package, available processor, user-level scheduler, traditional kernel-provided process, reasonable way, common assumption, software interrupt, user-level code
