Hashed and hierarchical timing wheels: Efficient data structure for implementing a timer facility (1997)

by G Varghese, A Lauck
Venue: IEEE/ACM Transactions on Networking

Results 1 - 10 of 15

Idletime scheduling with preemption intervals

by Lars Eggert - 20th ACM Symposium on Operating Systems Principles, 2005
"... ABSTRACT * This paper presents the idletime scheduler; a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use and concurrent foreground performance can decr ..."
Abstract - Cited by 31 (0 self) - Add to MetaCart
ABSTRACT * This paper presents the idletime scheduler; a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use and concurrent foreground performance can decrease by 50 % or more. The idletime scheduler minimizes this interference by partially relaxing the work conservation principle during preemption intervals, during which it serves no background requests even if the resource is idle. The length of preemption intervals is a controlling parameter of the scheduler: short intervals aggressively utilize idle capacity; long intervals reduce the impact of background use on foreground performance. Unlike existing approaches to establish prioritized resource use, idletime scheduling requires only localized modifications to a limited number of system schedulers. In experiments, a FreeBSD implementation for idletime network scheduling maintains over 90 % of foreground TCP throughput, while allowing concurrent, high-rate UDP background flows to consume up to 80 % of remaining link capacity. A FreeBSD disk scheduler implementation maintains 80 % of foreground read performance, while enabling concurrent background operations to reach 70% throughput.
(Show Context)

Citation Context

...nges or IP aliases can indicate idletime use for idletime networking. Both the modified network and disk schedulers implement the preemption interval mechanism with the standard BSD timing facilities [37]. A separate timer is associated with each network and disk device. The timer restarts whenever the scheduler for the given resource reenters state P. While the timer is active, the resource is in its...
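
The preemption-interval mechanism described above amounts to a restartable per-resource timer that gates background work. As a rough illustration only, here is a minimal user-space C sketch of such a gate; the struct and function names are invented for this example and do not correspond to the FreeBSD kernel code the paper describes.

    /* Minimal sketch of a preemption-interval gate. All names here are
     * invented for illustration; this is not the paper's FreeBSD code. */
    #include <stdbool.h>
    #include <time.h>

    struct idletime_gate {
        struct timespec last_foreground;   /* when foreground work last ran */
        long long preempt_interval_ns;     /* tunable preemption interval   */
    };

    static long long elapsed_ns(const struct timespec *a, const struct timespec *b)
    {
        return (long long)(b->tv_sec - a->tv_sec) * 1000000000LL
               + (b->tv_nsec - a->tv_nsec);
    }

    /* Call when a foreground request is dispatched: restarts the interval. */
    void gate_note_foreground(struct idletime_gate *g)
    {
        clock_gettime(CLOCK_MONOTONIC, &g->last_foreground);
    }

    /* Background work may run only after the preemption interval expires. */
    bool gate_background_allowed(const struct idletime_gate *g)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return elapsed_ns(&g->last_foreground, &now) >= g->preempt_interval_ns;
    }

A short interval lets background requests through soon after the resource goes idle; a long interval protects foreground performance, which is the trade-off the abstract describes.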

A Fast Content-based Data Distribution Infrastructure

by Samrat Ganguly, Sudeept Bhatnagar, Akhilesh Saxena, Suman Banerjee, Rauf Izmailov - In Infocom, 2006
"... Abstract — We present Sieve – an infrastructure for fast content-based data distribution to interested users. The ability of Sieve to filter and forward high-bandwidth data streams stems from its distributed pipelined architecture. The complex message filtering task is broken-up into a sequence of l ..."
Abstract - Cited by 8 (0 self) - Add to MetaCart
Abstract — We present Sieve – an infrastructure for fast content-based data distribution to interested users. The ability of Sieve to filter and forward high-bandwidth data streams stems from its distributed pipelined architecture. The complex message filtering task is broken-up into a sequence of light-weight filtering components resulting in high end-to-end throughput. Furthermore, since each component is assigned to a node based on its resource constraints, the queue buildup inside the nodes is minimal resulting in low end-to-end latency. Our experimental results based on real system implementation show that Sieve can sustain a throughput of more than 5000 messages per second for 100000 subscriptions with predicates of 10 attributes. Index Terms — Content-based Information Dissemination, Publish-Subscribe System, Event Stream Filtering
(Show Context)

Citation Context

...ctive timers can quickly grow into an infeasible number. We control this overhead by grouping the expiry events into buckets and using a timer expiry to process all events in the corresponding bucket [9]. D. Selective Subscriptions: We allow SPS to subscribe to only a subset of attributes for each subscription to reduce the extraneous messages. The rationale for this choice is that certain attribute...
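
The bucketing scheme mentioned in this context is essentially a single-level timing wheel: events due in the same tick share a bucket, and one timer expiry drains the whole bucket. The following C sketch shows the general technique under assumed names; it is not Sieve's code or the exact structure from the cited paper.

    /* Single-level timing wheel sketch (illustrative; names are invented). */
    #include <stddef.h>

    #define WHEEL_SLOTS 256            /* wheel spans WHEEL_SLOTS ticks     */

    struct timer_event {
        struct timer_event *next;
        void (*callback)(void *arg);
        void *arg;
    };

    struct timing_wheel {
        struct timer_event *slot[WHEEL_SLOTS];
        unsigned current;              /* slot index for the current tick   */
    };

    /* Schedule an event 'delay' ticks from now (1 <= delay < WHEEL_SLOTS). */
    void wheel_schedule(struct timing_wheel *w, struct timer_event *e,
                        unsigned delay)
    {
        unsigned idx = (w->current + delay) % WHEEL_SLOTS;
        e->next = w->slot[idx];
        w->slot[idx] = e;
    }

    /* Advance one tick and fire every event bucketed into the new slot.
     * Fired events are unlinked; the caller owns their memory. */
    void wheel_tick(struct timing_wheel *w)
    {
        w->current = (w->current + 1) % WHEEL_SLOTS;
        struct timer_event *e = w->slot[w->current];
        w->slot[w->current] = NULL;
        while (e) {
            struct timer_event *next = e->next;
            e->callback(e->arg);
            e = next;
        }
    }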

Link Gradients: Predicting the Impact of Network Latency on Multi-Tier Applications

by Shuyi Chen, Kaustubh R. Joshi, Matti A. Hiltunen, William H. Sanders, Richard D. Schlichting
"... Geographically dispersed deployments of large and complex multitier enterprise applications introduce many challenges, including those involved in predicting the impact of network latency on end-to-end transaction response times. Here, a measurement-based approach to quantifying this impact using a ..."
Abstract - Cited by 4 (2 self) - Add to MetaCart
Geographically dispersed deployments of large and complex multitier enterprise applications introduce many challenges, including those involved in predicting the impact of network latency on end-to-end transaction response times. Here, a measurement-based approach to quantifying this impact using a new metric called the link gradient is presented. A nonintrusive technique for measuring the link gradient in running systems using delay injection and spectral analysis is presented, along with experimental results on PlanetLab that demonstrate that the link gradient can be used to predict end-to-end responsiveness, even in new and unknown application configurations. 1
(Show Context)

Citation Context

...ciency of the mechanism is independent of the size of the packet, and the overhead introduced due to data copying from kernel to userspace is small. The delay daemon uses a timer-wheel implementation [17] with which it maintains a queue of packets scheduled to be sent in order of their send times. The second version is specific for PlanetLab, since PlanetLab uses an experimental Linux kernel that has ...
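
The delay daemon described here keeps packets ordered by their scheduled send time. As a simplified stand-in for that idea, the sketch below uses a sorted hold queue rather than the timer wheel the paper mentions; all types and function names are assumptions made for this example.

    /* Hold queue for delay injection (illustrative; names are invented). */
    #include <stddef.h>
    #include <time.h>

    struct delayed_pkt {
        struct delayed_pkt *next;
        struct timespec send_at;       /* absolute time to release packet  */
        void *data;
        size_t len;
    };

    /* Insert in ascending send-time order so the head is always due first. */
    void hold_queue_insert(struct delayed_pkt **head, struct delayed_pkt *p)
    {
        while (*head &&
               ((*head)->send_at.tv_sec < p->send_at.tv_sec ||
                ((*head)->send_at.tv_sec == p->send_at.tv_sec &&
                 (*head)->send_at.tv_nsec <= p->send_at.tv_nsec)))
            head = &(*head)->next;
        p->next = *head;
        *head = p;
    }

    /* Pop the head if its send time has passed; returns NULL otherwise. */
    struct delayed_pkt *hold_queue_due(struct delayed_pkt **head,
                                       const struct timespec *now)
    {
        struct delayed_pkt *p = *head;
        if (p && (p->send_at.tv_sec < now->tv_sec ||
                  (p->send_at.tv_sec == now->tv_sec &&
                   p->send_at.tv_nsec <= now->tv_nsec))) {
            *head = p->next;
            return p;
        }
        return NULL;
    }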

Background Use of Idle Resource Capacity

by Lars René Eggert, 2004
Cited by 2 (1 self)
Abstract not found

Concurrent Implementation of Packet Processing Algorithms on Network Processors

by Mark Groves
"... I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Mark Groves ii Network Processor Units (NPUs) are a compr ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Mark Groves ii Network Processor Units (NPUs) are a compromise between software-based and hardwired packet processing solutions. While slower than hardwired solutions, NPUs have the flexibility of software-based solutions, allowing them to adapt faster to changes in network protocols. Network processors have multiple processing engines so that multiple packets can be processed simultaneously within the NPU. In addition, each of these processing engines is multi-threaded, with special hardware support built in to alleviate some of the cost of concurrency. This hardware design allows the NPU to handle multiple packets concurrently, so that while one thread is waiting for a memory access to complete, another thread can be processing a different packet. By handling several packets simultaneously, an NPU can achieve similar processing power as traditional packet
(Show Context)

Citation Context

...line speed R of the router, as opposed to absolute rates that do not take the router's capabilities into consideration. The algorithm is able to achieve this performance using a group of timer wheels [48], stratified into levels such that the timer wheel at level i contains only flows with service rates between R/2^(i+1) and R/2^i. Because of the different service rates, each timer wheel has its slots or...
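
Given the stratification above, placing a flow on the right wheel reduces to finding the level i with R/2^(i+1) < rate <= R/2^i. A small C helper, with names and boundary handling assumed for this example rather than taken from the cited algorithm, could compute it as follows.

    /* Map a flow's service rate to a wheel level, assuming level i covers
     * rates in (R/2^(i+1), R/2^i]; then i = floor(log2(R/rate)).
     * Compile with -lm for log2(). */
    #include <math.h>

    int wheel_level_for_rate(double rate, double line_speed_R)
    {
        return (int)floor(log2(line_speed_R / rate));
    }

For example, a flow running at the full line speed maps to level 0, and a flow at half the line speed maps to level 1, matching the rate ranges quoted above.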

A Large-Scale Hardware Timer Manager

by Silvio Dragone, Andreas Döring, Rainer Hagenau
"... Timers are used throughout in network protocols, particularly for packet loss detection and for connection management. Thus, at least one timer is used per connection. Internet servers and gateways need to serve several thousands of simultaneously open connections, therefore a multiplicity of timers ..."
Abstract - Add to MetaCart
Timers are used throughout in network protocols, particularly for packet loss detection and for connection management. Thus, at least one timer is used per connection. Internet servers and gateways need to serve several thousands of simultaneously open connections, therefore a multiplicity of timers have to be managed simultaneously. To achieve scalable timer management, we present a large-scale hardware timer manager that can be implemented as a coprocessor in any network processing unit. This coprocessor uses on- and off-chip memory to handle the timers. The on-chip memory functions like a processor cache to reduce the number of external memory accesses and therefore, to decrease operation latency. To sort the timers according to their expiration time, the data structure of the timer manager is based on the d-heap structure. We have simulated the model in SystemC to measure the performance of the timer operations: start, stop and expire. In this paper present a hardware concept for a large-scale timer manager and we discuss the simulation results, to show its efficiency. 1.
(Show Context)

Citation Context

...work processors do not have a hardware large-scale timer manager, the timer manager is usually implemented in software. Varghese presented an efficient data structure for a software implementation in [7]. The data structure is based on a hashed and hierarchical list (timing wheel). However, this approach has the intrinsic problem of bad data locality because operations with the same timer instance ar...
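
The cited coprocessor sorts timers with a d-heap. The following host-side C sketch shows a basic d-ary min-heap insert keyed on expiration time; the branching factor, names, and flat in-memory array are assumptions for illustration and do not model the on-/off-chip memory split the paper discusses.

    /* d-ary min-heap sketch keyed on expiration time (illustrative only). */
    #include <stdint.h>
    #include <stddef.h>

    #define HEAP_D 4                   /* branching factor of the d-heap    */

    struct timer_entry {
        uint64_t expires;              /* absolute expiration tick          */
        void *cookie;                  /* caller's timer handle             */
    };

    struct dheap {
        struct timer_entry *a;         /* backing array (assumed to have
                                          spare capacity)                   */
        size_t n;                      /* number of live entries            */
    };

    /* Insert a timer and sift it toward the root to restore heap order. */
    void dheap_insert(struct dheap *h, struct timer_entry e)
    {
        size_t i = h->n++;
        h->a[i] = e;
        while (i > 0) {
            size_t parent = (i - 1) / HEAP_D;
            if (h->a[parent].expires <= h->a[i].expires)
                break;
            struct timer_entry tmp = h->a[parent];
            h->a[parent] = h->a[i];
            h->a[i] = tmp;
            i = parent;
        }
    }

    /* The earliest-expiring timer is always at the root. */
    const struct timer_entry *dheap_min(const struct dheap *h)
    {
        return h->n ? &h->a[0] : NULL;
    }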

Abstract

by Jia-yu Pan, Hyungjeong Yang, Christos Faloutsos, Pinar Duygulu
"... Multimedia objects like video clips or captioned images contain data of various modalities such as image, audio, and transcript text. Correlations across different modalities provide infor-mation about the multimedia content, and are useful in applications ranging from summarization to semantic capt ..."
Abstract - Add to MetaCart
Multimedia objects like video clips or captioned images contain data of various modalities such as image, audio, and transcript text. Correlations across different modalities provide infor-mation about the multimedia content, and are useful in applications ranging from summarization to semantic captioning. For discovering cross-modal correlations, we proposed a graph-based method, MAGIC, which turns the multimedia problem into a graph problem, by representing multimedia data as a graph. Using “random walks with restarts ” on the graph, MAGIC is capable of finding correlations among all modalities. When applied to the task of automatic image captioning, MAGIC found robust correlations between text and image and achieved a relative improvement by 58 % in captioning accuracy as compared to recent machine learning techniques. MAGIC has several desirable properties: (a) it is general and domain-independent; (b) it can spot correlations across any two modalities; (c) it is completely automatic and insensitive to parameter settings; (d) it scales up well for large datasets, (e) it enables novel multimedia applications (e.g., group captioning), and (f) it creates opportunity for applying graph algorithms to multimedia problems. 1

Channel Management, Message Representation and Event Handling of a Protocol Implementation Framework for Linux Using Generative Programming

by Song Zhang
"... There exist two main approaches for the implementation of protocols: the user-level approach and the kernel-level approach. An implementation of protocols at the user level runs as a user process while a kernel-level implementation runs in the operating system. The Protocol Implementation Framework ..."
Abstract - Add to MetaCart
There exist two main approaches for the implementation of protocols: the user-level approach and the kernel-level approach. An implementation of protocols at the user level runs as a user process while a kernel-level implementation runs in the operating system. The Protocol Implementation Framework for Linux (PIX) is a tool for developing a user level implementation of protocols by means of Generative Programming (GP). It provides a set of libraries and uniform interfaces for a client-configurable protocol stack implementation. GP focuses on software system families. It uses concepts to represent the elements with variation points, and features to model the configurable aspects of a concept. Generators in GP take the specification of a system and automatically manufacture the one required. In this thesis, session management, message representation and event handling in PIX are developed to enhance its functionality. A passive open feature for session management provides a local node with the ability to accept connections. The creation of a channel is completed as soon as a remote node requests a connection. The protocol generator and session generator are designed to assemble implementation components automatically with previously-defined features, as well as new features, which are defined in this thesis. Three event-handling features have been developed for use in different situations. SimpleEventManager handles a few events. DeltalistManager can schedule more events efficiently. TimingwheelManager retains the efficiency of DeltalistManager while optimizing the event storage process. The BufferTreeMessage feature has been developed to allocate messages dynamically and avoid time-consuming data copying.

Citation Context

...es the event to retransmit the message. Sometimes the protocol may need to broadcast or multicast advertisements or solicitations periodically and scheduling relevant events is required. G. Varghese [22] introduces seven schemes for managing events. But only two of them are fundamental. The first is a scheme without the storage of events in a data structure and the second is a scheme with the storage...
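
The DeltalistManager and TimingwheelManager features described in the abstract above correspond to the event-storage schemes this context mentions. As a generic illustration of the delta-list idea (not PIX's actual code; all names are invented), each pending event stores only the delay relative to its predecessor, so a clock tick touches just the head of the list:

    /* Delta-list event scheduler sketch (illustrative; names are invented). */
    struct delta_event {
        struct delta_event *next;
        unsigned long delta;           /* ticks after the previous event    */
        void (*fire)(void *arg);
        void *arg;
    };

    /* Insert an event 'ticks' from now, rewriting deltas along the way. */
    void delta_insert(struct delta_event **head, struct delta_event *e,
                      unsigned long ticks)
    {
        while (*head && (*head)->delta <= ticks) {
            ticks -= (*head)->delta;
            head = &(*head)->next;
        }
        e->delta = ticks;
        e->next = *head;
        if (*head)
            (*head)->delta -= ticks;
        *head = e;
    }

    /* One clock tick: decrement the head and fire every event reaching 0. */
    void delta_tick(struct delta_event **head)
    {
        if (*head && (*head)->delta > 0)
            (*head)->delta--;
        while (*head && (*head)->delta == 0) {
            struct delta_event *e = *head;
            *head = e->next;
            e->fire(e->arg);
        }
    }

Insertion here is linear in the number of pending events, which is why the abstract pairs the delta list with a timing-wheel variant for larger event populations.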

Multiprocess Time Queue

by Andrej Brodnik, Johan Karlsson, 2001
"... We show how to provide a time queue, which is a variant of a priority queue, for two different processes. One process has time constraints and may only spend constant worst case time on each operation. Itonly has to be able to perform some of the time queue operations while the other process has to ..."
Abstract - Add to MetaCart
We show how to provide a time queue, which is a variant of a priority queue, for two different processes. One process has time constraints and may only spend constant worst case time on each operation. Itonly has to be able to perform some of the time queue operations while the other process has to be able to perform all operations. The otherprocess do not have any time constraints but we provide its operation in expected constant time.The main contribution is to show how to deamortize the deleteMincost and providing mutual exclusion for the parts that both processes maintains.

Scalable Hierarchical Coarse-grained Timers

by Rohit Dube
"... Several network servers and routing and signalling protocols need a large number of events to be scheduled off timers. Some of these applications can withstand a bounded level of inaccuracy in when the timer is scheduled. In this paper we describe a novel mechanism called "scalable hierarchical ..."
Abstract - Add to MetaCart
Several network servers and routing and signalling protocols need a large number of events to be scheduled off timers. Some of these applications can withstand a bounded level of inaccuracy in when the timer is scheduled. In this paper we describe a novel mechanism called "scalable hierarchical coarse grained timers" which can handle the scheduling of a large number of events while incurring a minimum of cpu and memory overhead. The techniques presented here were implemented on a commercial IP routing system and are used by the routing stack to damp flapping BGP routes. The paper reflects our experiences in carrying out this implementation and the subsequent performance analysis.