Results 1 - 10 of 15
Idletime scheduling with preemption intervals
- 20th ACM Symposium on Operating Systems Principles, 2005
"... ABSTRACT * This paper presents the idletime scheduler; a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use and concurrent foreground performance can decr ..."
Abstract
-
Cited by 31 (0 self)
- Add to MetaCart
(Show Context)
This paper presents the idletime scheduler, a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use, and concurrent foreground performance can decrease by 50% or more. The idletime scheduler minimizes this interference by partially relaxing the work conservation principle during preemption intervals, during which it serves no background requests even if the resource is idle. The length of the preemption interval is a controlling parameter of the scheduler: short intervals aggressively utilize idle capacity; long intervals reduce the impact of background use on foreground performance. Unlike existing approaches to establishing prioritized resource use, idletime scheduling requires only localized modifications to a limited number of system schedulers. In experiments, a FreeBSD implementation of idletime network scheduling maintains over 90% of foreground TCP throughput while allowing concurrent, high-rate UDP background flows to consume up to 80% of the remaining link capacity. A FreeBSD disk scheduler implementation maintains 80% of foreground read performance while enabling concurrent background operations to reach 70% throughput.
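A minimal sketch may make the preemption-interval mechanism concrete. The Python below is a hypothetical, simplified model, not the paper's FreeBSD kernel code: background requests are dispatched only once the resource has been idle for a full preemption interval since the last foreground request.

```python
# Hypothetical sketch of the idletime idea (not the paper's implementation):
# background work waits out a "preemption interval" after foreground activity,
# even though the resource is idle (work conservation is partially relaxed).

import collections
import time

class IdletimeScheduler:
    def __init__(self, preemption_interval):
        self.preemption_interval = preemption_interval  # seconds
        self.foreground = collections.deque()
        self.background = collections.deque()
        self.last_foreground_activity = 0.0

    def submit(self, request, is_foreground):
        (self.foreground if is_foreground else self.background).append(request)

    def next_request(self, now=None):
        """Return the next request to serve, or None if the resource must stay idle."""
        now = time.monotonic() if now is None else now
        if self.foreground:
            # Foreground work is always served first and (re)starts the interval.
            self.last_foreground_activity = now
            return self.foreground.popleft()
        if self.background and now - self.last_foreground_activity >= self.preemption_interval:
            return self.background.popleft()
        return None

# Short interval -> aggressive use of idle capacity; long interval -> less
# interference with foreground requests that arrive shortly after the last one.
sched = IdletimeScheduler(preemption_interval=0.005)
sched.submit("fg-read", is_foreground=True)
sched.submit("bg-scrub", is_foreground=False)
print(sched.next_request(now=0.0))    # fg-read
print(sched.next_request(now=0.001))  # None: still inside the preemption interval
print(sched.next_request(now=0.010))  # bg-scrub
```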
A Fast Content-based Data Distribution Infrastructure
- In Infocom, 2006
"... Abstract — We present Sieve – an infrastructure for fast content-based data distribution to interested users. The ability of Sieve to filter and forward high-bandwidth data streams stems from its distributed pipelined architecture. The complex message filtering task is broken-up into a sequence of l ..."
Abstract
-
Cited by 8 (0 self)
- Add to MetaCart
(Show Context)
We present Sieve, an infrastructure for fast content-based data distribution to interested users. The ability of Sieve to filter and forward high-bandwidth data streams stems from its distributed pipelined architecture. The complex message-filtering task is broken up into a sequence of lightweight filtering components, resulting in high end-to-end throughput. Furthermore, since each component is assigned to a node based on its resource constraints, queue buildup inside the nodes is minimal, resulting in low end-to-end latency. Our experimental results, based on a real system implementation, show that Sieve can sustain a throughput of more than 5,000 messages per second for 100,000 subscriptions with predicates of 10 attributes. Index Terms — Content-based Information Dissemination, Publish-Subscribe System, Event Stream Filtering
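The pipelined-filtering idea can be sketched as a chain of light per-attribute stages, each forwarding only the messages that pass its part of the predicate. The Python toy below is illustrative only, not Sieve's implementation; the attribute names and predicates are invented.

```python
# Hypothetical sketch of pipelined content-based filtering: a predicate over
# many attributes is split into one lightweight stage per attribute.

def make_stage(attribute, test):
    """Build a stage that forwards only messages whose `attribute` passes `test`."""
    def stage(stream):
        for message in stream:
            if test(message.get(attribute)):
                yield message
    return stage

def pipeline(stages, stream):
    # Chain the stages so each one consumes the previous stage's output.
    for stage in stages:
        stream = stage(stream)
    return stream

# Example subscription: price < 100 AND symbol == "ACME", evaluated in two stages.
stages = [
    make_stage("price", lambda v: v is not None and v < 100),
    make_stage("symbol", lambda v: v == "ACME"),
]
messages = [
    {"symbol": "ACME", "price": 90},
    {"symbol": "ACME", "price": 150},
    {"symbol": "OTHER", "price": 10},
]
print(list(pipeline(stages, iter(messages))))  # only the first message survives
```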
Link Gradients: Predicting the Impact of Network Latency on Multi-Tier Applications
"... Geographically dispersed deployments of large and complex multitier enterprise applications introduce many challenges, including those involved in predicting the impact of network latency on end-to-end transaction response times. Here, a measurement-based approach to quantifying this impact using a ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
(Show Context)
Geographically dispersed deployments of large and complex multi-tier enterprise applications introduce many challenges, including predicting the impact of network latency on end-to-end transaction response times. Here, a measurement-based approach to quantifying this impact using a new metric, the link gradient, is presented. A non-intrusive technique for measuring the link gradient in running systems using delay injection and spectral analysis is described, along with experimental results on PlanetLab demonstrating that the link gradient can be used to predict end-to-end responsiveness, even in new and unknown application configurations.
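As a rough illustration of the idea, one can inject a small sinusoidal delay on a link, record end-to-end response times, and read the sensitivity off the spectrum at the injection frequency. The Python sketch below uses entirely synthetic numbers and a made-up toy system; it is not the paper's measurement tool.

```python
# Illustrative estimate of a "link gradient" (d response_time / d link_latency)
# from delay injection plus spectral analysis, on synthetic data.

import numpy as np

n = 1024                       # number of sampled transactions
f = 50                         # injection frequency bin
amp = 2.0                      # injected delay amplitude (ms)
t = np.arange(n)

injected = amp * np.sin(2 * np.pi * f * t / n)           # delay added to one link
true_gradient = 3.0                                       # assumed: link crossed 3x per transaction
response = 120.0 + true_gradient * injected + np.random.normal(0, 1.0, n)  # noisy measurements

# Amplitude of the response-time signal at the injection frequency, via FFT.
spectrum = np.fft.rfft(response)
response_amp = 2.0 * np.abs(spectrum[f]) / n

print("estimated link gradient:", response_amp / amp)    # close to 3.0
```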
Concurrent Implementation of Packet Processing Algorithms on Network Processors
"... I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Mark Groves ii Network Processor Units (NPUs) are a compr ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Network Processor Units (NPUs) are a compromise between software-based and hardwired packet-processing solutions. While slower than hardwired solutions, NPUs have the flexibility of software-based solutions, allowing them to adapt faster to changes in network protocols. Network processors have multiple processing engines so that multiple packets can be processed simultaneously within the NPU. In addition, each of these processing engines is multi-threaded, with special hardware support built in to alleviate some of the cost of concurrency. This hardware design allows the NPU to handle multiple packets concurrently, so that while one thread is waiting for a memory access to complete, another thread can be processing a different packet. By handling several packets simultaneously, an NPU can achieve similar processing power as traditional packet
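A back-of-the-envelope model shows why multithreading hides memory latency: while one thread stalls on a memory access, the engine runs another thread's packet. The Python snippet below is a made-up analytical toy with invented cycle counts, not the thesis's NPU code.

```python
# Toy model of latency hiding on a multi-threaded processing engine.
# All numbers are invented for illustration.

MEMORY_LATENCY = 10   # cycles a memory access stalls a thread
COMPUTE_CYCLES = 5    # cycles of computation between accesses
ACCESSES_PER_PACKET = 4

def engine_cycles(num_threads, packets):
    """Approximate cycles to process `packets` packets when one thread's stalls
    can be overlapped with the other threads' compute."""
    per_packet_work = ACCESSES_PER_PACKET * COMPUTE_CYCLES
    per_packet_stall = ACCESSES_PER_PACKET * MEMORY_LATENCY
    hidden = min(per_packet_stall, (num_threads - 1) * per_packet_work)
    return packets * (per_packet_work + per_packet_stall - hidden)

for threads in (1, 2, 4, 8):
    print(threads, "threads:", engine_cycles(threads, packets=1000), "cycles")
```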
A Large-Scale Hardware Timer Manager
"... Timers are used throughout in network protocols, particularly for packet loss detection and for connection management. Thus, at least one timer is used per connection. Internet servers and gateways need to serve several thousands of simultaneously open connections, therefore a multiplicity of timers ..."
Abstract
- Add to MetaCart
(Show Context)
Timers are used throughout network protocols, particularly for packet-loss detection and connection management; thus, at least one timer is used per connection. Internet servers and gateways need to serve several thousand simultaneously open connections, so many timers have to be managed at once. To achieve scalable timer management, we present a large-scale hardware timer manager that can be implemented as a coprocessor in any network processing unit. This coprocessor uses on- and off-chip memory to handle the timers. The on-chip memory functions like a processor cache to reduce the number of external memory accesses and thereby decrease operation latency. To sort the timers according to their expiration time, the data structure of the timer manager is based on the d-heap. We have simulated the model in SystemC to measure the performance of the timer operations: start, stop and expire. In this paper we present a hardware concept for a large-scale timer manager and discuss the simulation results to show its efficiency.
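A software model of the d-heap timer structure might look like the sketch below. It is a hypothetical illustration only: the coprocessor's on-chip caching and off-chip memory handling are not modelled, and only the operation names mirror the start/stop/expire terminology above; cancelled timers are removed lazily here, which is an assumption, not the paper's design.

```python
# Hypothetical d-ary min-heap of timers ordered by expiration time.

class DHeapTimerManager:
    def __init__(self, d=4):
        self.d = d
        self.heap = []          # entries are (expiry, timer_id)
        self.cancelled = set()  # "stop" marks timers for lazy removal

    def start(self, timer_id, expiry):
        self.heap.append((expiry, timer_id))
        self._sift_up(len(self.heap) - 1)

    def stop(self, timer_id):
        self.cancelled.add(timer_id)

    def expire(self, now):
        """Pop and return all timers whose expiration time has passed."""
        fired = []
        while self.heap and self.heap[0][0] <= now:
            _, timer_id = self._pop_min()
            if timer_id in self.cancelled:
                self.cancelled.discard(timer_id)
            else:
                fired.append(timer_id)
        return fired

    def _pop_min(self):
        top = self.heap[0]
        last = self.heap.pop()
        if self.heap:
            self.heap[0] = last
            self._sift_down(0)
        return top

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // self.d
            if self.heap[i] < self.heap[parent]:
                self.heap[i], self.heap[parent] = self.heap[parent], self.heap[i]
                i = parent
            else:
                break

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            first_child = self.d * i + 1
            if first_child >= n:
                break
            smallest = min(range(first_child, min(first_child + self.d, n)),
                           key=lambda j: self.heap[j])
            if self.heap[smallest] < self.heap[i]:
                self.heap[i], self.heap[smallest] = self.heap[smallest], self.heap[i]
                i = smallest
            else:
                break

tm = DHeapTimerManager(d=4)
tm.start("retransmit-42", expiry=100)
tm.start("keepalive-7", expiry=50)
tm.stop("retransmit-42")
print(tm.expire(now=120))   # ['keepalive-7']
```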
Abstract
"... Multimedia objects like video clips or captioned images contain data of various modalities such as image, audio, and transcript text. Correlations across different modalities provide infor-mation about the multimedia content, and are useful in applications ranging from summarization to semantic capt ..."
Abstract
- Add to MetaCart
Multimedia objects like video clips or captioned images contain data of various modalities such as image, audio, and transcript text. Correlations across different modalities provide information about the multimedia content and are useful in applications ranging from summarization to semantic captioning. For discovering cross-modal correlations, we proposed a graph-based method, MAGIC, which turns the multimedia problem into a graph problem by representing multimedia data as a graph. Using “random walks with restarts” on the graph, MAGIC is capable of finding correlations among all modalities. When applied to the task of automatic image captioning, MAGIC found robust correlations between text and image and achieved a relative improvement of 58% in captioning accuracy compared to recent machine learning techniques. MAGIC has several desirable properties: (a) it is general and domain-independent; (b) it can spot correlations across any two modalities; (c) it is completely automatic and insensitive to parameter settings; (d) it scales up well for large datasets; (e) it enables novel multimedia applications (e.g., group captioning); and (f) it creates opportunities for applying graph algorithms to multimedia problems.
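The random-walk-with-restarts primitive itself is easy to sketch. The Python toy below builds a tiny invented image/word graph and computes RWR scores by power iteration; it illustrates only the core primitive, not MAGIC's graph construction or captioning pipeline.

```python
# Minimal random-walk-with-restarts sketch over a toy cross-modal graph
# (image nodes connected to caption-word nodes); the graph is invented.

import numpy as np

nodes = ["img1", "img2", "sun", "beach", "snow"]
edges = [("img1", "sun"), ("img1", "beach"), ("img2", "snow"), ("img2", "sun")]

n = len(nodes)
idx = {v: i for i, v in enumerate(nodes)}
A = np.zeros((n, n))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
W = A / A.sum(axis=0, keepdims=True)     # column-stochastic transition matrix

def rwr(restart_node, c=0.15, iters=100):
    """Steady-state visiting probabilities of a walk that restarts at restart_node."""
    e = np.zeros(n)
    e[idx[restart_node]] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = (1 - c) * W @ p + c * e
    return dict(zip(nodes, p.round(3)))

# Words ranked by affinity to img1; "sun" and "beach" score highest.
print(rwr("img1"))
```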
Channel Management, Message Representation and Event Handling of a Protocol Implementation Framework for Linux Using Generative Programming
"... There exist two main approaches for the implementation of protocols: the user-level approach and the kernel-level approach. An implementation of protocols at the user level runs as a user process while a kernel-level implementation runs in the operating system. The Protocol Implementation Framework ..."
Abstract
- Add to MetaCart
(Show Context)
There are two main approaches to implementing protocols: the user-level approach and the kernel-level approach. A user-level implementation of protocols runs as a user process, while a kernel-level implementation runs in the operating system. The Protocol Implementation Framework for Linux (PIX) is a tool for developing user-level implementations of protocols by means of Generative Programming (GP). It provides a set of libraries and uniform interfaces for a client-configurable protocol stack implementation. GP focuses on software system families. It uses concepts to represent the elements with variation points, and features to model the configurable aspects of a concept. Generators in GP take the specification of a system and automatically manufacture the one required. In this thesis, session management, message representation and event handling in PIX are developed to enhance its functionality. A passive-open feature for session management provides a local node with the ability to accept connections; the creation of a channel is completed as soon as a remote node requests a connection. The protocol generator and session generator are designed to assemble implementation components automatically from previously defined features, as well as the new features defined in this thesis. Three event-handling features have been developed for use in different situations: SimpleEventManager handles a small number of events; DeltalistManager can schedule more events efficiently; TimingwheelManager retains the efficiency of DeltalistManager while optimizing event storage. The BufferTreeMessage feature has been developed to allocate messages dynamically and avoid time-consuming data copying.
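As a rough idea of what a TimingwheelManager-style handler does, the Python sketch below implements a generic timing wheel: events are hashed into slots by expiry time so that scheduling and per-tick expiry stay cheap. This is a textbook-style illustration under invented parameters, not the PIX code.

```python
# Generic timing-wheel sketch: O(1) average insertion and per-tick expiry.

class TimingWheel:
    def __init__(self, num_slots, tick_ms):
        self.num_slots = num_slots
        self.tick_ms = tick_ms
        self.slots = [[] for _ in range(num_slots)]   # each slot holds [rounds, callback] entries
        self.current = 0                              # slot index of "now"

    def schedule(self, delay_ms, callback):
        ticks = max(1, delay_ms // self.tick_ms)
        rounds, offset = divmod(ticks, self.num_slots)
        slot = (self.current + offset) % self.num_slots
        self.slots[slot].append([rounds, callback])

    def tick(self):
        """Advance time by one tick and fire events due in the current slot."""
        self.current = (self.current + 1) % self.num_slots
        due, remaining = [], []
        for entry in self.slots[self.current]:
            if entry[0] == 0:
                due.append(entry)
            else:
                entry[0] -= 1          # not due this round of the wheel yet
                remaining.append(entry)
        self.slots[self.current] = remaining
        for _, callback in due:
            callback()

wheel = TimingWheel(num_slots=8, tick_ms=10)
wheel.schedule(30, lambda: print("retransmit timer fired"))
for _ in range(4):
    wheel.tick()   # fires on the third tick (30 ms at 10 ms per tick)
```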
Multiprocess Time Queue
, 2001
"... We show how to provide a time queue, which is a variant of a priority queue, for two different processes. One process has time constraints and may only spend constant worst case time on each operation. Itonly has to be able to perform some of the time queue operations while the other process has to ..."
Abstract
- Add to MetaCart
We show how to provide a time queue, a variant of a priority queue, for two different processes. One process has time constraints and may spend only constant worst-case time on each operation; it only has to be able to perform some of the time queue operations, while the other process has to be able to perform all of them. The other process does not have any time constraints, but we provide its operations in expected constant time. The main contribution is to show how to deamortize the deleteMin cost and how to provide mutual exclusion for the parts that both processes maintain.
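The general shape of such a shared time queue can be sketched with one bucket per time slot and a lock around the parts both sides touch, as below. This Python toy does not reproduce the paper's deamortized deleteMin or its constant worst-case bounds; it only shows the split between a cheap insert path for the time-constrained side and a separate delete/advance path for the other side.

```python
# Rough bucket-based time queue shared by two threads; illustrative only.

import threading

class TimeQueue:
    def __init__(self, horizon):
        self.buckets = [[] for _ in range(horizon)]   # events indexed by time slot
        self.now = 0
        self.lock = threading.Lock()

    def insert(self, when, event):
        # Time-constrained side: a single append under the lock, O(1).
        with self.lock:
            self.buckets[when % len(self.buckets)].append((when, event))

    def delete_min(self):
        # Unconstrained side: advance the clock until a due event is found.
        with self.lock:
            for _ in range(len(self.buckets)):
                bucket = self.buckets[self.now % len(self.buckets)]
                due = [e for e in bucket if e[0] == self.now]
                if due:
                    bucket.remove(due[0])
                    return due[0]
                self.now += 1
            return None

q = TimeQueue(horizon=16)
q.insert(3, "ack timeout")
q.insert(7, "keepalive")
print(q.delete_min())   # (3, 'ack timeout')
print(q.delete_min())   # (7, 'keepalive')
```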
Scalable Hierarchical Coarse-grained Timers
"... Several network servers and routing and signalling protocols need a large number of events to be scheduled off timers. Some of these applications can withstand a bounded level of inaccuracy in when the timer is scheduled. In this paper we describe a novel mechanism called "scalable hierarchical ..."
Abstract
- Add to MetaCart
Several network servers and routing and signalling protocols need a large number of events to be scheduled off timers. Some of these applications can tolerate a bounded level of inaccuracy in when the timer is scheduled. In this paper we describe a novel mechanism called "scalable hierarchical coarse-grained timers" which can handle the scheduling of a large number of events while incurring minimal CPU and memory overhead. The techniques presented here were implemented on a commercial IP routing system and are used by the routing stack to damp flapping BGP routes. The paper reflects our experiences in carrying out this implementation and the subsequent performance analysis.
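The coarse-grained bucketing idea can be illustrated as follows: timers that tolerate inaccuracy are rounded up to a coarse granularity, so large numbers of them collapse into a few buckets and expire together. The Python sketch below, including the granularities and the route-damping example, is invented for illustration and is not the commercial implementation described above.

```python
# Illustrative coarse-grained timer buckets arranged in a two-level hierarchy.

import collections

class CoarseTimerLevel:
    def __init__(self, granularity_ms):
        self.granularity_ms = granularity_ms
        self.buckets = collections.defaultdict(list)   # bucket index -> callbacks

    def schedule(self, now_ms, delay_ms, callback):
        # Rounding up to a bucket boundary bounds the error by one granularity.
        bucket = (now_ms + delay_ms + self.granularity_ms - 1) // self.granularity_ms
        self.buckets[bucket].append(callback)

    def expire(self, now_ms):
        fired = []
        for bucket in [b for b in self.buckets if b * self.granularity_ms <= now_ms]:
            fired.extend(self.buckets.pop(bucket))
        return fired

# Hierarchy: short, accuracy-sensitive timers on a fine level; long damping
# timers (e.g. for flapping BGP routes) on a coarse level.
levels = [CoarseTimerLevel(10), CoarseTimerLevel(1000)]
levels[0].schedule(0, 35, lambda: "retransmit")
levels[1].schedule(0, 4200, lambda: "undamp route")
print(len(levels[1].expire(now_ms=5000)))   # 1: coarse timer fired within 1 s of its target
```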