Results 1 - 10 of 485
Sizing Router Buffers
, 2004
"... All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP’s congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100 % of the time; which is equivalen ..."
Abstract
-
Cited by 352 (17 self)
- Add to MetaCart
(Show Context)
All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP’s congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time, which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT × C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10 Gb/s router linecard needs approximately 250 ms × 10 Gb/s = 2.5 Gbits of buffering, and the amount of buffering grows linearly with the line rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT × C) is now outdated and incorrect for backbone routers, because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (RTT × C)/√n, for long-lived or short-lived TCP flows. The consequences for router design are enormous: a 2.5 Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput, and a 10 Gb/s link carrying 50,000 flows requires only 10 Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
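The abstract's arithmetic can be checked directly. The sketch below (my own illustration; `buffer_bits` is a hypothetical helper, not from the paper) evaluates both the rule-of-thumb B = RTT × C and the revised bound B = (RTT × C)/√n for the figures quoted above:

```python
import math

def buffer_bits(rtt_s, link_bps, n_flows=1):
    """Rule-of-thumb buffer B = RTT x C, divided by sqrt(n) for n multiplexed flows."""
    return rtt_s * link_bps / math.sqrt(n_flows)

# Rule-of-thumb for a 10 Gb/s linecard with a 250 ms average RTT:
classic = buffer_bits(0.250, 10e9)          # 2.5 Gbits
# With 50,000 flows multiplexed, the revised bound shrinks the buffer ~224x:
revised = buffer_bits(0.250, 10e9, 50_000)  # ~11 Mbits, i.e. "only 10 Mbits" above
print(f"{classic / 1e9:.2f} Gbits -> {revised / 1e6:.1f} Mbits")
```

The √50,000 ≈ 224 divisor is what turns 2.5 Gbits of off-chip DRAM into roughly 11 Mbits, small enough for on-chip SRAM.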
Optimizing the migration of virtual computers
- In Proceedings of the 5th Symposium on Operating Systems Design and Implementation
, 2002
"... Abstract This paper shows how to quickly move the state of a running computer across a network, including the state in its disks, memory, CPU registers, and I/O devices. We call this state a capsule. Capsule state is hardware state, so it includes the entire operating system as well as applications ..."
Abstract
-
Cited by 238 (5 self)
- Add to MetaCart
(Show Context)
This paper shows how to quickly move the state of a running computer across a network, including the state in its disks, memory, CPU registers, and I/O devices. We call this state a capsule. Capsule state is hardware state, so it includes the entire operating system as well as applications and running processes. We have chosen to move x86 computer states because x86 computers are common, cheap, run the software we use, and have tools for migration. Unfortunately, x86 capsules can be large, containing hundreds of megabytes of memory and gigabytes of disk data. We have developed techniques to reduce the amount of data sent over the network: copy-on-write disks track just the updates to capsule disks, "ballooning" zeros unused memory, demand paging fetches only needed blocks, and hashing avoids sending blocks that already exist at the remote end. We demonstrate these optimizations in a prototype system that uses the VMware GSX Server virtual machine monitor to create and run x86 capsules. The system targets networks as slow as 384 kbps. Our experimental results suggest that efficient capsule migration can improve user mobility and system management. Software updates or installations on a set of machines can be accomplished simply by distributing a capsule with the new changes. Assuming the presence of a prior capsule, the amount of traffic incurred is commensurate with the size of the update or installation package itself. Capsule migration makes it possible for machines to start running an application within 20 minutes on a 384 kbps link, without having to first install the application or even the underlying operating system. Furthermore, users' capsules can be migrated during a commute between home and work in even less time.
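One of the optimizations named above, hashing to avoid resending blocks the remote end already holds, can be sketched in a few lines. This is my own illustration, not the authors' code; the 4 KB block size and SHA-256 digest are assumptions:

```python
import hashlib

BLOCK = 4096  # assumed block size; the abstract does not specify one

def blocks_to_send(capsule: bytes, remote_hashes: set) -> list:
    """Hash each block and ship only those the remote end does not already have."""
    out = []
    for i in range(0, len(capsule), BLOCK):
        block = capsule[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_hashes:
            out.append(block)
            remote_hashes.add(digest)  # the remote will hold it after this transfer
    return out

# A capsule of three blocks, two identical; the remote already holds the 'A' block:
a, b = b"A" * BLOCK, b"B" * BLOCK
remote = {hashlib.sha256(a).hexdigest()}
print(len(blocks_to_send(a + b + a, remote)))  # only the missing 'B' block travels
```

The same idea is why a software update transfers roughly the size of the update package: all unchanged blocks hash-match blocks from the prior capsule.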
On the Characteristics and Origins of Internet Flow Rates
- In ACM SIGCOMM
, 2002
"... This paper considers the distribution of the rates at which flows transmit data, and the causes of these rates. First, using packet level traces from several Internet links, and summary flow statistics from an ISP backbone, we examine Internet flow rates and the relationship between the rate and oth ..."
Abstract
-
Cited by 179 (4 self)
- Add to MetaCart
(Show Context)
This paper considers the distribution of the rates at which flows transmit data, and the causes of these rates. First, using packet level traces from several Internet links, and summary flow statistics from an ISP backbone, we examine Internet flow rates and the relationship between the rate and other flow characteristics such as size and duration. We find, as have others, that while the distribution of flow rates is skewed, it is not as highly skewed as the distribution of flow sizes. We also find that for large flows the size and rate are highly correlated. Second, we attempt to determine the cause of the rates at which flows transmit data by developing a tool, T-RAT, to analyze packet-level TCP dynamics. In our traces, the most frequent causes appear to be network congestion and receiver window limits.
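The basic quantity the paper studies, a flow's rate, is its bytes transferred divided by its duration. The sketch below is my own minimal version of that computation over a packet trace (it is not T-RAT, which goes much further and infers the *cause* of each rate from TCP dynamics):

```python
from collections import defaultdict

def flow_rates(packets):
    """packets: iterable of (flow_id, timestamp_s, size_bytes) tuples.
    Returns bytes/sec for each flow observed over a nonzero duration."""
    first, last, total = {}, {}, defaultdict(int)
    for fid, ts, size in packets:
        first.setdefault(fid, ts)
        last[fid] = ts
        total[fid] += size
    return {fid: total[fid] / (last[fid] - first[fid])
            for fid in total if last[fid] > first[fid]}

trace = [("f1", 0.0, 1500), ("f1", 1.0, 1500), ("f2", 0.0, 500), ("f2", 2.0, 500)]
print(flow_rates(trace))  # f1: 3000 B over 1 s -> 3000 B/s; f2: 1000 B over 2 s -> 500 B/s
```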
Network Emulation in the Vint/NS Simulator
- Proceedings of the fourth IEEE Symposium on Computers and Communications
, 1999
"... Employing an emulation capability in network simulation provides the ability for real-world traffic to interact with a simulation. The benefits of emulation include the ability to expose experimental algorithms and protocols to live traffic loads, and to test real-world protocol implementations agai ..."
Abstract
-
Cited by 141 (0 self)
- Add to MetaCart
(Show Context)
Employing an emulation capability in network simulation provides the ability for real-world traffic to interact with a simulation. The benefits of emulation include the ability to expose experimental algorithms and protocols to live traffic loads, and to test real-world protocol implementations against repeatable interference generated in simulation. This paper describes the design and implementation of the emulation facility in the NS simulator, a commonly used, publicly available network research simulator.

1. Introduction

Simulation and testbed construction represent the two most important methodologies available to network protocol developers for design and evaluation of both novel and existing network protocols. Simulation provides for repeatable, controlled experimentation with a modest overhead required to construct and carry out a simulation. Unfortunately, simulations often make simplifying assumptions which may obscure understanding of behavior seen in real-world situations. ...
Size-based Scheduling to Improve Web Performance
"... Is it possible to reduce the expected response time ofevery request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. Th ..."
Abstract
-
Cited by 140 (13 self)
- Add to MetaCart
Is it possible to reduce the expected response time of every request at a web server, simply by changing the order in which we schedule the requests? That is the question we ask in this paper. This paper proposes a method for improving the performance of web servers servicing static HTTP requests. The idea is to give preference to those requests which are short, or have small remaining processing requirements, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy. The implementation is at the kernel level and involves controlling the order in which socket buffers are drained into the network. Experiments are executed both in a LAN and a WAN environment. We use the Linux operating system and the Apache and Flash web servers. Results indicate that SRPT-based scheduling of connections yields significant reductions in delay at the web server. These result in a substantial reduction in mean response time, mean slowdown, and variance in response time for both the LAN and WAN environments. Significantly, and counter to intuition, the large requests are only negligibly penalized or not at all penalized as a result of SRPT-based scheduling.
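The SRPT policy itself is easy to state: always serve the job with the least remaining work, preempting on arrivals. The toy single-server simulation below is my own sketch of the policy, not the authors' kernel-level socket-buffer implementation:

```python
import heapq

def srpt_mean_response(jobs):
    """jobs: list of (arrival_time, size). One server, preemptive SRPT.
    Returns the mean response time (completion minus arrival)."""
    jobs = sorted(jobs)
    heap, t, i, resp = [], 0.0, 0, []
    while heap or i < len(jobs):
        if not heap:
            t = max(t, jobs[i][0])           # idle until the next arrival
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # (remaining, arrival)
            i += 1
        rem, arr = heapq.heappop(heap)       # job with least remaining work
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        if t + rem <= horizon:               # finishes before the next arrival
            t += rem
            resp.append(t - arr)
        else:                                # next arrival may preempt it
            heapq.heappush(heap, (rem - (horizon - t), arr))
            t = horizon
    return sum(resp) / len(resp)

# A long job, then a short one arriving mid-service: SRPT preempts the long job.
print(srpt_mean_response([(0.0, 10.0), (1.0, 1.0)]))
```

For this two-job example SRPT gives a mean response time of 6.0, versus 10.0 under FCFS, while the long job finishes at the same time either way, which is the "large requests are only negligibly penalized" intuition in miniature.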
Early Experience with an Internet Broadcast System Based on Overlay Multicast
, 2003
"... In this paper, we report on experience in building and deploying an operational Internet broadcast system based on Overlay Multicast. In over a year, the system has been providing a cost-e#ective alternative for Internet broadcast, used by over 3600 users spread across multiple continents in home, a ..."
Abstract
-
Cited by 134 (16 self)
- Add to MetaCart
(Show Context)
In this paper, we report on experience in building and deploying an operational Internet broadcast system based on Overlay Multicast. In over a year, the system has been providing a cost-effective alternative for Internet broadcast, used by over 3600 users spread across multiple continents in home, academic and commercial environments. Technical conferences and special interest groups are the early adopters. Our experience confirms that Overlay Multicast can be easily deployed and can provide reasonably good application performance. The experience has led us to identify first-order issues that are guiding our future efforts and are of importance to any Overlay Multicast protocol or system. Our key contributions are (i) enabling a real Overlay Multicast application and strengthening the case for overlays as a viable architecture for enabling group communication applications on the Internet, (ii) the details in engineering and operating a fully functional streaming system, addressing a wide range of real-world issues that are not typically considered in protocol design studies, and (iii) the data, analysis methodology, and experience that we are able to report given our unique standpoint.
Improving simulation for network research
- University of Southern California
, 1999
"... In recent years, the Internet has grown signicantly in size and scope, and as a result new protocols and algorithms are being developed to meet changing operational ..."
Abstract
-
Cited by 122 (17 self)
- Add to MetaCart
(Show Context)
In recent years, the Internet has grown significantly in size and scope, and as a result new protocols and algorithms are being developed to meet changing operational ...
NIST Net: A Linux-based Network Emulation Tool
- Computer Communication Review
, 2003
"... Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and exp ..."
Abstract
-
Cited by 106 (0 self)
- Add to MetaCart
(Show Context)
Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increases, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.
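The delay, jitter, and loss knobs described above can be modeled in user space with a few lines. This is a toy sketch of the idea only; NIST Net itself applies these dynamics to live IP packets inside the Linux kernel, and the parameter names here are my own:

```python
import random

def emulate(packets, delay_ms=100.0, jitter_ms=10.0, loss=0.01, seed=0):
    """Apply a fixed delay plus uniform jitter, and random drops, to a list of
    (timestamp_ms, payload) packets. Returns the surviving, delayed packets."""
    rng = random.Random(seed)  # seeded for repeatable 'interference'
    out = []
    for ts, payload in packets:
        if rng.random() < loss:
            continue                         # packet dropped
        out.append((ts + delay_ms + rng.uniform(-jitter_ms, jitter_ms), payload))
    return out

shaped = emulate([(0.0, b"a"), (10.0, b"b")], loss=0.0)
print(shaped)  # each packet delayed by 100 ms, +/- up to 10 ms of jitter
```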
Mesh-Based Content Routing using XML
- 18TH ACM SYMPOSIUM ON OPERATING SYSTEM PRINCIPLES (SOSP '01)
, 2001
"... We have developed a new approach for reliably multicasting timecritical data to heterogeneous clients over mesh-based overlay networks. To facilitate intelligent content pruning, data streams are comprised of a sequence of XML packets and forwarded by application-level XML routers. XML routers perfo ..."
Abstract
-
Cited by 102 (3 self)
- Add to MetaCart
We have developed a new approach for reliably multicasting time-critical data to heterogeneous clients over mesh-based overlay networks. To facilitate intelligent content pruning, data streams are comprised of a sequence of XML packets and forwarded by application-level XML routers. XML routers perform content-based routing of individual XML packets to other routers or clients based upon queries that describe the information needs of downstream nodes. Our PC-based XML router prototype can route an 18 Mbit per second XML stream. Our routers use ...
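The core routing step, matching each XML packet against the queries of downstream nodes and forwarding only where a query matches, can be sketched as below. This is my own illustration; the subscription names are invented and the simple ElementTree path predicates stand in for whatever query language the actual routers use:

```python
import xml.etree.ElementTree as ET

# Each downstream link registers a query describing the content it needs.
SUBSCRIPTIONS = {
    "client-A": lambda pkt: pkt.find("quote[symbol='ACME']") is not None,
    "client-B": lambda pkt: pkt.find("news") is not None,
}

def route(xml_packet: str):
    """Return the downstream links whose query matches this XML packet."""
    pkt = ET.fromstring(xml_packet)
    return [link for link, matches in SUBSCRIPTIONS.items() if matches(pkt)]

print(route("<msg><quote><symbol>ACME</symbol></quote></msg>"))  # client-A only
```

Content pruning falls out of this structure: a packet that matches no downstream query is simply not forwarded.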
Puppeteer: Component-based Adaptation for Mobile Computing
- In Proceedings of the 3rd USENIX Symposium on Internet Technologies and Systems
, 2001
"... Puppeteer is a system for adapting component-based applications in mobile environments. Puppeteer takes advantage of the exported interfaces of these applications and the structured nature of the documents they manipulate to perform adaptation without modifying the applications. The system is struct ..."
Abstract
-
Cited by 97 (10 self)
- Add to MetaCart
(Show Context)
Puppeteer is a system for adapting component-based applications in mobile environments. Puppeteer takes advantage of the exported interfaces of these applications and the structured nature of the documents they manipulate to perform adaptation without modifying the applications. The system is structured in a modular fashion, allowing easy addition of new applications and adaptation policies. Our initial prototype focuses on adaptation to limited bandwidth. It runs on Windows NT, and includes support for a variety of adaptation policies for Microsoft PowerPoint and Internet Explorer 5. We demonstrate that Puppeteer can support complex policies without any modification to the application and with little overhead. To the best of our knowledge, previous implementations of adaptations of this nature have relied on modifying the application.