Results 1 - 10 of 48
A parameterizable methodology for Internet traffic flow profiling
- IEEE Journal on Selected Areas in Communications, 1995
"... We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies that have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP co ..."
Abstract
-
Cited by 169 (6 self)
- Add to MetaCart
We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies that have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections. Instead, our model defines flows based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network. This approach to flow characterization helps address some central problems in networking based on the Internet model. Among them are route caching, resource reservation at multiple service levels, usage-based accounting, and the integration of IP traffic over an ATM fabric. We first define the parameter space and then concentrate on metrics characterizing both individual flows and the aggregate flow profile. We consider various granularities of the definition of a flow, such as by destination network, host-pair, or host...
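As an illustration of the timeout-based flow definition this abstract describes (not code from the paper): a packet extends an active flow if it arrives within a timeout of that flow's previous packet, and otherwise starts a new flow. The host-pair granularity and the 64-second timeout below are illustrative assumptions, not the paper's fixed parameters.

```python
def count_flows(packets, timeout=64.0):
    """Group a packet stream into flows at host-pair granularity:
    a packet extends an active flow if it arrives within `timeout`
    seconds of that flow's previous packet, else it starts a new
    flow. Packets are (timestamp, src, dst) tuples."""
    last_seen = {}  # host pair -> timestamp of its last packet
    flows = 0
    for ts, src, dst in sorted(packets):
        key = (src, dst)
        if key not in last_seen or ts - last_seen[key] > timeout:
            flows += 1  # locality condition violated: new flow begins
        last_seen[key] = ts
    return flows

# A 200-second gap between packets of the same host pair splits the flow.
pkts = [(0.0, "a", "b"), (1.5, "a", "b"), (200.0, "a", "b")]
print(count_flows(pkts))  # 2
```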
Characteristics of Wide-Area TCP/IP Conversations
- Proceedings of ACM SIGCOMM '91, 1991
"... In this paper, we characterize wide-area network applications that use the TCP transport protocol. We also describe a new way to model the wide-area traffic generated by a stub network. We believe the traffic model presented here will be useful in studying congestion control, routing algorithms, and ..."
Abstract
-
Cited by 110 (1 self)
- Add to MetaCart
(Show Context)
In this paper, we characterize wide-area network applications that use the TCP transport protocol. We also describe a new way to model the wide-area traffic generated by a stub network. We believe the traffic model presented here will be useful in studying congestion control, routing algorithms, and other resource management schemes for existing and future networks. Our model is based on trace analysis of wide-area TCP/IP internetwork traffic. We collected the TCP/IP packet headers of USC, UCB, and Bellcore networks at the points where they connect with their respective regional access networks. We then wrote a handful of programs to analyze the traces. Our model characterizes individual TCP conversations by the distributions of: number of bytes transferred, duration, number of packets transferred, packet size, and packet interarrival time. Our trace analysis shows that both interactive and bulk transfer traffic from all sites reflect a large number of short conversations. Similarly, it shows that a very large percentage of traffic is bidirectional, even for bulk transfer. We observed that interactive applications send significantly different amounts of data in each direction of a conversation, and that interarrival times for interactive applications closely follow a constant plus exponential model. Half of the conversations are directed to a handful of networks, but the other half are directed to hundreds of networks. Many of these observations contradict commonly held beliefs regarding wide-area traffic.
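A minimal sketch of the constant-plus-exponential interarrival model the abstract mentions: an interarrival time is a fixed floor plus an exponentially distributed component. The parameter values here are made up for illustration; the paper fits them to traces.

```python
import random

def interarrival(c=0.1, mean_exp=0.4):
    """Sample one interarrival time (seconds) from a
    constant-plus-exponential model: a fixed minimum c plus an
    exponential component with the given mean."""
    return c + random.expovariate(1.0 / mean_exp)

# Generate a few synthetic keystroke-style interarrival times.
samples = [interarrival() for _ in range(5)]
print(samples)
```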
An Empirical Workload Model for Driving Wide-Area TCP/IP Network Simulations
- Internetworking: Research and Experience, 1992
"... We present an artificial workload model of wide-area internetwork traffic. The model can be used to drive simulation experiments of communication protocols and flow and congestion control experiments. The model is based on analysis of wide-area TCP/IP traffic collected from one industrial and two a ..."
Abstract
-
Cited by 103 (7 self)
- Add to MetaCart
(Show Context)
We present an artificial workload model of wide-area internetwork traffic. The model can be used to drive simulation experiments on communication protocols and on flow and congestion control. The model is based on analysis of wide-area TCP/IP traffic collected from one industrial and two academic networks. The artificial workload model uses both detailed knowledge and measured characteristics of the user application programs responsible for the traffic. Observations drawn from our measurements contradict some commonly held beliefs regarding wide-area TCP/IP network traffic.
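One standard way to turn measured characteristics like these into a workload generator is inverse-transform sampling of an empirical CDF. A sketch with invented example numbers, not the paper's measured data or its actual interface:

```python
import bisect
import random

def sample_empirical(values, cdf):
    """Inverse-transform sampling from a measured distribution:
    `values` are sorted observed values and `cdf` their cumulative
    probabilities. Draw u ~ U(0,1) and return the matching value."""
    u = random.random()
    return values[bisect.bisect_left(cdf, u)]

# Illustrative measured distribution of conversation sizes (bytes).
sizes = [512, 2048, 16384, 1048576]
cdf = [0.40, 0.70, 0.90, 1.00]
print(sample_empirical(sizes, cdf))
```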
Internet Traffic Characterization
1994
"... : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : xii 1 Introduction : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 1 1. The problem : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : ..."
Abstract
-
Cited by 56 (0 self)
- Add to MetaCart
[No abstract available; the indexed text is a table-of-contents fragment: 1 Introduction (The problem; Overview of thesis; Contribution of our work); 2 Taxonomy of traffic characteristics (Aggregation granularity; Host versus network centric perspective; Host centric perspective: Delay and jitter, ...)]
A Comparison of Hashing Schemes for Address Lookup in Computer Networks
1989
"... The trend toward networks becoming larger and faster, and addresses increasing in size, has impelled a need to explore alternatives for fast address recognition. Hashing is one such alternative which can help minimize the address search time in adapters, bridges, routers, gateways, and name servers. ..."
Abstract
-
Cited by 53 (1 self)
- Add to MetaCart
The trend toward larger and faster networks, and toward larger addresses, has created a need to explore alternatives for fast address recognition. Hashing is one such alternative, which can help minimize the address search time in adapters, bridges, routers, gateways, and name servers. Using a trace of address references, we compared the efficiency of several different hashing functions and found that cyclic redundancy check (CRC) polynomials provide excellent hashing functions. For software implementation, the Fletcher checksum provides a good hashing function. Straightforward folding of address octets using the exclusive-or operation is also a good hashing function. For some applications, bit extraction from the address can be used. Guidelines are provided for determining the size of the hash mask required to achieve a specified level of performance.
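A minimal sketch of the XOR-folding approach the abstract mentions, assuming a six-octet (Ethernet-style) address and a power-of-two table size; these assumptions are ours, not the paper's experimental setup.

```python
def fold_xor(address_octets, mask):
    """Hash an address by XOR-folding its octets, then masking
    to the table size (mask = table_size - 1, table_size a power
    of two)."""
    h = 0
    for octet in address_octets:
        h ^= octet
    return h & mask

# Example: a 48-bit Ethernet address hashed into a 256-slot table.
mac = [0x08, 0x00, 0x2B, 0x1C, 0x59, 0xF2]
print(fold_xor(mac, mask=0xFF))
```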
Cache Memory Design for Network Processors
"... The exponential growth in Internet traffic has motivated the development of a new breed of microprocessors called Network Processors, which are designed to address the performance problem resulting from exploding Internet traffic. The development efforts of these network processors concentrate almos ..."
Abstract
-
Cited by 37 (4 self)
- Add to MetaCart
(Show Context)
The exponential growth in Internet traffic has motivated the development of a new breed of microprocessors called network processors, which are designed to address the performance problem resulting from exploding Internet traffic. The development efforts for these network processors concentrate almost exclusively on streamlining their data paths to speed up network packet processing, which mainly consists of routing and data movement. Rather than blindly pushing the performance of packet processing hardware, an alternative approach is to avoid repeated computation by applying the time-tested architectural idea of caching to network packet processing. Because the data streams presented to network processors and general-purpose CPUs exhibit different characteristics, the detailed cache design tradeoffs for the two also differ considerably. This research focuses on cache memory design specifically for network processors. Using a trace-driven simulation methodology, we evaluate a series of three...
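The trace-driven methodology can be sketched as replaying an address trace against a cache model and measuring the hit ratio. Below, a fully associative LRU cache stands in for the designs the paper evaluates; the toy trace and capacity are assumptions.

```python
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    """Trace-driven simulation of a fully associative LRU route
    cache: replay destination addresses and count hits."""
    cache = OrderedDict()
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # refresh LRU position
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = True
    return hits / len(trace)

# A toy trace with strong temporal locality.
trace = ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.1", "10.0.0.3"]
print(lru_hit_ratio(trace, capacity=2))  # 0.4
```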
Approximate Caches for Packet Classification
- Proceedings of IEEE INFOCOM, 2004
"... Many network devices such as routers and firewalls employ caches to take advantage of temporal locality of packet headers in order to speed up packet processing decisions. Traditionally, cache designs trade off time and space with the goal of balancing the overall cost and performance of the device. ..."
Abstract
-
Cited by 36 (1 self)
- Add to MetaCart
(Show Context)
Many network devices, such as routers and firewalls, employ caches to take advantage of temporal locality of packet headers in order to speed up packet processing decisions. Traditionally, cache designs trade off time and space with the goal of balancing the overall cost and performance of the device. In this paper, we examine another axis of the design space that has not been previously considered: accuracy. In particular, we quantify the benefits of relaxing the accuracy of the cache on the cost and performance of packet classification caches. Our cache design is based on the popular Bloom filter data structure. This paper provides a model for optimizing Bloom filters for this purpose, as well as extensions to the data structure to support graceful aging, bounded misclassification rates, and multiple binary predicates. Given this, we show that such caches can provide nearly an order of magnitude cost savings at the expense of misclassifying one in a billion packets for IPv6-based caches.
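A minimal Bloom-filter cache sketch in the spirit of the abstract, without the paper's aging and multi-predicate extensions; the sizing and the SHA-256-based hash family are our assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per key, no deletion.
    False-positive probability is roughly (1 - e^(-k*n/m))^k for
    n inserted keys, m bits, and k hash functions."""
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Remember a flow's 5-tuple after classifying it once; a false
# positive here is exactly the misclassification the paper bounds.
cache = BloomFilter(m_bits=1 << 20, k_hashes=4)
flow = ("2001:db8::1", "2001:db8::2", 443, 52000, "tcp")
cache.add(flow)
print(flow in cache)  # True
```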
A Novel Cache Architecture to Support Layer-Four Packet Classification at Memory Access Speeds
2000
"... | Existing and emerging layer-4 switching technologies require packet classication to be performed on more than one header elds, known as layer-4 lookup. Currently, the fastest general layer-4 lookup scheme delivers a throughput of 1 Million Lookups Per Second (MLPS), far o from 25/75 MLPS needed to ..."
Abstract
-
Cited by 35 (3 self)
- Add to MetaCart
Existing and emerging layer-4 switching technologies require packet classification to be performed on more than one header field, known as layer-4 lookup. Currently, the fastest general layer-4 lookup scheme delivers a throughput of 1 Million Lookups Per Second (MLPS), far short of the 25/75 MLPS needed to support a 50/150 Gbps layer-4 router. We propose the use of route caching to speed up layer-4 lookup, and design and implement a cache architecture for this purpose. We investigated the locality behavior of Internet traffic (at layer 4) and proposed a near-LRU algorithm that best harnesses this behavior. In implementation, to best approximate fully associative near-LRU using relatively inexpensive set-associative hardware, we invented a dynamic set-associative scheme that exploits the properties of N-universal hash functions. The cache architecture achieves a high and stable hit ratio above 90 percent and a fast throughput up to 75 MLPS at a reasonable cost ($700/1700 for 50/150 Gbps rou...
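A sketch of the set-associative building block the abstract describes, with per-set LRU replacement; Python's built-in hash stands in for the paper's N-universal hash functions, and the near-LRU refinement is omitted.

```python
class SetAssociativeCache:
    """W-way set-associative cache with per-set LRU replacement.
    A hash of the key selects the set; each set keeps its entries
    ordered most-recently-used first."""
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]

    def lookup(self, key):
        """Return True on a hit; on a miss, insert the key,
        evicting the set's LRU entry if the set is full."""
        s = self.sets[hash(key) % self.num_sets]
        if key in s:
            s.remove(key)
            s.insert(0, key)  # move to MRU position
            return True
        if len(s) >= self.ways:
            s.pop()           # evict LRU entry within the set
        s.insert(0, key)
        return False

cache = SetAssociativeCache(num_sets=1024, ways=4)
print(cache.lookup(("10.0.0.1", "10.0.0.2", 80, 4242, 6)))  # False (miss)
print(cache.lookup(("10.0.0.1", "10.0.0.2", 80, 4242, 6)))  # True (hit)
```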
An assessment of state and lookup overhead in routers
- Proceedings of IEEE INFOCOM, 1992
"... ..."
(Show Context)
Leveraging Zipf’s law for traffic offloading
- SIGCOMM Computer Communication Review, 2012
"... Internet traffic has Zipf-like properties at multiple aggregation lev-els. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the for-warder handle a very limited ..."
Abstract
-
Cited by 17 (4 self)
- Add to MetaCart
(Show Context)
Internet traffic has Zipf-like properties at multiple aggregation levels. These properties suggest the possibility of offloading most of the traffic from a complex controller (e.g., a software router) to a simple forwarder (e.g., a commodity switch), by letting the forwarder handle a very limited set of flows: the heavy hitters. As the volume of traffic from a set of flows is highly dynamic, maintaining a reliable set of heavy hitters over time is challenging. This is especially true when we face a volume limit on the non-offloaded traffic in combination with a constraint on the size of the heavy hitter set or its rate of change. We propose a set selection strategy that takes advantage of the properties of heavy hitters at different time scales. Based on real Internet traffic traces, we show that our strategy is able to offload most of the traffic while limiting the rate of change of the heavy hitter set, suggesting the feasibility of alternative router designs.
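A sketch of one simple heavy-hitter selection rule consistent with the abstract's setup: offload the smallest set of flows covering a target share of bytes, subject to a cap on set size. The coverage target and cap are illustrative; the paper's actual strategy additionally limits how fast the set changes across time scales.

```python
def heavy_hitter_set(flow_bytes, coverage=0.9, max_flows=1000):
    """Pick flows in decreasing volume order until they cover
    `coverage` of total bytes or the set reaches max_flows.
    With Zipf-like traffic, a small set covers most bytes."""
    total = sum(flow_bytes.values())
    chosen, covered = [], 0
    for flow, nbytes in sorted(flow_bytes.items(),
                               key=lambda kv: kv[1], reverse=True):
        if covered >= coverage * total or len(chosen) >= max_flows:
            break
        chosen.append(flow)
        covered += nbytes
    return chosen

# One dominant flow already covers 90% of this toy traffic mix.
counts = {"f1": 9000, "f2": 600, "f3": 300, "f4": 100}
print(heavy_hitter_set(counts, coverage=0.9, max_flows=2))  # ['f1']
```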