Results **11 - 15** of **15**

### The Pseudo Self-Similar Traffic Model: Application and Validation

2002

Abstract

Since the early 1990s, a variety of studies have shown that network traffic, for both local- and wide-area networks, has self-similar properties. This led to new approaches in network traffic modelling, because most traditional traffic models result in the underestimation of the performance measures of interest. Instead of developing completely new traffic models, a number of researchers have proposed adapting traditional traffic modelling approaches to incorporate aspects of self-similarity. The motivation for doing so is the hope of being able to reuse techniques and tools that have been developed in the past and with which experience has been gained.
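The contrast between traditional (short-range dependent) models and self-similar traffic can be illustrated with the aggregated-variance method: for an i.i.d. arrival process the variance of block means decays like 1/m with block size m, whereas for self-similar traffic with Hurst parameter H it decays only like m^(2H-2). A minimal sketch using a synthetic i.i.d. series as the baseline (the function and parameters are illustrative, not from the paper):

```python
import random

def aggregated_variance(series, m):
    """Variance of the block means of the series, block size m."""
    blocks = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
    mean = sum(blocks) / len(blocks)
    return sum((b - mean) ** 2 for b in blocks) / len(blocks)

random.seed(0)
# Baseline: i.i.d. exponential "arrival counts" (short-range dependent);
# the variance of block means falls roughly tenfold per tenfold increase in m.
iid = [random.expovariate(1.0) for _ in range(100_000)]
for m in (1, 10, 100):
    print(m, aggregated_variance(iid, m))
# For genuinely self-similar traffic (H > 0.5) the decay is slower, ~ m**(2*H - 2),
# which is one common empirical test for self-similarity.
```

Plotting log variance against log m (a "variance-time plot") and reading off the slope is the usual way this diagnostic is applied to traffic traces.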

### FB∞: An Efficient Scheduling Policy for Edge Routers to Speedup the Internet Access

- TR 02.04.1, Institut Eurecom, 2002

Abstract

Recent Internet traffic measurements reveal that Internet traffic exhibits a high coefficient of variability (CoV). That is, Internet traffic consists of many small jobs and very few large jobs, and less than 1% of the largest jobs constitute more than half of the load. Consequently, we propose to use policies that take advantage of this attribute to favor small jobs over large jobs. If such a policy is implemented in an edge router, short jobs such as HTTP sessions will see their latency reduced. The shortest remaining processing time first (SRPT) scheduling policy has been known to be an optimal policy in minimizing the mean response time. Recent work [2] has shown that for job size distributions with high CoV, SRPT favors small jobs without unfairly penalizing large jobs. An implementation of SRPT requires that the sizes of all jobs be known, which cannot be assumed in most networking environments. In this paper, we analyze the Foreground-Background-Infinity (FB∞) scheduling policy, a priority policy that is known to favor small jobs the most among the scheduling policies that do not require knowledge of job sizes. However, when evaluated under the M/M/1 queueing model, FB∞ has been shown to highly penalize many large jobs in favor of small jobs. In this paper, we analyze the M/G/1/FB∞ queue, the objective being to investigate the fairness of FB∞ by comparing its slowdown to the slowdown offered by PS, to quantify the response time improvement that FB∞ offers when used instead of FIFO, and to compare FB∞ to an optimal policy, SRPT, for service distributions with varying CoVs. Finally, we consider FB∞ under overload, where we analyze its stability and derive its expression for the conditional mean response...
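The intuition that always serving the job with the least attained service helps small jobs under high-CoV size distributions can be conveyed with a toy single-server simulation. This is only an illustration of the FB/LAS idea under heavy-tailed job sizes, not the paper's M/G/1 analysis; all function names and parameters below are invented:

```python
import random

def simulate(jobs, policy, dt=0.01):
    """Toy single-server simulation. jobs: list of (arrival, size), sorted by arrival.
    'fifo' runs jobs to completion in arrival order;
    'fb' always serves the job with the least attained service (FB/LAS)."""
    t, pending, active, resp = 0.0, list(jobs), [], []
    while pending or active:
        while pending and pending[0][0] <= t:
            a, s = pending.pop(0)
            active.append([a, s, 0.0])   # [arrival, remaining, attained]
        if active:
            key = (lambda j: j[0]) if policy == "fifo" else (lambda j: j[2])
            j = min(active, key=key)
            j[1] -= dt                   # serve for one quantum
            j[2] += dt
            if j[1] <= 0:
                resp.append(t + dt - j[0])
                active.remove(j)
        t += dt
    return sum(resp) / len(resp)

random.seed(1)
t, jobs = 0.0, []
for _ in range(500):
    t += random.expovariate(0.7)                     # Poisson arrivals, load ~0.7
    jobs.append((t, random.paretovariate(1.5) / 3))  # heavy-tailed sizes, mean 1
print("FIFO mean response:", simulate(jobs, "fifo"))
print("FB   mean response:", simulate(jobs, "fb"))
```

With heavy-tailed sizes, a single large job at the head of a FIFO queue inflates the response time of every job behind it, whereas FB lets the many small jobs overtake it; the mean response time under FB comes out markedly lower.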

### Visualization Challenges in Internet Traffic Research

2004

Abstract

This is an overview of some recent research, and of some open problems, in the visualization of internet traffic data. One challenge comes from the sheer scale of the data, where millions (and far more if desired) of observations are frequently available. Another challenge comes from ubiquitous heavy tail distributions, which render standard ideas such as “random sampling will give a representative sample” obsolete. Some alternate sampling approaches are suggested and studied. One more challenge is the visual representation of (and even the definition of) “common constant transfer rates” in a large scatterplot.
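The failure of naive random sampling under heavy tails can be seen in a few lines: with a Pareto population of "flow sizes", small uniform samples give unstable mean estimates, because a large fraction of the total mass sits in a handful of observations. A sketch on synthetic data (the distribution and sample sizes are assumptions, not taken from the paper):

```python
import random

random.seed(2)
# Heavy-tailed "flow sizes": Pareto with shape 1.2 (finite mean, infinite variance).
population = [random.paretovariate(1.2) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

# Repeated 0.1% uniform random samples: the estimates scatter widely, because
# each one is dominated by whether a rare huge flow happened to be caught.
estimates = [sum(random.sample(population, 1000)) / 1000 for _ in range(5)]
print("true mean:", true_mean)
print("sample-mean estimates:", estimates)
```

This instability is why the abstract's alternate sampling approaches (e.g. size-aware or stratified schemes) are needed before such data can be visualized faithfully.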

### ON THE STATISTICAL CHARACTERIZATION OF FLOWS IN INTERNET TRAFFIC WITH APPLICATION TO SAMPLING

902

Abstract

A new method of estimating some statistical characteristics of TCP flows in the Internet is developed in this paper. For this purpose, a new set of random variables (referred to as observables) is defined. When dealing with sampled traffic, these observables can easily be computed from sampled data. By adopting a convenient mouse/elephant dichotomy, which also depends on the traffic, it is shown how these variables give a reliable statistical representation of the number of packets transmitted by large flows during successive time intervals of an appropriate duration. A mathematical framework is developed to estimate the accuracy of the method. As an application, it is shown how one can estimate the number of large TCP flows when only sampled traffic is available. The proposed algorithm is tested against experimental data.
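The flavour of the application can be conveyed with a crude sketch: sample packets independently with probability p, then flag a flow as an "elephant" when its sampled packet count reaches p times the elephant threshold. This naive scaled-threshold estimator is only an illustration of inference from sampled traffic; the paper's observables and accuracy framework are more refined, and all parameters below are invented:

```python
import random
from collections import Counter

random.seed(3)
p = 0.01        # packet sampling probability
T = 200         # elephant threshold, in packets

# Synthetic trace: heavy-tailed flow sizes (Pareto), flattened into a packet stream.
sizes = [max(1, int(random.paretovariate(1.3))) for _ in range(20_000)]
packets = [fid for fid, s in enumerate(sizes) for _ in range(s)]

# Independent per-packet sampling; count sampled packets per flow.
sampled = Counter(fid for fid in packets if random.random() < p)

true_elephants = sum(s >= T for s in sizes)
# Naive estimator: a flow sampled at least p*T times is declared an elephant.
est_elephants = sum(c >= p * T for c in sampled.values())
print("true:", true_elephants, "estimated:", est_elephants)
```

The naive estimator is noisy and biased near the threshold (a flow of size just under T is sampled ≥ p·T packets fairly often), which is precisely the kind of error the paper's mathematical framework is built to quantify.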