Results 1–10 of 43
Internet growth: Is there a “Moore’s Law” for data traffic?
 Handbook of Massive Data Sets
, 2001
Abstract

Cited by 55 (10 self)
Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to those that have produced “Moore’s Law” in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period, 1995–96. Ever since, though, growth appears to have reverted to the Internet’s historical pattern of a single doubling each year. Progress in transmission technology appears sufficient to double network capacity each year for about the next decade. However, traffic growth faster than a tripling each year could probably not be sustained for more than a few years. Since computing and storage capacities will also be growing, as ...
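The crossover claim above is simple compound-growth arithmetic: data traffic doubling yearly overtakes slower-growing voice traffic within a few years. A minimal sketch, assuming hypothetical starting volumes and an assumed ~10%/year voice growth rate (neither figure is from the abstract):

```python
# Hypothetical illustration of the crossover arithmetic: data traffic
# doubles each year while voice grows ~10%/year.  The starting volumes
# below are assumptions for the sketch, not figures from the abstract.
def crossover_year(start_year, data, voice, data_growth=2.0, voice_growth=1.1):
    """Return the first year in which data traffic meets or exceeds voice."""
    year = start_year
    while data < voice:
        data *= data_growth
        voice *= voice_growth
        year += 1
    return year

# e.g. data traffic at one-eighth the voice volume in 1998:
print(crossover_year(1998, 1.0, 8.0))  # -> 2002
```

With these assumed inputs the crossover lands in 2002, consistent with the abstract's estimate; the exact year is of course sensitive to the assumed starting ratio.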
Parallel Algorithms for Integer Factorisation
Abstract

Cited by 41 (17 self)
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM), and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1 which was accomplished using ECM.
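The scaling rule quoted in this abstract — each doubling of parallelism buys roughly 3 extra decimal digits in fixed time — implies a logarithmic relation. A minimal sketch, where the 120-digit baseline is an assumed reference point taken loosely from the abstract:

```python
import math

# Sketch of the scaling rule from the abstract: doubling the degree of
# parallelism adds roughly 3 decimal digits to the size of number that
# can be factored in fixed time.  base_digits is an assumed baseline.
def factorable_digits(parallelism, base_digits=120):
    """Approximate digit count reachable in fixed time at a given
    degree of parallelism, relative to an assumed single-unit baseline."""
    return base_digits + 3 * math.log2(parallelism)

print(factorable_digits(1))  # -> 120.0
print(factorable_digits(8))  # -> 129.0  (three doublings, +9 digits)
```

The logarithmic form makes the abstract's point concrete: even a 1000-fold increase in hardware adds only about 30 digits.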
Routers with a Single Stage of Buffering
, 2002
Abstract

Cited by 27 (2 self)
Most high performance routers today use combined input and output queueing (CIOQ). The CIOQ router is also frequently used as an abstract model for routers: at one extreme is input queueing, at the other extreme is output queueing, and in between there is a continuum of performance as the speedup is increased from 1 to N (where N is the number of line cards). The model includes architectures in which a switch fabric is sandwiched between two stages of buffering. There is a rich and growing theory for CIOQ routers, including algorithms, throughput results and conditions under which delays can be guaranteed. But there is a broad class of architectures that are not captured by the CIOQ model, including routers with centralized shared memory, and load-balanced routers. In this paper we propose an abstract model called Single-Buffered (SB) routers that includes these architectures. We describe a method called Constraint Sets to analyze a number of SB router architectures. The model helped identify previously unstudied architectures, in particular the Distributed Shared Memory router. Although commercially deployed, its performance is not widely known. We find conditions under which it can emulate an ideal shared memory router, and believe it to be a promising architecture. Questions remain about its complexity, but we find that the memory bandwidth, and potentially the power consumption, of the router are lower than for a CIOQ router.
Recent progress and prospects for integer factorisation algorithms
 In Proc. of COCOON 2000
, 2000
Abstract

Cited by 20 (1 self)
The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In recent years the limits of the best integer factorisation algorithms have been extended greatly, due in part to Moore’s law and in part to algorithmic improvements. It is now routine to factor 100-decimal-digit numbers, and feasible to factor numbers of 155 decimal digits (512 bits). We outline several integer factorisation algorithms, consider their suitability for implementation on parallel machines, and give examples of their current capabilities. In particular, we consider the problem of parallel solution of the large, sparse linear systems which arise with the MPQS and NFS methods.
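The methods this abstract surveys (ECM, MPQS, NFS) are far too involved to sketch here, but the basic factoring task they solve can be illustrated with Pollard's rho method — a much simpler algorithm, not among those surveyed, workable only for small composites:

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho method: far weaker than the ECM/MPQS/NFS algorithms
    surveyed in the paper, but enough to show one factor being extracted
    from a small composite n.  Returns a non-trivial factor of n."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # retry with new parameters on failure
            return d

n = 10403                                # 101 * 103
f = pollard_rho(n)
print(f, n // f)
```

Rho runs in roughly O(n^(1/4)) time, which is why 100-digit factorisations require the subexponential sieve-based methods the paper actually discusses.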
The Slow Evolution of Electronic Publishing
 in Electronic Publishing: New Models and Opportunities
, 1997
Abstract

Cited by 17 (11 self)
How will scholarly publishing evolve? The history of other technological innovations suggests the shift to electronic publications will be rapid, but fundamental changes in the nature of scholarly communications will be much slower.
Interferometric synthetic aperture microscopy: physics-based image reconstruction from optical coherence tomography data
 In International Conference on Image Processing
, 2007
Growth of the internet
 TECH. REP. 9908, DIMACS
, 2001
Abstract

Cited by 10 (0 self)
The Internet is the main cause of the recent explosion of activity in optical fiber telecommunications. The high growth rates observed on the Internet, and the popular perception that growth rates were even higher, led to an upsurge in research, development, and investment in telecommunications. The telecom crash of 2000 occurred when investors realized that transmission capacity in place and under construction greatly exceeded actual traffic demand. This chapter discusses the growth of the Internet and compares it with that of other communication services. Internet traffic is growing, approximately doubling each year. There are reasonable arguments that it will continue to grow at this rate for the rest of this decade. If this happens, then in a few years, we may have a rough balance between supply and ...
Internet TV: Implications for the long distance network
 Erlbaum Associates, 2003
, 2001
Abstract

Cited by 8 (4 self)
The migration of traditional TV to the Internet is likely to have little impact on the long distance network. The main reason is that consumers still take on the order of a decade to embrace new technologies (such as cell phones) or even improved variants of old media (as with CDs replacing vinyl records). Hence we should not expect traditional broadcast TV to change substantially or to migrate to new modes of distribution any time soon. Yet within much less than a decade, progress in photonics will produce an increase in the capacity of Internet backbones far beyond that required to carry all the broadcast TV signals. There will continue to be bottlenecks in the "last mile" that will limit the migration of TV to the Internet (and this will reinforce the natural inertia of the consumer market). However, the backbones are unlikely to be an impediment.
THE TWENTY-FOURTH FERMAT NUMBER IS COMPOSITE
, 2002
Abstract

Cited by 8 (2 self)
We have shown by machine proof that F24 = 2^(2^24) + 1 is composite. The rigorous Pépin primality test was performed using independently developed programs running simultaneously on two different, physically separated processors. Each program employed a floating-point, FFT-based discrete weighted transform (DWT) to effect multiplication modulo F24. The final, respective Pépin residues obtained by these two machines were in complete agreement. Using intermediate residues stored periodically during one of the floating-point runs, a separate algorithm for pure-integer negacyclic convolution verified the result in a “wavefront” paradigm, by running simultaneously on numerous additional machines, to effect piecewise verification of a saturating set of deterministic links for the Pépin chain. We deposited a final Pépin residue for possible use by future investigators in the event that a proper factor of F24 should be discovered; herein we report the more compact, traditional Selfridge-Hurwitz residues. For the sake of completeness, we also generated a Pépin residue for F23, and via the Suyama test determined that the known cofactor of this number is composite.
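The Pépin test used above is a simple criterion: for n >= 1, the Fermat number F_n = 2^(2^n) + 1 is prime if and only if 3^((F_n - 1)/2) ≡ -1 (mod F_n). A minimal sketch using Python's built-in modular exponentiation — note that verifying F24 itself required the FFT-based multiplication described in the abstract, far beyond this naive approach, so this sketch is practical only for small n:

```python
def pepin_is_prime(n):
    """Pépin's test: for n >= 1, F_n = 2**(2**n) + 1 is prime
    iff 3**((F_n - 1) // 2) == F_n - 1  (mod F_n)."""
    F = 2 ** (2 ** n) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

# The first few Fermat numbers: F_1..F_4 are prime, F_5 and F_6 composite.
for n in range(1, 7):
    print(n, pepin_is_prime(n))
```

Because the test is an if-and-only-if criterion, agreement of independently computed Pépin residues, as described in the abstract, constitutes a rigorous compositeness proof even though no factor of F24 is known.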