Results 11–20 of 86
Extremal properties of three-dimensional sensor networks with applications
 IEEE Transactions on Mobile Computing
Abstract

Cited by 21 (1 self)
In this paper, we analyze various critical transmitting/sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions: for a given property of the network, there is a critical threshold, corresponding to the minimum amount of communication effort or power expenditure by individual nodes, above (resp. below) which the property exists with high (resp. low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity/coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors, and their transmitting/sensing ranges. More specifically, we consider the following problems: assuming that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V, how large must the sensing range rSense be to ensure a given degree of coverage of the region to monitor? For a given transmission range rTrans, what is the minimum (resp. maximum) degree of the network? What is then the typical hop-diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks. Keywords: sensor networks, ad hoc networks; coverage, connectivity; hop-diameter; minimum/maximum degrees; transmitting/sensing ranges; analytical methods; energy consumption; topology control.
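The coverage question in the abstract lends itself to a quick empirical check. The sketch below is a minimal Monte Carlo estimate, not the paper's analytical method: it drops n sensing balls of radius r uniformly into a unit cube and samples random target points; all names and parameters are illustrative.

```python
import math
import random

def coverage_fraction(n_sensors, r, trials=2000, seed=1):
    """Monte Carlo estimate of the fraction of a unit cube covered by
    n_sensors balls of radius r with uniformly random centers."""
    rng = random.Random(seed)
    sensors = [(rng.random(), rng.random(), rng.random())
               for _ in range(n_sensors)]
    covered = 0
    for _ in range(trials):
        p = (rng.random(), rng.random(), rng.random())
        # a point is covered if it lies within r of at least one sensor
        if any(math.dist(p, s) <= r for s in sensors):
            covered += 1
    return covered / trials
```

Plotting this fraction while varying r makes the phase-transition behavior visible: coverage stays near 0 until r approaches the critical range, then climbs steeply toward 1.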
A Sequence of Series for The Lambert Function
, 1997
Abstract

Cited by 20 (4 self)
We give a uniform treatment of several series expansions for the Lambert W function, leading to an infinite family of new series. We also discuss standardization, complex branches, a family of arbitrary-order iterative methods for computation of W, and give a theorem showing how to correctly solve another simple and frequently occurring nonlinear equation in terms of W and the unwinding number. Investigations of the properties of the Lambert W function are good examples of nontrivial interactions between computer algebra, mathematics, and applications. To begin with, the standardization of the name W by computer algebra (see section 1.2 of the paper) has had several effects. First, this standardization has exposed a great variety of applications; second, it has uncovered a significant history, hitherto unnoticed because the lack of a standard name meant that most researchers were unaware of previous work; and, third, it has now stimulated current interest in this remarkable ...
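As one concrete instance of such an iterative method, here is a small sketch computing the principal branch W0 (solving w·e^w = x) for nonnegative x with Halley's third-order iteration. This is a standard scheme, not necessarily the exact family of iterations derived in the paper.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 of the Lambert W function (w * exp(w) = x)
    for x >= 0, computed with Halley's iteration."""
    if x < 0:
        raise ValueError("this sketch handles x >= 0 only")
    w = math.log1p(x)  # crude starting guess, adequate for x >= 0
    for _ in range(50):
        ew = math.exp(w)
        f = w * ew - x  # residual of the defining equation
        # Halley step (third-order convergence)
        step = f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
        w -= step
        if abs(step) < tol * (1 + abs(w)):
            return w
    return w
```

For example, `lambert_w(math.e)` recovers 1, since 1·e^1 = e.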
Data Morphing: An Adaptive, Cache-Conscious Storage Technique
 In Proc. VLDB, 2003
Abstract

Cited by 20 (1 self)
The number of processor cache misses has a critical impact on the performance of DBMSs running on servers with large main-memory configurations.
On the Analysis of Linear Probing Hashing
, 1998
Abstract

Cited by 19 (8 self)
This paper presents moment analyses and characterizations of limit distributions for the construction cost of hash tables under the linear probing strategy. Two models are considered, that of full tables and that of sparse tables with a fixed filling ratio strictly smaller than one. For full tables, the construction cost has expectation O(n^{3/2}), the standard deviation is of the same order, and a limit law of the Airy type holds. (The Airy distribution is a semiclassical distribution that is defined in terms of the usual Airy functions or equivalently in terms of Bessel functions of indices −1/3, 2/3.) For sparse tables, the construction cost has expectation O(n), standard deviation O(√n), and a limit law of the Gaussian type. Combinatorial relations with other problems leading to Airy phenomena (like graph connectivity, tree inversions, tree path length, or area under excursions) are also briefly discussed.
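The gap between the sparse and full regimes is easy to observe empirically. Below is a minimal simulation, illustrative only and not the paper's analysis, that counts the construction cost of a linear-probing table as the total number of probes beyond the first over all insertions:

```python
import random

def construction_cost(n, m, seed=0):
    """Insert n uniformly hashed keys into a size-m linear-probing
    table; return the total number of extra probes (displacements)."""
    rng = random.Random(seed)
    table = [None] * m
    cost = 0
    for _ in range(n):
        h = rng.randrange(m)       # hash value of the next key
        while table[h] is not None:
            cost += 1              # one probe past an occupied slot
            h = (h + 1) % m        # linear probing: try the next slot
        table[h] = True
    return cost
```

A half-full table (n = m/2) yields a cost linear in n, while filling the table completely (n = m) produces a dramatically larger cost, consistent with the O(n^{3/2}) expectation for full tables.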
HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm
 In AofA ’07: Proceedings of the 2007 International Conference on Analysis of Algorithms
, 2007
Abstract

Cited by 17 (0 self)
This extended abstract describes and analyses a near-optimal probabilistic algorithm, HYPERLOGLOG, dedicated to estimating the number of distinct elements (the cardinality) of very large data ensembles. Using an auxiliary memory of m units (typically, “short bytes”), HYPERLOGLOG performs a single pass over the data and produces an estimate of the cardinality such that the relative accuracy (the standard error) is typically about 1.04/√m. This improves on the best previously known cardinality estimator, LOGLOG, whose accuracy can be matched by consuming only 64% of the original memory. For instance, the new algorithm makes it possible to estimate cardinalities well beyond 10^9 with a typical accuracy of 2% while using a memory of only 1.5 kilobytes. The algorithm parallelizes optimally and adapts to the sliding window model.
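The register update and estimation steps can be sketched in a few lines. This is an illustrative toy version of the standard formulation, including only the small-range correction; the hash choice and register layout are conventional, not taken from the paper.

```python
import hashlib
import math

def hll_estimate(items, b=8):
    """Toy HyperLogLog: m = 2**b registers, each keeping the maximum
    leading-zero rank observed in its substream of hashed values."""
    m = 1 << b
    M = [0] * m
    for item in items:
        h = int.from_bytes(
            hashlib.sha1(str(item).encode()).digest()[:8], "big")
        j = h & (m - 1)                       # low b bits pick a register
        w = h >> b                            # remaining 64 - b bits
        rank = (64 - b) - w.bit_length() + 1  # position of leftmost 1-bit
        M[j] = max(M[j], rank)
    alpha = 0.7213 / (1 + 1.079 / m)          # bias correction, m >= 128
    E = alpha * m * m / sum(2.0 ** -r for r in M)
    V = M.count(0)
    if E <= 2.5 * m and V > 0:                # small-range correction:
        E = m * math.log(m / V)               # fall back to linear counting
    return E
```

With b = 8 (m = 256 registers, i.e. a few hundred bytes of state), the standard error 1.04/√m works out to roughly 6.5%, which matches what this sketch delivers in practice.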
The Average Case Analysis of Algorithms: Complex Asymptotics and Generating Functions
, 1991
Abstract

Cited by 16 (0 self)
This report is part of a projected series whose aim is to present in a synthetic way the major methods and models in the average-case analysis of algorithms. The following items are to be treated in the series. First, there will be a collection of reports on Methods:
Planar Maps and Airy Phenomena
, 2000
Abstract

Cited by 13 (4 self)
A considerable number of asymptotic distributions arising in random combinatorics and analysis of algorithms are of the exponential-quadratic type (e^{−x^2}), that is, Gaussian. We exhibit here a new class of “universal” phenomena that are of the exponential-cubic type (e^{ix^3}), corresponding to nonstandard distributions that involve the Airy function. Such Airy phenomena are expected to be found in a number of applications, when confluences of critical points and singularities occur. About a dozen classes of planar maps are treated in this way, leading to the occurrence of a common Airy distribution that describes the sizes of cores and of largest (multi)connected components. Consequences include the analysis and fine optimization of random generation algorithms for multiply connected planar graphs.
Assessing the Distinguishability of Models and the Informativeness of Data
Abstract

Cited by 13 (2 self)
A difficulty in the development and testing of psychological models is that they are typically evaluated solely on their ability to fit experimental data, with little consideration given to their ability to fit other possible data patterns. By examining how well model A fits data generated by model B, and vice versa (a technique that we call landscaping), much safer inferences can be made about the meaning of a model's fit to data. We demonstrate the landscaping technique using four models of retention and 77 historical data sets, and show how the method can be used to (1) evaluate the distinguishability of models, (2) evaluate the informativeness of data in distinguishing between models, and (3) suggest new ways to distinguish between models. The generality of the method is demonstrated in two other research areas (information integration and categorization), and its relationship to the important notion of model complexity is discussed.
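The landscaping idea, fitting each model to data generated by the other, can be sketched with two toy retention curves. Everything below (the model forms, grids, and noise level) is a hypothetical illustration, not the four models or 77 data sets of the paper.

```python
import math
import random

def sse(model, params, data):
    """Sum of squared errors of a model against (t, y) pairs."""
    return sum((model(t, *params) - y) ** 2 for t, y in data)

def best_fit(model, data, grid):
    """Best achievable SSE over a coarse 2-parameter grid search."""
    return min((sse(model, (a, b), data), (a, b))
               for a in grid for b in grid)[0]

# two hypothetical retention models (forms assumed for illustration)
expo = lambda t, a, b: a * math.exp(-b * t)
power = lambda t, a, b: a * (t + 1) ** (-b)

def landscape(gen_model, gen_params, n_sets=20, noise=0.02, seed=0):
    """Generate noisy data sets from gen_model, fit both candidate
    models to each, and return the fraction won by the exponential."""
    rng = random.Random(seed)
    times = [0, 1, 2, 4, 8, 16]
    grid = [i / 20 for i in range(1, 21)]
    wins_expo = 0
    for _ in range(n_sets):
        data = [(t, gen_model(t, *gen_params) + rng.gauss(0, noise))
                for t in times]
        if best_fit(expo, data, grid) < best_fit(power, data, grid):
            wins_expo += 1
    return wins_expo / n_sets
```

When the generating model reliably wins its own landscape and loses the other's, the two models are distinguishable on that design; near 50/50 outcomes signal that the data are uninformative.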
Model Selection for Generalized Linear Models via GLIB, with Application to Epidemiology
, 1993
Abstract

Cited by 11 (5 self)
Epidemiological studies for assessing risk factors often use logistic regression, log-linear models, or other generalized linear models. They involve many decisions, including the choice and coding of risk factors and control variables. It is common practice to select independent variables using a series of significance tests and to choose the way variables are coded somewhat arbitrarily. The overall properties of such a procedure are not well understood, and conditioning on a single model ignores model uncertainty, leading to underestimation of uncertainty about quantities of interest (QUOIs). We describe a Bayesian modeling strategy that formalizes the model selection process and propagates model uncertainty through to inference about QUOIs. Each possible combination of modeling decisions defines a different model, and the models are compared using Bayes factors. Inference about a QUOI is based on an average of its posterior distributions under the individual models, weighted by their ...
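The averaging step can be illustrated with assumed numbers. In this sketch the model weights come from hypothetical log marginal likelihoods, which is equivalent to combining Bayes factors with a prior over models; the within-model/between-model variance decomposition is the standard one for mixture posteriors, and all figures are invented for illustration.

```python
import math

def model_average(log_marglik, means, variances, prior=None):
    """Posterior-weighted average of a quantity of interest (QUOI)
    across models; weights ∝ prior * marginal likelihood."""
    k = len(log_marglik)
    prior = prior or [1.0 / k] * k
    mx = max(log_marglik)  # stabilize exponentials before normalizing
    w = [p * math.exp(l - mx) for p, l in zip(prior, log_marglik)]
    s = sum(w)
    w = [x / s for x in w]
    mean = sum(wi * mi for wi, mi in zip(w, means))
    # total variance = within-model variance + between-model spread
    var = sum(wi * (vi + (mi - mean) ** 2)
              for wi, mi, vi in zip(w, means, variances))
    return mean, var, w

# hypothetical example: two candidate models for a relative risk
mean, var, w = model_average([-100.0, -101.5], [1.8, 2.4], [0.04, 0.09])
```

Note that the averaged variance exceeds either model's own posterior variance here: the disagreement between models contributes extra uncertainty, which is exactly what conditioning on a single model would hide.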
Learning and Verifying Graphs using Queries with a Focus on Edge Counting
Abstract

Cited by 11 (5 self)
Abstract. We consider the problem of learning and verifying hidden graphs and their properties given query access to the graphs. We analyze various queries (edge detection, edge counting, shortest path), but we focus mainly on edge counting queries. We give an algorithm for learning graph partitions using O(n log n) edge counting queries. We introduce a problem that has not been considered before: verifying graphs with edge counting queries, and we give a randomized algorithm with error ε for graph verification using O(log(1/ε)) edge counting queries. We examine the current state of the art and add some original results for edge detection and shortest path queries to give a more complete picture of the relative power of these queries to learn various graph classes. Finally, we relate our work to Freivalds’ “fingerprinting technique”, a probabilistic method for verifying that two matrices are equal by multiplying them by random vectors.
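The fingerprinting idea mentioned at the end is compact enough to state in full. Below is a minimal sketch of the standard Freivalds check (not the paper's graph-verification algorithm): instead of computing A·B in O(n³) time, it tests A·(B·r) = C·r for random 0/1 vectors r, each round costing O(n²).

```python
import random

def freivalds(A, B, C, k=20, seed=0):
    """Probabilistically verify A @ B == C for square matrices.
    A wrong C survives each round with probability <= 1/2, so the
    overall error probability is at most 2**-k."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(k):
        r = [rng.randrange(2) for _ in range(n)]          # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught a discrepancy: C is definitely wrong
    return True  # all fingerprints matched: C is correct w.h.p.
```

The graph-verification results in the paper exploit the same principle: comparing cheap random fingerprints of two objects rather than the objects themselves.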