Results 1-10 of 780
Stochastic Geometry and Random Graphs for the Analysis and Design of Wireless Networks
Abstract

Cited by 231 (43 self)
Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the network’s geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs – including point process theory, percolation theory, and probabilistic combinatorics – have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.
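The Poisson point process model this abstract describes can be illustrated with a short Monte Carlo sketch: scatter interferers on a disk according to a homogeneous Poisson process and estimate the mean interference at the origin under power-law path loss. All parameter values (intensity, radius, path-loss exponent, exclusion radius) are illustrative, not taken from the paper.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's multiplication method for a Poisson(lam) draw.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def mean_interference(intensity=0.1, radius=10.0, alpha=4.0,
                      r_min=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of mean interference at the origin from a
    homogeneous Poisson point process on a disk, with path loss r^-alpha.
    Interferers closer than r_min are excluded to avoid the singularity."""
    rng = random.Random(seed)
    area = math.pi * radius ** 2
    total = 0.0
    for _ in range(trials):
        n = sample_poisson(rng, intensity * area)
        for _ in range(n):
            r = radius * math.sqrt(rng.random())  # uniform point on the disk
            if r >= r_min:
                total += r ** -alpha
    return total / trials
```

For alpha = 4 the estimate can be checked against the closed form obtained from Campbell's theorem, E[I] = pi * intensity * (r_min^-2 - radius^-2), which is about 0.311 for the defaults above.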
Connectivity of wireless multihop networks in a shadow fading environment
, 2003
Abstract

Cited by 143 (6 self)
Authors' preprint of an article accepted for ACM/Kluwer Wireless Networks, special issue on selected papers from ACM MSWiM 2003, to be published 2005. Abstract. This article analyzes the connectivity of multihop radio networks in a lognormal shadow fading environment. Assuming the nodes have equal transmission capabilities and are randomly distributed according to a homogeneous Poisson process, we give a tight lower bound for the minimum node density that is necessary to obtain an almost surely connected subnetwork on a bounded area of given size. We derive an explicit expression for this bound, compute it in a variety of scenarios, and verify its tightness by simulation. The numerical results can be used for the practical design and simulation of wireless sensor and ad hoc networks. In addition, they give insight into how fading affects the topology of multihop networks. It is explained why a high fading variance helps the network to become connected.
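The setting of this abstract can be sketched in a few lines: Poisson-distributed nodes on a square, links accepted when log-distance path loss plus a lognormal (Gaussian-in-dB) shadowing term stays below a threshold, and connectivity checked by graph search. This is an illustrative model with made-up constants, not the paper's exact bound or parameterization.

```python
import math
import random

def connected_fraction(density=2.0, side=5.0, alpha=3.0, sigma_db=6.0,
                       loss_threshold_db=18.0, trials=40, seed=2):
    """Estimate the probability that a random network is fully connected.
    Nodes form a Poisson process of the given density on a side x side
    square; a link exists when 10*alpha*log10(d) plus a shadowing term
    N(0, sigma_db^2) stays below loss_threshold_db."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Poisson-distributed node count (Knuth's method), uniform placement
        L, p, n = math.exp(-density * side * side), 1.0, 0
        while p > L:
            p *= rng.random()
            n += 1
        n -= 1
        pts = [(rng.random() * side, rng.random() * side) for _ in range(n)]
        if n <= 1:
            ok += n == 1  # a single node is trivially connected
            continue
        # adjacency under distance loss plus symmetric per-pair shadowing
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = math.dist(pts[i], pts[j])
                loss = 10 * alpha * math.log10(max(d, 1e-9)) + rng.gauss(0, sigma_db)
                if loss < loss_threshold_db:
                    adj[i].append(j)
                    adj[j].append(i)
        # depth-first search: one component means connected
        seen, stack = {0}, [0]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        ok += len(seen) == n
    return ok / trials
```

Sweeping the density or the threshold reproduces the qualitative picture the abstract describes: below a critical node density the network fragments, above it connectivity becomes almost sure.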
The Gaussian mixture probability hypothesis density filter
 IEEE Trans. SP
, 2006
Abstract

Cited by 141 (14 self)
Abstract — A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed form solution to the PHD recursion. This work shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed form recursions for propagating the means, covariances and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters. Index Terms — Multitarget tracking, optimal filtering, point
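The closed-form recursion this abstract describes can be sketched for a scalar state: each PHD component is a weighted Gaussian, prediction pushes components through the linear dynamics, and the update splits each weight into a missed-detection share and per-measurement Kalman-updated shares. This is a minimal illustration of the structure, not the paper's full algorithm (no pruning/merging, and all numeric values are illustrative).

```python
import math

def _npdf(z, m, var):
    # scalar Gaussian density N(z; m, var)
    return math.exp(-0.5 * (z - m) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmphd_step(comps, measurements, f=1.0, q=0.05, h=1.0, r=0.2,
               p_s=0.99, p_d=0.9, clutter=0.01, births=()):
    """One predict/update cycle of a scalar Gaussian-mixture PHD filter.
    comps: list of (weight, mean, variance) triples."""
    # prediction: survivors pushed through the dynamics, plus births
    pred = [(p_s * w, f * m, f * f * p + q) for (w, m, p) in comps]
    pred += list(births)
    # update: missed-detection terms keep (1 - p_d) of each weight
    updated = [((1 - p_d) * w, m, p) for (w, m, p) in pred]
    for z in measurements:
        terms = []
        for (w, m, p) in pred:
            s = h * h * p + r            # innovation variance
            k = p * h / s                # Kalman gain
            terms.append((p_d * w * _npdf(z, h * m, s),
                          m + k * (z - h * m),
                          (1 - k * h) * p))
        denom = clutter + sum(t[0] for t in terms)
        updated += [(w / denom, m, p) for (w, m, p) in terms]
    return updated
```

The sum of the weights estimates the number of targets; a practical implementation adds the component-management step the abstract mentions, pruning low-weight and merging near-duplicate Gaussians.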
The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis
 NEURAL COMPUTATION
, 2001
Abstract

Cited by 126 (22 self)
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model’s validity prior to using it to make inferences about a particular neural system. Assessing goodness-of-fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as per-stimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the sup
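The test the abstract describes is short to sketch: map each spike time through the integrated intensity, so that under a correct model the rescaled interspike intervals are Exp(1); transforming them to uniforms then allows a Kolmogorov-Smirnov check. The sinusoidal rate below is an illustrative choice, not one of the paper's examples.

```python
import math
import random

def rescaled_ks(spikes, Lambda):
    """Time-rescaling goodness-of-fit: map spike times through the
    integrated intensity Lambda; under a correct model the increments
    are Exp(1), so u = 1 - exp(-increment) should be U(0,1).
    Returns a Kolmogorov-Smirnov distance from the uniform CDF."""
    taus = [Lambda(t) for t in spikes]
    u = sorted(1 - math.exp(-(b - a)) for a, b in zip(taus, taus[1:]))
    n = len(u)
    return max(abs(x - (i + 0.5) / n) for i, x in enumerate(u))

# Simulate an inhomogeneous Poisson process by thinning
# (illustrative rate with a known closed-form integral).
rate = lambda t: 5 + 4 * math.sin(2 * math.pi * t)
Lam = lambda t: 5 * t + (2 / math.pi) * (1 - math.cos(2 * math.pi * t))
rng = random.Random(3)
t, t_max, lam_max, spikes = 0.0, 200.0, 9.0, []
while True:
    t += rng.expovariate(lam_max)          # candidate at the maximal rate
    if t > t_max:
        break
    if rng.random() < rate(t) / lam_max:   # keep with prob rate/lam_max
        spikes.append(t)
ks = rescaled_ks(spikes, Lam)
```

Because the simulated spikes really do follow the assumed intensity, the KS distance stays within the usual confidence band; feeding a wrong `Lambda` (e.g. a constant-rate integral) inflates it, which is exactly how the test flags model misfit.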
Sequential Monte Carlo methods for multitarget filtering with random finite sets
, 2005
Abstract

Cited by 114 (15 self)
Abstract — Random finite sets are natural representations of multitarget states and observations that allow multisensor multitarget filtering to fit in the unifying random set framework for Data Fusion. Although the foundation has been established in the form of Finite Set Statistics (FISST), its relationship to conventional probability is not clear. Furthermore, optimal Bayesian multitarget filtering is not yet practical due to the inherent computational hurdle. Even the Probability Hypothesis Density (PHD) filter, which propagates only the first moment (or PHD) instead of the full multitarget posterior, still involves multiple integrals with no closed forms in general. This article establishes the relationship between FISST and conventional probability that leads to the development of a sequential Monte Carlo (SMC) multitarget filter. In addition, an SMC implementation of the PHD filter is proposed and demonstrated on a number of simulated scenarios. Both of the proposed filters are suitable for problems involving nonlinear, non-Gaussian dynamics. Convergence results for these filters are also established.
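The particle (SMC) representation of the PHD that this abstract proposes replaces the multiple integrals with weighted sums: each particle keeps a missed-detection share of its weight plus a normalized share from every measurement. The scalar sketch below shows only the weight-update step with a Gaussian likelihood; prediction (sampling the dynamics) and resampling are omitted, and all values are illustrative.

```python
import math
import random

def smc_phd_update(particles, measurements, p_d=0.9, r=0.2, clutter=0.01):
    """PHD measurement update for scalar particles (x, w): each weight
    becomes [(1 - p_d) + sum_z p_d g(z|x) / (clutter + C_z)] * w,
    where C_z normalizes over all particles."""
    def g(z, x):
        return math.exp(-0.5 * (z - x) ** 2 / r) / math.sqrt(2 * math.pi * r)
    denoms = [clutter + sum(p_d * g(z, x) * w for x, w in particles)
              for z in measurements]
    out = []
    for x, w in particles:
        factor = (1 - p_d) + sum(p_d * g(z, x) / d
                                 for z, d in zip(measurements, denoms))
        out.append((x, w * factor))
    return out

# one target near 0 represented by 500 particles of total weight 1
rng = random.Random(4)
particles = [(rng.gauss(0.0, 1.0), 1.0 / 500) for _ in range(500)]
posterior = smc_phd_update(particles, [0.0])
n_est = sum(w for _, w in posterior)   # expected number of targets
```

Because the PHD integrates to the expected target count, summing the updated weights gives the cardinality estimate directly, with no data association step.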
Simulation of networks of spiking neurons: A review of tools and strategies
 Journal of Computational Neuroscience
, 2007
Abstract

Cited by 106 (29 self)
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given
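As a sketch of the simplest strategy the abstract names, here is a clock-driven (fixed time step, forward-Euler) simulation of a small network of leaky integrate-and-fire neurons with current-based delta synapses. The constants are illustrative and not tuned to any benchmark in the review.

```python
import random

def simulate_lif(n=50, steps=2000, dt=0.1, tau=10.0, v_th=1.0,
                 v_reset=0.0, i_ext=0.12, w_syn=0.02, p_conn=0.1, seed=5):
    """Clock-driven LIF network: dv/dt = -v/tau + I, spike and reset at
    v_th, spikes deliver an instantaneous current-based kick w_syn to
    randomly chosen postsynaptic targets. Returns (time, neuron) pairs."""
    rng = random.Random(seed)
    conn = [[j for j in range(n) if j != i and rng.random() < p_conn]
            for i in range(n)]
    v = [rng.uniform(0, v_th) for _ in range(n)]  # random initial voltages
    spikes = []
    for step in range(steps):
        fired = [i for i in range(n) if v[i] >= v_th]
        for i in fired:
            v[i] = v_reset
            spikes.append((step * dt, i))
        kicks = [0.0] * n
        for i in fired:
            for j in conn[i]:
                kicks[j] += w_syn
        for i in range(n):
            v[i] += dt * (-v[i] / tau + i_ext) + kicks[i]
    return spikes
```

An event-driven simulator would instead jump from spike to spike using the closed-form membrane solution between events; the precision trade-off between the two strategies is exactly what the review benchmarks.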
Inverting Sampled Traffic
 In Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement
, 2003
Abstract

Cited by 104 (4 self)
Routers have the ability to output statistics about packets and flows of packets that traverse them. However, since the generation of detailed traffic statistics does not scale well with link speed, routers and measurement boxes increasingly implement sampling strategies at the packet level. In this paper we study both theoretically and practically what information about the original traffic can be inferred when sampling, or 'thinning', is performed at the packet level. While basic packet-level characteristics such as first-order statistics can be fairly directly recovered, other aspects require more attention. We focus mainly on the spectral density, a second-order statistic, and the distribution of the number of packets per flow, showing how both can be exactly recovered, in theory. We then show in detail why in practice this cannot be done using traditional packet-based sampling, even for high sampling rates. We introduce an alternative flow-based thinning, where practical inversion is possible even at arbitrarily low sampling rates. We also investigate the theory and practice of fitting the parameters of a Poisson cluster process, modelling the full packet traffic, from sampled data.
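The first-order recovery the abstract mentions is easy to demonstrate: under independent Bernoulli packet sampling with probability p, rescaling the sampled count by 1/p is an unbiased estimate of the original packet total, while per-flow size information degrades because most small flows are missed entirely. An illustrative sketch (the function and parameter names are ours, not the paper's):

```python
import random

def thin_and_invert(flow_sizes, p=0.01, seed=6):
    """Bernoulli packet sampling and first-order inversion: keep each
    packet independently with probability p, then rescale by 1/p to
    estimate the original packet total. Also reports how many flows
    were seen at all, showing why per-flow inversion is harder."""
    rng = random.Random(seed)
    sampled = [sum(rng.random() < p for _ in range(s)) for s in flow_sizes]
    est_total = sum(sampled) / p           # unbiased: E[count] = p * total
    seen_flows = sum(1 for s in sampled if s > 0)
    return est_total, seen_flows
```

Recovering the full packets-per-flow distribution requires inverting the binomial mixing this induces, which is the ill-conditioned step the paper shows fails in practice for packet-based sampling and motivates its flow-based thinning.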