Results 1–9 of 9
Peer-to-peer membership management for gossip-based protocols
 IEEE TRANSACTIONS ON COMPUTERS
, 2003
Cited by 167 (21 self)
Gossip-based protocols for group communication have attractive scalability and reliability properties. The probabilistic gossip schemes studied so far typically assume that each group member has full knowledge of the global membership and chooses gossip targets uniformly at random. The requirement of global knowledge impairs their applicability to very large-scale groups. In this paper, we present SCAMP (Scalable Membership Protocol), a novel peer-to-peer membership protocol which operates in a fully decentralized manner and provides each member with a partial view of the group membership. Our protocol is self-organizing in the sense that the size of partial views naturally converges to the value required to support a gossip algorithm reliably. This value is a function of the group size, but is achieved without any node knowing the group size. We propose additional mechanisms to achieve balanced view sizes even with highly unbalanced subscription patterns. We present the design, theoretical analysis, and a detailed evaluation of the basic protocol and its refinements. Simulation results show that the reliability guarantees provided by SCAMP are comparable to previous schemes based on global knowledge. The scale of the experiments attests to the scalability of the protocol.
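As a rough, self-contained sketch of the subscription mechanism the abstract describes (the keep-with-probability forwarding rule and the constant c follow the SCAMP idea, but every detail here is simplified and illustrative, not the paper's implementation):

```python
import random

random.seed(1)

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.view = set()  # partial view: peers this node gossips to

def forward(nodes, start_id, new_id, ttl=100):
    # Forward a subscription until some node keeps it: each node keeps it
    # with probability 1/(1 + |view|), otherwise passes it to a random
    # view member. This rule is what lets view sizes grow with the group.
    node = nodes[start_id]
    for _ in range(ttl):
        can_keep = new_id != node.id and new_id not in node.view
        if can_keep and random.random() < 1.0 / (1 + len(node.view)):
            node.view.add(new_id)
            return
        if not node.view:
            if can_keep:
                node.view.add(new_id)
            return
        node = nodes[random.choice(sorted(node.view))]

def subscribe(nodes, new_id, contact_id, c=1):
    # The contact forwards the new subscription to every member of its
    # view, plus c extra copies; no node ever needs the group size.
    contact = nodes[contact_id]
    fanout = sorted(contact.view) or [contact_id]
    extras = [random.choice(fanout) for _ in range(c)]
    nodes[new_id].view.add(contact_id)  # newcomer starts out knowing its contact
    contact.view.add(new_id)
    for target in fanout + extras:
        forward(nodes, target, new_id)

# Bootstrap two nodes, then let 198 more join through random contacts.
nodes = {0: Node(0), 1: Node(1)}
nodes[0].view.add(1)
nodes[1].view.add(0)
for i in range(2, 200):
    nodes[i] = Node(i)
    subscribe(nodes, i, random.randrange(i))

avg = sum(len(n.view) for n in nodes.values()) / len(nodes)
print(round(avg, 1))  # partial views stay small relative to the 200-node group
```

With the seed fixed, the run is deterministic; the point is only that average view size stays far below the group size while every subscription lands somewhere.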
On heterogeneous overlay construction and random node selection in unstructured p2p networks
 in Proc. of INFOCOM
, 2006
Cited by 32 (3 self)
Unstructured p2p and overlay network applications often require that a random graph be constructed, and that some form of random node selection take place over that graph. A key and difficult requirement of many such applications is heterogeneity: peers have different node degrees in the random graph based on their capacity. Using simulations, this paper compares a number of techniques—some novel and some variations on known approaches—for heterogeneous graph construction and random node selection on top of such graphs. Our focus is on practical criteria that can lead to a genuinely deployable toolkit that supports a wide range of applications. These criteria include simplicity of operation, support for node heterogeneity, quality of random selection, efficiency and scalability, load balance, and robustness. We show that all these criteria can more or less be met by all the approaches. Our novel approach, however, stands out as the best from a practical perspective because of its simplicity: it achieves the criteria while requiring each node to set only a single tuning parameter, its desired relative load.
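One standard way to get the capacity-proportional selection the abstract asks for is a random walk: on an undirected graph, a lazy walk's endpoint distribution converges to deg(v)/2|E|, so giving a high-capacity peer more links makes it selected proportionally more often. A toy illustration (the graph and parameters are made up, not taken from the paper):

```python
import random
from collections import Counter

random.seed(7)

# Toy undirected graph: "hub" has degree 4, every other node degree 2,
# so the hub should be selected about 4/12 = 1/3 of the time.
graph = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b"],
    "b": ["hub", "a"],
    "c": ["hub", "d"],
    "d": ["hub", "c"],
}

def lazy_walk_select(graph, start, steps=50):
    # Lazy walk: stay put with probability 1/2 (avoids periodicity
    # issues on bipartite-like graphs), else hop to a random neighbor.
    node = start
    for _ in range(steps):
        if random.random() < 0.5:
            node = random.choice(graph[node])
    return node

counts = Counter(lazy_walk_select(graph, "a") for _ in range(6000))
print(counts["hub"] / 6000)  # ≈ 1/3, the hub's share of total degree
```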
Nonparametric Statistical Inference for Ergodic Processes
Cited by 12 (12 self)
In this work a method for statistical analysis of time series is proposed, which is used to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process generating the data is stationary ergodic. Namely, three problems are considered: goodness-of-fit (or identity) testing, process classification, and the change point problem. For each of the problems a test is constructed that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of distributional distance. Index Terms—Nonparametric hypothesis testing, stationary ergodic processes, goodness-of-fit test, process classification, change point problem.
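A minimal sketch of the kind of estimator the abstract refers to (the paper's exact weighting and pattern families differ; this only shows the general shape of an empirical distributional distance built from pattern frequencies):

```python
import random
from collections import Counter

def empirical_distance(x, y, max_len=3):
    # Weighted sum, over pattern lengths k, of the gap between the
    # empirical k-gram frequencies of the two samples.
    total = 0.0
    for k in range(1, max_len + 1):
        fx = Counter(tuple(x[i:i + k]) for i in range(len(x) - k + 1))
        fy = Counter(tuple(y[i:i + k]) for i in range(len(y) - k + 1))
        nx, ny = sum(fx.values()), sum(fy.values())
        gap = sum(abs(fx[p] / nx - fy[p] / ny) for p in set(fx) | set(fy))
        total += 2.0 ** -k * gap
    return total

random.seed(3)
fair_a = [random.random() < 0.5 for _ in range(2000)]
fair_b = [random.random() < 0.5 for _ in range(2000)]
biased = [random.random() < 0.9 for _ in range(2000)]

# Two samples of the same process are closer than samples of different ones.
print(empirical_distance(fair_a, fair_b) < empirical_distance(fair_a, biased))  # True
```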
On hypotheses testing for ergodic processes
 In Proceedings of Information Theory Workshop
, 2008
Cited by 11 (11 self)
We propose a method for statistical analysis of time series that allows us to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process generating the data is stationary ergodic. Namely, we consider three problems: goodness-of-fit (or identity) testing, process classification, and the change point problem. For each of the problems we construct a test that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of distributional distance.
A comparison of structured and unstructured P2P approaches to heterogeneous random peer selection
 In Proc. Usenix Annual Technical Conference
, 2007
Cited by 8 (1 self)
Random peer selection is used by numerous P2P applications; examples include application-level multicast, unstructured file sharing, and network location mapping. In most of these applications, support for a heterogeneous capacity distribution among nodes is desirable: in other words, nodes with higher capacity should be selected proportionally more often. Random peer selection can be performed over both structured and unstructured graphs. This paper compares these two basic approaches using a candidate example from each approach. For unstructured heterogeneous random peer selection, we use Swaplinks, from our previous work. For the structured approach, we use the Bamboo DHT adapted to heterogeneous selection using our extensions to the item-balancing technique by Karger and Ruhl. Testing the two approaches over graphs of 1000 nodes and a range of network churn levels and heterogeneity distributions, we show that Swaplinks is the superior random selection approach: (i) Swaplinks enables more accurate random selection than does the structured approach in the presence of churn, and (ii) the structured approach is sensitive to a number of hard-to-set tuning knobs that affect performance, whereas Swaplinks is essentially free of such knobs.
Climbing Down from the Top: Single Name Dynamics in Credit Top Down Models. Working paper, Quantitative Research JP
, 2008
Cited by 5 (0 self)
In the top-down approach to multi-name credit modeling, calculation of single-name sensitivities appears possible, at least in principle, within the so-called random thinning (RT) procedure, which dissects the portfolio risk into individual contributions. We make an attempt to construct a practical RT framework that enables efficient calculation of single-name sensitivities in a top-down framework, and can be extended to valuation and risk management of bespoke tranches. Furthermore, we propose a dynamic extension of the RT method that enables modeling of both idiosyncratic and default-contingent individual spread dynamics within a Monte Carlo setting in a way that preserves the portfolio “top”-level dynamics. This results in a model that is not only calibrated to tranche and single-name spreads, but can also be tuned to approximately match given levels of spread volatilities and correlations of names in the portfolio.
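In its simplest static form, random thinning splits the portfolio-wide default intensity λ across names by weights p_i summing to one, so name i's expected number of defaults over horizon T is p_i · λ · T. A Monte Carlo sketch under that reading (all numbers are illustrative, and for brevity a name is allowed to default more than once):

```python
import random

random.seed(11)

PORTFOLIO_INTENSITY = 0.6                    # portfolio defaults per year (made up)
HORIZON = 5.0                                # years
names = ["A", "B", "C"]
thinning = {"A": 0.5, "B": 0.3, "C": 0.2}    # thinning weights p_i, summing to 1

def simulate_path():
    # Top-down layer: Poisson default arrivals for the whole portfolio;
    # thinning layer: each arrival is attributed to one name with prob p_i.
    t, per_name = 0.0, {n: 0 for n in names}
    while True:
        t += random.expovariate(PORTFOLIO_INTENSITY)
        if t > HORIZON:
            return per_name
        hit = random.choices(names, weights=[thinning[n] for n in names])[0]
        per_name[hit] += 1

paths = [simulate_path() for _ in range(4000)]
exp_a = sum(p["A"] for p in paths) / len(paths)
print(round(exp_a, 2))  # ≈ p_A * λ * T = 0.5 * 0.6 * 5 = 1.5
```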
On Overlay Construction and Random Node Selection in Heterogeneous Unstructured P2P Networks
Unstructured p2p and overlay network applications, including several that the authors wish to build, often require that a random graph be constructed, and that some form of random node selection take place over that graph. A key and difficult requirement of many such applications is heterogeneity: peers have different node degrees in the random graph based on their capacity. Using simulations, this paper compares a number of techniques—some novel and some variations on known approaches—for heterogeneous graph construction and random node selection on top of such graphs. Our focus is on practical criteria that can lead to a genuinely deployable toolkit that supports a wide range of applications. These criteria include simplicity of operation, support for node heterogeneity, quality (uniformity) of random selection, efficiency and scalability, load balance, and robustness. We show that all these criteria can more or less be met by all the approaches. Our novel approach, however, stands out as the best from a practical perspective because of its simplicity: it achieves the criteria while requiring each node to set only a single tuning parameter, its desired relative load.
Probabilistic Model Distortion Measure and Its Application to
In parameter estimation and filtering, model approximation is quite common in engineering research and development. These approximations distort the original relation between the parameter of interest and the observation, and cause performance deterioration. It is crucial to have a measure to appraise these approximations. In this paper, we analyze the structure of parameter inference and clarify its ingrained vagueness. Accordingly, we relate the model distortion to the difference between two probability density functions. We work out a distortion measure, and it turns out that the Kullback-Leibler (KL) divergence can serve this purpose.
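For discrete distributions the KL divergence the abstract arrives at is straightforward to compute; a small sketch with made-up models p (true) and q (approximation):

```python
import math

def kl_divergence(p, q):
    # D(p || q) = sum_i p_i * log(p_i / q_i): zero iff p == q,
    # nonnegative, and asymmetric in its two arguments.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

true_model = [0.5, 0.3, 0.2]
approx_model = [0.4, 0.4, 0.2]

print(round(kl_divergence(true_model, approx_model), 4))  # → 0.0253
```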
Skew Jensen-Bregman Voronoi Diagrams
Dedicated to the victims of the Japan Tohoku earthquake (March 2011). A Jensen-Bregman divergence is a distortion measure defined by a Jensen convexity gap induced by a strictly convex functional generator. Jensen-Bregman divergences unify the squared Euclidean and Mahalanobis distances with the celebrated information-theoretic Jensen-Shannon divergence, and can further be skewed to include Bregman divergences in limit cases. We study the geometric properties and combinatorial complexities of both the Voronoi diagrams and the centroidal Voronoi diagrams induced by such a class of divergences. We show that Jensen-Bregman divergences occur in two contexts: (1) when symmetrizing Bregman divergences, and (2) when computing the Bhattacharyya distances of statistical distributions. Since the Bhattacharyya distance of popular parametric exponential family distributions in statistics can be computed equivalently as Jensen-Bregman divergences, these skew Jensen-Bregman Voronoi diagrams allow one to define a novel family of statistical Voronoi diagrams. Keywords: Jensen's inequality, Bregman divergences, Jensen-Shannon divergence, Jensen-von Neumann divergence, Bhattacharyya distance.
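The Jensen convexity gap in the abstract is easy to state concretely: JB_F(p, q) = (F(p) + F(q))/2 - F((p + q)/2). A small sketch using F = negative Shannon entropy, for which the gap is exactly the Jensen-Shannon divergence mentioned above (the input distributions are made up):

```python
import math

def neg_entropy(p):
    # F(p) = sum_i p_i * log(p_i), a strictly convex generator.
    return sum(x * math.log(x) for x in p if x > 0)

def jensen_bregman(p, q, F=neg_entropy):
    # Jensen convexity gap of F: (F(p) + F(q))/2 - F((p + q)/2).
    # With F = negative Shannon entropy this is the Jensen-Shannon divergence.
    mid = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * (F(p) + F(q)) - F(mid)

p, q = [0.5, 0.5], [0.9, 0.1]
print(round(jensen_bregman(p, q), 4))  # → 0.1017 (symmetric, zero iff p == q)
```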