Results 1-10 of 353
Research Experience
"... With three other students, I am currently researching greedy routing in social networks. For decades, researchers have been creating theoretical models of social networks in an attempt to explain Stanley Milgram's fascinating 1967 result [1] that is now popularly known as the "six degrees of ..."
Abstract
passes are required to reach a random target, especially given that only local information is used in deciding how to route the message. We have been working with a particular social network model called rank-based friendship (RBF), introduced by our adviser, David Liben-Nowell [2], in which
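The rank-based friendship idea can be sketched numerically: a person links to each other person with probability proportional to the reciprocal of that person's distance rank. The one-dimensional "geography" and names below are hypothetical illustration, not data from the project:

```python
def rbf_link_probs(source, people):
    """Rank-based friendship sketch: `source` links to each other person v
    with probability proportional to 1 / rank(v), where rank(v) counts the
    people at least as close to `source` as v is.
    `people` maps names to 1-D positions (a toy stand-in for geography)."""
    others = [p for p in people if p != source]
    dist = lambda p: abs(people[p] - people[source])
    # rank(v) = number of candidates at distance <= dist(v)
    ranks = {v: sum(1 for w in others if dist(w) <= dist(v)) for v in others}
    weights = {v: 1.0 / ranks[v] for v in others}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

people = {"u": 0, "a": 1, "b": 2, "c": 4}
probs = rbf_link_probs("u", people)
```

Nearby people get most of the link probability, but distant people retain a heavy (harmonic) tail, which is what makes greedy routing work in such models.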
Handling Churn in a DHT
In Proceedings of the USENIX Annual Technical Conference, 2004
Abstract

Cited by 447 (24 self)
This paper addresses the problem of churn, the continuous process of node arrival and departure, in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates as high as or higher than those observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations.
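One of the three factors named above, message timeout calculation, can be illustrated with a standard Jacobson/Karels-style round-trip-time estimator. The constants below are the classic TCP values, not necessarily the ones Bamboo uses:

```python
class RttTimeout:
    """Smoothed-RTT timeout estimator (Jacobson/Karels style): keep an
    exponentially weighted mean (srtt) and deviation (rttvar) of observed
    round-trip times, and time out after srtt + k * rttvar."""
    def __init__(self, alpha=0.125, beta=0.25, k=4.0):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.srtt = None
        self.rttvar = None

    def observe(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2.0   # first sample
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt

    def timeout(self):
        return self.srtt + self.k * self.rttvar

est = RttTimeout()
for sample in [100, 120, 110, 400]:   # ms; the last sample is a churn-induced spike
    est.observe(sample)
```

The point of tracking deviation as well as the mean is that a single spike widens the timeout rather than triggering a premature (and, under churn, cascading) failure declaration.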
Information Diffusion through Blogspace
In WWW ’04, 2004
Abstract

Cited by 386 (5 self)
We study the dynamics of information propagation in environments of low-overhead personal publishing, using a large collection of weblogs over time as our example domain. We characterize and model this collection at two levels. First, we present a macroscopic characterization of topic propagation through our corpus, formalizing the notion of long-running "chatter" topics consisting recursively of "spike" topics generated by outside-world events or, more rarely, by resonances within the community. Second, we present a microscopic characterization of propagation from individual to individual, drawing on the theory of infectious diseases to model the flow. We propose, validate, and employ an algorithm to induce the underlying propagation network from a sequence of posts, and report on the results.
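The disease-style individual-to-individual propagation can be sketched as an independent-cascade process; the blog graph and infection probability below are hypothetical toy data, not the paper's model parameters:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """One run of an independent-cascade process: each newly 'infected'
    blog gets a single chance to infect each out-neighbor, with
    probability p per edge. `graph` maps node -> list of out-neighbors."""
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return infected

blogs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
reached = independent_cascade(blogs, {"a"}, p=1.0, rng=random.Random(0))
```

Inducing the edge set (and per-edge probabilities) from observed post timestamps, rather than assuming the graph as above, is the harder inference problem the abstract describes.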
Koorde: A simple degree-optimal distributed hash table, 2003
Abstract

Cited by 218 (1 self)
Koorde is a new distributed hash table (DHT) based on Chord [15] and de Bruijn graphs [2]. While inheriting the simplicity of Chord, Koorde meets various lower bounds, such as O(log n) hops per lookup request with only 2 neighbors per node (where n is the number of nodes in the DHT), and O(log n / log log n) hops per lookup request with O(log n) neighbors per node.
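The O(log n)-hop bound with two neighbors comes from de Bruijn routing: each hop shifts the current identifier left one bit and appends the next high-order bit of the target, so the target is reached in at most b hops on b-bit IDs. A minimal sketch over raw identifiers (ignoring Koorde's embedding of imaginary de Bruijn nodes onto a sparsely populated Chord ring):

```python
def de_bruijn_route(m, t, b):
    """Route from node m to node t on the degree-2 de Bruijn graph over
    b-bit identifiers. The two out-neighbors of a node x are
    2x mod 2^b and 2x+1 mod 2^b; each hop shifts in one bit of t."""
    path = [m]
    cur = m
    for i in range(b - 1, -1, -1):          # bits of t, most significant first
        bit = (t >> i) & 1
        cur = ((cur << 1) | bit) & ((1 << b) - 1)
        path.append(cur)
    return path

path = de_bruijn_route(m=0b1011, t=0b0110, b=4)
```

After b shifts the current ID consists entirely of t's bits, so the walk always terminates at t in exactly b = log n hops with constant out-degree.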
Improved proxy re-encryption schemes with applications to secure distributed storage
In NDSS, 2005
Abstract

Cited by 190 (16 self)
In 1998, Blaze, Bleumer, and Strauss proposed an application called atomic proxy re-encryption, in which a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. We predict that fast and secure re-encryption will become increasingly popular as a method for managing encrypted file systems. Although it is efficiently computable, BBS re-encryption has seen only limited adoption because of its considerable security risks. Following recent work of Ivan and Dodis, we present new re-encryption schemes that realize a stronger notion of security, and we demonstrate the usefulness of proxy re-encryption as a method of adding access control to the SFS read-only file system. Performance measurements of our experimental file system demonstrate that proxy re-encryption can work effectively in practice.
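The original BBS construction the abstract refers to can be sketched as ElGamal-style re-encryption: a ciphertext (m·g^k, g^{ak}) for Alice is converted to one for Bob by raising the second component to b/a. This is the 1998 scheme, not the paper's improved ones, and the group parameters below are illustratively tiny and wildly insecure:

```python
# Toy BBS-style proxy re-encryption over a tiny multiplicative group mod p.
p, q, g = 23, 22, 5            # g generates the group mod p; q = p - 1 is its order

def inv(x, n):                 # modular inverse (Python 3.8+)
    return pow(x, -1, n)

a, b = 3, 7                    # Alice's and Bob's secret keys (units mod q)
rk = (b * inv(a, q)) % q       # re-encryption key b/a handed to the proxy

def encrypt_for_alice(m, k):
    return (m * pow(g, k, p) % p, pow(g, a * k, p))   # (m * g^k, g^{ak})

def reencrypt(ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                        # g^{ak} -> g^{bk}

def decrypt_as_bob(ct):
    c1, c2 = ct
    s = pow(c2, inv(b, q), p)                          # recover g^k
    return c1 * inv(s, p) % p

m = 9
ct_bob = reencrypt(encrypt_for_alice(m, k=5))
```

Note the proxy sees only (c1, c2) and rk, never m; the security risk motivating the paper is that rk = b/a lets a colluding proxy and Bob recover Alice's secret a.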
An event-based framework for characterizing the evolution of interaction graphs, 2007
Abstract

Cited by 93 (3 self)
Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology, and the physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight into the behavior of entities and communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We show how semantic information can be incorporated to reason about community-behavior events. We also demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction, and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.
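A minimal version of the event-detection idea: compare the community sets of two consecutive non-overlapping snapshots and label simple events. The overlap rule below is an illustrative stand-in, not the paper's exact event definitions:

```python
def community_events(prev, curr, theta=0.5):
    """Label each community in the current snapshot by comparing it with
    the previous snapshot's communities (given as sets of members):
    'birth' if it overlaps none, 'merge' if it overlaps two or more,
    'continue' if it overlaps one with Jaccard similarity >= theta,
    'change' otherwise."""
    def jaccard(x, y):
        return len(x & y) / len(x | y)
    events = []
    for c in curr:
        parents = [p for p in prev if p & c]   # previous communities sharing members
        if not parents:
            events.append(("birth", c))
        elif len(parents) >= 2:
            events.append(("merge", c))
        elif jaccard(parents[0], c) >= theta:
            events.append(("continue", c))
        else:
            events.append(("change", c))
    return events

snap1 = [{"a", "b"}, {"c", "d"}]
snap2 = [{"a", "b", "c", "d"}, {"e", "f"}]
events = community_events(snap1, snap2)
```

Splits and deaths would be detected symmetrically, by scanning the previous snapshot's communities against the current one.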
Correlation Clustering with Partial Information, 2003
Abstract

Cited by 54 (1 self)
We consider the following general correlation-clustering problem [1]: given a graph with real edge weights (both positive and negative), partition the vertices into clusters to minimize the total absolute weight of cut positive edges and uncut negative edges. Thus, large positive weights (representing strong correlations between endpoints) encourage those endpoints to belong to a common cluster; large negative weights encourage the endpoints to belong to different clusters; and weights with small absolute value represent little information. In contrast to most clustering problems, correlation clustering specifies neither the desired number of clusters nor a distance threshold for clustering; both of these parameters are effectively chosen to be the best possible by the problem definition. Correlation clustering was introduced by Bansal, Blum, and Chawla [1], motivated by both document clustering and agnostic learning. They proved NP-hardness and gave constant-factor approximation algorithms for the special case in which the graph is complete (full information) and every edge has weight +1 or −1. We give an O(log n)-approximation algorithm for the general case based on a linear-programming rounding and the "region-growing" technique. We also prove that this linear program has a gap of Ω(log n), and therefore our approximation is tight under this approach. We also give an O(r³)-approximation algorithm for K_{r,r}-minor-free graphs. On the other hand, we show that the problem is APX-hard, and any o(log n)-approximation would require improving the best approximation algorithms known for minimum multicut.
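For the complete ±1 special case introduced by Bansal, Blum, and Chawla, a well-known pivot heuristic (not this paper's LP-rounding algorithm for general weights) gives a feel for the objective:

```python
import random

def cc_pivot(vertices, sign, rng):
    """Pivot heuristic for complete +/-1 correlation clustering: pick a
    pivot, put it in a cluster with all its +1 neighbors among the
    remaining vertices, remove them, and repeat.
    `sign[frozenset((u, v))]` is +1 or -1 for each unordered pair."""
    vertices = list(vertices)
    clusters = []
    while vertices:
        pivot = rng.choice(vertices)
        cluster = {v for v in vertices
                   if v == pivot or sign[frozenset((pivot, v))] == +1}
        clusters.append(cluster)
        vertices = [v for v in vertices if v not in cluster]
    return clusters

V = ["a", "b", "c", "d"]
sign = {frozenset(e): -1 for e in [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]}
sign.update({frozenset(("a", "b")): +1, frozenset(("c", "d")): +1})
clusters = cc_pivot(V, sign, random.Random(0))
```

On this toy input every pivot choice yields the perfect clustering {a, b}, {c, d}, with zero cut +1 edges and zero uncut −1 edges.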
Wayfinding in Social Networks
Abstract

Cited by 2 (0 self)
With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon (the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends) and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis both on provable properties of these social-network models and on the empirical validation of the models against real large-scale social-network data.
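Greedy wayfinding of the kind surveyed here can be sketched on a grid in the spirit of Kleinberg's small-world model: each node knows its four grid neighbors plus any long-range contacts, and forwards the message to whichever contact is closest to the target. The single long-range link below is a hypothetical example, not drawn from any particular distribution:

```python
def greedy_route(source, target, long_links):
    """Greedy routing on an (unbounded) grid: repeatedly forward to the
    contact minimizing Manhattan distance to the target. `long_links`
    maps a node to its extra long-range contacts."""
    def dist(u, v):
        return abs(u[0] - v[0]) + abs(u[1] - v[1])
    path = [source]
    cur = source
    while cur != target:
        x, y = cur
        contacts = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        contacts += long_links.get(cur, [])
        cur = min(contacts, key=lambda c: dist(c, target))  # always gets strictly closer
        path.append(cur)
    return path

# one long-range contact shortcuts most of the walk
path = greedy_route((0, 0), (8, 8), {(0, 0): [(7, 7)]})
```

Each hop strictly decreases the distance to the target (a grid neighbor toward the target always does), so the routing always terminates; the models surveyed ask how the distribution of long-range links governs the expected number of hops.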
Tetris is Hard, Even to Approximate
In COCOON, 2003
Abstract

Cited by 45 (2 self)
In the popular computer game of Tetris, the player is given a sequence of tetromino pieces and must pack them into a rectangular game board initially occupied by a given configuration of filled squares; any completely filled row of the game board is cleared, and all pieces above it drop by one row. We prove that in the offline version of Tetris, it is NP-complete to maximize the number of cleared rows, maximize the number of tetrises (quadruples of rows simultaneously filled and cleared), minimize the maximum height of an occupied square, or maximize the number of pieces placed before the game ends. We furthermore show the extreme inapproximability of the first and last of these objectives to within a factor of p, when given a sequence of p pieces, and the inapproximability of the third objective to within a factor of 2 − ε, for any ε > 0. Our results ...
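The row-clearing rule the objectives are defined over is easy to state in code; a minimal sketch with a hypothetical 3-wide board:

```python
def clear_rows(board):
    """Apply Tetris's clearing rule: remove every completely filled row
    and let everything above drop. `board` is a list of rows, top row
    first; True marks a filled square. Returns (new_board, rows_cleared)."""
    width = len(board[0])
    kept = [row for row in board if not all(row)]   # drop the full rows
    cleared = len(board) - len(kept)
    empty = [[False] * width for _ in range(cleared)]
    return empty + kept, cleared                     # refill from the top

board = [
    [False, True,  False],
    [True,  True,  True ],   # completely filled: this row is cleared
    [True,  False, True ],
]
new_board, cleared = clear_rows(board)
```

The hardness results say that deciding how to place a given piece sequence so as to optimize quantities defined through this rule (rows cleared, tetrises, maximum height, pieces placed) is NP-complete offline.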