Results 1–8 of 8
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
Commun. ACM, 2009
Abstract

Cited by 12 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.

1. MODELING REAL DATA

“My experiences also strongly confirmed my previous opinion that the best theory is inspired by practice and the best practice is inspired by theory.” [Donald E. Knuth: “Theory and Practice”, Theoretical Computer Science, 90 (1), 1–15, 1991.]

Algorithms are high-level descriptions of how computational tasks are performed. Engineers and experimentalists design and implement algorithms, and generally consider them a success if they work in practice. However, an algorithm that works well in one practical domain might perform poorly in another. Theorists also design and analyze algorithms, with the goal of providing provable guarantees about their performance. The traditional goal of theoretical computer science is to prove that an algorithm performs well
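The viewpoint in this abstract can be illustrated with a toy experiment (my own illustration, not from the paper): take an algorithm with a bad worst-case input, perturb that input with small random noise, and compare the expected cost to the worst case. The sketch below uses first-element-pivot quicksort, whose worst case is a sorted array; all names and parameters are hypothetical.

```python
import random

def quicksort_comparisons(a):
    """Comparison count of a naive quicksort that always pivots on a[0]."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    geq = [x for x in a[1:] if x >= pivot]
    return len(a) - 1 + quicksort_comparisons(less) + quicksort_comparisons(geq)

def perturb(a, sigma):
    """Model measurement noise: add independent Gaussian noise to each entry."""
    return [x + random.gauss(0, sigma) for x in a]

def smoothed_cost(a, sigma, trials=20):
    """Expected comparison count over random perturbations of input a."""
    return sum(quicksort_comparisons(perturb(a, sigma)) for _ in range(trials)) / trials

random.seed(0)
n = 200
adversarial = list(range(n))              # sorted input: the n(n-1)/2 worst case
worst = quicksort_comparisons(adversarial)
smooth = smoothed_cost(adversarial, sigma=5.0)
```

Even modest noise destroys the adversarial structure: `smooth` lands near the n log n average-case cost, far below `worst`, which is the qualitative phenomenon smoothed analysis formalizes.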
Small world phenomenon, rapidly mixing Markov chains, and average consensus algorithms
In Proc. IEEE CDC’07, 2007
Abstract

Cited by 4 (0 self)
Abstract — In this paper, we demonstrate the relationship between the diameter of a graph and the mixing time of a symmetric Markov chain defined on it. We use this relationship to show that graphs with the small world property have dramatically small mixing times. Based on this result, we conclude that the addition of independent random edges with arbitrarily small probabilities to a cycle significantly increases the convergence speed of average consensus algorithms, meaning that small world networks reach consensus orders of magnitude faster than a cycle. Furthermore, this dramatic increase happens for any positive probability of random edges. The same argument is used to draw a similar conclusion for the addition of a random matching to the cycle.
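The claim that a few random shortcut edges dramatically speed up average consensus is easy to check numerically. The sketch below is my own illustration, not the paper's construction: it runs a Metropolis-weighted consensus iteration (chosen because it preserves the average even when degrees differ) on a plain cycle and on the same cycle with random edges added; the edge count and tolerance are assumptions.

```python
import random

def consensus_rounds(adj, x0, eps=1e-3, max_rounds=10**5):
    """Average consensus with Metropolis weights w_ij = 1/(1 + max(deg_i, deg_j)).
    Returns the number of rounds until every value is within eps of the mean."""
    n = len(x0)
    deg = [len(nb) for nb in adj]
    target = sum(x0) / n
    x = list(x0)
    for t in range(1, max_rounds + 1):
        x = [x[i] + sum((x[j] - x[i]) / (1 + max(deg[i], deg[j]))
                        for j in adj[i]) for i in range(n)]
        if max(abs(v - target) for v in x) < eps:
            return t
    return max_rounds

def cycle(n):
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

random.seed(1)
n = 64
x0 = [1.0] + [0.0] * (n - 1)          # all "mass" starts at a single node
plain = consensus_rounds(cycle(n), x0)

# Add a handful of uniformly random shortcut edges to the same cycle.
adj = cycle(n)
for _ in range(n * 3 // 10):
    u, v = random.randrange(n), random.randrange(n)
    if u != v and v not in adj[u]:
        adj[u].append(v); adj[v].append(u)
small_world = consensus_rounds(adj, x0)
```

On the bare cycle, convergence takes on the order of n^2 rounds; with the shortcuts it drops sharply, consistent with the small-mixing-time result stated above.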
Smoothed Analysis of Binary Search Trees
2006
Abstract

Cited by 3 (1 self)
Binary search trees are one of the most fundamental data structures. While the height of such a tree may be linear in the worst case, the average height with respect to the uniform distribution is only logarithmic. The exact value is one of the best studied problems in average-case complexity. We investigate what happens in between by analysing the smoothed height of binary search trees: randomly perturb a given (adversarial) sequence and then take the expected height of the binary search tree generated by the resulting sequence. As perturbation models, we consider partial permutations, partial alterations, and partial deletions. On the one hand, we prove tight lower and upper bounds of roughly Θ(√((1 − p) · n/p)) for the expected height of binary search trees under partial permutations and partial alterations, where n is the number of elements and p is the smoothing parameter. This means that worst-case instances are rare and disappear under slight perturbations. On the other hand, we examine how much a perturbation can increase the height of a binary search tree, i.e., how much worse well-balanced instances can become.
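The partial-permutation model described here is simple to simulate. The sketch below is a hypothetical illustration (not the authors' code): it builds the BST for the adversarial sorted insertion sequence, whose height is n, then measures the average height after a partial permutation with an assumed smoothing parameter p = 0.5.

```python
import random

def bst_height(seq):
    """Height (in levels) of the BST built by inserting seq left to right.
    Iterative insert, so adversarial (path-shaped) trees cause no recursion issues."""
    root, children, height = None, {}, 0
    for key in seq:
        if root is None:
            root, children[key], height = key, [None, None], 1
            continue
        cur, depth = root, 1
        while True:
            depth += 1
            side = 0 if key < cur else 1
            if children[cur][side] is None:
                children[cur][side] = key
                children[key] = [None, None]
                height = max(height, depth)
                break
            cur = children[cur][side]
    return height

def partial_permutation(seq, p):
    """Mark each position independently with probability p, then randomly
    permute the marked elements among the marked positions."""
    seq = list(seq)
    marked = [i for i in range(len(seq)) if random.random() < p]
    vals = [seq[i] for i in marked]
    random.shuffle(vals)
    for i, v in zip(marked, vals):
        seq[i] = v
    return seq

random.seed(0)
n = 512
adversarial = list(range(n))                 # sorted insertion: a path of height n
h_worst = bst_height(adversarial)
h_smoothed = sum(bst_height(partial_permutation(adversarial, 0.5))
                 for _ in range(20)) / 20
```

The perturbed heights concentrate far below n, illustrating the abstract's point that worst-case instances disappear under slight perturbation.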
Research Statement
Abstract
complex networks. This has been the case since time immemorial. But only recently has technology allowed us to record the structure of these huge networks and use this information in decision making. Hyperlinking web pages together and ranking them based on network structure has already revolutionized the way we manage information (“Just google it”). This is only the beginning. To benefit from our new wealth in networked data, we need to really understand networks. We need accurate measurements, relevant models, and efficient algorithms. This is the focus of my research. It sits at the intersection of mathematics and computer science, and incorporates elements of economics as well.

Current work: Network sampling, modeling, algorithms, and economics

I use the term network sampling to refer to the process of gathering network data from the natural or artificial environment. The mathematical field of random graph theory was greatly invigorated by the observation of real-world graphs in the late 1990s [25, 6, 3]. In some of these works, such as the analysis of the power grid of the western United States, the network structure is known with little or no error. But in many other cases, we only
Complex Contagion and the Weakness of Long Ties in Social Networks: Revisited
Abstract
Diseases, information and rumors can spread fast in social networks exhibiting the small world property. In the diffusion of these “simple contagions”, which can spread through a single contact, a small network diameter and the existence of weak ties in the network play important roles. Recent studies by sociologists [Centola and Macy 2007] have also explored “complex contagions” in which multiple contacts are required for the spread of contagion. [Centola and Macy 2007] and [Romero et al. 2011] have shown that complex contagions exhibit different diffusion patterns than simple ones. In this paper, we study three small world models and provide rigorous analysis of the diffusion speed of a k-complex contagion, in which a node becomes active only when at least k of its neighbors are active. Diffusion of a complex contagion starts from a constant number of initially active nodes. We provide upper and lower bounds on the number of rounds it takes for the entire network to be activated. Our results show that, compared to simple contagions, weak ties are not as effective in spreading complex contagions due to the lack of simultaneous active contacts, and the diffusion speed depends heavily on the way weak ties are distributed in a network. We show that in the Newman–Watts model with Θ(n) random edges added on top of a ring structure, the diffusion speed of a 2-complex contagion is Ω(n^(1/3)) and O((n^4 log n)^(1/5)) with high probability. In Kleinberg’s small world model (in which Θ(n) random edges are added with a spatial distribution inversely proportional
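The k-complex threshold dynamics described here can be prototyped directly. Below is an illustrative Python sketch with my own parameters (ring radius, number of weak ties, and seed set are assumptions, not the paper's): a Newman-Watts-style graph, i.e. a ring where each node also links to its 2 nearest neighbors on each side plus Θ(n) random weak ties, under the rule that a node activates once at least k = 2 of its neighbors are active.

```python
import random

def nw_graph(n, k=2, extra=1.0):
    """Ring where each node links to its k nearest neighbors on each side,
    plus about extra*n uniformly random weak-tie edges (Newman-Watts style)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(int(extra * n)):
        u, v = random.randrange(n), random.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def k_complex_rounds(adj, seeds, k=2):
    """Synchronous k-complex contagion: a node activates once at least k of its
    neighbors are active. Returns (rounds until no new activations, #active)."""
    active = set(seeds)
    rounds = 0
    while True:
        new = {v for v in adj if v not in active
               and sum(1 for u in adj[v] if u in active) >= k}
        if not new:
            return rounds, len(active)
        active |= new
        rounds += 1

random.seed(2)
n = 400
adj = nw_graph(n)
rounds, total = k_complex_rounds(adj, seeds={0, 1, 2, 3}, k=2)
```

Along the ring the contagion advances only a constant number of nodes per round, while weak ties rarely supply two simultaneous active contacts on their own, which is exactly the slowdown relative to simple contagions that the abstract quantifies.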
Smoothed Analysis of Balancing Networks
2009
Abstract
In a load balancing network, each processor has an initial collection of unit-size jobs (tokens), and in each round, pairs of processors connected by balancers split their load as evenly as possible. An excess token (if any) is placed according to some predefined rule. As it turns out, this rule crucially affects the performance of the network. In this work we propose a model that studies this effect. We suggest a model bridging the uniformly random assignment rule and the arbitrary one (in the spirit of smoothed analysis): starting from an arbitrary assignment of balancer directions, each assignment is flipped with probability α independently. For a large class of balancing networks, our result implies that after O(log n) rounds the discrepancy is w.h.p. O((1/2 − α) log n + log log n). This matches and generalizes the known bounds for α = 0 and α = 1/2.
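The α-perturbed assignment rule can be sketched with a minimal toy simulation (a round-robin pairing on a cycle, not one of the paper's network topologies, and for simplicity the adversarial orientation is resampled every round rather than fixed once):

```python
import random

def balance_rounds(load, rounds, alpha):
    """Alternately pair neighbors on a cycle: (0,1),(2,3),... then (1,2),(3,4),...
    Each balancer splits its pair's tokens as evenly as possible; the excess
    token (if any) follows an adversarial orientation (always the lower-indexed
    processor), independently flipped with probability alpha."""
    load = list(load)
    n = len(load)
    for r in range(rounds):
        for i in range(r % 2, n - 1 + r % 2, 2):
            a, b = i % n, (i + 1) % n
            total = load[a] + load[b]
            flipped = random.random() < alpha      # perturb the adversarial rule?
            load[a] = total // 2 + (0 if flipped else total % 2)
            load[b] = total - load[a]
    return load

random.seed(3)
n = 32
start = [n] + [0] * (n - 1)        # adversarial start: every token on one processor
final = balance_rounds(start, rounds=2000, alpha=0.5)
disc = max(final) - min(final)
```

At α = 1/2 the excess token goes to a uniformly random side, the fully smoothed end of the spectrum, and the final discrepancy stays within a few tokens of the average; α = 0 recovers the purely adversarial rule the abstract contrasts against.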
Towards Context-Aware, Real-Time and Autonomous Decision Making Using Information Aggregation and Network Analytics
Abstract
Abstract—We consider the problem of real-time, proactive decision making for dynamic and time-critical decision events where the choices made for multiple, individual decisions over time determine the final decision outcome of an event. We posit that the quality of such individual decisions can be significantly improved if human decision makers are provided with decision aids in the form of dynamically updated information and dependencies between the different decision variables, and the humans affecting those decision variables. In this position paper, we propose the CONRAD (CONtext-aware Real-time Adaptive Decision making) system, which uses computational techniques from large-scale network analysis and game-theory-based distributed information aggregation to develop such decision aids. CONRAD’s functionalities are implemented through three subsystems: a decision-making subsystem that updates and mathematically combines information from different decision variables to predict the outcome of the decision event; a decision-assessment subsystem that uses the currently predicted decision outcome to estimate the future decision trajectory and recommends information-collection-related actions to the human decision maker; and a network-analysis subsystem that uses those recommended actions to dynamically update the dependencies and correlations between events and people influencing the decision variables. To the best of our knowledge, our work is one of the first attempts at combining dynamic decision updates and using the predicted decision trajectory as a proactive feedback mechanism to dynamically update the correlations between decision variables, so that human decision makers can make more strategically informed and well-aligned decisions towards the desired outcome of decision events.