Results 1–10 of 10
A Revealed Preference Approach to Computational Complexity in Economics
, 2010
Abstract

Cited by 8 (1 self)
One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility-maximizing consumer if and only if it is consistent with a utility-maximizing consumer whose utility function can be maximized in strongly polynomial time. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
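The consistency test behind results of this kind is Afriat's revealed-preference condition (GARP): a finite set of price and consumption observations is consistent with utility maximization exactly when it contains no strict revealed-preference cycle. A minimal sketch of that check; the function name and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def satisfies_garp(prices, bundles):
    """Check whether (price, bundle) observations obey GARP, i.e. are
    consistent with a utility-maximizing consumer (Afriat's theorem)."""
    prices = np.asarray(prices, dtype=float)
    bundles = np.asarray(bundles, dtype=float)
    T = len(prices)
    # direct revealed preference: x_t R x_s iff p_t.x_t >= p_t.x_s
    R = np.array([[prices[t] @ bundles[t] >= prices[t] @ bundles[s]
                   for s in range(T)] for t in range(T)])
    # transitive closure, Floyd-Warshall style
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])
    # GARP: x_t R x_s forbids x_s being strictly cheaper at its own prices
    for t in range(T):
        for s in range(T):
            if R[t, s] and prices[s] @ bundles[s] > prices[s] @ bundles[t]:
                return False
    return True
```

For example, two observations where each bundle is revealed preferred to the other while one is strictly cheaper violate GARP, whereas data with no revealed-preference cycle pass the test.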
Highly Parallel Sparse Matrix-Matrix Multiplication
, 2010
Abstract

Cited by 6 (3 self)
Generalized sparse matrix-matrix multiplication is a key primitive for many high-performance graph algorithms as well as some linear solvers such as multigrid. We present the first parallel algorithms that achieve increasing speedups for an unbounded number of processors. Our algorithms are based on a two-dimensional block distribution of sparse matrices where serial sections use a novel hypersparse kernel for scalability. We give a state-of-the-art MPI implementation of one of our algorithms. Our experiments show scaling up to thousands of processors on a variety of test scenarios.
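The classic serial building block of sparse matrix-matrix multiplication is the row-by-row (Gustavson-style) product with a sparse accumulator. A minimal sketch using a plain dict-of-rows format; this illustrates the primitive only, not the paper's 2-D block-distributed hypersparse kernel:

```python
def spgemm(A, B):
    """Row-wise sparse matrix-matrix product C = A * B.
    A, B: dicts mapping row index -> {col index: value}."""
    C = {}
    for i, row in A.items():
        acc = {}  # sparse accumulator for row i of C
        for k, a_ik in row.items():
            # only rows of B matching a nonzero column of A contribute
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C
```

The work is proportional to the number of nonzero multiplications actually performed, which is why this kernel is the usual serial section inside parallel SpGEMM algorithms.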
Stochastic mean payoff games: Smoothed analysis and approximation schemes
 In Proc. of the 38th Int. Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science
, 2011
Abstract

Cited by 2 (1 self)
We consider two-player zero-sum stochastic mean payoff games with perfect information modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game's value does not depend on the initial position) with a constant number of random nodes. We show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Towards Explaining the Speed of k-Means
Abstract
The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization to Bregman divergences.

1 k-Means Clustering. The problem of clustering data into classes is ubiquitous in computer science, with applications ranging from computational biology over machine learning to image analysis. The k-means method is a very simple and implementation-friendly local improvement heuristic for clustering. It is used to partition a set X of n d-dimensional data points into k clusters. (The number k of clusters is fixed in advance.) In k-means clustering, our goal is not only to get a clustering of the data points, but also to get a center c_i for each cluster X_i of the clustering X_1, ..., X_k. A center can be viewed as a representative of its cluster. We do not require centers to be among the data points; they can be arbitrary points. The goal is to find a "good" clustering, where "good" means that the clustering should minimize the objective function ∑_{i=1}^{k} ∑_{x ∈ X_i} δ(x, c_i). Here, δ denotes a distance measure. In the following, we will mainly use squared Euclidean distances, i.e., δ(x, c_i) = ‖x − c_i‖². Of course, given the cluster centers c_1, ..., c_k ∈ R^d, each point x ∈ X should be assigned to the cluster X_i whose center c_i is closest to x. On the other hand, given a clustering X_1, ..., X_k
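The local improvement loop described above alternates between assigning each point to its nearest center and moving each center to the mean of its cluster. A toy 2-D sketch of Lloyd's iteration, for illustration only (not the implementation analyzed in the paper):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 2-D points with squared Euclidean distances."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers at random data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins the cluster of its nearest center
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0])**2 + (y - centers[c][1])**2)
            clusters[i].append((x, y))
        # update step: each center moves to its cluster's mean
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:
            break  # no center moved: a local optimum of the objective
        centers = new_centers
    return centers, clusters
```

Each iteration can only decrease the objective ∑_i ∑_{x ∈ X_i} ‖x − c_i‖², so the loop terminates at a local optimum; the smoothed analysis referenced above bounds how many such iterations are needed on perturbed inputs.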
The Work of Daniel A. Spielman
 PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
Abstract
Dan Spielman has made groundbreaking contributions in theoretical computer science and mathematical programming, and his work has profound connections to the study of polytopes and convex bodies, to error-correcting codes, expanders, and numerical analysis. Many of Spielman's achievements came from a beautiful collaboration spanning over two decades with Shang-Hua Teng. This paper describes some of Spielman's main achievements. Section 1 describes smoothed analysis of algorithms, a new paradigm for the analysis of algorithms introduced by Spielman and Teng. Section 2 describes Spielman and Teng's explanation for the excellent practical performance of the simplex algorithm via smoothed analysis. Spielman and Teng's theorem asserts that the simplex algorithm takes a polynomial number of steps for a random Gaussian perturbation of every linear programming problem. Section 3 is devoted to Spielman's works on error-correcting codes and in particular his construction of linear-time encodable and decodable high-rate codes based
Path Trading: Fast Algorithms, Smoothed Analysis, and Hardness Results
Abstract
Abstract. The Border Gateway Protocol (BGP) serves as the main routing protocol of the Internet and ensures network reachability among autonomous systems (ASes). When traffic is forwarded between the many ASes on the Internet according to that protocol, each AS selfishly routes the traffic inside its own network according to some internal protocol that supports the local objectives of the AS. We consider possibilities of achieving higher global performance in such systems while maintaining the objectives and costs of the individual ASes. In particular, we consider how path trading, i.e., deviations from routing the traffic using individually optimal protocols, can lead to better global performance. Shavitt and Singer ("Limitations and Possibilities of Path Trading between Autonomous Systems", INFOCOM 2010) were the first to consider the computational complexity of finding such path trading solutions. They show that the problem is weakly NP-hard and provide a dynamic program to find path trades between pairs of ASes. In this paper we improve upon their results, both theoretically and practically. First, we show that finding path trades between sets of ASes is also strongly NP-hard. Moreover, we provide an algorithm that finds all Pareto-optimal path trades for a pair of ASes. While in principle the number of Pareto-optimal path trades can be exponential, in our experiments this number was typically small. We use the framework of smoothed analysis to give theoretical evidence that this is a general phenomenon, and not only limited to the instances on which we performed experiments. The computational results show that our algorithm yields far superior running times and can solve considerably larger instances than the previous dynamic program.
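The Pareto-optimality notion at the core of the above can be illustrated on pairs of per-AS costs: a trade is kept only if no other trade is at least as cheap for both ASes and strictly cheaper for one. A minimal sketch (function name and data are hypothetical, not from the paper):

```python
def pareto_optimal(trades):
    """Filter (cost_A, cost_B) pairs down to the Pareto-optimal ones:
    pairs not dominated (<= in both coordinates, < in at least one)
    by any other pair."""
    result = []
    best_b = float('inf')
    # sweep in increasing cost_A; a pair survives only if it strictly
    # improves cost_B over everything cheaper-or-equal for A seen so far
    for a, b in sorted(set(trades)):
        if b < best_b:
            result.append((a, b))
            best_b = b
    return result
```

The sweep runs in O(n log n) time; the surviving pairs form the trade-off frontier between the two ASes' costs.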
Smoothed Analysis of the Successive Shortest Path Algorithm
Abstract
The minimum-cost flow problem is a classic problem in combinatorial optimization with various applications. Several pseudo-polynomial, polynomial, and strongly polynomial algorithms have been developed in the past decades, and it seems that both the problem and the algorithms are well understood. However, some of the algorithms' running times observed in empirical studies contrast with the running times obtained by worst-case analysis, not only in the order of magnitude but also in the ranking when compared to each other. For example, the Successive Shortest Path (SSP) algorithm, which has an exponential worst-case running time, seems to outperform the strongly polynomial Minimum-Mean Cycle Canceling algorithm. To explain this discrepancy, we study the SSP algorithm in the framework of smoothed analysis and establish a bound of O(mnφ(m + n log n)) for its smoothed running time. This shows that worst-case instances for the SSP algorithm are not robust and unlikely to be encountered in practice.
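The SSP algorithm repeatedly augments flow along a cheapest s-t path in the residual network until the demand is met. A compact sketch for illustration, using Bellman-Ford since residual arcs can have negative costs; this is the textbook scheme, not an implementation tuned for the smoothed bound above:

```python
def min_cost_flow(n, edges, s, t, demand):
    """Successive Shortest Path min-cost flow on n nodes.
    edges: list of (u, v, capacity, cost). Returns (flow sent, total cost)."""
    INF = float('inf')
    # residual graph: adj[u] holds [target, residual cap, cost, reverse index]
    adj = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        adj[u].append([v, cap, cost, len(adj[v])])
        adj[v].append([u, 0, -cost, len(adj[u]) - 1])
    flow, total_cost = 0, 0
    while flow < demand:
        # Bellman-Ford shortest path over arcs with residual capacity
        dist = [INF] * n
        dist[s] = 0
        parent = [None] * n
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(adj[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[t] == INF:
            break  # no augmenting path: demand not fully satisfiable
        # bottleneck residual capacity along the cheapest path
        push, v = demand - flow, t
        while v != s:
            u, i = parent[v]
            push = min(push, adj[u][i][1])
            v = u
        # augment: reduce forward capacities, grow reverse ones
        v = t
        while v != s:
            u, i = parent[v]
            adj[u][i][1] -= push
            adj[v][adj[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost
```

Each augmentation sends flow along a currently cheapest path, so intermediate flows are always cheapest among flows of the same value; the number of augmentations is what the smoothed analysis bounds.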
A Framework for Evaluating the Smoothness of DataMining Results
Abstract
Abstract. The data-mining literature is rich in problems that are formalized as combinatorial-optimization problems. An indicative example is the entity-selection formulation that has been used to model the problem of selecting a subset of representative reviews from a review corpus [11, 22] or important nodes in a social network [10]. Existing combinatorial algorithms for solving such entity-selection problems identify a set of entities (e.g., reviews or nodes) as important. Here, we consider the following question: how do small or large changes in the input dataset change the value or the structure of such reported solutions? We answer this question by developing a general framework for evaluating the smoothness (i.e., consistency) of the data-mining results obtained for the input dataset X. We do so by comparing these results with the results obtained for datasets that are within a small or a large distance from X. The algorithms we design allow us to perform such comparisons effectively and thus approximate the results' smoothness efficiently. Our experimental evaluation on real datasets demonstrates the efficacy and the practical utility of our framework in a wide range of applications.
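The core measurement in such a framework, comparing the result on X with results on datasets a small distance from X, can be sketched generically. Here `solve` and `perturb` are placeholders standing in for an entity-selection algorithm and a perturbation model, not functions from the paper:

```python
import random

def smoothness(solve, dataset, perturb, trials=20, seed=0):
    """Estimate result smoothness: average relative change of the objective
    value over randomly perturbed copies of the dataset.
    solve:   maps a dataset to a numeric objective value
    perturb: maps (dataset, rng) to a slightly modified copy"""
    rng = random.Random(seed)
    base = solve(dataset)
    changes = []
    for _ in range(trials):
        value = solve(perturb(dataset, rng))
        # relative change, guarded against a zero baseline
        changes.append(abs(value - base) / max(abs(base), 1e-12))
    return sum(changes) / trials
```

A value near 0 means the reported solution's value is stable under small input changes; larger values flag results that are artifacts of the particular dataset rather than robust findings.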