Results 1–10 of 28
Tensor decompositions for learning latent variable models, 2014
Abstract

Cited by 72 (5 self)
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models—including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation—which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin’s perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
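The power iteration with deflation that the abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration for an exactly orthogonally decomposable third-order tensor, not the robust variant the paper analyzes; all function names are illustrative:

```python
import numpy as np

def tensor_apply(T, theta):
    # T(I, theta, theta): contract the symmetric 3-tensor along two modes
    return np.einsum('ijk,j,k->i', T, theta, theta)

def tensor_power_method(T, n_components, n_restarts=10, n_iters=100, rng=None):
    """Recover (eigenvalue, eigenvector) pairs of an orthogonally
    decomposable symmetric 3-tensor by power iteration with deflation."""
    rng = np.random.default_rng(rng)
    d = T.shape[0]
    T = T.copy()
    pairs = []
    for _ in range(n_components):
        best_theta, best_lam = None, -np.inf
        for _ in range(n_restarts):
            theta = rng.standard_normal(d)
            theta /= np.linalg.norm(theta)
            for _ in range(n_iters):
                theta = tensor_apply(T, theta)
                theta /= np.linalg.norm(theta)
            lam = np.einsum('ijk,i,j,k->', T, theta, theta, theta)
            if lam > best_lam:
                best_lam, best_theta = lam, theta
        pairs.append((best_lam, best_theta))
        # deflate: subtract the recovered rank-one component
        v = best_theta
        T = T - best_lam * np.einsum('i,j,k->ijk', v, v, v)
    return pairs
```

Restarting from several random initializations and keeping the largest eigenvalue mirrors the restart strategy used in the paper's analysis; the deflation step reduces the residual tensor so the next component can be extracted.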
A Revealed Preference Approach to Computational Complexity in Economics, 2010
Abstract

Cited by 14 (3 self)
One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to enforce that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility maximizing consumer if and only if it is consistent with a utility maximizing consumer having a utility function that can be maximized in strongly polynomial time. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
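Consistency of a data set with a utility maximizing consumer is classically tested via the Generalized Axiom of Revealed Preference, which by Afriat's theorem is equivalent to rationalizability. A brute-force GARP checker might look like the following illustrative sketch (not code from the paper):

```python
import numpy as np
from itertools import product

def satisfies_garp(prices, bundles):
    """Check whether observed (price, bundle) data obey the Generalized
    Axiom of Revealed Preference, i.e. are consistent with maximization
    of some utility function (Afriat's theorem)."""
    prices = np.asarray(prices, float)
    bundles = np.asarray(bundles, float)
    n = len(prices)
    # directly revealed preferred: x_s was chosen when x_t was affordable
    cost = prices @ bundles.T          # cost[s, t] = p_s . x_t
    own = np.diag(cost).copy()         # p_s . x_s
    R = own[:, None] >= cost - 1e-12   # R[s, t]: x_s weakly revealed pref. to x_t
    # transitive closure (Warshall; k is the outermost index)
    for k, i, j in product(range(n), repeat=3):
        if R[i, k] and R[k, j]:
            R[i, j] = True
    # GARP: if x_s is revealed preferred to x_t, then x_t must not be
    # strictly directly revealed preferred to x_s
    for s in range(n):
        for t in range(n):
            if R[s, t] and own[t] > cost[t, s] + 1e-12:
                return False
    return True
```

For example, two observations that each pick the cheaper bundle at their own prices pass the check, while a strict preference cycle fails it.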
Smoothed analysis of multiobjective optimization, in FOCS, 2009
Abstract

Cited by 12 (3 self)
We prove that the number of Pareto-optimal solutions in any multiobjective binary optimization problem with a finite number of linear objective functions is polynomial in the model of smoothed analysis. This resolves a conjecture of René Beier [5]. Moreover, we give polynomial bounds on all finite moments of the number of Pareto-optimal solutions, which yields the first nontrivial concentration bound for this quantity. Using our new technique, we give a complete characterization of polynomial smoothed complexity for binary optimization problems, which strengthens an earlier result due to Beier and Vöcking [8]. Keywords: multiobjective optimization; Pareto-optimal solutions; smoothed analysis.
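For intuition, the Pareto-optimal solutions of a binary problem with linear objectives can be enumerated by brute force over all 2^n solutions; this is exponential in n, in contrast to the smoothed-polynomial bounds above, and purely illustrative:

```python
from itertools import product

def pareto_optimal_solutions(objectives):
    """Enumerate Pareto-optimal binary solutions for a list of linear
    objective weight vectors, each to be maximized."""
    n = len(objectives[0])
    points = []
    for x in product((0, 1), repeat=n):
        vals = tuple(sum(w * b for w, b in zip(obj, x)) for obj in objectives)
        points.append((x, vals))

    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))

    return [x for x, v in points
            if not any(dominates(w, v) for _, w in points)]
```

With aligned objectives a single solution dominates everything; with conflicting objectives the Pareto front can contain many incomparable solutions, which is exactly the quantity the smoothed analysis bounds.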
Union of Random Minkowski Sums and Network Vulnerability Analysis
Abstract

Cited by 6 (2 self)
Let C = {C1,..., Cn} be a set of n pairwise-disjoint convex s-gons, for some constant s, and let π be a probability density function (pdf) over the nonnegative reals. For each i, let Ki be the Minkowski sum of Ci with a disk of radius ri, where each ri is a random nonnegative number drawn independently from the distribution determined by π. We show that the expected complexity of the union of K1,..., Kn is O(n log n), for any pdf π; the constant of proportionality depends on s, but not on the pdf. Next, we consider the following problem that arises in analyzing the vulnerability of a network under a physical attack. Let G = (V, E) be a planar geometric graph where E is a set of n line segments with pairwise-disjoint relative interiors. Let ϕ: R≥0 → [0, 1] be an edge failure probability function, where a physical attack at a location x ∈ R^2 causes an edge e of E at distance r from x to fail with probability ϕ(r); we assume that ϕ is of the form 1 − Π(x), where Π is a cumulative distribution function on the nonnegative reals. The goal is to compute the most vulnerable location for G, i.e., the location of the attack that maximizes the expected number of failing edges of G. Using our bound on the complexity of the union of random Minkowski sums, we present a near-linear Monte Carlo algorithm for computing a location that is an approximately most vulnerable location of attack for G. Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems—Geometrical problems and computations
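A naive baseline for the vulnerability problem scores candidate attack points directly: by linearity of expectation, the expected number of failing edges at a point is just the sum of ϕ over the point-to-segment distances. The sketch below is brute force, not the paper's near-linear algorithm, and all names are illustrative:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def expected_failures(edges, x, phi):
    """Expected number of failing edges for an attack at x: by linearity
    of expectation, sum phi(distance) over all edges."""
    return sum(phi(point_segment_dist(x, a, b)) for a, b in edges)

def most_vulnerable_location(edges, candidates, phi):
    """Approximate the most vulnerable attack location by scoring a set
    of candidate points (e.g. random samples near the network)."""
    return max(candidates, key=lambda x: expected_failures(edges, x, phi))
```

Evaluating each candidate costs O(n), so the overall running time is linear in the number of candidates times n; the paper's contribution is avoiding this brute-force scan.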
Stochastic mean payoff games: Smoothed analysis and approximation schemes, in Proc. of the 38th Int. Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, 2011
Abstract

Cited by 4 (1 self)
We consider two-player zero-sum stochastic mean payoff games with perfect information modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game’s value does not depend on the initial position) with a constant number of random nodes. We show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
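The mean payoff of such a game can be approximated by generic value iteration over the three vertex types (maximize at white, minimize at black, average at random vertices), with v_T/T converging to the game value. This is a textbook scheme, not one of the pseudo-polynomial algorithms discussed above, and the data representation below is an assumption of this sketch:

```python
def value_iteration(succ, kind, reward, T=1000):
    """Approximate the mean payoff of a BWR-style game by value
    iteration. succ[v] -> list of successors, kind[v] in {'W','B','R'}
    (White maximizes, Black minimizes, Random averages uniformly),
    reward[v][w] -> payoff on arc (v, w). Returns v_T / T per vertex."""
    V = {v: 0.0 for v in succ}
    for _ in range(T):
        new = {}
        for v in succ:
            vals = [reward[v][w] + V[w] for w in succ[v]]
            if kind[v] == 'W':
                new[v] = max(vals)       # White picks the best arc
            elif kind[v] == 'B':
                new[v] = min(vals)       # Black picks the worst arc
            else:
                new[v] = sum(vals) / len(vals)  # uniform random move
        V = new
    return {v: V[v] / T for v in succ}
```

On a deterministic two-cycle with rewards 1 and 3, for instance, v_T/T approaches the mean payoff 2; the number of iterations needed for a given accuracy depends on the reward magnitudes, which is why this scheme is only pseudo-polynomial.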
Sensitivity of diffusion dynamics to network uncertainty. Technical report, available at http://ndssl.vbi.vt.edu/supplementaryinfo/vskumar/sensitivityjair.pdf, 2014
Abstract

Cited by 2 (0 self)
Simple diffusion processes on networks have been used to model, analyze and predict diverse phenomena such as the spread of diseases, information and memes. More often than not, the underlying network data is noisy and sampled. This prompts the following natural question: how sensitive are the diffusion dynamics and subsequent conclusions to uncertainty in the network structure? In this paper, we consider two popular diffusion models: the Independent Cascade (IC) model and the Linear Threshold (LT) model. We study how the expected number of vertices that are influenced/infected, for particular initial conditions, is affected by network perturbations. Through rigorous analysis under the assumption of a reasonable perturbation model we establish the following main results. (1) For the IC model, we characterize the sensitivity to network perturbation in terms of the critical probability for phase transition of the network. We find that the expected number of infections is quite stable, unless the transmission probability is close to the critical probability. (2) We show that the standard LT model with uniform edge weights is relatively stable under network perturbations. (3) We study these sensitivity questions using extensive simulations on diverse real world networks and find that our theoretical predictions for both models match the observations quite closely. (4) Experimentally, the transient behavior, i.e., the time series of the number of infections, in both models appears to be more sensitive to network perturbations.
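The quantity whose stability the paper studies, the expected number of infected vertices under the Independent Cascade model, can be estimated by Monte Carlo simulation. An illustrative sketch, assuming a single uniform transmission probability p:

```python
import random

def independent_cascade(graph, seeds, p, rng=random):
    """One run of the Independent Cascade model: each newly activated
    node gets a single chance to activate each inactive out-neighbor
    with probability p. Returns the set of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(graph, seeds, p, runs=2000, seed=0):
    """Monte Carlo estimate of the expected number of infected nodes."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(graph, seeds, p, rng))
               for _ in range(runs)) / runs
```

Sensitivity to network uncertainty can then be probed by comparing `expected_spread` on the original graph against perturbed copies (edges randomly added or removed), which is essentially the experimental setup the abstract describes.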
Smoothed analysis of belief propagation for minimum-cost flow and matching, in Proc. WALCOM: Algorithms and Computation, 2013
Smoothed Analysis of the Successive Shortest Path Algorithm, 2013
Abstract

Cited by 2 (1 self)
The minimum-cost flow problem is a classic problem in combinatorial optimization with various applications. Several pseudo-polynomial, polynomial, and strongly polynomial algorithms have been developed in the past decades, and it seems that both the problem and the algorithms are well understood. However, some of the algorithms’ running times observed in empirical studies contrast with the running times obtained by worst-case analysis not only in the order of magnitude but also in the ranking when compared to each other. For example, the Successive Shortest Path (SSP) algorithm, which has an exponential worst-case running time, seems to outperform the strongly polynomial Minimum-Mean Cycle Canceling algorithm. To explain this discrepancy, we study the SSP algorithm in the framework of smoothed analysis and establish a bound of O(mnφ(m + n log n)) for its smoothed running time. This shows that worst-case instances for the SSP algorithm are not robust and unlikely to be encountered in practice.
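The Successive Shortest Path algorithm is simple to state: repeatedly augment flow along a cheapest s-t path in the residual network until the demand is met. A compact Bellman-Ford-based sketch follows (illustrative only; it assumes at most one arc per ordered vertex pair and makes no use of the potentials that give the algorithm its usual O(m + n log n) shortest-path step):

```python
import math

def min_cost_flow(n, arcs, s, t, demand):
    """Successive Shortest Path sketch. arcs: list of (u, v, capacity,
    cost). Routes up to `demand` units from s to t, always along a
    cheapest residual path; returns (flow_sent, total_cost)."""
    cap = [[0] * n for _ in range(n)]
    cost = [[0] * n for _ in range(n)]
    for u, v, c, w in arcs:
        cap[u][v] += c
        cost[u][v] = w
        cost[v][u] = -w  # residual (backward) arcs undo cost
    flow = total = 0
    while flow < demand:
        # Bellman-Ford shortest path in the residual network
        # (tolerates the negative residual costs)
        dist = [math.inf] * n
        prev = [-1] * n
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                for v in range(n):
                    if cap[u][v] > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        prev[v] = u
        if dist[t] == math.inf:
            break  # no augmenting path left; max flow reached
        # bottleneck capacity along the path, capped by remaining demand
        push = demand - flow
        v = t
        while v != s:
            push = min(push, cap[prev[v]][v])
            v = prev[v]
        # apply the augmentation and account its cost
        v = t
        while v != s:
            u = prev[v]
            cap[u][v] -= push
            cap[v][u] += push
            total += push * cost[u][v]
            v = u
        flow += push
    return flow, total
```

Each augmentation sends at least one unit, so the number of iterations is bounded by the total demand; this is the exponential worst-case behavior (in the bit length of the capacities) that the smoothed analysis above shows to be fragile.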