Results 1–10 of 17
Model-based overlapping clustering
 In KDD, 2005
Cited by 36 (7 self)

Abstract:
While the vast majority of clustering algorithms are partitional, many real-world datasets have inherently overlapping clusters. Several approaches to finding overlapping clusters have come from work on analysis of biological datasets. In this paper, we interpret an overlapping clustering model proposed by Segal et al. [23] as a generalization of Gaussian mixture models, and we extend it to an overlapping clustering model based on mixtures of any regular exponential family distribution and the corresponding Bregman divergence. We provide the necessary algorithm modifications for this extension, and present results on synthetic data as well as on subsets of the 20 Newsgroups and EachMovie datasets.
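The abstract above rests on the correspondence between regular exponential families and Bregman divergences. As a minimal illustration (ours, not the paper's algorithm), here is the general Bregman divergence formula instantiated for the Gaussian case (squared Euclidean distance) and the generalized I-divergence:

```python
import math

def bregman(phi, grad_phi, x, y):
    """Bregman divergence d_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - sum(
        g * (xi - yi) for g, xi, yi in zip(grad_phi(y), x, y)
    )

# phi(x) = ||x||^2 yields squared Euclidean distance (the Gaussian case):
sq = lambda x: sum(xi * xi for xi in x)
grad_sq = lambda x: [2.0 * xi for xi in x]

# phi(x) = sum_i x_i log x_i yields the generalized I-divergence
# d(x, y) = sum_i x_i log(x_i / y_i) - x_i + y_i (the Poisson case):
neg_ent = lambda x: sum(xi * math.log(xi) for xi in x)
grad_neg_ent = lambda x: [math.log(xi) + 1.0 for xi in x]
```

Plugging `sq` into `bregman` recovers ‖x − y‖² term by term, which is exactly how the Gaussian mixture model arises as the special case mentioned in the abstract.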
Toward a model for backtracking and dynamic programming
 Comput. Complexity
Cited by 29 (9 self)

Abstract:
We propose a model called priority branching trees (pBT) for backtracking and dynamic programming algorithms. Our model generalizes both the priority model of Borodin, Nielsen and Rackoff, as well as a simple dynamic programming model due to Woeginger, and hence spans a wide spectrum of algorithms. After demonstrating the strength of the model, we show its limitations by providing lower bounds for algorithms in this model for several classical problems such as Interval Scheduling, Knapsack and Satisfiability.
A Class of Hard Small 0-1 Programs
 INFORMS Journal on Computing, 1998
Cited by 24 (2 self)

Abstract:
In this paper, we consider a class of 0-1 programs which, although innocent looking, is a challenge for existing solution methods. Solving even small instances from this class is extremely difficult for conventional branch-and-bound or branch-and-cut algorithms. We also experimented with basis reduction algorithms and with dynamic programming without much success. The paper then examines the performance of two other methods: a group relaxation for 0-1 programs, and a sorting-based procedure following an idea of Wolsey. Although the results with these two methods are somewhat better than with the other four when it comes to checking feasibility, we offer this class of small 0-1 programs as a challenge to the research community. As of yet, instances from this class with as few as seven constraints and sixty 0-1 variables are unsolved.

1 Introduction

Goal programming [2] is a useful model when a decision maker wants to come "as close as possible" to satisfying a number of incompatible goals ...
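The "seven constraints, sixty 0-1 variables" class mentioned above matches the market split family of Cornuéjols and Dawande; the sketch below generates instances under that assumption. The coefficient range and right-hand-side rule are our reconstruction from memory of that family, not taken from this listing:

```python
import random

def market_split_instance(m, seed=0):
    """Sample a feasibility instance in the spirit of the hard 0-1 class above.

    Question posed by the instance: is there x in {0,1}^n with A x = b?
    Assumed construction (market split): n = 10*(m-1) variables,
    coefficients a_ij drawn uniformly from {0, ..., 99}, and
    b_i = floor(sum_j a_ij / 2). With m = 7 this gives the
    seven-constraint, sixty-variable shape cited in the abstract.
    """
    rng = random.Random(seed)
    n = 10 * (m - 1)
    A = [[rng.randrange(100) for _ in range(n)] for _ in range(m)]
    b = [sum(row) // 2 for row in A]
    return A, b
```

The right-hand sides sit near the middle of each constraint's range, which is what makes both branch-and-bound pruning and feasibility checking so weak on these instances.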
Generalized Knapsack Solvers for Multi-Unit Combinatorial Auctions: Analysis and Application to Computational Resource Allocation
 In Workshop on Agent-Mediated Electronic Commerce VI: Theories for and Engineering of Distributed Mechanisms and Systems, 2004
Cited by 20 (3 self)

Abstract:
The problem of allocating discrete computational resources motivates interest in general multi-unit combinatorial exchanges. This paper considers the problem of computing optimal (surplus-maximizing) allocations, assuming unrestricted quasi-linear preferences. We present a solver whose pseudo-polynomial time and memory requirements are linear in three of the four natural measures of problem size: number of agents, length of bids, and units of each resource. In applications where the number of resource types is inherently a small constant, e.g., computational resource allocation, such a solver offers advantages over more elaborate approaches developed for high-dimensional problems.
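As a rough illustration of the pseudo-polynomial behavior described above (a sketch for a single resource type, not the paper's solver), a dynamic program over resource units runs in time linear in the number of agents, the bid length, and the number of units:

```python
def allocate(units, bids):
    """Surplus-maximizing allocation of `units` identical resource units.

    bids: one bid curve per agent; bids[i][q] is agent i's value for
    receiving q units (we assume bids[i][0] == 0, i.e., an agent may
    receive nothing). Time and memory are
    O(num_agents * units * max_curve_length): pseudo-polynomial, and
    linear in each of the three size measures named in the abstract.
    """
    NEG = float("-inf")
    # best[u] = max total value over agents processed so far using u units
    best = [0] + [NEG] * units
    for curve in bids:
        new = [NEG] * (units + 1)
        for u in range(units + 1):
            if best[u] == NEG:
                continue
            for q, v in enumerate(curve):
                if u + q <= units and best[u] + v > new[u + q]:
                    new[u + q] = best[u] + v
        best = new
    return max(v for v in best if v != NEG)
```

The dependence on `units` (rather than on the number of bits needed to write it) is exactly what makes the bound pseudo-polynomial, and why the approach pays off only when the number of resource types is a small constant.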
How well can Primal-Dual and Local-Ratio algorithms perform?
, 2007
Cited by 13 (4 self)

Abstract:
We define an algorithmic paradigm, the stack model, that captures many primal-dual and local-ratio algorithms for approximating covering and packing problems. The stack model is defined syntactically and without any complexity limitations, and hence our approximation bounds are independent of the P vs. NP question. We provide tools to bound the performance of primal-dual and local-ratio algorithms, and supply a (log n + 1)/2 inapproximability result for set cover, a 4/3 inapproximability for minimum Steiner tree, and a 0.913 inapproximability for interval scheduling on two machines.
Column basis reduction and decomposable knapsack problems
 Discrete Optimization
Cited by 10 (4 self)

Abstract:
We propose a very simple preconditioning method for integer programming feasibility problems: replacing the problem b′ ≤ Ax ≤ b, x ∈ Z^n with b′ ≤ (AU)y ≤ b, y ∈ Z^n, where U is a unimodular matrix computed via basis reduction, to make the columns of AU short (i.e., have small Euclidean norm) and nearly orthogonal (see e.g. [26], [25]). Our approach is termed column basis reduction, and the reformulation is called range-space reformulation. It is motivated by the technique proposed for equality-constrained IPs by Aardal, Hurkens and Lenstra. We also propose a simplified method to compute their reformulation. We also study a family of IP instances, called decomposable knapsack problems (DKPs). DKPs generalize the instances proposed by Jeroslow, Chvátal and Todd, Avis, Aardal and Lenstra, and Cornuéjols et al. They are knapsack problems with a constraint vector of the form pM + r, with p > 0 and r integral vectors, and M a large integer. If the parameters are suitably chosen in DKPs, we prove
• hardness results, when branch-and-bound branching on individual variables is applied;
• that they are easy, if one branches on the constraint px instead; and
• that branching on the last few variables in either the range-space or the AHL reformulation is equivalent to branching on px in the original problem.
We also provide recipes to generate such instances. Our computational study confirms that the behavior of the studied instances in practice is as predicted by the theory.
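Written out, the range-space reformulation described above is a unimodular change of variables (a sketch of the substitution, with symbols as in the abstract):

```latex
% Original feasibility problem:
\exists\, x \in \mathbb{Z}^n :\quad b' \le A x \le b .
% Substitute x = U y, with U \in \mathbb{Z}^{n \times n} unimodular,
% so x is integral if and only if y = U^{-1} x is:
\exists\, y \in \mathbb{Z}^n :\quad b' \le (A U)\, y \le b .
```

Because U is unimodular, the two problems have the same feasibility status; basis reduction chooses U so that the columns of AU are short and nearly orthogonal, which is what makes branching on the reformulated problem effective.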
Basis reduction and the complexity of branch-and-bound
Cited by 8 (0 self)

Abstract:
The classical branch-and-bound algorithm for the integer feasibility problem ...
A Sparse Knapsack Algo-Tech-Cuit and its Synthesis
 In International Conference on Application-Specific Array Processors (ASAP-94), 1994
Cited by 6 (5 self)

Abstract:
We systematically derive an improved algorithm (called the sparse algorithm) for the general knapsack problem which has better average-case performance than the standard (dense) dynamic programming algorithm. The derivation is based on transformation of the standard recurrences into a stream functional program, and cannot be achieved by the usual space-time mapping techniques because the dependencies are statically unpredictable. Furthermore, such a sparse algorithm for the general knapsack problem has not been proposed in the literature, to the best of our knowledge. We also implement the sparse algorithm on a linear asynchronous array with constant-size memory on each PE (i.e., a wavefront array processor). Using LPGS partitioning, the algorithm can run on an arbitrary-size ring and has optimal time speedup.
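The contrast with the dense dynamic program can be sketched as follows (our illustration of the sparse idea for 0/1 knapsack, not the paper's derivation): instead of a table indexed by every capacity value, keep only the Pareto-undominated (weight, value) states:

```python
def sparse_knapsack(capacity, items):
    """0/1 knapsack via a sparse state list instead of a dense DP table.

    items: list of (weight, value) pairs. Only Pareto-undominated
    (weight, value) states are kept, so time and memory scale with the
    number of distinct reachable states rather than with the capacity --
    often far fewer, which is the source of the better average case.
    """
    states = [(0, 0)]  # undominated (weight, value) pairs, weight ascending
    for w, v in items:
        extended = [(sw + w, sv + v) for sw, sv in states if sw + w <= capacity]
        # Merge old and extended state lists, then drop dominated states:
        # a state is dominated if a lighter-or-equal state has >= value.
        pruned, best_value = [], -1
        for sw, sv in sorted(states + extended):
            if sv > best_value:
                pruned.append((sw, sv))
                best_value = sv
        states = pruned
    return max(sv for _, sv in states)
```

The number of kept states is data-dependent and statically unpredictable, which mirrors the abstract's point that the usual space-time mapping techniques cannot derive this algorithm.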
Locally vs. Globally Optimized Flow-Based Content Distribution to Mobile Nodes
Cited by 4 (1 self)

Abstract:
The paper deals with efficient distribution of timely information to flows of mobile devices. We consider the case where a set of Information Dissemination Devices (IDDs) broadcast a limited amount of information to passing mobile nodes that are moving along well-defined paths. This is the case, for example, in intelligent transportation systems. We develop a novel model that captures the main aspects of the problem, and define a new optimization problem we call MBMAP (Maximum Benefit Message Assignment Problem). We study the computational complexity of this problem in the global and local cases, and provide new approximation algorithms.