Results 1–10 of 154
Potential Function Methods for Approximately Solving Linear Programming Problems: Theory and Practice
, 2001
Cited by 160 (4 self)
After several decades of sustained research and testing, linear programming has evolved into a remarkably reliable, accurate and useful tool for handling industrial optimization problems. Yet, large problems arising from several concrete applications routinely defeat the very best linear programming codes, running on the fastest computing hardware. Moreover, this is a trend that may well continue and intensify, as problem sizes escalate and the need for fast algorithms becomes more stringent. Traditionally, the focus in optimization algorithms, and in particular, in algorithms for linear programming, has been to solve problems "to optimality." In concrete implementations, this has always meant the solution of problems to some finite accuracy (for example, eight digits). An alternative approach would be to explicitly, and rigorously, trade off accuracy for speed. One motivating factor is that in many practical applications, quickly obtaining a partially accurate solution is much preferable to obtaining a very accurate solution very slowly. A secondary (and independent) consideration is that the input data in many practical applications has limited accuracy to begin with. During the last ten years, a new body of research has emerged, which seeks to develop provably good approximation algorithms for classes of linear programming problems. This work both has roots in fundamental areas of mathematical programming and is also framed in the context of the modern theory of algorithms. The result of this work has been a family of algorithms with solid theoretical foundations and with growing experimental success. In this manuscript we will study these algorithms, starting with some of the very earliest examples, and through the latest theoretical and computational developments.
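The accuracy-for-speed trade-off the survey motivates can be seen in miniature with a toy one-dimensional minimizer (ours, not from the manuscript): an interval-shrinking method does work that grows only logarithmically in 1/eps, so relaxing the target accuracy buys a proportional saving in iterations.

```python
def bisect_min(f, lo, hi, eps):
    """Locate the minimizer of a unimodal function to absolute accuracy eps.

    Iteration count grows only logarithmically in 1/eps, so loosening the
    target accuracy yields a proportional saving in work: the
    accuracy-for-speed trade-off in miniature.
    """
    iters = 0
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2          # minimizer cannot lie right of m2
        else:
            lo = m1          # minimizer cannot lie left of m1
        iters += 1
    return (lo + hi) / 2, iters

# Minimize (x - 1)^2 at two different accuracies.
x_rough, it_rough = bisect_min(lambda x: (x - 1) ** 2, -10, 10, 1e-2)
x_fine, it_fine = bisect_min(lambda x: (x - 1) ** 2, -10, 10, 1e-8)
```

The rough run uses far fewer iterations than the fine one while still landing within its (looser) tolerance.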
Minimizing electricity cost: Optimization of distributed Internet data centers in a multi-electricity-market environment
 In Proc. of INFOCOM
, 2010
Cited by 93 (9 self)
Abstract—The study of Cyber-Physical Systems (CPS) has been an active area of research, and the Internet Data Center (IDC) is an important emerging Cyber-Physical System. As the demand for Internet services has drastically increased in recent years, the power used by IDCs has been skyrocketing. While most existing research focuses on reducing the power consumption of IDCs, the power management problem of minimizing the total electricity cost has been overlooked. This is an important problem faced by service providers, especially in the current multi-electricity market, where the price of electricity may exhibit time and location diversities. Further, for these service providers, guaranteeing quality of service (i.e., service-level objectives, SLOs, such as service delay guarantees) to the end users is of paramount importance. This paper studies the problem of minimizing the total electricity cost in a multi-electricity-market environment while guaranteeing quality of service, exploiting the location and time diversity of electricity prices. We model the problem as a constrained mixed-integer program and propose an efficient solution method. Extensive evaluations based on real-life electricity price data for multiple IDC locations illustrate the efficiency and efficacy of our approach.
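A drastically simplified sketch of the cost-minimization idea (the prices, capacities, and brute-force search below are ours for illustration; the paper's actual model is a constrained mixed-integer program with delay guarantees):

```python
from itertools import product

# Toy instance: route a fixed number of request units to data centers with
# location-dependent electricity prices and per-site capacities, minimizing
# total cost.  All numbers are invented for illustration.
PRICE = {"east": 3.0, "west": 1.5, "south": 2.0}   # $ per unit of load
CAP = {"east": 5, "west": 3, "south": 4}           # max units per site
DEMAND = 8                                          # units to place

def cheapest_allocation():
    sites = list(PRICE)
    best_cost, best_alloc = float("inf"), None
    # Enumerate every split of DEMAND across sites (fine at toy scale;
    # real instances need an MIP solver).
    for alloc in product(*(range(CAP[s] + 1) for s in sites)):
        if sum(alloc) != DEMAND:
            continue
        cost = sum(a * PRICE[s] for a, s in zip(alloc, sites))
        if cost < best_cost:
            best_cost, best_alloc = cost, dict(zip(sites, alloc))
    return best_cost, best_alloc
```

The optimum fills the cheap sites first, which is exactly the price-diversity effect the paper exploits.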
Fast and Robust Earth Mover’s Distances
Cited by 88 (6 self)
We present a new algorithm for a robust family of Earth Mover's Distances (EMDs) with thresholded ground distances. The algorithm transforms the flow network of the EMD so that the number of edges is reduced by an order of magnitude. As a result, we compute the EMD an order of magnitude faster than the original algorithm, which makes it possible to compute the EMD on large histograms and databases. In addition, we show that EMDs with thresholded ground distances have many desirable properties. First, they correspond to the way humans perceive distances. Second, they are robust to outlier noise and quantization effects. Third, they are metrics. Finally, experimental results on image retrieval show that thresholding the ground distance of the EMD improves both accuracy and speed.
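The thresholding itself is just d_t(a, b) = min(d(a, b), t); a minimal sketch (the line-of-bins example is ours, not from the paper):

```python
def thresholded_ground_distance(d, t):
    """Clamp a ground-distance matrix at threshold t: d_t(a, b) = min(d(a, b), t).

    Bins farther apart than t all look equally far, which is what lets the
    EMD flow network replace the dense far-apart edges with far fewer edges.
    """
    return [[min(dij, t) for dij in row] for row in d]

# Ground distances between 4 histogram bins on a line (|i - j|), threshold 2.
d = [[abs(i - j) for j in range(4)] for i in range(4)]
dt = thresholded_ground_distance(d, 2)
```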
Perspectives of Monge Properties in Optimization
, 1995
Cited by 70 (2 self)
An m × n matrix C is called a Monge matrix if c_ij + c_rs ≤ c_is + c_rj for all 1 ≤ i < r ≤ m and 1 ≤ j < s ≤ n. In this paper we present a survey on Monge matrices, related Monge properties, and their role in combinatorial optimization. Specifically, we deal with the following three main topics: (i) fundamental combinatorial properties of Monge structures, (ii) applications of Monge properties to optimization problems, and (iii) recognition of Monge properties.
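The Monge property can be tested by checking only adjacent 2×2 submatrices, since the general inequality follows by summing adjacent ones; a sketch (the example matrices are ours):

```python
def is_monge(C):
    """Check the Monge property: C[i][j] + C[r][s] <= C[i][s] + C[r][j]
    for all i < r and j < s.  It suffices to test adjacent 2x2 submatrices,
    because the general inequality is a sum of adjacent ones.
    """
    m, n = len(C), len(C[0])
    return all(
        C[i][j] + C[i + 1][j + 1] <= C[i][j + 1] + C[i + 1][j]
        for i in range(m - 1)
        for j in range(n - 1)
    )

# (i - j)^2 gives convex "distance-like" costs and is Monge;
# (i + j)^2 violates the inequality.
ok1 = is_monge([[(i - j) ** 2 for j in range(4)] for i in range(4)])
ok2 = is_monge([[(i + j) ** 2 for j in range(4)] for i in range(4)])
```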
Quincy: Fair Scheduling for Distributed Computing Clusters
Cited by 61 (1 self)
This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this model of resource sharing. We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph data structure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data- and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.
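To make the "one global cost model" idea concrete at toy scale (the costs and brute-force search below are ours; Quincy itself encodes the problem as a flow network and runs a standard min-cost-flow solver):

```python
from itertools import permutations

# Toy global cost model: assign each task to one machine, where cost is
# low when the machine already holds the task's data (locality) and each
# machine runs at most one task.  All numbers are invented.
COST = [  # COST[task][machine]
    [0, 4, 4],   # task 0's data lives on machine 0
    [4, 0, 4],   # task 1's data lives on machine 1
    [1, 1, 0],   # task 2 is cheap anywhere, cheapest on machine 2
]

def best_assignment():
    """Pick the one-to-one task-to-machine assignment minimizing total cost.

    Brute force over permutations stands in for the min-cost-flow solver;
    both optimize a single global objective rather than greedy per-task rules.
    """
    n = len(COST)
    best = min(permutations(range(n)),
               key=lambda p: sum(COST[t][m] for t, m in enumerate(p)))
    return best, sum(COST[t][m] for t, m in enumerate(best))
```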
A Primal-Dual Interior Point Method Whose Running Time Depends Only on the Constraint Matrix
, 1995
Cited by 59 (8 self)
We propose a primal-dual "layered-step" interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns an exact optimum after a finite number of steps; in particular, after O(n^3.5 c(A)) iterations, where c(A) is a function of the ...
Improved approximation algorithms for unsplittable flow problems (Extended Abstract)
 In Proceedings of the 38th Annual Symposium on Foundations of Computer Science
, 1997
Cited by 45 (2 self)
Stavros G. Kolliopoulos, Clifford Stein. In the single-source unsplittable flow problem we are given a graph G, a source vertex s, and a set of sinks t_1, ..., t_k with associated demands. We seek a single s-t_i flow path for each commodity i so that the demands are satisfied and the total flow routed across any edge e is bounded by its capacity c_e. The problem is an NP-hard variant of max flow and a generalization of single-source edge-disjoint paths, with applications to scheduling, load balancing, and virtual-circuit routing problems. In a significant development, Kleinberg recently gave constant-factor approximation algorithms for several natural optimization versions of the problem [18]. In this paper we give a generic framework that yields simpler algorithms and significant improvements upon the constant factors. Our framework, with appropriate subroutines, applies to all optimization versions previously considered and treats in a unified manner directed and u...
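A sketch of what "unsplittable" means operationally (the tiny network is ours): verifying that a candidate single-path-per-commodity routing respects capacities is easy; finding such a routing is the NP-hard part the paper approximates.

```python
from collections import defaultdict

def respects_capacities(paths, demands, capacity):
    """Check whether routing each commodity i along its single path
    paths[i], carrying demands[i] units, stays within every edge capacity.

    Edges are (u, v) tuples; capacity maps edges to limits.  This only
    verifies a candidate unsplittable routing, it does not find one.
    """
    load = defaultdict(float)
    for path, demand in zip(paths, demands):
        for edge in zip(path, path[1:]):
            load[edge] += demand
    return all(load[e] <= capacity[e] for e in load)

# Source s, sinks t1 and t2, both routed through the shared edge (s, a):
# demands of 2 + 2 overload its capacity of 3.
capacity = {("s", "a"): 3, ("a", "t1"): 2, ("a", "t2"): 2}
paths = [["s", "a", "t1"], ["s", "a", "t2"]]
ok = respects_capacities(paths, [2, 2], capacity)
```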