Results 1-10 of 36
Algorithms for Knapsack Problems
, 1995
"... This thesis considers a family of combinatorial problems known under the name Knapsack Problems. As all the problems are A7)hard we are searching for exact solution techniques having reasonable solution times for nearly all instances encountered in practice, despite having exponential time bounds f ..."
Abstract

Cited by 66 (5 self)
 Add to MetaCart
This thesis considers a family of combinatorial problems known under the name Knapsack Problems. As all these problems are NP-hard, we search for exact solution techniques that have reasonable solution times for nearly all instances encountered in practice, despite having exponential time bounds for a number of highly contrived problem instances. A similar behavior is known from the Simplex algorithm, which despite its exponential worst-case behavior has reasonable solution times for all realistic problems.
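The 0-1 Knapsack Problem studied in the thesis can be solved exactly by the classic dynamic program over capacities; a minimal Python sketch of that baseline (this is the textbook pseudo-polynomial DP, not the thesis's own branch-and-bound techniques, and the instance is illustrative):

```python
def knapsack_01(values, weights, capacity):
    """Exact 0-1 knapsack via DP over capacities: O(n * capacity) time."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# toy instance: best is items 2 and 3 (weight 20 + 30 = 50, value 220)
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The downward capacity loop is what distinguishes the 0-1 variant from the unbounded knapsack, where the loop would run upward.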
A Minimal Algorithm for the 0-1 Knapsack Problem
 Operations Research
, 1994
"... Although several large sized 01 Knapsack Problems (KP) may be easily solved, it is often the case that most of the computational eort is used for preprocessing, i.e. sorting and reduction. In order to avoid this problem it has been proposed to solve the socalled core of the problem: A Knapsack ..."
Abstract

Cited by 41 (10 self)
 Add to MetaCart
Although several large-sized 0-1 Knapsack Problems (KP) may be easily solved, it is often the case that most of the computational effort is used for preprocessing, i.e., sorting and reduction. In order to avoid this problem it has been proposed to solve the so-called core of the problem: a Knapsack Problem defined on a small subset of the variables. But the exact core cannot be identified without solving KP, so until now approximated core sizes had to be used.
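The core idea can be illustrated with a short sketch: sort items by profit-to-weight efficiency, find the break item at which the greedy solution first exceeds capacity, and restrict attention to a window of items around it (a hedged illustration; the window half-width `delta` is an assumed parameter, not the paper's core-size rule):

```python
def find_core(values, weights, capacity, delta=2):
    """Return indices of an approximate core around the greedy break item."""
    # sort item indices by value/weight efficiency, best first
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0
    for pos, i in enumerate(order):
        total += weights[i]
        if total > capacity:            # item i is the break item
            lo = max(0, pos - delta)
            hi = min(len(order), pos + delta + 1)
            return [order[j] for j in range(lo, hi)]
    return order  # everything fits: the instance is trivial
```

The exact solver is then run only on the returned core, with the remaining high-efficiency items fixed in and low-efficiency items fixed out.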
Adaptive Scheduling Server for Power-Aware Real-Time Tasks
 ACM Transactions on Embedded Computing Systems
, 2003
"... In this paper we propose a novel scheduling framework for a dynamic realtime environment with energy constraints. This framework dynamically adjusts the CPU voltage/frequency so that no task in the system misses its deadline and the total energy savings of the system are maximized. ..."
Abstract

Cited by 28 (0 self)
 Add to MetaCart
In this paper we propose a novel scheduling framework for a dynamic real-time environment with energy constraints. This framework dynamically adjusts the CPU voltage/frequency so that no task in the system misses its deadline and the total energy savings of the system are maximized.
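For independent periodic tasks under EDF, the deadline-safety condition behind such frequency adjustment can be sketched as picking the lowest discrete frequency at which the scaled utilization stays at or below 1 (a sketch under standard EDF assumptions, not the paper's actual server algorithm):

```python
def lowest_safe_frequency(tasks, levels, f_max):
    """tasks: list of (wcet_at_f_max, period); levels: discrete frequencies.

    Under EDF, an independent periodic task set is schedulable iff its
    utilization is <= 1; slowing the CPU from f_max to f stretches each
    WCET, and hence the utilization, by a factor of f_max / f.
    """
    u = sum(c / p for c, p in tasks)   # utilization at full speed
    for f in sorted(levels):           # try slowest (lowest-energy) first
        if u * f_max / f <= 1.0:
            return f
    raise ValueError("task set not schedulable even at f_max")
```

For example, a task set with utilization 0.375 at a 1000 MHz maximum needs at least 375 MHz, so with levels {300, 600, 1000} the function selects 600.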
Shares and utilities based power consolidation in virtualized server environments
 in Proceedings of the 11th IFIP/IEEE International Symposium on Integrated Network Management (IM 2009)
, 2009
"... Abstract—Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares based mechanism for the hypervisor to distribute spare resources among contending VMs. However much of the exi ..."
Abstract

Cited by 22 (1 self)
 Add to MetaCart
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM), and a shares-based mechanism for the hypervisor to distribute spare resources among contending VMs. However, much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or even higher. Motivated by this, we present a novel suite of techniques for placement and power consolidation of VMs in data centers that takes advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance tradeoffs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs (varying VM sizes and utilities, varying server capacities, and varying power costs), thus providing a practical solution for administrators.
Selecting Highly Optimal Architectural Feature Sets with Filtered Cartesian Flattening
"... Software Productlines (SPLs) are software architectures that use modular software components that can be reconfigured into different variants for different requirements sets. Feature modeling is a common method used to capture the configuration rules for an SPL architecture. A key challenge develop ..."
Abstract

Cited by 19 (1 self)
 Add to MetaCart
Software Product-lines (SPLs) are software architectures that use modular software components that can be reconfigured into different variants for different requirement sets. Feature modeling is a common method used to capture the configuration rules for an SPL architecture. A key challenge developers face when maintaining an SPL is determining how to select a set of architectural features for an SPL variant that simultaneously satisfies a series of resource constraints. This paper presents an approximation technique for selecting highly optimal architectural feature sets while adhering to resource limits. The paper provides the following contributions to configuring SPL architecture variants: (1) we provide a polynomial-time approximation algorithm for selecting a highly optimal set of architectural features that adheres to a set of resource constraints; (2) we show how this algorithm can incorporate complex architectural configuration constraints; and (3) we present empirical results showing that the approximation algorithm can be used to derive architectural feature sets that are more than 90% optimal.
Practical Voltage Scaling for Mobile Multimedia Devices
, 2004
"... This paper presents the design, implementation, and evaluation of a practical voltage scaling (PDVS) algorithm for mobile devices primarily running multimedia applications. PDVS seeks to minimize the total energy of the whole device while meeting multimedia timing requirements. To do this, PDVS exte ..."
Abstract

Cited by 18 (2 self)
 Add to MetaCart
This paper presents the design, implementation, and evaluation of a practical dynamic voltage scaling (PDVS) algorithm for mobile devices primarily running multimedia applications. PDVS seeks to minimize the total energy of the whole device while meeting multimedia timing requirements. To do this, PDVS extends traditional real-time scheduling by deciding at what execution speed, in addition to when, to execute applications. PDVS makes these decisions based on the discrete speed levels of the CPU, the total power of the device at different speeds, and the probability distribution of the CPU demand of multimedia applications. We have implemented PDVS in the Linux kernel and evaluated it on an HP laptop. Our experimental results show that PDVS saves energy substantially without affecting multimedia performance. It saves 14.4% to 37.2% of energy compared to scheduling algorithms without voltage scaling, and up to 10.4% compared to previous voltage scaling algorithms that assume an ideal CPU with continuous speeds and a cubic power-speed relationship.
Generalized Knapsack Solvers for Multi-Unit Combinatorial Auctions: Analysis and Application to Computational Resource Allocation
 In Workshop on Agent Mediated Electronic Commerce VI: Theories for and Engineering of Distributed Mechanisms and Systems
, 2004
"... The problem of allocating discrete computational resources motivates interest in general multiunit combinatorial exchanges. This paper considers the problem of computing optimal (surplusmaximizing) allocations, assuming unrestricted quasilinear preferences. We present a solver whose pseudopol ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
The problem of allocating discrete computational resources motivates interest in general multi-unit combinatorial exchanges. This paper considers the problem of computing optimal (surplus-maximizing) allocations, assuming unrestricted quasilinear preferences. We present a solver whose pseudo-polynomial time and memory requirements are linear in three of four natural measures of problem size: the number of agents, the length of bids, and the units of each resource. In applications where the number of resource types is inherently a small constant, e.g., computational resource allocation, such a solver offers advantages over more elaborate approaches developed for high-dimensional problems.
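For a single resource type, a pseudo-polynomial dynamic program of the kind described might look like the following (a sketch; the bid format, where each agent submits a list of (quantity, value) options of which at most one can win, is an assumption, not the paper's exact bidding language):

```python
def allocate(bids, units):
    """Max-surplus allocation of `units` of one resource among agents.

    bids: one list per agent of (quantity, value) options; each agent is
    awarded at most one of its options.  Running time is
    O(agents * units * bid_length), linear in each measure.
    """
    dp = [0] * (units + 1)          # dp[c] = best surplus using <= c units
    for options in bids:            # process one agent at a time
        new = dp[:]                 # the agent may receive nothing
        for qty, value in options:  # or win exactly one of its options
            for c in range(qty, units + 1):
                new[c] = max(new[c], dp[c - qty] + value)
        dp = new
    return dp[units]
```

With 3 units, bids `[[(2, 10), (3, 14)], [(1, 6)]]` yield surplus 16: giving agent 1 two units and agent 2 one unit beats giving agent 1 all three.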
Utility-Directed Allocation
 In First Workshop on Algorithms and Architectures for SelfManaging Systems
, 2003
"... This paper considers the problem of allocating discrete resources according to utility functions reported by potential recipients and relates this abstract problem to resource allocation in a Utility Data Center (UDC). A simple integer program formulation, which generalizes wellknown knapsack probl ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
This paper considers the problem of allocating discrete resources according to utility functions reported by potential recipients, and relates this abstract problem to resource allocation in a Utility Data Center (UDC). A simple integer program formulation, which generalizes well-known knapsack problems, permits a remarkable breadth of expression while retaining clarity and analytic tractability. In the UDC context, this formulation allows us to incorporate factors such as resource scarcity, user demand, and operating costs in a unified framework. It is equally applicable to long- and short-term allocation. If applied to short-term dynamic reallocation, it allows SLA violation penalties to be enforced or relaxed, thereby permitting principled preemption of resources. Retrospective analysis of past allocator inputs can guide economically optimal capacity expansion. The proposed problem formulation is suitable both for UDCs that operate exclusively within an enterprise and for those that sell access to computational resources to external customers. The latter case involves multiple divergent interests contending for scarce resources, and this paper surveys the economic issues that arise in such situations and the relevant literature, e.g., on mechanism design and auction theory.
Proactive Peak Power Management for Manycore Architectures
"... While power has long been a wellstudied problem, most dynamic power reduction techniques, e.g., V/f scaling, clock gating, etc., exploit slack in the execution behavior of programs to reduce average power. Peak power is often left untouched. However, peak power plays a large role in determining the ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
While power has long been a well-studied problem, most dynamic power reduction techniques, e.g., V/f scaling, clock gating, etc., exploit slack in the execution behavior of programs to reduce average power. Peak power is often left untouched. However, peak power plays a large role in determining the characteristics, and hence the cost, of the power supply, the thermal budgeting for the chip, and the reliability qualification of the processor. This paper proposes proactive peak power management policies that attempt to prevent the power of a processor from exceeding a certain threshold. The threshold is chosen to be close to the peak power of the processor, thereby minimizing the inefficiency due to the growing gap between the average power and the peak power of a processor, especially a multicore processor [14]. We demonstrate that proactive peak power management can enable the placement of several more cores on a die than the power budget would otherwise allow. This can result in significant (up to 47%, 33% on average) improvements in throughput for a given power budget. We also show that proactive peak power management does not have to be centralized and heavyweight, and can be applied even to manycore architectures (processors with a large number of cores). We investigate a number of efficient decentralization techniques, e.g., mapping the proactive peak power management problem to a disjunctively constrained 0-1 knapsack problem, using machine learning and classical search/optimization approaches to reduce the decision space, and using distributed control algorithms for decentralized decision making. Several of these techniques can also be used to reduce average power through traditional dynamic and global power management.
Stochastic p-Robust Location Problems
, 2004
"... Many objectives have been proposed for optimization under uncertainty. The typical stochastic programming objective of minimizing expected cost may yield solutions that are inexpensive in the long run but perform poorly under certain realizations of the random data. On the other hand, the typical ro ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
Many objectives have been proposed for optimization under uncertainty. The typical stochastic programming objective of minimizing expected cost may yield solutions that are inexpensive in the long run but perform poorly under certain realizations of the random data. On the other hand, the typical robust optimization objective of minimizing maximum cost or regret tends to be overly conservative, planning against a disastrous but unlikely scenario. In this paper, we present facility location models that combine the two objectives by minimizing the expected cost while bounding the relative regret in each scenario. In particular, the models seek the minimum-expected-cost solution that is p-robust, i.e., whose relative regret is no more than 100p% in each scenario.
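Writing $z_s(x)$ for the cost of solution $x$ in scenario $s$ and $z_s^*$ for the optimal cost in that scenario, the combined objective described above can be stated as follows (notation assumed, not taken from the paper):

```latex
\min_x \; \mathbb{E}_s\!\left[\, z_s(x) \,\right]
\quad \text{s.t.} \quad
\frac{z_s(x) - z_s^*}{z_s^*} \;\le\; p
\quad \text{for every scenario } s ,
```

i.e. the chosen solution must satisfy $z_s(x) \le (1+p)\,z_s^*$ in each scenario, so its cost is never more than $100p\%$ above that scenario's optimum.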