Results 1–10 of 118
Decomposition Techniques for Planning in Stochastic Domains
In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95)
, 1995
Cited by 109 (7 self)
This paper is concerned with modeling planning problems involving uncertainty as discrete-time, finite-state stochastic automata. Solving planning problems is reduced to computing policies for Markov decision processes. Classical methods for solving Markov decision processes cannot cope with the size of the state spaces for typical problems encountered in practice. As an alternative, we investigate methods that decompose global planning problems into a number of local problems, solve the local problems separately, and then combine the local solutions to generate a global solution. We present algorithms that decompose planning problems into smaller problems given an arbitrary partition of the state space. The local problems are interpreted as Markov decision processes and solutions to the local problems are interpreted as policies restricted to the subsets of the state space defined by the partition. One algorithm relies on constructing and solving an abstract version of the original de...
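For context, the classical baseline this abstract contrasts against is solving the whole MDP globally, e.g. by value iteration. A minimal sketch under that reading (the toy states, actions, and rewards below are made up for illustration; this is the global method, not the paper's decomposition):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Classical global solution of a finite MDP.

    P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
    Returns the optimal value function and a greedy policy.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions, key=lambda a: R[s][a] +
                     gamma * sum(p * V[t] for t, p in P[s][a]))
              for s in states}
    return V, policy

# Toy two-state chain: state 1 is absorbing and pays 1 per step.
states, actions = [0, 1], ["stay", "go"]
P = {0: {"stay": [(0, 1.0)], "go": [(1, 1.0)]},
     1: {"stay": [(1, 1.0)], "go": [(1, 1.0)]}}
R = {0: {"stay": 0.0, "go": 0.0}, 1: {"stay": 1.0, "go": 1.0}}
V, policy = value_iteration(states, actions, P, R)
```

The point of the decomposition methods above is that this sweep over every state becomes infeasible for large state spaces, motivating solving partition-local MDPs instead.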
Planning in the Presence of Cost Functions Controlled By An Adversary
 In Proceedings of the Twentieth International Conference on Machine Learning
, 2003
Cited by 44 (7 self)
We investigate methods for planning in a Markov Decision Process where the cost function is chosen by an adversary after we fix our policy. As a running example, we consider a robot path planning problem where costs are influenced by sensors that an adversary places in the environment. We formulate the problem as a zero-sum matrix game where rows correspond to deterministic policies for the planning player and columns correspond to cost vectors the adversary can select.
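A zero-sum matrix game of this row/column form can be solved approximately with generic methods such as fictitious play, where each player repeatedly best-responds to the other's empirical mixture. A sketch under that assumption (a stand-in illustration with a toy payoff matrix, not the algorithm this paper actually proposes):

```python
def fictitious_play(payoff, iters=20000):
    """Approximate a mixed equilibrium of a zero-sum matrix game.

    payoff[i][j] = cost to the row player when row i meets column j;
    the row player minimizes, the column player maximizes.
    Returns the empirical mixed strategies of both players.
    """
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_play, col_play = 0, 0
    for _ in range(iters):
        row_counts[row_play] += 1
        col_counts[col_play] += 1
        # Row player best-responds (minimizes) to the column's empirical mix.
        row_play = min(range(m), key=lambda i: sum(
            payoff[i][j] * col_counts[j] for j in range(n)))
        # Column player best-responds (maximizes) to the row's empirical mix.
        col_play = max(range(n), key=lambda j: sum(
            payoff[i][j] * row_counts[i] for i in range(m)))
    return ([c / iters for c in row_counts],
            [c / iters for c in col_counts])

# Matching-pennies-style game: value 0.5, both players mix uniformly.
rows, cols = fictitious_play([[1.0, 0.0], [0.0, 1.0]])
```

Fictitious play converges in zero-sum games, but only slowly; exact solutions come from the game's linear-programming formulation.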
User Profile Replication for Faster Location Lookup in Mobile Environments
, 1995
Cited by 35 (0 self)
We consider per-user profile replication as a mechanism for faster location lookup of mobile users in a Personal Communications Service system. We present a minimum-cost maximum-flow based algorithm to compute the set of sites at which a user profile should be replicated given known calling and user mobility patterns. We then present schemes for replication plans that gracefully adapt to changes in the calling and mobility patterns.

1 Introduction

In a Personal Communications Service (PCS) system, users place and receive calls through a wireless medium. Calls may deliver voice, data, text, facsimile, or video information [JLLM94]. PCS users are located in system-defined cells, which are bounded geographical areas. When a user places a call, the PCS infrastructure must route the call to the base station located in the same cell as the callee. The base station then transmits the data in the call to the PCS unit through the wireless medium. We consider the problem of locating users who...
Optimal scheduling of peer-to-peer file dissemination
 J. Scheduling
, 2006
Cited by 33 (1 self)
Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, performance evaluation of such systems has typically been limited to comparing one system against another, usually by means of simulation and measurement. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send/receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.
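The equal-capacity intuition behind such broadcasting bounds can be sketched for a single file part: each round, the server and every peer that already holds the part can each upload it to one new peer, so the holder count roughly doubles. A toy illustration only (not the paper's MILP or fluid-limit analysis, and the one-upload-per-round model is an assumption of this sketch):

```python
def rounds_to_disseminate(n_peers):
    """Rounds until all peers hold one file part, assuming the server and
    every current holder each upload the part to one new peer per round."""
    have, rounds = 0, 0
    while have < n_peers:
        have = min(n_peers, have + have + 1)  # all holders + server upload
        rounds += 1
    return rounds
```

Under this model 2^t - 1 peers hold the part after t rounds, so 7 peers need 3 rounds and 8 peers need 4.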
A Scheme for Unifying Optimization and Constraint Satisfaction Methods
, 2000
Cited by 32 (5 self)
Optimization and constraint satisfaction methods are complementary to a large extent, and there has been much recent interest in combining them. Yet no generally accepted principle or scheme for their merger has evolved. We propose a scheme based on two fundamental dualities, the duality of search and inference and the duality of strengthening and relaxation. Optimization as well as constraint satisfaction methods can be seen as exploiting these dualities in their respective ways. Our proposal is that rather than employ either type of method exclusively, one can focus on how these dualities can be exploited in a given problem class. The resulting algorithm is likely to contain elements from both optimization and constraint satisfaction, and perhaps new methods that belong to neither.
Learning permutations with exponential weights
 In 20th Annual Conference on Learning Theory
, 2007
Cited by 27 (5 self)
We give an algorithm for learning a permutation online. The algorithm maintains its uncertainty about the target permutation as a doubly stochastic matrix. This matrix is updated by multiplying the current matrix entries by exponential factors. These factors destroy the doubly stochastic property of the matrix and an iterative procedure is needed to renormalize the rows and columns. Even though the result of the normalization procedure does not have a closed form, we can still bound the additional loss of our algorithm over the loss of the best permutation chosen in hindsight.
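The iterative renormalization mentioned here is Sinkhorn-style balancing: alternately rescale rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch of just that step (the exponential-factor update itself is omitted, and the matrix is a toy):

```python
def sinkhorn(M, iters=200):
    """Alternately normalize rows and columns of a positive square matrix,
    converging toward a doubly stochastic matrix (Sinkhorn balancing)."""
    M = [row[:] for row in M]   # work on a copy
    n = len(M)
    for _ in range(iters):
        for i in range(n):                      # row normalization
            r = sum(M[i])
            M[i] = [x / r for x in M[i]]
        for j in range(n):                      # column normalization
            c = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= c
    return M

B = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
```

As the abstract notes, the fixed point has no closed form; the iteration only converges to it, which is why the loss bound must be argued without an explicit formula for the balanced matrix.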
Per-User Profile Replication in Mobile Environments: Algorithms, Analysis, and Simulation Results
 Journal on Special Topics in Mobile Networks and Applications, special issue on Data Management
, 1997
Cited by 25 (1 self)
We consider per-user profile replication as a mechanism for faster location lookup of mobile users in a Personal Communications Service system. We present a minimum-cost maximum-flow based algorithm to compute the set of sites at which a user profile should be replicated given known calling and user mobility patterns. We then present schemes for replication plans that gracefully adapt to changes in the calling and mobility patterns. We show the costs and benefits of our replication algorithm against previous location lookup approaches through analysis. We also simulate our algorithm against other location lookup algorithms on a realistic model of a geographical area to evaluate critical system performance measures. A notable aspect of our simulations is that we use well-validated models of user calling and mobility patterns.

1 Introduction

In a Personal Communications Service (PCS) system, users place and receive calls through a wireless medium. Calls may deliver voice, data, text, fa...
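A minimum-cost maximum-flow computation of the general kind this abstract invokes can be sketched with successive shortest augmenting paths (Bellman-Ford over the residual graph). This is a generic textbook solver run on a made-up toy network, not the paper's replication formulation:

```python
def min_cost_max_flow(n, edges, s, t):
    """Successive shortest paths. edges: (u, v, capacity, cost) tuples.
    Returns (max flow value, total cost of that flow)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    INF = float("inf")
    while True:
        # Bellman-Ford shortest path by cost over positive-capacity edges.
        dist, parent = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[t] == INF:
            return flow, total_cost
        # Bottleneck capacity along the path, then push flow along it.
        push, v = INF, t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]

# Toy network: node 0 = source, node 3 = sink.
edges = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 3), (1, 2, 1, 1), (2, 3, 2, 1)]
flow, cost = min_cost_max_flow(4, edges, 0, 3)
```

In the paper's setting the flow network would encode candidate replication sites and calling/mobility costs; the node and edge values here are purely illustrative.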
Data Locality Enhancement by Memory Reduction
 In Proceedings of the 15th ACM International Conference on Supercomputing
, 2001
Cited by 24 (4 self)
In this paper, we propose memory reduction as a new approach to data locality enhancement. Under this approach, we use the compiler to reduce the size of the data repeatedly referenced in a collection of nested loops. Between their reuses, the data will more likely remain in higher-speed memory devices, such as the cache. Specifically, we present an optimal algorithm to combine loop shifting, loop fusion and array contraction to reduce the temporary array storage required to execute a collection of loops. When applied to 20 benchmark programs, our technique reduces the memory requirement, counting both the data and the code, by 51% on average. The transformed programs gain a speedup of 1.40 on average, due to the reduced footprint and, consequently, the improved data locality. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors - compilers, optimization. General Terms: Languages. Keywords: array contraction, data locality, loop fusion, loop shifting.
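The core transformation, loop fusion plus array contraction, can be illustrated in miniature: fusing two loops that communicate through a temporary array lets the temporary shrink to a scalar. (A toy Python sketch of the idea only; the paper's compiler algorithm operates on nested loop structures, not source like this.)

```python
def before_fusion(a):
    """Two separate loops: tmp must hold all n elements between them."""
    n = len(a)
    tmp = [0.0] * n
    for i in range(n):
        tmp[i] = 2.0 * a[i]
    total = 0.0
    for i in range(n):
        total += tmp[i] + 1.0
    return total

def after_fusion_and_contraction(a):
    """Fused loop: the temporary array is contracted to one scalar."""
    total = 0.0
    for i in range(len(a)):
        t = 2.0 * a[i]      # former tmp[i], now a register-sized scalar
        total += t + 1.0
    return total
```

Both versions compute the same result, but the second needs O(1) temporary storage instead of O(n); that footprint reduction is exactly what keeps reused data in cache.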
Efficient retiming of large circuits
 IEEE Trans VLSI
, 1998
Cited by 22 (0 self)
Retiming, introduced by Leiserson and Saxe, is a powerful transformation of circuits that preserves functionality and improves performance. The ASTRA algorithm proposed an alternative view of retiming using the equivalence between retiming and clock skew optimization and also presented a fast algorithm for minimum period (min-period) retiming. Since min-period retiming may significantly increase the number of flip-flops in the circuit, minimum area (min-area) retiming is an important problem. Min-area retiming is a much harder problem than min-period retiming, and previous techniques were not capable of handling large circuits in a reasonable time. This work defines the relationship between the Leiserson-Saxe and the ASTRA approaches and utilizes it for efficient min-area retiming of large circuits. The new algorithm, Minaret, uses the same basis as the Leiserson-Saxe approach. The underlying philosophy of the ASTRA approach is incorporated to reduce the number of variables and constraints generated in the problem. This allows min-area retiming of circuits with over 56,000 gates in under 15 minutes. Index Terms: circuit optimization, design automation software, linear programming, logic synthesis, sequential logic circuits, very large scale integration.
Topological Design and Routing for Low-Earth Orbit Satellite Networks
, 1995
Cited by 20 (2 self)
We investigate a topological design and routing problem for Low-Earth Orbit (LEO) satellite communication networks where each satellite can have a limited number of direct inter-satellite links (ISLs) to a subset of satellites within its line-of-sight. First, we model the LEO satellite network as an FSA (Finite State Automaton) using satellite constellation information. Second, we solve a combined topological design and routing problem for each configuration corresponding to a state in the FSA. The topological design (or link assignment) problem deals with the selection of ISLs, and the routing problem handles the traffic distribution over the selected links to maximize the number of carried calls. In this paper, this NP-complete mixed integer optimization problem is solved by a two-step heuristic algorithm that first solves the topological design problem, and then finds the optimal routing. The algorithm is iterated using the simulated annealing technique until the near-optimal solution ...
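The simulated annealing outer loop this abstract refers to has a standard shape: always accept improving moves, accept worsening moves with probability exp(-delta/temperature), and cool geometrically. A generic sketch on a one-dimensional toy objective (the paper's actual neighborhood moves over ISL topologies are not reproduced here):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Generic annealing loop: downhill moves always accepted, uphill moves
    accepted with probability exp(-delta / temperature) while cooling."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_cost = x, c
    temp = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / temp):
            x, c = y, cy
            if c < best_cost:
                best, best_cost = x, c
        temp *= cooling
    return best, best_cost

# Toy: minimize (x - 3)^2 over the integers with +/-1 moves.
best, best_cost = simulated_annealing(lambda x: (x - 3) ** 2,
                                      lambda x, rng: x + rng.choice([-1, 1]),
                                      x0=20)
```

In the paper's setting, a "move" would perturb the ISL assignment for a constellation state and the cost would come from solving the routing subproblem.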