Results 1–10 of 20
Project scheduling under uncertainty: Survey and research potentials
 European Journal of Operational Research
, 2005
Abstract

Cited by 53 (3 self)
The vast majority of research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static, deterministic environment within which the precomputed baseline schedule will be executed. However, in the real world, project activities are subject to considerable uncertainty, which is gradually resolved during project execution. In this survey we review the fundamental approaches for scheduling under uncertainty: reactive scheduling, stochastic project scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling, under uncertainty, projects with a deterministic network evolution structure.
Modeling Applications for Adaptive QoS-based Resource Management
 In Proceedings of the 2nd IEEE High-Assurance Systems Engineering Workshop
, 1997
Abstract

Cited by 38 (3 self)
This paper describes two innovative models that facilitate adaptive QoS-driven resource management in distributed systems comprising heterogeneous computing, storage, and communication resources. The first model, denoted the Logical Application Stream Model (LASM), recursively captures a distributed application's structure, resource requirements, and relevant end-to-end quality-of-service (QoS) parameters. Upon invocation of the application by a user, the resource manager can use this model to initially structure the end-to-end application, allocate resources to this application, and schedule this application on these resources, so as to provide QoS to all applications and to efficiently utilize system resources; later, when the system state changes, the resource manager can use this application model to dynamically reallocate, reschedule, and restructure applications. The recursive nature of the model enables application developers to easily model large-scale applications. We also describe a model, denoted the Benefit Function (BF), that captures user QoS preferences and enables the resource manager to gracefully degrade application QoS under certain conditions.
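The abstract does not define the Benefit Function concretely, but the idea of mapping a delivered QoS level to user benefit can be sketched as follows. The piecewise-linear shape, the thresholds, and the cost-aware selection rule are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of a Benefit Function (BF): it maps a normalized
# QoS level to the benefit perceived by the user, so a resource manager
# can degrade QoS gracefully. The piecewise-linear shape and thresholds
# are assumptions for illustration, not the paper's definition.

def benefit(qos: float, q_min: float = 0.3, q_full: float = 0.9) -> float:
    """Benefit in [0, 1] for a normalized QoS level in [0, 1]."""
    if qos < q_min:          # below the minimum acceptable QoS: no benefit
        return 0.0
    if qos >= q_full:        # at or above full QoS: maximum benefit
        return 1.0
    # linear ramp between the two thresholds
    return (qos - q_min) / (q_full - q_min)

def pick_allocation(levels):
    """Choose the (qos, cost) pair maximizing benefit per unit cost."""
    return max(levels, key=lambda lc: benefit(lc[0]) / lc[1])

# Example: candidate (qos, cost) operating points a manager might weigh.
print(pick_allocation([(0.2, 1.0), (0.6, 2.0), (0.95, 5.0)]))
```

Under this sketch the manager prefers the mid-level operating point, since full QoS costs disproportionately more resource for little extra benefit.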
Complexity Measures for Assembly Sequences
 In Proc. IEEE Int. Conf. on Robotics and Automation
, 1996
Abstract

Cited by 22 (3 self)
Our work examines various complexity measures for two-handed assembly sequences. For many products there exists an exponentially large set of valid sequences, and a natural goal is to use automated systems to select wisely from the choices. Since assembly sequencing is a preprocessing phase for a long and expensive manufacturing process, any work towards finding a "better" assembly sequence is of great value when it comes time to assemble the physical product in mass quantities. We take a step in this direction by introducing a formal framework for studying the optimization of several complexity measures. This framework focuses on the combinatorial aspect of the family of valid assembly sequences, while temporarily separating out the specific geometric assumptions inherent to the problem. With an exponential number of possibilities, finding the true optimal cost solution is nontrivial. In fact, in the most general case, our results show that even finding an approximate solution is hard. Furthermore, we can show several hardness results, even in simple geometric settings. Future work is directed towards using this model to study how the original geometric assumptions can be reintroduced to prove stronger approximation results.
Scheduling with AND/OR Precedence Constraints
, 2004
Abstract

Cited by 18 (1 self)
In many scheduling applications it is required that the processing of some job be postponed until some other job, which can be chosen from a pre-given set of alternatives, has been completed. The traditional concept of precedence constraints fails to model such restrictions. Therefore, the concept has been generalized to so-called and/or precedence constraints, which can cope with this kind of requirement. In the context of traditional precedence constraints, feasibility, transitivity, and the computation of earliest start times for jobs are fundamental, well-studied problems. The purpose of this paper is to provide efficient algorithms for these tasks for the more general model of and/or precedence constraints. We show that feasibility as well as many questions related to transitivity can be solved by applying essentially the same linear-time algorithm. In order to compute earliest start times we propose two polynomial-time algorithms to cope with different classes of time distances between jobs.
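The feasibility question for and/or precedence constraints can be sketched with a simple worklist propagation: an "and" job is released once all its predecessors have completed, an "or" job once any one of them has. The representation below is illustrative; it is a minimal sketch in the spirit of the linear-time approach the abstract mentions, not the paper's actual algorithm:

```python
from collections import deque

# Minimal feasibility check for and/or precedence constraints (a sketch;
# the job representation is illustrative, not taken from the paper).
def feasible(jobs):
    """jobs: {name: (kind, [predecessors])}, kind in {'and', 'or'}.
    Returns True iff every job can eventually be started."""
    done = set()
    # remaining unmet predecessor count, used for 'and' jobs
    need = {j: len(preds) for j, (kind, preds) in jobs.items()}
    succs = {j: [] for j in jobs}
    for j, (_, preds) in jobs.items():
        for p in preds:
            succs[p].append(j)
    # jobs with no waiting conditions can start immediately
    queue = deque(j for j, (kind, preds) in jobs.items() if not preds)
    while queue:
        j = queue.popleft()
        if j in done:
            continue
        done.add(j)
        for s in succs[j]:
            need[s] -= 1
            # an 'or' job is released by any completed predecessor,
            # an 'and' job only once all predecessors are done
            if jobs[s][0] == 'or' or need[s] == 0:
                queue.append(s)
    return len(done) == len(jobs)
```

Note how an "or" job can break a cycle that would deadlock a purely traditional ("and"-only) precedence graph: `{'a': ('and', []), 'b': ('or', ['a', 'c']), 'c': ('and', ['b'])}` is feasible, while the same graph with `b` as an "and" job is not.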
A generic approach to schedulability analysis of real-time tasks
 Nordic J. of Computing
, 2003
Abstract

Cited by 15 (3 self)
In offline schedulability tests for real-time systems, tasks are usually assumed to be periodic, i.e. they are released at fixed rates. To relax the assumption of complete knowledge of arrival times, we propose to use timed automata to describe task arrival patterns. In a recent work, it is shown that for the fixed-priority scheduling strategy and tasks with only timing constraints (i.e. execution time and deadline), the schedulability of such models can be checked by reachability analysis on timed automata with two clocks. In this paper, we extend the above result to deal with precedence and resource constraints. This yields a unified task model, which is expressive enough to describe concurrency, synchronization, and tasks that may be periodic, aperiodic, preemptive, or non-preemptive, with (or without) combinations of timing, precedence, and resource constraints. We present an operational semantics for the model, and show that the related schedulability analysis problem can be solved efficiently using the same technique. The presented results have been implemented in the TIMES tool for automated schedulability analysis.
Proactive algorithms for job shop scheduling with probabilistic durations
 Journal of Artificial Intelligence Research
Abstract

Cited by 13 (2 self)
Most classical scheduling formulations assume a fixed and known duration for each activity. In this paper, we weaken this assumption, requiring instead only that each duration can be represented by an independent random variable with a known mean and variance. The best solutions are ones which have a high probability of achieving a good makespan. We first create a theoretical framework, formally showing how Monte Carlo simulation can be combined with deterministic scheduling algorithms to solve this problem. We propose an associated deterministic scheduling problem whose solution is proved, under certain conditions, to be a lower bound for the probabilistic problem. We then propose and investigate a number of techniques for solving such problems based on combinations of Monte Carlo simulation, solutions to the associated deterministic problem, and either constraint programming or tabu search. Our empirical results demonstrate that a combination of the use of the associated deterministic problem and Monte Carlo simulation results in algorithms that scale best both in terms of problem size and uncertainty. Further experiments point to the correlation between the quality of the deterministic solution and the quality of the probabilistic solution as a major factor responsible for this success.
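The core Monte Carlo step can be illustrated on a deliberately simplified model: sample durations for a fixed solution and estimate the probability that the makespan meets a target. The truncated-normal distribution and the chains-of-activities structure are our simplifying assumptions (the paper assumes only a known mean and variance per duration, and works with full job shop solutions):

```python
import random

# Monte Carlo sketch of the paper's setting: each activity duration is an
# independent random variable with known mean and variance (modeled here
# as a normal truncated at zero -- the distribution choice is ours). For
# a fixed solution we estimate the probability of meeting a deadline.
def sample_makespan(chains, rng):
    """chains: sequences of (mean, std) activities run in series; in this
    simplified model the makespan is the length of the longest chain."""
    return max(sum(max(0.0, rng.gauss(m, s)) for m, s in chain)
               for chain in chains)

def prob_makespan_at_most(chains, deadline, n=10_000, seed=1):
    """Estimate P(makespan <= deadline) over n sampled scenarios."""
    rng = random.Random(seed)
    hits = sum(sample_makespan(chains, rng) <= deadline for _ in range(n))
    return hits / n
```

With zero variance the estimate collapses to the deterministic answer, e.g. `prob_makespan_at_most([[(3, 0), (4, 0)], [(5, 0)]], 7.5)` returns 1.0.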
Probabilistic Analysis and Scheduling of Critical Soft Real-Time Systems
, 1999
Abstract

Cited by 13 (0 self)
In addition to correctness requirements, a real-time system must also meet its temporal constraints, often expressed as deadlines. We call safety- or mission-critical real-time systems which may miss some deadlines critical soft real-time systems, to distinguish them from hard real-time systems, where all deadlines must be met, and from soft real-time systems which are not safety- or mission-critical. The performance of a critical soft real-time system is acceptable as long as the deadline miss rate is below an application-specific threshold. Architectural features of computer systems, such as caches and branch prediction hardware, are designed to improve average performance. Deterministic real-time design and analysis approaches require that such features be disabled to increase predictability; alternatively, allowances must be made for their effects by designing for the worst case. Either approach leads to a decrease in average performance. Since critical soft real-time systems do not require that all deadlines be met, average performance can be improved by adopting a probabilistic approach. In order to allow a tradeoff between deadlines met and average ...
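The probabilistic acceptance criterion described above can be sketched as a miss-rate estimate checked against an application-specific threshold. The bimodal execution-time model (a fast cache-hit path and a rare slow path) and all numbers are illustrative assumptions, not from the paper:

```python
import random

# Sketch of the probabilistic view in the abstract: instead of disabling
# caches and designing for the worst case, sample execution times and
# check that the deadline miss rate stays below an application-specific
# threshold. Distribution and numbers are illustrative assumptions.
def miss_rate(exec_time_sampler, deadline, n=100_000):
    """Fraction of sampled jobs whose execution time exceeds the deadline."""
    misses = sum(exec_time_sampler() > deadline for _ in range(n))
    return misses / n

rng = random.Random(0)
# execution time: fast cache-hit path most of the time, slow path otherwise
sampler = lambda: 2.0 if rng.random() < 0.99 else 6.0

rate = miss_rate(sampler, deadline=5.0)
acceptable = rate <= 0.02      # application-specific miss-rate threshold
```

Here roughly 1% of jobs take the slow path and miss the deadline, which is acceptable under a 2% threshold even though the worst-case execution time exceeds the deadline.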
Rate Derivation and Its Applications to Reactive, Real-Time Embedded Systems
 In Proc. the 35th Design Automation Conf
, 1998
Abstract

Cited by 11 (7 self)
An embedded system (the system) continuously interacts with its environment under strict timing constraints, called the external constraints, and it is important to know how these external constraints translate to time budgets, called the internal constraints, on the tasks of the system. Knowing these time budgets reduces the complexity of the system's design and validation problem and helps the designers have simultaneous control over the system's functional as well as temporal correctness from the beginning of the design flow. The translation is carried out by first deriving the rate of each task in the system, hence the term "rate derivation", using the system's task structure and the rates of the input stimuli coming into the system from its environment. The derived task rates are later used to derive and validate the rest of the internal as well as external constraints. This paper proposes a general task graph model to represent the system's task structure, techniques for deriving ...
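The idea of deriving task rates from input stimulus rates can be sketched as a propagation over an acyclic task graph. The trigger semantics below (an "or"-triggered task runs once per incoming event, an "and"-triggered task once per complete set of inputs) are our simplifying assumptions, not the paper's exact model:

```python
# Illustrative sketch of rate derivation on an acyclic task graph:
# external stimulus rates are propagated to internal tasks. The trigger
# semantics ('or' sums input rates, 'and' takes their minimum) are
# simplifying assumptions for this sketch.
def derive_rates(stimuli, tasks, order):
    """stimuli: {source: rate in events/s}
    tasks: {task: (trigger, [inputs])}, trigger in {'or', 'and'}
    order: internal tasks in topological order."""
    rate = dict(stimuli)
    for t in order:
        trigger, inputs = tasks[t]
        in_rates = [rate[i] for i in inputs]
        # 'or': fires on every incoming event; 'and': limited by the
        # slowest input it must wait for
        rate[t] = sum(in_rates) if trigger == 'or' else min(in_rates)
    return rate

# Example: two sensors feed a fusion task; a logger handles either event.
rates = derive_rates(
    {'sensor_a': 10.0, 'sensor_b': 4.0},
    {'fuse': ('and', ['sensor_a', 'sensor_b']),
     'log':  ('or',  ['sensor_a', 'sensor_b'])},
    ['fuse', 'log'])
# rates['fuse'] == 4.0, rates['log'] == 14.0
```

Once each task's rate is known, a period (and hence a time budget) follows directly, which is what makes the derived rates usable for validating the remaining internal constraints.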
Intractability of assembly sequencing: Unit disks in the plane
 In Proceedings of the Workshop on Algorithms and Data Structures
, 1997
Abstract

Cited by 11 (1 self)
We consider the problem of removing a given disk from a collection of unit disks in the plane. At each step, we allow a disk to be removed by a collision-free translation to infinity, and the goal is to access a given disk using as few steps as possible. This Disks problem is a version of a common task in assembly sequencing, namely removing a given part from a fully assembled product. Recently there has been a focus on optimizing assembly sequences over various cost measures, however with very limited algorithmic success. We explain this lack of success, proving strong inapproximability results in this simple geometric setting. Namely, we show that approximating the number of steps required to within a factor of 2^(log^(1−γ) n) for any γ > 0 is quasi-NP-hard. This provides the first inapproximability results for assembly sequencing, realized in a geometric setting. As a stepping stone, we study the approximability of scheduling with and/or precedence constraints. The Disks problem can be formulated ...
Exploiting Application Tunability for Efficient, Predictable Resource Management in Parallel and Distributed Systems
 In Proc. 13th Intl. Parallel Processing Symposium
, 1999
"... this paper, we propose a novel approach ..."