Results 1–10 of 30
Approximating Minimum Feedback Sets and Multicuts in Directed Graphs
 ALGORITHMICA, 1998
Abstract

Cited by 103 (3 self)
This paper deals with approximating feedback sets in directed graphs. We consider two related problems: the weighted feedback vertex set (fvs) problem and the weighted feedback edge set (fes) problem. In the fvs (resp. fes) problem, one is given a directed graph with weights (each of which is at least 1) on the vertices (resp. edges), and is asked to find a subset of vertices (resp. edges) with minimum total weight that intersects every directed cycle in the graph. These problems are among the classical NP-hard problems and have many applications. We also consider a generalization of these problems: subset-fvs and subset-fes, in which the feedback set has to intersect only a subset of the directed cycles in the graph. This subset consists of all the cycles that go through a distinguished input subset of vertices and edges, denoted by X. This generalization is also NP-hard even when |X| = 2. We present approximation algorithms for the subset-fvs and subset-fes problems. The first algorithm we present achieves an approximation factor of O(log² |X|). The second algorithm achieves an approximation factor of O(min(log τ log log τ, log n log log n)), where τ is the value of the optimum fractional solution of the problem at hand, and n is the number of vertices in the graph. We also define a multicut problem in a special type of directed networks, which we call circular networks, and show that the subset-fes and subset-fvs problems are equivalent to this multicut problem. Another contribution of our paper is a combinatorial algorithm that computes a (1 + ε)-approximation to the fractional optimal feedback vertex set. Computing the approximate solution is much simpler and more efficient than general linear programming methods. All of our algorithms use this approximate solution.
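As a concrete illustration of the fvs definition above, the sketch below (our own illustration, not from the paper; all names are ours) verifies whether a candidate vertex set intersects every directed cycle, by deleting the set and testing the remaining graph for acyclicity with Kahn's algorithm.

```python
from collections import defaultdict

def is_acyclic(n, edges):
    """Kahn's algorithm: True iff the directed graph on vertices 0..n-1 is acyclic."""
    indeg = [0] * n
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n  # every vertex was peeled off iff there is no cycle

def is_feedback_vertex_set(n, edges, fvs):
    """True iff removing the vertices in `fvs` leaves the graph acyclic."""
    fvs = set(fvs)
    remaining = [(u, v) for u, v in edges if u not in fvs and v not in fvs]
    return is_acyclic(n, remaining)

# 3-cycle 0 -> 1 -> 2 -> 0 plus edge 2 -> 3: removing vertex 1 breaks the only cycle.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(is_feedback_vertex_set(4, edges, {1}))    # True
print(is_feedback_vertex_set(4, edges, set()))  # False
```

This only checks feasibility of a candidate set; the paper's contribution is finding low-weight such sets approximately, which this sketch does not attempt.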
Continuation-Based Multiprocessing
 1980
Abstract

Cited by 79 (0 self)
Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g., a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs), and process saving may be obtained through the use of the catch operator. The use of catch, in particular, allows an elegant treatment of process saving. We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design. 1. Introduction In the past few years, researchers have made progress in understanding the mecha...
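Python lacks the first-class continuations that `catch` provides, but generators can approximate the paper's notion of process saving: a paused generator is a saved process that a tiny kernel can store and resume. A minimal sketch under that substitution (all names are illustrative, not from the paper):

```python
from collections import deque

def kernel(processes):
    """Tiny round-robin 'kernel': a saved process is just a paused generator.
    Yielding stands in for the paper's catch-based process save."""
    ready = deque(processes)
    trace = []
    while ready:
        proc = ready.popleft()
        try:
            trace.append(next(proc))  # resume the saved process
            ready.append(proc)        # save it again at its yield point
        except StopIteration:
            pass                      # process finished; drop it
    return trace

def worker(name, steps):
    """A toy process that does `steps` units of work, yielding after each."""
    for i in range(steps):
        yield f"{name}:{i}"

print(kernel([worker("A", 2), worker("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

As in the paper, the kernel itself stays very small: the scheduling policy is just the order in which saved processes are re-enqueued.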
Use of A Taxonomy of Security Faults
 1996
Abstract

Cited by 75 (4 self)
Security in computer systems is important so as to ensure reliable operation and to protect the integrity of stored information. Faults in the implementation of critical components can be exploited to breach security and penetrate a system. These faults must be identified, detected, and corrected to ensure reliability and safeguard against denial of service, unauthorized modification of data, or disclosure of information. We define a classification of security faults in the Unix operating system. We state the criteria used to categorize the faults and present examples of the different fault types. We present the design and implementation details of a prototype database to store vulnerability information collected from different sources. The data is organized according to our fault categories. The information in the database can be applied in static audit analysis of systems, intrusion detection, and fault detection. We also identify and describe software testing methods that should be effective in detecting different faults in our classification scheme.
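The category-indexed vulnerability store the abstract describes might look like the following minimal sketch; the category names, fields, and record contents here are hypothetical placeholders, not the paper's actual taxonomy or data.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Vulnerability:
    ident: str          # hypothetical record id
    category: str       # fault category, e.g. "condition-validation" (illustrative)
    component: str      # affected Unix component
    description: str

class FaultDatabase:
    """Stores vulnerability records indexed by fault category."""
    def __init__(self):
        self._by_category = defaultdict(list)

    def add(self, vuln):
        self._by_category[vuln.category].append(vuln)

    def by_category(self, category):
        return list(self._by_category[category])

db = FaultDatabase()
db.add(Vulnerability("V-001", "condition-validation", "sendmail",
                     "missing bounds check on input"))
db.add(Vulnerability("V-002", "synchronization", "mkdir",
                     "race between check and use of a temporary file"))
print([v.ident for v in db.by_category("synchronization")])  # ['V-002']
```

Organizing records by fault category is what makes the downstream uses the paper lists (audit analysis, intrusion detection, targeted testing) straightforward queries.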
Feedback set problems
 HANDBOOK OF COMBINATORIAL OPTIMIZATION, 1999
Abstract

Cited by 39 (1 self)
This paper is a short survey of feedback set problems. It will be published in the Handbook of Combinatorial Optimization.
Location Consistency: A New Memory Model and Cache Consistency Protocol
 IEEE Transactions on Computers, 1998
Abstract

Cited by 37 (4 self)
Existing memory models and cache consistency protocols assume the memory coherence property which requires that all processors observe the same ordering of write operations to the same location. In this paper, we address the problem of defining a memory model that does not rely on the memory coherence assumption, and also the problem of designing a cache consistency protocol based on such a memory model. We define a new memory consistency model, called Location Consistency (LC), in which the state of a memory location is modeled as a partially ordered multiset (pomset) of write and synchronization operations. We prove that LC is strictly weaker than existing memory models, but is still equivalent to stronger models for parallel programs that have no data races. We also introduce a new multiprocessor cache consistency protocol based on the LC memory model. We prove that this LC protocol obeys the LC memory model. The LC protocol does not need to enforce single write ownership of memory...
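A heavily simplified sketch of the pomset idea (our illustration, not the paper's formal model): the state of one location is a multiset of writes plus a happens-before partial order, and a read may legally return the value of any write that no other write supersedes, i.e. any maximal element of the pomset.

```python
class LocationState:
    """Simplified pomset state for a single memory location (illustrative only)."""
    def __init__(self):
        self.writes = []   # (write_id, value): a multiset of write operations
        self.order = set() # happens-before pairs (earlier_id, later_id)

    def write(self, wid, value, after=()):
        self.writes.append((wid, value))
        for earlier in after:
            self.order.add((earlier, wid))

    def legal_read_values(self):
        """Values of writes not superseded by any later write (maximal elements)."""
        superseded = {a for (a, b) in self.order}
        return sorted(v for (wid, v) in self.writes if wid not in superseded)

loc = LocationState()
loc.write("w1", 1)
loc.write("w2", 2, after=["w1"])  # same-processor program order: w1 before w2
loc.write("w3", 3)                # concurrent write from another processor
print(loc.legal_read_values())    # [2, 3]: w1 is superseded by w2; w2 and w3 are unordered
```

Because w2 and w3 are unordered in the pomset, different processors may observe different "latest" values, which is exactly the coherence assumption the paper drops.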
Minimizing Flow Time Nonclairvoyantly
 In Proceedings of the 38th Symposium on Foundations of Computer Science, 1997
Abstract

Cited by 31 (9 self)
We consider the problem of scheduling a collection of dynamically arriving jobs with unknown execution times so as to minimize the average response/flow time. This is the classic CPU scheduling problem faced by time-sharing operating systems. In the standard 3-field scheduling notation this is the nonclairvoyant version of 1 | pmtn, r_j | Σ F_j. It's easy to see that every algorithm that doesn't unnecessarily idle the processor is at worst n-competitive, where n is the number of jobs. Yet there is no known nonclairvoyant algorithm, deterministic or randomized, with a competitive ratio provably o(n). In this paper we give a randomized nonclairvoyant algorithm, RMLF, that has competitive ratio Θ(log n log log n) against an adaptive adversary. RMLF is a slight variation of the multilevel feedback (MLF) algorithm used by the Unix operating system, further justifying the adoption of this algorithm. Motwani, Phillips, and Torng [12] showed that every randomized nonclairvoyant algorithm...
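For context, plain deterministic multilevel feedback can be sketched as follows. This is the textbook MLF that RMLF perturbs, not the paper's randomized algorithm, and for simplicity all jobs are assumed to arrive at time 0 (names and the quantum-doubling rule are illustrative assumptions).

```python
from collections import deque

def mlf_schedule(jobs, base_quantum=1):
    """Plain multilevel feedback: a job that exhausts level i's quantum is
    demoted to level i+1, whose quantum is twice as large. `jobs` is a list
    of (name, processing_time) pairs, all arriving at time 0.
    Returns a dict mapping job name to completion time."""
    queues = [deque(jobs)]  # level 0 queue
    time, finish = 0, {}
    level = 0
    while level < len(queues):
        q = queues[level]
        if not q:
            level += 1          # this level drained; move to the next
            continue
        name, need = q.popleft()
        quantum = base_quantum << level   # quantum doubles per level
        run = min(quantum, need)
        time += run
        if need - run == 0:
            finish[name] = time           # job completed within its quantum
        else:
            if level + 1 == len(queues):
                queues.append(deque())
            queues[level + 1].append((name, need - run))  # demote

# A short job finishes early; a long job is repeatedly demoted.
# mlf_schedule([("short", 1), ("long", 4)]) -> {'short': 1, 'long': 5}
```

The point of MLF (and of RMLF's randomized quanta) is exactly this behavior: short jobs escape quickly from the low levels, keeping average flow time small without knowing execution times in advance.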
Virtual Memory and Backing Storage Management in Multiprocessor Operating Systems Using Object-Oriented Design Techniques
 Proceedings of OOPSLA '89, 1989
Abstract

Cited by 21 (2 self)
The Choices operating system architecture [?, ?, ?] uses class hierarchies and objectoriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry Parallel Computing Laboratory at the University of Illinois to study the performance of algorithms, mechanisms, and policies for parallel systems. This paper describes the architectural design and class hierarchy of the Choices memory and secondary storage management system.
Location Consistency: Stepping Beyond the Barriers of Memory Coherence and Serializability
 McGill University, School of Computer, 1994
Abstract

Cited by 19 (4 self)
A memory consistency model represents a binding "contract" between software and hardware in a shared-memory multiprocessor system. It is important to provide a memory consistency model that is easy to understand and that also facilitates efficient implementation. The memory consistency model that has been most commonly used in past work is sequential consistency (SC), which requires the execution of a parallel program to appear as some interleaving of the memory operations on a sequential machine. To reduce the rigid constraints of the SC model, several relaxed consistency models have been proposed, notably weak ordering (or weak consistency) (WC), release consistency (RC), data-race-free-0, and data-race-free-1. These models allow performance optimizations to be correctly applied, while guaranteeing that sequential consistency is retained for a specified class of programs. We call these models SC-derived models. A central assumption in the definitions of all SC-derived memory consist...
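The "some interleaving of the memory operations" definition of SC can be made concrete with a toy litmus test (our illustration, not from the thesis): enumerating every program-order-preserving interleaving of two threads shows that the outcome (r1, r2) = (0, 0) is impossible under SC, which is precisely the kind of outcome relaxed models may permit.

```python
from itertools import permutations

# Classic two-thread litmus test: each thread writes one location, then reads the other.
T1 = [("write", "x", 1), ("read", "y", "r1")]
T2 = [("write", "y", 1), ("read", "x", "r2")]

def sc_outcomes(t1, t2):
    """All (r1, r2) outcomes reachable by SC, i.e. by interleavings that
    preserve each thread's program order."""
    outcomes = set()
    for picks in set(permutations([0] * len(t1) + [1] * len(t2))):
        mem = {"x": 0, "y": 0}
        regs = {}
        i = j = 0
        for p in picks:               # replay one interleaving on a sequential memory
            op, loc, arg = t1[i] if p == 0 else t2[j]
            if p == 0:
                i += 1
            else:
                j += 1
            if op == "write":
                mem[loc] = arg
            else:
                regs[arg] = mem[loc]  # read returns the single current value
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

print(sorted(sc_outcomes(T1, T2)))  # [(0, 1), (1, 0), (1, 1)]; (0, 0) is absent
```

Relaxed, SC-derived models allow reorderings that make (0, 0) observable for racy programs, while guaranteeing SC behavior for the data-race-free programs the abstract mentions.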