Results 1 - 10 of 16
Concurrency attacks
- In the Fourth USENIX Workshop on Hot Topics in Parallelism (HOTPAR ’12), 2012
"... Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. In this paper, we present an in-depth study of concurrency attacks and how they may affect existing defenses. Our study yields several interesting findings. For instanc ..."
Abstract - Cited by 10 (5 self)
Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. In this paper, we present an in-depth study of concurrency attacks and how they may affect existing defenses. Our study yields several interesting findings. For instance, we find that concurrency attacks can corrupt non-pointer data, such as user identifiers, which existing memory-safety defenses cannot handle. Inspired by our findings, we propose new defense directions and fixes to existing defenses.
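To make the non-pointer-data finding concrete, here is a minimal C sketch (illustrative only, not code from the paper; the name session_uid and the timing are invented): every access is well-typed and in bounds, so a memory-safety defense has nothing to flag, yet the race can flip the effective user to root.

```c
/* Hypothetical sketch: a data race on a non-pointer field.
 * Compile with: cc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int session_uid = 1000;              /* unprivileged user */

static void *worker(void *arg) {
    if (session_uid != 0) {                  /* check */
        usleep(100);                         /* timing window */
        printf("acting as uid %d\n", session_uid);  /* use: may be 0 now */
    }
    return NULL;
}

static void *racer(void *arg) {
    session_uid = 0;                         /* concurrent, unsynchronized write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, racer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```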
Determinism Is Overrated: What Really Makes Multithreaded Programs Hard to Get Right and What Can Be Done about It?
"... Our accelerating computational demand and the rise of multicore hardware have made parallel programs, especially shared-memory multithreaded programs, increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. Conventional wisdom h ..."
Abstract - Cited by 7 (2 self)
Our accelerating computational demand and the rise of multicore hardware have made parallel programs, especially shared-memory multithreaded programs, increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. Conventional wisdom has attributed these difficulties to nondeterminism, and researchers have recently dedicated much effort to bringing determinism into multithreading. In this paper, we argue that determinism is not as useful as commonly perceived: it is neither sufficient nor necessary for reliability. We present our view on why multithreaded programs are difficult to get right, describe a promising approach we call stable multithreading to dramatically improve reliability, and summarize our last four years’ research on building and applying stable multithreading systems.
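A small illustration of the "not necessary" direction (an editorial example, not taken from the paper): the following C program is schedule-nondeterministic, yet every interleaving produces the same correct result, because the atomic additions commute.

```c
/* Sketch: nondeterministic interleaving, deterministic (correct) outcome. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_int sum;

static void *add(void *arg) {
    atomic_fetch_add(&sum, (int)(intptr_t)arg);  /* order of adds varies */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (intptr_t i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, add, (void *)(i + 1));
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%d\n", atomic_load(&sum));           /* always prints 10 */
    return 0;
}
```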
Input-Covering Schedules for Multithreaded Programs
"... We propose constraining multithreaded execution to small sets of input-covering schedules, which we define as follows: given a program P, we say that a set of schedules Σ covers all inputs of program P if, when given any valid input, P’s execution can be constrained to some schedule in Σ and still p ..."
Abstract - Cited by 6 (2 self)
We propose constraining multithreaded execution to small sets of input-covering schedules, which we define as follows: given a program P, we say that a set of schedules Σ covers all inputs of program P if, when given any valid input, P’s execution can be constrained to some schedule in Σ and still produce a semantically valid result. Our approach is to first compute a small Σ for a given program P, and then, at runtime, constrain P’s execution to always follow some schedule in Σ, and never deviate. We have designed an algorithm that uses symbolic execution to systematically enumerate a set of input-covering schedules, Σ. To deal with programs that run for an unbounded length of time, we partition execution into bounded epochs, find input-covering schedules for each epoch in isolation, and then piece the schedules together at runtime. We have implemented this algorithm and a constrained execution runtime, and we report early results. Our approach has the following advantage: because all possible runtime schedules are known a priori, we can seek to validate the program by thoroughly testing each schedule in Σ, in isolation, without needing to reason about the huge space of thread interleavings that arises due to conventional nondeterministic execution.
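A minimal sketch of the runtime side of this idea (assumed mechanics, not the paper's implementation): each thread blocks at its synchronization points until the precomputed schedule grants it the next step, so execution can only ever follow a schedule that was enumerated, and tested, in advance.

```c
/* Sketch: constraining execution to one precomputed schedule. */
#include <pthread.h>
#include <stdint.h>

static const int schedule[] = {0, 1, 0, 1};   /* one enumerated schedule */
static const int nsteps = 4;
static int step = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

/* Called by thread `tid` before each of its synchronization operations. */
static void enforce_schedule(int tid) {
    pthread_mutex_lock(&m);
    while (step < nsteps && schedule[step] != tid)
        pthread_cond_wait(&cv, &m);
    if (step < nsteps)
        step++;                                /* this operation happens now */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
}

static void *run(void *arg) {
    int tid = (int)(intptr_t)arg;
    enforce_schedule(tid);    /* e.g., guards a lock acquisition */
    enforce_schedule(tid);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, run, (void *)(intptr_t)0);
    pthread_create(&b, NULL, run, (void *)(intptr_t)1);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```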
Determinism Is Not Enough: Making Parallel Programs Reliable with Stable Multithreading
"... Our accelerating computational demand and the rise of multicore hardware have made parallel programs, especially shared-memory multithreaded programs, increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. Conventional wisdom h ..."
Abstract - Cited by 5 (0 self)
Our accelerating computational demand and the rise of multicore hardware have made parallel programs, especially shared-memory multithreaded programs, increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. Conventional wisdom has attributed these difficulties to nondeterminism (i.e., repeated executions of the same program on the same input may show different behaviors), and researchers have recently dedicated much effort to bringing determinism into multithreading. In this article, we argue that determinism is not as useful as commonly perceived: it is neither sufficient nor necessary for reliability. We present our view on why multithreaded programs are difficult to get right, describe a promising approach we call stable multithreading to dramatically improve reliability, and summarize our last four years’ research on building and applying stable multithreading systems.
Make parallel programs reliable with stable multithreading
"... Our accelerating computational demand and the rise of multicore hardware have made parallel programs increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. In this article, we provide our view on why parallel programs, specific ..."
Abstract - Cited by 2 (1 self)
Our accelerating computational demand and the rise of multicore hardware have made parallel programs increasingly pervasive and critical. Yet, these programs remain extremely difficult to write, test, analyze, debug, and verify. In this article, we provide our view on why parallel programs, specifically multithreaded programs, are difficult to get right. We present a promising approach we call stable multithreading to dramatically improve reliability, and summarize our last four years’ research on building and applying stable multithreading systems.
Avoiding State-Space Explosion in Multithreaded Programs with Input-Covering Schedules and Symbolic Execution
- 2014
"... This dissertation makes two high-level contributions: First, we propose an algorithm to perform symbolic execution of multithreaded programs from arbitrary program contexts. We argue that this can enable more efficient symbolic exploration of deep code paths in multithreaded programs by allowing the ..."
Abstract - Cited by 1 (1 self)
This dissertation makes two high-level contributions: First, we propose an algorithm to perform symbolic execution of multithreaded programs from arbitrary program contexts. We argue that this can enable more efficient symbolic exploration of deep code paths in multithreaded programs by allowing the symbolic engine to jump directly to program contexts of interest. We are the first to attack this problem. Second, we propose constraining multithreaded executions to small sets of input-covering schedules, which are defined as follows: given a program P, we say that a set of schedules Σ covers all inputs of program P if, when given any input, P’s execution can be constrained to some schedule in Σ and still produce a semantically valid result. Our approach is to first compute a small Σ for a given program P, and then, at runtime, constrain P’s execution to always follow some schedule in Σ, and never deviate. This approach has the following advantage: because all possible runtime schedules are known a priori, we can seek to validate the program by thoroughly verifying each schedule in Σ, in isolation, without needing to reason about the huge space of thread interleavings that arises due to conventional nondeterministic execution.
Concurrency Attacks
"... Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. Questions such as whether these attacksarefeasibleandwhatcharacteristicstheyhaveremain largely unknown. In this paper, we present a preliminary study of concurrency att ..."
Abstract
Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. Questions such as whether these attacks are feasible and what characteristics they have remain largely unknown. In this paper, we present a preliminary study of concurrency attacks and the security implications of real-world concurrency errors. Our study yields several interesting findings. For instance, we observe that the exploitability of a concurrency error depends on the duration of the timing window within which the error may occur. We further observe that attackers can increase this window through carefully crafted inputs. We also find that four out of five commonly used sequential defenses become unsafe when applied to concurrent programs. Based on our findings, we propose new defense directions and fixes to existing defenses.
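A hedged sketch of the window-widening observation (hypothetical code, not from the paper; session_uid and privileged_op are invented names): the time between the check and the use grows with the input length, so a crafted long input makes the race far easier to win.

```c
/* Sketch: input-proportional work between check and use widens the window. */
#include <ctype.h>
#include <stddef.h>

extern volatile int session_uid;        /* racily written by another thread */
extern void privileged_op(int uid);     /* hypothetical privileged action */

void handle_request(const char *input, char *buf, size_t len) {
    if (session_uid != 0) {             /* check */
        for (size_t i = 0; i < len; i++)        /* window widens with len */
            buf[i] = (char)tolower((unsigned char)input[i]);
        privileged_op(session_uid);     /* use: uid may have become 0 */
    }
}
```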
Practically Making Threads Deterministic and Stable
"... Multithreaded programs are hard to get right. A key reason is that the contract between developers and runtimes grants exponentially many schedules to the runtimes. Two approaches improve correctness by reducing schedules. Deterministic multithreading (DMT) reduces schedules for each input down to o ..."
Abstract
Multithreaded programs are hard to get right. A key reason is that the contract between developers and runtimes grants exponentially many schedules to the runtimes. Two approaches improve correctness by reducing schedules. Deterministic multithreading (DMT) reduces schedules for each input down to one. Stable multithreading (SMT) reduces schedules for all inputs, increasing coverage of checking tools and providing broader repeatability than just determinism (e.g., “similar inputs lead to similar behaviors” and “adding debug printf doesn’t mask bugs”). However, despite years of effort, two difficult questions remain open. First, can the DMT and SMT approaches consistently achieve good performance on a wide range of programs? Second, can the approaches be made simple and adoptable? These concerns are not helped much by the limited evaluation of prior systems. We present PARROT, a simple, practical runtime that makes threads deterministic and stable by offering a new contract to developers. By default, it runs synchronizations in each thread in the well-defined round-robin order, vastly reducing schedules and providing broad repeatability. When default schedules are slow, it allows advanced developers to write intuitive performance hints in code to switch or add schedules for speed. We believe this “meet in the middle” contract makes it easier to write correct, efficient programs. We further present an ecosystem formed by integrating PARROT with a model checker called DBUG. This ecosystem is more effective than either system alone: DBUG checks the schedules that matter to PARROT, and PARROT greatly increases the coverage of DBUG. Results on a diverse set of 108 programs, roughly one order of magnitude more than any prior evaluation, show that PARROT is easy to use (averaging 1.2 lines of hints per program); achieves low overhead (8.3% for 55 real-world programs and 19.5% for all 108 programs), many times better than two prior systems; scales well to the maximum allowed cores on a 24-core server and to different scales/types of workloads; and increases the coverage of DBUG by at least 10^4 for most programs. PARROT’s source code, entire benchmark suite, and results are at
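A toy sketch of the default-schedule idea (greatly simplified; this is not PARROT's actual mechanism or API): synchronization points are serialized in a fixed round-robin order by a token, so every run sees the same order regardless of OS scheduling.

```c
/* Sketch: round-robin serialization of synchronization points. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 3

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int turn = 0;

/* Wrap each synchronization point: wait for the round-robin token,
 * perform the operation, then pass the token to the next thread. */
static void sync_in_turn(int tid) {
    pthread_mutex_lock(&m);
    while (turn != tid)
        pthread_cond_wait(&cv, &m);
    printf("thread %d syncs\n", tid);    /* deterministic order: 0, 1, 2 */
    turn = (turn + 1) % NTHREADS;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
}

static void *worker(void *arg) {
    sync_in_turn((int)(intptr_t)arg);
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (intptr_t i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```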
Interference-Free Regions and Their Application to Compiler Optimization and Data-Race Detection
- 2012
"... Programming languages must be defined precisely so that programmers can reason carefully about the behavior of their code and language implementers can provide correct and efficient compilers and interpreters. However, until quite recently, mainstream languages such as Java and C++ did not specify e ..."
Abstract
Programming languages must be defined precisely so that programmers can reason carefully about the behavior of their code and language implementers can provide correct and efficient compilers and interpreters. However, until quite recently, mainstream languages such as Java and C++ did not specify exactly how programs that use shared-memory multithreading should behave (e.g., when do writes by one thread become visible to another thread?). The memory model of a programming language addresses such questions. The recently approved memory model for C++ effectively requires programs to be “data-race-free”: all executions of the program must have the property that any conflicting memory accesses in different threads are ordered by synchronization. To meet this requirement, programmers must ensure that threads properly coordinate accesses to shared memory using synchronization mechanisms such as mutual-exclusion locks. We introduce a new abstraction for reasoning about data-race-free programs: interference-free regions. An interference-free region, or IFR, is a region surrounding a memory access during which no other thread can modify the accessed memory location without causing a data race. Specifically, the interference-free region for a memory access extends from the
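A short C illustration of an IFR-enabled optimization (an assumed example in the spirit of the abstract, not taken from it): within the region where no other thread may write x without racing, the compiler can prove both loads return the same value and reuse the first.

```c
/* Sketch: two reads of x inside one interference-free region. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int x;                 /* shared; protected by m in a race-free program */

int sum_twice(void) {
    pthread_mutex_lock(&m);
    int a = x;                /* load x */
    int b = x;                /* in a's IFR: may be optimized to b = a */
    pthread_mutex_unlock(&m);
    return a + b;
}
```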
Trace Driven Dynamic Deadlock Detection and Reproduction
"... Dynamic analysis techniques have been proposed to detect potential deadlocks. Analyzing and comprehending each po-tential deadlock to determine whether the deadlock is feasi-ble in a real execution requires significant programmer ef-fort. Moreover, empirical evidence shows that existing anal-yses ar ..."
Abstract
Dynamic analysis techniques have been proposed to detect potential deadlocks. Analyzing and comprehending each potential deadlock to determine whether the deadlock is feasible in a real execution requires significant programmer effort. Moreover, empirical evidence shows that existing analyses are quite imprecise. This imprecision further voids the manual effort invested in reasoning about non-existent defects. In this paper, we address the problems of the imprecision of existing analyses and the subsequent manual effort necessary to reason about deadlocks. We propose a novel approach for deadlock detection by designing a dynamic analysis that intelligently leverages execution traces. To reduce the manual effort, we replay the program by making the execution follow a schedule derived from the observed trace. For a real deadlock, its feasibility is automatically verified if the replay causes the execution to deadlock. We have implemented our approach as part of WOLF and have analyzed many large (up to 160 KLoC) Java programs. Our experimental results show that we are able to identify 74% of the reported defects as true (or false) positives automatically, leaving very few defects for manual analysis. The overhead of our approach is negligible, making it a compelling tool for practical adoption.
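For readers unfamiliar with the defect pattern such tools report, here is the classic lock-order (ABBA) deadlock; the paper analyzes Java programs, so this C sketch only shows the shape of the bug, with the sleeps standing in for the replay scheduler that forces the buggy interleaving.

```c
/* Sketch: ABBA deadlock, made near-certain by the forced pauses. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    pthread_mutex_lock(&A);
    usleep(1000);                 /* replay scheduler would pause here */
    pthread_mutex_lock(&B);       /* blocks forever if t2 already holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg) {
    pthread_mutex_lock(&B);
    usleep(1000);
    pthread_mutex_lock(&A);       /* ...and t2 blocks on A: deadlock */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);        /* with the pauses, this never returns */
    pthread_join(y, NULL);
    return 0;
}
```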