Exterminator: automatically correcting memory errors with high probability (2007)

by Gene Novark, Emery D. Berger, Benjamin G. Zorn
Results 1 - 10 of 68

Automatically Patching Errors in Deployed Software

by Jeff H. Perkins, Sunghun Kim, Sam Larsen, Saman Amarasinghe, Jonathan Bachrach, Michael Carbin, Carlos Pacheco, Frank Sherwood, Stelios Sidiroglou, Greg Sullivan, Weng-Fai Wong, Yoav Zibin, Michael D. Ernst, Martin Rinard, 2009
"... We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to ..."
Abstract - Cited by 102 (20 self) - Add to MetaCart
We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to learn invariants that characterize the application’s normal behavior, (2) uses error detectors to monitor the execution to detect failures, (3) identifies violations of learned invariants that occur during failed executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or the flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch. ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly ...
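
The patch-generation step (4) is the crux: ClearView turns a learned invariant into a patch that forces the invariant to hold. As a rough illustration only, here is a hand-written C sketch of that repair pattern; the Request type, process_request, and the non-null invariant are hypothetical, and ClearView itself operates on stripped x86 binaries, not on source code.

```c
/* Minimal sketch (not ClearView's actual mechanism): enforce a learned
 * invariant "r != NULL && r->payload != NULL" at a call site that crashed
 * when it was violated. Request and process_request are hypothetical. */
#include <stdio.h>

typedef struct { const char *payload; } Request;

static void process_request(const Request *r) {
    printf("processing: %s\n", r->payload);
}

/* Candidate repair patch: if the learned invariant is violated,
 * change control flow (skip the call) instead of crashing. */
static void process_request_patched(const Request *r) {
    if (r == NULL || r->payload == NULL) {   /* invariant check */
        fprintf(stderr, "invariant violated; suppressing call\n");
        return;                              /* repair: alter control flow */
    }
    process_request(r);
}

int main(void) {
    Request ok = { "hello" };
    process_request_patched(&ok);    /* normal path */
    process_request_patched(NULL);   /* would have crashed; now repaired */
    return 0;
}
```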

Automated fixing of programs with contracts

by Yi Wei, Yu Pei, Carlo A. Furia, Lucas S. Silva, Stefan Buchholz, Bertrand Meyer, Andreas Zeller - In Proceedings of the 19th International Symposium on Software Testing and Analysis, 2010
"... In program debugging, finding a failing run is only the first step; what about correcting the fault? Can we automate the second task as well as the first? The AutoFix-E tool au-tomatically generates and validates fixes for software faults. The key insights behind AutoFix-E are to rely on contracts p ..."
Abstract - Cited by 72 (7 self) - Add to MetaCart
In program debugging, finding a failing run is only the first step; what about correcting the fault? Can we automate the second task as well as the first? The AutoFix-E tool automatically generates and validates fixes for software faults. The key insights behind AutoFix-E are to rely on contracts present in the software to ensure that the proposed fixes are semantically sound, and on state diagrams using an abstract notion of state based on the boolean queries of a class. Out of 42 faults found by an automatic testing tool in two widely used Eiffel libraries, AutoFix-E proposes successful fixes for 16 faults. Submitting some of these faults to experts shows that several of the proposed fixes are identical or close to fixes proposed by humans.
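
AutoFix-E targets Eiffel and its built-in contracts. As a loose illustration in C, the sketch below uses an assert as the contract and a boolean query over object state to guard a faulty routine; Stack, is_empty, and remove_item are hypothetical names, and the real tool synthesizes and validates such guards automatically.

```c
/* Minimal sketch of contract-guided repair, in C with an assert standing in
 * for an Eiffel precondition. All names here are hypothetical. */
#include <assert.h>
#include <stdio.h>

typedef struct { int items[16]; int count; } Stack;

static int is_empty(const Stack *s) { return s->count == 0; }  /* boolean query */

/* Faulty usage: calling this on an empty stack violates its contract. */
static void remove_item(Stack *s) {
    assert(!is_empty(s));     /* contract: require not is_empty */
    s->count--;
}

/* Candidate fix: guard the failing call using the violated contract's
 * boolean query, which characterizes the bad state. */
static void remove_item_fixed(Stack *s) {
    if (is_empty(s)) return;  /* establish the precondition first */
    remove_item(s);
}

int main(void) {
    Stack s = { {0}, 0 };
    remove_item_fixed(&s);    /* no contract violation on the empty stack */
    printf("count = %d\n", s.count);
    return 0;
}
```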

Citation Context

...lated, the system tries to restore them from a faulty state by looking at the differences between the two states. ClearView can prevent the damaging effects of malicious code injections. Exterminator [21] is another tool for dynamic patching of memory errors such as out-of-bounds accesses. It monitors repeated runs of a program and allocates extra memory to accommodate out-of-bounds references appropriately. ...

Binary stirring: Self-randomizing instruction addresses of legacy x86 binary code

by Richard Wartell, Vishwath Mohan, Kevin W. Hamlen, Zhiqiang Lin - In Proc. ACM Conf. Computer and Communications Security, 2012
"... Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack ..."
Abstract - Cited by 60 (4 self) - Add to MetaCart
Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack that reuses instruction gadgets from the static binary code of victim applications. This paper introduces binary stirring, a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enable STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes.
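
STIR's defining property is that the same binary exposes different instruction addresses on each launch. The C sketch below only mimics that property with a function-pointer table shuffled at startup; the actual system rewrites basic-block addresses inside x86 binaries at load time, which no source-level analogue fully captures.

```c
/* Loose analogue of load-time address randomization: the concrete address
 * reached through slot 0 of an indirect table differs across runs, so an
 * attacker's hard-coded target is wrong on most launches. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void block_a(void) { puts("block a"); }
static void block_b(void) { puts("block b"); }
static void block_c(void) { puts("block c"); }

int main(void) {
    void (*blocks[])(void) = { block_a, block_b, block_c };
    size_t n = sizeof blocks / sizeof blocks[0];

    /* "Stir" at startup: Fisher-Yates shuffle, seeded per launch. */
    srand((unsigned)time(NULL));
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        void (*tmp)(void) = blocks[i];
        blocks[i] = blocks[j];
        blocks[j] = tmp;
    }

    blocks[0]();   /* which block runs here is unpredictable per launch */
    return 0;
}
```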

Citation Context

...e detect the attack. DieHard [7] is a simplified multi-variant framework that uses heap object randomization to make the variants generate different outputs in case of error or attack. Exterminator [40] extends this idea to derive runtime patches and automatically fix program bugs. Multi-variant systems frustrate ROP attacks by forcing the attacker to simultaneously subvert all the running variants,...

Data space randomization

by Sandeep Bhatkar, R. Sekar - In Proc. Int. Conf. on Detection of Intrusions and Malware, and Vulnerability Assessment, 2008
"... Abstract. Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vuln ..."
Abstract - Cited by 41 (2 self) - Add to MetaCart
Abstract. Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
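
Concretely, DSR can be realized by XOR-masking each data object with its own random mask, so raw bytes written by an overflow decode to garbage rather than an attacker-chosen value. The following C sketch hand-codes that masking discipline for one variable; the real system inserts the mask/unmask operations via compile-time instrumentation, and the helper names here are illustrative.

```c
/* Minimal sketch of data space randomization: data is stored XORed with a
 * per-object random mask; only instrumented loads/stores see true values. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint32_t mask;   /* per-object mask, chosen at startup */

static void store(uint32_t *slot, uint32_t value) { *slot = value ^ mask; }
static uint32_t load(const uint32_t *slot)        { return *slot ^ mask; }

int main(void) {
    srand((unsigned)time(NULL));
    mask = (uint32_t)rand();            /* one of up to 2^32 masks */

    uint32_t secret_len;
    store(&secret_len, 16);
    printf("legitimate load: %u\n", load(&secret_len));

    /* Simulated overflow: attacker writes raw bytes, bypassing store(). */
    secret_len = 0xFFFFFFFFu;
    printf("attacker's value decodes to: %u (unpredictable)\n",
           load(&secret_len));
    return 0;
}
```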

Citation Context

...n. ASR techniques that augment AAR with relative address randomization (RAR) [10] are effective against all buffer overflows, including those not involving pointer corruption. The DieHard [7] and DieFast [31] approaches provide randomization-based defense against memory corruption attacks involving heap objects. Randomization techniques with relatively small range of randomization, e.g., PaX with its 16-b...

HardBound: Architectural Support for Spatial Safety of the C Programming Language

by Joe Devietti, Colin Blundell, et al. , 2008
"... The C programming language is at least as well known for its absence of spatial memory safety guarantees (i.e., lack of bounds checking) as it is for its high performance. C’s unchecked pointer arithmetic and array indexing allow simple programming mistakes to lead to erroneous executions, silent da ..."
Abstract - Cited by 40 (8 self) - Add to MetaCart
The C programming language is at least as well known for its absence of spatial memory safety guarantees (i.e., lack of bounds checking) as it is for its high performance. C’s unchecked pointer arithmetic and array indexing allow simple programming mistakes to lead to erroneous executions, silent data corruption, and security vulnerabilities. Many prior proposals have tackled enforcing spatial safety in C programs by checking pointer and array accesses. However, existing software-only proposals have significant drawbacks that may prevent wide adoption, including: unacceptably high runtime overheads, lack of completeness, incompatible pointer representations, or need for non-trivial changes to existing C source code and compiler infrastructure. Inspired by the promise of these software-only approaches, this paper proposes a hardware bounded pointer architectural primitive that supports cooperative hardware/software enforcement of spatial memory safety for C programs. This bounded pointer is a new hardware primitive datatype for pointers that leaves the standard C pointer representation intact, but augments it with bounds information maintained separately and invisibly by the hardware. The bounds are initialized by the software, and they are then propagated and enforced transparently by the hardware, which automatically checks a pointer’s bounds before it is dereferenced. One mode of use requires instrumenting only malloc, which enables enforcement of per-allocation spatial safety for heap-allocated objects for existing binaries. When combined with simple intra-procedural compiler instrumentation, hardware bounded pointers enable a low-overhead approach for enforcing complete spatial memory safety in unmodified C programs.
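
A software analogue makes the bounded-pointer idea concrete: carry base and size alongside the pointer and check them on every dereference. In HardBound the metadata lives in hardware and the check is performed by the pipeline; the struct and accessor names in this C sketch are hypothetical stand-ins.

```c
/* Minimal software analogue of a bounded pointer: explicit base/count
 * metadata plus a checked dereference. HardBound keeps the metadata
 * invisible in hardware; here it is an ordinary struct. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int   *ptr;    /* ordinary C pointer, representation unchanged */
    int   *base;   /* metadata: start of the allocation */
    size_t count;  /* metadata: number of elements */
} bounded_int_ptr;

static bounded_int_ptr bmalloc(size_t count) {
    int *p = malloc(count * sizeof *p);
    return (bounded_int_ptr){ p, p, p ? count : 0 };
}

/* The check a HardBound-style pipeline would perform on dereference. */
static int *bderef(bounded_int_ptr bp, size_t i) {
    if (bp.ptr + i < bp.base || bp.ptr + i >= bp.base + bp.count) {
        fprintf(stderr, "bounds violation at index %zu\n", i);
        exit(1);
    }
    return bp.ptr + i;
}

int main(void) {
    bounded_int_ptr a = bmalloc(4);
    *bderef(a, 3) = 42;                  /* in bounds: fine */
    printf("a[3] = %d\n", *bderef(a, 3));
    *bderef(a, 4) = 7;                   /* spatial error: trapped */
    return 0;
}
```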

Citation Context

... region originally allocated and then accessing a field that is beyond the bounds of the original allocation. To help detect and diagnose spatial errors in C programs, many software-only tools (e.g., [3, 19, 24, 42, 43, 44, 50]) and hardware-supported techniques (e.g., [32, 47, 59, 65]) have been proposed. Although these techniques are useful, many of them do not provide complete spatial memory safety. Likewise, many special...

Improving Software Diagnosability via Log Enhancement

by Ding Yuan, Jing Zheng, Soyeon Park, Yuanyuan Zhou, Stefan Savage
"... Diagnosing software failures in the field is notoriously difficult, in part due to the fundamental complexity of trouble-shooting any complex software system, but further exacerbated by the paucity of information that is typically available in the production setting. Indeed, for reasons of both over ..."
Abstract - Cited by 38 (4 self) - Add to MetaCart
Diagnosing software failures in the field is notoriously difficult, in part due to the fundamental complexity of trouble-shooting any complex software system, but further exacerbated by the paucity of information that is typically available in the production setting. Indeed, for reasons of both overhead and privacy, it is common that only the run-time log generated by a system (e.g., syslog) can be shared with the developers. Unfortunately, the ad-hoc nature of such reports is frequently insufficient for detailed failure diagnosis. This paper seeks to improve this situation within the rubric of existing practice. We describe a tool, LogEnhancer, that automatically “enhances” existing logging code to aid in future post-failure debugging. We evaluate LogEnhancer on eight large, real-world applications and demonstrate that it can dramatically reduce the set of potential root failure causes that must be considered during diagnosis while imposing negligible overheads.
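
The enhancement LogEnhancer performs is, in effect, augmenting each existing log statement with the variable values needed to disambiguate root causes after a failure. The C sketch below shows the before/after shape of one such statement; open_config and the chosen variables are hypothetical, and the real tool selects the values by program analysis.

```c
/* Minimal sketch of log enhancement: the original message alone cannot
 * disambiguate root causes, so the enhanced version records the variable
 * values a developer would need post-failure. Names are hypothetical. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static FILE *open_config(const char *path, int retries) {
    FILE *f = fopen(path, "r");
    if (!f) {
        /* Original, uninformative log statement:
         *   fprintf(stderr, "failed to open config\n");
         * Enhanced: capture the values that narrow down the root cause
         * (which path, which error, how many retries remained). */
        fprintf(stderr,
                "failed to open config: path=%s errno=%d (%s) retries=%d\n",
                path, errno, strerror(errno), retries);
    }
    return f;
}

int main(void) {
    FILE *f = open_config("/nonexistent/app.conf", 3);
    if (f) fclose(f);
    return 0;
}
```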

Citation Context

... some combination of latent software bugs, environmental conditions and/or administrative errors. While considerable effort is spent trying to eliminate such problems before deployment or at run-time [9, 14, 38, 48], the size and complexity of modern systems combined with real time and budgetary constraints on developers have made it increasingly difficult to deliver “bullet-proof” software to end-users. Consequ...

DieHarder: Securing the Heap

by Gene Novark, Emery D. Berger
"... Heap-based attacks depend on a combination of memory management errors and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits, but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment o ..."
Abstract - Cited by 36 (3 self) - Add to MetaCart
Heap-based attacks depend on a combination of memory management errors and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits, but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD, and OpenBSD, and shows that they remain vulnerable to attack. It then presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware, while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.
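
One ingredient of DieHarder's design is randomized placement in an over-provisioned heap, which makes overflow targets unpredictable. The C sketch below shows only that one ingredient with a toy fixed-size pool; the slot counts and sizes are arbitrary, and the real allocator adds metadata separation, randomized reuse, and other defenses.

```c
/* Toy sketch of randomized heap placement: allocations land in random free
 * slots of a sparse pool, so the layout differs per run and a fixed
 * overflow distance rarely hits the intended victim object. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SLOTS 64          /* pool over-provisioned relative to live objects */
#define SLOT_SIZE 32

static char pool[SLOTS][SLOT_SIZE];
static int  used[SLOTS];

static void *rand_alloc(void) {
    for (int tries = 0; tries < 8 * SLOTS; tries++) {
        int i = rand() % SLOTS;           /* random placement, not first-fit */
        if (!used[i]) { used[i] = 1; return pool[i]; }
    }
    return NULL;                          /* pool effectively exhausted */
}

int main(void) {
    srand((unsigned)time(NULL));
    char *a = rand_alloc(), *b = rand_alloc();
    /* Back-to-back allocations are rarely adjacent in memory; their
     * relative distance changes on every run. */
    printf("a=%p b=%p\n", (void *)a, (void *)b);
    return 0;
}
```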

Citation Context

...discuss the limitations of such systems in more detail [32]. However, more sophisticated techniques can limit the vulnerability of systems to repeated attacks. Systems such as Rx [29], Exterminator [24, 25], and ClearView [28] can detect heap errors and adapt the application to cope with them. For example, Exterminator can infer the size of an overflow and pad subsequent allocations to ensure that an ov...

Polymorphing Software by Randomizing Data Structure Layout

by Zhiqiang Lin, Ryan D. Riley, Dongyan Xu
"... Abstract. This paper introduces a new software polymorphism technique that randomizes program data structure layout. This technique will generate different data structure layouts for a program and thus diversify the binary code compiled from the same program source code. This technique can mitigate ..."
Abstract - Cited by 19 (3 self) - Add to MetaCart
Abstract. This paper introduces a new software polymorphism technique that randomizes program data structure layout. This technique will generate different data structure layouts for a program and thus diversify the binary code compiled from the same program source code. This technique can mitigate attacks (e.g., kernel rootkit attacks) that require knowledge about data structure definitions. It is also able to disrupt the generation of data structure-based program signatures. We have implemented our data structure layout randomization technique in the open source compiler collection gcc-4.2.4 and applied it to a number of programs. Our evaluation results show that our technique is able to achieve software binary diversity. We also apply the technique to one operating system data structure in order to foil a number of kernel rootkit attacks. Meanwhile, programs produced by the technique were analyzed by a state-of-the-art data structure inference system and it was demonstrated that reliance on data structure signatures alone may lead to false negatives in malware detection.
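
The effect of layout randomization is that field offsets differ between builds compiled from identical source. The C sketch below fakes this with a preprocessor seed choosing between two orderings; the paper's implementation instead permutes fields inside a modified gcc-4.2.4, and the task struct here is a hypothetical example.

```c
/* Toy sketch of data structure layout randomization: a build-time seed
 * selects the field order, so offsets assumed by a rootkit or a
 * signature generator are wrong for other builds. */
#include <stddef.h>
#include <stdio.h>

#ifndef LAYOUT_SEED
#define LAYOUT_SEED 0   /* e.g., cc -DLAYOUT_SEED=1 ... for another build */
#endif

#if LAYOUT_SEED % 2 == 0
typedef struct { long pid; char name[16]; void *next; } task;
#else
typedef struct { void *next; char name[16]; long pid; } task;
#endif

int main(void) {
    /* A signature keyed to "pid at offset 0" matches only some builds. */
    printf("offsetof(task, pid)  = %zu\n", offsetof(task, pid));
    printf("offsetof(task, next) = %zu\n", offsetof(task, next));
    return 0;
}
```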

Citation Context

...ence detect the attack. DieHard [7] is a simplified multi-variant framework which uses heap object randomization to make the variants generate different outputs in case of an error or attack. DieFast [32] further leverages this idea to derive a runtime patch and automatically fix program bugs. Reverse stack execution [36], i.e., reversing the stack growth direction, can prevent stack smashing and format ...

Why nothing matters: The impact of zeroing

by Xi Yang, Stephen M. Blackburn, Daniel Frampton, Jennifer B. Sartor, Kathryn S. Mckinley - in Proceedings of the 2011 ACM international
"... Memory safety defends against inadvertent and malicious misuse of memory that may compromise program correctness and security. A critical element of memory safety is zero initialization. The direct cost of zero initialization is surprisingly high: up to 12.7%, with average costs ranging from 2.7 to ..."
Abstract - Cited by 17 (8 self) - Add to MetaCart
Memory safety defends against inadvertent and malicious misuse of memory that may compromise program correctness and security. A critical element of memory safety is zero initialization. The direct cost of zero initialization is surprisingly high: up to 12.7%, with average costs ranging from 2.7% to 4.5% on a high performance virtual machine on IA32 architectures. Zero initialization also incurs indirect costs due to its memory bandwidth demands and cache displacement effects. Existing virtual machines either: a) minimize direct costs by zeroing in large blocks, or b) minimize indirect costs by zeroing in the allocation sequence, which reduces cache displacement and bandwidth. This paper evaluates the two widely used zero initialization designs, showing that they make different tradeoffs to achieve very similar performance. Our analysis inspires three better designs: (1) bulk zeroing with cache-bypassing (non-temporal) instructions to reduce the direct and indirect zeroing costs simultaneously, (2) concurrent non-temporal bulk zeroing that exploits parallel hardware to move work off the application’s critical path, and (3) adaptive zeroing, which dynamically chooses between (1) and (2) based on available hardware parallelism. The new software strategies offer speedups sometimes greater than the direct overhead, improving total performance by 3% on average. Our findings invite additional optimizations and microarchitectural support.
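
Design (1), cache-bypassing bulk zeroing, can be sketched directly with SSE2 non-temporal stores, which write zeros to memory without displacing the application's working set from the cache. The snippet below is a minimal x86-only illustration, not the paper's VM-integrated implementation; the buffer size and alignment choices are arbitrary.

```c
/* Minimal sketch of cache-bypassing bulk zeroing with SSE2 non-temporal
 * stores. x86 only; the buffer must be 16-byte aligned and a multiple of
 * 16 bytes long. Compile with a C11 compiler for aligned_alloc. */
#include <emmintrin.h>   /* SSE2: _mm_stream_si128, _mm_setzero_si128 */
#include <stdio.h>
#include <stdlib.h>

static void bulk_zero_nt(void *buf, size_t bytes) {
    __m128i zero = _mm_setzero_si128();
    __m128i *p = (__m128i *)buf;
    for (size_t i = 0; i < bytes / sizeof(__m128i); i++)
        _mm_stream_si128(p + i, zero);   /* non-temporal: bypasses cache */
    _mm_sfence();                        /* order the streaming stores */
}

int main(void) {
    size_t bytes = 1 << 20;                       /* 1 MiB block */
    unsigned char *block = aligned_alloc(16, bytes);
    if (!block) return 1;
    bulk_zero_nt(block, bytes);
    printf("block[0]=%u block[last]=%u\n", block[0], block[bytes - 1]);
    free(block);
    return 0;
}
```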

Citation Context

...nd PHP, the language specifications stipulate zero initialization. For the same reason, unmanaged native languages, such as C and C++, have begun to adopt zero initialization to improve memory safety [26]. We show that existing approaches of zero initialization are surprisingly expensive. On three modern IA32 architectures, the direct cost is around 2.7-4.5% on average and as much as 12.7% of all cycl...

PLR: A Software Approach to Transient Fault Tolerance for Multicore Architectures

by Alex Shye, Joseph Blomstedt, Vijay Janapa Reddi, Daniel A. Connors - IEEE Trans. Dependable Secure Comput, 2009
"... Abstract—Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point toward multicore designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper presents pr ..."
Abstract - Cited by 11 (0 self) - Add to MetaCart
Abstract—Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point toward multicore designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper presents process-level redundancy (PLR), a software technique for transient fault tolerance, which leverages multiple cores for low overhead. PLR creates a set of redundant processes per application process and systematically compares the processes to guarantee correct execution. Redundancy at the process level allows the operating system to freely schedule the processes across all available hardware resources. PLR uses a software-centric approach to transient fault tolerance, which shifts the focus from ensuring correct hardware execution to ensuring correct software execution. As a result, many benign faults that do not propagate to affect program correctness can be safely ignored. A real prototype is presented that is designed to be transparent to the application and can run on general-purpose single-threaded programs without modifications to the program, operating system, or underlying hardware. The system is evaluated for fault coverage and performance on a four-way SMP machine and provides improved performance over existing software transient fault tolerance techniques with a 16.9 percent overhead for fault detection on a set of optimized SPEC2000 binaries. Index Terms—Fault tolerance, reliability, transient faults, soft errors, process-level redundancy.
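
The core PLR mechanism is easy to sketch with POSIX processes: fork redundant replicas of a computation and compare their results at an output boundary. The C sketch below assumes a trivial deterministic compute() standing in for the protected application; the real system intercepts system calls, handles nondeterminism, and recovers via majority voting.

```c
/* Minimal sketch of process-level redundancy: run replicas of a computation
 * in separate processes and compare their outputs; a transient fault that
 * flips one replica's result shows up as a disagreement. POSIX only. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define REPLICAS 3

static long compute(void) {         /* deterministic work being protected */
    long sum = 0;
    for (long i = 1; i <= 1000; i++) sum += i;
    return sum;                     /* 500500 */
}

int main(void) {
    int fds[REPLICAS][2];
    long results[REPLICAS];

    for (int r = 0; r < REPLICAS; r++) {
        if (pipe(fds[r]) != 0) return 1;
        if (fork() == 0) {                    /* redundant replica */
            long v = compute();
            write(fds[r][1], &v, sizeof v);
            _exit(0);
        }
    }
    for (int r = 0; r < REPLICAS; r++) {
        read(fds[r][0], &results[r], sizeof results[r]);
        wait(NULL);
    }
    /* Output comparison point: all replicas must agree before the result
     * is released (a stand-in for PLR's system-call comparison). */
    if (results[0] == results[1] && results[1] == results[2])
        printf("agree: %ld\n", results[0]);
    else
        fprintf(stderr, "replica mismatch: transient fault detected\n");
    return 0;
}
```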

Citation Context

...cess replicas have been proposed for general-purpose systems to provide services other than fault tolerance. DieHard [39] proposes using replica machines for tolerating memory errors and Exterminator [40] uses process replicas to probabilistically detect memory errors. DieHard and Exterminator briefly mention using process replicas and do not elaborate on the challenges of nondeterministic events. Sha...
