Results 1 - 10 of 250
The geometry of innocent flesh on the bone: Return-into-libc without function calls (on the x86). In Proceedings of CCS 2007, 2007.
"... We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by m ..."
Abstract
-
Cited by 272 (7 self)
- Add to MetaCart
(Show Context)
We present new techniques that allow a return-into-libc attack that calls no functions at all to be mounted on x86 executables. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
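The discovery step can be pictured as a crude byte scan: every 0xC3 (ret) byte in an executable region ends a potential gadget, and the bytes immediately before it, decoded from each possible starting offset, are candidate instruction sequences. The C sketch below is only an illustration of that idea over an in-memory buffer; the window size is an assumption, and the paper's actual static analysis keeps only byte runs that decode into valid, useful instructions.

```c
#include <stdio.h>
#include <stddef.h>

/* Crude sketch: list candidate gadget windows ending at each ret (0xC3).
 * A real tool would disassemble backwards and keep only byte runs that
 * decode to valid instructions; this just enumerates the windows. */
#define MAX_GADGET_LEN 8   /* assumed window size, not from the paper */

static void find_gadget_candidates(const unsigned char *code, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (code[i] != 0xC3)            /* x86 near 'ret' */
            continue;
        /* every offset in the window is a possible gadget start, because
         * x86 instructions can be decoded from any byte boundary */
        size_t first = i >= MAX_GADGET_LEN ? i - MAX_GADGET_LEN : 0;
        for (size_t start = first; start < i; start++)
            printf("candidate gadget: bytes [%zu, %zu] ending in ret at %zu\n",
                   start, i - 1, i);
    }
}

int main(void)
{
    /* toy "code" region: pop eax; ret; inc eax; ret */
    const unsigned char code[] = { 0x58, 0xC3, 0x40, 0xC3 };
    find_gadget_candidates(code, sizeof code);
    return 0;
}
```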
DieHard: probabilistic memory safety for unsafe languages. In PLDI '06, 2006.
"... Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system th ..."
Abstract
-
Cited by 188 (20 self)
- Add to MetaCart
(Show Context)
Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard’s memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of DieHard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard’s resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
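The probabilistic-safety argument is easiest to see in a miniature: if a size class has at least twice as many slots as live objects and every allocation lands in a uniformly random free slot, then an overflow or a stale pointer hits any particular victim object only with bounded probability. The following C sketch is an illustrative one-size-class allocator built on that idea, not DieHard's allocator; the slot count and size are made up.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <time.h>

/* Illustrative one-size-class allocator: SLOTS is kept at least twice the
 * number of live objects, and each allocation lands in a random free slot,
 * so corruption of a neighbor is probabilistic rather than certain. */
#define SLOT_SIZE 64
#define SLOTS     256          /* >= 2x the maximum live objects we allow */

static unsigned char heap[SLOTS][SLOT_SIZE];
static int used[SLOTS];
static int live;

void *mini_alloc(void)
{
    if (live >= SLOTS / 2)      /* preserve the 2x over-provisioning */
        return NULL;
    for (;;) {
        int s = rand() % SLOTS; /* random placement */
        if (!used[s]) {
            used[s] = 1;
            live++;
            memset(heap[s], 0, SLOT_SIZE);
            return heap[s];
        }
    }
}

void mini_free(void *p)
{
    int s = (int)(((unsigned char (*)[SLOT_SIZE])p) - heap);
    if (s >= 0 && s < SLOTS && used[s]) {
        used[s] = 0;
        live--;
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    char *a = mini_alloc(), *b = mini_alloc();
    /* a and b are unlikely to be adjacent, so overflowing a rarely hits b */
    printf("a=%p b=%p\n", (void *)a, (void *)b);
    mini_free(a); mini_free(b);
    return 0;
}
```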
Efficient Techniques for Comprehensive Protection from Memory Error Exploits. 2005.
"... Despite the wide publicity received by buffer overflow attacks, the vast majority of today’s security vulnerabilities continue to be caused by memory errors, with a significant shift away from stack-smashing exploits to newer attacks such as heap overflows, integer overflows, and format-string attac ..."
Abstract
-
Cited by 148 (7 self)
- Add to MetaCart
(Show Context)
Despite the wide publicity received by buffer overflow attacks, the vast majority of today’s security vulnerabilities continue to be caused by memory errors, with a significant shift away from stack-smashing exploits to newer attacks such as heap overflows, integer overflows, and format-string attacks. While comprehensive solutions have been developed to handle memory errors, these solutions suffer from one or more of the following problems: high overheads (often exceeding 100%), incompatibility with legacy C code, and changes to the memory model to use garbage collection. Address space randomization (ASR) is a technique that avoids these drawbacks, but existing techniques for ASR do not offer a level of protection comparable to the above techniques. In particular, attacks that exploit relative distances between memory objects are not tackled by existing techniques. Moreover, these techniques are susceptible to information leakage and brute-force attacks. To overcome these limitations, we develop a new approach in this paper that supports comprehensive randomization, whereby the absolute locations of all (code and data) objects, as well as their relative distances, are randomized. We argue that this approach provides probabilistic protection against all memory error exploits, whether they are known or novel. Our approach is implemented as a fully automatic source-to-source transformation which is compatible with legacy C code. The address-space randomizations take place at load-time or runtime, so the same copy of the binaries can be distributed to everyone — this ensures compatibility with today’s software distribution model. Experimental results demonstrate an average runtime overhead of about 11%.
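One way to visualize randomizing relative distances rather than just base addresses: place a random-sized guard gap in front of every object, so the offset from one object to the next differs on every run. The C fragment below is a toy heap-level illustration of that idea only; the gap bound is an assumption, and the paper's actual mechanism is a source-to-source transformation covering code, static data, stack, and heap.

```c
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Toy illustration: each allocation is preceded by a random-sized gap, so
 * the distance between any two objects changes on every execution.  The
 * leading gap is leaked and the pointer cannot be passed to free(); this
 * is only meant to show the relative-distance idea. */
static void *alloc_with_random_gap(size_t n)
{
    size_t gap = (size_t)(rand() % 4096);  /* assumed gap bound, illustrative */
    unsigned char *p = malloc(gap + n);
    return p ? p + gap : NULL;             /* object starts after the gap */
}

int main(void)
{
    srand((unsigned)time(NULL));
    char *a = alloc_with_random_gap(100);
    char *b = alloc_with_random_gap(100);
    /* the relative distance changes from run to run, defeating exploits
     * that rely on fixed inter-object offsets */
    printf("distance between objects: %lld bytes\n",
           (long long)((intptr_t)b - (intptr_t)a));
    return 0;
}
```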
N-variant systems: A secretless framework for security through diversity. In Proceedings of the 15th USENIX Security Symposium, 2006.
"... We present an architectural framework for systematically using automated diversity to provide high assurance detection and disruption for large classes of attacks. The framework executes a set of automatically diversified variants on the same inputs, and monitors their behavior to detect divergences ..."
Abstract
-
Cited by 121 (3 self)
- Add to MetaCart
(Show Context)
We present an architectural framework for systematically using automated diversity to provide high assurance detection and disruption for large classes of attacks. The framework executes a set of automatically diversified variants on the same inputs, and monitors their behavior to detect divergences. The benefit of this approach is that it requires an attacker to simultaneously compromise all system variants with the same input. By constructing variants with disjoint exploitation sets, we can make it impossible to carry out large classes of important attacks. In contrast to previous approaches that use automated diversity for security, our approach does not rely on keeping any secrets. In this paper, we introduce the N-variant systems framework, present a model for analyzing security properties of N-variant systems, define variations that can be used to detect attacks that involve referencing absolute memory addresses and executing injected code, and describe and present performance results from a prototype implementation.
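The agreement check at the heart of the framework can be sketched as a small supervisor that feeds identical input to two variants and releases output only if they match, treating any divergence as an attack. The C sketch below assumes two hypothetical variant executables, ./variant_a and ./variant_b, and compares their standard output; the paper's prototype monitors the variants' behavior inside the kernel rather than comparing final output like this.

```c
#include <stdio.h>
#include <string.h>

/* Minimal agreement monitor: run two (hypothetical) diversified variants
 * on the same input and flag any divergence in their output. */
static int run_variant(const char *cmd, const char *input,
                       char *out, size_t outsz)
{
    char full[512];
    /* feed the same input to the variant via its stdin */
    snprintf(full, sizeof full, "echo '%s' | %s", input, cmd);
    FILE *p = popen(full, "r");
    if (!p) return -1;
    size_t n = fread(out, 1, outsz - 1, p);
    out[n] = '\0';
    return pclose(p);
}

int main(void)
{
    const char *input = "GET /index.html";   /* same input to both variants */
    char out_a[4096], out_b[4096];

    if (run_variant("./variant_a", input, out_a, sizeof out_a) < 0 ||
        run_variant("./variant_b", input, out_b, sizeof out_b) < 0)
        return 1;

    if (strcmp(out_a, out_b) != 0) {
        fprintf(stderr, "divergence detected: possible attack, output suppressed\n");
        return 2;
    }
    fputs(out_a, stdout);   /* variants agree: release the output */
    return 0;
}
```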
When Good Instructions Go Bad: Generalizing Return-Oriented Programming to RISC
"... This paper reconsiders the threat posed by Shacham’s “return-oriented programming ” — a technique by which W⊕X-style hardware protections are evaded via carefully crafted stack frames that divert control flow into the middle of existing variable-length x86 instructions — creating short new instruct ..."
Abstract
-
Cited by 109 (8 self)
- Add to MetaCart
(Show Context)
This paper reconsiders the threat posed by Shacham’s “return-oriented programming” — a technique by which W⊕X-style hardware protections are evaded via carefully crafted stack frames that divert control flow into the middle of existing variable-length x86 instructions — creating short new instruction streams that then return. We believe this attack is both more general and a greater threat than the author appreciated. In fact, the vulnerability is not limited to the x86 architecture or any particular operating system, is readily exploitable, and bypasses an entire category of malware protections. In this paper we demonstrate general return-oriented programming on the SPARC, a fixed instruction length RISC architecture with structured control flow. We construct a Turing-complete library of code gadgets using snippets of the Solaris libc, a general purpose programming language, and a compiler for constructing return-oriented exploits. Finally, we argue that the threat posed by return-oriented programming, across all architectures and systems, has negative implications for an entire class of security mechanisms: those that seek to prevent malicious computation by preventing the execution of malicious code.
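The execution model being generalized can be simulated in ordinary code: a return-oriented program is just a sequence of gadget addresses laid out where the stack pointer will consume them, so control hops from one short gadget to the next with no injected instructions. The C sketch below models that dispatch with function pointers and a benign computation; it is a didactic simulation of the mechanism, not an exploit and not the paper's SPARC gadget catalog.

```c
#include <stdio.h>

/* Didactic simulation of return-oriented dispatch: the "program" is an
 * array of gadget addresses, and executing it means transferring control
 * to each gadget in turn.  Real ROP uses ret instructions and the hardware
 * stack instead of this explicit loop. */
static long acc;                       /* stand-in for a gadget-visible register */

static void gadget_load5(void)  { acc = 5; }
static void gadget_add7(void)   { acc += 7; }
static void gadget_double(void) { acc *= 2; }
static void gadget_print(void)  { printf("acc = %ld\n", acc); }

typedef void (*gadget_t)(void);

int main(void)
{
    /* the "stack" of gadget addresses: computes (5 + 7) * 2 = 24 */
    gadget_t rop_chain[] = { gadget_load5, gadget_add7,
                             gadget_double, gadget_print };
    size_t n = sizeof rop_chain / sizeof rop_chain[0];

    for (size_t sp = 0; sp < n; sp++)  /* sp plays the role of the stack pointer */
        rop_chain[sp]();
    return 0;
}
```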
Fast and Automated Generation of Attack Signatures: A Basis for Building Self-Protecting Servers. 2005.
"... Large-scale attacks, such as those launched by worms and zombie farms, pose a serious threat to our network-centric society. Existing approaches such as software patches are simply unable to cope with the volume and speed with which new vulnerabilities are being discovered. In this paper, we develop ..."
Abstract
-
Cited by 108 (8 self)
- Add to MetaCart
Large-scale attacks, such as those launched by worms and zombie farms, pose a serious threat to our network-centric society. Existing approaches such as software patches are simply unable to cope with the volume and speed with which new vulnerabilities are being discovered. In this paper, we develop a new approach that can provide effective protection against a vast majority of these attacks that exploit memory errors in C/C++ programs. Our approach, called COVERS, uses a forensic analysis of a victim server's memory to correlate attacks to inputs received over the network, and automatically develop a signature that characterizes inputs that carry attacks. The signatures tend to capture characteristics of the underlying vulnerability (e.g., a message field being too long) rather than the characteristics of an attack, which makes them effective against variants of attacks. Our approach introduces low overheads (under 10%), does not require access to source code of the protected server, and has successfully generated signatures for the attacks studied in our experiments, without producing false positives. Since the signatures are generated in tens of milliseconds, they can potentially be distributed quickly over the Internet to filter out (and thus stop) fast-spreading worms. Another interesting aspect of our approach is that it can defeat guessing attacks reported against address-space randomization and instruction set randomization techniques. Finally, it increases the capacity of servers to withstand repeated attacks by a factor of 10 or more.
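The correlation idea can be illustrated concretely: after a crash, search the most recently captured request for the byte pattern of the corrupted pointer; the field that contains it identifies the vulnerable input, and a length-based signature (drop requests whose field exceeds the longest benign length observed) is emitted. The C sketch below shows that shape with a made-up NAME field and threshold; COVERS' real forensic analysis and signature format are considerably richer.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Simplified sketch of input/crash correlation: look for the bytes of the
 * corrupted pointer inside the captured request, and emit a length-based
 * signature for the field that contained them. */
static const void *find_bytes(const void *hay, size_t haylen,
                              const void *needle, size_t nlen)
{
    const unsigned char *h = hay;
    for (size_t i = 0; i + nlen <= haylen; i++)
        if (memcmp(h + i, needle, nlen) == 0)
            return h + i;
    return NULL;
}

int main(void)
{
    /* captured request: a hypothetical "NAME" field that smashed a buffer */
    unsigned char request[128] = "NAME ";
    memset(request + 5, 'A', 80);
    uintptr_t corrupt = 0x41414141;          /* pointer value found in the core */
    memcpy(request + 40, &corrupt, sizeof corrupt); /* attack bytes overlap it */

    size_t benign_max = 32;                  /* longest NAME seen in benign traffic */

    if (find_bytes(request, sizeof request, &corrupt, sizeof corrupt))
        printf("signature: drop requests where NAME field length > %zu\n",
               benign_max);
    else
        puts("no correlation found; fall back to other detectors");
    return 0;
}
```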
Automatically Patching Errors in Deployed Software. 2009.
"... We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to ..."
Abstract
-
Cited by 102 (20 self)
- Add to MetaCart
We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to learn invariants that characterize the application’s normal behavior, (2) uses error detectors to monitor the execution to detect failures, (3) identifies violations of learned invariants that occur during failed executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or the flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch. ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly…
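Steps (1), (3), and (4) can be illustrated with a single learned invariant: training records the observed range of a value, a failure-time value outside that range is a violation, and a candidate patch simply forces the value back into the learned range at that program point. The C sketch below shows this learn, detect, repair loop for one integer; it is a schematic of the pattern, not ClearView's binary-level invariant inference or patching machinery.

```c
#include <stdio.h>

/* One-invariant illustration of learn -> detect -> repair:
 *   training:  record the observed [min, max] of a value         (step 1)
 *   failure:   a value outside the learned range is a violation  (step 3)
 *   repair:    a candidate patch clamps the value into the range (step 4) */
struct range_inv { int min, max, initialized; };

static void inv_train(struct range_inv *inv, int v)
{
    if (!inv->initialized) { inv->min = inv->max = v; inv->initialized = 1; }
    if (v < inv->min) inv->min = v;
    if (v > inv->max) inv->max = v;
}

static int inv_violated(const struct range_inv *inv, int v)
{
    return v < inv->min || v > inv->max;
}

static int inv_enforce(const struct range_inv *inv, int v)   /* the "patch" */
{
    if (v < inv->min) return inv->min;
    if (v > inv->max) return inv->max;
    return v;
}

int main(void)
{
    struct range_inv len_inv = {0};
    int training[] = { 3, 17, 9, 25 };          /* values seen in normal runs */
    for (int i = 0; i < 4; i++)
        inv_train(&len_inv, training[i]);

    int suspect = 4096;                          /* value seen during a failure */
    if (inv_violated(&len_inv, suspect)) {
        int repaired = inv_enforce(&len_inv, suspect);
        printf("invariant violated (%d not in [%d,%d]); patched value: %d\n",
               suspect, len_inv.min, len_inv.max, repaired);
    }
    return 0;
}
```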
NOZZLE: A Defense Against Heap-spraying Code Injection Attacks
"... Heap spraying is a security attack that increases the exploitability of memory corruption errors in type-unsafe applications. In a heap-spraying attack, an attacker coerces an application to allocate many objects containing malicious code in the heap, increasing the success rate of an exploit that j ..."
Abstract
-
Cited by 80 (9 self)
- Add to MetaCart
(Show Context)
Heap spraying is a security attack that increases the exploitability of memory corruption errors in type-unsafe applications. In a heap-spraying attack, an attacker coerces an application to allocate many objects containing malicious code in the heap, increasing the success rate of an exploit that jumps to a location within the heap. Because heap layout randomization necessitates new forms of attack, spraying has been used in many recent security exploits. Spraying is especially effective in web browsers, where the attacker can easily allocate the malicious objects using JavaScript embedded in a web page. In this paper, we describe NOZZLE, a runtime heap-spraying detector. NOZZLE examines individual objects in the heap, interpreting them as code and performing a static analysis on that code to detect malicious intent. To reduce false positives, we aggregate measurements across all heap objects and define a global heap health metric. We measure the effectiveness of NOZZLE by demonstrating that it successfully detects 12 published and 2,000 synthetically generated heap-spraying exploits. We also show that even with a detection threshold set six times lower than is required to detect published malicious attacks, NOZZLE reports no false positives when run over 150 popular Internet sites. Using sampling and concurrent scanning to reduce overhead, we show that the performance overhead of NOZZLE is less than 7% on average. While NOZZLE currently targets heap-based spraying attacks, its techniques can be applied to any attack that attempts to fill the address space with malicious code objects (e.g., stack spraying [42]).
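The aggregate-then-threshold structure can be caricatured as follows: score each heap object by how sled-like its bytes look when read as code, average the scores into a global heap health value, and raise an alarm above a threshold. In the C sketch below the per-object score is a deliberately crude count of NOP-like bytes and the threshold is invented; NOZZLE's actual per-object metric comes from control-flow analysis of the object interpreted as x86 code.

```c
#include <stdio.h>
#include <stddef.h>

/* Crude stand-in for a per-object sled heuristic: the fraction of bytes
 * that are single-byte, sled-friendly x86 opcodes.  Only meant to show the
 * aggregate-then-threshold structure of a global heap health metric. */
static double object_score(const unsigned char *obj, size_t len)
{
    size_t sledlike = 0;
    for (size_t i = 0; i < len; i++)
        if (obj[i] == 0x90 /* nop */ || obj[i] == 0x41 /* inc ecx (32-bit) */)
            sledlike++;
    return len ? (double)sledlike / (double)len : 0.0;
}

int main(void)
{
    /* two fake "heap objects": benign text vs. a NOP-sled-like spray block */
    unsigned char benign[64]  = "just some ordinary object contents";
    unsigned char sprayed[64];
    for (size_t i = 0; i < sizeof sprayed; i++) sprayed[i] = 0x90;

    const unsigned char *heap_objects[] = { benign, sprayed };
    double total = 0.0;
    for (size_t i = 0; i < 2; i++)
        total += object_score(heap_objects[i], 64);

    double heap_health = total / 2.0;     /* aggregate across all objects */
    double threshold   = 0.5;             /* invented threshold, illustrative */
    printf("heap health metric: %.2f -> %s\n", heap_health,
           heap_health > threshold ? "possible heap spray" : "looks normal");
    return 0;
}
```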
Code Injection Attacks on Harvard-Architecture Devices. In ACM CCS 08, 2008.
"... Harvard architecture CPU design is common in the embedded world. Examples of Harvard-based architecture devices are the Mica family of wireless sensors. Mica motes have limited memory and can process only very small packets. Stack-based buffer overflow techniques that inject code into the stack and ..."
Abstract
-
Cited by 68 (2 self)
- Add to MetaCart
(Show Context)
Harvard architecture CPU design is common in the embedded world. Examples of Harvard-based architecture devices include the Mica family of wireless sensors. Mica motes have limited memory and can process only very small packets. Stack-based buffer overflow techniques that inject code into the stack and then execute it are therefore not applicable. It has been a common belief that code injection is impossible on Harvard architectures. This paper presents a remote code injection attack for Mica sensors. We show how to exploit program vulnerabilities to permanently inject any piece of code into the program memory of an Atmel AVR-based sensor. To our knowledge, this is the first result that presents a code injection technique for such devices. Previous work only succeeded in injecting data or performing transient attacks. Injecting permanent code is more powerful since the attacker can gain full control of the target sensor. We also show that this attack can be used to inject a worm that can propagate through the wireless sensor network and possibly create a sensor botnet. Our attack combines different techniques such as return-oriented programming and fake stack injection. We present implementation details and suggest some counter-measures.
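Why tiny packets do not stop the attack can be modeled simply: the injected image is delivered as many small chunks, staged in data memory until complete, and only then copied into program memory in one reprogramming step. The C model below invents the packet format, sizes, and flash_write() routine; on real motes the copy must go through the AVR's self-programming (SPM) path and is driven by return-oriented gadgets, which this simulation does not attempt.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Model of the staging idea: the injected code arrives as many small
 * chunks, is accumulated in data memory (RAM), and only once complete is
 * copied into simulated program memory so that it persists. */
#define CHUNK      16           /* assumed per-packet payload size */
#define IMAGE_SIZE 64

static uint8_t data_memory[IMAGE_SIZE];      /* staging area in RAM */
static uint8_t program_memory[IMAGE_SIZE];   /* stand-in for flash  */

static void handle_packet(uint8_t index, const uint8_t *payload)
{
    memcpy(data_memory + (size_t)index * CHUNK, payload, CHUNK);
}

static void flash_write(void)                /* simulated reprogramming step */
{
    memcpy(program_memory, data_memory, IMAGE_SIZE);
}

int main(void)
{
    uint8_t chunk[CHUNK];
    for (uint8_t i = 0; i < IMAGE_SIZE / CHUNK; i++) {
        memset(chunk, i, sizeof chunk);      /* pretend attacker payload */
        handle_packet(i, chunk);             /* one small packet at a time */
    }
    flash_write();                           /* payload now "persists" */
    printf("injected %d bytes into simulated program memory\n", IMAGE_SIZE);
    return 0;
}
```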
Binary stirring: Self-randomizing instruction addresses of legacy x86 binary code. In Proc. ACM Conf. Computer and Communications Security, 2012.
"... Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack ..."
Abstract
-
Cited by 60 (4 self)
- Add to MetaCart
(Show Context)
Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack that reuses instruction gadgets from the static binary code of victim applications. This paper introduces binary stirring, a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enables STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes.
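The load-time behavior can be modeled without any binary rewriting: lay the program's basic blocks out at freshly randomized offsets on every start and keep a translation table so transfers still reach the right block. In the C model below the blocks are just labeled byte strings, and the region, gaps, and table are invented; it only shows why gadget addresses harvested from one run are stale in the next, and none of STIR's actual x86 rewriting is involved.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Model of load-time stirring: each "basic block" (here just a byte string)
 * is copied to a freshly randomized offset inside the code region on every
 * start, and a translation table maps original block ids to their new
 * locations so indirect transfers still land correctly. */
#define NBLOCKS 4
#define REGION  4096

static const char *blocks[NBLOCKS] = { "blockA", "blockB", "blockC", "blockD" };

int main(void)
{
    static char region[REGION];
    size_t new_offset[NBLOCKS];
    size_t order[NBLOCKS] = { 0, 1, 2, 3 };

    srand((unsigned)time(NULL));
    for (size_t i = NBLOCKS - 1; i > 0; i--) {   /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }

    size_t cursor = (size_t)rand() % 512;        /* random slack before first block */
    for (size_t i = 0; i < NBLOCKS; i++) {       /* lay blocks out stirred */
        size_t b = order[i];
        new_offset[b] = cursor;
        memcpy(region + cursor, blocks[b], strlen(blocks[b]) + 1);
        cursor += strlen(blocks[b]) + 1 + (size_t)rand() % 64;  /* random gap */
    }

    for (size_t b = 0; b < NBLOCKS; b++)         /* translation table output */
        printf("block %zu now lives at offset %zu\n", b, new_offset[b]);
    return 0;
}
```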