Results 1 - 10 of 456
A Taxonomy of DDoS Attack and DDoS Defense Mechanisms
- ACM SIGCOMM Computer Communication Review
, 2004
"... Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the probl ..."
Abstract - Cited by 358 (2 self)
Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches are overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria were selected to highlight commonalities and important features of attack strategies that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.
Implementing Pushback: Router-Based Defense Against DDoS Attacks
- In Proceedings of Network and Distributed System Security Symposium
, 2002
"... Pushback is a mechanism for defending against distributed denial-of-service (DDoS) attacks. DDoS attacks are treated as a congestion-control problem, but because most such congestion is caused by malicious hosts not obeying traditional end-to-end congestion control, the problem must be handled by th ..."
Abstract - Cited by 355 (4 self)
Pushback is a mechanism for defending against distributed denial-of-service (DDoS) attacks. DDoS attacks are treated as a congestion-control problem, but because most such congestion is caused by malicious hosts not obeying traditional end-to-end congestion control, the problem must be handled by the routers. Functionality is added to each router to detect and preferentially drop packets that probably belong to an attack. Upstream routers are also notified to drop such packets (hence the term Pushback) so that the routers' resources are used to route legitimate traffic. In this paper we present an architecture for Pushback, its implementation under FreeBSD, and suggestions for how such a system can be implemented in core routers.
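To make the mechanism concrete, the sketch below shows the basic pushback loop: count traffic per aggregate, preferentially drop an aggregate that exceeds a local threshold, and ask upstream neighbors to rate-limit it too. This is a minimal illustration under assumed thresholds and an assumed destination-prefix aggregate definition, not the paper's FreeBSD implementation.

```python
# Minimal sketch of aggregate-based rate limiting with upstream "pushback".
# The aggregate definition (destination /24) and thresholds are assumptions;
# this is not the paper's FreeBSD implementation.
from collections import defaultdict

DROP_THRESHOLD = 10_000      # packets/interval marking an aggregate as misbehaving
PUSHBACK_THRESHOLD = 50_000  # load at which upstream neighbors are asked to help

class PushbackRouter:
    def __init__(self, notify_upstream):
        self.notify_upstream = notify_upstream    # callbacks delivering pushback requests
        self.counts = defaultdict(int)            # aggregate -> packets this interval
        self.rate_limited = set()                 # aggregates being preferentially dropped

    def classify(self, packet):
        # Aggregate = destination /24 prefix (an illustrative choice).
        return packet["dst"].rsplit(".", 1)[0]

    def on_packet(self, packet):
        agg = self.classify(packet)
        self.counts[agg] += 1
        return "drop" if agg in self.rate_limited else "forward"

    def end_of_interval(self):
        for agg, n in self.counts.items():
            if n > DROP_THRESHOLD:
                self.rate_limited.add(agg)        # rate-limit the aggregate locally
            if n > PUSHBACK_THRESHOLD:
                for notify in self.notify_upstream:
                    notify(agg)                   # the "pushback" message upstream
        self.counts.clear()
```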
Code-Red: a case study on the spread and victims of an Internet worm
, 2002
"... On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the CodeRed (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack ..."
Abstract - Cited by 337 (6 self)
On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the CodeRed (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet. In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, that infection activity exhibited time-of-day effects, and that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also qualified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the CodeRed worm demonstrates that widespread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and t...
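The qualitative shape of such a spread is often described with a simple random-scanning epidemic model; the toy simulation below is not part of the paper's measurement methodology, and its per-host scan rate is an assumption, but it shows how a vulnerable population of a few hundred thousand hosts can saturate within hours, with new infections peaking at a few thousand per minute.

```python
# Toy random-scanning (SI) worm model, for illustration only; the scan rate and
# population are assumptions and do not come from the Code-Red measurements.
N = 360_000          # vulnerable hosts
SCAN_RATE = 358.0    # assumed probes per infected host per minute
ADDRESS_SPACE = 2 ** 32

infected = 1.0
for minute in range(14 * 60):
    # Each probe hits a random address; only (N - infected)/2^32 of probes
    # land on a still-susceptible host.
    new = infected * SCAN_RATE * (N - infected) / ADDRESS_SPACE
    infected += new
    if minute % 120 == 0:
        print(f"hour {minute / 60:4.1f}: {int(infected):7d} infected, "
              f"{int(new):5d} new in the last minute")
```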
Automated worm fingerprinting
- In OSDI
, 2004
"... Network worms are a clear and growing threat to the security of today’s Internet-connected hosts and networks. The combination of the Internet’s unrestricted connectivity and widespread software homogeneity allows network pathogens to exploit tremendous parallelism in their propagation. In fact, mod ..."
Abstract - Cited by 317 (9 self)
Network worms are a clear and growing threat to the security of today’s Internet-connected hosts and networks. The combination of the Internet’s unrestricted connectivity and widespread software homogeneity allows network pathogens to exploit tremendous parallelism in their propagation. In fact, modern worms can spread so quickly, and so widely, that no human-mediated reaction can hope to contain an outbreak. In this paper, we propose an automated approach for quickly detecting previously unknown worms and viruses based on two key behavioral characteristics – a common exploit sequence together with a range of unique sources generating infections and destinations being targeted. More importantly, our approach – called “content sifting” – automatically generates precise signatures that can then be used to filter or moderate the spread of the worm elsewhere in the network. Using a combination of existing and novel algorithms we have developed a scalable content sifting implementation with low memory and CPU requirements. Over months of active use at UCSD, our Earlybird prototype system has automatically detected and generated signatures for all pathogens known to be active on our network as well as for several new worms and viruses which were unknown at the time our system identified them. Our initial experience suggests that, for a wide range of network pathogens, it may be practical to construct fully automated defenses – even against so-called “zero-day” epidemics.
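The core intuition — a worm's payload is both highly prevalent and travels between many distinct sources and destinations — can be sketched as below. The fixed-length substrings, hashes, and thresholds are stand-ins; EarlyBird itself uses Rabin fingerprints and multistage filters to keep memory and CPU costs low.

```python
# Rough sketch of "content sifting": flag payload substrings that are both
# prevalent and spread across many distinct sources and destinations.
# Substring length and thresholds are assumptions for illustration.
import hashlib
from collections import defaultdict

SUBSTRING_LEN = 40
PREVALENCE_THRESHOLD = 1000   # copies of identical content
DISPERSION_THRESHOLD = 30     # distinct sources and distinct destinations

counts = defaultdict(int)
sources = defaultdict(set)
dests = defaultdict(set)

def sift(payload: bytes, src: str, dst: str) -> list:
    """Return hashes of substrings that currently look like worm signatures."""
    suspects = []
    for i in range(0, max(1, len(payload) - SUBSTRING_LEN + 1), SUBSTRING_LEN):
        sig = hashlib.sha1(payload[i:i + SUBSTRING_LEN]).hexdigest()
        counts[sig] += 1
        sources[sig].add(src)
        dests[sig].add(dst)
        if (counts[sig] > PREVALENCE_THRESHOLD
                and len(sources[sig]) > DISPERSION_THRESHOLD
                and len(dests[sig]) > DISPERSION_THRESHOLD):
            suspects.append(sig)   # candidate signature for filtering
    return suspects
```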
SOS: Secure overlay services
- In Proceedings of ACM SIGCOMM
, 2002
"... angelos,misra,danr¥ Denial of service (DoS) attacks continue to threaten the reliability of networking systems. Previous approaches for protecting networks from DoS attacks are reactive in that they wait for an attack to be launched before taking appropriate measures to protect the network. This lea ..."
Abstract - Cited by 253 (15 self)
Denial of service (DoS) attacks continue to threaten the reliability of networking systems. Previous approaches for protecting networks from DoS attacks are reactive in that they wait for an attack to be launched before taking appropriate measures to protect the network. This leaves the door open for other attacks that use more sophisticated methods to mask their traffic. We propose an architecture called Secure Overlay Services (SOS) that proactively prevents DoS attacks, geared toward supporting Emergency Services or similar types of communication. The architecture is constructed using a combination of secure overlay tunneling, routing via consistent hashing, and filtering. We reduce the probability of successful attacks by (i) performing intensive filtering near protected network edges, pushing the attack point perimeter into the core of the network, where high-speed routers can handle the volume of attack traffic, and (ii) introducing randomness and anonymity into the architecture, making it difficult for an attacker to target nodes along the path to a specific SOS-protected destination. Using simple analytical models, we evaluate the likelihood that an attacker can successfully launch a DoS attack against an SOS-protected network. Our analysis demonstrates that such an architecture reduces the likelihood of a successful attack to minuscule levels.
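One ingredient of the architecture, hashing a protected destination to the overlay node responsible for forwarding toward it, can be sketched with a plain consistent-hash ring; the node names below are made up, and the actual system routes over a Chord-like overlay with secret servlets and filtered perimeters rather than a bare ring.

```python
# Minimal consistent-hashing sketch: map a protected destination to the overlay
# node responsible for it. Node names are invented; SOS itself routes over a
# Chord-like overlay rather than a bare hash ring.
import bisect
import hashlib

def _h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((_h(n), n) for n in nodes)
        self.points = [p for p, _ in self.ring]

    def responsible_node(self, destination: str) -> str:
        idx = bisect.bisect_right(self.points, _h(destination)) % len(self.ring)
        return self.ring[idx][1]

overlay = HashRing([f"overlay-node-{i}" for i in range(8)])
print(overlay.responsible_node("protected-server.example.org"))
```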
BotHunter: Detecting Malware Infection Through IDS-Driven Dialog Correlation
, 2007
"... We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external ent ..."
Abstract - Cited by 197 (18 self)
We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.
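A stripped-down version of the dialog-correlation idea — accumulate per-host evidence for the infection stages named above and raise a consolidated report once inbound and outbound evidence line up — might look like the sketch below; the triggering rule is a simplified placeholder, not BotHunter's actual scoring.

```python
# Toy per-host evidence trail in the spirit of dialog-based correlation.
# Stage names follow the abstract; the trigger rule (one inbound + one outbound
# stage, or two outbound stages) is a simplified placeholder.
from collections import defaultdict

INBOUND_STAGES = {"inbound_scan", "exploit"}
OUTBOUND_STAGES = {"egg_download", "cc_dialog", "outbound_scan"}

evidence = defaultdict(set)   # internal host -> observed stages

def report_alert(host: str, stage: str) -> bool:
    """Record a sensor alert; return True when the infection dialog is matched."""
    evidence[host].add(stage)
    inbound = evidence[host] & INBOUND_STAGES
    outbound = evidence[host] & OUTBOUND_STAGES
    return len(outbound) >= 2 or (len(inbound) >= 1 and len(outbound) >= 1)

report_alert("10.0.0.5", "exploit")
if report_alert("10.0.0.5", "egg_download"):
    print("10.0.0.5: infection dialog matched, emit consolidated report")
```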
Botz-4-sale: Surviving organized DDoS attacks that mimic flash crowds
- In 2nd Symposium on Networked Systems Design and Implementation (NSDI)
, 2005
"... Abstract – Recent denial of service attacks are mounted by professionals using Botnets of tens of thousands of compromised machines. To circumvent detection, attackers are increasingly moving away from bandwidth floods to attacks that mimic the Web browsing behavior of a large number of clients, and ..."
Abstract - Cited by 153 (1 self)
Recent denial of service attacks are mounted by professionals using Botnets of tens of thousands of compromised machines. To circumvent detection, attackers are increasingly moving away from bandwidth floods to attacks that mimic the Web browsing behavior of a large number of clients, and target expensive higher-layer resources such as CPU, database and disk bandwidth. The resulting attacks are hard to defend against using standard techniques, as the malicious requests differ from the legitimate ones in intent but not in content. We present the design and implementation of Kill-Bots, a kernel extension to protect Web servers against DDoS attacks that masquerade as flash crowds. Kill-Bots provides authentication using graphical tests but is different from other systems that use graphical tests. First, Kill-Bots uses an intermediate stage to identify the IP addresses that ignore the test, and persistently bombard the server with requests despite repeated failures at solving the tests. These machines are bots because their intent is to congest the server. Once these machines are identified, Kill-Bots blocks their requests, turns the graphical tests off, and allows access to legitimate users who are unable or unwilling to solve graphical tests. Second, Kill-Bots sends a test and checks the client’s answer without allowing unauthenticated clients access to sockets, TCBs, and worker processes. Thus, it protects the authentication mechanism from being DDoSed. Third, Kill-Bots combines authentication with admission control. As a result, it improves performance, regardless of whether the server overload is caused by DDoS or a true Flash Crowd.
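The first of these ideas — serve the graphical test, remember addresses that keep requesting without ever answering it, and drop them — can be sketched as follows; the counter and threshold are assumptions, and the real system does this in the kernel before committing sockets, TCBs, or worker processes.

```python
# Sketch of the Kill-Bots-style intermediate stage: clients that keep sending
# requests while never solving the graphical test are treated as bots and
# blocked. The threshold is an assumption for illustration.
from collections import defaultdict

UNSOLVED_LIMIT = 32            # requests tolerated without ever solving a test

unsolved = defaultdict(int)    # client IP -> requests since last solved test
blocked = set()

def on_request(ip: str) -> str:
    if ip in blocked:
        return "drop"
    unsolved[ip] += 1
    if unsolved[ip] > UNSOLVED_LIMIT:
        blocked.add(ip)        # persistent failure to answer => likely a bot
        return "drop"
    return "serve_graphical_test"

def on_test_solved(ip: str) -> str:
    unsolved[ip] = 0           # legitimate user: admit and stop challenging
    return "grant_session"
```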
Modeling Botnet Propagation Using Time Zones
- In Proceedings of the 13th Network and Distributed System Security Symposium (NDSS)
, 2006
"... Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six month period we observed dozens of botnets ..."
Abstract - Cited by 132 (10 self)
Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occur because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets, and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that have an advanced start. Since response times for malware outbreaks are now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model.
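The gist of such a model — scale the contact rate by a time-of-day shaping function for the online fraction of a region's population — can be written as a short numerical integration; the sinusoidal shaping function and all parameters below are stand-ins for the empirically fitted functions in the paper.

```python
# Numerical sketch of an SI-style propagation model with a diurnal shaping
# function alpha(t) scaling the effective contact rate. The sinusoid and the
# parameters are stand-ins for empirically fitted shaping functions.
import math

N = 100_000          # vulnerable population in one region
BETA = 0.8           # baseline infections per infected host per hour
DT = 1.0             # time step in hours

def alpha(t_hours: float) -> float:
    # Fraction of hosts online: peaks mid-day, dips at night (assumed shape).
    return 0.5 + 0.4 * math.sin(2 * math.pi * (t_hours - 6.0) / 24.0)

infected = 10.0
for step in range(72):
    t = step * DT
    infected += BETA * alpha(t) * infected * (N - infected) / N * DT
    if step % 12 == 0:
        print(f"t = {t:5.1f} h, infected = {int(infected)}")
```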
Internet Intrusions: Global Characteristics and Prevalence
, 2003
"... Network intrusions have been a fact of life in the Internet for many years. However, as is the case with many other types of Internet-wide phenomena, gaining insight into the global characteristics of intrusions is challenging. In this paper we address this problem by systematically analyzing a set ..."
Abstract - Cited by 130 (15 self)
Network intrusions have been a fact of life in the Internet for many years. However, as is the case with many other types of Internet-wide phenomena, gaining insight into the global characteristics of intrusions is challenging. In this paper we address this problem by systematically analyzing a set of firewall logs collected over four months from over 1600 different networks worldwide. The first part of our study is a general analysis focused on the issues of distribution, categorization and prevalence of intrusions. Our data shows both a large quantity and wide variety of intrusion attempts on a daily basis. We also find that worms like CodeRed, Nimda and SQL Snake persist long after their original release. By projecting intrusion activity as seen in our data sets to the entire Internet we determine that there are typically on the order of 25B intrusion attempts per day and that there is an increasing trend over our measurement period. We further find that sources of intrusions are uniformly spread across the Autonomous System space. However, deeper investigation reveals that a very small collection of sources are responsible for a significant fraction of intrusion attempts in any given month and their on/off patterns exhibit cliques of correlated behavior. We show that the distribution of source IP addresses of the non-worm intrusions as a function of the number of attempts follows Zipf's law. We also find that at daily timescales, intrusion targets often exhibit significant spatial trends that blur patterns observed from individual "IP telescopes"; this underscores the necessity for a more global approach to intrusion detection. Finally, we investigate the benefits of shared information, and the potential for using this as a foundation for an automated, global intrus...
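The Zipf observation can be checked with a simple rank-frequency regression in log-log space, as sketched below on synthetic per-source counts; the data is invented, and a fitted slope near -1 is the classic Zipf signature.

```python
# Rank-frequency check for Zipf-like behavior of intrusion sources, run on
# synthetic counts (not the paper's data): fit log(count) against log(rank).
import math
import random

random.seed(0)
counts = sorted((int(1_000_000 / r ** 1.05) + random.randint(0, 50)
                 for r in range(1, 2001)), reverse=True)

xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
ys = [math.log(c) for c in counts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"fitted log-log slope: {slope:.2f} (Zipf's law corresponds to roughly -1)")
```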
The Internet Motion Sensor: A Distributed Blackhole Monitoring System
- In Proceedings of Network and Distributed System Security Symposium (NDSS ’05)
, 2005
"... As national infrastructure becomes intertwined with emerging global data networks, the stability and integrity of the two have become synonymous. This connection, while necessary, leaves network assets vulnerable to the rapidly moving threats of today’s Internet, including fast moving worms, distrib ..."
Abstract - Cited by 110 (16 self)
As national infrastructure becomes intertwined with emerging global data networks, the stability and integrity of the two have become synonymous. This connection, while necessary, leaves network assets vulnerable to the rapidly moving threats of today’s Internet, including fast moving worms, distributed denial of service attacks, and routing exploits. This paper introduces the Internet Motion Sensor (IMS), a globally scoped Internet monitoring system whose goal is to measure, characterize, and track threats. The IMS architecture is based on three novel components. First, a Distributed Monitoring Infrastructure increases visibility into global threats. Second, a Lightweight Active Responder provides enough interactivity that traffic on the same service can be differentiated independent of application semantics. Third, a Payload Signatures and Caching mechanism avoids recording duplicated payloads, reducing overhead and assisting in identifying new and unique payloads. We explore the architectural tradeoffs of this system in the context of a three-year deployment across multiple dark address blocks ranging in size from /24s to a /8. These sensors represent a range of organizations and a diverse sample of the routable IPv4 space, including nine of the routable /8 address ranges. Data gathered from these deployments is used to demonstrate the ability of the IMS to capture and characterize several important Internet threats: the Blaster worm (August 2003), the Bagle backdoor scanning efforts ...
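The third component can be approximated by hashing each observed payload and storing it only the first time that hash is seen, as in the sketch below; the hash function and in-memory store are placeholders for whatever the deployed system actually uses.

```python
# Sketch of payload signature caching: store a payload only the first time its
# hash is seen, so repeated worm payloads cost a lookup rather than another
# write. Hash choice and storage backend are placeholders.
import hashlib

seen_signatures = set()
payload_store = {}   # signature -> payload bytes (stand-in for durable storage)

def record_payload(payload: bytes) -> bool:
    """Return True if the payload was new and cached, False if a duplicate."""
    sig = hashlib.sha1(payload).hexdigest()
    if sig in seen_signatures:
        return False                 # duplicate payload: skip storing it again
    seen_signatures.add(sig)
    payload_store[sig] = payload     # new/unique payload worth deeper analysis
    return True
```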