Results 1 - 10 of 197
BotMiner: Clustering Analysis of Network Traffic for Protocol- and Structure-Independent Botnet Detection
"... Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., central ..."
Abstract
-
Cited by 200 (14 self)
- Add to MetaCart
(Show Context)
Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most current botnet detection approaches work only for specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires no a priori knowledge of botnets (such as captured bot binaries and hence botnet signatures, or C&C server names/addresses). We start from the definition and essential properties of botnets. We define a botnet as a coordinated group of malware instances that are controlled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers/peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross-cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets (IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate.
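The core idea summarized above, clustering hosts by their communication traffic (the "C-plane") and by their malicious activity (the "A-plane") and then correlating across the two clusterings, can be sketched as follows. This is only an illustration of cross-plane correlation: the use of k-means, the feature vectors, and the peer-count scoring rule are assumptions made for the example, not BotMiner's actual clustering or scoring.

# Minimal sketch of BotMiner-style cross-plane correlation (illustrative only).
# Hosts are clustered once by communication ("C-plane") features and once by
# activity ("A-plane") features; hosts that co-cluster in both planes score high.
from collections import defaultdict
from sklearn.cluster import KMeans
import numpy as np

def cross_plane_scores(hosts, c_features, a_features, n_clusters=3):
    """hosts: list of host IDs; c_features/a_features: per-host feature rows."""
    c_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(c_features))
    a_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(a_features))

    # Group hosts by cluster label in each plane.
    c_groups, a_groups = defaultdict(set), defaultdict(set)
    for h, c, a in zip(hosts, c_labels, a_labels):
        c_groups[c].add(h)
        a_groups[a].add(h)

    # Hypothetical scoring rule (not the paper's): a host is suspicious if it
    # shares both a C-cluster and an A-cluster with at least one other host.
    scores = {}
    for h, c, a in zip(hosts, c_labels, a_labels):
        peers = (c_groups[c] & a_groups[a]) - {h}
        scores[h] = len(peers)
    return scores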
All your iFRAMEs point to us
2008
"... As the web continues to play an ever increasing role in information exchange, so too is it becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by downloads are caused b ..."
Abstract
-
Cited by 173 (4 self)
- Add to MetaCart
(Show Context)
As the web continues to play an ever-increasing role in information exchange, it is also becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by downloads are caused by URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. Our analysis of billions of URLs over a 10-month period shows that a non-trivial number of URLs, over 3 million, are malicious and initiate drive-by downloads. An even more troubling finding is that approximately 1.3% of the incoming search queries to Google’s search engine returned at least one URL labeled as malicious in the results page. We also explore several aspects of the drive-by download problem. We study the relationship between user browsing habits and exposure to malware, the different techniques used to lure the user into the malware distribution networks, and the different properties of these networks.
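Pages that trigger drive-by downloads are typically reached through injected, nearly invisible iframes that point into a malware distribution network. As a rough illustration only, the snippet below flags tiny or hidden off-site iframes in fetched HTML; the attribute checks and size threshold are assumptions, and the paper's actual detection relies on dynamic analysis of URLs in instrumented virtual machines, not static HTML inspection.

# Toy heuristic: flag tiny or hidden <iframe> elements pointing off-site.
# Illustrative only; real drive-by detection (as in the paper) uses dynamic
# analysis in instrumented browsers rather than static HTML inspection.
from html.parser import HTMLParser
from urllib.parse import urlparse

class IframeScanner(HTMLParser):
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.suspicious = []  # (src, reason) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        src = a.get("src", "")
        off_site = urlparse(src).netloc not in ("", self.page_host)
        tiny = a.get("width", "") in ("0", "1") or a.get("height", "") in ("0", "1")
        hidden = "display:none" in a.get("style", "").replace(" ", "")
        if off_site and (tiny or hidden):
            self.suspicious.append((src, "hidden/off-site iframe"))

scanner = IframeScanner("example.com")
scanner.feed('<iframe src="http://bad.example/x" width="1" height="1"></iframe>')
print(scanner.suspicious)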
Your Botnet is My Botnet: Analysis of a Botnet Takeover
"... Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security problems on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such a ..."
Abstract
-
Cited by 171 (28 self)
- Add to MetaCart
(Show Context)
Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security problems on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet and study its operations for a period of ten days. During this time, we observed more than 180 thousand infections and recorded almost 70 GB of data that the bots collected. While botnets have been “hijacked” and studied previously, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of data from the infected victims. This data provides a new understanding of the type and amount of personal information that is stolen by botnets.
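A measurement point the abstract highlights is the gap between IP-address counts and actual infections: the same bot can appear behind many IP addresses due to DHCP churn and NAT. The sketch below merely illustrates that distinction on hypothetical C&C log records; the field names are assumptions, not Torpig's log format.

# Illustrative comparison of two population estimates from hypothetical
# C&C log records: unique source IPs vs. unique per-bot identifiers.
# Field names ("ip", "bot_id") are assumptions, not Torpig's actual format.
def population_estimates(records):
    ips = {r["ip"] for r in records}
    bot_ids = {r["bot_id"] for r in records}
    return len(ips), len(bot_ids)

records = [
    {"ip": "198.51.100.7", "bot_id": "A1"},   # same bot seen from two IPs (DHCP churn)
    {"ip": "198.51.100.9", "bot_id": "A1"},
    {"ip": "203.0.113.5",  "bot_id": "B2"},
]
print(population_estimates(records))  # (3 IPs, 2 bots)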
Measurements and Mitigation of Peer-to-Peer-based Botnets: A Case Study on Storm Worm
"... Botnets, i.e., networks of compromised machines under a common control infrastructure, are commonly controlled by an attacker with the help of a central server: all compromised machines connect to the central server and wait for commands. However, the first botnets that use peer-to-peer (P2P) networ ..."
Abstract
-
Cited by 126 (10 self)
- Add to MetaCart
(Show Context)
Botnets, i.e., networks of compromised machines under a common control infrastructure, are commonly controlled by an attacker with the help of a central server: all compromised machines connect to the central server and wait for commands. However, the first botnets that use peer-to-peer (P2P) networks for remote control of the compromised machines have recently appeared in the wild. In this paper, we introduce a methodology to analyze and mitigate P2P botnets. In a case study, we examine in detail the Storm Worm botnet, the most widespread P2P botnet currently propagating in the wild. We were able to infiltrate the botnet and analyze it in depth, which allows us to estimate the total number of compromised machines. Furthermore, we present two different ways to disrupt the communication channel between the controller and the compromised machines in order to mitigate the botnet, and we evaluate the effectiveness of these mechanisms.
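Estimating the population of a P2P botnet generally means crawling its overlay: repeatedly asking known peers for their neighbor lists and counting the distinct peers discovered. The breadth-first crawl below is a generic sketch of that idea under an assumed get_peers() interface; it is not the authors' Storm-specific (Kademlia/Overnet) crawler and ignores churn, timeouts, and unreachable peers.

# Generic sketch of a P2P overlay crawl for population estimation.
# `get_peers(peer)` is an assumed stand-in for protocol-specific peer queries
# (Storm used a Kademlia-style DHT); a real crawl also handles timeouts,
# churn, and peers that never respond.
from collections import deque

def crawl(seeds, get_peers, max_peers=100000):
    seen, queue = set(seeds), deque(seeds)
    while queue and len(seen) < max_peers:
        peer = queue.popleft()
        for neighbor in get_peers(peer):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen  # a lower bound on the reachable population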
An advanced hybrid peer-to-peer botnet
In Proceedings of the First Workshop on Hot Topics in Understanding Botnets, 2007
Outside the Closed World: On Using Machine Learning For Network Intrusion Detection
In Proceedings of the IEEE Symposium on Security and Privacy, 2010
"... Abstract—In network intrusion detection research, one popular strategy for finding attacks is monitoring a network’s activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. Ho ..."
Abstract
-
Cited by 82 (1 self)
- Add to MetaCart
(Show Context)
In network intrusion detection research, one popular strategy for finding attacks is monitoring a network’s activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research, one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational “real-world” settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and we provide a set of guidelines meant to strengthen future research on anomaly detection. Keywords: anomaly detection; machine learning; intrusion detection; network security.
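The strategy the paper critiques, learning a profile of normality from benign traffic and flagging deviations, can be made concrete with a small example. The sketch below trains a one-class model on benign flow features and scores new flows; the features and the choice of IsolationForest are illustrative assumptions, not something proposed or recommended by the paper.

# Minimal example of the "learn normality, flag deviations" pattern the paper
# examines. Feature choice and model (IsolationForest) are illustrative
# assumptions; the paper argues such detectors are hard to operate in practice.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow features: [duration_s, bytes_sent, bytes_received]
benign_flows = np.array([
    [0.5, 1200, 3400],
    [1.2, 900, 2800],
    [0.8, 1500, 4100],
    [2.0, 700, 2500],
])
model = IsolationForest(contamination=0.01, random_state=0).fit(benign_flows)

new_flows = np.array([[0.9, 1100, 3000],      # similar shape to benign traffic
                      [30.0, 500000, 200]])   # large upload, unusual shape
print(model.predict(new_flows))  # 1 = normal, -1 = anomalous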
A Taxonomy of Botnet Structures
In Proceedings of the 23rd Annual Computer Security Applications Conference (ACSAC'07), 2007
"... We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, ddos). Using the performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In pa ..."
Abstract
-
Cited by 81 (4 self)
- Add to MetaCart
(Show Context)
We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, DDoS). Using these performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In particular, our models show that for scale-free botnets, targeted responses are particularly effective. Further, botmasters’ efforts to improve the robustness of scale-free networks come at the cost of diminished transitivity. Botmasters do not appear to have any structural solution to this problem in scale-free networks. We also show that random-graph botnets (e.g., those using P2P formations) are highly resistant to both random and targeted responses. We evaluate the impact of responses on different topologies using simulation. We also perform some novel measurements of a P2P network to demonstrate the utility of our proposed metrics. Our analysis shows how botnets may be classified according to structure, and given rank or priority using our proposed metrics. This may help direct responses, and it suggests which general remediation strategies are more likely to succeed.
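The structural claim, that targeted removal of high-degree nodes damages scale-free botnets far more than random-graph botnets, can be reproduced with a small graph simulation. The sketch below compares largest-component size after removing the top 5% highest-degree nodes using networkx; the graph models, sizes, and removal fraction are arbitrary illustrative choices, not the paper's metrics or simulation setup.

# Illustrative simulation of the structural argument above: targeted removal
# of high-degree nodes fragments a scale-free graph much more than a random
# (Erdos-Renyi) graph. Parameters are arbitrary; the paper defines its own
# utility metrics (e.g., transitivity, giant-component size).
import networkx as nx

def largest_component_after_targeted_removal(G, fraction=0.05):
    G = G.copy()
    k = int(fraction * G.number_of_nodes())
    top = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
    G.remove_nodes_from(n for n, _ in top)
    return max(len(c) for c in nx.connected_components(G))

scale_free = nx.barabasi_albert_graph(2000, 2, seed=1)
random_graph = nx.erdos_renyi_graph(2000, 0.002, seed=1)
print(largest_component_after_targeted_removal(scale_free))
print(largest_component_after_targeted_removal(random_graph))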
Studying Spamming Botnets Using Botlab
"... In this paper we present Botlab, a platform that continually monitors and analyzes the behavior of spamoriented botnets. Botlab gathers multiple real-time streams of information about botnets taken from distinct perspectives. By combining and analyzing these streams, Botlab can produce accurate, tim ..."
Abstract
-
Cited by 76 (2 self)
- Add to MetaCart
(Show Context)
In this paper we present Botlab, a platform that continually monitors and analyzes the behavior of spam-oriented botnets. Botlab gathers multiple real-time streams of information about botnets, taken from distinct perspectives. By combining and analyzing these streams, Botlab can produce accurate, timely, and comprehensive data about spam botnet behavior. Our prototype system integrates information about spam arriving at the University of Washington, outgoing spam generated by captive botnet nodes, and information gleaned from DNS about URLs found within these spam messages. We describe the design and implementation of Botlab, including the challenges we had to overcome, such as preventing captive nodes from causing harm and thwarting virtual-machine detection. Next, we present the results of a detailed measurement study of the behavior of the most active spam botnets. We find that six botnets are responsible for 79% of spam messages arriving at the UW campus. Finally, we present defensive tools that take advantage of the Botlab platform to improve spam filtering and protect users from harmful web sites advertised within botnet-generated spam.
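One way to picture the multi-perspective correlation described above is to attribute incoming spam to a botnet when the URL domain it advertises also appears in spam emitted by a captive node of that botnet. The sketch below does exactly that on toy data; the data structures and the domain-matching rule are assumptions, not Botlab's actual analysis.

# Illustrative correlation of two spam streams, loosely in the spirit of
# Botlab's multi-perspective approach: attribute an incoming spam message to
# a botnet when its advertised URL domain also appears in spam emitted by a
# captive node of that botnet. Matching rule and data shapes are assumptions.
from collections import Counter
from urllib.parse import urlparse

def attribute_spam(incoming_urls, captive_outgoing):
    """captive_outgoing: {botnet_name: set of URL domains seen from captive bots}."""
    counts = Counter()
    for url in incoming_urls:
        domain = urlparse(url).netloc
        for botnet, domains in captive_outgoing.items():
            if domain in domains:
                counts[botnet] += 1
                break
    return counts

incoming = ["http://pharma.example/buy", "http://watch.example/rolex"]
captive = {"storm": {"pharma.example"}, "srizbi": {"watch.example"}}
print(attribute_spam(incoming, captive))  # Counter({'storm': 1, 'srizbi': 1})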
Effective and Efficient Malware Detection at the End Host
"... Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam e-mails and denial of service attacks have malware as their underlying cause. That is, computers that are compromised with malware are often networked together to form botnets, and ..."
Abstract
-
Cited by 70 (5 self)
- Add to MetaCart
(Show Context)
Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam e-mails and denial of service attacks have malware as their underlying cause. That is, computers that are compromised with malware are often networked together to form botnets, and many attacks are launched using these malicious, attacker-controlled networks. With the increasing significance of malware in Internet attacks, much research has concentrated on developing techniques to collect, study, and mitigate malicious code. Without doubt, it is important to collect and study malware found on the Internet. However, it is even more important to develop mitigation and detection techniques based on the insights gained from the analysis work. Unfortunately, current host-based detection approaches
Traffic aggregation for malware detection
2008
"... Stealthy malware, such as botnets and spyware, are hard to detect because their activities are subtle and do not disrupt the network, in contrast to DoS attacks and aggressive worms. Stealthy malware, however, does communicate to exfiltrate data to the attacker, to receive the attacker’s commands, ..."
Abstract
-
Cited by 56 (4 self)
- Add to MetaCart
(Show Context)
Stealthy malware, such as botnets and spyware, is hard to detect because its activities are subtle and do not disrupt the network, in contrast to DoS attacks and aggressive worms. Stealthy malware, however, does communicate: to exfiltrate data to the attacker, to receive the attacker’s commands, or to carry out those commands. Moreover, since malware rarely infiltrates only a single host in a large enterprise, these communications should emerge from multiple hosts within coarse temporal proximity to one another. In this paper, we describe a system called TĀMD (pronounced “tamed”) with which an enterprise can identify candidate groups of infected computers within its network. TĀMD accomplishes this by finding new communication “aggregates” involving multiple internal hosts, i.e., communication flows that share common characteristics. We describe characteristics for defining aggregates (including flows that communicate with the same external network, that share similar payload, and/or that involve internal hosts with similar software platforms) and justify their use in finding infected hosts. We also detail efficient algorithms employed by TĀMD for identifying such aggregates, and we demonstrate a particular configuration of TĀMD that identifies new infections for multiple bot and spyware examples within traces of traffic recorded at the edge of a university network. This is achieved even when the infected hosts comprise only about 0.0097% of all internal hosts in the network.
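One of the aggregate types described above, flows from several internal hosts to the same external network within a short time window, can be illustrated with a simple grouping. The sketch below buckets flow records by time window and destination /24 and reports destinations contacted by multiple internal hosts; the field names, /24 granularity, window size, and threshold are assumptions, and TĀMD's real algorithms additionally use payload and platform similarity.

# Toy illustration of one aggregate type: multiple internal hosts contacting
# the same external network in the same time window. Field names, /24
# granularity, window size, and threshold are assumptions, not TAMD's design.
from collections import defaultdict
import ipaddress

def destination_aggregates(flows, window_s=3600, min_hosts=3):
    """flows: iterable of dicts with 'ts', 'src' (internal IP), 'dst' (external IP)."""
    groups = defaultdict(set)  # (time window, destination /24) -> set of internal hosts
    for f in flows:
        window = int(f["ts"]) // window_s
        net = ipaddress.ip_network(f["dst"] + "/24", strict=False)
        groups[(window, str(net))].add(f["src"])
    return {key: hosts for key, hosts in groups.items() if len(hosts) >= min_hosts}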