Results 1 - 4 of 4
Detecting Marionette Microblog Users for Improved Information Credibility
Abstract
Cited by 1 (0 self)
Abstract. In this paper, we mine a special group of microblog users: the “marionette” users, who are created or employed by backstage “puppeteers”, either through programs or manually. Unlike normal users that access microblogs for information sharing or social communication, the marionette users perform specific tasks to earn financial profits. For example, they follow certain users to increase their “statistical popularity”, or retweet some tweets to amplify their “statistical impact”. The fabricated follower or retweet counts not only mislead normal users to wrong information, but also seriously impair microblog-based applications, such as popular tweet selection and expert finding. In this paper, we study the important problem of detecting marionette users on microblog platforms. This problem is challenging because puppeteers employ complicated strategies to generate marionette users that exhibit behaviors similar to those of normal ones. To tackle this challenge, we propose to take into account two types of discriminative information: (1) individual user tweeting behaviors and (2) the social interactions among users. By integrating both kinds of information into a semi-supervised probabilistic model, we can effectively distinguish marionette users from normal ones. By applying the proposed model to one of the most popular microblog platforms in China (Sina Weibo), we find that the model can detect marionette users with an f-measure close to 0.9. In addition, we propose an application that measures the credibility of retweet counts.
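The core idea of the abstract — combining individual behavior features with social interactions in a semi-supervised model — can be sketched as a toy label-propagation routine. This is not the paper's actual model; the users, edges, prior scores, and mixing rule below are all illustrative assumptions:

```python
# Hypothetical sketch: semi-supervised score propagation that mixes an
# individual "behavior prior" (from tweeting behavior alone) with scores
# diffused over social interactions. All values are illustrative.

def propagate(behavior_prior, edges, labeled, alpha=0.5, iters=50):
    """behavior_prior: user -> P(marionette) from individual behavior.
    edges: list of (u, v) social interactions (treated as undirected).
    labeled: user -> known 0/1 label (the semi-supervised seed set)."""
    neighbors = {u: [] for u in behavior_prior}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    score = dict(behavior_prior)
    for u, y in labeled.items():
        score[u] = float(y)                    # clamp known labels
    for _ in range(iters):
        new = {}
        for u in score:
            if u in labeled:
                new[u] = float(labeled[u])     # labels never change
            elif neighbors[u]:
                social = sum(score[v] for v in neighbors[u]) / len(neighbors[u])
                new[u] = alpha * behavior_prior[u] + (1 - alpha) * social
            else:
                new[u] = behavior_prior[u]     # isolated user: prior only
        score = new
    return score

prior = {"a": 0.9, "b": 0.2, "c": 0.5, "d": 0.1}
scores = propagate(prior, [("a", "c"), ("b", "d")], labeled={"a": 1, "b": 0})
```

In this toy run, the unlabeled user "c" inherits suspicion from its known-marionette neighbor "a", while "d" is pulled toward its known-normal neighbor "b" — the same intuition as combining behavioral and social evidence in one model.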
Last Modification Date: 2013/05/09 Revision: #7
Abstract
Sybil attacks in social and information systems have serious security implications. Among the many defence schemes, Graph-based Sybil Detection (GSD) has received the greatest attention from both academia and industry. Even though many GSD algorithms exist, there is no analytical framework to reason about their design, especially as they make different assumptions about the adversary and graph models used. In this paper, we bridge this knowledge gap and present a unified framework for the systematic evaluation of GSD algorithms. We used this framework to show that GSD algorithms should be designed to find local community structures around known non-Sybil identities, while incrementally tracking changes in the graph as it evolves over time.
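The design principle named in this abstract — finding local community structure around known non-Sybil identities — is the basis of ranking schemes such as SybilRank. A minimal sketch, assuming a toy graph and seed set (neither is from the paper), is a short, early-terminated trust power iteration from the trusted seeds:

```python
# Illustrative GSD building block: short random-walk trust propagation
# from known non-Sybil seeds (in the spirit of SybilRank). Early
# termination after O(log n) steps keeps trust concentrated in the
# local community around the seeds. Graph and seeds are toy values.

import math

def early_terminated_trust(adj, seeds, total_trust=1.0):
    """adj: node -> list of neighbors; seeds: known non-Sybil nodes."""
    n = len(adj)
    steps = max(1, int(math.ceil(math.log2(n))))
    trust = {v: (total_trust / len(seeds) if v in seeds else 0.0)
             for v in adj}
    for _ in range(steps):
        nxt = {v: 0.0 for v in adj}
        for v, t in trust.items():
            for w in adj[v]:
                nxt[w] += t / len(adj[v])      # split trust over neighbors
        trust = nxt
    # degree-normalized ranking, so high-degree honest nodes aren't favored
    return {v: trust[v] / len(adj[v]) for v in adj}

# honest triangle a-b-c, one attack edge c-s into a Sybil pair s-t
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "s"],
       "s": ["c", "t"], "t": ["s"]}
rank = early_terminated_trust(adj, seeds={"a"})
```

Because the walk is cut short, little trust crosses the single attack edge, so nodes in the honest community rank above the Sybil pair.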
Íntegro: Leveraging Victim Prediction for Robust Fake Account Detection in OSNs
Abstract
Abstract—Detecting fake accounts in online social networks (OSNs) protects OSN operators and their users from various malicious activities. Most detection mechanisms attempt to predict and classify user accounts as real (i.e., benign, honest) or fake (i.e., malicious, Sybil) by analyzing user-level activities or graph-level structures. These mechanisms, however, are not robust against adversarial attacks in which fake accounts cloak their operation with patterns resembling real user behavior. We herein demonstrate that victims, benign users who control real accounts and have befriended fakes, form a distinct classification category that is useful for designing robust detection mechanisms. First, as attackers have no control over victim accounts and cannot alter their activities, a victim account classifier which relies on user-level activities is relatively harder to circumvent. Second, as fakes are directly connected to victims, a fake account ...
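The victim-prediction idea in this abstract can be sketched as a preprocessing step: edges incident to accounts that a victim classifier flags as likely victims are down-weighted before any trust-based ranking runs, so attack edges carry less weight. This is a hedged illustration of the concept, not Íntegro's implementation; the probabilities, threshold, and weight values below are assumptions:

```python
# Hypothetical sketch: down-weight edges touching predicted victims, so
# that subsequent trust propagation leaks less trust across attack edges.
# victim_prob would come from a user-activity classifier; here it is a
# hard-coded toy value.

def reweight_edges(edges, victim_prob, low=0.1, high=1.0, threshold=0.5):
    """edges: list of (u, v); victim_prob: node -> P(victim).
    Returns a map (u, v) -> edge weight."""
    weights = {}
    for u, v in edges:
        likely_victim = (victim_prob.get(u, 0.0) >= threshold
                         or victim_prob.get(v, 0.0) >= threshold)
        weights[(u, v)] = low if likely_victim else high
    return weights

edges = [("honest1", "honest2"), ("honest2", "victim"), ("victim", "fake")]
w = reweight_edges(edges, {"victim": 0.9})
```

With these weights, the attack edge ("victim", "fake") contributes ten times less capacity than an ordinary honest-to-honest edge in any weighted random-walk ranking run afterwards.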