Results 1 - 6 of 6
Social Turing Tests: Crowdsourcing Sybil Detection
"... As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effectively befriend legitimate users, rendering most automated Sybil detect ..."
Abstract
-
Cited by 20 (8 self)
- Add to MetaCart
(Show Context)
As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effectively befriend legitimate users, rendering most automated Sybil detection techniques ineffective. In this paper, we explore the feasibility of a crowdsourced Sybil detection system for OSNs. We conduct a large user study on the ability of humans to detect today’s Sybil accounts, using a large corpus of ground-truth Sybil accounts from the Facebook and Renren networks. We analyze detection accuracy by both “experts” and “turkers” under a variety of conditions, and find that while turkers vary significantly in their effectiveness, experts consistently produce near-optimal results. We use these results to drive the design of a multi-tier crowdsourcing Sybil detection system. Using our user study data, we show that this system is scalable, and can be highly effective either as a standalone system or as a complementary technique to current tools.
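The abstract names a multi-tier design without detailing it; below is a minimal sketch in Python, assuming a cheap turker tier whose ambiguous verdicts escalate to the more accurate expert tier. The function name, thresholds, and vote format are illustrative assumptions, not the authors' actual system.

# Hypothetical two-tier crowdsourced Sybil classifier. Profiles are
# first judged by several turkers; only ambiguous cases reach the
# more accurate (and more expensive) expert tier. Thresholds are
# illustrative assumptions, not parameters from the paper.
def classify_profile(turker_votes, expert_vote=None,
                     sybil_threshold=0.8, benign_threshold=0.2):
    """turker_votes: list of booleans, True = 'looks like a Sybil'."""
    fraction_sybil = sum(turker_votes) / len(turker_votes)
    if fraction_sybil >= sybil_threshold:
        return "sybil"        # strong turker consensus
    if fraction_sybil <= benign_threshold:
        return "benign"
    # Turkers disagree: defer to the expert tier.
    if expert_vote is not None:
        return "sybil" if expert_vote else "benign"
    return "needs_expert_review"

# Example: 4 of 5 turkers flag the profile, so no expert is needed.
print(classify_profile([True, True, True, True, False]))  # -> sybil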
Let’s Do It at My Place Instead?: Attitudinal and Behavioral Study of Privacy in Client-side Personalization
- Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014
"... Many users welcome personalized services, but are reluc-tant to provide the information about themselves that personalization requires. Performing personalization exclu-sively at the client side (e.g., on one’s smartphone) may conceptually increase privacy, because no data is sent to a remote provid ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
(Show Context)
Many users welcome personalized services, but are reluctant to provide the information about themselves that personalization requires. Performing personalization exclusively at the client side (e.g., on one’s smartphone) may conceptually increase privacy, because no data is sent to a remote provider. But does client-side personalization (CSP) also increase users’ perception of privacy? We developed a causal model of privacy attitudes and behavior in personalization, and validated it in an experiment that contrasted CSP with personalization at three remote providers: Amazon, a fictitious company, and the “Cloud”. Participants gave roughly the same amount of personal data and tracking permissions in all four conditions. A structural equation modeling analysis reveals the reasons: CSP raises the fewest privacy concerns, but does not lead in terms of perceived protection, nor in resulting self-anticipated satisfaction, and thus privacy-related behavior. Encouragingly, we found that adding certain security features to CSP is likely to raise its perceived protection significantly. Our model predicts that CSP will then also sharply improve on all other attitudinal and behavioral privacy measures.
Author Keywords: Privacy; personalization; client-side; structural equation modeling
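As a concrete illustration of the kind of structural equation model the abstract describes, the sketch below specifies a plausible path model in Python using the third-party semopy package. The variable names (csp, spc, ppp, sat, disclosure), the paths, and the data file are assumptions for illustration, not the authors' published model.

import pandas as pd
from semopy import Model

# Hypothetical path model: the personalization condition (csp) shapes
# system-specific privacy concerns (spc) and perceived privacy
# protection (ppp), which in turn drive satisfaction (sat) and
# disclosure behavior. Paths are illustrative, not the paper's.
MODEL_DESC = """
spc ~ csp
ppp ~ csp + spc
sat ~ spc + ppp
disclosure ~ spc + ppp + sat
"""

data = pd.read_csv("privacy_study.csv")  # hypothetical per-participant data
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values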
Analysis of ecrime in Crowd-sourced Labor Markets: Mechanical Turk vs. Freelancer
"... Research in the economics of security has contributed more than a decade of empirical findings to the understanding of the microeconomics of (in)security, privacy, and ecrime. Here we build on insights from previous macro-level research on crime, and microeconomic analyses of ecrime to develop a set ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Research in the economics of security has contributed more than a decade of empirical findings to the understanding of the microeconomics of (in)security, privacy, and ecrime. Here we build on insights from previous macro-level research on crime, and microeconomic analyses of ecrime, to develop a set of hypotheses to predict which variables are correlated with national participation levels in crowd-sourced ecrime. Some hypotheses appear to hold: Internet penetration, English literacy, size of the labor market, and government policy are all significant indicators of crowd-sourced ecrime market participation. Greater governmental transparency, less corruption, and more consistent rule of law lower the participation rate in ecrime. Other results are counter-intuitive: GDP per person is not significant, and, unusually for crime, a greater percentage of women does not correlate with decreased crime. One finding relevant to policymaking is that deterring bidders in crowd-sourced labor markets is an ineffective approach to decreasing demand and, in turn, market size.
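The hypotheses above lend themselves to a standard cross-country regression; the sketch below shows one plausible setup in Python with statsmodels. The column names and the data file are hypothetical stand-ins for the indicators named in the abstract, not the paper's actual dataset.

import pandas as pd
import statsmodels.api as sm

# Hypothetical country-level dataset; all column names are assumptions.
df = pd.read_csv("country_indicators.csv")
predictors = ["internet_penetration", "english_literacy",
              "labor_market_size", "govt_transparency",
              "corruption_index", "rule_of_law",
              "gdp_per_capita", "pct_women"]
X = sm.add_constant(df[predictors])
y = df["ecrime_participation_rate"]

result = sm.OLS(y, X).fit()
print(result.summary())
# Per the abstract, one would expect gdp_per_capita and pct_women to be
# non-significant while the governance variables carry most of the signal.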
Crowdsourcing a HIT: Measuring Workers’ Pre-task Interactions on Microtask Markets
"... The ability to entice and engage crowd workers to partici-pate in human intelligence tasks (HITs) is critical for many human computation systems and large-scale experiments. While various metrics have been devised to measure and improve the quality of worker output via task designs, effec-tive recru ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
The ability to entice and engage crowd workers to participate in human intelligence tasks (HITs) is critical for many human computation systems and large-scale experiments. While various metrics have been devised to measure and improve the quality of worker output via task designs, effective recruitment of crowd workers is often overlooked. To help us gain a better understanding of crowd recruitment strategies, we propose three new metrics for measuring crowd workers’ willingness to participate in advertised HITs: conversion rate, conversion rate over time, and nominal conversion rate. We discuss how the conversion rate of workers—the fraction of potential workers aware of a task who choose to accept it—can affect the quantity, quality, and validity of any data collected via crowdsourcing. We also contribute a tool—turkmill—that enables requesters on Amazon Mechanical Turk to easily measure the conversion rate of HITs. We then present the results of two experiments that demonstrate how conversion rate metrics can be used to evaluate the effect of different HIT designs. We investigate how four HIT design features (value proposition, branding, quality of presentation, and intrinsic motivation) affect conversion rates. Among other things, we find that including a clear value proposition has a strong, significant, positive effect on the nominal conversion rate. We also find that crowd workers prefer commercial entities to non-profit or university requesters.
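The first two metrics can be stated precisely in a few lines; the sketch below does so in Python, assuming a "potential worker aware of a task" is operationalized as a HIT preview. The exact operationalization in the paper and in turkmill may differ, and the nominal conversion rate is defined only in the paper itself, so it is omitted here.

from collections import Counter

def conversion_rate(views, accepts):
    """Fraction of workers who previewed the HIT and accepted it."""
    return accepts / views if views else 0.0

def conversion_rate_over_time(events, bucket_seconds=3600):
    """events: iterable of (unix_timestamp, kind), kind in {'view', 'accept'}.
    Returns {bucket_start: conversion rate within that time bucket}."""
    views, accepts = Counter(), Counter()
    for ts, kind in events:
        bucket = int(ts // bucket_seconds) * bucket_seconds
        (accepts if kind == "accept" else views)[bucket] += 1
    return {b: conversion_rate(views[b], accepts[b]) for b in sorted(views)}

# Example: 200 previews and 38 accepts -> 19% conversion.
print(conversion_rate(200, 38))  # 0.19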
Experience in using MTurk for Network Measurement
"... ABSTRACT Conducting sound measurement studies of the global Internet is inherently difficult. The collected data significantly depends on vantage point(s), sampling strategies, security policies, or measurement populationsand conclusions drawn from the data can be sensitive to these biases. Crowdso ..."
Abstract
- Add to MetaCart
(Show Context)
Conducting sound measurement studies of the global Internet is inherently difficult. The collected data depend significantly on vantage point(s), sampling strategies, security policies, and measurement populations, and conclusions drawn from the data can be sensitive to these biases. Crowdsourcing is a promising approach to address these challenges, although its epistemological implications have not yet received substantial attention from the research community. We share our findings from leveraging Amazon's Mechanical Turk (MTurk) system for three distinct network measurement tasks. We describe our failure to outsource to MTurk the execution of a security measurement tool, our subsequent successful integration of a simple yet meaningful measurement within a HIT, and finally the successful use of MTurk to quickly provide focused small sample sets that could not be obtained easily via alternate means. Finally, we discuss the implications of our experiences for other crowdsourced measurement research.
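To make the "simple yet meaningful measurement within a HIT" idea concrete, here is a minimal sketch of the sort of probe a HIT could ask a worker's machine to run, reporting fetch latency from the worker's vantage point. The target URL is a placeholder, and this is an assumption about the approach, not the paper's actual tool.

import time
import urllib.request

def measure_fetch_latency(url="https://example.com/", timeout=10):
    """Time a full HTTP fetch of `url` from this vantage point."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

if __name__ == "__main__":
    # In a real HIT, this value would be submitted alongside the
    # worker's answers, yielding one measurement per vantage point.
    print(f"fetch latency: {measure_fetch_latency():.3f} s")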
An attitudinal and behavioral model of personalization at different providers
"... Privacy is a central element affecting users ’ experiences with personalized services. We developed an integrative theoretical model that specifies how personal traits, privacy-related attitudes (both provider- and system-specific), and satisfaction with a personalization system influence users ’ ac ..."
Abstract
- Add to MetaCart
Privacy is a central element affecting users’ experiences with personalized services. We developed an integrative theoretical model that specifies how personal traits, privacy-related attitudes (both provider- and system-specific), and satisfaction with a personalization system influence users’ actual information disclosure behavior towards the system. The research model was validated in an experimental study, in which 390 subjects were randomly assigned to one of four different personalization conditions: client-side personalization, or remote personalization at three different providers. The results show that system-specific privacy concerns (SPC), perceived privacy protection (PPP) and satisfaction with the system (SAT) have direct impacts on information disclosure behavior (but differently for different types of personal data). General privacy concerns, general self-efficacy, privacy-specific self-efficacy and personalization condition affect information disclosure via their impacts on PPP, SPC and/or SAT. Client-side personalization, a privacy-enhancing feature, increased PPP, thereby increasing information disclosure. Theoretical and managerial implications are discussed.
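The "thereby" in the last finding is a mediation claim: the condition's indirect effect on disclosure through PPP is the product of the two path coefficients. The numbers below are made up purely to illustrate the arithmetic; the actual estimates are in the paper.

# Hypothetical standardized path coefficients (illustrative only).
a = 0.30   # client-side condition -> perceived privacy protection (PPP)
b = 0.45   # PPP -> information disclosure
indirect = a * b
print(f"indirect effect of CSP on disclosure via PPP: {indirect:.3f}")
# -> 0.135 in standardized units: the condition raises disclosure
#    indirectly by raising perceived protection.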