Results 1 - 10 of 590

Crowdsourcing user studies with Mechanical Turk

by Aniket Kittur, Ed H. Chi, Bongwon Suh - Proc. CHI 2008, ACM Press, 2008
"... User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. M ..."
Abstract - Cited by 461 (8 self)
... Micro-task markets, such as Amazon’s Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro ...

ImageNet: A large-scale hierarchical image database

by Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei - In CVPR, 2009
"... The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce her ..."
Abstract - Cited by 840 (28 self)
... datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We ...

Utility data annotation with Amazon Mechanical Turk

by Alexander Sorokin, David Forsyth - In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2008
"... We show how to outsource data annotation to Amazon Mechanical Turk. Doing so has produced annotations in quite large numbers relatively cheaply. The quality is good, and can be checked and controlled. Annotations are produced quickly. We describe results for several different annotation problems. We ..."
Abstract - Cited by 213 (1 self)

Quality Management on Amazon Mechanical Turk

by Panagiotis G. Ipeirotis, Foster Provost, Jing Wang
"... Crowdsourcing services, such as Amazon Mechanical Turk, allow for easy distribution of small tasks to a large number of workers. Unfortunately, since manually verifying the quality of the submitted results is hard, malicious workers often take advantage of the verification difficulty and submit answ ..."
Abstract - Cited by 177 (8 self)
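Ipeirotis, Provost, and Wang approach this as a statistical estimation problem, using an EM-style procedure to separate a worker's unrecoverable error from recoverable bias. As a minimal sketch of the simpler baseline idea, redundant labeling with majority vote, the following Python snippet aggregates duplicate labels per task and scores each worker against the consensus; all names and data are illustrative, not taken from the paper.

    from collections import Counter

    def majority_vote(labels_by_task):
        """Aggregate redundant worker labels per task by majority vote."""
        return {task: Counter(labels).most_common(1)[0][0]
                for task, labels in labels_by_task.items()}

    def worker_accuracy(assignments, consensus):
        """Score each worker against the consensus; persistently low
        scorers are candidates for rejection or manual review."""
        correct, total = Counter(), Counter()
        for worker, task, label in assignments:
            total[worker] += 1
            correct[worker] += int(label == consensus[task])
        return {w: correct[w] / total[w] for w in total}

    # Illustrative data: three workers label two tasks; w3 is noisy.
    assignments = [
        ("w1", "t1", "cat"), ("w2", "t1", "cat"), ("w3", "t1", "dog"),
        ("w1", "t2", "dog"), ("w2", "t2", "dog"), ("w3", "t2", "dog"),
    ]
    by_task = {}
    for w, t, lab in assignments:
        by_task.setdefault(t, []).append(lab)
    consensus = majority_vote(by_task)
    print(worker_accuracy(assignments, consensus))  # w3 scores 0.5

Majority vote treats all disagreement as error; distinguishing systematic bias from random error is exactly what the EM-style estimators in this line of work add.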

Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution

by John Le, Andy Edmonds, Vaughn Hester, Lukas Biewald - In SIGIR 2010 workshop, 2010
"... The use of crowdsourcing platforms like Amazon Mechan-ical Turk for evaluating the relevance of search results has become an effective strategy that yields results quickly and inexpensively. One approach to ensure quality of worker judgments is to include an initial training period and sub-sequent s ..."
Abstract - Cited by 46 (1 self)
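Le et al. study how the placement and distribution of training questions affects judgment quality. The shared mechanism behind such schemes is to embed "gold" questions with known answers into the task stream and gate workers on their gold accuracy. A minimal sketch under that assumption; the function names, gold ratio, and threshold below are illustrative, not the paper's parameters.

    import random

    def build_queue(work_items, gold_items, gold_ratio=0.2, seed=0):
        """Interleave pre-labeled gold questions into a worker's queue so
        the worker cannot tell which items are being graded. Assumes
        enough gold items exist to cover the requested ratio."""
        rng = random.Random(seed)
        n_gold = max(1, int(len(work_items) * gold_ratio))
        queue = list(work_items) + rng.sample(list(gold_items), n_gold)
        rng.shuffle(queue)
        return queue

    def passes_screening(answers, gold_answers, threshold=0.8):
        """Accept a worker's batch only if accuracy on the embedded gold
        questions meets the threshold. Both arguments are dicts mapping
        item id -> label."""
        graded = [answers[i] == label
                  for i, label in gold_answers.items() if i in answers]
        return bool(graded) and sum(graded) / len(graded) >= threshold

Whether the gold questions are concentrated up front as training or spread throughout as ongoing screening is precisely the distributional choice the paper evaluates.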

Bonus or Not? Learn to Reward in Crowdsourcing

by Ming Yin, Yiling Chen
"... Recent work has shown that the quality of work produced in a crowdsourcing working session can be influenced by the presence of performance-contingent financial incentives, such as bonuses for exceptional performance, in the session. We take an algorithmic approach to decide when to offer bonuses in ..."
Abstract
... working session to maximize a requester’s utility. Experiments on Amazon Mechanical Turk show that our approach leads to higher utility for the requester than fixed and random bonus schemes do. Simulations on synthesized data sets further demonstrate the robustness of our approach against different ...

Learning effective human pose estimation from inaccurate annotation

by Sam Johnson, Mark Everingham - In CVPR, 2011
"... The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Rece ..."
Abstract - Cited by 42 (1 self)
... an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense ...

Inferring Users’ Preferences from Crowdsourced Pairwise Comparisons: A Matrix Completion Approach

by Jinfeng Yi, Rong Jin, Shaili Jain, Anil K. Jain
"... Inferring user preferences over a set of items is an important problem that has found numerous applications. This work fo-cuses on the scenario where the explicit feature representation of items is unavailable, a setup that is similar to collaborative filtering. In order to learn a user’s preference ..."
Abstract - Cited by 7 (0 self)
... real-world benchmark datasets for collaborative filtering and one crowdranking dataset we collected via Amazon Mechanical Turk shows the promising performance of the proposed algorithm compared to the state-of-the-art approaches.
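Yi et al. cast preference inference as completing a partially observed low-rank matrix. Purely as a generic illustration of matrix completion (a hard-impute style loop, not the paper's algorithm), the numpy sketch below recovers a low-rank matrix from a random subset of its entries; the sizes, rank, and iteration count are arbitrary.

    import numpy as np

    def complete_low_rank(M, mask, rank=2, iters=200):
        """Fill the missing entries of M (where mask is False) by
        alternating a truncated-SVD low-rank fit with re-imposing
        the observed entries."""
        X = np.where(mask, M, 0.0)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r approximation
            X = np.where(mask, M, low)                  # keep observed entries
        return X

    # Toy data: an exactly rank-2 "preference score" matrix with roughly
    # 60% of its entries observed.
    rng = np.random.default_rng(0)
    truth = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 10))
    mask = rng.random((8, 10)) < 0.6
    recovered = complete_low_rank(np.where(mask, truth, 0.0), mask)
    print(np.abs(recovered - truth)[~mask].mean())  # held-out error

In the paper's setting the observed entries come from crowdsourced pairwise comparisons rather than direct scores; turning comparisons into matrix entries is the step this sketch deliberately leaves out.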

Cheap and Fast — But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks

by Rion Snow, Daniel Jurafsky, Andrew Y. Ng
"... Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon’s Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors ove ..."
Abstract - Cited by 247 (4 self)

Running experiments on Amazon Mechanical Turk

by Gabriele Paolacci, Jesse Chandler, Panagiotis G. Ipeirotis - Judgment and Decision Making, 2010
"... Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechan ..."
Abstract - Cited by 57 (0 self)