Results 1 - 10 of 206
uWave: Accelerometer-based Personalized Gesture Recognition and Its Applications
Cited by 79 (2 self)
The proliferation of accelerometers on consumer electronics has brought an opportunity for interaction based on gestures or physical manipulation of the devices. We present uWave, an efficient recognition algorithm for such interaction using a single three-axis accelerometer. Unlike statistical methods, uWave requires a single training sample for each gesture pattern and allows users to employ personalized gestures and physical manipulations. We evaluate uWave using a large gesture library with over 4000 samples collected from eight users over an extended period of time, for a gesture vocabulary with eight gesture patterns identified by a Nokia research study. uWave achieves 98.6% accuracy, competitive with statistical methods that require significantly more training samples. To the best of our knowledge, our evaluation data set is the largest and most extensive in published studies. We also present applications of uWave in gesture-based user authentication and in interaction with three-dimensional mobile user interfaces using user-created gestures. Keywords: gesture recognition, acceleration, dynamic time warping, personalized gesture
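The core idea named in the abstract's keywords, dynamic time warping against a single stored template per gesture, can be sketched as follows. This is an illustrative reconstruction, not uWave's code; all names are ours, and the paper's actual pipeline includes quantization and other details.

```python
import math

def dtw_distance(a, b):
    """DTW distance between two sequences of 3-axis accelerometer
    samples, each sample a tuple (x, y, z)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])     # per-sample Euclidean distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

def recognize(candidate, templates):
    """Nearest template under DTW; `templates` maps label -> one sample,
    reflecting the single-training-sample-per-gesture setup."""
    return min(templates, key=lambda label: dtw_distance(candidate, templates[label]))
```

Because only one template per gesture is stored, adding a personalized gesture amounts to recording one example and inserting it into the dictionary.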
OctoPocus: a dynamic guide for learning gesture-based command sets
- In Proc. of ACM UIST, 2008
Cited by 70 (7 self)
We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms, and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experiments showing that OctoPocus is significantly faster and improves learning of arbitrary gestures compared to conventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu. ACM Classification: D.2.2 [Software Engineering]: Design
Protractor: A Fast and Accurate Gesture Recognizer. CHI'10
Cited by 47 (4 self)
Protractor is a novel gesture recognizer that can be easily implemented and quickly customized for different users. Protractor uses a nearest-neighbor approach, which recognizes an unknown gesture based on its similarity to each of the known gestures, e.g., training samples or examples given by a user. In particular, it employs a novel method to measure the similarity between gestures, calculating a minimum angular distance between them with a closed-form solution. As a result, Protractor is more accurate, naturally covers more gesture variation, runs significantly faster, and uses much less memory than its peers. This makes Protractor suitable for mobile computing, which is limited in processing power and memory. An evaluation on both a previously published gesture data set and a newly collected gesture data set indicates that Protractor outperforms its peers in many aspects. Author Keywords: gesture-based interaction, gesture recognition, template-based
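The closed-form angular-distance step can be sketched as below. This is an illustrative reconstruction under our own naming; the paper's full preprocessing (resampling each gesture to a fixed number of points, translating to the centroid, and optional orientation handling) is assumed to have already produced two equal-length vectors.

```python
import math

def _normalize(vec):
    """Scale a flattened gesture vector to unit length."""
    m = math.sqrt(sum(c * c for c in vec))
    return [c / m for c in vec]

def angular_distance(u, v):
    """Minimum angular distance between two preprocessed gesture vectors,
    each flattened as [x1, y1, x2, y2, ...] with the same point count.
    The optimal rotation of v toward u has a closed form, so no
    iterative search is needed."""
    u, v = _normalize(u), _normalize(v)
    a = b = 0.0
    for i in range(0, len(u), 2):
        a += u[i] * v[i] + u[i + 1] * v[i + 1]    # dot-product term
        b += u[i] * v[i + 1] - u[i + 1] * v[i]    # cross-product term
    # Rotating v by atan2(b, a) maximizes the cosine similarity, and the
    # maximized cosine equals sqrt(a^2 + b^2); clamp for float safety.
    return math.acos(min(1.0, math.hypot(a, b)))
```

Note how a gesture and a rotated copy of it score a distance of (near) zero, which is the rotation-invariance property the abstract alludes to.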
The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures
Cited by 44 (10 self)
Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that “aligns” data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART. Author Keywords: Statistics, analysis of variance, ANOVA, factorial analysis, nonparametric data, F-test.
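A minimal sketch of the align-then-rank preprocessing for a two-factor design follows. This is illustrative only (the field names and the one-observation-per-cell test data are ours, and ARTool implements the full, N-factor procedure); after these two steps, an ordinary ANOVA F-test is run on the ranks.

```python
from statistics import mean

def align(data, effect):
    """Align responses for one effect in a two-factor design: subtract the
    cell mean (removing all effects), then add back only the estimate of
    the effect of interest. `data` is a list of dicts with keys "A", "B",
    and "y"; `effect` is "A", "B", or "AB"."""
    grand = mean(d["y"] for d in data)
    marg = lambda f, lv: mean(d["y"] for d in data if d[f] == lv)
    cell = lambda a, b: mean(d["y"] for d in data if d["A"] == a and d["B"] == b)
    out = []
    for d in data:
        resid = d["y"] - cell(d["A"], d["B"])
        if effect == "AB":  # interaction estimate
            est = cell(d["A"], d["B"]) - marg("A", d["A"]) - marg("B", d["B"]) + grand
        else:               # main-effect estimate
            est = marg(effect, d[effect]) - grand
        out.append(resid + est)
    return out

def ranks(values):
    """Averaged (mid-) ranks, 1-based, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r
```

The key property is visible in the alignment: data aligned for factor B carries no trace of a pure factor-A effect, which is what lets each aligned-and-ranked copy be tested for its own effect without contamination.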
Searching and mining trillions of time series subsequences under dynamic time warping
- In SIGKDD, 2012
Cited by 43 (3 self)
Most time series data mining algorithms use similarity search as a core subroutine, and thus the time taken for similarity search is the bottleneck for virtually all time series data mining algorithms. The difficulty of scaling search to large datasets largely explains why most academic work on time series data mining has plateaued at considering a few million time series objects, while much of industry and science sits on billions of time series objects waiting to be explored. In this work we show that by using a combination of four novel ideas we can search and mine truly massive time series for the first time. We demonstrate the following extremely unintuitive fact: in large datasets we can exactly search under DTW much more quickly than the current state-of-the-art Euclidean distance search algorithms. We demonstrate our work on the largest set of time series experiments ever attempted. In particular, the largest dataset we consider is larger than the combined size of all of the time series datasets considered in all data mining papers ever published. We show that our ideas allow us to solve higher-level time series data mining problems such as motif discovery and clustering at scales that would otherwise be untenable. In addition to mining massive datasets, we show that our ideas also have implications for real-time monitoring of data streams, allowing us to handle much faster arrival rates and/or use cheaper and lower-powered devices than are currently possible.
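A standard ingredient in fast exact DTW search of this kind is a cheap lower bound used to discard candidates before the full quadratic DTW computation. Below is a minimal sketch of the widely used LB_Keogh bound for band-constrained DTW; it is an illustration of the pruning principle, not the authors' optimized code.

```python
def lb_keogh(query, candidate, r):
    """LB_Keogh lower bound with warping window r: build a running
    min/max envelope around the candidate and charge the query only for
    points that escape the envelope. The result never exceeds the true
    band-constrained DTW distance, so lb_keogh > best_so_far lets a
    candidate be pruned without computing DTW at all."""
    total = 0.0
    for i, q in enumerate(query):
        window = candidate[max(0, i - r): i + r + 1]
        lo, hi = min(window), max(window)
        if q > hi:
            total += (q - hi) ** 2   # above the envelope
        elif q < lo:
            total += (lo - q) ** 2   # below the envelope
    return total
```

Because the bound is linear-time while DTW is quadratic, even a modest pruning rate shifts the bottleneck; cascading several such bounds from cheapest to tightest is how very large collections become searchable.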
A Lightweight Multistroke Recognizer for User Interface Prototypes
Cited by 41 (9 self)
With the expansion of pen- and touch-based computing, new user interface prototypes may incorporate stroke gestures. Many gestures comprise multiple strokes, but building state-of-the-art multistroke gesture recognizers is nontrivial and time-consuming. Luckily, user interface prototypes often do not require state-of-the-art recognizers that are general and maintainable, due to the simpler nature of most user interface gestures. To enable easy incorporation of multistroke recognition in user interface prototypes, we present $N, a lightweight, concise multistroke recognizer that uses only simple geometry and trigonometry. A full pseudocode listing is given as an appendix. $N is a significant extension to the $1 unistroke recognizer, which has seen quick uptake in prototypes but has key limitations. $N goes further by (1) recognizing gestures comprising multiple strokes, (2) automatically generalizing from one multistroke to all possible multistrokes using alternative stroke orders and directions, (3) recognizing one-dimensional gestures such as lines, and (4) providing bounded rotation invariance. In addition, $N uses two speed optimizations, one based on start angles that saves 79.1% of comparisons and increases accuracy 1.3%. The other, which is optional, compares multistroke templates and candidates only if they have the same number of strokes, reducing comparisons further to 89.5% and increasing accuracy another 1.7%. These results are taken from our study of algebra symbols entered in situ by middle and high schoolers using a math tutor prototype, on which $N was 96.6% accurate with 15 templates.
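The generalization step described in point (2), turning one multistroke example into every stroke order and direction, can be sketched as below. Names are illustrative; the real $N then resamples, rotates, and scales each generated unistroke before template matching.

```python
from itertools import permutations, product

def make_unistrokes(strokes):
    """From one multistroke (a list of strokes, each a list of (x, y)
    points), generate every unistroke obtained by choosing a stroke
    order and a direction for each stroke. For k strokes this yields
    k! * 2**k unistroke permutations."""
    results = []
    for order in permutations(strokes):
        for flips in product([False, True], repeat=len(strokes)):
            pts = []
            for stroke, flip in zip(order, flips):
                # Reverse a stroke to model drawing it in the other direction.
                pts.extend(reversed(stroke) if flip else stroke)
            results.append(pts)
    return results
```

This is why a user supplies only one example per gesture: a two-stroke "X", for instance, expands into eight unistroke templates covering every way it might be drawn.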
Using strokes as command shortcuts: cognitive benefits and toolkit support
- In Proceedings of the 27th International Conference on Human Factors in Computing Systems, 2009
Cited by 37 (0 self)
This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the performance and ease of learning of stroke shortcuts in comparison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of shortcuts reached the same level of performance with enough practice, stroke shortcuts had substantial cognitive advantages in learning and recall. With the same amount of practice, users could successfully recall more shortcuts and make fewer errors with stroke shortcuts than with keyboard shortcuts. The second half of the paper focuses on UI development support and articulates guidelines for toolkits to implement stroke shortcuts in a wide range of software applications. We illustrate how to apply these guidelines by introducing the Stroke Shortcuts Toolkit (SST), a library for adding stroke shortcuts to Java Swing applications with just a few lines of code.
Gestalt: Integrated Support for Implementation and Analysis in Machine Learning
Cited by 25 (3 self)
We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows this significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.
Sketch and run: a stroke-based interface for home robots
- In CHI '09: Proceedings of the 27th International Conference on Human Factors in Computing Systems
Cited by 24 (6 self)
Numerous robots have been developed, and some of them are already being used in homes, institutions, and workplaces. Despite the development of useful robot functions, the focus so far has not been on the user interfaces of robots. General users of robots find it hard to understand what the robots are doing and what kind of work they can do. This paper presents an interface for commanding home robots using stroke gestures on a computer screen. This interface allows the user to control robots and design their behaviors by sketching the robots' behaviors and actions on a top-down view from ceiling cameras. To convey a feeling of directly controlling the robots, our interface employs the live camera view. In this study, we focused on a house-cleaning task that is typical of home robots, and developed a sketch interface for designing the behaviors of vacuuming robots. Author Keywords: stroke-based interface, sketching interface, stroke gesture, home robot, human-robot interaction
Computational support for sketching in design: A review
- Foundations and Trends in Human-Computer Interaction, 2009
Cited by 24 (4 self)
Computational support for sketching is an exciting research area at the intersection of design research, human-computer interaction, and artificial intelligence. Despite the prevalence of software tools, most designers begin their work with physical sketches. Modern computational tools largely treat design as a linear process beginning with a specific problem and ending with a specific solution. Sketch-based design tools offer another approach that may fit design practice better. This article surveys literature related to such tools. First, we describe the practical basis of sketching: why people sketch, what significance it has in design and problem solving, and the cognitive activities it supports. Second, we survey computational support for sketching, including methods for performing sketch recognition and managing ambiguity, techniques for modeling recognizable elements, and human-computer interaction techniques for working with sketches. Last, we propose challenges and opportunities for future advances in this field.