Results 1 - 10 of 70
ShadowGuides: Visualizations for In-Situ Learning of Multi-Touch and Whole-Hand Gestures
"... We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user‟s current hand posture as interpreted by the system (feedback) and available postures ..."
Cited by 37 (4 self)
Abstract:
We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system.
Author Keywords: Gesture learning, multi-finger, displacement, marking menus.
ACM Classification Keywords: H.5.2 [Information interfaces and presentation]: User Interfaces – Input devices and strategies; Graphical user interfaces.
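The feedback/feedforward split described above can be made concrete with a small data-model sketch. This is our own illustrative Java toy (the types Posture and Gesture and the helper feedforward are hypothetical, not from the paper): feedback is the posture the recognizer currently reports, and feedforward is, for each gesture whose prefix matches what has been seen so far, the steps that remain.

    import java.util.*;

    public final class ShadowGuideSketch {
        record Posture(int fingers, String shape) {}            // e.g., 5 fingers, "flat-hand"
        record Gesture(String command, List<Posture> steps) {}

        // For every registered gesture whose prefix matches what the recognizer
        // has seen so far (feedback), return the remaining steps (feedforward).
        static Map<String, List<Posture>> feedforward(List<Posture> seenSoFar,
                                                      List<Gesture> registry) {
            Map<String, List<Posture>> remaining = new LinkedHashMap<>();
            for (Gesture g : registry) {
                if (g.steps().size() >= seenSoFar.size()
                        && g.steps().subList(0, seenSoFar.size()).equals(seenSoFar)) {
                    remaining.put(g.command(),
                            g.steps().subList(seenSoFar.size(), g.steps().size()));
                }
            }
            return remaining;   // a real system would render these as on-surface overlays
        }

        public static void main(String[] args) {
            Posture flat = new Posture(5, "flat-hand");
            Posture fist = new Posture(0, "fist");
            List<Gesture> registry = List.of(
                    new Gesture("clear-canvas", List.of(flat, fist)),
                    new Gesture("pan", List.of(flat, flat)));
            // The user currently holds a flat hand; both gestures are still reachable.
            System.out.println(feedforward(List.of(flat), registry));
        }
    }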
Using strokes as command shortcuts: cognitive benefits and toolkit support
In Proceedings of the 27th International Conference on Human Factors in Computing Systems, 2009
"... This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the per-formance and ease of learning of stroke shortcuts in com-parison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of short ..."
Cited by 37 (0 self)
Abstract:
This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the performance and ease of learning of stroke shortcuts in comparison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of shortcuts had the same level of performance with enough practice, stroke shortcuts had substantial cognitive advantages in learning and recall. With the same amount of practice, users could successfully recall more shortcuts and make fewer errors with stroke shortcuts than with keyboard shortcuts. The second half of the paper focuses on UI development support and articulates guidelines for toolkits to implement stroke shortcuts in a wide range of software applications. We illustrate how to apply these guidelines by introducing the Stroke Shortcuts Toolkit (SST), a library for adding stroke shortcuts to Java Swing applications with just a few lines of code.
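The abstract does not spell out SST's actual API, so the following is only a self-contained Java Swing toy of the same idea, with all names our own: a drag path on a component is quantized into 4-way direction tokens and matched against registered shortcut strings on release.

    import javax.swing.*;
    import java.awt.event.*;
    import java.util.*;

    public class StrokeShortcutDemo {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Stroke shortcut sketch");
            JPanel canvas = new JPanel();
            StringBuilder dirs = new StringBuilder();
            Map<String, Runnable> shortcuts = new HashMap<>();
            shortcuts.put("ES", () -> System.out.println("Save"));   // right, then down
            shortcuts.put("SW", () -> System.out.println("Close"));  // down, then left

            MouseAdapter tracker = new MouseAdapter() {
                int lastX, lastY;
                public void mousePressed(MouseEvent e) {
                    lastX = e.getX(); lastY = e.getY(); dirs.setLength(0);
                }
                public void mouseDragged(MouseEvent e) {
                    int dx = e.getX() - lastX, dy = e.getY() - lastY;
                    if (dx * dx + dy * dy < 400) return;            // ignore jitter under ~20 px
                    char d = Math.abs(dx) > Math.abs(dy) ? (dx > 0 ? 'E' : 'W')
                                                         : (dy > 0 ? 'S' : 'N');
                    if (dirs.length() == 0 || dirs.charAt(dirs.length() - 1) != d) dirs.append(d);
                    lastX = e.getX(); lastY = e.getY();
                }
                public void mouseReleased(MouseEvent e) {
                    Runnable cmd = shortcuts.get(dirs.toString());
                    if (cmd != null) cmd.run();                     // fire the matched command
                }
            };
            canvas.addMouseListener(tracker);
            canvas.addMouseMotionListener(tracker);
            frame.add(canvas);
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }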
The design and evaluation of multitouch marking menus
In Proc. CHI, 2010
"... Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components have not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Mark ..."
Cited by 24 (0 self)
Abstract:
Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components has not been developed. Menu systems pose a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, in particular the high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchical marking menus, reducing acquisition times in both novice and expert usage modalities.
Author Keywords: Multi-finger input, multi-touch displays, marking menus.
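As a rough illustration of how a directional chord might map to a menu item, here is a hedged Java sketch (our own, not the authors' implementation): the finger count selects the chord group, and the stroke angle of the touch centroid, quantized into eight 45-degree sectors, selects the item within it, so one flat chording stroke replaces a level of menu depth.

    public final class ChordMenuSelect {
        /** One finger's start and end position, in pixels. */
        public record Stroke(double x0, double y0, double x1, double y1) {}

        /** Returns a sector 0..7 (0 = east, counter-clockwise in screen space). */
        public static int sector(Stroke[] chord) {
            double dx = 0, dy = 0;
            for (Stroke s : chord) {          // displacement of the touch centroid
                dx += (s.x1() - s.x0()) / chord.length;
                dy += (s.y1() - s.y0()) / chord.length;
            }
            double angle = Math.atan2(-dy, dx);                  // screen y grows downward
            if (angle < 0) angle += 2 * Math.PI;
            return (int) Math.round(angle / (Math.PI / 4)) % 8;  // quantize to 45-degree bins
        }

        public static void main(String[] args) {
            Stroke[] twoFingerNE = {
                new Stroke(100, 100, 160, 40),                   // both fingers move up-right
                new Stroke(140, 100, 200, 40)
            };
            // Finger count picks the chord group; sector picks the item (here 1 = NE).
            System.out.println("fingers=" + twoFingerNE.length
                    + " sector=" + sector(twoFingerNE));
        }
    }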
Towards a Formalization of Multi-touch Gestures
2010
"... Multi-touch is a technology which offers new styles of interaction compared to traditional input devices like keyboard and mouse. Users can quickly manipulate objects or execute commands by means of their fingers and hands. Current multi-touch frameworks offer a set of standard gestures that are eas ..."
Cited by 19 (2 self)
Abstract:
Multi-touch is a technology that offers new styles of interaction compared to traditional input devices like the keyboard and mouse. Users can quickly manipulate objects or execute commands by means of their fingers and hands. Current multi-touch frameworks offer a set of standard gestures that are easy to use when developing an application. In contrast, defining new gestures requires a lot of work involving low-level recognition of touch data. To address this problem, we contribute a discussion of strategies towards a formalization of gestural interaction on multi-touch surfaces. A test environment is presented, showing the applicability and benefit within multi-touch frameworks.
ACM Classification: H.5.2 [Information interfaces and presentation]
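The abstract argues for declarative gesture definitions in place of hand-written low-level recognition. A minimal Java sketch of what such a formalization could look like (our own toy notation, not the paper's formalism): gestures are composed from atomic touch primitives, and a framework matches observed traces against the definitions.

    import java.util.List;

    public final class GestureGrammarSketch {
        enum Atom { DOWN, MOVE_N, MOVE_E, MOVE_S, MOVE_W, HOLD, UP }

        /** A gesture definition: a named atom sequence required of each finger. */
        record Gesture(String name, int fingers, List<Atom> perFinger) {}

        /** Match an observed per-finger atom trace against a definition. */
        static boolean matches(Gesture g, int fingers, List<Atom> trace) {
            return g.fingers() == fingers && g.perFinger().equals(trace);
        }

        public static void main(String[] args) {
            // "Two-finger swipe east": both fingers touch down, move east, lift.
            Gesture swipeEast = new Gesture("swipe-east", 2,
                    List.of(Atom.DOWN, Atom.MOVE_E, Atom.UP));
            List<Atom> observed = List.of(Atom.DOWN, Atom.MOVE_E, Atom.UP);
            System.out.println(matches(swipeEast, 2, observed));  // true
        }
    }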
GestureBar: improving the approachability of gesture-based interfaces
In Proc. of CHI '09
"... GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience which is in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, richly disclo ..."
Cited by 18 (2 self)
Abstract:
GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, it richly discloses how to execute commands with gestures, through animated images, detail tips, and an out-of-document practice area. GestureBar's simple design is also general enough for use with any recognition technique and for integration with standard, non-gestural UI components. We evaluate GestureBar in a formal experiment showing that, when using GestureBar, users can perform complex, ecologically valid tasks in a purely gestural system without training, introduction, or prior gesture experience, discovering and learning a high percentage of the gestures needed to perform the tasks optimally, and significantly outperforming a state-of-the-art crib sheet. The relative contribution of the major design elements of GestureBar is also explored. A second experiment shows that GestureBar is preferred to a basic crib sheet and two enhanced crib sheet variations.
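The core disclosure idea, a toolbar item that teaches its gesture instead of executing its command, fits in a few lines of Swing. This is our own minimal sketch, not GestureBar's implementation (which adds animated images, detail tips, and a practice area):

    import javax.swing.*;

    public final class GestureToolbarSketch {
        public static void main(String[] args) {
            JFrame frame = new JFrame("GestureBar-style disclosure");
            JToolBar bar = new JToolBar();
            JButton cut = new JButton("Cut");
            // Clicking does NOT cut; it discloses how to invoke "cut" gesturally.
            cut.addActionListener(e -> JOptionPane.showMessageDialog(frame,
                    "To cut: draw a zig-zag stroke across the item.\n"
                    + "(A real GestureBar shows an animation and a practice area.)"));
            bar.add(cut);
            frame.add(bar, java.awt.BorderLayout.NORTH);
            frame.setSize(420, 200);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }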
TouchViz: A Case Study Comparing Two Interfaces for Data Analytics on Tablets
2013
"... Figure 1: A touch and gesture oriented interface for visual data analytics (FLUID interface). As more applications move from the desktop to touch devices like tablets, designers must wrestle with the costs of porting a design with as little revision of the UI as possible from one device to the other ..."
Cited by 15 (3 self)
Abstract:
[Figure 1: A touch- and gesture-oriented interface for visual data analytics (the FLUID interface).]
As more applications move from the desktop to touch devices like tablets, designers must wrestle with the costs of porting a design from one device to the other with as little revision of the UI as possible, or of optimizing the interaction per device. We consider the tradeoffs between two versions of a UI for working with data on a touch tablet. One interface is based on the conventional desktop metaphor (WIMP), with a control panel, push buttons, and checkboxes, where the mouse click is effectively replaced by a finger tap. The other interface (which we call FLUID) eliminates the control panel and focuses touch actions on the data visualization itself. We describe our design process and evaluation of each interface, and discuss the significantly better task performance we observed with the FLUID interface.
Scale Detection for a priori Gesture Recognition
- in "CHI ’10: Proceedings of the SIGCHI conference on Human factors in computing systems", ACM
"... Gesture-based interfaces provide expert users with an effi-cient form of interaction but they require a learning effort for novice users. To address this problem, some on-line guid-ing techniques display all available gestures in response to partial input. However, partial input recognition algorith ..."
Cited by 12 (0 self)
Abstract:
Gesture-based interfaces provide expert users with an efficient form of interaction, but they require a learning effort for novice users. To address this problem, some on-line guiding techniques display all available gestures in response to partial input. However, partial input recognition algorithms are scale dependent, while most gesture recognizers support scale independence (i.e., the same shape at different scales invokes the same command). We propose an algorithm for estimating the scale of any partial input in the context of a gesture recognition system and illustrate how it can be used to improve users' experience with gesture-based systems.
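To see why estimating the scale of partial input matters, consider a toy joint estimate of completion fraction and scale. This simplified Java heuristic is ours, not the authors' algorithm: templates are stored at unit scale, and for each candidate completion fraction p we check how well the implied scale explains where the partial stroke's endpoint landed.

    import java.awt.geom.Point2D;
    import java.util.List;

    public final class PartialScaleSketch {
        static double arcLength(List<Point2D> pts) {
            double len = 0;
            for (int i = 1; i < pts.size(); i++) len += pts.get(i).distance(pts.get(i - 1));
            return len;
        }

        /** Point at arc-length fraction p along a polyline. */
        static Point2D pointAt(List<Point2D> pts, double p) {
            double target = p * arcLength(pts), walked = 0;
            for (int i = 1; i < pts.size(); i++) {
                double seg = pts.get(i).distance(pts.get(i - 1));
                if (walked + seg >= target && seg > 0) {
                    double t = (target - walked) / seg;
                    return new Point2D.Double(
                            pts.get(i - 1).getX() + t * (pts.get(i).getX() - pts.get(i - 1).getX()),
                            pts.get(i - 1).getY() + t * (pts.get(i).getY() - pts.get(i - 1).getY()));
                }
                walked += seg;
            }
            return pts.get(pts.size() - 1);
        }

        /** Returns {bestFraction, impliedScale} for a partial input vs. a unit-scale template. */
        static double[] estimate(List<Point2D> partial, List<Point2D> template) {
            double bestP = 1, bestErr = Double.MAX_VALUE;
            for (double p = 0.1; p <= 1.0; p += 0.05) {
                double scale = arcLength(partial) / (p * arcLength(template));
                Point2D predicted = pointAt(template, p);          // where the pen should be
                double ex = scale * (predicted.getX() - template.get(0).getX());
                double ey = scale * (predicted.getY() - template.get(0).getY());
                double ax = partial.get(partial.size() - 1).getX() - partial.get(0).getX();
                double ay = partial.get(partial.size() - 1).getY() - partial.get(0).getY();
                double err = Math.hypot(ex - ax, ey - ay);
                if (err < bestErr) { bestErr = err; bestP = p; }
            }
            return new double[] { bestP, arcLength(partial) / (bestP * arcLength(template)) };
        }

        public static void main(String[] args) {
            // Unit-scale "L" template: down one unit, then right one unit.
            List<Point2D> templ = List.of(new Point2D.Double(0, 0),
                    new Point2D.Double(0, 1), new Point2D.Double(1, 1));
            // Partial input: downstroke plus half the rightstroke, drawn at 80 px scale.
            List<Point2D> partial = List.of(new Point2D.Double(10, 10),
                    new Point2D.Double(10, 90), new Point2D.Double(50, 90));
            double[] r = estimate(partial, templ);
            System.out.printf("fraction=%.2f scale=%.1f%n", r[0], r[1]);  // ~0.75, ~80
        }
    }

Note the inherent ambiguity this toy exposes: while the input is still a straight prefix, many (p, scale) pairs explain it equally well, which is exactly why scale detection for partial input is a non-trivial problem.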
Memorability of pre-designed and user-defined gesture sets
In Proc. CHI '13, 2013
"... We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier ..."
Cited by 11 (3 self)
Abstract:
We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets, in three studies with 33 participants in total. We found that user-defined gestures are easier to remember, both immediately after creation and on the next day (up to a 24% difference in recall rate compared to pre-designed gestures). We also discovered that the differences between gesture sets are mostly due to association errors (rather than gesture-form errors), that participants prefer user-defined sets, and that they think user-defined gestures take less time to learn. Finally, we contribute a qualitative analysis of the tradeoffs involved in gesture type selection and share our data and a video corpus of 66 gestures for replicability and further analysis.
Author Keywords: Gesture sets; gesture memorability; user-defined gestures.
Estimating the Perceived Difficulty of Pen Gestures
"... Abstract. Our empirical results show that users perceive the execution difficulty of single stroke gestures consistently, and execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the ..."
Cited by 11 (5 self)
Abstract:
Our empirical results show that users perceive the execution difficulty of single-stroke gestures consistently, and that execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the relative ranking of difficulty among multiple gestures, and classifying a single gesture into one of five levels of difficulty. We confirm that the CLC model does not provide an accurate prediction of production time magnitude, and instead show that a reasonably accurate estimate can be calculated using only a few gesture execution samples from a few people. Using this estimated production time, our rules on average rank gesture difficulty with 90% accuracy and rate gesture difficulty with 75% accuracy. Designers can use our results to choose application gestures, and researchers can build on our analysis in other gesture domains and for modeling gesture performance.
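The two rules can be paraphrased in code. A hedged Java sketch follows (the threshold values and helper names are ours, purely illustrative; the paper derives its own): estimate production time as the mean of a few samples, rank gestures by that estimate, and bucket the estimate into five difficulty levels.

    import java.util.*;

    public final class GestureDifficulty {
        /** Mean production time (ms) over a handful of samples from a few people. */
        static double estimateTime(double[] samplesMs) {
            return Arrays.stream(samplesMs).average().orElseThrow();
        }

        /** Rule 1: relative ranking, easiest (fastest) first. */
        static List<String> rankByDifficulty(Map<String, double[]> samples) {
            return samples.entrySet().stream()
                    .sorted(Comparator.comparingDouble(
                            (Map.Entry<String, double[]> e) -> estimateTime(e.getValue())))
                    .map(Map.Entry::getKey)
                    .toList();
        }

        /** Rule 2: classify into five levels; cut points here are hypothetical. */
        static int difficultyLevel(double estimatedMs) {
            double[] cuts = { 900, 1400, 2000, 2800 };   // illustrative boundaries only
            int level = 1;
            for (double c : cuts) if (estimatedMs > c) level++;
            return level;                                // 1 (easy) .. 5 (hard)
        }

        public static void main(String[] args) {
            Map<String, double[]> samples = Map.of(
                    "circle",   new double[] { 700, 820, 760 },
                    "star",     new double[] { 2400, 2600, 2500 },
                    "question", new double[] { 1500, 1600, 1450 });
            System.out.println(rankByDifficulty(samples));   // [circle, question, star]
            System.out.println(difficultyLevel(
                    estimateTime(samples.get("star"))));     // level 4 of 5
        }
    }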
LightGuide: Projected Visualizations for Hand Movement Guidance
"... a b c d Figure 1. An overview of the range of 3D cues we created to help guide a user’s movement. In (a), a user is shown a 2D arrow with a circle that moves in the horizontal plane, (b) shows a 3D arrow, (c) a 3D path where blue indicates the movement trajectory and (d) uses positive and negative s ..."
Cited by 10 (1 self)
Abstract:
[Figure 1: An overview of the range of 3D cues created to help guide a user's movement: (a) a 2D arrow with a circle that moves in the horizontal plane, (b) a 3D arrow, (c) a 3D path where blue indicates the movement trajectory, and (d) positive and negative spatial coloring with an arrow on the user's hand to indicate depth.]
LightGuide is a system that explores a new approach to gesture guidance in which guidance hints are projected directly on a user's body. These projected hints guide the user in completing the desired motion with their body part, which is particularly useful for movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector, and we present four novel interaction techniques focused on guiding a user's hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand-animation videos on a computer display, in both time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6 mm, nearly 85% more accurately than when guided by video.
Author Keywords: On-demand interfaces; on-body computing; appropriated