OctoPocus: a dynamic guide for learning gesture-based command sets (2008)

by O. Bau, W. Mackay
Venue: Proc. UIST

Results 1 - 10 of 70

ShadowGuides: Visualizations for In-Situ Learning of Multi-Touch and Whole-Hand Gestures

by Dustin Freeman, Hrvoje Benko, Meredith Ringel Morris, Daniel Wigdor
"... We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user‟s current hand posture as interpreted by the system (feedback) and available postures ..."
Abstract - Cited by 37 (4 self) - Add to MetaCart
We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system. Author Keywords: gesture learning, multi-finger, displacement, marking menus. ACM Classification Keywords: H.5.2 [Information interfaces and presentation]: User Interfaces - Input devices and strategies; Graphical user interfaces.

Citation Context

...beyond the basic spatial manipulations described by Shneiderman [13]. Several learning techniques use in-situ visuals that enable the user to learn while doing single-touch and pen gestures (e.g. [1,9]). Such techniques provide a gradual transition from novice to expert use without requiring any drastic change. However, teaching multi-touch and whole-hand gestures is a larger problem than single-to...

Using strokes as command shortcuts: cognitive benefits and toolkit support

by Shumin Zhai - In Proceedings of the 27th international conference on Human factors in computing systems, 2009
"... This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the per-formance and ease of learning of stroke shortcuts in com-parison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of short ..."
Abstract - Cited by 37 (0 self) - Add to MetaCart
This paper investigates using stroke gestures as shortcuts to menu selection. We first experimentally measured the performance and ease of learning of stroke shortcuts in comparison to keyboard shortcuts when there is no mnemonic link between the shortcut and the command. While both types of shortcuts had the same level of performance with enough practice, stroke shortcuts had substantial cognitive advantages in learning and recall. With the same amount of practice, users could successfully recall more shortcuts and make fewer errors with stroke shortcuts than with keyboard shortcuts. The second half of the paper focuses on UI development support and articulates guidelines for toolkits to implement stroke shortcuts in a wide range of software applications. We illustrate how to apply these guidelines by introducing the Stroke Shortcuts Toolkit (SST), which is a library for adding stroke shortcuts to Java Swing applications with just a few lines of code.
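The abstract stops short of showing SST's API. Purely as a hedged sketch of what a stroke-shortcut registrar for Swing could look like, the class below records a mouse stroke on any component and matches it against registered templates; every name here (StrokeShortcuts, install, register) is invented for illustration, and the resample-and-compare matcher is a crude baseline, not SST's recognizer.

import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.swing.JComponent;

// Hypothetical sketch, not SST's real API. A stroke drawn on a component is
// resampled and compared to each registered template by mean point distance;
// the closest template under a threshold fires its action.
public class StrokeShortcuts extends MouseAdapter {
    private static final int N = 32;              // points per resampled stroke
    private static final double THRESHOLD = 40.0; // mean px distance to accept

    private final Map<String, List<Point>> templates = new HashMap<>();
    private final Map<String, Runnable> actions = new HashMap<>();
    private final List<Point> stroke = new ArrayList<>();

    public void install(JComponent c) {           // attach to any Swing component
        c.addMouseListener(this);
        c.addMouseMotionListener(this);
    }

    public void register(String name, List<Point> template, Runnable action) {
        templates.put(name, normalize(template));
        actions.put(name, action);
    }

    @Override public void mousePressed(MouseEvent e) { stroke.clear(); stroke.add(e.getPoint()); }
    @Override public void mouseDragged(MouseEvent e) { stroke.add(e.getPoint()); }

    @Override public void mouseReleased(MouseEvent e) {
        if (stroke.size() < 2) return;
        List<Point> in = normalize(stroke);
        String best = null;
        double bestDist = THRESHOLD;
        for (Map.Entry<String, List<Point>> t : templates.entrySet()) {
            double d = 0;
            for (int i = 0; i < N; i++) d += in.get(i).distance(t.getValue().get(i));
            d /= N;                                // mean per-point distance
            if (d < bestDist) { bestDist = d; best = t.getKey(); }
        }
        if (best != null) actions.get(best).run(); // fire the matched shortcut
    }

    // Resample to N points evenly spaced along the arc length, then translate
    // so the stroke starts at the origin (translation-invariant, scale-naive).
    private static List<Point> normalize(List<Point> pts) {
        double total = 0;
        for (int i = 1; i < pts.size(); i++) total += pts.get(i - 1).distance(pts.get(i));
        double step = total / (N - 1), acc = 0;
        List<Point> out = new ArrayList<>();
        out.add(new Point(pts.get(0)));
        for (int i = 1; i < pts.size() && out.size() < N; i++) {
            Point a = pts.get(i - 1), b = pts.get(i);
            double d = a.distance(b);
            while (acc + d >= step && out.size() < N) {
                double t = (step - acc) / d;
                a = new Point((int) Math.round(a.x + t * (b.x - a.x)),
                              (int) Math.round(a.y + t * (b.y - a.y)));
                out.add(new Point(a));
                d = a.distance(b);
                acc = 0;
            }
            acc += d;
        }
        while (out.size() < N) out.add(new Point(pts.get(pts.size() - 1))); // pad rounding gaps
        int ox = out.get(0).x, oy = out.get(0).y;
        for (Point p : out) p.translate(-ox, -oy);
        return out;
    }
}

A caller would create one StrokeShortcuts per drawing surface, install() it, and register() each command's template stroke together with a Runnable to execute on a match.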

Citation Context

...ape channel of ShapeWriter ([9]). (3) Make stroke shortcuts visible to end users. A well-known and important drawback of using strokes to activate commands is that these strokes are not self-revealing [13, 6, 1]. In other words, as opposed to buttons and menus, the user cannot guess which stroke-based commands are available and which stroke triggers which command. Often novel features of an interface are unu...

The design and evaluation of multitouch marking menus

by G. Julian Lepinski, Tovi Grossman, George Fitzmaurice - In Proc. CHI, 2010
"... Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components have not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Mark ..."
Abstract - Cited by 24 (0 self) - Add to MetaCart
Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components has not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate the human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, and in particular, high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu, and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchical marking menus, reducing acquisition times in both novice and expert usage modalities. Author Keywords: multi-finger input, multi-touch displays, marking menus.

Citation Context

...W, W, NW), we obtain a set of 8 x 31 = 248 gestures. Figure 4 shows the 8 gestures for one of the chords. This design gives us a large gesture set, without any compound strokes [35], or iconic shapes [3]. Lift-and-Stroke Gestures: A requirement of Directional Chording Gestures is that the chord itself must first be recognized. This is a challenge, since multitouch technologies typically do not recogni...
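As an aside on the arithmetic, the 248 figure follows directly if the 31 chords are taken to be the non-empty subsets of the five fingers of one hand, an assumption consistent with the count though the snippet above does not spell it out:

public class ChordCount {
    public static void main(String[] args) {
        int chords = (1 << 5) - 1; // non-empty subsets of 5 fingers: 2^5 - 1 = 31
        int directions = 8;        // N, NE, E, SE, S, SW, W, NW
        System.out.println(chords * directions); // prints 248
    }
}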

Towards a Formalization of Multi-touch Gestures

by Dietrich Kammer, Jan Wojdziak, Y. Keck, Rainer Groh, Severin Taranko (Technische Universität Dresden), 2010
"... Multi-touch is a technology which offers new styles of interaction compared to traditional input devices like keyboard and mouse. Users can quickly manipulate objects or execute commands by means of their fingers and hands. Current multi-touch frameworks offer a set of standard gestures that are eas ..."
Abstract - Cited by 19 (2 self) - Add to MetaCart
Multi-touch is a technology which offers new styles of interaction compared to traditional input devices like keyboard and mouse. Users can quickly manipulate objects or execute commands by means of their fingers and hands. Current multi-touch frameworks offer a set of standard gestures that are easy to use when developing an application. In contrast, defining new gestures requires a lot of work involving low-level recognition of touch data. To address this problem, we contribute a discussion of strategies towards a formalization of gestural interaction on multi-touch surfaces. A test environment is presented, showing the applicability and benefit within multi-touch frameworks. ACM Classification: H5.2 [Information interfaces and presentation].
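The abstract does not reproduce the proposed formalism itself. Purely as an illustration of the general idea of formalizing gestures as composable terms (a generic sketch, not GeForMT's actual grammar or types), one can model gestures as atomic motions closed under sequential and parallel composition:

// Generic illustration of a gesture formalization as composable terms,
// not GeForMT's actual grammar. Requires Java 17+ for sealed types.
sealed interface Gesture permits Atomic, Seq, Par {}
record Atomic(int fingers, String motion) implements Gesture {} // e.g. (1, "LINE_EAST")
record Seq(Gesture first, Gesture then) implements Gesture {}   // one after the other
record Par(Gesture a, Gesture b) implements Gesture {}          // performed simultaneously

class GestureDemo {
    public static void main(String[] args) {
        // A two-finger pinch expressed as two simultaneous inward strokes:
        Gesture pinch = new Par(new Atomic(1, "LINE_EAST"), new Atomic(1, "LINE_WEST"));
        System.out.println(pinch);
    }
}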

Citation Context

...matics in HCI have been explored in depth elsewhere [4]. GeForMT is suited to establish gesture lexicons (gesticons) which can be used to create uniform feedback and feed-forward as shown by [36] and [2], respectively. These mechanisms can substantially aid the user when dealing with multitouch applications. Since pragmatics is the most complex aspect of semiotics, a more detailed discussion is out o...

GestureBar: improving the approachability of gesture-based interfaces

by Andrew Bragdon, Robert Zeleznik, Brian Williamson, Timothy Miller, Joseph J. LaViola - In Proc. of CHI'09
"... GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience which is in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, richly disclo ..."
Abstract - Cited by 18 (2 self) - Add to MetaCart
GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience which is in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, richly discloses how to execute commands with gestures, through animated images, detail tips and an out-of-document practice area. GestureBar’s simple design is also general enough for use with any recognition technique and for integration with standard, non-gestural UI components. We evaluate GestureBar in a formal experiment showing that users can perform complex, ecologically valid tasks in a purely gestural system without training, introduction, or prior gesture experience when using GestureBar, discovering and learning a high percentage of the gestures needed to perform the tasks optimally, and significantly outperforming a state of the art crib sheet. The relative contribution of the major design elements of GestureBar is also explored. A second experiment shows that GestureBar is preferred to a basic crib sheet and two enhanced crib sheet variations.

Citation Context

...r major design elements. RELATED WORK: Gestural UI research spans a broad set of topics, including: creating gesture sets [16] [12], disclosing gestural functions and teaching individual gestures [15] [3], recognizing individual gestures [25] [23], and correcting recognition results [17] [14]. Although each of these areas influences the usability, our GestureBar work focuses specifically on novice app...

TouchViz: A Case Study Comparing Two Interfaces for Data Analytics on Tablets

by Steven M. Drucker, Danyel Fisher, Ramik Sadana, Jessica Herron, M. C. Schraefel, 2013
"... Figure 1: A touch and gesture oriented interface for visual data analytics (FLUID interface). As more applications move from the desktop to touch devices like tablets, designers must wrestle with the costs of porting a design with as little revision of the UI as possible from one device to the other ..."
Abstract - Cited by 15 (3 self) - Add to MetaCart
Figure 1: A touch and gesture oriented interface for visual data analytics (FLUID interface). As more applications move from the desktop to touch devices like tablets, designers must wrestle with the costs of porting a design with as little revision of the UI as possible from one device to the other, or of optimizing the interaction per device. We consider the tradeoffs between two versions of a UI for working with data on a touch tablet. One interface is based on using the conventional desktop metaphor (WIMP) with a control panel, push buttons, and checkboxes, where the mouse click is effectively replaced by a finger tap. The other interface (which we call FLUID) eliminates the control panel and focuses touch actions on the data visualization itself. We describe our design process and evaluation of each interface. We discuss the significantly better task ...

Citation Context

...sorting was done through swiping, or filtering via flicking up or down). Ways of helping people discover the affordances of the FLUID interface are an interesting area to explore. Work by Bau & Mackay [2] as well as others explore techniques for assisting in learning gestures which would be useful in exploring in a more longitudinal study. Comparisons using mouse based interaction: The two conditions t...

Scale Detection for a priori Gesture Recognition

by Olivier Bau - in "CHI ’10: Proceedings of the SIGCHI conference on Human factors in computing systems", ACM
"... Gesture-based interfaces provide expert users with an effi-cient form of interaction but they require a learning effort for novice users. To address this problem, some on-line guid-ing techniques display all available gestures in response to partial input. However, partial input recognition algorith ..."
Abstract - Cited by 12 (0 self) - Add to MetaCart
Gesture-based interfaces provide expert users with an efficient form of interaction but they require a learning effort for novice users. To address this problem, some on-line guiding techniques display all available gestures in response to partial input. However, partial input recognition algorithms are scale dependent while most gesture recognizers support scale independence (i.e., the same shape at different scales actually invokes the same command). We propose an algorithm for estimating the scale of any partial input in the context of a gesture recognition system and illustrate how it can be used to improve users' experience with gesture-based systems.
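The abstract leaves the estimation algorithm to the paper itself. As a naive baseline for the same problem (emphatically not Bau's method), scale can be estimated as the ratio of the partial input's arc length to the arc length of a template prefix, given a guessed completion fraction; in practice that fraction is unknown, so a real system would search over candidate fractions per template. All names below are illustrative.

import java.awt.geom.Point2D;
import java.util.List;

public class ScaleEstimate {
    static double arcLength(List<Point2D> pts) {
        double len = 0;
        for (int i = 1; i < pts.size(); i++) len += pts.get(i - 1).distance(pts.get(i));
        return len;
    }

    // Scale factor mapping a template prefix onto the observed partial input,
    // assuming the user has completed `fraction` (0..1] of the gesture.
    static double estimateScale(List<Point2D> partial, List<Point2D> template, double fraction) {
        int prefixEnd = Math.max(2, (int) Math.round(template.size() * fraction));
        return arcLength(partial) / arcLength(template.subList(0, prefixEnd));
    }

    public static void main(String[] args) {
        List<Point2D> template = List.of(new Point2D.Double(0, 0),
                new Point2D.Double(100, 0), new Point2D.Double(100, 100));
        List<Point2D> partial = List.of(new Point2D.Double(0, 0),
                new Point2D.Double(50, 0));
        // Half the gesture drawn at half the template's size:
        System.out.println(estimateScale(partial, template, 0.5)); // 0.5
    }
}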

Citation Context

...mproving users’ transition from novice to expert. For example, Kurtenbach et al. [6] used crib-sheets showing the gesture set displayed in response to users’ hesitation. More recently, Bau and Mackay [3] proposed OctoPocus, a dynamic on-line guide. If the user pauses during gesture input, the guide appears to show all possible gesture alternatives. As opposed to crib-sheets, that consume a large amou...

Memorability of pre-designed and user-defined gesture sets

by Miguel A. Nacenta, Yemliha Kamber, Yizhou Qiang, Per Ola Kristensson - Proc. CHI ’13, 2013
"... We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier ..."
Abstract - Cited by 11 (3 self) - Add to MetaCart
We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier to remember, both immediately after creation and on the next day (up to a 24% difference in recall rate compared to pre-designed gestures). We also discovered that the differences between gesture sets are mostly due to association errors (rather than gesture form errors), that participants prefer user-defined sets, and that they think user-defined gestures take less time to learn. Finally, we contribute a qualitative analysis of the tradeoffs involved in gesture type selection and share our data and a video corpus of 66 gestures for replicability and further analysis. Author Keywords: gesture sets; gesture memorability; user-defined gestures.

Citation Context

... RELATED WORK: Gestures are used for a variety of tasks, including writing text (e.g. [10,1,34,26]), issuing commands (e.g. [15, 3, 2, 5]), and modifying objects (e.g. [27,7]). See the recent survey by Zhai et al. [39] for a comprehensive review of 2D gesture interfaces. In order for gestures to be used they have to be designed. For th...

Estimating the Perceived Difficulty of Pen Gestures

by Radu-daniel Vatavu, Daniel Vogel, Géry Casiez, Laurent Grisoni
"... Abstract. Our empirical results show that users perceive the execution difficulty of single stroke gestures consistently, and execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the ..."
Abstract - Cited by 11 (5 self) - Add to MetaCart
Abstract. Our empirical results show that users perceive the execution difficulty of single stroke gestures consistently, and execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the relative ranking of difficulty among multiple gestures; and classifying a single gesture into five levels of difficulty. We confirm that the CLC model does not provide an accurate prediction of production time magnitude, and instead show that a reasonably accurate estimate can be calculated using only a few gesture execution samples from a few people. Using this estimated production time, our rules, on average, rank gesture difficulty with 90% accuracy and rate gesture difficulty with 75% accuracy. Designers can use our results to choose application gestures, and researchers can build on our analysis in other gesture domains and for modeling gesture performance.
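Read at its simplest, the ranking rule the abstract describes amounts to ordering gestures by production time estimated from a few execution samples. A toy sketch with invented timings (not the authors' data):

import java.util.Arrays;
import java.util.Comparator;
import java.util.Map;

// Toy sketch of the abstract's ranking rule: estimate production time from a
// few samples and rank gestures by that estimate (longer estimated time is
// read as higher perceived difficulty). The timings below are made up.
public class DifficultyRank {
    static double mean(double[] xs) { return Arrays.stream(xs).average().orElse(0); }

    public static void main(String[] args) {
        Map<String, double[]> samplesMs = Map.of(
                "straight line", new double[]{210, 230, 200},
                "circle",        new double[]{420, 450, 410},
                "zigzag",        new double[]{780, 820, 760});

        samplesMs.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> mean(e.getValue())))
                .forEach(e -> System.out.printf("%-13s ~%.0f ms%n",
                        e.getKey(), mean(e.getValue())));
    }
}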

Citation Context

...M=3 near 53%, and Ranking also approaches 91% (Fig 5, left). The effect of M over Rating is significant (χ²(19) = 476.4, p < .001). A Wilcoxon signed-rank test found significant effects between (1,20), (3,20) and (5,20) with a small Cohen effect (r < .3). The effect of M over Ranking was significant (χ²(19) = 4140.54, p < .001) with significant differences between (1,20) (r = .52), (3,20) and (5,20) with medium...

LightGuide: Projected Visualizations for Hand Movement Guidance

by Rajinder Sodhi, Hrvoje Benko, Andrew D. Wilson
"... a b c d Figure 1. An overview of the range of 3D cues we created to help guide a user’s movement. In (a), a user is shown a 2D arrow with a circle that moves in the horizontal plane, (b) shows a 3D arrow, (c) a 3D path where blue indicates the movement trajectory and (d) uses positive and negative s ..."
Abstract - Cited by 10 (1 self) - Add to MetaCart
Figure 1. An overview of the range of 3D cues we created to help guide a user’s movement. In (a), a user is shown a 2D arrow with a circle that moves in the horizontal plane, (b) shows a 3D arrow, (c) a 3D path where blue indicates the movement trajectory and (d) uses positive and negative spatial coloring with an arrow on the user’s hand to indicate depth. LightGuide is a system that explores a new approach to gesture guidance where we project guidance hints directly on a user’s body. These projected hints guide the user in completing the desired motion with their body part, which is particularly useful for performing movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector and we present four novel interaction techniques that are focused on guiding a user’s hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand animation videos on a computer display in both time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6mm, nearly 85% more accurately than when guided by video. Author Keywords: on-demand interfaces; on-body computing; appropriated ...

Citation Context

...statically placed in video, previous literature has also looked at using co-located real-time feedback and feedforward mechanisms to provide on-demand assistance to guide users through gestural tasks [3,11]. Such systems lead users in-situ, while a user is in the process of performing the gesture. Our work draws upon this prior research where we explore how co...
