Results 1 - 10 of 24
Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments. CHI '11, 2011
"... Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users ‟ realworld tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, ..."
Cited by 20 (0 self)
Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.
Proton++: A Customizable Declarative Multitouch Framework
"... Proton++ is a declarative multitouch framework that allows developers to describe multitouch gestures as regular expressions of touch event symbols. It builds on the Proton framework by allowing developers to incorporate custom touch attributes directly into the gesture description. These custom att ..."
Cited by 16 (0 self)
Proton++ is a declarative multitouch framework that allows developers to describe multitouch gestures as regular expressions of touch event symbols. It builds on the Proton framework by allowing developers to incorporate custom touch attributes directly into the gesture description. These custom attributes increase the expressivity of the gestures, while preserving the benefits of Proton: automatic gesture matching, static analysis of conflict detection, and graphical gesture creation. We demonstrate Proton++'s flexibility with several examples: a direction attribute for describing trajectory, a pinch attribute for detecting when touches move towards one another, a touch area attribute for simulating pressure, an orientation attribute for selecting menu items, and a screen location attribute for simulating hand ID. We also use screen location to simulate user ID and enable simultaneous recognition of gestures by multiple users. In addition, we show how to incorporate timing into Proton++ gestures by reporting touch events at a regular time interval. Finally, we present a user study that suggests that users are roughly four times faster at interpreting gestures written using Proton++ than those written in procedural event-handling code commonly used today.
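The gesture-as-regular-expression idea can be made concrete with a small sketch. The event symbols below (D/M/U for touch down/move/up, a finger index, and a direction attribute) and the use of Python's re module are assumptions for illustration only; they are not the actual Proton++ notation or API.

    import re

    # Hypothetical touch-event symbols (NOT actual Proton++ notation): "D1:E" is
    # the first touch going down with direction attribute East, "M1:E" is that
    # touch moving East, "U1:E" is the touch lifting. A gesture is then just a
    # regular expression over the space-separated symbol stream.
    EAST_SWIPE = re.compile(r"D1:\w+ (?:M1:E )+U1:\w+")

    def matches(gesture, events):
        # Join the event symbols and test the whole stream at once.
        return gesture.fullmatch(" ".join(events)) is not None

    swipe = ["D1:E", "M1:E", "M1:E", "M1:E", "U1:E"]   # down, three eastward moves, up
    tap = ["D1:N", "U1:N"]                             # no eastward movement
    print(matches(EAST_SWIPE, swipe))  # True
    print(matches(EAST_SWIPE, tap))    # False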
Comparing free hand menu techniques for distant displays using linear, marking and finger-count menus. In INTERACT '11, 2011
"... Abstract. Distant displays such as interactive Public Displays (IPD) or Interactive Television (ITV) require new interaction techniques as traditional input devices may be limited or missing in these contexts. Free hand interaction, as sensed with computer vision techniques, presents a promising int ..."
Cited by 11 (3 self)
Distant displays such as interactive Public Displays (IPD) or Interactive Television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free hand interaction, as sensed with computer vision techniques, is a promising alternative. This paper presents the adaptation of three menu techniques for free hand interaction: Linear menu, Marking menu and Finger-Count menu. The first study, based on a Wizard-of-Oz protocol, focuses on Finger-Counting postures in front of interactive television and public displays. It reveals that participants do not choose the most efficient gestures, either before or after the experiment. These results are used to develop a Finger-Count recognizer. The second experiment shows that all techniques achieve satisfactory accuracy. It also shows that Finger-Count imposes a higher mental demand than the other techniques.
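As a rough illustration of the Finger-Count idea, the sketch below maps the number of extended fingers reported by some hand tracker directly to a menu item. The menu items are hypothetical and finger counting is assumed to happen elsewhere; this is not the recognizer developed in the paper.

    # Minimal sketch: in a Finger-Count menu, the number of extended fingers
    # selects an item directly. The items are hypothetical, and finger
    # counting is assumed to be provided by a separate vision-based tracker.
    MENU = ["Play", "Pause", "Stop", "Next", "Previous"]

    def select_by_finger_count(finger_count, menu=MENU):
        # 1..len(menu) fingers map to items; anything else means no selection.
        if 1 <= finger_count <= len(menu):
            return menu[finger_count - 1]
        return None

    print(select_by_finger_count(3))  # "Stop"
    print(select_by_finger_count(0))  # None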
Bootstrapper: Recognizing Tabletop Users by their Shoes
"... step.han.ric.hter @ stu.dent.hpi.uni-pots-dam.de {chr.isti.an.holz, pat.rick.bau.disch} @ hpi.uni-pots-dam.de In order to enable personalized functionality, such as to log tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay ..."
Cited by 8 (2 self)
In order to enable personalized functionality, such as logging tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay in assigned seats and cannot recognize users across sessions. We propose a different approach based on distinguishing users' shoes. While users are interacting with the table, our system Bootstrapper observes their shoes using one or more depth cameras mounted to the edge of the table. It then identifies users by matching camera images with a database of known shoe images. When multiple users interact, Bootstrapper associates touches with shoes based on hand orientation. The approach can be implemented using consumer depth cameras because (1) shoes offer large distinct features such as color, and (2) shoes naturally align themselves with the ground, giving the system a well-defined perspective and thus reduced ambiguity. We report two simple studies in which Bootstrapper recognized participants from a database of 18 users with 95.8% accuracy.
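The identification step can be pictured as a nearest-neighbour lookup against a database of known shoe appearances. The sketch below uses made-up three-dimensional colour descriptors and plain Euclidean distance; the actual features and matching procedure used by Bootstrapper are not reproduced here.

    import math

    # Hypothetical shoe descriptors per enrolled user (e.g. coarse colour
    # histograms); Bootstrapper's real features and matcher are not shown.
    SHOE_DB = {
        "alice": [0.80, 0.10, 0.10],
        "bob":   [0.20, 0.60, 0.20],
        "carol": [0.30, 0.30, 0.40],
    }

    def identify_user(observed, db=SHOE_DB):
        # Return the enrolled user whose stored descriptor is closest (Euclidean).
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(db, key=lambda user: dist(db[user], observed))

    print(identify_user([0.75, 0.15, 0.10]))  # "alice"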
Touchless circular menus: toward an intuitive UI for touchless interactions with large displays. In Proc. AVI, 33, 2014
"... Researchers are exploring touchless interactions in diverse usage contexts. These include interacting with public displays, where mouse and keyboards are inconvenient, activating kitchen devices without touching them with dirty hands, or supporting surgeons in browsing medical images in a sterile op ..."
Cited by 3 (3 self)
Researchers are exploring touchless interactions in diverse usage contexts. These include interacting with public displays, where mouse and keyboard are inconvenient, activating kitchen devices without touching them with dirty hands, or supporting surgeons in browsing medical images in a sterile operating room. Unlike traditional visual interfaces, however, touchless systems still lack a standardized user interface language for basic command selection (e.g., menus). Prior research proposed touchless menus that require users to comply strictly with system-defined postures (e.g., grab, finger-count, pinch). These approaches are problematic because they are analogous to command-line interfaces: users need to remember an interaction vocabulary and input a pre-defined symbol (via gesture or command). To overcome this problem, we introduce and evaluate Touchless Circular Menus (TCM), a touchless menu system optimized for large displays, which enables users to make simple directional movements for selecting commands. TCM leverage our ability to make mid-air directional strokes, relieve users from having to learn posture-based commands, and shift the interaction complexity from users' input to the visual interface. In a controlled study (N=15), when compared with contextual linear menus using grab gestures, participants using TCM were more than two times faster in selecting commands and perceived lower workload. However, users made more command-selection errors with TCM than with linear menus. The menu's triggering location on the visual interface significantly affected the effectiveness and efficiency of TCM. Our contribution informs the design of intuitive UIs for touchless interactions with large displays.
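The directional-stroke selection at the heart of TCM can be sketched as mapping the angle of the hand's movement from the menu's trigger point to a command sector. The four commands, the 90-degree sector layout, the dead-zone radius and the y-up coordinate convention below are all assumptions for illustration, not parameters from the paper.

    import math

    COMMANDS = ["Copy", "Paste", "Delete", "Undo"]  # hypothetical items, one per 90-degree sector
    MIN_STROKE = 0.15                               # assumed dead zone, normalised screen units

    def select_command(origin, hand, commands=COMMANDS):
        # Vector from the menu's trigger point to the current hand position.
        dx, dy = hand[0] - origin[0], hand[1] - origin[1]
        if math.hypot(dx, dy) < MIN_STROKE:
            return None                                  # stroke too short: nothing selected yet
        angle = math.degrees(math.atan2(dy, dx)) % 360   # 0 = rightwards, counter-clockwise, y up
        sector = int(((angle + 45) % 360) // 90)         # 90-degree sectors centred on the axes
        return commands[sector]

    print(select_command((0.5, 0.5), (0.9, 0.5)))  # "Copy"  (rightward stroke)
    print(select_command((0.5, 0.5), (0.5, 0.9)))  # "Paste" (upward stroke)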
ShoeSense: A New Perspective on Hand Gestures and Wearable Applications
"... When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as w ..."
Cited by 2 (0 self)
When the user is engaged with a real-world task, it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate for our recognizers.
Multitouch finger registration and its applications. In Proc. OZCHI '10, ACM, 2010
"... We present a simple finger registration technique that can distinguish in real-time which hand and fingers of the user are touching the touchscreen. The finger registration process is activated whenever the user places a hand, in any orientation, anywhere on the touchscreen. Such a finger registrati ..."
Cited by 2 (0 self)
We present a simple finger registration technique that can distinguish in real time which hand and fingers of the user are touching the touchscreen. The finger registration process is activated whenever the user places a hand, in any orientation, anywhere on the touchscreen. Such a finger registration technique enables the design of intuitive multitouch interfaces that directly map different combinations of the user's fingers to interface operations. In this paper, we first study the effectiveness and robustness of the finger registration process. We then demonstrate the usability of our finger registration method for two new interfaces. Specifically, we describe the Palm Menu, which is an intuitive dynamic menu interface that minimizes hand and eye movement during operations, and a virtual mouse interface that enables the user to perform mouse operations in a multitouch environment. We conducted controlled experiments to compare the performance of the Palm Menu against common command selection interfaces and the virtual mouse against traditional pointing devices.
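One plausible way to picture finger registration is a purely geometric labelling of five simultaneous contacts: treat the contact farthest from the other four as the thumb, then order the rest by angle around the centroid. This heuristic, the coordinate units and the omission of left/right-hand detection are all assumptions for illustration; the paper's actual registration method is not reproduced here.

    import math

    def register_fingers(points):
        # Label five touch points with a simple geometric heuristic: the thumb
        # is the contact farthest from the centroid of the other four, and the
        # remaining contacts are labelled index..pinky in counter-clockwise
        # angular order from the thumb. Left/right-hand detection, which would
        # fix the ordering direction, is omitted in this illustrative sketch.
        assert len(points) == 5

        def centroid(pts):
            return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

        def dist_to_others(i):
            others = [p for j, p in enumerate(points) if j != i]
            cx, cy = centroid(others)
            return math.hypot(points[i][0] - cx, points[i][1] - cy)

        thumb_i = max(range(5), key=dist_to_others)

        cx, cy = centroid(points)
        def angle(p):
            return math.atan2(p[1] - cy, p[0] - cx)

        thumb_angle = angle(points[thumb_i])
        rest = sorted((p for i, p in enumerate(points) if i != thumb_i),
                      key=lambda p: (angle(p) - thumb_angle) % (2 * math.pi))
        return dict(zip(["thumb", "index", "middle", "ring", "pinky"],
                        [points[thumb_i]] + rest))

    # Five simultaneous contacts (arbitrary units).
    touches = [(0.0, 0.0), (1.2, 2.0), (2.0, 2.3), (2.8, 2.1), (3.4, 1.5)]
    print(register_fingers(touches))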
Design of Unimanual Multi-Finger Pie Menu Interaction. ITS, 2011
"... Context menus, most commonly the right click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of commands in the application quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In ..."
Cited by 2 (0 self)
Context menus, most commonly the right-click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of commands in the application quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on a multi-touch surface. We show that selecting targets with multiple fingers simultaneously improves the performance of target selection compared to traditional single-finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces.
Speech Augmented Multitouch Interaction Patterns, 2011
"... Touch- and voice-based input have emerged as the most popular and relevant interaction modes to enable a natural interaction with computer systems. However, until now, they have mostly been treated separately. In particular, explicit design knowledge on the e ective combinations of these modes for a ..."
Cited by 1 (0 self)
Touch- and voice-based input have emerged as the most popular and relevant interaction modes to enable a natural interaction with computer systems. However, until now, they have mostly been treated separately. In particular, explicit design knowledge on the effective combinations of these modes for an improved user experience is currently not available in a comprehensive form. In this paper, we address this shortage and introduce design patterns which support developers in exploiting the possibilities of combined voice and touch interaction for newly developed systems, so that interaction with these systems becomes more natural for the respective end users.
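As one illustration of a combined pattern ("speak a command while touching its target"), the sketch below pairs each recognised voice command with the most recent touch that falls inside a short fusion window. The event tuples, the one-second window and the pattern itself are assumptions chosen for illustration; they are not taken from the paper.

    FUSION_WINDOW_S = 1.0   # assumed maximum gap between a touch and a spoken command

    def fuse(touch_events, speech_events, window=FUSION_WINDOW_S):
        # Pair every (time, command) utterance with the most recent touched
        # object inside the fusion window, yielding (command, target) actions.
        actions = []
        for cmd_time, command in speech_events:
            candidates = [(t, obj) for t, obj in touch_events if abs(cmd_time - t) <= window]
            if candidates:
                _, target = max(candidates, key=lambda e: e[0])
                actions.append((command, target))
        return actions

    # The user touches "photo_17" and says "delete" half a second later.
    touches = [(10.0, "photo_17"), (14.2, "photo_3")]
    speech = [(10.5, "delete")]
    print(fuse(touches, speech))  # [('delete', 'photo_17')]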
Cuenesics: Using Mid-Air Gestures to Select Items on Interactive Public Displays
"... Figure 1. People performing mid-air gestures to select items on interactive public displays. The system was installed at three locations: (L1) on a large wall-projection in a coworking space, (L2) on a rear-projection screen in a students ’ cafeteria, (L3) on an LCD at a venue’s opening event Most o ..."
Cited by 1 (1 self)
Figure 1: People performing mid-air gestures to select items on interactive public displays. The system was installed at three locations: (L1) on a large wall-projection in a coworking space, (L2) on a rear-projection screen in a students' cafeteria, (L3) on an LCD at a venue's opening event.
Most of today's public displays only show predefined contents that passers-by are not able to change. We argue that interactive public displays would benefit from immediately usable mid-air techniques for choosing options, expressing opinions or, more generally, selecting one among several items. We propose a design space for hand-gesture based mid-air selection techniques on interactive public displays, along with four specific techniques that we evaluated at three different locations in the field. Our findings include: 1) if no hint is provided, people successfully use Point+Dwell for selecting items, 2) the user representation could be switched from Mirror to Cursor after registration without causing confusion, 3) people tend to explore items before confirming one, 4) in a public context, people frequently interact inadvertently (without looking at the screen). We conclude by providing recommendations for designers of interactive public displays to support immediate usability for mid-air selection.
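Point+Dwell can be sketched as a small state check: an item is confirmed once the tracked cursor has stayed over it continuously for a dwell threshold. The one-second threshold and the (timestamp, item) sample format are assumptions for illustration, not values from the paper.

    DWELL_S = 1.0   # assumed dwell threshold in seconds

    def point_and_dwell(samples, dwell=DWELL_S):
        # samples: chronological (timestamp, item_under_cursor or None) pairs.
        # Returns the first item the cursor rests on for at least `dwell` seconds.
        start_time, current = None, None
        for t, item in samples:
            if item != current:           # cursor moved onto a different item (or off all items)
                start_time, current = t, item
            if current is not None and t - start_time >= dwell:
                return current
        return None

    # The cursor passes over item_A briefly, then dwells on item_B.
    trace = [(0.0, "item_A"), (0.3, "item_A"), (0.6, "item_B"),
             (1.0, "item_B"), (1.7, "item_B")]
    print(point_and_dwell(trace))  # "item_B"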