Results 1 - 10 of 48
OmniTouch: wearable multitouch interaction everywhere
- In Proc. ACM UIST ’11, 2011
"... Figure 1. OmniTouch is a wearable depth-sensing and projection system that allows everyday surfaces- including a wearer’s own body- to be appropriated for graphical multitouch interaction. OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on ..."
Abstract - Cited by 86 (11 self)
OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces, without any calibration, OmniTouch provides capabilities similar to those of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are “clicked” or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3 cm in diameter. Thus, it is now conceivable that anything one can do on today’s mobile devices could be done in the palm of one's hand. ACM Classification: H.5.2 [Information interfaces and presentation]
Imaginary phone: Learning imaginary interfaces by transferring spatial memory from a familiar device
- Proc. UIST’11
"... We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call “transfer learning”. By using a physical device (e.g. an iPhone) a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustr ..."
Abstract - Cited by 22 (4 self)
We propose a method for learning how to use an imaginary interface (i.e., a spatial non-visual interface) that we call “transfer learning”. By using a physical device (e.g., an iPhone) a user inadvertently learns the interface and can then transfer that knowledge to an imaginary interface. We illustrate this concept with our Imaginary Phone prototype. With it, users interact by mimicking the use of a physical iPhone by tapping and sliding on their empty non-dominant hand without visual feedback. Pointing on the hand is tracked using a depth camera and touch events are sent wirelessly to an actual iPhone, where they invoke the corresponding actions. Our prototype allows the user to perform everyday tasks such as picking up a phone call or launching the timer app and setting an alarm. Imaginary Phone thereby serves as a shortcut that frees users from the necessity of retrieving the actual physical device. We present two user studies that validate the three assumptions underlying the transfer learning method. (1) Users build up spatial memory automatically while using a physical device: participants knew the correct location of 68% of their own iPhone home screen apps by heart. (2) Spatial memory transfers from a physical to an imaginary interface: participants recalled 61% of their home screen apps when recalling app locations on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone: participants could reliably acquire 0.95 cm wide iPhone targets on their palm, sufficiently large to operate any standard iPhone widget. Author Keywords: Imaginary interface, mobile, wearable, spatial memory,
ShadowPuppets: Supporting Collocated Interaction with Mobile Projector Phones Using Hand Shadows
"... Pico projectors attached to mobile phones allow users to view phone content using a large display. However, to provide input to projector phones, users have to look at the device, diverting their attention from the projected image. Additionally, other collocated users have no way of interacting with ..."
Abstract - Cited by 15 (3 self)
Pico projectors attached to mobile phones allow users to view phone content using a large display. However, to provide input to projector phones, users have to look at the device, diverting their attention from the projected image. Additionally, other collocated users have no way of interacting with the device. We present ShadowPuppets, a system that supports collocated interaction with mobile projector phones. ShadowPuppets allows users to cast hand shadows as input to mobile projector phones. Most people understand how to cast hand shadows, which makes them an easy input modality. Additionally, hand shadows implicitly support collocated usage, as nearby users can cast shadows as input and one user can see and understand another user’s hand shadows. We describe the results of three user studies. The first study examines what hand shadows users expect will cause various effects. The second study looks at how users perceive hand shadows, examining what effects they think various hand shadows will cause. Finally, we present qualitative results from a study with our functional prototype and discuss design implications for systems using shadows as input. Our findings suggest that shadow input can provide a natural and intuitive way of interacting with projected interfaces and can support collocated collaboration. Author Keywords: Projector-camera system, mobile projector phone, shadow,
On-body interaction: armed and dangerous
- Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction - TEI ’12, 2012
"... Recent technological advances in input sensing, as well as ultra-small projectors, have opened up new opportunities for interaction – the use of the body itself as both an input and output platform. Such on-body interfaces offer new interac-tive possibilities, and the promise of access to computatio ..."
Abstract - Cited by 13 (0 self)
Recent technological advances in input sensing, as well as ultra-small projectors, have opened up new opportunities for interaction: the use of the body itself as both an input and output platform. Such on-body interfaces offer new interactive possibilities, and the promise of access to computation, communication and information literally in the palm of our hands. The unique context of on-body interaction allows us to take advantage of extra dimensions of input our bodies naturally afford us. In this paper, we consider how the arms and hands can be used to enhance on-body interactions, which are typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped many applications and interactions. This helped us confirm chief use modalities, identify fruitful interaction approaches, and, in general, better understand how interfaces operate on the body. We highlight the most compelling techniques we uncovered. Further, this paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Finally, we bring to light several new and unique interaction techniques. ACM Classification: H.5.2 [Information interfaces and presentation]
AD-Binning: leveraging around-device space for storing, browsing and retrieving mobile device content
- In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, 2013
"... Exploring information content on mobile devices can be tedious and time consuming. We present Around-Device Binning, or AD-Binning, a novel mobile user interface that allows users to off-load mobile content in the space around the device. We informed our implementation of AD-Binning by exploring var ..."
Abstract - Cited by 6 (4 self)
Exploring information content on mobile devices can be tedious and time consuming. We present Around-Device Binning, or AD-Binning, a novel mobile user interface that allows users to off-load mobile content into the space around the device. We informed our implementation of AD-Binning by exploring various design factors, such as the minimum around-device target size, suitable item-selection methods, and techniques for placing content in off-screen space. In a task requiring exploration, we find that AD-Binning improves browsing efficiency by avoiding the minute selection and flicking mechanisms needed for on-screen interaction. We conclude with design guidelines for off-screen content storage and browsing.
SimpleFlow: Enhancing Gestural Interaction with Gesture Prediction, Abbreviation and Autocompletion
- In Proceedings of INTERACT, 2011
"... Abstract. Gestural interfaces are now a familiar mode of user interaction and gestural input is an important part of the way that users can interact with such interfaces. However, entering gestures accurately and efficiently can be challenging. In this paper we present two styles of visual gesture ..."
Abstract - Cited by 5 (0 self)
Gestural interfaces are now a familiar mode of user interaction, and gestural input is an important part of the way that users can interact with such interfaces. However, entering gestures accurately and efficiently can be challenging. In this paper we present two styles of visual gesture autocompletion for 2D predictive gesture entry. Both styles enable users to abbreviate gestures. We experimentally evaluate and compare both styles of visual autocompletion against each other and against non-predictive gesture entry. The best performing visual autocompletion is referred to as SimpleFlow. Our findings establish that users of SimpleFlow take significant advantage of gesture autocompletion by entering partial gestures rather than whole gestures. Compared to non-predictive gesture entry, users enter partial gestures that are 41% shorter than the complete gestures, while simultaneously improving the accuracy (+13%, from 68% to 81%) and speed (+10%) of their gesture input. The results provide insights into why SimpleFlow leads to significantly enhanced performance, while showing how predictive gestures with simple visual autocompletion impact the gesture abbreviation, accuracy, speed and cognitive load of 2D predictive gesture entry.
Understanding mid-air hand gestures: A study of human preferences in usage of gesture types for HCI
- Microsoft Research Tech Report MSR-TR-2012-111
"... ABSTRACT In this paper we present the results of a study of human preferences in using mid-air gestures for directing other humans. Rather than contributing a specific set of gestures, we contribute a set of gesture types, which together make a set of the core actions needed to complete any of our ..."
Abstract - Cited by 4 (0 self)
In this paper we present the results of a study of human preferences in using mid-air gestures for directing other humans. Rather than contributing a specific set of gestures, we contribute a set of gesture types, which together make up the core actions needed to complete any of our six chosen tasks in the domain of human-to-human gestural communication without the speech channel. We observed 12 participants cooperating to accomplish different tasks using only hand gestures to communicate. We analyzed 5,500 gestures in terms of hand usage and gesture type, using a novel classification scheme that combines three existing taxonomies in order to better capture this interaction space. Our findings indicate that, depending on the meaning of the gesture, there are preferences in the usage of gesture types, such as pointing, pantomimic acting, direct manipulation, semaphoric, or iconic gestures. These results can be used as guidelines to design purely gesture-driven interfaces for interactive environments and surfaces.
The Unadorned Desk: Exploiting the Physical Space around a Display as an Input Canvas
"... Abstract. In everyday office work, people smoothly use the space on their physical desks to work with documents of interest, and to keep tools and materials nearby for easy use. In contrast, the limited screen space of computer displays imposes interface constraints. Associated material is placed of ..."
Abstract - Cited by 3 (0 self)
In everyday office work, people smoothly use the space on their physical desks to work with documents of interest, and to keep tools and materials nearby for easy use. In contrast, the limited screen space of computer displays imposes interface constraints. Associated material is placed off-screen (i.e., temporarily hidden) and requires extra work to access (window switching, menu selection) or crowds and competes with the work area (e.g., palettes and icons). This problem is worsened by the increasing popularity of small displays such as tablets and laptops. To mitigate this problem, we investigate how we can exploit an unadorned physical desk space as an additional input canvas. With minimal augmentation, our Unadorned Desk detects coarse hovering over and touching of discrete areas (‘items’) within a given area on an otherwise regular desk, which is used as input to the desktop computer. We hypothesize that people’s spatial memory will let them touch particular desk locations without looking. In contrast to other augmented desks, our system provides optional feedback of touches directly on the computer’s screen.
Characterizing user performance with assisted direct off-screen pointing
- In Proc. MobileHCI 2011
"... The limited viewport size of mobile devices requires that users continuously acquire information that lies beyond the edge of the screen. Recent hardware solutions are capable of continually tracking a user‟s finger around the device. This has created new opportunities for interactive solutions, suc ..."
Abstract - Cited by 3 (2 self)
The limited viewport size of mobile devices requires that users continuously acquire information that lies beyond the edge of the screen. Recent hardware solutions are capable of continually tracking a user’s finger around the device. This has created new opportunities for interactive solutions, such as direct off-screen pointing: the ability to directly point at objects that are outside the viewport. We empirically characterize user performance with direct off-screen pointing when assisted by target cues. We predict time and accuracy outcomes for direct off-screen pointing with existing and derived models. We validate the models with good results (R² ≥ 0.9) and reveal that direct off-screen pointing takes up to four times longer than pointing at visible targets, depending on the desired accuracy tradeoff. Pointing accuracy degrades logarithmically with target distance. We discuss design implications in the context of several real-world applications. Author Keywords: Direct off-screen pointing, off-screen target visualizations, performance models, Fitts’ law, steering law.
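For context on the modeling claim above: the abstract's keywords name Fitts' law, and the standard Shannon formulation of that law (the abstract does not quote the authors' exact derived model, which may differ) relates movement time to target distance and width as

MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

where MT is the movement time, D is the distance to the target, W is the target width, and a, b are empirically fitted constants. The reported logarithmic degradation of accuracy with target distance is consistent with models of this family.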
ShoeSense: A New Perspective on Hand Gestures and Wearable Applications
"... When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as w ..."
Abstract - Cited by 2 (0 self)
When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.