Results 1 - 10 of 52
DoubleFlip: A Motion Gesture for Mobile Interaction
Proceedings of CHI '11, ACM, 2011
"... Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures— deliberate movements of the device by end-users to invoke commands. However, little is known about best-practices in motion gesture design ..."
Abstract - Cited by 43 (4 self)
Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures—deliberate movements of the device by end-users to invoke commands. However, little is known about best practices in motion gesture design for the mobile computing paradigm. To address this issue, we present the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. We demonstrate that consensus exists among our participants on parameters of movement and on mappings of motion gestures onto commands. We use this consensus to develop a taxonomy for motion gestures and to specify an end-user inspired motion gesture set. We highlight the implications of this work to the design of smartphone applications and hardware. Finally, we argue that our results influence best practices in design for all gestural interfaces. Author Keywords: Motion gestures, sensors, mobile interaction.
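To make the idea of a sensed motion gesture concrete, the following sketch (not the paper's recognizer) detects a double-flip-style gesture from a stream of gyroscope samples; the sample format, axis, and thresholds are assumptions for illustration.

# Minimal sketch (not the authors' recognizer): detect a "double flip"-style
# motion gesture as two quick back-and-forth rotations about the device's
# long axis, from (timestamp_s, roll_rate_rad_s) gyroscope samples.

def detect_double_flip(samples, rate_threshold=4.0, max_gap_s=1.0):
    """Return True if two opposite-sign rotation peaks occur within max_gap_s."""
    peaks = []  # (timestamp, sign) of rotations exceeding the threshold
    for t, roll_rate in samples:
        if abs(roll_rate) >= rate_threshold:
            sign = 1 if roll_rate > 0 else -1
            # Record only transitions to a new peak direction.
            if not peaks or peaks[-1][1] != sign:
                peaks.append((t, sign))
    # A double flip: a rotation one way, then back, close together in time.
    for (t1, s1), (t2, s2) in zip(peaks, peaks[1:]):
        if s1 != s2 and (t2 - t1) <= max_gap_s:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical sample stream: flip at 0.10 s, flip back at 0.40 s.
    stream = [(0.00, 0.1), (0.10, 5.2), (0.25, 0.3), (0.40, -5.5), (0.55, 0.2)]
    print(detect_double_flip(stream))  # True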
CueFlik: Interactive Concept Learning in Image Search
"... Web image search is difficult in part because a handful of keywords are generally insufficient for characterizing the visual properties of an image. Popular engines have begun to provide tags based on simple characteristics of images (such as tags for black and white images or images that contain a ..."
Abstract - Cited by 35 (5 self)
Web image search is difficult in part because a handful of keywords are generally insufficient for characterizing the visual properties of an image. Popular engines have begun to provide tags based on simple characteristics of images (such as tags for black and white images or images that contain a face), but such approaches are limited by the fact that it is unclear what tags end-users want to be able to use in examining Web image search results. This paper presents CueFlik, a Web image search application that allows end-users to quickly create their own rules for re-ranking images based on their visual characteristics. End-users can then re-rank any future Web image search results according to their rule. In an experiment we present in this paper, end-users quickly create effective rules for such concepts as “product photos”, “portraits of people”, and “clipart”. When asked to conceive of and create their own rules, participants create such rules as “sports action shot” with images from queries for “basketball” and “football”. CueFlik represents both a promising new approach to Web image search and an important study in end-user interactive machine learning.
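As a rough illustration of re-ranking results by a user-defined visual rule (not CueFlik's actual metric-learning approach), the sketch below orders images by distance to the centroid of user-labeled positive examples; the two-dimensional features and image ids are invented.

# Minimal sketch: re-rank image search results by distance to the centroid of
# user-provided positive examples in a toy visual feature space.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rerank(results, positive_examples):
    """results: list of (image_id, feature_vector); returns ids, closest first."""
    c = centroid([feats for _, feats in positive_examples])
    return [img for img, _ in sorted(results, key=lambda r: distance(r[1], c))]

if __name__ == "__main__":
    # Hypothetical 2-D features, e.g. (edge density, colorfulness).
    positives = [("p1", [0.9, 0.1]), ("p2", [0.8, 0.2])]   # user-labeled "clipart"
    search = [("a", [0.2, 0.9]), ("b", [0.85, 0.15]), ("c", [0.5, 0.5])]
    print(rerank(search, positives))  # ['b', 'c', 'a']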
Gestalt: Integrated Support for Implementation and Analysis in Machine Learning
"... We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it ..."
Abstract - Cited by 25 (3 self)
We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows that Gestalt significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.
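A minimal sketch of the core idea of keeping data visible as it moves through a classification pipeline, assuming toy tokenize/featurize/classify steps; this is an illustration of the concept, not Gestalt's API.

# Minimal sketch: a classification pipeline whose intermediate outputs are
# retained at every step so the developer can inspect the data flowing through it.

class InspectablePipeline:
    def __init__(self, steps):
        self.steps = steps          # list of (name, function) pairs
        self.snapshots = {}         # step name -> data after that step

    def run(self, data):
        for name, step in self.steps:
            data = step(data)
            self.snapshots[name] = data   # keep data for later analysis
        return data

def tokenize(texts):
    return [t.lower().split() for t in texts]

def featurize(token_lists):
    return [{"length": len(toks), "has_free": "free" in toks} for toks in token_lists]

def classify(features):
    return ["spam" if f["has_free"] else "ham" for f in features]

if __name__ == "__main__":
    pipe = InspectablePipeline([("tokenize", tokenize),
                                ("featurize", featurize),
                                ("classify", classify)])
    print(pipe.run(["Win a FREE phone", "Meeting at noon"]))
    print(pipe.snapshots["featurize"])   # inspect mid-pipeline data to debug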
Gesture Coder: A Tool for Programming Multi-Touch Gestures by Demonstration
In Proceedings of CHI 2012: ACM Conference on Human Factors in Computing Systems
"... Multi-touch gestures have become popular on a wide range of touchscreen devices, but the programming of these gestures remains an art. It is time-consuming and error-prone for a developer to handle the complicated touch state transitions that result from multiple fingers and their simultaneous movem ..."
Abstract - Cited by 24 (3 self)
Multi-touch gestures have become popular on a wide range of touchscreen devices, but the programming of these gestures remains an art. It is time-consuming and error-prone for a developer to handle the complicated touch state transitions that result from multiple fingers and their simultaneous movements. In this paper, we present Gesture Coder, which learns from a few examples given by the developer and automatically generates code that recognizes multi-touch gestures, tracks their state changes, and invokes corresponding application actions. Developers can easily test the generated code in Gesture Coder, refine it by adding more examples, and, once they are satisfied with its performance, integrate the code into their applications. We evaluated our learning algorithm exhaustively under various conditions over a large set of noisy data. Our results show that it is sufficient for rapid prototyping and can be improved with higher-quality and more training data. We also evaluated Gesture Coder’s usability through a within-subject study in which we asked participants to implement a set of multi-touch interactions with and without Gesture Coder. The results show overwhelmingly that Gesture Coder significantly lowers the threshold of programming multi-touch gestures. Author Keywords: Multi-touch gestures, programming by demonstration, state
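To illustrate recognizing demonstrated gestures through explicit state tracking (a deliberate simplification of what Gesture Coder learns), the sketch below builds a transition table from example event sequences; event names such as down_1 are assumptions.

# Minimal sketch: build a small state machine from demonstrated touch-event
# sequences and use it to recognize a gesture. Events are simplified to
# strings such as "down_1" (one finger lands) or "move_2" (two fingers move).

def build_state_machine(examples):
    """examples: {gesture_name: [event, ...]} -> nested transition dict."""
    root = {}
    for name, events in examples.items():
        node = root
        for ev in events:
            node = node.setdefault(ev, {})
        node["__accept__"] = name
    return root

def recognize(machine, events):
    node = machine
    for ev in events:
        if ev not in node:
            return None            # no demonstrated gesture matches
        node = node[ev]
    return node.get("__accept__")  # gesture name if the sequence is complete

if __name__ == "__main__":
    demos = {
        "pinch": ["down_1", "down_2", "move_2", "up_2", "up_1"],
        "swipe": ["down_1", "move_1", "up_1"],
    }
    machine = build_state_machine(demos)
    print(recognize(machine, ["down_1", "move_1", "up_1"]))     # swipe
    print(recognize(machine, ["down_1", "down_2", "move_2"]))   # None (incomplete)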
Real-time Human Interaction with Supervised Learning Algorithms for Music Composition and Performance
"... This thesis examines machine learning through the lens of human-computer interaction in order to address fundamental questions surrounding the application of machine learning to real-life problems, including: Can we make machine learning algorithms more usable and useful? Can we better understand th ..."
Abstract - Cited by 22 (6 self)
This thesis examines machine learning through the lens of human-computer interaction in order to address fundamental questions surrounding the application of machine learning to real-life problems, including: Can we make machine learning algorithms more usable and useful? Can we better understand the real-world consequences of algorithm choices and user interface designs for end-user machine learning? How can human interaction play a role in enabling users to efficiently create useful machine learning systems, in enabling successful application of algorithms by machine learning novices, and in ultimately making it possible in practice to apply machine learning to new problems? The scope of the research presented here is the application of supervised learning algorithms to contemporary computer music composition and performance. Computer music is a domain rich with computational problems requiring the modeling of complex phenomena, the construction of real-time interactive systems, and the support of human creativity. Though varied, many of these problems may be addressed
Cascadia: a system for specifying, detecting, and managing RFID events
In Proc. of the Sixth MobiSys Conf., 2008
"... Cascadia is a system that provides RFID-based pervasive computing applications with an infrastructure for specifying, extracting and managing meaningful high-level events from raw RFID data. Cascadia provides three important services. First, it allows application developers and even users to specify ..."
Abstract - Cited by 20 (9 self)
Cascadia is a system that provides RFID-based pervasive computing applications with an infrastructure for specifying, extracting and managing meaningful high-level events from raw RFID data. Cascadia provides three important services. First, it allows application developers and even users to specify events using either a declarative query language or an intuitive visual language based on direct manipulation. Second, it provides an API that facilitates the development of applications which rely on RFID-based events. Third, it automatically detects the specified events, forwards them to registered applications and stores them for later use (e.g., for historical queries). We present the design and implementation of Cascadia along with an evaluation that includes both a user study and measurements on traces collected in a building-wide RFID deployment. To demonstrate how Cascadia facilitates application development, we built a simple digital diary application in the form of a calendar that populates itself with RFID-based events. Cascadia copes with ambiguous RFID data and limitations in an RFID deployment by transforming RFID readings into probabilistic events. We show that this approach outperforms deterministic event detection techniques while avoiding the need to specify and train sophisticated models.
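A minimal sketch of turning noisy RFID readings into a probabilistic high-level event, in the spirit of the approach described above but not Cascadia's actual model; the antenna names and threshold are invented.

# Minimal sketch: estimate a tag's location probabilistically from noisy
# antenna readings and fire a high-level event only when the probability
# that the tag entered a room exceeds a threshold.
from collections import Counter

def location_probabilities(readings):
    """readings: list of antenna ids that saw the tag in one time window."""
    counts = Counter(readings)
    total = sum(counts.values())
    return {antenna: n / total for antenna, n in counts.items()}

def detect_entered(readings, room_antenna, threshold=0.7):
    probs = location_probabilities(readings)
    p = probs.get(room_antenna, 0.0)
    return ("ENTERED", room_antenna, p) if p >= threshold else None

if __name__ == "__main__":
    window = ["lab_door", "lab_door", "hallway", "lab_door", "lab_door"]
    print(detect_entered(window, "lab_door"))                   # ('ENTERED', 'lab_door', 0.8)
    print(detect_entered(["hallway", "lab_door"], "lab_door"))  # None: too uncertain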
Midas: Fabricating Custom Capacitive Touch Sensors to Prototype Interactive Objects
"... An increasing number of consumer products include user interfaces that rely on touch input. While digital fabrication techniques such as 3D printing make it easier to prototype the shape of custom devices, adding interactivity to such prototypes remains a challenge for many designers. We introduce M ..."
Abstract - Cited by 17 (3 self)
An increasing number of consumer products include user interfaces that rely on touch input. While digital fabrication techniques such as 3D printing make it easier to prototype the shape of custom devices, adding interactivity to such prototypes remains a challenge for many designers. We introduce Midas, a software and hardware toolkit to support the design, fabrication, and programming of flexible capacitive touch sensors for interactive objects. With Midas, designers first define the desired shape, layout, and type of touch sensitive areas, as well as routing obstacles, in a sensor editor. From this high-level specification, Midas automatically generates layout files with appropriate sensor pads and routed connections. These files are then used to fabricate sensors using digital fabrication processes, e.g., vinyl cutters and conductive ink printers. Using step-by-step assembly instructions generated by Midas, designers connect these sensors to the Midas microcontroller, which detects touch events. Once the prototype is assembled, designers can define interactivity for their sensors: Midas supports both record-and-replay actions for controlling existing local applications and WebSocket-based event output for controlling novel or remote applications. In a first-use study with three participants, users successfully prototyped media players. We also demonstrate how Midas can be used to create a number of touch-sensitive interfaces.
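As a rough sketch of the two output paths the abstract describes, record-and-replay for local applications and message output for remote ones, the code below dispatches touch events from named sensor pads; the pad names, actions, and JSON format are assumptions, and real WebSocket transport is omitted.

# Minimal sketch: route touch events from named sensor pads either to a local
# "replay" action or to a queue of JSON messages a remote client could consume.
import json

class TouchDispatcher:
    def __init__(self):
        self.local_actions = {}    # pad name -> callable to replay locally
        self.remote_queue = []     # JSON messages for remote applications

    def bind_local(self, pad, action):
        self.local_actions[pad] = action

    def on_touch(self, pad):
        if pad in self.local_actions:
            self.local_actions[pad]()                       # record-and-replay style
        else:
            self.remote_queue.append(json.dumps({"pad": pad, "event": "touch"}))

if __name__ == "__main__":
    d = TouchDispatcher()
    d.bind_local("play_button", lambda: print("replaying: press spacebar"))
    d.on_touch("play_button")      # handled by the local action
    d.on_touch("volume_slider")    # queued for a remote client
    print(d.remote_queue)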
Tracking Free-Weight Exercises
"... Abstract. Weight training, in addition to aerobic exercises, is an important component of a balanced exercise program. However, mechanisms for tracking free weight exercises have not yet been explored. In this paper, we study methods that automatically recognize what type of exercise you are doing a ..."
Abstract - Cited by 16 (1 self)
Weight training, in addition to aerobic exercises, is an important component of a balanced exercise program. However, mechanisms for tracking free weight exercises have not yet been explored. In this paper, we study methods that automatically recognize what type of exercise you are doing and how many repetitions you have done so far. We incorporated a three-axis accelerometer into a workout glove to track hand movements and put another accelerometer on a user’s waist to track body posture. To recognize types of
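A minimal sketch of counting repetitions from glove accelerometer data using threshold crossings with hysteresis, not the paper's method; the units, thresholds, and sample stream are invented.

# Minimal sketch: count repetitions of a free-weight exercise by finding peaks
# in the magnitude of accelerometer samples (x, y, z).
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def count_reps(samples, high=1.5, low=1.1):
    """Count threshold crossings with hysteresis so one lift = one rep."""
    reps, above = 0, False
    for s in samples:
        m = magnitude(s)
        if not above and m >= high:
            reps += 1
            above = True
        elif above and m <= low:
            above = False
    return reps

if __name__ == "__main__":
    stream = [(0, 0, 1.0), (0.2, 0, 1.7), (0, 0, 1.0),
              (0.1, 0.1, 1.8), (0, 0, 0.9), (0, 0.2, 1.6), (0, 0, 1.0)]
    print(count_reps(stream))  # 3 repetitions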
Interactive Design of Multimodal User Interfaces -- Reducing technical and visual complexity
Journal on Multimodal User Interfaces, 2009
"... In contrast to the pioneers of multimodal interaction, e.g. Richard Bolt in the late seventies, today’s researchers can benefit from various existing hardware devices and software toolkits. Although these development tools are available, using them is still a great challenge, particularly in terms ..."
Abstract - Cited by 13 (3 self)
In contrast to the pioneers of multimodal interaction, e.g. Richard Bolt in the late seventies, today’s researchers can benefit from various existing hardware devices and software toolkits. Although these development tools are available, using them is still a great challenge, particularly in terms of their usability and their appropriateness to the actual design and research process. We present a three-part approach to supporting interaction designers and researchers in designing, developing, and evaluating novel interaction modalities including multimodal interfaces. First, we present a software architecture that enables the unification of a great variety of very heterogeneous device drivers and special-purpose toolkits in a common interaction library named “Squidy”. Second, we introduce a visual design environment that minimizes the threshold for its usage (ease-of-use) but scales well with increasing complexity (ceiling) by combining the concepts of semantic zooming with visual dataflow programming. Third, we not only support the interactive design and rapid prototyping of multimodal interfaces but also provide advanced development and debugging techniques to improve technical and conceptual solutions. In addition,
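To illustrate the dataflow style the abstract describes, the sketch below chains device, filter, and output nodes that exchange a common event format; this is not the Squidy library, and the device driver, filter, and event fields are assumptions.

# Minimal sketch: a tiny dataflow pipeline in which a heterogeneous input
# device is wrapped as a node emitting a common event format, with filter and
# output nodes chained downstream.

class Node:
    def __init__(self, process):
        self.process = process      # function: event -> event or None
        self.targets = []

    def connect(self, node):
        self.targets.append(node)
        return node                 # allow chaining: a.connect(b).connect(c)

    def publish(self, event):
        out = self.process(event)
        if out is not None:
            for t in self.targets:
                t.publish(out)

def laser_pointer_driver(raw):
    # Unify a device-specific reading into the common {x, y} event format.
    return {"x": raw["px"] / 1024, "y": raw["py"] / 768}

def smoothing_filter(event, state={"last": None}):
    # Average each event with the previous one to reduce jitter.
    last = state["last"] or event
    smoothed = {"x": (event["x"] + last["x"]) / 2, "y": (event["y"] + last["y"]) / 2}
    state["last"] = smoothed
    return smoothed

def cursor_output(event):
    print(f"move cursor to ({event['x']:.2f}, {event['y']:.2f})")

if __name__ == "__main__":
    source = Node(laser_pointer_driver)
    source.connect(Node(smoothing_filter)).connect(Node(cursor_output))
    source.publish({"px": 512, "py": 384})
    source.publish({"px": 900, "py": 100})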
A Specification Paradigm for the Design and Implementation of Tangible User Interfaces
2009
"... Tangible interaction shows promise to significantly enhance computer-mediated support for activities such as learning, problem solving, and design. However, tangible user interfaces are currently considered challenging to design and build. Designers and developers of these interfaces encounter sever ..."
Abstract - Cited by 12 (0 self)
Tangible interaction shows promise to significantly enhance computer-mediated support for activities such as learning, problem solving, and design. However, tangible user interfaces are currently considered challenging to design and build. Designers and developers of these interfaces encounter several conceptual, methodological and technical difficulties. Among others, these challenges include: the lack of appropriate interaction abstractions, the shortcomings of current user interface software tools to address continuous and parallel interactions, as well as the excessive effort required to integrate novel input and output technologies. To address these challenges, we propose a specification paradigm for designing and implementing Tangible User Interfaces (TUIs) that enables TUI developers to specify the structure and behavior of a tangible user interface using high-level constructs, which abstract away implementation details. An important benefit of this approach, which is based on User Interface Description Language (UIDL) research, is that these specifications could be automatically or semi-automatically converted into concrete TUI implementations. In addition, such specifications could serve as a common ground for investigating both design and implementation concerns by TUI developers from different disciplines. Thus, the primary contribution of this paper is a high-level UIDL that provides developers,
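As a rough illustration of a high-level specification being converted into a concrete implementation (not the paper's UIDL), the sketch below compiles a declarative description of tangible objects and their behaviors into a dispatch table; the object names, events, and actions are invented.

# Minimal sketch: a declarative description of a tangible interface's objects
# and behaviors, plus a function that turns it into runnable event handlers.

TUI_SPEC = {
    "objects": {
        "dial":  {"events": {"rotated": "set_volume"}},
        "token": {"events": {"placed_on_table": "start_playback",
                             "removed": "stop_playback"}},
    }
}

ACTIONS = {
    "set_volume":     lambda value=None: print(f"volume -> {value}"),
    "start_playback": lambda value=None: print("playback started"),
    "stop_playback":  lambda value=None: print("playback stopped"),
}

def compile_spec(spec, actions):
    """Convert the declarative spec into a dispatch table of handlers."""
    handlers = {}
    for obj, desc in spec["objects"].items():
        for event, action_name in desc["events"].items():
            handlers[(obj, event)] = actions[action_name]
    return handlers

if __name__ == "__main__":
    handlers = compile_spec(TUI_SPEC, ACTIONS)
    handlers[("token", "placed_on_table")]()
    handlers[("dial", "rotated")](value=7)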