Results 1 - 10 of 17
Mudra: A Unified Multimodal Interaction Framework
- In Proceedings of ICMI 2011, 13th International Conference on Multimodal Interaction
, 2011
"... In recent years, multimodal interfaces have gained momentum as an alternative to traditional WIMP interaction styles. Existing mul-timodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of ..."
Abstract
-
Cited by 11 (5 self)
- Add to MetaCart
(Show Context)
In recent years, multimodal interfaces have gained momentum as an alternative to traditional WIMP interaction styles. Existing multimodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of multimodal interaction engines offering native fusion support across different levels of abstraction to fully exploit the power of multimodal interactions. We present Mudra, a unified multimodal interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. Our innovative architecture for multimodal interaction encourages the use of software engineering principles such as modularisation and composition to support a growing set of input modalities as well as to enable the integration of existing or novel multimodal fusion engines.
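The abstract describes Mudra's core mechanism only at a high level: a central fact base plus declarative rules that derive higher-level facts from lower-level ones. Purely as an illustration of that idea (not Mudra's actual API; every class, field and rule below is hypothetical), a fusion step might promote two nearly simultaneous touch facts to a higher-level gesture fact:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a fact base with one hard-coded fusion rule,
// loosely inspired by the architecture the abstract describes. None of
// these types exist in Mudra; they are invented for illustration only.
public class FusionSketch {

    record TouchFact(int id, float x, float y, long timeMs) {}
    record GestureFact(String name, long timeMs) {}

    static class FactBase {
        final List<TouchFact> touches = new ArrayList<>();
        final List<GestureFact> gestures = new ArrayList<>();

        void assertFact(TouchFact fact) {
            touches.add(fact);
            runRules();
        }

        // "Rule": any two distinct touch facts arriving within 50 ms of each
        // other are promoted to a higher-level two-finger gesture fact.
        void runRules() {
            for (TouchFact a : touches)
                for (TouchFact b : touches)
                    if (a.id() < b.id() && Math.abs(a.timeMs() - b.timeMs()) < 50)
                        gestures.add(new GestureFact("two-finger-start", b.timeMs()));
        }
    }

    public static void main(String[] args) {
        FactBase facts = new FactBase();
        facts.assertFact(new TouchFact(1, 0.2f, 0.5f, 1000));
        facts.assertFact(new TouchFact(2, 0.6f, 0.5f, 1020));
        System.out.println(facts.gestures);   // one derived "two-finger-start" fact
    }
}

In Mudra itself the rules are written in a declarative rule language rather than hard-coded in Java; the sketch only mirrors the fact-deriving behaviour the abstract outlines.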
reacTIVision and TUIO: A Tangible Tabletop Toolkit
- Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS2009)
"... This article presents the recent updates and an evaluation of reacTIVision, a computer vision toolkit for fiducial marker tracking and multi-touch interaction. It also discusses the current and future development of the TUIO protocol and framework, which has been primarily designed as an abstraction ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
This article presents the recent updates and an evaluation of reacTIVision, a computer vision toolkit for fiducial marker tracking and multi-touch interaction. It also discusses the current and future development of the TUIO protocol and framework, which has been primarily designed as an abstraction layer for the description and transmission of pointers and tangible object states in the context of interactive tabletop surfaces. The initial protocol definition proved to be rather robust due to the simple and straightforward implementation approach, which also supported its widespread adoption within the open source community. This article also discusses the current limitations of this simplistic approach and provides an outlook towards a next generation protocol definition, which will address the need for additional descriptors and the protocol’s general extensibility.
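For readers who have not used TUIO, a minimal client gives a concrete impression of the pointer and tangible object states the protocol transmits. The sketch below assumes the TUIO 1.1 Java reference client distributed with reacTIVision (package TUIO, classes TuioClient/TuioListener); the callback names follow that API as far as the author recalls, and some client versions require additional blob callbacks, so the code should be checked against the actual distribution.

import TUIO.*;

// Minimal TUIO listener sketch (assumed TUIO 1.1 Java reference client API).
public class TuioDump implements TuioListener {

    public void addTuioObject(TuioObject tobj) {
        // a tagged tangible (fiducial marker) appeared on the surface
        System.out.println("add obj " + tobj.getSymbolID()
                + " at " + tobj.getX() + "," + tobj.getY());
    }
    public void updateTuioObject(TuioObject tobj) { /* object moved or rotated */ }
    public void removeTuioObject(TuioObject tobj) { /* object lifted off */ }

    public void addTuioCursor(TuioCursor tcur) {
        // a touch point (finger) appeared
        System.out.println("add cur " + tcur.getCursorID());
    }
    public void updateTuioCursor(TuioCursor tcur) { /* finger moved */ }
    public void removeTuioCursor(TuioCursor tcur) { /* finger lifted */ }

    public void refresh(TuioTime frameTime) { /* end of one TUIO frame */ }

    public static void main(String[] args) {
        TuioClient client = new TuioClient();    // default UDP port 3333
        client.addTuioListener(new TuioDump());
        client.connect();                        // start receiving TUIO/OSC messages
    }
}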
MT4j - A Cross-Platform Multi-Touch Development Framework
- Workshop of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems
, 2010
"... This article describes requirements and challenges of crossplatform multi-touch software engineering, and presents the open source framework Multi-Touch for Java (MT4j) as a solution. MT4j is designed for rapid development of graphically rich applications on a variety of contemporary hardware, from ..."
Abstract
-
Cited by 8 (1 self)
- Add to MetaCart
(Show Context)
This article describes requirements and challenges of cross-platform multi-touch software engineering, and presents the open source framework Multi-Touch for Java (MT4j) as a solution. MT4j is designed for rapid development of graphically rich applications on a variety of contemporary hardware, from common PCs and notebooks to large-scale ambient displays, as well as different operating systems. The framework has a special focus on making multi-touch software development easier and more efficient. Architecture and abstractions used by MT4j are described, and implementations of several common use cases are presented.
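To make the MT4j abstractions more tangible, the sketch below shows what a minimal entry point typically looks like: an application class that boots the framework and registers a scene into which multi-touch components are added. The package paths and signatures are reproduced from memory of MT4j's bundled examples and may differ between framework versions, so treat this as an approximation rather than copy-paste code.

import org.mt4j.MTApplication;
import org.mt4j.sceneManagement.AbstractScene;

// Approximate MT4j entry point; verify class names and signatures against
// the MT4j version actually in use.
public class StartExample extends MTApplication {

    public static void main(String[] args) {
        initialize();   // boots the underlying Processing/OpenGL context
    }

    @Override
    public void startUp() {
        addScene(new EmptyScene(this, "empty scene"));
    }

    // A scene groups visual components and receives the multi-touch input
    // targeted at them.
    private static class EmptyScene extends AbstractScene {
        EmptyScene(MTApplication app, String name) {
            super(app, name);
            // components (shapes, widgets, gesture listeners) would be
            // added to getCanvas() here
        }
        public void init() { }       // required by some MT4j versions
        public void shutDown() { }   // required by some MT4j versions
    }
}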
A Framework for the Intelligent Delivery and User-adequate Visualization of Process Information
- In Proc. 28th Symp. on Applied Computing (SAC’13)
, 2013
"... manfred.reichert ..."
(Show Context)
GISpL: Gestures Made Easy
"... Figure 1: Controlling a widget using alternative input methods We present GISpL, the Gestural Interface Specification Language. GISpL is a formal language which allows both researchers and developers to unambiguously describe the behavior of a wide range of gestural interfaces using a simple JSON-ba ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
[Figure 1: Controlling a widget using alternative input methods]
We present GISpL, the Gestural Interface Specification Language. GISpL is a formal language which allows both researchers and developers to unambiguously describe the behavior of a wide range of gestural interfaces using a simple JSON-based syntax. GISpL supports a multitude of input modalities, including multi-touch, digital pens, multiple regular mice, tangible interfaces, or mid-air gestures. GISpL introduces a novel view on gestural interfaces from a software-engineering perspective. By using GISpL, developers can avoid tedious tasks such as reimplementing the same gesture recognition algorithms over and over again. Researchers benefit from the ability to quickly reconfigure prototypes of gestural UIs on the fly, possibly even in the middle of an expert review. In this paper, we present a brief overview of GISpL as well as some usage examples of our reference implementation. We demonstrate its capabilities using the example of a multichannel audio mixer application controlled through several different input modalities. Moreover, we present exemplary GISpL descriptions of other gestural interfaces and conclude by discussing its potential applications and future development.
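To give an impression of the "simple JSON-based syntax" the abstract mentions, the snippet below embeds a GISpL-style gesture description in a Java string. The field names (name, flags, features, type, constraints) are modelled loosely on published GISpL examples from memory and are not guaranteed to match the exact schema; consult the GISpL specification before relying on them.

// Illustrative, GISpL-style description of a two-finger zoom gesture.
public class GisplSketch {

    static final String TWO_FINGER_ZOOM = """
        {
          "name": "zoom",
          "flags": "sticky",
          "features": [
            { "type": "Count", "constraints": [2, 2] },
            { "type": "Scale" }
          ]
        }
        """;

    public static void main(String[] args) {
        // A GISpL runtime would register such a description and report "zoom"
        // events to the application whenever two contacts change their spread.
        System.out.println(TWO_FINGER_ZOOM);
    }
}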
LightTracker: An Open-Source Multitouch Toolkit
- Comput. Entertain
, 2010
"... In this article we present LightTracker – an open source toolkit for vision-based multi-touch setups. Using this toolkit, we can dynamically create and manipulate the image processing pipeline at runtime. After presenting various new requirements derived from hardware configurations as well as liter ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
In this article we present LightTracker, an open source toolkit for vision-based multi-touch setups. Using this toolkit, the image processing pipeline can be created and manipulated dynamically at runtime. After presenting various new requirements derived from hardware configurations as well as a literature review, features and shortcomings of available tracking solutions are discussed and compared to our proposed toolkit. This is followed by a detailed description of the toolkit's functionality, including the filter chain and the improved calibration module. We also present implementation details such as the plugin system and the multi-threaded architecture. To illustrate the advantages of the toolkit, an interactive gaming couch table based on LightTracker is introduced.
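The toolkit's central idea, an image processing pipeline that can be rearranged while the tracker is running, can be illustrated with a generic sketch. None of the names below are LightTracker's actual API; they only demonstrate the runtime-modifiable filter chain pattern the abstract refers to.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Generic sketch of a runtime-modifiable filter chain; all names hypothetical.
public class PipelineSketch {

    interface Filter {
        int[] apply(int[] grayFrame);   // one 8-bit brightness value per pixel
    }

    static class FilterChain {
        // CopyOnWriteArrayList lets a UI thread add or remove filters while
        // the capture thread keeps iterating over the chain safely.
        private final List<Filter> filters = new CopyOnWriteArrayList<>();

        void add(Filter f)    { filters.add(f); }
        void remove(Filter f) { filters.remove(f); }

        int[] process(int[] frame) {
            for (Filter f : filters) frame = f.apply(frame);
            return frame;
        }
    }

    public static void main(String[] args) {
        FilterChain chain = new FilterChain();
        chain.add(frame -> frame);                 // pass-through stage
        chain.add(frame -> {                       // fixed-threshold stage
            int[] out = frame.clone();
            for (int i = 0; i < out.length; i++) out[i] = out[i] > 128 ? 255 : 0;
            return out;
        });
        System.out.println(chain.process(new int[] { 40, 200 })[1]);   // prints 255
    }
}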
Authoring Support for Post-WIMP Applications
"... Abstract. Employing post-WIMP interfaces, i.e. user interfaces going beyond the traditional WIMP (Windows, Icons, Menu, Pointer) paradigm, often implies a more complex authoring process for applications. We present a novel authoring method and a corresponding tool that aims to enable developers to c ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Employing post-WIMP interfaces, i.e. user interfaces going beyond the traditional WIMP (Windows, Icons, Menu, Pointer) paradigm, often implies a more complex authoring process for applications. We present a novel authoring method and a corresponding tool that aim to enable developers to cope with this added level of complexity. Regarding development as a process conducted on different layers, we introduce a specific layer for post-WIMP functionality in addition to layers addressing implementation and traditional GUI elements. We discuss the concept of cross-layer authoring, which supports different author groups in the collaborative creation of post-WIMP applications by permitting them to work independently on their respective layers while contributing their specific skills. The concept comprises interactive visualization techniques that highlight connections between code, GUI and post-WIMP functionality, and it allows for graphical inspection while transitioning smoothly between layers. A cross-layer authoring tool has been implemented and was well received by UI developers during evaluation.
Component-Based High Fidelity Interactive Prototyping of Post-WIMP Interactions.
"... In order to support interactive high-fidelity prototyping of post-WIMP user interactions, we propose a multi-fidelity design method based on a unifying component-based model and supported by an advanced tool suite, the OpenInterface Platform Workbench. Our approach strives for supporting a collabora ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
In order to support interactive high-fidelity prototyping of post-WIMP user interactions, we propose a multi-fidelity design method based on a unifying component-based model and supported by an advanced tool suite, the OpenInterface Platform Workbench. Our approach strives to support a collaborative (programmer-designer) and user-centered design activity. The workbench architecture allows exploration of novel interaction techniques through seamless integration and adaptation of heterogeneous components, high-fidelity rapid prototyping, runtime evaluation and fine-tuning of designed systems. This paper illustrates, through the iterative construction of a running example, how OpenInterface allows developers to leverage existing resources and fosters the creation of non-conventional interaction techniques.
Navigating in Complex Business Processes
- Proc. 23rd Int’l Conf. on Database and Expert Systems Applications (DEXA’12), LNCS 7447
, 2012
"... Abstract. In order to provide information needed in knowledge-intense business processes, large companies often establish intranet portals, which enable access to their process handbook. Especially, for large busi-ness processes comprising hundreds or thousands of process steps, these portals can he ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
In order to provide information needed in knowledge-intensive business processes, large companies often establish intranet portals, which enable access to their process handbook. Especially for large business processes comprising hundreds or thousands of process steps, these portals can help to avoid time-consuming access to paper-based process documentation. However, business processes are usually presented in a rather static manner within these portals, e.g., as simple drawings or textual descriptions. Companies therefore require new ways of making large processes and process-related information better explorable for end-users. This paper picks up this issue and presents a formal navigation framework based on linear algebra for navigating in large business processes.
A Graphical Editor for the SMUIML Multimodal User Interaction Description Language
"... We present the results of an investigation on software support for the SMUIML multimodal user interaction description language (UIDL). In particular, we introduce a graphical UIDL editor for the creation of SMUIML scripts. The data management as well as the dialogue modelling in our graphical editor ..."
Abstract
- Add to MetaCart
(Show Context)
We present the results of an investigation on software support for the SMUIML multimodal user interaction description language (UIDL). In particular, we introduce a graphical UIDL editor for the creation of SMUIML scripts. Both the data management and the dialogue modelling in our graphical editor are based on the SMUIML language. Due to the event-centered nature of SMUIML, multimodal dialogues are modelled as state machines in the graphical SMUIML dialogue editor. Our editor further offers a real-time graphical debugging tool. Compared to existing multimodal dialogue editing solutions, the SMUIML graphical editor offers synchronised dual editing in graphical and textual form as well as a number of operators for the temporal combination of modalities. The graphical editor forms the third component of a triad of tools for the development of multimodal user interfaces, consisting of an XML-based modelling language, a framework for the authoring of multimodal interfaces, and the graphical editor itself.
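Since the abstract only states that multimodal dialogues are modelled as state machines, the generic sketch below shows what such an event-driven dialogue state machine can look like. All states, events and class names are hypothetical and do not reflect SMUIML's actual data model or XML schema.

import java.util.HashMap;
import java.util.Map;

// Generic event-driven dialogue state machine; all names are hypothetical.
public class DialogueSketch {

    // current state -> (input event -> next state)
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String current = "idle";

    void addTransition(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
    }

    void fire(String event) {
        String next = transitions.getOrDefault(current, Map.of()).get(event);
        if (next != null) current = next;
        System.out.println(event + " -> " + current);
    }

    public static void main(String[] args) {
        DialogueSketch d = new DialogueSketch();
        // A speech command followed by a pointing gesture advances the dialogue.
        d.addTransition("idle", "speech:put_that", "awaiting_target");
        d.addTransition("awaiting_target", "gesture:point", "idle");
        d.fire("speech:put_that");   // speech:put_that -> awaiting_target
        d.fire("gesture:point");     // gesture:point -> idle
    }
}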