.: Immersive Mixed-Reality Configuration of Hybrid User Interfaces :.

People

Project Description

In hybrid user interfaces, information can be distributed over a variety of different, but complementary, displays. For example, these can include stationary, opaque displays and see-through, head-worn displays. Users can also interact through a wide range of interaction devices. In the unplanned, everyday interactions that we would like to support, we would not know in advance the exact displays and devices to be used, or even the users who would be involved. All of these might even change during the course of interaction. Therefore, a flexible infrastructure for hybrid user interfaces should automatically accommodate a changing set of input devices and the interaction techniques with which they are used. This project embodies the first steps toward building a mixed-reality system that allows users to configure a hybrid user interface.

A key idea underlying our work is to immerse the user within the authoring environment. Immersive authoring has been explored by Lee and colleagues in a system that supports a wider range of possible parameters than we currently do. However, while their system is restricted to a single view and limits interaction with real objects to ARToolKit markers, our system supports multiple coordinated views with different visualizations and interaction with a variety of physical controllers.

In our scenario, a user interacts with physical input devices and 3D objects drawn on several desktop displays. The input devices can be configured to perform simple 3D transformations (currently scale, translation, and rotation) on the objects. The user's see-through head-worn display overlays lines that visualize data flows in the system, connecting the input devices and objects, and annotates each line with the iconic representation of its associated transformation. The user wields a tracked wand with which s/he can reconfigure these relationships, picking in arbitrary order the three elements that comprise each relationship: an input device, a 3D object, and an operation chosen from a desktop menu.
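The relationships described above can be pictured as a small data-flow model: each configured relationship ties one input device to one 3D object via one operation. The following Python sketch is illustrative only, with hypothetical names and simplified transformations; it is not the system's actual code.

    # Hypothetical sketch of the device-to-object data flows described above.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Operation(Enum):
        """The simple 3D transformations currently supported."""
        SCALE = auto()
        TRANSLATE = auto()
        ROTATE = auto()

    @dataclass
    class Object3D:
        """A 3D object drawn on one of the desktop displays."""
        name: str
        position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        rotation: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        scale: float = 1.0

    @dataclass
    class Mapping:
        """One data flow: device input drives an operation on an object."""
        device_id: str          # e.g., "game_controller", "dance_mat"
        target: Object3D
        operation: Operation

        def apply(self, value: float) -> None:
            # Route the device's scalar input to the configured transformation.
            if self.operation is Operation.SCALE:
                self.target.scale *= (1.0 + value)
            elif self.operation is Operation.TRANSLATE:
                self.target.position[0] += value   # translate along x, for brevity
            elif self.operation is Operation.ROTATE:
                self.target.rotation[1] += value   # rotate about y, for brevity

    # Example: a game controller button translates a "cube" object.
    cube = Object3D("cube")
    mapping = Mapping(device_id="game_controller", target=cube,
                      operation=Operation.TRANSLATE)
    mapping.apply(0.1)
    print(cube.position)   # [0.1, 0.0, 0.0]

In this view, the lines overlaid on the head-worn display correspond to Mapping instances, and the icons correspond to their operations.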

While we have designed our initial scenario to be simple, we would ultimately like to support users in their daily tasks, such as specifying how a pocket-sized device can control the volume of a living-room audio system or the brightness of the lights in a bedroom. Thus, the objects used in our scenario are placeholders for aspects of an application a user might want to manipulate, while the 3D transformations are placeholders for more general operations a user could perform on them.

Video

Download video here.

Publications

Sandor, C., Bell, B., Olwal, A., Temiyabutr, S., and Feiner, S. "Visual end user configuration of hybrid user interfaces (demo description)." In Proc. ACM SIGMM 2004 Workshop on Effective Telepresence, New York, NY, USA, October 15, 2004.

System Images

Input devices used in our system (starting at the top-left corner, clockwise):
  • A tracked board with PowerMate sensors (left) and MIDI sensors (right): sliders and bend sensors attached to playing cards.
  • A game controller.
  • A dance mat.
  • A wand with an attached InterSense 6DOF tracker.
Video-mixed view through a user's tracked, see-through, head-worn display. Lines show the data flow between tracked input devices and virtual objects. Icons attached to the lines visualize the currently executed operations.
Untracked input devices are shown as screen-stabilized models. The picture shows a user translating a virtual object by pressing a button on a game controller.
With a tracked wand, the user can reconfigure the data flows. In the example shown here, the user has already picked a 3D object on a screen; the selected object is highlighted and a virtual line is attached to it. The user has also selected an operation, so the iconic representation of that operation is displayed on the line. To complete the reconfiguration, the user next has to pick an input device.
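Because the three elements can be picked in arbitrary order, the reconfiguration can be thought of as a selection that completes once a device, an object, and an operation have all been chosen. The Python fragment below is a hypothetical sketch of that idea, not the system's actual code.

    # Hypothetical sketch of arbitrary-order picking with the tracked wand.
    class PendingSelection:
        def __init__(self):
            self.device_id = None
            self.object_name = None
            self.operation = None

        def pick(self, kind: str, value: str) -> bool:
            """Record one picked element; return True when the triple is complete."""
            if kind == "device":
                self.device_id = value
            elif kind == "object":
                self.object_name = value
            elif kind == "operation":
                self.operation = value
            return all([self.device_id, self.object_name, self.operation])

    # Example: the user picks the object first, then the operation, then the device.
    sel = PendingSelection()
    sel.pick("object", "cube")        # object highlighted, line attached
    sel.pick("operation", "rotate")   # operation icon shown on the line
    if sel.pick("device", "dance_mat"):
        print("Reconfigured:", sel.device_id, sel.object_name, sel.operation)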

Acknowledgements

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any organization supporting this work.