Virtual Worlds Research at Columbia University's Computer Graphics and User Interfaces Laboratory


Our research on virtual worlds centers on the development of new metaphors for visualizing and interacting effectively with rich information spaces. This work is performed within Columbia's Computer Graphics and User Interfaces Laboratory, in the Schapiro Center for Engineering and Physical Science Research. The lab's research spans a broad range of topics, including knowledge-based graphics, animation, rendering, visualization, visual languages, and hypermedia. Our research facilities include UNIX and Win2K workstations with 3D graphics accelerators, custom-built wearable backpack computers with 3D graphics accelerators, 3D tracking systems (InterSense IS-600 Mark II Plus 6DOF and IS-300 Pro 3DOF hybrid trackers, an extended-range Ascension Flock of Birds electromagnetic tracking system, Logitech ultrasonic trackers, Origin Instruments Dynasight optical radar trackers, a centimeter-level AshTech GG Surveyor RTK GPS receiver, and a sub-meter Trimble DSM differential GPS receiver), a variety of see-through head-worn displays (Sony LDI-D100B SVGA-resolution displays and MicroOptical eyeglass displays), a 3D projection display (an Electrohome Marquee 100" diagonal rear-projection stereo display with StereoGraphics CrystalEyes stereo eyewear), networking infrastructure (ranging from gigabit fiber to 11 Mbps wireless), and a digital video editing facility. Students also have shared access to a 4-processor SGI Onyx2 with InfiniteReality graphics.

Our n-Vision visualization testbed [Feiner & Beshers 90a, Feiner & Beshers 90b] allows users to explore abstract virtual worlds populated by objects representing functions of many variables. A ``3D window system'' partitions the physical space in which users interact -- a volume containing objects that are viewed in stereo and manipulated using the DataGlove. One current application is an example of ``financial visualization'' in which users can determine the effect of market variables on the value of financial instruments. n-Vision incorporates a novel approach to visualizing higher-dimensional data that uses nested heterogeneous coordinate systems: the position at which an inner coordinate system is placed within its enclosing coordinate system fixes the values of the variables bound to the enclosing axes, leaving the inner axes free to display the remaining variables. We are currently developing a knowledge-based ``world-design'' component that will select appropriate interaction and presentation techniques from n-Vision's repertoire [Beshers & Feiner 93].
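
As a rough illustration of this nesting idea, the sketch below fixes the variables bound to an outer coordinate system's axes whenever an inner coordinate system is placed within it, and then samples the function over the inner system's free axes. The World class, its field names, and the six-variable function are assumptions made for the example and are not the actual n-Vision implementation.

    # A minimal sketch of nested coordinate systems for higher-dimensional data.
    # The class and function names below are illustrative, not n-Vision's API.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class World:
        """A 3D coordinate system whose axes are bound to three variables of f."""
        axes: Tuple[str, str, str]                              # variables encoded by x, y, z
        fixed: Dict[str, float] = field(default_factory=dict)   # values fixed by enclosing worlds
        children: List["World"] = field(default_factory=list)

        def nest(self, axes: Tuple[str, str, str], at: Tuple[float, float, float]) -> "World":
            """Embed an inner world; its position fixes this world's three variables."""
            fixed = dict(self.fixed)
            fixed.update(zip(self.axes, at))
            child = World(axes=axes, fixed=fixed)
            self.children.append(child)
            return child

    def sample(world: World, f: Callable[..., float], grid=(0.0, 0.5, 1.0)):
        """Evaluate f over the world's three free variables, holding the rest constant."""
        points = []
        for x in grid:
            for y in grid:
                for z in grid:
                    values = dict(world.fixed)
                    values.update(zip(world.axes, (x, y, z)))
                    points.append(((x, y, z), f(**values)))
        return points

    # A 6-variable function: the outer world encodes a, b, c; placing the inner
    # world at (0.2, 0.4, 0.6) fixes them, and the inner world plots f over d, e, g.
    def f(a, b, c, d, e, g):
        return a * d + b * e + c * g

    outer = World(axes=("a", "b", "c"))
    inner = outer.nest(axes=("d", "e", "g"), at=(0.2, 0.4, 0.6))
    print(sample(inner, f)[:3])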

We have also been experimenting with an approach to user-interface design that embeds the physically small flat-panel display of a portable workstation within a virtually large information surround. The information surround is presented on our see-through, head-mounted display. A mirror beam splitter merges the user's view of the regular screen with that of the surrounding virtual world. We refer to this approach as a ``hybrid user interface'' because it attempts to combine the strengths of heterogeneous user-interface technologies. Our current prototype is an X11 window manager that allows the user to move windows between the flat panel and the surround [Feiner & Shamash 91]. This project is being carried out in conjunction with a larger multifaculty collaboration in which we are building the software infrastructure for a 2 Mbit/sec wireless mobile computing network [Duchamp, Feiner, & Maguire 91].
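
As a simplified illustration of the bookkeeping such a window manager performs, the sketch below records whether each window lives on the flat panel or on the virtual surround and converts its coordinates when it is moved between the two. The resolutions, angular sizes, and function names are assumptions made for the example, not details of our X11 prototype.

    # A simplified sketch of hybrid-user-interface window bookkeeping: each window
    # lives either on the physical flat panel or on the virtual surround, and its
    # coordinates are converted when it is moved between the two. The constants
    # and names below are assumptions, not our X11 window manager.

    from dataclasses import dataclass

    PANEL_W, PANEL_H = 1024, 768            # flat-panel resolution (assumed)
    PANEL_FOV_X, PANEL_FOV_Y = 30.0, 23.0   # panel's apparent angular size in degrees (assumed)

    @dataclass
    class Window:
        name: str
        on_panel: bool
        x: float   # panel pixels if on_panel, else azimuth in degrees on the surround
        y: float   # panel pixels if on_panel, else elevation in degrees on the surround

    def move_to_surround(w: Window) -> None:
        """Detach a window from the flat panel, keeping roughly the same apparent direction."""
        if w.on_panel:
            w.x = (w.x / PANEL_W - 0.5) * PANEL_FOV_X    # pixels -> degrees of azimuth
            w.y = (0.5 - w.y / PANEL_H) * PANEL_FOV_Y    # pixels -> degrees of elevation
            w.on_panel = False

    def move_to_panel(w: Window) -> None:
        """Bring a surround window back onto the panel if it lies within the panel's field of view."""
        if not w.on_panel and abs(w.x) <= PANEL_FOV_X / 2 and abs(w.y) <= PANEL_FOV_Y / 2:
            w.x = (w.x / PANEL_FOV_X + 0.5) * PANEL_W
            w.y = (0.5 - w.y / PANEL_FOV_Y) * PANEL_H
            w.on_panel = True

    term = Window("xterm", on_panel=True, x=512, y=384)
    move_to_surround(term)   # the window now floats in the surround, straight ahead of the panel's center
    print(term)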

KARMA (Knowledge-based Augmented Reality for Maintenance Assistance) [Feiner, MacIntyre, & Seligmann 93] designs virtual worlds that explain how to operate, maintain, and repair equipment. By creating such material automatically using AI techniques, we aim to eliminate the tremendous human effort that currently goes into designing ``hand-crafted'' hypermedia and multimedia presentations. This work builds on our research on the knowledge-based generation of 3D graphics [Karp & Feiner 93, Seligmann & Feiner 91] and on a multifaculty collaboration on the coordinated generation of text and graphics [Feiner & McKeown 91]. KARMA uses our see-through head-mounted display to create an ``augmented reality'' in which a synthesized virtual world overlays the user's view of the physical world. Our experimental domain is simple end-user maintenance for a laser printer. We attached several 3D trackers to key components of the printer, allowing the system to monitor their position and orientation so that the physical and virtual worlds can be registered. A modified version of the IBIS rule-based illustration generation system [Seligmann & Feiner 91] interactively designs overlaid graphics and simple textual callouts that fulfill a set of goals given as input to the system.
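
The fragment below sketches the registration step in isolation: the pose reported by a tracker attached to a printer component is used to re-express an annotation, authored in that component's local coordinate frame, in world coordinates. The pose values, matrix conventions, and helper names are assumptions chosen for illustration and do not reflect KARMA's actual code.

    # A minimal sketch of tracker-based registration: an annotation authored in a
    # printer component's local frame is transformed into world coordinates using
    # the pose reported by the tracker attached to that component. Pose values,
    # conventions, and helper names are assumptions, not KARMA's actual code.

    import numpy as np

    def pose_matrix(position, yaw_deg):
        """Build a 4x4 rigid transform from a tracker report (position plus yaw only, for brevity)."""
        t = np.radians(yaw_deg)
        m = np.eye(4)
        m[:3, :3] = [[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]]
        m[:3, 3] = position
        return m

    def to_world(component_pose, point_in_component):
        """Transform a point given in the component's frame into world coordinates."""
        return (component_pose @ np.append(point_in_component, 1.0))[:3]

    # Hypothetical tracker report for the printer's paper tray (meters, degrees).
    tray_pose = pose_matrix(position=[0.40, 0.10, 0.75], yaw_deg=15.0)

    # A callout anchor authored 5 cm above the tray handle, in tray coordinates;
    # recomputing this each update keeps the overlaid graphics registered with the part.
    callout_anchor_local = np.array([0.0, 0.0, 0.05])
    print("callout anchor in world frame:", to_world(tray_pose, callout_anchor_local))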

While many virtual environments researchers have worked with simple 2D windows and control panels, we have developed support for a full X11 window system server within our augmented reality testbed [Feiner, MacIntyre, Haupt, & Solomon 93]. The user's head is tracked so that the display indexes into a large X bitmap, effectively placing the user inside a display space that is mapped onto part of a surrounding virtual sphere. By tracking the user's body and interpreting head motion relative to it, we create a portable information surround that envelops the user as they move about. We support three kinds of windows implemented on top of the X server: windows fixed to the head-mounted display, windows fixed to the information surround, and windows fixed to locations and objects in the 3D world. Objects can also be tracked, allowing windows to move with them. To demonstrate the utility of this model, we have developed a small hypermedia system that allows links to be made between windows and allows windows to be attached to objects. Thus, our hypermedia system can forge links between any combination of physical objects and virtual windows.
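
The indexing can be sketched roughly as follows: head orientation, interpreted relative to the body, selects a display-sized rectangle from the large X bitmap mapped onto part of the surrounding sphere. The bitmap size, fields of view, and equirectangular-style mapping below are assumptions chosen for the example rather than the parameters of our actual server.

    # A rough sketch of head-tracked indexing into one large X bitmap mapped onto
    # part of a sphere around the user. Sizes, fields of view, and the mapping
    # are assumptions, not the parameters of our X server.

    BITMAP_W, BITMAP_H = 8192, 2048              # the large X bitmap (assumed size)
    SURROUND_YAW, SURROUND_PITCH = 360.0, 90.0   # angular extent the bitmap covers, in degrees (assumed)
    DISPLAY_W, DISPLAY_H = 800, 600              # head-worn display resolution (assumed)

    def viewport_origin(head_yaw, head_pitch, body_yaw):
        """Return the top-left bitmap pixel of the region the head-worn display should show.

        Interpreting head motion relative to the body keeps the surround portable:
        it travels with the user instead of staying fixed in the room.
        """
        yaw = (head_yaw - body_yaw) % 360.0                                    # head direction relative to the body
        pitch = max(-SURROUND_PITCH / 2, min(SURROUND_PITCH / 2, head_pitch))  # clamp to the surround's extent

        x = int(yaw / SURROUND_YAW * BITMAP_W)
        y = int((SURROUND_PITCH / 2 - pitch) / SURROUND_PITCH * BITMAP_H)

        # Keep the display rectangle inside the bitmap (ignoring azimuth wraparound for brevity).
        return min(x, BITMAP_W - DISPLAY_W), min(y, BITMAP_H - DISPLAY_H)

    # Looking 5 degrees to the right of the body and 10 degrees up:
    print(viewport_origin(head_yaw=95.0, head_pitch=10.0, body_yaw=90.0))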

In conjunction with members of Columbia's School of Architecture, we have begun to explore the benefits of exposing a building's ``architectural anatomy,'' allowing the user to see the building's otherwise hidden structural systems [Feiner, Webster, Krueger, MacIntyre, & Keller 95]. Our first prototype application overlays a graphical representation of portions of a building's structural systems on the user's view of the surrounding room.


Program:

Opportunities to participate in our research projects are available as part of Columbia's undergraduate, M.S., and Ph.D. programs in Computer Science.


References