.: Collaborative Visualization of an Archaeological Excavation :.


Project Description:

As part of a larger initiative to bring novel visualization techniques to the field of archaeology (NSF ITR Computational Tools for Modeling, Visualizing and Analyzing Historic and Archaeological Sites), we have developed a collaborative system, called VITA (Visual Interaction Tool for Archaeology), for offsite visualization of an archaeological dig through both virtual and augmented reality. Our main focus is to create a visualization environment that provides a good match between the material being presented and the available media and devices. For example, we would like to use 2D visualization and interaction metaphors for interacting with 2D media (e.g., images, maps, and videos) and 3D immersive visualization and interaction metaphors for 3D data (e.g., 3D models, panoramic images, and spatial audio). Therefore, we have created a modular "hybrid" user interface that supports multiple visualization and interaction metaphors.
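The core idea of the hybrid interface is to route each kind of data to the visualization and interaction metaphor that suits it. Purely as an illustration (these media types and handler names are invented for this sketch and are not from the VITA codebase), the routing could be expressed as a simple dispatch table:

```python
# Illustrative sketch only: hypothetical media-to-metaphor routing.
# All names here are invented; this is not actual VITA code.

MEDIA_METAPHORS = {
    # 2D media -> 2D visualization/interaction metaphors
    "image": "2d_tabletop",
    "map": "2d_tabletop",
    "video": "2d_wall_display",
    # 3D data -> immersive 3D metaphors
    "model_3d": "3d_immersive",
    "panorama": "3d_immersive",
    "spatial_audio": "3d_immersive",
}

def route(media_type: str) -> str:
    """Pick the visualization metaphor for a given media type."""
    try:
        return MEDIA_METAPHORS[media_type]
    except KeyError:
        raise ValueError(f"unsupported media type: {media_type}")
```

The modular design means a new display or device only needs to register the metaphors it supports, rather than every module knowing about every device.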

In our system, multiple users, wearing tracked, head-worn, see-through displays, can interact with the environment using tracked, instrumented gloves, a multi-user, multi-touch, projected table surface, large wall displays, and tracked hand-held displays. We take advantage of our ongoing work on 3D multimodal interaction to allow users to combine speech with head, hand, and arm gestures to aid them in their tasks. Although the dig site can be visualized as a purely virtual environment, when users collaborate using the projected table, their see-through head-worn displays allow them to see personalized overlaid material in context with the shared, projected table surface.

The excavation data was collected onsite at Monte Polizzo in Sicily, Italy, in July 2003. We were guests of Professor Ian Morris and his Stanford University archaeological excavation team. (More information can be found at their official website.)

Also, we have developed a set of cross-dimensional gestures to facilitate seamless transition of data between 2D and 3D displays. You can find more information about them here.


Download video here (DivX encoded, 19.5 MB); download the DivX codec here.

Publications and Talks:

Benko, H., Ishak, E.W., Feiner, S. "Cross-Dimensional Gestural Interaction Techniques for Hybrid Immersive Environments." In Proc. IEEE Virtual Reality (VR 2005). Bonn, Germany. March 2005. pp. 209–216.

Benko, H., Ishak, E.W., Feiner, S. "Collaborative Mixed Reality Visualization of an Archaeological Excavation." The International Symposium on Mixed and Augmented Reality (ISMAR 2004), November 2004.

Benko, H., Ishak, E.W., Feiner, S. "VITA: Visual Interaction Tool for Archaeology (Demo)." The International Symposium on Mixed and Augmented Reality (ISMAR 2004), November 2004.

Benko, H., Ishak, E.W., Feiner, S. "VITA: Visual Interaction Tool for Archaeology (Demo)." The ACM Effective Telepresence Workshop (Multimedia 2004), October 15, 2004.

Allen, P., Feiner, S., Troccoli, A., Benko, H., Ishak, E., Smith, B. "Seeing into the Past: Creating a 3D Modeling Pipeline for Archaeological Visualization." International Symposium on 3D Data Processing Visualization and Transmission (3DPVT 2004), 2004.

Allen, P., Feiner, S., Meskell, L., Ross, K., Troccoli, A., Smith, B., Benko, H., Ishak, E., Conlon, J. "Digitally Modeling, Visualizing and Preserving Archaeological Sites" [poster], Joint Conference on Digital Libraries (JCDL 2004), Tucson, Arizona, June 7–11, 2004.

Benko, H., Ishak, E., Feiner, S. "Collaborative Visualization of an Archaeological Excavation." Workshop on Collaborative Virtual Reality and Visualization (CVRV 2003). Lake Tahoe, CA. October 26–28, 2003.

VITA System Images:

Two users collaborate simultaneously in VITA. While one user inspects the 3D virtual model of a ceramic vessel above the table, comparing it with the high-resolution image on the screen, the second user examines the 3D miniature terrain model next to the table. All AR images on this page were captured through a live tracked video see-through display.
User inspecting various collected ceramic objects in his "virtual pack."
Real vs. virtual: (a) Real image of a portion of the excavated structure taken at the site. (b) User exploring the same section of the site in VITA's life-size world mode; the other user, the 3D terrain model, and several bone finds in this section are visible.
The tabletop user interface.
The DT module’s user interface, showing information about several layers and objects selected from the Harris Matrix. The currently selected layer is painted cyan and is highlighted in the background Harris Matrix, thus visually showing the context.
User’s AR interaction devices.
An architecture diagram showing all our modules for one user. A separate AR/VR module is needed for each additional user.

Older Images:

Two users collaborate in our system using the MERL DiamondTouch multi-touch, multi-user table and a handheld computer. The virtual model of the ceramic pot is shown next to the table and the 3D terrain model is shown in the background.
The MERL DiamondTouch user interface (Version 1), showing users being tracked in the 3D environment, and information about the currently selected object.
User’s view of a section of the site in AR. An older 3D model of the terrain and some objects in this section are visible. Another user is visible on the right. Objects are represented either with a 3D model or with a picture if the model is not available.

This project is funded in part by NSF ITR grants IIS-0121239 and IIS-00-82961, and Office of Naval Research Contracts N00014-99-1-0394, N00014-99-1-0683, N00014-04-1-0005, and N00014-99-1-0249.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or any other organization supporting this work.