Shared Façades: surface-embedded layout management for ad hoc collaboration using head-worn displays

Barrett Ens, Eyal Ofek, Neil Bruce, Pourang Irani

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review

2 Citations (Scopus)

Abstract

Collaboration is a necessary, everyday human activity, yet computing environments specifically designed to support collaborative tasks have typically been aimed at groups of experts in extensive, purpose-built environments. The cost constraints and design complexities of fully networked, multi-display environments have left everyday computer users in the lurch. However, the age of ubiquitous networking and wearable technologies has been accompanied by functional head-worn displays (HWDs), which are now capable of creating rich, interactive environments by overlaying virtual content onto real-world objects and surfaces. These immersive interfaces can be leveraged to transform the abundance of ordinary surfaces in our built environment into ad hoc collaborative multi-display environments. This paper introduces an approach for distributing virtual information displays among multiple users. We first describe a method for producing spatially-constant virtual window layouts in the context of a single user. This method applies a random walk algorithm to balance multiple constraints, such as spatial constancy of displayed information, visual saliency of the background, surface fit, occlusion, and the relative positions of multiple windows, to produce layouts that remain consistent across multiple environments while respecting the local geometric features of the surroundings. We then describe how this method can be generalized to include additional constraints from multiple users. For example, the algorithm can take the relative poses of two or more users into account, to prevent information from being occluded by objects in the environment from the perspective of each participant. In this paper, however, we focus on making the content spatially constant for one user, and discuss how the approach scales from a single user to multiple co-located users. We provide an initial validation of this approach, including quantitative and qualitative data from a user study.
We evaluate weighting schemes with contrasting emphasis on spatial constancy and visual saliency, to determine how easily a user can locate spatially-situated information within the restricted viewing field of current head-worn display technology. Results show that our balanced constraint-weighting scheme produces better results than schemes that consider spatial constancy or visual saliency alone, when applied to models of two real-world test environments. Finally, we discuss our plans for future work, which will apply our window layout method in collaborative environments, to assist wearable-technology users in engaging in ad hoc collaboration on everyday analytic tasks.
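The constraint-balancing random walk described in the abstract can be sketched as a greedy search over a weighted cost function. The constraint names, weight values, and the toy 1-D placement below are illustrative assumptions for exposition, not details taken from the chapter:

```python
import random

# Hypothetical constraint weights (illustrative values, not the chapter's).
WEIGHTS = {"spatial_constancy": 0.4, "saliency": 0.3,
           "surface_fit": 0.2, "occlusion": 0.1}

def cost(layout, terms, weights):
    """Weighted sum of per-constraint penalties (lower is better)."""
    return sum(weights[name] * term(layout) for name, term in terms.items())

def random_walk_layout(initial, terms, weights, perturb, steps=2000, seed=0):
    """Greedy random walk: keep a random perturbation only if it lowers cost."""
    rng = random.Random(seed)
    best, best_cost = initial, cost(initial, terms, weights)
    for _ in range(steps):
        candidate = perturb(best, rng)
        c = cost(candidate, terms, weights)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

# Toy usage: place one window along a 1-D wall, preferring its previous
# position (x = 5.0) while avoiding a visually salient band at [2, 4].
terms = {
    "spatial_constancy": lambda x: abs(x - 5.0),
    "saliency": lambda x: 1.0 if 2.0 <= x <= 4.0 else 0.0,
    "surface_fit": lambda x: 0.0,   # flat wall: every position fits equally
    "occlusion": lambda x: 0.0,     # nothing blocks this wall in the toy
}
x, c = random_walk_layout(0.0, terms, WEIGHTS,
                          lambda pos, rng: pos + rng.uniform(-1.0, 1.0))
```

Swapping in a different weight dictionary mirrors the contrasting schemes the evaluation compares: emphasizing `spatial_constancy` keeps windows where the user last left them, while emphasizing `saliency` steers them away from visually busy surfaces.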

Original language: English
Title of host publication: Collaboration Meets Interactive Spaces
Editors: Craig Anslow, Pedro Campos, Joaquim Jorge
Place of Publication: Cham, Switzerland
Publisher: Springer
Chapter: 8
Pages: 153-176
Number of pages: 24
ISBN (Electronic): 9783319458533
ISBN (Print): 9783319458526
DOIs
Publication status: Published - 1 Jan 2016
Externally published: Yes
