Eye contact over video

Jesper Kjeldskov, Mikael B. Skov, Jacob H. Smedegård, Jeni Paay, Thomas S. Nielsen

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Video communication systems traditionally offer limited or no experience of eye contact due to the offset between the cameras and the screen. In response, we are experimenting with the use of multiple Kinect cameras to generate a 3D model of the user and then render a virtual camera angle that gives the user an experience of eye contact. In doing this, we use concepts from KinectFusion, such as a volumetric voxel data representation and GPU-accelerated ray tracing for viewpoint rendering. This achieves a detailed 3D model from a noisy source and delivers promising video output in terms of visual quality, lag, and frame rate, enabling the experience of eye contact and face gaze.
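
The rendering idea described in the abstract, keeping the fused depth data in a volumetric voxel (TSDF) representation and ray-casting it from a virtual camera placed at the screen position, can be sketched as follows. This is a minimal illustrative sketch and not the authors' implementation: the function name raycast_tsdf, the CPU-side NumPy ray march, and all camera and volume parameters are assumptions; a KinectFusion-style system would run this on the GPU with trilinear interpolation.

# Minimal sketch (assumption, not the paper's code): ray-cast a truncated
# signed distance function (TSDF) voxel volume from a virtual camera pose
# to obtain a depth image for a synthesized viewpoint.
import numpy as np

def raycast_tsdf(tsdf, voxel_size, cam_pose, K, img_size, near=0.3, far=3.0):
    """Render a depth map by marching rays through a TSDF volume.

    tsdf       : (X, Y, Z) float array of truncated signed distances
                 (positive in front of the surface, negative behind it).
    voxel_size : edge length of one voxel in metres.
    cam_pose   : 4x4 camera-to-world transform of the virtual camera.
    K          : 3x3 pinhole intrinsics of the virtual camera.
    img_size   : (height, width) of the output depth image.
    """
    h, w = img_size
    step = voxel_size                      # march roughly one voxel at a time
    depth = np.zeros((h, w), dtype=np.float32)

    def sample(p):
        """Nearest-neighbour TSDF lookup at world point p (NaN outside)."""
        idx = np.round(p / voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= tsdf.shape):
            return np.nan
        return tsdf[tuple(idx)]

    origin = cam_pose[:3, 3]
    for v in range(h):
        for u in range(w):
            # Back-project the pixel into a world-space ray direction.
            ray_cam = np.array([(u - K[0, 2]) / K[0, 0],
                                (v - K[1, 2]) / K[1, 1],
                                1.0])
            direction = cam_pose[:3, :3] @ ray_cam
            direction /= np.linalg.norm(direction)

            prev_d, prev_t, t = np.nan, near, near
            while t < far:
                d = sample(origin + t * direction)
                if prev_d > 0 and d <= 0:       # sign change: surface crossed
                    # Linearly interpolate the zero crossing along the ray.
                    depth[v, u] = prev_t + (t - prev_t) * prev_d / (prev_d - d)
                    break
                prev_d, prev_t = d, t
                t += step
    return depth

In a full pipeline of this kind, the depth maps from the physical Kinect cameras would first be fused into the voxel volume, and the synthesized view would be textured with colour data before transmission; those stages are omitted from the sketch.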

Original language: English
Title of host publication: CHI EA 2014
Subtitle of host publication: One of a CHInd - Extended Abstracts, 32nd Annual ACM Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 1561-1566
Number of pages: 6
ISBN (Print): 9781450324748
DOIs:
Publication status: Published - 2014
Externally published: Yes
Event: International Conference on Human Factors in Computing Systems 2014 - Metro Toronto Convention Centre, Toronto, Canada
Duration: 26 Apr 2014 - 1 May 2014
Conference number: 32nd
https://chi2014.acm.org/
https://dl.acm.org/doi/proceedings/10.1145/2556288 (Proceedings)

Conference

Conference: International Conference on Human Factors in Computing Systems 2014
Abbreviated title: CHI 2014
Country/Territory: Canada
City: Toronto
Period: 26/04/14 - 01/05/14
Internet address: https://chi2014.acm.org/

Keywords

  • Eye contact
  • Gaze
  • Kinect
  • Virtual view camera
