The Handbook of Multimodal-Multisensor Interfaces, Volume 2

Signal Processing, Architectures, and Detection of Emotion and Cognition

Sharon Oviatt (Editor), Bjoern Schuller (Editor), Phil Cohen (Editor), Daniel Sonntag (Editor), Gerasimos Potamianos (Editor), Antonio Kruger (Editor)

Research output: Book/Report › Edited Book › Other › peer-review

Abstract

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals.

This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep-learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users' states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.
Original language: English
Place of Publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Number of pages: 531
ISBN (Electronic): 9781970001709, 9781970001693
ISBN (Print): 9781970001716, 9781970001686
DOIs: 10.1145/3107990
Publication status: Published - 2019

Keywords

  • multimodal interfaces
  • multisensor interfaces
  • Human-Computer Interaction

Cite this

Oviatt, S., Schuller, B., Cohen, P., Sonntag, D., Potamianos, G., & Kruger, A. (Eds.) (2019). The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. New York, NY, USA: Association for Computing Machinery (ACM). https://doi.org/10.1145/3107990
Oviatt, Sharon (Editor) ; Schuller, Bjoern (Editor) ; Cohen, Phil (Editor) ; Sonntag, Daniel (Editor) ; Potamianos, Gerasimos (Editor) ; Kruger, Antonio (Editor). / The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. New York, NY, USA: Association for Computing Machinery (ACM), 2019. 531 p.
@book{ebc7b0bff5fd47d39c6664fc9aa48b4c,
title = "The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition",
abstract = "The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep-learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users' states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.",
keywords = "multimodal interfaces, multisensor interfaces, Human-Computer Interaction",
editor = "Sharon Oviatt and Bjoern Schuller and Phil Cohen and Daniel Sonntag and Gerasimos Potamianos and Antonio Kruger",
year = "2019",
doi = "10.1145/3107990",
language = "English",
isbn = "9781970001716",
publisher = "Association for Computing Machinery (ACM)",
address = "New York, NY, USA",

}

The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. / Oviatt, Sharon (Editor); Schuller, Bjoern (Editor); Cohen, Phil (Editor); Sonntag, Daniel (Editor); Potamianos, Gerasimos (Editor); Kruger, Antonio (Editor).

New York, NY, USA: Association for Computing Machinery (ACM), 2019. 531 p.


TY - BOOK

T1 - The Handbook of Multimodal-Multisensor Interfaces, Volume 2

T2 - Signal Processing, Architectures, and Detection of Emotion and Cognition

A2 - Oviatt, Sharon

A2 - Schuller, Bjoern

A2 - Cohen, Phil

A2 - Sonntag, Daniel

A2 - Potamianos, Gerasimos

A2 - Kruger, Antonio

PY - 2019

Y1 - 2019

N2 - The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep-learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users' states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.

AB - The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces that often include biosignals. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This second volume of the handbook begins with multimodal signal processing, architectures, and machine learning. It includes recent deep-learning approaches for processing multisensorial and multimodal user data and interaction, as well as context-sensitivity. A further highlight is processing of information about users' states and traits, an exciting emerging capability in next-generation user interfaces. These chapters discuss real-time multimodal analysis of emotion and social signals from various modalities and perception of affective expression by users. Further chapters discuss multimodal processing of cognitive state using behavioral and physiological signals to detect cognitive load, domain expertise, deception, and depression. This collection of chapters provides walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this rapidly expanding field. In the final section of this volume, experts exchange views on the timely and controversial challenge topic of multimodal deep learning. The discussion focuses on how multimodal-multisensor interfaces are most likely to advance human performance during the next decade.

KW - multimodal interfaces

KW - multisensor interfaces

KW - Human-Computer Interaction

U2 - 10.1145/3107990

DO - 10.1145/3107990

M3 - Edited Book

SN - 9781970001716

SN - 9781970001686

BT - The Handbook of Multimodal-Multisensor Interfaces, Volume 2

PB - Association for Computing Machinery (ACM)

CY - New York, NY, USA

ER -

Oviatt S, (ed.), Schuller B, (ed.), Cohen P, (ed.), Sonntag D, (ed.), Potamianos G, (ed.), Kruger A, (ed.). The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. New York, NY, USA: Association for Computing Machinery (ACM), 2019. 531 p. https://doi.org/10.1145/3107990