Face and body gesture analysis for multimodal HCI

Hatice Gunes, Massimo Piccardi, Tony Jan

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

2 Citations (Scopus)

Abstract

Humans use their faces, hands and body as an integral part of their communication with others. For computers to interact intelligently with human users, they should be able to recognize emotions by analyzing the human's affective state, physiology and behavior. Multimodal interfaces allow humans to interact with machines through multiple modalities such as speech, facial expression, gesture, and gaze. In this paper, we present an overview of research conducted on face and body gesture analysis and recognition. To make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, we present a vision-based framework that combines face and body gesture for multimodal HCI.
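
As a rough illustration of what combining the two modalities might look like, the sketch below shows weighted decision-level (late) fusion of two per-modality classifiers, one for facial expression and one for body gesture. The label set, fusion weights and classifier outputs are assumptions for illustration only and are not taken from the chapter, which may use a different fusion scheme.

    import numpy as np

    # Hypothetical emotion label set shared by both modality classifiers.
    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    def fuse_decisions(face_probs, body_probs, w_face=0.6, w_body=0.4):
        """Weighted decision-level fusion of per-modality class posteriors.

        face_probs / body_probs are probability vectors produced by independent
        face-expression and body-gesture classifiers (assumed here, not the
        chapter's actual models); the weights are illustrative.
        """
        face_probs = np.asarray(face_probs, dtype=float)
        body_probs = np.asarray(body_probs, dtype=float)
        fused = w_face * face_probs + w_body * body_probs
        return fused / fused.sum()   # renormalize to a probability vector

    # Example: each modality votes over the same emotion classes.
    face = [0.05, 0.05, 0.10, 0.60, 0.10, 0.10]   # face classifier output
    body = [0.10, 0.05, 0.05, 0.40, 0.10, 0.30]   # body-gesture classifier output
    fused = fuse_decisions(face, body)
    print(EMOTIONS[int(np.argmax(fused))])        # -> "happiness"

Decision-level fusion is shown only because it keeps the two recognizers independent; a framework of this kind could equally fuse at the feature level before classification.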

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Editors: Masood Masoodian, Steve Jones, Bill Rogers
Publisher: Springer Verlag
Pages: 583-588
Number of pages: 6
ISBN (Print): 3540223126, 9783540223122
DOIs
Publication status: Published - 2004
Externally published: Yes

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3101
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
