LEADME

LEADME: Low-latency Emotion and Affect Detection in a Multimodal and immersive Environment

Since emotion is involved in every aspect of human life, it has gained a great deal of interest and attention in many research fields, such as neurology, psychology, sociology and computer science. In computer science, many researchers have endeavored to move beyond the "user-centered" orientation of human-computer interaction systems and to develop instead "human-centered" HCIs. This term is more appropriate because it considers the overall human experience, which is embodied in human emotions and in interactions with machines. Nevertheless, current HCIs remain largely unable to interpret this affective information and cannot yet take action based on human emotion. Further work in this research area is therefore needed to equip machines with affective capabilities, which would make them more user-friendly, more sensitive to human beings, and more efficient.


The aim of this project is to conceive new methods for intelligent affective user interfaces, followed by their implementation and assessment in order to validate and evaluate their performance. The general architecture of the proposed system consists of a hardware/software platform enriched with audio and visual sensors, together with neurophysiological signals recorded via EEG as well as physiological signals. The system is multimodal not only in its sensing but also in its rendering: it reacts to user behaviour and experienced emotions, and additional information such as the user profile is extracted implicitly from the user's physiological signals. The most promising algorithms and methods under investigation in this project are integrated into a platform for an end-to-end assessment of their impact on the quality of experience offered. This is achieved by concentrating on two use cases: a multimodal immersive environment for entertainment and a multimodal immersive environment for gaming.
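
To make the multimodal architecture more concrete, the sketch below shows one possible shape for the fusion step, where per-modality affect estimates (audio, video, EEG, physiological) are combined into a single affective state passed on to the rendering layer. This is a minimal illustration only: the class names, the valence/arousal representation, and the confidence-weighted late fusion are assumptions for the example, not the methods actually developed in LEADME.

```python
# Hypothetical sketch of late fusion of multimodal affect estimates.
# All names and the fusion strategy are illustrative, not the LEADME implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModalityEstimate:
    """Per-modality affect estimate in the valence/arousal plane ([-1, 1] each)."""
    modality: str        # e.g. "audio", "video", "eeg", "physio"
    valence: float
    arousal: float
    confidence: float    # 0..1, used as a fusion weight


@dataclass
class AffectiveState:
    """Fused affective state delivered to the rendering / adaptation layer."""
    valence: float
    arousal: float
    contributing: List[str] = field(default_factory=list)


def fuse_estimates(estimates: List[ModalityEstimate]) -> AffectiveState:
    """Confidence-weighted late fusion of per-modality affect estimates."""
    if not estimates:
        return AffectiveState(valence=0.0, arousal=0.0)
    total = sum(e.confidence for e in estimates) or 1.0
    valence = sum(e.valence * e.confidence for e in estimates) / total
    arousal = sum(e.arousal * e.confidence for e in estimates) / total
    return AffectiveState(valence=valence, arousal=arousal,
                          contributing=[e.modality for e in estimates])


if __name__ == "__main__":
    # One "frame" of estimates from three of the sensing modalities.
    frame = [
        ModalityEstimate("audio", valence=0.3, arousal=0.6, confidence=0.8),
        ModalityEstimate("video", valence=0.1, arousal=0.4, confidence=0.6),
        ModalityEstimate("eeg", valence=-0.2, arousal=0.7, confidence=0.5),
    ]
    state = fuse_estimates(frame)
    print(f"fused valence={state.valence:.2f}, arousal={state.arousal:.2f}")
```

In a low-latency setting such as the one targeted here, a lightweight decision-level fusion of this kind is one plausible design choice, since each modality can be processed in parallel and combined per frame; the project itself may of course adopt different representations or fusion methods.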