This paper attempts to recognize spontaneous agreement and disagreement based only on nonverbal multimodal cues. Related work has mainly used verbal and prosodic cues. We demonstrate that it is possible to correctly recognize agreement and disagreement without the use of verbal context (i.e. words, syntax). We propose to explicitly model the complex hidden dynamics of the multimodal cues using a...
Past research on automatic laughter classification and detection has focused mainly on audio-based approaches. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating information from the audio and video channels may lead to improved performance over single-modal approaches. Both the audio and visual channels consist of two streams (cues), facial expressions...
We present a bimodal information analysis system for automatic emotion recognition. Our approach is based on the analysis of video sequences which combines facial expressions observed visually with acoustic features to automatically recognize five universal emotion classes: anger, disgust, happiness, sadness and surprise. We address the challenges posed during the temporal analysis of the bimodal...
This paper is concerned with the problem of synthesizing an animated face driven by a new audio sequence that is not present in the previously recorded database. Because both future and past video frames influence the dynamics of the current frame, the dynamics of speech and facial expressions need to be learned to build an efficient speech-driven facial animation model. We have incorporated the features...