Welcome to the 10th IEEE International Conference on Automatic Face and Gesture Recognition (FG13) in Shanghai, China. The conference is the premier world conference on vision-based facial and body gesture modeling, analysis, and recognition. Since its first meeting in Zurich, the conference has been held nine times throughout the world. At its tenth meeting, the conference is held for the first time...
Laughter is a frequently occurring social signal and an important part of human non-verbal communication. However, it is often overlooked as a serious topic of scientific study. While the lack of research in this area is mostly due to laughter's non-serious nature, laughter is also a particularly difficult social signal to produce on demand in a convincing manner, which makes it a difficult topic for study...
In this paper, a 3D surface representation defined around several reference points taken on the surface is introduced. Such representation is obtained from the superposition of a set of indexed levels of geodesic curves and radial lines. A sampling criterion through a generalized version of the Shannon theorem allows the determination of the minimum resolution of both curves that describes faithfully...
There is substantial interest in detecting human behaviour that may reveal people with deliberate malicious intent who are engaging in deceit. Technology exists that can detect changes in facial movement patterns and thermal signatures on the face. However, the research community lacks sufficient data for further study. This project therefore aims to overcome the data deficiency...
This paper presents characterization of affect (valence and arousal) using the Magnetoencephalogram (MEG) brain signal. We attempt single-trial classification of movie and music videos with MEG responses extracted from seven participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses, (ii) clip arousal is better predicted than valence employing...
Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka “spontaneous”) facial expressions differ along several dimensions including complexity and timing, well-annotated...
Typical consumer media research requires the recruitment and coordination of hundreds of panelists and the use of relatively expensive equipment. In this work, we compare results from a legacy hardware dial mechanism for measuring media preference to those from automated facial analysis on two television programs, a sitcom and a drama series. We present an automated system for facial action detection...
Changes in eyebrow configuration, in combination with head gestures and other facial expressions, are used to signal essential grammatical information in signed languages. Motivated by the goal of improving the detection of non-manual grammatical markings in American Sign Language (ASL), we introduce a 2-level CRF method for recognition of the components of eyebrow and periodic head gestures, differentiating...
We propose a method to generate linguistically meaningful subunits in a fully automated fashion for sign language corpora. The ability to automate the process of subunit annotation has profound effects on the data available for training sign language recognition systems. The approach is based on the idea that subunits are shared among different signs. With sufficient data and knowledge of possible...
In this work, a novel approach, Dynamic Image-to-Class Warping (DICW), is proposed to deal with partially occluded face recognition. An image is partitioned into sub-patches, which are then concatenated in raster-scan order to form a sequence. A face consists of forehead, eyes, nose, mouth, and chin in a natural order, and this order does not change despite occlusion or small rotation. Thus, in this...
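The patch-sequencing step described above can be illustrated with a minimal sketch. The patch size and any overlap used by DICW are not given in this excerpt, so non-overlapping 3×3 patches are assumed here purely for demonstration:

```python
import numpy as np

def image_to_patch_sequence(img, patch_h, patch_w):
    """Partition a 2D image into non-overlapping patches and
    concatenate them in raster-scan order (left-to-right, top-to-bottom)."""
    H, W = img.shape
    seq = []
    for r in range(0, H - patch_h + 1, patch_h):
        for c in range(0, W - patch_w + 1, patch_w):
            seq.append(img[r:r + patch_h, c:c + patch_w].ravel())
    return np.array(seq)

face = np.arange(36).reshape(6, 6)  # toy 6x6 "image"
seq = image_to_patch_sequence(face, 3, 3)
print(seq.shape)  # (4, 9): four 3x3 patches, each flattened, in scan order
```

Because the patches keep their raster-scan order, the resulting sequence preserves the natural top-to-bottom arrangement of facial regions that the warping step relies on.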
In this paper, we present evidence for a temporal relationship between eye blinks and smile dynamics (smile onset and offset). Smiles and blinks occur with high frequency during social interaction, yet little is known about their temporal integration. To explore the temporal relationship between them, we used an Active Appearance Models algorithm to detect eye blinks in video sequences that contained...
This paper presents a method to recognize attentional behaviors from a head-mounted binocular eye tracker in triadic interactions. By taking advantage of the first-person view, we simultaneously estimate the first-person and third-person gaze. The first-person gaze is computed using an appearance-based method relying on local features. In parallel, head pose tracking allows determining the coarse...
In this paper, we propose to construct a deep architecture, AU-aware Deep Networks (AUDN), for facial expression recognition by explicitly exploiting the prior knowledge that the appearance variations caused by expression can be decomposed into a batch of local facial Action Units (AUs). The proposed AUDN is composed of three sequential modules: the first module consists of two layers, i.e., a convolution...
Many real-world face and gesture datasets are by nature imbalanced across classes. Conventional statistical learning models (e.g., SVM, HMM, CRF), however, are sensitive to imbalanced datasets. In this paper we show how an imbalanced dataset affects the performance of a standard learning algorithm, and propose a distribution-sensitive prior to deal with the imbalanced data problem. This prior analyzes...
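The abstract does not detail the proposed distribution-sensitive prior, but the imbalance problem it targets can be demonstrated with a standard baseline: reweighting training errors inversely to class frequency. This is a generic illustration on hypothetical toy data, not the paper's method:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical imbalanced toy data: 95 majority vs. 5 minority samples
X = np.vstack([rng.normal(0.0, 1.0, (95, 2)),
               rng.normal(2.5, 1.0, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

# A plain SVM tends to favour the majority class on imbalanced data
plain = SVC(kernel="linear").fit(X, y)
# class_weight="balanced" penalises minority-class errors more heavily
weighted = SVC(kernel="linear", class_weight="balanced").fit(X, y)

minority = (y == 1)
plain_recall = (plain.predict(X)[minority] == 1).mean()
weighted_recall = (weighted.predict(X)[minority] == 1).mean()
print(plain_recall, weighted_recall)
```

The reweighted model typically recovers a larger fraction of the minority class, which is the kind of sensitivity to the class distribution that the proposed prior aims to build into the learning algorithm.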
Micro-expressions are short, involuntary facial expressions which reveal hidden emotions. They are important for understanding deceitful human behavior, and psychologists have been studying them since the 1960s. Interest in them is currently growing in both academia and the media. However, while general facial expression recognition (FER) has been intensively studied for years in computer...
In this paper, we suggest the use of resonance-based decomposition of images for illumination-invariant face recognition. Although illumination is mostly considered the low-frequency part of an image, this low-frequency content may be of low- and/or high-resonance nature. We first assume that an input image can be considered a combination of illumination and reflectance. The images are then...
Expression recognition from non-frontal faces is a challenging research area with growing interest. In this paper, we explore discriminative learning of Gaussian Mixture Models for multi-view facial expression recognition. Adopting the bag-of-words (BoW) model from image categorization, our image descriptors are computed using Soft Vector Quantization based on the Gaussian Mixture Model. We conduct extensive experiments...
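Soft Vector Quantization over a GMM vocabulary encodes an image by averaging each component's posterior responsibility over the image's local descriptors, rather than hard-assigning each descriptor to a single visual word. A minimal sketch using scikit-learn, where random vectors stand in for real local image features and the vocabulary size K is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy local descriptors standing in for real image features
train_desc = rng.normal(size=(200, 8))

# Fit a GMM "visual vocabulary" with K components
K = 5
gmm = GaussianMixture(n_components=K, random_state=0).fit(train_desc)

def soft_bow(descriptors, gmm):
    """Soft Vector Quantization: average the posterior responsibility
    of each mixture component over an image's local descriptors."""
    resp = gmm.predict_proba(descriptors)  # shape (n_descriptors, K)
    return resp.mean(axis=0)               # K-dim soft BoW descriptor

image_desc = rng.normal(size=(40, 8))
h = soft_bow(image_desc, gmm)
print(h.shape)  # (5,): one weight per visual word, summing to 1
```

Because each row of the posterior sums to one, the resulting descriptor is a proper histogram; the soft assignment makes it less sensitive to descriptors that fall near the boundary between visual words than hard vector quantization.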
The aim of our study is to examine whether the overall organization of behavior differs when people report truthful vs. deceptive messages within the framework of the T-pattern model. We tested the hypothesis that the differences between liars and truth tellers will be greater under high cognitive load conditions. We argue that recalling stories in reverse order will produce cognitive overloading...
To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform well for intensity measurements. We compared two dimensionality reduction approaches - Principal Components Analysis with Large Margin Nearest Neighbor...
In this paper, a novel implicit video multi-emotion tagging method is proposed, which considers the relations between users' outer facial expressions and inner emotions, as well as the relations among multiple expressions. First, the audiences' expressions are inferred through a multi-expression recognition model, which consists of an image-driven expression measurement component and a Bayesian...