It is our great pleasure to welcome you to the 2nd Facial Expression Recognition and Analysis challenge and workshop (FERA 2015), held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2015). It's been four years since the first facial expression recognition challenge (FERA 2011), and we're excited to come back to challenge researchers worldwide...
Ground truth annotation of the occurrence and intensity of FACS Action Unit (AU) activation requires a great amount of attention. Efforts towards a common platform for AU evaluation have culminated in the FG 2015 Facial Expression Recognition and Analysis challenge (FERA 2015), in which participants are invited to estimate AU occurrence and intensity on a common benchmark dataset. Conventional...
Current approaches to the automatic analysis of facial Action Units (AUs) differ in the way the face appearance is represented. Some works represent the whole face, dividing the bounding box region into a regular grid and applying a feature descriptor to each subpatch. Alternatively, it is common to consider local patches around the facial landmarks and apply appearance descriptors to each of...
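To make the distinction concrete, the following is a minimal Python (NumPy-only) sketch of the first, grid-based representation. The gradient-orientation histogram used here is a hypothetical stand-in for whatever appearance descriptor (HOG, LBP, etc.) a given method applies to each subpatch, and the grid size is an assumed example value.

import numpy as np

def cell_descriptor(patch, n_bins=8):
    # Gradient-orientation histogram: a simple stand-in appearance descriptor.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi              # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-8)

def grid_face_descriptor(face, grid=(4, 4)):
    # Divide the face bounding box into a regular grid and describe each subpatch.
    h, w = face.shape
    rows, cols = grid
    feats = [cell_descriptor(face[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols])
             for r in range(rows) for c in range(cols)]
    return np.concatenate(feats)                  # one vector for the whole face

face = np.random.rand(64, 64)                     # placeholder for an aligned face crop
print(grid_face_descriptor(face).shape)           # -> (128,) = 4 * 4 cells * 8 bins

The landmark-based alternative would instead crop a small window around each detected facial landmark and apply the same per-patch descriptor to each window.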
Automatic detection of Facial Action Units (AUs) is crucial for facial analysis systems. Due to large individual differences, the performance of AU classifiers depends largely on the training data and on the ability to estimate the facial expression of a neutral face. In this paper, we present a real-time Facial Action Unit intensity estimation and occurrence detection system based on appearance (Histograms...
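One common way to handle the neutral-face dependency mentioned above, sketched here purely as an illustration (the abstract is truncated before the authors' own approach is described), is to subtract a person-specific neutral estimate from the per-frame appearance features. Using the per-dimension median over a video as the neutral proxy is an assumed choice, not necessarily the paper's method.

import numpy as np

def neutral_normalise(frame_feats):
    # frame_feats: (n_frames, n_dims) appearance features for one subject's video.
    # In spontaneous data most frames are near-neutral, so the per-dimension
    # median is a plausible (assumed) estimate of the neutral-face appearance.
    neutral = np.median(frame_feats, axis=0)
    return frame_feats - neutral                  # expression-relative features

feats = np.random.rand(300, 128)                  # placeholder per-frame features
normalised = neutral_normalise(feats)             # input to intensity/occurrence models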
The problem of learning several related tasks has recently been addressed with success by the so-called multi-task formulation, which discovers underlying structure shared between tasks. Metric Learning for Kernel Regression (MLKR) aims to find the optimal linear subspace for reducing the squared error of a Nadaraya-Watson estimator. In this paper, we propose two multi-task extensions of MLKR. The...
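As a reminder of the quantity MLKR optimises, here is a minimal sketch of the Nadaraya-Watson estimator under a learned linear map A; a low-rank A gives the linear subspace mentioned above. Learning A itself, and the two multi-task extensions proposed in the paper, are omitted.

import numpy as np

def nw_predict(A, X_train, y_train, X_test):
    # Nadaraya-Watson regression with the metric d(x, x')^2 = ||A x - A x'||^2
    # that MLKR learns; a low-rank A (r x d) projects the data into an
    # r-dimensional subspace.
    Z_tr, Z_te = X_train @ A.T, X_test @ A.T
    d2 = ((Z_te[:, None, :] - Z_tr[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2)                               # Gaussian kernel on the learned metric
    return (K @ y_train) / (K.sum(axis=1) + 1e-12)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 10)), rng.normal(size=100)
A = rng.normal(size=(3, 10))                      # assumed rank-3 projection
print(nw_predict(A, X, y, X[:5]))

MLKR learns A by gradient descent on the leave-one-out squared error of this estimator on the training set.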
This article describes a system for participation in the Facial Expression Recognition and Analysis (FERA2015) sub-challenge for spontaneous action unit occurrence detection. The problem of AU detection is by nature a multi-label classification problem, a fact overlooked by most existing work. The correlation information between AUs has the potential to increase the detection accuracy...
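Purely as an illustration of exploiting inter-AU correlations (the abstract is truncated before the authors' own method is described), a classifier chain feeds earlier AU predictions to later per-AU classifiers. The sketch below uses scikit-learn with random placeholder data and an assumed feature dimensionality and AU count.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.random((200, 32))                         # placeholder per-frame features
Y = (rng.random((200, 5)) > 0.7).astype(int)      # placeholder binary labels for 5 AUs

# Each classifier in the chain sees the features plus the predictions for the
# previous AUs, so correlations between AUs can inform later decisions --
# one generic alternative to independent per-AU classifiers.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X, Y)
print(chain.predict(X[:3]))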
Automatic facial expression recognition has developed over the past two decades. The recognition of posed facial expressions and the detection of Action Units (AUs) of facial expression have already made great progress. More recently, the automatic estimation of the variation of facial expression, either in terms of the intensities of AUs or in terms of the values of dimensional emotions, has emerged in...
Despite efforts towards evaluation standards in facial expression analysis (e.g. FERA 2011), there is a need for up-to-date standardised evaluation procedures, focusing in particular on current challenges in the field. One of the challenges that is actively being addressed is the automatic estimation of expression intensities. To continue to provide a standardisation platform and to help the field...