We propose a method for automatic emotion recognition as part of the FERA 2011 competition. The system extracts pyramid of histograms of oriented gradients (PHOG) and local phase quantisation (LPQ) features to encode shape and appearance information. For selecting the key frames, K-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking...
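The key-frame selection step described in this abstract can be sketched as follows; the function name, the toy 10-dimensional shape vectors, and the choice of three clusters are illustrative assumptions, not details from the paper. Each cluster's key frame is taken to be the frame nearest its centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(shape_vectors, k=3, seed=0):
    """Cluster per-frame normalised shape vectors (n_frames, n_dims)
    with K-means and return one representative frame index per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(shape_vectors)
    key_frames = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # Pick the member frame closest to the cluster centre.
        dists = np.linalg.norm(shape_vectors[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(int(members[np.argmin(dists)]))
    return sorted(key_frames)

# Toy example: 30 frames of a 10-dimensional shape vector.
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 10))
print(select_key_frames(frames, k=3))
```

In a real pipeline the input would be the CLM-tracked landmark coordinates per frame, normalised for pose and scale.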
Online Social Networks are so popular nowadays that they are a major component of an individual's social interaction. They are also emotionally-rich environments where close friends share their emotions, feelings and thoughts. In this paper, a new framework is proposed for characterizing emotional interactions in social networks, and then using these characteristics to distinguish friends from acquaintances...
In recent years, feature extraction methods have achieved notable success in pattern recognition and computer vision. They not only extract features useful for classification but also reduce the dimensionality of pattern samples. In this paper, we propose orthogonal supervised spectral discriminant analysis (OSSDA), which is motivated by marginal Fisher analysis (MFA) and spectral clustering. It puts different weights...
Emotional variation in speech degrades the performance of speaker recognition systems. Existing speaker modeling disregards the mismatch in emotional state between training and testing speech, and in practical applications such systems suffer from emotion recognition errors. We propose an alternative approach that exploits prosodic differences to cluster affective speech, and then builds...
Facial expressions are facial changes in response to a person's internal emotional states, intentions, or social communications. In this paper, we address the recognition of facial action units, i.e., subtle changes in facial expression, as well as emotion-specified expressions. Our automatic facial expression analysis system includes face detection, facial component extraction, tracking and representation,...
In this paper we present a facial expression recognition model using fuzzy techniques to further detect human behaviour in e-business. In this model, a fuzzy clustering method is proposed to classify images after the features used as inputs to the classification system have been extracted. The output of the model is one of the preselected emotion categories. The motivation for the model is...
Electrocardiography (ECG) data acquisition, data preprocessing, feature extraction, and emotion recognition based on ECG feature classification were effectively implemented. Joyful and sad movies were selected and presented to 154 subjects, whose ECG data were recorded during the movie presentations. The automatic location of the QRS complex, which is of critical importance for ECG feature extraction by the...
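QRS location of the kind mentioned above can be approximated by simple peak picking; the sketch below is not the paper's method, but a minimal illustration using `scipy.signal.find_peaks` on a synthetic ECG-like signal, with an assumed 250 Hz sampling rate and a 0.4 s refractory period between R peaks.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                      # sampling rate in Hz (assumed value)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal

# Synthetic ECG-like trace: one narrow Gaussian "R wave" per second
# plus a little Gaussian noise (a stand-in for a real recording).
beats = np.arange(0.5, 10, 1.0)
ecg = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beats)
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)

# Peak picking: require at least half the maximum amplitude and a
# 0.4 s refractory distance between successive R peaks.
r_peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(), distance=int(0.4 * fs))
heart_rate = 60 * fs / np.mean(np.diff(r_peaks))  # beats per minute
print(len(r_peaks), round(heart_rate, 1))
```

Real QRS detectors add band-pass filtering and adaptive thresholds, but the refractory-distance constraint shown here is the core idea that prevents double-counting a single beat.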
Facial expression recognition is an active research area that finds a potential application in human emotion analysis. This work presents an efficient approach of facial expression features clustering based on Support Vector Clustering (SVC). Common approaches to facial expression features clustering are designed considering two main parts: (1) features extraction, and (2) features clustering. In...
This paper puts forward a face recognition model combining global and local features, adapted to small-scale face recognition in embedded systems. The main contribution of this paper is the idea of performing identification with a double-weight scheme. First, features are extracted from the whole face image and its subregions; then, a different training set is constructed for each feature...
We propose a new method of recognizing emotional factors from human gestures by analyzing motion capture (MoCap) data. It features multi-factorization processing combined with HMM recognition. The multi-factorization processing factorizes MoCap data into a third-order tensor that consists of spatial, statistical, and frequency-spatial components. This multi-factorization localizes the data in the...
This paper presents a system for automatic emotion detection from music stored in MIDI-format files. First, the piece of music is divided into independent segments that potentially represent different emotional states, using a segmentation method. The most important step is feature extraction from the music data. On this basis, similar emotional parts are grouped by clustering...
Equable principal component analysis (EPCA) is a powerful feature extraction technique. It can reduce a large set of correlated variables to a smaller number of uncorrelated components. The support vector machine (SVM) is a novel pattern classification approach that is very efficient at solving classification problems that are not linearly separable. This paper presents a method of expression recognition...
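As an illustration of this kind of pipeline, the sketch below chains standard PCA (not the paper's EPCA variant) with an RBF-kernel SVM on synthetic stand-in data; all dataset parameters are assumptions, chosen to mimic correlated feature vectors with a few expression classes.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for expression feature vectors: 50 correlated
# features, 3 expression classes (all parameters are assumed).
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_redundant=10, n_classes=3, n_clusters_per_class=1,
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA compresses the correlated variables into 10 uncorrelated components,
# then an RBF-kernel SVM handles the non-linearly-separable classification.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Wrapping both stages in one pipeline ensures the PCA projection is fitted only on training data, so the test accuracy is not optimistically biased.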
In this paper, we propose a novel framework for video-based facial expression recognition which can handle data with various time resolutions, including a single frame. We first use Haar-like features to represent facial appearance, due to their simplicity and effectiveness. Then we perform K-means clustering on the facial appearance features to explore the intrinsic temporal patterns of each...
In this paper, lip and eye features are applied to classify human emotion by fitting a set of irregular and regular ellipses with a genetic algorithm (GA). A South East Asian face is considered in this study. The parameters relating the face to the emotions are, in either case, entirely different. All six universally accepted emotions and one neutral state are considered for classification. The...
Human communication is saturated with emotional context that aids in interpreting a speaker's mental state. Speech analysis research involving the classification of emotional states has focused primarily on prosodic (e.g., pitch, energy, speaking rate) and/or spectral (e.g., formants) features. Glottal waveform features, while receiving less attention (due primarily to the difficulty of feature...