This paper presents an interactive emotion recognition system using support vector machines for human-robot interaction. The proposed emotion recognition algorithm is composed of the Haar wavelet transform, the principal component analysis (PCA) method, and a support vector machine (SVM). This algorithm is shown to be effective and useful in achieving both face identification and facial expression recognition. The...
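The first stage of the pipeline named above, a Haar wavelet transform, can be sketched in a few lines. This is a minimal single-level 2D version using an unnormalized averaging/differencing scheme, not the paper's exact implementation; in a pipeline like the one described, the approximation subband would typically be flattened and fed to PCA before SVM classification.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform (averaging variant).

    Splits the image into an approximation subband (LL) and three
    detail subbands (LH, HL, HH) by averaging/differencing each
    2x2 pixel block.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0   # approximation (local average)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

# Toy 4x4 "image": a linear intensity ramp.
face = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(face)
```

Applying the transform recursively to the LL subband yields a multi-level decomposition; only the coarse coefficients are usually kept as features, which is what makes the subsequent PCA step tractable.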
Emotion plays a significant role in human communication in daily life. With progress in human-machine interface technology, recent research has placed more emphasis on the recognition of emotional reactions. Compared with idealized experimental settings, online blog posts respond more directly to real-world events, and a huge resource of text-based emotion can be found on the World Wide...
In the field of interaction between humans and robots, emotions were disregarded for a long time. During the last few years, interest in emotion research in this area has been increasing constantly, since giving a robot the ability to react to the emotional state of the user can make the interaction more human-like and enhance the acceptance of robots. In this paper we investigate a method...
This paper describes a system that deploys acoustic and linguistic information from speech in order to decide whether an utterance carries negative or non-negative meaning. An earlier version of this system was submitted to the Interspeech-2009 Emotion Challenge evaluation. The speech data consist of short utterances of children's speech, and the proposed system is designed to detect anger in...
Emotion classification of text is very important in applications such as emotional text-to-speech (TTS) synthesis, human-computer interaction, etc. Past studies on emotion classification have focused on the writer's emotional state conveyed through the text. This research addresses the reader's emotions provoked by the text. The classification of documents into reader-emotion categories has novel applications...
In this paper, we use EEG signals to classify two emotions: happiness and sadness. These emotions are evoked by showing subjects pictures of smiling and crying facial expressions. We propose a frequency-band searching method to choose an optimal band into which the recorded EEG signal is filtered. We use common spatial patterns (CSP) and a linear SVM to classify these two emotions. To investigate the time...
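The CSP step named in the abstract above has a compact standard formulation: find spatial filters that maximize the variance of band-filtered EEG for one class while minimizing it for the other, via a generalized eigendecomposition of the class covariance matrices. The sketch below follows that textbook form (not necessarily the authors' exact implementation), with synthetic data standing in for real EEG.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2):
    """Common spatial patterns for two-class trials.

    X1, X2: arrays of shape (trials, channels, samples), one per class.
    Returns (W, lam): spatial filters as columns of W, sorted so the
    first filter favors class-1 variance and the last favors class 2;
    lam holds the corresponding generalized eigenvalues in (0, 1).
    """
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    c1, c2 = avg_cov(X1), avg_cov(X2)
    # Generalized symmetric eigenproblem: c1 w = lam (c1 + c2) w.
    lam, W = eigh(c1, c1 + c2)
    order = np.argsort(lam)[::-1]  # descending: class-1-dominant first
    return W[:, order], lam[order]

# Synthetic 4-channel data: class 1 has high variance on channel 0,
# class 2 on channel 1 (values invented for illustration).
rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 4, 100)) * np.array([3.0, 1, 1, 1])[:, None]
X2 = rng.normal(size=(20, 4, 100)) * np.array([1.0, 3, 1, 1])[:, None]
W, lam = csp_filters(X1, X2)
```

The log-variance of each trial projected through the first and last few filters is the usual feature vector handed to the linear SVM.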
Modeling the emotion evoked by natural scenes is a challenging issue. In this paper, we propose a novel scheme for analyzing the emotion reflected by a natural scene, considering the human emotional status. Based on the concept of the original GIST, we developed fuzzy-GIST to build the emotional feature space. According to the relationship between emotional factors and the characteristics of an image, L*C*H* color...
Modeling time-series data of varying length is important in different domains. There are two paradigms for modeling varying-length sequential data. Tasks such as speech recognition require modeling the temporal dynamics and the correlations among the features; hidden Markov models (HMM) are used for these tasks. In tasks such as speaker recognition, audio classification and speech emotion recognition,...
In order to achieve subject-independent facial feature detection and extraction and obtain robustness against illumination variation, a novel facial expression recognition method is presented in this paper, combining multi-step integral projection and the Gabor transform for feature detection with an SVM for classification. First, to avoid manually picked expression features, we propose...
In this paper, we describe an experimental investigation to evaluate the significance of different facial regions of a person for the task of gender classification. For this purpose we use a support vector machine (SVM) classifier on face images. We perform experiments using different facial regions at varying resolutions so that the significance of facial regions in this application...
In this paper, we propose a positive facial expression recognition method based on the image around the mouth. We also estimate the direction of a person's face to examine whether or not he/she is showing interest toward this side. In the proposed method, the position and direction of the face are estimated using a particle filter and the FAST operator. The image around the mouth is detected...
This paper investigates computational emotion recognition using multi-modal bio-potential signals. Two vital signs, pulse and skin conductance response, are measured to evaluate three emotional states: positive (relaxed and pleasant), negative (stressed and unpleasant), and normal. Psychological experiments using audio content to elicit emotions in subjects are undertaken to gather...
This paper proposes a novel method for facial expression recognition using independent component analysis of Gabor features. In the feature extraction stage, Gabor feature vectors are first extracted from a set of facial expression images; independent component analysis (ICA) is then used to extract the independent Gabor features. After that, the independent Gabor features are used to train an SVM...
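The Gabor features mentioned above come from convolving the face image with a bank of Gabor filters at several orientations and scales. A single filter of that bank can be sketched directly; the parameter names and values below (size, wavelength, envelope width, aspect ratio) are illustrative, since the snippet does not give the paper's filter-bank settings.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a 2D Gabor filter.

    size:  kernel width/height in pixels (odd)
    theta: orientation of the sinusoid, in radians
    lam:   wavelength of the sinusoidal carrier
    sigma: width of the Gaussian envelope
    gamma: spatial aspect ratio of the envelope
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

k = gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0)
```

Convolving the image with each kernel in the bank and concatenating the responses yields the high-dimensional Gabor feature vector that ICA then decorrelates.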
Gender classification is a challenging problem, which finds applications in speaker indexing, speaker recognition, speaker diarization, annotation and retrieval of multimedia databases, voice synthesis, smart human-computer interaction, biometrics, social robots, etc. Although it has been studied for more than thirty years, it is by no means a solved problem. Processing emotional speech in order to...
Emotion recognition is an important module in affective computing. It is usually studied based on facial and audio information with methodologies such as ANNs, fuzzy sets, SVMs, HMMs, etc. In this paper, a novel approach based on selective ensembles is proposed for emotion recognition. Simulation experiments show that the proposed method performs better than a single classifier, even...
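The snippet names the technique only as a "selective ensemble": pick a subset of trained classifiers and combine their outputs, rather than voting over all of them. The sketch below uses a deliberately simple selection criterion (top-k validation accuracy) and plain majority voting; both choices are illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_and_vote(preds, accs, k=3):
    """Selective-ensemble sketch: keep the k classifiers with the best
    validation accuracy, then combine their predictions by majority vote.

    preds: (n_classifiers, n_samples) array of integer class labels
    accs:  validation accuracy per classifier, shape (n_classifiers,)
    """
    keep = np.argsort(accs)[-k:]      # indices of the top-k classifiers
    votes = preds[keep]
    # Majority vote per sample (column-wise).
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy example: four classifiers, three samples, binary labels.
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 1],
                  [0, 0, 1]])
accs = np.array([0.9, 0.8, 0.6, 0.5])
labels = select_and_vote(preds, accs, k=3)
```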
It is important to properly select and extract speech emotion features, and to construct a suitable classifier, in order to improve the accuracy of speech emotion recognition. In this paper, cubic spline fitting is used to fit curves to the prosodic features extracted from speech signals, and the derivative parameters of these fitted curves are then obtained as features. We closely combined the...
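The spline-fitting step described above can be sketched with scipy's standard cubic spline. The pitch contour here is synthetic and the choice of summary statistics is an assumption for illustration; a real system would take its frame-level prosodic features (F0, energy, etc.) from a speech front end.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical F0 (pitch) contour sampled at 11 frame times
# (a smooth sinusoidal contour, invented for illustration).
t = np.linspace(0.0, 1.0, 11)
f0 = 120 + 20 * np.sin(2 * np.pi * t)

# Fit a cubic spline to the prosodic contour, then take its
# derivative curves as the "derivative parameter" features.
spline = CubicSpline(t, f0)
d1 = spline.derivative(1)   # first-derivative curve
d2 = spline.derivative(2)   # second-derivative curve

# Summary statistics of the fitted derivatives as candidate features.
dense = np.linspace(0.0, 1.0, 201)
features = [d1(dense).mean(), d1(dense).max(), d2(dense).mean()]
```

Fitting a smooth curve before differentiating avoids the noise amplification of frame-to-frame finite differences, which is the usual motivation for this kind of step.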
An approach to recognizing emotional responses during multimedia presentation using electroencephalogram (EEG) signals is proposed. The association between EEG signals and music-induced emotional responses was investigated along three factors: 1) the types of features, 2) the temporal resolutions of the features, and 3) the components of the EEG. The results showed that spectral power asymmetry...
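A spectral power asymmetry feature of the kind named above is typically the difference in log band power between a left/right electrode pair in a given frequency band. The sketch below computes it with a plain FFT periodogram on synthetic signals; the 8-13 Hz alpha band and the electrode pairing are common choices in this literature, used here as assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean FFT-periodogram power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 128.0
t = np.arange(0, 2.0, 1.0 / fs)
left = np.sin(2 * np.pi * 10 * t)          # strong 10 Hz (alpha) activity
right = 0.5 * np.sin(2 * np.pi * 10 * t)   # weaker alpha on the right
# Alpha-band (8-13 Hz) log-power asymmetry for the electrode pair.
asym = np.log(band_power(left, fs, 8, 13)) - np.log(band_power(right, fs, 8, 13))
```

Computing such an index over sliding windows gives the feature time series whose temporal resolution the study varies as its second factor.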
While driving, a good emotional state benefits vehicle safety. A good emotional state results in certain facial expressions, and vice versa. The facial expression is therefore used as a useful cue for monitoring the driver's status. Considering the characteristics of driving safety, the facial expressions of anger, happiness, sadness and fear are investigated. The main contribution of...
The paper presents the study and performance results of a system for emotion classification using the architecture of a Distributed Speech Recognition (DSR) system. The parameters were extracted by the ETSI Aurora eXtended front-end of a mobile terminal in compliance with the ETSI ES 202 211 V1.1.1 standard. On the basis of the time trends of these parameters, over 3800 statistical parameters...
Our software demo package consists of an implementation of an automatic human emotion recognition system. The system is bi-modal, fusing data on facial expressions with emotion extracted from the speech signal. We have integrated the Viola & Jones face detector (OpenCV), an active appearance model, AAM (AAM-API), for extracting the face shape, and support vector machines...