Facial expression recognition is an active research area in the field of social signal processing. The goal is to distinguish human emotions from face images, a problem complicated by similar emotions, variation within an emotion, and subject independence. Existing research uses various methods to model the human face so as to fully describe facial expressions in face images. We consider variation analysis...
Speech emotion recognition is a challenging problem, with identifying efficient features being of particular concern. This paper has two components. First, it presents an empirical study that evaluated four feature reduction methods, chi-square, gain ratio, RELIEF-F, and kernel principal component analysis (KPCA), on utterance level using a support vector machine (SVM) as a classifier. KPCA had the...
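A minimal sketch of the kind of pipeline this abstract evaluates: kernel PCA reducing utterance-level features, followed by an SVM classifier. The data here is synthetic and the component count is illustrative; the study used real speech features and compared KPCA against chi-square, gain ratio, and RELIEF-F.

```python
# Synthetic stand-in for utterance-level speech features; the paper's
# own features and reduction settings are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce 40 raw features to 10 components with RBF-kernel KPCA,
# then classify the reduced representation with an SVM.
kpca = KernelPCA(n_components=10, kernel="rbf").fit(X_tr)
clf = SVC().fit(kpca.transform(X_tr), y_tr)
acc = clf.score(kpca.transform(X_te), y_te)
```

The same scaffold lets any of the four reduction methods be swapped in before the SVM, which is how such comparisons are usually run.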
This paper aims to classify human emotional states into three defined regions along each axis of the arousal-valence space: calm, medium aroused, and excited for arousal; unpleasant, neutral, and pleasant for valence. Given the relevance of peripheral physiological signals to the emotion recognition problem, our contribution uses the multimodal MAHNOB-HCI dataset. In this database, there are emotional...
Strong and weak stress were elicited by real-scene thesis defense and pre-defending presentation of the thesis work, and tri-axial accelerometer data were acquired and analyzed for their ability to recognize strong and weak stress. Twenty-six subjects (7 females and 19 males) participated in the data acquisition experiment. A support vector machine classifier obtained a correct rate of 92.31% in the...
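A minimal sketch (assumed, not the paper's exact feature set) of how tri-axial accelerometer data is typically turned into classifier input: sliding windows summarized by per-axis statistics plus signal magnitude.

```python
# Windowed features from a tri-axial accelerometer signal; the window
# length, hop, and statistics here are illustrative assumptions.
import numpy as np

def window_features(acc, win, hop):
    """acc: (n_samples, 3) tri-axial signal; returns one feature row
    per window: per-axis mean and std plus mean magnitude (7 values)."""
    rows = []
    for start in range(0, acc.shape[0] - win + 1, hop):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)  # instantaneous magnitude
        rows.append(np.concatenate([w.mean(0), w.std(0), [mag.mean()]]))
    return np.array(rows)

rng = np.random.default_rng(0)
acc = rng.standard_normal((1000, 3))      # placeholder signal
feats = window_features(acc, win=128, hop=64)
```

Feature rows like these would then be fed to an SVM of the kind the abstract reports.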
Emotions play a very important role in a person's daily life. Estimating human emotions from the electroencephalogram (EEG) has become a challenging research area. The representation of the electrical activity of the human brain is an important feature of the EEG signal. Emotion detection using EEG signals follows the steps of emotion elicitation via stimuli, i.e., collection or creation of the database, pre-processing,...
Recent times have been marked by increasing demand for more intelligent human-computer interfaces. By adding emotion recognition abilities, voice-based interfaces can be made more human-centric. As natural languages do not share similar acoustic-phonetic features and vary in the production of speech sounds, emotion recognition accuracy varies with the user's language. This work...
Music emotion recognition (MER) detects the emotional expression that a music clip inherently conveys to people. MER is helpful in music understanding, music retrieval, and other music-related applications. As the volume of online musical content has expanded rapidly in recent years, demand for retrieval by emotion has emerged. Determining the emotional content of music computationally is an interdisciplinary...
It has been established that it is possible to reveal human emotions using electroencephalogram (EEG) signals. Most studies used a wide variety of data sets and methods, therefore a comparison between the performances of their approaches is difficult. This paper reports a study on the effects of the number of electrode channels and frequency bands for emotion classification based on a database for...
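Studies of this kind usually represent each channel by its power in the standard EEG frequency bands. A minimal sketch (not the paper's pipeline) of per-channel band power from an FFT periodogram:

```python
# Per-channel EEG band power; the band edges follow common convention,
# and the signal here is random noise standing in for real EEG.
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs):
    """eeg: (n_channels, n_samples) array; returns (n_channels, n_bands)."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2 / eeg.shape[1]
    out = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        mask = (freqs >= lo) & (freqs < hi)
        out[:, j] = psd[:, mask].sum(axis=1)  # power within the band
    return out

fs = 128
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, fs * 4))  # 32 channels, 4 s of "EEG"
feats = band_powers(eeg, fs)
```

Dropping rows of `feats` then simulates fewer electrode channels, and dropping columns simulates fewer frequency bands, which is the comparison the abstract describes.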
With the advent of technology, speech recognition is no longer a capability of humans alone. Voice-based interfaces can become most favorable for human-computer interaction if computers respond according to their users' emotional states. Emotion recognition from speech is a challenging problem, as the system has to interact with diverse user utterances. This paper presents an age-driven speech emotion...
Emotion recognition builds on the position and motion of facial muscles. It contributes significantly to many fields, yet current approaches have not achieved good results. This paper proposes a new emotion recognition system based on facial expression images. We enrolled 20 subjects and had each subject pose seven different emotions: happiness, sadness, surprise, anger, disgust, fear, and neutral...
Affective computing has become a growing field of research due to its wide range of applications in human-computer interfaces. Emotion recognition is one of the state-of-the-art techniques for determining the current psychological state of a human being. Human emotions overlap heavily in nature and thus require an efficient feature-extractor and classifier assembly. This paper reports a novel...
This paper presents the construction of Binary Support Vector Machines and its significance for efficient Speech Emotion Recognition (SER). German Emotional Speech Corpus EmoDB has been used in this study. Seven Binary Support Vector Machines (SVMs) corresponding to each of the seven emotions in the EmoDB, namely Anger-Not Anger, Boredom-Not Boredom, Disgust-Not Disgust, Fear-Not Fear, Happy-Not Happy,...
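A minimal sketch of the one-vs-rest construction described: one binary SVM per EmoDB emotion ("Anger vs. Not Anger", and so on), with the final label taken from the most confident binary machine. Features and labels below are synthetic stand-ins, not EmoDB data.

```python
# One binary SVM per emotion; decision values pick the winning class.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "happiness", "neutral", "sadness"]

rng = np.random.default_rng(0)
X = rng.standard_normal((140, 20))        # placeholder features
y = rng.integers(0, 7, size=140)          # placeholder labels

# Train one "emotion vs. not emotion" SVM on relabelled data.
binary_svms = {}
for k, emo in enumerate(EMOTIONS):
    binary_svms[emo] = SVC().fit(X, (y == k).astype(int))

# Classify by the emotion whose binary SVM gives the largest
# decision value (most confident positive response).
scores = np.column_stack(
    [binary_svms[e].decision_function(X) for e in EMOTIONS])
pred = np.array(EMOTIONS)[scores.argmax(axis=1)]
```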
In recent years, enabling computer systems to recognize facial expressions and infer emotions from them in real time has become very important, since such information can be used in emerging applications such as video games, educational software, and computer-based tutoring for special-needs children for better human-computer interaction. However, real-time emotion recognition from video streams faces...
Despite the existence of a robust model to identify basic emotions, the ability to classify a large group of emotions with reliability is yet to be developed. Hence, the objective of this paper is to develop an efficient technique to identify emotions with an accuracy comparable to humans. The array of emotions addressed in this paper goes far beyond what is present on the circumplex diagram. Due to the...
Speech emotion recognition is one of the recent challenges in speech processing and Human-Computer Interaction (HCI), addressing various operational needs of real-world applications. Besides human facial expressions, speech has proven to be one of the most valuable modalities for automatic recognition of human emotions. Speech is a spontaneous medium for perceiving emotions, which...
In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based...
Emotion recognition has been an important research topic in the area of human-computer interaction (HCI) over the last decade; accurate emotion recognition has a wide range of applications in security, entertainment, and training. Emotion is expressed via facial muscle movements, speech, body and hand gestures, and various biological signals such as heart rate. This...
This paper proposes an approach to detect emotion from human speech by employing a majority voting technique over several machine learning techniques. The contribution of this work is twofold: first, it selects the features of speech that are most promising for classification; second, it uses a majority voting technique that selects the exact class of emotion. Here, the majority voting technique...
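Majority voting over several classifiers can be sketched in plain Python; the classifier names and per-utterance predictions below are illustrative, not the paper's.

```python
# Per-sample majority vote over the predictions of several classifiers.
from collections import Counter

def majority_vote(predictions):
    """predictions: dict classifier_name -> list of predicted labels.
    Returns the per-sample majority label (ties broken by first seen)."""
    per_sample = zip(*predictions.values())
    return [Counter(votes).most_common(1)[0][0] for votes in per_sample]

# Hypothetical predictions from three base classifiers on 4 utterances.
preds = {
    "svm":  ["happy", "angry", "sad", "happy"],
    "knn":  ["happy", "sad",   "sad", "neutral"],
    "tree": ["sad",   "angry", "sad", "happy"],
}
final = majority_vote(preds)  # -> ["happy", "angry", "sad", "happy"]
```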
We propose in this paper, Face2Mus, a mobile application that streams music from online radio stations after identifying the user's emotions, without interfering with the device's usage. Face2Mus streams songs from online radio stations and classifies them into emotion classes based on audio features using an energy aware support vector machine (SVM) classifier. In parallel, the application captures...
Emotion recognition is an important area of affective computing and has many potential applications. This paper proposes a combinational model to compute the percentage of the different emotions jointly present in a given speech input. This model is a weighted combination of classifier models such as a Neural Network, k-Nearest Neighbors, Gaussian Mixture Model, Naïve Bayesian Classifier, and Support Vector...
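A weighted combination of per-classifier emotion probabilities, yielding a percentage for each emotion, can be sketched as below. The weights and probability vectors are illustrative assumptions, not the paper's trained values.

```python
# Weighted average of posterior probability vectors from base
# classifiers; the result sums to 1 and reads as emotion percentages.
import numpy as np

EMOTIONS = ["anger", "happiness", "neutrality", "sadness"]

# Hypothetical posteriors for one utterance, one row per classifier.
probs = np.array([
    [0.6, 0.1, 0.2, 0.1],   # e.g. an SVM
    [0.5, 0.2, 0.2, 0.1],   # e.g. a GMM
    [0.4, 0.3, 0.2, 0.1],   # e.g. a soft k-NN vote
])
weights = np.array([0.5, 0.3, 0.2])  # per-classifier weights, sum to 1

combined = weights @ probs           # weighted average, still sums to 1
percent = dict(zip(EMOTIONS, 100 * combined))
```

Because the weights sum to one and each row sums to one, the combined vector is itself a valid distribution over the emotions.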