One of the most challenging research problems in the field of Human-Computer Interaction (HCI) is Speech Emotion Recognition (SER). Several factors affect the classification result. For example, the accuracy of detecting emotion depends on the type and number of emotions being classified, and the quality of the speech is also an important factor. Four different emotion types (anger, happy, natural,...
Speech emotion recognition is a challenging problem, with identifying efficient features being of particular concern. This paper has two components. First, it presents an empirical study that evaluated four feature reduction methods, chi-square, gain ratio, RELIEF-F, and kernel principal component analysis (KPCA), at the utterance level using a support vector machine (SVM) as the classifier. KPCA had the...
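The KPCA-plus-SVM pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn, with random synthetic vectors standing in for real utterance-level acoustic features; the dimensions, kernel, and component count are illustrative assumptions, not the paper's settings.

```python
# Sketch of the KPCA -> SVM pipeline: nonlinear feature reduction,
# then utterance-level classification. Synthetic stand-in features.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 "utterances" x 40 features, 4 emotion classes (toy data)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 4, size=200)
X += y[:, None] * 0.8  # shift class means so classes are separable

clf = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=10, kernel="rbf"),  # nonlinear reduction
    SVC(kernel="rbf"),                         # emotion classifier
)
clf.fit(X, y)
print(clf.score(X, y))
```

In a real system the chi-square, gain ratio, and RELIEF-F alternatives would be swapped in at the reduction step for comparison.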
In recent years, emotion recognition from speech has become an area of growing interest in human-computer interaction. Many different researchers have worked on emotion recognition from speech with different systems. This paper attempts emotion recognition from speech that is language independent. An emotional speech sample database is used for feature extraction. For feature extraction, MFCC and...
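For reference, the MFCC features named above are computed roughly as follows: frame the signal, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. This is a simplified sketch in NumPy/SciPy; frame sizes, filter count, and coefficient count are common illustrative choices, not the paper's parameters.

```python
# Simplified MFCC extraction sketch (framing -> power spectrum ->
# mel filterbank -> log -> DCT). Parameters are illustrative.
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    # Frame the signal and apply a Hamming window
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT to decorrelate -> cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    return dct(log_mel, type=2, axis=1, norm="ortho")[:, :n_ceps]

sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sr=sr)  # 1 s test tone
print(feats.shape)  # one 13-coefficient vector per frame
```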
We propose a neural-network training algorithm that is robust to data imbalance in classification. In the proposed algorithm, weights are introduced for training examples, effectively modifying the trajectory traversed in the parameter space during the learning process. Furthermore, the proposed algorithm reduces to normal stochastic gradient descent learning when the data are balanced. On the...
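The core idea, example weights that rescale each gradient step, can be sketched with a simple weighted-SGD logistic regression. This is a generic illustration of the weighting principle (inverse class frequency, normalized to mean 1), not the paper's exact algorithm; the data and hyperparameters are synthetic.

```python
# Example-weighted SGD: each example's gradient is scaled by the
# inverse frequency of its class. With balanced classes all weights
# are 1 and this reduces to plain SGD.
import numpy as np

rng = np.random.default_rng(1)
# Imbalanced binary data: 180 negatives, 20 positives
X = np.vstack([rng.normal(-1, 1, (180, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([0] * 180 + [1] * 20)

counts = np.bincount(y)
w_ex = (len(y) / (2 * counts))[y]  # per-example inverse-frequency weight

theta = np.zeros(3)  # [w1, w2, bias]
Xb = np.hstack([X, np.ones((len(y), 1))])
lr = 0.1
for epoch in range(200):
    for i in rng.permutation(len(y)):
        p = 1 / (1 + np.exp(-Xb[i] @ theta))
        theta -= lr * w_ex[i] * (p - y[i]) * Xb[i]  # weighted step

pred = (Xb @ theta > 0).astype(int)
print((pred == y).mean())
```

Without the weights, the minority class would pull the decision boundary far less during training.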
In this paper, we propose to use a kernel sparse representation based classifier (KSRC) for the task of speech emotion recognition. Further, the recognition performance of the KSRC is improved by imposing a group sparsity constraint. Speech utterances with the same emotion may have different durations, but frame sequence information does not play a crucial role in this task. Hence, in this work,...
Recent times have been marked by an increasing demand for more intelligent human-computer interfaces. By adding emotion recognition abilities, voice-based interfaces can be made more human-centric. As natural languages do not share similar acoustic-phonetic features and vary in the production of speech sounds, emotion recognition accuracy is affected by the user's language. This work...
In this paper, we propose a feature selection and representation combination method to generate discriminative features for speech emotion recognition. In the feature selection stage, a Multiple Kernel Learning (MKL) based strategy is used to obtain the optimal feature subset. Specifically, features selected at least n times across 10-fold cross validation are collected to build a new feature subset named...
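The fold-frequency idea above, keep only features that a selector chooses in at least n of the 10 folds, can be sketched with scikit-learn. Note the hedge: a simple univariate F-score selector stands in here for the paper's MKL-based criterion, and the data and threshold are synthetic.

```python
# Sketch of fold-frequency feature selection: run a selector per fold
# and keep features chosen in >= n folds. SelectKBest stands in for
# the paper's MKL-based selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)
X[:, :5] += y[:, None] * 1.5  # make the first 5 features informative

votes = np.zeros(X.shape[1], dtype=int)
for train_idx, _ in StratifiedKFold(n_splits=10).split(X, y):
    sel = SelectKBest(f_classif, k=8).fit(X[train_idx], y[train_idx])
    votes += sel.get_support()

n = 6                              # keep features selected in >= n folds
subset = np.flatnonzero(votes >= n)
print(subset)
```

The truly informative features collect votes in every fold, while noise features rarely cross the threshold.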
With the advent of technology, speech recognition is no longer a capability exclusive to humans. Voice-based interfaces can become most favorable for human-computer interaction if computers respond according to their users' emotional states. Emotion recognition from speech is a challenging problem, as the system has to interact with diverse user utterances. This paper presents an age-driven speech emotion...
Recognizing human emotions is an indispensable requirement for efficient human-machine interaction. Besides human facial expressions, speech is one of the latest challenges in the automatic recognition of emotions. Current approaches in automatic speaker recognition systems are partly to entirely based on Gaussian mixture models (GMM). In this research, we study and evaluate the combination of the GMM approach...
Speech emotion recognition has become an active topic in pattern recognition. Specifically, the support vector machine (SVM) is an effective classifier due to its use of a nonlinear mapping function, which can map the data into a high- or even infinite-dimensional feature space. However, a single kernel function might not be sufficient to describe the different properties of spontaneous speech emotion...
This paper presents the construction of binary Support Vector Machines and their significance for efficient Speech Emotion Recognition (SER). The German Emotional Speech Corpus EmoDB has been used in this study. Seven binary Support Vector Machines (SVMs) corresponding to each of the seven emotions in the EmoDB, namely Anger-Not Anger, Boredom-Not Boredom, Disgust-Not Disgust, Fear-Not Fear, Happy-Not Happy,...
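The per-emotion binary SVM setup above (Anger vs. Not Anger, and so on) is the standard one-vs-rest decomposition, which scikit-learn can sketch directly. Synthetic features stand in for acoustic features extracted from EmoDB, and the toy separation is an assumption for the demonstration.

```python
# One binary SVM per emotion (one-vs-rest); prediction picks the
# most confident of the seven binary machines.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

EMOTIONS = ["anger", "boredom", "disgust", "fear", "happy", "neutral", "sad"]
rng = np.random.default_rng(3)
X = rng.normal(size=(350, 20))       # stand-in acoustic features
y = rng.integers(0, 7, size=350)
X[:, 0] += y * 1.0                   # toy separation between emotions

ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(len(ovr.estimators_))          # seven binary SVMs, one per emotion
print(EMOTIONS[ovr.predict(X[:1])[0]])
```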
Speech plays an important part in human-computer interaction. As a major branch of speech processing, speech emotion recognition (SER) has drawn much attention from researchers. Highly discriminative features are of great importance in SER. However, emotion-specific features are commonly mixed with other features. In this paper, we introduce an approach to separate these two parts of features...
Although a number of features derived from linear speech production theory have been investigated as speech emotion indicators, recognition accuracy remains unsatisfactory for realistic applications. In this paper, Teager Mel, a novel speech emotion feature, is proposed based on the Teager Energy Operator (TEO) and the Mel perception characteristics. Owing to such advantages as being nonlinear and simple,...
Despite the existence of robust models to identify basic emotions, the ability to classify a large group of emotions reliably is yet to be developed. Hence, the objective of this paper is to develop an efficient technique to identify emotions with an accuracy comparable to humans. The array of emotions addressed in this paper goes far beyond what is present on the circumplex diagram. Due to the...
Speech emotion recognition is one of the recent challenges in speech processing and Human Computer Interaction (HCI), addressing various operational needs of real-world applications. Besides human facial expressions, speech has been proven to be one of the most valuable modalities for the automatic recognition of human emotions. Speech is a spontaneous medium of perceiving emotions which...
Recognition of human emotion from speech has become one of the most challenging and attractive fields of research in the speech processing area. The present study aimed to detect the valence of emotions using Non-Linear Dynamic features (NLDs). NLDs are extracted from the Discrete Cosine Transform (DCT) of descriptor contours computed from the Phase Space Reconstruction (PSR) of speech. These features are...
With the increasing demand for spoken language interfaces in human-computer interaction, automatic recognition of emotional states from human speech has become increasingly important. Unfortunately, obtaining human annotations of an emotion corpus to train a supervised system can be a laborious and costly effort. To address this, we explore active learning techniques with the objective of reducing...
In this paper, prosodic analysis of speech segments is performed to recognise emotions. The speech signal is segmented into words and syllables. Energy and pitch parameters are extracted from utterances, words and syllables separately to develop emotion recognition models. Eight emotions (anger, disgust, fear, happy, neutral, sad, sarcastic and surprise) of the simulated emotion speech corpus, IITKGP SESC...
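The two prosodic parameters named above, energy and pitch, are commonly computed per segment as short-time energy and an autocorrelation-based F0 estimate. The sketch below demonstrates both on a synthetic 220 Hz tone standing in for a word or syllable segment; it is a generic illustration, not the paper's extraction procedure.

```python
# Short-time energy and autocorrelation pitch estimate for one
# speech segment (here a synthetic 220 Hz tone as a stand-in).
import numpy as np

def frame_energy(frame):
    return float(np.sum(frame.astype(float) ** 2))

def pitch_autocorr(frame, sr, fmin=60, fmax=400):
    # Pick the autocorrelation peak inside the plausible pitch range
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(2048) / sr
segment = np.sin(2 * np.pi * 220 * t)   # one "syllable" at 220 Hz
e = frame_energy(segment)
f0 = pitch_autocorr(segment, sr)
print(round(e, 1), round(f0, 1))
```

Computed separately per utterance, word, and syllable, such contours form the inputs to the emotion models the abstract describes.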
To address the poor discrimination ability of the GMM model in emotion recognition, an algorithm based on a multi-output GMM and an SVM is proposed, which combines the advantages of both GMM and SVM. The multidimensional outputs of the GMM for a test speech sample are regarded as emotion features for the SVM. This method takes advantage of the statistical characterization properties of the GMM and the strong...
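The combination above can be sketched with scikit-learn: fit one GMM per emotion class, then feed the vector of per-class log-likelihoods of each sample to an SVM as its feature. Synthetic 2-D features stand in for real acoustic features, and the component count is an illustrative assumption.

```python
# GMM -> SVM sketch: per-class GMM log-likelihoods become the
# multidimensional feature vector classified by the SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(4)
classes = [0, 1, 2]
X = np.vstack([rng.normal(c * 3.0, 1.0, (80, 2)) for c in classes])
y = np.repeat(classes, 80)

# One GMM per emotion class, fit on that class's training samples
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
        for c in classes]

def gmm_scores(X):
    # Multi-output GMM scores: one log-likelihood per class model
    return np.column_stack([g.score_samples(X) for g in gmms])

svm = SVC(kernel="rbf").fit(gmm_scores(X), y)
print(svm.score(gmm_scores(X), y))
```

The SVM then learns a decision boundary in the log-likelihood space, which is where the claimed gain over a plain argmax of GMM scores comes from.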