The article presents an analysis of the possibility of recognizing a speaker's emotions from the speech signal in the Polish language. In order to perform the experiments, a database containing speech recordings with emotional content was created. On its basis, features were extracted from the speech signals. The most important step was to determine which of the previously extracted features were the...
This paper proposes epoch parameters extracted from the LP (Linear Prediction) residual and the zero frequency filtered speech signal for recognising the emotions present in speech. The instant of glottal closure within a pitch period of the LP residual is known as an 'epoch'. The significant excitation of the vocal tract usually takes place at the instant of glottal closure. In this paper, the epoch parameters, namely strength...
This paper explores the Linear Prediction (LP) residual of the speech signal for characterizing the basic emotions. The emotions used in this study are anger, compassion, disgust, fear, happy, neutral, sarcastic and surprise. The LP residual is derived by inverse filtering of the speech signal, and the process is known as LP analysis. The LP residual mainly contains higher-order relations among the samples. For...
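The LP analysis described above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation under stated assumptions (the autocorrelation method with a small diagonal regularizer, a toy two-sinusoid test signal, and hypothetical function names); it is not the paper's own implementation.

```python
import numpy as np

def lp_coefficients(x, order):
    """Estimate LP coefficients with the autocorrelation method.
    Solves the normal equations R a = r directly (no Levinson recursion,
    for clarity). A tiny diagonal term keeps R numerically well-conditioned."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-6 * r[0] * np.eye(order)
    return np.linalg.solve(R, r[1:order + 1])

def lp_residual(x, order=10):
    """Inverse-filter the signal with its own LP coefficients:
    e[n] = x[n] - sum_k a_k * x[n-k]. The residual e is the LP residual."""
    a = lp_coefficients(x, order)
    pred = np.zeros_like(x)
    for k, ak in enumerate(a, start=1):
        pred[k:] += ak * x[:-k]
    return x - pred

# Toy usage: a synthetic "voiced" frame made of two harmonics.
# The LP model predicts sinusoids well, so the residual energy is small.
fs = 8000
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
e = lp_residual(x, order=8)
```

On real speech, the residual keeps what the short-term LP model cannot predict, which is why epoch and excitation cues are read from it rather than from the raw waveform.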
This paper analyzes the time, amplitude, pitch and formant features involved in four emotions: happiness, anger, surprise and sorrow. Through comparison with non-emotional calm speech signals, we summarize the distribution of emotional features across different emotional speech. Nine emotional features were extracted from emotional speech for recognizing emotion. We introduce...
Spectral and excitation features, commonly used in automatic emotion classification systems, parameterise different aspects of the speech signal. This paper groups these features as speech production cues, broad spectral measures and detailed spectral measures and looks at how they differ in their performance in both speaker dependent and speaker independent systems. The extent of speaker normalisation...
In this paper, automatic identification of emotional states from human speech is addressed. While several papers have been published in the literature on speech emotion recognition, the features used are taken or modified from those used for speech recognition purposes. However, not all features used for speech recognition are of equal importance for emotion recognition. This paper addresses this...
A new method of speech emotion recognition via Fuzzy Least Squares Support Vector Machines (FLSSVM) is proposed. Based on prosody and voice quality features extracted from emotional speech, FLSSVM is used to construct the optimum separating hyperplane to recognize the four main speech emotions in Chinese, including anger, happiness, sadness...
In this paper, an emotion classification system based on speech signals is presented. The classifier can identify the most common emotions, namely anger, neutral, happiness and fear. The algorithm computes a number of acoustic features which are fed into the classifier based on a pattern recognition approach. The classification system is of potential benefit for ambient intelligence in which the emotional...
In this paper, we propose an emotion recognition method using the facial images and speech signals. Six basic emotions including happiness, sadness, anger, surprise, fear and dislike are investigated. Facial expression recognition is performed by using the multi-resolution analysis based on the discrete wavelet. Here, we obtain the feature vectors through the LDA (linear discriminant analysis). On...