A large part of research on expressive speech focuses on a fixed palette of "fully-blown" emotions, leaving a large set of interesting applications unaddressed. The work described here adopts a more generic approach to expressive speech analysis. Recognizing that affect is manifested in speech through a rich, diverse set of patterns involving various surface features, it seeks to reveal underlying...
Acoustic and articulatory cues of Mandarin Chinese vowels were analyzed for `Angry', `Sad', `Happy' and `Neutral' speech using EMA recordings from a male speaker. The results suggest that: (1) the F0 range and register of the four emotions can be grouped into two emotional dyads: `Angry' vs. `Happy', and `Sad' vs. `Neutral'; (2) consistent differences in intonation were found across emotions...
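The abstract compares emotions by F0 range and register but, being truncated, gives no formulas; a minimal sketch of how these two descriptors are commonly computed from a voiced-frame F0 contour (semitone scale assumed; function name hypothetical):

```python
import numpy as np

def f0_range_and_register(f0_contour):
    """F0 range (span) and register (overall level) of a contour.

    f0_contour: F0 values in Hz for the voiced frames of an utterance.
    Returns (range, register) in semitones re 1 Hz.
    """
    st = 12 * np.log2(np.asarray(f0_contour, dtype=float))  # Hz -> semitones
    return st.max() - st.min(), st.mean()
```

On this scale, an octave of pitch movement corresponds to a range of exactly 12 semitones, which makes ranges comparable across speakers with different baseline pitch.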
Emotional speech classification is a key problem in social interaction analysis. Traditional emotional speech classification methods are fully supervised and require large amounts of labeled data. In addition, several different feature sets are usually used to characterize emotional speech signals. We therefore propose a new co-training algorithm based on multi-view features. More specifically, we...
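The truncated abstract names the algorithm but not its details; a minimal co-training sketch under common assumptions (two feature views, a nearest-centroid learner standing in for the real per-view classifiers, one pseudo-label per view per round; all names hypothetical):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a minimal nearest-centroid classifier: one centroid per class."""
    classes = np.unique(y)
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, cents

def nearest_centroid_predict(X, classes, cents):
    """Predict labels; confidence is the margin between the two nearest centroids."""
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    ds = np.sort(d, axis=1)
    return classes[d.argmin(axis=1)], ds[:, 1] - ds[:, 0]

def co_train(views_l, y_l, views_u, rounds=3):
    """Co-training over two feature views.

    views_l / views_u: lists of two feature matrices (one per view),
    row-aligned across views. Each round, each view's learner
    pseudo-labels its single most confident unlabeled sample, which
    then joins the shared labeled pool for both views.
    """
    y = y_l.copy()
    Xl = [v.copy() for v in views_l]
    Xu = [v.copy() for v in views_u]
    for _ in range(rounds):
        for v in range(2):
            if len(Xu[0]) == 0:
                return Xl, y
            classes, cents = nearest_centroid_fit(Xl[v], y)
            pred, conf = nearest_centroid_predict(Xu[v], classes, cents)
            i = int(conf.argmax())
            y = np.append(y, pred[i])
            # Move the chosen sample (in BOTH views) to the labeled pool.
            for w in range(2):
                Xl[w] = np.vstack([Xl[w], Xu[w][i]])
                Xu[w] = np.delete(Xu[w], i, axis=0)
    return Xl, y
```

The key design point of co-training is that the two views are assumed to be independently informative, so a confident decision in one view can safely supervise the learner in the other.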
Automated analysis of human affective behavior has attracted increasing attention in recent years. With the research shift toward spontaneous behavior, many challenges have come to the surface, ranging from database collection strategies to the use of new feature sets (e.g., lexical cues in addition to prosodic features). The use of contextual information, however, is rarely addressed in the field of affect expression...
In this paper, we present the corpus collection procedure and propose an effective feature representation. We collected emotional speech from 50 male and 50 female speakers, and the corresponding statistics of the corpus are also reported. The emotional speech corpus was further processed manually for the feature extraction and classification experiments. After introducing the feature generation...
Speech contains nonverbal elements known as paralanguage, including voice quality, emotion and speaking style, as well as prosodic features such as rhythm, intonation and stress. The study of nonverbal communication has focused on face-to-face interaction, since the behaviors of communicators play a major role during social interaction and carry information between speakers. In...
Increasing effort has recently been devoted to research on emotional speech. Although we may sometimes be able to make a definite perceptual decision about an emotional state, emotion is actually a kind of cline in a large vector space: different emotions can be thought of as zones along an emotional vector. To resolve the ambiguity of emotion perception, the authors conducted an array of perception experiments...
We examined whether entertainment robots should use emotions. In an experiment, we presented jokes to participants to find out whether different emotions have different effects on their pleasure. We found that emotions do have an impact on users' perceptions when interacting with entertainment robots.
In this paper, we present the findings of our research, which aims to develop an emotions filter that can be added to an existing Malay text-to-speech system to produce output expressing happiness, anger, sadness and fear. The end goal is output that is as natural as possible, thus contributing to the enhancement of the existing system. The emotions filter was developed by manipulating...
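The manipulation rules themselves are cut off in the abstract; a minimal sketch of the general shape such a rule-based emotions filter takes, with hypothetical scale factors (the actual values for Malay would come from the paper's own analysis):

```python
import numpy as np

# Hypothetical per-emotion prosody rules: global F0 and duration scaling.
# These factors are illustrative assumptions, not the paper's values.
RULES = {
    "happiness": {"f0_scale": 1.15, "dur_scale": 0.90},
    "anger":     {"f0_scale": 1.10, "dur_scale": 0.85},
    "sadness":   {"f0_scale": 0.90, "dur_scale": 1.15},
    "fear":      {"f0_scale": 1.20, "dur_scale": 0.95},
}

def apply_emotion_filter(f0_contour, durations, emotion):
    """Scale a neutral utterance's F0 contour (Hz) and per-phone
    durations (seconds) according to the chosen emotion's rules,
    before passing them back to the TTS back end."""
    r = RULES[emotion]
    return (np.asarray(f0_contour) * r["f0_scale"],
            np.asarray(durations) * r["dur_scale"])
```

A filter of this kind leaves the existing synthesis pipeline untouched, which matches the abstract's stated goal of enhancing the system rather than replacing it.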
Emotional speech plays an important role in conveying the desired message. Emotions are manifested in the speech signal at all levels; in particular, they are significant at the suprasegmental (i.e., prosodic) level. In this paper, four emotional states (anger, compassion, happy and neutral) are characterized using prosodic features such as duration, intonation (pitch variation), and energy. The analysis...
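Two of the three feature families named here can be sketched with standard signal-processing steps; a minimal illustration assuming 16 kHz audio and a plain autocorrelation pitch estimator (a simplification of what real prosody toolkits do; names hypothetical):

```python
import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    """Frame-wise log energy (25 ms frames, 10 ms hop at 16 kHz)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.log(np.sum(f ** 2) + 1e-10) for f in frames])

def autocorr_f0(frame, fs=16000, fmin=75, fmax=400):
    """Crude F0 estimate for one voiced frame: pick the autocorrelation
    peak whose lag lies inside the plausible pitch-period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```

Duration, the third cue, is typically read from forced-alignment phone boundaries rather than computed from the waveform itself.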