In the field of Human-Computer Interaction (HCI), human emotion recognition from the speech signal is an emerging research area. Speech is the most common means of communication among human beings. Speech consists of sentences, which can be further segmented into words. Words consist of phonemes, the primary building blocks of voiced speech. This paper presents a classification...
This paper presents multi-modal analysis of human-computer interactions based on automatic inference of expressions in speech. It describes an automatic inference system that recognizes aural expressions of emotions, complex mental states and expression mixtures. The implementation is based on the observation that different vocal features distinguish different expressions. The system was trained on...
Lifelike agents, a promising technology for human-computer interaction, have become a focus of the research community in recent years. In this paper, we endow lifelike agents with affect recognition capabilities. This paper makes three main contributions. Firstly, a hybrid of hidden Markov models (HMMs) and artificial neural networks (ANNs) is proposed to classify speech emotions. Secondly,...
To identify speech features that effectively represent different emotional styles in infant voices, nonlinear features based on the Teager Energy Operator are investigated. The neutral state and four emotional states (happiness, impatience, anger and fear) are classified from the infant voice database. MFCC extraction and HMM-based emotion classification are used as the baseline system to evaluate the emotional...
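The snippet above names the Teager Energy Operator (TEO) but does not show the paper's exact feature set. As background, a minimal sketch of the standard discrete TEO the nonlinear features build on (the frame- and feature-level processing around it is assumed, not taken from the abstract):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].

    Returns len(x) - 2 samples (the operator needs one neighbor on each side).
    """
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid x[n] = A * sin(omega * n), the identity
# sin(a-b) * sin(a+b) = sin^2(a) - sin^2(b) makes the TEO output exactly
# A^2 * sin^2(omega): it tracks amplitude and frequency jointly, which is
# why TEO-based features are sensitive to stress- and emotion-related
# changes in vocal effort.
fs = 8000                                   # assumed sampling rate, Hz
n = np.arange(2048)
tone = 0.5 * np.sin(2 * np.pi * 440 / fs * n)
psi = teager_energy(tone)
```

For a real signal, `psi` would typically be computed per frame and summarized (e.g. mean or variance per critical band) before classification.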
Because the automatic recognition of emotions remains a hot topic in research on adaptive human-computer interfaces, there is a large and growing variety of approaches and systems presented in numerous publications, each yielding different performance with different capabilities. During the development of two approaches to speech-based emotion recognition and having...
The involvement of emotions in dialogue design has attracted much interest in research on human-computer interfaces in recent years. In this article we pick up the idea of using Hidden Markov Models (HMMs) to recognize emotions from speech signals, and we describe the enhancements and optimizations of a speech-based emotion recognizer jointly operating with automatic speech recognition...
The following topics are discussed: computer vision and image analysis; face and human analysis; face recognition; character recognition and document analysis; clustering algorithms; signal, speech and image processing; signal coding and compression; document image enhancement; visualization and restoration; systems, robotics and applications; biometrics; biomedical imaging; fingerprints; range imaging...