The Universal Networking Language (UNL) is a worldwide, machine-independent digital platform for human interaction, used for defining, recapitulating, amending, storing and disseminating knowledge or information among people of different affiliations. The theoretical and practical research associated with this interdisciplinary endeavor has facilitated a number of practical applications in most...
The Immersive Naval Officer Training System (INOTS) is a blended learning environment that merges traditional classroom instruction with a mixed reality training setting. INOTS supports the instruction, practice and assessment of interpersonal communication skills. The goal of INOTS is to provide a consistent training experience to supplement interpersonal skills instruction for Naval officer candidates...
This paper describes the design, implementation and evaluation of an interactive virtual human, Dr. Chestr (Computerized Host Encouraging Students to Review). Game show hosts project a unique personality that becomes the trademark of their respective game shows. Our aim is to create virtual humans that can interact naturally and spontaneously using speech, emotions and gesture. Dr. Chestr is our virtual...
Despite major advances within the affective computing research field, modelling, analysing, interpreting and responding to naturalistic human affective behaviour still remains a challenge for automated systems, as emotions are complex constructs with fuzzy boundaries and substantial individual variation in expression and experience. Thus, a small number of discrete categories (e.g., happiness...
Human-Robot peer-based teams are evolving from a far-off possibility into a reality. Human Performance Moderator Functions (HPMFs) can be used to predict human behavior by incorporating the effects of internal and external influences such as fatigue and workload. The applicability of HPMFs to human-robot teams is not proven. The presented research focuses on determining the applicability of workload...
In this paper, an investigation into the relationship between the send characteristics of a VoIP phone and speech recognition performance is presented. Experimental results under various additive noise environments show that, for improved speech recognition, the send characteristics should be adjusted differently from the adjustment based on human perception. The better send characteristics for speech...
This study analyzes the effect of stress on human and automatic stressed-speech processing tasks for speech collected from non-professional speakers. The database of 33 keywords is collected from fifteen speakers under five stress conditions, namely neutral, angry, happy, sad and Lombard. The first study examines the ability of humans and automatic speech processing to identify stress. The...
Natural language processing (NLP) is one of the main problems in interactive communication between user and computer. Rapid progress in IT technologies speeds up solutions to speech processing problems. Natural language computer systems need to transfer information correctly from computer databases into the natural language used by people, which, without knowledge of context, is a complicated process...
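The database-to-language transfer described in this abstract is often bootstrapped with template-based generation, where context selects how a record is verbalized. The sketch below is purely illustrative: the table row, contexts and templates are hypothetical, not taken from any system in the abstract.

```python
# Minimal template-based generation sketch: render a database row
# into natural language, with the wording chosen by context.
# All data and template names here are hypothetical examples.

# A row as it might come from a relational database.
row = {"city": "Krakow", "temp_c": 18, "condition": "cloudy"}

# Context determines the surface form: a terse alert vs. a full sentence.
templates = {
    "alert": "{city}: {temp_c} C, {condition}",
    "sentence": ("The weather in {city} is {condition} "
                 "with a temperature of {temp_c} degrees Celsius."),
}

def verbalize(row, context):
    """Render a database row into natural language for the given context."""
    return templates[context].format(**row)

print(verbalize(row, "sentence"))
```

Without the context key, the same row has no single correct rendering, which is a toy version of the context problem the abstract raises.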
The HuComTech project aims at developing a theory of multimodal human-computer interaction linking knowledge about human-human interaction to technological implementation. The purpose is to contribute to a more efficient and human-like human-computer interaction system by defining the main structural elements of communication, identifying their markers and defining their alignment with other markers...
This paper presents a method and a tool to collect data of initiation stages in conversational interaction in order to better understand how humans talk to machines. This is done by means of a platform-robot which both facilitates conversations between a robot and human interlocutors and records the interaction using multi-modal technology. The system is provided with a standard face detection algorithm...
The paper describes emergent verbal behaviour that arises when speech components are added to a robotics simulator. In the existing simulator the robot performs its activities silently. When speech synthesis is added, the first level of emergent verbal behaviour is that the robot produces spoken monologues giving a stream of simple explanations of its movements. When speech recognition is added, human-robot...
Does the emotional content of a robot's speech affect how people teach it? In this experiment, participants were asked to demonstrate several “dances” for a robot to learn. Participants moved their bodies in response to instructions displayed on a screen behind the robot. Meanwhile, the robot faced the participant and appeared to emulate the participant's movements. After each demonstration, the robot...
We present our method for learning object categories from the Internet using cues obtained through human-robot interaction. Such cues include an object model acquired by observation and the name of the object. Our learning approach emulates the natural learning process of children when they observe their environment, encounter unknown objects and ask adults the name of the object. Using this learning...
Gestures represent an important channel of human communication, and they are “co-expressive” with speech. For this reason, in human-machine interaction, automatic gesture classification can be a valuable help in a number of tasks, for example as a disambiguation aid in automatic speech recognition. Based on the hand gesture categorization proposed by D. McNeill in his reference works on gesture...
Speech motor control is one of the most complex forms of human motor control. Although face-to-face communication is an important aspect of actual vocal communication, its neural mechanisms have not been sufficiently investigated until now. This may stem from the inherent limitations of measurement methods for brain function, which are not suitable for face-to-face communication. In this study, we tried to construct...
In human-robot interaction, gender and internal state detection play an important role in enabling the robot to react in an appropriate manner. This research focuses on the important features to extract from a voice signal in order to construct successful gender and internal state detection systems, and shows the benefits of combining both systems on the total average recognition score. Moreover,...
We report on an experiment in which a human collaborates with a small, autonomous, humanoid robotic toy. The experiment demonstrates that the robot's use of two simple gestures, namely orienting its head toward the addressee when it speaks and raising its arm in the direction of objects it refers to, significantly improves the human's perception of the robot's interaction skills and quality as a collaborator.
We have created a preliminary inference engine for generating gaze acts based on extracting the social context from conversational structure and timing in human-robot dialog.
This paper presents a Text-To-Gesture (TTG) system which enables an embodied agent to generate proper gestures for a given text input. To transform text information into a sequence of gestural motions, it is important to analyze coverbal gestures linguistically. It is also important to reproduce natural motions synchronized with speech signals in a human-like fashion. This paper proposes a TTG...
The present study analyzes emotional expressiveness in two languages (German and Romanian). The emotional states are joy, fury, sadness and neutral tone. This paper aims to give an overview of what has been done in this domain based on formant analysis. The findings are briefly discussed.