Animated virtual human characters are a common feature in interactive graphical applications, such as computer and video games, online virtual worlds and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Though procedural and physics‐based animation provide a great amount of control over motion, they still look too unnatural to be of use in all but a few specific scenarios, which is why interactive applications nowadays still rely mainly on recorded and hand‐crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in the field of example‐based motion synthesis for interactive applications. We present methods for automated creation of supporting data structures for motion synthesis and describe how they can be employed at run‐time to generate motion that accurately accomplishes tasks specified by the AI or human user....
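The two operations described above, concatenation and parametric blending, can be sketched in a few lines. The sketch below is illustrative only and makes simplifying assumptions not stated in the abstract: clips are represented as NumPy arrays of per-frame joint angles, both clips are already time-aligned, and transitions use a simple linear cross-fade rather than quaternion interpolation.

```python
import numpy as np

def blend_clips(clip_a, clip_b, w):
    """Parametric blend of two time-aligned motion clips.

    clip_a, clip_b: arrays of shape (frames, joints) holding joint
    angles in radians (an illustrative representation).
    w: blend weight in [0, 1]; 0 reproduces clip_a, 1 reproduces clip_b.
    """
    return (1.0 - w) * clip_a + w * clip_b

def concatenate(clip_a, clip_b, overlap):
    """Concatenate two clips, cross-fading over `overlap` frames
    so the transition between actions is smooth rather than abrupt."""
    fade = np.linspace(0.0, 1.0, overlap)[:, None]
    transition = (1.0 - fade) * clip_a[-overlap:] + fade * clip_b[:overlap]
    return np.vstack([clip_a[:-overlap], transition, clip_b[overlap:]])
```

Real systems blend joint rotations with spherical interpolation and pick transition points from a precomputed motion graph; this linear version only shows the shape of the computation.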
In networked virtual environments, videoconferences or chatting over the Internet, users are often graphically represented by virtual characters. Modeling realistic virtual heads of users suitable for animation implies a heavy artistic effort and resource cost. This paper introduces a system that generates a 3D model of a real human head with little human intervention. The system receives five input...
Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and Web to mobile devices. High quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application...
In this paper we present initial results of the ongoing effort in building Victor, a virtual affective tutor system for online tutoring. Victor is an embodied conversational agent (ECA) integrated into an online tutoring system and capable of providing basic feedback about the learning process by talking to the student, gesturing and displaying facial expressions. The primary goal of this work is to...
Advances in transportation make the world more and more internationalized and increase the frequency of communication between people from different cultures. Differences in their conversations go beyond the languages they speak to the non-verbal behaviors they express while talking. To improve the abilities of embodied conversational agents (ECAs) while interacting with human users we are working...
In this paper we present a new method for mapping natural speech to lip-shape animation in real time. The speech signal, represented by MFCC vectors, is classified into viseme classes using neural networks. The topology of the neural networks is automatically configured using genetic algorithms. This eliminates the need for tedious manual neural network design by trial and error and considerably...
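The per-frame classification step the abstract describes (MFCC vector in, viseme class out) can be sketched as a small feed-forward network. Everything below is assumed for illustration: the coefficient count, the viseme inventory size, and the fixed one-hidden-layer topology (in the paper the topology is found by a genetic algorithm, and the weights would come from training, not random initialization).

```python
import numpy as np

rng = np.random.default_rng(0)

N_MFCC = 13      # MFCC coefficients per frame (a common choice; assumed here)
N_VISEMES = 14   # illustrative viseme inventory size

# Untrained weights for a one-hidden-layer classifier; a real system
# would load weights learned from labeled speech data.
W1 = rng.standard_normal((N_MFCC, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, N_VISEMES)) * 0.1
b2 = np.zeros(N_VISEMES)

def classify_frame(mfcc):
    """Map one MFCC feature vector to a viseme class index."""
    hidden = np.tanh(mfcc @ W1 + b1)   # hidden-layer activations
    logits = hidden @ W2 + b2          # one score per viseme class
    return int(np.argmax(logits))
```

At run time each audio frame's viseme index would drive the corresponding mouth shape of the character, which is what makes the per-frame formulation suitable for real-time animation.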