Emotional facial expression transfer involves sequence-to-sequence mappings from a neutral facial expression to another emotional facial expression, which is a well-known problem in computer graphics. In the graphics community, the methods currently considered are typically linear (e.g., methods based on blendshape mapping), and the dynamical aspects of the facial motion itself are not taken into account...
A robot's feedback is a powerful means of establishing smooth human-robot interaction (HRI). We report on a user study assessing the applicability of a screen in a human-robot game-playing scenario. The screen was deployed to compensate for the expressive shortcomings of a social robot caused by its mechanical limitations, namely non-movable facial features. The participants played Rock-Paper-Scissors with...
The aim of this work is to investigate how the shortcomings of a social robot due to its expressive limitations may be overcome by multimodal feedback. An experiment is proposed in which a robot that cannot produce facial expressions plays a game of rock, paper, scissors with people. A screen built into the torso of the robot is used to compensate for these limitations in expressiveness and provide...
In this paper, a novel vision system is proposed to estimate people's attention from rich visual cues, enabling a social robot to perform natural interactions with multiple participants in public environments. The vision detection and recognition modules include multi-person detection and tracking, upper-body pose recognition, face and gaze detection, lip motion analysis for speaking recognition, and facial...