We introduce a facial mimicry system, which combines facial expression analysis and synthesis on a robot, utilizing the facial action coding system. The activation of action units on a user's face is automatically extracted from a video stream and mapped to the robot, thus mirroring the facial expression. As a novel approach, a user study quantifies the congruence of the initial human facial expression...
Everyday human communication relies on a large number of different mechanisms, such as spoken language, facial expressions, body pose and gestures, allowing humans to pass large amounts of information in a short time. In contrast, traditional human-machine communication is often unintuitive and requires specifically trained personnel. In this paper, we present a real-time capable framework...
An air-ground multi-robot system is designed for the purpose of applying bio-inspired sensor-motor modeling on technical systems. It consists of a flying mini-quadrotor equipped with inertial and visual sensors, and a wheeled mini-robot equipped with active markers. The system modules, a multi-sensory pose/motion estimation approach, and the closed-loop control of the flying quadrotor are described in...
In this paper an autonomous switching between two basic attention selection mechanisms, top-down and bottom-up, is proposed, substituting manual switching. This approach fills the gap in object search using conventional top-down biased bottom-up attention selection: the latter fails if a group of objects is searched whose appearances cannot be uniquely described by low-level features used in...
The goal of the autonomous city explorer (ACE) is to navigate autonomously, efficiently and safely in an unpredictable and unstructured urban environment. To achieve this aim, accurate localization is one of the preconditions. Due to the characteristics of our navigation environment, an elaborate visual odometry system is proposed to estimate the current position and orientation of the ACE platform...
Emotional expressions are considered to be important for robotic and virtual agents to improve nonverbal communication in human-machine interaction. In this paper we focus on a subset of emotional expressions, namely the smile and its variations. The proposed concept for generating artificial smile sequences is based on the system-theoretic psychological model of smiling, which is based on the Zurich...
In the project Autonomous City Explorer, an interactive robot is designed to find its way to a given destination in unknown urban environments by interacting with pedestrians. Considering applications in a human dominated environment, the robot can be sent to a destination by tracking a landmark selected by users and described by 2D image features. To achieve a natural landmark selection from the...
This work proposes a Piecewise Linear (PL) system to model transitions of affect. Parameters of the model are identified based on a psychological experiment. The PL system describes affective reactions of humans to an external affective stimulus depending on the previous affective state. Results of the statistical analysis support that the previous affective state significantly influences the current...
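The core idea of such a piecewise linear transition model can be sketched in a few lines: the reaction to a stimulus is linear in the previous state, but the linear regime depends on which region that state lies in. The function name, the single threshold, and all coefficient values below are illustrative assumptions, not the parameters identified in the paper's experiment.

```python
# Hedged sketch of a piecewise linear (PL) affect-transition model.
# Coefficients and the threshold are invented for illustration.

def affect_update(x, u, slope_low=0.8, slope_high=0.4, threshold=0.0):
    """One PL transition step: the reaction to stimulus u depends on
    whether the previous affective state x lies below or above a threshold."""
    if x < threshold:
        return slope_low * x + u
    return slope_high * x + u

# Trajectory under a constant positive stimulus, starting from a
# negative affective state:
state = -1.0
trajectory = [state]
for _ in range(5):
    state = affect_update(state, 0.5)
    trajectory.append(state)
```

The same stimulus produces different successor states depending on the region of the previous state, which is exactly the dependence the statistical analysis tests for.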
In the autonomous city explorer (ACE) project a mobile robot is developed, which is capable of finding its way to a given destination in an unknown urban environment. An exemplary mission is to find the way from our institute to the Marienplatz, a public place in the center of Munich, without any prior knowledge or GPS information. Inspired by the behavior of humans in unknown environments, ACE must...
In this paper a novel implementation of the saliency map model on a multi-GPU platform using CUDA technology is presented. The saliency map model is a well-known computational model for bottom-up attention selection and serves as a basis of many attention control strategies of cognitive vision systems. A real-time implementation is the prerequisite of an application of bottom-up attention on mobile...
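The bottom-up principle behind the saliency map model can be illustrated with a minimal center-surround contrast, the operation the full multi-scale model repeats across color, intensity and orientation channels. This one-dimensional pure-Python sketch is an illustrative assumption for exposition only; the abstract's contribution is a multi-GPU CUDA implementation of the complete model, which this does not reproduce.

```python
# Minimal sketch of center-surround contrast, the building block of
# saliency-map-style bottom-up attention. 1-D and single-scale for
# brevity; the real model is 2-D, multi-scale and multi-channel.

def center_surround(intensity, radius=2):
    """Contrast of each location against the mean of its neighborhood."""
    n = len(intensity)
    saliency = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        surround = sum(intensity[lo:hi]) / (hi - lo)
        saliency.append(abs(intensity[i] - surround))
    return saliency

row = [0, 0, 0, 10, 0, 0, 0]  # a single bright "pop-out" location
sal = center_surround(row)
```

The pop-out location receives the highest saliency value, which is what makes it a candidate for attention selection.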
A biologically inspired foveated attention system in an object detection scenario is proposed. Thereby, a high-performance active multi-focal camera system imitates visual behaviors such as scan, saccade and fixation. Bottom-up attention uses wide-angle stereo data to select a sequence of fixation points in the peripheral field of view. Successive saccade and fixation of high foveal resolution using...
In this paper, an insect-inspired motion detector (Reichardt-model) is applied to visual servo control to ensure the stability of the system with high gain and time delay in its feedback. A Reichardt-based control scheme is compared with a conventional visual servoing approach. As a consequence of the specific velocity dependence of the Reichardt-model, the stability margin of the visual servo control...
Goal-directed guidance of gaze control based on coordinated task and stimulus parameters is essential for steering a mobile cognitive system efficiently and autonomously through the real world. This paper focuses on coordination mechanisms of top-down and bottom-up attentional allocation, with particular consideration of the current local environment. The top-down attention selection in the task-space...
Inspired by the expectation-based perception of humans, a surprise-driven active vision system is proposed. This vision system not only considers spatial saliency of objects in the environment, but also investigates temporal novelty in the neighborhood. Surprise is defined as the difference of the saliency probability distributions of two consecutive input images, which is measured using Kullback-Leibler...
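The surprise measure named in the abstract, the Kullback-Leibler divergence between the saliency distributions of consecutive frames, can be sketched directly. The toy histograms and the normalization helper below are illustrative assumptions; real inputs would be full saliency maps.

```python
import math

# Sketch of KL-divergence-based surprise between two consecutive
# saliency distributions. Inputs here are toy histograms.

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete distributions of equal length."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def surprise(prev_saliency, curr_saliency):
    """Normalize both saliency maps and compare current against previous."""
    def normalize(v):
        s = sum(v)
        return [x / s for x in v]
    return kl_divergence(normalize(curr_saliency), normalize(prev_saliency))

# Identical frames yield zero surprise; a newly appearing bright blob
# yields a strictly positive value.
same = surprise([1, 2, 3, 2], [1, 2, 3, 2])
novel = surprise([1, 2, 3, 2], [1, 2, 3, 9])
```

Since KL divergence is zero exactly when the two distributions coincide, unchanged scenes generate no surprise and temporal novelty stands out.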
Knowledge about the environment is essential for humanoid and mobile robots to move and act safely. The most intuitive way to perceive information about the environment is through the vision system. However, the accuracy provided by stereo vision is insufficient for many tasks. A more accurate representation is created by a laser range-finder, which delivers no color information. This paper describes...
Contrary to common emotion recognition techniques by face or speech analysis, physiological data are involuntary and continuously available. Thus, they allow for emotion detection even in situations without spoken words or in case of non-extreme emotions, which are more likely to occur in human-robot interaction (HRI). In this paper, we describe the results of an experiment investigating non-extreme...
Estimating the human body pose is of great interest for many tasks, such as human-robot interaction, people tracking and surveillance. In recent years, several approaches have been presented, which still have weaknesses regarding occlusions or complex scenes. In this paper, we present a novel algorithm for human body pose estimation using any three-dimensional representation of the environment,...
Several methodological approaches are used to study social acceptance in human-robot interaction. Because robots are only beginning to be introduced into the home, the established working practices and usage patterns that typically inform the design of new technologies are missing. Studying social acceptance in human-robot interaction therefore requires new methodological concepts. We propose a so-called breaching experiment with...
A multi-camera view direction planning strategy for mobile robots is discussed. Two concurrent tasks are considered: self-localization and object tracking. The approach is to assign the different tasks to different cameras, such that for each task an individual optimal view direction is selected based on the information gain maximization. Thereby, the individual task performance is significantly improved...
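The information-gain criterion mentioned above can be sketched as follows: each candidate view direction is scored by how much it is expected to reduce the entropy of the task estimate, and the camera is pointed where that gain is largest. The candidate names and the predicted posterior distributions are invented for illustration; the paper's actual estimators and gain computation are not reproduced here.

```python
import math

# Hedged sketch of information-gain-based view direction selection.
# Candidate views and their predicted posteriors are toy values.

def entropy(dist):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def best_view(prior, posteriors_by_view):
    """Pick the view maximizing expected information gain, i.e. the
    entropy reduction from the prior to the predicted posterior."""
    h_prior = entropy(prior)
    gains = {view: h_prior - entropy(post)
             for view, post in posteriors_by_view.items()}
    return max(gains, key=gains.get), gains

prior = [0.25, 0.25, 0.25, 0.25]
candidates = {
    "left":    [0.4, 0.3, 0.2, 0.1],      # somewhat informative
    "forward": [0.7, 0.1, 0.1, 0.1],      # most informative
    "right":   [0.25, 0.25, 0.25, 0.25],  # tells us nothing new
}
view, gains = best_view(prior, candidates)
```

A view whose predicted posterior equals the prior yields zero gain, so the planner never wastes a camera on a direction that cannot sharpen the estimate.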
In this paper, an array of biologically inspired elementary motion detectors (EMDs) is implemented on an FPGA (field programmable gate array) platform. The well-known Reichardt-type EMD, modeling the insect's visual signal processing system, is very sensitive to motion direction and has low computational cost. A modified structure of EMD is used to detect local optical flow. Six templates of receptive...
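The correlation structure of a Reichardt-type EMD can be sketched compactly: each half-detector multiplies one photoreceptor signal with a delayed copy of its neighbor, and subtracting the two mirrored halves yields a direction-selective output. The one-sample delay, the discrete toy stimulus, and the function name below are simplifying assumptions; the paper's modified EMD structure and its FPGA implementation are not reproduced here.

```python
# Sketch of a single Reichardt-type elementary motion detector (EMD).
# A one-sample delay stands in for the detector's delay filter.

def emd_response(left, right):
    """Direction-selective response for two photoreceptor time series.
    Positive values indicate motion from left to right."""
    response = 0.0
    for t in range(1, len(left)):
        # delay-and-correlate: delayed left arm times current right,
        # minus the mirrored arm (delayed right times current left)
        response += left[t - 1] * right[t] - right[t - 1] * left[t]
    return response

# A bright edge passing the left photoreceptor first, then the right:
left_sig = [0, 1, 0, 0]
right_sig = [0, 0, 1, 0]
```

Swapping the two inputs flips the sign of the response, which is the direction sensitivity the abstract refers to; an array of such detectors over neighboring pixel pairs yields a local optical-flow estimate.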