In this video, real-life acting professor Matthew Gray tutors Data the Robot (a Nao model) to improve his expression of emotion via Chekhov's Psychological Gestures. Though the video narrative is fictional and the robot's actions are pre-programmed, the aim of the dramatization is to introduce an acting methodology that social robots could use to leverage full-body expressions of affect. The video begins with...
Since HOAP-series robots resemble the human body structure, a HOAP robot is expected to interact with others in real time. However, real-time learning, recognition, and interaction have proven difficult. In this paper, a Fuzzy Inference System (FIS) is proposed that learns gestures using segmentation and motion primitives, recognizes gestures with a rule-based system created in the learning phase,...
In the research field of augmented reality, many experimental systems have been introduced so far to provide natural interactions with a virtual character and the surrounding environment. To achieve high-level interaction, not only visual and auditory information but also other human sensations, such as tactile sensations and somatosensory information, should be presented to the user. In this study,...
Recent research in the field of Human Computer Interaction aims at recognizing the user's emotional state in order to provide a smooth interface between humans and computers. This would make life easier and could be applied in a wide range of areas such as education and medicine. Human emotions can be recognized by several approaches, such as gestures, facial images, physiological signals, and...
Human action understanding and recognition have various demands for different applications in the fields of computer vision and human-machine interaction. Consequently, extensive research has been conducted in this arena for more than a decade to recognize various actions and activities. Researchers have been exploiting various action datasets, and some of them have become prominent. Though there are some...
With personal robotics and assistance to dependent people, robots are in continuous interaction with humans. To enable more natural communication, based on speech and gesture, robots must be endowed with auditory and visual perception capacities. This paper describes a modular multimodal interface based on speech and gestures for controlling an interactive robot called Jido. In this paper we describe...
Providing route directions is a complicated interaction. Utterances are combined with gestures and pronounced with appropriate timing. This study proposes a model for a robot that generates route directions by integrating three crucial elements: utterances, gestures, and timing. Two research questions must be answered in this modeling process. First, is it useful to let the robot perform gesture...
In the study of virtual reality and mixed reality, the development of input devices and user interfaces is important for interaction with virtual objects and environments. However, a mouse or a joystick is commonly used in current systems. If we could manipulate a virtual object intuitively by using natural gestures, without any sensors or devices, the system would contribute to present reality...
This paper describes a novel framework for automatic lecture video editing by gesture, posture, and video text recognition. In content analysis, the trajectory of hand movement is tracked, and intentional gestures are automatically extracted for recognition. In addition, head pose is estimated by overcoming the difficulties caused by the complex lighting conditions in classrooms. The aim of recognition...