Derived from ecological psychology, the term ‘affordance’ refers to the functional classification of objects: the set of actions a subject (e.g. a human or an anthropomorphic agent) can possibly perform with an object. There are several paradigms in research on affordance detection. These approaches include considering other contexts such as the subject, ambient...
This paper reports our latest experimental results on analyzing humans' continuous learning ability with reflection cost. To fill in a missing piece of the reinforcement learning framework for the learning robot, we focus on two human mental learning processes: awareness as a pre-learning process and reflection as a post-learning process. To observe a human's mental learning processes, we propose a...
In this study, a brain-computer interface (BCI) paradigm based on steady-state visually evoked potentials (SSVEP) is presented to investigate the effects of the movement of stimulus targets on classification accuracy in a virtual environment, an aspect seldom examined previously. Several paths, including fixed, up/down, broken-line, and random paths, and different speeds were set for searching performances'...
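SSVEP-based BCIs typically identify which flickering target the user attends to by finding the stimulus frequency with the strongest response in the EEG spectrum. A minimal single-channel sketch of that idea (the function name, harmonic weighting, and bin selection are illustrative assumptions, not the paper's classifier):

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs):
    """Return the stimulus frequency whose spectral power (fundamental
    plus second harmonic) is largest in a single-channel EEG segment."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        # sum power at the FFT bins nearest to f and its second harmonic 2f
        score = sum(spectrum[np.argmin(np.abs(freqs - h))] for h in (f, 2 * f))
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]
```

Real SSVEP systems usually use multiple channels and methods such as canonical correlation analysis, but the spectral-peak view above captures the core principle.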
The paper presents a proposal for sensor fusion combining data derived from a Kinect device and high-precision sensors. The main idea is to enhance the tracking of a human arm in order to obtain precise coordinates. The Kinect plays the role of a calibration device, and the sensor data are used in kinematics equations for enhanced tracking of the arm. This way the resulting information has less uncertainty...
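Using sensor readings in kinematics equations, as described above, amounts to computing the arm pose from measured joint angles. A minimal forward-kinematics sketch for a planar serial arm, assuming a simple revolute-joint chain that the abstract does not actually specify:

```python
import numpy as np

def planar_arm_fk(joint_angles, link_lengths):
    """Forward kinematics of a planar serial arm: returns the (x, y)
    position of the end effector from joint-angle sensor readings."""
    theta = np.cumsum(joint_angles)   # absolute orientation of each link
    x = np.sum(link_lengths * np.cos(theta))
    y = np.sum(link_lengths * np.sin(theta))
    return np.array([x, y])
```

Fusing these coordinates with Kinect measurements would then reduce the uncertainty of either source alone, e.g. via a Kalman filter.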
It is well known that image representations learned through ad-hoc dictionaries improve the overall results in object categorization problems. Following the widely accepted coding-pooling visual recognition pipeline, these representations are often tightly coupled with a coding stage. In this paper we show how to exploit ad-hoc representations both within the coding and the pooling phases. We learn...
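The coding-pooling pipeline mentioned above maps each local descriptor onto atoms of a learned dictionary (coding) and then aggregates the codes into one fixed-length image-level vector (pooling). A minimal hard-assignment/max-pooling sketch, with a toy dictionary standing in for a learned one:

```python
import numpy as np

def encode_and_pool(descriptors, dictionary):
    """Hard-assignment coding against a dictionary followed by max
    pooling, yielding one fixed-length vector per image."""
    # coding: one-hot assignment of each local descriptor to its
    # nearest dictionary atom (squared Euclidean distance)
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    codes = np.zeros((len(descriptors), len(dictionary)))
    codes[np.arange(len(descriptors)), d2.argmin(axis=1)] = 1.0
    # pooling: max over all local codes -> image-level representation
    return codes.max(axis=0)
```

Soft assignment, sparse coding, or average pooling are common drop-in variants of the two stages.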
In this paper, a human speaker tracking method based on audio and video data is presented. It is applied to conversation tracking with a robot. Audiovisual data fusion is performed in a two-step process. Detection is performed independently on each modality: face detection based on skin color for video data, and sound source localization based on the time delay of arrival for audio data. The results of those...
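Sound source localization from the time delay of arrival (TDOA) can be sketched in two steps: estimate the inter-microphone delay by cross-correlation, then convert it to an azimuth using the far-field relation sin(θ) = c·τ/d. An illustrative two-microphone sketch, not the paper's implementation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def estimate_delay(x, y, fs):
    """Estimate the delay of signal x relative to y (in seconds)
    from the peak of their cross-correlation."""
    corr = np.correlate(x, y, mode="full")
    lag = np.argmax(corr) - (len(y) - 1)
    return lag / fs

def tdoa_azimuth(delay_s, mic_distance_m, c=SPEED_OF_SOUND):
    """Azimuth (degrees) of a far-field source from the time delay of
    arrival between two microphones separated by mic_distance_m."""
    # far-field plane-wave model: sin(theta) = c * tau / d
    s = np.clip(c * delay_s / mic_distance_m, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

A two-microphone array only resolves azimuth up to a front/back ambiguity; robot audition systems typically use larger arrays and more robust delay estimators such as GCC-PHAT.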
Adaptive, model-free control of Type 1 Diabetes Mellitus (T1DM) is lacking in the field of diabetes control, since most of the applied control strategies are model-based. The main problem is that it is difficult to formulate exact mathematical models to replicate the physiological processes, not only because of their complex behavior but also because these processes vary from patient to patient. Furthermore,...
One of the most important tasks in building environment maps with partial information is to find a good alignment between pairs of point clouds representing consecutive frames. RANSAC and ICP are widely used algorithms to align pairs of frames: the former finds an initial transformation which is refined by the latter. Decreasing the alignment error in the first step can reduce the computational cost...
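For reference, ICP refines an initial alignment by alternating nearest-neighbour matching with a closed-form rigid update (the Kabsch/SVD solution). A minimal point-to-point sketch, assuming small clouds so brute-force matching suffices:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping matched
    points src onto dst (Kabsch/SVD closed-form solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: brute-force nearest-neighbour
    matching, then the closed-form rigid update, repeated."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest neighbour in dst for every current source point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Because ICP only converges from a good starting point, a coarse initial transformation (e.g. from RANSAC over feature correspondences, as in the abstract above) is applied first; reducing the error of that first step directly cuts the refinement cost.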
Rotary-wing unmanned aerial vehicles (UAVs) are being widely used in different applications due to features such as mobility, light weight, embedded processing, and the capability to fly at different altitudes. Possible applications include surveillance tasks, monitoring of agricultural environments, power line inspection, and disease detection in crops. The images captured...
The use of emotional states for Human-Robot Interaction (HRI) has attracted considerable attention in recent years. One of the most challenging tasks is to recognize the spontaneous expression of emotions, especially in an HRI scenario. Every person has a different way to express emotions, and this is aggravated by the complexity of interaction with different subjects, multimodal information and different...
This paper is concerned with the interpretation of visual information for robot localization. It presents a probabilistic localization system that generates an appropriate observation model online, unlike existing systems which require pre-determined belief models. This paper proposes that probabilistic visual localization requires two major operating modes - one to match locations under similar conditions...
We propose a discriminative compact scene descriptor for single-view cross-season place recognition. Unlike previous bag-of-words approaches which rely on a library of vector quantized visual features, the proposed scene descriptor is based on a library of raw image data (such as available visual experience, images shared by other colleague robots, and publicly available image data on the web) that...
This work develops an intelligent augmented-reality information technology for persons with disabilities, including the conversion of visual images into sound and vice versa by generating a single concept. Its operating mechanism uses a unified knowledge base that stores image concepts.
We present an evaluation of standard image features in the context of long-term visual teach-and-repeat mobile robot navigation, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that in the given long-term scenario, the viewpoint, scale and rotation invariance of the standard feature extractors is less important...
In this paper, we introduce a non-verbal multimodal joint visual attention model for human-robot interaction in household scenarios. Our model combines the bottom-up saliency and depth-based segmentation with the top-down cues such as pointing and gaze to detect the objects of interest according to the user. For generation of the top-down saliency maps, we have introduced novel methods for object...
The shortage of physicians afflicting developed countries encourages engineers and doctors to collaborate on the development of telemedicine. In particular, robotic systems have the potential to help doctors perform examinations. A very common examination that can be the goal of a robotic system is palpation. Most of the robotic systems that have been developed for palpation present interesting...
Mimicry and laughter are two social signals displaying affiliation among people. To date, however, their relationship remains uninvestigated and relatively unexploited in designing the behaviour of robots and virtual characters. This paper presents an experiment aimed at examining how laughter and mimicry are related. The hypothesis is that hand movements a person produces during a laughter episode...
This paper proposes a methodology for visual tracking of a dynamic generalized subject within an unknown map, by relying on its perception as a separate entity which can be distinguished spatially and visually from its environment. To this purpose, a 3D-representation of the visible scenery is examined, and the subject is spatially identified by its externally viewed hull via a mesh-connection algorithm...
It is easy for human beings to discern whether an observed acoustic signal is direct speech, reflected speech, or noise simply by listening. Relying purely on acoustic cues is enough for human beings to discriminate between these kinds of sound sources, a task that is not straightforward for machines. A robot equipped with a current robot audition mechanism will, in most cases, fail to differentiate...
This paper presents an audio-visual beat-tracking method for an entertainment robot that can dance in synchronization with music and human dancers. Conventional music robots have focused on either music audio signals or dancing movements of humans for detecting and predicting beat times in real time. Since a robot needs to record music audio signals by using its own microphones, however, the signals...