Technological advances are being made to assist humans in performing ordinary tasks in everyday settings. A key issue is the interaction with objects of varying size, shape, and degree of mobility. Autonomous assistive robots must be provided with the ability to process visual data in real time so that they can react adequately for quickly adapting to changes in the environment. Reliable object detection...
This paper presents a dynamic thresholding algorithm for robotic apple detection. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variable-sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected so as to accommodate three different illumination levels for three different dimensions...
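The per-region thresholding idea in this abstract can be sketched in a few lines. This is an illustrative reconstruction only: the fixed block size and the mean-plus-offset rule below are assumptions, not the paper's variable region splitting or its nine calibrated thresholds.

```python
def dynamic_threshold(img, block=4, offset=0.0):
    """Binarize each block of the image against that block's own mean,
    so regions under different illumination are thresholded independently.
    Illustrative sketch: block size and the mean+offset rule are assumed,
    not taken from the paper."""
    h, w = len(img), len(img[0])
    out = [[False] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # Collect the pixels of this block and compute its local threshold.
            tile = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            t = sum(tile) / len(tile) + offset
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = img[y][x] > t
    return out
```

A global threshold would miss a dim object in a dark region or flood a bright region; thresholding per block keeps both detectable.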
In this paper we describe an approach for detection and pose estimation of colored objects with few or no textural features. The approach consists of two separate stages. First, we perform vision-based object detection and hypothesis filtering. Then, we estimate and validate the object's pose in 3-D laser scans. For object detection we integrate image segmentation results from multiple viewpoints...
In human-robot interaction scenarios, the ability to identify a single object from multiple objects is an important task for service robots. Although there has been recent progress in this area, it remains difficult for autonomous vision systems to recognize objects in natural conditions. The service robot should detect a particular object according to the user's demand. This paper describes a human...
The goal of saliency detection is to highlight objects in image data that stand out relative to their surroundings. Therefore, saliency detection aims to capture regions that are perceived as important. The most recent bottom-up approaches for saliency detection measure contrast based on visual features in 2D scenes, ignoring depth values. This work presents an effective method to measure saliency by...
This paper presents the detection and localization methods for entrance and staircase markers used by team E-Mobile in the TechX Challenge 2013. Autonomous vehicles are required to detect and locate traffic cones beside the indoor entrance and staircase. One major challenge arises from unpredictable lighting conditions and environments. Different practical techniques such as color space selection, segmentation,...
The task of searching for and grasping objects in cluttered scenes, typical of robotic applications in domestic environments, requires fast object detection and segmentation. Attentional mechanisms provide a means to detect and prioritize processing of objects of interest. In this work, we combine a saliency operator based on symmetry with a segmentation method based on clustering locally planar surface...
In many scenarios, a domestic robot will regularly encounter unknown objects. In such cases, top-down knowledge about the object cannot be used for detection, recognition, and classification. To learn about the object, or to be able to grasp it, bottom-up object segmentation is an important competence for the robot. Even when top-down knowledge is available, prior segmentation of the object can improve...
In this paper a visual self-localization method for a humanoid robot is presented. The method is based on monocular information. Its goal is to obtain the position (x, y) and orientation θ of the humanoid robot inside the field of play. The proposed methods include digital image processing algorithms and geometric interpretation to perform a 3D monocular reconstruction, which allows...
This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach...
In this paper, a target localization method based on color recognition and connected component analysis is presented. The raw image is converted to the HSI color space through a lookup table, followed by a line-by-line scan to find all connected domains. By checking the size of each domain, most pseudo-targets can be discarded and, at the same time, the target position can be calculated. Owing to the absence...
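The connected-domain step in this abstract can be illustrated with a small sketch. The flood-fill labeling below is an assumption standing in for the paper's line-by-line scan, but it shows the same idea: label connected regions, reject small pseudo-targets by size, and compute the centroid of what remains.

```python
from collections import deque

def label_components(mask):
    """Return the pixel lists of all 4-connected foreground regions in a
    boolean mask (simple BFS flood fill; the paper uses a line-by-line scan)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    comps = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and labels[sy][sx] == 0:
                comps.append([])
                lab = len(comps)
                labels[sy][sx] = lab
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    comps[lab - 1].append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and labels[ny][nx] == 0:
                            labels[ny][nx] = lab
                            q.append((ny, nx))
    return comps

def target_centroids(mask, min_size=2):
    """Drop components smaller than min_size (pseudo-targets) and
    return the centroids of the remaining ones."""
    out = []
    for pix in label_components(mask):
        if len(pix) >= min_size:
            ys, xs = zip(*pix)
            out.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return out
```

The size check is what suppresses isolated noise pixels that pass the color test but are too small to be the target.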
This paper provides an intuitive way to infer the space of a scene using stereo cameras. We first segmented the ground out of the image by adaptively learning a ground model. We then used the convex hull to approximate the scene space. Objects within the scene can also be detected with the stereo cameras. Finally, we organized the scene space and the objects within the scene into...
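The convex-hull approximation of the scene space can be sketched with Andrew's monotone chain, a standard hull algorithm. Representing the segmented ground as 2-D points on the ground plane is an assumption made for this illustration.

```python
def convex_hull(points):
    """Andrew's monotone chain: return the convex hull of 2-D points
    in counter-clockwise order. Here the points would be segmented
    ground-plane coordinates (an assumed representation)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each list's last point is the other list's first, so drop it.
    return lower[:-1] + upper[:-1]
```

The hull gives a compact polygon bounding the observed free space, which is cheap to test points against when placing detected objects in the scene model.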
We present preliminary results of an algorithm for detecting obstacle-free regions in indoor environments using both color and texture information for visual robot navigation. By modeling color information in the L*u*v* color space, a color-based segmentation is performed to find similar regions. This segmentation yields a set of regions that are joined together into single areas using texture information...
This paper discusses a vision-based orientation method for soccer robots. The robot is located using color and shape cues, combining threshold segmentation, edge detection, and a Hough-transform model for circle detection, with an emphasis on threshold segmentation and the Hough transform. Several methods are used for threshold segmentation, evident difference of images...
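The Hough-transform circle detection mentioned above can be sketched for a single known radius. This is a simplified, illustrative version: practical implementations vote over a range of radii and use edge gradients, and the angle sampling and accumulator resolution below are assumptions.

```python
import math

def hough_circle(edge_points, radius, shape, n_theta=64):
    """Vote for circle centers at a fixed radius: every edge point casts
    votes along a circle of that radius in the accumulator; the true
    center collects votes from all edge points."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for y, x in edge_points:
        for t in range(n_theta):
            a = 2 * math.pi * t / n_theta
            cy = int(round(y - radius * math.sin(a)))
            cx = int(round(x - radius * math.cos(a)))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy][cx] += 1
    # Return the accumulator cell with the most votes.
    _, center = max((acc[y][x], (y, x))
                    for y in range(h) for x in range(w))
    return center
```

Because every edge point of a circle votes for its center, the peak of the accumulator localizes the ball (or field circle) even when parts of the contour are missing.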
This paper describes the initial steps in the development of an object detection system for manipulation purposes, to be embedded in a mobile robot. The goal is to design a robotic system to aid workers in a manufacturing plant. The proposed implementation involves the integration of a Field Programmable Gate Array (FPGA) based electronic module with the manipulator arm of the robotic platform. The...
This paper describes the development of an FPGA-based object detection algorithm for manipulation purposes in a mobile robot. The target application is a robotic system which aids workers in a manufacturing plant. The whole system is provided with a camera which captures images of the objects that can be found in the environment. The FPGA extracts the most useful data from these images and performs...
The work presented here describes a novel vision-based motion detection system for telerobotic operations. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Color tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the color tags is...
This paper presents a saliency-based solution to boost trail detection. The proposed model builds on the empirical observation that trails are usually conspicuous structures in natural environments. This hypothesis is confirmed by the experimental results, where a strong positive correlation between trail location and visual saliency has been observed. These results are due in part to the proposed...
Most large-scale public environments provide direction signs to help humans orient themselves and find their way to a goal location. Thus, it would be beneficial for a robot operating in the same environment to interpret such signs correctly for safe and efficient navigation. In this work, we propose a novel approach to infer the meaning of direction signs and to use...
A human has various sensory perceptions and uses them effectively in communication. Auditory and visual functions in particular play an important role in recognizing a conversation partner and understanding the conversation. In vocal communication, we are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. In addition, we...