A major challenge in precision agriculture is the detection of fruits in coffee crops in agricultural environments. This paper presents a comparison of four feature sets for detecting red (mature) fruits on coffee plants. An Unmanned Aerial Vehicle (UAV) is used to obtain high-resolution RGB images of a coffee crop. The proposed methodology enables the extraction of visual features from image regions...
The human visual system employs an information selection mechanism, visual attention, so that higher-level cognitive processes can be restricted to a potentially important subset of the incoming information. This mechanism is amenable to efficient computational implementation and, consequently, it has been incorporated into many technological applications. Among these applications is autonomous mobile...
Based on the actual operating conditions of substation inspection robots, this paper proposes a vision-based navigation control method that allows the robot to run in complex road environments, with strong anti-interference capability, simple implementation, good stability, and high precision. The inspection paths are planned so that the robot can fully check each device in the substation. Robots...
In this paper, we present a method for vision-based place recognition in environments that contain many similar features and are prone to illumination variations. The high similarity of features makes it difficult to disambiguate between two different places. The novelty of our method lies in using the Bag of Words (BoW) approach to derive an image descriptor from a set of relevant...
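The BoW step this abstract refers to can be sketched in a few lines: given a precomputed visual vocabulary, each local feature is assigned to its nearest visual word, and the image descriptor is the normalized word histogram. This is a minimal illustration with toy 2-D "features", not the paper's implementation:

```python
import numpy as np

def bow_descriptor(features, vocabulary):
    """Bag-of-Words histogram: assign each local feature to its
    nearest visual word, then count and L1-normalize."""
    # Pairwise distances between features (N x D) and words (K x D)
    dists = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest word per feature
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy example: 2-D "features" and a 3-word vocabulary
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = np.array([[0.1, 0.1], [0.9, 0.1], [0.05, 0.95], [0.0, 0.9]])
print(bow_descriptor(feats, vocab))  # -> [0.25 0.25 0.5]
```

Two images can then be compared by the distance between their histograms, which is what makes the descriptor useful for place disambiguation.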
Vision-based place recognition in underwater environments is a key component for autonomous robotic exploration. However, this task can be very challenging due to the inherent properties of such environments: color distortion, poor visibility, perceptual aliasing, and dynamic illumination. In this paper, we present a method for vision-based place recognition in coral reefs. Our method relies...
As humans and robots collaborate on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous, which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies to understand how people instruct a partner to identify and pick up objects on a table. We investigate spatial...
The article presents a way of visualizing point clouds created by 3D scanning in a coal mine. The first part focuses on the choice of individual algorithms for point cloud pre-processing (using the Point Cloud Library, PCL), namely voxelization, outlier removal, and smoothing. It then describes the software's main rendering algorithm: the chosen way of rendering points and some more...
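The voxelization step mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not PCL's implementation, just the underlying idea: snap each point to a voxel index and keep one centroid per occupied voxel:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: bucket points into voxels, then
    replace each voxel's contents with their centroid."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel using a dict keyed on the voxel index
    voxels = {}
    for key, p in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in voxels.values()])

pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2],
                [1.1, 1.1, 1.1], [1.2, 1.2, 1.2]])
print(voxel_downsample(pts, voxel_size=1.0))  # two centroids, one per voxel
```

In PCL the same effect is obtained with the `VoxelGrid` filter; the sketch only shows why the reduction preserves the cloud's coarse geometry.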
This paper presents the latest developments towards vision-based target tracking by an AUV. The main concepts behind visual relative localization are provided, and the results of a statistical analysis of the relative localization algorithm are presented. The purpose of this analysis is to ensure the soundness of the data used to feed the controllers responsible for governing the AUV's motion. A new...
Considerable work has been done in biologically inspired robotics, emulating models, systems, and elements of nature to solve traditional robotics problems. Chromatic behaviours are abundant in nature, used by a variety of living species to achieve camouflage, signaling, and temperature regulation. The ability of these creatures to successfully blend in with their environment by changing...
One of the most important tasks in building environment maps with partial information is to find a good alignment between pairs of point clouds representing consecutive frames. RANSAC and ICP are widely used algorithms to align pairs of frames: the former finds an initial transformation which is refined by the latter. Decreasing the alignment error in the first step can reduce the computational cost...
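The refinement step that ICP repeats, solving for the best rigid transform given the current correspondences, has a closed-form SVD solution (the Kabsch algorithm). A minimal sketch of that inner step, illustrative only and not any specific paper's code:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given known point correspondences (Kabsch algorithm). ICP
    alternates this with re-estimating the correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 2-D rotation + translation from corresponding points
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))  # -> True
```

RANSAC's role, as the abstract notes, is to supply an initial guess good enough that this iterated refinement converges to the right alignment rather than a local minimum.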
In this paper, we introduce a non-verbal multimodal joint visual attention model for human-robot interaction in household scenarios. Our model combines bottom-up saliency and depth-based segmentation with top-down cues, such as pointing and gaze, to detect the objects the user is interested in. To generate the top-down saliency maps, we have introduced novel methods for object...
Learning word meanings during natural interaction with a human faces noise and ambiguity that can be resolved by analysing regularities across different situations. We propose a model of this cross-situational learning capacity and apply it to learning nouns and adjectives from noisy, ambiguous speech and continuous visual input. This model uses two different strategies: statistical filtering to...
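The cross-situational idea can be illustrated with a toy co-occurrence learner, a deliberately simplified sketch rather than the proposed model: count how often each word co-occurs with each candidate referent across situations, and pick the most consistent pairing:

```python
from collections import defaultdict

def cross_situational_learn(situations):
    """Each situation pairs an ambiguous utterance (set of words)
    with an ambiguous scene (set of candidate referents). The
    referent that co-occurs most often with a word wins."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Toy data: "ball" is ambiguous in any single situation, but only
# the referent BALL is present every time the word is heard
situations = [
    ({"ball", "red"}, {"BALL", "CUP"}),
    ({"ball", "blue"}, {"BALL", "DOG"}),
    ({"cup", "red"}, {"CUP", "DOG"}),
]
print(cross_situational_learn(situations)["ball"])  # -> 'BALL'
```

No single situation resolves the ambiguity; only the regularity across situations does, which is the capacity the abstract describes.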
The goal of saliency detection is to highlight objects in image data that stand out relative to their surroundings. Saliency detection therefore aims to capture regions that are perceived as important. The most recent bottom-up approaches to saliency detection measure contrast based on visual features in 2D scenes, ignoring depth values. This work presents an effective method to measure saliency by...
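The contrast-based measurement described above can be illustrated with a toy example; here saliency is simply each location's feature distance from the scene mean, applied to a depth map. This is a deliberate simplification of the bottom-up approaches the abstract surveys, not the proposed method:

```python
import numpy as np

def contrast_saliency(feature_map):
    """Global-contrast saliency: a location is salient in proportion
    to how far its feature value lies from the scene-wide mean.
    Applying this to depth makes a close object stand out."""
    sal = np.abs(feature_map - feature_map.mean())
    return sal / sal.max()  # normalize to [0, 1]

depth = np.array([[5.0, 5.0, 5.0],
                  [5.0, 1.0, 5.0],   # one object much closer
                  [5.0, 5.0, 5.0]])
sal = contrast_saliency(depth)
print(sal.argmax())  # -> 4 (flattened index of the close object)
```

The sketch shows why adding depth helps: the close object has maximal contrast in depth even if its color matches the background.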
We spend more than 80% of our lives indoors. Future indoor robot navigation will rely on intelligent systems that provide accurate and smart information. In this paper we introduce a novel Aesthetic Marker decoding system, which finds and decodes machine-readable visual markers inside buildings. Focused on the seamless integration of visual codes into our everyday human environment, our application has...
In this paper, robot control using human motion data is considered. The conventional motion-reproduction structure is limited in performance by its lack of environmental sensing. This study implements the widely used vision-based approach. Possible reproduction structures with both visual and tactile senses are discussed. The proposed reproduction structure with both visual...
Visual navigation is an important research field in robotics due to the low cost of cameras and the good results these systems usually achieve. This paper presents monocular and stereo vision-based detection methods. Obstacles are detected and fused through Dempster-Shafer theory, generating a point cloud that contains the probability of the existence of obstacles in the environment...
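The Dempster-Shafer fusion mentioned above combines evidence from the monocular and stereo detectors; its core operation is Dempster's rule of combination. A minimal sketch with two hypothetical sensor mass functions (illustrative, not the paper's implementation):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: fuse two mass functions over the same frame
    of discernment. Intersecting focal elements combine; mass on
    conflicting (disjoint) pairs is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # total non-conflicting mass
    return {s: m / k for s, m in combined.items()}

# Two hypothetical detectors reporting over {obstacle, free}
O, F = frozenset({"obstacle"}), frozenset({"free"})
m1 = {O: 0.6, F: 0.4}   # monocular evidence (assumed values)
m2 = {O: 0.7, F: 0.3}   # stereo evidence (assumed values)
fused = dempster_combine(m1, m2)
print(round(fused[O], 3))  # -> 0.778
```

Two moderately confident detections reinforce each other into a stronger fused belief, which is the behavior that makes the rule attractive for obstacle fusion.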
Loop-closure detection, which is the ability to recognize a previously visited place, is of primary importance for robotic localization and navigation problems. We here introduce SAIL-MAP, a method for loop-closure detection based on vision only, applied to topological simultaneous localization and mapping (SLAM). Our method allows the matching of camera images using a novel saliency-based feature...
Visual search for a specific object in an unknown environment by autonomous robots is a complex task. The key challenge is to locate the object of interest while minimizing the cost of search in terms of time or energy consumption. Given the impracticality of examining all possible views of the search environment, recent studies suggest the use of attentive processes to optimize visual search. In...
Recognizing a place at a visual glance is the first capacity humans use to understand where they are. Making this capacity available to robots would add redundancy to their localization systems and improve semantic localization. However, achieving this capacity requires building a robust visual place recognition procedure that...
For smooth interaction between human and robot, the robot should be able to manipulate human attention and behavior. In this study, we developed a visual attention model that allows a robot to manipulate human attention. The model consists of two modules: a saliency-map generation module and a manipulation-map generation module. The saliency map describes the bottom-up effect of visual...