In this paper, we present a method for vision-based place recognition in environments that have a high content of similar features and are prone to variations in illumination. The high similarity of features makes it difficult to disambiguate between two different places. The novelty of our method lies in using the Bag of Words (BoW) approach to derive an image descriptor from a set of relevant...
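The abstract above is truncated, so the authors' exact pipeline is not available here; the general BoW idea it names can be sketched as follows, assuming local feature descriptors are already extracted. All function names below are illustrative, not from the paper: a toy k-means builds a visual vocabulary, each image's descriptors are quantized into a word histogram, and histograms are compared with cosine similarity.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Toy k-means over local feature descriptors to form a visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest visual word.
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_descriptor(descriptors, vocabulary):
    """Quantize an image's local descriptors into a normalized word histogram."""
    d = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(vocabulary)).astype(float)
    return hist / (hist.sum() or 1.0)

def similarity(h1, h2):
    """Cosine similarity between two BoW image descriptors."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))
```

Two views of the same place should yield similar word histograms even when individual features repeat across the environment, which is what makes the global descriptor more discriminative than raw feature matching.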
Vision-based place recognition in underwater environments is a key component of autonomous robotic exploration. However, this task can be very challenging due to the inherent properties of such environments: color distortion, poor visibility, perceptual aliasing, and dynamic illumination. In this paper, we present a method for vision-based place recognition in coral reefs. Our method relies...
Visual search for a specific object in an unknown environment by autonomous robots is a complex task. The key challenge is to locate the object of interest while minimizing the cost of search in terms of time or energy consumption. Given the impracticality of examining all possible views of the search environment, recent studies suggest the use of attentive processes to optimize visual search. In...
Recognizing a place at a visual glance is the first capacity humans use to understand where they are. Making this capacity available to robots would make it possible to increase the redundancy of the localization systems available on robots and to improve semantic localization systems. However, achieving this capacity requires building a robust visual place recognition procedure that...
We present an approach to automatically learn the visual appearance of an environment in terms of object classes. The procedure is totally unsupervised, incremental, and can be executed in real time. The traversability of an unseen object is also learnt without human supervision, through interaction between the robot and the environment. An incremental version of affinity propagation, a state-of-the-art...
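The abstract is cut off before describing the incremental variant, so only the standard batch form of affinity propagation (Frey and Dueck's message-passing clustering, which selects exemplars rather than requiring a preset number of clusters) can be sketched here. This is a minimal numpy-only sketch, not the authors' incremental algorithm:

```python
import numpy as np

def affinity_propagation(S, damping=0.8, iters=200):
    """Batch affinity propagation: exchange responsibility and availability
    messages over a similarity matrix S (diagonal = self-preference) until
    exemplars emerge. Returns exemplar indices and a label per point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities: suitability of k as exemplar for i
    A = np.zeros((n, n))  # availabilities: evidence that k should be an exemplar
    for _ in range(iters):
        # Responsibility update: r(i,k) = s(i,k) - max_{k'!=k}(a(i,k') + s(i,k')).
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availability update: pool positive responsibilities toward candidate k.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = np.minimum(0, Rp.sum(axis=0)[None, :] - Rp)
        np.fill_diagonal(Anew, Rp.sum(axis=0) - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    exemplars = np.where((R + A).diagonal() > 0)[0]
    labels = S[:, exemplars].argmax(axis=1) if len(exemplars) else np.array([])
    return exemplars, labels
```

The number of clusters falls out of the self-preference values on the diagonal of `S`, which is what makes the method attractive for unsupervised, open-ended appearance learning where the number of object classes is not known in advance.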
Detecting visual changes in environments is an important computation with many applications in robotics and computer vision. Security cameras, remotely operated vehicles, and sentry robots could all benefit from robust change detection capability. We conjecture that if one has a mobile camera system the number of visual scenes that are experienced is limited (compared to the space of all possible...
While the Arctic possesses significant information of scientific value, surprisingly little work has focused on developing robotic systems to collect this data. For Arctic robotic data collection to be a viable solution, a method for navigating in the Arctic, and thus for assessing glacial terrain, must be developed. Segmenting the ground plane from the rest of the image is one common aspect of a visual...
A team of robots working to explore and map an area may need to share information about landmarks so as to register their local maps and to plan effective exploration strategies. In previous papers we have introduced a combined image and spatial representation for landmarks: terrain spatiograms. We have shown that for manually selected views, terrain spatiograms provide an effective, shared representation...
We propose an algorithm for generating navigation summaries. Navigation summaries are a specialization of video summaries in which the focus is on video collected by a mobile robot along a specified trajectory. We are interested in finding a few images that epitomize the visual experience of a robot as it traverses a terrain. This paper presents a novel approach to generating summaries in the form of a set...
In this paper we present a system for appearance-based topological mapping and localisation using vision data. The algorithms are designed for robots which are equipped with FPGA cameras. Such cameras do not provide the entire image to the robot but simple image features like colour histograms.
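The abstract states that such FPGA cameras emit only compact features like colour histograms rather than full images. As a minimal sketch of how appearance-based localisation could work on such features alone (the matching rule here, histogram intersection, is a common choice but is an assumption, not necessarily the paper's):

```python
import numpy as np

def colour_histogram(pixels, bins=8):
    """3-D RGB histogram, flattened and normalized -- the kind of compact
    feature an on-camera FPGA pipeline might emit instead of raw images.
    `pixels` is an (n, 3) array of RGB values in [0, 256)."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

def localise(query_hist, place_hists):
    """Return the index of the stored place whose histogram best matches."""
    scores = [histogram_intersection(query_hist, h) for h in place_hists]
    return int(np.argmax(scores))
```

Because each place is reduced to a fixed-length histogram, topological localisation becomes a nearest-neighbour lookup over a few hundred floats per node, which fits the bandwidth constraints of a camera that never transmits full frames.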
This paper describes daily assistive task experiments conducted on the HRP2JSK humanoid robot. We present the design of an integrated action and recognition system that realizes daily assistive behaviors autonomously and robustly, along with a demonstration in which the HRP2JSK pours tea from a bottle into a cup and washes the cup after a human drinks from it. To obtain autonomy and robustness, visual recognition and...