Multi-concept visual classification is emerging as a common environment perception technique, with applications in autonomous mobile robot navigation. Supervised visual classifiers are typically trained with large sets of images, hand-annotated by humans with region boundary outlines followed by label assignment. This annotation is time-consuming, and unfortunately, a change in environment requires...
The development of reliable and robust visual recognition systems is a major challenge for the deployment of autonomous robotic agents in unconstrained environments. Learning to recognize objects requires image representations that are discriminative for relevant information while being invariant to nuisances such as scaling, rotations, light and background changes, and so forth. Deep Convolutional...
In this paper, we present an online landmark selection method for distributed long-term visual localization systems in bandwidth-constrained environments. Sharing a common map for online localization provides a fleet of autonomous vehicles with the possibility to maintain and access a consistent map source, and therefore reduce redundancy while increasing efficiency. However, connectivity over a mobile...
This paper presents a novel structured knowledge representation called the functional object-oriented network (FOON) to model the connectivity between functionally related objects and their motions in manipulation tasks. The graphical model FOON is learned by observing object state changes and human manipulations of the objects. Using a well-trained FOON, robots can decipher a task goal, seek the correct...
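As an illustration of the kind of structure a FOON encodes, a single "functional unit" can be sketched as a small bipartite record linking object-state nodes to a motion node. The object names and states below are invented for illustration only and are not taken from the paper.

```python
# Hedged sketch: a FOON-like functional unit viewed as a bipartite
# connection between object-state nodes and one motion node.
# All names here are hypothetical examples, not the paper's data.
functional_unit = {
    "motion": "pour",
    "inputs": [("kettle", "filled"), ("cup", "empty")],
    "outputs": [("kettle", "partly-filled"), ("cup", "filled")],
}

def objects_in(unit):
    """Collect the set of objects touched by one functional unit."""
    return {obj for obj, _state in unit["inputs"] + unit["outputs"]}
```

Chaining such units by matching output object-states to the inputs of later units is what gives the network its task-graph character.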
Illumination changes are a typical problem for many outdoor long-term applications such as visual place recognition. Keypoints may fail to match between images taken at the same location but at different times of day. Although some methods have recently been presented for creating shadow-free image representations, all of them are limited in dealing with night images and non-Planckian...
Based on real-time image capture and recognition, the robot inspects the working status of various equipment in the smart substation. In this paper, an adapted, efficient visual servoing algorithm for robots is proposed to improve the accuracy of capturing the target images. When the robot captures an image of the equipment, the SIFT method is used to match it against the template...
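The SIFT-matching step described here typically reduces to nearest-neighbor descriptor matching with a distinctiveness (ratio) test. The sketch below shows only that test over precomputed descriptor arrays; in practice the descriptors would come from a SIFT extractor such as OpenCV's, and the function name and default ratio are illustrative assumptions.

```python
import numpy as np

def ratio_test_matches(des_template, des_capture, ratio=0.75):
    """Match template descriptors to capture descriptors.

    des_template, des_capture: (n, d) float arrays of local descriptors
    (e.g. 128-D SIFT vectors). A template descriptor is matched only if
    its nearest capture descriptor is clearly closer than the second
    nearest (Lowe's ratio test). Returns (template_idx, capture_idx) pairs.
    """
    matches = []
    for i, d in enumerate(des_template):
        dists = np.linalg.norm(des_capture - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

A pose or servoing correction would then be estimated from the surviving matches, e.g. via a homography with RANSAC.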
Visual localization is the process of finding the location of a camera from the appearance of the images it captures. In this work, we propose an observation model that allows the use of images for particle filter localization. To achieve this, we exploit the capabilities of Gaussian Processes to calculate the likelihood of the observation for any given pose, in contrast to methods which restrict...
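A minimal sketch of how a Gaussian Process can supply an observation likelihood at arbitrary poses: regress a scalar image feature on training poses, then score a particle by the Gaussian predictive density at its pose. The 1-D feature, RBF kernel, and all names below are simplifying assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between pose sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

class GPObservationModel:
    """Hedged sketch: GP regression from pose to a scalar image feature;
    the predictive density then serves as a particle-filter likelihood."""

    def __init__(self, poses, features, ell=1.0, noise=1e-2):
        self.X, self.y, self.ell, self.noise = poses, features, ell, noise
        K = rbf(poses, poses, ell) + noise * np.eye(len(poses))
        self.Kinv = np.linalg.inv(K)

    def likelihood(self, pose, observed):
        k = rbf(pose[None, :], self.X, self.ell)           # (1, n)
        mu = (k @ self.Kinv @ self.y)[0]                   # predictive mean
        var = 1.0 - (k @ self.Kinv @ k.T)[0, 0] + self.noise
        return np.exp(-0.5 * (observed - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
```

Because the GP interpolates between training poses, the likelihood is defined for any candidate pose, which is exactly what the abstract contrasts with methods restricted to discrete reference locations.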
Wireless capsule endoscopy (WCE) is the prime diagnostic modality for the small bowel. It consists of a swallowable color camera that enables the visual detection and assessment of abnormalities without patient discomfort. The localization of the capsule is currently performed in the 3D abdominal space using radiofrequency (RF) triangulation. However, this approach does not provide sufficient information...
With the recent success of visual features from deep convolutional neural networks (DCNN) in visual robot self-localization, it has become important and practical to address more general self-localization scenarios. In this paper, we address the scenario of self-localization from images with small overlap. We explicitly introduce a localization difficulty index as a decreasing function of view overlap...
Robotics is the field currently taking its place as a leading candidate for dramatic changes in everyday life. Advances in the past 10 years in sensing, actuator and power technologies have fuelled an explosion of opportunities in this exciting, and surprisingly affordable domain. Small Unmanned Aircraft Systems (drones) are being rapidly developed for research, public service, and commercial applications,...
This paper presents a urine detection method based on front vision and image recognition. The urine detection apparatus is composed of a conveyor belt, a urine detection card, an automatic detection needle, an automatic detection camera, a mobile device, and so on. To achieve rapid and accurate identification of the circular hole, the model of the front vision of the urine detection...
Attention-based bio-inspired vision can be studied as a different way to consider sensor processing, first by reducing the amount of data transmitted by connected cameras, and second by advocating a paradigm shift toward neuro-inspired processing for the post-processing of the few regions extracted from the visual field. The computational complexity of the corresponding vision models leads...
Vision-based place recognition in underwater environments is a key component of autonomous robotic exploration. However, this task can be very challenging due to the inherent properties of such environments, including color distortion, poor visibility, perceptual aliasing, and dynamic illumination. In this paper, we present a method for vision-based place recognition in coral reefs. Our method relies...
Provides an abstract for each of the tutorial presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.
We present in this paper a real-time visual categorization method for robot grasping. We describe an object database with SURF feature points, which we quantize with the k-means clustering algorithm to form visual words. Then, we train a Support Vector Machine classifier whose inputs are the bag-of-features distributions extracted earlier. We then perform object recognition using the SVM...
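The quantization step of such a bag-of-features pipeline, assuming the visual-word codebook has already been learned with k-means over SURF descriptors, can be sketched as follows; the resulting histograms would then be the SVM's input vectors. Function and variable names are illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors (n, d) against a visual-word
    codebook (k, d) and return a normalized bag-of-features
    histogram of length k."""
    # Squared distance from every descriptor to every codebook word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Each image thus becomes a fixed-length vector regardless of how many keypoints it contains, which is what makes a standard SVM applicable.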
Landmarks can be used as a reference to enable people or robots to localize themselves or to navigate in their environment. Automatic definition and extraction of appropriate landmarks from the environment has proven to be a challenging task when pre-defined landmarks are not present. We propose a novel computational model of automatic landmark detection from a single image without any pre-defined...
Virtual/mixed reality systems leveraging an encountered-type haptic display suffer difficulties when virtual and real objects are spatially discrepant. We propose a new method, visual guidance, for resolving this issue. The visual guidance algorithm is defined and described in detail, and contrasted with a previously explored approach. The feasibility of the proposed algorithm is experimentally verified.
The preparation and storage of cationic sizing agent and alum at the “Soluciones Máquinas” plant of the company Carvajal Pulpa y Papel Planta 1 causes fluctuations in the concentrations of the chemicals, at times resulting in poor paper quality with losses of up to USD 400 per ton. Currently, this process is carried out manually by an operator, without the intervention of systems...
The possibility of more intuitive human-machine interfaces has sparked the development of new visual technologies. The way humans interact with elements of their environment should not be limited to the screens of phones or computers. Other alternatives, which provide a sensation of spatial freedom, are under development. Projection systems, using continuous light on surrounding surfaces, represent a major...
We developed an optical distortion correction technique for an eyeglasses-type wearable device using a multi-mirror array (MMA). This wearable device is small and lightweight, but optics using an MMA can cause optical distortions, such as geometric distortion and chromatic aberration of magnification, that depend on the user's pupil distance and degrade the visibility of displayed virtual images. We...