This paper presents a navigation method that enables an autonomous mobile robot to localize itself and identify its own orientation in order to follow a path in the environment. Both tasks use the same recognition method. The method is based on feature data extracted from an image captured by a single camera and learned by a neural network. No precise or accurate measurement is used in the...
We propose a visual recognition system for robotic applications in which the distance to the visual objects can change considerably (for instance, recognizing a distant object learned from a short distance). Our system takes advantage of a single pan-tilt camera controllable in zoom and focus. Focus control allows the detection of planes of sharpness in the scene and, indirectly, the computation of distance. Hence,...
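The focus-to-distance idea above can be sketched with the thin-lens equation 1/f = 1/d_o + 1/d_i: once autofocus finds the image distance d_i that maximizes sharpness, the object distance d_o follows. This is a minimal illustration of the general principle, not the paper's actual algorithm; the function name and numeric values are assumptions.

```python
# Illustrative thin-lens sketch: estimate object distance from the
# in-focus image distance (all values in consistent units, e.g. mm).

def object_distance(focal_length, image_distance):
    """Thin-lens estimate: 1/f = 1/d_o + 1/d_i, solved for d_o."""
    return 1.0 / (1.0 / focal_length - 1.0 / image_distance)

# e.g. a 50 mm lens in focus with the sensor at 51 mm:
# object_distance(50.0, 51.0) -> ~2550 mm
```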
Visual odometry is a new navigation technology using video data. For long-range navigation, an intrinsic problem of visual odometry is the appearance of drift. The drift is caused by error accumulation, as visual odometry is based on relative measurements, and will grow unboundedly with time. The paper first reviews algorithms which adopt various methods to suppress this drift. However, as far as...
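The unbounded drift described above follows directly from integrating relative measurements: each step carries a small error, and summing the steps accumulates the errors. A toy simulation (not from the paper; names and the constant bias are illustrative) makes this concrete.

```python
# Toy illustration of visual-odometry drift: integrating biased
# relative step measurements yields an absolute error that grows
# with trajectory length.

def integrate_odometry(true_steps, bias=0.01):
    """Integrate relative steps, each corrupted by a small bias; return per-step error."""
    estimate, truth = 0.0, 0.0
    errors = []
    for step in true_steps:
        truth += step
        estimate += step + bias   # measured relative motion is slightly off
        errors.append(abs(estimate - truth))
    return errors

errors = integrate_odometry([1.0] * 100)
# Error after n steps is roughly n * bias: it grows without bound.
```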
In this paper, we introduce a new vision-based method for robot navigation and human tracking. For robot navigation, we convert the captured image into a binary one, which, after partitioning, is used as the input of the neural controller. The neural control system, which maps the visual information to motor commands, is evolved online using real robots. For human tracking, after face detection, the...
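The binarize-then-partition preprocessing described above can be sketched as follows: threshold the image to binary, split it into a coarse grid, and feed each cell's fill ratio to the controller. The function name, grid size, and threshold are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the preprocessing: threshold to binary, partition
# into a grid, and use per-cell bright-pixel ratios as the neural
# controller's input vector.

def binary_grid_input(image, threshold=128, rows=4, cols=4):
    """image: 2-D list of grayscale values; returns rows*cols features in [0, 1]."""
    h, w = len(image), len(image[0])
    binary = [[1 if px > threshold else 0 for px in row] for row in image]
    features = []
    for r in range(rows):
        for c in range(cols):
            cell = [binary[y][x]
                    for y in range(r * h // rows, (r + 1) * h // rows)
                    for x in range(c * w // cols, (c + 1) * w // cols)]
            features.append(sum(cell) / len(cell))  # fraction of bright pixels
    return features
```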
We describe a system which follows “trails” for autonomous outdoor robot navigation. Through a combination of visual cues provided by stereo omnidirectional color cameras and ladar-based structural information, the algorithm is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is simply modeled as...
In this paper, we consider the problem of realizing a vision-based navigation task. To this aim, we present an algorithm that automatically computes the necessary reference visual features. This algorithm relies on a predictor/estimator pair able to determine the visual features' depth sufficiently rapidly with respect to the control law sampling period. The proposed method can then be used on-line...
This paper presents a method for the automatic observation of unknown objects. We aim at finding the position of the targeted object and at capturing multiple views of its shape using a single eye-in-hand camera. The main goal is modeling the unknown object via 3D reconstruction before grasping and manipulating it with a robot hand. The proposed method built over a visual servo loop uses simple features...
We address the problem of vehicle (mobile robot) navigation by combining visual-based reconstruction and localization with metrical information given by proprioceptive sensors such as the odometry sensor. The proposed approach extends a navigation system based on monocular vision which is able to build a map and localize the vehicle in real time using only one camera. An extended Kalman...
With advances in wearable computing, wearable virtual reality and wearable robotics have become popular areas of research. In the present study, we consider the case in which an expert in first aid treatment at a remote critical care center receives an image from an assistant/cooperator who is with a patient, i.e., the sharing of visual information between an expert and an assistant. A head-mounted...
We propose an algorithm for generating navigation summaries. Navigation summaries are a specialization of video summaries, where the focus is on video collected by a mobile robot on a specified trajectory. We are interested in finding a few images that epitomize the visual experience of a robot as it traverses a terrain. This paper presents a novel approach to generating summaries in the form of a set...
We combine a visual odometry system with an aided inertial navigation filter to produce a precise and robust navigation system that does not rely on external infrastructure. Incremental structure from motion with sparse bundle adjustment using a stereo camera provides real-time highly accurate pose estimates of the sensor which are combined with six degree-of-freedom inertial measurements in an Extended...
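The fusion idea above — inertial prediction corrected by visual pose measurements — can be reduced to a 1-D scalar sketch of the Extended Kalman Filter's two steps. This is the scalar analogue only; the paper's filter operates on six-degree-of-freedom states, and all names and values here are illustrative.

```python
# Simplified 1-D Kalman filter sketch: predict from an inertial
# increment, then correct with a visual-odometry measurement.

def kf_predict(x, p, u, q):
    """Propagate state x with inertial input u; q is process-noise variance."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Correct with a visual pose measurement z; r is measurement variance."""
    k = p / (p + r)                    # Kalman gain
    return x + k * (z - x), (1 - k) * p
```

In the fused estimate, the update step always shrinks the uncertainty p, which is what keeps the combined system more precise than either sensor alone.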
This paper describes a novel approach for purely vision based mobile robot navigation. The visual obstacle avoidance and corridor following behavior rely on the segmentation of the traversable floor region in the omnidirectional robocentric view. The image processing employs a supervised approach in which the segmentation optimal with respect to the appearance of the local environment is determined...
This paper investigates stabilizing receding horizon control via an image space navigation function for a three-dimensional (3-D) visual feedback system. Firstly, a brief summary of a visual motion observer is given. Next, a visual motion error system is reconstructed in order to apply to time-varying desired motion. Then, visual motion observer-based stabilizing receding horizon control for the 3-D...
Visual maps of the seafloor should ideally provide the ability to measure individual features of interest in real units. Two-dimensional photomosaics cannot provide this capability without making assumptions that often fail over 3-D terrain, and are generally used for visualization, but not for measurement. Full 3-D structure can be recovered using stereo vision, structure from motion (SFM), or simultaneous...
This work describes a robot visual homing model that employs, for the first time, the conjugate gradient Temporal Difference (TD-conj) method. TD-conj was proved to be equivalent to a gradient TD method with a variable λ, denoted as (TD(λt(conj))), when both are used with function approximation techniques. This fact is employed in the model to improve its performance. Based on visual input that is...
This paper presents a visual navigation method based on guideline visual recognition for a wheeled robot that inspects equipment and instruments in an unattended substation. The wheeled robot collects road information via a camera and recognizes the guideline. According to the deviation between the guideline's actual and intended locations, PID control adjusts the left and right wheels' speeds...
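The deviation-to-wheel-speed correction described above can be sketched as a PID loop on the guideline's lateral offset. This is a minimal sketch under stated assumptions: the class name, gains, sign convention, and units (pixel deviation, arbitrary speed units) are all illustrative, not from the paper.

```python
# Minimal PID sketch of guideline following: the lateral deviation of
# the detected guideline drives a speed differential between wheels.

class GuidelinePID:
    def __init__(self, kp=0.5, ki=0.01, kd=0.1, base_speed=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.base_speed = base_speed
        self.integral = 0.0
        self.prev_error = 0.0

    def wheel_speeds(self, deviation, dt=0.1):
        """deviation: guideline's actual minus intended position (e.g. pixels)."""
        self.integral += deviation * dt
        derivative = (deviation - self.prev_error) / dt
        self.prev_error = deviation
        correction = (self.kp * deviation
                      + self.ki * self.integral
                      + self.kd * derivative)
        # Assumed sign convention: positive deviation slows the left wheel
        # and speeds up the right one, steering back toward the line.
        return self.base_speed - correction, self.base_speed + correction
```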
This paper describes a visual perception system which allows a social robot to conduct several tasks. The central part of this system is an artificial attention mechanism which is able to discriminate the most relevant information from all the visual information perceived by the robot. This attention mechanism is composed of three modules or stages. At the preattentive stage, a set of uniform blobs...
Precise calibration of camera intrinsic and extrinsic parameters, while often useful, is difficult to obtain during field operation and presents scaling issues for multi-robot systems. We demonstrate a vision-based approach to navigation that does not depend on traditional camera calibration, and present an algorithm for guiding a robot through a previously traversed environment using a set of uncalibrated...
Vision-based robotic applications such as Simultaneous Localization and Mapping (SLAM), global localization, and autonomous navigation have suffered from problems related to dynamic environments involving moving objects and kidnapping. One of the possible solutions to these problems is to establish robust correspondences when obtaining images from static scenes. Therefore we propose an efficient technique...