Agile robots, such as small Unmanned Aerial Vehicles (UAVs), can have a great impact on the automation of tasks such as industrial inspection and maintenance, or crop monitoring and fertilization in agriculture. Their deployability, however, relies on the UAV's ability to self-localize with precision and exhibit robustness to common sources of uncertainty in real missions. Here, we propose a new system...
In this paper, we present a monocular visual-inertial odometry algorithm which, by directly using pixel intensity errors of image patches, achieves accurate tracking performance while exhibiting a very high level of robustness. After detection, the tracking of the multilevel patch features is closely coupled to the underlying extended Kalman filter (EKF) by directly using the intensity errors as innovation...
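The core idea of using raw pixel intensity errors as the filter innovation can be sketched as follows. This is a simplified, single-level illustration: the function names, the bilinear sampling helper, and the patch size are assumptions for exposition, not the paper's actual multilevel implementation.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly interpolated intensity at sub-pixel location (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ax, ay = x - x0, y - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] +
            ax * (1 - ay) * img[y0, x0 + 1] +
            (1 - ax) * ay * img[y0 + 1, x0] +
            ax * ay * img[y0 + 1, x0 + 1])

def patch_intensity_innovation(ref_patch, image, center, patch_size=4):
    """Photometric innovation: intensity differences between a stored
    reference patch and the current image, sampled around the feature
    location predicted by the filter. The stacked error vector is what
    would be fed to the EKF update as the innovation."""
    half = patch_size // 2
    cx, cy = center
    errors = []
    for dy in range(-half, half):
        for dx in range(-half, half):
            predicted = bilinear_sample(image, cx + dx, cy + dy)
            reference = ref_patch[dy + half, dx + half]
            errors.append(predicted - reference)
    return np.array(errors)
```

If the predicted location is correct and lighting is unchanged, the innovation is near zero; a misaligned prediction produces a structured error vector that drives the state correction.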
Human gesture recognition is a dynamic field that has produced many different methods. The main way to improve the recognition process is to perform data fusion based on a qualification of each recognition method. Advances in data fusion also offer several solutions, and the choice of a fusion method is a crucial point. The goal of this paper is to present an approach where the choice of the fusion...
In this paper, we use sliding mode control theory to design a 3D vision based controller that is robust to bounded parametric estimation errors. First, we give a model of an eye-in-hand 6 DOF robotic system, including the Pose Reconstruction Algorithm (PRA) used to estimate the position and orientation of the camera with respect to the scene. In a second step, we propose a model for the uncertainties...
We present a novel catadioptric-stereo rig consisting of a coaxially-aligned perspective camera and two spherical mirrors with distinct radii in a “folded” configuration. We recover a nearly-spherical dense depth panorama (360°×153°) by fusing depth from optical flow and stereo. We observe that for motion in a horizontal plane, optical flow and stereo generate nearly complementary distributions of...
This paper reports an algorithm for the registration of images with low overlap and low visual feature density—a typical characteristic of down-looking underwater imagery. Our algorithm exploits locally accurate temporal motion-priors and pairwise image correspondences to aggregate semi-rigid sets of sequential images. These sets are then used to search for visual correspondences across sets instead...
We present a stereo vision-aided inertial navigation system and demonstrate its potential in power line inspection at close range using an unmanned aerial vehicle. This is made possible by recent developments in visual odometry and a newly proposed algorithm for the loose coupling of an inertial measurement unit and visual odometry. Our experiments show promising results.
A reliable estimation of heart surface motion is an important prerequisite for the synchronization of surgical instruments in robotic beating heart surgery. In general, only an imprecise description of the heart dynamics and measurement systems is available. This means that the estimation of heart motion is corrupted by stochastic and systematic uncertainties. Without consideration of these uncertainties,...
To be able to determine the position of a static object in 3D space by means of computer vision, it has to be seen by cameras from at least two different viewpoints. The same applies for measuring the position of a moving object based on images captured at one single time instant. However, if the cameras are not synchronized in time, or if a moving object is not visible in all images, one cannot...
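The need for two viewpoints can be made concrete with standard linear (DLT) triangulation: each view contributes two linear constraints on the homogeneous 3D point, so a single view leaves the depth undetermined. A minimal sketch, where the function name and matrix layout are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: image coordinates
    of the same point in each view. Each view yields two rows of the
    homogeneous system A X = 0; the solution is the right singular
    vector of A with the smallest singular value."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With only one view, A has two rows and a two-dimensional null space: the point is constrained to a ray, not a position, which is exactly why the second synchronized viewpoint is required.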
Sensor based robot control allows manipulation in dynamic and uncertain environments. Vision can be used to estimate 6-DOF pose of an object by model-based pose-estimation methods, but the estimate is not accurate in all degrees of freedom. Force offers a complementary sensor modality allowing accurate measurements of local object shape when the tooltip is in contact with the object. As force and...
Belief propagation methods are the state of the art for multisensor state localization problems. However, when localization applications have to deal with multimodal sensors whose functionality depends on the environment of operation, an inference framework is needed to identify confident and reliable sensors. Such a framework helps eliminate failed/non-functional sensors from...
The paper describes a robust method to extract 3D lines from stereo point clouds. This method combines 2D image information with 3D point clouds from a stereo camera. 2D lines are first extracted from the image in the stereo pair, followed by 3D line regression from the back-projected 3D point set of the image points in the detected 2D lines. In this paper, random sample consensus (RANSAC) is used...
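The RANSAC-based 3D line regression mentioned above can be sketched generically as follows. The thresholds, iteration count, and function names are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def ransac_line_3d(points, iters=200, inlier_thresh=0.05, rng=None):
    """Fit a 3D line to a noisy point set with RANSAC.
    points: (N, 3) array of back-projected 3D points.
    Returns (point_on_line, unit_direction)."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(iters):
        # hypothesize a line from two randomly sampled points
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # point-to-line distances: ||(x - p) - ((x - p)·d) d||
        rel = points - p
        dist = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on the consensus set with a least-squares (PCA) fit
    sel = points[best_inliers]
    centroid = sel.mean(axis=0)
    _, _, Vt = np.linalg.svd(sel - centroid)
    return centroid, Vt[0]
```

The random sampling makes the fit robust to the outliers that stereo back-projection typically produces near depth discontinuities.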
The path followed by a mobile robot while mapping an environment (i.e. an exploration trajectory) plays a large role in determining the efficiency of the mapping process and the accuracy of any resulting metric map of the environment. This paper examines some important aspects of path planning in this context: the trade-offs between the speed of the exploration process versus the accuracy of resulting...
This paper proposes a method for augmenting the information of a monocular camera and a range finder. This method is a valuable step towards solving the SLAM problem in unstructured environments, free from the problems of using encoders' data. The proposed algorithm allows the robot to benefit from a feature-based map for filtering purposes, while it exploits an accurate motion model, based on point-wise...
iTASC (acronym for 'instantaneous task specification and control') by J. De Schutter (2007) is a systematic constraint-based approach to specify complex tasks of general sensor-based robot systems. iTASC integrates both instantaneous task specification and estimation of geometric uncertainty in a unified framework. Automatic derivation of controller and estimator equations follows from a geometric...
For large-scale environments, a novel metric-topological 3D map is proposed in our vision-based self-localization system. Based on probabilistic line elements with directional information, the local metric map is built using different feature levels. Then, adjacent local metric maps are connected by topological structures. We design a nonlinear camera model which propagates directional...
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision offers a low-cost sensor modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability. This paper addresses three problems: uncertain visual measurements, different sampling rates and compensation of the sensor delay. To alleviate these problems, an approach for visual...
A new model validation approach to the motion segmentation problem is proposed. To demonstrate the proposed method, we study the motion segmentation problem for a mobile wheeled robot. Experiments were carried out with a Pioneer 3 mobile robot and a stationary camera.