Estimating the 6-DoF pose of a camera from a single image relative to a pre-computed 3D point set is an important task for many computer vision applications. Perspective-n-Point (PnP) solvers are routinely used for camera pose estimation, provided that a good-quality set of 2D-3D feature correspondences is known beforehand. However, finding optimal correspondences between 2D keypoints and a 3D point set...
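As an illustration of the pose estimation problem this abstract refers to (not the paper's own solver), here is a minimal Direct Linear Transform sketch that recovers [R|t] from 2D-3D correspondences, assuming calibrated (normalized) image coordinates and noise-free synthetic data:

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """Direct Linear Transform: recover [R|t] from n >= 6 correspondences
    between 3D points and normalized (calibrated) image coordinates."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = np.append(pts3d[i], 1.0)      # homogeneous 3D point
        u, v = pts2d[i]
        # u * (p3 . X) = p1 . X   and   v * (p3 . X) = p2 . X
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = -u * X
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = -v * X
    # The null vector of A (smallest singular value) holds the 12 entries of P.
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    U, S, Vt = np.linalg.svd(P[:, :3])
    R = U @ Vt                        # nearest orthonormal matrix
    if np.linalg.det(R) < 0:          # resolve the global sign ambiguity
        R, P = -R, -P
    t = P[:, 3] / S.mean()            # undo the arbitrary DLT scale
    return R, t

# Synthetic check: known pose, noise-free projections of points in front of the camera.
rng = np.random.default_rng(0)
cx, sx = np.cos(0.1), np.sin(0.1)
R_true = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
t_true = np.array([0.1, -0.2, 0.3])
pts3d = rng.uniform([-2, -2, 4], [2, 2, 8], size=(8, 3))
cam = pts3d @ R_true.T + t_true             # points in the camera frame
pts2d = cam[:, :2] / cam[:, 2:3]            # normalized image coordinates
R_est, t_est = dlt_pnp(pts3d, pts2d)
```

With exact correspondences the recovered pose matches the ground truth to numerical precision; real PnP pipelines add minimal solvers and robust sample consensus to handle the outliers this abstract alludes to.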
This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline that estimates the pose of the sensor by registering the projection of a 3D semi-dense map of a reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour...
Low-drift rotation estimation is a crucial part of any accurate odometry system. In this paper, we focus on the problem of 3D rotation estimation with dense depth sensors in environments that consist of piece-wise planar structures, such as corridors and office rooms. An efficient mean-shift paradigm is developed to extract and track planar modes in the surface normal vector distribution on the unit...
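The mean-shift idea mentioned here can be sketched in isolation: modes of the surface-normal distribution on the unit sphere correspond to dominant plane orientations. The following is an illustrative toy implementation with a uniform angular kernel and synthetic noisy normals, not the paper's method; the bandwidth value and the seeding are assumptions:

```python
import numpy as np

def mean_shift_mode(normals, seed, bandwidth=0.3, iters=30):
    """Shift a seed direction to a mode of the unit-sphere normal distribution.
    bandwidth is the angular radius (radians) of the uniform kernel."""
    m = seed / np.linalg.norm(seed)
    cos_bw = np.cos(bandwidth)
    for _ in range(iters):
        inliers = normals[normals @ m > cos_bw]   # normals within the kernel
        if len(inliers) == 0:
            break
        m_new = inliers.mean(axis=0)
        m_new /= np.linalg.norm(m_new)            # re-project onto the sphere
        converged = np.dot(m_new, m) > 1 - 1e-12
        m = m_new
        if converged:
            break
    return m

# Two noisy clusters of normals, e.g. a floor and a wall at 90 degrees.
rng = np.random.default_rng(0)
true_modes = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
normals = np.vstack([d + 0.05 * rng.normal(size=(200, 3)) for d in true_modes])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
mode_z = mean_shift_mode(normals, np.array([0.2, 0.1, 1.0]))
mode_x = mean_shift_mode(normals, np.array([1.0, 0.2, 0.1]))
```

Tracking such modes across frames constrains the sensor's rotation, which is how this family of methods achieves low-drift rotation estimates in piecewise-planar scenes.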
It is well-known that the relative pose problem can be generalized to non-central cameras. We present a further generalization, denoted the generalized relative pose and scale problem. It has surprising importance for classical problems such as solving similarity transformations for view-graph concatenation in hierarchical structure from motion and loop-closure in visual SLAM, both posed as a 2D-2D...
OpenGV is a new C++ library for calibrated real-time 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level...
Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If additionally realized at small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information...
This paper introduces two novel solutions to the generalized-camera exterior orientation problem, which has a vast number of potential applications in robotics: (i) a minimal solution requiring only three point correspondences, and (ii) gPnP, an efficient, non-iterative n-point solution with linear complexity in the number of points. Already existing minimal solutions require exhaustive algebraic...
In this video, we present our latest results towards fully autonomous flights with a small helicopter. Using a monocular camera as the only exteroceptive sensor, we fuse inertial measurements to achieve a self-calibrating power-on-and-go system, able to perform autonomous flights in previously unknown, large, outdoor spaces. Our framework achieves Simultaneous Localization And Mapping (SLAM) with...
The SFly project is an EU-funded project whose goal is to create a swarm of autonomous, vision-controlled micro aerial vehicles. The envisioned mission is that a swarm of MAVs autonomously maps out an unknown environment, computes optimal surveillance positions, places the MAVs there, and then locates radio beacons in this environment. The scope of the work includes contributions on multiple different...
In this paper, we present a framework for 6D absolute scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular Visual Odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via...
In this work, we present a novel, deterministic closed-form solution for computing the scale factor and the gravity direction of a moving, loosely-coupled, monocular visual-inertial system. The methodology is based on analysing delta-velocities. On one hand, these are obtained by differentiating the up-to-scale camera poses computed by a visual odometry or visual SLAM algorithm. On the...
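The linear structure behind this kind of delta-velocity alignment can be illustrated with a small sketch (not the paper's closed form): if the true velocity is the visual up-to-scale velocity times an unknown scale s, and the integrated accelerometer delta-velocity still contains gravity g, each interval k gives the linear constraint s * dv_vis_k - dt_k * g = du_imu_k in the four unknowns (s, g). The frame conventions and noise-free synthetic data below are assumptions:

```python
import numpy as np

def estimate_scale_gravity(dv_vis, du_imu, dt):
    """Least-squares solve of  s * dv_vis_k - dt_k * g = du_imu_k  for s and g.
    dv_vis: (K,3) up-to-scale visual delta-velocities (world frame)
    du_imu: (K,3) integrated accelerometer delta-velocities (gravity not removed)
    dt:     (K,)  interval lengths in seconds"""
    K = len(dt)
    A = np.zeros((3 * K, 4))
    b = du_imu.reshape(-1)
    for k in range(K):
        A[3 * k:3 * k + 3, 0] = dv_vis[k]            # coefficient of the scale s
        A[3 * k:3 * k + 3, 1:4] = -dt[k] * np.eye(3)  # coefficients of gravity g
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:4]

# Synthetic, noise-free example with known scale and gravity.
rng = np.random.default_rng(1)
s_true = 2.5
g_true = np.array([0.0, 0.0, -9.81])
dv_true = rng.normal(size=(5, 3))                 # true metric delta-velocities
dt = rng.uniform(0.05, 0.2, size=5)
dv_vis = dv_true / s_true                         # what up-to-scale VO would report
du_imu = dv_true - g_true * dt[:, None]           # what integrating the IMU would give
s_est, g_est = estimate_scale_gravity(dv_vis, du_imu, dt)
```

Two intervals already give six equations for the four unknowns; stacking more intervals averages out noise in practice.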
The objective of this paper is the full 6D relative localization of mobile devices, and direct robot-robot localization in particular. We present a novel relative localization system that consists of two complementary modules: a monocular vision module and a target module with four active or passive markers. The core localization algorithm running on the modules determines the marker positions in...
This work presents a method for estimating the egomotion of an aerial vehicle in challenging industrial environments. It combines binocular visual and inertial cues in a tightly-coupled fashion and operates in real time on an embedded platform. An extended Kalman filter fuses the measurements and makes motion estimation rely more on inertial data if the visual feature constellation is degenerate. Errors in...
This paper presents a closed-form solution for metric velocity estimation of a single camera using inertial measurements. It combines accelerometer and attitude measurements with feature observations in order to compute both the distance to the feature and the speed of the camera inside the camera frame. Notably, we show that this is possible by just using three consecutive camera positions and a...