This paper presents a flexible approach for calibrating omnidirectional single-viewpoint sensors from planar grids. These sensors are increasingly used in robotics, where accurate calibration is often a prerequisite. Current approaches in the field are either based on theoretical properties, neglecting important factors such as misalignment and camera-lens distortion, or are over-parametrised...
This paper describes view planning for multiple cameras tracking multiple persons for surveillance purposes. When only a few active cameras are used to cover a wide area, planning their views is an important issue in realizing a competent surveillance system. We develop a multi-start local search (MLS)-based planning method which iteratively selects fixation points of the cameras by which the...
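The abstract names multi-start local search (MLS) as the planning method. A generic MLS skeleton, run here on a hypothetical one-dimensional toy objective rather than the paper's actual fixation-point model, might look like:

```python
import random

def multi_start_local_search(score, neighbors, random_solution,
                             n_starts=20, seed=0):
    """Generic multi-start local search: hill-climb from several
    random initial solutions and keep the best local optimum found."""
    rng = random.Random(seed)
    best_sol, best_val = None, float("-inf")
    for _ in range(n_starts):
        sol = random_solution(rng)
        improved = True
        while improved:                      # greedy hill climbing
            improved = False
            for cand in neighbors(sol):
                if score(cand) > score(sol):
                    sol, improved = cand, True
                    break
        if score(sol) > best_val:            # keep best restart
            best_sol, best_val = sol, score(sol)
    return best_sol, best_val

# Toy stand-in for a camera-placement objective: an integer "fixation
# point" on a bumpy score landscape with a spurious local optimum.
score = lambda x: -(x - 7) ** 2 + (3 if x % 5 == 0 else 0)
neighbors = lambda x: [x - 1, x + 1]
random_solution = lambda rng: rng.randint(-50, 50)
```

The restarts are what let the search escape the spurious local optimum that a single hill climb can get stuck in.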
In this work, we address the relative pose problem in our structured light system. Assuming that there is an arbitrary planar structure in the scene, we propose a method for estimating the rotation matrix and translation vector between the camera and the projector. In this system, the camera's focal length is allowed to vary and can be recovered without any further assumptions. Finally, we...
This paper describes a method that requires only a single projected point to calibrate a pair of cameras attached to pan-tilt units (PTUs) in a hand-eye robot configuration. Most existing calibration methods require either a set of known calibration locations or a large number of corresponding image features. These requirements usually imply human intervention, which complicates the calibration procedure...
This paper describes a method of mirror localization to calibrate a catadioptric imaging system. Even though the calibration of a catadioptric system includes the estimation of various parameters, in this paper we focus on the localization of the mirror. Since some previously proposed methods assume a single-viewpoint system, they impose strong restrictions on the position and shape of the mirror....
We describe a fast method to relocalise a monocular visual SLAM (simultaneous localisation and mapping) system after tracking failure. The monocular SLAM system stores the 3D locations of visual landmarks, together with a local image patch. When the system becomes lost, candidate matches are obtained using correlation, then the camera pose is estimated via an efficient implementation of RANSAC...
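The hypothesise-and-verify loop that RANSAC contributes to such a relocaliser can be sketched generically. In this illustrative sketch a minimal 2-point solver for a 2D rigid transform stands in for the camera-pose solver (the actual system solves a full 6-DoF pose from 3D landmark matches):

```python
import math
import random

def estimate_rigid_2d(src_pair, dst_pair):
    """Minimal 2-point solver: rotation angle from the direction change
    of the segment joining the two points, then the translation."""
    (sx0, sy0), (sx1, sy1) = src_pair
    (dx0, dy0), (dx1, dy1) = dst_pair
    theta = (math.atan2(dy1 - dy0, dx1 - dx0)
             - math.atan2(sy1 - sy0, sx1 - sx0))
    c, s = math.cos(theta), math.sin(theta)
    return theta, dx0 - (c * sx0 - s * sy0), dy0 - (s * sx0 + c * sy0)

def apply_model(model, p):
    theta, tx, ty = model
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac(matches, n_iters=200, thresh=0.05, seed=0):
    """matches: list of (src_point, dst_point) candidate pairs,
    possibly contaminated with outliers. Returns the model with the
    largest inlier set and that inlier set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        i, j = rng.sample(range(len(matches)), 2)   # minimal sample
        model = estimate_rigid_2d(
            (matches[i][0], matches[j][0]),
            (matches[i][1], matches[j][1]))
        inliers = [m for m in matches
                   if math.dist(apply_model(model, m[0]), m[1]) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The structure — sample a minimal set, fit, count inliers, keep the best — carries over unchanged when the model is a 6-DoF camera pose.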
By active stereo we mean a stereo vision system that allows independent panning and tilting of each of the two cameras. One advantage of active stereo over regular stereo is its wider effective field of view: if an object is too close to the camera baseline, its depth can still be estimated accurately by panning the cameras appropriately. Another advantage of...
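The depth-from-panning idea can be illustrated with the simplest possible geometry. This is a sketch under assumed conventions (cameras on the x-axis, pan measured from each forward axis), not the paper's calibration model; a real system must also handle tilt, lens distortion and angle noise:

```python
import math

def triangulate_from_pan(baseline, pan_left, pan_right):
    """Recover the (x, z) position of a fixated point from the pan
    angles of two verging cameras. Cameras sit at (0, 0) and
    (baseline, 0), both nominally looking down +z; pan angles are
    measured from each camera's forward (+z) axis, positive toward +x.
    For a point (x, z): tan(pan_left) = x/z and
    tan(pan_right) = (x - baseline)/z, so
    z = baseline / (tan(pan_left) - tan(pan_right))."""
    denom = math.tan(pan_left) - math.tan(pan_right)
    if abs(denom) < 1e-12:
        raise ValueError("parallel rays: point effectively at infinity")
    z = baseline / denom
    return z * math.tan(pan_left), z
```

For example, a point at (0.1, 0.5) seen over a 0.3-unit baseline gives pan angles atan2(0.1, 0.5) and atan2(-0.2, 0.5), from which the position is recovered exactly.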
Camera ego-motion consists of translation and rotation, and the rotation can be estimated from distant features alone. We present a robust rotation-estimation method using distant features given by our compound omnidirectional sensor. Features are detected by a conventional feature detector, and distant features are then identified by checking whether they lie at infinity in the omnidirectional image of the compound sensor...
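Once features identified as distant are available, the change in their bearing directions constrains only the rotation. A standard way to solve the resulting alignment problem (Wahba's problem via SVD, i.e. the Kabsch algorithm — shown here as a generic sketch, not necessarily the paper's exact estimator) is:

```python
import numpy as np

def rotation_from_bearings(before, after):
    """Least-squares rotation R such that R @ before[i] ~ after[i],
    for unit bearing vectors of distant features.
    before, after: (n, 3) arrays of unit vectors (n >= 2,
    not all collinear)."""
    H = before.T @ after                 # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    # The diag(1, 1, d) factor guards against a reflection solution.
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Because the features are (approximately) at infinity, translation does not move their bearings, so this estimate is unaffected by the unknown camera translation.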
Flash LADAR cameras based on continuous-wave, time-of-flight range measurement deliver fast 3D imaging for robot applications including mapping, localization, obstacle detection and object recognition. The accuracy of the range values produced depends on characteristics of the scene as well as dynamically adjustable operating parameters of the cameras. In order to optimally set these parameters during...
Robot egomotion can be estimated from an acquired video stream up to the scale of the scene. To remove this uncertainty (and obtain true egomotion), a distance within the scene needs to be known. If no a priori knowledge of the scene is assumed, the usual solution is to derive "in some way" the initial distance from the camera to a target object. This paper proposes a new, very simple way...
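The scale fix itself is a one-liner once any single distance is known both in scene units and in metres; a minimal sketch (illustrative names, not the paper's API):

```python
def rescale_egomotion(translations, estimated_distance, known_distance_m):
    """Convert up-to-scale egomotion to metric units.
    translations:       list of (x, y, z) camera translations in the
                        reconstruction's arbitrary scene units.
    estimated_distance: a distance measured in those scene units.
    known_distance_m:   the same distance, known in metres."""
    s = known_distance_m / estimated_distance
    return [(s * x, s * y, s * z) for (x, y, z) in translations]
```

For instance, if the reconstruction places a target 2.0 scene units away but the target is known to be 1.0 m away, every estimated translation is simply halved.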
This paper presents an algorithm which can effectively constrain inertial navigation drift using monocular camera data. It is capable of operating in unknown and large-scale environments and assumes no prior knowledge of the size, appearance or location of potential environmental features. Low-cost inertial navigation units are found on most autonomous vehicles and a large number of smaller robots...
Calibration techniques allow the estimation of the intrinsic parameters of a camera. This paper describes an adaptive visual servoing scheme which employs the visual data measured during the task to determine the camera intrinsic parameters. This approach is based on the virtual visual servoing approach. However, in order to increase the robustness of the calibration, several aspects have been introduced...
In this paper, we propose an original approach to control camera position and/or lighting conditions in an environment using image gradient information. Our goal is to ensure a good viewing condition and good illumination of an object to perform vision-based tasks (recognition, tracking, etc.). Within the visual servoing framework, we propose solutions to two different issues: maximizing the brightness...
This paper presents a hybrid decoupled vision-based control scheme valid for the entire class of central catadioptric sensors (including conventional perspective cameras). First, we consider the structure from motion problem using imaged 3D points. Geometrical relationships are exploited to enable a partial Euclidean reconstruction by decoupling the interaction between translation and rotation components...
This paper presents a system which combines single-camera SLAM (simultaneous localization and mapping) with established methods for feature recognition. Besides using standard salient image features to build an on-line map of the camera's environment, this system is capable of identifying and localizing known planar objects in the scene, and incorporating their geometry into the world map. Continued...