Estimating the position and orientation (pose) of a moving platform in a three-dimensional (3D) environment is important in many areas, such as robotics and sensing. This task can be performed with a single sensor or with multiple sensors. Multi-sensor fusion has been used to improve estimation accuracy and to compensate for individual sensor deficiencies. Unlike the previous...
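The abstract above does not specify a fusion scheme, but the core idea of combining sensors with different noise characteristics can be sketched with simple inverse-variance weighting. The sensor names, values, and variances below are illustrative assumptions, not from the paper:

```python
import numpy as np

def fuse(est_a, var_a, est_b, var_b):
    """Combine two independent, unbiased estimates by inverse-variance
    weighting (per axis). The more certain sensor gets the larger weight."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always below both input variances
    return fused, fused_var

# Example: a noisy GNSS-like position fix and a tighter visual-odometry fix.
gps = np.array([10.2, 5.1, 1.0]); gps_var = np.array([4.0, 4.0, 9.0])
vo  = np.array([10.0, 5.0, 0.8]); vo_var  = np.array([1.0, 1.0, 1.0])

pos, var = fuse(gps, gps_var, vo, vo_var)
```

This is the scalar special case of the Kalman update; full fusion pipelines additionally model cross-axis correlations and sensor dynamics.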
We propose a DeepPose-based pose estimation system that is robust to changes in bounding-box range for top-view images. Our goal is to link a person detection system with a pose estimation system. We introduce Bounding-box Curriculum Learning (BCL) and Recurrent Pose Estimation (RPE). BCL is a CNN training technique inspired by Curriculum Learning. RPE is a recurrent process of pose estimation...
This paper reports on an optical visual fiducial system developed for relative-pose estimation of two ships at sea. Visual fiducials are ubiquitous in the robotics literature; however, none are specifically designed for use in outdoor lighting conditions. Blooming of the CCD causes a significant bias in the estimated pose of square tags that use the outer corners as point correspondences. In this paper,...
Underwater high-resolution 3D mapping requires very accurate sensor pose estimation to fuse the sensor readings over time into one consistent model. In the case of optical mapping systems, the needed accuracy can easily lie outside the specification of the Doppler Velocity Logs normally used for pose estimation by remotely operated and autonomous underwater vehicles. This is especially the case for...
Time delays are one of the most common problems when utilizing a visual sensor for pose estimation or navigation in aerial robotics. Such time delays can grow exponentially as a function of the scene's complexity and the size of the map in classical Simultaneous Localization and Mapping (SLAM) strategies. In this article, a robust reconfigurable control scheme against pose estimation induced...
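The abstract is truncated before its method is described; a minimal illustration of the underlying problem is first-order delay compensation, where a stale visual pose is propagated forward with the latest velocity estimate before it reaches the controller. All names and numbers here are assumptions for the sketch, not the article's scheme:

```python
import numpy as np

def compensate(delayed_pos, velocity, delay):
    """First-order prediction of the current position from a measurement
    that arrived `delay` seconds late (constant-velocity assumption)."""
    return delayed_pos + velocity * delay

stale = np.array([1.0, 2.0, 0.5])   # pose computed from an old image
vel   = np.array([0.4, 0.0, -0.1])  # m/s, e.g. from IMU integration
now   = compensate(stale, vel, delay=0.25)
```

Real schemes must also bound the prediction horizon, since the constant-velocity assumption degrades as the delay grows.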
Given the growing importance of lightweight production materials, increased automation is crucial. This paper presents a prototypical setup to obtain a precise pose estimate for an industrial manipulator in a realistic production environment. We show the achievable precision using only a standard fiducial marker system (AprilTag) and a state-of-the-art camera attached to the robot. The results...
This paper presents a real-time gesture-based human-robot interaction (HRI) interface for mobile and stationary robots. A human detection approach is used to estimate the entire 3D point cloud of a human being inside the field of view of a moving camera. Afterwards, the pose of the human body is estimated using an efficient self-organizing map approach. Furthermore, a hand-finger pose estimation approach...
There has been an increase in the number of video surveillance systems operating in public areas. Classical systems simply send the images to monitors. Nevertheless, there is a demand for making these systems more intelligent, asking them to automatically track objects or recognise people. One of the basic low-level tasks that these systems must face is the accurate deduction of the cameras'...
We present in this paper an original method to estimate the pose of a monocular camera while simultaneously modeling and capturing the elastic deformation of the object to be augmented. Our method tackles a challenging problem where ambiguities between rigid motion and non-rigid deformation are present. This issue represents a major obstacle to the establishment of an efficient surgical augmented reality...
The KinectFusion algorithm is now used routinely to reconstruct dense 3D surfaces at real-time frame rates using a commodity depth camera. To achieve robust pose estimation, the method performs frame-to-model tracking during camera tracking, which must inevitably accompany the memory-bound, GPU-assisted volumetric computations for model manipulation, to which mobile processors are often more...
This paper describes a method for calibrating non-overlapping cameras in a simple way: using markers on the cameras. By adding an AR (Augmented Reality) marker to a camera, we can find the transformation between the fixed AR marker and the camera's center. With such information, the relative pose of the cameras can easily be found as long as the markers located on them are visible. Our method consists of the...
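The chain of transformations this idea relies on can be sketched with 4x4 homogeneous matrices. The frame names, offsets, and angles below are illustrative assumptions, not values from the paper: an external observer sees the marker mounted on camera B, the marker-to-camera-B offset is known from a one-time measurement, and the relative pose is a product of transforms:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

T_obs_markerB  = hom(rotz(np.pi / 2), np.array([1.0, 0.0, 0.0]))  # observed marker pose
T_markerB_camB = hom(np.eye(3), np.array([0.0, -0.05, 0.02]))     # measured once
T_obs_camA     = hom(np.eye(3), np.array([0.0, 2.0, 0.0]))        # observer rig = camera A

# Relative pose of camera B expressed in camera A's frame:
T_camA_camB = np.linalg.inv(T_obs_camA) @ T_obs_markerB @ T_markerB_camB
```

The key property is that the non-overlapping cameras never need to see a common target; only the marker-to-camera offsets and one observer view are required.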
The main challenge in multi-view camera calibration is precise pose estimation of the cameras, especially when their fields of view have very little overlap. This work proposes a very accurate multi-view camera calibration method that does not require cameras to share their fields of view. A rotating stage is used to move the calibration target along a known trajectory, consisting of pure rotation, through...
Self-occlusion is a challenging problem in human pose estimation. In this paper we exploit a new cue to solve this problem: the torso orientation. We describe a technique to automatically detect self-occlusion in a training set without visibility labels. Given this prior information, we are able to jointly learn an occlusion-aware model to capture the pattern of self-occluded body parts. We...
In this paper, we introduce a vision-based localization algorithm that can accurately track responders during rescue operations in urban areas that are Global Navigation Satellite System (GNSS)-denied. The proposed algorithm works successfully with the rich visual features of an urban environment and obtains an average localization accuracy of 2.5 ft. In addition, we also provide a 3D representation...
This paper proposes several enhancements to the softPOSIT algorithm with applications to spacecraft pose estimation using a monocular camera. First, the proposed enhancements include a technique for reducing false matches that result from local-minimum trapping. Second, this paper provides two strategies for iteration control parameter initialization by using the trace of the correspondence distance,...
In this work, we present a novel RGB-D SLAM algorithm. The novelty of the proposed algorithm lies in the use of both feature points and plane patches for pose estimation. A plane patch is defined as a small-sized patch constructed by using a feature point with small curvature. The feature points with small curvature are called plane points. The remaining feature points are classified as either smooth...
Understanding semantic meaning from hand gestures is a challenging but essential task in human-robot interaction scenarios. In this paper we present a baseline evaluation of the Innsbruck Multi-View Hand Gesture (IMHG) dataset [1] recorded with two RGB-D cameras (Kinect). As a baseline, we adopt a probabilistic appearance-based framework [2] to detect a hand gesture and estimate its pose using two...
This paper presents a concept which tackles the pose estimation problem (extrinsic calibration) for distributed, non-overlapping multi-camera networks. The basic idea is to use a visual SLAM technique in order to reconstruct the scene from a video which includes areas visible to each camera of the network. The reconstruction consists of a sparse, but highly accurate, point cloud representing a joint...
This paper proposes multi-sensor-fusion-based pose estimation for UAVs (unmanned aerial vehicles) operating on ships. For take-off and landing of UAVs on ships, a novel artificial landmark is presented that yields an accurate, unique result and effectively eliminates pose ambiguity. IMU data are employed for pose estimation, and the EPnP algorithm is performed to get the initial estimation...
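The abstract mentions eliminating pose ambiguity but is cut off before the mechanism is explained. One common way to resolve the two-fold ambiguity of planar-marker PnP is to keep the candidate pose with the lower reprojection error; the sketch below illustrates that idea with an assumed pinhole model and made-up candidate poses (none of it is the paper's code):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of Nx3 world points with pose (R, t) and intrinsics K."""
    cam = pts3d @ R.T + t            # world -> camera frame
    uvw = cam @ K.T                  # camera -> homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]

def reproj_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between projected and observed points."""
    return np.mean(np.linalg.norm(project(K, R, t, pts3d) - pts2d, axis=1))

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
square = np.array([[-0.1, -0.1, 0], [0.1, -0.1, 0],
                   [0.1, 0.1, 0], [-0.1, 0.1, 0]])  # planar landmark corners (m)

# Candidate A: marker facing the camera, 1 m away. Candidate B: tilted copy.
R_a, t_a = np.eye(3), np.array([0.0, 0.0, 1.0])
c, s = np.cos(0.5), np.sin(0.5)
R_b, t_b = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]), np.array([0.0, 0.0, 1.0])

observed = project(K, R_a, t_a, square)   # pretend A is the true pose
best = min([(R_a, t_a), (R_b, t_b)],
           key=lambda rt: reproj_error(K, *rt, square, observed))
```

In practice the chosen candidate would then seed a refinement step, with the IMU attitude providing an additional consistency check.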
In this paper, we present a method of sensor data fusion to solve some problems in the robot navigation process. As is well known, a 2D laser range finder has the advantages of high precision and long range, and supports obstacle avoidance. It is very common to solve the problem of Simultaneous Localization and Mapping (SLAM) during navigation by using lasers. However, if we use a laser-only method to build our environment...