This paper proposes a method for vision-based pose estimation of randomly piled objects. Precise estimation of the rotation angle of the object to be picked is required. This is a non-trivial task, however, because an object lying in an arbitrary position produces an image that is distorted relative to the image of the object in its canonical position. We propose a precise pose estimation method for bin-picking objects. The landmark feature of a picking object is extracted...
In this paper, a method combining the PnP and OI algorithms is developed to measure the pose of a humanoid robot with high precision so that it can play table tennis. The PnP-based algorithm is employed to obtain a rough pose, which is taken as the initial value of the OI algorithm. The OI algorithm then optimizes the result to ensure an orthogonal orientation matrix for the pose. Considering the real-time,...
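The guarantee of an orthogonal orientation matrix mentioned above is typically obtained by projecting an intermediate estimate back onto the rotation group via SVD, as in the object-space iteration of the OI algorithm. The sketch below shows only that orthogonalization step in numpy; the PnP initialization and the full iterative refinement are omitted, so this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def nearest_rotation(M):
    """Project an approximate 3x3 matrix onto SO(3) via SVD.

    Given a noisy or non-orthogonal orientation estimate M, return the
    closest proper rotation matrix (orthogonal, determinant +1).
    """
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # flip one axis to enforce det(R) = +1
        U[:, -1] *= -1
        R = U @ Vt
    return R

# usage: a perturbed rotation estimate is snapped back onto SO(3)
noisy = np.eye(3) + 0.05 * np.random.default_rng(0).standard_normal((3, 3))
R = nearest_rotation(noisy)
print(np.allclose(R @ R.T, np.eye(3)))  # R is exactly orthogonal again
```

The SVD projection is what distinguishes this kind of refinement from a plain least-squares update, which would drift away from a valid rotation.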
This paper addresses the problem of mapping three dimensional environments from a sequence of images taken by a calibrated camera, and simultaneously generating the camera motion trajectory. This is the Monocular SLAM problem in robotics, and is akin to the Structure from Motion (SFM) problem in computer vision. We present a novel map-aided 6-DOF relative pose estimation method based on a new formulation...
In this paper a new approach for robot control using position based visual servoing (PBVS) with an omnidirectional multi-camera system is presented. PBVS requires the explicit calculation of the position and orientation of the robot tool. Given only images without depth information, either additional geometric properties of the observed scene or stereo correspondences have to be provided for pose...
The aim of this paper is to propose a technology that allows people to control robots through everyday gestures, without wearable sensors or hand-held controllers. The hand pose estimation we propose reduces the number of image features per data set to 64, which makes the construction of a large-scale database possible. This has also made it possible to estimate the 3D hand poses of unspecified users with individual...
Most of the large dataset collections available to researchers working on the Simultaneous Localization and Mapping problem were recorded with sensors such as wheel encoders and laser range finders mounted on ground robots. The recent growing interest in visual pose estimation with cameras mounted on micro-aerial vehicles, however, has made these datasets less useful. In this paper, we describe our...
The objective of this paper is to develop a localization system for cooperative multiple mobile robots, in which each robot is assumed to observe a set of known landmarks and to be equipped with an omnidirectional camera. If only a limited number of landmarks are available, the system suffers from low accuracy. In this paper, it is assumed that a robot can detect other robots by using the omnidirectional camera...
This paper discusses a perceptual system for intelligent robots. Robots should be able to perceive environments flexibly enough to realize intelligent behavior. We focus on a perceptual system based on the perceiving-acting cycle discussed in ecological psychology. The perceptual system we have proposed consists of a retinal model and a spiking-neural network realizing the perceiving-acting cycle...
This paper presents a novel solution to the autonomy of a Portable Robotic Device (PRD) for the visually impaired. The proposed method meets the PRD's requirement of providing 3D navigational information with a small-sized device. The proposed approach is to employ a 3D imaging sensor, the SwissRanger SR4000, for both pose estimation and perception. The SR4000 produces both intensity and range images...
Future normally-unmanned oil platforms offer potentially significantly lower commissioning and operation costs than their current manned counterparts. The ability to initiate and perform remote inspection and maintenance (I&M) operations is crucial for maintaining such platforms. This paper presents a system solution, including key components such as a 3D robot vision system, a robot tool and...
This paper presents a method of 3D localization using image edge-points detected from binocular stereo image sequences. The proposed method calculates camera poses using visual odometry, and updates the poses by reducing the accumulated errors using landmark recognition. Landmark recognition is done based on robust and scalable image-retrieval using image edge-points with SIFT descriptors and a vocabulary...
We present the Viewpoint Feature Histogram (VFH), a descriptor for 3D point cloud data that encodes geometry and viewpoint. We demonstrate experimentally on a set of 60 objects captured with stereo cameras that VFH can be used as a distinctive signature, allowing simultaneous recognition of the object and its pose. The pose is accurate enough for robot manipulation, and the computational cost is low...
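Using a histogram descriptor such as the VFH as a "distinctive signature" amounts to a nearest-neighbor lookup over stored histograms, one entry per object and viewpoint, so that the best match yields both identity and pose. The sketch below illustrates that lookup pattern in numpy; the 308-bin descriptor length and the chi-square distance are common choices for VFH-style histograms, not details stated in the abstract.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram descriptors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def recognize(query, database):
    """Return the (object_id, pose_id) key of the closest stored signature."""
    return min(database, key=lambda key: chi_square(query, database[key]))

# toy database: one 308-bin signature per (object, viewpoint) pair
rng = np.random.default_rng(1)
db = {("mug", 0): rng.random(308),
      ("mug", 1): rng.random(308),
      ("bowl", 0): rng.random(308)}

# a slightly noisy observation of "mug" seen from viewpoint 1
query = db[("mug", 1)] + 0.01 * rng.random(308)
print(recognize(query, db))   # identifies both the object and its pose
```

Because the viewpoint is encoded in the descriptor itself, a single distance comparison resolves object identity and pose jointly, which is what keeps the computational cost low.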
We present a graph-based SLAM approach, using monocular vision and odometry, designed to operate on computationally constrained platforms. When computation and memory are limited, visual tracking becomes difficult or impossible, and map representation and update costs must remain low. Our system constructs a map of structured views using only weak temporal assumptions, and performs recognition and...
This paper presents an upper body tracking algorithm with a single monocular camera. To be suitable for human-robot interaction, the designed method should work on a moving camera platform and achieve real-time performance. Since the dimensionality of the human posture model is extremely high, we focus on the visual extraction of the head and arms. A hierarchical structure model...
We combine a visual odometry system with an aided inertial navigation filter to produce a precise and robust navigation system that does not rely on external infrastructure. Incremental structure from motion with sparse bundle adjustment using a stereo camera provides real-time highly accurate pose estimates of the sensor which are combined with six degree-of-freedom inertial measurements in an Extended...
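The fusion described above follows the standard Kalman predict/update pattern: inertial measurements propagate the state, and visual odometry poses correct it. The sketch below reduces this to a scalar position filter to make the pattern concrete; the paper's actual filter is a six-degree-of-freedom EKF, and the noise values `q` and `r` here are illustrative placeholders.

```python
def kf_step(x, P, u, z, q=0.01, r=0.1):
    """One scalar Kalman filter step.

    x, P : current state estimate and its variance
    u    : inertial/odometry increment used in the predict stage
    z    : visual-odometry position measurement used in the update stage
    q, r : process and measurement noise variances (illustrative values)
    """
    # predict: propagate the state with the inertial increment
    x_pred = x + u
    P_pred = P + q
    # update: correct the prediction with the visual measurement
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# usage: three steps of unit motion, with slightly noisy visual fixes
x, P = 0.0, 1.0
for u, z in [(1.0, 1.05), (1.0, 1.98), (1.0, 3.1)]:
    x, P = kf_step(x, P, u, z)
print(x, P)   # estimate converges near 3, variance shrinks below its prior
```

In the real system the scalar gain becomes a matrix, and the measurement model must handle the nonlinear orientation states, which is why an *extended* Kalman filter is needed.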
Recently, the research fields of augmented reality and robot navigation have been actively investigated. Estimating the relative pose between an object and a camera is an important task in these fields. Visual markers are frequently used for this purpose, but their presence spoils the scene. In this paper, we propose a novel method for posture estimation...
We present an extension of a neuro-dynamic object recognition system that combines bottom-up recognition of matching patterns and top-down estimation of pose parameters in a recurrent loop. It is extended by an active foveal vision system. The active vision component integrates easily into the architecture and improves the recognition rate over previous experiments on the COIL-100 database...
The ability to recognize objects and to localize them precisely is essential in all service robotic applications. One of the main challenges for service robots during operation lies in the handling of unavoidable uncertainties which originate from model and sensor inaccuracies and are characteristic for realistic application scenarios. Robustness under real world conditions can only be achieved when...
When planning robotic grasping and manipulation maneuvers, knowledge of the shape and pose of the object of interest is critical information. In order for an autonomous or semi-autonomous system to operate intelligently in an unstructured environment and interact with novel objects, it must have the ability to recover this information at run time, even when no a priori information of the object is...