RGBD SLAM systems have shown impressive results, but the limited field of view (FOV) and depth range of typical RGBD cameras still cause problems for registering distant frames. Monocular SLAM systems, in contrast, can exploit wide-angle cameras and do not have the depth range limitation, but are unstable for textureless scenes. We present a SLAM system that uses both an RGBD camera and a wide-angle...
In this paper, we propose an indirect method to measure the distance to an object accurately with a single visual camera using triangulation. The object can be seen as the third vertex of a triangle with two known sides and one known angle. The distance to the object can then be determined indirectly from the known sides and angle, rather than being measured directly. It would be very useful in case there is...
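The triangulation idea in the abstract above can be sketched with the law of cosines: given two known sides and the angle between them, the third side of the triangle (here, the distance to the object) follows directly. This is a minimal illustration of the geometric principle, not the authors' implementation; the assumption that the known angle is the included angle is mine.

```python
import math

def distance_law_of_cosines(side_a: float, side_b: float,
                            included_angle_rad: float) -> float:
    """Length of the triangle's third side from two sides and the
    included angle (law of cosines). Illustrative names, not the
    paper's notation."""
    return math.sqrt(side_a ** 2 + side_b ** 2
                     - 2.0 * side_a * side_b * math.cos(included_angle_rad))

# Example: sides 3 and 4 with a 90-degree included angle give distance 5.
print(distance_law_of_cosines(3.0, 4.0, math.pi / 2))
```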
This paper proposes an indoor scene three-dimensional (3D) reconstruction system using a pan-tilt platform and an RGB-D camera. The proposed system can automatically reconstruct 3D indoor scenes from a fixed position. An efficient point cloud registration algorithm is proposed to align point clouds based on the extrinsic parameters of the RGB-D camera at every preset pan-tilt control point. Then, a local...
Self-localization in an unknown environment is one of the most fundamental tasks for a mobile robot. In this paper, a novel natural-landmark-based localization method is proposed for an indoor robot equipped with binocular vision; ceiling corners are taken as the natural landmarks. The absolute location of the robot is determined according to the principle of triangular localization. The ceiling corner feature...
Lately, 3D applications have become a popular topic in robotics, computer vision, and augmented reality. Using cameras and computer vision techniques, it is possible to obtain accurate 3D models of large-scale environments, such as cities. Furthermore, cameras are low-cost, non-intrusive sensors compared to alternatives such as laser scanners, and they also offer rich information about the environment...
Path planning for mobile robots requires rapidly finding collision-free trajectories in an uncertain and changing environment. Full collision checking with detailed, online-revised representations of the robot and world imposes a delay that undermines reactive obstacle avoidance. As a result, reactive vision-based approaches make various assumptions to arrive at simplified representations, such as...
Robotic teleoperation from a human operator's pose demonstrations provides an intuitive and effective means of control that has been made feasible by improvements in sensor technologies in recent years. However, the imprecision of low-cost depth cameras and the difficulty of calibrating a frame of reference for the operator introduce inefficiencies in this process when performing tasks that require...
Adaptive cable-driven parallel robots can adjust the position of one or more pulley blocks to optimize performance within a given workspace. Because of their augmented kinematic redundancy, adaptive systems have several advantages over their traditional counterparts featuring the same numbers of cables. In this paper, we explore the application of adaptive cable-driven robots to cable-suspended camera...
It is important to enable a robot to manipulate a target object that has no 3-D model information and is situated in an environment with other unknown objects nearby. This poses an open problem of how to combine perception and manipulation to enable the robot to build an appearance-based model of the target object on the spot to facilitate further manipulation of the object while avoiding the other...
We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate...
Obtaining reliable state estimates in high-altitude but GPS-denied environments, such as between high-rise buildings or in the middle of deep canyons, is known to be challenging due to the lack of direct distance measurements. Monocular visual-inertial systems provide a possible way to recover the metric distance through proper integration of visual and inertial measurements. However, the nonlinear...
Visual SLAM in low-illumination scenes remains a considerably challenging task, since the available amount of appearance information is frequently insufficient. To tackle this problem, we propose a novel SLAM framework that uses both appearance information and thermal information, which carries recognizable content independent of illumination, in a flexible manner. The key idea is to continuously update...
The primary focus of this work is to examine how robots can achieve more robust sequential manipulation through the use of pre-touch sensors. The utility of close-range proximity sensing is evaluated through a robotic system that uses a new optical time-of-flight pre-touch sensor to complete a highly precise and sequential task — solving the Rubik's cube. The techniques used in this task are then...
We present an approach that allows the Georgia Tech Miniature Autonomous Blimp (GT-MAB) to detect and follow a human. This accomplishment is the first Human Robot Interaction (HRI) demonstration between an uninstrumented human and a robotic blimp. GT-MAB is an ideal platform for HRI missions because it is safe to humans and can support sufficient flight time for HRI experiments. However, due to complex...
Deep learning models have achieved state-of-the-art performance in recognizing human activities, but often rely on utilizing background cues present in typical computer vision datasets that predominantly have a stationary camera. If these models are to be employed by autonomous robots in real world environments, they must be adapted to perform independently of background cues and camera motion effects...
We propose in this paper a new active perception scheme based on Model Predictive Control under constraints for generating a sequence of visual servoing tasks. The proposed control scheme is used to compute the motion of a camera whose task is to successively observe a set of robots for measuring their position and improving the accuracy of their localization. This method is based on the prediction...
A Meal Assistance Robot is an assistive device used to aid individuals who cannot independently direct food to their mouths. For individuals who lose upper-limb function due to amputations, spinal cord injuries, or cerebral palsy, self-feeding can be impossible; meal assistance robots have been introduced to help such individuals regain their independence...
Object pose estimation is one of the crucial parts of a vision-based object manipulation system using a standard industrial robot manipulator, particularly for positioning the end effector of the robot arm to grasp the targeted object. This paper presents the use of a stereo vision system to estimate the three-dimensional (3D) position and orientation of an object in order to pick up and place the targeted object...
In this paper we address the problem of multi-robot localization with a heterogeneous team of low-cost mobile robots. The team consists of a single centralized observer with an inertial measurement unit (IMU) and monocular camera, and multiple picket robots with only IMUs and Red Green Blue (RGB) light emitting diodes (LED). This team cooperatively navigates a visually featureless environment while...
Dense depth map estimation from stereo cameras has many applications in robotic vision, e.g., obstacle detection, especially when performed in real-time. The range in which depth values can be accurately estimated is usually limited for two-camera stereo setups due to the fixed baseline between the cameras. In addition, two-camera setups suffer from wrong depth estimates caused by local minima in...
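The fixed-baseline limitation mentioned in the abstract above follows from the standard rectified-stereo depth relation Z = f·B/d: at large depths the disparity d becomes small, so a fixed disparity error produces a rapidly growing depth error. A minimal sketch of that relation, with illustrative parameter values of my own choosing:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified two-camera stereo pair:
    Z = f * B / d, with f in pixels, baseline B in metres, and
    disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With f = 700 px and B = 0.12 m, a disparity of 10 px maps to 8.4 m;
# a 1-px disparity error at that range shifts the depth by nearly 1 m,
# while the same error at 100 px disparity barely matters.
print(stereo_depth(700.0, 0.12, 10.0))
```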