We describe a system which follows “trails” for autonomous outdoor robot navigation. Through a combination of visual cues provided by stereo omnidirectional color cameras and ladar-based structural information, the algorithm is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is simply modeled as...
Indoor flight is a new and challenging environment for unmanned aerial systems (UAS). To better explore this new flight regime, Embry-Riddle Aeronautical University (ERAU) developed a novel biologically inspired rotorcraft called SamarEye to compete in the 2009 and 2010 International Aerial Robotics Competitions (IARC). ERAU's approach to this competition was a unique aerodynamic configuration called...
Animals and insects use their own navigation systems to return home in various ways. One of the most widely used senses is vision: they remember a snapshot image of the home location, which helps them return home from an arbitrary location. Inspired by the behaviour of insects and other animals, many homing algorithms have been applied to mobile robots. These methods use visual information...
This paper presents a method of 3D localization using image edge-points detected from binocular stereo image sequences. The proposed method calculates camera poses using visual odometry, and updates the poses by reducing the accumulated errors using landmark recognition. Landmark recognition is done based on robust and scalable image-retrieval using image edge-points with SIFT descriptors and a vocabulary...
In this paper, we present a unified approach for a camera tracking system based on an error-state Kalman filter algorithm. The filter uses relative (local) measurements obtained from image based motion estimation through visual odometry, as well as global measurements produced by landmark matching through a pre-built visual landmark database and range measurements obtained from radio frequency (RF)...
This paper describes a topological SLAM system using a purely vision-based approach. This robot utilizes a GPU-based omnidirectional catadioptric stereovision system to perceive and plan its path in the environment. Subsequently, the omnidirectional images generated are used to incrementally build a database of image signatures based on the standard 2D Haar Wavelet decomposition. In order to maintain...
This paper describes a visual odometry algorithm that deals with the nearly degenerate situations caused by false motion vectors generated by independently moving objects, repetitive patterns, and wrong depth information, which often arise in visual odometry for outdoor service robots. To filter out these false motion vectors, we use temporal and spatial motion vector filters. The temporal motion vector...
We present a graph-based SLAM approach, using monocular vision and odometry, designed to operate on computationally constrained platforms. When computation and memory are limited, visual tracking becomes difficult or impossible, and map representation and update costs must remain low. Our system constructs a map of structured views using only weak temporal assumptions, and performs recognition and...
Time-of-Flight (ToF) cameras gain depth information by emitting amplitude-modulated near-infrared light and measuring the phase shift between the emitted and the reflected signal. The phase shift is proportional to the object's distance modulo the wavelength of the modulation frequency. This results in a distance ambiguity. Distances larger than the wavelength are wrapped into the sensor's non-ambiguity...
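The phase-to-distance relation and the wrap-around ambiguity described in this abstract can be sketched numerically. This is a minimal illustration, not the paper's method: the 20 MHz modulation frequency and the helper-function names are assumptions chosen for the example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod):
    """Maximum distance measurable without wrapping: c / (2 * f_mod).

    The factor 2 accounts for the round trip of the emitted light.
    """
    return C / (2.0 * f_mod)

def phase_to_distance(phase, f_mod):
    """Convert a measured phase shift (radians, in [0, 2*pi)) to distance."""
    return (phase / (2.0 * math.pi)) * unambiguous_range(f_mod)

def wrapped_distance(true_distance, f_mod):
    """A ToF sensor reports the true distance modulo the unambiguous range."""
    return true_distance % unambiguous_range(f_mod)

f_mod = 20e6  # assumed 20 MHz modulation
print(round(unambiguous_range(f_mod), 2))       # 7.49 (metres)
print(round(wrapped_distance(10.0, f_mod), 2))  # 2.51 -- a 10 m object is wrapped
```

An object 10 m away thus appears at roughly 2.5 m, which is exactly the distance ambiguity the abstract refers to.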
In this paper we present a novel system for real-time, six degree of freedom visual simultaneous localization and mapping using a stereo camera as the only sensor. The system makes extensive use of parallelism both on the graphics processor and through multiple CPU threads. Working together these threads achieve real-time feature tracking, visual odometry, loop detection and global map correction...
In this paper, we propose a technique for learning the noise pattern of visual odometry for accurate and consistent 6DOF localization. The noise model takes three feature-point parameters as input: (I) the number of inliers among the feature points, (II) the average distance between feature points, and (III) the variance of interior angles. The correlation between these parameters and estimation...
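The three input parameters listed in this abstract can be sketched as plain statistics over 2D feature points. The function name is hypothetical, and since the abstract does not define "interior angles" precisely, this sketch uses the interior angles of every triangle formed by a triple of feature points as a stand-in.

```python
import itertools
import math

def noise_model_inputs(points, inlier_mask):
    """Compute three feature-point statistics as noise-model input.

    points: list of (x, y) feature locations (assumed distinct)
    inlier_mask: list of bools marking which points are inliers
    Returns (num_inliers, mean_pairwise_distance, angle_variance).
    """
    # (I) number of inliers among the feature points
    num_inliers = sum(inlier_mask)

    # (II) average Euclidean distance over all feature-point pairs
    dists = [math.dist(p, q) for p, q in itertools.combinations(points, 2)]
    mean_dist = sum(dists) / len(dists)

    # (III) variance of interior angles; as a stand-in, collect the three
    # interior angles of every triangle formed by a triple of points
    angles = []
    for a, b, c in itertools.combinations(points, 3):
        for v, p, q in ((a, b, c), (b, a, c), (c, a, b)):
            u1 = (p[0] - v[0], p[1] - v[1])
            u2 = (q[0] - v[0], q[1] - v[1])
            cos_t = (u1[0] * u2[0] + u1[1] * u2[1]) / (
                math.hypot(*u1) * math.hypot(*u2))
            angles.append(math.acos(max(-1.0, min(1.0, cos_t))))
    mean_a = sum(angles) / len(angles)
    var_a = sum((t - mean_a) ** 2 for t in angles) / len(angles)

    return num_inliers, mean_dist, var_a
```

For an equilateral triangle of side 1 this yields a mean pairwise distance of 1.0 and an angle variance of essentially zero, since all interior angles are equal.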
An omnidirectional Mecanum base allows for more flexible mobile manipulation. However, slipping of the Mecanum wheels results in poor dead-reckoning estimates from wheel encoders, limiting the accuracy and overall utility of this type of base. We present a system with a downward-facing camera and light ring that provides robust visual odometry estimates. We mounted the system under the robot, which allows...
Visually estimating a robot's own motion has been an active field of research in recent years. Though impressive results have been reported, some application areas still pose major challenges. Especially for car-like robots in urban environments, even the most robust estimation techniques fail due to the large proportion of independently moving objects. Hence, we move one step further and propose...
We present a generalization of the Koenderink-van Doorn (KvD) algorithm that allows robust monocular localization with large motion between camera frames for a wide range of optical systems, including omnidirectional systems and standard perspective cameras. The KvD algorithm simultaneously estimates ego-motion parameters, i.e. rotation, translation, and object distances, in an iterative way. However...
New vision-based methods have emerged in the area of mobile vehicle localization. Such methods offer an alternative with improved accuracy over traditional localization methods such as wheel odometry. In this paper we propose such a method that does not compromise precision and can run in real time. Depending on the environment, the number of features is sometimes insufficient. To solve this, our algorithm...
This paper presents the design and test of a CMOS integrated circuit implementing a 160×120-pixel 3D camera. The on-pixel processing allows the use of the Indirect Time-Of-Flight technique for distance measurement, with reset-noise removal through Correlated Double Sampling and embedded fixed-pattern noise reduction, while a fast readout operation allows streaming out the pixel values at a maximum rate...
Visual maps of the seafloor should ideally provide the ability to measure individual features of interest in real units. Two-dimensional photomosaics cannot provide this capability without making assumptions that often fail over 3-D terrain, and are generally used for visualization, but not for measurement. Full 3-D structure can be recovered using stereo vision, structure from motion (SFM), or simultaneous...
A novel image-based label system is proposed in this paper: using an image from a single CCD camera, image coordinates can be converted into Cartesian coordinates. The system can map any two arbitrary points on the image to the actual distance between them, regardless of whether the camera is perpendicular to the measuring plane and regardless of the camera's height. We designed a new structure for...
The depth perception of objects in a scene can be useful for tracking or for applying visual servoing in mobile systems. 3D time-of-flight (ToF) cameras provide range images that give real-time measurements to improve these types of tasks. However, the distance computed from these range images varies considerably with the integration-time parameter. This paper presents an analysis for the online...
A method to compute the lean (tilt) pose of the camera is proposed, based on the parallel perspective mapping model of the camera. This method improves the common monocular vision-based localization method and can be used in the operating system of an unmanned underground mining vehicle to detect the vehicle's travelling status in real time, cope with poor underground road conditions, and reduce the...