Detection of moving objects is a key component in mobile robotic perception and understanding of the environment. In this paper, we describe a real-time independent motion detection algorithm for this purpose. The method is robust and is capable of detecting difficult degenerate motions, where the moving object is followed by a moving camera in the same direction. This robustness is attributed to...
In this paper we present a novel system for real-time, six degree of freedom visual simultaneous localization and mapping using a stereo camera as the only sensor. The system makes extensive use of parallelism both on the graphics processor and through multiple CPU threads. Working together these threads achieve real-time feature tracking, visual odometry, loop detection and global map correction...
In this work we propose the use of machine learning techniques to improve Simultaneous Localization and Mapping (SLAM) using an extended Kalman filter (EKF) and visual information for robot navigation. We use the Viola and Jones approach to look for specific visual landmarks in the environment. The landmarks are used to improve the robot localization in the EKF-SLAM system. Our experiments validate...
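The abstract above does not give the authors' equations, but the core of any EKF-SLAM correction step is the same: compare a predicted landmark observation against the measured one and fold the innovation back into the state. A minimal sketch for a single range-bearing observation of a known landmark, assuming a planar (x, y, theta) robot state; the function name and measurement model are illustrative, not taken from the paper:

```python
import numpy as np

def ekf_landmark_update(mu, Sigma, z, landmark, R):
    """One EKF correction step for a range-bearing observation of a
    known landmark at position (lx, ly).
    mu = (x, y, theta) robot state; z = (range, bearing) measurement;
    Sigma = state covariance; R = measurement noise covariance."""
    x, y, theta = mu
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # Predicted measurement from the current state estimate.
    z_hat = np.array([r, np.arctan2(dy, dx) - theta])
    # Jacobian of the measurement model w.r.t. the robot state.
    H = np.array([
        [-dx / r, -dy / r, 0.0],
        [dy / q, -dx / q, -1.0],
    ])
    S = H @ Sigma @ H.T + R             # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)  # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    mu_new = mu + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

In a visual pipeline such as the one described, the (range, bearing) pair would be derived from the image position of a landmark detected by the Viola-Jones classifier; each confirmed detection triggers one such update and shrinks the pose uncertainty.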
For a coexisting and collaborative society that incorporates humans and robots, the detection, tracking, and recognition of human motion are indispensable techniques for a robot to safely and securely interact with humans. The present paper proposes a motion tracking system using distributed network cameras that are placed in a sizeable environment, such as a street or a town. Model-based motion tracking...
This paper proposes a reliable solution to the problem of estimating the motion of a rigid object moving freely in 3D space, through the use of a passive vision system. The feature-based tracking technique builds upon the selection of a consistent set of features and their tracking on a frame-by-frame basis. A thorough investigation is conducted to determine a proper vision system setup, which results...
Accurate online localization is crucial for mobile robotics. In this paper, we describe a real-time image-based localization technique, which is based on a single calibrated camera. This can be supported by a second camera to improve accuracy and to provide the correct translational scale. Our goal is a robust and unbiased pose estimation in highly dynamic scenes on resource-limited systems. The presented...
In this paper we present an efficient algorithm for 3D localization in urban environments that fuses measurements from a GPS receiver, an inertial sensor and vision. Such a hybrid sensor is important for numerous applications including outdoor mobile augmented reality and 3D robot localization. Our approach is based on non-linear filtering of these complementary sensors using a multi-rate...
In human-robot interaction, it is important for the robot to know the head movement, gaze direction and expression of the conversation partner, since such information is deeply related to attention, intention and emotion. Recently, many types of real-time measurement systems for head pose and gaze direction have been proposed and utilized for human interfaces and ergonomics applications. We...
When a human is introduced into a robotic cell, the robot controller must be aware of the human's location in order to ensure his or her physical safety. This paper presents a pre-collision strategy which maintains a safety distance between a robot and a human who wears a tracking system composed of a motion capture suit and a UWB localization system. The system proposed is able to guide the robot using...
Object recognition techniques are well known in the field of machine vision, and aim at the classification of certain observed rigid objects based on the information acquired by a specific sensor. These techniques can either be performed in the 2D image space by simply applying suitable image processing algorithms, or in the real world 3D space by performing surface reconstruction of the object's...
This paper presents a new method for seamless fusion of arbitrary range sensors for mobile robot obstacle avoidance. This method, named virtual range scan (VRS), is able to deal with arbitrary sensor configurations (2D and 3D) and it is independent of the underlying obstacle avoidance strategy. This makes it a very flexible approach that can reuse existing 2D obstacle avoidance algorithms for 3D obstacle...
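The abstract describes the key idea of the virtual range scan: collapse readings from heterogeneous 2D and 3D range sensors into one planar scan so that existing 2D obstacle avoidance algorithms can consume it unchanged. A minimal sketch of that fusion under simple assumptions (all points already in the robot frame, nearest return per angular bin wins); the binning scheme here is illustrative and not the paper's actual VRS algorithm:

```python
import math

def virtual_range_scan(points, num_beams=360, max_range=10.0):
    """Fuse 3D obstacle points from arbitrary sensors into one 2D
    'virtual' laser scan: for each angular bin around the robot,
    keep the horizontal range of the nearest obstacle.
    points: iterable of (x, y, z) in the robot frame."""
    scan = [max_range] * num_beams
    for x, y, z in points:
        rng = math.hypot(x, y)  # horizontal distance to the obstacle
        if rng == 0.0 or rng > max_range:
            continue
        bearing = math.atan2(y, x) % (2 * math.pi)  # [0, 2*pi)
        beam = int(bearing / (2 * math.pi) * num_beams) % num_beams
        scan[beam] = min(scan[beam], rng)  # nearest return wins
    return scan
```

Because the output has the shape of an ordinary laser scan, any planar avoidance strategy (e.g. a vector field histogram or dynamic window planner) can run on it without knowing which sensors produced the underlying points.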
In the context of stereovision SLAM, we propose a way to enrich the landmark models. Vision-based SLAM approaches usually rely on interest points associated to a point in the Cartesian space: by adjoining oriented planar patches (if they are present in the environment), we augment the landmark description with an oriented frame. Thanks to this additional information, the robot pose is fully observable...
Creating robots able to interact and cooperate with humans in household environments and everyday life is an emerging topic. Our goal is to facilitate a human-like and intuitive interaction with such robots. Besides verbal interaction, gestures are a fundamental aspect in human-human interaction. One typical usage of interactive gestures is referencing of objects. This paper describes a novel integrated...
The Rutgers Arm II trains primarily shoulder motor control, arm dynamic response, endurance and cognitive anticipatory strategies in virtual environments. It improves on our earlier Rutgers Arm by replacing magnetic tracking with visual tracking and by the use of a tilting training table. Pilot trials with a single subject showed a clear dependency on table tilt angle. Further trials are ongoing.
This paper addresses the problem of motion estimation and 3-D reconstruction through visual tracking with a single-viewpoint sensor and, in particular, how to generalize tracking to calibrated omnidirectional cameras. We analyze different minimization approaches for the intensity-based cost function (sum of squared differences). In particular, we propose novel variants of the efficient second-order...