Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance...
Estimating human pose, shape, and motion from images and videos is a fundamental challenge with many applications. Recent advances in 2D human pose estimation rely on large amounts of manually labeled training data for learning convolutional neural networks (CNNs). Such data is time-consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In...
In this paper we propose an efficient solution to jointly estimate the camera motion and a piecewise-rigid scene flow from an RGB-D sequence. The key idea is to perform a two-fold segmentation of the scene, dividing it into geometric clusters that are, in turn, classified as static or moving elements. Representing the dynamic scene as a set of rigid clusters drastically accelerates the motion estimation,...
This paper presents a temporal enhancement of a graph-based depth estimation method designed for multiview systems with arbitrarily located cameras. The primary goal of the proposed enhancement is to increase the quality of the estimated depth maps while simultaneously decreasing the estimation time. The method consists of two stages: the temporal enhancement of the segmentation required by the underlying depth...
We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation)...
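The interpolation idea above can be illustrated with a minimal sketch: given a set of rigid motions (one per scene part) and non-binary (soft) labels per point, the scene flow at each point is the label-weighted blend of the displacements the individual rigid motions would induce. The function name, array shapes, and NumPy implementation below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def blended_scene_flow(points, rotations, translations, labels):
    """Per-point scene flow as a soft-label blend of rigid-part motions.

    points:       (N, 3) 3D points
    rotations:    (K, 3, 3) rotation matrices, one per rigid part
    translations: (K, 3) translation vectors, one per rigid part
    labels:       (N, K) non-binary weights per point; each row sums to 1

    Returns an (N, 3) array of interpolated 3D displacements.
    """
    # Displacement each rigid motion would induce at every point: (K, N, 3)
    moved = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    disp = moved - points[None, :, :]
    # Blend the K candidate displacements with the soft labels
    return np.einsum('nk,kni->ni', labels, disp)
```

With binary labels this reduces to piecewise-rigid flow; fractional labels at part transitions yield the smooth, non-rigid interpolation the abstract describes. For example, a point labeled 0.5/0.5 between a static part and a part translating by (1, 0, 0) receives the flow (0.5, 0, 0).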
In this paper, we propose an effective approach for moving object detection based on modeling the ego-motion uncertainty and using a graph-cut based motion segmentation. First, the relative camera pose is estimated by minimizing the sum of reprojection errors, and its covariance matrix is calculated using a first-order error-propagation method. Next, a motion likelihood for each pixel is obtained...
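The first-order covariance step can be sketched generically: for a pose estimate obtained by nonlinear least squares on reprojection residuals, a standard Gauss-Newton approximation propagates the measurement noise through the Jacobian of the residuals. The sketch below shows this textbook form only; the paper's exact propagation scheme may differ, and the function name and interface are assumptions.

```python
import numpy as np

def pose_covariance(J, sigma=1.0):
    """First-order covariance of a least-squares pose estimate.

    For a pose x* minimizing the sum of squared residuals r(x), the
    Gauss-Newton approximation gives Cov(x*) ~= sigma^2 * (J^T J)^{-1},
    where J is the (M x P) Jacobian of the stacked residuals at x*
    and sigma is the standard deviation of the measurement noise.
    """
    return sigma**2 * np.linalg.inv(J.T @ J)
```

Intuitively, directions in pose space along which the reprojection error changes slowly (small Jacobian columns) get large covariance entries, which is exactly the uncertainty a downstream motion-likelihood test should account for.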
In this paper, we tackle the problem of mapping multiple 3D rigid structures and estimating their motions from perspective views through a car-mounted camera. The proposed method complements conventional localization and mapping algorithms (such as Visual Odometry and SLAM) to estimate motions of other moving objects in addition to the vehicle's motion. We present a theoretical framework for robust...
This paper proposes an approach for 2D-to-3D conversion for multiview displays. It employs an object-based approach where objects at large depth differences are first segmented by semi-automatic tools. Appropriate depth values are assigned to these objects and the missing image pixels at the background are filled in by inpainting techniques so that different views of the image can be synthesized....