We propose a method that derives shading from an arbitrary reference object in order to synthesize CG objects realistically into a real scene. Since our method requires no dedicated light probe and can be implemented with a commercial RGB + depth sensor, it is applicable to consumer environments. The method performs spherical harmonic (SH) basis-function regression against the luminance of the reference...
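The abstract is cut off, but the named step, regressing SH basis functions against observed luminance, has a standard form. A minimal sketch (the order-2 truncation with nine coefficients and all function names are my assumptions, not details from the paper): evaluate the first nine real SH basis functions at surface normals recovered from the depth sensor, then solve a least-squares problem for the lighting coefficients.

```python
import numpy as np

def sh_basis(normals):
    """First 9 real spherical-harmonic basis values for unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c = np.ones_like(x)
    return np.stack([
        0.282095 * c,               # Y_0^0
        0.488603 * y,               # Y_1^-1
        0.488603 * z,               # Y_1^0
        0.488603 * x,               # Y_1^1
        1.092548 * x * y,           # Y_2^-2
        1.092548 * y * z,           # Y_2^-1
        0.315392 * (3 * z**2 - 1),  # Y_2^0
        1.092548 * x * z,           # Y_2^1
        0.546274 * (x**2 - y**2),   # Y_2^2
    ], axis=1)

def fit_sh_lighting(normals, luminance):
    """Least-squares SH coefficients so that sh_basis(normals) @ coeffs
    approximates the observed luminance of the reference object."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, luminance, rcond=None)
    return coeffs
```

Once fitted, the nine coefficients can be used to shade a virtual object consistently with the scene's illumination.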
We present a novel motion descriptor for gesture recognition based on depth cameras. Since each object motion leads to a specific depth change characterized by depth differences, we can recognize object motion via the Depth Difference Distribution (DDD) in the object region. The DDD is approximated by a DDD descriptor in three steps. First, each pixel's depth difference value is quantized into Depth Difference...
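The abstract truncates before the descriptor is fully specified, so the following is only a generic sketch of the idea it names: quantize per-pixel depth differences between consecutive frames into bins and form a normalized histogram over the object region. The bin edges, mask handling, and normalization here are assumptions for illustration, not the paper's choices.

```python
import numpy as np

def ddd_descriptor(depth_prev, depth_curr, mask, bin_edges):
    """Normalized histogram of quantized depth differences inside the
    object region given by the boolean mask."""
    diff = (depth_curr - depth_prev)[mask]
    hist, _ = np.histogram(diff, bins=bin_edges)
    hist = hist.astype(float)
    s = hist.sum()
    return hist / s if s > 0 else hist
```

A gesture is then represented by the sequence of such histograms and classified with any standard sequence or bag-of-features model.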
Short message service (SMS) is now an indispensable means of social communication. However, mobile spam is an increasingly serious problem, disrupting users' daily lives and degrading service quality. We propose a novel approach to spam message detection based on mining the underlying social network of SMS activities. Compared with keyword-based or flow-detection strategies, our network-based approach...
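The abstract does not say which network features are mined, so as a hedged illustration of the general idea only: build the sender-recipient graph from SMS events and compute a simple structural feature such as reciprocity. Legitimate users tend to exchange messages with contacts who reply; bulk spammers rarely receive replies. The feature choice below is my assumption, not the authors' method.

```python
from collections import defaultdict

def reciprocity_scores(messages):
    """messages: iterable of (sender, recipient) SMS events.
    Returns each sender's fraction of contacts that messaged back;
    bulk spammers typically score near 0, normal users much higher."""
    sent = defaultdict(set)
    for s, r in messages:
        sent[s].add(r)
    scores = {}
    for s, contacts in sent.items():
        back = sum(1 for r in contacts if s in sent.get(r, set()))
        scores[s] = back / len(contacts)
    return scores
```

Such per-node features can feed any downstream classifier alongside content-based signals.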
We describe a mechanism based upon activity manifolds that maps image data from more than one view to spatial pose. We learn the manifolds from training data consisting of motion capture recordings of real human subjects performing the target actions. The nature of the training data allows the learned manifolds to conform naturally to multiple constraints, including (1) the body-part articulation constraint;...
Finding correspondences between two 3D shapes is a common task in both computer vision and computer graphics. In this paper, we propose a general framework that shows how to build correspondences by exploiting the isometry property. We show that the problem of finding such correspondences can be reduced to a spectral assignment problem, which can be solved by finding the principal eigenvector of the...
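The abstract breaks off at "principal eigenvector of the...", presumably an affinity matrix over candidate correspondences, as in spectral matching. As a sketch under that assumption (the matrix construction is not from the paper): each entry M[i, j] scores how well candidate matches i and j jointly preserve pairwise distances; the principal eigenvector, computed here by power iteration, gives a confidence per candidate, from which a one-to-one assignment is read off greedily.

```python
import numpy as np

def principal_eigenvector(M, iters=200):
    """Power iteration: principal eigenvector of a non-negative,
    symmetric affinity matrix over candidate correspondences."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v
```

For non-negative affinities the leading eigenvector is itself non-negative (Perron-Frobenius), so its entries can be interpreted directly as match confidences.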
This work introduces a new representation for motion capture (MoCap) data that is invariant under rigid transformations and robust for classification and annotation of MoCap data. The representation relies on distance matrices that fully characterize the class of identical postures up to body position and orientation. This high-dimensional feature descriptor is reduced in dimensionality using PCA and incorporated...
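The two named ingredients, a rigid-invariant distance-matrix posture descriptor and PCA reduction, can be sketched directly; the exact joint set and component count are the paper's details, not reproduced here. Pairwise inter-joint distances are unchanged by any rotation or translation of the whole skeleton, which is what makes the descriptor invariant.

```python
import numpy as np

def posture_descriptor(joints):
    """Upper-triangular pairwise distances between joints (J, 3):
    identical for any rotated/translated copy of the same posture."""
    d = np.linalg.norm(joints[:, None, :] - joints[None, :, :], axis=-1)
    iu = np.triu_indices(len(joints), k=1)
    return d[iu]

def pca_reduce(X, k):
    """Project a stack of descriptors (N, D) onto their top-k
    principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

With J joints the descriptor has J(J-1)/2 entries, hence the need for PCA before feeding it to a classifier.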
In this paper, a new dual dictionary learning (DDL) method is proposed for robust 3D human pose estimation. The performance and applicability of traditional methods are limited by a lack of robustness to corrupted observations caused by occlusions or poor background subtraction. Our DDL approach aims at simultaneously constructing two overcomplete dictionaries, called the visual observation dictionary...
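The abstract truncates while naming the two dictionaries, so the following is only a generic single-dictionary sketch of the building block, overcomplete dictionary learning by alternating sparse coding (a few ISTA steps) with a least-squares dictionary update. It illustrates the ingredient, not the authors' dual formulation; all parameter choices are assumptions.

```python
import numpy as np

def soft_threshold(Z, lam):
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

def learn_dictionary(X, n_atoms, lam=0.1, iters=30, rng=None):
    """Learn one overcomplete dictionary D with X ~ D @ Z and sparse Z,
    alternating ISTA sparse coding with a least-squares D update."""
    rng = np.random.default_rng(rng)
    d, n = X.shape
    D = rng.normal(size=(d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    Z = np.zeros((n_atoms, n))
    for _ in range(iters):
        # Sparse coding: ISTA with step 1/L, L the gradient's Lipschitz const.
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(10):
            Z = soft_threshold(Z - (D.T @ (D @ Z - X)) / L, lam / L)
        # Dictionary update: least squares, then renormalize the atoms.
        D = X @ np.linalg.pinv(Z)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, Z
```

A dual scheme couples two such dictionaries (observation and pose) through shared sparse codes, so that a code inferred from a corrupted observation still indexes a clean pose reconstruction.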
Recently, articulated pose estimation methods based on the pictorial structure framework have received much attention in computer vision. However, the performance of these approaches has been limited due to the presence of self-occlusion. This paper deals with the problem of handling self-occlusion in the pictorial structure framework. We propose an exemplar-based framework for implicit occlusion...