This paper presents an orientation estimation method using inertial cues (IMU) and visual feature constraints. Our proposed approach combines these two modalities in an original way. First, two feature-point correspondences between consecutive frames are selected that satisfy not only the descriptor-similarity constraint but also the locality constraint. Second, these two selected...
This paper presents an orientation estimation scheme using a monocular camera and inertial measurement units (IMUs). Unlike traditional wearable orientation estimation methods, our proposed approach combines these two modalities in a novel way. First, two visual correspondences between consecutive frames are selected that not only meet the descriptor-similarity constraint,...
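The two-constraint correspondence selection described in the abstracts above can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the Euclidean descriptor distance, the Lowe-style ratio threshold, and the pixel-displacement bound are all assumed parameters, chosen only to show how a similarity constraint and a locality constraint combine.

```python
import numpy as np

def match_with_locality(desc_a, pts_a, desc_b, pts_b,
                        ratio=0.8, max_disp=30.0):
    """Match features from frame A to frame B under two constraints:
    (1) descriptor similarity, enforced with a Lowe-style ratio test, and
    (2) locality: matched keypoints must lie within max_disp pixels,
        which is plausible for consecutive video frames.
    Returns a list of (index_in_a, index_in_b) pairs."""
    matches = []
    for i, (d, p) in enumerate(zip(desc_a, pts_a)):
        # Euclidean distances from descriptor d to every descriptor in B
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # similarity constraint: best match must clearly beat the runner-up
        if dists[best] >= ratio * dists[second]:
            continue
        # locality constraint: small pixel displacement between frames
        if np.linalg.norm(pts_b[best] - p) > max_disp:
            continue
        matches.append((i, int(best)))
    return matches
```

A correspondence that passes the descriptor test but has moved implausibly far between frames is rejected, which is the point of combining the two constraints.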
This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we fix the monocular camera facing downward toward the floor and collect successive frames in which textures are orderly distributed and feature points are robustly detected, rather than using a forward-oriented camera to sample unknown and disordered...
In this paper, an approach is presented to estimate the 3D position and orientation of the head from RGB and depth images captured by a commercial Kinect sensor. We use 2D scale-invariant feature transform (SIFT) features together with 3D histogram of oriented gradients (HOG) features, extracted from a pair of synchronously captured RGB and depth images and named SIFT-HOG features, to improve the...
This paper describes an approach to the location and orientation estimation of a person's face using color images and depth data from a Kinect sensor. The combined 2D and 3D histogram of oriented gradients (HOG) features, called RGBD-HOG features, are extracted and used throughout our approach. We present a coarse-to-fine localization paradigm that obtains localization results efficiently using multiple HOG...
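Both of the abstracts above build on histogram of oriented gradients (HOG) features. As a rough sketch of what one HOG cell computes (an orientation histogram of gradient magnitudes): the bin count, the central-difference gradient, and the L2 normalisation below are generic textbook choices, not details taken from either paper.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Minimal HOG building block: for one grayscale cell, accumulate
    gradient magnitudes into unsigned-orientation bins (0..180 degrees),
    then L2-normalise the histogram."""
    gx = np.zeros_like(patch, dtype=float)
    gy = np.zeros_like(patch, dtype=float)
    # central-difference image gradients (borders left at zero)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bin_width = 180.0 / n_bins
    idx = np.minimum((ang // bin_width).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        # each bin sums the gradient magnitude of pixels at that orientation
        hist[b] = mag[idx == b].sum()
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A full HOG descriptor tiles the image into such cells and concatenates block-normalised cell histograms; a vertical edge, for example, puts all of its mass into the near-horizontal-gradient bin.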
Italic detection and slant rectification are a key step in optical character recognition (OCR). In this paper, a novel method is proposed to detect and rectify italic characters in Chinese advertising images. Based on observations of the structures of many characters, the centroid angle is proposed and a statistical study of it is presented. According to the statistical results, the centroid angle...
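The abstract above does not define the centroid angle, so the following is a hypothetical illustration of one way a centroid-based slant measure could work: the angle between the vertical axis and the line joining the centroids of the upper and lower halves of a binary character image. The definition, the half-split, and the sign convention here are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def centroid_angle(glyph):
    """Hypothetical centroid-based slant measure for a binary character
    image: split the glyph into upper and lower halves, compute the
    centroid of the foreground pixels in each half, and return the angle
    (degrees) between the vertical and the line joining the centroids.
    An upright character yields an angle near 0; a consistent nonzero
    angle over many characters would suggest italic slant."""
    h = glyph.shape[0] // 2
    ys_t, xs_t = np.nonzero(glyph[:h])   # upper-half foreground pixels
    ys_b, xs_b = np.nonzero(glyph[h:])   # lower-half foreground pixels
    cx_t, cy_t = xs_t.mean(), ys_t.mean()
    cx_b, cy_b = xs_b.mean(), ys_b.mean() + h
    # horizontal offset of the top centroid relative to the bottom one,
    # measured against the vertical distance between them
    return np.degrees(np.arctan2(cx_t - cx_b, cy_b - cy_t))
```

Rectification would then shear the image by the negative of the estimated angle, which matches the detect-then-rectify pipeline the abstract describes.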
Visual tracking of articulated objects in real 3D space is challenging, with applications in advanced human–computer interfaces and gesture semantic understanding. In this paper, for visual hand tracking, a graphical model is constructed for the articulated human hand, and an NBP algorithm embedded with CAMSHIFT is applied to infer the hand configuration in 3D space. We also introduce an image depth cue captured by...
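CAMSHIFT, mentioned in the abstract above, builds on the mean-shift iteration: a search window repeatedly moves to the centroid of the target-probability mass inside it. The sketch below shows only that core iteration over a precomputed probability map; CAMSHIFT's window-size/orientation adaptation and the color-histogram back-projection that produces the map are omitted, and the window format and iteration cap are assumed parameters.

```python
import numpy as np

def mean_shift(prob, window, n_iter=20):
    """Core mean-shift step used inside CAMSHIFT: shift a fixed-size
    search window toward the centroid of the probability mass it covers.
    prob:   2-D array of per-pixel target probabilities
            (e.g. a color-histogram back-projection).
    window: (x, y, w, h) in pixel coordinates.
    Returns the converged window."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        m = roi.sum()
        if m == 0:
            break  # no target mass under the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = (xs * roi).sum() / m  # centroid of mass inside the window
        cy = (ys * roi).sum() / m
        # re-center the window on the centroid, clamped to the image
        nx = max(0, min(int(x + cx - w / 2 + 0.5), prob.shape[1] - w))
        ny = max(0, min(int(y + cy - h / 2 + 0.5), prob.shape[0] - h))
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y, w, h
```

Running this once per frame, seeded with the previous frame's window, is the tracking loop; CAMSHIFT additionally rescales the window from the zeroth moment of the mass it finds.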