In this paper, we present an effective and accurate gaze estimation method based on a two-eye model of a subject, tolerant of free head movement, using a Kinect sensor. To determine the point of gaze accurately and efficiently, i) we employ a two-eye model to improve estimation accuracy; ii) we propose an improved convolution-based means-of-gradients method to localize the iris center in 3D...
In this paper, we address the 3D eye gaze estimation problem using a low-cost, simple-setup, and non-intrusive consumer depth sensor (the Kinect sensor). We present an effective and accurate method based on a 3D eye model that estimates the point of gaze of a subject while tolerating free head movement. To determine the parameters involved in the proposed eye model, we propose i) an improved convolution-based...
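Both abstracts above build on a means-of-gradients objective for iris-center localization. The truncated text does not specify the proposed improvement, so the following is only a minimal sketch of the well-known 2D baseline idea: the eye center is the point c that maximizes the mean squared dot product between normalized displacements (from c to each pixel) and normalized image gradients. The function name and the brute-force 2D search are illustrative assumptions, not the papers' 3D Kinect-based formulation.

```python
import numpy as np

def eye_center_by_gradients(gray):
    """Locate the eye/iris center in a grayscale eye patch using the
    means-of-gradients objective: maximize the mean of
    (d_i . g_i)^2 over pixels i, where d_i is the unit displacement
    from candidate center c to pixel i and g_i is the unit gradient.
    Brute-force over all candidate centers; fine for small patches."""
    gy, gx = np.gradient(gray.astype(float))   # axis 0 = y, axis 1 = x
    mag = np.hypot(gx, gy)
    mask = mag > 0.3 * mag.max()               # keep strong gradients only
    ys, xs = np.nonzero(mask)
    gxn = gx[mask] / mag[mask]
    gyn = gy[mask] / mag[mask]

    h, w = gray.shape
    best_score, best_c = -1.0, (0, 0)
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            dist = np.hypot(dx, dy)
            valid = dist > 0                   # skip the candidate pixel itself
            dots = (dx[valid] * gxn[valid] + dy[valid] * gyn[valid]) / dist[valid]
            # Clamp to positive alignment: a dark pupil on a bright sclera
            # has gradients pointing radially outward from the true center.
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best_c = score, (cx, cy)
    return best_c                              # (x, y) in patch coordinates
```

On a synthetic dark disk over a bright background, the maximizer lands at the disk center, since every boundary gradient aligns with the outward displacement from that point.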
Depth-image-based human action recognition has attracted much attention due to the popularity of depth sensors. However, accurate recognition remains a challenge because of variations in subject appearance, pose, and video sequences. In this paper, a novel skeleton-joint descriptor based on the 3D Moving Trend and Geometry (3DMTG) property is proposed for human action recognition. Specifically,...
We present a new method to correct perspective and geometric distortions, as well as to segment and refine page frames, in document images of paperback books. The proposed digitization process makes capturing a paperback book as straightforward as scanning it. Unlike traditional methods, our method is independent of document content and does not need additional hardware, multiple images, or luminance...
With the rapid development of robotic technologies and their extensive applications in human life, human-robot interaction via human motion is a fundamental topic in robotics. The critical problem in this topic is recognizing human motion in real time. In this paper, we propose a new method to represent 3D motion trajectories for fast recognition of complex motion trajectories. The 3D trajectory is segmented...
This paper investigates gaze estimation solutions for interacting with children with Autism Spectrum Disorder (ASD). Previous research shows that satisfactory gaze estimation accuracy can be achieved in constrained settings. However, most existing methods cannot handle the large head movement (LHM) that frequently occurs when interacting with children with ASD. We propose a gaze estimation...
A micro aerial vehicle (MAV) can be used as an efficient image acquisition tool. However, aerial video sequences contain a large amount of redundant information, which delays the scene reconstruction process and decreases structural accuracy. Meanwhile, conventional frame decimation approaches typically involve a time-consuming feature-matching step. This paper proposes a hierarchical frame decimation approach...
Trajectory-based motion recognition is important for motion analysis, and recognizing complicated motion remains a challenge in various robotics and automation applications. In this paper, we propose a novel framework with a new model, Scaled Indexing of General Shapes (S-IGS), for complicated motion recognition. The S-IGS is a quantified hierarchical model representing 3D motion trajectories...