Hand gesture recognition is highly valued for its potential applications in contactless human-computer interaction (HCI). Because gesture recognition systems based on an ordinary camera are susceptible to varying lighting conditions and complex background environments, an improved depth-image-based algorithm for fingertip detection and gesture recognition is proposed. Firstly,...
Human-Computer Interaction (HCI) has become an important focus of both computer science research and industrial applications, and on-screen gaze estimation is one of the most active topics in this rapidly growing field. Eye-gaze direction estimation is a sub-area of on-screen gaze estimation, and the number of studies that focus on estimating on-screen gaze direction is limited. Due...
A simple and robust gesture recognition system is proposed for better human-computer interaction using Microsoft's Kinect sensor. The Kinect is employed to construct skeletons for a subject in 3D space using twenty body-joint coordinates. From this skeletal information, ten joints are selected and six triangles are constructed, along with their six respective centroids. The feature space corresponds...
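The centroid step described above is straightforward geometry. A minimal sketch, assuming joints arrive as 3D (x, y, z) coordinates (the joint names and units here are illustrative, not taken from the paper):

```python
# Sketch: centroid of a triangle formed by three 3D joint positions.
# Joint names and coordinate values below are hypothetical examples.

def centroid(p1, p2, p3):
    """Centroid of a triangle given three (x, y, z) points."""
    return tuple((a + b + c) / 3.0 for a, b, c in zip(p1, p2, p3))

# Three hypothetical Kinect joint positions (metres).
head = (0.0, 1.6, 2.0)
left_hand = (-0.4, 1.0, 2.1)
right_hand = (0.4, 1.0, 1.9)

print(centroid(head, left_hand, right_hand))
```

Repeating this over six joint triples yields the six centroids that feed the feature space.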
User identification and tracking are fundamental tasks in any human-computer interaction (HCI) scenario. For these tasks we propose a multi-view approach utilizing multi-camera systems and audio processing systems. Face detectors and face recognizers are based on orientation-histogram and eigenface techniques, and Mel-Frequency Cepstral Coefficients (MFCC) are applied for speaker identification...
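The MFCC features mentioned above rest on the mel frequency scale. A minimal sketch of the standard Hz-to-mel mapping (the full MFCC pipeline of framing, FFT, mel filterbank, log, and DCT is not reproduced here, and nothing below is specific to this paper's system):

```python
# Sketch: the standard Hz <-> mel conversion underlying MFCC features.
import math

def hz_to_mel(f):
    """Convert a frequency in Hz to the perceptual mel scale."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping, used to place mel-spaced filterbank edges."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

print(hz_to_mel(1000.0))  # ~1000 mel by construction of the scale
```

Filterbank edges are typically placed at equal spacing in mel and mapped back to Hz with `mel_to_hz`.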
Among the gestures used in non-verbal communication, pointing is a natural candidate for a human-computer interface, and vision-based hand pointing is a practical model for human-computer interaction (HCI). One of the key problems in vision-based pointing is how to recognize the pointing gesture itself. Aiming at limitations in the existing literature, a novel method is developed to estimate pointing gestures...
Human interaction is one of the most important characteristics of group social dynamics in meetings. In this paper, we propose an approach for the capture, recognition, and visualization of human interactions. Unlike physical interactions (e.g., turn-taking and addressing), the human interactions considered here carry semantics, i.e., a user's intention or attitude toward a topic. We adopt...
This paper describes a system for action recognition with a single camera. First, we use a two-layered background subtraction, based on both chromaticity and gradient, to extract human contours from the frame sequence captured by the camera. This subtraction removes shadows from the foreground and yields a clean contour for recognition. Then we parameterize a human posture with a model called...
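The shadow-removal property of the chromaticity layer can be sketched per pixel: chromaticity factors out brightness, so a shadowed background pixel (same colour, lower intensity) is not flagged. The threshold, distance metric, and background model below are assumptions for illustration; the paper's actual model and its gradient layer are not reproduced:

```python
# Sketch: chromaticity layer of a two-layer background subtraction.
# Threshold and distance metric are illustrative assumptions.

def chromaticity(rgb):
    """Normalised (r, g) chromaticity; invariant to brightness-only changes."""
    r, g, b = rgb
    s = r + g + b
    if s == 0:
        return (1.0 / 3.0, 1.0 / 3.0)
    return (r / s, g / s)

def is_foreground(pixel, background_pixel, thresh=0.05):
    """Flag a pixel whose chromaticity deviates from the background pixel."""
    pc = chromaticity(pixel)
    bc = chromaticity(background_pixel)
    return abs(pc[0] - bc[0]) + abs(pc[1] - bc[1]) > thresh

# A shadow (same colour at half brightness) is rejected; a genuinely
# different colour is kept as foreground.
print(is_foreground((60, 40, 20), (120, 80, 40)))   # False: shadow
print(is_foreground((200, 30, 30), (120, 80, 40)))  # True: new object
```

A second, gradient-based layer would then catch foreground that happens to share the background's chromaticity.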
Facial expression recognition is an important task in human-computer interaction systems. In this work we propose a new system for automatic expression recognition in video sequences. Our system uses color information to extract the facial features. Additionally, it includes a camera model and a registration step, in which we automatically build a person-specific face model from stereo. Photogrammetric...
This paper presents an eye-gaze tracking system based on image processing. All computations are performed in software, and the system needs only a PC camera attached to the user's computer. We first extract the facial regions from the images using a skin-color model and connected-component analysis. Then the eye regions are detected by applying rules and area segmentation. After the...
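The connected-component step on a binary skin mask can be sketched with a breadth-first flood fill; keeping the largest component as the face candidate is a common follow-up, though the skin-colour thresholding itself and the paper's specific rules are not reproduced here:

```python
# Sketch: 4-connected component labelling on a binary skin mask.
from collections import deque

def connected_components(mask):
    """Return a list of components, each a list of (row, col) pixels."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

# Toy skin mask: a 3-pixel blob and one isolated pixel.
mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
]
comps = connected_components(mask)
print(len(comps), len(max(comps, key=len)))  # -> 2 3
```

In practice the largest (or most face-shaped) component would be passed on to the eye-region detection stage.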