In recent years, hand gesture recognition as an effective sign language tool has been extensively explored by many researchers. This paper presents an idea for developing a computer vision framework for hand gesture-based sign language recognition from a real-time video stream. The proposed system identifies the hand palm in the video stream based on a skin-color and background-subtraction scheme...
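The two cues this abstract names, background subtraction and skin color, can be combined into a per-pixel hand mask. The sketch below is a minimal, assumed illustration in pure Python: the frame-differencing threshold and the coarse RGB skin rule are common textbook choices, not the paper's actual parameters.

```python
# Hypothetical sketch: combine background subtraction (frame differencing
# against a static background model) with a coarse RGB skin-color rule.
# Thresholds and the skin heuristic are illustrative assumptions.
def is_skin(r, g, b):
    """A widely cited coarse RGB skin heuristic (not from the paper)."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b and (r - min(g, b)) > 15)

def hand_mask(frame, background, diff_thresh=30):
    """Mark pixels that both moved relative to background AND look skin-colored.

    frame, background: nested lists of (r, g, b) tuples of equal shape.
    Returns a nested list of 0/1 values.
    """
    mask = []
    for row_f, row_b in zip(frame, background):
        mask_row = []
        for (r, g, b), (br, bg, bb) in zip(row_f, row_b):
            moved = abs(r - br) + abs(g - bg) + abs(b - bb) > diff_thresh
            mask_row.append(1 if moved and is_skin(r, g, b) else 0)
        mask.append(mask_row)
    return mask

bg    = [[(10, 10, 10), (10, 10, 10)]]
frame = [[(200, 120, 90), (10, 10, 10)]]  # skin-like pixel appears on the left
print(hand_mask(frame, bg))  # [[1, 0]]
```

A real-time system would do the same test vectorized over camera frames (e.g. with NumPy or OpenCV) rather than per pixel in Python.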
The aim of this project is to make communication between hearing-impaired people and non-sign-language speakers easier and to raise community awareness of hearing-impaired people. In this study, a subset of selected primitive hand shapes of Turkish Sign Language is recognized by a system based on the Leap Motion device. The system was tested with selected participants...
Thanks to advances in virtual reality and human modeling techniques, signing avatars have become increasingly used in a wide variety of applications, such as automatic translation of web pages, interactive e-learning environments, and mobile phone services, with a view to improving the ability of hearing-impaired people to access information and communicate with others. But, to truly understand...
This thesis describes, in general terms, sign language's role in and effect on information dissemination. Because of regional, cultural, and individual differences, practical programs should be put forward. For practical information dissemination, a feasible program is proposed in terms of the regional, cultural, and individual differences of the audience...
This article focuses on the development of computational methods and software for a computerized real-time Ukrainian sign language recognition system. The proposed recognition model differs from known approaches in its identification of hand shape in motion. The system uses a video camera as a sensor. The hand shape recognition method is based on fingertip locations and a pseudo-two-dimensional image...
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To...
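The core of such sentence-level alignment is dynamic programming over feature sequences. As a point of reference, here is classic dynamic time warping (DTW) in pure Python; the paper describes an enhanced, nested DP that additionally handles movement epenthesis, which this minimal sketch does not attempt.

```python
# Minimal dynamic-programming alignment (classic DTW over 1-D feature
# sequences). Illustrates only the DP core, not the paper's nested variant
# that models movement epenthesis between signs.
def dtw(x, y):
    """Dynamic time warping cost between numeric sequences x and y."""
    inf = float("inf")
    n, m = len(x), len(y)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch x
                                 cost[i][j - 1],      # stretch y
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: same shape, different timing
```

DTW's tolerance to timing differences is what makes DP attractive for continuous signing, where the same sign varies in duration across signers and sentences.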
Sign language recognition is a popular research area involving computer vision, pattern recognition, and image processing. It enhances the communication capabilities of mute persons. In this paper, we present an object-based key-frame selection. The Hausdorff distance and the Euclidean distance are used as shape-similarity measures for hand gesture recognition. We propose the use of nonlinear time alignment...
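The Hausdorff distance mentioned here compares two point sets (e.g. sampled hand contours): it is the largest distance from any point in one set to its nearest neighbor in the other. A minimal pure-Python sketch, with illustrative point sets of my own choosing:

```python
# Symmetric Hausdorff distance between two 2-D point sets, a standard
# shape-similarity measure. Example point sets are illustrative; a real
# system would use contour points extracted from video frames.
import math

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a and b."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

square  = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
shifted = [(0.1, 0.0), (0.1, 1.0), (1.1, 0.0), (1.1, 1.0)]
print(hausdorff(square, shifted))  # small value -> similar shapes
```

The naive double loop is O(|a|·|b|); for large contours, libraries such as SciPy provide faster implementations.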
Deaf people use facial expressions as a non-manual channel for conveying grammatical information in sign language. Tracking facial features using the Kanade-Lucas-Tomasi (KLT) algorithm is a simple and effective method toward recognizing these facial expressions, which are performed simultaneously with head motions and hand signs. To make the tracker robust under these conditions, a Bayesian framework...
Expressions carry vital information in sign language. In this study, we have implemented a multi-resolution active shape model (MR-ASM) tracker, which tracks 116 facial landmarks in videos. Since the expressions involve a significant amount of head rotation, we employ multiple ASM models to deal with different poses. The tracked landmark points are used to extract motion features which are used by a...
This paper presents a computer-vision-based virtual learning environment for teaching communicative hand gestures used in sign language. A virtual learning environment was developed to demonstrate signs to the user. The system then gives real-time feedback to the user on their performance of the demonstrated sign. Gesture features are extracted from a standard webcam video stream, and shape and trajectory...