In recent years, hand gesture recognition as an effective sign language tool has been extensively explored by many researchers. This paper presents a framework that uses computer vision for hand-gesture-based sign language recognition from a real-time video stream. The proposed system identifies the hand palm in the video stream based on a skin-color and background-subtraction scheme...
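The combination described in this abstract can be illustrated concretely. Below is a minimal pure-Python sketch of hand segmentation that marks a pixel as foreground only if it both differs from a reference background frame (frame differencing) and passes a crude skin-color test. The RGB rule, threshold, and toy 2x2 "frames" are illustrative assumptions, not the paper's actual parameters.

```python
def is_skin(r, g, b):
    """Crude RGB skin-color rule; real systems tune this per user and lighting."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

def foreground_mask(background, frame, diff_threshold=30):
    """Mark pixels that both differ from the background and look like skin."""
    mask = []
    for bg_row, fr_row in zip(background, frame):
        row = []
        for (br, bg_, bb), (r, g, b) in zip(bg_row, fr_row):
            moved = abs(r - br) + abs(g - bg_) + abs(b - bb) > diff_threshold
            row.append(1 if moved and is_skin(r, g, b) else 0)
        mask.append(row)
    return mask

# Toy example: a static dark background and a frame where a skin-colored
# "hand" pixel appears in the top-left corner.
background = [[(10, 10, 10), (10, 10, 10)],
              [(10, 10, 10), (10, 10, 10)]]
frame      = [[(200, 120, 90), (10, 10, 10)],
              [(10, 10, 10),   (12, 11, 10)]]

print(foreground_mask(background, frame))  # → [[1, 0], [0, 0]]
```

Requiring both cues suppresses skin-colored static objects (e.g. wooden furniture) as well as moving non-skin objects, which is why the two tests are typically combined rather than used alone.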
The paper proposes a framework for recognizing hand gestures which would serve not only as a means of communication between deaf-mute people and others, but also as an instructor. Deaf and mute individuals often struggle to communicate properly with hearing people and find it difficult to express themselves, and thus face many issues in this regard. Sign language is very popular...
Cameras are embedded in many mobile/wearable devices and can be used for gesture recognition, or even sign language recognition, to help deaf people communicate with others. In this paper, we propose a vision-based gesture recognition system which can be used in environments with a complex background. We design a method to adaptively update the skin color model for different users and various lighting...
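One simple way to realize the adaptive skin-color update this abstract mentions is an exponential moving average over the hues of pixels already confirmed as skin. The class, the initial hue band, and the learning rate below are illustrative assumptions for the sketch; the paper's actual update scheme may differ.

```python
class AdaptiveSkinModel:
    """Tracks a skin hue estimate so detection adapts to user and lighting."""

    def __init__(self, mean_hue=15.0, tolerance=12.0, rate=0.1):
        self.mean_hue = mean_hue    # current estimate of skin hue (degrees)
        self.tolerance = tolerance  # accepted deviation from the mean
        self.rate = rate            # learning rate of the running average

    def matches(self, hue):
        return abs(hue - self.mean_hue) <= self.tolerance

    def update(self, observed_hues):
        """Blend newly confirmed skin hues into the model."""
        if not observed_hues:
            return
        sample_mean = sum(observed_hues) / len(observed_hues)
        self.mean_hue = (1 - self.rate) * self.mean_hue + self.rate * sample_mean

model = AdaptiveSkinModel()
model.update([20.0, 22.0, 21.0])   # this user's skin reads slightly redder
print(round(model.mean_hue, 2))    # → 15.6
```

A small learning rate keeps the model stable against momentary lighting changes while still drifting toward each new user over a few frames.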
The paper presents a comparison between different membership functions based on type-1 fuzzy sets for automatic hand gesture recognition for American Sign Language. First, pre-processing of the images is done using skin-color-based segmentation and morphological operations; then, to extract the hand gesture image from the background, the Sobel edge detection technique is performed. Then the image is...
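The Sobel step in that pipeline can be sketched in a few lines of pure Python: convolve the image with the two standard 3x3 Sobel kernels and take the gradient magnitude at each interior pixel. The tiny step-edge test image is an illustrative assumption, not data from the paper.

```python
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical-gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude |Gx| + |Gy| at every interior pixel of a grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: left half dark (0), right half bright (255).
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # → [0, 1020, 1020, 0]
```

Thresholding the magnitude map then yields the hand contour that feeds the fuzzy classification stage.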
Hand gestures are used widely in communication. An important example is their use in sign languages. Many hand gesture silhouettes are part of other hand gesture silhouettes. For example, the V sign gesture is part of the high-five gesture, because we can create high-five gesture silhouettes from V sign gesture silhouettes by extending the other three fingers. Here we propose the partial contour...
Communication between deaf-mute and hearing persons has always been a challenging task. This paper describes a way to reduce the barrier of communication by developing an assistive device for deaf-mute persons. Although a number of assistive tools already exist, the advancement in embedded systems provides scope to design and develop a sign language translator system to assist mute people. The main objective...
Sign language uses gestures instead of speech sounds to communicate. However, hearing people rarely try to learn sign language to interact with deaf people. Therefore, the need for a translation from sign language to written or oral language becomes important. In this paper, we propose a prototype system that can recognize hand gesture sign language in real time. We use HSV...
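HSV-based skin segmentation, the technique this abstract names, works by converting each RGB pixel to hue/saturation/value and keeping pixels in a skin-like band. A minimal sketch using only the standard library is below; the band limits are illustrative assumptions, not the paper's tuned values.

```python
import colorsys

def skin_mask(pixels, hue_max=0.14, sat_min=0.2, sat_max=0.75, val_min=0.35):
    """Return 1 for skin-like pixels (RGB tuples in 0..255), else 0."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        skin = h <= hue_max and sat_min <= s <= sat_max and v >= val_min
        mask.append(1 if skin else 0)
    return mask

pixels = [(224, 172, 140),  # light skin tone
          (60, 120, 200),   # blue background
          (20, 20, 20)]     # dark shadow
print(skin_mask(pixels))  # → [1, 0, 0]
```

HSV is preferred over raw RGB here because hue is largely invariant to illumination intensity, so the same band covers a scene under brighter or dimmer light.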
Thanks to the advances in virtual reality and human modeling techniques, signing avatars have become increasingly used in a wide variety of applications like the automatic translation of web pages, interactive e-learning environments and mobile phone services, with a view to improving the ability of hearing impaired people to access information and communicate with others. But, to truly understand...
This paper demonstrates the evaluation of various pixel-level features for a dual-handed sign language data set. The data sets are collected from real-life scenarios. We compare feature extraction methods such as the Histogram of Orientation Gradient (HOG), the Histogram of Boundary Description (HBD) and the Histogram of Edge Frequency (HOEF). The accuracies of HOG and HBD were found to be up to 71.4% and 77.3%, whereas...
Sign language helps the deaf and mute to communicate effectively. The paper demonstrates the evaluation of various feature extraction techniques for dual-handed sign language alphabets. The efficiency of features such as the Histogram of Orientation Gradient (HOG) is discussed, followed by a demonstration of the Histogram of Edge Frequency (HOEF), which overcomes the shortcomings of HOG. The evaluation...
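The core of the HOG feature discussed in these two abstracts is an orientation histogram: compute per-pixel gradients, then accumulate gradient magnitudes into bins by orientation. The sketch below shows just that core in pure Python, omitting HOG's cell/block normalization; the bin count and toy image are illustrative assumptions.

```python
import math

def orientation_histogram(img, bins=9):
    """Unsigned-orientation (0..180 deg) histogram over interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(angle / (180.0 / bins)), bins - 1)] += mag
    return hist

# A vertical step edge: all gradient energy lands in the first (0 deg) bin.
img = [[0, 0, 255, 255]] * 4
hist = orientation_histogram(img)
print([round(v) for v in hist])  # → [1020, 0, 0, 0, 0, 0, 0, 0, 0]
```

Because the histogram discards where each gradient occurred, plain HOG struggles with shape details at fixed positions; boundary- and edge-frequency variants such as HBD and HOEF aim to recover some of that structure.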
In this review paper, we analyse the basic components of sign language and examine several techniques which are helpful in designing a large-vocabulary recognition system for a sign language. The main focus of this research is to highlight the significance of unaddressed issues, their associated challenges and possible solutions over a wide technology spectrum.
Nowadays, sign language is commonly used as a communication language for auditorily handicapped people. In addition to voice and controller pads, hand gestures can also be an effective means of communication between humans and robots, or even between auditorily handicapped people and robots. To be effective, a sign recognition system should be glove-free, fast, and accurate, and should require only a small database. In this project,...
Reliable segmentation and motion tracking algorithms are required to achieve gesture detection and tracking for human-machine interaction. In this paper we present an efficient method for detecting and tracking moving hands in sign language video frames. We make use of the geodesic active region framework in conjunction with new color and motion forces; color information is provided by a skin color...
We have developed a prototype for a learning environment for deaf and hard of hearing children. This demonstration consists of hands-on experience with the prototype. In total, there are three exercises: 1) an introduction of all pictures and corresponding signs, 2) multiple choice sign-to-picture and 3) performing the sign that corresponds to the picture shown on the screen. The live recognition...