Sign language is a very important communication tool for hearing-impaired people, both among themselves and in communication with hearing people. There are many methods for sign language recognition, some based on Hidden Markov Models (HMMs) and others on Support Vector Machines (SVMs), and so forth. In fact, most previous methods recognize fingerspelling...
Sign language is considered the main language of deaf and mute people, so a translator is needed when a hearing person wants to talk with a deaf or mute person. In this paper, we present a framework for recognizing Bangla Sign Language (BSL) using a Support Vector Machine. The Bangla hand sign alphabets for both vowels and consonants have been used to train and test the recognition system. Bangla...
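The abstract above names an SVM as the classifier but the excerpt does not show the features or kernel actually used; as an illustration only, a minimal sketch of training an SVM on hand-sign feature vectors with scikit-learn (synthetic stand-in data, hypothetical two-class setup) might look like:

```python
# Hypothetical sketch: an SVM classifier on hand-sign feature vectors.
# The feature extraction and classes here are stand-ins, not the paper's.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 40 samples of 64-dimensional feature vectors for two
# made-up sign classes (real features would come from segmented hand images).
X = np.vstack([rng.normal(0.0, 1.0, (20, 64)),
               rng.normal(3.0, 1.0, (20, 64))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", C=1.0)  # RBF kernel is a common default choice
clf.fit(X, y)

# Classify a new feature vector drawn near the second class.
pred = clf.predict(rng.normal(3.0, 1.0, (1, 64)))
```

In a real system, the one-versus-one scheme SVC uses internally extends this directly to the full alphabet of sign classes.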
The paper proposes a framework for recognizing hand gestures which would serve not only as a way of communication between deaf-mute people and others, but also as an instructor. Deaf and mute individuals lack proper communication with hearing people and find it difficult to express themselves, and thus face many issues in this regard. The sign language is very popular...
In our day-to-day life many people suffer from hearing disabilities; around 360 million people in the world suffer from hearing loss. Interaction between deaf-mute and hearing people is a difficult task, and sign language is the method used for it. The main reason for this is that the majority of hearing individuals are unable to understand sign language; interaction...
Sign language is widely used by individuals with hearing impairment to communicate with each other conveniently using hand gestures. However, non-sign-language speakers find it very difficult to communicate with those with speech or hearing impairment, since interpreters are not readily available at all times. Many countries have their own sign language, such as American Sign Language (ASL) which...
Hearing- and speech-impaired people use sign language to convey their message to hearing people. Sign language recognition has evolved into one of the major areas of research and study in computer vision. Researchers in sign language recognition have used different input devices, such as data gloves, web cameras, depth cameras, color cameras, Microsoft's Kinect sensor, etc., to capture hand signs. In this paper we display...
With the rapid development of science and technology and the accelerated pace of life, people rely on language and hearing to exchange information, learn, and progress together. Currently there are about 70 million deaf and mute people in the world, and in China there are about 20 million people with different levels of hearing impairment. It is imminent to develop a set of...
Cameras are embedded in many mobile/wearable devices and can be used for gesture recognition, or even sign language recognition, to help deaf people communicate with others. In this paper, we propose a vision-based gesture recognition system which can be used in environments with complex backgrounds. We design a method to adaptively update the skin color model for different users and various lighting...
Nowadays, hand gestures are one of the main considerations for hearing-impaired people, because they use sign language to communicate with each other and with hearing people. In general, hearing people have difficulty with sign language and therefore need an interpreter to support communication. An automatic hand gesture recognition system is thus needed to help hearing-impaired people integrate into...
Communication and sign-language learning for people with hearing disabilities in Thailand has been problematic due to the limited number of sign-language experts. To facilitate sign-language learning and communication between people with hearing disabilities and hearing people, a sign language-to-alphabet spelling conversion was developed based on the electromyography (EMG) signal recorded from the forearm...
Sign language uses gestures instead of speech sounds to communicate. However, it is rare that hearing people try to learn sign language to interact with deaf people. Therefore, the need for a translation from sign language to written or oral language becomes important. In this paper, we propose a prototype system that can recognize hand gesture sign language in real time. We use HSV...
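The HSV color space mentioned above is commonly used for hand segmentation because it separates hue from brightness; a minimal stdlib-only sketch of the idea (the threshold values below are illustrative assumptions, not the paper's) could be:

```python
# Hypothetical sketch of HSV-based skin detection for a single RGB pixel.
# The hue/saturation/value thresholds below are illustrative only.
import colorsys

def is_skin(r, g, b):
    """Return True if an RGB pixel (0-255 channels) falls inside a
    rough skin-tone region of HSV space."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Skin tones cluster at low hue (reddish), moderate saturation,
    # and medium-to-high brightness.
    return h < 0.14 and 0.15 < s < 0.75 and v > 0.35

# A hand-region mask is built by applying the test to every pixel.
print(is_skin(220, 170, 140))  # typical skin tone -> True
print(is_skin(30, 60, 200))    # blue background   -> False
```

Real-time systems typically apply such a threshold per frame and then clean the resulting binary mask with morphological operations before contour extraction.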
Various sign languages are used in India, but in schools for the deaf, American Sign Language (ASL) is taught, so this work is based on ASL. Sign recognition supports the development of more effective and friendly interfaces for human-machine interaction. It can provide an opportunity for a mute person to communicate with hearing people without the need for an interpreter. We propose a novel system...
Sign language is a method of communication for deaf-mute people. This paper presents a sign language recognition system capable of recognizing 26 gestures from Indian Sign Language using MATLAB. The proposed system has four modules: pre-processing and hand segmentation, feature extraction, sign recognition, and sign-to-text-and-voice conversion. Segmentation is done by using...
Deaf people use systems of communication based on sign language and finger spelling. Manual spelling, or finger spelling, is a system where each letter of the alphabet is represented by a unique and discrete movement of the hand. RGB and depth images can be used to characterize hand shapes corresponding to letters of the alphabet. The advantage of depth cameras over color cameras for gesture recognition...
The accurate classification of static hand gestures plays a vital role in developing a hand gesture recognition system used for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC) applications. A vision-based static hand gesture recognition algorithm consists of three stages: preprocessing, feature extraction, and classification. The preprocessing stage...
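The three stages listed above can be sketched as a simple pipeline; the function bodies here are placeholder assumptions standing in for the paper's actual preprocessing, features, and classifier:

```python
# Hypothetical skeleton of a vision-based static gesture recognizer:
# preprocessing -> feature extraction -> classification.
from typing import List

def preprocess(image: List[List[int]]) -> List[List[int]]:
    # Stand-in for segmentation/filtering: binarize on a fixed threshold.
    return [[1 if px > 128 else 0 for px in row] for row in image]

def extract_features(mask: List[List[int]]) -> List[float]:
    # Stand-in feature: fraction of foreground pixels per row.
    return [sum(row) / len(row) for row in mask]

def classify(features: List[float]) -> str:
    # Stand-in nearest-prototype classifier over two made-up gestures.
    prototypes = {"open_hand": [1.0, 1.0], "fist": [1.0, 0.0]}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(features, prototypes[name]))

image = [[200, 210], [50, 40]]  # tiny 2x2 stand-in image
label = classify(extract_features(preprocess(image)))
```

Keeping the three stages as separate functions mirrors the structure described in the abstract: each stage can be swapped out (a different segmenter, descriptor, or classifier) without changing the others.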
This paper demonstrates the evaluation of various pixel-level features for a dual-handed sign language data set. The data sets were collected from real-life scenarios. We compare feature extraction methods such as Histogram of Orientation Gradient (HOG), Histogram of Boundary Description (HBD), and Histogram of Edge Frequency (HOEF). The accuracy of HOG and HBD was found to be up to 71.4% and 77.3%, whereas...
Sign language helps the deaf and mute communicate effectively. The paper demonstrates the evaluation of various feature extraction techniques for dual-handed sign language alphabets. The efficiency of features like Histogram of Orientation Gradient (HOG) is discussed, followed by a demonstration of the Histogram of Edge Frequency (HOEF), which overcomes the shortcoming of HOG. The evaluation...
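The HOG descriptor discussed in the two abstracts above is built from histograms of gradient orientations; a minimal sketch of that core idea (a single global histogram, where real HOG adds per-cell histograms and block normalization) might be:

```python
# Hypothetical sketch of the core of HOG: gradient orientations binned
# into a magnitude-weighted histogram. Real HOG adds cells, blocks, and
# block-wise normalization on top of this.
import numpy as np

def orientation_histogram(image, n_bins=9):
    """Histogram of gradient orientations over a grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in standard HOG.
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                           weights=magnitude)
    # Normalize so the descriptor is invariant to overall contrast.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge produces horizontal gradients (orientation near 0 deg),
# so nearly all the histogram mass lands in the first bin.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
hist = orientation_histogram(img)
```

HOEF, as named in the abstracts, replaces gradient orientation with edge-frequency statistics; the excerpt does not give its definition, so only HOG is sketched here.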
The paper considers automatic visual recognition of signed expressions. The proposed method is based on modeling gestures with subunits, similar to modeling speech by means of phonemes. To define the subunits, a data-driven procedure is applied. The procedure consists of partitioning time series extracted from video into subsequences that form homogeneous groups. The cut points are determined...
We investigate the issue of automatic phonetic subunit modeling for sign language, which is completely data-driven and uses no prior phonetic information. A first step of visual processing leads to simple and effective region-based visual features. Prior to the subunit modeling we propose to employ a pronunciation clustering step with respect to each sign. Afterwards, for each sign and pronunciation...
Sign language data can be expressed as the positional changes of hands over time. Although increasing the number of hand movement sensors increases the recognition rate, the data scale becomes larger. In addition, each sign language sequence has a different duration. When large data are generated continuously, lower memory usage and a standardized form of data are necessary so they can be applied immediately in...