The accurate classification of static hand gestures plays a vital role in developing a hand gesture recognition system for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC) applications. A vision-based static hand gesture recognition algorithm consists of three stages: preprocessing, feature extraction, and classification. The preprocessing stage...
This paper demonstrates the evaluation of various pixel-level features on a dual-handed sign language data set. The data sets are collected from real-life scenarios. We compare feature extraction methods such as the Histogram of Orientation Gradient (HOG), the Histogram of Boundary Description (HBD), and the Histogram of Edge Frequency (HOEF). The accuracies of HOG and HBD are found to be up to 71.4% and 77.3%, respectively, whereas...
Sign language helps the deaf and mute to communicate effectively. The paper demonstrates the evaluation of various feature extraction techniques for dual-handed sign language alphabets. The efficiency of features such as the Histogram of Orientation Gradient (HOG) is discussed, followed by a demonstration of the Histogram of Edge Frequency (HOEF), which overcomes the shortcomings of HOG. The evaluation...
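The core idea behind the HOG descriptor discussed in the abstracts above is a histogram of gradient orientations weighted by gradient magnitude. The following is a minimal NumPy sketch of that idea only, not a full HOG implementation (it omits HOG's cell/block decomposition and block normalization); the function name and parameters are our own illustration.

```python
import numpy as np

def orientation_histogram(image, bins=9):
    """Magnitude-weighted histogram of gradient orientations over a
    grayscale image -- the core idea behind HOG, without the cell/block
    decomposition and normalization of the full descriptor."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)            # gradients along rows, columns
    magnitude = np.hypot(gx, gy)
    # Unsigned orientations in [0, 180) degrees, as in standard HOG.
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 180.0),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

For an image dominated by a vertical edge, the mass of the histogram concentrates in the bin around 0 degrees (a horizontal gradient direction), which is what makes such histograms discriminative for hand shapes.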
The paper considers automatic visual recognition of signed expressions. The proposed method is based on modeling gestures with subunits, similar to modeling speech by means of phonemes. To define the subunits, a data-driven procedure is applied. The procedure consists of partitioning time series extracted from video into subsequences that form homogeneous groups. The cut points are determined...
We investigate the issue of automatic phonetic subunit modeling for sign language that is completely data-driven, without any prior phonetic information. A first step of visual processing leads to simple and effective region-based visual features. Prior to the subunit modeling, we propose to employ a pronunciation clustering step with respect to each sign. Afterwards, for each sign and pronunciation...
Sign language data can be expressed as the positional changes of hands over time. Although increasing the number of hand movement sensors increases the recognition rate, the data volume becomes larger. In addition, each sign language sample has a different duration. When large data are generated continuously, lower memory usage and a standardized form of data are necessary for the data to be applied immediately in...
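One common way to obtain the standardized form mentioned above is to resample every variable-duration trajectory to a fixed number of frames. This is a generic sketch of that step, not the specific method of the paper; the function name and frame count are our own illustration.

```python
import numpy as np

def resample_trajectory(traj, n_frames=32):
    """Linearly resample a variable-length hand trajectory (T x D array
    of sensor coordinates) to a fixed n_frames x D array, so that every
    sign occupies the same, predictable amount of memory."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, num=len(traj))   # original time axis
    t_new = np.linspace(0.0, 1.0, num=n_frames)    # standardized axis
    return np.column_stack(
        [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])]
    )
```

Linear interpolation preserves the endpoints and overall shape of the movement while fixing the sequence length, which is the property a memory-bounded, streaming recognizer needs.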
A novel system for the recognition of spatiotemporal hand gestures used in sign language is presented. While recognition of valid sign sequences is an important task in the overall goal of machine recognition of sign language, recognition of movement epenthesis is an important step towards continuous recognition of natural sign language. We propose a framework for recognizing valid sign segments and...
We present a novel and robust system for recognizing two-handed, motion-based gestures performed within continuous sequences of sign language. While recognition of valid sign sequences is an important task in the overall goal of machine recognition of sign language, detection of movement epenthesis is important in the task of continuous recognition of natural sign language. We propose a framework for...
In this paper we evaluate the performance of Conditional Random Fields (CRF) and Hidden Markov Models (HMM) when recognizing motion-based gestures in sign language. We implement CRF, Hidden CRF, and Latent-Dynamic CRF based systems and compare them to an HMM-based system when recognizing motion gestures and identifying inter-gesture transitions. We implement an extension to the standard HMM model to develop...
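The HMM side of the comparison above decodes a gesture sequence with the Viterbi algorithm. The following is a minimal, self-contained sketch of Viterbi decoding with toy parameters of our own choosing (two states, e.g. "rest" and "move"), not the extended HMM of the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for a discrete observation sequence
    under an HMM with initial probabilities pi, transition matrix A and
    emission matrix B (states x symbols)."""
    n_states = len(pi)
    T = len(obs)
    logp = np.log(pi) + np.log(B[:, obs[0]])       # best log-prob per state
    back = np.zeros((T, n_states), dtype=int)       # backpointers
    for t in range(1, T):
        scores = logp[:, None] + np.log(A)          # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)         # best predecessor of j
        logp = scores[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(np.argmax(logp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions (high self-transition probability), the decoded path changes state only where the observations clearly support it, which is why HMMs are a natural baseline for segmenting gestures from inter-gesture transitions.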
We propose a novel vision-based approach to recognizing the Chinese manual alphabet. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting both the global and local features of an image. Features calculated from the gray-level co-occurrence matrix, together with other multi-features, are introduced for the classifier to characterize the various visual properties...
Sign language recognition is a popular research area involving computer vision, pattern recognition, and image processing. It enhances the communication capabilities of mute persons. In this paper, we present object-based key frame selection. The Hausdorff distance and the Euclidean distance are used as shape-similarity measures for hand gesture recognition. We propose the use of nonlinear time alignment...
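The Hausdorff distance mentioned above measures how far two point sets (e.g. hand contours) are from each other: the farthest any point of one set is from its nearest neighbour in the other. A minimal NumPy sketch of the symmetric variant, assuming each shape is given as an n x 2 array of contour points (the function name is ours):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets
    (n x 2 and m x 2 arrays), a simple shape-similarity measure."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # Pairwise Euclidean distances between every point of a and of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # Directed distances: farthest nearest-neighbour in each direction.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The distance is zero only for identical shapes and grows with the worst-case mismatch, which makes it more sensitive to outlying contour points than an average-based Euclidean measure.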
Nowadays, sign language is commonly used as a communication language by auditory handicapped people. In addition to voice and controller pads, hand gestures can also be an effective means of communication between humans and robots, or even between auditory handicapped people and robots. An effective sign recognition system should be glove-free, fast, and accurate, and should require only a small database. In this project,...
This paper presents a computer vision based virtual learning environment for teaching communicative hand gestures used in Sign Language. A virtual learning environment was developed to demonstrate signs to the user. The system then gives real time feedback to the user on their performance of the demonstrated sign. Gesture features are extracted from a standard web-cam video stream and shape and trajectory...
This paper presents an automatic sign language translator, which is able to translate Malaysian sign language using a pattern-matching algorithm. The sign language translator is a vision-based system where the image of the sign is captured by a camera, processed, and translated into English by the computer. This sign language translator is able to recognize alphabets (A-Z), numbers (0-9), finger spelling,...