In this proposal, the Moodle platform was optimized based on the theory of communities of practice, with the development and integration of technologies to serve a bilingual public (Portuguese/Libras), generating MooBi — Bilingual Moodle. Tests verifying accessibility requirements made it possible to detect nonconformities and to generate specifications and suggestions for a bilingual virtual...
This project is the general design of the “Glove” system, which translates the sign language used by disabled people into everyday language. With this system, people with disabilities will be able to communicate comfortably even when the people they talk to do not know sign language. This helps disabled people handle their everyday tasks more easily. Before the general...
The purpose of this paper is to develop and analyze a device capable of identifying sign language. Recognition is performed using a Multilayer Perceptron, and all input data are signals from flex sensors, accelerometers and gyroscopes. The Artificial Neural Network is tested by modifying parameters such as: a) the number of neurons in the single middle layer, b) the learning rate between the input and middle layers, and c)...
A technology, implemented with cross-platform tools, is proposed for modeling gesture units of sign language and for dynamic mapping between states of gesture units and combinations of gesture structures (words, sentences). The technology implements simulated playback of gesture items and constructions using a virtual model of a spatial hand. Through its cross-platform means, the technology achieves...
This research presents a new framework using web services to translate text into a sign language animated GIF. When a user enters a sentence, the web service analyzes it by matching the longest words from a dictionary, then produces an image per word and integrates all the images into an animated GIF. This will help the hearing impaired to access more facilities. In addition, the framework...
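The longest-word matching that the abstract describes can be sketched as a greedy longest-match segmenter. The dictionary contents below are illustrative assumptions, not the paper's actual lexicon:

```python
# Hedged sketch: greedy longest-match segmentation of a sentence against
# a dictionary, as the framework above describes. Each resulting token
# would then map to one sign image; the dictionary here is made up.

def longest_match_segment(sentence, dictionary):
    """Split `sentence` into the longest substrings found in `dictionary`."""
    words = []
    i = 0
    while i < len(sentence):
        if sentence[i].isspace():
            i += 1          # skip separators between tokens
            continue
        # Try the longest possible span first, shrinking until a match.
        for j in range(len(sentence), i, -1):
            if sentence[i:j] in dictionary:
                words.append(sentence[i:j])
                i = j
                break
        else:
            # No dictionary entry matches: emit a single character as-is.
            words.append(sentence[i])
            i += 1
    return words

dictionary = {"thank", "thank you", "you", "very", "much"}
print(longest_match_segment("thank you very much", dictionary))
# → ['thank you', 'very', 'much']
```

Note how the multi-word entry "thank you" wins over "thank" because longer spans are tried first, which is the point of longest-match segmentation.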
This paper presents a pilot study for a personalized media service which aims at creating an intelligent, sentiment-aware, and language-independent access to large archives of audiovisual documents, providing equal services to both mainstream and marginalized users. The proposed multi-modal framework analyzes aural, visual, and human descriptions, integrating them into an automatic content analyzer...
Sign language is a very important communication tool for hearing-impaired people, and also for communication between hearing-impaired and non-handicapped people. There are many methods for sign language recognition, some based on Hidden Markov Models (HMM), others on Support Vector Machines (SVM), and so forth. In fact, most previous methods recognize fingerspelling...
This paper describes a device for real-time 2×2 expansion of sign language images inserted in any TV program. Bicubic interpolation is used as the image expansion method. The 2×2 scale factor simplifies the bicubic interpolation formula to division-by-two operations, which enables an efficient hardware implementation. As a prototype version, the proposed architecture is implemented...
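The simplification the abstract mentions can be illustrated in one dimension. Assuming a Catmull-Rom bicubic kernel (an assumption; the paper's exact kernel is not given here), the only offset a 2× upscale ever needs is the half-pixel point, where the four-tap weights collapse to (-1, 9, 9, -1)/16, i.e. nothing but adds, subtracts, and repeated halvings:

```python
# Hedged sketch: 1-D 2x upscaling with a Catmull-Rom bicubic kernel.
# At the fixed half-pixel offset the kernel reduces to the constant
# weights (-1, 9, 9, -1)/16, the kind of shift-and-add arithmetic
# the hardware design above exploits.

def upscale2x(row):
    """Interleave original samples with half-pixel bicubic
    interpolations (edges clamped)."""
    out = []
    n = len(row)
    for i in range(n):
        out.append(row[i])
        # Four-tap neighborhood around the midpoint between i and i+1.
        p0 = row[max(i - 1, 0)]
        p1 = row[i]
        p2 = row[min(i + 1, n - 1)]
        p3 = row[min(i + 2, n - 1)]
        out.append((-p0 + 9 * p1 + 9 * p2 - p3) / 16)
    return out

print(upscale2x([0, 2, 2, 0]))
```

The slight overshoot above 2 and undershoot below 0 in the interpolated samples is characteristic of bicubic kernels and is why they look sharper than bilinear interpolation.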
Identifying hand configuration is a critical feature of sign language translation. In this paper, we describe our approach to recognizing hand configurations in real time, with the purpose of providing accurate predictions to be used in automatic sign language translation. To capture the hand configuration we rely on data gloves with 14 sensors that measure finger-joint bending. These inputs are sampled...
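A 14-value bend reading per frame invites a simple baseline. The following is a minimal sketch, not the authors' classifier: it labels a sample by its nearest stored template, and both the templates and labels are invented for illustration:

```python
# Hedged sketch (not the paper's method): nearest-template classification
# of a hand configuration from 14 normalized finger-joint bend readings.
# Templates and labels below are illustrative assumptions.

def classify(sample, templates):
    """Return the label of the template with the smallest squared
    Euclidean distance to the 14-value sensor `sample`."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = sum((s - t) ** 2 for s, t in zip(sample, template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Illustrative templates: 14 normalized bend values per configuration.
templates = {
    "open_hand": [0.0] * 14,   # all joints straight
    "fist": [1.0] * 14,        # all joints fully bent
}
print(classify([0.9] * 14, templates))  # → fist
```

A real system would learn templates (or a model) from many signers rather than hard-coding them, but the input shape stays the same: one 14-dimensional vector per glove sample.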
Statistical Machine Translation (SMT) is one of the research areas in computer science. Research on Statistical Machine Translation has shown significant results over the past decade. Primarily, the research focuses on how to translate from one language to another and vice versa; it rarely discusses the search process of Statistical Machine Translation. The objectives of this paper are...
We present a smartwatch application that recognizes important sign sentences. We make use of modern smartwatches such as the Samsung Gear, which are equipped with built-in sensors including an accelerometer, gyroscope and magnetometer. We show how well a smartwatch can recognize important sign sentences. We have implemented a smartwatch app that collects 3D accelerometer and 3D gyroscope data from the watch....
Sign language is a medium of communication for many disabled people. Sign language recognition has many applications, including gesture-controlled activities such as human-computer interaction, gesture-controlled home appliances and other electronic devices, and many applications that use gestures as the trigger input. The most important application is that it provides a communication aid for deaf and...
This paper proposes a novel sign language learning system based on 2D image sampling and concatenation to solve the problems of conventional sign recognition. The system constructs the training data by sampling frames from a sign language demonstration video at a certain sampling rate and concatenating them. The learning process is implemented with a well-known network, the convolutional neural network. 6 sign language...
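The sampling-and-concatenation step can be sketched with stdlib-only stand-ins. Frames here are plain lists of pixel rows rather than decoded video, and the sampling step is an assumed parameter:

```python
# Hedged sketch: build one training sample by taking every `step`-th
# frame of a demonstration video and concatenating the sampled frames
# side by side, as the system above describes. Frames are toy
# lists-of-rows instead of real decoded images.

def sample_and_concat(frames, step):
    """Keep every `step`-th frame and join them horizontally into one
    wide image (row-wise concatenation; all frames share one size)."""
    sampled = frames[::step]
    height = len(sampled[0])
    concat = []
    for r in range(height):
        row = []
        for frame in sampled:
            row.extend(frame[r])   # append this frame's r-th pixel row
        concat.append(row)
    return concat

# Four tiny 1x2 "frames"; sampling every 2nd frame keeps frames 0 and 2.
frames = [[[1, 1]], [[2, 2]], [[3, 3]], [[4, 4]]]
print(sample_and_concat(frames, 2))  # → [[1, 1, 3, 3]]
```

The resulting wide image packs the temporal dimension into the spatial one, which is what lets an ordinary 2D convolutional network be trained on it directly.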
This article presents a program capable of gesture image recognition that can identify each letter of the alphabet. The development's objective is to make it possible for any person to understand and, through self-learning, acquire knowledge of sign language.
Vocal communication is a way to convey our thoughts, messages and information. However, not all of us are able to share our thoughts verbally with others, due to physical disabilities. The deaf and the mute face extreme difficulty in conveying their messages to others. Normally, deaf and mute people use sign language for communication, but it turns out to be...
Hand gestures are one of the most natural and expressive means of communication for the hearing impaired. However, because of the complexity of dynamic gestures, most researchers focus on static gestures, postures, or a small set of dynamic gestures. In this paper, a Kinect motion sensor device is used to recognize the user's gestures. However, each user's gesture for a particular word will be slightly different...
While BCIs have a wide range of applications, the majority of research in the field is concentrated on addressing the issues of control and communication for paralysed patients. This research seeks to examine, through offline experimentation, a particular aspect: the likelihood of linguistic communication with those paralysed patients merely by means of neural activity...
Human beings have a natural ability to see, listen and interact with their external environment. Unfortunately, there are some people who are differently abled and do not have the ability to use their senses to the best extent possible. Such people depend on other means of communication like sign language. This presents a major roadblock for people in the deaf and dumb communities when they try to...
Sign language is the only means of communication for deaf and dumb people; it uses manual communication and body language to convey meaning. For any sign language, an interpreter is essential to communicate with deaf and dumb people. To enhance interaction with the community, Sign Language Recognition (SLR) is a growing field of research nowadays. The task of SLR is language-specific, and a number...
This paper presents an approach for designing and implementing a smart glove for deaf and dumb people. Several research efforts have sought an easier way for non-vocal people to communicate with vocal people and express themselves to the hearing world. Developments have been made in sign language, but mainly in American Sign Language. This research aims to develop a sign-to-Arabic...