In today's society there are many communication barriers, particularly for people with sensory disabilities such as deafness or blindness; one such barrier arises at the moment of interpreting sign language. This paper presents the development of an ongoing research project that integrates an intelligent image-recognition system with hardware that reproduces the interpreted output. For this purpose, an image-acquisition system based on a webcam was implemented together with a Matlab interface that displays the live video, the captured image, and the corresponding translation into Colombian sign language. The system was trained to recognize four letters of the alphabet and achieved an average error of 2%, leading to the conclusion that the application is effective at translating letters signed with either the right or the left hand. The results also indicate that a radial-basis (probabilistic, PNN) neural network is well suited to this type of classification task, and that accuracy improves as the amount of training provided to the classifier increases. Finally, it is worth noting that the system is integrated with an Arduino-based hardware module that displays the translation in real time.
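The abstract does not give implementation details of the classifier, so as an illustration only, the PNN (probabilistic neural network) approach it mentions can be sketched as follows. All data, feature dimensions, and the spread parameter here are hypothetical, and the sketch uses Python/NumPy rather than the Matlab toolbox the project actually employed:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: one Gaussian (radial-basis) kernel
    per training sample, averaged per class in the summation layer;
    the predicted class is the one with the highest average activation."""
    classes = np.unique(y_train)
    # Squared Euclidean distance from the query x to every training sample
    d2 = np.sum((X_train - x) ** 2, axis=1)
    activations = np.exp(-d2 / (2 * sigma ** 2))
    # Average kernel activation per class (summation layer)
    scores = [activations[y_train == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]

# Toy example: synthetic feature vectors for 4 hypothetical letter classes,
# each class clustered around a different mean (stand-ins for image features)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, scale=0.1, size=(10, 3)) for i in range(4)])
y = np.repeat(np.array(["A", "B", "C", "D"]), 10)

print(pnn_predict(X, y, np.array([2.0, 2.0, 2.0])))
```

The query vector `[2.0, 2.0, 2.0]` lies near the cluster for the third class, so the classifier returns "C". Adding more training samples per class sharpens the per-class density estimate, which is consistent with the abstract's observation that more training yields more accurate results.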