For people who are totally or partially unable to move or control their limbs and cannot rely on verbal communication, an interface capable of interpreting their limited voluntary movements is essential, whether to communicate with friends, relatives, and care providers or to send commands to a system. This paper presents a real-time software application for disabled users, suffering from both motor and speech impairments, that provides message composition and speech synthesis functionalities based on face detection and head tracking. The proposed application runs on portable devices equipped with the Android operating system and relies on the operating system's native computer vision primitives, without resorting to any external software library. In this way, the available camera sensors are fully exploited and the devices' limited computational resources are respected. Experimental results show the effectiveness of the application in recognizing the user's movements, and the reliability of the message composition and speech synthesis functionalities.
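To illustrate the general idea of translating tracked head movements into discrete input commands, the following is a minimal, hypothetical sketch: it maps a face-center position (as would be reported by a platform face detector) to directional commands relative to a calibrated rest pose. The function name, normalized coordinates, and dead-zone threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: map a tracked face position to a discrete
# head-movement command. On Android, the face position would come from
# the platform's native face detection; here a plain (x, y) face center
# in normalized frame coordinates [0, 1] stands in for it.

def head_command(face_center, rest_center=(0.5, 0.5), dead_zone=0.08):
    """Return 'LEFT', 'RIGHT', 'UP', 'DOWN', or 'NEUTRAL' for a
    normalized face-center position relative to a calibrated rest pose.
    The dead_zone value is an illustrative assumption."""
    dx = face_center[0] - rest_center[0]
    dy = face_center[1] - rest_center[1]
    if max(abs(dx), abs(dy)) < dead_zone:
        return "NEUTRAL"          # inside the dead zone: no command
    if abs(dx) >= abs(dy):        # dominant axis wins
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"

# Example: head moved clearly to the left of the rest position.
print(head_command((0.30, 0.52)))  # LEFT
```

In a real deployment, such a mapping would be fed by per-frame face detections and smoothed over time to reject jitter before a command is issued.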