We developed an assistive robotic-arm system that autonomously grasps a cup and brings it to the user's mouth, as a prototype meal-assistance robot. The system uses two heterogeneous eye-in-hand cameras: a front camera that captures objects and a side camera that captures the user's face. The side camera keeps an occlusion-free view of the face even while the arm is bringing the object. We implemented a face-recognition function that robustly identifies the user's face while predicting its position, and the arm is controlled by a visual-servoing technique. We verified the basic performance of the system through preliminary tests: the arm executed the task, adjusting its motion according to the positions of the object and the user's face. These results demonstrate the basic feasibility of the meal-assistance robot.
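The paper does not specify the exact control law, but the visual-servoing idea can be illustrated with a minimal sketch of a single image-based servoing step: the commanded camera velocity is proportional to the pixel error between the detected target (the cup or the user's mouth) and the image center. The function name, gain, and pixel-to-metric scale below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def visual_servo_step(target_px, image_center, gain=0.5, px_to_m=0.001):
    """One step of a simple image-based visual-servoing law.

    target_px:    (u, v) pixel coordinates of the detected target
                  (e.g. the cup, or the user's mouth from the side camera).
    image_center: (u0, v0) principal point of the camera image.
    Returns an in-plane velocity command that drives the pixel error to zero.
    """
    error = np.asarray(target_px, dtype=float) - np.asarray(image_center, dtype=float)
    # Proportional control: move opposite to the pixel error,
    # scaled from pixels to metric units (assumed constant depth).
    velocity_xy = -gain * px_to_m * error
    return velocity_xy

# Example: target detected 40 px right of and 20 px below the image center
# of a 640x480 camera; the command moves the camera left and up.
v = visual_servo_step((360, 260), (320, 240))
print(v)  # → [-0.02 -0.01]
```

In practice a full implementation would use the camera's interaction matrix and depth estimate rather than a fixed pixel-to-metric scale, and the side camera's face-position prediction would update `target_px` each control cycle.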