This paper proposes a plausible approach for a humanoid robot to discover its own body parts based on the coherence of two different sensory feedback channels: vision and proprioception. The image cues of a visually salient region are stored in a visuomotor base together with the level of visuo-proprioceptive coherence. High coherence between the motions observed in vision and proprioception suggests that the visually salient object in the view is correlated with the robot's own motor functions. The robot can then define the motor-correlated objects in the view as self-body parts, without prior knowledge of body appearance or body kinematics. The acquired visuomotor base is also useful for coordinating the head and arm postures to bring the hand inside the view and recognize it visually. This adaptable body part perception paradigm remains effective when the body is extended by tool use. The visual and proprioceptive processes are distributed and run in parallel, which allows on-line perception and real-time interaction with people and objects in the environment.
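The core coherence test can be illustrated with a minimal sketch: compare the motion trace of a salient image region against the proprioceptive (joint-velocity) trace, and label the region as a self-body part when the two are strongly correlated. All function names, the correlation measure (Pearson), and the threshold value here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def coherence(visual_motion, proprio_motion):
    """Pearson correlation between a salient region's image motion
    and the robot's proprioceptive (joint-velocity) signal."""
    v = visual_motion - visual_motion.mean()
    p = proprio_motion - proprio_motion.mean()
    denom = np.sqrt((v ** 2).sum() * (p ** 2).sum())
    return float((v * p).sum() / denom) if denom > 0 else 0.0

def is_self_body(visual_motion, proprio_motion, threshold=0.8):
    # High visuo-proprioceptive coherence -> the region moves with
    # the robot's own motors, so label it as a body part.
    return coherence(visual_motion, proprio_motion) > threshold

# Toy traces: the "hand" region mirrors the joint velocity (plus noise),
# while a distractor object in the scene moves independently.
rng = np.random.default_rng(0)
joint_vel = np.sin(np.linspace(0, 4 * np.pi, 200))
hand_motion = joint_vel + 0.05 * rng.standard_normal(200)
object_motion = rng.standard_normal(200)

print(is_self_body(hand_motion, joint_vel))    # hand: high coherence
print(is_self_body(object_motion, joint_vel))  # distractor: low coherence
```

Because the test uses only motion statistics, the same criterion extends naturally to a held tool: once the tool moves coherently with the arm, it is classified as part of the body.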