In this research, we present an object search framework based on robot-gaze interaction that supports patients with motor paralysis. A patient gives a command by gazing at the target object, and the robot then searches for it autonomously. Unlike approaches that require many gaze interactions, ours uses only a few to specify a location cue and an object cue, and integrates RGB-D sensing to segment unknown objects from the environment. Based on hypotheses derived from the gaze information, we apply a multiregion Graph Cuts method combined with an analysis of depth information. Furthermore, our search algorithm allows the robot to find a main observation point, i.e., the viewpoint from which the user can clearly observe the target object. If the user is not satisfied with the first segmentation, the robot adapts its pose to obtain different views of the object. The approach has been implemented and tested on the humanoid robot ENON. With only a few gaze guidance steps, segmentation of unknown objects achieved a success rate of 85%. The experimental results confirm its applicability to a wide variety of objects, even when the target object is occluded by another object.
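The abstract does not give implementation details, but the core idea of gaze-seeded segmentation can be sketched as a minimal binary graph cut over a depth map: the gazed pixel is hard-labeled as object, the image border as background, and a max-flow/min-cut computation separates the two. Everything here is an illustrative assumption, not the paper's method: the function name `segment_by_graph_cut` and the parameters `fg_tol` and `smooth` are hypothetical, and the actual system uses a multiregion Graph Cuts formulation on RGB-D data rather than this depth-only binary toy.

```python
from collections import deque

def segment_by_graph_cut(depth, gaze, fg_tol=1.0, smooth=2.0):
    """Binary graph-cut segmentation of a depth map, seeded by a gaze point.

    Illustrative simplification: source = object (gaze seed),
    sink = background (image border). The paper's actual method is a
    multiregion Graph Cuts on RGB-D data; this toy uses depth only.
    """
    h, w = len(depth), len(depth[0])
    n = h * w
    S, T = n, n + 1                       # source and sink terminal nodes
    INF = 1e9
    cap = {}                              # (u, v) -> edge capacity

    def add_edge(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0.0) + c
        cap.setdefault((v, u), 0.0)       # residual (reverse) edge

    seed_d = depth[gaze[0]][gaze[1]]
    for r in range(h):
        for c in range(w):
            p = r * w + c
            if (r, c) == gaze:
                add_edge(S, p, INF)       # gaze pixel is definitely object
            elif r in (0, h - 1) or c in (0, w - 1):
                add_edge(p, T, INF)       # border pixels are background
            else:
                d = abs(depth[r][c] - seed_d)
                add_edge(S, p, max(0.0, fg_tol - d))  # near seed depth -> object
                add_edge(p, T, d)                     # far from seed -> background
            for dr, dc in ((0, 1), (1, 0)):           # 4-connected smoothness term
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    wgt = smooth / (1.0 + abs(depth[r][c] - depth[r2][c2]))
                    add_edge(p, r2 * w + c2, wgt)
                    add_edge(r2 * w + c2, p, wgt)

    adj = {}
    for u, v in cap:
        adj.setdefault(u, []).append(v)
    flow = {e: 0.0 for e in cap}

    while True:                           # Edmonds-Karp max-flow
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 1e-9:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        b, v = INF, T                     # bottleneck along the augmenting path
        while parent[v] is not None:
            u = parent[v]
            b = min(b, cap[(u, v)] - flow[(u, v)])
            v = u
        v = T
        while parent[v] is not None:
            u = parent[v]
            flow[(u, v)] += b
            flow[(v, u)] -= b
            v = u

    seen, q = {S}, deque([S])             # min-cut: source side = object mask
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen and cap[(u, v)] - flow[(u, v)] > 1e-9:
                seen.add(v)
                q.append(v)
    return [[1 if r * w + c in seen else 0 for c in range(w)] for r in range(h)]
```

On a synthetic 5x5 depth map with a near object (depth 0.5) on a far background (depth 2.0), gazing at the object pixel recovers an object mask covering the near-depth region, which mirrors how a single gaze fixation can seed segmentation of an unknown object.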