This paper describes the results of a study conducted to determine the efficiency of visual cues for a collaborative navigation task in a mixed-space environment. The task required a user with an exocentric view of a virtual room to guide a fully immersed user, who had an egocentric view, to an exit. The study compares natural hand-based gestures, a mouse-based interface, and an audio-only technique, measuring their relative efficiency in terms of task completion times. The results show that visual cue-based collaborative navigation techniques are significantly more efficient than the audio-only technique.