Robust and reliable obstacle detection is an important capability for mobile robots. In our previous work we presented an approach for visual obstacle detection based on feature-based monocular scene reconstruction. Most existing feature-based approaches for visual SLAM and scene reconstruction select their features uniformly over the whole image, based on visual saliency only. In this paper we present a novel attention-driven approach that guides the feature selection toward image areas that provide the most information for mapping and obstacle detection. To this end, we give an information-theoretic derivation of the expected information gain that results from the selection of new image features. Additionally, we present a method for building a volumetric representation of the robot's environment in the form of an occupancy voxel map. The voxel map provides the top-down information needed for computing the expected information gain. We show that our approach to guided feature selection improves the quality of the created voxel maps and makes obstacle detection more reliable by reducing the risk of missing obstacles.
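To illustrate the kind of quantity the paper derives, the following sketch computes the expected information gain (expected entropy reduction) of a single binary occupancy voxel under a hypothetical binary sensor model. The function names and the sensor parameters `p_hit` and `p_miss` are assumptions for illustration, not the paper's actual derivation:

```python
import math

def entropy(p):
    """Binary entropy (in bits) of an occupancy probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def expected_information_gain(p, p_hit=0.7, p_miss=0.4):
    """Expected entropy reduction of a voxel with occupancy prior p.

    Hypothetical sensor model: the sensor reports 'occupied' with
    likelihood p_hit if the voxel is occupied and p_miss otherwise.
    """
    # Probability of each measurement outcome under the prior.
    p_z_occ = p_hit * p + p_miss * (1.0 - p)
    p_z_free = 1.0 - p_z_occ
    # Posterior occupancy probability for each outcome (Bayes' rule).
    post_occ = p_hit * p / p_z_occ if p_z_occ > 0.0 else p
    post_free = (1.0 - p_hit) * p / p_z_free if p_z_free > 0.0 else p
    # Expected posterior entropy, weighted by the outcome probabilities.
    expected_posterior = (p_z_occ * entropy(post_occ)
                          + p_z_free * entropy(post_free))
    return entropy(p) - expected_posterior
```

Under this model, an unknown voxel (prior 0.5) yields a higher expected gain than a voxel whose state is already nearly certain, which is exactly why attention-driven feature selection favors observing uncertain map regions.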