Simultaneous acquisition of depth and texture information, such as that provided by RGB-D sensors, finds an ever-increasing number of applications, including object modeling, human-machine interfaces, and robot navigation. A key challenge in working with densely populated 3D datasets is the massive volume of data that must be acquired, managed, and processed. This burden often prevents autonomous systems from fully exploiting the available information to make informed decisions. Current methods for reducing dataset size remain independent of the content of the model and therefore do not optimize the balance between the richness of the measurements and their compression. This paper presents two computational methods that selectively direct the acquisition of depth measurements toward the most significant regions of a scene, as characterized by the distribution of their 3D features, while capitalizing on the knowledge readily available in previously acquired depth data. The first method builds on self-organizing neural networks, namely the neural gas; the second computes an empirical improvement metric. Both techniques are adapted to automatically determine which subset of depth measurements within a range sensor's field of view contributes most to the representation of the scene, thereby streamlining the depth acquisition process.
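
To illustrate the first idea, the sketch below shows a minimal neural gas in the classic Martinetz-Schulten form, applied to a synthetic point cloud: code vectors migrate toward densely sampled, feature-rich regions, which in turn suggests where further depth measurements would be most valuable. This is an illustrative sketch only, not the paper's implementation; all function names, parameter values, and the synthetic scene are assumptions.

```python
# Minimal neural gas sketch (hypothetical names and parameters, not the
# paper's code). Code vectors adapt to a 3D point cloud and concentrate
# where the cloud is dense, hinting at regions worth denser sensing.
import numpy as np

def neural_gas(points, n_units=30, n_iter=5000,
               eps_i=0.5, eps_f=0.005, lam_i=10.0, lam_f=0.5, seed=0):
    """Fit n_units code vectors to points (an N x 3 array)."""
    rng = np.random.default_rng(seed)
    # Initialize code vectors from random samples of the cloud.
    w = points[rng.choice(len(points), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        frac = t / n_iter
        eps = eps_i * (eps_f / eps_i) ** frac   # decaying learning rate
        lam = lam_i * (lam_f / lam_i) ** frac   # decaying neighborhood range
        x = points[rng.integers(len(points))]   # random depth sample
        # Rank every unit by its distance to the sample (0 = closest).
        ranks = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
        # Pull all units toward the sample, weighted by their rank.
        w += (eps * np.exp(-ranks / lam))[:, None] * (x - w)
    return w

if __name__ == "__main__":
    # Synthetic scene: a dense cluster (feature-rich region) plus a
    # sparse uniform background.
    rng = np.random.default_rng(1)
    cluster = rng.normal([0.0, 0.0, 1.0], 0.05, size=(800, 3))
    background = rng.uniform(-1.0, 1.0, size=(200, 3))
    cloud = np.vstack([cluster, background])
    units = neural_gas(cloud)
    # Most units end up near the cluster, identifying it as the region
    # where additional depth measurements would contribute most.
    print(units.mean(axis=0))
```

Under these assumptions, the learned unit positions act as a content-driven sampling budget: regions that attract many units are candidates for dense acquisition, while the rest of the field of view can be sampled coarsely.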