This paper presents a novel method for constructing a complete 3D map of an indoor environment that includes all surfaces (ceiling, walls, furniture tops, etc.). A team of four robots, three ground robots and one wall-climbing robot, is deployed in a tetrahedral configuration that satisfies the conditions of the perspective-three-point (P3P) problem. The P3P problem is to estimate the pose of a perspective camera, here mounted on the wall-climbing robot, from its view of the three ground robots; Grunert's algorithm yields up to four solutions, of which only one is genuine. We propose a probabilistic Bayesian algorithm that exploits the mobility of the camera to identify the unique solution of the P3P problem. Building on this technique, we introduce an inter-robot localization method that determines the geometric relationship among the four robots. Each ground robot is equipped with a rotary laser range finder (LRF), a pan-tilt-zoom camera, and an LED cluster; the wall-climbing robot carries an LRF, a perspective camera, and a motion sensor. Using the vision sensors, the robots obtain their relative poses by solving the P3P problem. Using the LRF on each robot, four laser point cloud maps are produced, one from each robot's point of view. Given the relative poses of the robots and the calibration data of each LRF-camera pair, the four partial maps are fused into a complete 3D map that is rich in information about all surfaces. Our approach outperforms traditional range image fusion algorithms in time complexity and is suitable for real-time implementation. Physical experiments verify the effectiveness of the method.
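The abstract does not give the details of the Bayesian disambiguation step, but its core idea, using camera motion to reject the spurious P3P solutions, can be sketched as a simple recursive belief update. The following is an illustrative sketch only (the function name, the Gaussian motion-consistency likelihood, and the parameter `sigma` are assumptions, not the paper's actual algorithm): at each frame, every candidate camera position is scored by how well the jump from the previous frame's candidates matches the displacement reported by the motion sensor, and beliefs are propagated accordingly.

```python
import numpy as np

def disambiguate_p3p(candidate_sets, motion_deltas, sigma=0.05):
    """Illustrative sketch: pick the genuine P3P solution by tracking
    which chain of candidates is most consistent with measured camera
    motion (a stand-in for the paper's Bayesian algorithm).

    candidate_sets: list over frames; each entry is a list of candidate
        camera positions (np.ndarray, shape (3,)) from a P3P solver.
    motion_deltas: measured displacement vectors between consecutive
        frames (e.g., from the wall-climbing robot's motion sensor);
        len(motion_deltas) == len(candidate_sets) - 1.
    Returns the index of the most probable candidate in the final frame.
    """
    # Uniform prior over the first frame's candidates (log domain).
    log_belief = np.zeros(len(candidate_sets[0]))
    prev = candidate_sets[0]
    for t, delta in enumerate(motion_deltas, start=1):
        cur = candidate_sets[t]
        new_log = np.full(len(cur), -np.inf)
        for j, c in enumerate(cur):
            for i, p in enumerate(prev):
                # Gaussian log-likelihood of predicted vs. actual move
                # (assumed noise model, std dev `sigma` in meters).
                err = np.linalg.norm((c - p) - delta)
                ll = log_belief[i] - 0.5 * (err / sigma) ** 2
                new_log[j] = np.logaddexp(new_log[j], ll)
        log_belief = new_log - new_log.max()  # rescale for stability
        prev = cur
    return int(np.argmax(log_belief))
```

A spurious P3P solution moves inconsistently with the physical camera as the wall-climbing robot drives, so its motion-consistency likelihood collapses after a few frames, leaving only the genuine solution with significant belief.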