This paper describes an approach to tracking the objective planetary landing point based on reference-point matching and homography. Once the initial objective landing point is specified, the proposed method predicts the objective landing point across the descending image sequence during planetary landing. Only the acquired images of the terrain are used as input. The landing point need not be a distinctive feature point, e.g., a local maximum, a corner, or a key point; instead, it is predicted from multiple reference points. First, the reference feature points are extracted with an improved Speeded-Up Robust Features (SURF) detector. Then, the reference points in the current and previous images are matched by fast approximate nearest-neighbor search. Finally, a novel objective landing point prediction algorithm based on homography is presented. This paper also proposes a scale-independent precision criterion for landing-point tracking tasks; based on this criterion, the proposed method is evaluated on datasets of different planetary bodies from Google Earth, including the Moon, Mars, and the Earth. The qualitative tracking results and quantitative precision evaluations demonstrate the effectiveness and robustness of the proposed method.
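The homography-based prediction step can be illustrated with a minimal sketch: given matched reference points between two frames, estimate the homography with the direct linear transform (DLT) and map the landing point through it. The function names, the DLT estimator, and the synthetic data below are illustrative assumptions, not the paper's implementation (which builds on SURF features and approximate nearest-neighbor matching).

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts to dst_pts with the
    direct linear transform (DLT); requires at least 4 matched points.
    (Illustrative stand-in for the paper's estimation step.)"""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector of A with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def predict_point(H, pt):
    """Map a single (x, y) point through homography H (homogeneous divide)."""
    w = H @ np.array([pt[0], pt[1], 1.0])
    return w[:2] / w[2]

# Synthetic example: a known inter-frame homography and 6 reference points.
H_true = np.array([[1.10, 0.02,  4.0],
                   [0.01, 0.95, -2.0],
                   [1e-4, 2e-4,  1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25], [25, 75]], float)
dst = np.array([predict_point(H_true, p) for p in src])

H_est = estimate_homography(src, dst)
# The tracked landing point need not be a reference point itself.
landing_prev = (60.0, 40.0)
landing_curr = predict_point(H_est, landing_prev)
```

With exact (noise-free) correspondences, `landing_curr` coincides with the point obtained by mapping `landing_prev` through `H_true`; in practice the homography would be estimated robustly (e.g., with RANSAC) from the matched SURF reference points.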