This paper presents a novel methodology for matching image points described by their respective features. Traditionally, such correspondences are determined by computing the similarity between descriptor vectors associated with each point, obtained from invariant feature descriptors. Our methodology first computes a coarse global registration between the images, which constrains the correspondence search space. It then compares descriptor similarities within that constrained space, reducing both the number and the severity of mismatches. The approach is generic enough to be used with many feature descriptor methods. We present several experimental results that show significant improvements in accuracy, number of successful matches, and execution time.
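The two-stage idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the coarse registration is available as a 3x3 homography `H`, that keypoint locations and descriptors are given as NumPy arrays, and that the constraint is a simple reprojection-distance radius; the function names and the `radius` parameter are illustrative choices.

```python
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def constrained_match(H, pts1, desc1, pts2, desc2, radius=10.0):
    """Match each point in image 1 to the most similar descriptor in
    image 2, but only among candidates lying within `radius` pixels of
    the location predicted by the coarse registration H.

    Returns a list of (index_in_image1, index_in_image2) pairs.
    """
    proj = project(H, pts1)          # predicted locations in image 2
    matches = []
    for i, p in enumerate(proj):
        # Stage 1: the registration constrains the candidate set.
        near = np.where(np.linalg.norm(pts2 - p, axis=1) <= radius)[0]
        if len(near) == 0:
            continue
        # Stage 2: descriptor similarity decides among the candidates.
        d = np.linalg.norm(desc2[near] - desc1[i], axis=1)
        matches.append((i, int(near[np.argmin(d)])))
    return matches
```

Because distant, descriptor-similar points are never considered, this both prunes the search (fewer distance computations) and removes the most severe mismatches, which is the effect the abstract claims.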