This paper presents an extension of the Non-Local Means denoising method that effectively exploits the affine-invariant self-similarities present in images of real scenes. Our method yields better denoising results by building on the fact that, on many occasions, similar patches exist in the image but have undergone a transformation. The proposed approach uses an affine-invariant patch similarity measure that performs an appropriate patch comparison by automatically and intrinsically adapting the size and shape of the patches. As a result, more similar patches are found and appropriately used. We show that this denoising method achieves top-tier performance in terms of PSNR, consistently outperforming the regular Non-Local Means, and that it provides state-of-the-art qualitative results.
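For reference, the classical Non-Local Means baseline that this work extends can be sketched as follows. This is a minimal, unoptimized Python/NumPy version using fixed square patches; the `patch`, `search`, and `h` parameter names are illustrative choices, and the fixed-shape patch comparison is standard NLM rather than the affine-adapted comparison proposed in the paper:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Basic Non-Local Means on a grayscale float image.

    Each pixel becomes a weighted average of the pixels in its search
    window, weighted by the similarity of the fixed square patches
    around them. (The paper's extension would instead compare patches
    whose size and shape adapt under an affine transformation.)
    """
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            # Reference patch around the pixel being denoised.
            ref = padded[ci - half_p:ci + half_p + 1,
                         cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1,
                                  nj - half_p:nj + half_p + 1]
                    # Patch distance -> exponential similarity weight.
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

On a noisy version of a flat region, the weighted averaging reduces the mean squared error relative to the noisy input; the affine-invariant similarity measure described in the abstract aims to extend this gain to structures that repeat only up to a transformation.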