We aim to improve the bag-of-visual-words (BOW) model for near-duplicate image retrieval by introducing a more fine-grained pseudo-relevance feedback process. The BOW method is based on vector quantization of affine-invariant descriptors of image patches. Despite its popularity and simplicity, the retrieval performance of BOW is often unsatisfactory because near-duplicate images exhibit large and diverse variations. We therefore propose an information-theoretic feedback framework that exploits cues available in the search results to find relevant duplicate images that are hard to retrieve with conventional BOW approaches. Our algorithm is evaluated on a severely attacked image database and is shown to significantly improve retrieval accuracy over a non-feedback baseline.
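To make the BOW baseline concrete, the following is a minimal sketch of the vector-quantization-and-histogram pipeline the abstract refers to: local descriptors are assigned to their nearest codebook centroid (a "visual word"), each image is summarized as a word-count histogram, and images are compared by cosine similarity. The toy 2-D descriptors and three-word codebook are illustrative assumptions, not the paper's actual setup (which uses affine-invariant patch descriptors and a much larger vocabulary), and the feedback stage is omitted.

```python
from math import sqrt

def quantize(descriptor, codebook):
    """Return the index of the nearest centroid (the visual word)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(descriptor, codebook[i])))

def bow_histogram(descriptors, codebook):
    """Count how often each visual word occurs in the image."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    return hist

def cosine(h1, h2):
    """Cosine similarity between two BOW histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    norm = sqrt(sum(a * a for a in h1)) * sqrt(sum(b * b for b in h2))
    return dot / norm if norm else 0.0

# Toy codebook of three 2-D visual words
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
query = bow_histogram([(0.1, 0.0), (0.9, 0.1)], codebook)      # words 0, 1
candidate = bow_histogram([(0.0, 0.9), (1.0, 0.0)], codebook)  # words 2, 1
print(round(cosine(query, candidate), 3))  # → 0.5
```

A pseudo-relevance feedback step, as proposed here, would then mine the top-ranked results of such a query to re-score images that the raw histogram match misses.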