Corporations and individuals now maintain large image databases, driven by the proliferation of multimedia content and affordable storage devices. Moreover, widespread access to high-speed internet has multiplied the volume of multimedia exchanged online every second, increasing the demand for efficient search over large image databases. Conventionally, text-based image retrieval is used; its main drawbacks stem from annotation, which is often infeasible both because human perception of images is subjective and because of the sheer volume of information to be indexed. To overcome such limitations, content-based image retrieval systems have been proposed. However, a key hindrance remains: these systems must approximate the human visual system in order to bridge the semantic gap between human perception and low-level image features. In this paper, we propose a new unsupervised method based on Hopfield neural networks that models human visual memory to improve retrieval effectiveness and narrow the semantic gap. A comparative study against other neural-network-based methods, such as feed-forward backpropagation networks and Boltzmann deep learning, demonstrates the effectiveness of our method.
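To make the underlying idea concrete, the sketch below illustrates the classical Hopfield associative memory that the proposed method builds on: bipolar patterns are stored via the Hebbian rule, and a corrupted cue converges back to the nearest stored pattern. This is a generic illustration of Hopfield recall under assumed toy patterns, not the paper's retrieval algorithm; all names (`train_hopfield`, `recall`) are ours.

```python
import numpy as np

def train_hopfield(patterns):
    """Build the weight matrix from bipolar (+1/-1) patterns (Hebbian rule)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # correlation of each stored pattern
    np.fill_diagonal(W, 0)       # no self-connections
    return W / n

def recall(W, cue, max_steps=10):
    """Synchronously update units until the state stops changing."""
    s = cue.copy()
    for _ in range(max_steps):
        new = np.sign(W @ s)
        new[new == 0] = 1        # break ties toward +1
        if np.array_equal(new, s):
            break                # reached a fixed point (an attractor)
        s = new
    return s

# Two orthogonal toy patterns standing in for memorized image signatures.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1,  1, 1,  1, -1, -1, -1, -1]])
W = train_hopfield(stored)

noisy = stored[0].copy()
noisy[0] = -noisy[0]             # corrupt one unit of the first pattern
print(recall(W, noisy))          # converges back to stored[0]
```

In a retrieval setting, the attractor dynamics play the role of visual memory: a query feature vector acts as the cue, and the network settles to the closest memorized representation.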