Cooperative caching schemes can improve the performance of multi-hop networks based on device-to-device (D2D) communications. Indeed, each node not only shares its transmission capabilities to physically extend the network, but also shares its storage to cache copies of contents on behalf of other nodes. This increases the performance perceived by users, since caching reduces both the network load and the latency to retrieve a content. Designing effective caching policies in a network of caches is very challenging, and known solutions must be adapted both to the topology and to the request traffic pattern. In this paper, we consider a linear topology, representing a sequence of adjacent nodes, and investigate the performance of both local and distributed cooperative caching policies. We specifically investigate where to apply the caching policy. Interestingly, we show that a simple local caching policy, which caches only the contents requested by the node itself, performs no worse (and sometimes even better) than distributed policies, in which the content may be cached at every node along the path from the requester to the closest copy of the content. In some sense, we show that simplicity pays off.