We consider the problem of preserving data in intermittently connected sensor networks, in which sensor nodes do not always have connected paths to the base stations. Generated data is first stored inside the network and then uploaded to the base station when uploading opportunities arise. Each node has limited energy and limited storage space, and is associated with a failure probability. To guarantee that each generated data item remains available for upload at all times despite node failures, we propose to store K copies of each data item in the network, where K depends on the node failure probability. We refer to this problem as the Data K-Availability Problem (DKAP). DKAP naturally divides into two phases: K-Availability creation and K-Availability maintenance. For K-Availability creation, we show that the problem is NP-hard for arbitrary data sizes and is equivalent to the minimum cost flow problem for unit data sizes. For K-Availability maintenance, we show that the problem is NP-hard even for unit data sizes, and we design a centralized greedy heuristic. We further design an efficient, low-overhead distributed algorithm that is applicable to both phases, and we show through extensive simulations that it performs close to the centralized heuristic.
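The abstract states that K is chosen based on the node failure probability. As a minimal illustrative sketch (not the paper's actual derivation), if each of the K replica holders fails independently with probability p, then all K copies are lost with probability p^K; choosing the smallest K with p^K ≤ ε bounds the loss probability by a target ε. The function name, parameters, and independence assumption below are hypothetical:

```python
import math

def replicas_needed(p: float, eps: float) -> int:
    """Smallest K such that the probability of losing all K replicas,
    assuming each holder fails independently with probability p,
    is at most eps (i.e., p**K <= eps)."""
    assert 0.0 < p < 1.0 and 0.0 < eps < 1.0
    # p**K <= eps  <=>  K >= log(eps) / log(p), since log(p) < 0.
    return math.ceil(math.log(eps) / math.log(p))
```

For example, with per-node failure probability 0.5 and a 1% loss target, this yields K = 7, since 0.5**7 ≈ 0.0078 ≤ 0.01 while 0.5**6 ≈ 0.0156 exceeds it.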