Linear algebra solvers such as the conjugate gradients method underpin numerous scientific and industrial applications. Among these, Krylov-type iterative solvers are highly memory bound, meaning that their bottleneck lies in transferring data from memory. The tremendous computational power of current multicore processors and hardware accelerators, such as graphics processing units (GPUs), puts great strain on interconnects, both external (e.g., networks) and internal (e.g., memory and PCIe buses). By performing most of the computation in lower precision, the technique of iterative refinement reduces the amount of data transferred at all levels, thus increasing computational performance. Moreover, it enables the use of more cost-effective accelerators, such as inexpensive GPUs that lack native double-precision support, for tasks requiring double-precision accuracy. In this chapter, we propose two heuristics for automatically setting the two parameters that iterative refinement requires: the inner residual reduction target and the stopping criterion. Although the heuristics prove effective for most matrices in our test collection, we find many cases in which the additional iterations incurred by restarting the solver lead to an actual decrease in performance.
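
To illustrate the structure that these two parameters control, the following is a minimal sketch of mixed-precision iterative refinement, assuming a single-precision inner conjugate gradients solve wrapped in a double-precision outer correction loop. The function names and the parameters `inner_tol` (standing in for the inner residual reduction target) and `outer_tol` (standing in for the stopping criterion) are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

def cg(A, b, tol, maxiter):
    """Plain conjugate gradients; runs in the dtype of its inputs."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    rs0 = rs                              # squared norm of the initial residual
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new <= (tol ** 2) * rs0:    # inner residual reduction target met
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def iterative_refinement(A, b, inner_tol=1e-4, outer_tol=1e-12, max_restarts=50):
    """Outer loop in float64; each restart solves a correction system in float32."""
    A32 = A.astype(np.float32)            # low-precision copy used by the inner solver
    x = np.zeros_like(b)                  # double-precision iterate
    bnorm = np.linalg.norm(b)
    for _ in range(max_restarts):
        r = b - A @ x                     # residual computed in double precision
        if np.linalg.norm(r) <= outer_tol * bnorm:   # stopping criterion
            break
        d = cg(A32, r.astype(np.float32), inner_tol, maxiter=1000)
        x += d.astype(np.float64)         # accumulate the correction in double precision
    return x

# Example on a small symmetric positive definite system:
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)
b = rng.standard_normal(100)
x = iterative_refinement(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

In this sketch, every restart of the inner solver recomputes the residual in double precision and begins a fresh low-precision solve, which is where the extra iterations mentioned above come from when the two parameters are poorly chosen.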