Today, many real-world machine learning and data analytics problems are of a scale that requires distributed optimization; unlike in centralized computing, these systems are vulnerable to network and node failures. Recently, coding-theoretic ideas have been applied to mitigate node failures in such distributed computing networks. Relaxing the exact-recovery requirement of these techniques, we propose a novel approach for adding redundancy to large-scale convex optimization problems, making solvers more robust against sudden and persistent node failures and loss of data. This is done by linearly encoding the data variables; all other aspects of the computation operate as usual. We show that, under moderate amounts of redundancy, it is possible to recover a close approximation to the solution under node failures. In particular, we show that encoding with (equiangular) tight frames results in a bounded objective error, and we obtain an explicit error bound for a specific construction based on Paley graphs. We also demonstrate the performance of the proposed technique on three machine learning problems, namely ridge regression, binary support vector machines, and low-rank approximation, two of which use real-world datasets.
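To make the encoding idea concrete, the following is a minimal sketch of the general approach: premultiply the data by a tall matrix S whose rows form a tight frame (so that S^T S is a scaled identity), distribute the encoded rows across nodes, and solve the problem on whatever rows survive a failure. The specific frame here (random orthonormal columns via QR), the 20% failure rate, and all dimensions are illustrative assumptions, not the paper's equiangular tight frame or Paley graph construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 20, 300  # samples, features, encoded rows (m > n adds redundancy)
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# A tight frame S in R^{m x n}: reduced QR gives orthonormal columns,
# so S.T @ S = I_n and the rows of S form a (Parseval) tight frame.
S, _ = np.linalg.qr(rng.standard_normal((m, n)))

# Encode the data; each node would hold a block of the encoded rows.
Xe, ye = S @ X, S @ y

# Simulate sudden, persistent failure of 20% of the encoded rows (nodes).
alive = rng.permutation(m)[: int(0.8 * m)]
Xa, ya = Xe[alive], ye[alive]

# Solve ridge regression on the surviving encoded data, as usual.
lam = 1.0
w_hat = np.linalg.solve(Xa.T @ Xa + lam * np.eye(d), Xa.T @ ya)

print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```

Because the redundancy lives in the encoding rather than in replicated computation, the solver itself is unchanged; only the data fed to it is encoded, which is what allows an approximate solution to be recovered from the surviving nodes.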