Subspace clustering via Low-Rank Representation (LRR) has proven effective for clustering data points sampled from a union of multiple subspaces. The original LRR model assumes that the noise in the data is Gaussian or sparse, which may be inappropriate in real-world scenarios, especially when the data are densely corrupted. In this paper, we aim to improve the robustness of LRR in the presence of large corruptions and outliers. First, we propose a robust LRR method by introducing the correntropy loss function. Second, we propose a column-wise correntropy loss function to handle sample-specific errors in the data. Furthermore, we develop an iterative algorithm based on half-quadratic optimization to solve the proposed models. Experimental results on the Hopkins 155 dataset and the Extended Yale Database B show that our methods further improve the robustness of LRR and outperform other subspace clustering methods.
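As a rough sketch of the idea (in our own notation, not taken verbatim from the paper): the standard correntropy-induced loss with a Gaussian kernel of bandwidth $\sigma$ replaces the Gaussian/sparse error penalty of LRR, and a column-wise variant penalizes whole error columns to capture sample-specific corruption. Under these assumptions, the two models could be written as

```latex
% Entry-wise correntropy-induced penalty on the error term E (sketch):
\min_{Z,E}\ \|Z\|_{*} \;+\; \lambda \sum_{i,j}\Bigl(1 - \exp\bigl(-\tfrac{E_{ij}^{2}}{2\sigma^{2}}\bigr)\Bigr)
\quad \text{s.t.}\quad X = XZ + E .

% Column-wise variant for sample-specific errors (e_j is the j-th column of E):
\min_{Z,E}\ \|Z\|_{*} \;+\; \lambda \sum_{j}\Bigl(1 - \exp\bigl(-\tfrac{\|e_{j}\|_{2}^{2}}{2\sigma^{2}}\bigr)\Bigr)
\quad \text{s.t.}\quad X = XZ + E .
```

In half-quadratic optimization, each nonconvex loss term is bounded by a quadratic with an auxiliary weight (e.g., $w_{ij} = \exp(-E_{ij}^{2}/2\sigma^{2})$), so the algorithm alternates between solving a weighted convex subproblem and updating the weights, which down-weight heavily corrupted entries or samples.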