We introduce a method that incorporates robustness into one of the main building blocks of sparse modeling: dictionary learning. In particular, we exploit correntropy to compute the principal components in cases where outliers would otherwise be detrimental. We integrate this robust component into one of the most widely used dictionary learning tools, K-SVD; the result is Correntropy K-SVD (CK-SVD), a method based on the Maximum Correntropy Criterion (MCC) rather than the more limited Minimum Squared Error (MSE) approach. The optimization is carried out with the well-known Half-Quadratic (HQ) technique, which allows a fast and efficient implementation. The results demonstrate the value of this work: CK-SVD not only outperforms K-SVD, but also circumvents one of the main assumptions made when learning overcomplete representations, namely the availability of untampered, noiseless, and outlier-free samples for the training stage.
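To make the MCC-versus-MSE contrast concrete, the following sketch illustrates the standard mechanism behind HQ optimization of a correntropy objective: maximizing the empirical correntropy of the residuals is equivalent to an iteratively reweighted least-squares scheme whose Gaussian-kernel weights vanish for large (outlier) residuals. This is an illustrative sketch of the general technique, not the paper's CK-SVD implementation; the function names and the kernel bandwidth `sigma` are assumptions chosen for the example.

```python
import numpy as np

def correntropy(e, sigma=1.0):
    # Empirical correntropy of residuals e under a Gaussian kernel:
    # V(e) = (1/N) * sum_i exp(-e_i^2 / (2 * sigma^2)).
    # MCC maximizes this quantity instead of minimizing sum_i e_i^2 (MSE).
    return np.mean(np.exp(-e**2 / (2 * sigma**2)))

def hq_weights(e, sigma=1.0):
    # Half-Quadratic reformulation: at each iteration the MCC problem
    # reduces to a weighted least-squares problem with these weights,
    # which shrink toward 0 as |e_i| grows, suppressing outliers.
    return np.exp(-e**2 / (2 * sigma**2))

# Small residuals from inliers, plus one gross outlier.
e = np.array([0.1, -0.2, 0.05, 8.0])
w = hq_weights(e, sigma=1.0)
# The outlier (e = 8.0) receives a weight near zero, so it barely
# influences the weighted update; under plain MSE its squared error
# would dominate the objective.
```

Under MSE, the single outlier contributes 64.0 to the squared-error sum versus about 0.06 from all three inliers combined; under the correntropy kernel its weight is effectively zero, which is the robustness property the abstract refers to.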