Sparse representations of signals have received a great deal of attention in recent years, and the sparse representation classifier has recently been applied to speaker recognition. This approach represents the GMM mean supervector of an unknown speaker as a sparse linear combination of an over-complete dictionary built from the GMM supervectors of many speaker models; ℓ1-norm minimization then concentrates the non-zero coefficients at the dictionary entries belonging to the unknown speaker's class. Here this approach is tested on large databases, channel-/session-variability compensation is introduced, and the resulting system is fused with a GMM-SVM system. Evaluations on the NIST 2001 SRE and NIST 2006 SRE databases show that fusing the MFCC GMM-Sparse Representation Classifier (GMM-SRC) system with an MFCC UBM-GMM system (for NIST 2001 SRE) or an MFCC GMM-SVM system (for NIST 2006 SRE) yields absolute EER gains of 1.27% and 0.25%, respectively.
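The sparse representation classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes ℓ2-normalized supervector columns in the dictionary, solves the ℓ1-regularized least-squares problem with a basic ISTA loop (the paper does not specify this solver; any ℓ1 minimizer would do), and applies the standard SRC decision rule of choosing the class whose coefficients give the smallest reconstruction residual. The function names `ista_lasso` and `src_decide` are hypothetical.

```python
import numpy as np

def ista_lasso(D, y, lam=0.01, iters=500):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative soft thresholding.

    D : (d, n) dictionary whose columns are ℓ2-normalized GMM supervectors.
    y : (d,) test-utterance supervector.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant (squared spectral norm)
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)              # gradient of the quadratic data term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def src_decide(D, labels, y, lam=0.01):
    """SRC rule: keep only each class's coefficients and pick the class
    whose partial reconstruction has the smallest residual."""
    x = ista_lasso(D, y, lam)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    resid = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
             for c in classes]
    return classes[int(np.argmin(resid))]

# Toy demo with synthetic "supervectors": 3 speakers, 4 exemplars each.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 12))
D /= np.linalg.norm(D, axis=0)             # ℓ2-normalize dictionary columns
labels = [0] * 4 + [1] * 4 + [2] * 4
y = D[:, 4:8] @ np.array([0.5, 0.5, 0.5, 0.5])   # test vector from speaker 1
print(src_decide(D, labels, y))
```

Because the test vector lies in the span of speaker 1's exemplars, its class-1 residual is near zero and SRC returns that label; this mirrors how the ℓ1 solution singles out the true speaker's dictionary entries.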