In recent times, sparse representation classification (SRC) has received considerable attention in many signal processing domains, including language identification (LID). Traditionally, the dictionary in SRC is designed to be overcomplete. In SRC-based LID systems that use GMM mean supervectors as the language representation, the resulting dictionary is undercomplete due to the lack of data. In contrast, when lower-dimensional i-vectors are used, an overcomplete dictionary can be achieved. In this work, we examine the concern about whether sparse coding can succeed with an undercomplete dictionary. Experimental studies on the NIST LRE 2007 dataset show that the performance with the undercomplete dictionary turns out to be better than that with the overcomplete dictionary, both with and without channel compensation.
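To make the SRC setting concrete, the following is a minimal sketch of sparse-representation classification with an undercomplete dictionary (more feature dimensions than atoms, as with GMM mean supervectors). It is not the authors' system: the data here are synthetic, the sparse coder is a simple greedy orthogonal matching pursuit, and class assignment uses the standard minimum-residual rule over class-restricted coefficients.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick up to k atoms of D for y."""
    residual = y.copy()
    idx = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in idx:
            idx.append(j)
        # re-fit coefficients over all selected atoms, update residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=3):
    """SRC decision: sparse-code y over D, then assign the class whose
    atoms alone reconstruct y with the smallest residual."""
    x = omp(D, y, k)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        xc = np.where(mask, x, 0.0)          # keep only class-c coefficients
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best, best_res = c, res
    return best

# Synthetic two-class setup: 20-dimensional "supervectors", 4 atoms per
# class, so the 20 x 8 dictionary is undercomplete (rows > columns).
rng = np.random.default_rng(0)
d, per_class = 20, 4
means = [rng.normal(size=d), rng.normal(size=d)]
atoms, labels = [], []
for c, m in enumerate(means):
    for _ in range(per_class):
        a = m + 0.1 * rng.normal(size=d)
        atoms.append(a / np.linalg.norm(a))  # unit-norm dictionary atoms
        labels.append(c)
D = np.stack(atoms, axis=1)

print(src_classify(D, labels, D[:, 0]))  # a class-0 atom -> 0
```

Even though the dictionary is undercomplete, the class-restricted residual rule still discriminates, which is the question the abstract raises and the experiments answer affirmatively.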