Sparse representation of signals for classification is an active research area. Signals can often be compactly represented as a linear combination of atoms in an overcomplete dictionary. Based on this observation, a sparse-representation-based classification (SRC) framework has been proposed for robust face recognition and has gained popularity for various classification tasks. It relies on the underlying assumption that a test sample can be linearly represented by a small number of training samples from the same class. However, SRC implementations ignore the Euclidean distance relationship between samples when learning the sparse representation of a test sample in the given dictionary. To overcome this drawback, we propose an alternative formulation that we assert is better suited for classification tasks. Specifically, a class-dependent sparse representation classifier (cdSRC) is proposed for hyperspectral image classification, which effectively combines the ideas of SRC and the $K$-nearest neighbor ($K$-NN) classifier in a classwise manner to exploit both the correlation and the Euclidean distance relationship between test and training samples. Toward this goal, a unified class membership function is developed, which utilizes residual and Euclidean distance information simultaneously. Experimental results on several real-world hyperspectral data sets show that cdSRC not only dramatically increases classification performance over SRC but also outperforms other popular classifiers, such as the support vector machine.
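To make the SRC baseline concrete, the following is a minimal sketch of sparse-representation-based classification: a test sample is sparsely coded over a dictionary built from training samples (here via a simple greedy orthogonal matching pursuit), and the predicted label is the class whose training atoms yield the smallest reconstruction residual. This is an illustrative implementation with hypothetical function names (`omp`, `src_classify`) and a toy dictionary, not the cdSRC method itself, which further fuses these classwise residuals with $K$-NN Euclidean distance information.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of D
    (columns, assumed unit-norm) to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """SRC decision rule: assign y to the class whose atoms, with the
    coefficients found for them, reconstruct y with the smallest residual."""
    x = omp(D, y, k)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - D[:, mask] @ x[mask])
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# Toy example: class 0 atoms cluster near axis 0, class 1 near axis 1.
D = np.column_stack([
    [1.0, 0.0, 0.0],
    [0.95, 0.05, 0.0],
    [0.0, 1.0, 0.0],
    [0.05, 0.95, 0.0],
])
D /= np.linalg.norm(D, axis=0)  # unit-normalize dictionary atoms
labels = np.array([0, 0, 1, 1])
y = np.array([0.98, 0.02, 0.0])  # a sample lying near class 0
pred = src_classify(D, labels, y, k=2)
```

Note that the classwise residual computed here uses only correlation structure; the Euclidean distance between `y` and each class's training samples plays no role, which is exactly the limitation the proposed cdSRC membership function addresses.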