We describe an automatic method for classifying skin color, independent of lighting and imaging device characteristics, using consumer digital cameras and a simple color calibration target. After color normalization and face detection are performed, the pixels of each face image are clustered in an unsupervised fashion. Pixels likely to be representative of skin color, rather than of distractors such as shadows, specularities, eyes, and lips, are identified by selecting the dominant clusters, i.e., those with a large number of pixels assigned per unit volume. A Gauss mixture model (GMM) of a person's skin color is formed from the pixels belonging to the selected clusters. Given a set of exemplar images with skin color labels assigned by an expert, we show that the label assigned by the same expert to a new, test face image can be predicted by comparing the GMMs of the test image and the exemplars. Specifically, we use the label of the exemplar whose GMM has the smallest KL divergence from that of the test image.
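The final classification step could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it fits a GMM to each image's skin pixels with scikit-learn and, since the KL divergence between two GMMs has no closed form, approximates it by Monte Carlo sampling. All function names, component counts, and the synthetic color data are assumptions for the sketch.

```python
# Hypothetical sketch: classify a test face's skin color by comparing
# per-image GMMs with a Monte Carlo estimate of KL divergence.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fit_gmm(pixels, n_components=3):
    """Fit a GMM to an (N, 3) array of skin-pixel color values."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=0).fit(pixels)

def kl_divergence_mc(p, q, n_samples=5000):
    """Monte Carlo estimate of KL(p || q) between two fitted GMMs:
    draw samples from p and average the log-likelihood ratio."""
    x, _ = p.sample(n_samples)
    return float(np.mean(p.score_samples(x) - q.score_samples(x)))

def classify(test_pixels, exemplars):
    """exemplars: list of (label, fitted GMM) pairs. Return the label
    of the exemplar whose GMM is closest to the test image's GMM."""
    test_gmm = fit_gmm(test_pixels)
    return min(exemplars, key=lambda e: kl_divergence_mc(test_gmm, e[1]))[0]

# Toy demo with synthetic "skin pixel" clouds in RGB.
light = rng.normal([220, 180, 160], 8.0, size=(800, 3))
dark = rng.normal([120, 80, 60], 8.0, size=(800, 3))
exemplars = [("light", fit_gmm(light)), ("dark", fit_gmm(dark))]
test = rng.normal([215, 175, 155], 8.0, size=(400, 3))
print(classify(test, exemplars))
```

In practice the exemplar GMMs would be fit once from the expert-labeled images, and each new face would only require fitting its own GMM and evaluating the KL estimate against each exemplar.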