The ubiquity of personal computing technology has produced an abundance of staggeringly large data sets — the Library of Congress has stored over 160 terabytes of web data, and it is estimated that Facebook alone logs over 25 terabytes of data per day. There is a great need for methods that can elucidate the similarities and dissimilarities among and between groups in these data sets. Clustering is one way to find such groups. In this paper, we propose an approximation method for the fuzzy and possibilistic kernel c-means clustering algorithms. Our approximation constrains the cluster centers to be linear combinations of a randomly selected subset of size m of the n input objects, where m << n. The proposed algorithm requires only an m × n rectangular portion of the full n × n kernel matrix, together with the n diagonal values, resulting in significant memory savings. Furthermore, the computational complexity of the c-means algorithm is substantially reduced. We demonstrate that speedups of up to three orders of magnitude are possible while achieving nearly the same performance as the original kernel c-means algorithm.
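The memory saving described above comes from computing only an m × n block of the kernel matrix (between the m sampled objects and all n objects) plus the n diagonal entries, rather than the full n × n matrix. A minimal NumPy sketch of this sampling step is shown below; the RBF kernel, its parameter `gamma`, and the function name `sampled_kernel` are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

def sampled_kernel(X, m, gamma=1.0, seed=0):
    """Compute the m x n RBF kernel block between m randomly sampled
    objects and all n objects, plus the n diagonal values.
    Illustrative sketch only; kernel choice and parameters are assumed."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)  # random subset of size m

    # Squared Euclidean distances between the sampled rows and all rows
    d2 = ((X[idx, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K_mn = np.exp(-gamma * d2)  # m x n rectangular kernel block
    K_diag = np.ones(n)         # RBF kernel: k(x, x) = 1 for every object
    return idx, K_mn, K_diag

# Only m*n + n kernel entries are stored instead of n*n.
X = np.random.default_rng(1).normal(size=(1000, 4))
idx, K_mn, K_diag = sampled_kernel(X, m=50)
print(K_mn.shape, K_diag.shape)  # (50, 1000) (1000,)
```

With n = 1000 and m = 50, this stores 50,000 + 1,000 kernel values instead of 1,000,000 — the same ratio of savings the abstract describes for much larger n.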