Four feature-preserving two-class clustering methods are investigated in this paper. By preserving certain features of the input data, formulas for computing the two class representatives and the class population percentages are derived. The formulas are expressed in a general form applicable to any dimensionality higher than two. Each investigated method runs in O(N) time for a data set of size N and is therefore much faster than clustering methods that require an N × N dissimilarity matrix. In addition, none of the investigated methods requires initial guesses. Experimental results comparing the four methods are included, on the basis of which two of them are recommended. Further comparisons with the k-means method and with hierarchical clustering methods are also included. The proposed feature-preserving approach was found to be fast, automatic, and suitable for any field requiring fast high-dimensional two-class clustering.
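To illustrate the kind of computation the abstract describes, the sketch below shows one classical feature-preserving two-class split in one dimension: moment preservation, where the two class representatives and population fractions are chosen so that the first three sample moments are preserved exactly. This is only an assumed, illustrative instance of the feature-preserving idea, not necessarily the paper's own formulas or their high-dimensional generalization; note how it runs in O(N) with no initial guesses and no dissimilarity matrix.

```python
import numpy as np

def moment_preserving_split(x):
    """Two-class split of 1-D data preserving the first three moments.

    Illustrative sketch only (Tsai-style moment preservation); assumes
    the data are not constant. O(N): a few passes, no initial guesses.
    Returns the two representatives z0 < z1, the fraction p0 of class 0,
    and a 0/1 label per point.
    """
    x = np.asarray(x, dtype=float)
    m1 = x.mean()
    m2 = (x ** 2).mean()
    m3 = (x ** 3).mean()
    # The two representatives are the roots of z^2 - a*z + b = 0, with
    # a and b chosen so a two-point distribution matches m1, m2, m3.
    a = (m3 - m1 * m2) / (m2 - m1 ** 2)
    b = a * m1 - m2
    disc = np.sqrt(a ** 2 - 4.0 * b)
    z0, z1 = (a - disc) / 2.0, (a + disc) / 2.0
    # Population fraction of class 0, from p0*z0 + (1-p0)*z1 = m1.
    p0 = (z1 - m1) / (z1 - z0)
    # Assign each point to the nearer representative.
    labels = (np.abs(x - z1) < np.abs(x - z0)).astype(int)
    return z0, z1, p0, labels
```

For example, on the data set `[0, 0, 1, 1]` the closed-form solution recovers representatives 0 and 1 with equal population fractions, with no iteration involved; the closed-form character of such formulas is what makes the feature-preserving approach automatic.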