In pattern recognition, principal component analysis (PCA) is one of the best-known feature extraction methods for dimensionality reduction of high-dimensional datasets. Simple-PCA (SPCA), a faster variant of PCA, has been applied effectively through iterative learning. However, when the input data are distributed in a complex way, SPCA may not be efficient because it learns without the class information of the dataset; hence, SPCA cannot be said to be optimal for classification. In this paper, we propose a new learning algorithm that exploits the class information of the dataset. Eigenvectors spanning the eigenspace of the dataset are obtained by calculating the data variations within each class. We derive the proposed algorithm and present experiments on UCI datasets comparing it with SPCA.
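As a rough illustration only (not the authors' algorithm, whose update rule differs), the iterative style of eigenvector estimation that SPCA-like methods rely on can be sketched with power iteration plus deflation on centered data; all function and variable names below are hypothetical:

```python
import numpy as np

def iterative_pca(X, n_components, n_iter=100):
    """Estimate leading eigenvectors of the covariance of X by power
    iteration with deflation -- a sketch of iterative component learning
    in the spirit of SPCA (the exact SPCA update is not reproduced here)."""
    Xc = X - X.mean(axis=0)              # center the data
    R = Xc.copy()                        # residual after deflation
    rng = np.random.default_rng(0)
    components = []
    for _ in range(n_components):
        v = rng.standard_normal(R.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            # one power-iteration step: multiply by the (implicit) covariance
            v_new = R.T @ (R @ v)
            v = v_new / np.linalg.norm(v_new)
        components.append(v)
        # deflate: remove the variance explained by this axis
        R = R - np.outer(R @ v, v)
    return np.array(components)

# toy data whose variance is dominated by the first coordinate
X = np.random.default_rng(1).normal(size=(200, 3)) * [5.0, 1.0, 0.2]
V = iterative_pca(X, 2)
```

Because this unsupervised procedure only chases directions of maximal variance, it ignores class labels entirely, which is the limitation the proposed supervised algorithm addresses.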