Speech is one of the most promising modalities, apart from facial expressions, through which human emotions such as happiness, anger, sadness, and the neutral state can be determined. Researchers have shown that acoustic parameters of a speech signal, such as energy, pitch, and Mel-Frequency Cepstral Coefficients (MFCCs), are vital in determining a person's emotional state. There is an increasing need for new feature selection methods that improve the processing rate and recognition accuracy of the classifier by retaining only the discriminative features. This study investigates various feature selection algorithms used to select optimal features from speech vectors extracted using MFCC. The selected features are then used in the modeling stage.
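As a minimal sketch of the kind of filter-based feature selection the study surveys, the Fisher score is one common criterion for ranking feature dimensions by how well they separate emotion classes (ratio of between-class to within-class variance). The data, labels, and number of selected features below are illustrative assumptions, not the study's actual method or dataset:

```python
def fisher_scores(features, labels):
    """Score each feature dimension by between-class vs. within-class variance.

    features: list of feature vectors (e.g. per-utterance MFCC statistics)
    labels:   emotion label for each vector (hypothetical example labels)
    """
    n_dims = len(features[0])
    classes = sorted(set(labels))
    overall_mean = [sum(v[d] for v in features) / len(features)
                    for d in range(n_dims)]
    scores = []
    for d in range(n_dims):
        between, within = 0.0, 0.0
        for c in classes:
            vals = [v[d] for v, y in zip(features, labels) if y == c]
            mu = sum(vals) / len(vals)
            # between-class scatter: class means far from the overall mean
            between += len(vals) * (mu - overall_mean[d]) ** 2
            # within-class scatter: spread of samples around their class mean
            within += sum((x - mu) ** 2 for x in vals)
        scores.append(between / within if within > 0 else 0.0)
    return scores


def select_top_k(features, labels, k):
    """Return indices of the k most discriminative feature dimensions."""
    scores = fisher_scores(features, labels)
    return sorted(range(len(scores)), key=lambda d: scores[d], reverse=True)[:k]


# Toy example: dimension 0 separates the two emotions, dimension 1 is noise.
features = [[1.0, 5.0], [1.1, 4.9], [5.0, 5.1], [5.1, 5.0]]
labels = ["sad", "sad", "angry", "angry"]
print(select_top_k(features, labels, k=1))  # dimension 0 ranks highest
```

In practice, this ranking would be applied to statistics (means, variances) of MFCC vectors pooled over each utterance before the modeling stage.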