This paper presents a pragmatic study of feature subset selection evaluators. In data mining, dimensionality reduction during data preprocessing plays a vital role in improving the performance of machine learning algorithms, and researchers have proposed many techniques to achieve it. Besides contributing to dimensionality reduction, feature subset selection yields a significant improvement in accuracy, reduces the false prediction ratio, and lowers the time complexity of building the learning model, as a result of removing redundant and irrelevant attributes from the original dataset. This study analyzes the performance of the CFS, Consistency, and Filtered attribute subset evaluators for dimensionality reduction across a wide range of test datasets and learning algorithms, namely the probability-based Naive Bayes, the tree-based C4.5 (J48), and the instance-based IB1.
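The evaluator and classifier names used here (Cfs, Consistency, Filtered, J48, IB1) match those in the Weka workbench, so a minimal sketch of the kind of experiment the study describes could look like the following, assuming Weka's Java API, a CFS subset evaluator with best-first search, and a placeholder dataset.arff file; the other evaluators and classifiers would be swapped in the same way.

```java
import java.util.Random;

import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.AttributeSelection;

public class SubsetEvaluatorSketch {
    public static void main(String[] args) throws Exception {
        // Load a dataset (placeholder path) and mark the last attribute as the class.
        Instances data = DataSource.read("dataset.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Reduce dimensionality with the CFS subset evaluator and best-first search;
        // ConsistencySubsetEval or FilteredSubsetEval could be substituted here.
        AttributeSelection filter = new AttributeSelection();
        filter.setEvaluator(new CfsSubsetEval());
        filter.setSearch(new BestFirst());
        filter.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, filter);

        // Evaluate a learner on the reduced data via 10-fold cross-validation;
        // J48 or IB1 could be substituted for NaiveBayes.
        Evaluation eval = new Evaluation(reduced);
        eval.crossValidateModel(new NaiveBayes(), reduced, 10, new Random(1));
        System.out.println("Attributes after selection: " + reduced.numAttributes());
        System.out.println(eval.toSummaryString());
    }
}
```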