Face recognition approaches based on deep convolutional neural networks (CNNs) have dominated the field. The success of CNNs is attributed to their ability to learn rich image representations, but training a CNN involves estimating millions of parameters and requires a very large number of annotated training images. A widely used alternative is to fine-tune a CNN that has been pre-trained on a large set of labeled images. However, we show that fine-tuning pre-trained CNNs cannot provide satisfactory face recognition performance when the training and testing datasets differ substantially. To address this problem, we propose to improve the face recognition performance of CNNs by exploiting non-CNN features. Extensive experiments are conducted on the LFW and FRGC databases using the pre-trained CNN model VGG-Face. The results show that the complementary information contained in non-CNN features greatly improves the face verification rate/accuracy of CNNs on both databases. Furthermore, we show that non-CNN features are more effective than fine-tuning in enhancing the performance of pre-trained CNNs.
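The abstract does not specify how the CNN and non-CNN features are combined, so the following is only a minimal sketch of one common scheme, feature-level fusion with cosine-similarity verification. The descriptor names (a VGG-Face-style embedding and an LBP-style histogram) and the random vectors standing in for them are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def l2_normalize(v, eps=1e-10):
    # Scale a descriptor to unit length so each modality
    # contributes comparably after concatenation.
    return v / (np.linalg.norm(v) + eps)

def fuse_features(cnn_feat, non_cnn_feat):
    # Feature-level fusion: L2-normalize each descriptor,
    # then concatenate the CNN and non-CNN representations.
    return np.concatenate([l2_normalize(cnn_feat),
                           l2_normalize(non_cnn_feat)])

def verify(feat_a, feat_b, threshold=0.5):
    # Face verification: cosine similarity between fused
    # descriptors, thresholded into a same/different decision.
    a, b = l2_normalize(feat_a), l2_normalize(feat_b)
    score = float(np.dot(a, b))
    return score, score >= threshold

# Toy demo with random stand-ins for real descriptors.
rng = np.random.default_rng(0)
cnn_a = rng.standard_normal(4096)   # stand-in for a VGG-Face embedding
lbp_a = rng.standard_normal(256)    # stand-in for an LBP histogram
fused_a = fuse_features(cnn_a, lbp_a)

# A slightly perturbed second image of the "same" face.
fused_b = fuse_features(cnn_a + 0.05 * rng.standard_normal(4096),
                        lbp_a + 0.05 * rng.standard_normal(256))
score, same = verify(fused_a, fused_b)
```

Score-level fusion (averaging per-modality similarity scores instead of concatenating descriptors) is an equally plausible alternative; per-modality L2 normalization is what keeps the high-dimensional CNN embedding from dominating the shorter hand-crafted descriptor.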