This paper first introduces the curse of dimensionality in machine learning, which arises from the ever-growing volume and dimensionality of data, and describes the research focus of feature selection. Formal definitions of feature relevance and redundancy are then presented. Based on entropy and mutual information from information theory, we discuss how to measure the relevance between features. The main contributions of this paper can be summarized as follows: (i) a general framework for feature selection is proposed, in which the selected attributes are made explicit and the selection of attributes is separated from their subsequent use; (ii) a cross-propagation method for generating attribute subsets is proposed; (iii) a hybrid feature selection method based on a genetic algorithm and information gain (HFSAGI) is proposed. Experimental results show that HFSAGI achieves higher classification accuracy than competing methods when the number of attributes is large.
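To make the information-theoretic measures and the hybrid search concrete, the following is a minimal sketch, not the paper's HFSAGI implementation: it computes entropy and mutual information for discrete features, and runs a generic genetic algorithm over bit-mask feature subsets whose fitness rewards per-feature information gain and penalizes subset size. All function names, GA parameters, and the penalty weight are our own illustrative assumptions.

```python
import random
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy H(X) of a discrete feature, in bits."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(X, Y); equals the information
    gain of feature X with respect to labels Y."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def fitness(mask, X, y):
    """Sum of each selected feature's information gain w.r.t. the
    labels, minus a small penalty per selected feature (our choice)."""
    cols = [j for j, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    gain = sum(mutual_information([row[j] for row in X], y) for j in cols)
    return gain - 0.05 * len(cols)

def ga_select(X, y, pop_size=16, generations=25, p_mut=0.1, seed=0):
    """Generic GA over bit masks: tournament selection, one-point
    crossover, bit-flip mutation, elitism. A sketch, not HFSAGI."""
    rng = random.Random(seed)
    n = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=lambda m: fitness(m, X, y))
    for _ in range(generations):
        def pick():  # binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a, X, y) >= fitness(b, X, y) else b
        nxt = [best[:]]  # elitism: carry the best mask forward
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n) if n > 1 else 0
            child = p1[:cut] + p2[cut:]  # one-point crossover
            child = [1 - b if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=lambda m: fitness(m, X, y))
    return best

# Toy data: feature 0 equals the class label, features 1 and 2 are noise.
y = [0, 0, 0, 0, 1, 1, 1, 1]
X = [[y[i], [0, 1, 0, 1, 0, 1, 0, 1][i], [0, 0, 1, 1, 0, 0, 1, 1][i]]
     for i in range(8)]
best = ga_select(X, y)
```

On this toy data the informative feature carries one bit of mutual information with the labels while the noise features carry none, so the size penalty steers the search toward the single-feature subset.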