A better understanding of human face perception can help create more intelligent computer systems for recognizing facial expressions. In this paper, we investigated how expression-dependent facial features (i.e., the eye area and mouth area) and expression-independent facial features contribute to judgments of smiling and angry expressions in the human expression system, using a high-level facial adaptation paradigm. We used the eye area, the mouth area, the combination of the eye and mouth areas, and the holistic face as adapting stimuli, and tested with holistic expression images. The results revealed that the eye area and the mouth area induced approximately equal expression aftereffects when presented separately, suggesting that these two areas activate the expression-processing neural system to a similar degree. Furthermore, the combination of the eye and mouth areas did not generate a significantly stronger expression aftereffect than either area in isolation, suggesting no accumulation effect across different expression-dependent facial features. Finally, the holistic face produced a stronger adaptation effect than any single expression-dependent facial feature, indicating that expression-independent facial features also contribute to expression recognition. Our findings provide further insight into the neural mechanisms of the human visual system and have implications for computer systems that simulate human expression recognition.