Decision tree learning is one of the most widely used and practical methods for inductive inference. A fundamental issue in decision tree learning is the choice of attribute selection measure, and information gain is the most popular choice. However, information gain has a notable disadvantage: it is biased towards selecting attributes with many values. Motivated by this fact, the gain ratio measure penalizes attributes with many values by incorporating a term called split information. Unfortunately, gain ratio suffers from a practical problem of its own: its denominator, the split information, is sometimes zero or very small. In this paper, we propose an improved attribute selection measure called average gain, which penalizes attributes with many values by dividing the information gain by the number of attribute values. We experimentally tested its effectiveness on 36 UCI data sets.
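To make the contrast between the three measures concrete, the following is a minimal Python sketch using the standard textbook definitions of entropy, information gain, and split information; the division by the number of distinct attribute values follows the description of average gain above, though the paper's exact formulation may differ in detail.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def _partition(values, labels):
    """Group class labels by the attribute value of each example."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    return groups

def information_gain(values, labels):
    """Entropy reduction achieved by splitting on the attribute."""
    n = len(labels)
    groups = _partition(values, labels)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

def split_information(values):
    """Entropy of the attribute-value distribution itself."""
    return entropy(values)

def gain_ratio(values, labels):
    """Information gain divided by split information.

    Undefined (division by zero) when every example shares
    one attribute value -- the practical problem noted above."""
    return information_gain(values, labels) / split_information(values)

def average_gain(values, labels):
    """Information gain divided by the number of distinct values."""
    return information_gain(values, labels) / len(set(values))
```

A toy example shows the bias being penalized: for labels `['yes', 'yes', 'no', 'no']`, a two-valued attribute `['x', 'x', 'y', 'y']` and an ID-like four-valued attribute `['1', '2', '3', '4']` both achieve an information gain of 1.0, but average gain scores them 0.5 and 0.25 respectively, favoring the attribute with fewer values.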