C4.5 is one of the best-known inductive learning algorithms. It builds decision trees and can also derive production rules from them; production rules are a common formalism for representing and applying knowledge in many real-world domains. By default, C4.5 generates its production rules from raw (unpruned) trees, and the resulting ruleset has been shown to be usually both simpler and more accurate than the decision tree from which it was formed. This research shows that generating production rules from pruned trees instead usually yields significantly simpler rulesets than generating them from raw trees. This reduction in complexity is achieved without reducing prediction accuracy, and the new approach also requires significantly less induction time than rule generation from raw trees. This paper uses experiments in a wide variety of natural domains to illustrate these points. It also shows that, as the training set size increases, the new method scales up better than the old one in terms of ruleset size, number of rules, and learning time, an important characteristic for learning algorithms used in data mining.
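The rule-generation step referred to above starts from the observation that every root-to-leaf path in a decision tree can be read as one production rule: the conjunction of the tests along the path forms the antecedent, and the leaf's class label forms the consequent. The sketch below illustrates this path-to-rule conversion on a toy weather tree in the style of Quinlan's examples; the names (`Leaf`, `Split`, `extract_rules`) and the tree itself are illustrative assumptions, not C4.5's actual data structures, and the sketch omits the rule simplification and ranking that C4.5 performs afterwards.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union


@dataclass
class Leaf:
    """A leaf node carrying a class label."""
    label: str


@dataclass
class Split:
    """An internal node: a list of (test, subtree) branches."""
    branches: List[Tuple[str, "Tree"]]


Tree = Union[Leaf, Split]


def extract_rules(node: Tree, conds: Tuple[str, ...] = ()) -> List[Tuple[str, str]]:
    """Walk every root-to-leaf path; each path yields one production rule
    (antecedent string, class label)."""
    if isinstance(node, Leaf):
        return [(" AND ".join(conds) or "TRUE", node.label)]
    rules: List[Tuple[str, str]] = []
    for test, subtree in node.branches:
        rules.extend(extract_rules(subtree, conds + (test,)))
    return rules


# Toy tree (hypothetical data, in the spirit of the classic weather example).
tree: Tree = Split([
    ("outlook = sunny", Split([
        ("humidity <= 75", Leaf("play")),
        ("humidity > 75", Leaf("don't play")),
    ])),
    ("outlook = overcast", Leaf("play")),
    ("outlook = rain", Leaf("don't play")),
])

for antecedent, label in extract_rules(tree):
    print(f"IF {antecedent} THEN {label}")
```

A pruned tree has fewer leaves, so this conversion immediately yields fewer, shorter rules, which is the intuition behind the complexity and induction-time savings reported above.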