A promising approach to Bayesian classification exploits frequent patterns, i.e., patterns that occur frequently in the training data set, to estimate the Bayesian probability. Pattern-based Bayesian classification focuses on building reliable probability approximations from a subset of frequent patterns tailored to a given test case. This paper proposes a novel and effective approach to estimating the Bayesian probability. Unlike previous approaches, the proposed Entropy-based Bayesian classifier, named EnBay, uses an entropy-based evaluator to select the minimal set of long, non-overlapping patterns that best complies with a conditional-independence model. Furthermore, the probability approximation is tailored separately to each class. An extensive experimental evaluation, performed on both real and synthetic data sets, shows that EnBay is significantly more accurate than most state-of-the-art classifiers, both Bayesian and non-Bayesian.