An effective classification model consists mostly of classification rules with high confidence and large support. However, such a model may fail in real applications, because some objects or events are important yet rare and difficult to predict. In this work, we consider classification rules that are relatively abnormal with respect to rules having high confidence and large support. We present a method for computing both normal and abnormal classification models in a single phase, and through experiments on UCI datasets we show that abnormal models play an important complementary role to normal models in classification.
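The support and confidence of a classification rule, as used above, can be sketched as follows. This is a minimal illustration assuming the standard association-rule definitions (support = fraction of records matching both antecedent and class; confidence = fraction of antecedent-matching records with that class); the function name and toy dataset are hypothetical, not from the paper.

```python
def support_confidence(records, antecedent, label):
    """Compute (support, confidence) of the rule antecedent -> label.

    records: list of (feature_dict, class_label) pairs.
    antecedent: dict of feature -> required value.
    """
    # Records whose features satisfy the antecedent.
    match_a = [r for r in records
               if all(r[0].get(k) == v for k, v in antecedent.items())]
    # Of those, records that also carry the predicted class label.
    match_both = [r for r in match_a if r[1] == label]
    support = len(match_both) / len(records) if records else 0.0
    confidence = len(match_both) / len(match_a) if match_a else 0.0
    return support, confidence


# Toy dataset: a rule can have high confidence but small support,
# which is the kind of rare-but-important pattern discussed above.
data = [
    ({"outlook": "sunny"}, "play"),
    ({"outlook": "sunny"}, "stay"),
    ({"outlook": "rain"}, "stay"),
    ({"outlook": "rain"}, "stay"),
]
s, c = support_confidence(data, {"outlook": "rain"}, "stay")
# support = 2/4 = 0.5, confidence = 2/2 = 1.0
```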