AdaBoost is one of the most influential Boosting algorithms. It has a solid theoretical foundation and has achieved great success in practical applications. AdaBoost can boost a weak learning algorithm whose accuracy is only slightly better than random guessing into an arbitrarily accurate strong learning algorithm, providing a new approach and a new design philosophy for learning algorithms. This paper first introduces how Boosting, which was merely a conjecture when first proposed, was proved correct, and how this proof led to the origin of the AdaBoost algorithm. Second, the training and generalization errors of AdaBoost are analyzed to explain why AdaBoost can successfully improve the accuracy of a weak learning algorithm. Third, different theoretical models for analyzing AdaBoost are given, together with many variants derived from these models. Fourth, extensions of binary-class AdaBoost to multiclass AdaBoost are described. Applications of the AdaBoost algorithm are also introduced. Finally, directions that merit further study are discussed. For Boosting theory, these include deriving a tighter generalization error bound and finding a more precise weak learning condition for multiclass problems. For AdaBoost, the stopping conditions, ways to enhance noise robustness, and how to improve accuracy by optimizing the diversity of the base classifiers remain open questions worthy of in-depth research.
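To make the boosting claim above concrete, the following is a minimal sketch of discrete binary AdaBoost (labels in {-1, +1}); it is illustrative only, not the paper's code, and it assumes scikit-learn decision stumps as the weak learner and the hypothetical helper names `adaboost_fit` and `adaboost_predict`.

```python
# A minimal sketch of discrete binary AdaBoost; illustrative assumption,
# using depth-1 decision trees (stumps) from scikit-learn as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Train AdaBoost on labels y in {-1, +1}; returns (stumps, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial distribution D_1(i) = 1/n
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)    # weak learner trained on weighted data
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))       # weighted training error eps_t
        if err >= 0.5:                      # weak learning condition violated; stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # base-classifier weight
        w *= np.exp(-alpha * y * pred)      # up-weight misclassified examples
        w /= w.sum()                        # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    """Strong classifier H(x) = sign(sum_t alpha_t * h_t(x))."""
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)
```

Each round re-weights the training set so that the next weak learner concentrates on previously misclassified examples; the final strong classifier is the weighted vote of all rounds.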