To reduce the estimation error introduced by insufficient training data, the parameters of probabilistic models are usually smoothed by techniques such as Good–Turing smoothing and back-off smoothing. However, smoothing alone cannot significantly enhance the discriminative power of the model. Therefore, in this paper an adaptive learning method is adopted to enhance the discriminative power of a probabilistic model. In addition, a novel tying scheme is proposed to tie the unreliable parameters that never or rarely occur in the training data, so that these parameters have more chance to be adjusted by the learning procedure. In the task of tagging the Brown Corpus, this approach greatly reduces the number of parameters from 578 759 to 27 947 and reduces the error rate on ambiguous words (i.e. words with more than one possible part of speech) from 5.48% to 4.93%, corresponding to a 10.4% error reduction rate. Furthermore, a probabilistic model is usually simplified so that its parameters can be estimated reliably from the limited amount of training data. As a consequence, the modelling error increases because some discriminative features are sacrificed in simplifying the model. Therefore, a probabilistic classification model is proposed to reduce the modelling error by better exploiting the discriminative features selected by the Classification and Regression Tree (CART) method. The proposed model achieves a 19.16% error reduction rate for the top 30 error-contributing words, which account for 31.64% of the overall tagging errors.
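The parameter-tying idea can be sketched as follows: parameters whose training counts fall below a reliability threshold are mapped to a single shared slot, so their statistics are pooled and can be adjusted together by a later learning pass. This is a minimal illustrative sketch; the threshold value, the `<TIED>` key, and the toy word/tag contexts are assumptions for illustration, not the paper's actual tying scheme.

```python
from collections import Counter

def tie_parameters(counts, threshold=2):
    """Keep a parameter's own key if it occurred at least `threshold`
    times in training; otherwise map it to one shared tied key, so
    unreliable parameters pool their statistics (illustrative sketch)."""
    return {p: (p if c >= threshold else "<TIED>") for p, c in counts.items()}

# Hypothetical training counts for word/tag parameters.
counts = Counter({"the/DT": 120, "run/VB": 3, "run/NN": 1, "zyx/NN": 0})
tying = tie_parameters(counts, threshold=2)

# Reliable parameters keep their identity; rare ones share a single slot,
# shrinking the number of free parameters to estimate.
n_free = len(set(tying.values()))
```

On the Brown Corpus task the analogous reduction is far more dramatic (578 759 parameters down to 27 947), since most word/tag contexts occur rarely or never in training.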