Consider the partitioned linear regression model $\mathcal{A} = \{y,\, X_1\beta_1 + X_2\beta_2,\, \sigma^2 V\}$ and its four reduced linear models, where $y$ is an $n \times 1$ observable random vector with expectation $\mathrm{E}(y) = X\beta$ and dispersion matrix $\mathrm{Var}(y) = \sigma^2 V$, where $\sigma^2$ is an unknown positive scalar, $V$ is a known $n \times n$ symmetric nonnegative definite matrix, $X = (X_1 : X_2)$ is a known $n \times (p+q)$ design matrix with $\mathrm{rank}(X) = r \le p+q$, and $\beta = (\beta_1' : \beta_2')'$, with $\beta_1$ and $\beta_2$ being $p \times 1$ and $q \times 1$ vectors of unknown parameters, respectively. In this article, formulae are given for the differences between the best linear unbiased estimator of $M_2X_1\beta_1$ under the model $\mathcal{A}$ and the best linear unbiased estimators of $M_2X_1\beta_1$ under the reduced linear models of $\mathcal{A}$, where $M_2 = I - X_2X_2^{+}$. Furthermore, necessary and sufficient conditions are established for the equalities between the best linear unbiased estimator of $M_2X_1\beta_1$ under the model $\mathcal{A}$ and those under its reduced linear models. Lastly, we also study the connections between the model $\mathcal{A}$ and its linear transformation model.
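As background, a minimal sketch of the standard characterization on which comparisons of this kind are usually based (a well-known fact about the general Gauss–Markov model, not a statement of this article's specific formulae): for an estimable parametric function $K\beta$, a linear statistic $Gy$ is its best linear unbiased estimator under $\mathcal{A}$ if and only if
$$ G\,(X : V X^{\perp}) = (K : 0), $$
where $X^{\perp}$ is any matrix whose columns span the orthogonal complement of the column space of $X$; one may take, for instance, $X^{\perp} = I - XX^{+}$, in analogy with the projector $M_2 = I - X_2X_2^{+}$ used above. Equalities between estimators under $\mathcal{A}$ and under its reduced models are typically obtained by comparing the solution sets of this equation for the respective models.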