Recently, we proposed canonical inner bounds for a broad class of multiterminal source coding problems, subsuming an array of known results. Computing those bounds is important in view of open tightness questions, their role as benchmarks, and other contexts. Against this backdrop, since the complexity of computational algorithms grows exponentially in the alphabet sizes of the auxiliary random variables, bounding those sizes remains an important task. In the existing literature, each auxiliary alphabet size is bounded by the corresponding source alphabet size plus a positive integer constant. In a significant improvement, we give a tight bound by showing that this constant can in fact be set to zero.