In this paper we study the distributional consensus problem arising from a Markov chain generated by a decentralized control strategy. In contrast to treatments that study strong, mean-square, or weak notions of consensus, we analyze the algorithm through the convergence of the associated Markov chain to its stationary distribution. Studying convergence to the stationary measure enables weaker notions of consensus, ones that depend on the topology used to describe the limiting distribution. Under reasonable assumptions on the connectivity of the communication network, we prove that the Markov chain associated with the closed-loop controller is φ-irreducible and strongly aperiodic. Moreover, we give sufficient conditions under which the controlled Markov chain converges in total variation norm to its stationary distribution at a geometric rate. The analysis is made rigorous by constructing a stochastic Lyapunov function and verifying a suitable drift condition.
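For readers unfamiliar with the machinery invoked above, the drift condition behind geometric ergodicity results of this kind is typically of the Foster–Lyapunov form; the following is a standard statement of that condition, not the paper's specific construction, and the symbols ($V$, $C$, $\lambda$, $b$) are generic:

```latex
% Geometric drift condition (Foster--Lyapunov form):
% there exist a function V : X -> [1, \infty), a small set C,
% and constants \lambda \in (0,1), b < \infty, such that
PV(x) \;:=\; \int_{X} V(y)\, P(x, \mathrm{d}y)
      \;\le\; \lambda\, V(x) \;+\; b\,\mathbf{1}_{C}(x),
      \qquad \forall\, x \in X.
% Under \varphi-irreducibility and aperiodicity, this yields
% geometric convergence in total variation: for some R < \infty,
% \rho \in (0,1),
\big\| P^{n}(x, \cdot) - \pi \big\|_{\mathrm{TV}}
      \;\le\; R\, V(x)\, \rho^{n}.
```

Here $P$ denotes the transition kernel of the controlled chain and $\pi$ its stationary distribution; establishing an inequality of this shape for a concrete stochastic Lyapunov function $V$ is what delivers the geometric rate claimed in the abstract.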