Quantization is an increasingly important operation, both because of its applications in networked control and because of the computational benefits of working with finite state spaces. In this paper, we consider quantized approximations of stationary policies for discrete-time Markov decision processes with weakly continuous transition probability kernels, under both discounted and average cost criteria. We show that deterministic stationary quantizer policies can approximate optimal deterministic stationary policies with arbitrary precision under mild technical conditions. We thus extend recent and earlier results in the literature that impose more stringent continuity conditions on the transition kernels, such as setwise continuity, which limit their applicability. In particular, the weaker continuity requirements allow for the study of partially observable Markov decision processes under practical conditions.
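To fix ideas, the following sketch records what such an approximation statement typically looks like in the discounted-cost case; the notation (state $x$, actions $a_t$, stage cost $c$, discount factor $\beta \in (0,1)$, policy $f$) is assumed here for illustration and is not taken from the abstract itself. Writing the discounted cost of a deterministic stationary policy $f$ as
\[
J_\beta(f,x) \;=\; \mathbb{E}^{f}_{x}\!\Big[\sum_{t=0}^{\infty} \beta^{t}\, c(x_t, a_t)\Big],
\qquad
J_\beta^{*}(x) \;=\; \inf_{f} J_\beta(f,x),
\]
the approximation result asserts that for every $\varepsilon > 0$ there exists a deterministic stationary policy $f_\varepsilon$ taking only finitely many actions (a quantizer policy) such that
\[
J_\beta(f_\varepsilon, x) \;\le\; J_\beta^{*}(x) + \varepsilon
\quad \text{for all states } x,
\]
with an analogous statement under the average cost criterion.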