Bayesian networks (BNs) compactly represent joint probability distributions and enable efficient inference. Although the Dempster-Shafer (DS) belief-theoretic framework captures a wider class of data imperfections, its utility in such graphical models is limited, mainly because it requires maintaining a basic probability assignment (BPA) over the entire power set of the propositions of interest. In this paper, we introduce a simpler BPA that can still capture many types of imperfections commonly encountered in practice. This BPA is then used to develop the DS-BN, a graphical dependency model that represents the joint belief distribution. We show how the DS-BN can carry out inference efficiently within the DS-theoretic framework. Its utility is illustrated by modeling a problem involving missing values and comparing the resulting inferences with those obtained via a BN whose parameters are learned using the EM algorithm.