In human-robot interaction, trust is one of the main factors required for effective interaction, yet few models delineate how trust develops in real-world scenarios. Reshaping one of these models, we show how a probabilistic framework based on Bayesian Networks (BNs) can incorporate the reliability of information sources into the decision-making process of artificial systems. Furthermore, using a developmental approach, we gain insight into how children estimate people's reliability and how aspects of the Theory of Mind (ToM) can affect that estimation. To test the model, we reproduced a developmental experiment in a computational simulation and embedded the BNs in an artificial agent. The simulation results are in line with the real data, confirming that BNs have the potential to serve as trust evaluator modules in robotic systems.
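The core idea of treating informant reliability as a latent variable updated by Bayes' rule can be sketched as follows. This is a minimal illustration, not the authors' model: the function name, the two likelihood parameters, and all numeric values are illustrative assumptions.

```python
def update_reliability(prior_reliable,
                       p_correct_if_reliable=0.9,
                       p_correct_if_unreliable=0.5,
                       observed_correct=True):
    """Posterior P(reliable | observation) via Bayes' rule.

    An informant is either reliable (correct with probability
    p_correct_if_reliable) or unreliable (correct at chance level,
    p_correct_if_unreliable). Values are illustrative.
    """
    if observed_correct:
        num = p_correct_if_reliable * prior_reliable
        den = num + p_correct_if_unreliable * (1 - prior_reliable)
    else:
        num = (1 - p_correct_if_reliable) * prior_reliable
        den = num + (1 - p_correct_if_unreliable) * (1 - prior_reliable)
    return num / den

# An agent starts with a uniform prior over the informant's
# reliability and observes three correct claims in a row.
belief = 0.5
for _ in range(3):
    belief = update_reliability(belief)
print(round(belief, 3))  # belief in the informant rises toward 1
```

In a full BN, this single reliability node would be connected to nodes representing the informant's claims and the agent's own perceptual evidence, so that trust in a source directly weights how much its information influences the agent's decisions.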