Meta-analysis refers to the quantitative synthesis of evidence from a set of related studies. Inference based on naive use of meta-analysis may be erroneous, however, due to publication bias: the tendency of investigators or editors to base decisions about the submission or acceptance of manuscripts on the strength of a study's findings. Weighted distributions are ideally suited to modeling this phenomenon, since the weight function is proportional to the probability that a measurement (here, the result of a study) is observed (published). Models induced by several competing weight functions, including one that does not account for publication bias, are compared using the education data of Hedges and Olkin (1985). This allows us to investigate the sensitivity of the overall effect estimates across a range of models. The models are fit hierarchically from a Bayesian perspective, with non-informative priors so that the data drive the inference. Bayesian calculations are carried out using Markov chain Monte Carlo methods, including Gibbs sampling and the Metropolis algorithm, together with Monte Carlo estimation. Several questions of interest are posed, and possible solutions suggested.
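To make the weighted-distribution idea concrete, the following is a minimal sketch (not the paper's actual model) of a selection model with a step weight function: each study estimate is normal around the overall effect, but a non-significant result is published only with some relative weight. The weight function's threshold (z = 1.96), the fixed relative weight gamma = 0.3, and the flat prior are illustrative assumptions; the overall effect is then sampled with a simple Metropolis algorithm.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_lik(theta, ys, sigmas, gamma=0.3, z=1.96):
    """Log-likelihood of a simple selection (weighted-distribution) model.

    Each study estimate y_i ~ N(theta, sigma_i^2), but a non-significant
    result (|y_i / sigma_i| < z) is observed (published) only with relative
    weight gamma. The density of an observed y_i is therefore
    w(y_i) * N(y_i; theta, sigma_i^2) / E[w(Y_i)], where the denominator is
    the normalizing constant of the weighted distribution.
    """
    ll = 0.0
    for y, s in zip(ys, sigmas):
        # Normal log-density of the observed estimate
        ll += -0.5 * math.log(2.0 * math.pi * s * s) - (y - theta) ** 2 / (2.0 * s * s)
        # Step weight applied to this observation
        w = 1.0 if abs(y / s) >= z else gamma
        # Normalizing constant E[w(Y)] under N(theta, s^2):
        # P(significant) * 1 + P(not significant) * gamma
        p_sig = 1.0 - (norm_cdf((z * s - theta) / s) - norm_cdf((-z * s - theta) / s))
        ll += math.log(w) - math.log(p_sig + gamma * (1.0 - p_sig))
    return ll

def metropolis(ys, sigmas, n_iter=5000, step=0.2, seed=1):
    """Random-walk Metropolis sampler for theta under a flat prior,
    so the acceptance ratio reduces to a ratio of likelihoods."""
    random.seed(seed)
    theta = 0.0
    cur = log_lik(theta, ys, sigmas)
    draws = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        new = log_lik(prop, ys, sigmas)
        if math.log(random.random()) < new - cur:
            theta, cur = prop, new
        draws.append(theta)
    return draws
```

Comparing the posterior mean of theta from this sampler with the naive precision-weighted mean illustrates the sensitivity question raised above: when small or non-significant effects are underrepresented in the published record, the selection model typically pulls the overall effect estimate toward zero.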