We present a new fully Bayesian approach to language modeling based on shared Dirichlet priors. The model introduces Dirichlet distributions to represent the uncertainty of n-gram parameters during both training and test. Given a set of training data, the marginal likelihood over n-gram probabilities takes the form of a linearly interpolated n-gram model. The hyperparameters of the Dirichlet distributions are interpreted as prior backoff information shared across a group of n-gram histories. We estimate the shared hyperparameters by maximizing the marginal likelihood of the n-grams given the training data. This Bayesian language model is thus closely connected to smoothed language models. Experimental results show that the proposed method outperforms competing methods in both perplexity and word error rate.
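The linearly interpolated form described above can be sketched as follows. This is a minimal illustration, not the paper's estimator: it assumes a bigram model whose Dirichlet prior mean is the lower-order (unigram) backoff distribution, and all function and variable names are hypothetical.

```python
def dirichlet_predictive(counts, backoff, alpha):
    """Predictive probability p(w | h) under a Dirichlet prior whose mean
    is the backoff distribution, with concentration (hyperparameter) alpha.

    Algebraically this is a linear interpolation of the maximum-likelihood
    n-gram with the backoff model:
        lam * ML(w | h) + (1 - lam) * backoff(w),  lam = c(h) / (c(h) + alpha)
    so alpha plays the role of a prior backoff weight that could be shared
    across a group of n-gram histories.
    """
    total = sum(counts.values())  # c(h): total count of history h
    return {w: (counts.get(w, 0) + alpha * backoff.get(w, 0.0)) / (total + alpha)
            for w in backoff}

# Toy corpus "a b a c a b": unigram backoff, and counts following history "a"
backoff = {"a": 3 / 6, "b": 2 / 6, "c": 1 / 6}
p = dirichlet_predictive({"b": 2, "c": 1}, backoff, alpha=1.0)
# p is a proper distribution; e.g. p["b"] = (2 + 1/3) / (3 + 1) = 7/12
```

Maximizing the marginal likelihood with respect to `alpha` (shared over a history group) would then correspond to the hyperparameter estimation step described in the abstract.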