Bayesian reinforcement learning provides an elegant framework for optimally trading off exploration and exploitation under model uncertainty. Unfortunately, the number of learning parameters grows exponentially with the problem horizon. In this paper, we propose a novel Monte Carlo tree search approach for Bayesian reinforcement learning that uses a compact factored representation to solve the problem online. First, we exploit a factored representation of the states and transition function to reduce the number of learning parameters. Second, we formulate model-based Bayesian reinforcement learning as a partially observable Markov decision process. Finally, we apply the partially observable Monte-Carlo planning (POMCP) method as an online solver for Bayesian reinforcement learning. Experimental results show that the proposed approach effectively improves learning efficiency in large-scale Bayesian reinforcement learning.
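As a rough illustration of why a factored representation shrinks the learning problem (a minimal sketch, not the paper's implementation: it assumes binary state factors, a DBN-style transition model where each factor depends on a small set of parent factors, and independent Dirichlet priors over each conditional distribution):

```python
def flat_param_count(n_factors, n_actions):
    """Parameters of a flat tabular model: one count per (s, a, s') triple."""
    s = 2 ** n_factors  # joint state space over binary factors
    return s * n_actions * s

def factored_param_count(n_factors, n_actions, parents_per_factor):
    """Parameters of a factored model: one small conditional distribution
    per factor, conditioned only on its parent factors and the action."""
    per_factor = (2 ** parents_per_factor) * n_actions * 2
    return n_factors * per_factor

def update_counts(counts, parent_vals, action, next_val):
    """Bayesian update of one factor's conditional: Dirichlet counts
    indexed by (parent assignment, action) over the factor's two values."""
    vec = counts.setdefault((parent_vals, action), [1.0, 1.0])  # uniform prior
    vec[next_val] += 1.0
    return vec

def posterior_mean(counts, parent_vals, action):
    """Posterior mean transition probabilities for one factor."""
    vec = counts.get((parent_vals, action), [1.0, 1.0])
    total = sum(vec)
    return [c / total for c in vec]

# With 10 binary factors and 4 actions, the flat model needs ~4.2M
# parameters, while a factored model with 2 parents per factor needs 320.
print(flat_param_count(10, 4), factored_param_count(10, 4, 2))
```

In an online planner such as POMCP, each simulated transition would sample successor factor values from these per-factor posteriors instead of a single joint Dirichlet, which is what keeps the belief over models compact.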