We study a non-Bayesian model of learning recently proposed by Epstein et al. In this model, an agent uses an i.i.d. sequence of observations to update her belief about the true state of the world; the signals are generated randomly according to a probability distribution that depends on the true state. The model differs from the standard Bayesian model in that the agent exhibits a bias towards her prior belief: instead of applying Bayes' rule to incorporate new information, she forms her posterior as a convex combination of her prior and the Bayesian update. Epstein et al. show that even though the agent in this model repeatedly underreacts to new information, her forecast of future signals is asymptotically almost surely correct. In this paper, we prove a much stronger result: in the absence of identification problems, the agent asymptotically almost surely learns the unknown state of the world. We also linearize the update governing the evolution of the agent's belief to find the rate of learning, and we bound this rate in terms of the Kullback-Leibler divergence between the signal distributions under the true state and under the other states.
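As an illustration, the biased update described above can be sketched in a few lines of Python. The mixing weight `lam`, the two-state setup, and the binary-signal distributions below are our own illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """Standard Bayesian posterior over states given one signal's likelihoods."""
    post = prior * likelihoods
    return post / post.sum()

def biased_update(prior, likelihoods, lam):
    """Non-Bayesian update: a convex combination of the Bayesian posterior
    and the prior, with weight lam on the Bayesian update."""
    return lam * bayes_update(prior, likelihoods) + (1.0 - lam) * prior

# Two states; each signal is a coin flip whose head probability is 0.7
# under state 0 and 0.3 under state 1. The true state is 0.
rng = np.random.default_rng(0)
head_prob = np.array([0.7, 0.3])
belief = np.array([0.5, 0.5])   # uniform prior
lam = 0.5                       # bias weight (illustrative value)

for _ in range(500):
    signal = rng.random() < head_prob[0]            # draw under the true state
    lik = head_prob if signal else 1.0 - head_prob  # likelihood of the signal
    belief = biased_update(belief, lik, lam)

# Despite underreacting at every step, the belief concentrates on the true state.
print(belief[0])
```

Running the simulation, the belief on the true state approaches 1, consistent with the learning result stated above; setting `lam = 1` recovers the standard Bayesian agent.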