Approximate Bayesian inference is a powerful methodology for constructing computationally efficient statistical learning mechanisms in problems where incomplete information is collected sequentially. Approximate Bayesian models have been developed and applied in a variety of domains; however, this work has thus far been primarily computational, and convergence or consistency results for approximate Bayesian estimators remain largely unavailable. We develop a new consistency theory for these learning schemes by interpreting them as stochastic approximation (SA) algorithms with additional “bias” terms. We prove the convergence of a general SA algorithm of this type, and apply this result to demonstrate, for the first time, the consistency of several approximate Bayesian methods from the recent literature.
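To make the abstract's framing concrete, the following is a minimal illustrative sketch (not the paper's actual construction) of a Robbins–Monro-style SA recursion augmented with a deterministic, vanishing bias term. The target value, step-size schedule, and bias decay rate are all assumptions chosen for illustration; the point is only that convergence can survive a bias that decays with the iteration count.

```python
import random

def biased_sa(n_steps=20000, target=3.0, seed=0):
    """Toy SA recursion with an added vanishing bias term (illustrative only).

    Iterates theta_{n+1} = theta_n + gamma_n * ((target - theta_n) + noise_n + bias_n),
    with gamma_n = 1/n (so sum gamma_n = inf, sum gamma_n^2 < inf),
    zero-mean Gaussian noise, and bias_n = n^{-0.6} decaying to zero.
    """
    rng = random.Random(seed)
    theta = 0.0
    for n in range(1, n_steps + 1):
        gamma = 1.0 / n                # standard SA step-size schedule
        noise = rng.gauss(0.0, 1.0)    # zero-mean stochastic error
        bias = 1.0 / n ** 0.6          # deterministic bias term, vanishing with n
        theta += gamma * ((target - theta) + noise + bias)
    return theta

print(biased_sa())  # close to target = 3.0 for large n_steps
```

With gamma_n = 1/n the iterate is effectively a running average of the noisy, biased observations, so the decaying bias contributes a vanishing average and the iterate still converges to the target; faster-decaying or summable bias sequences are the typical assumption in such results.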