A key challenge in large-scale reinforcement learning is coping with very large numbers of environment states and possible actions. Function approximation can improve a reinforcement learner's ability to solve such large-scale problems. Tile coding and Kanerva coding are two classical function approximation methods, but both can perform poorly on large-scale, high-dimensional instances. In this paper, we evaluate a collection of hard instances of the predator-prey pursuit problem, a classic reinforcement learning benchmark with a scalable state-action space, to compare these two methods and their optimization techniques. We first show that Kanerva coding gives better results than Tile coding as the dimension of the instances increases. We then describe a feature optimization mechanism and show that it increases the fraction of instances solved by both Tile coding and Kanerva coding. Finally, we demonstrate that a fuzzy approach to function approximation further increases the fraction of instances solved, and that our fuzzy Kanerva coding outperforms fuzzy Tile coding when feature optimization is applied. We conclude that discrete and fuzzy Kanerva coding are powerful function approximation techniques that can outperform discrete and fuzzy Tile coding on large-scale, high-dimensional learning problems.
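To fix ideas, the following is a minimal sketch of plain (discrete) Kanerva coding, not the paper's specific implementation: prototype states are sampled at random, a state activates every prototype within a Hamming-distance threshold, and a value estimate is the sum of the active prototypes' weights. All names and parameter values (`N_PROTOTYPES`, `STATE_DIM`, `RADIUS`, the learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PROTOTYPES = 50   # number of random prototype states (assumed value)
STATE_DIM = 8       # bits per binary state vector (assumed value)
RADIUS = 2          # activation threshold in Hamming distance (assumed value)

# Random binary prototypes and one weight per prototype.
prototypes = rng.integers(0, 2, size=(N_PROTOTYPES, STATE_DIM))
weights = np.zeros(N_PROTOTYPES)

def active_features(state):
    """Indices of prototypes within RADIUS Hamming distance of `state`."""
    dists = np.sum(prototypes != state, axis=1)
    return np.flatnonzero(dists <= RADIUS)

def value(state):
    """Approximate value: sum of the weights of the active prototypes."""
    return weights[active_features(state)].sum()

def update(state, target, alpha=0.1):
    """Spread the error (target - value) evenly over the active features."""
    idx = active_features(state)
    if idx.size:
        weights[idx] += alpha * (target - value(state)) / idx.size

# One training step on a random state.
state = rng.integers(0, 2, size=STATE_DIM)
update(state, target=1.0)
```

A fuzzy variant would replace the hard radius test with a graded membership in each prototype's receptive field; the feature optimization the abstract refers to adjusts the prototype set itself.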