Designing optimal controllers remains challenging as systems grow increasingly complex and inherently nonlinear. The principal advantage of reinforcement learning (RL) is its ability to learn an optimal control strategy through interaction with the environment. In this paper, RL is explored for control of the benchmark cart-pole dynamical system with no prior knowledge of its dynamics. RL algorithms such as temporal-difference learning, policy-gradient actor-critic, and value-function approximation are compared in this setting with the standard linear quadratic regulator (LQR) solution. Further, we propose a novel approach for integrating RL with a swing-up controller.
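
The LQR baseline mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard frictionless cart-pole model linearized about the upright equilibrium, with hypothetical parameter values (cart mass `M`, pole mass `m`, pole length `l`) and illustrative cost weights chosen here for demonstration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical physical parameters (not taken from the paper)
M, m, l, g = 1.0, 0.1, 0.5, 9.81  # cart mass, pole mass, pole length, gravity

# Frictionless cart-pole linearized about the upright equilibrium,
# with state x = [cart position, cart velocity, pole angle, pole angular velocity]
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0.0], [1 / M], [0.0], [-1 / (M * l)]])

# Illustrative quadratic cost weights: penalize pole angle most heavily
Q = np.diag([1.0, 1.0, 10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain K, giving the control law u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K is stable: every eigenvalue has a
# negative real part, so the upright equilibrium is stabilized
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(closed_loop_eigs.real.max() < 0)
```

Because `solve_continuous_are` returns the stabilizing solution of the Riccati equation, the resulting gain stabilizes the linearized system; the RL algorithms studied in the paper must instead discover a comparable policy without access to the model matrices `A` and `B`.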