This paper presents a novel hybrid Lyapunov-theory-based Fuzzy Reinforcement Learning controller with guaranteed stability for nonlinear systems. A major difficulty in applying Reinforcement Learning (RL) to real-world problems is its limited ability to cope with continuous state spaces. In Fuzzy Q-Learning (FQL), fuzzy inference systems serve as universal function approximators to mitigate the 'curse of dimensionality'. In this work, we propose a hybrid Lyapunov fuzzy RL controller that combines FQL-based control with a Lyapunov-based control law that guarantees stability. The composite action generated from these two distinct control paradigms forms the backbone of the proposed controller. To demonstrate the effectiveness of the proposed methodology, we simulate it on the benchmark problem of Inverted Pendulum Control (IPC). Simulation results show that the proposed control scheme outperforms pure Fuzzy Q-Learning control.
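The abstract describes a composite action combining an FQL actor with a Lyapunov control term. The sketch below is one plausible, minimal realisation of such a blend, assuming a weighted sum of the two actions and a simple epsilon-greedy tabular stand-in for the FQL actor; the gains, action set, blend weight, and state discretisation are all illustrative assumptions, not the paper's actual design.

```python
import random

def lyapunov_action(theta, theta_dot, k1=20.0, k2=5.0):
    """Stabilising action derived from a quadratic Lyapunov candidate
    V = 0.5*theta**2 + 0.5*theta_dot**2 (illustrative gains k1, k2)."""
    return -(k1 * theta + k2 * theta_dot)

class FQLController:
    """Minimal tabular stand-in for a fuzzy Q-learning actor:
    epsilon-greedy choice over a discrete action set (a real FQL
    controller would interpolate over fuzzy rule firing strengths)."""
    def __init__(self, actions=(-10.0, 0.0, 10.0), epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.q = {}  # (state_bin, action) -> estimated value

    def state_bin(self, theta, theta_dot):
        # Coarse discretisation of the continuous state (assumption).
        return (round(theta, 1), round(theta_dot, 1))

    def action(self, theta, theta_dot):
        s = self.state_bin(theta, theta_dot)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))

def composite_action(theta, theta_dot, fql, weight=0.5):
    """Blend the stability-oriented Lyapunov term with the learned
    FQL action; `weight` is an assumed design parameter."""
    return weight * lyapunov_action(theta, theta_dot) \
        + (1.0 - weight) * fql.action(theta, theta_dot)
```

For instance, with `weight=1.0` the controller reduces to the pure Lyapunov law, while `weight=0.0` recovers the pure FQL policy, so the blend interpolates between the two paradigms the abstract names.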