This paper proposes a simple hierarchical decision-making approach to reinforcement learning within the framework of Markov decision processes. Under this approach, the action at every time stage is chosen through a successive elimination of actions and sets of actions from the underlying action space, until a single action remains. Based on this approach, the paper defines a hierarchical Q-function and shows that this function can serve as the basis for an optimal policy. A hierarchical reinforcement learning algorithm is then proposed; the algorithm, which can be shown to converge to the hierarchical Q-function, provides new opportunities for state abstraction.
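
The idea of choosing an action by successively eliminating sets of actions can be sketched as tabular Q-learning over a tree of action sets. The tree layout, the toy single-state task, and all names below are illustrative assumptions, not the paper's construction:

```python
import random
from collections import defaultdict

# Illustrative sketch (not the paper's algorithm): 4 primitive actions
# grouped into a binary tree of action sets, {{0,1},{2,3}}.  Internal
# nodes are named sets; leaves are primitive actions.
ACTION_TREE = {"root": ["left", "right"],
               "left": [0, 1], "right": [2, 3]}

# Q[(state, node, child)] — value of narrowing to `child` while at `node`.
Q = defaultdict(float)

def select_action(state, eps=0.1):
    """Descend the tree, eliminating one action set per level."""
    node, path = "root", []
    while node in ACTION_TREE:                 # stop at a primitive action
        children = ACTION_TREE[node]
        if random.random() < eps:
            child = random.choice(children)    # occasional random branch
        else:
            child = max(children, key=lambda c: Q[(state, node, c)])
        path.append((node, child))
        node = child
    return node, path

def update(state, path, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step Q-learning target, applied to every node on the chosen path."""
    # Best primitive-action value available in the next state.
    best_next = max(Q[(next_state, n, c)]
                    for n, cs in ACTION_TREE.items()
                    for c in cs if not isinstance(c, str))
    target = reward + gamma * best_next
    for node, child in path:
        Q[(state, node, child)] += alpha * (target - Q[(state, node, child)])

# Toy usage: a single-state task where only action 3 is rewarded.
random.seed(0)
for _ in range(500):
    a, path = select_action(0, eps=0.3)
    update(0, path, 1.0 if a == 3 else 0.0, 0)
greedy_action, _ = select_action(0, eps=0.0)
```

After training, the greedy descent eliminates the `left` set at the root and action 2 at the next level, leaving the rewarded action. Because each internal node stores values only for its few children, the per-level choice is cheap even when the full action space is large, which is the motivation for the hierarchical Q-function.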