Proactive cognitive agents must be capable of both generating their own goals and enacting them. In this paper, we cast this problem as one of maintaining equilibrium, that is, of seeking opportunities to act that keep the system in desirable states while avoiding undesirable ones. We characterize the desirability of states as graded preferences, using mechanisms from fuzzy logic. As a result, opportunities for an agent to act can also be graded, and their relative preference can be used to infer when and how to act. The paper provides a formal description of our computational framework and illustrates how the use of degrees of desirability leads to well-informed choices of action.
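The core idea of grading opportunities by the desirability of the states they lead to can be sketched minimally as follows. This is an illustrative toy, not the paper's formal framework: the states, actions, and desirability degrees are all hypothetical assumptions, and real fuzzy-logic machinery (membership functions, t-norms) is reduced here to fixed degrees in [0, 1].

```python
# Toy sketch of fuzzy-graded opportunity selection.
# All names and values below are illustrative assumptions, not from the paper.

# Desirability of each resulting state, as a fuzzy degree in [0, 1]
desirability = {
    "battery_full": 0.9,
    "battery_low": 0.2,
    "task_done": 1.0,
}

# Each opportunity to act maps an action to the state it would bring about
opportunities = {
    "recharge": "battery_full",
    "idle": "battery_low",
    "work": "task_done",
}

def grade(action: str) -> float:
    """Grade an opportunity by the desirability of the state it reaches."""
    return desirability[opportunities[action]]

# The agent enacts the most preferred opportunity; because grades are
# comparable degrees rather than binary goals, relative preference tells
# the agent both whether to act and which action to take.
best = max(opportunities, key=grade)
print(best)  # -> work
```

Because every opportunity receives a degree rather than a yes/no label, ties and near-ties are visible, and a threshold on the best grade could decide whether acting at all is worthwhile.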