The aim of this chapter is to extend the methods and algorithms of the previous chapters to more general classes of problems. We describe a class of discrete control problems for which a dynamic programming technique can be applied efficiently. The results of the first chapter are generalized to control problems in which the transition times between states of the dynamical system may vary. In addition, we consider a control problem with an algorithmically defined objective function. We show that the concept of multi-objective games applies to the considered class of control problems, and we propose a new algorithm for determining the optimal strategies of the players.