In this paper, we develop a general-purpose autonomous control strategy for managing the trade-off between processing throughput and power consumption in data centers. The approach builds on the observation that the delay in completing computational tasks can be reduced at the cost of consuming more power. The scheme's generality allows it to be applied at multiple hierarchical levels within a data center, or in other systems with a similar architecture. In particular, we show that our scheme converges asynchronously to a unique solution. This property allows the control strategy to be implemented in a low-complexity, yet robust and scalable, manner. Such properties are particularly important for data center power control architectures, which can involve a wide variety of distributed computing resources performing diverse tasks. The presented scheme is mostly decentralized: the only global quantity is a single power stress signal provided by a redundant central authority. Based on this power stress, computing resources independently and autonomously manage their power consumption to optimally balance power against delay.
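To make the architecture concrete, the following is a minimal sketch of one plausible instantiation, not the paper's actual formulation: each resource minimizes an assumed local cost delay(p) + stress * p (with an illustrative delay model delay(p) = c / p, so more power yields less delay), resources update asynchronously one at a time, and the central authority adjusts the scalar power stress signal so that total power tracks a budget. The cost model, update rule, and step sizes are all hypothetical.

```python
import random

def local_power(c, stress):
    # Each resource independently minimizes c/p + stress*p over p > 0;
    # the closed-form minimizer is p* = sqrt(c/stress).
    return (c / stress) ** 0.5

def run(costs, budget, steps=2000, eta=0.01, seed=0):
    rng = random.Random(seed)
    stress = 0.25  # initial guess for the global power stress signal
    powers = [local_power(c, stress) for c in costs]
    for _ in range(steps):
        # Asynchronous operation: one randomly chosen resource
        # re-optimizes its power at a time.
        i = rng.randrange(len(costs))
        powers[i] = local_power(costs[i], stress)
        # The central authority nudges the stress signal up when total
        # consumption exceeds the budget and down when it falls short.
        stress = max(1e-6, stress + eta * (sum(powers) - budget))
    return stress, powers

stress, powers = run(costs=[4.0, 9.0, 16.0], budget=9.0)
print(stress, sum(powers))  # total power settles near the 9.0 budget
```

Because each local problem is strictly convex in this sketch, every value of the stress signal induces a unique power choice per resource, which is one way the asynchronous iteration can settle on a unique solution.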