Thinkers who have reflected on the problem of a coming superintelligence have generally treated the issue as a technological one: how to control what the superintelligence will do. I argue that this approach is probably mistaken, because it rests on questionable assumptions about the behavior of intelligent agents, and potentially counterproductive, because it might in the end bring about the very existential catastrophe it is meant to prevent. I contend that the problem posed by a future superintelligence will likely be a political problem, that is, one of establishing a peaceful form of coexistence with other intelligent agents in a situation of mutual vulnerability, not a technological problem of control.