Artificial intelligence began with an enthusiastic embrace of newly available computing machinery and the basic question of what kinds of problems we could solve with it. The first 50 years focused on programming computers to perform tasks that previously only humans could do. Then, people began comparing machines to humans as problem solvers, and the race was on to see where machines could match or even surpass human performance. Success in solving math word problems, winning checkers and chess championships, understanding natural language, and generating plans and schedules reinforced our efforts to build supercapable machines. The author calls these puppets, not to derogate the machines but to respect the importance of the programmers and builders who were actually responsible for their accomplishments.

From time to time, many of us have recognized the field's rate-limiting factor under various names and viewpoints, such as the knowledge-acquisition bottleneck and the challenges of machine learning, system bootstrapping, artificial life, and self-organizing systems. Mostly, however, these efforts have had limited success. The little bit of learning and adaptation they have demonstrated has paled in comparison to the puppeteers' laborious inputs.