Opportunities for interleaving or parallelizing actions are abundant in everyday activities. Being able to perceive, predict, and exploit such opportunities leads to more efficient and robust behavior. In this paper, we present a mobile manipulation platform that exploits such opportunities to optimize its behavior, e.g., grasping two objects from a single location rather than navigating to two different locations. To do so, it uses ARPLACE, a general least-commitment representation of the places from which manipulation is predicted to succeed. Models of ARPLACEs are learned from experience using support vector machines and point distribution models, and take into account the robot's morphology and skill repertoire. We present a transformational planner that reasons about ARPLACEs and applies transformation rules to its plans when more robust and efficient behavior is predicted.
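To make the idea of a learned place model concrete, the following minimal sketch estimates the probability that a grasp succeeds from a candidate base position, given labeled experience. The training samples, coordinates, and the Gaussian-weighted estimator are all illustrative assumptions; the paper itself learns such models with support vector machines and point distribution models.

```python
import math

# Hypothetical training data: robot base positions (x, y) paired with
# observed grasp outcomes (1 = success, 0 = failure).
samples = [((0.4, 0.0), 1), ((0.5, 0.1), 1), ((0.6, -0.1), 1),
           ((1.2, 0.0), 0), ((1.4, 0.3), 0), ((0.2, 0.8), 0)]

def success_probability(pos, sigma=0.3):
    """Estimate P(grasp success | base position) by Gaussian-weighted
    averaging over labeled samples (a simple stand-in for the learned
    SVM-based model)."""
    num = den = 0.0
    for (px, py), label in samples:
        d2 = (pos[0] - px) ** 2 + (pos[1] - py) ** 2
        w = math.exp(-d2 / (2 * sigma ** 2))
        num += w * label
        den += w
    return num / den

# A planner could query such a model over candidate positions and commit
# late to any position whose predicted success probability is high:
p_near = success_probability((0.5, 0.0))   # near the successful cluster
p_far = success_probability((1.3, 0.1))    # near the failed attempts
```

A transformational planner could use such queries to compare, for instance, one base position serving two grasps against two separate positions, and rewrite its plan accordingly.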