Using cameras to geometrically calibrate projector-based displays has been widely reported in the literature over the last decade. Most systems project structured-light patterns during a setup phase, before the application starts, in order to evaluate the geometric mapping from projector pixels to locations on the display surface. If this mapping changes, for example when the projector is moved, the application must be stopped and calibration re-initiated. Our pose estimation technique instead detects a set of image correspondences between the moving projector and a static camera using only the imagery the projector is already displaying. We use an up-front calibration process to establish the geometry of the display surface, the pose of the camera, and the initial pose of the projector.
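The paper does not spell out its estimation machinery here, but the core step it describes, recovering a projector pose from correspondences between projector pixels and known 3D locations on the display surface, can be sketched with a standard Direct Linear Transform (DLT). The function name and the noise-free synthetic setup below are illustrative assumptions, not the authors' implementation; a real system would add pattern decoding, outlier rejection, and nonlinear refinement.

```python
import numpy as np

def estimate_pose_dlt(K, pts3d, pts2d):
    """Recover a pose [R | t] from 2D-3D correspondences via DLT.

    K      : 3x3 intrinsic matrix (assumed known from up-front calibration)
    pts3d  : (N, 3) points on the display surface, N >= 6, non-coplanar
    pts2d  : (N, 2) corresponding pixel coordinates in the projector image
    """
    n = pts3d.shape[0]
    A = np.zeros((2 * n, 12))
    for i, (X, x) in enumerate(zip(pts3d, pts2d)):
        Xh = np.append(X, 1.0)          # homogeneous 3D point
        u, v = x
        A[2 * i,     0:4]  = Xh         # row enforcing  p1.Xh - u * p3.Xh = 0
        A[2 * i,     8:12] = -u * Xh
        A[2 * i + 1, 4:8]  = Xh         # row enforcing  p2.Xh - v * p3.Xh = 0
        A[2 * i + 1, 8:12] = -v * Xh
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)            # projection matrix, up to scale
    M = np.linalg.inv(K) @ P            # M ~ s * [R | t]
    # Fix the unknown scale: the third row of a rotation has unit norm,
    # and det(R) must be positive.
    s = np.sign(np.linalg.det(M[:, :3])) / np.linalg.norm(M[2, :3])
    M *= s
    # Project the raw rotation estimate onto SO(3) via SVD.
    U, _, Vt = np.linalg.svd(M[:, :3])
    R = U @ Vt
    t = M[:, 3]
    return R, t
```

With exact, noise-free correspondences this recovers the pose up to numerical precision; the same routine can be rerun every frame as the projector moves, which is the continuous-tracking behavior the static-camera setup enables.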