Amplitude panning produces interaural time difference (ITD) cues that help localize images in directions between loudspeakers. However, if the panning gains are static, the ITD cues vary with head rotation differently from those of a real source, so the dynamic ITD cues are inaccurate. This effect destabilizes the perception of the image and of the overall scene, and it worsens as the loudspeakers are spaced more widely relative to the listener. Based on a simple head model that is accurate in the low-frequency ITD regime, the ITD is calculated for a general sound field, including fields produced by panning. A simple formula is derived relating head orientation, image direction, and a field description vector. Panning functions are then found that compensate for head orientation and are valid for any image direction. In the special case where the listener faces the image, the functions reduce to vector base amplitude panning (VBAP). Performance is first assessed objectively using measured binaural responses rather than the simple head model. Subjective comparisons are then made with pre-existing listening tests and with new listening tests in which the listener's head is tracked to control the panning gains in real time. These show that images can be stabilized as predicted and, furthermore, that with the same panning functions images can be produced in all directions using two loudspeakers placed in front of the listener.
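For background on the special case mentioned above, the following is a minimal sketch of standard two-channel vector base amplitude panning (VBAP), not the paper's head-orientation-compensated functions; the function and parameter names are illustrative. The gains are found by inverting the matrix of loudspeaker unit vectors and normalizing for constant power:

```python
import numpy as np

def vbap_2d(image_deg, spk_a_deg, spk_b_deg):
    """Two-channel VBAP gain computation (horizontal plane).

    Solves L @ g = p, where the columns of L are the loudspeaker
    unit vectors and p is the image-direction unit vector, then
    normalizes g so the total power is constant.
    """
    def unit(deg):
        rad = np.deg2rad(deg)
        return np.array([np.cos(rad), np.sin(rad)])

    L = np.column_stack([unit(spk_a_deg), unit(spk_b_deg)])
    g = np.linalg.solve(L, unit(image_deg))
    return g / np.linalg.norm(g)

# Standard stereo pair at +/-30 degrees, image straight ahead:
# both gains are 1/sqrt(2), i.e. about 0.7071.
gains = vbap_2d(0.0, 30.0, -30.0)
```

When the image direction coincides with one loudspeaker, the solve assigns the full (unit) gain to that loudspeaker and zero to the other, as expected of pairwise panning.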