In vision-based autonomous spacecraft docking, multiple views of the same scene, captured with identical camera and scene geometry but under different lighting conditions, are available. These "multiple-exposure" images must be processed to localize the visual features used to compute the pose of the target object. This paper describes a novel multi-channel edge detection algorithm that localizes the structure of the target object by merging the gradient information of these multiple-exposure images using tensor voting. This approach reduces the effect of illumination variation, including shadow edges, compared with using a single image or with simple combinations of single-channel edge maps. Compared to a recently proposed multi-channel edge detection approach based on Gaussian mixture models (GMMs), the proposed approach generates edge maps of comparable quality in much less time.
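The core idea of merging gradient information across exposures can be illustrated with a minimal sketch. This is not the paper's algorithm: it replaces full tensor voting with a simpler structure-tensor accumulation, in which each pixel's gradient in each exposure contributes a rank-1 tensor "vote", and the eigenvalue gap (stick saliency) of the accumulated tensor flags edges that are consistent across lighting conditions. The function name and all parameters are illustrative.

```python
import numpy as np

def multi_exposure_edge_saliency(exposures):
    """Merge gradients from multiple exposures of the same scene into one
    edge-saliency map (simplified, structure-tensor stand-in for tensor voting).

    exposures: list of 2-D arrays of identical shape (same camera/geometry,
    different lighting).
    """
    h, w = exposures[0].shape
    a = np.zeros((h, w))  # accumulated T[0,0] = sum over exposures of gx^2
    b = np.zeros((h, w))  # accumulated T[0,1] = sum over exposures of gx*gy
    c = np.zeros((h, w))  # accumulated T[1,1] = sum over exposures of gy^2
    for im in exposures:
        gy, gx = np.gradient(im.astype(float))
        a += gx * gx
        b += gx * gy
        c += gy * gy
    # For a symmetric 2x2 tensor, lambda1 - lambda2 = sqrt((a - c)^2 + 4 b^2).
    # A large eigenvalue gap means one dominant local orientation, i.e. an
    # edge supported by several exposures; illumination-dependent gradients
    # (e.g. a shadow edge present in only one exposure) contribute weakly.
    return np.sqrt((a - c) ** 2 + 4.0 * b ** 2)
```

Because the votes are summed before the eigen-analysis, a gradient that appears in every exposure reinforces a single orientation, whereas exposure-specific gradients are diluted relative to it, which is the intuition behind suppressing shadow edges.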