On-board intersection perception and modeling is a complex problem: it requires a wide field of view, dense and accurate data acquisition and processing, robust object detection and classification, and fast response times. These requirements can be met by using a large, redundant set of heterogeneous sensors and fusing their information. Among on-board sensors, visual sensors have two main advantages: they are passive, and they provide the highest volume of information. Using a pair of visual sensors in a stereo configuration makes it possible not only to infer the 3D coordinates of any image point but also to compute the 3D motion vector of any pixel. Exploiting this motion information in driving assistance systems requires estimating the ego motion. This paper presents the architecture, implementation, and use of a powerful on-board 6D visual sensor for intersection driving assistance.
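
As a brief illustration of the stereo principle mentioned above (a generic sketch, not the paper's own implementation), depth follows from disparity via Z = f·B/d for a rectified pinhole camera pair. All parameter values below (focal length, baseline, principal point) are illustrative assumptions:

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover 3D camera coordinates (x, y, z) from a rectified stereo match.

    Assumes a rectified pinhole pair: f is the focal length in pixels,
    baseline is the distance between optical centers in meters, and
    (cx, cy) is the principal point. All values are illustrative.
    """
    disparity = u_left - u_right           # horizontal pixel offset between views
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f * baseline / disparity           # depth from similar triangles
    x = (u_left - cx) * z / f              # lateral offset in meters
    y = (v - cy) * z / f                   # vertical offset in meters
    return (x, y, z)

# Example with assumed calibration: f = 800 px, baseline = 0.30 m,
# 10 px disparity -> z = 800 * 0.30 / 10 = 24 m
p = triangulate(u_left=650, u_right=640, v=360,
                f=800.0, baseline=0.30, cx=640.0, cy=360.0)
```

Tracking the same image point across consecutive frames and triangulating it in each frame yields the per-pixel 3D motion vector the abstract refers to, once the ego motion is compensated for.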