This article addresses the problem of data fusion applied to road safety, proposing a solution based on a multi-level approach that exploits complementary and redundant data from two perception systems: an omnidirectional vision sensor and a laser rangefinder. The first part covers the processing of the sensory data from both sensors, extracting primitives that culminate in the detection of surrounding vehicles. The second part deals with quantifying the uncertainty attached to the detected vehicles, then identifying dangerous situations and evaluating their level of danger, with the aim of providing the driver with an indicator of the global danger around the vehicle.
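The pipeline described above can be illustrated with a minimal sketch. This is not the authors' method: the fusion rule (treating the two sensors as redundant, independent detectors), the linear proximity-based danger model, the `critical_m` threshold, and all function names are hypothetical assumptions introduced purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float  # distance to the detected vehicle, in meters
    confidence: float  # fused detection confidence in [0, 1]

def fuse_confidence(vision: float, laser: float) -> float:
    # Hypothetical redundancy fusion: probability that at least one of the
    # two independent sensors (vision, laser) correctly detected the vehicle.
    return 1.0 - (1.0 - vision) * (1.0 - laser)

def danger_level(distance_m: float, confidence: float,
                 critical_m: float = 30.0) -> float:
    # Illustrative danger model: danger grows linearly as the distance
    # shrinks below an assumed critical range, weighted by confidence.
    proximity = max(0.0, 1.0 - distance_m / critical_m)
    return proximity * confidence

def global_danger(per_vehicle_dangers) -> float:
    # Global indicator around the ego vehicle: here, the most
    # dangerous surrounding vehicle dominates.
    return max(per_vehicle_dangers, default=0.0)

# Example: two surrounding vehicles, each seen by both sensors.
vehicles = [
    Detection(15.0, fuse_confidence(0.8, 0.9)),
    Detection(40.0, fuse_confidence(0.7, 0.6)),
]
dangers = [danger_level(v.distance_m, v.confidence) for v in vehicles]
indicator = global_danger(dangers)
```

The max-based aggregation is one simple choice; a weighted sum over all surrounding vehicles would be an equally plausible alternative for a global indicator.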