In safety engineering for non-autonomous vehicles, it is generally assumed that safety is achieved if the vehicle appropriately follows certain control commands from humans, such as steering or acceleration commands. This fundamental assumption becomes problematic if we consider autonomous vehicles that decide on their own which behavior is most reasonable in which situation. Safety criticality extends to the decision-making process and the related perception of the environment. These, however, are so complex that they require the application of concepts for intelligence that do not harmonize with traditional safety engineering. In this paper, we investigate these problems and propose a solution.