A key capability for autonomous exploration and inspection missions is finding safe, traversable paths through previously unknown environments. We present an approach for mapping the typical environments encountered by autonomous planetary exploration robots, a pre-interpreted multi-resolution 3-D environment model generated from point cloud data, and a hybrid planner suitable for virtually any kind of mobile robot. Our system builds upon and enhances freely available standard frameworks such as ROS and OMPL. We present results of our system applied to our six-legged walking robot LAURON V, showing the progression from individual 3-D point clouds to a rich environment model that is queried by an RRT*-based planner to find and adapt a feasible, optimal path.
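To illustrate the kind of sampling-based planning the abstract refers to, the following is a minimal, self-contained RRT* sketch in 2-D. It is not the authors' planner and does not use OMPL; the function name `rrt_star`, the point-based `is_free` validity check, and all parameters are illustrative assumptions. Edge-wise collision checking and descendant cost propagation after rewiring are omitted for brevity.

```python
import math
import random

def rrt_star(start, goal, is_free, bounds, n_iter=1000,
             step=0.5, radius=1.0, goal_tol=0.5, seed=0):
    """Toy RRT* in the plane (illustrative sketch, not the paper's planner).

    bounds = (xmin, xmax, ymin, ymax); is_free(p) -> bool checks a point.
    Returns a list of waypoints from start to near the goal, or None.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    cost = {0: 0.0}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def steer(a, b):
        # Move from a toward b by at most `step`.
        d = dist(a, b)
        if d <= step:
            return b
        t = step / d
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    for _ in range(n_iter):
        sample = (rng.uniform(bounds[0], bounds[1]),
                  rng.uniform(bounds[2], bounds[3]))
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        new = steer(nodes[i_near], sample)
        if not is_free(new):
            continue
        # RRT* refinement: choose the parent among nearby nodes
        # that minimizes cost-to-come, then rewire neighbors.
        near = [i for i in range(len(nodes)) if dist(nodes[i], new) <= radius]
        i_best = min(near or [i_near], key=lambda i: cost[i] + dist(nodes[i], new))
        j = len(nodes)
        nodes.append(new)
        parent[j] = i_best
        cost[j] = cost[i_best] + dist(nodes[i_best], new)
        for i in near:
            c = cost[j] + dist(new, nodes[i])
            if c < cost[i]:
                parent[i] = j
                cost[i] = c

    # Pick the cheapest node within goal tolerance and walk back to start.
    goal_ids = [i for i in range(len(nodes)) if dist(nodes[i], goal) <= goal_tol]
    if not goal_ids:
        return None
    i = min(goal_ids, key=lambda i: cost[i])
    path = []
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

In the full system described by the abstract, the `is_free` predicate would instead query the pre-interpreted multi-resolution 3-D environment model for traversability, and the state space would reflect the robot's pose rather than a 2-D point.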