To enable robot systems to solve challenging tasks autonomously, it is essential to equip them with visual perception systems that can solve various recognition tasks. The work presented here is designed for maintenance applications, which are closely related to (dis-)assembly tasks. We therefore describe a novel perception strategy that uses RGBD data to obtain the information required for manipulation planning, such as object class, position, and geometric appearance. To this end, well-known segmentation and recognition algorithms are combined and allocated task-specifically. The system works at different levels of detail: on the one hand, geometric primitives can be fitted to the individual components; on the other hand, classifiers assign the corresponding object semantics. Combining this information in a scene analysis allows the creation of a relational graph, which is further used for manipulation planning. All methods are explained and experimentally validated.
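To illustrate the final step described above, the following is a minimal sketch of how recognized components (semantic class plus fitted primitive) could be combined into a relational graph for manipulation planning. All names, the `Component` representation, and the distance-based contact relation are hypothetical assumptions for illustration; the paper's actual scene-analysis relations are not specified here.

```python
from dataclasses import dataclass

@dataclass
class Component:
    # Hypothetical representation of one recognized part:
    name: str       # semantic class assigned by the classifier
    primitive: str  # fitted geometric primitive (e.g. "cylinder")
    center: tuple   # 3-D position estimated from the RGBD data

def build_relation_graph(components, contact_threshold=0.05):
    """Build an undirected relational graph: nodes are components,
    and an edge connects two parts whose centers lie within
    `contact_threshold` metres (a stand-in contact relation)."""
    graph = {c.name: [] for c in components}
    for i, a in enumerate(components):
        for b in components[i + 1:]:
            dist = sum((p - q) ** 2 for p, q in zip(a.center, b.center)) ** 0.5
            if dist <= contact_threshold:
                graph[a.name].append(b.name)
                graph[b.name].append(a.name)
    return graph

parts = [
    Component("screw",   "cylinder", (0.00, 0.00, 0.00)),
    Component("cover",   "plane",    (0.03, 0.00, 0.00)),
    Component("housing", "box",      (0.50, 0.20, 0.00)),
]
print(build_relation_graph(parts))
# → {'screw': ['cover'], 'cover': ['screw'], 'housing': []}
```

A manipulation planner could then traverse such a graph to decide, for example, which part must be removed before another becomes accessible.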