One of the most important challenges in the field of human-computer interaction is maintaining and enhancing the user's willingness to interact with the technical system. This willingness to cooperate provides the solid basis needed for a collaborative human-computer dialogue. This paper investigates how the intelligibility of a technical system can be upheld by providing explanations to the user. We show that the content of an explanation and the decision of when to provide it should not be based solely on the user's knowledge level. We conclude that several input factors should be taken into account for this decision and that, in particular, the concept of human-computer trust must be considered. Finally, we present our approach: a modular architecture for an intelligent system capable of providing explanations.