Given that users provided with explanations perform better than those without them, we have developed an explanation generation mechanism based on selecting the most relevant variable. However, explanations generated automatically by an intelligent assistant system (IAS) and those produced manually by a human expert may be mutually inconsistent. In this work, we present a formal validation model that uses first-order logic to formalize both the expert's explanations and the IAS output. The aim of this validation is to prove the correctness of the IAS and the soundness of its explanations. Experimental results show that most of the explanations generated automatically in a plant operator training domain are consistent with those provided by the expert, and sound. We consider this method a useful tool for evaluating the precision of the explanation generation mechanism, and it could be extended to other domains.
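The consistency check described above can be illustrated with a minimal propositional sketch (a simplification of the paper's first-order setting). Here two explanations are consistent if some truth assignment satisfies both. The variable names (`high_temp`, `valve_open`, `alarm`) and the formulas are illustrative assumptions, not taken from the paper:

```python
from itertools import product

# Hypothetical propositional encoding of two explanations about a plant alarm.
# Variables and formulas are illustrative, not taken from the paper.
VARS = ("high_temp", "valve_open", "alarm")

def expert(m):
    # Expert: "if the temperature was high, the alarm fired"
    return (not m["high_temp"]) or m["alarm"]

def ias(m):
    # IAS: "if the valve was open and the temperature was high, the alarm fired"
    return not (m["valve_open"] and m["high_temp"]) or m["alarm"]

def consistent(f, g):
    """Two explanations are consistent if some truth assignment satisfies both."""
    return any(f(m) and g(m)
               for vals in product([False, True], repeat=len(VARS))
               for m in [dict(zip(VARS, vals))])

print(consistent(expert, ias))                      # True: no contradiction
print(consistent(expert, lambda m: not expert(m)))  # False: contradictory claims
```

A full first-order treatment would replace the brute-force enumeration with a theorem prover or model finder, but the underlying criterion, satisfiability of the conjunction of both explanations, is the same.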