In order to evaluate the performance of the dialogue-manager component of a Slovenian and Croatian spoken dialogue system under development, two Wizard-of-Oz experiments were performed. The two experimental settings differed only in how dialogue management was carried out: in the first experiment it was performed by a human, the wizard, while in the second it was performed by the newly implemented dialogue-manager component. The data from both Wizard-of-Oz experiments were evaluated with the PARADISE evaluation framework, a candidate general methodology for evaluating and comparing different versions of spoken-language dialogue systems. The study finds a marked difference in the derived performance functions depending on whether a sum of satisfaction measures or an individual score is taken as the prediction target, it demonstrates that the recently introduced database parameters are indispensable when evaluating information-providing dialogue systems, and it confirms that the dialogue manager's cooperativity depends on the incorporated knowledge representation.
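For reference, the performance functions discussed above follow the standard PARADISE formulation, in which a linear regression predicts user satisfaction from task success and a set of dialogue cost measures (the exact parameters used in this study, such as the database parameters, are instances of the cost terms):

\[
\text{Performance} = \alpha \cdot \mathcal{N}(\kappa) \;-\; \sum_{i=1}^{n} w_i \cdot \mathcal{N}(c_i)
\]

Here \(\kappa\) is the task-success measure, \(c_i\) are the cost measures (e.g., efficiency and quality parameters), \(\mathcal{N}\) is a Z-score normalization, and \(\alpha\) and \(w_i\) are weights obtained by regressing against the chosen satisfaction target, which is why different targets yield different performance functions.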