In this paper, we present a framework that combines two algorithms developed for Sentiment Analysis and Emotion Recognition, respectively, in users' spoken utterances. We propose modeling the user's emotional state by fusing the outputs generated by both algorithms, a process that considers the probabilities each algorithm assigns to the different emotions. The proposed framework can be integrated as an additional module in the architecture of a spoken dialog system, so that the information it generates serves as an additional input for the dialog manager when deciding the next system response.
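As a rough illustration of the fusion step described above, the sketch below combines the per-emotion probability distributions produced by a sentiment-analysis module and an emotion-recognition module. The weighted-average rule, the weight value, and the function names are assumptions for illustration only; the abstract does not fix a specific fusion operator.

```python
# Illustrative sketch (not the paper's exact method): fuse two probability
# distributions over emotion labels via a weighted average, then normalize.

def fuse_emotion_probabilities(sentiment_probs, emotion_probs, weight=0.5):
    """Combine two probability distributions over emotion labels.

    sentiment_probs, emotion_probs: dicts mapping emotion label -> probability.
    weight: assumed relative contribution of the sentiment-analysis output.
    Returns a normalized distribution over the union of the label sets.
    """
    labels = set(sentiment_probs) | set(emotion_probs)
    fused = {
        label: weight * sentiment_probs.get(label, 0.0)
               + (1.0 - weight) * emotion_probs.get(label, 0.0)
        for label in labels
    }
    total = sum(fused.values()) or 1.0  # guard against an all-zero input
    return {label: p / total for label, p in fused.items()}

def predicted_emotion(fused):
    """Return the most probable emotion, e.g. to pass to a dialog manager."""
    return max(fused, key=fused.get)
```

In a spoken dialog system, the fused distribution (or just the top emotion) could then be handed to the dialog manager alongside the usual semantic representation of the user turn.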