When training machine classifiers, hard classification targets can be replaced by emphasized soft versions to reduce the negative effects of using cost functions as approximations to misclassification rates. This emphasis has the same effect as sample editing methods, which have proved effective for improving classifier performance. In this paper, we explore the effectiveness of emphasized soft targets with generative models, such as Gaussian mixture models, which offer advantages over decision (prediction) oriented architectures, including easier interpretation and the ability to handle missing values. Simulation results support the usefulness of the proposed approach for obtaining better performance and show low sensitivity to the selection of design parameters.
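The idea of emphasized soft targets can be illustrated with a minimal sketch. The specific emphasis rule below (softening one-hot targets in proportion to a sample's proximity to the decision boundary, controlled by a hypothetical `eps_max` parameter) is an assumption for illustration, not the formulation used in the paper:

```python
import numpy as np

def emphasized_soft_targets(y_hard, scores, eps_max=0.2):
    """Illustrative sketch: soften one-hot targets, emphasizing samples
    whose current score lies near the decision boundary at 0.5.

    y_hard : (n,) hard labels in {0, 1}
    scores : (n,) model outputs in [0, 1]
    eps_max and the proximity-based rule are assumptions, not the
    paper's actual emphasis function.
    """
    # Proximity is 1 at the boundary (score = 0.5) and 0 far from it.
    proximity = 1.0 - 2.0 * np.abs(scores - 0.5)
    eps = eps_max * proximity  # per-sample softening amount

    one_hot = np.stack([1.0 - y_hard, y_hard], axis=1)
    uniform = np.full_like(one_hot, 0.5)
    # Blend each hard target toward the uniform distribution.
    return (1.0 - eps)[:, None] * one_hot + eps[:, None] * uniform

y = np.array([0.0, 1.0, 1.0, 0.0])
s = np.array([0.1, 0.55, 0.9, 0.48])
t = emphasized_soft_targets(y, s)
# Each row of t still sums to 1; samples scored near 0.5 are softened most.
```

The soft targets `t` would then replace the hard labels in whatever cost function trains the model, concentrating the fit on boundary samples much as sample editing does.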