We have developed a systematic methodology for designing emotional facial expressions for humanoid robots, particularly those with limited degrees of freedom. The methodology is firmly grounded in the psychological literature on static and dynamic human emotional facial expressions. We demonstrate the methodology by applying it to a recent humanoid robot and evaluate the results, confirming that the observed confusion matrix agrees qualitatively with the methodology's predictions. We also investigate how recognition of the robot's facial emotions compares between dynamic and static expressions.