Social robots can benefit from the addition of deceptive capabilities. In particular, robotic deception should benefit the deceived human partner when used in the context of human-robot interaction (HRI). We define this kind of robotic deception as a robot's other-oriented deception and aim to endow robotic systems with such capabilities. Toward that end, we develop a computational model inspired by a criminological definition of deception. In this paper, we establish a definition of other-oriented robotic deception in HRI and present a novel model that enables a humanoid robot to autonomously generate other-oriented deceptive actions during interaction.