In this work we develop a human-like virtual player able to play the mirror game with a human player in real time. A control architecture is designed to drive the virtual player to either lead or track the human player during the game while preserving a certain degree of similarity to its own individual motor signature. By integrating the relevant intrinsic dynamics, the virtual player can exhibit diverse kinematic characteristics. Two alternative control strategies are presented to implement its cognitive architecture: a PD controller and a receding horizon optimal control strategy. To validate and compare these strategies, we establish a benchmark based on experimental data collected from two human players, against which the human-likeness of the virtual player's motion is evaluated when it plays with a human. Experimental validation highlights the advantages and disadvantages of the different control strategies and models used to drive the virtual player.
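To fix ideas, the simpler of the two strategies, PD control of the virtual player's motion toward the human player's trajectory, can be sketched as follows. This is a minimal illustrative simulation, not the architecture described above: the virtual player is modelled as a double integrator, the human trajectory is a hypothetical sinusoid, and the gains `kp` and `kd` are arbitrary example values.

```python
import math

def pd_tracking(kp=20.0, kd=5.0, dt=0.01, steps=1000):
    """Simulate a virtual player (double-integrator model) tracking a
    human-like reference trajectory with a PD feedback law.

    Returns the absolute tracking error at each time step."""
    x, v = 0.5, 0.0  # virtual player starts offset from the human
    errors = []
    for k in range(steps):
        t = k * dt
        # hypothetical human end-effector motion: smooth sinusoid
        xh, vh = math.sin(t), math.cos(t)
        # PD control input: proportional to position and velocity errors
        u = kp * (xh - x) + kd * (vh - v)
        # semi-implicit Euler integration of the double integrator
        v += u * dt
        x += v * dt
        errors.append(abs(xh - x))
    return errors

errors = pd_tracking()
```

After the initial transient decays, the tracking error settles to a small residual oscillation, illustrating why pure PD feedback tracks a human partner but, unlike the receding horizon strategy, has no predictive term to anticipate the partner's motion.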