Human re-identification remains a challenging task due to variations in human pose and illumination. Modern surveillance cameras with high frame rates can capture several consecutive frames of each person. Such multi-shot images provide richer information about the target person than a single-shot image; however, they introduce substantial information redundancy, which may degrade the performance of re-identification systems. In this paper, we propose a novel framework that combines sparse coding and manifold constraints to extract discriminative information from multi-shot images of a pedestrian for person re-identification across a set of non-overlapping surveillance cameras. Evaluation on two standard multi-shot datasets shows that our framework achieves accuracy competitive with the state of the art.
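The combination of sparse coding with a manifold (graph) constraint can be illustrated as graph-regularized sparse coding, solved here with a simple ISTA loop. This is a generic sketch, not the paper's exact formulation: the dictionary `D`, graph Laplacian `L` over the frames, and the weights `lam` and `gamma` are all assumptions for illustration.

```python
import numpy as np

def soft_threshold(Z, t):
    # Proximal operator of the l1 norm
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def graph_regularized_sparse_codes(X, D, L, lam=0.1, gamma=0.05, n_iter=200):
    """ISTA for  min_A ||X - D A||_F^2 + lam*||A||_1 + gamma*tr(A L A^T).

    X : (d, n) feature vectors of the n frames of one pedestrian
    D : (d, k) dictionary (assumed given, e.g. learned offline)
    L : (n, n) graph Laplacian encoding the manifold structure of the frames
    Returns A : (k, n) sparse codes, one column per frame.
    """
    k, n = D.shape[1], X.shape[1]
    A = np.zeros((k, n))
    # Step size = 1 / Lipschitz constant of the smooth part's gradient
    step = 1.0 / (2.0 * (np.linalg.norm(D.T @ D, 2) + gamma * np.linalg.norm(L, 2)))
    DtX = D.T @ X
    for _ in range(n_iter):
        grad = 2.0 * (D.T @ (D @ A) - DtX) + 2.0 * gamma * (A @ L)
        A = soft_threshold(A - step * grad, step * lam)
    return A
```

In this sketch the l1 term enforces sparsity of each frame's code, while the Laplacian term pulls the codes of neighboring frames (e.g. a k-nearest-neighbor graph over the shots) toward each other, which is one common way to impose a manifold constraint.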