In this paper, we present a temporal-state shape context (TSSC) method that exploits space-time shape variations for human action recognition. In our method, the object silhouettes in a video clip are organized into three temporal states. These states are defined by fuzzy time intervals, which lessen the degradation of recognition performance caused by time-warping effects. The TSSC features capture local characteristics of the space-time shape induced by consecutive changes of silhouettes. Experimental results show that our method is effective for human action recognition and remains reliable under various kinds of deformations. Moreover, our method can identify spatially inconsistent parts between two action shapes, which could be useful in action analysis applications.
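To make the idea of fuzzy time intervals concrete, the sketch below assigns each frame of a clip soft membership degrees in three temporal states ("start", "middle", "end") using overlapping triangular membership functions over normalized time. This is an illustrative sketch only, not the paper's implementation: the function names, the triangular shape, and the interval endpoints are all assumptions chosen to show how overlapping fuzzy intervals soften state boundaries and thereby reduce sensitivity to time warping.

```python
# Illustrative sketch (not the paper's implementation): fuzzy membership of a
# frame in three temporal states over a clip. Overlapping triangular
# memberships soften the state boundaries, which is one simple way fuzzy time
# intervals can reduce sensitivity to time-warping effects.

def triangular(t, left, peak, right):
    """Triangular membership function evaluated at normalized time t."""
    if t <= left or t >= right:
        return 0.0
    if t <= peak:
        return (t - left) / (peak - left) if peak > left else 1.0
    return (right - t) / (right - peak) if right > peak else 1.0

def temporal_state_memberships(frame_idx, num_frames):
    """Return fuzzy membership degrees of a frame in three temporal states."""
    t = frame_idx / max(num_frames - 1, 1)  # normalized time in [0, 1]
    # The interval endpoints below are assumed parameters, not from the paper.
    return {
        "start":  triangular(t, -0.5, 0.0, 0.6),
        "middle": triangular(t,  0.1, 0.5, 0.9),
        "end":    triangular(t,  0.4, 1.0, 1.5),
    }

# A frame near the middle of an 11-frame clip belongs fully to "middle"
# but still partially to "start" and "end".
memberships = temporal_state_memberships(5, 11)
```

A frame at the clip boundary gets full membership in one state and zero or small membership in the others, while frames near interval overlaps contribute to two states at once, so a moderate temporal shift changes feature weights gradually rather than reassigning frames to different states outright.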