This paper proposes a spatially constrained Bag-of-Visual-Words (BOV) method for hyperspectral image classification. We first extract texture features and use them, together with the spectral features, as two types of low-level features from which the proposed method constructs high-level visual words. The entropy rate superpixel segmentation method is used to partition the hyperspectral image into patches that preserve the homogeneity of regions; these patches serve as the documents in the BOV model. Then, k-means clustering is applied to the pixels to construct the codebook. Finally, the BOV representation of each patch is built from the occurrence statistics of its visual words. Experiments on a real hyperspectral dataset show that the proposed method is competitive with several state-of-the-art methods.
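To make the pipeline concrete, the two core BOV steps described above — building a codebook by clustering pixel features with k-means, and representing each patch by the occurrence histogram of its visual words — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors and patch labels are toy placeholders standing in for the extracted spectral/texture features and the entropy rate superpixels, and a plain Lloyd's k-means is used for simplicity.

```python
import random
from collections import defaultdict

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over a list of feature vectors (lists of floats).

    Returns the k cluster centers, which act as the visual words (codebook).
    """
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each center becomes the mean of its assigned points.
        for j, members in enumerate(clusters):
            if members:
                dim = len(members[0])
                centers[j] = [sum(m[d] for m in members) / len(members)
                              for d in range(dim)]
    return centers

def bov_histograms(pixels, centers):
    """pixels: list of (patch_id, feature_vector) pairs, where patch_id
    identifies the superpixel (the "document") a pixel belongs to.

    Returns a normalized visual-word histogram per patch: the BOV
    representation used as the patch-level feature for classification.
    """
    k = len(centers)
    hists = defaultdict(lambda: [0.0] * k)
    for patch_id, feat in pixels:
        # Quantize the pixel to its nearest visual word.
        j = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(feat, centers[c])))
        hists[patch_id][j] += 1.0
    # Normalize each histogram so word frequencies sum to 1.
    return {pid: [v / sum(h) for v in h] for pid, h in hists.items()}
```

For example, with two toy patches whose pixel features form two well-separated clusters, each patch's histogram concentrates on a single visual word, which is the behavior the BOV representation relies on for discriminating homogeneous regions.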