Visual attention (VA), defined as the ability of a biological or artificial visual system to rapidly detect potentially relevant parts of a visual scene, provides a general-purpose solution for low-level feature detection in a visual architecture. Numerous computational models of visual attention have been proposed over the last two decades. In saliency-map-based VA models, a key problem is selecting weights for the feature maps that correctly reflect their relative response salience. This paper presents a sparse embedding visual attention (SEVA) model, inspired by sparse representation. A feature saliency index, measured via sparse representation, adjusts the weight of each feature map in proportion to its average contribution to the saliency map. The proposed visual attention system is evaluated on a variety of scene images. Results show that the SEVA model consistently outperforms the traditional VA model, owing to the adaptive weighting of the feature maps.
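The weighting scheme described above can be sketched as follows. This is an illustrative sketch only, not the paper's actual method: it substitutes a simple Hoyer-style sparseness measure for the paper's sparse-representation-based saliency index, and assumes feature maps are given as NumPy arrays that are min-max normalized before fusion.

```python
import numpy as np

def sparseness(x, eps=1e-12):
    """Hoyer-style sparseness in [0, 1]; 1 means maximally sparse.
    Stands in here for the paper's sparse-representation saliency index."""
    x = np.abs(x.ravel()) + eps
    n = x.size
    return (np.sqrt(n) - x.sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

def combine_feature_maps(feature_maps):
    """Fuse feature maps into a saliency map, weighting each map
    in proportion to its sparseness-based saliency index."""
    weights = np.array([sparseness(f) for f in feature_maps])
    weights = weights / weights.sum()  # normalize weights to sum to 1
    saliency = sum(
        w * (f - f.min()) / (f.max() - f.min() + 1e-12)  # min-max normalize
        for w, f in zip(weights, feature_maps)
    )
    return saliency, weights
```

A feature map with one strong, localized response receives a high weight, while a near-uniform map (which carries little salience information) is suppressed, which is the qualitative behavior the abstract attributes to the adaptive weighting.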