This paper proposes a novel method for estimating visual saliency, based on the widely held view that image saliency depends mainly on local and global contrast across multiple feature channels. We compute the contrast between image patches on low-level feature maps generated by color space conversion and the support value transform. To obtain a representative contrast measure efficiently, we compute patch dissimilarity in a reduced-dimensional principal component space. Moreover, our method extends easily to additional conspicuous feature channels. Experimental results on two publicly available human eye fixation datasets demonstrate that our method outperforms seven other state-of-the-art saliency models.
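The core idea of patch-based contrast in a reduced principal component space can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the patch size, number of principal components, and the use of summed Euclidean distances as the dissimilarity measure are all assumptions for this example.

```python
import numpy as np

def patch_saliency(feature_map, patch=8, n_components=10):
    """Illustrative sketch: score each non-overlapping patch by its summed
    dissimilarity to all other patches, measured in a PCA-reduced space.
    Parameter values here are assumptions, not the paper's settings."""
    h, w = feature_map.shape
    # Collect non-overlapping patches as flattened vectors.
    vecs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            vecs.append(feature_map[y:y + patch, x:x + patch].ravel())
    X = np.asarray(vecs, dtype=float)
    # PCA via SVD of the centered patch matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T  # patches in reduced component space
    # Dissimilarity of each patch to all others (global contrast).
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    sal = d.sum(axis=1)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)  # normalize to [0, 1]
```

Working in the reduced space keeps the pairwise distance computation cheap while discarding directions of low variance across patches, which is what makes the dissimilarity measurement both representative and efficient.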