Our understanding of human visual perception has been paramount in the development of tools for digital video processing. For this reason, saliency detection, i.e., the determination of visual importance in a scene, has come to the forefront in recent literature. In the proposed work, a new method for scale-aware saliency detection is introduced. Scale determination is achieved through a scale–space model utilizing color and texture cues. Scale information is fed back to a discriminant saliency engine by automatically tuning center–surround parameters through a soft weighting. The proposed method demonstrates excellent performance against a database of measured human fixations. Further evidence of the algorithm's performance is provided through an application to frame rate upconversion. The ability of the algorithm to detect salient objects at multiple scales allows for class-leading performance both objectively, in terms of peak signal-to-noise ratio and structural similarity index, and subjectively. Finally, the need for operator tuning of saliency parameters is dramatically reduced by the inclusion of scale information. The proposed method is well suited for any application requiring automatic saliency determination for images or video.
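The center–surround mechanism with soft weighting across scales described above can be illustrated with a minimal sketch. Note that this is an assumption-laden toy: the box-filter radii, the fixed weights, and the single intensity feature are illustrative placeholders, not the paper's actual color/texture features or its automatically tuned parameters.

```python
import numpy as np

def box_blur(img, radius):
    """Mean filter of side (2*radius + 1) via padded cumulative sums."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for the box-sum subtraction
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def center_surround_saliency(feature, center_r=1,
                             surround_radii=(4, 8, 16),
                             weights=(0.5, 0.3, 0.2)):
    """Center-surround contrast softly weighted over several surround scales.

    In the proposed method the weighting is derived automatically from the
    estimated scale; here the weights are hypothetical constants.
    """
    center = box_blur(feature, center_r)
    sal = np.zeros_like(feature)
    for r, w in zip(surround_radii, weights):
        sal += w * np.abs(center - box_blur(feature, r))
    return sal

# Toy feature map: a small bright patch on a dark background.
feat = np.zeros((64, 64))
feat[28:36, 28:36] = 1.0
sal = center_surround_saliency(feat)
```

On this toy input, the saliency response peaks around the bright patch and vanishes in the uniform background, since a uniform region has identical center and surround statistics at every scale.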