A perceptually motivated speech enhancement approach is proposed in this paper. Unlike conventional sparse and low-rank model-based approaches, the new approach accounts for the perceptual differences across frequency bands in the human auditory system and separates speech from background noise in the Mel spectral domain. After two propositions for the Mel-frequency-weighted spectrogram are proved, speech enhancement is formulated as a sparse and low-rank constrained optimization problem, which is solved efficiently by the alternating direction method of multipliers (ADMM). The proposed approach is fully unsupervised: neither a speech nor a noise dictionary needs to be trained beforehand. Experimental results show promising performance under strong background noise, and the performance can be further improved by an information fusion technique at high input SNRs.
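To make the sparse plus low-rank decomposition concrete, the following is a minimal sketch of an RPCA-style ADMM solver applied to a Mel-domain magnitude spectrogram, assuming the common formulation in which the background noise is modeled as low-rank and the speech component as sparse (minimize the nuclear norm of the noise term plus a weighted l1 norm of the speech term, subject to the two summing to the observed spectrogram). The Mel weighting, the parameter choices, and the stopping rule below are illustrative assumptions, not the paper's exact constraints.

```python
# Sketch: sparse + low-rank decomposition of a Mel magnitude spectrogram via ADMM
# (RPCA-style: min ||L||_* + lam * ||S||_1  s.t.  M = L + S).
# lam, rho, and the toy data are illustrative assumptions, not the paper's settings.
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value soft-thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(M, lam=None, rho=None, n_iter=200, tol=1e-6):
    """Decompose M into a low-rank part L (noise) and a sparse part S (speech)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))                 # common RPCA default
    if rho is None:
        rho = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # heuristic penalty parameter
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                                # scaled dual variable
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / rho, 1.0 / rho)   # low-rank (noise) update
        S = soft_threshold(M - L + Y / rho, lam / rho)  # sparse (speech) update
        R = M - L - S                                   # primal residual
        Y = Y + rho * R                                 # dual ascent step
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(M, 'fro'):
            break
    return L, S

if __name__ == "__main__":
    # Toy example: rank-1 "noise" plus sparse "speech" activity on a 40-band Mel grid.
    rng = np.random.default_rng(0)
    noise = np.outer(rng.random(40), rng.random(200))
    speech = np.zeros((40, 200))
    speech[rng.integers(0, 40, 150), rng.integers(0, 200, 150)] = 3.0 * rng.random(150)
    M = noise + speech
    L, S = rpca_admm(M)
    print("rank(L) ~", np.linalg.matrix_rank(L, tol=1e-3),
          " nonzeros in S:", int((np.abs(S) > 1e-3).sum()))
```

In a full enhancement pipeline, M would be obtained by applying a Mel filterbank to the noisy short-time magnitude spectrogram, and the recovered sparse component would be mapped back to the linear-frequency domain to synthesize the enhanced speech; those steps are omitted here for brevity.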