Compressed sensing is a new scheme that recovers sparse signals from far fewer measurements than traditional sampling, using $\ell_1$ minimization. Recently, Chartrand and Staneva (Inverse Problems 24:1–14, 2009) showed that $\ell_p$ minimization with $0 < p < 1$ recovers sparse signals from fewer linear measurements than $\ell_1$ minimization does. They proved that $\ell_p$ minimization with $0 < p < 1$ recovers $S$-sparse signals $x \in \mathbb{R}^N$ from fewer Gaussian random measurements, for some smaller $p$, with probability exceeding $$ 1 - 1 \Bigg/ {N\choose S}. $$ The first aim of this paper is to show that the above result holds for Gaussian random measurements with probability exceeding $1-2e^{-c(p)M}$, where $M$ is the number of rows of the Gaussian random measurement matrix and $c(p)$ is a positive constant that guarantees $1-2e^{-c(p)M}>1 - 1 / {N\choose S}$ for sufficiently small $p$. The second purpose of the paper is to show that, under certain weaker conditions, the decoders $\Delta_p$ are stable in the sense that they are $(2,p)$ instance optimal for a large class of encoders for $0 < p < 1$.
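For context, a sketch of the objects the abstract refers to, under the standard compressed-sensing setup (the symbol $\Phi$ for the $M \times N$ measurement matrix and the constant $C$ are notational assumptions; the abstract does not fix them):

```latex
% Standard \ell_p minimization decoder (0 < p < 1), with \Phi the assumed
% M x N measurement (encoder) matrix and y = \Phi x the measurements:
\Delta_p(y) \;=\; \operatorname*{arg\,min}_{z \in \mathbb{R}^N} \|z\|_p^p
\quad \text{subject to} \quad \Phi z = y,
\qquad \|z\|_p^p \;=\; \sum_{i=1}^{N} |z_i|^p.

% (2,p) instance optimality of order S, in the usual mixed-norm sense:
% the decoding error in \ell_2 is controlled by the best S-term
% approximation error \sigma_S(x)_p measured in \ell_p:
\|x - \Delta_p(\Phi x)\|_2 \;\le\; C\, S^{1/2 - 1/p}\, \sigma_S(x)_p,
\qquad \sigma_S(x)_p \;=\; \min_{\substack{z \in \mathbb{R}^N \\ \|z\|_0 \le S}} \|x - z\|_p.
```

In particular, an $S$-sparse $x$ has $\sigma_S(x)_p = 0$, so instance optimality implies exact recovery of $S$-sparse signals.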