Random projection methods give distributions over k × d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a vector x, the norm of the resulting vector, ‖Ψx‖₂, is equal to ‖x‖₂ up to distortion ε with probability at least 1 − δ. The Johnson-Lindenstrauss lemma shows that such distributions exist over dense matrices for k (the target dimension) in O(ε⁻² log(1/δ)). Ailon and Chazelle, and later Matousek, showed that there exist entry-wise i.i.d. distributions over sparse matrices Ψ which give the same guarantees for vectors whose ℓ∞ norm is bounded away from their ℓ2 norm. This allows one to accelerate the mapping x ↦ Ψx. We claim that setting Ψ to be any column-normalized deterministic dense matrix composed with a random ±1 diagonal matrix also exhibits this property for vectors whose ℓp norm (for any p > 2) is bounded away from their ℓ2 norm. We also describe a specific tensor product matrix which we term lean Walsh. It is applicable to any vector in ℝ^d in O(d) operations and requires a weaker ℓ∞ bound on x than the best current result, under comparable running times, using sparse matrices due to Matousek.
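The central construction above, Ψ = A · D with A an arbitrary column-normalized dense matrix and D a random ±1 diagonal, can be sketched as follows. This is a minimal illustration, not the paper's lean Walsh matrix: the particular choice of A (a cosine-based matrix) and all variable names are illustrative assumptions; the claim in the text holds for any column-normalized deterministic dense matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 256, 32  # original and target dimensions (illustrative values)

# Stand-in deterministic dense k x d matrix; any such matrix works
# once its columns are normalized to unit Euclidean norm.
A = np.cos(np.pi * np.outer(np.arange(k), np.arange(d)) / d)
A /= np.linalg.norm(A, axis=0, keepdims=True)  # column-normalize

# Random +/-1 diagonal D: applying it is an O(d) elementwise sign flip.
signs = rng.choice([-1.0, 1.0], size=d)

def project(x):
    # Psi x = A (D x): sign-flip x entrywise, then apply the dense matrix.
    return A @ (signs * x)

# The norm-preservation guarantee applies to "flat" vectors, i.e. those
# whose l_p norm (p > 2) is small relative to their l_2 norm.
x = rng.standard_normal(d)
y = project(x)
```

In expectation E‖Ψx‖₂² = Σᵢ xᵢ²‖aᵢ‖₂² = ‖x‖₂², since the columns aᵢ of A have unit norm and the random signs decorrelate the cross terms; concentration around this expectation is what requires the ℓp bound on x.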