It is well-known that R^N has subspaces of dimension proportional to N on which the ℓ1 and ℓ2 norms are uniformly equivalent, but it is unknown how to construct them explicitly. We show that, for any δ > 0, such a subspace can be generated using only N^δ random bits. This improves over previous constructions of Artstein-Avidan and Milman, and of Lovett and Sodin, which require O(N log N) and O(N) random bits, respectively.
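To make the claim concrete: uniform equivalence of the two norms on a subspace X ⊆ R^N is the standard condition below (the name c for the constant is ours, chosen for illustration):

```latex
% Uniform equivalence of the l1 and l2 norms on a subspace X of R^N:
% there is an absolute constant c > 0, independent of N, such that
\[
  c\,\sqrt{N}\,\|x\|_{2} \;\le\; \|x\|_{1} \;\le\; \sqrt{N}\,\|x\|_{2}
  \qquad \text{for all } x \in X .
\]
```

The upper bound holds for every x ∈ R^N by Cauchy–Schwarz; the content of the condition is the lower bound, which forces every vector in X to be "well spread" across its coordinates.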
Such subspaces are known to also yield error-correcting codes over the reals and compressed sensing matrices. Our subspaces are defined by the kernel of a relatively sparse matrix (with at most N^δ non-zero entries per row), and thus enable compressed sensing in near-linear O(N^{1+δ}) time.
As in the work of Guruswami, Lee, and Razborov, our construction is the continuous analog of a Tanner code, and makes use of expander graphs to impose a collection of local linear constraints on vectors in the subspace. Our analysis achieves uniform equivalence of the ℓ1 and ℓ2 norms (independent of the dimension). It has parallels to iterative decoding of Tanner codes, and leads to an analogous near-linear time algorithm for error-correction over the reals.
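The idea of taking the kernel of a sparse constraint matrix can be illustrated with a toy numerical sketch. This is not the paper's construction: the graph below is uniformly random rather than an expander, and all names and parameters (N, rows, deg) are ours. It only shows the shape of the object — each row checks a few coordinates, the subspace is the common kernel, and one can empirically probe how spread-out kernel vectors are.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
N, rows, deg = 64, 32, 8          # deg nonzeros per row (toy stand-in for N^delta)

# Sparse local constraints: row i touches only `deg` randomly chosen coordinates.
A = np.zeros((rows, N))
for i in range(rows):
    support = rng.choice(N, size=deg, replace=False)
    A[i, support] = rng.standard_normal(deg)

# Candidate subspace = ker(A); its dimension is at least N - rows.
X = null_space(A)                  # columns form an orthonormal basis of ker(A)
assert X.shape[1] >= N - rows

# Probe the l1/l2 ratio on random kernel vectors. It always lies in (0, 1]
# after normalizing by sqrt(N); a "well-spread" subspace keeps it bounded
# away from 0, which is what uniform equivalence demands.
vs = X @ rng.standard_normal((X.shape[1], 1000))
ratios = np.abs(vs).sum(axis=0) / (np.sqrt(N) * np.linalg.norm(vs, axis=0))
print(f"min l1/(sqrt(N)*l2) ratio over samples: {ratios.min():.3f}")
```

Sampling random directions only gives an optimistic estimate: the guarantee in the paper is uniform over the whole subspace, which a finite sample cannot certify.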