It was shown recently that the $K$ L1-norm principal components (L1-PCs) of a real-valued data matrix $\mathbf X \in \mathbb{R}^{D \times N}$ ($N$ data samples of $D$ dimensions) can be calculated exactly with cost $\mathcal{O}(2^{NK})$ or, when advantageous, $\mathcal{O}(N^{dK - K + 1})$, where $d = \mathrm{rank}(\mathbf X)$ and $K < d$. In applications where $\mathbf X$ is large (e.g., "big" data of large $N$ and/or "heavy" data of large $d$), these costs are prohibitive. In this paper, we present a novel suboptimal algorithm for the calculation of the $K < d$ L1-PCs of $\mathbf X$ with cost $\mathcal{O}(ND\min\lbrace N,D\rbrace + N^2K^2(K^2 + d))$, which is comparable to that of standard (L2-norm) principal-component analysis. Our theoretical and experimental studies show that the proposed algorithm calculates the exact optimal L1-PCs with high frequency and attains a higher value of the L1-PC optimization metric than any known alternative algorithm of comparable computational cost. The superiority of the calculated L1-PCs over standard L2-PCs (singular vectors) in characterizing potentially faulty data/measurements is demonstrated with experiments in data dimensionality reduction and disease diagnosis from genomic data.
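For concreteness, the L1-PC optimization metric referred to above is, in the standard L1-PCA formulation, the entrywise L1 norm of the projected data, $\|\mathbf X^\top \mathbf Q\|_1$, maximized over orthonormal $\mathbf Q \in \mathbb{R}^{D \times K}$. The sketch below is only an illustration of this metric (not the proposed algorithm): it evaluates the L1 objective at the L2-PCs obtained by SVD, with the helper name and toy dimensions chosen here for exposition.

```python
import numpy as np

def l1_metric(X, Q):
    """L1-PC objective ||X^T Q||_1: sum of absolute projections of the samples on Q."""
    return np.abs(X.T @ Q).sum()

# Toy data: as in the abstract, the N columns of X are D-dimensional samples.
rng = np.random.default_rng(0)
D, N, K = 5, 40, 2
X = rng.standard_normal((D, N))

# L2-PCs: the K dominant left singular vectors of X.
U, _, _ = np.linalg.svd(X, full_matrices=False)
Q_l2 = U[:, :K]

print("L1 metric attained by the L2-PCs:", l1_metric(X, Q_l2))
```

Any candidate orthonormal basis (e.g., the output of an L1-PC solver) can be scored with the same `l1_metric` call, which is how the comparison against algorithms of comparable cost is quantified.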