If the signal statistics are given, direct vector quantization (DVQ) according to these statistics provides the highest coding efficiency, but requires unmanageable storage. In code-excited linear predictive (CELP) coding, a single "compromise" codebook is trained in the prediction-residual domain, and the space-filling and shape advantages of vector quantization (VQ) are exploited only in a non-optimal, average sense. In this paper, we propose a Karhunen-Loève Transform (KLT)-based classified VQ (CVQ), in which the space-filling advantage is retained because the Voronoi-region shape is not affected by the KLT. The memory and shape advantages can also be exploited, since each codebook is designed for a narrow class of KLT-domain statistics. Our experiments show that KLT-CVQ provides a higher SNR than CELP and (single-codebook) DVQ, with a computational complexity similar to DVQ and much lower than CELP. Storage requirements are modest because of the energy-concentration property of the KLT.
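The key claim that the KLT leaves Voronoi-region shapes unchanged follows from the KLT being an orthogonal transform, which preserves Euclidean distances. The following is a minimal numerical sketch of this property (not the paper's experimental setup); the AR(1)-style covariance with correlation 0.9 and dimension 8 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated source: AR(1)-style covariance (illustrative choice)
dim = 8
rho = 0.9
cov = rho ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))

# KLT basis = eigenvectors of the source covariance matrix
eigvals, klt = np.linalg.eigh(cov)

# The KLT matrix is orthogonal: klt @ klt.T == I
assert np.allclose(klt @ klt.T, np.eye(dim))

# Draw two source vectors and map them to the KLT domain
x = rng.multivariate_normal(np.zeros(dim), cov)
y = rng.multivariate_normal(np.zeros(dim), cov)
tx, ty = klt.T @ x, klt.T @ y

# Orthogonality preserves Euclidean distances, so nearest-neighbor
# (Voronoi) cell shapes are unchanged in the KLT domain
print(np.isclose(np.linalg.norm(x - y), np.linalg.norm(tx - ty)))
```

Because most of the source energy concentrates in the leading KLT coefficients (the largest eigenvalues), per-class codebooks need only cover a few significant dimensions, which is consistent with the abstract's claim of modest storage.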