Video-coding systems require a large external-memory bandwidth to encode a single video frame. Many modules of current video encoders must access the external memory to read and write data, resulting in high power consumption, since memory-related power dominates in current digital systems. Moreover, external memory access is an important performance bottleneck in current multimedia systems. This article presents the Reference Frame Context Adaptive Variable-Length Coder (RFCAVLC), a low-complexity lossless solution that compresses reference data before storing them in the external memory. The proposed approach is based on Huffman codes and employs eight static code tables to avoid the cost of on-the-fly statistical analysis. The best table for encoding each block is selected at run time through a context evaluation, yielding a context-adaptive configuration. The proposed RFCAVLC reaches an average compression ratio above 32 % for the evaluated video sequences. The RFCAVLC encoder and decoder architectures were designed and synthesized targeting both an FPGA and a TSMC 65 nm standard-cell library. The RFCAVLC design reaches real-time encoding for WQSXGA (3,200 × 2,048 pixels) video at 33 fps. It also achieves power savings in external memory communication exceeding 30 % when processing HD 1,080p video at 30 fps.
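The core idea of selecting a static prefix-code table per block via a context evaluation can be sketched as follows. This is a minimal illustration only: the two tables, the 2-bit symbol alphabet, and the context rule (mean symbol magnitude of the block) are assumptions made for the sketch, not the paper's actual design, which uses eight Huffman tables derived from reference-frame statistics.

```python
# Hypothetical static prefix-code tables (illustrative, not the paper's tables).
# Each maps a symbol in 0..3 to a prefix-free bit string.
TABLES = [
    {0: "0", 1: "10", 2: "110", 3: "111"},  # skewed: favors small symbols
    {0: "00", 1: "01", 2: "10", 3: "11"},   # flat: favors uniform symbols
]

def select_table(block):
    """Pick a table index from the block's context.
    Assumed context rule: mean symbol magnitude (illustrative)."""
    mean = sum(block) / len(block)
    return 0 if mean < 1.0 else 1

def encode_block(block):
    """Encode a block of symbols (0..3) with the context-selected table.
    The table index must be signaled so the decoder can use the same table."""
    t = select_table(block)
    bits = "".join(TABLES[t][s] for s in block)
    return t, bits

def decode_block(t, bits):
    """Losslessly decode using the signaled table (prefix-free property)."""
    inv = {code: sym for sym, code in TABLES[t].items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return out
```

A smooth block such as `[0, 0, 1, 0]` selects the skewed table and compresses below 2 bits per symbol, while a busy block such as `[3, 2, 3, 1]` falls back to the flat table, so no block expands beyond the fixed-length baseline (plus the signaled table index).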