Research Papers
SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation
The key-value (KV) cache has become a bottleneck for LLMs in long-context
generation. Despite numerous efforts in this area, optimization of the
decoding phase is generally overlooked. We believe such optimization is
crucial, especially for long-output generation tasks, based on two
observations: (i) excessive compression during the prefill phase, which
requires the full context, impairs comprehension of the reasoning task;
(ii) heavy hitters deviate during reasoning tasks with long outputs. We
therefore introduce SCOPE, a simple yet efficient framework that performs
KV cache optimization separately during the prefill and decoding phases.
Specifically, the KV cache from the prefill phase is preserved to retain
essential information, while a novel sliding strategy selects essential
heavy hitters for the decoding phase. Memory usage and memory transfer are
further optimized with adaptive and discontinuous strategies. Extensive
experiments on LongGenBench demonstrate the effectiveness and generalization
of SCOPE, as well as its compatibility as a plug-in with other prefill-only
KV compression methods.
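
The abstract describes the decoding-phase policy only at a high level. The
following is a minimal sketch of what sliding heavy-hitter selection could
look like, assuming heavy hitters are ranked by accumulated attention scores
and a recent window of decoded tokens is always retained; the function name
and parameters are illustrative, not SCOPE's actual API.

    import torch

    def select_decoding_kv(attn_scores: torch.Tensor,
                           window: int,
                           budget: int) -> torch.Tensor:
        """Choose which decoding-phase KV entries to keep.

        attn_scores: (num_decoded_tokens,) accumulated attention each
                     decoded token's KV entry has received so far.
        window:      size of the recent sliding window, always kept.
        budget:      number of heavy hitters kept outside the window.
        """
        n = attn_scores.shape[0]
        # The most recent `window` tokens are always retained.
        recent = torch.arange(max(0, n - window), n)
        # Among older tokens, keep the top-`budget` by accumulated score.
        older = attn_scores[: max(0, n - window)]
        k = min(budget, older.shape[0])
        if k > 0:
            heavy = torch.topk(older, k).indices
        else:
            heavy = torch.empty(0, dtype=torch.long)
        return torch.cat([heavy, recent]).sort().values

    # Toy usage: keep a 16-token window plus 8 heavy hitters.
    scores = torch.rand(100)
    keep = select_decoding_kv(scores, window=16, budget=8)

In this reading, the sliding window keeps the cache budget constant as the
output grows, which is why heavy hitters that "deviate" late in generation
can still be captured rather than fixed at prefill time.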