Chunk fragmentation, an inherent side effect of deduplication, significantly degrades restore performance: because of storage indirection, the restore process must reassemble chunks that are scattered across a large number of containers. Existing solutions to the fragmentation problem either sacrifice deduplication efficiency or require additional memory. In this work, we propose a new restore cache scheme that accelerates the restore process while using the same amount of cache space as a traditional LRU restore cache. We leverage knowledge from the backup recipe to identify the containers that will soon be accessed when restoring a backup version, and classify them as either bursty or regular. The two classes are then placed in two separate caches. Bursty containers, which hold many chunks that will be needed for restore within a short period of time, go into a smaller cache managed at container granularity. In contrast, regular containers go into a larger cache managed at chunk granularity, with chunks that will not be used discarded at the time the containers are brought in. As a result, bursty containers are likely to be evicted from the restore cache quickly, rather than occupying cache space for unnecessarily long. Our evaluation results demonstrate that the proposed cache scheme improves the restore speed factor by up to 3.05X and reduces the number of container reads by 67.3% on average, relative to a conventional LRU restore cache.
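The dual-cache idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and function names, the cache-sizing parameters, and the simple window/threshold heuristic used here to detect bursty containers are all assumptions introduced for illustration.

```python
from collections import OrderedDict, Counter

def classify_bursty(recipe, chunk_to_container, window, threshold):
    """Hypothetical heuristic: count how many of the next `window` recipe
    chunks fall in each container; containers with >= threshold upcoming
    chunk references are treated as bursty."""
    refs = Counter(chunk_to_container[c] for c in recipe[:window])
    return {cid for cid, n in refs.items() if n >= threshold}

class DualRestoreCache:
    """Illustrative dual restore cache: bursty containers live in a small
    LRU cache at container granularity; for regular containers, only the
    chunks the remaining recipe still needs are kept, at chunk granularity."""
    def __init__(self, container_slots, chunk_slots):
        self.bursty = OrderedDict()   # container_id -> {chunk_id: data}
        self.chunks = OrderedDict()   # chunk_id -> data
        self.container_slots = container_slots
        self.chunk_slots = chunk_slots
        self.reads = 0                # container reads from storage

    def restore(self, recipe, containers, chunk_to_container,
                window=8, threshold=3):
        out = []
        for i, chunk in enumerate(recipe):
            data = self._lookup(chunk)
            if data is None:
                cid = chunk_to_container[chunk]
                self.reads += 1       # read the whole container from storage
                remaining = recipe[i:]
                if cid in classify_bursty(remaining, chunk_to_container,
                                          window, threshold):
                    self._put_container(cid, containers[cid])
                else:
                    # Regular container: drop chunks that will not be used.
                    for c in set(remaining) & set(containers[cid]):
                        self._put_chunk(c, containers[cid][c])
                data = self._lookup(chunk)
            out.append(data)
        return out

    def _lookup(self, chunk):
        for cid, chunks in self.bursty.items():
            if chunk in chunks:
                self.bursty.move_to_end(cid)  # refresh LRU position
                return chunks[chunk]
        if chunk in self.chunks:
            self.chunks.move_to_end(chunk)
            return self.chunks[chunk]
        return None

    def _put_container(self, cid, chunks):
        self.bursty[cid] = dict(chunks)
        self.bursty.move_to_end(cid)
        while len(self.bursty) > self.container_slots:
            self.bursty.popitem(last=False)   # evict LRU container

    def _put_chunk(self, chunk, data):
        self.chunks[chunk] = data
        self.chunks.move_to_end(chunk)
        while len(self.chunks) > self.chunk_slots:
            self.chunks.popitem(last=False)   # evict LRU chunk
```

Because a bursty container's many upcoming references are served in quick succession, it naturally falls to the LRU end of the small container cache once its burst is over, which is the quick-eviction behavior the scheme relies on.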