For applications in worst-case execution time analysis and in security, it is desirable to statically classify memory accesses into those that result in cache hits, and those that result in cache misses. Among cache replacement policies, the least recently used (LRU) policy has been studied the most and is considered to be the most predictable.
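The hit/miss classification in question can be illustrated with a minimal sketch (not from the paper) that simulates a single fully-associative LRU cache set; the block names, trace, and function name are illustrative only:

```python
# Minimal sketch: simulate one fully-associative LRU cache set and
# label each access in a trace as a hit or a miss. Static analyses
# must predict these labels without running the program.

def classify_lru(trace, associativity):
    """Return a hit/miss label for each access in the trace."""
    ages = []  # blocks ordered from most to least recently used
    labels = []
    for block in trace:
        if block in ages:
            labels.append("hit")
            ages.remove(block)
        else:
            labels.append("miss")
            if len(ages) == associativity:
                ages.pop()  # evict the least recently used block
        ages.insert(0, block)  # accessed block is now most recently used
    return labels

print(classify_lru(["a", "b", "a", "c", "b", "d", "a"], 2))
# → ['miss', 'miss', 'hit', 'miss', 'miss', 'miss', 'miss']
```

Under LRU, the concrete state is exactly this recency ordering, which is why the policy lends itself to analysis.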
The state of the art in LRU cache analysis presents a tradeoff between precision and analysis efficiency: the classical approach to analyzing programs running on LRU caches, an abstract interpretation based on a range abstraction, is very fast but can be imprecise. An exact analysis was recently presented, but, as a last resort, it calls a model checker, which is expensive.
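The idea behind a range abstraction of this kind can be sketched as follows; the update and join rules here are simplified illustrations under assumed conventions (ages 0 to associativity−1, with the associativity denoting eviction), not the exact transfer functions of the analyses discussed:

```python
# Hedged sketch of a range ("age interval") abstraction: each block
# maps to an interval of possible ages, and intervals are joined by
# union at control-flow merge points. Imprecision arises when an
# interval straddles the eviction bound.

ASSOC = 4  # associativity; ages range over 0..ASSOC-1, ASSOC = evicted

def join(intervals_a, intervals_b):
    """Merge two abstract cache states by componentwise interval union."""
    merged = {}
    for block in set(intervals_a) | set(intervals_b):
        lo_a, hi_a = intervals_a.get(block, (ASSOC, ASSOC))
        lo_b, hi_b = intervals_b.get(block, (ASSOC, ASSOC))
        merged[block] = (min(lo_a, lo_b), max(hi_a, hi_b))
    return merged

def classify(interval):
    """Classify the next access to a block from its age interval."""
    lo, hi = interval
    if hi < ASSOC:
        return "always hit"   # block surely still cached
    if lo >= ASSOC:
        return "always miss"  # block surely evicted
    return "unknown"          # the imprecision an exact analysis removes

state = join({"x": (0, 1)}, {"x": (0, 3), "y": (2, ASSOC)})
print(classify(state["x"]))  # → always hit
print(classify(state["y"]))  # → unknown
```

Joining intervals loses the correlations between blocks' ages that an exact, model-checking-based analysis retains, which is the source of the precision gap.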
In this paper, we develop an analysis based on abstract interpretation that comes close to the efficiency of the classical approach, while achieving the same exact classification of all memory accesses as the model-checking approach. Compared with the model-checking approach, we observe speedups of several orders of magnitude. As a secondary contribution, we show that LRU cache analysis problems are in general NP-complete.
Slides: Fast and exact analysis for LRU caches (presentation.pdf), 451 KiB
Fri 18 Jan
17:21 - 17:43
Valentin Touzeau (Univ. Grenoble Alpes), Claire Maiza (Verimag, France), David Monniaux (CNRS, VERIMAG), Jan Reineke (Saarland University)