Predictability and performance aware replacement policy PVISAM for unified shared caches in real-time multicores
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/140617
Institution: Nanyang Technological University
Summary: Missing the deadline of an application task can be catastrophic in real-time systems. Therefore, to ensure timely completion of tasks, offline worst-case execution time and schedulability analysis is often performed for such real-time systems. One of the important inputs to this analysis is a safe upper bound on the number of misses in each processor cache memory used by the system. Cache miss prediction techniques have matured significantly for private caches in single-core processors; however, they remain a challenge for unified, shared caches in multicore processors. According to prior studies, a task's miss upper bound on a shared cache can be predicted using available private cache prediction techniques only if the shared cache maintains core-based independent static partitions. The problem is that such partitions require an infeasible 'write-update consistency protocol' and waste valuable cache space through duplicate caching. To address this, this paper presents a novel cache replacement policy called 'predictable variable isolation in shared antipodal memory (PVISAM)'. Its replacement decisions generate virtual core-based partitions that support demand-based runtime size adjustment and line sharing to better utilize space. Moreover, these partitions require no consistency protocol. Trace-driven experimental results for PARSEC benchmark applications reveal that the performance of a unified shared cache memory improves by 101.68× on average (minimum 1.09× and maximum 1138.50×) when PVISAM is used instead of either the aforementioned write-update protocol-based predictable partitioning or the widely used write-invalidate consistency protocol-based partitioning. PVISAM can improve cache performance by 0.74× on average (minimum 0.02× and maximum 1.12×) compared to having no partitions at all. Both predictable partitioning and PVISAM improve unified, shared cache predictability by 63.44% (minimum 26.89% and maximum 99.99%) and 19.36% (minimum 1.58% and maximum 72.51%) on average compared to no partitions and write-invalidate protocol-based partitioning, respectively. Experimental results for synthetic traces show that PVISAM remarkably improves cache performance and predictability compared to its three competitors, even in scenarios that stress the cache.
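The abstract only outlines the idea of virtual core-based partitions with demand-based size adjustment; the published PVISAM algorithm is not given here. The sketch below is a minimal, hypothetical C illustration of that general idea (a victim is first taken from the requesting core's own lines, and a line is borrowed from another core only when that core exceeds a demand-based quota). All names and parameters (choose_victim, quota, WAYS, NUM_CORES) are assumptions for illustration and are not taken from the paper.

```c
#include <stdint.h>

/* Hypothetical sketch: victim selection in one set of a shared,
 * set-associative cache with per-core virtual partitions.
 * This is NOT the published PVISAM policy, only an illustration
 * of the idea described in the abstract. */

#define WAYS 8
#define NUM_CORES 4

typedef struct {
    int valid;
    int owner;        /* core that installed the line           */
    unsigned age;     /* larger = older (LRU approximation)     */
    uint64_t tag;
} line_t;

typedef struct {
    line_t way[WAYS];
    unsigned quota[NUM_CORES];  /* demand-based per-core budget  */
} set_t;

/* Count how many ways core `c` currently occupies in this set. */
static unsigned occupancy(const set_t *s, int c)
{
    unsigned n = 0;
    for (int w = 0; w < WAYS; w++)
        if (s->way[w].valid && s->way[w].owner == c)
            n++;
    return n;
}

/* Pick a victim way for a miss issued by `core`. */
int choose_victim(const set_t *s, int core)
{
    int victim = -1;
    unsigned oldest = 0;

    /* 1. Prefer an invalid way. */
    for (int w = 0; w < WAYS; w++)
        if (!s->way[w].valid)
            return w;

    /* 2. Prefer the oldest line owned by the requesting core,
     *    so other cores' virtual partitions stay untouched. */
    for (int w = 0; w < WAYS; w++) {
        if (s->way[w].owner == core && s->way[w].age >= oldest) {
            oldest = s->way[w].age;
            victim = w;
        }
    }
    if (victim >= 0)
        return victim;

    /* 3. Otherwise borrow from a core that exceeds its quota
     *    (demand-based resizing of the virtual partitions). */
    oldest = 0;
    for (int w = 0; w < WAYS; w++) {
        int o = s->way[w].owner;
        if (occupancy(s, o) > s->quota[o] && s->way[w].age >= oldest) {
            oldest = s->way[w].age;
            victim = w;
        }
    }

    /* 4. Fall back to plain LRU if every core is within its quota. */
    if (victim < 0) {
        for (int w = 0; w < WAYS; w++) {
            if (s->way[w].age >= oldest) {
                oldest = s->way[w].age;
                victim = w;
            }
        }
    }
    return victim;
}
```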