Research Project
Lightweight Computation for Networks at the Edge
Publications
Analyzing Fixed Task Priority Based Memory Centric Scheduler for the 3-Phase Task Model
Publication: Arora, Jatin; Rashid, Syed Aftab; Maia, Cláudio; Tovar, Eduardo
The sharing of main memory among concurrently executing tasks on a multicore platform increases the execution times of those tasks in a non-deterministic manner. The use of phased execution models that divide the execution of tasks into distinct memory and execution phase(s), e.g., the PRedictable Execution Model (PREM) and the 3-Phase task model, together with Memory Centric Scheduling (MCS), presents a promising solution to reduce main memory interference among tasks.
Existing works in the state of the art that focus on MCS have considered (i) a TDMA-based memory scheduler, i.e., tasks' memory requests are served under a static TDMA schedule, and (ii) a Processor-Priority (PP) based memory scheduler, i.e., tasks' memory requests are served according to the priority of the processor/core on which the task executes. This paper extends MCS by considering a Task-Priority (TP) based memory scheduler, i.e., tasks' memory requests are served in a global priority order that depends on the priority of the task issuing the requests. We present an analysis that bounds the total memory interference that tasks can suffer under the TP-based MCS. In contrast to most existing works on MCS, which consider non-preemptive tasks, our analysis considers limited preemptive scheduling. Additionally, we investigate the impact of different preemption points on the memory interference of tasks. Experimental results show that our proposed TP-based MCS can significantly reduce the memory interference suffered by tasks in comparison to the PP-based MCS approach.
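As an informal illustration of the kind of bound such an analysis yields (a sketch under simplifying assumptions, not the paper's actual formulation), the memory interference a task \tau_i may suffer over a window of length t under a task-priority memory arbiter could be written, for sporadic fixed-priority tasks, as

I^{mem}_i(t) \;\le\; \sum_{\tau_j \in hp(i)} \left\lceil \frac{t + J_j}{T_j} \right\rceil \left( MD^{A}_j + MD^{R}_j \right)

where hp(i) is the set of tasks with higher priority than \tau_i, T_j and J_j are the period and release jitter of \tau_j, and MD^{A}_j and MD^{R}_j are assumed placeholders for the worst-case lengths of its acquisition and restitution memory phases. The paper's analysis is more involved, as it also accounts for limited preemptive scheduling and the placement of preemption points.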
Bus-Contention Aware WCRT Analysis for the 3-Phase Task Model Considering a Work-Conserving Bus Arbitration Scheme
Publication: Arora, Jatin; Maia, Cláudio; Rashid, Syed Aftab; Nelissen, Geoffrey; Tovar, Eduardo
Today, multicore processors are used in most modern systems that require computational logic. However, their applicability in systems with stringent timing requirements is still an open research problem. This is due to the difficulty of ensuring the timing correctness of tasks executing on a multicore platform that comprises a number of shared hardware resources, e.g., caches, the memory bus and the main memory. Concurrent accesses to any of these shared resources can generate uncontrolled interference, which complicates the estimation of tasks' worst-case execution times (WCETs) and worst-case response times (WCRTs).
The use of the 3-phase task execution model helps in upper bounding the contention due to the sharing of the bus/main memory in multicore systems. It divides the execution of tasks into distinct memory and execution phases, where tasks can only access the bus/main memory during their memory phases. This makes the bus/memory access patterns of tasks more predictable, enabling a more precise computation of bus/memory contention.
In this work, we show how the bus contention can be computed for the 3-phase task model under a work-conserving, i.e., round-robin (RR) based, arbitration policy at the memory bus. This differs from existing works that analyze time-division multiple access (TDMA) and first-come-first-serve (FCFS) based bus arbitration policies. First, we present a solution to model the bus contention that can be suffered/caused by tasks executing on the same/remote cores of a multicore system under an RR-based bus arbitration scheme. We then evaluate the impact of the resulting bus contention on taskset schedulability. Experimental results show that our proposed RR-based bus contention analysis can improve taskset schedulability by up to 100 percentage points compared to the TDMA-based analysis and by up to 40 percentage points compared to the FCFS-based bus contention analysis.
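As a rough, non-authoritative sketch of the round-robin intuition (the task representation, field names and numbers below are illustrative assumptions, not the analysis from the paper), a coarse bound charges each memory phase of a task with at most one pending memory phase from every other core per arbitration round:

# Minimal sketch, not the paper's analysis: a coarse round-robin (RR) bound in
# which each memory phase of a task waits for at most one memory phase of every
# other core before being granted the bus. All parameters are illustrative.
from dataclasses import dataclass

@dataclass
class ThreePhaseTask:
    name: str
    acq: float   # worst-case length of the acquisition (memory) phase
    exe: float   # worst-case length of the execution (computation) phase
    res: float   # worst-case length of the restitution (write-back) phase
    core: int    # core the task is partitioned to

def rr_phase_delay(task: ThreePhaseTask, all_tasks: list[ThreePhaseTask]) -> float:
    """Coarse per-memory-phase delay bound under RR bus arbitration."""
    delay = 0.0
    other_cores = {t.core for t in all_tasks if t.core != task.core}
    for c in other_cores:
        # At most one (the longest) remote memory phase is served ahead of us per round.
        delay += max(max(t.acq, t.res) for t in all_tasks if t.core == c)
    return delay

tasks = [
    ThreePhaseTask("t1", acq=2.0, exe=10.0, res=1.5, core=0),
    ThreePhaseTask("t2", acq=3.0, exe=8.0, res=2.0, core=1),
    ThreePhaseTask("t3", acq=1.0, exe=5.0, res=1.0, core=2),
]
print(rr_phase_delay(tasks[0], tasks))  # 3.0 (core 1) + 1.0 (core 2) = 4.0

This over-approximates a work-conserving arbiter, since it assumes that a competing memory phase from every other core is always pending.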
Tightening the CRPD Bound for Multilevel Non-Inclusive Caches
Publication: Rashid, Syed Aftab; Nelissen, Geoffrey; Tovar, Eduardo
Tasks running on microprocessors with cache memories are often subjected to cache related preemption delays (CRPDs). CRPDs may significantly increase task execution times, thereby affecting their schedulability. Schedulability analysis accounting for the impact of CRPD has been extensively studied over the past two decades for systems with a single level of cache. Yet, the literature on CRPD for multilevel non-inclusive caches is relatively scarce. Two main challenges exist when analyzing multilevel caches: (1) characterizing the indirect effect of preemption, i.e., capturing the increase in cache interference at lower cache levels (e.g., L2 cache) due to the eviction of cache content from a higher cache level (e.g., L1 cache), and (2) upper bounding the maximum CRPD suffered by tasks at lower cache levels (e.g., L2 cache), i.e., determining the cache content of tasks that can be evicted from lower cache levels in case of preemption.
Existing analyses that focus on bounding CRPD for multilevel non-inclusive caches overestimate (1) and (2), leading to pessimistic worst-case response time (WCRT) estimates. In this work, we reduce the excessive pessimism of the state-of-the-art CRPD analysis for multilevel non-inclusive caches by (i) introducing the notion of multi-level useful cache blocks, i.e., cache blocks that can cause CRPD at different cache levels, and using it to compute a tighter bound on the indirect effect of preemption of tasks; and (ii) deriving a new analysis to compute tighter bounds on the CRPD of tasks at lower cache levels (e.g., L2 cache). We performed a thorough experimental evaluation using benchmarks to compare the performance of our proposed CRPD analysis against the state-of-the-art CRPD analysis. Experimental results show that our proposed CRPD analysis dominates the existing analysis and improves task set schedulability by up to 20 percentage points.
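For intuition only (the block sets, reload penalties and helper below are made-up illustrations, not the paper's tighter analysis), the classic bound intersects the useful cache blocks (UCBs) of the preempted task with the evicted cache blocks (ECBs) of the preempting task, here applied independently at each cache level:

# Minimal sketch, not the paper's analysis: per-level UCB/ECB intersection bound
# for a single preemption. Block-reload times (BRT) and set contents are assumed.
def crpd_per_level(ucb_victim: set[int], ecb_preempter: set[int], brt: float) -> float:
    """CRPD cost at one cache level: number of evicted useful blocks times reload time."""
    return len(ucb_victim & ecb_preempter) * brt

# Useful cache blocks of the preempted task and evicted cache blocks of the
# preempting task, per cache level (cache-block indices, illustrative values).
ucb_l1, ucb_l2 = {1, 2, 3, 7}, {1, 2, 3, 7, 9, 12}
ecb_l1, ecb_l2 = {2, 3, 4},    {2, 3, 4, 9}

BRT_L1, BRT_L2 = 10.0, 40.0   # reload penalties in cycles (assumed values)
crpd = crpd_per_level(ucb_l1, ecb_l1, BRT_L1) + crpd_per_level(ucb_l2, ecb_l2, BRT_L2)
print(crpd)   # 2*10 + 3*40 = 140 cycles for one preemption

Treating the two levels independently like this ignores the indirect effect of preemption described above; bounding that effect tightly, via multi-level useful cache blocks, is one of the contributions the abstract describes.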
Cache-aware Schedulability Analysis of PREM Compliant Tasks
Publication: Rashid, Syed Aftab; Awan, Muhammad Ali; Souto, Pedro; Bletsas, Konstantinos; Tovar, Eduardo
The Predictable Execution Model (PREM) is useful for mitigating inter-core interference due to shared resources such as the main memory. However, it is cache-agnostic, which makes schedulability analysis pessimistic via overestimation of prefetches and write-backs. In response, we present a cache-aware schedulability analysis for PREM tasks on fixed-task-priority partitioned multicores that bounds the number of cache prefetches and write-backs. Our approach identifies memory blocks loaded in the execution of a previous scheduling interval of each task that remain in the cache until its next scheduling interval. Doing so greatly reduces the estimated prefetches and write-backs. In experimental evaluations, our analysis improves the schedulability of PREM tasks by up to 55 percentage points.
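As a hedged illustration of the idea (the block sets and helper function below are hypothetical, not the paper's analysis), memory blocks loaded in a task's previous scheduling interval that survive intervening evictions do not have to be prefetched again in its next interval:

# Minimal sketch, not the paper's analysis: blocks already loaded in a task's
# previous PREM scheduling interval and not evicted in between need not be
# prefetched again. Sets of cache-block addresses are illustrative.
def residual_prefetches(needed: set[int], loaded_before: set[int],
                        evicted_in_between: set[int]) -> int:
    """Number of blocks that must still be prefetched for the next interval."""
    survivors = loaded_before - evicted_in_between   # blocks still in the cache
    return len(needed - survivors)

needed_next     = {10, 11, 12, 13}   # blocks the next interval will access
loaded_previous = {10, 11, 12}       # blocks fetched in the last interval
evicted_between = {12, 40, 41}       # blocks displaced by intervening tasks

print(residual_prefetches(needed_next, loaded_previous, evicted_between))  # 2 instead of 4

A cache-agnostic PREM analysis would charge all four prefetches here; accounting for the two surviving blocks is the kind of reduction the analysis above formalizes.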
Funders
Funding agency: European Commission
Funding programme: H2020
Funding Award Number: 732505