Introduction
Multicore systems are becoming increasingly popular for developing RT systems. Why?
For years, increases in processing power came (mainly) from using higher clock frequencies.
Higher clock frequencies and smaller transistors lead to higher dynamic and static power consumption.
Chip temperature eventually reached levels beyond the capability of cooling systems.
... and the demand for processing power in real-time applications never ceased to increase ...
Moving/developing applications to multicore platforms is neither simple nor straightforward.
The straightforward use of sequential languages hides the intrinsic concurrency that must be exploited to benefit from the redundant hardware (the multiple cores).
Programmers have to adopt approaches that allow splitting the code into segments that can be executed in parallel, on different cores (special languages, annotations, ...).
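For instance, with an annotation-based approach such as OpenMP the programmer marks a loop whose iterations may be distributed over the cores. A minimal sketch, assuming a compiler with OpenMP support (e.g. gcc -fopenmp); the loop itself is illustrative only:

```c
#include <stdio.h>

#define N 1024

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* The annotation exposes the loop's intrinsic parallelism: the runtime
     * may split the iteration space into chunks executed on different cores. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[%d] = %.1f\n", N - 1, c[N - 1]);
    return 0;
}
```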
How to split the code into segments/jobs that can be executed in parallel?
How to allocate code segments to different cores? (see the affinity sketch after this list)
How to assess the schedulability on multicore platforms?
How to handle dependencies?
How to cope with the impact of shared resources on the worst-case execution time (WCET)?
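As an illustration of the allocation question above: a minimal, Linux-specific sketch of a partitioned approach, where each worker thread is statically pinned to one core. It assumes glibc's pthread_attr_setaffinity_np, and the core count is an arbitrary choice:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

enum { NCORES = 4 };   /* assumed number of cores for the example */

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("segment %ld running on core %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t th[NCORES];

    for (long i = 0; i < NCORES; i++) {
        /* Static (partitioned) allocation: segment i is bound to core i
         * before the thread starts executing. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)i, &set);

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
        pthread_create(&th[i], &attr, worker, (void *)i);
        pthread_attr_destroy(&attr);
    }

    for (int i = 0; i < NCORES; i++)
        pthread_join(th[i], NULL);
    return 0;
}
```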
WCET is fundamental for RT analysis.
Existing RT analysis assumes that the WCET of a task is the same whether it executes alone or together with other tasks.
While this assumption is reasonable for single-core chips, it is NOT true for multicore chips!
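For example, classic fixed-priority response-time analysis on a single core takes each task's WCET C_i as a fixed, known constant; if co-runners on other cores can inflate C_i, the computed value is no longer a safe bound. A minimal sketch with made-up task parameters:

```c
#include <math.h>
#include <stdio.h>

#define NTASKS 3

int main(void)
{
    /* Tasks in decreasing priority order; C = WCET, T = period (= deadline).
     * The numbers are illustrative only. */
    double C[NTASKS] = { 1.0, 2.0, 3.0 };
    double T[NTASKS] = { 5.0, 10.0, 20.0 };

    for (int i = 0; i < NTASKS; i++) {
        double R = C[i], prev = 0.0;

        /* Fixed-point iteration: R_i = C_i + sum over higher-priority tasks j
         * of ceil(R_i / T_j) * C_j. */
        while (R != prev && R <= T[i]) {
            prev = R;
            R = C[i];
            for (int j = 0; j < i; j++)
                R += ceil(prev / T[j]) * C[j];
        }
        printf("task %d: response time = %.1f (%s)\n", i, R,
               R <= T[i] ? "schedulable" : "not schedulable");
    }
    return 0;
}
```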
Example: Impact of shared resources on the WCET of code on an 8-core platform, by Lockheed Martin Space Systems.
Shared resources include main memory, memory bus, cache, etc.
In a single-core system, concurrent tasks are executed sequentially on the processor.
Access to physical resources is implicitly serialized. E.g., two tasks can never contend for simultaneous memory access.
In a multicore platform, different tasks can run simultaneously on different cores.
⇒ several conflicts can arise when accessing physical resources.
This is an important issue, as existing RT analysis assumes that the WCET is constant and known!
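To see this concretely, one can time the same piece of code with and without a memory-intensive co-runner on another core. A rough, Linux-specific sketch (build with -pthread; buffer size and stride are arbitrary; pass any argument to enable the co-runner):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF (64L * 1024 * 1024)   /* 64 MiB, assumed to exceed the shared cache */

static volatile long sink;

static void *co_runner(void *arg)   /* interfering, memory-intensive load */
{
    (void)arg;
    char *buf = malloc(BUF);
    memset(buf, 1, BUF);
    for (;;)
        for (long i = 0; i < BUF; i += 64)
            sink += buf[i];
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t th;
    if (argc > 1)                   /* any argument enables the co-runner */
        pthread_create(&th, NULL, co_runner, NULL);

    char *buf = malloc(BUF);
    memset(buf, 1, BUF);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < BUF; i += 64)   /* the "task" under measurement */
        sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("traversal took %.2f ms\n", ms);
    return 0;
}
```

On a multicore machine, the run with the co-runner typically takes noticeably longer, even though the measured code did not change.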
L3 cache is typically shared by all cores ⇒ cache conflicts.
In multicore systems, the (typically per-core private) L1 and L2 caches suffer the same problems already seen in single-core systems.
L3 cache lines can also be evicted by applications running on different cores.
Possible approaches to attenuate the impact include, e.g., partitioning the last-level cache to emulate the cache architecture of a single-core chip. But the size of each partition becomes small...
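A back-of-the-envelope sketch of the partition-size problem (the cache geometry below is an assumption, not taken from the example above):

```c
#include <stdio.h>

int main(void)
{
    const int llc_kib = 8 * 1024;   /* assumed 8 MiB shared L3 */
    const int ways    = 16;         /* assumed 16-way set-associative */
    const int cores   = 8;

    /* With an even, per-core partitioning of the shared last-level cache: */
    int slice_kib  = llc_kib / cores;   /* capacity available to each core */
    int slice_ways = ways / cores;      /* granularity of way-based partitioning */

    printf("per-core L3 slice: %d KiB, %d of %d ways\n",
           slice_kib, slice_ways, ways);
    /* => 1024 KiB and only 2 ways per core: each core's "private" share is
     *    far smaller than the full shared cache. */
    return 0;
}
```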
Similar situation with access to main memory, I/O devices, etc.
Despite being largely out of the programmer's control, these issues have a significant impact on the scheduling strategies, as we will see!
In summary, WCET uncertainty gets much worse in multicore systems!