Notes - MIECT
Sistemas Operativos E De Tempo-real

Considerations about the WCET


Task’s WCET

A task’s execution time can be evaluated in several ways:

  • It can be estimated via source-code analysis, determining the longest execution path according to the input data.

    • The corresponding object code is then analyzed to determine the required number of CPU cycles along that path.

  • Note that the execution time of a task may vary from instance to instance, depending on the input data or internal state, due to the presence of conditionals, loops, etc.

  • It is also possible to execute tasks in isolation and in a controlled fashion, feeding them with adequate input data and measuring their execution time on the target platform (see the measurement sketch after this list).

    • This experimental method requires extreme care to make sure that the longest execution paths are actually exercised, a necessary condition to obtain an upper bound on the execution time!

  • Modern processors use features such as pipelines and caches (data and/or instructions) that dramatically improve the average execution time but widen the gap between the average and the worst-case scenarios.

    • For these cases, specific analyses are used that try to reduce the pessimism, e.g. by bounding the maximum number of cache misses and pipeline flushes according to the particular instruction sequences.

  • Nowadays there is a growing interest in stochastic analysis of execution times and their respective impact in terms of interference.

  • The basic idea consists in determining the probability distribution of the execution times and using an estimate that covers a given target (e.g. 99% of the instances); the percentile sketch after this list illustrates the idea.

  • In many cases (mainly when the worst case is infrequent and much worse than the average case), this technique drastically reduces the impact of the gap between the average execution time and the WCET, yielding higher efficiency.
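
As an illustration of the measurement-based approach, the sketch below times many executions of a task in isolation using clock_gettime() with CLOCK_MONOTONIC and keeps the maximum observed value. It is a minimal sketch, not a validated WCET procedure: task_under_test(), the number of runs and the chosen input are hypothetical placeholders, the input must be selected so that the longest paths are exercised, and the system should be kept as quiet as possible (e.g. real-time priority, minimal background load). The placeholder task also shows why execution time varies with the input data.

```c
/* Minimal sketch: measuring a task's execution time in isolation.
 * task_under_test() is a placeholder for the real task body; the input
 * must be chosen so that the longest execution paths are exercised. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define NRUNS 1000          /* number of measured instances (assumption) */

static volatile long sink;  /* keeps the compiler from discarding the work */

/* Placeholder task: its execution time depends on the input (loop bound). */
static void task_under_test(int n)
{
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += i;
    sink = acc;
}

/* Returns the elapsed time of one execution, in nanoseconds. */
static long measure_once(int input)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    task_under_test(input);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    long max_ns = 0;
    for (int i = 0; i < NRUNS; i++) {
        long ns = measure_once(100000);   /* assumed "worst-case" input */
        if (ns > max_ns)
            max_ns = ns;
    }
    printf("maximum observed execution time: %ld ns\n", max_ns);
    return 0;
}
```

Note that the maximum *observed* value is only a lower bound on the true WCET unless the longest path is guaranteed to have been exercised.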
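
To illustrate the stochastic idea, the following sketch takes an array of measured execution times and reports both the maximum observed value and an empirical percentile estimate. The sample values are illustrative only; with just 16 samples a 90% coverage target is used so the effect is visible, whereas a real measurement campaign with thousands of samples would use the 99% target mentioned above.

```c
/* Minimal sketch: empirical percentile of measured execution times.
 * The samples[] values are illustrative; in practice they would come from
 * a measurement campaign such as the one sketched above. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Smallest sample that covers at least 'coverage' of the instances. */
static long percentile(long *samples, int n, double coverage)
{
    qsort(samples, n, sizeof(long), cmp_long);
    int idx = (int)(coverage * n);
    if (idx >= n)
        idx = n - 1;
    return samples[idx];
}

int main(void)
{
    /* Execution times in nanoseconds (illustrative data, one rare outlier). */
    long samples[] = { 1200, 1180, 1250, 1190, 1210, 1230, 1205, 5400,
                       1195, 1220, 1215, 1185, 1240, 1200, 1198, 1222 };
    int n = sizeof(samples) / sizeof(samples[0]);

    long c90 = percentile(samples, n, 0.90);
    printf("maximum observed: %ld ns, 90%% estimate: %ld ns\n",
           samples[n - 1], c90);
    return 0;
}
```

The gap between the rare outlier (5400 ns) and the percentile estimate (1250 ns) is exactly the efficiency gain discussed in the last bullet: provisioning for the percentile instead of the observed maximum avoids paying for an infrequent worst case.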