Scheduling Mechanisms
Monitoring Processor Time Consumption. Specifically, to maintain a server at the user level, we must monitor the server's budget consumption at the user level. What we need for this purpose is an interval timer (a stopwatch) that can be reset at the replenishment time of each server. The timer counts whenever the server executes and signals the user-level scheduler when the count reaches the server budget. Such a timer is similar to the UNIX interval timer ITIMER_VIRTUAL in the sense that it counts only while the server executes. Indeed, if ITIMER_VIRTUAL were more accurate, it could be used for this purpose. Unfortunately, ITIMER_VIRTUAL has two major shortcomings. The first is its resolution, and the second is its accuracy.
In most UNIX systems, the kernel updates ITIMER_VIRTUAL only when it services clock interrupts. As a consequence, the resolution of the timer is coarse, and the consumed budget measured by the timer may be erroneous.
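If ITIMER_VIRTUAL had sufficient resolution, user-level budget monitoring could be built on it directly. The following sketch uses Python's binding of this same UNIX timer; the 50 ms budget is an illustrative value. It arms a one-shot virtual timer and treats the resulting SIGVTALRM as the budget-exhaustion signal:

```python
import signal

budget_exhausted = False

def on_budget_exhausted(signum, frame):
    # A user-level scheduler would suspend the server here.
    global budget_exhausted
    budget_exhausted = True

signal.signal(signal.SIGVTALRM, on_budget_exhausted)

# Arm a one-shot virtual timer with a 50 ms budget (illustrative value).
# ITIMER_VIRTUAL counts down only while this process executes in user mode.
signal.setitimer(signal.ITIMER_VIRTUAL, 0.05)

while not budget_exhausted:
    pass  # "server" work: burn CPU so the virtual timer counts down
```

Because the kernel typically charges virtual time in clock-tick units, a budget measured this way is only as accurate as the tick resolution, which is exactly the shortcoming noted above.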
Tracking Busy Intervals. Similarly, in a fixed-priority system, it is simple for the kernel to monitor and detect the beginnings and ends of busy intervals of the threads with priorities higher than that of a server (or any thread). In Chapter 7, we called this set of threads TH, the server priority πs, and a busy interval of TH a level-(πs−1) busy interval. Recall that the beginning of a busy interval of TH is an instant prior to which no thread in the set is ready for execution and at which a thread in the set is released. The end of the busy interval is the first instant at which all threads in TH released before that instant have completed. Since a new busy interval may begin immediately after a busy interval ends, the processor may be continuously busy executing threads in TH across the boundary. This is why the boundaries of busy intervals cannot be detected simply by monitoring the idle/busy state of the thread set TH.
Input information maintained by kernel:
• currentBudget: Set to server budget at replenishment time.
• decremented: FALSE if the budget has not been decremented in the current clock interrupt period; TRUE otherwise.
• The server priority πs.
• level-πsBusy: TRUE if the current time is in a level-πs busy interval; FALSE otherwise.
• BEGIN and END: Beginning and end of the current level-πs busy interval.
Monitoring consumed server budget:
• When scheduling the server,
– set time slice to currentBudget;
– set decremented to FALSE;
• When preempting or suspending the server in the midst of a clock interrupt period,
– if decremented is FALSE,
∗ decrement (remaining) time slice by tick size;
∗ if time slice is 0, suspend server;
∗ otherwise, set decremented to TRUE and set currentBudget to the remaining time slice;
• At a clock interrupt when the server is executing,
– if decremented is TRUE, set decremented to FALSE;
– Otherwise,
∗ decrement time slice by tick size;
∗ if time slice is 0, suspend server.
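The budget-consumption rules above can be sketched as event handlers on a small record. TICK and all names here are illustrative, not a real kernel interface:

```python
TICK = 10  # clock-tick size in ms (illustrative)

class ServerBudget:
    def __init__(self, budget):
        self.current_budget = budget  # set to server budget at replenishment
        self.time_slice = 0
        self.decremented = False
        self.suspended = False

    def schedule(self):
        """When scheduling the server."""
        self.time_slice = self.current_budget
        self.decremented = False

    def preempt_or_suspend(self):
        """When preempting/suspending the server mid clock-interrupt period."""
        if not self.decremented:
            self.time_slice -= TICK       # a partial period is charged a full tick
            if self.time_slice <= 0:
                self.suspended = True
            else:
                self.decremented = True
                self.current_budget = self.time_slice

    def clock_interrupt(self):
        """At a clock interrupt while the server is executing."""
        if self.decremented:
            self.decremented = False      # this period was already charged
        else:
            self.time_slice -= TICK
            if self.time_slice <= 0:
                self.suspended = True
```

Note that a partial period consumed before a preemption is charged as a full tick; this over-charging is the accuracy limitation inherent in tick-based accounting.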
Tracking level-πs busy interval:
• Upon clock interrupt,
– if level-πsBusy is TRUE,
∗ if the queues for priorities πs or higher are empty,
· set END to the current time and level-πsBusy to FALSE;
· service timer events;
· if a thread in TH is released, set BEGIN to the current time and level-πsBusy to TRUE;
∗ otherwise (i.e., if the queues are not empty), service timer events;
– otherwise (i.e., if level-πsBusy is FALSE),
∗ service timer events;
∗ if a thread in TH is released, set BEGIN to current time and level-πsBusy to TRUE;
• When releasing a thread at times other than clock interrupts,
– if level-πsBusy is FALSE and the newly released thread is in TH, set BEGIN to current time and level-πsBusy to TRUE;
• When a thread of priority πs or higher completes,
– if level-πsBusy is TRUE and the queues for priorities πs or higher become empty, set END to the current time and set level-πsBusy to FALSE.
Static Configuration. As pointed out in the section Multiprocessor System Environment, we want multiprocessor or distributed real-time applications to be statically configured so that we can validate their end-to-end timing constraints. In a static system, computation tasks are partitioned into modules, each module is bound to a processor, and the tasks and threads on each processor are scheduled according to a uniprocessor scheduling algorithm. Over a network, each message stream is sent over a fixed route. Many real-time operating systems are uniprocessor, multitasking systems; if we run our application on multiple networked machines under any of these operating systems, the application is naturally statically configured.
Release Guard Mechanism. The section Multiprocessor System Environment also pointed out that simply binding computation tasks to CPUs is not sufficient. If the tasks are scheduled on a fixed-priority basis, we should synchronize the releases of periodic tasks on different CPUs according to one of the nongreedy protocols discussed in Section 9.4.1. Similarly, sending message streams through fixed routes is not sufficient; traffic shaping at the network interfaces is also needed. With these measures in place, the worst-case response time of each end-to-end task can be computed as the sum of the worst-case response times of its subtasks on the individual CPUs and networks.
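As an illustration of the nongreedy idea on a downstream CPU, the sketch below applies a release-guard rule: the j-th job of a downstream subtask is released no earlier than one period after the (j−1)-th release, even when its predecessor completes early. The function name and the list-based interface are illustrative, not from the text:

```python
def release_times(completion_signals, period, first_release=0):
    """Guarded release times for a downstream periodic subtask.

    completion_signals: arrival times of the predecessor's completion
    signals, one per job, in increasing order (illustrative interface).
    """
    releases = []
    guard = first_release
    for signal_time in completion_signals:
        r = max(signal_time, guard)  # delay early completions to the guard
        releases.append(r)
        guard = r + period           # next release at least one period later
    return releases

# Predecessor jobs complete at 2, 9, and 31; downstream period is 10.
# The second job is held back from 9 to 12 by the guard:
print(release_times([2, 9, 31], 10))  # → [2, 12, 31]
```

Because the guard keeps consecutive releases at least one period apart, the downstream subtask behaves like an ordinary periodic task, so uniprocessor schedulability analysis applies on each CPU and the end-to-end bound is the sum of the per-CPU and per-network worst-case response times.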