Stack-Based Priority-Ceiling Protocol
Introduction: The different definitions arise from two different motivations: to provide stack-sharing capability and to simplify the priority-ceiling protocol. They have led to two different names for the same protocol.
Motivation and Definition of Stack-Sharing Priority-Ceiling Protocol
One resource in the system that we have not discussed thus far is the run-time stack. Until now, we have assumed that each job has its own run-time stack. Sometimes, especially in systems where the number of jobs is large, it may be necessary for the jobs to share a common run-time stack in order to reduce overall memory demand. (According to Baker [Bake91], if there are 10 jobs at each of 10 priority levels, the storage saving is 90 percent.) Space in the shared stack is allocated to jobs contiguously, in last-in, first-out order. When a job J executes, its stack space is on the top of the stack; the space is freed when the job completes. When J is preempted, the preempting job is given the stack space above J's. J can resume execution only after all the jobs holding stack space above its space complete, free their stack space, and thus leave J's space on the top of the stack again.
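A minimal sketch of this last-in, first-out stack-sharing discipline is given below in Python. The names SharedStack, start, complete, and can_resume are illustrative only and do not come from any particular kernel interface.

class SharedStack:
    """Illustrative model of a run-time stack shared by all jobs."""
    def __init__(self):
        self._frames = []                     # job names, bottom to top

    def start(self, job):
        # A job that begins execution, or preempts the current job, is
        # allocated space on the top of the shared stack.
        self._frames.append(job)

    def complete(self, job):
        # Space is freed when a job completes; only the top frame can go.
        assert self._frames and self._frames[-1] == job
        self._frames.pop()

    def can_resume(self, job):
        # A preempted job may resume only when its space is on top again.
        return bool(self._frames) and self._frames[-1] == job

stack = SharedStack()
stack.start("J5")                             # J5 begins; its frame is at the bottom
stack.start("J4")                             # J4 preempts; its frame sits above J5's
print(stack.can_resume("J5"))                 # False: J4 still holds the space above
stack.complete("J4")
print(stack.can_resume("J5"))                 # True: J5's space is on top again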
Clearly, if all the jobs share a common stack, schedules such as the one in Figure 8–10 should not be allowed. You can see that if the jobs were to execute according to this schedule, J5 would resume after J4 is blocked. Since J4 is not complete, it still holds the space on the top of the stack at this time. Either the stack space of J5 would be noncontiguous after this time, which is not allowed, or J5 would not be allowed to resume, which would result in a deadlock between J5 and J4 (i.e., J5 holds Black and avoidance blocks J4, while J4 holds the top of the stack and blocks J5).
From this example, we see that to ensure deadlock-free sharing of the run-time stack among jobs, we must ensure that no job is ever blocked because it is denied some resource once its execution begins. This observation leads to the following modified version of the priority-ceiling protocol, called the stack-based, priority-ceiling protocol. It is essentially the same as the stack-based protocol designed by Baker [Bake91], which we will describe in the next section. Like Baker’s protocol, this protocol allows jobs to share the run-time stack if they never self-suspend.
In the statement of the rules of the stack-based, priority-ceiling protocol, we again use the term (current) ceiling Π̂(t) of the system, which is the highest priority ceiling of all the resources that are in use at time t. Ω is a nonexisting priority level that is lower than the lowest priority of all jobs. The current ceiling is Ω when all the resources are free.
Rules Defining Basic Stack-Based, Priority-Ceiling Protocol
0. Update of the Current Ceiling: Whenever all the resources are free, the ceiling of the system is Ω. The ceiling Π̂(t) is updated each time a resource is allocated or freed.
1. Scheduling Rule: After a job is released, it is blocked from starting execution until its assigned priority is higher than the current ceiling Π̂(t) of the system. At all times, jobs that are not blocked are scheduled on the processor in a priority-driven, preemptive manner according to their assigned priorities.
2. Allocation Rule: Whenever a job requests a resource, it is allocated the resource.
We note that according to the scheduling rule, when a job begins to execute, all the resources it will ever need during its execution are free. (Otherwise, if one of the resources it will need is not free, the ceiling of the system is equal to or higher than its priority.) This is why the allocation rule is as simple as stated above. More importantly, no job is ever blocked once its execution begins. Likewise, when a job J is preempted, all the resources the preempting job will require are free, ensuring that the preempting job can always complete so J can resume. Consequently, deadlock can never occur.
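As a rough illustration of these rules, the sketch below keeps the protocol's bookkeeping under the convention that priorities are integers with 1 the highest; Ω is modeled as positive infinity. The class and method names (StackBasedCeiling, may_start, and so on) are hypothetical, not part of any standard API.

OMEGA = float("inf")    # a nonexisting priority level lower than every job priority

class StackBasedCeiling:
    """Illustrative bookkeeping for the stack-based, priority-ceiling protocol."""
    def __init__(self, resource_ceilings):
        self.ceilings = resource_ceilings     # resource name -> priority ceiling
        self.in_use = set()                   # resources currently allocated

    def system_ceiling(self):
        # Rule 0: Π̂(t) is the highest (numerically smallest) priority ceiling
        # of the resources in use; it is Ω whenever all resources are free.
        return min((self.ceilings[r] for r in self.in_use), default=OMEGA)

    def may_start(self, priority):
        # Rule 1: a released job starts executing only when its assigned
        # priority is higher (numerically smaller) than the current ceiling.
        return priority < self.system_ceiling()

    def allocate(self, resource):
        # Rule 2: a request made by an executing job is always granted;
        # the allocation merely updates Π̂(t) for later scheduling decisions.
        self.in_use.add(resource)

    def free(self, resource):
        self.in_use.discard(resource)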
The schedule in Figure 8–17 shows how the system of jobs in Figure 8–10 would be scheduled if the stack-based, priority-ceiling protocol were used instead of the basic priority-ceiling protocol. To better illustrate the stack-based protocol, we let J2 be released at 4.8 and the execution time of the critical section of J2 be 1.2. At time 2 when J4 is released, it is blocked from starting because its priority is not higher than the ceiling of the system, which is equal to 2 at the time. This allows J5 to continue execution. For the same reason, J3 does not start execution when it is released. When J2 is released at time 4.8, it cannot start execution because the ceiling of the system is 2. At time 5, the resource held by J5 becomes free and the ceiling of the system is at Ω. Consequently, J2 starts to execute since it has the highest priority among all the jobs ready at the time. As expected, when it requests the resource Black at time 6, the resource is free. It acquires the resource and continues to execute. At time 7 when J1 is released, its priority is higher than the ceiling of the system, which is 2 at the time. (Again, this fact indicates that the resource Shaded, which it will require later, is free.) J1, therefore, preempts J2 and holds the space on the top of the stack until it completes at time 10. J2 then resumes and completes at 11. Afterwards, J3, J4, and J5 complete in the order of their priorities.
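Continuing the sketch above, the small check below reproduces the blocking decisions in this walk-through: while Black (whose priority ceiling is 2 in this example) is the only resource in use, Π̂(t) = 2, so among the released jobs only J1 has a priority higher than the ceiling and may start.

state = StackBasedCeiling({"Black": 2})       # Black's priority ceiling, per the example
state.allocate("Black")                       # e.g., J5 holds Black at time 2

for job, priority in [("J1", 1), ("J2", 2), ("J3", 3), ("J4", 4)]:
    verdict = "may start" if state.may_start(priority) else "blocked from starting"
    print(job, verdict)
# Prints that only J1 may start; J2, J3, and J4 are blocked from starting,
# which matches the blocking of J4, J3, and J2 and the preemption by J1 in
# the schedule of Figure 8–17.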
From this example, we see that the scheduling rule of the stack-based priority-ceiling protocol achieves the same objective as the more complicated priority-inheritance rule of the basic priority-ceiling protocol. (As a consequence of this rule, J5 is not preempted by J4 and J3 while it holds Black.) When we compare the schedule in Figure 8–17 with the schedule in Figure 8–10, which is produced by the basic priority-ceiling protocol, we see that the higher-priority jobs J1, J2, and J3 either complete earlier than or at the same time as when they are scheduled according to the basic priority-ceiling protocol.