Real Time Systems

Introduction To Real Time Systems

5 min read
Introduction: The correctness of many systems and devices in our modern society depends not only on the effects or results they produce but also on the time at which these results are produced. These real-time systems range from the anti-lock braking controller in automobiles to the vital-sign monitor in hospital intensive-care units. For example, when the driver of a car applies the brake, the anti-lock braking controller analyzes the environment in which the controller is embedded (car speed, road surface, direction of travel) and activates the brake with the appropriate frequency within fractions of a second. Both the result (brake activation) and the time at which the result is produced are important in ensuring the safety of the car, its driver, and passengers. Increasingly, computer hardware and software are embedded in a majority of these real-time systems to monitor and control their operations. These computer systems are called embedded systems, real-time computer systems, or simply real-time systems. Unlike conventional, non-real-time computer systems, real-time computer systems are closely coupled with the environment being monitored and controlled. Examples of real-time systems include computerized versions of the braking controller and the vital-sign monitor, the new generation of airplane and spacecraft avionics, the planned Space Station control software, high-performance network and telephone switching systems, multimedia tools, virtual reality systems, robotic controllers, battery-powered instruments, wireless communication devices (such as cellular phones and PDAs), astronomical telescopes with adaptive-optics systems, and many safety-critical industrial applications. These embedded systems must satisfy stringent timing and reliability constraints in addition to functional correctness requirements. Figure 1.1 shows a model of a real-time system. A real-time system has a decision component that interacts with the external environment (in which the decision

Other Types Of Dependencies

7 min read
OTHER TYPES OF DEPENDENCIES Like non-real-time applications, real-time applications sometimes contain redundant modules, carry out heuristic searches, use multiple versions, execute some jobs conditionally, and so forth. We add other extensions to the classical precedence graphs in order to model such jobs and dependencies. These extensions include temporal distance, OR jobs, conditional branches, and pipe (or pipeline). Temporal Dependency: Some jobs may be constrained to complete within a certain amount of time relative to one another. We call the difference in the completion times of two jobs the temporal distance [HaLH] between them. Jobs are said to have a temporal distance constraint if their temporal distance must be no more than some finite value. Jobs with temporal distance constraints may or may not have deadlines. As an example, we consider the display of video frames and the accompanying audio when the video is that of a person speaking. To have lip synchronization, the time between the display of each frame and the generation of the corresponding audio segment must be no more than 160 msec [StNa]. Another example is the visual and audio displays of a passive sonar system [MoMW]. The synthesized audio signal and the accompanying visual display must be presented to the operator no more than 100 msec apart. These timing requirements can be stated more naturally in terms of temporal distance constraints than in terms of deadlines of jobs.
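To make the notion of temporal distance concrete, here is a minimal sketch that checks whether a pair of completion times satisfies a temporal distance constraint such as the 160 msec lip-synchronization bound quoted above. The function and the completion times are illustrative, not taken from a real system.

```python
# Sketch: checking temporal distance constraints between completed jobs.
# All names and numbers here are illustrative.

def temporal_distance_ok(completion_a, completion_b, max_distance):
    """True if the two completion times are no more than max_distance apart."""
    return abs(completion_a - completion_b) <= max_distance

# Video frame displayed at t = 1000 msec, audio segment generated at t = 1120 msec.
video_done = 1000.0
audio_done = 1120.0

# Lip synchronization requires a temporal distance of at most 160 msec.
print(temporal_distance_ok(video_done, audio_done, 160.0))   # True
# The passive sonar displays must be within 100 msec, which this pair violates.
print(temporal_distance_ok(video_done, audio_done, 100.0))   # False
```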

Other Applications Of Real Time Systems

4 min read
Only frames 1 + αk, for k = 0, 1, 2, . . . are encoded independently of other frames, where α is an application-specified integer constant. These frames are called I-frames (i.e., intra-coded frames). The coder treats each I-frame as a still image, and the decoder can decompress each compressed I-frame independently of other frames. Consequently, I-frames are points for random access of the video. The smaller the constant α, the more randomly accessible the video and the poorer the compression ratio. A good compromise is α = 9. The frames between consecutive I-frames are called P- and B-frames. When α is equal to 9, the sequence of frames produced by the motion estimation step is I, B, B, P, B, B, P, B, B, I, B, B, P, . . . . For every k ≥ 0, frame 1 + 9k + 3 is a P-frame (i.e., a predictive-coded frame). The coder obtains a P-frame from the previous I-frame (or P-frame) by predicting how the image carried by the I-frame changes in the time interval between the times of these frames. Specifically, if a major block in the P-frame closely resembles a major block in the previous I-frame, then the coder represents the P-frame major block by six 8×8 pixel blocks that give the differences between the six P-frame pixel blocks and the corresponding pixel blocks of the best matching (i.e., most closely resembling) major block in the previous I-frame. In addition, the coder generates a motion vector based on which the decoder can identify the best matching I-frame major block. Such a P-frame major block is said to be predictively coded. On the other hand, some P-frame major blocks may be images of newly visible objects and, hence, cannot be obtained from any major block in the previous I-frame. The coder represents them in the same way as I-frame major blocks. A B-frame is a bidirectionally predicted frame: It is predicted from both the previous I-frame (or P-frame) and the subsequent P-frame (or I-frame). One way is to represent every B-frame major block by the differences between the values of its pixel blocks and the corresponding pixel blocks of the best matching major block in either the previous I-frame or the subsequent P-frame. Alternatively, for each B-frame major block, an interpolation of the best matching major blocks in the I-frame and P-frame is first computed. The B-frame major block is represented by the difference between it and this interpolation. Again, the coder generates the motion vectors that the decoder will need to identify the best matching I-frame and P-frame major blocks. Whereas some P-frame major blocks are encoded independently, none of the B-frame major blocks are. Discrete Cosine Transform and Encoding. In the second step, a cosine transform is performed on each of the 8×8 pixel blocks produced by the coder after motion estimation. We let x(i, j), for i, j = 1, 2, . . . , 8, denote the elements of an 8×8 transform matrix obtained from transforming the original matrix that gives the 8×8 values of a pixel block. The transform matrix usually has more zeros than the original matrix. [In the extreme when all the entries of the original matrix are equal, only x(1, 1) is nonzero.] Moreover, if the entries x(i, j) are ordered in nondecreasing order of i + j, zero entries tend to be adjacent, forming sequences of zeros, and adjacent entries tend to have similar values.
By quantizing the x(i, j )’s to create more zeros, encoding the entries in the transform matrix as 2-tuples (run length, value), and using a combination of variable-length and fixed-length codes to further reduce the bit rate, significant compression is achieved.
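The following is a minimal sketch of the two ideas in this step: an 8×8 two-dimensional DCT and a (run length, value) encoding of the quantized coefficients. It uses the plain textbook DCT-II formula and a single uniform quantization step rather than MPEG's actual tables; the step size QUANT and the scan order by i + j are illustrative simplifications.

```python
# Sketch: 8x8 block transform coding in the spirit described above.
# The DCT is the standard 2-D DCT-II; QUANT and the i + j ordering are
# illustrative choices, not MPEG's actual quantization tables or zigzag scan.
import math

N, QUANT = 8, 16.0

def dct_8x8(block):
    """Plain O(N^4) 2-D DCT of an 8x8 block (list of lists of pixel values)."""
    def c(k):
        return 1.0 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                    for i in range(N) for j in range(N))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def run_length_encode(coeffs):
    """Quantize, scan entries in nondecreasing order of i + j, and emit
    (run length of zeros, nonzero value) pairs."""
    order = sorted(((i, j) for i in range(N) for j in range(N)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    pairs, run = [], 0
    for i, j in order:
        q = int(round(coeffs[i][j] / QUANT))
        if q == 0:
            run += 1
        else:
            pairs.append((run, q))
            run = 0
    return pairs

# A flat block compresses to a single (0, DC) pair: only the first coefficient
# is nonzero, mirroring the "all entries equal" extreme mentioned above.
flat = [[128] * N for _ in range(N)]
print(run_length_encode(dct_8x8(flat)))   # [(0, 64)]
```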

Soft Real Time Systems

4 min read
SOFT REAL-TIME SYSTEMS A system in which jobs have soft deadlines is a soft real-time system. The developer of a soft real-time system is rarely required to prove rigorously that the system surely meets its real-time performance objective. Examples of such systems include on-line transaction systems and telephone switches, as well as electronic games. The less rigorous validation required of the system and, often, more relaxed timing constraints allow the developer to consider other performance metrics equally seriously. Meeting all deadlines is not the only consideration; sometimes it is not even the primary consideration. An occasional missed deadline or aborted execution is usually considered tolerable; it may be more important for such a system to have a small average response time and high throughput. A system may have critical timing requirements but is nevertheless considered to be a soft real-time system. An example is a stock price quotation system. It should update the price of each stock each time the price changes. Here, a late update may be highly undesirable, because the usefulness of a late price quotation decreases rapidly with time. However, in a volatile market when prices fluctuate at unusually high rates, we expect that the system cannot keep up with every price change but does its best according to some criteria. Occasional late or missed updates are tolerated as a trade-off for other factors, such as cost and availability of the system and the number of users the system can serve. The timing requirements of soft real-time systems are often specified in probabilistic terms. Take a telephone network for example. In response to our dialing a telephone number, a sequence of jobs executes in turn, each routing the control signal from one switch to another in order to set up a connection through the network on our behalf. We expect that our call will be put through in a short time. To ensure that this expectation is met most of the time by the network, a timing constraint may be imposed on this sequence of jobs as a design objective (e.g., the sequence must complete in no more than 10 seconds for 95 percent of the time and in no more than 20 seconds for 99.95 percent of the time). The users are usually satisfied if after extensive simulation and trial use, the system indeed appears to meet this requirement.
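The probabilistic design objective quoted above (95 percent within 10 seconds, 99.95 percent within 20 seconds) can be checked against measured or simulated response times. The sketch below, using made-up exponentially distributed call-setup times, shows one way to express such a check; the distribution and thresholds are illustrative assumptions.

```python
# Sketch: checking a probabilistic timing requirement against observed
# call-setup times. The data and the distribution are illustrative only.
import random

def fraction_within(samples, bound):
    return sum(1 for s in samples if s <= bound) / len(samples)

def meets_soft_requirement(samples):
    # Design objective: <= 10 s for 95% of calls and <= 20 s for 99.95% of calls.
    return (fraction_within(samples, 10.0) >= 0.95 and
            fraction_within(samples, 20.0) >= 0.9995)

random.seed(1)
setup_times = [random.expovariate(1 / 2.0) for _ in range(100_000)]  # mean 2 s
print(meets_soft_requirement(setup_times))   # typically True for this distribution
```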

Aperiodic And Sporadic Tasks

3 min read
Aperiodic and Sporadic Tasks Earlier, we pointed out that a real-time system is invariably required to respond to external events, and to respond, it executes aperiodic or sporadic jobs whose release times are not known a priori. An operator’s adjustment of the sensitivity setting of a radar surveillance system is an example. The radar system must continue to operate, but in addition, it also must respond to the operator’s command. Similarly, when a pilot changes the autopilot from cruise mode to standby mode, the system must respond by reconfiguring itself, while continuing to execute the control tasks that fly the airplane. A command and control system must process sporadic data messages, in addition to the continuous voice and video traffic. In the periodic task model, the workload generated in response to these unexpected events is captured by aperiodic and sporadic tasks. Each aperiodic or sporadic task is a stream of aperiodic or sporadic jobs, respectively. The interarrival times between consecutive jobs in such a task may vary widely and, in particular, can be arbitrarily small. The jobs in each task model the work done by the system in response to events of the same type. For example, the jobs that execute to change the detection threshold of the radar system are in one task; the jobs that change the operation mode of the autopilot are in one task; and the jobs that process sporadic data messages are in one task, and so on. Specifically, the jobs in each aperiodic task are similar in the sense that they have the same statistical behavior and the same timing requirement. Their interarrival times are identically distributed random variables with some probability distribution A(x). Similarly, the execution times of jobs in each aperiodic (or sporadic) task are identically distributed random variables, each distributed according to the probability distribution B(x). These assumptions mean that the statistical behavior of the system and its environment do not change with time, that is, the system is stationary. The assumption that the system is stationary is usually valid in time intervals of length on the order of the hyperperiod H, in particular, within any hyperperiod of the periodic tasks during which no periodic tasks are added or deleted.
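The assumption that interarrival and execution times are identically distributed random variables can be made concrete with a small generator. In the sketch below, exponential interarrival times and uniform execution times are arbitrary stand-ins for A(x) and B(x); the model itself only requires that each be identically distributed.

```python
# Sketch: generating the jobs of one aperiodic task as a stream of
# (release time, execution time) pairs. The particular distributions are
# illustrative placeholders for A(x) and B(x).
import random

def aperiodic_task(mean_interarrival, exec_range, horizon, seed=0):
    rng = random.Random(seed)
    t, jobs = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)   # sample from A(x)
        if t > horizon:
            return jobs
        e = rng.uniform(*exec_range)                    # sample from B(x)
        jobs.append((round(t, 3), round(e, 3)))

# Jobs responding to one type of event over a 100 msec window.
print(aperiodic_task(mean_interarrival=10.0, exec_range=(1.0, 3.0), horizon=100.0))
```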

Precedence Constraints And Data Dependency

5 min read
PRECEDENCE CONSTRAINTS AND DATA DEPENDENCY: Data and control dependencies among jobs may constrain the order in which they can execute. In classical scheduling theory, the jobs are said to have precedence constraints if they are constrained to execute in some order. Otherwise, if the jobs can execute in any order, they are said to be independent. For example, in a radar surveillance system, the signal-processing task is the producer of track records, while the tracker task is the consumer. In particular, each tracker job processes the track records generated by a signal-processing job. The designer may choose to synchronize the tasks so that the execution of the kth tracker job does not begin until the kth signal-processing job completes. The tracker job is precedence constrained. In general, a consumer job has this constraint whenever it must synchronize with the corresponding producer job(s) and wait until the latter completes in order to execute. As another example, we consider queries to an information server. Suppose that before each query is processed and the requested information retrieved, its authorization to access the requested information is first checked. The retrieval job cannot begin execution until the authentication job completes. The communication job that forwards the information to the requester cannot begin until the retrieval job completes. Similarly, in a communication system, the jobs that generate an acknowledgement of a message and transmit the acknowledgement message cannot begin until the job that receives and processes the message completes.
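Precedence constraints of this kind are naturally represented as a directed acyclic graph. The sketch below encodes the query example above (authentication, then retrieval, then communication) and derives an execution order consistent with the constraints; the job names are mine, chosen to mirror the example.

```python
# Sketch: representing precedence constraints as a DAG and computing an
# execution order that respects them (a topological sort). Job names are
# illustrative, following the information-server example above.
from graphlib import TopologicalSorter

precedence = {
    "retrieval": {"authentication"},      # retrieval waits for authentication
    "communication": {"retrieval"},       # forwarding waits for retrieval
    "authentication": set(),
}

order = list(TopologicalSorter(precedence).static_order())
print(order)   # ['authentication', 'retrieval', 'communication']
```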

Hard And Soft Timing Constraints

7 min read
HARD AND SOFT TIMING CONSTRAINTS: It is common to divide timing constraints into two types: hard and soft. There are many definitions of hard and soft real-time constraints. Before stating the definition used in this book, let us first look at three frequently encountered definitions so you will be aware of them. They are based on the functional criticality of jobs, usefulness of late results, and deterministic or probabilistic nature of the constraints. Common Definitions: According to a commonly used definition, a timing constraint or deadline is hard if the failure to meet it is considered to be a fatal fault. A hard deadline is imposed on a job because a late result produced by the job after the deadline may have disastrous consequences. (As examples, a late command to stop a train may cause a collision, and a bomb dropped too late may hit a civilian population instead of the intended military target.) In contrast, the late completion of a job that has a soft deadline is undesirable. However, a few misses of soft deadlines do no serious harm; only the system’s overall performance becomes poorer and poorer when more and more jobs with soft deadlines complete late. This definition of hard and soft deadlines invariably leads to the question of whether the consequence of a missed deadline is indeed serious enough. The question of whether a timing constraint is hard or soft degenerates to that of how serious is serious. In real-time systems literature, the distinction between hard and soft timing constraints is sometimes stated quantitatively in terms of the usefulness of results (and therefore the overall system performance) as functions of the tardinesses of jobs. The tardiness of a job measures how late it completes relative to its deadline. Its tardiness is zero if the job completes at or before its deadline; otherwise, if the job is late, its tardiness is equal to the difference between its completion time (i.e., the time instant at which it completes execution) and its deadline. The usefulness of a result produced by a soft real-time job (i.e., a job with a soft deadline) decreases gradually as the tardiness of the job increases, but the usefulness of a result produced by a hard real-time job (i.e., a job with a hard deadline) falls off abruptly and may even become negative when the tardiness of the job becomes larger than zero. The deadline of a job is softer if the usefulness of its result decreases at a slower rate. By this means, we can define a spectrum of hard/soft timing constraints. This quantitative measure of hardness and softness of deadlines is sometimes useful. It is certainly more appealing to computer scientists and engineers who have been trained not to rely on handwaving, qualitative measures. However, there is often no natural choice of usefulness functions. When choices are made, it is difficult to validate that the choices are sound and that different measures of the overall system performance as functions of tardinesses indeed behave as specified by the usefulness functions. Consequently, this kind of quantitative measure is not as rigorous as it appears to be.
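The contrast can be illustrated with two usefulness functions of tardiness. The shapes below are illustrative only (an abrupt drop to a negative value for a hard deadline, a gradual linear decay for a soft one); they are not functions given in the text.

```python
# Sketch: tardiness and two illustrative usefulness functions.
def tardiness(completion_time, deadline):
    """Zero if the job completes by its deadline, else the amount it is late."""
    return max(0.0, completion_time - deadline)

def usefulness_hard(t):
    # Falls off abruptly: full value on time, negative (a loss) when late.
    return 1.0 if t == 0.0 else -1.0

def usefulness_soft(t, decay=0.1):
    # Decreases gradually with tardiness; a softer deadline decays more slowly.
    return max(0.0, 1.0 - decay * t)

late_by = tardiness(completion_time=107.0, deadline=100.0)   # 7.0
print(usefulness_hard(late_by), usefulness_soft(late_by))    # -1.0 and about 0.3
```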

High Level Control

3 min read
Real-Time Command and Control: The controller at the highest level of a control hierarchy is a command and control system. An Air Traffic Control (ATC) system is an excellent example. Figure 1–5 shows a possible architecture. The ATC system monitors the aircraft in its coverage area and the environment (e.g., weather conditions) and generates and presents the information needed by the operators (i.e., the air traffic controllers). Outputs from the ATC system include the assigned arrival times to metering fixes for individual aircraft. As stated earlier, these outputs are reference inputs to on-board flight management systems. Thus, the ATC system indirectly controls the embedded components in low levels of the control hierarchy. In addition, the ATC system provides voice and telemetry links to on-board avionics. Thus it supports the communication among the operators at both levels (i.e., the pilots and air traffic controllers). The ATC system gathers information on the “state” of each aircraft via one or more active radars. Such a radar interrogates each aircraft periodically. When interrogated, an aircraft responds by sending to the ATC system its “state variables”: identifier, position, altitude, heading, and so on. (In Figure 1–5, these variables are referred to collectively as a track record, and the current trajectory of the aircraft is a track.) The ATC system processes messages from aircraft and stores the state information thus obtained in a database. This information is picked up and processed by display processors. At the same time, a surveillance system continuously analyzes the scenario and alerts the operators whenever it detects any potential hazard (e.g., a possible collision). Again, the rates at which human interfaces (e.g., keyboards and displays) operate must be at least 10 Hz. The other response times can be considerably larger. For example, the allowed response time from radar inputs is one to two seconds, and the period of weather updates is on the order of ten seconds.

Time

2 min read
Introduction: Time is an essential concept in real-time systems, and keeping time using accurate clocks is thus required to ensure the correct operations of these systems. The master source for time is Paris’s International Atomic Time (TAI), an average of several laboratory atomic clocks in the world. Since the earth’s rotational rate slows by a few milliseconds each day, another master time source called the Universal Coordinated Time (UTC) performs leap corrections to the time provided by TAI while maintaining TAI’s accuracy, making the time length of every natural solar day constant. UTC is used as the world time, and UTC time signals are sent from the National Institute of Standards and Technology (NIST) radio station, WWVB, in Fort Collins, Colorado, and other UTC radio stations to specialized receivers. Selected radios, receiver-clocks, some phone-answering systems, and even some VCRs have the capability to receive these UTC signals to maintain accurate clocks. Some computers have receivers to receive these UTC signals and thus the time provided by their internal clocks is as accurate as UTC. Note that depending on the location of the receiver-clock, there is a delay in receiving the UTC signal. For instance, it takes around 5 ms for WWVB’s UTC signal to get from Fort Collins to a receiver-clock in my Real-Time Systems Laboratory in Houston, Texas. For computers whose time is kept by quartz-based computer clocks, we must ensure that these clocks are periodically synchronized such that they maintain a bounded drift relative to UTC. Software or logical clocks can be derived from computer clocks [Lamport, 1978]. In this text, when we refer to wall clock or absolute time, we refer to the standard time provided by a bounded-drift computer clock or UTC. Thus there is a mapping Clock: real time → standard clock time.

Testing

3 min read
TESTING Testing is perhaps the oldest technique for detecting errors or problems in implemented software, hardware, or non-computer systems. It consists of executing or operating (in the case of a non-computer system) the system to be tested using a finite set of inputs and then checking to see if the corresponding outputs or behavior are correct with respect to the specifications. To test a real-time system, the values as well as the timing of the inputs are important. Similarly, both the output values and the time at which they are produced must be checked for correctness. Many approaches have been developed for testing software, hardware, and non-computer systems. The simplest technique is of course to perform an exhaustive test run of the system with every possible input and then to check if the corresponding output is correct. This approach is not practical except for small systems with limited input space. For larger systems, the time needed to test is prohibitively long. For systems with an infinite number of possible inputs, this approach is of course not viable. Since relatively little training is required on the part of the testing personnel, testing has been and will continue to be used extensively in industry. There are three common techniques for software testing in the current state of the practice: functional testing, structural testing, and code reading. Functional testing uses a “black box” approach in which the programmer creates test data from the program specification using one or a combination of techniques such as boundary value analysis and equivalence partitioning. Then the program is executed using these test data as input, and the corresponding program behavior and output are compared with those described in the program specification. If the program behavior or output deviates from the specification, the programmer attempts to identify the erroneous part of the program and correct it.

Verification Of Systems

2 min read
VERIFICATION The previous two techniques are good for revealing errors in the simulated or actual system but usually cannot guarantee that the system satisfies a set of requirements. To apply formal verification techniques to a real-time system, we must first specify the system requirements and then the system to be implemented using an unambiguous specification language. Since the applications expert (programmer or system designer) is usually not knowledgeable in formal methods, a formal methods expert collaborates with the applications expert to write the requirements and system specification. Both experts work closely to ensure that the specifications reflect the real requirements and the system’s behavior. Once these specifications are written, the formal methods expert can verify whether the system specification satisfies the specified requirements using his/her favorite formal verification methods and tools. These formal methods and tools can show the satisfaction of all requirements or the failure to satisfy certain requirements. They may also pinpoint areas for further improvement in terms of efficiency. These results are communicated to the applications expert who can then revise the system specification or even the system requirements. The formal specifications are next revised to reflect these changes and can be analyzed again by the formal methods expert. These steps are repeated until both experts are happy with the fact that the specified system satisfies the specified requirements.

Run Time Monitoring

3 min read
RUN-TIME MONITORING Despite the use of the best state-of-the-art techniques for static or pre-run-time analysis and verification of a real-time system, there will often be system behavior that was not anticipated. This unexpected behavior may be caused by events and actions not modeled by the static analysis tools or may be the result of making simplified assumptions about the real-time system. Therefore, it is necessary to monitor the execution of the real-time system at run-time and to make appropriate adjustments in response to a monitored behavior that violates specified safety and progress constraints. Even if the real-time system meets the specified safety and progress constraints at run-time, monitoring may provide information that can improve the performance and reliability of the monitored system. Here, the monitored real-time system is the target system and its components, such as programs, are called target programs. The monitoring system is the system used to monitor and record the behavior of the target system. It consists of instrumentation programs, instrumentation hardware, and other monitoring modules. Basically, the monitoring system records the behavior of interest of the target system and produces event traces. These event traces may be used on-line as a feedback to the real-time controller or may be analyzed off-line to see if the target system needs to be fine-tuned. There are two broad types of monitoring techniques: intrusive and nonintrusive. Intrusive monitoring uses the resources of the target system to record its behavior and thus may alter the actual behavior of the target system. A simple example is the insertion of print statements in a target program to display the values of the program variables. Another example is the extra statements inserted in the programs of computing nodes to record their states and the exchanges of these state variables in taking a snapshot of a distributed real-time system. A non-computer example is the addition of speed and impact-force sensing instruments in an automobile to record its performance. In all these monitoring cases, the monitoring system’s use of the target system’s resources (processor, memory, electricity, fuel) may change the target system’s behavior. The degree of intrusion varies in different monitoring systems, and different target systems may accept monitoring systems with different degrees of intrusion or interference.

Periodic Task Model

6 min read
PERIODIC TASK MODEL: The periodic task model is a well-known deterministic workload model [LiLa]. With its various extensions, the model characterizes accurately many traditional hard real-time applications, such as digital control, real-time monitoring, and constant bit-rate voice/video transmission. Many scheduling algorithms based on this model have good performance and well-understood behavior. There are now methods and tools to support the design, analysis, and validation of real-time systems that can be accurately characterized by the model. For these reasons, we want to know it well and be able to use it proficiently. Periods, Execution Times, and Phases of Periodic Tasks: In the periodic task model, each computation or data transmission that is executed repeatedly at regular or semiregular time intervals in order to provide a function of the system on a continuing basis is modeled as a periodic task. Specifically, each periodic task, denoted by Ti , is a sequence of jobs. The period pi of the periodic task Ti is the minimum length of all time intervals between release times of consecutive jobs in Ti. Its execution time is the maximum execution time of all the jobs in it. With a slight abuse of the notation, we use ei to denote the execution time of the periodic task Ti , as well as that of all the jobs in it. At all times, the period and execution time of every periodic task in the system are known. This definition of periodic tasks differs from the one often found in real-time systems literature. In many published works, the term periodic task refers to a task that is truly periodic, that is, interrelease times of all jobs in a periodic task are equal to its period. This definition has led to the common misconception that scheduling and validation algorithms based on the periodic task model are applicable only when every periodic task is truly periodic.
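A minimal encoding of the task parameters (phase, period pi, execution time ei) makes the model concrete. The sketch below also computes two derived quantities that schedulability analysis relies on, total utilization and the hyperperiod; the task set itself is made up for illustration.

```python
# Sketch: periodic tasks T_i with period p_i and (maximum) execution time e_i.
# The task set is illustrative. Utilization u_i = e_i / p_i; the hyperperiod H
# is the least common multiple of the periods.
from dataclasses import dataclass
from math import lcm

@dataclass
class PeriodicTask:
    name: str
    period: int       # p_i: minimum length of intervals between releases
    exec_time: float  # e_i: maximum execution time of its jobs
    phase: int = 0    # release time of the first job

tasks = [PeriodicTask("T1", 4, 1.0),
         PeriodicTask("T2", 5, 1.5),
         PeriodicTask("T3", 20, 3.0)]

total_utilization = sum(t.exec_time / t.period for t in tasks)
hyperperiod = lcm(*(t.period for t in tasks))
print(total_utilization, hyperperiod)   # about 0.7 and H = 20
```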

Data Dependency

2 min read
Data Dependency As an example, data dependency cannot be captured by a precedence graph. In many real-time systems, jobs communicate via shared data. Oftentimes, the designer chooses not to synchronize producer and consumer jobs. Rather, each producer places the data generated by it in a shared address space to be used by the consumer at any time. In this case, the classical precedence graph should show that the producer and consumer are independent because they are not explicitly constrained to execute in turn. As an example, in an avionics system, the navigation job updates the location of the airplane periodically. These data are placed in a shared space. Whenever the flight management job needs navigation data, it reads the most current data produced by the navigation job. There is no precedence constraint between the navigation job and the flight management job. In a task graph, data dependencies among jobs are represented explicitly by data-dependency edges among jobs. There is a data-dependency edge from a vertex Ji to vertex Jk in the task graph if the job Jk consumes data generated by Ji or the job Ji sends messages to Jk . A parameter of an edge from Ji to Jk is the volume of data from Ji to Jk . In Chapter 9, we will describe algorithms that partition an application system into modules and assign different modules to different processors for execution. Since the cost of communication between jobs on the same processor is usually much lower than that between jobs on different processors, one of the key factors we need to consider in choosing a partition is the amount of data exchanged among jobs. For this purpose, we will make use of the information provided by the data volume parameter.
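Data-dependency edges annotated with a data-volume parameter can be kept in a small adjacency structure; a partitioning algorithm of the kind referred to above would read the volumes to estimate interprocessor communication cost. The jobs, volumes, and partition in this sketch are illustrative.

```python
# Sketch: a task graph whose data-dependency edges carry a data-volume
# parameter. Job names, volumes, and the partition are illustrative only.
data_edges = {
    ("navigation", "flight_management"): 256,     # bytes per update
    ("signal_processing", "tracker"): 1024,
}

def cross_partition_volume(edges, assignment):
    """Total volume on edges whose endpoints are on different processors."""
    return sum(vol for (src, dst), vol in edges.items()
               if assignment[src] != assignment[dst])

assignment = {"navigation": 0, "flight_management": 0,
              "signal_processing": 0, "tracker": 1}
print(cross_partition_volume(data_edges, assignment))   # 1024
```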

Typical Real-time Applications

4 min read
Introduction: In this section we describe several representative classes of real-time applications: digital control, optimal control, command and control, signal processing, tracking, real-time databases, and multimedia. DIGITAL CONTROL: Many real-time systems are embedded in sensors and actuators and function as digital controllers. Figure 1–1 shows such a system. The term plant in the block diagram refers to a controlled system, for example, an engine, a brake, an aircraft, a patient. The state of the plant is monitored by sensors and can be changed by actuators. The real-time (computing) system estimates from the sensor readings the current state of the plant and computes a control output based on the difference between the current state and the desired state (called reference input in the figure). We call this computation the control-law computation of the controller. The output thus generated activates the actuators, which bring the plant closer to the desired state. Sampled Data Systems: Long before digital computers became cost-effective and widely used, analog (i.e., continuous-time and continuous-state) controllers were in use, and their principles were well established. Consequently, a common approach to designing a digital controller is to start with an analog controller that has the desired behavior. The analog version is then transformed into a digital (i.e., discrete-time and discrete-state) version. The resultant controller is a sampled data system. It typically samples (i.e., reads) and digitizes the analog sensor readings periodically and carries out its control-law computation every period. The sequence of digital outputs thus produced is then converted back to an analog form needed to activate the actuators.
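The periodic sample-compute-actuate cycle of a sampled data system can be sketched as a loop. Everything in the sketch below (the plant model, the proportional control law, the number of periods) is a placeholder; a real controller would be driven by a timer interrupt rather than a Python loop.

```python
# Sketch: repeated periods of a sampled data controller. The plant model,
# gain, and reference value are illustrative.
def read_sensor(plant_state):
    return plant_state                         # stands in for A/D conversion

def control_law(reference, measurement, gain=0.5):
    return gain * (reference - measurement)    # simple proportional control

def actuate(plant_state, control_output):
    return plant_state + control_output        # stands in for D/A + plant response

plant, reference = 0.0, 10.0
for k in range(20):                            # 20 sampling periods
    y = read_sensor(plant)                     # sample and digitize
    u = control_law(reference, y)              # control-law computation
    plant = actuate(plant, u)                  # drive the actuator
print(round(plant, 3))                         # approaches the reference (10.0)
```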

Release Times, Deadlines And Timing Constraints

3 min read
RELEASE TIMES, DEADLINES, AND TIMING CONSTRAINTS The release time of a job is the instant of time at which the job becomes available for execution. The job can be scheduled and executed at any time at or after its release time whenever its data and control dependency conditions are met. As an example, we consider a system which monitors and controls several furnaces. After it is initialized and starts execution (say at time 0), the system samples and reads each temperature sensor every 100 msec and places the sampled readings in memory. It also computes the control law of each furnace every 100 msec in order to process the temperature readings and determine flow rates of fuel, air, and coolant. Suppose that the system begins the first control-law computation at time 20 msec. The fact that the control law is computed periodically can be stated in terms of release times of the control-law computation jobs J0, J1, . . . , Jk, . . . . The release time of the job Jk in this job stream is 20 + k × 100 msec, for k = 0, 1, . . . . We say that jobs have no release time if all the jobs are released when the system begins execution. The deadline of a job is the instant of time by which its execution is required to be completed. Suppose that in the previous example, each control-law computation job must complete by the release time of the subsequent job. Then, their deadlines are 120 msec, 220 msec, and so on, respectively. Alternatively, if the control-law computation jobs must complete sooner, their deadlines may be 70 msec, 170 msec, and so on. We say that a job has no deadline if its deadline is at infinity. In this example, as in many others, it is more natural to state the timing requirement of a job in terms of its response time, that is, the length of time from the release time of the job to the instant when it completes. We call the maximum allowable response time of a job its relative deadline. Hence the relative deadline of every control-law computation job mentioned above is 100 or 50 msec. The deadline of a job, sometimes called its absolute deadline, is equal to its release time plus its relative deadline.
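The release times and deadlines in the furnace example follow a simple arithmetic pattern (release time 20 + k × 100 msec, relative deadline 100 or 50 msec); the short sketch below just reproduces that arithmetic.

```python
# Sketch: release times, relative deadlines, and absolute deadlines of the
# control-law computation jobs J0, J1, ... in the furnace example above.
def job_timing(k, first_release=20, period=100, relative_deadline=100):
    release = first_release + k * period
    return release, release + relative_deadline   # (release time, absolute deadline)

for k in range(3):
    print(k, job_timing(k))                        # deadlines 120, 220, 320 msec
for k in range(3):
    print(k, job_timing(k, relative_deadline=50))  # deadlines 70, 170, 270 msec
```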

Execution Time

5 min read
Execution Time: Another temporal parameter of a job, Ji, is its execution time, ei. It is the amount of time required to complete the execution of Ji when it executes alone and has all the resources it requires. Hence, the value of this parameter depends mainly on the complexity of the job and the speed of the processor used to execute the job, not on how the job is scheduled. The actual amount of time required by a job to complete its execution may vary for many reasons. As examples, a computation may contain conditional branches, and these conditional branches may take different amounts of time to complete. The branches taken during the execution of the computation job depend on the input data. If the underlying system has performance enhancing features (e.g., cache memory and pipeline), the amount of time a computation takes to complete may vary each time it executes even when it has no conditional branches. For these reasons, the actual execution time of a computation job is unknown until it completes. Similarly, the actual amount of time to transmit each frame of an MPEG-compressed video is different from frame to frame because the numbers of bits used to encode different frames are different. The actual execution time of the job modeling the transmission of a frame is unknown a priori. What can be determined a priori through analysis and measurement are the maximum and minimum amounts of time required to complete each job. In other words, we know that the execution time ei of the job Ji is in the range [ei−, ei+], where ei− and ei+ are the minimum execution time and the maximum execution time of Ji, respectively. We usually assume that we know ei− and ei+ of every hard real-time job Ji but the actual execution time of the job is unknown.

Hard Real Time Systems

5 min read
HARD REAL-TIME SYSTEMS: The requirement that all hard timing constraints must be validated invariably places many restrictions on the design and implementation of hard real-time applications as well as on the architectures of hardware and system software used to support them. To justify this requirement, this section examines briefly several examples of hard real-time systems and discusses why hard timing constraints are imposed and why users require that their satisfaction be validated and guaranteed. Some Reasons for Requiring Timing Guarantees: Many embedded systems are hard real-time systems. Deadlines of jobs in an embedded system are typically derived from the required responsiveness of the sensors and actuators monitored and controlled by it. As an example, we consider an automatically controlled train. It cannot stop instantaneously. When the signal is red (stop), its braking action must be activated a certain distance away from the signal post at which the train must stop. This braking distance depends on the speed of the train and the safe value of deceleration. From the speed and safe deceleration of the train, the controller can compute the time for the train to travel the braking distance. This time in turn imposes a constraint on the response time of the jobs which sense and process the stop signal and activate the brake. No one would question that this timing constraint should be hard and that its satisfaction must be guaranteed. Similarly, each control-law computation job of a flight controller must be completed in time so that its command can be issued in time. Otherwise, the plane controlled by it may become oscillatory (and the ride bumpy) or even unstable and uncontrollable. For this reason, we want the timely completion of all control-law computations guaranteed.
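The train example turns a braking-distance calculation into a response-time bound: under constant deceleration a, the braking distance from speed v is v²/(2a). The speed, deceleration, and detection distance below are illustrative numbers, not values from the text.

```python
# Sketch: deriving a response-time constraint from braking physics.
# Constant deceleration a: braking distance = v^2 / (2a).
# The speed, safe deceleration, and detection distance are illustrative.
def braking_distance(speed_mps, decel_mps2):
    return speed_mps ** 2 / (2.0 * decel_mps2)

speed, safe_decel = 30.0, 1.0                    # 30 m/s, 1 m/s^2
d_brake = braking_distance(speed, safe_decel)    # 450 m

# Braking must be commanded while the train is still at least d_brake away.
# If the stop signal is detected 600 m out, the jobs that sense and process
# the signal and activate the brake must complete before the train covers
# the remaining 600 - 450 = 150 m.
deadline_s = (600.0 - d_brake) / speed
print(d_brake, deadline_s)                       # 450.0 m, 5.0 s
```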

Definition Of Ceiling-priority Protocol

4 min read
Definition of Ceiling-Priority Protocol As we will see shortly, the worst-case performance of the stack-based and the basic priority-ceiling protocols are the same. The former is considerably simpler and has a lower context-switching overhead. This is a good reason for using the stack-based version if jobs never self-suspend, even when the jobs have their own stacks. Indeed, the stack-based version is the protocol supported by the Real-Time Systems Annex of Ada95 [Cohe96]. In Section 8.3, we mentioned that it is called the ceiling-priority protocol. The following rules define it more formally.
Rules Defining the Ceiling-Priority Protocol
1. Scheduling Rule:
(a) Every job executes at its assigned priority when it does not hold any resource. Jobs of the same priority are scheduled on a FIFO basis.
(b) The priority of each job holding any resource is equal to the highest of the priority ceilings of all resources held by the job.
2. Allocation Rule: Whenever a job requests a resource, it is allocated the resource.
We note that when jobs never self-suspend, the stack-based priority-ceiling and ceiling-priority protocols are the same. By saying these protocols are the same we mean that they will produce the same schedule for all jobs. (If you are not convinced, you may want to apply this set of rules to the example given by Figure 8–8(a). The schedule you will produce should be the same as the one in Figure 8–17.) The two sets of rules give two implementations of the same protocol.
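The two rules above translate almost directly into code. The sketch below computes the priority at which a job executes under the ceiling-priority protocol, assuming the resource requirements (and hence the priority ceilings) are known a priori; smaller numbers mean higher priorities, and the job and resource names are illustrative.

```python
# Sketch of the ceiling-priority protocol rules above. Smaller number = higher
# priority. Ceilings are computed a priori from the known resource requirements;
# every request is granted (allocation rule), and a job holding resources runs
# at the highest of the priority ceilings of the resources it holds.
def priority_ceilings(requirements, assigned_priority):
    """requirements: job -> set of resources it may use."""
    ceilings = {}
    for job, resources in requirements.items():
        for r in resources:
            ceilings[r] = min(ceilings.get(r, float("inf")), assigned_priority[job])
    return ceilings

def current_priority(job, held, assigned_priority, ceilings):
    """Scheduling rule: assigned priority if no resource is held, otherwise
    the highest (numerically smallest) ceiling among the held resources."""
    if not held:
        return assigned_priority[job]
    return min(ceilings[r] for r in held)

assigned = {"J1": 1, "J2": 2, "J5": 5}
reqs = {"J1": {"Shaded"}, "J2": {"Black"}, "J5": {"Black", "Shaded"}}
ceil = priority_ceilings(reqs, assigned)      # Shaded -> 1, Black -> 2
print(current_priority("J5", {"Black"}, assigned, ceil))            # 2
print(current_priority("J5", {"Black", "Shaded"}, assigned, ceil))  # 1
```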

Definitions Of Protocols And Duration Of Blocking

6 min read
Definitions of Protocols and Duration of Blocking A preemption-ceiling protocol makes decisions on whether to grant a free resource to any job based on the preemption level of the job, in a manner similar to the priority-ceiling protocol. This protocol also assumes that the resource requirements of all the jobs are known a priori. After assigning preemption levels to all the jobs, we determine the preemption ceiling of each resource. Specifically, when there is only 1 unit of each resource, which we assume is the case here, the preemption ceiling Ψ(R) of a resource R is the highest preemption level of all the jobs that require the resource. For the example in Figure 8–19, the preemption ceiling of Black is 1, while the preemption ceiling of Shaded is 2. The (preemption) ceiling of the system Ψ̂(t) at any time t is the highest preemption ceiling of all the resources that are in use at t. When the context is clear and there is no chance of confusion, we will simply refer to Ψ̂(t) as the ceiling of the system. We again use Ω to denote a preemption level that is lower than the lowest preemption level among all jobs, since there is no possibility of confusion. When all the resources are free, we say that the ceiling of the system is Ω. Like the priority-ceiling protocol, the preemption-ceiling protocol also has a basic version and a stack-based version. The former assumes that each job has its own stack and the latter allows the jobs to share a common stack. Basic versions of the priority-ceiling and preemption-ceiling protocols differ mainly in their allocation rules. For this reason, only the allocation rule of the basic preemption-ceiling protocol is given below. You can see that the principle of this rule for both protocols is the same, the only difference being the parameters (i.e., priority or preemption levels and ceilings) used by the rule. Rules of Basic Preemption-Ceiling Protocol

Other Real-time Concurrency Control Schemes

4 min read
Other Real-Time Concurrency Control Schemes One way to improve the responsiveness of soft real-time jobs that read and write multiple data objects is to abort and restart the lower-priority job whenever it conflicts (i.e., contends for some data object) with a higher-priority job. A policy that governs which job proceeds at the expense of which job is called a conflict resolution policy. Well-Known Conflict Resolution Policies. To explain conflict resolution policies that have been proposed for database transactions, we note that each transaction typically keeps a copy of each object it reads and may write in its own memory space. When it completes all the reads, writes, and computations, it writes all data objects it has modified back to the global space. This last step is called commit. So, until a transaction commits, no shared data object in the database is modified, and it is safe to abort and restart a transaction and take back the data objects allocated to it. Abbott et al. [AbMo] showed that the 2PL-HP scheme performs well for soft real-time transactions compared with Optimistic Concurrency Control (OCC) schemes, which we will explain shortly. According to the 2PL-HP scheme, all transactions follow a two-phase policy in their acquisitions of (locks of) data objects. Whenever two transactions conflict (i.e., they want to access the same object), the lower-priority transaction is restarted immediately (i.e., the executing instance is aborted and a new instance is made ready immediately). In essence, this scheme allocates data objects to transactions preemptively. Therefore, priority inversion cannot occur.

Modified Rules

6 min read
Modified Rules: It is straightforward to modify the ceiling-priority protocol so it can deal with multiple-unit resources. In essence, the scheduling and allocation rules remain unchanged except for the new definition of the priority ceilings of resources. However, since more than one job can hold (some units of) a resource, scheduling rule 1b needs to be rephrased for clarity. It should read as follows:
Scheduling Rule of Multiple-Unit Ceiling-Priority Protocol: Upon acquiring a resource R and leaving k ≥ 0 free units of R, a job executes at the higher of its priority and the priority ceiling Π(R, k) of R.
Similarly, the allocation rule of the priority-ceiling (or preemption-ceiling) protocol for multiple units of resources is a straightforward modification of the allocation rule of the basic priority-ceiling (preemption-ceiling) protocol.
Allocation Rule of Multiple-Unit Priority- (Preemption-) Ceiling Protocol: Whenever a job J requests k units of resource R at time t, one of the following two conditions occurs:
(a) Less than k units of R are free. J’s request fails and J becomes directly blocked.
(b) k or more units of R are free.
(i) If J’s priority π(t) [preemption level ψ(t)] is higher than the current priority ceiling Π̂(t) [preemption ceiling Ψ̂(t)] of the system at the time, k units of R are allocated to J until it releases them.
(ii) If J’s priority π(t) [preemption level ψ(t)] is not higher than the system ceiling Π̂(t) [Ψ̂(t)], k units of R are allocated to J only if J holds the resource(s) whose priority ceiling (preemption ceiling) is equal to Π̂(t) [Ψ̂(t)]; otherwise, J’s request is denied, and J becomes blocked.

Basic Priority-ceiling Protocols

7 min read
BASIC PRIORITY-CEILING PROTOCOL: The priority-ceiling protocol [ShRL88, ShRL90] extends the priority-inheritance protocol to prevent deadlocks and to further reduce the blocking time. This protocol makes two key assumptions:
1. The assigned priorities of all jobs are fixed.
2. The resources required by all jobs are known a priori before the execution of any job begins.
To define the protocol, we need two additional terms. The protocol makes use of a parameter, called priority ceiling, of every resource. The priority ceiling of any resource Ri is the highest priority of all the jobs that require Ri and is denoted by Π(Ri). For example, the priority ceiling Π(Black) of the resource Black in the example in Figure 8–8 is 2 because J2 is the highest priority job among the jobs requiring it. Similarly, Π(Shaded) is 1. Because of assumption 2, the priority ceilings of all resources are known a priori. We note that if the resource access control protocol includes the priority-inheritance rule, then a job can inherit a priority as high as x during its execution if it requires a resource with priority ceiling x. At any time t, the current priority ceiling (or simply the ceiling) Π̂(t) of the system is equal to the highest priority ceiling of the resources that are in use at the time, if some resources are in use. If all the resources are free at the time, the current ceiling Π̂(t) is equal to Ω, a nonexisting priority level that is lower than the lowest priority of all jobs. As an example, we again look at the system in Figure 8–8. In the interval [0, 1) when both resources in the system are free, the current ceiling of the system is equal to Ω, lower than 5, the priority of the lowest priority job J5. In (1, 3], Black is held by J5; hence, the current ceiling of the system is 2. In (3, 13] when Shaded is also in use, the current ceiling of the system is 1, and so it is in (13, 14].
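The definitions of the resource ceilings Π(R) and the system ceiling Π̂(t) can be checked with a few lines of code. The requirement table below is an illustrative one consistent with the ceilings quoted for the Figure 8–8 example (Π(Black) = 2, Π(Shaded) = 1); Ω is modeled as a value lower in priority than any job's.

```python
# Sketch: priority ceilings of resources and the current ceiling of the system,
# following the definitions above. Smaller number = higher priority; OMEGA
# stands for the nonexisting lowest priority. The requirement table is
# illustrative, chosen to reproduce the ceilings quoted in the text.
OMEGA = float("inf")

def priority_ceiling(resource, requirements, priority):
    """Highest priority (smallest number) among jobs that require the resource."""
    users = [priority[j] for j, res in requirements.items() if resource in res]
    return min(users) if users else OMEGA

def system_ceiling(in_use, requirements, priority):
    """Highest priority ceiling of the resources in use; OMEGA if none in use."""
    if not in_use:
        return OMEGA
    return min(priority_ceiling(r, requirements, priority) for r in in_use)

priority = {"J1": 1, "J2": 2, "J4": 4, "J5": 5}
requirements = {"J1": {"Shaded"}, "J2": {"Black"}, "J4": {"Shaded"}, "J5": {"Black"}}

print(priority_ceiling("Black", requirements, priority))            # 2
print(priority_ceiling("Shaded", requirements, priority))           # 1
print(system_ceiling({"Black"}, requirements, priority))            # 2, as in (1, 3]
print(system_ceiling({"Black", "Shaded"}, requirements, priority))  # 1, as in (3, 13]
```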

Properties Of The Priority-inheritance Protocol

2 min read
Properties of the Priority-Inheritance Protocol: This example illustrates that when resource accesses are controlled by the priority-inheritance protocol, there are two types of blocking: direct blocking and priority-inheritance blocking (or simply inheritance blocking). J2 is directly blocked by J5 in (6, 11] and by J4 in (11, 12.5], and J1 is directly blocked by J4 in (8, 13]. In addition, J3 is blocked by J5 in (6, 7] because the latter inherits a higher priority in this interval. Later at time 8, when J4 inherits priority 1 from J1, J3 becomes blocked by J4 as a consequence. Similarly, the job J2 in Figure 8–7(b) suffers inheritance blocking by J3 in (5, 7]. This type of blocking suffered by jobs that are not involved in resource contention is the cost for controlling the durations of priority inversion suffered by jobs that are involved in resource contention. The example in Figure 8–8 also illustrates the fact that jobs can transitively block each other. At time 9, J5 blocks J4, and J4 blocks J1. So, priority inheritance is transitive. In the time interval (9, 11), J5 inherits J4’s priority, which J4 inherited from J1. As a consequence, J5 indirectly inherits J1’s priority. The priority-inheritance protocol does not prevent deadlock. To see why, let us suppose that J5 in this example were to request the resource Shaded sometime after Shaded has been granted to J4 (e.g., at time 6.5). These two jobs would be deadlocked.

Assumptions On Resources And Their Usage

6 min read
ASSUMPTIONS ON RESOURCES AND THEIR USAGE: We continue to focus on the case where the system contains only one processor. In addition, the system also contains ρ types of serially reusable resources named R1, R2, . . . , Rρ. There are νi indistinguishable units of resource (of type) Ri, for 1 ≤ i ≤ ρ. Serially reusable resources are typically granted (i.e., allocated) to jobs on a nonpreemptive basis and used in a mutually exclusive manner. In other words, when a unit of a resource Ri is granted to a job, this unit is no longer available to other jobs until the job frees the unit. Again, examples of such resources are mutexes, reader/writer locks, connection sockets, printers, and remote servers. A binary semaphore is a resource (type) that has only 1 unit while a counting semaphore has many units. A system containing five printers has 5 units of the printer resource. There is only 1 unit of an exclusive write-lock. A resource that has an infinite number of units has no effect on the timing behavior of any job since every job can have the resource at any time; there is no need to include the resource in our model. Therefore, we lose no generality by assuming that every resource Ri has a finite number of units. Some resources can be used by more than one job at the same time. We model such a resource as a resource type that has many units, each used in a mutually exclusive manner. For example, a file that can be read by at most ν users at the same time is modeled as a resource that has ν exclusive units. By modeling shared resources in this manner, we do not need to treat them differently.

Effect Of Resource Contention And Resource Access Control (Rac)

4 min read
EFFECTS OF RESOURCE CONTENTION AND RESOURCE ACCESS CONTROL: A resource access-control protocol, or simply an access-control protocol, is a set of rules that govern (1) when and under what conditions each request for resource is granted and (2) how jobs requiring resources are scheduled. Before moving on to describe several well-known resource access-control protocols and their performance, let us examine in more detail the undesirable effects of resource contention. By looking more closely at what can happen when jobs contend for resources, we should see more clearly the specific design objectives of such a protocol. Priority Inversion, Timing Anomalies, and Deadlock: Resource contentions among jobs can also cause priority inversion. Because resources are allocated to jobs on a nonpreemptive basis, a higher-priority job can be blocked by a lower-priority job if the jobs conflict, even when the execution of both jobs is preemptable. In the example in Figure 8–2, the lowest priority job J3 first blocks J2 and then blocks J1 while it holds the resource R. As a result, priority inversion occurs in intervals (4, 6] and (8, 9]. When priority inversion occurs, timing anomalies invariably follow. Figure 8–3 gives an example. The three jobs are the same as those shown in Figure 8–2, except that the critical section in J3 is [R; 2.5]. In other words, the execution time of the critical section in J3 is shortened by 1.5. If we were not warned earlier in Section 4.8 about timing anomalies, our intuition might tell us that as a consequence of this reduction in J3’s execution time, all jobs should complete sooner. Indeed, this reduction does allow jobs J2 and J3 to complete sooner. Unfortunately, rather than meeting its deadline at 14, J1 misses its deadline because it does not complete until 14.5.

Nonpreemptive Critical Sections

4 min read
NONPREEMPTIVE CRITICAL SECTIONS: The simplest way to control access of resources is to schedule all critical sections on the processor nonpreemptively [Mok]. (In other words, when a job requests a resource, it is always allocated the resource. When a job holds any resource, it executes at a priority higher than the priorities of all jobs.) This protocol is called the Nonpreemptive Critical Section (NPCS) protocol. Because no job is ever preempted when it holds any resource, deadlock can never occur. Take the jobs in Figure 8–4 for example. Figure 8–7(a) shows the schedule of these jobs when their critical sections are scheduled nonpreemptively on the processor. According to this schedule, J1 is forced to wait for J3 when J3 holds the resource. However, as soon as J3 releases the resource, J1 becomes unblocked and executes to completion. Because J1 is not delayed by J2, it completes at time 10, rather than 15 according to the schedule in Figure 8–4. In general, uncontrolled priority inversion illustrated by Figure 8–4 can never occur. The reason is that a job Jh can be blocked only if it is released when some lower-priority job is in a critical section. If it is blocked, once the blocking critical section completes, all resources are free. No lower-priority job can get the processor and acquire any resource until Jh completes. Hence, Jh can be blocked only once, and its blocking time due to resource conflict is at most equal to the maximum execution time of the critical sections of all lower-priority jobs.
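The blocking bound stated at the end of this paragraph is easy to compute: under the NPCS protocol a job can be blocked at most once, for at most the longest critical section of any lower-priority job. The sketch below assumes the maximum critical-section durations are known a priori; the job names and durations are illustrative.

```python
# Sketch: NPCS blocking bound. A job can be blocked at most once, for at most
# the longest critical section among all lower-priority jobs. Smaller number =
# higher priority; the durations are illustrative.
def npcs_blocking_time(job, priority, max_critical_section):
    lower = [j for j in priority if priority[j] > priority[job]]
    return max((max_critical_section.get(j, 0.0) for j in lower), default=0.0)

priority = {"J1": 1, "J2": 2, "J3": 3}
max_cs = {"J1": 0.0, "J2": 2.0, "J3": 4.0}   # longest critical section of each job

print(npcs_blocking_time("J1", priority, max_cs))   # 4.0 (worst case: blocked by J3)
print(npcs_blocking_time("J3", priority, max_cs))   # 0.0 (no lower-priority jobs)
```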

Basic Priority-inheritance And Priority-ceiling Protocols

7 min read
BASIC PRIORITY-INHERITANCE PROTOCOL: The priority-inheritance protocol proposed by Sha, et al. [ShRL90] is also a simple protocol. It works with any preemptive, priority-driven scheduling algorithm. Like the NPCS protocol, it does not require prior knowledge of the resource requirements of jobs. The priority-inheritance protocol does not prevent deadlock. When there is no deadlock (i.e., when some other method is used to prevent deadlock), the protocol ensures that no job is ever blocked for an indefinitely long time because uncontrolled priority inversion cannot occur. In this and the next three sections, we confine our attention to the special case where there is only 1 unit of each resource. (This is the reason for calling the version described here the basic version.) By doing so, we relieve ourselves temporarily of the details that arise due to multiple resource units, so we can focus on the essence of the protocols and the behavior of the system under their control. We will return in Section 8.9 to remove this restrictive assumption. Definition of Basic Priority-Inheritance Protocol: In the definition of this protocol, as well as other protocols described later, we call the priority that is assigned to a job according to the scheduling algorithm its assigned priority. As you will see shortly, at any time t, each ready job Jl is scheduled and executes at its current priority πl(t), which may differ from its assigned priority and may vary with time. In particular, the current priority πl(t) of a job Jl may be raised to the higher priority πh(t) of another job Jh. When this happens, we say that the lower-priority job Jl inherits the priority of the higher-priority job Jh and that Jl executes at its inherited priority πh(t). Hereafter, when there is no need to be specific or there is no possible confusion, we will simply say priority when we mean either current or assigned priority.

Stack Based Priority-ceiling Protocol

6 min read
Introduction: The different definitions arise from two different motivations: to provide stack-sharing capability and to simplify the priority-ceiling protocol. They led to the two different names of the same protocol. Motivation and Definition of Stack-Sharing Priority-Ceiling Protocol A resource in the system is the run-time stack. Thus far, we have assumed that each job has its own run-time stack. Sometimes, especially in systems where the number of jobs is large, it may be necessary for the jobs to share a common run-time stack, in order to reduce overall memory demand. (According to Baker [Bake91], if there are 10 jobs at each of 10 priority levels, the storage saving is 90 percent.) Space in the (shared) stack is allocated to jobs contiguously in the last-in-first-out manner. When a job J executes, its stack space is on the top of the stack. The space is freed when the job completes. When J is preempted, the preempting job has the stack space above J ’s. J can resume execution only after all the jobs holding stack space above its space complete, free their stack spaces, and leave J ’s stack space on the top of the stack again. Clearly, if all the jobs share a common stack, schedules such as the one in Figure 8–10 should not be allowed. You can see that if the jobs were to execute according to this schedule, J5 would resume after J4 is blocked. Since J4 is not complete, it still holds the space on the top of the stack at this time. The stack space of J5 would be noncontiguous after this time, which is not allowed, or J5 would not be allowed to resume, which would result in a deadlock between J5 and J4 (i.e., J5 holds Black and avoidance blocks J4 and J4 holds the “top of the stack” and blocks J5).

Fixed-priority Scheduling And Priority-ceiling Protocol

4 min read
Fixed-Priority Scheduling and Priority-Ceiling Protocol: The priority-ceiling protocol is an excellent algorithm for controlling the accesses to resources of periodic tasks when the tasks are scheduled on a fixed-priority basis. It is reasonable to assume that the resources required by every job of every task and the maximum execution times of their critical sections are known a priori, just like the other task parameters. All the jobs in each periodic task have the same priority. Hence, the priority ceiling of each resource is the highest priority of all the tasks that require the resource. This makes it possible for us to analyze and determine the potential of resource contentions among tasks statically. The effect of resource contentions on the schedulability of the tasks can be taken care of by including the blocking time bi(rc) in the schedulability test of the system. For example, suppose that the jobs in Figure 8–14 belong to four periodic tasks. The tasks are T1 = (ε, 2, 0.8; [Black; 0.8]), T2 = (ε, 2.2, 0.4), T3 = (ε, 5, 0.2; [Shaded; 0.2]), and T4 = (10, 1.0; [Black; 1.0]), where ε is a small positive number. For all i, Ji in Figure 8–14 is a job in Ti. Figure 8–16 shows the initial segment of the schedule of the tasks according to the rate-monotonic algorithm and priority-ceiling protocol. We see that T2 misses its deadline at time 2.2 + ε. A schedulability test can predict this miss. The time-demand function of T2 is equal to 2.2 (i.e., 0.8 + 0.4 + 1.0) in (0, 2.0 + ε] when the blocking time b2(rc) = 1.0 of T2 is included, and it becomes 3.0 at 2.0 + ε (i.e., the beginning of the second period of T1). Obviously, the time supplied by T2's first deadline at 2.2 + ε cannot meet this demand. Similarly, if the jobs Ji, for i = 1, 2, . . . , 6, in Figure 8–15 are jobs in periodic tasks Ti, respectively, we can take the effect of resource conflicts into account in our determination of whether the tasks are schedulable by including the blocking time bi(rc) computed above in the schedulability test. To summarize this section, we recall that two factors contribute to the time-demand function of each task in addition to the execution times of its jobs and execution times of equal- and higher-priority jobs. They are blocking time and context-switching time. When the system is scheduled on a fixed-priority basis and uses the priority-ceiling protocol, we can compute the blocking time bi(rc) of each task Ti due to its resource conflicts with other tasks in the way described above. After we have thus obtained bi(rc), we include this blocking factor with the other types of blocking times (e.g., due to nonpreemptivity of lower-priority jobs).
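The kind of check described above can be sketched as a small time-demand computation that simply adds a blocking term to the demand of each task. The program below is only an illustration of the idea, not the book's exact test: it uses the example's task parameters with the small phase ε ignored, and it reports that T2 cannot meet its 2.2 deadline once the blocking time of 1.0 is included.

```c
/* Sketch: a time-demand check that adds a blocking term b_i to the demand of
 * task T_i under fixed-priority scheduling.  Tasks are indexed in decreasing
 * order of priority; the parameters below follow the example with the small
 * phase epsilon ignored. */
#include <math.h>
#include <stdio.h>

struct task { double p, e, d, b; };   /* period, execution time, relative deadline, blocking */

/* Returns 1 if task i meets its deadline according to time-demand analysis. */
static int schedulable(const struct task *t, int i)
{
    double w = t[i].e + t[i].b, prev = 0.0;
    while (w <= t[i].d && w != prev) {
        prev = w;
        w = t[i].e + t[i].b;
        for (int j = 0; j < i; j++)           /* interference from higher-priority tasks */
            w += ceil(prev / t[j].p) * t[j].e;
    }
    return w <= t[i].d;
}

int main(void)
{
    struct task t[] = {
        { 2.0,  0.8, 2.0,  0.0 },
        { 2.2,  0.4, 2.2,  1.0 },             /* b_2 = 1.0 models the resource conflict */
        { 5.0,  0.2, 5.0,  0.0 },
        { 10.0, 1.0, 10.0, 0.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("T%d %s\n", i + 1, schedulable(t, i) ? "meets deadline" : "may miss deadline");
    return 0;
}
```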

Access Control In Multiple-unit Resources

3 min read
CONTROLLING ACCESSES TO MULTIPLE-UNIT RESOURCES Both versions of the priority-ceiling protocol and the preemption-ceiling protocol described in the previous sections assume that there is only one unit of each resource. We now describe an extension to these protocols so that they can deal with the general case where there may be more than one unit of each resource (type). Priority (Preemption) Ceilings of Multiple-Unit Resources: The first step in extending the priority-ceiling protocol is to modify the definition of the priority ceilings of resources. We let Π(Ri, k), for k ≤ νi, denote the priority ceiling of a resource Ri when k out of the νi (≥ 1) units of Ri are free. If one or more jobs in the system require more than k units of Ri, Π(Ri, k) is the highest priority of all these jobs. If no job requires more than k units of Ri, Π(Ri, k) is equal to Ω, the nonexisting lowest priority. In this notation, the priority ceiling Π(Rj) of a resource Rj that has only 1 unit is Π(Rj, 0). Let ki(t) denote the number of units of Ri that are free at time t. Because this number changes with time, the priority ceiling of Ri changes with time. The (current) priority ceiling of the system at time t is equal to the highest priority ceiling of all the resources at the time. Figure 8–21 gives an example. The resource requirement graph gives the numbers of units of the resources X, Y, and Z required by the five jobs, which are indexed in decreasing order of their priorities. The table below the graph gives the priority ceiling of each resource for different numbers of free resource units. For example, there are 2 units of X. When 1 unit of X is used, only J3 is directly blocked. Therefore, Π(X, 1) is π3. J1 is also directly blocked when both units of X are in use. For this reason, Π(X, 0) is π1, the higher priority between π1 and π3. When both units of X are free, the ceiling of the resource is Ω. Similarly, since J2, J3, and J5 require 2, 3, and 1 units of Y, which has 3 units, Π(Y, 0), Π(Y, 1), and Π(Y, 2) are equal to π2, π2, and π3, respectively. Suppose that at time t, 1 unit of each of X, Y, and Z is free. The priority ceilings of the resources are π3, π2, and Ω, respectively, and the priority ceiling of the system is π2.
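A small C sketch of this rule follows. It tabulates the ceiling of one resource for every possible number of free units, using the X column of the example above (J1 needs 1 unit and J3 needs 2 units of X, which has 2 units); jobs are indexed in decreasing order of priority, and Ω is modeled as a sentinel value lower than every real priority.

```c
/* Sketch: computing the priority ceiling of a multiple-unit resource for each
 * possible number of free units.  Jobs are indexed from 0 in decreasing order
 * of priority, so a smaller number means a higher priority; OMEGA stands for
 * the nonexisting lowest priority. */
#include <stdio.h>

#define OMEGA 1000                       /* lower than every real priority */

/* ceiling(R, k): highest priority among jobs that need more than k free units */
static int ceiling(const int *need, int njobs, int k)
{
    int c = OMEGA;
    for (int j = 0; j < njobs; j++)
        if (need[j] > k && j < c)
            c = j;
    return c;
}

int main(void)
{
    /* units of resource X required by jobs J1..J5, as in the example above */
    int need_X[] = { 1, 0, 2, 0, 0 };
    int total_units = 2;

    for (int k = 0; k <= total_units; k++) {
        int c = ceiling(need_X, 5, k);
        if (c == OMEGA)
            printf("Pi(X,%d) = Omega\n", k);
        else
            printf("Pi(X,%d) = pi%d\n", k, c + 1);
    }
    return 0;
}
```

Running this reproduces the X column of the table described above: Π(X, 0) = π1, Π(X, 1) = π3, and Π(X, 2) = Ω.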

Types Of Multiprocessor Operating System

3 min read
TYPES OF MULTIPROCESSOR OPERATING SYSTEMS Multiprocessor operating systems are complex in comparison to multiprogrammed uniprocessor operating systems because a multiprocessor executes tasks concurrently. Therefore, the operating system must be able to support the concurrent execution of multiple tasks to increase processor performance. Depending upon the control structure and its organisation, the three basic types of multiprocessor operating system are: 1) Separate supervisors, 2) Master-slave, and 3) Symmetric supervision. Separate Supervisors: In a separate-supervisor system each processor behaves independently. Each processor has its own operating system which manages local input/output devices, the file system, and memory, as well as keeping its own copy of the kernel, supervisor, and data structures, whereas some common data structures also exist for communication between processors. Access protection between processors is maintained by using some synchronization mechanism such as semaphores. Such an architecture faces the following problems: 1) little coupling among processors; 2) parallel execution of a single task is difficult; 3) the system degrades when a failure occurs; 4) the configuration is inefficient because the supervisor/kernel/data-structure code is replicated on each processor. Master-Slave: In master-slave, out of many processors one processor behaves as a master whereas the others behave as slaves. The master processor is dedicated to executing the operating system. It works as a scheduler and controller over the slave processors. It schedules the work and also controls the activity of the slaves. Therefore, the operating system data structures are usually stored in the master's private memory. Slave processors work only as a schedulable pool of resources; in other words, the slave processors execute application programmes.

Introduction To Multiprocessor Systems

6 min read
Introduction: The computer hardware industry experienced rapid growth in the graphics hardware market during the past decade, with fierce competition driving feature development and increased hardware performance. One important advancement during this time was the programmable graphics pipeline. Such pipelines allow program code, which is executed on graphics hardware, to interpret and render graphics data. Soon after its release, the generality of the programmable pipeline was quickly adapted to solve non-graphics-related problems. However, in early approaches, computations had to be transformed into graphics-like problems that a graphics processing unit (GPU) could understand. Recognizing the advantages of general-purpose computing on a GPU, language extensions and runtime environments were released by major graphics hardware vendors and software producers to allow general-purpose programs to be run on graphics hardware without transformation to graphics-like problems. Today, GPUs can be used to efficiently handle data-parallel, compute-intensive problems and have been utilized in applications such as cryptology, supercomputing, finance, ray-tracing, medical imaging, video processing, and many others. There are strong motivations for utilizing GPUs in real-time systems. Most importantly, their use can significantly increase computational performance. A review of published research shows that performance increases commonly range from 4x to 20x, though increases of up to 1000x are possible in some problem domains. Tasks accelerated by GPUs may execute at higher frequencies or perform more computation per unit time, possibly improving system responsiveness or accuracy.

Analysis Methods

6 min read
Analysis Methods: We consider two methods for analyzing mixed task sets of CPU-only and GPU-using tasks on a multiprocessor system with a single GPU: the Shared Resource Method and the Container Method. Fundamental differences between these methods stem from how GPU execution time is modeled and how potential graphics hardware driver behaviors are managed. Shared Resource Method: It is natural to view a GPU as a computational resource shared by the CPUs of a multiprocessor system. This is the approach taken by the Shared Resource Method (SRM), which treats the GPU as a globally-shared resource protected by a real-time semaphore. The execution of a GPU-using job goes through several phases. In the first phase, the job executes purely on the CPU. In the next phase, the job sends data to the GPU for use by the GPU kernel. Next, the job suspends from the CPU when the kernel is invoked on the GPU. The GPU executes the kernel using many parallel threads, but kernel execution does not complete until after the last GPU thread has completed. Finally, the job resumes execution on the CPU and receives kernel execution results when signaled by the GPU. Optionally, the job may continue executing on the CPU without using the GPU. Thus, a GPU-using job has five execution phases as depicted in Fig. 1. We can remove the GPU driver from resource-arbitration decisions and create a suitable model for real-time analysis through the use of a real-time semaphore protocol. Contention for a GPU may occur when a job attempts to communicate with it. We resolve this contention with a synchronization point between the first and second phases to provide mutual exclusion through the end of the fourth phase; this interval is called a critical section and is denoted for each task Ti by csi. This approach ensures that the driver only services one job at a time, which eliminates the need to know how the driver (which, again, is closed-source) manages concurrent GPU requests.
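The structure of one GPU-using job under this method might be sketched as follows; a plain pthread mutex stands in for the real-time semaphore protecting the GPU, and the gpu_* routines are placeholders rather than a real GPU API.

```c
/* Sketch of the Shared Resource Method's execution phases for one GPU-using
 * job.  A pthread mutex stands in for the real-time semaphore protecting the
 * GPU; the gpu_* calls are placeholders, not a real GPU API. */
#include <pthread.h>

static pthread_mutex_t gpu_lock = PTHREAD_MUTEX_INITIALIZER;

static void cpu_work_phase1(void) { /* phase 1: pure CPU computation        */ }
static void gpu_copy_in(void)      { /* phase 2: send data to the GPU        */ }
static void gpu_run_kernel(void)   { /* phase 3: kernel runs; job suspends   */ }
static void gpu_copy_out(void)     { /* phase 4: receive results from the GPU */ }
static void cpu_work_phase5(void)  { /* phase 5: optional trailing CPU work  */ }

static void gpu_using_job(void)
{
    cpu_work_phase1();

    pthread_mutex_lock(&gpu_lock);   /* critical section cs_i begins */
    gpu_copy_in();
    gpu_run_kernel();
    gpu_copy_out();
    pthread_mutex_unlock(&gpu_lock); /* critical section ends        */

    cpu_work_phase5();
}

int main(void)
{
    gpu_using_job();
    return 0;
}
```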

Multiprocessor Interconnections

3 min read
The second approach, where a cache is associated with each individual processor, is the most popular because it reduces contention more effectively. A cache attached to a processor can capture many of the local memory references; for example, with a cache hit ratio of 90%, a processor on average needs to access the shared memory for only 1 (one) out of 10 (ten) memory references, because the other 9 (nine) references are already satisfied by the processor's private cache. If memory accesses are uniformly distributed, a 90% cache hit ratio allows the shared bus to support roughly 10 times as many processors as it could without caches. The negative aspect of such an arrangement arises when, in the presence of multiple caches, shared writable data are cached. In this case cache coherence must be maintained, that is, the multiple physical copies of a single logical datum must be kept consistent with each other in the presence of update activity. Cache coherence can be maintained by attaching additional hardware or by including specialised protocols designed for the purpose, but unfortunately these special arrangements increase the bus traffic and thus reduce the benefit that processor caches are designed to provide. Cache coherence refers to the integrity of data stored in the local caches of a shared resource; it is a special case of memory coherence. When clients in a system, particularly CPUs in a multiprocessing system, maintain caches of a common memory resource, problems arise. Referring to Figure 4, if the top client has a copy of a memory block from a previous read and the bottom client changes that memory block, the top client could be left with an invalid cached copy of memory without knowing any better. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory. Let us discuss some techniques which can be employed to decrease the impact of bus and memory saturation in a bus-oriented system. 1) Wider Bus Technique: As suggested by its name, the bus is made wider so that more bytes can be transferred in a single bus cycle. In other words, a wider parallel bus increases the bandwidth by transferring more bytes in a single bus cycle. A bus transaction is needed when missed blocks are to be fetched into the processor's cache. For this purpose many systems support block-level reads and writes of main memory, so that a missing block can be transferred from main memory to the processor's cache by a single main-memory (block) read action. The advantage of block-level access to memory over word-oriented busses is that the overhead of addressing, acquisition, and arbitration is amortized over the large number of items in a block. Moreover, such transfers can also use specialised burst bus cycles, which make the transport more efficient. 2) Split Request/Reply Protocols: The memory request and reply are split into two individual operations and are treated as separate bus transactions. As soon as a processor places a request for a block, the bus is released to other users while the memory fetches and assembles the requested group of items. The bus-oriented system is the simplest architecture of a multiprocessor system, and it is believed that in a tightly coupled system this organisation can support on the order of 10 processors. However, the two contention points (the bus and the shared memory) limit the scalability of the system.

Introduction To Real-time Systems And Fault-tolerance

4 min read
Introduction: When a component of a computer system fails, it will usually produce some undesirable effects and it can be said to no longer behave according to its specification. Such a breakdown of a component is called a fault and its consequence is called a failure. A fault may occur sporadically, or it may be stable and cause the component to fail permanently. Even when a fault occurs instantaneously, a fault such as a memory fault may have consequences that manifest themselves after a considerable time. Fault-tolerance is the ability of a system to function correctly despite the occurrence of faults. Faults caused by errors (or 'bugs') in software are systematic and can be reproduced in the right conditions. The formal methods described in previous chapters address the problem of errors in software and, while their use does not guarantee the absence of software errors, they do provide the means of making a rigorous, additional check. Hardware errors may also be systematic but in addition they can have random causes. The fact that a hardware component functions correctly at some time is no guarantee of flawless future behaviour. Note that hardware faults often affect the correct behaviour of software. One of the reasons for introducing dynamic scheduling is to deal with the unexpected computational load imposed when faults do occur. Of course, it is not possible to tolerate every fault. A failure hypothesis stipulates how faults affect the behaviour of a system. An example of a failure hypothesis is the assumption that a communication medium might corrupt messages. With triple modular redundancy, a single component is replaced by three replicas and a voter that determines the outcome, and the failure hypothesis is that at any time at most one replica is affected by faults. A failure hypothesis divides abnormal behaviour, i.e. behaviour that does not conform to the specification, into exceptional and catastrophic behaviours. Exceptional behaviour conforms to the failure hypothesis and must be tolerated, but no attempt need be made to handle catastrophic behaviour (and, indeed, no attempt may be possible). For example, if the communication medium mentioned earlier repeatedly sends the same message, then this may be catastrophic for a given fault-tolerance scheme. It is important to note that 'normal' behaviour does not mean 'perfect' behaviour: after a time-out occurs, the retransmission of a message by a sender is normal but it may result in two copies of the same message reaching its destination. Exceptional and normal behaviours together form the acceptable behaviour that the system must tolerate.

pSOSystem

2 min read
pSOSystem: Again, pSOSystem is an object-oriented operating system. Like the other operating systems described in this section, it is modular. pSOS+ is a preemptive, multitasking kernel that runs on a single microprocessor, while pSOS+m is a distributed multiprocessor kernel. pSOS+m has the same API functions as pSOS+, as well as functions for interprocessor communication and synchronization. The most recent release offers a POSIX real-time extension-compliant layer. Additional optional components provide a TCP/IP protocol stack and target- and host-based debugging tools. The classes of pSOSystem objects include tasks, memory regions and partitions, message queues, and semaphores. Each object has a node of residence, which is the processor on which the system call that created the object was made. An object may be global or local. A global object (a task or a resource) can be accessed from any processor in the system, while a local object can be accessed only by tasks on its local processor. A remote node with respect to any object is a processor other than its node of residence. You may have noticed that this is how the MPCP model views a static multiprocessor system. The basic unit of execution is a task, which has its own virtual environment. pSOS provides each task with the choice of either preemptive priority-driven or time-driven scheduling. The application developer can also choose to run user tasks in either user or supervisory mode. pSOSystem 2.5 offers, in addition to priority inheritance, the priority-ceiling protocol.

Open System Architecture

4 min read
Objectives and Alternatives: Ideally, a real-time operating system should create an open environment. Here, by an open environment, we mean specifically one in which the following development and configuration processes are feasible. In short, independently developed and validated hard real-time applications can be configured at run time to run together with soft real-time and nonreal-time applications. We recall that the fixed-time partitioning scheme discussed in Section 6.8.7 is a way to achieve the above goal and has been used to allow safety-critical applications to run together with nonsafety-critical applications. According to that scheme, we partition time into slots and confine the execution of each application to the time slots allocated to the application. We saw in Section 6.8.7 that if applications do not share any global resource (i.e., resources shared by tasks in different applications) and the time slots are assigned to applications on the TDM (Time-Division Multiplexing) basis, we can determine the schedulability of each application independently of other applications. The number of slots per round an application requires to be schedulable is then its required capacity. An application can be admitted into the system whenever the system has enough spare time slots per round to meet the required capacity of the application. The key parameter of the fixed-time partitioning scheme is the length of the time slots. Obviously, this length should be smaller than the shortest relative deadline of all applications. The shorter the time slots, the closer the system emulates a slower virtual processor for each application, but the higher the context-switch overhead. Hence, using the fixed-time partitioning scheme, we cannot accommodate stringent response time requirements without paying high overhead. Modern operating systems typically provide priority-driven scheduling. Some kind of middleware is needed to put the time-driven scheme on top of them.
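To make the admission rule concrete, the sketch below implements a toy version of the slot-based acceptance test: an application is admitted only if its required capacity, expressed in slots per round, fits within the spare slots. The numbers and names are made up for illustration and are not taken from any particular system.

```c
/* Sketch: admission control under fixed-time (TDM) partitioning.  An
 * application declares the number of slots per round it needs (its required
 * capacity) and is admitted only if enough spare slots remain. */
#include <stdio.h>

#define SLOTS_PER_ROUND 100

static int allocated_slots = 0;

/* Returns 1 and reserves the slots if the application can be admitted. */
static int admit_application(int required_slots)
{
    if (allocated_slots + required_slots > SLOTS_PER_ROUND)
        return 0;                      /* not enough spare capacity */
    allocated_slots += required_slots;
    return 1;
}

int main(void)
{
    printf("app A (30 slots): %s\n", admit_application(30) ? "admitted" : "rejected");
    printf("app B (50 slots): %s\n", admit_application(50) ? "admitted" : "rejected");
    printf("app C (40 slots): %s\n", admit_application(40) ? "admitted" : "rejected");
    return 0;
}
```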

VRTX

2 min read
VRTX: VRTX has two multitasking kernels: VRTXsa and VRTXmc. VRTXsa is designed for performance. It has a POSIX-compliant library, provides priority inheritance, and supports multiprocessing. Its system calls are deterministic and preemptable. In particular, VRTXsa 5.0 is compliant to POSIX real-time extensions. It is for medium and large real-time applications. VRTXmc, on the other hand, is optimized for power consumption and ROM and RAM sizes. It provides only basic functions and is intended for applications such as cellular phones and other small handheld devices. For such applications, the kernel typically requires only 4-8 kilobytes of ROM and 1 kilobyte of RAM. Most noteworthy about VRTX is that it is the first commercial real-time operating system certified by the Federal Aviation Administration (FAA) for mission- and life-critical systems, such as avionics. (VRTX is compliant to the FAA RTCA/DO-178B level A standard. This is a standard for software on board aircraft, and level A is for software whose failure would cause or contribute to a catastrophic failure of the aircraft. The process of compliance certification involves 100 percent code coverage in testing.) VRTX is used for the avionics on board Boeing MD-11 aircraft. Rather than providing users with a variety of optional components, VRTX provides hooks for extensibility. For example, the TCB of a new task created by TCREATE( ) is extended to include application-specific information. An application can also add system calls.

Scheduling Overhead And Processor Utilization

7 min read
Scheduling Overhead and Processor Utilization: Deng et al. [DLZS] simulated systems with different workloads for the purpose of determining the scheduling overhead of the two-level priority-driven scheme. Specifically, they compare the numbers of context switches and queue insertion/deletion operations incurred in systems using the two-level scheme with the corresponding numbers in a closed system using a one-level priority-driven scheduler. Their simulation results show that the scheduling overhead of the open system depends critically on the scheduling quantum. This design parameter can be set to 0 when all real-time applications are predictable. In this case, the scheduling overhead of the open system is essentially the same as that of a closed system. (The number of context switches is the same. There are more queues to maintain, but all the queues are shorter.) When some real-time applications are nonpredictable, we must choose a nonzero scheduling quantum. The scheduling overhead of the open system is significantly higher than that of the closed system only when the scheduling quantum is small. We note that the smaller the minimum relative deadline, the smaller the scheduling quantum must be. Even when the minimum relative deadlines of all applications are large, we may still want to use a small scheduling quantum. The reason is that the size of the server for each nonpredictable application required to meet schedulability condition 1 grows with the scheduling quantum. A larger server than the required capacity of the application means a lower processor utilization for real-time applications. In short, in order to accommodate nonpredictable applications with large release-time jitters and small relative deadlines, some processor bandwidth (on the order of 30 percent) is not available for real-time applications. This bandwidth is either wasted as scheduling overhead, when the scheduling quantum is small, or set aside for the sake of compensating for blocking time in nonpredictable applications, when the scheduling quantum is large. The latter is better because the extra processor bandwidth over the required capacity of the real-time application will be used by server S0 for nonreal-time applications.

UTIME High-resolution Time Service

7 min read
UTIME High-Resolution Time Service. UTIME [HSPN] is a high-resolution time service designed to provide microsecond clock and timer granularity in Linux. With UTIME, system calls that have time parameters, such as select and poll, can specify time down to microsecond granularity. To provide microsecond resolution, UTIME makes use of both the hardware clock and the Pentium time stamp counter. Rather than having the clock device programmed to interrupt periodically, UTIME programs the clock to interrupt in one-shot mode. At any time, the next timer interrupt will occur at the earliest of all future timer expiration times. Since the kernel responds as soon as a timer expires, the actual timer resolution is limited only by the length of time the kernel takes to service a timer interrupt, which is a few microseconds with UTIME. Since the clock device no longer interrupts periodically, UTIME uses the Pentium time stamp counter to maintain the software clock. When the system is booted, the clock is programmed to interrupt periodically. During initialization, UTIME reads and stores the time stamp counter periodically and calculates the length of a jiffy in terms of the number of time stamp cycles in a jiffy and the number of time stamp cycles in a second. Having thus calibrated the time stamp counter with respect to the clock and obtained the numbers of cycles per jiffy and cycles per second, UTIME then reprograms the clock to run in one-shot mode.

Event-triggered Communication

6 min read
Event-Triggered Communication: A sender sends a message whenever a significant event (e.g., termination of a task, an interrupt signal, etc.) occurs at the sender. This message is placed in a queue at the sender's site until the basic message transport service (BMTS) is ready to transport the message to the receiver. The communication channel can be event-triggered, rate-constrained, or time-triggered. After arrival of the message at the receiver, the message is placed in a receiver queue until the receiver consumes the message. Using the CRC field contained in every message, the BMTS checks at the receiver's site whether the contents of a message have been corrupted during transport and simply discards corrupted messages. From the architectural point of view, a BMTS is characterized by a maximum bandwidth, a transport latency, a jitter, and a reliability for the transport of correct BMTS messages. These transport parameters can be characterized by probability distributions. Whenever queues are involved in a scenario, the possibility of queue overflow must be considered. Queue overflow will occur if the transmission rate of the sender is larger than the capacity of the network (overflow of the sender's queue) or the delivery rate of the network is larger than the reception rate at the receiver (overflow of the receiver's queue). Different event-triggered protocols take different approaches to the handling of queue overflow. It is impossible to provide temporal guarantees in an open event-triggered communication scenario. If every sending component in an open communication scenario is autonomous and is allowed to start sending a message at any instant, then it can happen that all sending components send a message to the same receiver at the same instant (the critical instant), thus overloading the channel to the receiver. In fielded communication systems, we find three strategies to handle such a scenario: (1) the communication system stores messages intermediately in a buffer before the receiver, (2) the communication system exerts backpressure on the sender, or (3) the communication system discards some messages. None of these strategies is acceptable for real-time data.

Predictability Of General Purpose Operating System

3 min read
Asynchronous Procedure Calls and the LPC Mechanism. Events are synchronous. Like Real-Time POSIX signals, events are queued and hence will not be lost if not handled immediately. Unlike Real-Time POSIX signals, however, they are delivered in FIFO order and do not carry data. APCs complement events to provide asynchronous services to applications as well as the kernel. Each (kernel or user) thread has its own APC queue, which enqueues the APCs that will be executed by the thread. When called to do so, the kernel inserts an APC in the queue of the specified thread and requests a software interrupt at the APC priority level, which is lower than the priority of the dispatcher (i.e., the scheduler) but higher than the priorities of normal threads. When the thread runs, it executes the enqueued APC. Since a thread is at the APC priority level while it is executing an APC, the thread is nonpreemptable by other threads during this time and may block higher-priority threads. Therefore, it is important that the execution times of APCs be kept as small as possible. Another source of unpredictability in Windows NT is Local Procedure Calls (LPCs). This is an example of incorrect prioritization. LPCs provide the interprocess communication mechanism by which environment subsystem Dynamic Link Libraries (DLLs) pass requests to subsystem service providers. It is also used by remote procedure calls between processes on the same machine, as well as by the WinLogon process and the security reference monitor. Specifically, the LPC mechanism provides three schemes to communicate data across address space boundaries. Short messages are sent over an LPC connection when the sending process or thread makes an LPC call which specifies the buffer containing the message. The message is copied into the kernel space and from there into the address space of the receiving process. The other schemes make use of shared memory sections and are for exchanges of long messages between the sender and receiver. Since the LPC queue is FIFO ordered, priority inversion can occur when a thread sends a request over an LPC connection to a service provider. Furthermore, without priority tracking, a service provider thread which was created to service a request from a high-priority thread may execute at a nonreal-time or low real-time priority.

LynxOS

3 min read
LynxOS: LynxOS 3.0 moves from the monolithic architecture of earlier versions to a microkernel design. The microkernel is 28 kilobytes in size and provides the essential services in scheduling, interrupt dispatch, and synchronization. Other services are provided by kernel lightweight service modules, called Kernel Plug-Ins (KPIs). By adding KPIs to the microkernel, the system can be configured to support I/O and file systems, TCP/IP, streams, sockets, and so on, and to function as a multipurpose UNIX operating system as earlier versions do. Unlike the single-threaded system service providers in traditional microkernel systems, KPIs are multithreaded. Each KPI can create as many threads to execute its routines as needed. LynxOS says that there is no context switch when sending a message (e.g., RFS) to a KPI, and inter-KPI communication takes only a few instructions [Bunn]. LynxOS can be configured as a self-hosted system. In such a system, embedded applications are developed on the same system on which they are to be deployed and run. This means that the operating system must also support tools such as compilers, debuggers, and performance profilers and allow them to run on the same platform as embedded applications. Protecting the operating system and critical applications from these possibly untrustworthy applications is essential. Moreover, their memory demands are large. For this reason, LynxOS not only provides memory protection through hardware Memory Management Units (MMUs) but also offers optional demand paging.

Time Services And Scheduling Mechanisms

3 min read
The second source of error arises from the fact that timer events may not be acted upon by the kernel in time order. In some operating systems (e.g., Windows NT 4.0 and Linux), when more than one timer is found expired at a clock interrupt, the kernel takes care of the timer with the latest expiration time first and proceeds in decreasing time order. In other words, it services timer events in LIFO order. Therefore, if the order of occurrences of timer events is important, you will need to take care of this matter. For example, if two timer expiration times are within the same clock interrupt period, you need to give the timer that is supposed to trigger an earlier activity a later expiration time. Release-Time Jitters of Periodic Tasks. We conclude our discussion on timers by looking closely at the user-level implementations of a periodic task in Figure 12–5. In particular, we ask what can cause jitters in the release times of the periodic thread. Suppose that the kernel finishes creating the thread and places it in the ready queue at time t. The programmer's intention is for the thread to be released for the first time at time t + 10. We see that the actual first release time can be later than this time because (1) the thread may be preempted and not scheduled until a later time and (2) it takes some time to create a timer (and to get the current time) when the thread starts to execute. If our time unit is a millisecond, the delay due to factor (2) is small and can be neglected, but the delay due to factor (1) is arbitrary. Therefore, the implementations are correct only if we accept that t + 10 is the earliest release time of the first instance, while the actual release time of this instance may be much later. Similarly, 100 time units is the minimum length of time between the release times of instances of this task. The subsequent instances of the thread will be released periodically only if the while loop always completes within 100 time units. According to implementation 2, if the response time of an instance of the thread exceeds 100, the next instance is released as soon as the current instance completes. (Again, the next release can be delayed since the thread can be preempted between iterations of the loop.) In contrast, the timer in implementation 1 continues to signal every 100 time units. If a signal arrives while the thread is executing its program, the signal is blocked. The pseudocode in Figure 12–5(a) does not describe a correct implementation of a periodic task because it neglects this situation. In general, Real-Time POSIX signals that arrive while blocked are queued, but SIGALRM signals are not. Instead, the number of times the SIGALRM signal has arrived while the signal is blocked is indicated by the timer overrun count. By examining this count, the thread can determine when the code of the periodic task should be executed again. In any case, the periodic task is not truly periodic. Moreover, because the thread can be preempted for an arbitrary amount of time, the interrelease time of consecutive instances can be arbitrarily large.
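One common way to reduce the drift described above is to sleep until absolute release times rather than for relative intervals. The following is a generic POSIX sketch of such a periodic loop; it is not the implementation of Figure 12–5, and releases can still be delayed by preemption, but a long iteration no longer shifts every later release time.

```c
/* Sketch: a user-level periodic loop that sleeps until absolute release
 * times, so a long iteration delays only the affected releases instead of
 * shifting all later ones.  Generic POSIX sketch; error handling omitted. */
#include <time.h>

#define PERIOD_NS 100000000L              /* 100 ms period */

static void do_periodic_work(void) { /* body of the periodic task */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);          /* first release time = now */

    for (int i = 0; i < 10; i++) {
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        do_periodic_work();

        next.tv_nsec += PERIOD_NS;                  /* compute the next release time */
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
    }
    return 0;
}
```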

Real-time Extensions Of Linux Operating Systems

5 min read
Real-Time Extensions of Linux Operating Systems: The adoption of Linux has increased steadily in recent years as the operating system becomes increasingly more stable, its performance improves, and more and more Linux applications become available. The remainder of this section describes two extensions that enable applications with firm or hard real-time requirements to run on Linux. As you will see shortly, these extensions fall far short of making Linux compliant to POSIX real-time extensions. More seriously, applications written to run on these extensions are not portable to standard UNIX machines or other commercial real-time operating systems. Important Features. Like NT, Linux also has many shortcomings when used for real-time applications. One of the most serious arises from the disabling of interrupts by Linux subsystems when they are in critical sections. While most device drivers disable interrupts for a few microseconds, the disk subsystem may disable interrupts for as long as a few hundred microseconds at a time. The predictability of the system is seriously damaged when clock interrupts can be blocked for such a long time. The solution to this problem is to rewrite all the offending drivers to make their nonpreemptable sections as short as possible, as they are in real-time operating systems. Neither extension described below attacks this problem head on; rather, one tries to live with it, while the other avoids it. Scheduling. Linux provides individual processes with the choice among the SCHED_FIFO, SCHED_RR, and SCHED_OTHER policies. Processes using the SCHED_FIFO and SCHED_RR policies are scheduled on a fixed-priority basis; these are real-time processes. Processes using SCHED_OTHER are scheduled on a time-sharing basis; they execute at priorities lower than real-time processes.
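As a concrete illustration, a Linux process can place itself in the fixed-priority real-time class with sched_setscheduler. This minimal sketch assumes the process has the necessary privileges (e.g., CAP_SYS_NICE) and uses an arbitrary priority value chosen for illustration.

```c
/* Sketch: making the calling process a fixed-priority (SCHED_FIFO) real-time
 * process under Linux.  Requires appropriate privileges. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 50 };   /* 1..99 on Linux */

    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO at priority %d\n", param.sched_priority);
    return 0;
}
```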

Application Program Interface And SSP Structure

3 min read
Application Program Interface and SSP Structure: The API functions Real-Time Mach provides to support resource reservation include request( ) and modify( ) for requesting and modifying reservations. There are also functions for binding threads to a reservation and specifying the type of a reservation. In addition, an application may create a port for a reservation and request the kernel to send a notification message to that port whenever the budget of the reservation is exhausted. Mercer, et al. [MeST] pointed out that a System Service Provider (SSP) executes mostly in response to requests from clients. If each SSP were to reserve sufficient resources to ensure its responsiveness, most of its reserved resources would be left idle when there is no request for its service. A better alternative is to have the client pass to the SSP, along with the request for service, the reservations the SSP needs to do the work. In a microkernel system adopting this approach, each SSP has only a small CPU reservation. Only its daemon thread executes using this reservation. When a client requests service from an SSP, it allows the SSP to use its own reservations. When the SSP receives a request, the daemon thread wakes up, executes using the SSP's CPU reservation, creates a work thread to execute on the client's behalf, and suspends itself. This work thread uses the client's CPU reservation, as well as reservations of other types of processors. In this way, the resources used by the SSP to service each request are charged to the requesting client. The SSP structure described in the next section is an extension, and we will provide more detail on this structure when we describe the extension.

VxWorks

5 min read
VxWorks: A man on the street may not know VxWorks by name but most likely knows one of its applications, as it was the operating system used on Mars Pathfinder, which NASA sent to Mars in 1997. You recall that after Pathfinder landed on Mars and started to respond to ground commands and take science and engineering data, its computer repeatedly reset itself. This frequent reset made worldwide news daily. Fortunately, JPL and VxWorks were able to pinpoint the problem. This was done with the aid of a trace generation and logging tool and extensive instrumentation of major services (such as pipes, message queues, and interrupt handling), which were used during debugging and testing and were left in the system. They fixed the problem on the ground and thus saved the day for Pathfinder. According to Glenn Reeves, the Mars Pathfinder flight software cognizant engineer [Reev], the cause was the classical uncontrolled priority inversion problem. Although VxWorks provides priority inheritance, the mechanism was disabled. The prolonged blocking experienced by a high-priority task caused it to miss its deadline, and the computer was reset as a consequence. The fix was, of course, to enable priority inheritance. Before moving on, we want to make a couple of parenthetical remarks on lessons learned from this incident. First, we cannot rely on testing to determine whether tasks can complete in time and how often a task may be late. (The computer reset behavior was observed only once during the months of testing on the ground. It was deemed infrequent enough not to warrant concern before Pathfinder left the ground.) Second, it is good practice to leave the instrumentation code added for debugging and testing purposes in the deployed system. System and software engineering experts can give you numerous reasons for doing so, not just so that you can use it to debug the system after it is deployed. Back to VxWorks, the Pathfinder incident highlights the fact that VxWorks allows us to disable major functions such as memory protection and priority inheritance. You can specify what options are to be enabled and what options are to be disabled via global configuration parameters. In the case of Pathfinder, the global configuration variable is the one that specifies whether priority inheritance is enabled for mutual exclusion semaphores. Once this option is set, priority inheritance is enabled or disabled as specified for all mutual exclusion semaphores created afterwards.
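For illustration, the sketch below shows how priority inheritance is typically requested for an individual VxWorks mutual-exclusion semaphore through the semLib creation options. The option and function names are taken from commonly documented VxWorks interfaces and should be verified against the target VxWorks version; the global configuration must also permit the option.

```c
/* Sketch: creating a VxWorks mutual-exclusion semaphore with priority
 * inheritance enabled.  Based on the commonly documented semLib interface;
 * names and headers should be checked against the VxWorks version in use. */
#include <vxWorks.h>
#include <semLib.h>

SEM_ID resource_sem;

void init_resource_sem(void)
{
    /* SEM_INVERSION_SAFE turns on priority inheritance for this semaphore;
     * it is used together with the SEM_Q_PRIORITY queuing option. */
    resource_sem = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void use_resource(void)
{
    semTake(resource_sem, WAIT_FOREVER);   /* holder may inherit a blocked task's priority */
    /* ... critical section ... */
    semGive(resource_sem);
}
```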

Event Notification And Software Interrupt

8 min read
Event Notification and Software Interrupt: Event notification, exception handling, and software interrupts are essential for multitasking systems. Responsive mechanisms are needed to inform threads of the occurrences of timer events, the receipt of messages, the completion of asynchronous I/O operations, and so on. In UNIX systems, the signal is the general mechanism for these purposes. Most of this subsection is devoted to the Real-Time POSIX signal, as its features exemplify what is good and practical for real-time applications. Signal and Similar Mechanisms. We saw earlier that in a UNIX system, interrupt handlers and the kernel use signals as a means to inform threads of the occurrences of exceptions (e.g., divide by zero and bad system call) or waited-for events (e.g., the expiration of a timer and arrival of a message). A thread may signal another thread to synchronize and communicate. (For example, a predecessor thread may signal a successor thread when it completes.) A thread has a service function, called a signal handler. When the kernel delivers a signal to the thread, the signal handler executes. Thus, the signal mechanism provides asynchrony and immediacy, just as hardware interrupts do. Non-UNIX systems typically provide these capabilities using more than one mechanism. As an example, in Windows NT [Solo, Hart], events and Asynchronous Procedure Calls (APCs) serve the same purposes as signals in UNIX systems. Events are set by threads for the purpose of notification and synchronization. They are synchronous. In other words, a thread must explicitly wait for an event for the event to have effect, and the thread is suspended while it waits. When a thread sets an event (object) to the signaled state, the thread(s) waiting on the event is awakened and scheduled. An NT event is an efficient and powerful notification and synchronization mechanism. Being synchronous, NT event delivery does not have the relatively high overhead of asynchronism. It is many-to-many in the sense that multiple threads can wait on one event and a thread can wait for multiple events. In contrast, each UNIX signal is targeted to an individual thread or process, and each signal is handled independently of other signals.
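Real-Time POSIX signals are queued and can carry a small data value, which is part of what makes them practical for real-time event notification. The sketch below is a generic POSIX illustration rather than anything from the text: it blocks a real-time signal and receives it synchronously with sigwaitinfo, a common pattern for real-time threads.

```c
/* Sketch: a Real-Time POSIX signal that is queued and carries a small data
 * value.  The signal is blocked and received synchronously with sigwaitinfo;
 * error handling is abbreviated. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);     /* queue the signal instead of delivering it */

    union sigval value = { .sival_int = 42 };
    sigqueue(getpid(), SIGRTMIN, value);    /* queued; not lost even if others are pending */

    siginfo_t info;
    sigwaitinfo(&set, &info);               /* receive the signal and its data synchronously */
    printf("got signal %d with value %d\n", info.si_signo, info.si_value.sival_int);
    return 0;
}
```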

QNX/Neutrino

3 min read
QNX/Neutrino: QNX is one of the few multiprocessor operating systems suitable for real-time applications that require high-end, networked SMP machines with gigabytes of physical memory. Its microkernel provides essential thread and real-time services. Other operating system services are provided by optional components called resource managers. For example, in the recent version, called Neutrino, the microkernel supports only threads. A process manager is needed to support processes that are memory-protected from each other, and the process manager is optional. Because optional components can be excluded, the operating system can be scaled down to a small size. (The QNX 4.x microkernel is 12 kilobytes in size.) QNX is a message-passing operating system; messages are the basic means of interprocess communication among all threads. Section 12.3.1 already talked about its message-based priority inheritance feature. This is the basic mechanism for priority tracking: messages are delivered in priority order, and the service provider executes at the priority of the highest-priority client waiting for service. QNX implements POSIX message queues outside the kernel but implements QNX's message passing within the kernel. QNX messages are sent and received over channels and connections. When a service provider thread wants to receive messages, it first creates a channel. The channel is named within the service provider by a small integer identifier. To request service from a service provider, a client thread first attaches to the service provider's channel. This attachment is called a connection to the channel. Within the client, this connection is mapped directly to a file descriptor. The client can then send its RFS messages over the connection, in other words, directly to the file descriptor.
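The send/receive/reply flow over channels and connections can be outlined roughly as follows. This is only a sketch based on the commonly documented QNX Neutrino kernel calls (ChannelCreate, ConnectAttach, MsgSend, MsgReceive, MsgReply); the exact flags and headers should be checked against the QNX version in use, and the two sides would normally run in different threads or processes.

```c
/* Sketch: QNX Neutrino send/receive/reply message passing.  The server and
 * client sides would normally run in different threads or processes. */
#include <sys/neutrino.h>
#include <sys/netmgr.h>
#include <sys/types.h>

struct request { int code; };
struct reply   { int status; };

/* Server side: create a channel and service one request. */
void serve_one_request(void)
{
    int chid = ChannelCreate(0);
    struct request req;
    struct reply   rep = { 0 };

    int rcvid = MsgReceive(chid, &req, sizeof(req), NULL);  /* blocks for a message */
    /* ... perform the requested service, ideally at the client's priority ... */
    MsgReply(rcvid, 0, &rep, sizeof(rep));                   /* unblock the client  */
}

/* Client side: attach a connection to the server's channel and send a request. */
void request_service(pid_t server_pid, int server_chid)
{
    int coid = ConnectAttach(ND_LOCAL_NODE, server_pid, server_chid, 0, 0);
    struct request req = { 1 };
    struct reply   rep;

    MsgSend(coid, &req, sizeof(req), &rep, sizeof(rep));     /* blocks until MsgReply */
    ConnectDetach(coid);
}
```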

Two-level Scheduler

5 min read
(a) Operation of the OS scheduler

Initialization:
• Create a total bandwidth server S0 with size u0 < 1 for nonreal-time applications and a passive server for each service provider in the system.
• Set Ut to the total size of S0 and all passive servers.

Acceptance test and admission of a new real-time application Tk:
• Upon request from the application, do the acceptance test as described in Figure 12–11(c).
• If Tk passes the acceptance test, create a server Sk of size uk, following the rules described in Figure 12–11(c).

Maintenance of each server Sk: Replenish the budget and set the deadline of server Sk according to Figure 12–11(b).

Interaction with the server scheduler of each server Sk:
• When a thread in the application Tk becomes ready, invoke the server scheduler of Sk to insert the thread in Sk's ready queue.
• If Tk uses a preemptive scheduling algorithm, before replenishing the budget of Sk, invoke the server scheduler of Sk to update the next event time tk of Tk.
• When a thread in Tk enters its nonpreemptable section, mark the server Sk nonpreemptable until the thread exits the nonpreemptable section, and mark the server preemptable when the nonpreemptable section exceeds the declared maximum length.
• When a thread in Tk requests a global resource, grant the thread the resource and mark the server Sk nonpreemptable until the thread releases the global resource.

Scheduling of all servers: Schedule all ready servers on the EDF basis, except when a server is marked nonpreemptable; when a server is nonpreemptable, it has the highest priority among all servers.

Termination of a real-time application Tk:
• Destroy server Sk.
• Decrease Ut by uk.

Sufficient Schedulability Condition And Acceptance Test

3 min read
Sufficient Schedulability Condition and Acceptance Test: Again, the OS scheduler subjects an application Tk to an acceptance test whenever the application requests to execute in the real-time mode. The required capacity of the application is sk, and in the open system it is scheduled by its server scheduler according to algorithm Σk. Deng, et al. [DLZS] showed that the following conditions together constitute a sufficient condition for the application Tk to be schedulable when executed in the open system according to algorithm Σk.
1. The size ũk of the server Sk used to execute Tk is equal to sk if Tk is predictable, or a larger value (one that grows with the scheduling quantum) if Tk is nonpredictable, where δk,min is the shortest relative deadline of all threads in Tk whose release times or resource acquisition times are unknown.
2. The total size of all servers in the open system is no more than a bound obtained by maximizing, over all real-time applications Tj in the system, a term involving Bj and Dmin,j, where Bj is the maximum execution time of the nonpreemptable sections of all applications other than Tj and Dmin,j is the shortest relative deadline of all threads in Tj.
3. In the interval (t, d − q) from any budget replenishment time t of Sk to q time units before the corresponding server deadline d, where q ≥ 0 and d − q > t, there would be no context switch among the threads in Tk if Tk were to execute alone on a slow processor of speed sk.
Figure 12–11(c) (acceptance test and choice of server size) summarizes the acceptance test that the OS scheduler subjects each application to and the choice of server size that the scheduler makes when admitting a new real-time application. The rationale behind them is conditions 1 and 2 stated above: the OS scheduler makes sure that these conditions are met. Similarly, the replenishment rules in Figure 12–11(b) (server maintenance rules) are such that condition 3 is satisfied for all types of real-time applications.

The Kernel

7 min read
External Interrupts. Hardware interrupts provide an effective mechanism for notifying the application of the occurrences of external events and dealing with sporadic I/O activities. For applications that have such activities, handling external interrupts is an essential function of the kernel. Depending on the source of the interrupt, the amount of time required to handle an interrupt varies. In particular, handling interrupts from DMA (Direct Memory Access) interfaces may take significant amounts of time. A network interface is an example. When there is an incoming message to be delivered, the interface raises an interrupt. In response to this interrupt, the protocol handler executes to identify the thread that is to receive the message, move the message to the address space of the receiving thread, perform handshakes, and so on. The required time can be large and varied. Service routines for disk and network devices can take hundreds of microseconds to tens of milliseconds to complete. For this reason, in most operating systems, interrupt handling is divided into two steps. Immediate Interrupt Service. The first step of interrupt handling is executed at an interrupt priority level. All modern processors support some form of priority interrupt. The relationship between interrupt priority levels and normal thread priorities (i.e., software priorities) is depicted by Figure 12–3: the higher the box, the higher the priority. The number of interrupt priority levels depends on the hardware and is unimportant to our discussion here.

I/O And Networking

3 min read
I/O and Networking: Three modern features of file system and networking software are (1) multithreaded-server architecture, (2) early demultiplexing, and (3) lightweight protocol stack. These features were developed to improve the performance of time-shared applications, high-performance applications, and network appliances. As it turns out, they also benefit real-time applications with enhanced predictability. Multithreaded Server Architecture. Servers in modern operating systems are typically multithreaded. In response to a request, such a server activates (or creates) a work thread to service the request. By properly prioritizing the work thread, we can minimize the duration of priority inversion and better account for the CPU time consumed by the server while serving the client. As an example, we consider a client thread that does a read. Suppose that the file server's request queue is prioritized, the request message has the same priority as the client, and the work thread for each request inherits the priority of the request message. (Section 12.3.1 described a way to do this.) As a consequence, the length of time the client may be blocked is at most equal to the time the server takes to activate a work thread. I/O requests are sent to the disk controller in priority order. For the purpose of schedulability analysis, we can model the client thread as an end-to-end job consisting of a CPU subjob, which is followed by a disk-access subjob, and the disk-access subjob executes on the disk system (i.e., the controller and the disk) and is in turn followed by a CPU subjob. Both CPU subjobs have the same priority as the client thread, and their execution times include the lengths of time the work thread executes. Since the time taken by the server to activate a work thread is small, the possible blocking suffered by the client is small. In contrast, if the server were single-threaded, a client might be blocked for the entire duration when the server executes on behalf of another client, and the blocking time can be orders of magnitude larger. Early Demultiplexing. Traditional protocol handlers are based on layered architectures. They can introduce a large blocking time. For example, when packets arrive over a TCP/IP connection, the protocol module in the network layer acknowledges the receipt of the packets, strips away their headers, reassembles them into IP datagrams, and hands off the datagram to the IP module after each datagram is reassembled. Similarly, the TCP and IP modules reassemble IP datagrams into messages, put the messages in order, and then deliver the messages to the receiver. Because much of this work is done before the identity of the receiver becomes known, it cannot be correctly prioritized. Typically, the protocol modules execute at a higher priority than all user threads and block high-priority threads when they process packets of low-priority clients. The duration of priority inversion can be reduced and controlled only by identifying the receiver of each message (i.e., demultiplexing incoming messages) as soon as possible. Once the receiver is known, the execution of the protocol modules can be at the priority of the receiver. Traditionally, incoming messages are first moved to the buffer space of the protocol modules. (Figure 11-1 is based on this assumption.) They are then copied to the address space of the receiver. Early demultiplexing also makes it possible to eliminate the extra copying.
Some communication mechanisms (e.g., Illinois FM (Fast Messages) [PaLC]) for high-speed local networks take this approach. The fact that messages are copied directly between the net

Memory Management

4 min read
Memory Management: A mode change cannot complete unless the operating system can find sufficient memory for the code, data, and run-time stacks of the threads that execute in the new mode. We now discuss those aspects of memory management that call for attention. They are virtual memory mapping, paging, and memory protection. Whereas all general-purpose operating systems support virtual memory and memory protection, not all real-time operating systems do, and those that do typically provide the user with the choice of protection or no protection. Unlike nonreal-time applications, some real-time applications do not need these capabilities, and we do not get these capabilities without penalty. Virtual Memory Mapping. We can divide real-time operating systems into three categories depending on whether they support virtual memory mapping (i.e., virtual contiguity) and paging (i.e., demand paging or swapping). Real-time operating systems designed primarily for embedded real-time applications such as data acquisition, signal processing, and monitoring may not support virtual memory mapping. The pSOSystem is an example [Moto]. Upon request, the system creates physically contiguous blocks of memory for the application. The application may request variable-size segments from its memory block and define a memory partition consisting of physically contiguous, fixed-size buffers. Memory fragmentation is a potential problem for a system that does not support virtual mapping. After allocating variable-size segments, large fractions of individual blocks may be unused. The available space may not be contiguous, and contiguous areas in memory may not be big enough to meet the application's buffer space demand. The solution is to provide virtual memory mapping from physical addresses, which may not be contiguous, to a contiguous, linear virtual address space seen by the application.
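The fixed-size buffer partition mentioned above is easy to picture with a small sketch: a contiguous block is carved into equal buffers handed out from a free list in constant time, so allocation cannot fragment the block. This is a generic illustration, not the pSOSystem API; all sizes are made up.

```c
/* Sketch: a memory partition of physically contiguous, fixed-size buffers,
 * managed with a simple free list.  Allocation and release are O(1) and
 * fragmentation-free. */
#include <stdio.h>

#define BUF_SIZE 128
#define NUM_BUFS 16

static unsigned char partition[NUM_BUFS][BUF_SIZE];
static void *free_list[NUM_BUFS];
static int   free_count;

static void partition_init(void)
{
    for (int i = 0; i < NUM_BUFS; i++)
        free_list[i] = partition[i];
    free_count = NUM_BUFS;
}

static void *buf_alloc(void)
{
    return free_count > 0 ? free_list[--free_count] : NULL;
}

static void buf_free(void *p)
{
    free_list[free_count++] = p;
}

int main(void)
{
    partition_init();
    void *b = buf_alloc();
    printf("allocated buffer at %p, %d buffers left\n", b, free_count);
    buf_free(b);
    return 0;
}
```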

Processor Reserves And Resource Kernel

7 min read
PROCESSOR RESERVES AND RESOURCE KERNEL One can easily argue that a real-time operating system should support as options admission control, resource reservation, and usage enforcement for applications that can afford the additional size and complexity of the option. By monitoring and controlling the workload, the operating system can guarantee the real-time performance of the applications it admits into the system. This is the objective of the CPU capacity-reservation mechanism proposed by Mercer, et al. [MeST] to manage quality of service of multimedia applications. Rajkumar, et al. [RKMO] have since extended the CPU reserve abstraction to reserves of other resources and used it as the basis of resource kernels. In addition to Real-Time Mach, NT/RK also provides resource kernel primitives. Commercial operating systems do not support this option. Resource Model and Reservation Types: A resource kernel presents to applications a uniform model of all time-multiplexed resources, for example, CPU, disk, and network link. We have been calling these resources processors and will continue to do so. An application running on a resource kernel can ensure its real-time performance by reserving the resources it needs to achieve that performance. To make a reservation on a processor, it sends the kernel a request for a share of the processor. The kernel accepts the request only when it can provide the application with the requested amount on a timely basis. Once the kernel accepts the request, it sets aside the reserved amount for the application and guarantees the timely delivery of the amount for the duration of the reservation.

Scheduling Mechanisms

7 min read
Scheduling Mechanisms: This section discusses several aspects of the implementation of algorithms for scheduling periodic tasks and aperiodic tasks. In particular, we call attention to scheduling services that the kernel can easily provide and that can significantly simplify the implementation of complex algorithms for scheduling aperiodic tasks at the user level. Fixed-Priority Scheduling. All modern operating systems support fixed-priority scheduling. Many real-time operating systems provide 256 priority levels. A fixed-priority system with this many priority levels performs as well as an ideal system that has an infinite number of priority levels. (The loss in schedulable utilization of a rate-monotonically scheduled system due to nondistinct priorities is negligible.) In contrast, a general-purpose operating system may provide fewer levels. For example, there are only 16 real-time priority levels in Windows NT. A parameter of the create-thread function is the priority of the new thread; the priority of each new thread is set at its assigned priority, that is, the priority chosen for the thread according to the algorithm used to schedule the application. (This is sometimes done in two steps: The thread is first created and then its priority is set.) Once created, the assigned priority of the thread is kept in its TCB. In a system that supports priority inheritance or the ceiling-priority protocol, a thread may inherit a priority higher than its assigned priority. This current priority also needs to be kept in the thread's TCB. To support fixed-priority scheduling, the kernel maintains a ready queue for each priority level. Whenever a thread is ready to execute, the kernel places it in the ready queue of the thread's current priority. For this reason, the current priority of a thread is often called its dispatch priority.
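
The mechanism just described amounts to an array of FIFO ready queues indexed by priority. The sketch below is a generic illustration (smaller numbers mean higher priority), not the dispatcher of any particular kernel.

```python
from collections import deque

NUM_PRIORITIES = 256          # many RTOSs provide 256 fixed priority levels

# One FIFO ready queue per priority level; index 0 is the highest priority.
ready_queues = [deque() for _ in range(NUM_PRIORITIES)]

def make_ready(thread, current_priority):
    """Place a thread in the ready queue of its current (dispatch) priority."""
    ready_queues[current_priority].append(thread)

def dispatch():
    """Return the next thread to run: head of the highest nonempty queue."""
    for queue in ready_queues:
        if queue:
            return queue.popleft()
    return None               # no ready thread; the processor idles

make_ready("logger", current_priority=200)
make_ready("brake_controller", current_priority=5)
print(dispatch())             # -> brake_controller
```

A production kernel would also keep a bitmap of nonempty queues so the highest-priority ready thread can be located in constant time rather than by scanning.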

Other Basic Operating System Functions

7 min read
OTHER BASIC OPERATING SYSTEM FUNCTIONS This section continues our discussion of operating system services that are essential for all but the simplest embedded applications. Specifically, it discusses real-time issues in communication and synchronization, software interrupt, memory management, I/O, and networking. Communication and Synchronization: As they execute, threads communicate (i.e., they exchange control information and data). They synchronize in order to ensure that their exchanges occur at the right times and under the right conditions and that they do not get in each other's way. Shared memory, message queues, synchronization primitives (e.g., mutexes, condition variables, and semaphores), and events and signals are commonly used mechanisms for these purposes. Almost all operating systems provide a variety of them in one form or another. (The only exceptions are single-threaded executives intended solely for small and deterministic embedded applications.) This subsection discusses message queues, mutexes, and reader/writer locks, leaving events and signals to the next subsection. We skip shared memory entirely. Shared memory provides a low-level, high-bandwidth, and low-latency means of interprocess communication. It is commonly used for communication not only among processes that run on one processor but also among processes that run on tightly coupled multiprocessors. (An example of the latter is radar signal processing. A shared memory between signal and data processors makes the large number of track records produced by the signal processors available to the tracking process running on the data processor or processors.) We gloss over this scheme because we have little that is specific to real-time applications to add to what has already been said about it in the literature. (As an example, Gallmeister [Gall] has a concise and clear explanation of how to use shared memory in general and in systems compliant with the POSIX real-time extensions in particular.) The little we have to add is the fact that real-time applications sometimes do not explicitly synchronize accesses to shared memory; rather, they rely on "synchronization by scheduling," that is, threads that access the shared memory are so scheduled as to make explicit synchronization unnecessary. Thus, the application developer transfers the burden of providing reliable access to shared memory from synchronization to scheduling and schedulability analysis. The cost is that many hard real-time requirements arise as a result, and the system is brittle. Using semaphores and mutexes to synchronize access to shared memory is the recommended alternative.
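
For contrast with synchronization by scheduling, the fragment below sketches the recommended alternative: explicit synchronization of a shared data structure with a mutex. It uses Python's threading module purely for illustration; the names are invented.

```python
import threading

class TrackStore:
    """Shared track records protected by a mutex rather than by scheduling."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tracks = []

    def publish(self, track):
        # The signal-processing side adds a new track record.
        with self._lock:
            self._tracks.append(track)

    def snapshot(self):
        # The tracking side reads a consistent copy of all records.
        with self._lock:
            return list(self._tracks)


store = TrackStore()
producer = threading.Thread(target=lambda: store.publish({"id": 1, "range": 42.0}))
producer.start()
producer.join()
print(store.snapshot())
```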

Server Maintenance

5 min read
Server Maintenance: The OS scheduler maintains (i.e., replenishes its budget and sets its deadline) the server Sk for each real-time application Tk with the help of the scheduler of Sk. Figure 12–11(b) summarizes the rules the OS scheduler uses. In the figure, ũ denotes the size of the server being maintained. For simplicity, we take as the time origin the time instant immediately after the server is created; this is the instant when the application executed by the server begins to execute in real-time mode.
(b) Server maintenance rules
Inputs: server size = ũ, scheduling quantum = q, start time of the application = 0.
• If the application is cyclically scheduled with frame size f:
– server type: constant utilization.
– replenishment rule: at times 0, f, 2f, . . ., set the budget to ũf and the deadline to f, 2f, 3f, . . ., respectively.
• If the application is nonpreemptively scheduled:
– server type: constant utilization.
– replenishment rule: the constant utilization server algorithm.
• If the application is unpredictable:
– server type: total bandwidth server-like.
– replenishment rule:
∗ replenishment time t: (1) when the ready queue is empty and a thread is released, or (2) when the ready queue is nonempty, (i) t is the current server deadline d, or (ii) the budget is exhausted at t and the next event time is not the release time of a thread with priority higher than the thread at the head of the server ready queue.
∗ budget replenished: ũ(tn + q − max(t, d)), where tn is a lower bound on the next event time and q is the scheduling quantum of the open system.
∗ server deadline: tn + q.
• If the application is preemptively scheduled but predictable:
– server type: total bandwidth server.
– replenishment rule: same as the rule for unpredictable applications, except that q is equal to 0 and tn is the next event time.

Sporadic Servers

2 min read
2. The server does not begin to execute until time 3.5. At that time, tr is 0, BEGIN is equal to 3, and END is equal to 3.5. According to rule R2, the effective replenishment time te is equal to max(0, 3.0) = 3, and the next replenishment time is set at 8.
3. The server executes until time 4; while it executes, its budget decreases with time.
4. At time 4, the server is preempted by T2. While it is preempted, it holds on to its budget.
5. After the server resumes execution at 5, its budget is consumed until exhaustion because first it executes (C1) and then, when it is suspended again, T1 and T2 are idle (or, equivalently, END, which is 5.0, is less than the current time) (C2).
6. When the aperiodic job A2 arrives at time 7, the budget of the server is exhausted; the job waits in the queue.
7. At time 8, its budget replenished (R3), the server is ready for execution again.
8. At time 9.5, the server begins to execute for the first time since 8. te is equal to the latest replenishment time 8. Hence the next replenishment time is 13. The server executes until its budget is exhausted at 11; it is suspended and waits for the next replenishment time. In the meantime, A2 waits in the queue.
9. Its budget replenished at time 13, the server is again scheduled and begins to execute at time 13.5. This time, the next replenishment time is set at 18. However, at 13.5 the periodic task system T becomes idle. Rather than at 18, the budget is replenished at 15, when a new busy interval of T begins, according to rule R3b.
10. The behavior of the later segment also obeys the rules stated above. In particular, rule R3b allows the server budget to be replenished at 19.

Simple Sporadic Servers In Deadline-driven Systems

4 min read
Simple Sporadic Servers in Deadline-Driven Systems: When the periodic tasks are scheduled on the EDF basis, the priorities of the tasks vary as their jobs are released and completed. Consequently, the membership of the subset of tasks with higher priorities than the server varies with time. Some of the rules of simple sporadic servers stated earlier for fixed-priority systems must be modified for this reason. The rationale behind the modified rules remains the same: a simple sporadic server of period ps and budget es that follows the rules behaves like a periodic task (ps, es). The consumption and replenishment rules of simple sporadic servers in a system where all tasks are scheduled according to the EDF algorithm are stated below. We again use tr to denote the latest replenishment time. However, the notation te, the effective latest replenishment time, has a different meaning from that of a fixed-priority sporadic server. Specifically, rule R2 below governs how the value of te is determined. The server is ready for execution only when it is backlogged (i.e., the aperiodic job queue is nonempty) and its deadline d (and hence its priority) is set. The server is suspended whenever its deadline is undefined or the server is idle (i.e., the aperiodic job queue becomes empty). Consumption Rules of a Simple Deadline-Driven Sporadic Server: The server's execution budget is consumed at the rate of one per unit time, until exhaustion, whenever either of the following two conditions is true. When neither condition is true, the server holds its budget.
C1 The server is executing.
C2 The server deadline d is defined, the server is idle, and there is no job with a deadline before d ready for execution.

Performance Of Bandwidth-preserving Server Algorithms

6 min read
Performance of Bandwidth-Preserving Server Algorithms: Because no job of any accepted sporadic task is rejected by the scheduler, it is possible for a sporadic task to overload the processor. For this reason, the scheduler must provide firewall protection not only to tasks and jobs that have hard deadlines, but also to each existing sporadic task, so that the performance guarantee of the task will not be violated when other sporadic tasks misbehave. The bandwidth-preserving server algorithms described in earlier sections are designed to provide such protection and, therefore, are ideally suited for scheduling sporadic tasks with soft deadlines. We focus here on systems that use a bandwidth-preserving server to execute each sporadic task. During an acceptance test, the scheduler can use an appropriate schedulability test to determine the maximum schedulable size of the server chosen to execute the new task. Hence, whether a new sporadic task is acceptable reduces to the question of whether its required performance can be achieved when its server has the maximum schedulable size. This question can be answered by analyzing the new sporadic task alone, without regard to other tasks. We now ask what the maximum response time of jobs is in a sporadic task that is executed by a constant utilization, total bandwidth, or WFQ server in a deadline-driven system. Clearly, the size ũ of the server must be at least equal to the average utilization u (i.e., the ratio of the average execution time to the average interarrival time) of the sporadic task S. Queueing theory tells us that even when ũ = u, the average response time of the jobs in S is unbounded if their execution times and interarrival times are not otherwise constrained.

Alternative Approaches

7 min read
Alternative Approaches: All the algorithms described in this chapter attempt to provide improved performance over the three commonly used approaches: background, polled, and interrupt-driven executions. For the sake of simplicity and clarity, we ignore sporadic jobs for now. Background and Interrupt-Driven Execution versus Slack Stealing. According to the background approach, aperiodic jobs are scheduled and executed only at times when there is no periodic or sporadic job ready for execution. Clearly this method always produces correct schedules and is simple to implement. However, the execution of aperiodic jobs may be delayed and their response times prolonged unnecessarily. As an example, we consider the system of two periodic tasks T1 = (3, 1) and T2 = (10, 4) shown in Figure 7–2. The tasks are scheduled rate-monotonically. Suppose that an aperiodic job A with execution time equal to 0.8 is released (i.e., arrives) at time 0.1. If this job is executed in the background, its execution begins after J1,3 completes (i.e., at time 7), as shown in Figure 7–2(a). Consequently, its response time is 7.7. An obvious way to make the response times of aperiodic jobs as short as possible is to make their execution interrupt-driven. Whenever an aperiodic job arrives, the execution of periodic tasks is interrupted, and the aperiodic job is executed. In this example, A would execute starting from 0.1 and have the shortest possible response time. The problem with this scheme is equally obvious: If aperiodic jobs always execute as soon as possible, periodic tasks may miss some deadlines. In our example, if the execution time of A were equal to 2.1, both J1,1 and J2,1 would miss their deadlines.

Fairness And Starvation

3 min read
the execution times of jobs in these tasks are 1, 1, 3, and 3, respectively. Each aperiodic task is executed by a server. The sizes of the servers for the tasks are 1/4, 1/8, 1/4, and 3/8, respectively. Starting from time 0, the jobs in A1, A2, and A3 arrive and keep their respective servers backlogged continuously. The first job in A4 arrives at time 18, and afterwards, the server for A4 also remains backlogged. Figure 7–14(a) shows their schedules when the servers are total bandwidth servers. Server TBi executes Ai, for i = 1, 2, 3, 4. The numbers under each time line labeled by a server name give the deadlines of the server as time progresses. (You recall that the budget of a backlogged server is replenished and its deadline set each time it completes a job.) In particular, when the first job in A4 arrives at time 18, the deadlines of servers TB1, TB2, and TB3 are 36, 40, and 36, respectively. Since the deadline of TB4 is first set to 26 and, upon the completion of the first job in A4, to 34, TB4 executes until 24, starving the other servers in (18, 24). Before time 18, the amounts of time allocated to the servers according to their sizes are 4.5, 2.25, and 4.5, respectively, but because TB4 is idle and the time left unused by it is shared by the backlogged servers, the servers have executed for 8, 4, and 6 units of time, respectively. In contrast, their fair shares of processor time should be 7.2, 3.6, and 7.2, respectively. (The fair fractional share of a backlogged server in a time interval is equal to its size divided by the sum of the sizes of all servers backlogged during the interval. So the fair shares of the three servers are 2/5, 1/5, and 2/5 of 18.) This example supports our intuition: The closer the consecutive deadlines (i.e., the larger the server size and the smaller the execution times of jobs), the larger the share of background time a server attains at the expense of other backlogged servers. Now suppose that each aperiodic task Ai is executed by a starvation-free constant utilization/background server CUi, for i = 1, 2, 3, 4. We have the schedule shown in Figure 7–14(b). At time 6, the budgets of all three backlogged servers are exhausted, and the system

Real Time Performance For Jobs With Soft Timing Constraints

5 min read
REAL-TIME PERFORMANCE FOR JOBS WITH SOFT TIMING CONSTRAINTS: For many applications, occasional missed deadlines are acceptable; their sporadic jobs have soft deadlines. This section discusses the performance of the algorithms described in previous sections when they are used to schedule jobs of this type. In the remainder of this section, except where stated otherwise, by a sporadic job we mean one whose deadline is soft. Traffic Models: You recall that each sporadic task is a stream of sporadic jobs that have the same interrelease-time and execution-time distributions and the same real-time performance requirements. The real-time performance experienced by each sporadic task is typically measured in terms of criteria such as the maximum tardiness and miss rate of the jobs in it. (These terms were defined in Section 3.8.) In a system that provides each sporadic task with some kind of performance guarantee, the system subjects each new sporadic task to an acceptance test. Once a sporadic task is admitted into the system, the scheduler accepts and schedules every job in it. Specifically, when requesting admission into the system, each sporadic task presents to the scheduler its traffic parameters. (These parameters are collectively called the flow specification of the task in the communications literature.) These parameters define the constraints on the interarrival times and execution times of jobs in the task. The performance guarantee provided by the system to each task is conditional, meaning that the system delivers the guaranteed performance on the condition that the task meets the constraints defined by its traffic parameters. The flip side is that if the task misbehaves (i.e., its actual parameters deviate from its traffic parameters), the performance it experiences may deteriorate. Different traffic models use different traffic parameters to specify the behavior of a sporadic task. In addition to the periodic task and sporadic task models, there are the Ferrari and Verma (FeVe) model [FeVe], the (λ, β) model [Cruz], and the leaky bucket model. These models are commonly used to characterize real-time traffic in communication networks.
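
To make the notion of a traffic specification concrete, the sketch below checks whether a stream of job arrivals conforms to a leaky-bucket description with bucket size sigma and token rate rho. The parameter names and the conformance test are a generic rendering of the leaky-bucket idea, not a quotation of any of the cited models.

```python
class LeakyBucket:
    """Conformance check against a (sigma, rho) leaky-bucket traffic specification."""

    def __init__(self, sigma, rho):
        self.sigma = sigma        # bucket size: maximum burst of execution time
        self.rho = rho            # token rate: long-term average demand per unit time
        self.tokens = sigma       # the bucket starts full
        self.last = 0.0           # time of the previous arrival

    def conforms(self, t, execution_time):
        """Return True if a job needing `execution_time` released at `t` conforms."""
        # Tokens accumulate at rate rho, but never beyond the bucket size.
        self.tokens = min(self.sigma, self.tokens + (t - self.last) * self.rho)
        self.last = t
        if execution_time <= self.tokens:
            self.tokens -= execution_time
            return True
        return False              # the task has misbehaved; guarantees may lapse


bucket = LeakyBucket(sigma=2.0, rho=0.25)
print(bucket.conforms(0.0, 1.5))   # True: within the initial burst allowance
print(bucket.conforms(1.0, 1.0))   # False: only 0.75 tokens have accumulated
```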

Preemptive Weighted Fair-queueing Algorithm

6 min read
Preemptive Weighted Fair-Queueing Algorithm: The WFQ algorithm is designed to ensure fairness among multiple servers. As you will see shortly, the algorithm closely resembles the total bandwidth server algorithm. Both are greedy, that is, work conserving. Both provide the same schedulability guarantee and, hence, the same worst-case response time. At a quick glance, the replenishment rules of a WFQ server appear to be the same as those of a total bandwidth server, except for how the deadline is computed at each replenishment time. This difference, however, leads to a significant difference in their behavior: The total bandwidth server algorithm is unfair, but the WFQ algorithm gives bounded fairness. Emulation of the GPS Algorithm. Again, a WFQ server consumes its budget only when it executes. Its budget is replenished when it first becomes backlogged after being idle. As long as it is backlogged, its budget is replenished each time it completes a job. At each replenishment time, the server budget is set to the execution time of the job at the head of its queue. In short, the replenishment rules of the WFQ algorithm are such that a WFQ server emulates a GPS server of the same size; the deadline of the WFQ server is the time at which a GPS server would complete the job at the head of the server queue. To illustrate, we look at the example in Figure 7–14 again. Figure 7–14(c) shows the schedule of the four tasks Ai, for i = 1, 2, 3, 4, when they are scheduled according to the GPS algorithm. Specifically, the figure shows that they are executed by GPS servers GPS1, GPS2, GPS3, and GPS4, respectively, and that the sizes of these servers are 1/4, 1/8, 1/4, and 3/8, respectively. The scheduler schedules backlogged servers on a weighted round-robin basis, with an infinitesimally small round length; the time per round given to each server is proportional to the server size. The numbers below each time line in Figure 7–14(c) give the completion times of jobs executed by the respective server.

Enhancements Of Fixed-priority Sporadic Server

8 min read
Enhancements of the Fixed-Priority Sporadic Server: We now discuss several enhancements as an exercise in server design. If we are willing to make sporadic servers somewhat more complex and pay a slightly higher scheduling overhead, there are many ways to replenish the execution budget more aggressively and preserve it longer. One point to keep in mind in this discussion is the relationship between the replenishment and consumption rules. If we change one rule, we may need to change some other rule(s) in order for the entire set of rules to remain correct. Sporadic/Background Server. The example in Figure 7–8 tells us that rule R3b of the simple fixed-priority sporadic server is overly conservative. At time 18.5, the periodic system becomes idle. The aperiodic job A3 remains incomplete, but the server budget is exhausted. According to this rule, the budget is not replenished until the job J3,2 is released at 19. While A3 waits, the processor idles. It is clear that the schedulability of the periodic system will not be adversely affected if we replenish the server budget at 18.5 and let the server execute until 19 with its budget undiminished. In this way, the server claims all the background time that is not used by the periodic task system. For this example, A3 would complete by time 19 if the server were allowed to claim the background time. We call a sporadic server that claims the background time a sporadic/background server; in essence, it is a combination of a sporadic server and a background server and is defined by the following rules. When there is only one server in the system, there is no reason not to make it a sporadic/background server. When there is more than one sporadic/background server, the highest-priority server that can execute at the time gets the background time. This may or may not be desirable. For this reason, we may want to keep some servers simple and give the background time to some other servers, or even make all sporadic servers simple servers and use a separate background server to execute in the background of the periodic task system and the servers.

Scheduling Nonpredictable Applications

3 min read
Scheduling Nonpredictable Applications: In general, the actual release times of some jobs in a nonpredictable application Ti may be unknown. It is impossible for its server scheduler to determine the next release time t′ of Ti precisely. Therefore, some priority inversion due to overreplenishment of the server budget is unavoidable. Fortunately, the bad effect on the schedulability of Ti can be compensated for by making the size ũi of the server larger than the required capacity si of the application. Budget Replenishment. Let t′e denote an estimate of the next release time t′ of Ti. The server scheduler computes this estimate at each replenishment time t of the server as follows. If the earliest possible release time of every job in Ti is known, the server scheduler uses as t′e the minimum of the future earliest release times plus an error factor ε > 0. When the earliest release times of some jobs in Ti are unknown, t′e is equal to t + ε, where t is the current time. After obtaining the value t′e from the server scheduler, the OS scheduler sets the server budget to min(er, (t′e − t)ũi) and the server deadline to min(t + er/ũi, t′e). ε is a design parameter. It is called the quantum size of the two-level scheduler. In the absence of any knowledge of the release times of jobs in an application, the OS scheduler replenishes the budget of its server every ε units of time. Hence, the smaller the quantum size, the more frequent the server replenishments, and the higher the server maintenance overhead. However, the size of the server required to execute the application grows with the quantum size, as will become evident below.
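
The replenishment computation above fits in a few lines. The sketch below follows the symbols in the text, with one reading that is our assumption: we treat er as the remaining execution time the server must still supply for the job at the head of its ready queue.

```python
def replenish_nonpredictable(t, er, next_release_estimates, u_server, epsilon):
    """Compute budget and deadline of a server for a nonpredictable application.

    t: current (replenishment) time
    er: remaining execution demand of the job at the head of the server's
        ready queue (our reading of the symbol er in the text)
    next_release_estimates: known lower bounds on future release times, or None
    u_server: server size (the quantity written as u~_i above)
    epsilon: quantum size of the two-level scheduler
    """
    if next_release_estimates:
        # Earliest possible future release, padded by the error factor epsilon.
        t_e = min(next_release_estimates) + epsilon
    else:
        # Nothing is known about future releases: re-examine after one quantum.
        t_e = t + epsilon

    budget = min(er, (t_e - t) * u_server)
    deadline = min(t + er / u_server, t_e)
    return budget, deadline


# Example with invented numbers: 2 time units of work left, server size 0.25.
print(replenish_nonpredictable(t=10.0, er=2.0, next_release_estimates=[12.0],
                               u_server=0.25, epsilon=0.5))   # (0.625, 12.5)
```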

Scheduling Predictable Applications

6 min read
Scheduling Predictable Applications: An important property of all types of predictable applications is that such an application is schedulable according to the two-level scheme if the size of its server is equal to its required capacity and its server is schedulable. In contrast, a nonpredictable application requires a server of a size larger than its required capacity, as we will explain below. Nonpreemptively Scheduled Applications. Specifically, according to the two-level scheme, each nonpreemptively scheduled application Ti is executed by a constant utilization server whose size ũi is equal to the required capacity si of the application. The server scheduler orders jobs in Ti according to the algorithm used by Ti. Let B denote the maximum execution time of the nonpreemptable sections of all jobs in all applications in the system, and let Dmin denote the minimum of the relative deadlines of all jobs in these applications. Corollary 7.8 says that all servers are schedulable if the total size of all servers is no greater than 1 − B/Dmin. To show that every job in a nonpreemptively scheduled real-time application Ti completes in time when scheduled according to the two-level scheme, we consider a time instant at which the OS scheduler replenishes the server budget and sets the server deadline to execute a job in Ti. This time instant is the same as the scheduling decision time at which the job would be scheduled if Ti were executed alone on a slow processor whose speed is si times the speed of the physical processor. The job would complete on the slow processor at the deadline of the server. Since the server is schedulable, it surely will execute for as long as the execution time of the job (and hence complete the job) by the server deadline according to the two-level scheme.
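
The condition of Corollary 7.8 translates into a one-line admission check. The numbers in the example below are hypothetical.

```python
def servers_schedulable(server_sizes, max_nonpreemptable_section, min_relative_deadline):
    """Sufficient condition of Corollary 7.8: total server size <= 1 - B/Dmin."""
    B = max_nonpreemptable_section
    d_min = min_relative_deadline
    return sum(server_sizes) <= 1.0 - B / d_min


# Hypothetical system: three servers, longest nonpreemptable section 2,
# shortest relative deadline 20, so at most 90% of the processor may be used.
print(servers_schedulable([0.4, 0.3, 0.15], max_nonpreemptable_section=2.0,
                          min_relative_deadline=20.0))   # True (0.85 <= 0.9)
```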

Constant Utilization Server Algorithm

6 min read
Constant Utilization Server Algorithm: We now return our attention to the problem of scheduling aperiodic jobs amid periodic tasks in a deadline-driven system. For the purpose of executing aperiodic jobs, there is a basic constant utilization server. The server is defined by its size, which is its instantaneous utilization ũs; this fraction of processor time is reserved for the execution of aperiodic jobs. As with deferrable servers, the deadline d of a constant utilization server is always defined. It also has an execution budget, which is replenished according to the replenishment rules described below. The server is eligible and ready for execution only when its budget is nonzero. When the server is ready, it is scheduled with the periodic tasks on the EDF basis. While a sporadic server emulates a periodic task, a constant utilization server emulates a sporadic task with a constant instantaneous utilization, hence its name. Consumption and Replenishment Rules. The consumption rule of a constant utilization server, as well as that of a total bandwidth or weighted fair-queueing server, is quite simple. A server consumes its budget only when it executes. You will see shortly that such a server never has any budget when there is no aperiodic job ready for execution. Hence the problem of dealing with chunks of budget never arises. The budget of a basic constant utilization server is replenished and its deadline set according to the following rules. In the description of the rules, ũs is the size of the server, es is its budget, and d is its deadline. t denotes the current time, and e denotes the execution time of the job at the head of the aperiodic job queue. The job at the head of the queue is removed when it completes. The rules assume that the execution time e of each aperiodic job becomes known when the job arrives. We will return later to discuss how to remove this restriction.
Replenishment Rules of a Constant Utilization Server of Size ũs
R1 Initially, es = 0 and d = 0.
R2 When an aperiodic job with execution time e arrives at time t to an empty aperiodic job queue,
(a) if t < d, do nothing;
(b) if t ≥ d, set d = t + e/ũs and es = e.
R3 At the deadline d of the server,
(a) if the server is backlogged, set the server deadline to d + e/ũs and es = e;
(b) if the server is idle, do nothing.
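
The rules R1–R3 can be captured directly in code. The sketch below is our minimal rendering of them under the stated assumption that each job's execution time is known on arrival; it is an illustration, not code from [DeLS].

```python
class ConstantUtilizationServer:
    """Budget/deadline bookkeeping for a basic constant utilization server."""

    def __init__(self, size):
        self.size = size          # server size (the quantity written as u~_s above)
        self.budget = 0.0         # R1: es = 0
        self.deadline = 0.0       # R1: d = 0
        self.queue = []           # pending aperiodic jobs as (name, execution_time)

    def arrive(self, t, name, execution_time):
        """R2: a job arrives at time t; replenish only if the queue was empty."""
        was_empty = not self.queue
        self.queue.append((name, execution_time))
        if was_empty and t >= self.deadline:          # R2(b)
            self.deadline = t + execution_time / self.size
            self.budget = execution_time
        # R2(a): if t < d, do nothing; the job waits for the next deadline.

    def complete_head(self):
        """The job at the head of the queue is removed when it completes."""
        self.queue.pop(0)

    def at_deadline(self):
        """R3: invoked when the current time reaches the server deadline d."""
        if self.queue:                                # R3(a): backlogged
            _, e = self.queue[0]
            self.deadline += e / self.size
            self.budget = e
        # R3(b): idle -> do nothing


cus = ConstantUtilizationServer(size=0.25)
cus.arrive(t=0.0, name="A1", execution_time=1.0)
print(cus.deadline)   # 4.0: the job is allotted 1.0 / 0.25 time units
```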

A Two-Level Scheme For Integrated Scheduling

5 min read
A TWO-LEVEL SCHEME FOR INTEGRATED SCHEDULING This section describes a two-level scheduling scheme [DeLS] that provides timing isolation to individual applications executed on one processor. Each application contains an arbitrary number and types of tasks. By design, the two-level scheme allows different applications to use different scheduling algorithms (e.g., some may be scheduled in a clock-driven manner while others are scheduled in a priority-driven manner). Hence, each application can be scheduled in the way best for the application. More importantly, the schedulability and real-time performance of each application can be determined independently of the other applications executed on the same processor. By emulating an infinitesimally fine-grain time-slicing scheme, the two-level scheme creates a slower virtual processor for each application in the system. Overview and Terminology. According to the two-level scheme, each application is executed by a server. The scheduler at the lower level is called the OS scheduler. It replenishes the budget and sets the deadline of each server in the manner described below and schedules the ready servers on the EDF basis. At the higher level, each server has a server scheduler; this scheduler schedules the jobs in the application executed by the server according to the algorithm chosen for the application. Required Capacity. In the description below, we use Ti for i = 1, 2, . . . , n to denote n real-time applications on a processor; each of these applications is executed by a server. To determine the schedulability and performance of each application Ti, we examine the tasks in it as if the application executed alone on a slower processor whose speed is a fraction s of the speed of the physical processor. In other words, we multiply the execution time of every job in the application by a factor 1/s > 1 and use the product in place of the execution time in the schedulability test on the application. The minimum fraction of speed at which the application is schedulable is called its required capacity si. The required capacity of every application must be less than 1.

Challenges In Validating Timing Constraints In Priority-driven Systems

6 min read
CHALLENGES IN VALIDATING TIMING CONSTRAINTS IN PRIORITY-DRIVEN SYSTEMS Compared with the clock-driven approach, the priority-driven scheduling approach has many advantages. As examples, you may have noticed that priority-driven schedulers are easy to implement. Many well-known priority-driven algorithms use very simple priority assignments, and for these algorithms, the run-time overhead due to maintaining a priority queue of ready jobs can be made very small. A clock-driven scheduler requires the information on the release times and execution times of the jobs a priori in order to decide when to schedule them. In contrast, a priority-driven scheduler does not require most of this information, making it much better suited for applications with varying time and resource requirements. You will see in later chapters other advantages of the priority-driven approach which are at least as compelling as these two. Despite its merits, the priority-driven approach has not been widely used in hard real-time systems, especially safety-critical systems, until recently. The major reason is that the timing behavior of a priority-driven system is nondeterministic when job parameters vary. Consequently, it is difficult to validate that all the jobs scheduled in a priority-driven manner indeed meet their deadlines when the job parameters vary. In general, this validation problem [LiHa] can be stated as follows: Given a set of jobs, the set of resources available to the jobs, and the scheduling (and resource access-control) algorithm to allocate processors and resources to jobs, determine whether all the jobs meet their deadlines. Anomalous Behavior of Priority-Driven Systems: Figure 4–8 gives an example illustrating why the validation problem is difficult when the scheduling algorithm is priority-driven and job parameters may vary. The simple system contains four independent jobs. The jobs are scheduled on two identical processors in a priority-driven manner. The processors maintain a common priority queue, and the priority order is J1, J2, J3, and J4, with J1 having the highest priority. In other words, the system is dynamic. The jobs may be preempted but never migrated, meaning that once a job begins execution on a processor, it is constrained to execute on that processor until completion. (Many systems fit this simple model. For example, the processors may model two redundant data links connecting a source and destination pair, and the jobs are message transmissions over the links. The processors may also model two replicated database servers, and the jobs are queries dynamically dispatched by a communication processor to the database servers.) The release times, deadlines, and execution times of the jobs are listed in the table. The execution times of all the jobs are fixed and known, except for J2. Its execution time can be any value in the range [2, 6].

Constant Utilization, Total Bandwidth, And Weighted Fair-queueing Servers

6 min read
CONSTANT UTILIZATION, TOTAL BANDWIDTH, AND WEIGHTED FAIR-QUEUEING SERVERS We now describe three bandwidth-preserving server algorithms that offer a simple way to schedule aperiodic jobs in deadline-driven systems. They are the constant utilization [DeLS], total bandwidth [SpBu], and weighted fair-queueing [DeKS] algorithms. These algorithms belong to a class of algorithms that more or less emulate the Generalized Processor Sharing (GPS) algorithm. GPS, sometimes called fluid-flow processor sharing, is an idealized weighted round-robin algorithm; it gives each backlogged server in each round an infinitesimally small time slice of length proportional to the server size. Clearly, infinitesimally fine-grain time slicing cannot be implemented in practice. The algorithms which we are about to describe behave like the GPS algorithm to different degrees. In particular, each server maintained and scheduled according to any of these algorithms offers timing isolation to the task(s) executed by it; by this statement, we mean that the worst-case response times of jobs in the task are independent of the processor-time demands of tasks executed by other servers. While such a server works in a way that is similar to sporadic servers, its correctness relies on a different principle. Before we describe the rules governing the operations of such a server, we digress briefly to describe a schedulability condition of sporadic jobs scheduled on the EDF basis. The correctness of these algorithms follows from this condition. Schedulability of Sporadic Jobs in Deadline-Driven Systems: We know that a system of independent, preemptable periodic tasks whose relative deadlines are equal to or larger than their respective periods is schedulable according to the EDF algorithm if the total utilization of the tasks is equal to or less than 1. We also argued that if the execution times of some jobs in some task are smaller than the execution time of the task and the interrelease times between some jobs are larger than its period, the system is schedulable if it is schedulable according to this criterion. We now carry this point further to a more general sufficient schedulability condition for independent, preemptable sporadic jobs. This schedulability condition is stated in terms of the densities of sporadic jobs. The density of a sporadic job Ji that has release time ri, maximum execution time ei, and deadline di is the ratio ei/(di − ri). A sporadic job is said to be active in its feasible interval (ri, di]; it is not active outside of this interval.

Priority-driven Approach

6 min read
PRIORITY-DRIVEN APPROACH: The term priority-driven algorithms refers to a large class of scheduling algorithms that never leave any resource idle intentionally. Stated another way, a resource idles only when no job requiring the resource is ready for execution. Scheduling decisions are made when events such as releases and completions of jobs occur. Hence, priority-driven algorithms are event-driven. Other commonly used names for this approach are greedy scheduling, list scheduling, and work-conserving scheduling. A priority-driven algorithm is greedy because it tries to make locally optimal decisions. Leaving a resource idle while some job is ready to use the resource is not locally optimal. So when a processor or resource is available and some job can use it to make progress, such an algorithm never makes the job wait. We will return shortly to illustrate that greed does not always pay; sometimes it is better to have some jobs wait even when they are ready to execute and the resources they require are available. The term list scheduling is also descriptive because any priority-driven algorithm can be implemented by assigning priorities to jobs. Jobs ready for execution are placed in one or more queues ordered by the priorities of the jobs. At any scheduling decision time, the jobs with the highest priorities are scheduled and executed on the available processors. Hence, a priority-driven scheduling algorithm is defined to a great extent by the list of priorities it assigns to jobs; the priority list and other rules, such as whether preemption is allowed, define the scheduling algorithm completely.
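
A minimal sketch of the list-scheduling idea follows: ready jobs sit in a priority queue, and whenever a processor is free the highest-priority ready job is dispatched to it (nonpreemptive, for brevity). The job set and priorities are invented for illustration.

```python
import heapq

def list_schedule(jobs, num_processors):
    """Nonpreemptive priority-driven (list) scheduling on identical processors.

    jobs: list of (priority, release_time, execution_time, name); a smaller
    priority number means higher priority. Returns (name, start, finish) tuples.
    """
    time = 0.0
    free_at = [0.0] * num_processors            # when each processor becomes free
    pending = sorted(jobs, key=lambda j: j[1])  # jobs not yet released
    ready, schedule = [], []

    while pending or ready:
        # Move released jobs into the ready queue, ordered by priority.
        while pending and pending[0][1] <= time:
            prio, r, e, name = pending.pop(0)
            heapq.heappush(ready, (prio, r, e, name))
        idle = [p for p in range(num_processors) if free_at[p] <= time]
        while ready and idle:                   # never leave a processor idle
            prio, r, e, name = heapq.heappop(ready)
            p = idle.pop()
            free_at[p] = time + e
            schedule.append((name, time, time + e))
        # Advance to the next event: a release or a processor becoming free.
        candidates = [t for t in free_at if t > time]
        if pending:
            candidates.append(pending[0][1])
        time = min(candidates)
    return schedule


print(list_schedule([(1, 0, 3, "J1"), (2, 0, 2, "J2"), (3, 1, 1, "J3")], 2))
```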

Weighted Round-robin Approach

4 min read
WEIGHTED ROUND-ROBIN APPROACH The round-robin approach is commonly used for scheduling time-shared applications. When jobs are scheduled on a round-robin basis, every job joins a first-in-first-out (FIFO) queue when it becomes ready for execution. The job at the head of the queue executes for at most one time slice. (A time slice is the basic granule of time that is allocated to jobs. In a time-shared environment, a time slice is typically on the order of tens of milliseconds.) If the job does not complete by the end of the time slice, it is preempted and placed at the end of the queue to wait for its next turn. When there are n ready jobs in the queue, each job gets one time slice every n time slices, that is, every round. Because the length of the time slice is relatively short, the execution of every job begins almost immediately after it becomes ready. In essence, each job gets a 1/nth share of the processor when there are n jobs ready for execution. This is why the round-robin algorithm is also called the processor-sharing algorithm. The weighted round-robin algorithm has been used for scheduling real-time traffic in high-speed switched networks. It builds on the basic round-robin scheme. Rather than giving all the ready jobs equal shares of the processor, different jobs may be given different weights. Here, the weight of a job refers to the fraction of processor time allocated to the job. Specifically, a job with weight wt gets wt time slices every round, and the length of a round is equal to the sum of the weights of all the ready jobs. By adjusting the weights of jobs, we can speed up or retard the progress of each job toward its completion. By giving each job a fraction of the processor, a round-robin scheduler delays the completion of every job. If it is used to schedule precedence-constrained jobs, the response time of a chain of jobs can be unduly large. For this reason, the weighted round-robin approach is not suitable for scheduling such jobs. On the other hand, a successor job may be able to incrementally consume what is produced by a predecessor (e.g., as in the case of a UNIX pipe). In this case, weighted round-robin scheduling is a reasonable approach, since a job and its successors can execute concurrently in a pipelined fashion. As an example, we consider the two sets of jobs, J1 = {J1,1, J1,2} and J2 = {J2,1, J2,2}, shown in Figure 4–1. The release times of all jobs are 0, and their execution times are 1. J1,1 and J2,1 execute on processor P1, and J1,2 and J2,2 execute on processor P2. Suppose that J1,1 is the predecessor of J1,2, and J2,1 is the predecessor of J2,2. Figure 4–1(a) shows that both sets of jobs (i.e., the second jobs J1,2 and J2,2 in the sets) complete approximately at time 4 if the jobs are scheduled in a weighted round-robin manner. (We get this completion time when the length of the time slice is small compared with 1 and the jobs have the same weight.) In contrast, the schedule in Figure 4–1(b) shows that if the jobs on each processor are executed one after the other, one of the chains can complete at time 2, while the other can complete at time 3. On the other hand, suppose that the result of the first job in each set is piped to the second job in the set. The latter can execute after each one or a few time slices of the former complete. Then it is better to schedule the jobs on the round-robin basis because both sets can complete a few time slices after time 2.
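
A minimal sketch of weighted round-robin follows, with an invented job set: each ready job receives a number of time slices per round equal to its weight, and jobs that do not finish rejoin the queue for the next round.

```python
from collections import deque

def weighted_round_robin(jobs, slice_len=0.1):
    """Simulate weighted round-robin on one processor.

    jobs: dict name -> (weight, execution_time). Returns completion times.
    """
    remaining = {name: e for name, (w, e) in jobs.items()}
    weights = {name: w for name, (w, e) in jobs.items()}
    ready = deque(jobs)            # all jobs ready at time 0 in this sketch
    time, done = 0.0, {}

    while ready:
        name = ready.popleft()
        # Give the job `weight` slices (or less, if it finishes earlier).
        quota = min(weights[name] * slice_len, remaining[name])
        time += quota
        remaining[name] -= quota
        if remaining[name] <= 1e-9:
            done[name] = time      # the job completes within this round
        else:
            ready.append(name)     # wait for its turn in the next round
    return done


# Two jobs of equal length; the second has twice the weight and finishes first.
print(weighted_round_robin({"J1": (1, 1.0), "J2": (2, 1.0)}))
```

With weights 1 and 2 and equal execution times, the heavier job completes at 1.5 while the lighter one completes at 2.0, illustrating how weights speed up or retard progress.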

Dynamic Versus Stasic System

3 min read
DYNAMIC VERSUS STATIC SYSTEMS In the above example, jobs that are ready for execution are placed in a priority queue common to all processors. When a processor is available, the job at the head of the queue executes on the processor. We will refer to such a multiprocessor system as a dynamic system, because jobs are dynamically dispatched to processors. In the example in Figure 4–2, we allowed each preempted job to resume on any processor; hence, jobs are migratable. We say that a job migrates if it starts execution on a processor, is preempted, and later resumes on a different processor. Another approach to scheduling in multiprocessor and distributed systems is to partition the jobs in the system into subsystems and assign and bind the subsystems statically to the processors. Jobs are moved among processors only when the system must be reconfigured, that is, when the operation mode of the system changes or some processor fails. Such a system is called a static system, because the system is statically configured. If jobs on different processors are dependent, the schedulers on the processors must synchronize the jobs according to some synchronization and resource access-control protocol. Except for the constraints thus imposed, the jobs on each processor are scheduled by themselves. As an example, a partition and assignment of the jobs in Figure 4–2 puts J1, J2, J3, and J4 on P1 and the remaining jobs on P2. The priority list is segmented into two parts: (J1, J2, J3, J4) and (J5, J6, J7, J8). The scheduler of processor P1 uses the former while the scheduler of processor P2 uses the latter. It is easy to see that the jobs on P1 complete by time 8, and the jobs on P2 complete by time 11. Moreover, J2 completes by time 4 while J6 starts at time 6. Therefore, the precedence constraint between them is satisfied.

Effective Release Times And Deadlines

4 min read
EFFECTIVE RELEASE TIMES AND DEADLINES The given release times and deadlines of jobs are sometimes inconsistent with the precedence constraints of the jobs. By this, we mean that the release time of a job may be later than that of its successors, and its deadline may be earlier than that of its predecessors. Therefore, rather than working with the given release times and deadlines, we first derive a set of effective release times and deadlines from these timing constraints, together with the given precedence constraints. The derived timing constraints are consistent with the precedence constraints. When there is only one processor, we can compute the derived constraints according to the following rules: Effective Release Time: The effective release time of a job without predecessors is equal to its given release time. The effective release time of a job with predecessors is equal to the maximum value among its given release time and the effective release times of all of its predecessors. Effective Deadline: The effective deadline of a job without a successor is equal to its given deadline. The effective deadline of a job with successors is equal to the minimum value among its given deadline and the effective deadlines of all of its successors. The effective release times of all the jobs can be computed in one pass through the precedence graph in O(n²) time, where n is the number of jobs. Similarly, the effective deadlines can be computed in O(n²) time.
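
The two rules can be implemented as a forward pass and a backward pass over the precedence graph in topological order. The sketch below uses an invented two-job example; a simple repeated scan stands in for a proper topological sort.

```python
def effective_times(release, deadline, predecessors):
    """Compute effective release times and deadlines for a precedence DAG.

    release, deadline: dict job -> given value.
    predecessors: dict job -> list of immediate predecessors.
    """
    # Topological order (simple repeated scan; adequate for small job sets).
    order, placed = [], set()
    while len(order) < len(release):
        for j in release:
            if j not in placed and all(p in placed for p in predecessors.get(j, [])):
                order.append(j)
                placed.add(j)

    eff_r = dict(release)
    for j in order:                       # forward pass over predecessors
        for p in predecessors.get(j, []):
            eff_r[j] = max(eff_r[j], eff_r[p])

    successors = {j: [] for j in release}
    for j, preds in predecessors.items():
        for p in preds:
            successors[p].append(j)

    eff_d = dict(deadline)
    for j in reversed(order):             # backward pass over successors
        for s in successors[j]:
            eff_d[j] = min(eff_d[j], eff_d[s])
    return eff_r, eff_d


# J1 -> J2: J2 inherits J1's later release; J1 inherits J2's earlier deadline.
print(effective_times({"J1": 2, "J2": 0}, {"J1": 12, "J2": 8}, {"J2": ["J1"]}))
```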

Total Bandwidth Server Algorithm

5 min read
Total Bandwidth Server Algorithm: To motivate the total bandwidth server algorithm [SpBu], let us return to the example in Figure 7–13. Suppose that A3 were to arrive at time 14 instead. Since 14 is before the current server deadline 15, the scheduler must wait until time 15 to replenish the budget of the constant utilization server. A3 waits in the interval from 14 to 15, while the processor idles! Clearly, one way to improve the responsiveness of the server is to replenish its budget at time 14. This is exactly what the total bandwidth server algorithm does. Specifically, the total bandwidth server algorithm improves the responsiveness of a constant utilization server by allowing the server to claim the background time not used by periodic tasks. This is done by having the scheduler replenish the server budget as soon as the budget is exhausted if the server is backlogged at the time, or as soon as the server becomes backlogged. We now show that a constant utilization server works correctly if its budget is replenished in this aggressive manner. In particular, we can change the replenishment rules as follows and get a total bandwidth server. You can see that the rules of a total bandwidth server are even simpler than the rules of a constant utilization server.
Replenishment Rules of a Total Bandwidth Server of Size ũs
R1 Initially, es = 0 and d = 0.
R2 When an aperiodic job with execution time e arrives at time t to an empty aperiodic job queue, set d to max(d, t) + e/ũs and es = e.
R3 When the server completes the current aperiodic job, the job is removed from its queue.
(a) If the server is backlogged, the server deadline is set to d + e/ũs, and es = e.
(b) If the server is idle, do nothing.
Comparing a total bandwidth server with a constant utilization server, we see that for a given set of aperiodic jobs and a given server size, both kinds of servers yield the same sequence of deadlines, but the budget of a total bandwidth server may be replenished earlier than that of a constant utilization server. As long as a total bandwidth server is backlogged, it is always ready for execution. In the above example, this means that the server's budget is replenished at 6.9 and, if A3 were to arrive at 14, at 14, with the deadline of the server being 15 and 23.5, respectively. A3 would be completed at time 17.5 if it were executed by a total bandwidth server but would be completed at 19 by a constant utilization server.
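
For contrast with the constant utilization server sketched earlier, here is the corresponding bookkeeping for a total bandwidth server. The only substantive change is that the budget is replenished as soon as the server becomes backlogged or completes a job, with the deadline set to max(d, t) + e/ũs; the class is our illustration, not code from [SpBu].

```python
class TotalBandwidthServer:
    """Budget/deadline bookkeeping for a total bandwidth server (rules R1-R3)."""

    def __init__(self, size):
        self.size = size           # server size (the quantity written as u~_s above)
        self.budget = 0.0          # R1: es = 0
        self.deadline = 0.0        # R1: d = 0
        self.queue = []            # pending aperiodic jobs as (name, execution_time)

    def arrive(self, t, name, execution_time):
        """R2: on arrival to an empty queue, replenish immediately."""
        was_empty = not self.queue
        self.queue.append((name, execution_time))
        if was_empty:
            self.deadline = max(self.deadline, t) + execution_time / self.size
            self.budget = execution_time

    def complete_current(self):
        """R3: when the current job completes, serve the next one if backlogged."""
        self.queue.pop(0)
        if self.queue:                               # R3(a)
            _, e = self.queue[0]
            self.deadline += e / self.size
            self.budget = e
        # R3(b): idle -> do nothing


tbs = TotalBandwidthServer(size=0.25)
tbs.arrive(t=14.0, name="A3", execution_time=2.0)
print(tbs.deadline)    # 22.0 if no earlier deadline is pending: max(0, 14) + 2/0.25
```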

Offline Versus Online Scheduling

8 min read
OFF-LINE VERSUS ON-LINE SCHEDULING: In Section 4.1, we mentioned that a clock-driven scheduler typically makes use of a precomputed schedule of all hard real-time jobs. This schedule is computed off-line before the system begins to execute, and the computation is based on the knowledge of the release times and processor-time/resource requirements of all the jobs for all times. When the operation mode of the system changes, the new schedule specifying when each job in the new mode executes is also precomputed and stored for use. In this case, we say that scheduling is (done) off-line, and the precomputed schedules are off-line schedules. An obvious disadvantage of off-line scheduling is inflexibility. This approach is possible only when the system is deterministic, meaning that the system provides some fixed set(s) of functions and that the release times and processor-time/resource demands of all its jobs are known and do not vary or vary only slightly. For a deterministic system, however, off-line scheduling has several advantages, the deterministic timing behavior of the resultant system being one of them. Because the computation of the schedules is done off-line, the complexity of the scheduling algorithm(s) used for this purpose is not important. Indeed, as we will see in the next chapter, complex heuristic algorithms are typically used to find good off-line schedules that can make nearly full use of the resources. Competitiveness of On-Line Scheduling. We say that scheduling is done on-line, or that we use an on-line scheduling algorithm, if the scheduler makes each scheduling decision without knowledge about the jobs that will be released in the future; the parameters of each job become known to the on-line scheduler only after the job is released. The priority-driven algorithms described earlier and in subsequent chapters are on-line algorithms. In Chapter 2 we talked about the admission of each new task depending on the outcome of an acceptance test that is based on the parameters of the new task and tasks admitted earlier. Such an acceptance test is on-line.

Nonoptimality Of The EDF And LST Algorithms

3 min read
NONOPTIMALITY OF THE EDF AND THE LST ALGORITHMS It is natural to ask here whether the EDF and the LST algorithms remain optimal if preemption is not allowed or there is more than one processor. Unfortunately, the answer is no. The fact that the EDF and the LST algorithms are optimal only when preemption is allowed is illustrated by the example in Figure 4–6. The system shown in this figure has three independent, nonpreemptable jobs J1, J2, and J3. Their release times are 0, 2 and 4, respectively, and are indicated by the arrows above the schedules. Their execution times are 3, 6, and 4; and their deadlines are 10, 14, and 12, respectively. Figure 4–6(a) shows the schedule produced by the EDF algorithm. In particular, when J1 completes at time 3, J2 has already been released but not J3. Hence, J2 is scheduled. When J3 is released at time 4, J2 is executing. Even though J3 has an earlier deadline and, hence, a higher priority, it must wait until J2 completes because preemption is not allowed. As a result, J3 misses its deadline. It is easy to see that the LST algorithm would produce the same infeasible schedule. The fact that these three jobs can meet their deadlines is demonstrated by the feasible schedule in Figure 4–6(b). At time 3 when J1 completes, the processor is left idle, even though J2 is ready for execution. Consequently, when J3 is released at 4, it can be scheduled ahead of J2, allowing both jobs to meet their deadlines.

Objectives, Correctness, And Optimality

3 min read
Objectives, Correctness, and Optimality: Throughout this chapter, we assume that we are given the parameters {pi} and {ei} of all the periodic tasks and a priority-driven algorithm used to schedule them. Moreover, when the periodic tasks are scheduled according to the given algorithm and there are no aperiodic and sporadic jobs, the periodic tasks meet all their deadlines. For the sake of concreteness, we assume that the operating system maintains the priority queues shown in Figure 7–1. The ready periodic jobs are placed in the periodic task queue, ordered by their priorities, which are assigned according to the given periodic task scheduling algorithm. Similarly, each accepted sporadic job is assigned a priority and is placed in a priority queue, which may or may not be the same as the periodic task queue. Each newly arrived aperiodic job is placed in the aperiodic job queue. Moreover, aperiodic jobs are inserted in the aperiodic job queue and newly arrived sporadic jobs are inserted into a waiting queue to await acceptance without the intervention of the scheduler. The algorithms described in this chapter determine when aperiodic or sporadic jobs are executed. We call them aperiodic job and sporadic job scheduling algorithms; they are solutions to the following problems:
1. Based on the execution time and deadline of each newly arrived sporadic job, the scheduler decides whether to accept or reject the job. If it accepts the job, it schedules the job so that the job completes in time without causing periodic tasks and previously accepted sporadic jobs to miss their deadlines. The problems are how to do the acceptance test and how to schedule the accepted sporadic jobs.
2. The scheduler tries to complete each aperiodic job as soon as possible. The problem is how to do so without causing periodic tasks and accepted sporadic jobs to miss their deadlines.

Scheduling Of Sporadic Jobs

4 min read
1. Suppose that the first sporadic job S1(0, 8, 2) is released shortly after time 0. Since the density of S1 is only 0.25, the scheduler accepts it. The deadline 8 of S1 divides the future time into two intervals: I1 = (0, 8] and I2 = (8, ∞). The scheduler updates the total densities of active sporadic jobs in these intervals: Δs,1 = 0.25 and Δs,2 = 0. The scheduler then inserts S1 into the queue of ready periodic and sporadic jobs in the EDF order.
2. At time 2, the second sporadic job S2(2, 7, 0.5) with density 0.1 is released. Its deadline 7 is in I1. Since the condition 0.1 + 0.25 ≤ 0.5 defined by Eq. (7.11) is satisfied, the scheduler accepts and schedules S2. We now call the old interval I2 by the name I3. The interval I1 is divided into I1 = (2, 7] and I2 = (7, 8]. The scheduler increments the total density Δs,1 by the density of S2. Now, Δs,1 = 0.35, Δs,2 = 0.25, and Δs,3 = 0.
3. At time 4, S3(4, 14, 1) is released. S2 has already completed. The only sporadic job in the system is S1; Δs,1 = 0.25 (for the interval I1 before 8) and Δs,2 = 0 (for the interval I2 after 8). The deadline of S3 is in the interval I2. The conditions the scheduler checks are whether 0.25 + 0.1 and 0.1 are equal to or less than 0.5. Since both conditions are satisfied, the scheduler accepts S3. The intervals maintained by the scheduler are now I1 = (4, 8], I2 = (8, 14], and I3 = (14, ∞). Moreover, Δs,i is 0.35, 0.1, and 0 for i = 1, 2, and 3, respectively.
4. At time 9, S4(9, 13, 2) is released. Now, the only active sporadic job in the system is S3; I1 is (9, 14] and I2 is (14, ∞). Since, for I1, the total density of existing active sporadic jobs is 0.1 and the density of S4 is 0.5, their sum exceeds 0.5. Consequently, the scheduler rejects S4.
Enhancements and Generalization. Two points are worth noting. First, the acceptance test is not optimal, meaning that a sporadic job may be rejected while it is schedulable. The reason is that the schedulability condition given by Theorem 7.4 is sufficient but not necessary. In the example above, the system becomes idle at time 9.5 and does not become busy again until time 12, so S4 is in fact acceptable. Indeed, as we will see shortly, it would be acceptable even if its execution time were 4.0, making its density 1.0! An enhancement is to have the scheduler also compute the slack of the system. In this example, suppose that the scheduler were to compute the slack of the system dynamically at time 9. It would find that the current busy interval of the periodic tasks and accepted sporadic jobs ends at time 9.5 and the next busy interval does not begin until time 12. This leaves the system with 2.5 units of slack before time 13, the deadline of S4. Hence, S4 is acceptable. The shortcoming of an acceptance test that makes use of dynamic-slack computation is its high run-time overhead. The next subsection describes an optimal acceptance test algorithm that makes use of static-slack computation. According to that test, S4 would be acceptable as long as its execution time is no more than 4. However, that acceptance test can be used only when periodic tasks have little or no release-time jitter. The second point worth noting is that the acceptance test described above assumes that every sporadic job is ready for execution upon its arrival. In general, its feasible interval may begin some time after its acceptance test time. (As an example, suppose that S3 in the example above were offered to the scheduler for acceptance testing at time 3 but is ready for execution at time 4.)
We can modify the simple acceptance test in a straightforward fashion to accommodate arbitrary ready times of sporadic jobs. The ready times and deadlines of sporadic jobs in the system partition the future time into disjoint time intervals. The scheduler maintains information on these intervals and the total densities of active sporadic jobs in them in a similar manner, but now it may have to maintain as many as 2Ns + 1 intervals, where Ns is the maximum number of sporadic jobs in the system.
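
A hedged sketch of the density-based acceptance test walked through above: the scheduler keeps the total density of active sporadic jobs in each future interval and admits a new job only if, in every interval that its feasible interval overlaps, the total density stays within the share available to sporadic jobs (0.5 in the example). The function below is a simplification that recomputes interval densities from the set of active jobs rather than maintaining the interval partition incrementally.

```python
def acceptance_test(active_jobs, new_job, sporadic_density_limit=0.5):
    """Sufficient density-based acceptance test for a sporadic job.

    active_jobs: list of (ready_time, deadline, execution_time) of accepted,
    still-active sporadic jobs. new_job: the candidate in the same format.
    """
    def density(job):
        r, d, e = job
        return e / (d - r)

    r_new, d_new, _ = new_job
    # Boundaries of the intervals that overlap the new job's feasible interval.
    points = sorted({r_new, d_new} |
                    {d for (r, d, e) in active_jobs if r_new < d < d_new})
    for left, right in zip(points, points[1:]):
        total = density(new_job)
        for job in active_jobs:
            r, d, e = job
            if r < right and d > left:        # job is active in (left, right]
                total += density(job)
        if total > sporadic_density_limit:
            return False                       # reject: some interval is overloaded
    return True


# The example from the text: S3(4, 14, 1) is offered while S1(0, 8, 2) is active.
print(acceptance_test([(0, 8, 2)], (4, 14, 1)))     # True  (0.25 + 0.1 <= 0.5)
print(acceptance_test([(4, 14, 1)], (9, 13, 2)))    # False (0.1 + 0.5 > 0.5)
```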