- Colin Fidge
``Fundamentals of Distributed System Observation''
IEEE Software 13:6, November 1996, pp. 77-83
Interesting, very readable discussion of the difficulties of observing event ordering in distributed systems. The difficulties are summarized in Table 1 of the article; a `y` in a cell indicates that the mechanism overcomes the respective observability effect:
Table 1: Effectiveness of Time-Stamping Mechanisms

| Effect | Real-time: local clocks | Real-time: global clock | Logical: totally ordered | Logical: partially ordered |
|---|---|---|---|---|
| Multiple observers see different orderings | y | y | y | y |
| Incorrectly perceived orderings | | y | y | y |
| Same computation exhibits different orderings | | | y | y |
| Arbitrary orderings adopted | | | | y |
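The "partially ordered" logical time stamps in the last column correspond to vector clocks, the mechanism Fidge is known for. A minimal sketch (my own illustration, not code from the article) of how vector timestamps expose the partial order:

```python
def happened_before(a, b):
    """Vector-timestamp partial order: a -> b iff a <= b componentwise
    and a != b. Timestamps that compare neither way are concurrent."""
    return all(x <= y for x, y in zip(a, b)) and a != b

a, b = (1, 0, 2), (2, 0, 3)   # a happened before b
c = (0, 1, 0)                 # concurrent with both a and b
print(happened_before(a, b))                            # True
print(happened_before(a, c) or happened_before(c, a))   # False: concurrent
```

Concurrency is exactly what a totally ordered stamp cannot express: it must adopt some arbitrary order for events like `a` and `c`.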
- [Fohler:92]
Gerhard Fohler
``Realizing Changes of Operational Modes with Pre Run-Time Scheduled Hard Real-Time Systems.''
In Proc. of the Second International Workshop on Responsive Computer Systems, Saitama, Japan, October 1992.
Discusses the requirements for mode changes in statically scheduled hard real-time systems, and issues for handling them. Good intro to the issues involved with mode changes. The focus is on strictly periodic systems, though.
- [GPS:95]
Richard Gerber, William Pugh, and Manas Saksena
``Parametric Dispatching of Hard Real-Time Tasks''
appeared in IEEE Transactions on Computers, 44(3), March 1995.
Parametric dispatching allows dependencies between tasks and ranges of execution times. It separates the problem into off-line and on-line components. The schedule is partially produced off-line, but actual task dispatch timers are not generated until run-time.
- R. Gopalakrishnan and Gurudatta M. Parulkar
``Bringing Real-time Scheduling Theory and Practice Closer for Multimedia Computing''
appeared in ACM SIGMETRICS Conference, Philadelphia
See Gopal's dissertation chapter citation below.
- R. Gopalakrishnan
Chapter 4: The Real-time Upcall Mechanism
chapter of dissertation
Describes the real-time upcall (RTU) mechanism, which is what you'd get if you made a thread so lightweight that it became a function. The state of an RTU doesn't get saved on preemption, because a preempted RTU returns synchronously. Rather than being truly preempted, the RTU is allowed to finish its current iteration (for the target application of real-time protocol processing) before returning control to the dispatcher thread. Therefore, RTUs need not have a separate context.
Another byproduct of the target application is that scheduling is dynamic, and uses modified (simplified to account for delayed preemption) rate monotonic. This application decomposes nicely into iterative handlers. Maximizing CPU utilization is important. For some other real-time applications, these assumptions and goals may not apply, of course. But, if they do, and the application can live with the delayed preemption, then the performance benefits should be significant.
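A rough sketch of the run-to-completion idea, in Python for illustration (the names and structure are my own simplification, not code from the chapter): each handler is an ordinary function invoked for one iteration, and the dispatcher always runs the highest-rate (shortest-period) ready handler to completion, so no per-handler stack or saved context is needed.

```python
import heapq

class RTU:
    """A run-to-completion handler: one iteration per dispatch, no saved context."""
    def __init__(self, name, period_ms, work):
        self.name = name
        self.period_ms = period_ms   # shorter period => higher RM priority
        self.work = work             # ordinary function; returns synchronously

def dispatch(ready, log):
    """Run the ready RTUs in rate-monotonic order; each runs to completion."""
    heap = [(r.period_ms, i, r) for i, r in enumerate(ready)]
    heapq.heapify(heap)
    while heap:
        _, _, rtu = heapq.heappop(heap)
        rtu.work()                   # completes before control returns here
        log.append(rtu.name)

log = []
rtus = [RTU("stats", 1000, lambda: None),
        RTU("audio", 20, lambda: None),
        RTU("video", 40, lambda: None)]
dispatch(rtus, log)
print(log)   # shortest period serviced first
```

The cost of this simplicity is the delayed preemption the annotation mentions: a newly ready high-rate handler waits for the current iteration to finish.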
- Scott Herzinger
``Using C++: Some Words to the Wise''
Electronic Design, 21 February 1994, reprinted by Wind River Systems
Very brief, introductory discussion to uses of inlining, virtual functions, and static const object initialization in embedded systems.
- Kevin B. Kenny and Kwei-Jay Lin
``Building Flexible Real-Time Systems Using the Flex Language''
Computer 24:5, May 1991
Describes Flex, a derivative of C++. It supports computations with adjustable execution times by allowing them to return imprecise results. In addition, the runtime system can choose a version of a function based on performance constraints; this is called performance polymorphism. Interesting in that it's almost the opposite of conventional real-time approaches, in which fixed computation times are required (or assumed). However, I don't think this approach is directly applicable for our application. (The mechanism for specifying timing and resource constraints, on blocks of code, might be of interest.)
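Flex's actual syntax isn't reproduced here; as a rough analogue (my own sketch, not the paper's mechanism), performance polymorphism amounts to selecting, at call time, the most precise version of a function whose estimated cost fits the available time budget:

```python
def pick_version(versions, budget_ms):
    """Choose the most precise version whose estimated cost fits the budget.
    `versions` is sorted from cheapest/least precise to costliest/most precise."""
    chosen = None
    for cost_ms, func in versions:
        if cost_ms <= budget_ms:
            chosen = func        # keep upgrading while we can still afford it
    if chosen is None:
        raise RuntimeError("no version fits the time budget")
    return chosen

# Imprecise vs. precise approximations of pi (costs are illustrative, not measured).
versions = [(1.0, lambda: 3.14),
            (10.0, lambda: 3.141592653589793)]

print(pick_version(versions, budget_ms=2.0)())    # cheap, imprecise version
print(pick_version(versions, budget_ms=50.0)())   # costly, precise version
```

The inversion relative to conventional real-time practice is visible here: the execution time is the input, and the result's precision is what varies.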
- [KSZ:92] Sandeep Khanna, Michael Sebrée, and John Zolnowsky
``Realtime Scheduling in SunOS 5.0''
Everything you wanted to know about Solaris real-time scheduling. Solaris 2.x provides:
- priority-based scheduling for user tasks, so the application has control of scheduling,
- bounded dispatch latency, necessary for realtime, and
- bounded priority inversion via priority inheritance for mutexes, and to a limited degree (for the first reader only) for readers/writers locks.
Also, hidden scheduling overhead is reduced by separating timeouts into non-realtime and realtime timeouts. Non-realtime timeouts are processed by a kernel thread that has the highest sys class priority, which is below the realtime class priorities (with the default kernel configuration). See also [SunRT:95].
- Mark H. Klein, Thomas Ralya, Bill Pollak, Ray Obenza, and Michael González Harbour
A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems, 1993
Kluwer Academic Publishers
The RMA bible.
- [RT-MachProtocolProcessing:93]
Chen Lee, Katsuhiko Yoshida, Cliff Mercer, and Ragunathan Rajkumar
``Predictable Communication Protocol Processing in Real-Time Mach''
Second IEEE Real-Time Technology and Applications Symposium, June 10-12, 1996, Boston, MA
Discusses communication protocol processing in real-time operating systems, in particular Real-Time Mach. Protocol processing in real-time is categorized as follows:
- Prioritized Processing:
- Instead of using FIFO queues, priority queues are used in protocol stacks. This does not solve the problem, however, in interrupt-driven protocol processing approaches where low-priority network traffic can indefinitely preempt higher-priority processing.
- Shared Communication Protocol Server:
- A separate server can be employed for protocol processing. The protocol stack then acts as a shared resource. The priority of the server can be controlled by applications, but priority inversion is still possible with this approach.
- Prioritized Threads:
- If priorities are associated with packets, they can be queued and serviced, prior to demultiplexing, in priority order. By using threads for the servicing, the OS dispatcher provides preemption for higher priority packets.
- Application-Level Protocol Processing:
- In this approach, which is used in RT-Mach, the protocol stack resides in application space. It is in library form so that it can be linked into each application process. The advantage of this approach in RT-Mach is that packets need not be processed by the Unix server.
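The prioritized-threads idea above can be sketched as a priority queue of packets serviced before demultiplexing. This is my own single-threaded illustration, not code from the paper; a real system would hand each packet to a thread at the packet's priority so the OS dispatcher provides the preemption.

```python
import heapq
from itertools import count

def service_in_priority_order(packets):
    """Queue (priority, payload) packets before demultiplexing, then service
    highest-priority first (lower number = higher priority). A sequence
    counter preserves FIFO order among packets of equal priority."""
    seq = count()
    q = []
    for prio, payload in packets:
        heapq.heappush(q, (prio, next(seq), payload))
    order = []
    while q:
        _, _, payload = heapq.heappop(q)
        order.append(payload)        # demultiplex/process the packet here
    return order

arrivals = [(5, "telemetry"), (1, "control"), (5, "log"), (2, "video")]
print(service_in_priority_order(arrivals))
# -> ['control', 'video', 'telemetry', 'log']
```

Contrast this with the interrupt-driven case the paper criticizes, where arrival order, not priority, dictates processing order.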
The RT-Mach protocol processing approach may be used with either fixed priority scheduling or with the RT-Mach processor reservation scheduling policies. The experiments that the authors use to demonstrate the benefits of the RT-Mach reservation policy in combination with application-level protocol processing are interesting. There are five scenarios:
- Single thread (Net App) that transmits 10 UDP packets every 40 ms.
- Net App and non-real-time application with 5 compute-bound threads.
- Net App and non-real-time application with 5 threads that make IPC calls to the Unix server.
- Net App and one low-priority thread that also sends out 10 UDP packets every 10 ms.
- Net App and all three of the above non-real-time competing applications, i.e., compute bound, IPC, and background networking.
In each scenario, the processor usage of the Net App thread is measured. Smoother usage over time results from better scheduling and elimination of priority inversion.
- [LY:86]
Dennis W. Leinbaugh and Mohamad-Reza Yamini
``Guaranteed Response Times in a Distributed Hard-Real-Time Environment''
IEEE Trans. Software Engineering SE-12:12, December 1986
Presents an analysis approach that considers processing time, shared device access, critical sections, and limited communication errors. The analysis has some intuitive appeal but I'm not comfortable with it. The key is to evaluate blocking; it starts by simply figuring out the worst case blocking. That is almost always too conservative, so transformation rules are applied to eliminate infeasible blocking. While this may be tractable for small systems, it won't scale well.
- C. L. Liu and J. W. Layland
``Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment,''
JACM 20:1, pp. 46-61, 1973
The original paper on rate-monotonic scheduling. It starts with these assumptions:
- tasks with hard deadlines execute periodically,
- a task's execution must complete before its next deadline,
- tasks are independent,
- task execution times are constant,
- non-periodic tasks do not have hard deadlines, but may interrupt periodic tasks.
It defines some useful terms, which are expressed in our terms here:
- deadline:
- The next scheduled execution time for a task.
- overflow:
- Overflow occurs at time t if t is the deadline of a task whose execution has not completed by then.
- feasible:
- A scheduling algorithm is feasible for a set of tasks if scheduling them with it causes no overflow.
- response time:
- The time interval between when a task is able to execute and when it completes its execution.
- critical instant:
- The particular time for a task at which its response time is maximum.
- critical time zone:
- The time interval between a critical instant for a task and when it completes its execution.
- rate-monotonic priority assignment:
- Tasks with higher execution rates are assigned higher priorities.
Some of the more useful theorems on fixed priority scheduling presented in the paper are:
- Theorem 1. A critical instant for any task occurs whenever the task is requested simultaneously with requests for all higher priority tasks.
- Theorem 2. If a feasible priority assignment exists for some task set, then the rate-monotonic priority assignment is feasible for that task set.
- Theorem 5. For a set of m tasks with fixed priority order, the least upper bound on processor utilization is U = m(2^(1/m) - 1).
For the special case in which the period of each task is a multiple of the period of each lower priority task, the least upper bound on processor utilization is 1.
In addition to fixed priority scheduling, the paper introduces the dynamic deadline driven scheduling algorithm, now known as earliest deadline first.
- Theorem 6. When the deadline driven scheduling algorithm is used to schedule a set of tasks on a processor, there is no processor idle time prior to an overflow.
- Theorem 7. For a given set of tasks, the deadline driven scheduling algorithm is feasible if and only if the sum of the task utilizations is less than or equal to 1.
(I wrote this out instead of using an equation because that doesn't work too well with HTML 2.0. Task utilization is the ratio of a task's execution time to its period.)
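Theorems 5 and 7 reduce to simple arithmetic checks; here is a small sketch (the task set is illustrative, not from the paper):

```python
def rm_bound(m):
    """Liu & Layland least upper bound for m fixed-priority tasks:
    U = m * (2**(1/m) - 1). Approaches ln 2 ~= 0.693 as m grows."""
    return m * (2 ** (1.0 / m) - 1)

def utilization(tasks):
    """Sum of execution_time / period over all (C, T) tasks."""
    return sum(c / t for c, t in tasks)

tasks = [(1, 4), (1, 5), (2, 10)]        # (execution time C, period T)
u = utilization(tasks)                   # 0.25 + 0.20 + 0.20 = 0.65

# Theorem 7: EDF is feasible iff total utilization <= 1.
print(u <= 1.0)                          # True

# Theorem 5: under fixed priorities, u <= rm_bound(m) is sufficient
# (but not necessary) for rate-monotonic feasibility.
print(u <= rm_bound(len(tasks)))         # 0.65 <= ~0.7798 -> True
```

Note the gap between the two results: EDF can use the full processor, while the rate-monotonic sufficient bound leaves headroom unless the periods are harmonic, in which case the bound rises to 1 as the special case above states.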