- Maher Awad, Juha Kuusela, and Jurgen Ziegler
``Octopus: Subsystem Analysis/Design and Performance Analysis''
Embedded Systems Programming, November 1996
Describes an OO-based system analysis and design approach for real-time systems. It uses OMT and Statecharts, and looks OK. Of interest is the process (``thread'', ``task'') allocation strategy: each object interaction is first classified as synchronous or asynchronous, based on concurrency requirements, the significance and timing requirements of the events, service duration, and communication with other subsystems. Groups of synchronously interacting objects then form processes. This is the textbook way of doing things, but it often doesn't happen that way in practice.
The last section of the paper is ``Performance Analysis''. It discusses priority assignment and timing analysis. Priority assignment is based on pairwise comparison of process importance, repeated methodically (and mechanically) until all possible pairings have been analyzed. It looks like a good approach: priorities are global, so this gets there eventually. It differs from rate monotonic scheduling, but might be a useful extension to it for handling importance.
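The pairwise scheme can be sketched as follows. This is a minimal illustration, not the paper's procedure: it assumes the comparisons are consistent (transitive), and the process names and scoring function are hypothetical.

```python
from itertools import combinations

def assign_priorities(processes, more_important):
    """Derive a global priority order from exhaustive pairwise
    comparisons of process importance.

    more_important(a, b) -> True if process a outranks process b.
    Assumes comparisons are consistent (transitive)."""
    wins = {p: 0 for p in processes}
    for a, b in combinations(processes, 2):
        if more_important(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    # More pairwise wins -> higher priority (0 = highest).
    ranked = sorted(processes, key=lambda p: -wins[p])
    return {p: rank for rank, p in enumerate(ranked)}

# Hypothetical example: compare via hand-assigned importance scores.
score = {"ui": 1, "control": 3, "logging": 0, "sensor": 2}
prios = assign_priorities(list(score), lambda a, b: score[a] > score[b])
```

With consistent comparisons the win counts are all distinct, so the resulting global order is unique.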
The timing analysis isn't as rigorous as we need. The authors suggest estimating durations of services, taking into account the effects of message passing and synchronization, and checking the most critical event sequence. We need more precision and assurance that all sequences have been checked.
- Ronald E. Barkley and T. Paul Lee
``A Heap-based Callout Implementation to Meet Real-Time Needs''
Proceedings of the USENIX Summer Conference, June, 1988, pp. 213--222
Describes a heap-based callout table for real-time event scheduling on UNIX systems. It provides much faster and more predictable callout scheduling operations.
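The idea is easy to see in miniature. The sketch below is illustrative, not the paper's kernel code: a binary min-heap keyed on expiry time gives O(log n) insertion and expiry, versus O(n) insertion for the traditional sorted linked list callout table.

```python
import heapq
import itertools

class CalloutTable:
    """Heap-based callout table: callouts ordered by expiry time."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal times

    def timeout(self, when, func, arg):
        """Schedule func(arg) to run at time `when`. O(log n)."""
        heapq.heappush(self._heap, (when, next(self._seq), func, arg))

    def expire(self, now):
        """Run every callout whose expiry time has arrived."""
        while self._heap and self._heap[0][0] <= now:
            _, _, func, arg = heapq.heappop(self._heap)
            func(arg)

fired = []
ct = CalloutTable()
ct.timeout(5, fired.append, "b")
ct.timeout(2, fired.append, "a")
ct.expire(4)   # only the t=2 callout has expired
```

The root of the heap is always the next callout to expire, so the clock interrupt handler's check (``is anything due?'') stays O(1) and predictable regardless of table size.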
- Alan Burns, Ken Tindell, and Andy Wellings
``Effective Analysis for Engineering Real-Time Fixed Priority Schedulers''
IEEE Trans. Software Engineering 21:5, May 1995
Presents methods for considering scheduler overhead: clock overheads, queue manipulations, and release delays. This is more of interest at the OS scheduling level, rather than at our application scheduling level.
- Darren Cathey
``RTOS Benchmarking -- All Things Considered . . .''
Real-Time Magazine, Second Quarter 1993, reprinted by Wind River Systems
Shows results of VxWorks, pSOS+, VRTX32, LynxOS, and PDOS (to a limited extent) on these benchmark tests:
- interrupt response time
- task response time (after interrupt)
- task context switch
- ping suspend/resume (with two context switches)
- suspend/resume (without context switch)
- task synchronization
- ping semaphore (with context switch)
- getting/releasing semaphore (without context switch)
- message queue timings
- memory allocation/deallocation
- task creation/deletion
- task initialization/activation (for VxWorks only)
- network throughput (for VxWorks only)
A few examples: on a Motorola MVME-147S board with a 25 MHz 68030 processor, a VxWorks context switch takes 18 microseconds; on a 25 MHz MVME-167 board, it takes 4 microseconds. The time to get and release a semaphore on the MVME-147S is 11 microseconds.
- [DLS:96]
Z. Deng, J. W.-S. Liu, and J. Sun
``Dynamic Scheduling of Hard Real-Time Applications in Open System Environment''
17th IEEE Real-Time Systems Symposium, Work in Progress Session 2, Washington, DC, December 4-6, 1996
Presents an approach for running both real-time and non-real-time applications on a general purpose platform (workstation), while guaranteeing that hard deadlines will be met. The scheduling scheme has these objectives:
- The schedulability of the tasks in each real-time application can be validated in isolation from other applications.
- A simple acceptance test determines whether a new real-time application should be admitted.
- Schedulability is guaranteed for the tasks in admitted real-time applications.
- A certain level of responsiveness is maintained for non-real-time applications.
- It does not rely on fixed allocation or fine-grain time-slicing.
The scheduling system has a two-level hierarchy. Each real-time application has its own constant utilization server, and all non-real-time applications share a single constant utilization server. Each constant utilization server dispatches tasks for its application(s) and has a fixed size U that reflects its application(s)' CPU time requirements. The lower-level OS scheduler coordinates the constant utilization servers. It also decides whether to admit a new application by checking whether the current total U plus the new application's U still fits within the processor capacity.
This seems like a good approach. The constant utilization servers may map nicely to LWPs on Solaris.
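The acceptance test reduces to a utilization sum. A minimal sketch, with illustrative names and a normalized capacity of 1.0 (the paper's actual test and bookkeeping may differ):

```python
def admit(server_sizes, new_u, capacity=1.0):
    """Accept a new real-time application (server size new_u) only if
    total server utilization still fits within processor capacity."""
    return sum(server_sizes) + new_u <= capacity

servers = [0.3, 0.25]           # sizes U of existing servers
ok = admit(servers, 0.4)        # 0.95 <= 1.0: admitted
full = admit(servers, 0.5)      # 1.05 >  1.0: rejected
```

Because each admitted application is confined to its server's size U, its tasks can be validated for schedulability in isolation, which is the first objective listed above.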
- Vincent Encontre
``How to use modeling to implement verifiable, scalable, and efficient real-time application programs''
Real-time Engineering, Fall 1997
- Vincent Encontre
``Modeling and Implementing Correct, Scalable and Efficient Real-Time Applications with ObjectGEODE''
Real-Time Magazine, First Quarter 1996, reprinted by Verilog USA
Describes Verilog's ObjectGEODE analysis, design, verification, and validation toolset. The verification and validation aspects are supported by rapid prototyping and simulation. ITU-T's Specification and Description Language (SDL) and Message Sequence Charts (MSC) are used to specify the system and verify properties. SDL is FSM based and MSC can be used to describe scenarios; the two can be used together to validate the expected system behavior. Looks very interesting.
- [FL:97]
Wu-chun Feng and Jane W.-S. Liu
``Algorithms for Scheduling Real-Time Tasks with Input Error and End-to-End Deadlines''
IEEE Trans. Software Engineering 23:2, February 1997
Describes real-time scheduling algorithms for preemptive, imprecise, composite tasks. Uses two-level scheduling: the higher level schedules tasks on a single processor taking into account their imprecise nature. The lower level scheduler allocates the time budgeted to a composite task across its component tasks.
Well presented and interesting, but primarily for systems that can take advantage of imprecise calculations where the more time that can be spent by a task on calculating a result, the better the result. Our application doesn't lend itself to that, at least initially. The lower level allocation across component tasks is similar to what we do when allocating CPU time to operations in a thread.
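The lower-level idea, budgeting a composite task's time across its components, can be sketched simply. This is an illustrative stand-in for the paper's allocation algorithm, using the standard imprecise-computation split of each component into a mandatory and an optional part (all numbers hypothetical):

```python
def allocate_budget(total, mandatory, optional):
    """Give each component task its mandatory time first, then spread
    the remaining slack over the optional parts in order; more optional
    time spent means a more precise result."""
    slack = total - sum(mandatory)
    assert slack >= 0, "budget cannot cover mandatory work"
    alloc = list(mandatory)
    for i, opt in enumerate(optional):
        give = min(opt, slack)
        alloc[i] += give
        slack -= give
    return alloc

# Composite task with budget 10: mandatory parts [2, 3, 1],
# optional parts of up to [4, 2, 5] additional units each.
alloc = allocate_budget(10, [2, 3, 1], [4, 2, 5])
```

This greedy first-fit distribution of slack is only one policy; the paper's algorithms weigh input error and end-to-end deadlines when deciding where the optional time does the most good.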