Tag: real-time

Related blog posts
  • Popularity 21
    2016-2-19 14:16
    969 reads
    0 comments
    No happy face for me a few days ago. I was faced with a conundrum that made my poor old noggin ache. The solution turned out to be simple, but the underlying problem has certainly given me pause for thought.

    It all started when I decided it was time to organize the morass of breadboard-based circuitry driving my Cunning Chronograph.

    (Source: Max Maxfield / EETimes.com)

    Eventually, I ended up with a jolly nice stack of boards as illustrated below. On the bottom we have an Arduino Mega. Sitting on this we have the same custom audio spectrum analyzer shield my chum Duane Benson created for my BADASS Display.

    (Source: Max Maxfield / EETimes.com)

    Second from the top we find a custom sensor shield, which currently carries only a real-time clock (RTC), but which is waiting to have a pressure/temperature sensor added along with a 9DOF (nine degrees of freedom) sensor boasting a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer.

    Finally, sitting proudly and pertly on top of the pile -- the source of my aforementioned problems -- we have an off-the-shelf Arduino proto-shield into which is connected the Simblee breakout board I'm using to control the Cunning Chronograph via Bluetooth by means of my iPad.

    Now, before we proceed further, take a look at the following rough illustration depicting the 29-GPIO Simblee breakout board mounted in the middle of the Arduino Uno proto-shield.

    (Source: Max Maxfield / EETimes.com)

    The Simblee requires a 3.3V supply. The Arduino Mega runs on 5V, but it does provide a 3.3V rail as illustrated above. The only Simblee pins that are directly connected to the Arduino at this stage are the 3.3V power and ground signals. When this shield is connected into the stack on top of the Arduino Mega, we also have 13 (lucky for some) of the Simblee's GPIOs (configured as OUTPUT) connected to the equivalent number of Arduino GPIOs (configured as INPUT_PULLUP). These signal connections are made using flying leads -- not the header pins used in the shield stack.

    So, here's the situation I ran into. When I first created my stack of shields, everything worked perfectly. This past weekend, however, I decided to move the power supply and electronics into the Cunning Chronograph's cabinet. I always underestimate how long this sort of thing will take. The thing is that I want it to look professional, so I take an inordinate amount of time cutting the wires to just the right length to form the wiring harness and stuff.

    Eventually, I sat back basking in the glow of a job well done, flicked on the power switch, and... nothing whatsoever happened. There was much gnashing of teeth and rending of garb at that time, let me tell you.

    Now, just to make sure we're all tap-dancing to the same drum beat, the following illustration provides a rough idea of the scene that faced me.

    (Source: Max Maxfield / EETimes.com)

    This is the view as seen looking into the back of the Cunning Chronograph. The power supply is mounted on the lower-right inside face of the cabinet (I couldn't mount it on the bottom because it would interfere with the plug-panel). Meanwhile, the Arduino stack is mounted on the upper-left inside face so as to keep it as far from the power supply as possible (I don't know if this matters, but I want to minimize any effects the power supply might have on my magnetometer).

    The long and short of it is that I whipped out my trusty multimeter and started to probe around.
    It didn't take long to discover that -- with my multimeter probes on the Simblee's power and ground pins -- I was seeing only 1.5V on the 3.3V supply pin to my Simblee breakout board.

    "Hmmm, that's funny," I thought (I was being ironic -- I saw no humor in this at all). Even more interesting was the fact that -- with my multimeter probes on the Arduino's 3.3V and ground pins -- I was seeing the full 3.3V.

    The next thing I did was to pull the shield carrying the Simblee off the board stack and to connect the 3.3V and GND header pins from the top of the stack via flying leads to the 3.3V and GND header pins on the shield carrying the Simblee. Eeek alors! Everything now worked as expected.

    "Ah ha! This was just a random glitch and 'one of those things' we'll never understand," I thought (hopefully) to myself as I happily reattached the shield to the stack and powered everything up again. Arrgghh! I was back to having only 1.5V showing on the Simblee's 3.3V power pin.

    How on earth (no pun intended) could I have 3.3V at the bottom of the stack and only 1.5V at the top of the stack, when the only items between the two extremes are the header pins forming the stack itself? Just where was the remaining 1.8V going?

    Suffice it to say that I eventually sorted this out and all is now as it should be, so now I'm wearing my happy face again. Unfortunately, I'm still not 100% sure as to the exact problem because there are three possibilities, and I didn't realize this until after I'd applied my fix, which -- by some happy quirk of fate -- could have addressed all three scenarios. Can you guess what these possibilities are?
  • Popularity 33
    2016-1-21 17:09
    1231 reads
    0 comments
    The real-time scheduler is the single most distinctive feature separating Windows Embedded Compact from all other Windows systems, and it is a part that deserves careful thought when developing an embedded system. Among the engineering definitions of "real-time", I particularly like the following:

    "A real-time system must satisfy explicit (bounded) response-time constraints or risk severe consequences, including failure." - Phillip A. Laplante, Real-Time System Design and Analysis

    In other words, code running in a real-time system must deliver a well-defined result within its specified time constraint, or the system may fail; real-time does not necessarily mean fast processing. Windows Embedded Compact meets this definition, so to understand it better, let us start with its task scheduling. The WinCE scheduling kernel inspects the ready tasks every 1 ms and decides which one to run according to two rules:

    a). The task with the higher priority runs first.
    b). Tasks of equal priority take turns, round-robin, in 100 ms time slices (or whatever interval the task quantum defines).

    For the first rule, WinCE provides 256 priority levels (0-255); the smaller the number, the higher the priority, so level 0 is the highest. How to apply priorities is not the focus of this article; please refer to: Real-Time Priority System Levels (Windows Embedded CE 6.0).

    For the second rule, when several pending tasks share the same priority, they execute round-robin in 100 ms slices (customizable via the thread quantum; a sketch of changing the quantum appears at the end of this post). For example, with N tasks of equal priority, once the first has run for the defined slice (which WinCE calls a quantum, e.g. 100 ms), the kernel interrupts it and runs the second, and so on, returning to the first task only after all N tasks have each had one quantum. Any thread runs for at most one quantum at a time; furthermore, if a higher-priority task needs the CPU, then, per the first rule, the running thread is preempted by that higher-priority task.

    Below we verify these two rules with two experiments. The hardware platform is a Toradex Colibri VF61 (NXP/Freescale Vybrid Cortex-A5) computer module on an Iris carrier board; the software is Toradex's industrial-grade WinCE6 OS for this platform together with its GPIO library.

    a). For the hardware and software setup, please refer to the development guide, as shown in the figure below.

    b). The test principle is to start two threads: one drives a chosen GPIO low, the other drives the same GPIO high. By first giving the two threads equal priorities and then different ones, and watching the GPIO on an oscilloscope, we can verify the two scheduling rules. The key code follows: the two thread entry functions, ThreadON and ThreadOFF, drive the selected GPIO high and low, respectively. Since each loops forever, observing the pin on the scope tells us which thread is currently running.

--------------------------------------------------------------------------------------------------------------
#include <windows.h>
#include "gpio.h"

// === Define constant pins / GPIOs ===
// SODIMM pin 101
uIo io1 = COLIBRI_PIN(101);
HANDLE hGpio;
HANDLE hThreadON, hThreadOFF;

// Define ThreadON
DWORD WINAPI ThreadON (LPVOID lpParam) {
    // Set thread priority
    CeSetThreadPriority(GetCurrentThread(), 100);
    Sleep(5); // Allow the other thread to configure its priority
    // Infinite loop
    while (1) {
        // Set GPIO logic high
        Gpio_SetLevel(hGpio, io1, ioHigh);
    }
    return 0;
}

// Define ThreadOFF
DWORD WINAPI ThreadOFF (LPVOID lpParam) {
    // Set thread priority
    CeSetThreadPriority(GetCurrentThread(), 100);
    // Infinite loop
    while (1) {
        // Set GPIO logic low
        Gpio_SetLevel(hGpio, io1, ioLow);
    }
    return 0;
}

//=============================================================================
// Application Entry Point
//
// The simple error handling using ASSERT statements is only effective when
// the application is run as a debug version.
//=============================================================================
int wmain(int argc, _TCHAR* argv[])
{
    BOOL success;

    // === Initialize GPIO library. ===
    // We don't use registry-based configuration, thus we can
    // pass NULL to Gpio_Init()
    hGpio = Gpio_Init(NULL);
    ASSERT(hGpio != 0);
    success = Gpio_Open(hGpio);
    ASSERT(success);

    // Configure the pin to act as GPIO (as opposed to an alternate function)
    // Set it to output, high
    Gpio_ConfigureAsGpio(hGpio, io1);
    Gpio_SetDir         (hGpio, io1, ioOutput);
    Gpio_SetLevel       (hGpio, io1, ioHigh);

    CeSetThreadPriority(GetCurrentThread(), 99);
    // Create two concurrent threads; one sets the GPIO high, the other low
    hThreadON  = CreateThread(NULL, 0, ThreadON,  NULL, 0, NULL);
    hThreadOFF = CreateThread(NULL, 0, ThreadOFF, NULL, 0, NULL);
    // Give the threads time to run, then finish the program
    Sleep(3000);

    return(TRUE);
}
--------------------------------------------------------------------------------------------------------------
    c). First we test the second rule: set both threads to the same priority (100, as in the code above). Running the program yields the oscilloscope trace below; the GPIO output alternates every 100 ms, exactly as the second rule predicts.

    d). Next we test the first rule. Modify one thread (ThreadON) as shown below: raise its priority to 99 and add a 5 ms sleep inside the loop.

--------------------------------------------------------------------------------------------------------------
DWORD WINAPI ThreadON (LPVOID lpParam) {
    // Set thread priority
    CeSetThreadPriority(GetCurrentThread(), 99);
    Sleep(5); // Allow the other thread to configure its priority
    // Infinite loop
    while (1) {
        // Set GPIO logic high
        Gpio_SetLevel(hGpio, io1, ioHigh);
        Sleep(5);
    }
    return 0;
}
--------------------------------------------------------------------------------------------------------------

    e). Running the modified program gives the scope output below: a high pulse appears roughly every 7 ms. Every time the high-priority thread (ThreadON) wakes from its 5 ms sleep, it preempts the execution of the low-priority thread, exactly as the first rule describes.

    Of course, all the tests above ran on a single-core system. Starting with Windows Embedded Compact 7, the kernel supports multi-core processors and adds a new "affinity" attribute that defines which core executes which thread. If you run the example above on a multi-core system under WEC7 without pinning both threads to the same core, the result will differ, because the two threads run simultaneously on different cores. That said, in normal applications we do not recommend setting "affinity": doing so prevents the kernel scheduler from automatically dispatching threads to the first idle core, defeating the goal of reducing latency and improving system performance.

    Real-time systems are in wide demand in embedded devices across industrial automation, robotics, medical, and other fields. Understanding how the WinCE real-time scheduler works, and how to use threads accordingly, lets our applications execute in real time and reliably, so we can build real-time systems on WinCE more efficiently and dependably!
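    As a footnote to rule b): if the default 100 ms slice is too coarse for your application, the quantum can be changed per thread. Below is a minimal sketch, assuming the CeGetThreadQuantum()/CeSetThreadQuantum() APIs documented for Windows Embedded Compact are available on your BSP (check your platform headers); a quantum of 0 lets a thread run to completion.

--------------------------------------------------------------------------------------------------------------
#include <windows.h>

// Sketch: shrink the current thread's round-robin time slice from the
// default (typically 100 ms) down to 10 ms.
void ShrinkQuantum(void)
{
    HANDLE hThread   = GetCurrentThread();
    DWORD  dwQuantum = CeGetThreadQuantum(hThread);   // read current quantum (ms)

    if (!CeSetThreadQuantum(hThread, 10)) {
        RETAILMSG(1, (TEXT("CeSetThreadQuantum failed\r\n")));
    } else {
        RETAILMSG(1, (TEXT("Quantum changed from %u ms to 10 ms\r\n"), dwQuantum));
    }
}
--------------------------------------------------------------------------------------------------------------

    With a 10 ms quantum, the equal-priority experiment in c) should show the GPIO alternating every 10 ms instead of every 100 ms.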
  • Popularity 28
    2015-7-9 19:20
    1422 reads
    0 comments
    My chum, Rich Quinnell, recently wrote an article titled: Embedded systems survey uncovers trends, concerns for engineers. Based on this, Bob Snyder emailed Rich some very interesting comments and questions.

    Rich brought me into the conversation and -- with Bob's permission -- we decided I should post this blog presenting his comments and questions, and soliciting input from the community. So here is Bob's message:

    Richard, I really appreciate the hard work that the folks at UBM do to provide the embedded survey results. It seems as though non-real-time, soft real-time, and hard real-time applications have very different requirements. I have been closely studying the survey results for many years and trying to understand (or imagine) how the responses to some of the questions might be correlated.

    For example, it would be nice to know how the overall movement to 32-bit MCUs (most of which have cache) breaks down by application (e.g., non-real-time, soft real-time, and hard real-time). Are popular 32-bit MCUs, such as ARM and MIPS, being widely adopted for hard real-time applications where worst-case execution time is at least as important as average-case execution time, and where jitter is often undesirable? If so, do people disable the cache in order to achieve these goals, or simply throw MIPS at the problem and rely upon statistical measures of WCET and jitter?

    Performance penalty of completely disabling the cache

    Microchip's website explains how to completely disable the cache on a PIC32MZ (MIPS M14K core). The article says that doing this will reduce performance by a factor of ten: "You probably don't want to do this because of lower performance (~10x) and higher power consumption."

    Somebody at the University of Toronto ran a large set of benchmarks comparing various configurations of a specific ARM processor. When they compared a fully-enabled cache configuration to an L2-only configuration, the L2-only setup was six times slower (shown in the red rectangles below). It seems reasonable to assume that if L2 had also been disabled, performance would have been even worse.

    Based upon this data, it seems reasonable to conclude that when the cache is completely disabled on a 32-bit micro, the average performance is roughly ten times worse than with the cache fully enabled.

    Why would anyone use a cache-based MCU in a hard real-time application?

    The fastest PIC32 processor (the PIC32MZ) runs at 200 MHz. With the cache fully disabled, it would effectively be running at 20 MHz. The 16-bit dsPIC33E family runs at 70 MHz with no cache. Admittedly, the dsPIC will need to execute more instructions if the application requires arithmetic precision greater than 16 bits. But for hard real-time applications that can live with 16-bit precision, the dsPIC33E would seem to be the more appropriate choice.

    I am having trouble understanding the rationale for using an ARM or PIC32 in a hard real-time application. These chips are designed with the goal of reducing average-case execution time at the expense of increased worst-case execution time and increased jitter. When the cache is disabled, they appear to have worse performance than devices that are designed without cache.

    Atmel's 32-bit AVR UC3 family has neither an instruction cache nor a data cache, so this is not a 32-bit issue per se. But it seems that the majority of 32-bit MCUs do have cache and are targeted at soft real-time applications such as graphical user interfaces and communication interfaces (e.g.,
    TCP/IP, USB) where WCET and jitter are not major concerns.

    Breakdown by market segment

    It seems to me that there will always be a large segment of the market (e.g., industrial control systems) where hard real-time requirements would militate against the use of a cache-based MCU. It would be interesting to see the correlation between choice of processor (non-cached vs. cached, or 8/16 vs. 32 bits) and the application area (soft real-time vs. hard real-time, or GUI/communications vs. industrial control). I wonder if it would be possible to tease that out of the existing UBM survey data.

    Looking at the 2014 results

    With regard to the question "Which of the following capabilities are included in your current embedded project?" we see that over 60 percent of projects include real-time capability. The question does not attempt to distinguish between hard and soft real-time. And the 8/16/32-bit MCU question does not distinguish between cached and non-cached processors. Nevertheless, it might be interesting to see how the 8/16/32-bit responses correlate with the real-time and non-real-time responses, or the signal-processing responses. I find it hard to believe that a large number of projects are using cached 32-bit processors for hard real-time applications.

    It is interesting to note that every response for the capabilities question shows a falling percentage between 2010 and 2014. This suggests that other categories may be needed. I suppose it is possible that fewer projects required real-time capabilities in 2014, but it seems more likely that there was an increase in the number of projects that required other capabilities, such as Display and Touch, which are not being captured by that question.

    Thanks for considering my input, Bob Snyder.

    Well, it's certainly true that non-real-time, soft real-time, and hard real-time applications have different requirements, but I'm not sure how best to articulate them. Do you have expertise in this area? Do you know the answers to any of Bob's questions? How about Bob's suggestions as to how we might consider refining our questions for the 2016 survey? Do you have any thoughts here? If so, please post them in the comments below.
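    As one concrete reference point for the "statistical measures of WCET and jitter" Bob mentions: the usual bench approach is to time the routine of interest over many runs with a free-running cycle counter and record the spread. A minimal sketch follows; read_cycle_counter() and task_under_test() are hypothetical stand-ins for a platform-specific counter (a core cycle counter or timer register) and the code being measured. Note that a measured maximum is only a statistical bound -- with caches enabled, the true WCET can still exceed anything observed, which is exactly Bob's concern.

--------------------------------------------------------------------------------------------------------------
#include <stdint.h>

// Hypothetical platform hooks (not a real library API):
extern uint32_t read_cycle_counter(void);  // free-running CPU cycle counter
extern void     task_under_test(void);     // the routine being characterized

// Run the task 'runs' times; report the best and worst observed cycle
// counts. (worst - best) is the observed execution-time jitter.
void measure_jitter(uint32_t runs, uint32_t *best, uint32_t *worst)
{
    *best  = 0xFFFFFFFFu;
    *worst = 0;
    for (uint32_t i = 0; i < runs; i++) {
        uint32_t t0 = read_cycle_counter();
        task_under_test();
        uint32_t dt = read_cycle_counter() - t0;  // unsigned math handles wrap
        if (dt < *best)  *best  = dt;
        if (dt > *worst) *worst = dt;
    }
}
--------------------------------------------------------------------------------------------------------------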
  • Popularity 20
    2013-10-4 15:40
    1706 reads
    0 comments
    Time is a simple yet complex concept. In its simplest form, it's the counting of a regular, repeating pattern. On the opposite end, we would have to turn to Einstein. Thankfully, this story resides on the simple side.

    You see, before the advent of the real-time clock, electrical engineers scurried hither and yon in search of ways to keep time accurately in their circuits. One very convenient source of a regular, repeating pattern was the 60Hz line frequency coming from the wall outlet. It was a very stable source in the US, and so long as it was stable, your machine would work fine. But if ever a crack appeared in the stability of the 60Hz source, the entire machine could be compromised. This was the case on the idle Tuesday morning our story begins.

    Giant pillars of white smoke soared high in the air from the large, concrete stacks that reached up into the clear blue sky. It was easy to find parking in the large lot when I arrived at the paper-making plant nestled on the outskirts of Richmond, Va. The smell of sulphur was thick, and the rumble of the large trucks carrying trees filled the air as they entered the plant. It was a short walk to the visitor's desk, where I was met by the man who had called me a few days earlier. "Damn thing doesn't work," he had said. I get that a lot.

    He led me to the not-working microwave solids analyser. This simple machine consists of a microwave, a balance, and an embedded computer. It heats a sample to remove all the moisture, and the computer uses the difference in weight to calculate the per cent moisture in the sample.

    I placed a test sample in the machine and pressed the start button. Immediately, problems were apparent. On the grey-scale display, the system clock was counting down the time. The problem was it was counting way too fast. It had counted down more than two minutes, but my gut, verified by a clock on the cinderblock wall, told me less than a minute had gone by. I pressed the stop button, and another problem revealed itself. Instead of the usual single beep, I got three. It acted as if I had pressed the button a couple of times. It appeared I had debouncing as well as timing issues. It was then that I clued in on the A/C line and took a closer look. I was in for a shock.

    This video and the scope screen captures above were taken directly from the 120V wall socket via a 10-to-1 stepdown wall-wart transformer using a 10x probe. As you can see, the signal is very dirty. I concluded the source of the noise probably came from one of the large paper-rolling machines running elsewhere in the plant.

    I had identified the problem. But I still did not know how the dirty signal was causing the timing and debouncing issues, and how to fix them. I had to answer the first question before I could even begin to approach the second. Like any good technician, I dug into the schematic. It wasn't long until I discovered a small circuit that appeared to be converting the A/C line frequency into a 5V clock signal. The output was routed to one of the pins of the 80188 Intel processor.

    I knew I had to get a scope on pin 6 of the opto-coupler (U7) to see if the noise on the A/C line was getting through.

    So the noise from the A/C line was getting through the opto-coupler in the form of problematic leading and trailing edges. It was so bad that the scope had a hard time locking on to the frequency, as seen above. I did not have access to firmware, but my theory was sound.
    I believed that the 80188 computer was using this very signal to generate its definition of time -- every 120 leading-edge counts equaled one second in the firmware (a sketch of such an edge-counting scheme appears at the end of this post). And in this case, all the spikes on the leading edges were being counted as well. This would cause the system's internal clock to run too fast.

    Also, I knew the switch debouncing was done in firmware. This code is dependent on accurate delay times. If the system clock were running too fast, that would mean the delay times would be too short, and the debouncing code would not work properly. Yessss! I had found the source of the problem, theoretically at least. Now, how to fix it.

    Long story short, I concluded that it would be easiest to simply replace the signal. I would remove the opto-coupler IC and use a microcontroller to generate a clean signal and route it to the processor.

    Above is the schematic. (I realised I had taken the scope readings in the previous images off U6 instead of U7. U7 produces a 10 per cent duty cycle 120Hz signal, as opposed to 50 per cent.) Below is the code to generate the 10 per cent duty cycle 120Hz signal.

while (true)
{
    output_high(PIN_A2);   // leading edge: drive the clock line high
    delay_us(833);         // high for 10 per cent of the 8333us period
    output_low(PIN_A2);
    delay_us(7500);        // low for the rest: 833 + 7500 = 8333us, i.e. 120Hz
}

    I made a board using toner transfer. I removed the opto-coupler, and used the empty socket to supply power to my board and route the clean signal to the processor.

    I installed it in the instrument the next day, and everything worked perfectly. The timing was accurate, and there were no more debouncing issues. I had the board fabricated by a board house and installed it a few weeks later. The instrument has been working well ever since.

    This article was submitted by Will Sweatman, a field engineer, as part of Frankenstein's Fix, a design contest hosted by EE Times (US).
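    As an aside, here is a minimal sketch of the edge-counting timekeeping scheme described above. The actual 80188 firmware was not available, so the names and the interrupt hookup are hypothetical; the point is simply that every extra edge -- including noise spikes -- advances the clock.

--------------------------------------------------------------------------------------------------------------
#include <stdint.h>

// Hypothetical interrupt handler fired on each leading edge of the
// 120Hz line-derived clock signal. With a clean signal it runs exactly
// 120 times per second; spikes on the edges fire it extra times, so the
// seconds counter -- and every firmware delay derived from it -- runs
// fast. That is precisely the failure the analyser exhibited.
static volatile uint8_t  edge_count;
static volatile uint32_t uptime_seconds;

void line_edge_isr(void)
{
    if (++edge_count >= 120) {   // 120 leading edges = one second
        edge_count = 0;
        uptime_seconds++;
    }
}
--------------------------------------------------------------------------------------------------------------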
  • Popularity 23
    2012-5-15 17:38
    1687 reads
    0 comments
    For several months I have been discussing with embedded systems engineers the whys, whats, and hows of using Google's Android -- a Linux-based mobile software platform -- in embedded systems, particularly in applications requiring real-time and deterministic operation. I have been curious to find out: why all the interest in Android?

    Part of this interest has to do with the historical relationship of mobile devices -- and the Android platform -- to embedded and real-time OSes, which strikes me as being analogous to the relationship between trucks and sport utility vehicles (SUVs). When SUVs were first conceived, they were designed by truck designers, built by truck workers on truck production lines, and used truck wheels, frames, and components. However, SUVs soon began to diverge, and developed their own unique characteristics. But there was enough shared history and similarity in design that SUVs became a target market for anyone who was in the truck business. Now, in turnabout fashion, truck designers are eyeing some of the SUV enhancements and design approaches and incorporating them back into trucks.

    That pattern is mirrored in the recent history of mobile/embedded devices, particularly as it relates to Android smartphones and tablet devices. As distinct a platform as mobile devices have become, their beginnings were in the embedded space. The developers I met with who crowded into the Android classes at the Spring ESC DESIGN West thought they had a lot of embedded design expertise that would be useful in the small-footprint, resource-constrained mobile device segment as well. More tantalizing is the reverse possibility -- adapting Android for embedded applications.

    For another analogy, it is useful to look back at the evolution of the x86-based PC platform. Before the introduction of the Microsoft OS-based IBM PC and its AT bus architecture, there were dozens of desktop computer vendors using technology derived from the embedded market that had existed for a decade before the PC's introduction. After the standardisation on the AT/Windows platform, dozens of companies and engineers then reversed the flow by developing variations and extensions of the platform for use in embedded applications.

    We are at that last stage with Android, with growing interest in creating a variation of that platform supporting real-time deterministic app development. But is this a realistic expectation? And if so, what will it take to get there? Or will it become yet another tantalizing but impossible dream?

    The consensus of the developers with whom I've discussed this seems to be that, without modification, Android would not be a useful vehicle for supporting real-time deterministic embedded apps, no matter how broadly that term is defined. It's not only questionable whether Android can achieve even soft real-time deterministic responses to interrupts, but even if it can, how reliable it will be in doing so, especially when a design is heavily loaded with apps that demand -- not merely politely request -- access to the underlying hardware/software resources.

    The challenges facing Android in embedded apps

    The problems with Android seem to fall into two basic categories: the responses of the Linux kernel (Version 2.6) at the core of the platform, and the responses of the Android-optimised Dalvik Java virtual machine. There are also some concerns about the Java code wrapper that surrounds the Linux kernel and is the API through which most development is done.
    Engineers I have talked to have assured me that, taken individually, each is manageable. But together, they play havoc with the Android platform's ability to achieve the real-time responsiveness and interrupt response times needed for hard-core industrial, automotive, and even some consumer apps. That has not prevented some of you from trying, despite the fact that any Android application has multiple levels at which interrupt responses can be compromised. First is the latency introduced by the Linux kernel when it handles an interrupt. Then there is the latency added by the Dalvik VM, specifically the time interval between when an interrupt is received at the kernel event management level and when it is then passed up to the application running on top of the VM.

    Particularly troubling to developers is the variability in the latency responses. In most cases, the degree of variation goes beyond the upper limit on how much can be tolerated in real-time applications. The preliminary numbers about Android's real-time performance vary considerably, depending on the source. In general, though, when the application is relatively simple and the system is lightly loaded with no more than 1 to 5 interrupts per second, and the frequency of interrupts is in the 10 to 30Hz range, the number of deadline misses is troubling, but manageable. But when the system is operating under heavy loads with a dozen or so interrupts per second, and the frequency of interrupts is in the 1 to 5kHz range, the Android OS performance degrades drastically.

    Proposed solutions -- and are they practical?

    One option for making Android more real-time that often came up in my conversations is to remove the Linux kernel at Android's heart, along with its Completely Fair Scheduler (CFS), and replace it with a real-time version of Linux. The aim of CFS is to provide a "fair" balance between tasks assigned to a processor and in the adjustments that have to be made when tasks are waiting for an I/O device. This is fine for normal, human-interaction-related Android tasks, but deadly to determinism, even of the soft real-time variety. By substituting a real-time Linux kernel, the inherent predictability and determinism it brings to Android would allow the use of real-time scheduling classes, do a better job of anticipating priority inversion, and allow more carefully crafted strategies for resource management.

    An alternative would be to substitute a real-time Java virtual machine for the existing Dalvik virtual machine, which would allow incorporation of bounded memory management, real-time scheduling, better synchronisation, and avoidance of priority inversion conditions. But these advantages would be limited to the VM; it would have no impact on the way the Linux OS performs. Full advantage of this option is obtained only when it is used in combination with a real-time Linux kernel.

    Some developers think it would be a better idea to extend the real-time capabilities of the Dalvik virtual machine through the use of the Real-Time Specification for Java (RTSJ) rather than the standard Java dialect, which would make it possible to introduce more deterministic operations into the VM layer without replacing Dalvik.

    Another possibility would be to use a real-time hypervisor virtual machine monitor, and allow Android to run in one of the partitions as a guest OS and real-time applications to run in the other.
    The advantage of this approach is that it virtually guarantees that any and all real-time tasks have priority over Linux kernel tasks, which means that it can be used for hard real-time deterministic operations. Unfortunately, this approach has two gotchas. First, real-time applications are limited to what is provided by the real-time hypervisor. Second, if a real-time application in the real-time hypervisor hangs up, the whole system is brought to a standstill.

    An alternative that intrigues me is the use of a rate monotonic scheduling algorithm (RMA). The idea is to keep the standard Linux distribution but replace CFS with a real-time scheduler, ideally one using a rate monotonic algorithm; the classic rate monotonic schedulability test is sketched at the end of this post. (The use of RMAs in embedded design has been written about by Embedded.com columnist Michael Barr.) It is also used in a number of RTOSes, including the open-source RTEMS and Deos from DDC-I.

    Invitation to the dialogue

    Looked at from the mile-high technical journalist's perspective, all the alternatives seem to me to be doable technically, albeit with their own degrees of difficulty. No doubt when viewed up close and personal, numerous problems are more readily apparent. For that on-the-ground perspective, I would like to hear from those of you staring the problems in the face.

    Unfortunately, no matter what the perspective, one stumbling block to any of these alternatives is that they would undercut the universality and cross-platform usability of applications written for use on Android. To complicate matters, after a two-year ban during which Android developers were not permitted to contribute code to the open-source Linux project, they have now been added back into Version 3.3's staging area by project founder Linus Torvalds. In the past, this has usually been the final step before contributed code is merged into the main Linux tree. Whether this will mean that Linux -- and thus Android -- will be more amenable to real-time modifications, or less, is hard to say. But it is a complicating wrinkle.

    As Yogi Berra said, it is déjà vu all over again, a replay of what happened after embedded developers started taking the PC/MS Windows platform and adapting it to their needs. The more deterministic and real-time they made their PC/MS designs, the less compatibility they had with unmodified versions of the platform. All developers then had was the comfort of a familiar hardware and software environment within which to do development. But there was little cross-platform compatibility. You could not take an app that ran on a standard platform and be assured it would run on the modified one, and vice versa.

    But in terms of a real-time implementation of Android, maybe the availability of the easy-to-use, easy-to-program platform is enough. Cross-platform compatibility for apps from either implementation -- standard or real-time -- would be nice, of course, but is it necessary? What do you think?
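    For reference, the rate monotonic analysis mentioned above rests on the classic Liu & Layland result: a set of n independent periodic tasks, each with worst-case execution time Ci and period Ti, is schedulable under fixed rate monotonic priorities if the total utilization stays below n(2^(1/n) - 1), which approaches about 69% for large n. Below is a minimal sketch of the test; it is illustrative code, not tied to any particular RTOS.

--------------------------------------------------------------------------------------------------------------
#include <stdio.h>
#include <math.h>

// Liu & Layland sufficient test for rate monotonic scheduling:
// schedulable if U = sum(C[i]/T[i]) <= n * (2^(1/n) - 1).
// Passing the test guarantees schedulability; failing it is
// inconclusive (an exact response-time analysis would be needed).
int rma_schedulable(const double *C, const double *T, int n)
{
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f\n", U, bound);
    return U <= bound;
}

int main(void)
{
    // Example: 1ms of work every 4ms, 2ms every 10ms, 1ms every 20ms
    double C[] = { 1.0, 2.0, 1.0 };
    double T[] = { 4.0, 10.0, 20.0 };
    printf("%s\n", rma_schedulable(C, T, 3) ? "schedulable" : "inconclusive");
    return 0;
}
--------------------------------------------------------------------------------------------------------------

    Here U = 0.25 + 0.2 + 0.05 = 0.5, and the three-task bound is 3(2^(1/3) - 1) ≈ 0.780, so this example passes.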
Related resources
  • E-coins required: 4
    Date: 2019-12-26 01:41
    Size: 59.71KB
    Uploader: 微风DS
    The next ten years will see distributed real-time computer systems replacing many mechanical and hydraulic control systems in high-dependability applications. In these applications a failure in the temporal domain can be as critical as a failure in the value domain. This paper discusses some of the technology trends that explain why distributed embedded real-time systems for high-dependability applications will move into the mainstream. It then investigates the new requirements that must be addressed by the software engineering process. Two of the most important requirements are the design for composability and the systematic validation of high-dependability distributed real-time systems. In the last two sections, these issues of composability and validation are treated in some detail.……
  • E-coins required: 5
    Date: 2019-12-26 01:06
    Size: 5.16MB
    Uploader: givh79_163.com
    μC/OS-II V2.5 (source code) + μC/OS-II: The Real-Time Kernel (Chinese-English parallel text)……
  • E-coins required: 3
    Date: 2019-12-25 22:44
    Size: 216.25KB
    Uploader: 16245458_qq.com
    DSP/BIOSisTI’sreal-timeembeddedkernelfortheTMS320C5000andTMS320C6000digitalsignalprocessors(DSPs).UnderstandinghowtobuildapplicationsusingtheDSP/BIOSkernelrequiresadoptingamulti-threadeddesignparadigmthatatfirstseemsforeignfornewusersofDSPsandveteranDSPdevelopersalike.However,onceunderstood,designingDSP/BIOSapplicationsbecomesstraightforward,andyouwillfinddevelopingapplicationseasiertodesign,debug,maintain,andextend.Areal-timekernelsuchasDSP/BIOShelpstofactortheproblemandfacilitatearobustdesignthatiseasilymaintainable.……
  • E-coins required: 3
    Date: 2019-12-28 23:47
    Size: 42.5KB
    Uploader: rdg1993
    Recent U.S. legislation increased the daylight saving time (DST) period by four weeks. This change will affect systems that adjust for DST by using real-time clocks (RTCs) with an integrated DST adjustment. This note discusses issues related to the DST change, shows how to test and adjust the RTC for DST, and recommends how to manage DST in future designs.……
  • E-coins required: 3
    Date: 2019-12-25 17:01
    Size: 67.87KB
    Uploader: 2iot
    Implementation of real-time systems……
  • E-coins required: 4
    Date: 2019-12-28 23:54
    Size: 173KB
    Uploader: quw431979_163.com
    This application note provides an example of hardware and software for interfacing an SPI real-time clock (RTC) to a Motorola® digital signal processor (DSP) that has a built-in SPI-interface module. This example uses a Motorola DSP Demo Kit as the basis for the circuit.……
  • E-coins required: 3
    Date: 2019-12-25 15:52
    Size: 491.09KB
    Uploader: 238112554_qq
    Describes the overall structure of the system, including an underwater pressure acquisition subsystem, a shipboard data aggregation and transmission subsystem, and an indoor control and data recording subsystem. It then details the hardware and software design of the shipboard data aggregation and transmission subsystem, which uses an FPGA and the VxWorks embedded operating system... (Journal of Basic Science and Engineering, Vol. 14, No. 2, June 2006)……
  • E-coins required: 4
    Date: 2019-12-25 10:46
    Size: 1.96MB
    Uploader: 16245458_qq.com
    eXpress……
  • E-coins required: 5
    Date: 2019-12-25 10:46
    Size: 1.89MB
    Uploader: 238112554_qq
    biossell……
  • E-coins required: 3
    Date: 2019-12-24 18:39
    Size: 54.18KB
    Uploader: 微风DS
    Maxim Application Note 1793 (Dec 03, 2002): Lithium Coin-Cell Batteries: Predicting An Application Lifetime. Keywords: lithium, battery, rtc, real-time clock. Abstract: The typical specification for lithium coin-cell batteries has been to provide a 10-year battery lifetime in the absence of system power. End users should evaluate the anticipated lifetime in their specific application, especially for those that exceed typical commercial environments or that need to last more than 10 years. This article gives the reader an overview of the major factors affecting the lifetime of an IC that can be powered by either……
  • E-coins required: 5
    Date: 2019-12-24 17:57
    Size: 55.63KB
    Uploader: 978461154_qq
    71M651X Energy Meter ICs (a Maxim Integrated Products brand), Application Note AN_651X_056, January 2011: Operation with Batteries. This document describes the proper production sequence when using the electricity metering ICs of the 71M651X family (71M6511, 71M6513, 71M6515H) in conjunction with a battery. Background on battery operation: some types of meters are equipped with batteries, mostly to support TOU (time-of-use) functions. In these meters the……
  • E-coins required: 3
    Date: 2019-12-24 17:58
    Size: 175.81KB
    Uploader: wsu_w_hotmail.com
    71M6521 Energy Meter IC (a Maxim Integrated Products brand), Application Note AN_6521_035, April 2009: Real Time Clock Compensation. This document describes how to use software to compensate the real-time clock (RTC) in Teridian meter chips. The sample code discussed is from the demonstration code for the 71M6521FE, but similar principles may be used with the 71M6521DE, 71M6523, 71M6511, 71M6513 and other Teridian meter products.……
  • E-coins required: 5
    Date: 2019-12-24 17:57
    Size: 160.03KB
    Uploader: 二不过三
    71M653X (a Maxim Integrated Products brand), Application Note AN_653X_003, June 2011: RTC Compensation. Introduction: this document describes methods to create an accurate real-time clock (RTC) using the 71M653x-family Teridian Energy Meter ICs. The sample code discussed is from the released demo code for the 71M6533, but similar techniques may be used with the 71M6531, 71M6532, or 71M6534 Teridian meter products. Theory of operation: there are two basic ways to make the RTC accurate: correct the RTC for all anticipated errors, or derive the RTC from the line frequency. Whichever method is used, the RTC runs with its adjusted accuracy during power outages.……
  • E-coins required: 3
    Date: 2019-12-24 17:57
    Size: 92.45KB
    Uploader: wsu_w_hotmail.com
    71M65XX Energy Meter ICs (a Maxim Integrated Products brand), Application Note AN_65XX_054, November 2008: The Real-Time Clocks of the 71M65XX Metering ICs. This document describes how to use the real-time clock (RTC) circuits of the three families of Teridian meter SOCs, the 71M651X, 71M652X and 71M653X series, whose RTCs are trimmed to better than 5 ppm and provide excellent timekeeping. In addition, recommendations are given for sequencing production processes of meters containing batteries.……
  • E-coins required: 4
    Date: 2019-12-24 17:56
    Size: 191.08KB
    Uploader: 978461154_qq
    71M651x Energy Meter ICs (a Maxim Integrated Products brand), Application Note AN_651X_009, May 2010: Temperature Compensation for the 71M651x/651xH, 71M652x, and 71M653x Energy Meter ICs. Meters based on the 71M6511H, 71M6513H, 71M6533H, and 71M6534H are expected to be accurate within 0.126% over the industrial temperature range. The 71M6511, 71M6512, 71M6521, 71M6523, 71M6531 and 71M6513 ICs provide 0.5% accuracy. For both types of ICs, the high accuracy is achieved using temperature-correction techniques; this application note describes how the devices can compensate for the effects of temperature.……
  • E-coins required: 5
    Date: 2020-1-16 12:34
    Size: 2.76MB
    Uploader: rdg1993
    (Kluwer) Real-Time Video Compression: Techniques and Algorithms, by Raymond Westwater and Borko Furht (Florida Atlantic University). The Kluwer International Series in Engineering and Computer Science: Multimedia Systems and Applications, consulting editor Borko Furht.……
  • E-coins required: 5
    Date: 2020-1-6 13:06
    Size: 79.45KB
    Uploader: 2iot
    DS1307Z……
  • E-coins required: 4
    Date: 2019-12-29 00:03
    Size: 49KB
    Uploader: 微风DS
    This application note presents an overview of the DS32X35 family of products. These devices provide an accurate Real-Time Clock with Ferroelectric Random Access Memory (RTC+FRAM) that does not require a battery to maintain its contents.……