Tag: c

Related posts
Related blog posts
  • Popularity 2
    2016-3-19 17:37
    1413 reads
    2 comments
    A quick Google for something like "Python vs. C" will yield lots of comparisons out there. Sad to relate, however, trying to work out which is the "best" language is well-nigh impossible for many reasons, not the least that it's extremely difficult to define what one means by "best" in the first place.

    One aspect of all this that doesn't seem to garner quite so much discussion is Python versus C in the context of embedded systems -- especially small microcontroller (MCU)-based applications for "things" that are intended to be connected to the Internet of Things (IoT) -- so that's what I'm poised to ponder here, but first...

    ...it's interesting to note that there are literally hundreds and hundreds of different programming languages out there. If you check this Wikipedia page, for example, you'll find 54 languages that start with the letter 'A' alone (don't get me started on how many start with 'C' or 'P'), and this list doesn't even include the more esoteric languages, such as Whitespace, which uses only whitespace characters (space, tab, and return), ignoring all other characters, but we digress...

    The reason I'm waffling on about this here is that I just finished reading a brilliant book called Learn to Program with Minecraft by Craig Richardson. This little scamp (the book, not Craig) focuses on teaching the Python programming language, and it offers the most user-friendly, intuitive, and innovative approach I've seen for any language.

    As part of my review I said: "Now, I don't wish to wander off into the weeds debating the pros and cons of languages like C and Python here -- that's a separate column in its own right." Well, I'm not outrageously surprised to discover that I was 100% correct, because this is indeed a separate column in its own right (LOL).

    Now, I'm not an expert programmer by any stretch of the imagination, but I do dabble enough to be dangerous, and I think I know enough about both Python and C to be very dangerous indeed. There are myriad comparisons that can be made between these two languages; the problem, oftentimes, is working out what these comparisons actually mean. It's very common to hear that C is statically typed while Python is dynamically typed, for example, but even getting folks to agree on what these terms mean can be somewhat problematical.

    Some folks might say: "A language is statically typed if the types of any variables are known at compile time; by comparison, it's dynamically typed if the types of any variables are interpreted at runtime." Others, like the folks at Cunningham & Cunningham (I can never tell those two apart), might respond that static typing actually means that "...a value is manifestly (which is not the same as at compile time) constrained with respect to the type of the value it can denote, and that the language implementation, whether it is a compiler or an interpreter, both enforces and uses these constraints as much as possible." Well, I'm glad we've cleared that up (LOL).

    Another comparison we commonly hear is that C is weakly typed while Python is strongly typed. In reality, weak versus strong typing is more of a continuum than a Boolean categorization. If you read enough articles, for example, you will see C being described as both weakly and strongly typed depending on the author's point of view.
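    To make the compile-time end of that comparison concrete, here's a minimal C sketch of my own (not taken from the article) that mirrors the Python snippet coming up next. The type of bob is pinned down when it is declared, so the compiler can check every later use before the program ever runs -- while the silent float-to-int conversion hints at why C also gets called weakly typed:

    #include <stdio.h>

    int main(void)
    {
        int bob = 6;               /* bob's type is fixed as int at compile time        */
        /* bob = "My name is Bob";    a C compiler flags this (pointer assigned to int) */
        bob = 6.5;                 /* accepted, but silently truncated to 6 --          */
                                   /* the "weak" end of C's typing continuum            */
        printf("bob = %d\n", bob);
        return 0;
    }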
    Furthermore, if you accept the definition of strong typing as being "The type of a value doesn't suddenly change," then how do you explain the fact that you can do the following in Python:

    bob = 6
    bob = 6.5
    bob = "My name is Bob"

    In reality, what we mean by "strongly typed" is that, for example, a string containing only digits (e.g., "123456") cannot magically transmogrify into a number without our performing an explicit operation to make it do so (unlike in Perl, for example). In the case of the above code snippet, all we're saying is that the variable bob can be used to represent different things at different times. If we used the function type(bob) after bob = 6, then it would return int (integer); after bob = 6.5 it would return float (floating-point number); and after bob = "My name is Bob" it would return str (string).

    One thing on which we can all agree is that C doesn't force you to use indentation while -- putting it simplistically -- Python does. This is another topic people can get really passionate about, but I personally think we can summarize things by saying that (a) if you try, you can write incredibly obfuscated C code (there's even an International Obfuscated C Code Contest -- need I say more) and (b) Python forces you to use the indentation you would have (should have) used anyway, which can't be a bad thing.

    Another thing we can agree on is that C is compiled while Python is interpreted (let's not wander off into the weeds with just-in-time (JIT) compilation here). On the one hand, this means that a program in Python will typically run slower than an equivalent program in C, but this isn't the whole story because -- in many cases -- that program won't be speed/performance bound. This is especially true in the case of applications running on small MCUs, such as may be found lurking in the "things" portion of the IoT.

    I feel like I've already set myself up for a sound shellacking with what I've written so far, so let's go for broke with a few images that depict the way I think of things and the way I think other people think of things, if you see what I mean. Let's start with the way I think of things prior to Python, circa the late 1980s and early 1990s. At that time, I tended to visualize the programming landscape as looking something like the following:

    The way I used to think of things circa 1990 (Source: Max Maxfield / Embedded.com)

    Again, I know that there were a lot of other languages around, but -- for the purposes of this portion of our discussions -- we're focusing on assembly language and C. At that time, a lot of embedded designers captured their code in assembly language. There were several reasons for this, not the least that many early MCU architectures were not geared up to obtaining optimum results from C compilers.

    Next, let's turn our attention to today and consider the way I think other people tend to think of things today with respect to Python and C. Obviously processors have gotten bigger and faster across the board -- plus we have multi-core processors and suchlike -- but we're taking a "big picture" view here. As part of this, we might note that -- generally speaking -- assembly now tends to be used only in the smallest MCUs that contain only minuscule amounts of memory.

    The way I think other people think of things circa 2016 (Source: Max Maxfield / Embedded.com)

    In turn, this leads us to the fact that C is often described as being a low-level language.
    The term "low-level" may seem a bit disparaging, but -- in computer science -- it actually refers to a programming language that provides little or no abstraction from a computer's underlying hardware architecture. By comparison, Python is often said to be a high-level language, which means that it is abstracted from the nitty-gritty details of the underlying system.

    Now, it's certainly true that the C model for pointers and memory and suchlike maps closely onto typical processor architectures. It's also true that -- although it does support bitwise operations -- pure Python doesn't natively allow you to do things like peek and poke MCU registers and memory locations. Having said this, if you are using Python on an MCU, then there will also be a hardware abstraction layer (HAL) that provides an interface allowing the Python application to communicate directly with the underlying hardware.

    One example of Python being used in embedded systems can be found in the RF modules from the folks at Synapse Wireless that are used to implement low-power wireless mesh networks. This deployment also provides a great basis for comparison with C-based counterparts.

    In the case of a ZigBee wireless stack implemented in C, where any applications will also typically be implemented in C, for example, the stack itself can easily occupy ~100KB of Flash memory, and then you have to consider the additional memory required for the applications (more sophisticated applications could easily push you to a more-expensive 256KB MCU). Also, you are typically going to have to compile the C-based stack in conjunction with your C-based application into one honking executable, which you then have to load into your wireless node. Furthermore, you will have to recompile your stack-application combo for each target MCU (but I'm not bitter).

    By comparison, Synapse's stack, which -- like ZigBee -- sits on top of the IEEE 802.15.4 physical and media access control layers, consumes only ~55KB of Flash memory, and this includes a Python virtual machine (VM). This means that if you opt to use a low-cost 128KB MCU, you still have 73KB free for your Python-based applications.

    And, speaking of these Python-based applications, they are compiled into bytecode, which is interpreted by the Python virtual machine. Since each bytecode equates to between 1 and 10 machine opcodes -- let's average this out at 5 -- this means that your 73KB of application memory is really sort of equivalent to 73 x 5 = 365KB. Furthermore, the same bytecode application will run on any target MCU that's using Synapse's stack.

    As part of my ponderings, I also asked my chum David Ewing about his views on the C versus Python debate. David -- who is the creator of our ESC Collectible wireless mesh networked "Hello There!" Badges and "Hello There!" Robots -- is the CTO over at Synapse Wireless and, unlike yours truly, he is an expert programmer. David responded as follows:

    C and Python are both fantastic languages and I love them both dearly. There are of course numerous technical, syntactic, and semantic differences -- static vs. dynamic typing, compiled versus interpreted, etc. -- but the gist of it all is this:

    -- C is a "close to the metal" compiled language. It is the "universal assembler." It is clean and elegant. My favorite quote about C is from the back of my 30-year-old K&R book: "C is not a large language, and it is not well served by a large book."

    -- Python, with its "dynamic typing" etc., reduces "accidental complexity."
    Python is interpreted (or JIT compiled), so you can have silly errors that aren't discovered until runtime. However, compilers don't generally catch the hard, non-trivial bugs. For those, only testing will suffice; a solution must be rigorously tested, regardless of the implementation language.

    David went on to say:

    If a problem can be solved in Python, then it can also be solved in C; the reverse is not always true. However, if the problem can be solved in Python:

    -- The solution (source code) will be simpler than the corresponding C code.

    -- It will be more "readable."

    -- Perhaps more importantly, it will be more "writeable" (this is an oft-overlooked quality!).

    Due to the qualities noted above, the solution will have fewer bugs and be much faster to develop, and these are the real reasons to opt for Python over C for many tasks.

    I'm like David (except I sport much better Hawaiian shirts) in that I appreciate the pros associated with both languages. I like the clever things you can do with pointers in C, for example, and I also appreciate the more intuitive, easy-to-use syntax of Python.

    So, which language is best for embedded applications? I'm hard-pushed to say. To a large extent this depends on what you want (need) to get out of your applications. You know what I'm going to say next, don't you? What do you think about all of this? Which language do you favor between C and Python (a) in general and (b) for embedded applications? Also, if we were to widen the scope, is there another language you prefer to use in Embedded Space (where no one can hear you scream)?
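    As a footnote to the "close to the metal" point above, here is a minimal, hypothetical C sketch of the sort of direct register peek/poke that C makes trivial and that pure Python (without a HAL) does not. The register address and bit position are invented for illustration and are not taken from any real MCU's datasheet:

    #include <stdint.h>

    /* Hypothetical peripheral-enable register -- address is for illustration only */
    #define GPIO_ENABLE_REG  (*(volatile uint32_t *)0x40021018u)

    void enable_gpio_port(unsigned port_bit)
    {
        GPIO_ENABLE_REG |= (1u << port_bit);                 /* "poke": set one bit   */
    }

    int gpio_port_enabled(unsigned port_bit)
    {
        return (int)((GPIO_ENABLE_REG >> port_bit) & 1u);    /* "peek": read it back  */
    }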
  • Popularity 9
    2015-7-31 09:35
    15173 reads
    8 comments
    A few years ago, faced with the pile of adapters of every different specification on my desk at home, I was going a little crazy. First, these adapters cluttered up the house and were awkward to use. Second, when the adapter for some device occasionally failed, you would find it very hard to buy a suitable replacement. Third, all these adapters end up as mountains of electronic waste, including plastic and heavy-metal pollution. As the only person in the house who understands anything electrical, I also had to put up with all the complaints the inconvenience generated, and laboriously explain why this adapter does not fit that device. Then one day my other half said to me: why don't you unify all these adapters into one multi-purpose adapter, and simply let the appliance tell the adapter what it should output? I thought the idea was genuinely valuable, so I wrote it up as a patent (application number 201210007717.X) and submitted it for examination. Time passed quickly, and in 2014 I read the Type-C and USB PD 2.0 specifications for the first time. Almost instantly I understood the mission they carry: the introduction of Type-C and USB PD 2.0 will fundamentally resolve the chaos that surrounds powering the electronic products in our lives. How do they do it? Let's take a look:

    Step 1: Once a connection is established, the adapter broadcasts over the CC line, telling the other end how many supply voltages it can provide and the current available at each.

    Figure 1: PD communication waveform of the original Macbook adapter
    Figure 2: PD communication waveform of the original Google adapter

    Step 2: Having learned the adapter's supply capabilities, the powered device selects the one that suits it best and sends a Request packet to the adapter.

    Figure 3: Request packet sent by the Macbook

    Step 3: Based on the device's choice, and after evaluating its own capability, the adapter sends an "Accept" command.

    Figure 4: Accept PD packet sent by the adapter

    Step 4: The adapter performs its internal voltage conversion and sends a "power ready" packet to the device.

    Figure 5: "Power conversion complete" PD packet sent by the adapter

    Step 5: The adapter applies the newly negotiated supply voltage to VBUS.

    If all we wanted was supply management, PD communication would have finished the voltage-switching job at this point. But if the power source is not merely an adapter but a hub -- perhaps even a hub that supports HDMI output -- then PD has to carry out further negotiation to reach the system's other intended functions, including data-role swaps, VDM communication, and DisplayPort signal configuration.

    Figure 6: DR_Swap packet sent by an original Apple USB dock
    Figure 7: DP Configure packet sent by an original Apple USB dock
    Figure 8: Enter Mode packet sent by an original Apple USB dock, telling the Macbook to output a DP signal
    Figure 9: Unstructured VDM packet sent by an original Apple USB dock (proprietary communication)

    All of the above waveforms come from monitoring real PD communication sessions with the LDRPD01, the PD logic analyzer that 乐得瑞 was the first in the industry to put into volume production.

    Figure 10: The LDRPD01 analyzer monitoring the communication between the dock and the Macbook
    Figure 11: The LDRPD01 analyzer monitoring the PD communication while a Google Chromebook charges

    This monitoring system gives engineers working on USB PD products an extremely valuable reference, letting them pinpoint errors in the PD communication process. If you need one, feel free to add WeChat 13510191269.

    How it differs from QC 2.0: the names themselves give the first hint. PD stands for Power Delivery; it is concerned with the transfer of electrical energy between two or more devices -- even across a USB-based smart power grid. The transfer can be bidirectional, can be networked, and can follow system-level supply policies. QC stands for Quick Charge; it is concerned only with fast charging, the energy flows in one direction, there is no networking capability, and nothing beyond power delivery is supported.

    QC 2.0 is about one charger and one device being charged.
    USB PD addresses the balancing of an entire power-delivery network.

    From the analysis above we can see that USB PD not only brings a rich variety of interface applications to consumer electronics; it also carries the mission of intelligent power management for future consumer electronics and some household appliances, and it can go a long way toward fixing today's chaotic supply situation, in which adapters and cables of every kind waste resources and pollute the environment. 深圳市乐得瑞科技有限公司 is dedicated to developing applications for the USB Type-C interface and USB PD 2.0; it has already released the LDR6013 Type-C interface chip and the LDR6021 USB PD 2.0 chip, among others, and is working closely with the USB-IF to push forward this change for the benefit of society.
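    As a rough illustration of the five negotiation steps described above, here is a hypothetical C sketch of the sink-side flow. The state names, the pd_source_cap structure, and the selection policy are simplified inventions for illustration; they are not taken from the PD specification or from any real PD stack:

    /* Simplified sink-side view of the USB PD negotiation sequence. */
    enum pd_sink_state {
        WAIT_SOURCE_CAPS,   /* step 1: source broadcasts its capabilities on the CC line */
        SEND_REQUEST,       /* step 2: sink picks a capability and sends a Request       */
        WAIT_ACCEPT,        /* step 3: source answers with Accept                        */
        WAIT_PS_RDY,        /* step 4: source switches and announces "power ready"       */
        NEW_VBUS_ACTIVE     /* step 5: the negotiated voltage is now present on VBUS     */
    };

    struct pd_source_cap {
        unsigned mv;        /* offered voltage in millivolts   */
        unsigned ma;        /* available current in milliamps  */
    };

    /* Pick the highest voltage the sink can tolerate -- an illustrative policy only. */
    int choose_capability(const struct pd_source_cap *caps, int count, unsigned max_mv)
    {
        int best = -1;
        for (int i = 0; i < count; i++) {
            if (caps[i].mv <= max_mv && (best < 0 || caps[i].mv > caps[best].mv)) {
                best = i;
            }
        }
        return best;        /* index of the capability to request, or -1 if none fits */
    }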
  • 2015-3-31 23:52
    638 reads
    0 comments
    When swapping two variables, you often see books leading their readers astray by recommending the XOR approach:

    Method 1
    {
      x = x ^ y;
      y = x ^ y;
      x = x ^ y;
    }

    rather than using a temporary variable:

    Method 2
    {
      int temp;
      temp = x;
      x = y;
      y = temp;
    }

    The justification offered: it saves memory and runs faster.

    But does it really save memory? Most of the time, no. In the usual case the compiler will optimize the temporary variable in Method 2 into a register, so no stack space is used at all.

    Does it really run faster? The blog posts below give a very detailed answer:
    http://blog.csdn.net/do2jiang/article/details/4549679
    http://blog.csdn.net/solstice/article/details/5166912

    Method 1 performs three extra XOR operations on every execution, and because its code is harder for the compiler to reason about, the code the compiler emits for it tends to be less efficient.

    On optimization: "premature optimization is the root of all evil." Until we have identified the 20% of the code that matters most to performance, it is best not to optimize at all. And when we do optimize, we should not lean too heavily on experience, because advances in CPUs, compilers, operating systems, and so on can all invalidate techniques that once looked sound. Before optimizing, identify the performance-critical code by actually running the program, and only then optimize it.

    Finally, from a software-engineering point of view, code is written for people to read. The code that is easiest to understand is also the cheapest to maintain, and Method 2 is clearly the easier of the two to read.

    Compilers are powerful and CPU technology advances quickly; our accumulated experience is what moves slowest -- some of it is obsolete before we have even finished accumulating it. Reflect often and take stock often.

    Appendix: to reverse the bit order of a byte (first bit swapped with last, and so on), consider a bitwise approach. For example:

    uint8_t SwapBitsInByte(uint8_t input)
    {
      uint8_t output = 0;
      uint8_t i;
      for (i = 0; i < 8; i++) {
        if (input & (1 << i)) {
          output |= 1 << (7 - i);
        }
      }
      return output;
    }

    The same reversal using bit swapping:

    uint8_t SwapBitsInByte(uint8_t val)
    {
      val = (val & 0x55) << 1 | (val & 0xAA) >> 1;
      val = (val & 0x33) << 2 | (val & 0xCC) >> 2;
      val = (val & 0x0F) << 4 | (val & 0xF0) >> 4;
      return val;
    }

    Taken from 《嵌入式系统开发(影印版)》, p. 245.
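    A quick sanity check for the two bit-reversal routines above -- my own sketch, assuming the loop version has been renamed SwapBitsInByte_loop and the unrolled version SwapBitsInByte_unrolled so that both can be compiled into one program:

    #include <stdint.h>
    #include <stdio.h>

    uint8_t SwapBitsInByte_loop(uint8_t input);       /* the for-loop version above        */
    uint8_t SwapBitsInByte_unrolled(uint8_t input);   /* the mask-and-shift version above  */

    int main(void)
    {
        for (int v = 0; v < 256; v++) {
            uint8_t a = SwapBitsInByte_loop((uint8_t)v);
            uint8_t b = SwapBitsInByte_unrolled((uint8_t)v);
            if (a != b) {
                printf("mismatch at 0x%02X: 0x%02X vs 0x%02X\n", v, a, b);
                return 1;
            }
        }
        printf("all 256 inputs agree, e.g. 0x01 -> 0x%02X\n", SwapBitsInByte_loop(0x01));
        return 0;
    }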
  • Popularity 7
    2014-10-9 07:26
    1878 reads
    5 comments
    1. Preface
    For programming, what matters is how you solve the problem, not the language itself. Object-oriented and procedural are ways of solving and thinking about problems. C is usually described as a procedural language, because it lacks support for many object-oriented features, but that does not stop us from programming in C in an object-oriented style.

    2. Basics
    To develop in an object-oriented style in C, there are two things you need to understand: (1) structs, and (2) function-pointer variables.

    Structs: a struct is a way of organizing memory. Its members can be several variables of different types (the biggest difference from an array, which can hold many variables but only of one type). A struct therefore lets you gather (encapsulate) several related pieces of data together, which makes them easier to keep track of and manage.

    Function-pointer variables: a function-pointer variable is a pointer variable that can hold the address of a function. Because it is a variable, it can be reassigned; because it is a pointer variable, it can be used for indirect addressing; and because it is a function-pointer variable, you can assign a function's address to it and call the function through it.

    Since a struct cannot define functions itself, there is no way to gather (encapsulate) functions into a struct directly. Function-pointer variables, however, give us an indirect way of doing so: by calling through a function-pointer member, we achieve the effect of calling the function.

    struct struct_test {
         uint8_t type;
         uint16_t counter;
         uint32_t size;
         void (*fun)(void *input);
    };

    struct struct_test test;

    The memory layout of test (compiled with gcc on Ubuntu) looks like this:
    test size: 12
    type:    0xbff9fb64  /* one padding byte is inserted here for alignment */
    counter: 0xbff9fb66
    size:    0xbff9fb68
    fun:     0xbff9fb6c

    3. A simple implementation
    The essence of object orientation is hiding data and methods (functions) so that callers need not know the details -- and that is exactly what a struct combined with function-pointer variables can achieve. Here we concentrate on hiding data and methods (functions) to sketch a simple object-oriented style in C, and leave aside the other object-oriented features such as inheritance, composition, and polymorphism.

    In modular C programming, a .c file and its .h file are treated as one module. When implementing a module, use local static variables as much as possible, keep the exported function interface small, and call as few functions from other modules as you can, so that the module is insulated from the rest of the system (in other words: low coupling, high cohesion). Object-oriented C and modular C both aim to reduce programming complexity and the amount you have to hold in your head at once, so they can and should be used together.

    A simple example. There is a .c file, oo_test.c:

    #include "oo_test.h"

    static void test(void *input);   /* forward declaration so it can be assigned below */

    static struct oo_test oo;
    /* struct oo_test oo; */

    /*
    void oo_test_init(void)
    {
         oo.type = 0;
         oo.counter = 0;
         oo.fun = test;
    }
    */

    struct oo_test *oo_test_init(void)
    {
         oo.type = 0;
         oo.counter = 0;
         oo.fun = test;
         return &oo;                 /* return the address of the hidden static instance */
    }

    static void test(void *input)
    {
         /* ... */
    }

    And a .h file, oo_test.h:

    #ifndef OO_TEST_H
    #define OO_TEST_H
    #include "stdint.h"

    struct oo_test {
         uint8_t type;
         uint16_t counter;
         void (*fun)(void *input);
    };

    struct oo_test *oo_test_init(void);
    /* #define OO   oo */
    /* extern struct oo_test oo; */
    #endif

    Other files use it like this:

    xx.c
    ......
    struct oo_test *oo = oo_test_init();
    oo->fun(NULL);
    /* oo_test_init(); */
    /* OO.fun(NULL); */
    ......

    (The commented-out lines show an alternative implementation; the two approaches each have their uses, so choose whichever suits the situation.)

    In the simple example above, we first avoid the global-variable problem of C programming: information is passed through a local static variable, so the data stays hidden. Second, we reduce the externally visible function interface (something both modular programming and object-oriented programming stress: minimize the external interface and hide the internal functions).

    4. References
    The Contiki operating system; Making Embedded Systems.
  • Popularity 2
    2013-12-24 13:41
    446 reads
    2 comments
    @allen_zhan: The help documents of most programming languages say something similar, but surely what they really mean is that global variables are discouraged in general, rather than the "uninitialized global variables" discussed here? My guess at the reason: in the early 2000s SRAM was still expensive, resources were squeezed, and the RAM handed to the programmer was precious, so reserving a global variable amounted to permanently occupying RAM. Doesn't that also imply "a resource-inefficient design"? Extend that to today's microcontrollers: even though we now enjoy (by the standards of the past) generous amounts of RAM, we still shouldn't, on a whim, keep defining huge arrays as globals -- convenient for our coding as that may be. Large global arrays will of course chew through the available RAM, and often we look up, dazed, to find the compiler telling us RAM is exhausted! Moreover, whether a variable is global or local, wherever it is initialized it necessarily occupies ROM data. And if you reserve global RAM data, never initialize it, and through an oversight never actually use it, that is an even bigger blunder born of carelessness -- RAM wasted for nothing. Perhaps that is what the calls to avoid global data are really about... though what careful programmer would make that mistake? Separately, some programming guides are fiercely opposed to global variables on the grounds that they are bad for engineering projects and bad for team collaboration, but that is another story. Define no globals at all? That seems to swing to the opposite engineering extreme.

    In fact I have a vague feeling that I can't put into precise words: that global variables and local variables are, at some level, essentially the same thing, because both come out of RAM data and each has its own "lifetime" within some routine. During its lifetime, the contents of the space the compiler gives a local variable may be visible to other callers; once the lifetime ends, the compiler releases the space. Because our current languages are built around functions (the other school being built around OO), the moment we leave the function in which a local variable is defined, the compiler releases its RAM for other routines to use. Put another way, if a function ran long enough that, over the whole life of the program (counting from reboot), its variables were never released, they would effectively be global. The sharp split between global and local that today's languages impose is really dictated by the function itself: we can only define a variable inside or outside a function, with no third option, so a variable defined outside is rigidly labeled global and we have no way to give it a "lifetime". At that point we have to think harder and ask: how do we design a system that reserves and releases RAM data exactly when we need it, rather than having the local-versus-global decision forced on us by where the definition happens to sit? ... And that, I suppose, is when we have to go and learn about operating systems.

    /*----------------------------------------------------------------------------------------------------------------*/

    @Catch: Because at link time both global variables and local static variables end up in the data segment, I discussed them together here. The wording in my original post contained a mistake: it is the initialized globals and local statics that add to the size of the generated executable, whereas uninitialized globals and local statics do not add to the file size but are still allocated RAM at run time. That is the point I had muddled when I wrote the post. So as far as RAM is concerned, local statics and globals cost exactly the same -- and, as you say in your reply, reserving globals can indeed lead to an "inefficient design". For ROM data, however, initialized and uninitialized are very different; the gap is large.

    Although today's MCUs have far more RAM and far more flash, products keep gaining features, which demands more code and therefore more ROM and RAM. Memory pressure may be a problem that never goes away. It also means a newcomer may not be very sensitive to memory issues; by the time the problem appears and the memory limit is actually felt, a great deal of code may already have been written.

    The advice to ban globals did feel contradictory to me at first. Some data has to be used by later functions, or in other .c files -- without globals, how do you pass it along? After reading more code I slowly came to feel that using a global is less about passing data and more, strictly speaking, about how to keep data around; how to hand it to whoever needs it is a separate question. So the real purpose a global serves in a codebase is to preserve data -- and if that is so, we can just as well preserve it in a local static variable and deliver it through a function call (function calls have a cost, so performance-critical spots can use a macro instead). Working this way, I also found, means you need to care less about which variables are live, you have less code to understand (good function names are the prerequisite), readability improves, and later changes become easier.

    So I very much agree that "the essential difference between a global variable and a local variable is the difference in lifetime". Since a local static and a global have the same lifetime (though different visibility from outside), we should be able to replace globals with local statics to the greatest extent possible, reading and writing them through functions or macros. A complete replacement, of course, is not easy to achieve.

    I set out my views on using an operating system in an earlier post: an OS provides virtual parallelism, letting us focus more of our energy on the functionality we need to implement. To get the behavior we want, we sometimes need some understanding of the OS's scheduling algorithm (virtual parallelism is, after all, not real parallelism). Operating systems keep getting more stable, so the effort spent on the OS itself keeps shrinking while application complexity keeps growing; the question becomes how to use the OS to deliver the functionality we need.

    Three ways to structure an application (from UNIX Network Programming: Interprocess Communications):
    (1) One huge program that does everything. Its parts are implemented as functions that exchange information through arguments, return values, and global variables.
    (2) Multiple programs that communicate through some form of IPC. (Because their address spaces are separate, you can think of this as several MCUs cooperating.)
    (3) One program containing multiple threads that communicate through some form of IPC. (Because the threads share one address space, you can think of this as a single MCU running multiple tasks.)

    Verification of how an initialized versus an uninitialized static local variable affects the executable size (a minimal sketch appears below):
    Initialized static variable a:
    Uninitialized static variable a:

    Some thoughts on using an operating system on an MCU: http://forum.eet-cn.com/BLOG_ARTICLE_18771.HTM
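    The verification referred to above boils down to something like the following minimal sketch (my own illustration; the file name and the use of the toolchain's size utility are assumptions, not the author's original code). Building it with, e.g., "gcc -c size_test.c && size size_test.o" shows the initialized static counted in the data column and the uninitialized one in the bss column:

    /* size_test.c */
    #include <stdint.h>

    uint32_t counter(void)
    {
        static uint32_t a_initialized = 5;   /* .data: occupies RAM and the executable image (ROM) */
        static uint32_t a_uninitialized;     /* .bss:  occupies RAM only, zeroed at startup        */
        return a_initialized + a_uninitialized;  /* reference both so they are not discarded       */
    }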