2011-3-11 14:37
These ripping yarns from an old-timer embedded systems developer will make other old-timers smile and new-timers thank their lucky stars.

From my profile picture, you might guess that I'm not exactly new to the process of developing embedded systems. In some entries in this blog, I'll be talking about the way the technology and techniques have changed over the years. Don't expect this to be one of those "Gee, I miss the Good Old Days" bits. It's more like a celebration of how far we've come, and how happy I am not to be in those days anymore. Old-timers who've been there should get a kick out of the look backwards, and you new-timers can thank your lucky stars you don't still have to do things that way.

Simulating gyros

My first exposure to real-time systems actually predated the advent of microprocessors. NASA and their contractors were developing the hardware and control algorithms for spacecraft attitude control using control moment gyros (CMGs). We were developing a real-time, hardware-in-the-loop computer simulation that could test and simulate the behavior of the CMGs during typical simulated flights. I was developing the digital software; we had other computer programmers, who spoke patch-cords, programming the analog side.

The problem with testing a CMG is that you need to see what it's doing in real time. That's even harder to do when the CMG is a virtual one. A CMG generates torque through changes in the angular momentum of a spinning wheel. As the torque is generated, the gyro precesses, and its angular momentum vector (the H-vector) changes. What we most wanted to see was the way the H-vectors changed with time.

The NASA guys had worked out a scheme to visualize the state of the system that was, at the same time, extremely crude and extremely clever. They hooked three output lines from the analog computer to an ordinary x-y oscilloscope. The computer generated signals to display the three H-vectors on the scope, as a 3D image.
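For readers who want the physics behind those H-vectors, here is a minimal sketch of the gyroscopic relationship, torque = dH/dt. All of the numbers are illustrative, made up for the example — none of them come from the actual NASA hardware:

```python
# Gyroscopic torque is the rate of change of angular momentum:
# T = dH/dt. For a wheel with spin inertia I_s spinning at rate w,
# gimbaled (precessed) at rate omega_p about a perpendicular axis,
# the output torque magnitude is |T| = I_s * w * omega_p.
# All values below are illustrative only.

I_s = 0.05       # wheel spin moment of inertia, kg*m^2
w = 800.0        # wheel spin rate, rad/s
omega_p = 0.1    # gimbal (precession) rate, rad/s

H = I_s * w              # |H|, the length of the H-vector, kg*m^2/s
torque = H * omega_p     # torque delivered to the spacecraft, N*m
print(H, torque)         # 40.0 4.0
```

The point of the scope display was exactly this relationship: watching the H-vectors swing around in time told you, at a glance, what torques the CMG cluster was producing.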
In 1972, this must surely have been one of the earliest 3D graphics displays ever built. The system worked very nicely, but it did prompt me to formulate a rule of thumb I've never forgotten: if the extent of your development and testing system involves three or four PhDs sitting around on a concrete floor studying an oscilloscope, you have probably not yet optimized your test facility.

Intel comes to town

In 1974, microprocessors had arrived, and the Intel 8080 was the new kid on the block. I wanted in on the action in the worst way. I joined an existing but tiny company that already had contracts in hand. At the time, our entire development system consisted of an Intel single-board computer (SBC) based on the granddaddy of all microprocessors, the 4004. It boasted a PROM burner, plus a line editor and a one-line assembler in ROM. Access was via a Teletype ASR33 printing terminal. The only bulk storage was paper tape, accessed via the ASR33's 110-baud reader/punch. Storage capacity depended on the length of your paper tape. Our configuration management system consisted of a bunch of tapes, tossed into a box. By the time I arrived on the scene, the ROMs had been updated to support a 4040 assembler.

We had one in-circuit testing tool, which you had to see to believe. We called it the Blue Box. It was not exactly an in-circuit emulator (ICE), but it did give us a view, however darkly, into the computer and its CPU chip. There was no debugger proper, and you couldn't set breakpoints to stop the CPU. What you could do was set a watchpoint, watching the data whistling through the data bus at the breathtaking rate of 93 kHz. All the addresses were set, and data displayed, in binary, using toggle switches and LEDs. Hex was for sissies.

Here's how it worked. Using the toggle switches, you set a memory address into the Blue Box and launched the computer program. The first time the selected memory location was accessed, the Blue Box grabbed the data on the data bus.
After that, the CPU continued on its way, but you had that single byte of data to peruse at your leisure. Think of it as a memory buffer with a one-byte depth. The main problem with this scheme was that the box could only see data that went out onto the data bus. If you needed to watch the contents of a CPU register, forget it. That data never got put onto the data bus, so you couldn't see it.

Fortunately, I didn't have to deal with this gadget for long. The SBC got updated to support the 4040, which was our target CPU. Extra hardware included an umbilical that clipped onto the CPU. I wouldn't exactly call this an ICE, because you couldn't actually emulate the 4040, only observe and control one. The assembler was still a one-line assembler (no symbolic addresses), and there were no upload/download facilities at all. Instead, we used the original SneakerNet, based on EPROM chips instead of floppies. You burned the assembled program into PROM chips, then plugged those chips into the target machine. Once the PROMs and the CPU umbilical were in place, you could do all the things we usually associate with hex debuggers, including setting breakpoints, single-stepping, and displaying or modifying any data in memory or registers. It may have been a small step for Intel, but it was a giant leap for us.

For PROMs, we used the UV-erasable 1702 EPROMs, each capable of holding an astonishing 256 bytes of data. The 1702 was a big step forward from earlier burn-once-erase-never fusible-link PROMs. The 1702s had a little quartz window; you could erase the chip by shining UV light through the window. That's if you had an EPROM eraser. Which we didn't. But we got the same effect the natural way: we'd simply set the EPROMs out on the hood of a car, in the bright Alabama sun. Timing was a matter of cooking until done, which took maybe 30 minutes. Success depended on the weather. No sun, no erasing.

We had a second project that involved an 8080. This one was actually my "baby."
It also was for an embedded application, but I didn't have much interaction with the hardware, since it was our customer's hardware, 180 miles away in Atlanta. My job was to develop the software, which included a two-state Kalman filter, the floating-point library to execute it, and the math library to compute it. "Downloading" was still a matter of PROM-based SneakerNet, only this time it was more like PickupTruckerNet. Or FedExNet.

To support this effort, we bought an Intel Intellec 8 computer. It had no hard drive at all—bulk storage was still via paper tape. But it had lots of RAM and ROM, and supported true symbolic assemblers for both the 8080 and its 4040 brother. It also had a decent line editor, not that far removed from the ed/edlin editors of RSTS, Unix, Multics, and CP/M. Not exactly emacs, but serviceable. The hex debugger was good for debugging 8080 code. For the 4040, we still needed to SneakerNet the PROMs to the 4004 SBC.

My 8080 code was a good bit bigger than the 4040 code, so the big bottleneck was reading and punching all those paper tapes at about 8 bytes per second. We couldn't do much about the punch side—you can only make holes in paper so fast. But we could improve the reading side. We found an ultra-cheap optical tape reader (for the record, the ASR33's reader used little mechanical fingers to read the holes). To me, the new reader was a marvel of Yankee ingenuity: it was asynchronous. You see, the paper-tape format didn't depend on tape transport speed. The data format was self-clocking, thanks to a row of little sync holes. Theoretically, you could read the tape as fast as you could get it through the reader, limited only by the I/O speeds of the Intellec parallel port. So to read our paper tapes, we simply pulled them through the reader, much like a sailor hauling in his anchor rope. There was no takeup reel; the tape simply spilled out onto the floor, as it always had.
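The speed independence of a self-clocking format is easy to see in code. Here's a minimal sketch of edge-triggered reading: each tape row carries a small sync hole alongside its data holes, and a new byte is captured whenever the sync hole first appears under the read head, no matter how fast or unevenly the tape moves. The sample format and values here are hypothetical, not the actual Intellec interface:

```python
def read_rows(rows):
    """rows: iterable of (sync, bits) sensor samples.
    Yield one byte per rising edge of the sync signal, i.e. each
    time a fresh row of holes arrives under the read head.
    Tape speed doesn't matter: a row held under the head just
    repeats the same sample and is captured only once."""
    prev_sync = False
    for sync, bits in rows:
        if sync and not prev_sync:   # new row detected
            yield bits & 0xFF
        prev_sync = sync

# Hypothetical sensor samples: a row may linger under the head
# (repeated sample) when the tape is pulled slowly.
samples = [(False, 0), (True, 0x48), (True, 0x48),  # 'H', held
           (False, 0), (True, 0x49)]                # 'I'
print(list(read_rows(samples)))   # [72, 73]
```

Because the clock travels with the data, the only throughput limit is how fast the holes can physically pass the sensor — which is why hand-pulling the tape worked at all.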
I went out and bought a used film editor, which had a crank-turned feed. We cobbled up a wider reel to hold the paper tape. From then on, reading a file into memory became, literally, a turn-the-crank process.

Despite the crudeness of these early development efforts, I learned a lot of lessons, several of which have persisted to this day. First and foremost, debugging on target hardware is painful. The longer you can put it off, the better. You don't even know if the target hardware is working properly. You don't know if a problem is in the software, the hardware, the power supply, the connection to the ICE, or even a bug in the ICE itself. Perhaps one of the I/O chips is in backwards. Come to think of it, I had one case where the circuit-board layout used a mirror image of the A-to-D chip, thereby criss-crossing the pin assignments. Needless to say, those things need to get fixed before you start testing "flight" software.

Perhaps because my target machine was hours away, I formed a habit of testing as much of my software as possible on the Intellec's internal 8080. I tested the software exhaustively, carefully single-stepping through every executable instruction and comparing the results with hand checks. I didn't just test the software in one huge glop—the "Big Bang" approach to testing. Each time I developed an algorithm, even one as simple as a square root or absolute value function, I wrote a separate test driver for it and wrung it out in solitary splendor. I think they have a name for that. It's called unit testing, and it's highly recommended.

By the time the software got burned into ROMs and loaded into the target machine, I could never be exactly sure where a problem was. But I could be about 99.44% certain where it wasn't. Knowing what I know now, I would have extended the testing even more, to include simulating the entire system on a mainframe or desktop computer. But that lesson was still in my future.
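The practice translates directly to today. As an illustration only — an integer square root in Python rather than 8080 assembly — a small routine plus its own standalone test driver might look like this:

```python
def isqrt(n: int) -> int:
    """Integer square root by Newton's method: the floor of sqrt(n).
    Exactly the kind of small routine worth wringing out in
    solitary splendor before it goes anywhere near the target."""
    if n < 0:
        raise ValueError("negative input")
    if n == 0:
        return 0
    x = n
    y = (x + 1) // 2
    while y < x:          # iterate until the estimate stops shrinking
        x = y
        y = (x + n // x) // 2
    return x

# A tiny "test driver": exercise the routine in isolation and
# compare against hand-checked results, edge cases included.
for n, expected in [(0, 0), (1, 1), (2, 1), (15, 3), (16, 4), (10**6, 1000)]:
    assert isqrt(n) == expected
print("all unit checks passed")
```

The driver costs a few minutes to write; knowing with near certainty that the arithmetic primitives are sound before the code is burned into ROM is worth far more.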
The Z-8000

Several years later, I worked on a project based on the Zilog Z-8000. This was actually quite a nice chip, just slow, because it lacked a barrel shifter. The development system, though, had to be seen to be believed.

When we first got started on the project, my boss came by to say, "Your development system is here." Excited, I went to see it, only to find a circuit board about the size of a National Geographic. Not even an SBC, really; just an evaluation kit. Think of Synertek's old SYM-1 evaluation board and you won't be far off the mark.

I go, "What is this?" He says, "It's your development system." I look around. I don't see a single floppy drive anywhere, much less a hard drive. I ask, "Where's my editor? Where's my assembler? Where's my debugger?" He says, "They're all in the ROM." Wonderful. Yet another line-by-line assembler. "Where's my bulk storage?" I ask. He points to the terminal—a 110-baud thermal-print terminal with two cassette drives. He says, "What do you think the tape drives are for?"

I should explain, this was no toy project. It was a serious research project, funded at the corporate level of a huge multinational corporation, on a project that promised to improve product performance by a factor of ten. And I was supposed to save the company using an evaluation kit.