Embedded development, then and now (Part 3)


[Continued from Embedded development, then and now (Part 2)]

 

How well did this testing in emulation mode work? When we were finally ready for hardware-software integration and test, we found one (count 'em, one) error: a direct rather than an indirect store. We fixed it in short order. Our entire integration test took less than one 8-hour day.

 

This project was completed on schedule, and put in the field. The gyro system outperformed our previous systems by a factor of 10.

 

One final aspect of this project: The Kalman filter calls for a lot of vector and matrix operations. In fact, there were almost no scalar calculations. Considering that we were writing the code in assembly language, do you think I wrote a vector/matrix function library for the job?

 

You have to ask? Does a bear like honey?
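In case you're wondering what such a library amounts to, here's a minimal sketch, in C rather than the assembly we actually wrote, of the kind of vector/matrix helpers a Kalman filter calls over and over. The routine names and the fixed dimension N are purely illustrative:

#define N 6   /* illustrative state-vector dimension */

/* y = A*x: multiply an N x N matrix by an N-vector */
void mat_vec_mul(double A[N][N], double x[N], double y[N])
{
    int i, j;
    for (i = 0; i < N; i++) {
        double sum = 0.0;
        for (j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        y[i] = sum;
    }
}

/* C = A*B: multiply two N x N matrices */
void mat_mul(double A[N][N], double B[N][N], double C[N][N])
{
    int i, j, k;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
    }
}

/* C = A + B: add two N x N matrices (the covariance updates live on this) */
void mat_add(double A[N][N], double B[N][N], double C[N][N])
{
    int i, j;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            C[i][j] = A[i][j] + B[i][j];
}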

 

The 68332

My next job involving embedded systems was vastly different. First, it was years later, since I had been vectored off into aerospace jobs not involving embedded systems. Second, it was with my own company, so I had more leeway in our approach and tool selection.

 

In fact, this job was very satisfying because, for the first time in my young life, I was given freedom to choose all the tools and techniques we'd be using. A hardware guy picked the Motorola 68332 chip, and the various I/O devices like angle encoders, A to D and D to A converters, gyros, and accelerometers. But I got to choose the development systems, the software tools, the ICE, the algorithms, and the general approach. We had a very nice C compiler from Intermetrics, which included an equally nice symbolic debugger. We had an ICE that was both powerful and inexpensive, thanks to its use of the Motorola JTAG port. The screen editor was Brief.

 

Everything ran on a PC running MS-DOS. We had two computers on site: a desktop for code development, and a laptop that talked to the ICE. The devices all talked to each other via fast serial connections. No SneakerNet allowed.

 

Our setup was not exactly an IDE, but not far from it. We had the main feature of any IDE, which is the ability to compile, download, and test software from the screen editor. While editing a source file, I had only to press a hotkey to compile it. If there were errors, Brief would bounce me back to the editor, with the cursor poised at the location of the first error.

 

If the program compiled without error, another hotkey invoked a BAT file that connected to the ICE, downloaded the file, and bounced to the symbolic debugger. As I said, not quite a true IDE, but close enough to keep developers satisfied.

 

We didn't use an over-the-counter RTOS for this job, for two reasons. First, the job just didn't warrant it. The functionality simply wasn't complicated enough. Mostly it had just one real-time task that talked to the I/O devices, and a background task running system tests.

 

Second, the hardware of the 68332 did most of the work. There was a real-time clock that triggered the real-time task, a watchdog timer, and the very capable counter/timer units. The serial and parallel ports were all interrupt driven. By the time we'd written the Interrupt Service Routines (ISRs), there was not much left for an RTOS to do.
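To make that structure concrete, here's a minimal sketch in C of the foreground/background arrangement just described: a periodic clock interrupt flags the one real-time task, and the main loop runs the background system tests in whatever time is left over. The names are illustrative, not our actual code, and hooking the ISR into the interrupt vector is a target-specific detail omitted here:

#include <stdint.h>

static volatile uint8_t tick_flag = 0;   /* set by the clock ISR, cleared in main */

/* Real-time clock interrupt service routine: keep it short, just raise a flag. */
void rtc_isr(void)
{
    tick_flag = 1;
}

void real_time_task(void)      /* talks to the I/O devices, once per tick */
{
    /* read sensors, run the control and filter math, write outputs ... */
}

void background_tests(void)    /* system self-tests, run when nothing else needs the CPU */
{
    /* RAM/ROM checks, sensor sanity checks, etc. */
}

int main(void)
{
    for (;;) {
        if (tick_flag) {
            tick_flag = 0;
            real_time_task();
        }
        background_tests();
    }
}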

 

On this job I applied the lessons I'd learned through the years. The approach was based on the theory that one should put off testing in the actual hardware until the very last. First, we tested algorithms in simulation mode, performing the software development and unit testing on a desktop, using Borland C. As usual, every line of code got tested using test drivers. Only after we were satisfied that the computations were working properly did we move the code to the Intermetrics compiler.
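By "test drivers," by the way, I mean nothing fancier than a little desktop main() that feeds a routine known inputs and checks the answers against values worked out by hand. Here's a sketch along those lines, exercising a 2 x 2 version of the illustrative matrix-times-vector routine shown earlier; it's a sketch, not our actual test code:

#include <stdio.h>
#include <math.h>

#define N 2

/* 2 x 2 version of the illustrative routine, small enough to check by hand */
void mat_vec_mul(double A[N][N], double x[N], double y[N])
{
    int i, j;
    for (i = 0; i < N; i++) {
        y[i] = 0.0;
        for (j = 0; j < N; j++)
            y[i] += A[i][j] * x[j];
    }
}

int main(void)
{
    double A[N][N]   = { { 1.0, 2.0 }, { 3.0, 4.0 } };
    double x[N]      = { 5.0, 6.0 };
    double expect[N] = { 17.0, 39.0 };   /* worked out by hand */
    double y[N];
    int i, fail = 0;

    mat_vec_mul(A, x, y);
    for (i = 0; i < N; i++) {
        if (fabs(y[i] - expect[i]) > 1e-12) {
            printf("FAIL: y[%d] = %g, expected %g\n", i, y[i], expect[i]);
            fail = 1;
        }
    }
    printf(fail ? "mat_vec_mul: FAIL\n" : "mat_vec_mul: PASS\n");
    return fail;
}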

 

Next, we developed and tested the "flight code" in the ICE's own CPU. This let us not only test in a more realistic environment, but also do so without interfering with the hardware development.

 

One of the nice things about the ICE was that you could map both the CPU and the memory to be either in the ICE or on the hardware. Our transition to the hardware was very painless because we could switch over to it one step at a time. If the software worked in one configuration but broke when, for example, we mapped to the actual hardware ROMs, we didn't have to look far to find the source of the error. As I'd learned to expect by now, the final integration and test was pretty much a matter of writing the ROMs and pushing the go-button.

 

There was one aspect of this job that we did differently, and it's an approach I've used ever since. I gave you my general approach, which is to use the biggest, most powerful computer system to do the preliminary work. Use a desktop or mainframe, and test the algorithms thoroughly, even if it's in a different language, such as Matlab. There's no point downloading software to the target system, only to find out that it's executing the wrong algorithms.

 

Gradually move to a more realistic environment, saving execution on the target machine until the very last. And test, unit test, and single-step to exhaustion.

 

There was one change we made on this project, though, and it's an important one. I said that one should put off running on the target machine, but that approach only works if someone else is testing the hardware. In previous jobs, there were always hardware guys swarming over the hardware and testing it.

 

We couldn't count on that in this case, though. The lead hardware designer was an analog expert; he'd never built a digital circuit before in his life. We realized that if we were going to have to depend on the hardware to work properly, we'd better test it ourselves.

 

So I had to modify my approach to say: by all means do your initial development on a general-purpose computer, with everything simulated. But first, test the hardware itself.

 

This turned out to be easy. My partner was an EE graduate, and he had a briefcase I've envied ever since. On one side, it was an ordinary briefcase, with the usual papers, pens, and documents. Flip it to the other side, and it was a complete electronics workshop, including a soldering pencil, a multimeter, and a whole panoply of active and passive components. This guy could build entire circuits right from his briefcase.

 

We did some simple tests in not much more than a day. We began by simply connecting a potentiometer to one of the A-to-D ports. We sent the digital value back out through a D-to-A port, and displayed it on a scope. Even the scope was an unnecessary frill—a multimeter would have done just as well.
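The software side of that first test is almost nothing: read the A-to-D channel the pot is wired to, and write the raw value straight back out the D-to-A. Here's a sketch, with placeholder register addresses rather than the real board's memory map:

#include <stdint.h>

/* Placeholder addresses -- the real ones depend on the board's memory map. */
#define ADC_CH0  (*(volatile uint16_t *)0x00FFF000)   /* A-to-D result register */
#define DAC_CH0  (*(volatile uint16_t *)0x00FFF010)   /* D-to-A data register   */

void pot_loopback_test(void)
{
    for (;;) {
        uint16_t sample = ADC_CH0;   /* follows the potentiometer setting       */
        DAC_CH0 = sample;            /* watch it on the scope (or a multimeter) */
    }
}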

 

Next, we generated test waveforms like square waves and sawtooths in the CPU. We sent the digital data out through a D-to-A, and displayed that on the scope. Then we closed the loop through another pair of converters, so we could display both the generated and processed waveforms.
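The waveform-generation half of that looks something like this sketch, again with placeholder addresses and an assumed 12-bit converter; real code would pace the output from a timer rather than free-running:

#include <stdint.h>

#define DAC_CH0     (*(volatile uint16_t *)0x00FFF010)  /* placeholder D-to-A channel 0 */
#define DAC_CH1     (*(volatile uint16_t *)0x00FFF012)  /* placeholder D-to-A channel 1 */
#define FULL_SCALE  0x0FFF                              /* assuming 12-bit converters   */

void waveform_test(void)
{
    uint16_t ramp = 0;

    for (;;) {
        DAC_CH0 = ramp;                                       /* sawtooth    */
        DAC_CH1 = (ramp < FULL_SCALE / 2) ? 0 : FULL_SCALE;   /* square wave */
        ramp = (ramp + 1) & FULL_SCALE;                       /* wrap around */
    }
}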

 

We did similar tests with the angle encoders and discrete I/O lines. A handful of toggle switches and LEDs were enough to do the job. The whole process took about a day. Then we could turn the hardware back over to the hardware guys with confidence that the whole thing wasn't going to smoke the first time we ran it.

 

In this case, we didn't find a single problem in the hardware. The designer may have been new to digital systems, but he sure got it right.

 

In the end, this system ran right out of the box. It was delivered on time and on budget, and it outperformed the specs by a factor of two.

 


 

 

[Continued at Embedded development, then and now (Part 4)]

 
