In my previous column, I talked about the way I write software. I began by saying, "I've always known that the way I write software is different from virtually everyone else on the planet."
Boy, was I wrong. Many readers sent me e-mails saying, in effect, "Me too."
In retrospect, I was arrogant to suppose that I'm "unique on the planet," except in the sense that each of us is. It's nice to know I have some kindred spirits. (Not everyone shared my views though.) Yet, if I have so many kindred spirits, how come I never noticed?
Here's what I think. Faced with a certain development environment, complete with management directives and guidelines, and impossible schedules, different people respond differently. Some folks salute smartly, say "Yes, sir," and make do. I, on the other hand—the model of soft-spoken tact—give my opinions, which range from "That's unacceptable," to the ever popular, "That's the dumbest idea I ever heard!" Such comments have justly earned me, over the years, the title Jack "not-a-team-player" Crenshaw.
Having blurted out my opinion on a given environment, I try to change it to one where I can be productive.
Sometimes I succeed, sometimes not. One in-house project with the potential for making the company millions of dollars involved real-time software for a microprocessor. When the "software development environment" arrived, it turned out to be a single-board evaluation kit, complete with a line-by-line assembler (no labels, just addresses) and an old 110-baud, thermal-paper terminal. When I said, "Hey, wait a minute! Where's my hard drive? Where's my bulk storage device?" the project manager calmly pointed to the terminal's 110-baud cassette drives.
Fortunately, my campaign to change that environment succeeded: we got a far more acceptable one and delivered a great product.
More often, I'm unable to change the development environment. On such occasions, I try to carve out a mini-environment where I can still be effective. In my last column, I mentioned an embedded system project where we were promised new builds every two weeks, whether we needed them or not. In my inimitable fashion, I blurted, "That's unacceptable! I'm used to turnarounds of two seconds," and set out to find a better solution. We found one: an interactive development environment (IDE) on the company's time-share mainframe. The IDE came complete with assembler, debugger, and instruction-level CPU emulator.
It wasn't perfect. The terminal was yet another 110-baud, thermal printing beauty that we used over the company's voice lines to the time-share system 1,500 miles away. But we got edit-compile-test cycles down to the 10-minute level, a whole lot better than two weeks.
Were there kindred spirits on that project, doing things the same way I do? Sadly, no. But looking back, I suspect I had many kindred spirits on other projects and just didn't know it. It wasn't that they were all corporate sheep shuffling to the slaughter. It's just that they were a lot more successful than I was at seeming to go along. Like me, they (you) would carve out mini-environments and practices where they could be more effective. They just did it under the radar. They didn't blab about it, and they left out the "That's unacceptable" and "dumbest idea" parts.
Dang! I wish I'd thought of that.
Waterfall vs. agile
There's a tension between the classical, waterfall-diagram approach to design (separate phases for requirements, design, code, and test) and the iterative, spiral-development approaches. I'm 100% convinced that the waterfall model doesn't work. Never did. It was a fiction that we presented to the customer, all the while doing something else behind the scenes.
What's more, I can tell you why it doesn't work: we're not smart enough.
The waterfall approach is based on the idea that we can know, at the outset, what the software (or the system) is supposed to do, and how. We write that stuff down in a series of specs, hold requirements reviews to refine those specs, and don't start coding until those early phases are complete and all the specs are signed off. No doubt some academics can point to cases where the waterfall approach actually did work. But I can point to as many cases where it couldn't possibly have, for the simple reason that, at the outset of the project, no one was all that sure what the problem even was, much less the solution.
One of my favorite examples was a program I wrote way back in the Apollo days. It began as a simulation program, generating trajectories to the Moon. Put in an initial position/velocity state, propagate it forward in time, and see where it goes.
As soon as we got that part done, we realized it wasn't enough. We knew where we wanted the trajectory to go—to the Moon. But the simulated trajectory didn't go there, and we had no idea how to change the initial conditions (ICs) so it would.
We needed an intelligent scheme to generate the proper ICs. As it happens, the geometry of a lunar trajectory involves a lot of spherical trig, and I discovered that I could codify that trig in a preprocessor that would help us generate reasonable ICs or, at the very least, tell us which ICs would not work. I wrote a preprocessor to do that, which I imaginatively named the initial conditions program.
Using this program, we generated much better trajectories. Better, but not perfect. The problem was that a real spacecraft isn't so accommodating as to follow nice, simple spheres, planes, and circles. Once we'd hit in the general vicinity of the desired target, we still had to tweak the IC program to get closer.
A problem like this is called a two-point boundary value (TPBV) problem. It can't be solved in closed form; you have to iterate, using a technique called differential correction. We needed a differential correction program. So we wrote one and wrapped it around the simulation program, which wrapped itself around the initial conditions program.
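To make the idea concrete, here's a minimal one-dimensional sketch of differential correction. The propagator is a toy stand-in of my own invention, not anything from the Apollo program: nudge the initial condition, see how the miss distance responds, and correct by the ratio.

```cpp
#include <cmath>
#include <cstdio>

// Toy stand-in for the trajectory simulator: take an initial condition x0,
// "propagate" it, and return where it ends up. (Hypothetical; the real
// program integrated the full equations of motion.)
static double propagate(double x0) {
    return x0 * x0 - 2.0;   // any nonlinear function will do for the sketch
}

// Differential correction in one dimension: measure the miss distance,
// estimate its sensitivity to the initial condition, and correct.
static double correct(double x0, double target, double tol = 1e-9) {
    for (int i = 0; i < 50; ++i) {
        double miss = propagate(x0) - target;
        if (std::fabs(miss) < tol) break;
        double h = 1e-6;
        double sensitivity = (propagate(x0 + h) - propagate(x0)) / h; // d(miss)/d(x0)
        x0 -= miss / sensitivity;   // Newton-style correction step
    }
    return x0;
}

int main() {
    double x0 = correct(1.0, 0.0);          // find the IC that hits the target
    std::printf("corrected IC: %f\n", x0);  // converges to about 1.414214
}
```

The real thing wraps the same loop around a full trajectory integration, with a matrix of partial derivatives in place of the single slope, but the shape of the iteration is the same.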
Just as we thought we were done, along came the propulsion folks, who didn't just want a trajectory that hit the target. They wanted one that minimized fuel requirements. So we wrapped an iterative optimizer around the whole shebang.
Then the mission planning guys needed to control the lighting conditions at the arrival point and the location of the splashdown point back at Earth. The radiation guys needed a trajectory that didn't go through the worst of the Van Allen belts. The thermal guys had constraints on solar heating of the spacecraft, and so forth. So we embedded more software into the iterators and solvers—software that would handle constraints.
In the end, I had a computer program that needed almost no inputs from me. I only needed to tell it what month we wanted to launch. The nested collection of optimizers and solvers did the rest. It was a very cool solution, especially for 1962.
Here's the point. In those days, computer science was in its infancy. In no way could we have anticipated, from the outset, what we ended up with. We couldn't possibly have specified that final program, from the get-go. Our understanding of the problem was evolving right alongside the solutions.
And that's why we need iterative, spiral, or agile approaches.
Environment drives approach
As I mentioned last time, I tend to test my software very often. It's rare for me to write more than five or six lines of code without testing, and I often test after coding a single line.
You can't do this, though, if your system takes four hours to compile or two weeks to build. For my method to work, I need to keep the edit-compile-test cycle time very short, measured in seconds rather than hours, days, or weeks. If my development environment can't support this, I go looking for one that can.
It hasn't always been that way, especially for embedded systems. In the beginning, we made do with paper tape inputs read by an ASR-33 terminal. Our only way to "download" software was by burning an EPROM.
Fortunately, those days are gone forever. The development environments available these days are amazingly good and shockingly fast. You can develop in an environment (I prefer an IDE) that's almost indistinguishable from your favorite PC compiler system. You can download your code into the target at Ethernet, if not USB, data rates, and debug it in situ, using a JTAG-based source debugger. Wonderful.
Avoid the embedded part
When I'm developing software for an embedded system, I follow one other rule. I can sum it up this way: Avoid the embedded system like the plague.
Understand, here I'm talking about the real, production or prototype hardware, not a single-board evaluation kit. At first glance, this may seem crazy. What's the point of developing software for an embedded system if you're not using the embedded system?
Let me explain. A lot of the software I develop, whether for an embedded system or not, is math intensive. And the math isn't going to change (we hope) whether it's in the embedded system or my desktop. A Kalman filter or a least-squares fit is the same, regardless of the processor it runs on.
Yes, I know, that isn't strictly true; finite word lengths and processor speeds can affect the performance of an algorithm, as can the timing and handshaking between multiple tasks. Even so, the right place to test a new algorithm is not in the embedded system at all; it's in the most effective system we can find, which these days usually means the multicore, multigigahertz box on your desktop.
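As an illustration of the kind of processor-independent code I mean, here's a tiny linear least-squares fit. It's a sketch of my own, not code from any particular project: plain C++ with no hardware or OS dependencies, so the same source file can be unit-tested on the desktop and recompiled unchanged for the target.

```cpp
#include <cstddef>
#include <cstdio>

// Ordinary least-squares fit of y = a + b*x over n samples.
struct LineFit { double a, b; };

LineFit fitLine(const double* x, const double* y, std::size_t n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double d = n * sxx - sx * sx;          // assumes n >= 2 and non-degenerate x
    return { (sy * sxx - sx * sxy) / d,    // intercept a
             (n * sxy - sx * sy) / d };    // slope b
}

int main() {
    double x[] = {0, 1, 2, 3};
    double y[] = {1.0, 3.1, 4.9, 7.0};     // roughly y = 1 + 2x
    LineFit f = fitLine(x, y, 4);
    std::printf("a = %f, b = %f\n", f.a, f.b);
}
```

Word-length and timing questions still have to be answered on (or simulated for) the target, but the algorithm itself gets debugged where the turnaround is fastest.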
There are at least two other reasons for shying away from the embedded system. First, it may not even be there. In many projects, the hardware is being developed in parallel with the software. If you don't even start the software development until the hardware is complete, you're behind schedule on day 1.
Second, even when the hardware has arrived, it may not be stable. Many of us can tell stories about getting strange results and not knowing whether the problem is in hardware or software. We can also tell stories about how the hardware guys invariably say, "Must be a software problem. My hardware is working fine." Before we can get the hardware fixed, we first have to convince the hardware guys that it's broken. Sometimes that's not easy.
Because the hardware isn't always stable, the hardware guys need access to the embedded system, just as you do. There's nothing more disheartening than to be all geared up for an intensive debugging session, only to find that the hardware isn't even there; it's gone back to the shop for rework. You end up negotiating with the hardware guys to get access to the machine.
For all these reasons, I do my level best to hold off hardware-software integration until the last possible moment. Here's my typical sequence:
1. Develop the math algorithms on a desktop machine. I often use languages other than C or C++ (more on this later).
2. Code the algorithms in C++ using the desktop machine.
3. Download and test the software in a CPU emulator: either a software emulation or a hardware in-circuit emulator (ICE) will do. Alternatively, I might use an instruction-level simulator running on the desktop machine.
4. Download the software into an evaluation kit. Simulate the sensors and actuators of the real embedded system (one possible shape for this is sketched after the list). If you're using an ICE, you can use its internal CPU for this phase.
5. After—and only after—you've tested the software in the simulated/emulated environments, go to the real hardware.
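To make step 4 a little more concrete, here's one way the simulated sensors and actuators might look. The names and interfaces are mine, not from any real project: the algorithm talks only to small abstract interfaces, so the same control code runs against canned desktop stand-ins now and against the real drivers at integration time.

```cpp
#include <cstdio>

// Hypothetical sensor/actuator interfaces. The control code sees only these.
struct Sensor   { virtual double read() = 0;          virtual ~Sensor() {} };
struct Actuator { virtual void command(double u) = 0; virtual ~Actuator() {} };

// Desktop/eval-kit stand-ins: a canned sensor profile and a logging actuator.
struct SimSensor : Sensor {
    double t = 0.0;
    double read() override { t += 0.1; return 25.0 + 5.0 * t; }  // fake ramp
};
struct SimActuator : Actuator {
    void command(double u) override { std::printf("actuator <- %.2f\n", u); }
};

// The algorithm under test never knows which implementation it's driving.
void controlStep(Sensor& s, Actuator& a, double setpoint) {
    double error = setpoint - s.read();
    a.command(0.5 * error);          // trivial proportional control, for show
}

int main() {
    SimSensor s;
    SimActuator a;
    for (int i = 0; i < 5; ++i) controlStep(s, a, 30.0);
}
```

When the real hardware finally shows up, only the Sensor and Actuator implementations change; the algorithm you've already tested stays put.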
[Continued at How I test software (Part 2)]