[Continued from How I write software (Part 1)]
Theory vs. practice
I have another reason to start coding early: I don't know exactly how the program should be structured. Like the potential fish or duck, many things are not clear yet. In the old, discredited waterfall approach, this was not a problem. We were supposed to assume that the guys who wrote the top-level requirements document got it perfect the first time. From then on, it was simply a matter of meeting those requirements.
To be fair, some visionaries of this approach had the sense to add feedback loops from each phase, back to the previous one. This was so that, in the completely improbable off chance that someone made a mistake further back upstream, there was a mechanism to fix it. Practically speaking, it was almost impossible to do that. Once a given phase had been completed, each change required a formal engineering change request (ECR) and engineering change notice (ECN). We avoided this like the plague. The guys in the fancy suits just kept telling the customer that the final product would meet the original spec. Only those of us in the back room knew the truth.
This is why I like the spiral development approach, also known as the incremental or prototyping approach. As the program evolves, you can still tell the customer, with a straight face, not to worry about all these early versions of the program. They are, after all, only prototypes. He doesn't need to know that one of those prototypes is also the final product.
Top down or bottom up?
The great racing driver, Stirling Moss, was once asked if he preferred a car with oversteer or understeer. He said, "It really doesn't matter all that much. In the end, it just depends on whether you prefer to go out through the fence headfirst, or tailfirst." Ironically, Moss had a terrible crash that ended his career. He went out through a literal wooden fence, headfirst.
Software gurus have similar differences of opinion on bottom-up vs. top-down development. Purists will claim that the only way to build a system is top down. By that, they mean design and build the outer layer (often the user interface) first. Put do-nothing stubs in place until they can be fleshed out. Even better, put in "not quite nothing" stubs that return the values you'd expect the final functions to return.
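To make the "not quite nothing" stub idea concrete, here's a minimal sketch in C++, assuming a hypothetical temperature-sensor driver that doesn't exist yet (the names `read_temperature` and `too_hot` are mine, not from any real project):

```cpp
// Hypothetical "not quite nothing" stub: it stands in for a sensor driver
// that hasn't been written yet. Returning a plausible canned value lets
// the layers above it run end to end from day one.
double read_temperature() {
    return 23.5;   // realistic placeholder reading, in degrees C
}

// Upper-level code can be written and exercised against the stub now,
// and against the real driver later, without change.
bool too_hot() {
    return read_temperature() > 30.0;
}
```

When the real driver arrives, only the stub's body changes; every caller stays as-is.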
Other practitioners prefer a bottom-up approach, where you build the lowest-level code—sometimes the hardware interfaces—first. Connect them all together properly, and you've got a working program.
Paraphrasing Moss's comment, it all depends on how you prefer your project to fail. You can start with a beautifully perfect top-down design, only to discover in the end that your software is too slow or too big to be useful. Or you can start bottom-up, with a lot of neat little functions, only to discover that you can't figure out how to connect them together. In either case, you don't know until the very end that the system is not going to work.
That's why I prefer what I call the "outside in" approach. I start with both a stubbed-out main program and the low-level functions I know I'm going to need anyway. Quite often, these low-level functions are the ones that talk to the hardware. In one of my columns, I talked about how we wrote little test programs to make sure we could interface with the I/O devices. It took only a day, and we walked away with the interface modules in hand. During the course of the rest of the project, it's a good feeling to know that you won't have some horrible surprise, near the end, talking to the hardware.
After all is said and done, there's a good and sufficient reason (I assert) that a pure top-down process won't work. It's because we're not that smart. I've worked on a few systems where I knew exactly what the program was supposed to do before I began. Usually, it was when the program was just like the previous four. But more than once, we've not been able to anticipate how the program might be used. We only realized what it was capable of after we'd been using it awhile. Then we could see how to extend it to solve even more complex problems.
In a top-down approach, there's no room for the "Aha!" moment. That moment when you think, "Say, here's an idea." That's why I much prefer the spiral, iterative approach. You should never be afraid to toss one version and build a different one. After all, by then you've got the hang of it, and many of the lower-level modules will still be useful.
The top-down approach isn't saved by object-oriented design (OOD), either. One of the big advantages of OOD is supposed to be software reusability. In the OOD world, that means total reusability, meaning that you can drop an object from an existing program, right into a new one, with no changes. I submit that if you follow a top-down approach to OOD, you'll never be able to achieve such reusability. To take a ridiculous example, I might need, in a certain program, a vector class that lets me add and subtract vectors easily. But for this program, I don't need to compute a cross product. So unless I look ahead and anticipate future uses for the class, it'll never have the cross product function.
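Here's what that hypothetical vector class might look like, sketched in C++ (the class and its shape are my illustration, not code from any particular project):

```cpp
// Hypothetical vector class, designed top-down for one program's needs:
// addition and subtraction only. Nothing in it anticipates future uses,
// so dropping it into a program that needs a cross product means
// reopening and modifying the class -- exactly the reuse failure at issue.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
    // No cross(), no dot(): this program never needed them.
};
```

A caller can write `Vec3 c = a + b;` happily, right up until the next project asks for `a.cross(b)`.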
KISS
I've been using modular designs and information hiding since I first learned how to program. It's not because I was so much smarter than my coworkers; it's just the opposite: I'm humble (and realistic) enough to know that I need to keep it short and simple (KISS). It always amazes me how little time it takes for me to forget what I wrote a few weeks ago, let alone a few years. The only way I can keep control over the complexity inherent in any software effort is to keep the pieces small enough and simple enough that I can remember how they work, simply by reading the code.
I was also lucky in that the fellow who taught me to program never bothered to show me how to write anything but modules. He had me writing Fortran functions that took passed parameters, and returned a single (possibly array) result. By the time I found out that other people did things differently, it was too late. I was hooked on small modules.
The term "modularity" means different things to different people. At one conference, a Navy Admiral gave a paper on the Navy's approach to software development. During the question and answer period, I heard this exchange:
Q: In the Navy programs, do you break your software up into modules?
A: Yes, absolutely. In fact, the U.S. Navy has been in the forefront of modular design and programming.
Q: How large is a typical module?
A: 100,000 lines of code.
When I say "small," I mean something smaller than that. A lot smaller. Like, perhaps, one line of executable code.
In my old Fortran programs, I was forever having to convert angles from degrees to radians, and back. It took me 40 years to figure out: Dang! I can let the computer do that. I wrote:
const double pi = 3.14159265358979323846;

// Convert an angle from degrees to radians
double radians(double x) {
    return pi * x / 180;
}

// Convert an angle from radians to degrees
double degrees(double x) {
    return 180 * x / pi;
}
Gee, I wish I'd thought of that sooner! (For the record, some folks prefer names that make the function even more explicit, like RadiansToDegrees, Rad2Deg, or even RTD.) One colleague likes to write ConvertAngleFromRadiansToDegrees, but I think that's taking it too far.
Back in my Z80 assembly-language days, I had a whole series of lexical-scan functions, like isalpha, isnum, isalnum, iscomma, etc. The last function was:
iscomma: cpi ','    ; compare the accumulator with ',' (sets the Z flag)
         ret        ; caller tests the Z flag for the answer
A whole, useful function in three bytes. When I say "small," I mean small.
Is this wise? Well, it's sure worked for me. I mentioned in a recent column (It's Too Inefficient) that I've been berated, in the past, for nesting my functions too deeply. One colleague pointed out, "Every time you call a subroutine, you're wasting 180 µs."
Well, at 180 µs per call (on an IBM 360), he might have had a point. The 360 had no structure called a stack, so the compiler had to generate code to implement a function call in software.
But certainly, microprocessors don't have that problem. We do have a stack. And we have built-in call and return instructions. The call instruction says, simply, push the address of the next instruction, and jump. return means, pop and jump. On a modern CPU, both instructions are ridiculously fast. Not microseconds, but nanoseconds. If you don't want to use callable functions, fine, but you can't use run time speed as an excuse.
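To see how cheap nested calls really are, here's a deliberately deep chain of tiny functions, sketched in C++ (the names and the depth are my own illustration):

```cpp
// A deliberately deep chain of tiny calls. Each call/return pair costs
// nanoseconds on a modern CPU (push the return address and jump; pop and
// jump), and an optimizing compiler will usually flatten the whole chain
// into straight-line code anyway.
int add1(int x) { return x + 1; }
int add2(int x) { return add1(add1(x)); }
int add4(int x) { return add2(add2(x)); }
int add8(int x) { return add4(add4(x)); }
```

Calling `add8(0)` runs seven nested calls below it, and nobody's profiler will ever notice.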
[Continued at How I write software (Part 3)]