Tag: mainframe

Related posts
  • Popularity 18 · 2012-5-11 15:42 · 2766 views · 0 comments
Changes in the winds

Fast forward to 1970. I was still programming an unseen mainframe in FORTRAN. That particular mainframe wasn't even in our building; it belonged to NASA. Our only contact with it was a courier, who made twice-daily runs to pick up our card decks and return printouts. Turnaround time was 24 hours. To keep the pump primed, each time we got a run back, we'd pore through the printout with a red pen in hand, marking it up for the next cycle.

But big changes were on the horizon: I'd been reading about these newfangled gadgets called minicomputers. Though far more capable than my old "desk" computer, a typical minicomputer was about the same size and price, and had a similar, interactive user interface: a Teletype console with paper-tape I/O. Most exciting, people began connecting minicomputers to the real world.

In truth, we could have done the same thing with a big mainframe. All computers have I/O ports and support interrupts. With enough specialized (and very expensive) interface devices, like analog-to-digital (A/D) and digital-to-analog (D/A) converters, a computer could interact with its surroundings. But unless you had a budget equal to NASA's, you weren't likely to use a mainframe that way. With minicomputers, cost wasn't such an issue. What's more, minicomputers tended to have more uncommitted I/O ports and interrupts, and the cost of A/D and D/A converters was plummeting. Suddenly, all over the world, people were hooking their minicomputers up to all manner of external machinery, including factory assembly lines and research lab equipment. The era of real-time and embedded systems had arrived.

One day, I was walking down the halls at my job, and passed room after room of guys poring over fan-fold printouts, marking them up with red pens. I had the thought, "These guys—indeed, ALL us guys—will soon be as obsolete as dodo birds." I resolved not to let that happen to me. I resolved to get involved with minicomputers and real-time systems.

Chasing the dream

In a nice, orderly, and linear world, I would have followed up on that resolution. But my world has been anything but linear. To explain, I have to rewind to the mid-1960s. I was back at college, chasing another degree. For both my teaching duties and NASA research, I found myself back in the world of FORTRAN and mainframes. But at home, in my "spare time," my thoughts turned back to my dream: a computer of my very own.

During those years, I was hardly alone. Some enterprising souls actually managed to realize their dream, assembling their own minis from surplus parts. Others settled for slices of a time-shared mini, like the BASIC system developed at Dartmouth. My own thoughts, however, took a more primitive turn: I wanted to build a homebrew computer from scratch.

Two unrelated events had steered me in that direction. First, in a GE semiconductor manual, I found a very nice tutorial on Boolean logic. Fascinated, I learned all about 1's and 0's, ANDs and ORs, exclusive ORs, and De Morgan's theorem. I learned about flip-flops. I learned about circuit minimization and Karnaugh maps. Just for fun, I would pick some logic problem (example: build me a circuit to drive a seven-segment display; see the sketch below) and work out a gate-level solution. Second, Fairchild introduced a line of low-cost integrated-circuit (IC) logic devices. Even a grad student could afford a dual NOR gate for 80 cents or a J-K flip-flop for $1.50. I bought a bunch of them, and spent many glorious hours making lights blink, sensing pushbutton inputs, and mechanizing some of those logic solutions. I built a couple of useful gadgets: a lap timer for a racetrack and a counter-timer-frequency meter for myself. For the first time, my dream of a homebrew computer seemed within reach.
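To make that seven-segment exercise concrete, here is a minimal sketch in C rather than in gates. The segment patterns are the standard BCD-to-seven-segment values, and the single minimized equation for segment a is the sort of two-level form a Karnaugh map produces; the program just cross-checks one against the other. It is an illustration of the exercise, not anyone's original gate design.

#include <assert.h>
#include <stdint.h>

/* Segment bits: bit 0 = a (top) ... bit 6 = g (middle).
 * Standard patterns for the BCD digits 0-9. */
static const uint8_t SEG[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

/* Segment 'a' as a two-level expression over the BCD bits D,C,B,A
 * (D = MSB), with inputs 10-15 treated as don't-cares -- the kind
 * of minimized form a Karnaugh map yields:  a = D + B + CA + C'A' */
static int segment_a(unsigned digit)
{
    unsigned D = (digit >> 3) & 1, C = (digit >> 2) & 1;
    unsigned B = (digit >> 1) & 1, A = digit & 1;
    return D | B | (C & A) | (!C & !A);
}

int main(void)
{
    /* The minimized equation must agree with the table for 0-9. */
    for (unsigned d = 0; d < 10; d++)
        assert(segment_a(d) == (SEG[d] & 1));
    return 0;
}

Each of the other six segments gets the same treatment: write down the truth table, mark the don't-cares, and minimize.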
As my design evolved, I took a page from that old LGP-30, and used serial logic, with shift-register ICs replacing its magnetic drum memory. My next problem was I/O. For output, I wanted to display decimal digits. After much trial and error, I settled on the idea of displaying seven-segment digits, drawn cursively on an x-y-z oscilloscope. I worked out the waveforms I'd need, and etched and soldered circuit boards to generate them. I had gotten as far as displaying a single digit when my plan was severely rerouted by advancing technology.

The microprocessor

When Intel's 4004 burst on the scene, it not only changed my plans, it changed the world forever. I asked myself: Why should I bother building a homebrew computer from discrete gates, when I could get the whole CPU on a single chip? Interestingly enough, this vision wasn't shared by many respected computer gurus and manufacturers. Even Intel themselves, when they introduced the 8080, wrote an article showing how it could be used to control a traffic light. That's one traffic light: three sets of light bulbs and four pressure sensors. That kind of use seemed to be the limit of their imaginations. Even years later, respected computer gurus were saying things like:

* "A microprocessor will never be used as a general-purpose computer."
* "A high-order language compiler will never run in a microprocessor."
* "Why would anyone want more than 20k of RAM?"

But for those pioneers who had been building homebrew computers out of surplus core memories, discrete logic, and duct tape, the intellectual leap from controller chip to general-purpose computer was obvious. One thing's for sure: entrepreneurs were not just building, but marketing, kits. Three or four used the Intel 8008. I started writing software for it, including a complete floating-point package.

Personal computers

If the first microprocessors changed the world, the next event shook it to its core, and created an industry of unprecedented scope. Only a month or two after Intel released the 8080, Ed Roberts, owner of the electronics firm MITS, announced the 8080-based Altair kit. This was no bag of parts or a set of etched circuit boards; the Altair was a real computer, with a rugged power supply, bus structure, and a beautiful case. What's more, it only cost $395—just $45 more than the CPU chip alone. The day I saw the ad in Popular Electronics, I bought one.

Finally, real time

The next event didn't change the world at all, but it sure changed mine. Not wanting to become a dodo bird, I'd been looking for a chance to get into a micro-based business. At a Huntsville electronics trade show, I met Paul Bloom, president of Comp-Sultants. At his booth, Paul was displaying the components for his own 4040-based computer kit, the Micro 440. That was enough for me. Hands were shaken, some money changed hands, and I ended up as Comp-Sultants' software guy and its manager. Which meant that, among other duties, I got to handle phone calls from irate customers, deal with door-to-door salesmen and beggars, keep the toilet working, and sweep the chad from the floor.
In my "spare" time, I had to direct our four technicians and develop software, Paul and I had our differences, mostly about money and "vision," but you have to give him this: He was a true visionary. Where I was still stuck in homebrew computer mode, Paul saw the value of the microprocessor in its commercial value in real-time processor controllers. Before I arrived on the scene, he already had two products under development: A controller for a cold-forge machine, and another for a plastic injection-molding machine. To say that our "laboratory" was primitive would be far too kind. Paul's designs used the Intel 4040. His "development system" consisted of a 4004-based single-board computer, a primitive ROM-based assembler, and a Teletype ASR-33. Intel had upgraded the assembler to support the 4040. Our test equipment consisted of an equally primitive bus monitor, a multimeter, and an oscilloscope. To hold our software, we used UV-erasable EPROMS. But we had no EPROM eraser. Instead, we just put the EPROMs outside, on the hood of someone's car, and let the Sun do the job. Sometimes, the software acted strangely. Do you think maybe a cloud passed over the Sun? When I arrived on the scene, we bought the much more capable Intel Intellec-8, which improved our capabilities big time. The Intellec-8 included both a better assembler and a PROM reader-burner. That's the way we burned PROMs for the cold-forge machine. But more importantly, we could now develop software for the 8080. Paul sold a contract for software to control a satellite-tracking antenna. It was to include a two-state Kalman filter (KF)—surely one of the first KFs in a microprocessor. I wrote the software for it. A KF is best implemented in floating point. So I ported my 8008 floating-point package to the 8080. It was on this project that I learned the value of RAM bytes and clock cycles. For each assembly-language subroutine, I counted the clock cycles and bytes used. The end result was a package that was smaller and more efficient than both Intel's own package, and Microsoft's (yes, Bill, I disassembled your code). I also had to program the fundamental functions: square root, sine, cosine, and arctangent. My algorithms eventually found their way, first into the pages of ESD, and then into my book (Math Toolkit Real Time Programming). Now, you have to ask: If we were developing microprocessor-based, real-time systems in 1975, how come we didn't become rich and famous billionaires? Answer: We tended to snatch defeat from the jaws of victory. My Kalman filter worked like a champ, but I can't say as much for our other products. As it turned out, the Micro 440 kit that Paul had at that trade show was a myth. He had some of the real parts, but most were just random circuit boards, put there for show. We did eventually complete the Micro 440, and even sold a few, mostly to universities. But we had made a serious marketing error. We thought that the public wanted lower cost, and the Micro 440 was $100 cheaper than the Altair. In reality, our customers wanted a horsepower race: The more RAM, longer words, and faster clock speed, the better. The hobbyists saw our ads, yawned and moved on. Paul made another egregious error. Before I came along, he needed a programmer for his cold forge machine. Thinking to get one on the cheap, he went to the computer science department at the University of Alabama, Huntsville, and asked them for their smartest senior. 
Now, you have to ask: if we were developing microprocessor-based, real-time systems in 1975, how come we didn't become rich and famous billionaires? Answer: we tended to snatch defeat from the jaws of victory. My Kalman filter worked like a champ, but I can't say as much for our other products. As it turned out, the Micro 440 kit that Paul had at that trade show was a myth. He had some of the real parts, but most were just random circuit boards, put there for show. We did eventually complete the Micro 440, and even sold a few, mostly to universities. But we had made a serious marketing error. We thought that the public wanted lower cost, and the Micro 440 was $100 cheaper than the Altair. In reality, our customers wanted a horsepower race: the more RAM, longer words, and faster clock speed, the better. The hobbyists saw our ads, yawned, and moved on.

Paul made another egregious error. Before I came along, he needed a programmer for his cold-forge machine. Thinking to get one on the cheap, he went to the computer science department at the University of Alabama in Huntsville, and asked them for their smartest senior. Tim may indeed have known computer science, but he turned out to be the most inept programmer I've ever known (and that's saying a lot). He wrote impossibly obtuse and lengthy code, filled with flags and branches, for even the simplest algorithms. We had to scrap his code for a TTY interface because it filled the entire memory. Worse yet, his only way of testing the software was to go to Nashville, 100+ miles away, and plug the EPROMs into the machine. It's called the Big Bang theory of testing. And when you're talking about a big machine with 5000-psi hydraulics, you're talking about a BIG bang! Week after week, the software misbehaved, destroying the machine and sending technicians diving behind crates. In the end, I fired Tim, rewrote the software myself, and delivered a working product. But not before the due date on the contract had expired. The customer took our system, said, "Thank you very much," and walked away.

The future of the company now depended on my antenna controller. We delivered that one on time, and it exceeded its performance spec. In fact, it may have been a little too good. We had been hoping that, if we did a good job, we'd get a follow-on contract to refine and extend the code. Turns out, the customer was happy with version 1.0, so that was that.

Gyros, ships, missiles, torpedoes

After Comp-Sultants, I wandered into other jobs, including teaching computer science at a local university, writing software requirements for NASA projects, and even a stint as chief engineer for Heathkit. My next job with real-time systems involved developing software for the 16-bit Zilog Z8000. The company made navigation systems using ring-laser gyros. One of our top scientists had developed an algorithm that used a computer to predict and compensate for gyro errors. It promised to improve performance by an order of magnitude. My job was to turn the algorithm into code.

On this job, my biggest problem was clock cycles. To implement the algorithm, I had to make the software generate those same fundamental functions—square root, sine, cosine, and arctangent—in a millisecond. That's 1000 clock cycles. Making this happen was without a doubt the biggest challenge I've faced. I didn't even have time to store and fetch data to/from RAM. As much as possible, I had to keep intermediate results in CPU registers. To optimize the register usage, I used graph-coloring algorithms, as an optimizing compiler does (a sketch of the idea appears below).

I had another idea that I'm kinda proud of. The issue was: how do you test an algorithm to see if it's working right? You can single-step through the code, but once the gyro is connected, you can't stop the CPU. To solve the problem, I hooked up another Z8000, and let it share memory with the unit under test. The CPU under test couldn't wait while I peeked and poked all of its registers, but I could afford about four of them. So I added software that would peek at the registers, grab four of them, and then—if asked—poke new values. It worked like a charm.
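For flavor, here is a toy version of that register-assignment idea in C. A production allocator (Chaitin-style) orders the nodes by degree and handles spills; this sketch just greedily gives each variable the lowest-numbered register not used by an already-colored neighbor. The interference matrix is invented purely for illustration.

#include <stdio.h>

#define NVARS 6   /* hypothetical temporaries in one subroutine */

/* Interference matrix: infer[i][j] = 1 if variables i and j are
 * live at the same time, so they cannot share a register.        */
static const int infer[NVARS][NVARS] = {
    {0,1,1,0,0,0},
    {1,0,1,1,0,0},
    {1,1,0,1,1,0},
    {0,1,1,0,1,1},
    {0,0,1,1,0,1},
    {0,0,0,1,1,0},
};

int main(void)
{
    int reg[NVARS];
    for (int i = 0; i < NVARS; i++) {
        int used = 0;                       /* bitmask of registers   */
        for (int j = 0; j < i; j++)         /* taken by neighbours    */
            if (infer[i][j]) used |= 1 << reg[j];
        int r = 0;
        while (used & (1 << r)) r++;        /* lowest free register   */
        reg[i] = r;
    }
    for (int i = 0; i < NVARS; i++)
        printf("var %d -> R%d\n", i, reg[i]);
    return 0;
}

On this example, six variables fit in three registers: exactly the kind of squeeze that keeps intermediate results out of RAM.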
I had one more job involving the Z8000. We had an existing ship navigation system. Our company had sold the Navy on the idea of using it in an old torpedo, basically turning the torpedo into a self-locating mine. The catch was, we had only two weeks to deliver a prototype. Testing a navigation system is not usually a simple thing; you can spend two weeks just calibrating the sensors. We couldn't do that, but we could deliver a working, if not perfect, system.

Instead of a calibrated, computer-controlled test rig, we simply put the box on a desk and verified that it could find "down." After we'd done that in a few orientations, we rotated the box back and forth by hand, and verified that it knew "North." The next test was way cool. The torpedo had a tachometer on the propeller, which it used to "trim" the position calculations. We couldn't give the nav system a propeller, but we did the next best thing: we hooked up a square-wave generator to the prop input, so the system would think it was moving at constant speed. Then we put the system, a battery, and a terminal on an equipment cart. One of our techs had been practicing pushing the cart around the parking lot at a constant pace. When we had him push our system around a big loop and back to the starting point, we closed the loop within 15 feet (about 4.6 m). Not bad, for a calibrated technician. (See the dead-reckoning sketch at the end of this post.)

I did two more embedded systems for that company: a large ground-to-ground missile and an experimental ship navigator. The ship navigator required me to implement an 18-state Kalman filter. Both systems were successful. The missile is currently deployed, and the ship navigator achieved the highest accuracy of any such system to that time.
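That parking-lot test amounts to dead reckoning: integrate heading and distance, then see how far the computed track misses the true starting point. Here is a minimal C sketch under invented numbers; the pulse calibration, leg length, and drift rate are all made up for illustration.

#include <math.h>
#include <stdio.h>

#define DEG2RAD          (3.14159265358979 / 180.0)
#define METERS_PER_PULSE 0.05    /* hypothetical tach calibration */

typedef struct { double north, east; } pos_t;

/* One dead-reckoning update: the prop tachometer gives distance
 * (a square wave at the prop input looks like constant speed),
 * the gyros give heading, and position is integrated from both. */
static void dr_update(pos_t *p, unsigned pulses, double heading_deg)
{
    double d = pulses * METERS_PER_PULSE;
    p->north += d * cos(heading_deg * DEG2RAD);
    p->east  += d * sin(heading_deg * DEG2RAD);
}

int main(void)
{
    /* Push the "cart" around a 25 m square.  A gyro that drifts
     * 0.5 degrees per leg keeps the loop from closing exactly. */
    pos_t p = { 0.0, 0.0 };
    double drift = 0.0;
    for (int leg = 0; leg < 4; leg++) {
        dr_update(&p, 500, 90.0 * leg + drift);  /* 500 pulses = 25 m */
        drift += 0.5;
    }
    printf("closure error: %.2f m\n", hypot(p.north, p.east));
    return 0;
}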
  • Popularity 12 · 2012-1-24 10:39 · 1598 views · 0 comments
My degree course was of a kind known as a sandwich course (referred to as a "co-op" in America), which meant that we spent a year in college, then six months working for a company (whichever company you could persuade to take you on), then a year back in college, then another six months out in the field, then a final year at the university. As I've noted before, my second industrial placement was at the research and development centre of a large glass manufacturer. This second placement occurred around 1979, when microprocessors and microcomputers were still relatively new on the scene.

To put things into some sort of perspective, one of the computers we used at the university was a humongous analogue beast composed of voltage/current summers (adders and subtractors), amplifiers (multipliers and dividers), integrators, differentiators, and so forth. In order to make this work, we had to use flying leads to connect different functional units together, turn knobs and twiddle dials, and hop up and down on one leg while whistling the national anthem ... well, maybe not, but it sometimes felt like it.

[Photo: An analogue computer (no, that's NOT me :-)]

The only digital computer we had access to at that time was a honking mainframe that lived in its own building. We captured our programs on decks of punched cards using teletype machines, and then we hand-carried them across to the computer building, handed them in, and were told something like "Come back next Wednesday." And when you did return, it was only to be told that there was a syntax error and you'd missed a comma, and the whole thing started again ... it took at least a semester to get even the simplest program to run. (Ah, the good old days :-) But we digress...

I had been at the R&D centre for only a week or so when my supervisor took me to one side and asked if I knew anything about computers (I later discovered that most of the electronics in a glass factory at that time was either analogue or relay-based digital sequencing type "stuff"). It turned out that someone had ordered a Texas 9000 (or was it a 9900?) microcomputer system, which had just arrived. The problem was that the person who had ordered this little rascal had subsequently accepted a position with another company, and no one else in the R&D centre had ever seen a microcomputer before. The bottom line was that they put me in a room with the microcomputer and a manual and left me to it. My instructions were (a) learn how to use the computer and (b) come up with something useful for it to do so that they could justify having purchased it in the first place. What a great opportunity!!!

The first step was to learn how to use the computer, which had to be programmed in assembly language (this was the first time I'd been exposed to assembly language; all I knew at that time was FORTRAN on the mainframe). I still remember the feeling of triumph when I managed to get my first "Hello World" type program to work. Once I had mastered "the beast", the next step was to think of something to do with it. The folks at the R&D centre already had a laser, and they allowed me to purchase a diode-array camera. I think the resolution of the camera was either 32 x 32 or 64 x 64 pixels; whatever it was, it was pathetic by today's standards, but it was really great for the time.
I no longer remember how I did it (some simple cable interface – perhaps it was even RS-232), but it was possible to use the microcomputer to instruct the camera to capture (latch) whatever image was currently being detected by the sensor, and then to read the output from the sensor byte by byte.

My idea was to bounce the laser off the liquid glass as it flowed in channels from the furnace to the bottle-forming machines, and to use the diode-array camera to detect the reflected laser beam and thereby calculate the level of the glass (it's not like you can use a float or something ... in fact, I'm not sure how they used to do it).

But my real triumph was using the camera-microcomputer combo to read the numbers on the bottom of the glass bottles as they zipped down a conveyor belt. There was already some way to detect cracks in the body of the bottles – the trick was to be able to determine which glass-forming machine a damaged bottle had come from, based on the number on its bottom. This was actually a non-trivial task, because the numbers were formed by raising the level of the glass on the underside of the bottle, which was illuminated by two light sources aimed at 90 degrees to each other. The thing was that, depending on the orientation (rotation) of the bottle, different faces of the raised numbers would be illuminated. So first I had to detect the bottle going by and trigger the camera to take a snapshot. Next I had to identify the facets forming the number among random "noisy" areas of the glass, calculate the "centre of gravity" of the number, rotate everything around until it was "vertical", and then identify it – all before the next bottle shot past. (A sketch of the centre-of-gravity and rotation steps appears at the end of this post.)

Actually, when I consider how little I knew in those days, I now realise how amazing it was that I got any of this to work at all. Fortunately, I had no idea just how far out of my depth I was, so I beavered away and eventually got it all working. I was dancing my happy dance that day, let me tell you!

Apart from the (not insignificant) boost to my confidence, this experience was really important to me because (a) I got a chance to work with a microcomputer before a lot of other folks had even seen one, (b) I learned an assembly language, and (c) I discovered that I was actually pretty good at assembly-level programming (I know it's not very modest of me to say so, but it's true). I also think I was really lucky to be learning all this stuff at a time when memory was so limited and processor clock speeds were so low, because you had to use all sorts of tricks to minimise your memory usage and wring the last drop of performance out of your programs. All of these tricks and techniques have served me well over the years in all sorts of esoteric applications.
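I can't reconstruct the original TMS9900-era assembly code, but the centre-of-gravity and rotate-to-vertical steps map directly onto standard image moments. Here is a small C sketch under assumed details: the 32 x 32 resolution, the threshold of 128, and the synthetic test image are mine, not the original system's.

#include <math.h>
#include <stdio.h>

#define W 32
#define H 32    /* in the ballpark of that diode-array camera */

/* Centre of gravity and principal-axis angle of the bright pixels,
 * computed from image moments -- one standard way to do the "find
 * the centre of gravity and rotate to vertical" steps described
 * above.  Returns -1 if no bright pixels (no bottle in view). */
static int blob_pose(unsigned char img[H][W],
                     double *cx, double *cy, double *angle)
{
    double m00 = 0.0, m10 = 0.0, m01 = 0.0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (img[y][x] > 128) { m00 += 1; m10 += x; m01 += y; }
    if (m00 == 0.0) return -1;
    *cx = m10 / m00;
    *cy = m01 / m00;

    double mu20 = 0.0, mu02 = 0.0, mu11 = 0.0;  /* central moments */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (img[y][x] > 128) {
                double dx = x - *cx, dy = y - *cy;
                mu20 += dx * dx;
                mu02 += dy * dy;
                mu11 += dx * dy;
            }
    /* Orientation of the principal axis; rotating the image by
     * -angle aligns the blob with a chosen "vertical" convention. */
    *angle = 0.5 * atan2(2.0 * mu11, mu20 - mu02);
    return 0;
}

int main(void)
{
    static unsigned char img[H][W];
    for (int i = 8; i < 24; i++)            /* synthetic diagonal bar */
        img[i][i] = img[i][i - 1] = 255;

    double cx, cy, a;
    if (blob_pose(img, &cx, &cy, &a) == 0)
        printf("centre (%.1f, %.1f), axis %.1f deg\n",
               cx, cy, a * 180.0 / 3.14159265358979);
    return 0;
}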
  • Popularity 20 · 2011-10-27 11:54 · 17721 views · 0 comments
Writing about how things were brings back all kinds of memories. My first job after graduating from university was at International Computers Limited (ICL) in West Gorton, Manchester, England. ICL was the UK equivalent of IBM in the USA, only much smaller. As I have mentioned before, I started my new job in the summer of 1980 as a member of a team designing the central processing unit (CPU) for a new mainframe computer.

Music to my ears

Once we had designed a new computer and built the prototype, we had to test it. This testing came in multiple forms. When it came to burn-in testing (running the machine for several days to make sure it didn't crash), one approach that I remember as though I were still there was to get the machine to play music – but I'm not talking about it playing an MP3 file or anything like that...

I know this may sound silly, but what we did was to have a loudspeaker hooked up to the CPU in such a way that whenever it executed some form of "jump" instruction, the speaker would be presented with a pulse. Thus, if you created a loop that jumped 2,000 times a second, for example, you would end up with a 2 kHz tone (plus lots of harmonics, but we used capacitors to smooth things out a bit). The thing is that we didn't play simple notes.

The first time I entered the main prototyping area and got my introductory look at one of these monsters, I could hear some classical music playing in the background.

[Photo: An ICL 2900-series mainframe (this one was later than the ones I worked on)]

At first I thought the music was coming from a radio (although it did sound a little "tinny" and "strange" in a way I couldn't put my finger on). However, as we walked around, I soon realized that the music was coming from the mainframe itself. When I enquired what was happening, the whole jump-instruction/loudspeaker setup was explained to me. The idea was that the computer was processing lots of test sequences between jumps, and that, by varying the type and number of instructions, the time between the various jumps could be controlled.

So why did we do this? Well, while we were all working on different tasks, the music would be playing merrily away in the background. If the music stopped – or started sounding "weird" – we would immediately know that a problem had occurred, in which case the system would be halted and we would backtrack through the data (which was being constantly collected) to determine when the error had occurred and what exactly had gone wrong.

The one thing I never questioned until today (as I pen these words) is who created the test program that resulted in the music, and how was this program actually implemented? I now realize that it would have been almost impossible to create by hand. In one of my articles, I provided a link to a program that can take an image and generate the ASCII-art equivalent. What I think must have happened is that someone created a similar program to generate the music. I'm thinking that they would have created a table describing all the different instructions the computer could run and how many microseconds each instruction required. I also think they must have had some way to capture a tune (notes and durations) as an ASCII (or similar) text file. Then there must have been some program that took the file containing the tune and generated a corresponding test program, with all the instructions including the jumps (a toy sketch of such a generator appears below). If anyone has any more information on this, I would love to hear about it...
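Here is a back-of-envelope C version of the kind of generator speculated about above. The instruction timings are invented, and the "output" is just a printed recipe rather than real instructions; a real table would list every instruction the mainframe could execute along with its measured duration.

#include <stdio.h>

#define JUMP_US   4.0   /* cost of the jump that pulses the speaker */
#define FILLER_US 2.0   /* cost of one padding instruction          */

typedef struct { double freq_hz, dur_s; } note_t;

/* For each note, work out how much filler goes between jumps so the
 * jump rate equals the note's frequency, and how many jumps are
 * needed to sustain the note for its duration. */
static void emit_note(const note_t *n)
{
    double period_us = 1e6 / n->freq_hz;  /* time between speaker pulses */
    int    fillers = (int)((period_us - JUMP_US) / FILLER_US + 0.5);
    long   jumps   = (long)(n->dur_s * n->freq_hz + 0.5);

    /* A real generator would emit actual instructions here; this
     * one just prints the recipe for each note's delay loop. */
    printf("%.0f Hz for %.2f s -> loop %ld times over %d fillers + 1 jump\n",
           n->freq_hz, n->dur_s, jumps, fillers);
}

int main(void)
{
    /* The "tune file": three notes, A4 B4 C5. */
    note_t tune[] = { { 440.0, 0.5 }, { 493.9, 0.5 }, { 523.3, 1.0 } };
    for (unsigned i = 0; i < sizeof tune / sizeof tune[0]; i++)
        emit_note(&tune[i]);
    return 0;
}

At 440 Hz, for example, the pulses must arrive about 2,273 microseconds apart, so each loop iteration needs roughly 1,134 of the two-microsecond fillers before its jump: which is presumably where all those "test sequences between jumps" went.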