How it used to be: Programming (and debugging) microprocessors (Part 2)

2011-11-28 17:10

Debugging involved loading the tape of the object code and then using the monitor to run or step through the program. At least it allowed breakpoints and even a one-line assembler. Some of the more sophisticated monitors allowed addressing by name rather than by absolute address, but I can't recall how this one worked. Once again, short-term corrections were implemented as patches so as to avoid having to re-assemble. The same techniques of GOTOs and leaving gaps between subroutines applied here.
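To make the patching idea concrete, here is a small C sketch that performs such a patch on an in-memory copy of the object code: overwrite the faulty instruction with a jump to a deliberately left gap, put the fix there, and jump back. The 8080 opcodes (JMP = C3 hex, MVI A = 3E hex, NOP = 00 hex) are real, but the addresses and the "bug" are invented purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint8_t rom[64];
    memset(rom, 0x00, sizeof rom);     /* pretend image, all NOPs */

    /* Suppose bytes 0x10..0x12 hold a faulty 3-byte instruction and
     * a gap was deliberately left after the subroutine at 0x30. */
    const uint16_t bug = 0x10;
    const uint16_t gap = 0x30;

    /* 1. Overwrite the faulty instruction with JMP gap. */
    rom[bug]     = 0xC3;               /* JMP (8080)            */
    rom[bug + 1] = gap & 0xFF;         /* address low byte      */
    rom[bug + 2] = gap >> 8;           /* address high byte     */

    /* 2. Place the corrected instruction in the gap, e.g. MVI A,05h. */
    rom[gap]     = 0x3E;               /* MVI A,imm             */
    rom[gap + 1] = 0x05;

    /* 3. Jump back to the instruction after the bug site. */
    rom[gap + 2] = 0xC3;
    rom[gap + 3] = (bug + 3) & 0xFF;
    rom[gap + 4] = (bug + 3) >> 8;

    printf("patched %02X %02X %02X at %04X\n",
           rom[bug], rom[bug + 1], rom[bug + 2], bug);
    return 0;
}
```

In practice, of course, the same bytes were poked into RAM or burned into the EPROM by hand; the gaps between subroutines existed precisely so there was somewhere for the fix to live.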


Once the development was over, the object code was loaded into a PROM programmer and the EPROM was then inserted into the target hardware. Next came the eternal challenge: how do you know that the system is working, or, more importantly, why it is not working? Some people wrote their own monitors to address this, but I quickly became a confirmed believer in In-Circuit Emulators.


As you can imagine, some of these tapes were quite long. It was possible to get the tape in fan-fold format, but I never managed to find a supplier. Organising that length of paper tape could be quite a challenge. They could form a roll 5" or 6" in diameter. Reading them would result in many feet of tape spewed out over the floor. Rewinding them was not only tedious but could also damage the tape. My solution (although not my original idea) was to wind them onto a plastic bobbin derived from a sewing cotton reel. My mom did a lot of sewing and there were many "empties". I see they are still available. I would cut a slot and feed the tape into it. Then I had to find a method to wind it on. I acquired a manual grinding wheel (see photo below), removed the grindstone, and wound tape around the shaft to increase the diameter. I then wedged the bobbin onto the shaft and, although it was a two-handed operation, rolling the tape became a breeze.

 

[Photo: au-uc-01.jpg]


My grinding wheel was very useful for winding paper tapes

 


Most of the CP/M computers at this time (around 1979) were designed to work with a teletype input, which quickly morphed into "glass teletypes", as dumb terminals were called. In the middle of our desktop computer development, Lear Siegler (one of the glass teletype manufacturers) brought out a desktop system that was a clone of the PDP-12 from DEC (the largest minicomputer manufacturer of the day). We figured there was no way we could compete, and so I was left rudderless. It did not matter that DEC successfully sued Lear Siegler and their product never made it to market.


I found my way into some industrial design and decided to revert to Intel, largely because of the quality of the support, both hardware and personnel, of their distributor in South Africa. I broke down and financed an Intel MDS236 development system with ICE for the 8085 and 8048 families. The equipment cost more than my house! The paper tape approach had been replaced by three 8" floppy drives: one single density (720KB) and two double density (1.2MB). I also had a high-level language: PL/M. It was now possible to develop software in modules and use libraries. Although the processes were similar (three-pass assembly), they were transparent to the user, and there was mostly enough disc space to do this, although sometimes you had to shuffle floppies. It mainly supported Intel products, although there was a plug-in ICE for a Z80 (see photo below).

 

[Photo: au-uc-02.jpg]


I got a project designing a calorimeter and, for cost reasons, the customer opted for an 8080. I could produce my code on the development system, since 8080 and 8085 code was identical, but I could not debug, since I did not have (and couldn't afford) a new ICE. I managed to get a used Intel development tool called a µScope, which was essentially a reduced-feature emulator in an attaché case (see photo below). The user interface was a bit clumsy (especially as I never received a user manual), but still usable.

 

[Photo: au-uc-03.jpg]


I also acquired an Osborne 1 for the express purpose of producing data manuals (using WordStar) so that I would look professional. I also ordered SuperCalc, which was my first introduction to the world of spreadsheets. (SuperCalc was for CP/M-based computers; VisiCalc was for Apple machines.) The Osborne 1 was a "luggable" computer and had a 5" screen that gave a magnifying-glass view of your document. You could see about 52 characters (of an 80-character-wide document) and 24 lines at a time. Storage was limited to two 360KB 5¼ inch floppy disks, one of which held the application you were running. Changing floppies was not a simple task of opening the drive latch, removing one, and inserting another disc. On CP/M machines some initialisation was required every time a disc was inserted (a Ctrl-C warm boot to log in the new disc), all of which further complicated mass storage. We've come a long way!


The IBM PC finally became accepted as the industry standard around 1983 (in SA at least), and I started moving towards using it for documentation and PCB layout.


More than that, though, Intel started accepting it as the development base for all their hardware and introduced the ICE5100 emulator for the 8051, hosted on a PC via an RS232 connection. Intel even created emulators to run all the existing software (compilers, assemblers, editors) under PC-DOS. By 1986 the development approach was not much different from what it is today, with the major exception of the user interface.


Unlike the single-chip microcomputers that we use today, back then the microprocessor had to connect to RAM, ROM, and peripherals externally. That meant there were up to 8 data lines, 16 address lines, and 3 control signals (27 lines) snaking their way around a PCB. The probability of a manufacturing fault increased dramatically, and Hewlett Packard believed they had a technique to aid debugging when a product failed test in manufacturing. They created an instrument called a Signature Analyzer (see photo below) to capitalise on the idea. It also provided much amusement when seen by the uninitiated, who assumed it applied to one's John Hancock.

 

[Photo: au-uc-04.jpg]


In circuits with repetitive waveforms, diagnostic manuals had pictures of the waveforms, identified with the nodes and the settings that produced them. Microprocessor busses, of course, are non-repetitive, and it is quite an art to debug them. HP's idea was to provide some "waveform" at each node to prove that the system was working. The concept was to force the micro to run a set of instructions that would repeat a bit pattern through a particular node. This pattern is fed into a shift register (in the Signature Analyzer) with feedback taps, similar to a CRC calculation, to generate a 16-bit signature that is displayed as a 4-digit hexadecimal word on the instrument.

That could only be done once the basic system was working. To start up, there had to be some method of opening the data bus and forcing the micro to execute a single instruction over and over, allowing the address bus and the ROM read signal to be exercised. This was fairly easy on Intel processors because the NOP instruction was 00 hex, so all that was needed was an 8-way DIP switch to open the bus and 8 diodes connected in a common-cathode arrangement with a switch to ground. You could then establish a signature on each address line and EPROM data output, and slowly enable the memories etc. from there.
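For the curious, here is a minimal C sketch of the signature idea: the probed node's bit stream is clocked into a 16-bit feedback shift register, and whatever ends up in the register is the signature. The tap positions (bits 7, 9, 12 and 16) follow commonly published descriptions of HP's instrument, and the free-running address count stands in for the NOP trick above, so treat both as illustrative assumptions rather than the exact hardware.

```c
#include <stdio.h>
#include <stdint.h>

/* Clock one data bit into the signature register (a 16-bit LFSR). */
static uint16_t sig_clock(uint16_t sig, int data_bit)
{
    int fb = data_bit
           ^ ((sig >> 6)  & 1)    /* tap at bit 7  */
           ^ ((sig >> 8)  & 1)    /* tap at bit 9  */
           ^ ((sig >> 11) & 1)    /* tap at bit 12 */
           ^ ((sig >> 15) & 1);   /* tap at bit 16 */
    return (uint16_t)((sig << 1) | fb);
}

int main(void)
{
    /* Probe address line A3 while the micro free-runs on NOPs,
     * so the address bus simply counts 0, 1, 2, ... One full pass
     * of the count stands in for the gated measurement window. */
    const int probed_bit = 3;
    uint16_t sig = 0;

    for (uint32_t addr = 0; addr < 0x10000; addr++)
        sig = sig_clock(sig, (addr >> probed_bit) & 1);

    /* A healthy node always yields the same 4-digit signature;
     * a stuck or shorted line yields a different one. */
    printf("signature on A%d: %04X\n", probed_bit, sig);
    return 0;
}
```

On the real instrument the measurement window was not a fixed count: dedicated START, STOP, and CLOCK probes were attached to signals in the circuit under test, so the signature was always taken over the same precisely gated slice of bus activity.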


 
