[Continued from My favorite quirky chips (Part 1)]
The first bit of strangeness was the package: an enormous 50-pin DIP (old-timers will remember the first 68000, which came in an equally huge 64-pin DIP). That allowed for separate instruction and data buses, and some claim this was the first micro with a Harvard architecture. Instructions were 16 bits, with 13 address bits, and the data bus was just a byte wide.
It gets odder. The data bus was called the "Interface Vector bus," or IV for short, and bit 7 was the LSB! IV0 to IV7 carried both addresses and data, multiplexed, which suggests that 256 bytes of data were supported. But no: Left Bank and Right Bank signals selected one of two data sources, doubling the space. The nomenclature reminds me of Paris.
The data space included both RAM and I/O, and the latter was memory mapped.
The 8X300 contained a variety of registers, numbered R1 to R6 and R11. What about the missing ones? The numbers were the octal addresses of the registers, and R11, for unknown reasons, sat at octal 11. R10, or at least the "register" at octal 10, was a one-bit overflow flag, the only condition bit on the part. Only the ADD instruction could set or clear it. And R7 didn't really exist; it was a trick used to access the IV bus.
To read or write in data space or a register, one... well, the process is rather mind-boggling; there isn't enough room to describe it here. But the MOVE, ADD, AND, and XOR instructions included fields specifying how many bits to rotate an argument. Except that sometimes those bits selected a field length instead.
There were four other instructions, including XEC, which executed the instruction pointed to by a bit of convoluted math. Afterward the PC still pointed to the location after the XEC, unless the executed instruction was a jump. Strange, but it did make handling jump tables easy.
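For readers unfamiliar with the idiom, here is roughly what a jump table looks like in C. This is not the 8X300's XEC mechanism, just the general idea of dispatching through an index; the handler names are made up for illustration:

#include <stdio.h>

/* A jump table in C: an index picks one routine out of an array of
   function pointers. XEC reached a similar result by executing a
   computed instruction; these handler names are hypothetical. */
static void handle_reset(void) { puts("reset"); }
static void handle_start(void) { puts("start"); }
static void handle_stop(void)  { puts("stop");  }

static void (*const handlers[])(void) = { handle_reset, handle_start, handle_stop };

static void dispatch(unsigned code)
{
    if (code < sizeof handlers / sizeof handlers[0])
        handlers[code]();            /* indexed "jump" to the routine */
}

int main(void)
{
    dispatch(1);   /* prints "start" */
    return 0;
}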
Note what's missing: there were no interrupts. No stack, no calls. And no subtract. But that was easy enough to simulate by taking the one's complement of the subtrahend, adding one to make the two's complement, and then adding the minuend. Each subtract consumed five instructions.
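In modern terms the trick is just the two's complement identity a - b = a + ~b + 1. Here is a minimal sketch of it in ordinary C, not 8X300 code; on the real part the same idea took several instructions:

#include <stdint.h>
#include <stdio.h>

/* Subtract without a subtract instruction: a - b = a + ~b + 1. */
static uint8_t sub_by_add(uint8_t minuend, uint8_t subtrahend)
{
    uint8_t ones_complement = (uint8_t)~subtrahend;  /* invert the subtrahend      */
    uint8_t twos_complement = ones_complement + 1;   /* +1 gives two's complement  */
    return minuend + twos_complement;                /* then add the minuend       */
}

int main(void)
{
    printf("%u\n", (unsigned)sub_by_add(200, 55));   /* prints 145 */
    return 0;
}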
And many more!
A number of microprocessors had quirky features that could be a curse or a blessing.
For example, the 4004 had a hardware stack on-board the chip. It was three levels deep, which doesn't permit much subroutine nesting. Intel improved on this in the 8008, making the stack seven levels deep. Even in those early days we tried to practice modular programming, so it was common to blow the stack with too many nested calls. The PIC1650 is often credited as the first microcontroller (although I suspect the Scottish PICO1 probably deserves that honor). That part's stack was two deep, mirroring the similar PIC10 parts still selling today.
Interrupt cycles on Intel's 8080 were exactly like any fetch, except the CPU asserted an interrupt-acknowledge signal. The hardware was then supposed to jam an instruction onto the bus to vector to a handler. Eight one-byte calls (RST 0 through RST 7) were provided for this purpose, though plenty of designs used crafty logic to jam a three-byte call instead. Where I worked, we had severe time constraints, and none of those approaches were fast enough because of the stack pushes involved. Instead, when awaiting a particular event, we'd branch to a halt instruction placed just ahead of the handler. The interrupt hardware would jam a NOP, the fastest instruction the CPU could execute, and the code following the halt, the interrupt handler, would then run.
For a while, bit-slice processors were all the rage for high-performance systems. The most popular was AMD's 2900 series. These 4-bit slices could be cascaded to build a system of any word length. Each chip had an ALU (arithmetic logic unit), a decoder for the eight instructions, and 16 nibbles of dual-ported RAM that acted as registers. Things like a program counter and the other components of a computer had to be added externally. Intel was in that game too, with its 3000-series 2-bit slices. Needless to say, a processor with a wide word consumed a lot of parts.
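To see how slices combine, here is a toy C sketch that assumes nothing about the actual 2901 microinstruction format: each 4-bit "slice" adds one nibble and passes its carry to the next slice, and chaining four of them yields a 16-bit adder:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration of cascading, not the real 2901 microword:
   a 4-bit slice adds two nibbles plus carry-in and produces a carry-out
   for the next slice. */
static uint8_t add_slice(uint8_t a, uint8_t b, uint8_t cin, uint8_t *cout)
{
    uint8_t sum = (uint8_t)((a & 0x0F) + (b & 0x0F) + (cin & 1));
    *cout = sum > 0x0F;              /* carry ripples to the next slice */
    return sum & 0x0F;
}

/* Chain four slices into a 16-bit adder. */
static uint16_t add16(uint16_t a, uint16_t b)
{
    uint16_t result = 0;
    uint8_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint8_t nibble = add_slice((a >> (4 * i)) & 0x0F,
                                   (b >> (4 * i)) & 0x0F, carry, &carry);
        result |= (uint16_t)nibble << (4 * i);
    }
    return result;
}

int main(void)
{
    printf("%u\n", add16(12345, 6789));   /* prints 19134 */
    return 0;
}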
Then there was Fairchild's 8-bit F8, which had no address bus. Designers used the CPU chip in conjunction with a number of other unique Fairchild devices, all working in lockstep. Each device included its own data pointers, which tracked the CPU by watching the control signals it emitted. Even the memory controllers had to know when a jump or other instruction caused a change in control flow.
The F8 did have a whopping 64 registers. Sixteen could be addressed directly from an instruction; the rest were accessed via a pointer register. Some instructions could cause that to increment or decrement, making it easy to run through tables. This "autoincrement" idea harked back to the PDP-11.
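The autoincrement idea is the same one C expresses with a post-incremented pointer. A minimal sketch, with illustrative names rather than F8 code:

#include <stdint.h>
#include <stdio.h>

/* Walk a table with a pointer that advances after each access --
   the autoincrement idea the F8's pointer register (and the PDP-11's
   addressing modes before it) provided in hardware. */
static uint16_t sum_table(const uint8_t *entry, int count)
{
    uint16_t sum = 0;
    while (count--)
        sum += *entry++;   /* read the entry, then bump the pointer */
    return sum;
}

int main(void)
{
    const uint8_t table[] = { 3, 1, 4, 1, 5 };
    printf("%u\n", sum_table(table, 5));   /* prints 14 */
    return 0;
}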
The F8's datasheet can be found at http://datasheets.chipdb.org/Fairchild/F8/fairchild-3850.pdf. Strangely, it uses hex, octal, and decimal rather interchangeably. In the early micro days, octal was often used instead of hex, as a lot of developers had trouble grokking the notion of letters as numbers.
By the time the microprocessor became a successful product, computer instruction set architectures were well understood and often beautifully orthogonal. But designers did awful things to cram a CPU into a chip and often had to take what today seem like strange shortcuts. In other cases, the new world of micros caused a flurry of creativity that resulted in some marvelously quirky, and sometimes cool, features. Many of those decisions live on today, frozen in time and silicon by the weight of legacy compatibility.