Alpha Hardware History


I am Roger D. Moore, former vice president of I.P. Sharp Associates. I was involved in IPSANET programming and operation for most of the years from 1976 to my retirement in 1989. For some of this period I was the only programmer working on IPSANET. In addition to Michael Harbinson, mentioned below, David Chivers and Stephen Crouch made significant additions to IPSANET software. Fred J. Perkins of IPSA Europe was quite helpful in the early years. He installed the early European Alphas and made many helpful suggestions on software changes and priorities. He also acted via e-mail as a buffer between several strong personalities in marketing and network development.


IPSANET was originally developed in Amsterdam at Intersystems BV (not the present company of that name). Michael Harbinson was the principal author, assisted by some junior programmers. The original Intersystems BV was jointly owned by I P Sharp Associates Limited in Toronto and Michael. Michael founded Intersystems BV when he left IPSA’s Ottawa office. His association with the protocol was so intense that it was initially referred to as the “Harbinson protocol”. Programming effort started in late 1972.


The software ran on a Computer Automation LSI 2-20. The LSI 2 was selected because Michael had two machines left over from an approach control radar project at Schiphol Airport. Although the LSI-2 is not that pretty a machine, it is not as ugly as some machines I have seen, such as the Intel 8008 or the universally unloved Westinghouse machine used in the RCN FHE-400 hydrofoil project.


A Google search suggests that there is no Web description of the LSI-2; therefore I will attempt to describe some of its features from memory. (I have since obtained a manual for the LSI-2/20.)


It was designed to fit into a 19 inch relay rack. This meant the full width circuit boards were about 17 inches wide and approximately square. One promotional item from Computer Automation was an empty cardboard pizza box with a photo of the processor on the top. The motherboard was completely passive and fairly simple. It had five slots plus a connector for the power supply. Most of the signals were carried in parallel to all five slots. Some signals were daisy chained so that a higher slot could take priority over a lower slot.


From a programmer’s view the LSI-2 was an improved Varian 620i. The two registers (A=accumulator) and (X=index) were 16 bits wide. Normal addressing selected one of 2**15 words; byte mode selected one of 2**16 bytes. There were six conditional branches on accumulator sign/zero status. Fancy addressing modes included many-level indirect addressing and combined indexing and indirection (indexing was applied to the result of indirection). The instruction set included multiply and divide although these were not used in the concentrator.


The minimal configuration for an original version IPSANET node occupied four slots:

1] CPU (7.35a)

2] 8192 word memory (16 bits) (2.9a)

3] One or two SMC (Synchronous Modem Controller) halfboards (1.2a)

4] One AMM (Asynchronous Modem Multiplexor) connecting four async lines (3.0a)

+5V DC requirement listed in parentheses


One aspect of the LSI-2 design which seemed particularly clever was the self-organizing memory boards. CAI had three different sizes of memory: 8K, 16K and 32K words. If a memory system contains more than one memory board, one of the boards must recognize addresses which begin somewhere above zero. There was a magic protocol in the LSI-2 by which the daisy-chained memory boards negotiated this at power-on time. For a system composed of a 16K board followed by two 8K boards, the first board would have a base address of zero. The second board would respond to addresses between 16K and 24K. The third board would service addresses of 24K and up. This is an uneconomic configuration, but I believe the memory address allocation protocol supported it. In practice IPSANET soon outgrew the 8K limit and required a memory of at least 16K to run.
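The effect of the power-on negotiation can be sketched as follows. This is a minimal illustration of the address-allocation outcome only; the actual LSI-2 daisy-chain signalling is not documented here, and the board sizes are the example from the text.

```python
def assign_bases(board_sizes_words):
    """Each board in daisy-chain order claims [base, base + size) and
    effectively passes base + size to the next board downstream."""
    bases = []
    next_base = 0
    for size in board_sizes_words:
        bases.append(next_base)
        next_base += size
    return bases

# The 16K board followed by two 8K boards (sizes in words):
print(assign_bases([16384, 8192, 8192]))  # [0, 16384, 24576]
```

The second board thus answers addresses from 16K to 24K and the third from 24K up, matching the example above.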


One very useful feature of the LSI-2 was that the memory was core store rather than semiconductor. The processor and power supply were designed to take advantage of an important property of core store. Core storage (with one precaution) is non-volatile, just like a hard drive or floppy. The contents are preserved without electrical power. The required precaution is that power must not be lost in mid-cycle. Readout from core storage is inherently destructive and clears the word which was read to zero. Most core storage designs always rewrite the read result back to core store to give the system designer a non-destructive readout. This non-destructive property assumes that there is sufficient electrical power to complete the rewrite after the read.


In the LSI-2 the power supply provided an early warning of an imminent power failure. This generated an interrupt which could be used to do something simple before a power failure. IPSANET had an extremely simple use of this interrupt. It led to a HALT instruction. This guaranteed that the core storage would be inactive when the voltage became too low to operate. When power returned a power-on interrupt was generated.


This ability to recover from a power outage was a great convenience in network operation. A power outage did disrupt all of the virtual calls passing through or originating in a node, but it did not require a software reload. As a software reload could take from one to ten minutes, hardware maintenance was much easier than in an environment where recovery from a power-off situation is quite slow. (In my experience an IBM 3081 required about an hour to recover from a power outage.)


This ability to tolerate power outages was considered quite valuable by network operations staff. When the LSI 2-20 was replaced by the 2-40 with semiconductor mainstore, 12V lead-acid batteries were used to give a non-volatile effect for the important node clusters.


Another valuable feature of the LSI-2 was the customisable bootstrap chip. The normal boot chip provided by Computer Automation had the capability of booting from an ASR Model 33, fast paper tape, tape, hard drive or 8” floppy. The standard chip had 256 words of fusible-link ROM. The socket was designed to accept a 512-word chip. Michael and I designed a protocol which allowed a simple program in the upper 256 words of the boot chip to load a program from IPSANET. Operation required the co-operation of three other computers and will be described elsewhere. This ability to customize the reboot scheme to meet our own needs was another major benefit of the LSI-2 architecture.


With a bootstrap chip capable of requesting reload via the network, a deadman timer was a valuable addition to the machine. It assumed that healthy software would issue input instructions at 10 Hz or faster. An input pulse quickly charged a capacitor via a transistor. A large-value resistor slowly discharged the capacitor. When the voltage across the capacitor approached ground, a reload was triggered.
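The arithmetic behind such an RC deadman timer can be sketched as follows. The component values and trip threshold here are hypothetical, chosen only to show why 10 Hz input pulses keep the timer from firing; the actual circuit values are not recorded.

```python
import math

# Hypothetical RC deadman timer values (not the real circuit):
R = 1.0e6      # discharge resistor, ohms (assumed)
C = 1.0e-6     # capacitor, farads (assumed)
V0 = 5.0       # capacitor voltage just after an input pulse (assumed)
V_TRIP = 1.0   # reload threshold near ground (assumed)

# Exponential discharge: v(t) = V0 * exp(-t / (R * C)).
# Solve for the time at which v(t) falls to V_TRIP.
t_trip = -R * C * math.log(V_TRIP / V0)
print(f"time to reach trip voltage: {t_trip:.2f} s")

# Healthy software pulses the timer every 0.1 s (10 Hz), well inside
# the discharge window, so the reload never fires.
assert t_trip > 0.1
```

With these assumed values the trip time is about 1.6 s, so a hung program that stops issuing input instructions triggers a network reload within a couple of seconds.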


The bootstrap chip on the LSI 2-20 used fusible link ROM technology. This was considerably faster than the core store of the time. The concentrator software used a subroutine in the ROM (shared with the Lazarus code) to calculate the CRC of transmission frames. This saved ?? microseconds per frame.


Another feature which provided a benefit to IPSA was the particular power supply design. The PS was a simple series-regulated system with several power transistors in parallel. Each transistor was in series with a current-sense resistor. The resistors were mounted on the underside of a printed circuit board. CAI assumed that European power was 220 volts, whereas it is often 240 volts. This caused chronic overcurrent in the regulating transistors. After the abused transistor shorted out, the resistor would overheat. Because it was on the underside of the PC board, the solder would melt and the overheated resistor would fall out. This allowed quick location of the shorted transistor.


Although CAI had prices and delivery times which were far better than those of IBM, there were some problems at Computer Automation. Their engineering design and manufacturing quality control were inferior to IBM's.


Interrupts were vectored to an address obtained from a peripheral. The address was placed on the data bus in response to a request from the processor.


There was an open-collector interrupt request line from the peripherals to the processor. It indicated that one or more devices wished to present an interrupt. Priority was resolved by position on the bus. A daisy-chained signal (PROT-/PRIN-) from the processor determined which device with a pending interrupt had the highest priority. A device with a pending interrupt was supposed to prepare an interrupt address and disable interrupt preparation in lower devices.


This daisy-chained request for an interrupt address had two potential failure modes. One was that the signal was permanently inactive below the bad device. This was relatively harmless as no lower peripheral could present an interrupt. More insidious was the permanently active condition. This potentially allowed two devices to simultaneously place an interrupt address on the data bus. The subsequent interrupt vectored to the logical AND of the two addresses and was guaranteed to cause trouble. As this only happened about once a day, it was easily blamed on software or some other cause.
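The corruption can be illustrated numerically. On an open-collector bus a bit reads high only if every driving device leaves it high, so two simultaneous vectors combine as a bitwise AND. The two addresses below are made up for illustration.

```python
# Two devices placing interrupt addresses on the bus at the same time.
# On an open-collector bus the CPU reads the bitwise AND of the two.
dev_a = 0x46   # hypothetical vector address of device A
dev_b = 0x52   # hypothetical vector address of device B

bus = dev_a & dev_b
print(f"bus sees {bus:#04x}")   # 0x42 - neither device's real vector

# The CPU vectors through an address belonging to neither device.
assert bus not in (dev_a, dev_b)
```

The processor then fetches an interrupt vector from an address that belongs to neither peripheral, which is why the resulting once-a-day crash looked so much like a software bug.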


CAI quality control ignored this interrupt priority daisy chain. We learned the hard way that it was important to test new CAI boards to see that this signal had propagated properly. Fortunately this was rather easy to do during normal operation. Monitoring the output from the lowest peripheral with an oscilloscope would quickly indicate whether all boards were good. If the waveform indicated a fault, marching the scope probe up the motherboard would identify the offending peripheral. I don’t know why CAI skipped this step in their QC. This test procedure was introduced in summer 1977 after IPSA Canada had difficulties with a newly-installed large node in Ottawa.


The Synchronous Modem Controller halfboard connected a single synchronous line using a chip whose number I have forgotten. It supported several character sizes and sync byte values. It did not calculate a CRC. Data was transferred by interrupting for every character transmitted or received.


The Asynchronous Modem Multiplexor fullboard connected four asynchronous lines via UARTs. The UART allowed connection of seven-bit IBM terminals, eight-bit ASCII devices and five-bit Telex lines. There was no direct provision for software adjustment of the UART clock speed, but there was an unused ±12V output for every port. Bob Bernecky designed a small external board which used this output to switch the associated UART clock between 300 bps and 135 bps. The AMM had a wide rear edge connector. This provided primary board parameters such as device address and interrupt vector address. There were also secondary parameters such as UART clock wiring. Each async RS-232 port required seven wires (plus two grounds).


Placing the primary board parameters on the rear connector rather than onboard DIP switches was apparently a CAI recommendation as the SMC also used this parameter scheme. The advantage of this was that when replacing an AMM there were no DIP switches to set. The disadvantage was that a missing or crooked rear edge connector could wreak havoc upon the system due to misvectored interrupts. The simple change of putting a combined handle/cover over the rear edge connector gave a considerable increase in AMM reliability as misaligned connectors were easy to spot and correct.


About June 1977 IPSA began to use a new communications board in the Alpha. This was the Universal Modem Multiplexor halfboard designed and built by Macrodata BV. It supported four synchronous or asynchronous ports. The main chip was initially an AMD 9551 USART although this was eventually replaced by an Intel 8251. Clock selection was controlled by a mixture of software and external strapping. A passive clock header (programmed with a soldering iron) was plugged into a 16 pin DIP socket to provide software with four choices of clocks for a particular line. (For synchronous ports the header routed the modem provided clock signals to the USART.) The ability to have different send and receive clocks allowed support of V.23 1200/75 baud FSK modems. Unfortunately it was rare to find a terminal supplier willing to support 1200/75 communication.


The UMM was designed with the benefit of hindsight. Years of reading CAI engineering change notices revealed the follies of certain techniques in the TTL and LSI-2 bus universes. There were some minor teething problems, such as the discovery that adjusting the asynchronous character length after speed resolution for a new call caused a glitch on DTR (data terminal ready). The initial software testing was with a 103A2 modem, which used a relay coil to sense DTR. Newer modems used TTL chips and were quick to disconnect on the fall of DTR. Retrofitting a capacitor to slow the fall of DTR avoided the problem.


The only serious problem with the USART concerned transmit shutdown. When the contents of a data packet have been sent to a port, it is necessary to shut down transmission mode. Neither the UART nor the USART had the ability to transmit an invisible idle (ASCII rubout with the start bit omitted or set to mark). With the AMM, transmit disable simply blocked the path from the UART's new-character-required output to the interrupt system of the LSI-2. With the AMD 9551, turning off transmit mode had two effects. One was to disable future character requests as with the AMM. The second effect was to immediately force transmit data to mark (the idle condition).


Jamming transmit data to mark created several requirements. The first was that two dummy characters had to be sent to the USART before leaving transmit mode. Under ideal conditions these characters were never transmitted. Adding two to the count of characters to be transmitted from a packet to a terminal was rather easy. The difficult part was guaranteeing the timing required by the 9551. The next-character request was signalled at the beginning of stop bit transmission. The interrupt could be delayed by interrupt requests from higher priority devices such as network link devices. We learned the hard way that the USART interrupt which requested transmit mode reset had to be serviced before the centre of stop bit time. At 1200 bps it takes 416 usec to transmit half a bit. Given that some interrupts could take more than 100 usec to process, it was difficult to meet this time constraint. The consequence of not meeting it was that when transmission resumed, the start bit of the dummy character got transmitted. This garbled the first character of the subsequent packet.
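The half-stop-bit deadline quoted above follows directly from the line speed:

```python
# Derivation of the ~416 usec service deadline at 1200 bps.
baud = 1200
bit_time_us = 1_000_000 / baud     # one bit lasts ~833 usec at 1200 bps
half_bit_us = bit_time_us / 2      # the centre of the stop bit
print(int(half_bit_us))            # 416
```

Since a single higher-priority interrupt could consume a quarter of that window, two or three stacked link interrupts were enough to miss the deadline.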


The problem was solved by replacing the AMD 9551 with the Intel 8251A. The 8251A does not jam transmit data to mark when transmit mode is turned off. It took several months to modify the Alphas one node at a time but it was certainly worth it. With the 8251A terminals as fast as 9600bps could be reliably attached to a UMM.


Steve Crouch wrote code which allowed a medium-speed printer to be attached to a UMM async port. It used the network Bisync protocol and was a useful extension. He also wrote node software to attach a Kennedy tape drive to an Alpha as an alternative to a Bisync port. Again the existing Bisync protocol was used. This allowed the MDS 2400 in the London office to be retired.


The printer used in many IPSA offices was a Mannesman-Tally dot-matrix printer. Jim Field of IPSA London designed the font. His adventures in font design are described in a 2005 e-mail:


I contributed to the remote printer effort in London by evaluating potential printers and designing the font. I am afraid that I do not recall much of Steve Crouch's work.


I must say that I was rather pleased with the work I did. It involved visiting the Mannesman-Tally (M-T) factory and trying to convince them that there was considerable potential for printer sales within Europe if the remote printer facility was successful. IPSA bought one and I proceeded to canvas the user population on which characters they wanted. As I recall, there were more characters available in the Mannesman-Tally dot-matrix character PROM than available on the trains of the 1403 in YYZ.


With little input from the user population I decided to devise a printer character set as best I could.


The M-T printer had the unusual property of providing the font designer with a flexible but intriguing problem. Instead of the usual simple matrix of n by m (perhaps 7 x 8 or whatever) the thermal and mechanical properties of the printer allowed an extra fire of some of the pins (I think there were nine vertically). This allowed the filling in of more complicated patterns. I must confess that I cannot remember the exact rules, but it did allow for 'filling in' and gave me a considerable amount of artistic freedom in creating the font. I included some extra mathematical symbols (sqrt and additional Greek letters) and some European language symbols not available on the 1403 trains.


In order to build the character generating PROM I had to design the character set. I constructed a binary matrix in APL which matched the PROM map and filled it by building 'foo' functions and using the matrix manipulating tools of APL. It was great fun.


Once I had constructed the font (I think I used some tools left around by LMB when he was playing with the Selectric plotting software), I downloaded the PROM load to paper tape using a TTY 33 and sent it off to a local PROM burner. To my great delight the font worked first time and remained in use at Buck House until I left.



To enhance performance the LSI-2/40 was installed in some sites as a 2/20 replacement. It had a faster CPU, which was desirable when total synchronous line speed exceeded 4x9600bps. There were some problems as the 2/40 was not completely compatible with the 2/20. Some minor parts of the UMM support assumed that the 2/20 provided a specific delay between two successive output instructions. The delay was provided by some spurious instructions. With the increased speed of the 2/40, the number of spurious instructions had to be adjusted at initialisation depending on CPU model.


The bootstrap ROM of the 2/40 was incompatible with the 2/20 ROM. It was no longer possible to store constant data in the ROM nor to access the ROM after loading completed. The CRC calculation had to be restored to the concentrator program. Also, the table-based CRC calculation used by Lazarus had to be replaced by an iteration which developed the CRC one bit at a time rather than one byte at a time. As there was no other activity when Lazarus was running, this was not a problem.
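The two CRC strategies can be contrasted in a short sketch. The polynomial here is an assumption: 0xA001 is the reflected form of the CRC-16 commonly used by Bisync-era hardware, and the actual IPSANET frame CRC may have differed.

```python
POLY = 0xA001  # reflected CRC-16 polynomial (assumed, not confirmed)

def crc16_bitwise(data, crc=0):
    """One bit per step - the slower form the 2/40 Lazarus had to use."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
    return crc

# A 256-entry lookup table lets the byte-at-a-time form do one table
# probe per byte instead of eight shift/XOR steps.
TABLE = [crc16_bitwise(bytes([b])) for b in range(256)]

def crc16_table(data, crc=0):
    """One byte per step - the style of the 2/20 ROM-resident routine."""
    for byte in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
    return crc

frame = b"IPSANET test frame"
assert crc16_bitwise(frame) == crc16_table(frame)
print(f"{crc16_table(frame):#06x}")
```

The table version trades 256 words of storage for roughly an eightfold reduction in inner-loop iterations, which is why losing ROM-resident constant data on the 2/40 forced the slower bit-at-a-time form.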


Functionality and performance were enhanced by a second peripheral from Macrodata. This was the MSM, which attached two synchronous lines. Unlike the UMM and SMC it was capable of supporting SDLC/HDLC. Therefore the initial use of the MSM was to connect to X.25 and SNA networks. It was later used to attach IPSANET high-order lines running at speeds of up to 19.2 kbps, which was the modem limit in the mid-80s. It was based on a Zilog 8530 line interface and an onboard Z80 which handled the link-level protocol. A FIFO chip was used between the Alpha bus and the Zilog to allow the two processors to operate independently.


In 1985, the limitations of the Alpha were becoming more obvious. The speed of the LSI-2/40 was just adequate to act as a major forwarding node attached to modems with speeds exceeding 9600bps. The storage size limited the amount of new function which could be added. The cost was higher than some mass-produced microcomputers.


Beta deployment began in 1987. Alpha purchases ended about 1988. Some Alphas were redeployed from major sites to the hinterlands where lower capacity was tolerable. By 1989 more than 10% of the operational nodes were Betas.