Framing and Data Link Control

 

When a message is more complex than the output of a SPST light switch, a framing protocol to identify the ends of the message is required. In asynchronous communication this is accomplished with a start bit, an agreed number of data and parity bits, and a stop bit which may be arbitrarily elongated. The receiver expects to find the line in an initial marking state (the same as the stop bit). The line’s transition from mark to space indicates that a new character is beginning. Subsequent bits are placed in a shift register, and a check is made that the line has returned to mark after the necessary data bits have been received. Given a valid stop bit and an optional parity check, a good character has been received and can be passed on to a higher level protocol processor.
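
As an illustration (not IPSANET code), here is a minimal sketch of asynchronous character reception, assuming the line has already been sampled once per bit cell at centre bit time (0 = space, 1 = mark):

def receive_async_char(samples, data_bits=8):
    """Decode one character from a list of line samples, one per bit cell.
    samples[0] must be the start bit (space).  Returns (value, valid)."""
    if samples[0] != 0:              # no start bit: line is still marking
        return None, False
    value = 0
    for i in range(data_bits):       # shift register, least significant bit first
        value |= samples[1 + i] << i
    stop = samples[1 + data_bits]    # line must have returned to mark
    return value, stop == 1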

 

Framing in IPSANET was a little more complex as a packet contained multiple bytes. There was a need to locate the first and last bytes of a packet in the serial bit stream received from the modem. Full duplex synchronous modems were used to connect adjacent nodes. A synchronous modem provides transmit and receive clock signals to the data terminal. These clock signals control the speed of serial transmission and indicate centre bit time to the receiver. This eliminates the need for bit rate oscillators which are required in asynchronous communication. It also means that in the absence of errors the transmit and receive data streams contain the same number of bits. This is not true in an asynchronous channel which might delete extra stop bits occurring between characters.

 

A synchronous data receiver starts by dividing the received bit stream into bytes. IPSANET used hardware intended for the USASCII dialect of IBM bisync. When the receiver does not know where a byte begins, it enters a mode called search for sync. When two successive SYN bytes (hex 16 in USASCII) have been received, character phase has been established. This allows subsequent groups of eight bits to be treated as bytes. Once character phase was established it was preserved until a transmission error was detected. This differs from the normal half duplex bisync protocol, where character phase is lost after every frame to allow transmission in the opposite direction.
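
A sketch of search for sync, under an assumed most-significant-bit-first ordering (the real hardware's bit order is not recorded here); SYN is hex 16:

def search_for_sync(bits):
    """Slide bit by bit along the received stream until two successive
    SYN bytes are seen; return the offset where character phase begins."""
    SYN = 0x16
    target = (SYN << 8) | SYN        # sixteen bits: SYN SYN
    for offset in range(len(bits) - 15):
        word = 0
        for b in bits[offset:offset + 16]:
            word = (word << 1) | b
        if word == target:
            return offset + 16       # subsequent groups of 8 bits are bytes
    return None                      # still searching for sync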

 

After character phase was established, the receiver entered a state where the only acceptable received bytes were SYN or STX (hex 02). Any other byte was treated as an error and as a loss of character phase. The structure of all messages transmitted between nodes was:

SYN SYN STX count message_bytes ETX CRC [two bytes]

This is simpler than proper bisync, where five different frame formats exist. (See http://ckp.made-it.com/bisync.html for a short description of proper bisync.) The <count> byte specified the number of following bytes including ETX CRC. Thus the minimum value was three and the maximum value was determined by receiver buffer size. [I am writing from memory and the enforced minimum may have been five rather than three. The reason is that no sensible message contained fewer than two message bytes.] The 1976 version of the protocol used 32 byte buffers, so the count was constrained to a maximum of 28. From 1978 to 1981 the buffer size was increased to 64 bytes throughout the network. [I have heard that subsequent work by Reuters after I left IPSA packed several packets into one transmission frame, for reasons which will be discussed later.] The CRC calculation included both the STX and the ETX bytes. CRC verification is unable to detect dropout of leading zeroes; starting the calculation with the nonzero STX avoids this well-known CRC problem. The CRC-CCITT polynomial was used to verify that the frame was received without error. The CRC divisor (check polynomial) was X^16 + X^12 + X^5 + 1.
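
A bit-serial sketch of the CRC-CCITT calculation over the framed bytes; the zero initial value and most-significant-bit-first shift direction are my assumptions, not taken from the IPSANET source:

def crc_ccitt(data, crc=0):
    """CRC with divisor X^16 + X^12 + X^5 + 1 (hex 1021), bit-serial."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

# the frame CRC covers STX (hex 02) through ETX (hex 03) inclusive:
# crc_ccitt(bytes([0x02, count]) + message_bytes + bytes([0x03]))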

 

I don’t know much about error probability analysis and never attempted it for this protocol. The CRC check detects all errors in which the syndrome (exclusive-or difference between original message and received message) is not a multiple of the check polynomial. (See http://www2.rad.com/networks/1994/err_con/crc_how.htm for a theoretical discussion.) Therefore in theory one in 65536 transmission errors would go undetected. Receipt of the final ETX was required, which provided some protection against dropout/insertion errors. (A synchronous modem may lose or insert an entire baud. IPSANET modems typically transmitted three or four bits per baud.)

 

I don’t think either of the receivers with which I worked checked for the two mandatory interframe SYN bytes. The transmitters certainly tried to send them.

 

Monitoring the SYN bytes on a lightly used data link provided a fairly good bit error rate test. Every error was counted. Counters were logged and reset at 15-minute intervals on the 3705 and at 30-minute intervals on the CAI Alpha. The same counter was used for errors in received frames (bad count, CRC failure, bad ETX). On an almost idle link, even with an error in every third byte, every error would be detected and counted. With heavy data traffic only one error per frame was counted, and so the error counts had to be interpreted with some knowledge of traffic levels.

 

Michael Harbinson knew that the SDLC/HDLC framing protocol would have been more efficient. Assuming the frame contains no instances of five successive one bits, SDLC framing costs three bytes of overhead. Each sequence of five one bits adds one bit of framing overhead. Thus with 57 message bytes the SDLC framing overhead would be between three and 14 (3 + 11) bytes, compared with seven bytes for the protocol described above.
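
The overhead arithmetic can be checked with a short sketch that counts the zero bits SDLC/HDLC would stuff (one inserted after every run of five consecutive one bits):

def stuffed_bits(payload):
    """Count the zero bits bit-stuffing would insert into this payload."""
    run = inserted = 0
    for byte in payload:
        for i in range(8):           # assume LSB-first serialization
            if (byte >> i) & 1:
                run += 1
                if run == 5:         # five ones in a row: stuff a zero
                    inserted += 1
                    run = 0
            else:
                run = 0
    return inserted

# worst case for 57 bytes of all ones: 57*8 // 5 = 91 bits, about 11 bytes,
# on top of the three fixed framing bytes mentioned above
print(stuffed_bits(bytes([0xFF] * 57)))   # prints 91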

 

HDLC framing was not economically feasible in 1975. The IBM 3705, which was rather expensive, supported it. No reasonably priced minicomputer supported HDLC. Economic considerations forced the use of readily available bisync hardware.

 

A good discussion of framing protocols can be found at: http://www.faqs.org/rfcs/rfc935.html

 

The software specification for the frame receiver was that bad frames were discarded without any indication other than the error count. Valid frames were passed to the link control software.

 

LINK CONTROL

 

Because the frame receiver and modem will discard corrupted frames, error control is required somewhere in the network. In TCP/IP some of this error control is done on an end-to-end basis. In IPSANET, error control was entirely the responsibility of the link control software. The adjacent nodes connected via a data link (usually including a modem) were responsible for controlling errors with the following rules:

1] Every data packet presented for transmission should eventually be received by the adjacent node.

2] Error recovery protocols were allowed to alter the order of data packets.

 

Link control software recognized two kinds of frames: link control packets and data packets. Data packets could be either end-to-end packets of some flavour or route control packets. The bytes immediately following the count byte were of interest to link control. The formats used were:

DATA: linkctlbyte address0 address1 interendctl [zero or more data bytes] ETX CRC

CTL: (hex 00 or hex 80) zero (first ctl phrase of two bytes), (second ctl), …, ETX CRC

There was a third type, called a message from Lazarus, with the format:

LAZ: 0, 3, (DLL specific data) ETX CRC

A detailed discussion of the DLL protocol will be found in the Network Management document.

 

The linkctlbyte at the start of a data packet was divided into three fields (a packing sketch follows the list):

Abbreviated node number (one bit)

Internode sequence (five bits)

Forwarding priority (two bits; zero is a forbidden value)
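
A sketch of packing and unpacking these fields; the positions of the fields within the byte are my assumption, since the text does not record them:

def pack_linkctlbyte(abn, seq, prio):
    """abn: 1 bit, seq: 5 bits, prio: 2 bits."""
    assert 1 <= prio <= 3, "zero is a forbidden forwarding priority"
    return ((abn & 1) << 7) | ((seq & 0x1F) << 2) | (prio & 3)

def unpack_linkctlbyte(byte):
    return (byte >> 7) & 1, (byte >> 2) & 0x1F, byte & 3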

 

A simplified view of received frame processing at link control is that the forwarding priority can be used to distinguish between control and data packets. There were some considerations of accurate error logging which complicated this. The motivation was to provide some hints to network operations staff as to the exact cause of link failure.

 

LINK INITIATION

 

Route and link control software viewed a link as being in one of two states: initiated or uninitiated. A link in the uninitiated state was incapable of transmitting data packets. An initiated link could accept data packets for transmission to the adjacent node.

 

Link control in the uninitiated state was fairly simple. Link control packets to initiate the link were periodically transmitted. The receiver listened for these packets or for a message from Lazarus. Receipt of a message from Lazarus transferred control of the link to the DLL Neighbour function to begin loading software into the Lazarus machine.

 

After filtering out an initial message from Lazarus, two messages were valid on an uninitiated link. All other messages were discarded. The messages were:

RDY0: 00 00 RDY 00 ETX CRC

RDY1: (00 or hex 80) 00 RDY 01 [other data] ETX CRC

 

The transmitter began by waiting for CTS (clear to send) from the modem. It then sent a RDY0 message every 1.5 seconds and listened for a reply.

 

Receipt of a RDY0 indicated that the adjacent node was also in an uninitiated state and that link initialisation could begin. A RDY0 was acknowledged with a RDY1 message. A flag was set to inhibit transmission of the next RDY0 message but not its successor. This flag was introduced to solve a serious problem with initiation of satellite links. Without it, the transmitter idle time after RDY0 discussed below was guaranteed to be less than 1.5 seconds. This led to a sequence of line initiation followed by immediate line reset.

 

Receipt of a RDY1 indicated the adjacent node had received either a RDY0 or a RDY1 from this node. A RDY1 was acknowledged with a RDY1 message and the link initiation was complete. (If the adjacent node failed to receive a RDY1 from this node, various error detection schemes for an initiated link will return this node’s link to the uninitiated state in 15 seconds or less.)

 

This initiation protocol depends upon a reasonable round trip delay through the modem. Transmission of a RDY1 in response to a RDY0 will cause the transmitter to remain idle for at least 1.5 seconds (assuming no received RDY0 or RDY1). If too much time elapses, another RDY0 will be transmitted, which will destroy the link initiation if received. This 1.5 second delay was adequate for a single satellite hop with a one way delay of about 660 milliseconds. It tended to fail for a double satellite link. Hindsight suggests that the very simple change of inhibiting the next two RDY0 transmissions would have solved the double hop problem.
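
The handshake amounts to a small state machine. A sketch under the rules above (the class and method names are mine):

class UninitiatedLink:
    def __init__(self, send):
        self.send = send                 # function that transmits a control message
        self.inhibit_rdy0 = False
        self.initiated = False

    def on_timer(self):                  # fires every 1.5 seconds after CTS
        if self.inhibit_rdy0:
            self.inhibit_rdy0 = False    # skip this RDY0 but not its successor
        else:
            self.send('RDY0')

    def on_receive(self, msg):
        if msg == 'RDY0':                # adjacent node is also uninitiated
            self.send('RDY1')
            self.inhibit_rdy0 = True     # avoids the satellite-link race
        elif msg == 'RDY1':              # adjacent node saw one of our RDYs
            self.send('RDY1')
            self.initiated = True        # link initiation is complete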

 

One common problem in data network operations is that a link may be looped back in three different ways. Loopback is a legitimate network diagnostic tool. By looping the data link at various points, such as before or after a modem, a severe link fault can often be isolated. For this reason, a link control protocol should tolerate loopback and perhaps report on its success.

 

In loopback mode one data terminal on a data link receives its own transmissions. The other data terminal receives these transmissions, its own, or nothing. The first of these loopback modes is quite pernicious and caused serious problems in the original network implementation. The link initiation protocol was modified to allow detection and toleration of loopback.

 

The RDY1 message was modified to include a 16 bit node number and a protocol level indicator. The 16 bit node number had two uses. One use was to detect receipt of a RDY1 message from self. This inhibited link initiation and led to different behaviour in the Alpha and 3705. In both implementations it resulted in the emission of MSG 02 to the logging system and the setting of a loopback indicator for the link. The only purpose of the indicator was to inhibit subsequent MSG 02 emissions. For an Alpha link, an observer watching a looped back link with relatively short delay would see a RDY0, a quick RDY1 and 1.5 seconds of inactivity. For a 3705 the observer would see a shower of RDY1 messages. Error counting was active during loopback but obviously more precise with the Alpha. For the Alpha, loss of dataset controls from the modem (any of DSR CTS CD) reset the loopback bit; for the 3705, link initiation reset it. Neither scheme was satisfactory. The Alpha scheme sometimes generated an unreasonable number of messages. With the 3705, it was difficult to determine that a data link was still in loopback.

 

The second use of the 16 bit node number was to compute an abbreviated node number (=ABN). Network organization assumed that every node had a unique node number. An ABN of one indicates that the local node number is greater than that of the adjacent node. The ABN was used on an initiated link to detect a message from self.

 

The protocol level indicator was useful during a period where the network was undergoing a protocol change. It indicated what level of data link and route control a specific node supported. If both ends supported a new feature, it was used. If one end was running the older software, the newer feature would be suppressed on a particular link. The most drastic use of this was during the 1980 changeover to a datagram free network. The central area of the network was capable of supporting both the old and new routing protocols. New format route control messages were never sent to the older nodes. This central area was gradually expanded for several months until it encompassed the entire network. At this point desupport of the old route protocol began.

 

The RDY1 scheme of exchanging link parameters has limitations. It is inferior to the parameter negotiation which is used in PPP initialisation. The problem with RDY1 is that a node unilaterally declares a specific parameter value. For node number this works well. For something like a preferred frame size limit it is just barely workable. Both nodes would have to pick the minimum of their own and the adjacent node’s size limits. (Alternatively there might be a protocol in which the maximum transmit size for self is determined by the adjacent node’s declared size limit.)

 

LINK OPERATION

After link initiation, a link was ready for normal operation. The link initiation process was assumed to reset certain link control variables such as the next transmit and receive sequence numbers. The link control software had one duty in addition to error control on received data packets. This was to detect failure of the data link. When link failure was detected, a log message which indicated the probable cause of failure was emitted.

 

The first step in receive frame processing by link control was to detect a packet from self. The rule for a node with ABN one was that a received message with ABN of zero was legitimate. The node with ABN zero had a more complicated rule, as there were certain messages with ABN zero which could be legitimately received. These were:

RDY0 (indicates adjacent node has reset the link for some reason)

RDY1 (this suggests that adjacent node failed to receive our RDY1. Reply with a RDY1 and ignore)

Message from Lazarus (The adjacent node wants to start a downline load of new software. Reset our data link and ignore the message which will get resent by Lazarus.)

 

The node with ABN one also had to recognize these cases but decoding was not necessarily entangled with loopback detection.

 

In the initial implementation of loopback detection, receipt of a message from self caused an immediate reset of the link. At the request of network operations staff, this was changed to discard the message while remembering that the most recent message was from self.

 

After filtering out messages which required a link reset and the loopback messages, the link was left with a mixture of link control and data messages. A RDY2 packet was transmitted every 1.5 seconds on an initiated link. If 15 seconds passed without receipt of a RDY2 from the adjacent node, the link was reset. Three different logging messages could be generated upon the timeout (a sketch follows the list):

23  modem control signal absent (DSR CTS CD)

26  last received frame was from self

21  other timeout condition
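
A sketch of this keepalive logic with the three log codes above (the timer values are from the text; the structure and field names are mine):

def poll_initiated_link(link, now):
    if now - link.last_rdy2_sent >= 1.5:       # emit a RDY2 every 1.5 seconds
        link.send('RDY2')
        link.last_rdy2_sent = now
    if now - link.last_rdy2_received >= 15.0:  # nothing heard: reset the link
        if not (link.dsr and link.cts and link.cd):
            link.log(23)                       # modem control signal absent
        elif link.last_frame_from_self:
            link.log(26)                       # last received frame was from self
        else:
            link.log(21)                       # other timeout condition
        link.reset()                           # back to the uninitiated state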

 

One incomplete protocol change was to allow suppression of RDY2 messages on a heavily loaded link. The receiver of a positive acknowledgement treated it like a received RDY2 and reset the 15 second timer. I never figured out the exact rule which would have allowed RDY2 transmission to be suppressed with complete reliability. The reason for suppressing the RDY2 would have been to improve bandwidth efficiency slightly.

 

ERROR CONTROL AT LINK LEVEL

 

Given a protocol with distinct messages for data and acknowledgement of data, two symptoms of data link errors such as CRC errors are visible at the link control level.

1] Data packet loss

2] Acknowledgement loss

A successful link control protocol must survive both types of frame loss.

 

A data packet contained one five-bit field used in error control. This is known as the internode sequence number (abbreviated SN) below.

 

A link control packet which was not a RDYx packet contained two kinds of link control phrase. The first was an acknowledgement of a received data packet. It had two forms:

ACK SN positive acknowledgement

NAK SN negative acknowledgement of data packet with SN.

 

The second type of phrase was an enquiry which was used when a transmitter failed to receive acknowledgement of a recent data packet. The exact form of the enquiry changed rather drastically and will be discussed later.

 

Link control required several variables. I will speak as though a node had only one link. In reality these variables existed for every high order link which the node supported.

 

The receive table was conceptually a 32 element Boolean vector. (It was not implemented this way in either the Alpha or the 3705.) The receive table was indexed by the SN of a recently received data packet. A one indicated that the packet with that SN was successfully received. A zero indicated that the packet was lost.

 

The send table was a 32 element vector of pointers. If a data packet with a particular SN had been transmitted but not acknowledged, ST[SN] was a pointer to the data buffer for that packet. Receipt of a positive acknowledgement caused the data buffer to be returned to free storage. Receipt of a negative acknowledgement caused the buffer to be enqueued for retransmission with a new SN. In both cases ST[SN] was cleared. An acknowledgement for which ST[SN] was already zero was ignored.
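
A sketch of both tables, using a 32-bit mask for the conceptual Boolean vector and a 32-entry pointer vector for the send table (the representation is mine; neither machine implemented RT this way):

class LinkTables:
    def __init__(self):
        self.rt = 0                # receive table: bit SN set = packet received
        self.st = [None] * 32      # send table: ST[SN] -> unacknowledged buffer

    def note_reception(self, sn, good):
        if good:
            self.rt |= 1 << sn     # future ACK
        else:
            self.rt &= ~(1 << sn)  # future NAK

    def on_acknowledgement(self, sn, positive, retransmit_queue, free_pool):
        buf = self.st[sn]
        if buf is None:            # ST[SN] already clear: ignore
            return
        (free_pool if positive else retransmit_queue).append(buf)
        self.st[sn] = None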

Some scalar values were also required:

NextXSN     SN of the next data packet to transmit

STsize      32 residue (NextXSN - SN of oldest entry in ST)

NextRSN     expected SN of the next received data packet

RTsize      number of data packets which must be acknowledged (using RT data)

ENQtime     time at which the next enquiry should be sent

 

The receiver procedure for handling a data packet was:

WHILE NextRSN ~= SN[datapacket] DO
    BEGIN
        RT[NextRSN] := 0;  /* future NAK */
        NextRSN := 32 residue (1 + NextRSN);  RTsize := 1 + RTsize;
    END;
/* future ACK follows */
RT[NextRSN] := 1;  NextRSN := 32 residue (1 + NextRSN);  RTsize := 1 + RTsize;
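
The same procedure in runnable form (a sketch; here RT is a plain 32-element list and the scalars live on a state object):

def note_data_packet(sn, state):
    while state.NextRSN != sn:               # packets before sn were lost
        state.RT[state.NextRSN] = 0          # future NAK
        state.NextRSN = (state.NextRSN + 1) % 32
        state.RTsize += 1
    state.RT[state.NextRSN] = 1              # future ACK for this packet
    state.NextRSN = (state.NextRSN + 1) % 32
    state.RTsize += 1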

 

When a data packet (or enquiry) was received, it was possible that the transmitter was busy and could not immediately transmit a link control packet containing acknowledgements. Generation of a link control packet was deferred until the transmitter was idle.

 

A slightly simplified version of the code to build part of a link control packet is shown.

FOR J := 0 STEP 1 UNTIL 3 DO buffer[J] := 0;
IF Enquiry_required THEN
    IF original_enquiry_protocol THEN
        BEGIN
            I := 32 residue (NextXSN - STsize);
            WHILE (I ~= NextXSN) AND (J < packetsizelimit) DO
                BEGIN
                    IF 0 ~= ST[I] THEN
                        BEGIN buffer[J+0] := ENQ;  buffer[J+1] := I;  J := J+2;  END;
                    I := 32 residue (1 + I);
                END;
        END
    ELSE  /* 1977 bulk enquiry protocol in use */
        BEGIN
            buffer[J] := BULKENQ;  buffer[J+1] := NextXSN;
            buffer[J+2] := 0;  buffer[J+3] := STsize;
            J := J+4;
        END;
Enquiry_required := 0;
WHILE (RTsize ~= 0) AND (J < packetsizelimit) DO
    BEGIN
        I := 32 residue (NextRSN - RTsize);
        buffer[J+0] := (ACK, NAK)[RT[I]];  /* 1st byte of phrase */
        buffer[J+1] := I;                  /* 2nd byte of phrase */
        J := J+2;
        RTsize := RTsize - 1;
    END;
buffer[J] := ETX;   /* final ETX */
buffer[-1] := J+3;  /* byte count for framing protocol */
/* with an STX and CRC the buffer is ready for transmission as a link control packet */

 

An important aspect of the code which moves ACK/NAK information from the receive table to the buffer is that phrases are entered in ascending sequence number order. The receiver of a link control packet will likewise process these ACK/NAK phrases in ascending order.

 

I cannot remember the algorithm used for processing the pre-1977 form of enquiry, with one ENQ per send table entry. I remember that it was an ugly tangle of GOTOs on the Alpha. 3705 I3L didn’t support GOTO and it became an ugly tangle of two recursive functions. The problems arose because the received enquiry might be stale. The transmitter may have sent the enquiry and then received acknowledgements for some of the older packets in the send table. Acknowledgements which fall outside the send table limits must be ignored. Also an enquiry might indicate that data packets have been lost and that NAKs must be entered in the receive table.

 

A tricky part of processing the acknowledgements was deciding if an enquiry was required.

 

/* ACK/NAK processing */

IF ST[buffer[J+1]] ~= 0 THEN
    BEGIN
        IF buffer[J] = ACK THEN FreeBuffer(ST[buffer[J+1]])
        ELSE Append_to_Retransmit_queue(ST[buffer[J+1]]);
        ST[buffer[J+1]] := 0;
        /* see if this ACK/NAK is for the oldest packet in the send table */
        IF buffer[J+1] = (32 residue NextXSN - STsize) THEN STsize := STsize - 1
        ELSE Enquiry_required := 1;
    END;
J := J+2;  /* point to next phrase */

 

/* bulk enquiry processing */

WHILE (32 residue NextRSN - 1) ~= buffer[J+1] DO
    BEGIN
        RT[NextRSN] := 0;  /* future NAK */
        NextRSN := 32 residue (NextRSN + 1);
    END;
RTsize := buffer[J+3];
J := J+4;  /* point to next phrase */

 

One problem in IPSANET, X.25 and other protocols which use an SN of fewer than 64 bits is how many unacknowledged packets the protocol can tolerate. For IPSANET the answer is obviously less than 32. The limit was given the name CMAX for reasons which are lost in history.

 

I think the answer is 16. The argument is:

Assume CMAX = 16;

assume the send table extends from SN0 to SN15 (NextXSN = 16, STsize = 16).

The transmit data stream contains an enquiry which requests acknowledgement of the SN0 packet. An acknowledgement for SN0 is assumed to be in the receive pipeline. The packet with SN16 cannot be transmitted until an acknowledgement of SN0 has been received.

 

R: ACK/NAK SN0 to SN15 /*this allows SN16 thru SN31 to be xmitted */

X: data SN16 to SN31

R: ACK/NAK SN0 to SN16

/*above is possible if a redundant enquiry was unserviced when SN16 was received */

/* can xmit SN0 now */

/* transmission of SN16 above inhibited any further enquires about old SN0 */

/* therefore transmission of SN0 is safe */

 

Assume CMAX = 17;

assume the send table extends from SN0 to SN16 (NextXSN = 17, STsize = 17).

The transmit data stream contains an enquiry which requests acknowledgement of the SN0 packet. An acknowledgement for SN0 is assumed to be in the receive pipeline. When acknowledgements for these 17 packets are received, packets with SN17 thru SN1 can be transmitted. The enquiry requesting acknowledgement of the old SN0 will be processed before the new SN0 packet is received. Under the 1977 rules this acknowledgement of the old SN0 will be accepted as an acknowledgement of the new SN0 packet. If both acknowledgements are ACK (or both are NAK) this is harmless. If the old and new acknowledgements differ, a data packet will be either duplicated or lost. This is highly undesirable.

 

Hindsight in 2005 suggests that some reluctance to accept out-of-order acknowledgements would allow a larger CMAX. The change would be that the initial acknowledgement of a packet, sent in response to the packet itself or one of its successors, would still be accepted out of sequence, but acknowledgements sent in response to an enquiry would have to be processed in sequence. This should be feasible without resorting to half-duplex operation as in X.25 error recovery. The specific changes would be:

 

/* revised link control build */
FOR J := 0 STEP 1 UNTIL 3 DO buffer[J] := 0;
IF Enquiry_required THEN
    IF original_enquiry_protocol THEN
        BEGIN
            I := 32 residue (NextXSN - STsize);
            WHILE (I ~= NextXSN) AND (J < packetsizelimit) DO
                BEGIN
                    IF 0 ~= ST[I] THEN
                        BEGIN buffer[J+0] := ENQ;  buffer[J+1] := I;  J := J+2;  END;
                    I := 32 residue (1 + I);
                END;
        END
    ELSE  /* 1977 bulk enquiry protocol in use */
        BEGIN
            buffer[J] := BULKENQ;  buffer[J+1] := NextXSN;
            buffer[J+2] := 0;  buffer[J+3] := STsize;
            J := J+4;
        END;
Enquiry_required := 0;
/* BEGIN 2005 CODE */
IF response_to_enquiry THEN
    BEGIN buffer[J+0] := ENQRESPONCE;  J := J+2;  END;
/* END 2005 CODE */
WHILE (RTsize ~= 0) AND (J < packetsizelimit) DO
    BEGIN
        I := 32 residue (NextRSN - RTsize);
        buffer[J+0] := (ACK, NAK)[RT[I]];  /* 1st byte of phrase */
        buffer[J+1] := I;                  /* 2nd byte of phrase */
        J := J+2;
        RTsize := RTsize - 1;
    END;
IF RTsize = 0 THEN response_to_enquiry := 0;
buffer[J] := ETX;   /* final ETX */
buffer[-1] := J+3;  /* byte count for framing protocol */

 

/* changes to processing received link control packet */

 

response_to_enquiry:=1; /*add to bulk enquiry processing */

 

/* processing of a received ENQRESPONCE phrase */
WHILE ((buffer[J+2] = ACK) OR (buffer[J+2] = NAK))
    AND (buffer[J+3] ~= 32 residue (NextXSN - STsize)) DO J := J+2;

 

This should allow the number of unacknowledged packets (CMAX) to be 31.

 

I can’t remember the details of how the xmit side’s Boolean Enquiry_required was set. Receipt of a valid acknowledgement which was not for the oldest SN was one method. (This was based on the assumption that the original ACK/NAK was lost due to transmission error.) There was also a timer which was important in light load situations. I can’t remember what controlled timer start and purge.

 

Link control acquired another duty after we installed our first Codex V.27 modem. We learned that responsibility for determining that a modem retrain was needed was assigned to the data terminal rather than to the modem, where it customarily resides. A modem retrain was requested by holding RTS (request to send) low for perhaps 100 milliseconds. At various points before the 15 second RDY2 timeout expired there were attempts to force a retrain. One was a local tweak of RTS. The other method was to send a RDY3 rather than a RDY2 to indicate to the adjacent node that we were not receiving RDY2s. The local tweak of RTS was also used on an uninitiated line.

Link control became sensitive to a low buffer condition. If the number of free buffers in a node became lower than a preset threshold, link control attempted to solve the problem by NAKing received data packets. Although this always avoided a buffer pool exhaustion restart of a node, there was a potential danger. If the node with low buffers continued to emit RDY2 packets, the low buffer condition could be exported to the rest of the network. Suppressing RDY2 emission while received data packets were being refused confined the problem to a small area. After 15 seconds of this behaviour an adjacent node would reset a high order line. This brutal action would usually clear the problem. Hindsight suggests that after ten seconds of rejecting received data packets from all lines, a search for the cause of the buffer depletion problem would have been productive. Resetting a link with a particularly long queue might have surgically excised the problem.

 

Data packets had forwarding priorities. There were three visible priorities: 1, 2 and 3. Assignment of forwarding priority is discussed in the Interend Protocols section. The very highest priority was for data packets being retransmitted after receipt of a NAK. Packets from the four priority classes were stored in four FIFO queues. A primitive algorithm located the first non-empty queue and transmitted its first packet.
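
A sketch of that scan, with queue 0 holding NAK retransmissions (the very highest priority) and queues 1 to 3 the visible priorities:

def next_packet_to_send(queues):
    for q in queues:         # first non-empty queue wins
        if q:
            return q.pop(0)  # FIFO within a priority class
    return None              # nothing to transmit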

 

This crude rule meant that a surfeit of high priority packets could lock out a low priority queue. I considered more complex rules for queue service. The intent was to guarantee a minimum bandwidth for a low priority call. The problem was that queues were organized by priority class rather than by individual virtual call. It was easy to allocate, say, 5% of bandwidth to priority 3 packets; but there could be more than one call at that priority, and so each call’s share would be reduced.