Re: Extended sequence numbers for AX.25
- To: firstname.lastname@example.org (Phil Karn)
- Subject: Re: Extended sequence numbers for AX.25
- From: email@example.com (Rob Janssen)
- Date: Wed, 15 Mar 1995 13:39:30 +0100 (MET)
- Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, PG@tasma.han.de, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, Jarkko.Vuori@hut.fi, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org
- In-reply-to: <199503140716.XAA24824@unix.ka9q.ampr.org> from "Phil Karn" at Mar
- Reply-to: email@example.com
According to Phil Karn:
> Thanks for the note, and thanks for writing up your work.
Long time no hear :-)
I will try to write up more of the things I did in NET over the
past couple of years, and the results that came out of the experiments
(like the link quality measurement described below).
Almost the whole Dutch network now runs NET on PCs with SCC cards,
and this provides a convenient testbed for experiments (usually no
compatibility problems with neighboring systems).
> Some years ago I did an analysis that concluded that the optimum
> window size in AX.25 when used on terrestrial half duplex paths was 1
> packet *provided* that sufficiently large packets could be sent when
> the channel was "clean".
> I found it was always more efficient to deal with the overhead due to
> channel turnaround time by sending a single sufficiently large packet
> than to send a series of smaller packets in each transmission.
I know about that article, but I have always questioned its validity
when applied to real-world packet radio.
In practice, larger packets have a greater chance of being hit by a bit
error, and this means that with a certain (non-zero) bit error rate it
is often better to send smaller packets.
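The tradeoff can be illustrated with a quick computation. This is only a sketch: the BER value is an arbitrary example, not a measurement from our links, and it assumes independent bit errors, which real channels often violate.

```python
# Illustrative only: probability that a frame survives a channel with
# independent bit errors, and the resulting useful payload delivered
# per transmitted frame.  BER and header size are example assumptions.

def frame_success(ber, length_bytes):
    """P(no bit error in the frame), assuming independent bit errors."""
    return (1.0 - ber) ** (8 * length_bytes)

HEADER = 16  # assumed per-frame overhead in bytes, for illustration

for length in (128, 256, 1024, 2048):
    p = frame_success(1e-4, length)
    # expected useful payload per transmitted frame, relative to length
    goodput = p * (length - HEADER) / length
    print(f"{length:5d} bytes: success {p:.2f}, relative goodput {goodput:.2f}")
```

Under these (idealized) assumptions the goodput curve has a maximum: very short frames waste capacity on headers, very long frames are lost too often.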
I have been running an experiment with link quality measurement in the
Dutch network over the past 1.5 years. This is based on sending UI frames
with a sequence number at regular intervals. The frames are padded to
256 bytes using a 00 00 00 00 00 00 FF FF FF FF FF FF sequence, which
is quite a good test for modems that don't use scramblers.
The results are kept as a bitmap of received/missed frames at the
receiving end, and can be displayed as packet success rate over a few
different intervals (1 hour, 3 hours, 6 hours etc).
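The receive-side bookkeeping amounts to little more than the following sketch. The class and method names are my own, and the 8-bit sequence number is an assumption for illustration, not the actual NET code.

```python
# Sketch of the receiver side of the link-quality experiment: record
# each test beacon in a received/missed bitmap and report the success
# rate over a window.  Names and the 8-bit sequence field are assumed.

class LinkQuality:
    MOD = 256  # assumed size of the beacon sequence-number space

    def __init__(self):
        self.bits = []        # True = received, False = missed
        self.last_seq = None

    def beacon(self, seq):
        """Called for every UI test frame that arrives."""
        if self.last_seq is not None:
            # every skipped sequence number is a missed frame
            missed = (seq - self.last_seq - 1) % self.MOD
            self.bits.extend([False] * missed)
        self.bits.append(True)
        self.last_seq = seq

    def success_rate(self, last_n=None):
        window = self.bits[-last_n:] if last_n else self.bits
        return sum(window) / len(window)

lq = LinkQuality()
for seq in (0, 1, 2, 4, 5):   # beacon 3 was lost on the link
    lq.beacon(seq)
print(f"success rate: {lq.success_rate():.3f}")   # 5 of 6 -> 0.833
```

Keeping the raw bitmap rather than just a counter is what allows the rate to be displayed over several different intervals afterwards.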
Our network is almost completely based on HAPN 4800 bps modem links
running on 23cm with Kenwood TM-531 and homemade Interlink-TRX I
transceivers, mostly running 1 W into a yagi. The average link is about
60 km between moderate-to-high buildings over a flat countryside.
The experiment was set up because the feeling was that many links were
asymmetrical in that the quality in one direction was much better than
the other. The classical methods (like using PING) only evaluated the
aggregate performance in both directions.
My experience is that the success rate of the above packets is usually
in the 90-97% range. Very good links can achieve 100% over short intervals,
but never for more than a few hours. Not-so-perfect setups achieve
rates in the 80-90% range.
I have a feeling that this performance is very much related to the frame
length and also, to a lesser extent, the contents. Often when the
indicated quality is very low, the NET/ROM link still works well.
Probably I should add shorter frames to the test as well, to see how
well those get through. That should provide some indication of how much of
the loss is single-bit-error induced, and how much is caused by events
like collisions (between the link ends) and interference bursts, which
would not increase much with packet length.
Guessing from the data accumulated so far, I would think that a 2-Kbyte
packet length would result in an unacceptable loss rate.
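A back-of-the-envelope extrapolation supports that guess, if one (questionably) assumes independent bit errors; burst errors and collisions would make the real picture less length-dependent.

```python
# Take the observed ~93% success rate of the 256-byte test frames,
# assume independent bit errors, and extrapolate to a 2-Kbyte frame.
# The independence assumption is optimistic, so treat this as a rough
# lower bound on how bad long frames could get.

p_256 = 0.93                              # observed success, 256-byte frames
bits_256 = 8 * 256
ber = 1.0 - p_256 ** (1.0 / bits_256)     # implied bit error rate

p_2k = (1.0 - ber) ** (8 * 2048)          # same BER, 2048-byte frame
print(f"implied BER:           {ber:.2e}")
print(f"2-Kbyte frame success: {p_2k:.2f}")   # roughly 0.56
```

An eightfold longer frame raises the success exponent eightfold, so a tolerable 7% loss at 256 bytes turns into roughly 44% loss at 2 Kbytes.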
> This makes intuitive sense, since the go-back-N retransmission scheme
> of plain AX.25/LAPB causes many perfectly good frames to be discarded.
> Furthermore, there is considerably less link level overhead in a single
> large AX.25 frame than in a collection of small frames.
However, note that:
- it is true that you discard perfectly good frames when using plain
AX.25, but that is no worse than having to discard the single large
frame that you propose.
- it is not necessary to use complete go-back-N. It is possible to save
the out-of-sequence frames and use them later, as shown before in some
Austrian and German papers. Many European products do this.
With modulo-128, this becomes much easier, as indicated in my paper.
- the CRC check becomes weaker with large frames (the 16-bit FCS has a
greater chance of passing a corrupted long frame undetected)
- the loss of a single large frame is more catastrophic than the loss of
one of a series of small frames, especially when the lost frame was not
the first in the window
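The save-and-reuse idea can be sketched at the receive side as follows. This is a minimal illustration, not the scheme from the Austrian and German papers; window-size limits and the actual REJ/SREJ signalling are omitted.

```python
# Minimal receive-side sketch of saving out-of-sequence I-frames and
# delivering them once the gap is filled, instead of go-back-N's
# "discard everything after the hole".  Names are my own; real
# implementations also bound the buffer by the window size.

MOD = 128   # modulo-128 sequence numbers

class Receiver:
    def __init__(self):
        self.vr = 0          # V(R): next expected sequence number
        self.saved = {}      # out-of-sequence frames, keyed by N(S)
        self.delivered = []

    def i_frame(self, ns, data):
        if ns == self.vr:
            self.delivered.append(data)
            self.vr = (self.vr + 1) % MOD
            # a hole may just have been filled: flush saved frames
            while self.vr in self.saved:
                self.delivered.append(self.saved.pop(self.vr))
                self.vr = (self.vr + 1) % MOD
        else:
            self.saved[ns] = data   # would also trigger a reject here

rx = Receiver()
for ns, data in [(0, "a"), (2, "c"), (3, "d"), (1, "b")]:  # frame 1 late
    rx.i_frame(ns, data)
print(rx.delivered)   # ['a', 'b', 'c', 'd']
```

Only the missing frame needs to be retransmitted; frames 2 and 3 are used from the buffer instead of being sent again.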
> If the problem is that your packets are too small, then perhaps what
> we really need is a new encapsulation scheme that lets you send
> multiple IP datagrams (or whatever) in a single AX.25 I-frame. This
> could be done rather easily since AX.25 in the connected mode is a
> "reliable" protocol; something as simple as protocol-length-contents
> coding would work fine.
Yes, it could be done. However I think there are some disadvantages to
that as well:
- the payload data is basically datagram based, and arrives in real time.
You would need some timer to collect small frames, and send the
partial frame when the timer runs out before the frame is sufficiently long.
That increases latency.
With modulo-128 AX.25, the datagrams can be queued as they arrive.
- there are many existing implementations that don't have a flexible
buffer scheme as NET/NOS has, and would not easily implement a scheme
with very large packets.
- some commercial TNCs cannot receive frames above the "256-byte packet
via 8 digipeaters" worst-case AX.25 length, even when operated in KISS
mode. Apparently they use fixed-size buffers.
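The protocol-length-contents coding itself would be trivial. A possible sketch, where the one-byte protocol ID and two-byte length are assumed field sizes for illustration, not a published format:

```python
# Sketch of a "protocol-length-contents" coding for packing several
# datagrams into one I-frame.  Field sizes (one-byte protocol ID,
# two-byte big-endian length) are my assumptions, not a standard.
import struct

def pack(datagrams):
    """datagrams: list of (protocol_id, payload_bytes) tuples."""
    frame = b""
    for pid, payload in datagrams:
        frame += struct.pack(">BH", pid, len(payload)) + payload
    return frame

def unpack(frame):
    out = []
    while frame:
        pid, length = struct.unpack(">BH", frame[:3])
        out.append((pid, frame[3:3 + length]))
        frame = frame[3 + length:]
    return out

# two IP datagrams (0xCC is the AX.25 PID for IP) in one frame body
frame = pack([(0xCC, b"ip datagram 1"), (0xCC, b"ip datagram 2")])
assert unpack(frame) == [(0xCC, b"ip datagram 1"), (0xCC, b"ip datagram 2")]
```

Because connected-mode AX.25 is "reliable", the receiver can trust the length fields to walk the frame; the scheme only falls apart on the buffer-size limits mentioned above.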
Currently the modulo-128 scheme is used mostly on interlinks in our network,
and for direct access to a BBS by a fast user. In these cases, there is
only a 14-byte header to each packet and the overhead is not that much,
when compared to other headers (NET/ROM network and transport, TCP/IP).
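For comparison, the relative overhead of that 14-byte header for a few payload sizes; simple arithmetic on the figure from the text.

```python
# Relative per-packet overhead of a 14-byte link header (figure from
# the text) for a few payload sizes.  Purely illustrative arithmetic.
HEADER = 14

for payload in (64, 256, 1024):
    total = HEADER + payload
    print(f"{payload:5d}-byte payload: {HEADER / total:6.1%} overhead")
```

At typical interlink packet sizes the link header is already small next to the NET/ROM and TCP/IP headers riding inside it.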
> Yes, selective reject is better than go-back-N. But if you're having
> to retransmit so many frames that it makes a major difference, then
> perhaps it would be even better to do something below the ARQ layer to
> improve its performance. Better modems, better links and FEC, for
> example. Every frame that an ARQ protocol throws away is wasted energy
> that an FEC scheme could have used more efficiently.
Yes, that would certainly be an area where improvements can be made.
However, before a suitable system can be designed, there still has to be
a lot of research to find out what exactly causes the loss of frames on a
link. Many links operate well above the noise, and plausible error causes are:
- interference by other band users (RADAR etc)
- clicks introduced by other local transmitters being keyed
(a real problem with the synthesized TM-531)
- collisions between the link ends (the links are half duplex, and both
sides can decide to transmit at the same time)
The FEC scheme to be implemented needs to be well thought out to cope with
the kind of errors that really occur.
On local access channels, most errors are caused by collisions, but bit
errors due to insufficient S/N also come into play.
A better channel access algorithm can probably do more than FEC, but that
is just a guess based on a lot of tracing on packet channels.
| Rob Janssen firstname.lastname@example.org | AMPRnet: email@example.com |
| e-mail: firstname.lastname@example.org | AX.25 BBS: PE1CHL@PI8UTR.#UTR.NLD.EU |