Re: Extended sequence numbers for AX.25
- To: email@example.com
- Subject: Re: Extended sequence numbers for AX.25
- From: Phil Karn <firstname.lastname@example.org>
- Date: Wed, 15 Mar 1995 19:41:38 -0800
- Cc: email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, PG@tasma.han.de, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, Jarkko.Vuori@hut.fi, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org
- In-reply-to: <9503151239.AA09321@sys3.pe1chl.ampr.org> (email@example.com
- Reply-to: firstname.lastname@example.org
Thanks for your comments.
>I know about that article, but I have always questioned its validity
>when applied to real-world packet radio.
>In practice, larger packets have a larger chance to be hit by a bit error,
>and this means that with a certain (non-zero) bit error rate it is often
>better to send smaller packets.
My analysis did assume a non-zero bit error probability. Otherwise I would
have concluded that you'd always want to send the biggest packet possible.
My conclusions held for a very wide range of bit error rates. For any
particular value of BER, there was an optimum transmission size that
minimized the combined effects of header overhead and lost
packets. Below this optimum value, header overhead reduced
throughput. Above it, packet loss reduced throughput. Yet over a very
wide range of BER, throughput was *always* maximized when the
transmission consisted of one (properly sized) frame.
I did assume randomly distributed errors, though.
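The tradeoff is easy to sketch numerically. Here's a rough Python illustration of the model (the 20-byte header and the BER values are my illustrative assumptions, not figures from the analysis):

```python
# Throughput of a one-frame-per-transmission scheme under random,
# independent bit errors: the frame is lost if any bit is in error.
def throughput(payload, header=20, ber=1e-5):
    bits = 8 * (payload + header)
    p_ok = (1.0 - ber) ** bits               # whole frame survives
    return (payload / (payload + header)) * p_ok

# Scan payload sizes to find the optimum for a given BER.
def optimum(ber, header=20, sizes=range(16, 8192, 16)):
    return max(sizes, key=lambda n: throughput(n, header, ber))

for ber in (1e-4, 1e-5, 1e-6):
    n = optimum(ber)
    print(f"BER={ber:g}: optimum payload {n} bytes, "
          f"throughput {throughput(n, ber=ber):.3f}")
```

Below the optimum, the header fraction dominates; above it, the loss term does. As BER falls, the optimum size grows.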
Your network tests are very interesting. You should publish them. I
note that the usual rule of thumb for pure ARQ protocols (e.g., TCP)
is that the packet loss rate should not exceed 1% for good
performance. This says we need better modems and/or FEC under the ARQ
to bring the ARQ retransmission rate down below 1%. This would also support
much larger packet sizes, thereby reducing header overhead.
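The 1% rule of thumb can be motivated with a toy geometric-retry model (my own illustration; real channels have bursty losses, so treat it as a rough bound):

```python
# Expected number of transmissions per delivered frame under pure ARQ
# with independent per-frame loss probability `loss` (geometric retries).
def expected_transmissions(loss):
    return 1.0 / (1.0 - loss)

for loss in (0.01, 0.05, 0.20):
    print(f"loss={loss:.0%}: {expected_transmissions(loss):.2f} tx/frame")
```

At 1% loss the retransmission overhead is about 1%; at 20% loss you're spending a quarter more channel time per delivered frame, before even counting the timeouts.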
>- it is true that you discard perfectly good frames when using plain
> AX.25, but that is no worse than having to discard the single large
> frame that you propose.
Yes it is, since there is more header overhead with the smaller frames.
That's the sole reason for the better performance with larger frames.
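The arithmetic is simple enough to show directly (20 header bytes per frame is an assumed figure):

```python
# Header overhead: 1024 bytes of payload as eight 128-byte frames
# versus one 1024-byte frame, with H assumed header bytes per frame.
H = 20
payload = 1024

frames = payload // 128
overhead_small = frames * H / (payload + frames * H)  # eight frames
overhead_large = H / (payload + H)                    # one frame

print(f"eight small frames: {overhead_small:.1%} header")
print(f"one large frame:    {overhead_large:.1%} header")
```

Eight small frames carry eight copies of the header; one large frame carries one.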
>- it is not necessary to use complete go-back-N. It is possible to save
> the out-of-sequence frames and use them later, as shown before in some
> Austrian and German papers. Many European products do this.
> with modulo-128, this becomes much easier, as indicated in my paper.
Is this really true? TCP does this, but without checking the LAPB rules
I wouldn't be surprised if this causes problems.
>- the CRC check becomes weaker with large frames
Also true, but if you have TCP/IP on top...
>- the loss of a single large frame is more catastrophic than one of a
> series of small frames, especially when it was not the first in the
Again true, but this has to be balanced against the increased header
overhead. All this was taken into account in my analysis.
>- the payload data is basically datagram based, and arrives in realtime.
> you would need some timer to collect small frames and send the
> partial frame when it runs out before the frame is sufficiently long.
> that increases latency.
> with modulo-128 AX.25 the datagrams can be queued as they arrive.
Not really. Just limit your window to 1 outstanding frame. When the first
IP datagram is queued, send it immediately. If more datagrams arrive before
the first is acked, they queue up. When you do get the ack, send everything
in the queue up to your packet size limit.
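In rough Python terms, the sender logic I'm describing looks like this (names and the byte-count frame limit are mine; fragmentation of oversized datagrams is not handled in this sketch):

```python
from collections import deque

class StopAndWaitSender:
    """Window of one outstanding frame; datagrams arriving while it is
    unacked are coalesced into the next frame, up to max_frame bytes."""

    def __init__(self, transmit, max_frame=1024):
        self.transmit = transmit      # callback taking a list of datagrams
        self.max_frame = max_frame
        self.queue = deque()
        self.outstanding = False

    def enqueue(self, datagram):
        self.queue.append(datagram)
        if not self.outstanding:
            self._send()              # first datagram goes out immediately

    def on_ack(self):
        self.outstanding = False
        if self.queue:
            self._send()              # flush everything queued meanwhile

    def _send(self):
        frame, size = [], 0
        while self.queue and size + len(self.queue[0]) <= self.max_frame:
            d = self.queue.popleft()
            frame.append(d)
            size += len(d)
        if frame:
            self.transmit(frame)
            self.outstanding = True
```

So the first datagram never waits on a timer; only datagrams that would have had to wait anyway (behind an unacked frame) get batched, and latency is not increased.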
>- there are many existing implementations that don't have a flexible
> buffer scheme as NET/NOS has, and would not easily implement a scheme
> with very large packets.
>- some commercial TNC's cannot receive frames above the "256-byte packet
> via 8 digipeaters" worst-case AX.25 length, even when operated in KISS
> mode. apparently they use fixed-size buffers.
True. But if we're never willing to change, nothing will ever improve!
>- collisions between the link ends (the links are halfduplex, both sides
> can decide to transmit at the same time)
This is another thing that a stop-and-wait protocol helps you
with. LAPB was designed for a full duplex environment, but our
channels are half duplex. When you send a packet and unkey, you
should at least wait for the expected link level ack before you key up
with another packet. This happens automatically if you set maxframe=1.
>The FEC scheme to be implemented needs to be well thought out to cope with
>the kind of errors that really occur.
Very true. As I said earlier, my original motivation for looking at it
was radar QRM here on 70cm in San Diego. FEC is a natural for this, but
it's also helpful with marginal links.
>On local access channels, most errors are caused by collisions, but the
>factor of bit-errors because of insufficient S/N also comes into play.
>A better channel access algorithm can probably do more than FEC, but that
>is just a guess based on a lot of tracing on packet channels.
True, which is why the protocol I've been designing includes my MACA
(Multiple Access with Collision Avoidance) along with hybrid Type II ARQ.
PS. Do we need the long CC: list on this message? I suspect that many are
already on tcp-group...