This patch contains a number of small changes to frottle 0.2.1
To install, apply the patch from inside the frottle directory:
patch -p1 < path-to-patch/frottle-0.2.1-mwPre0.1.diff
The changes were made to make configuring frottle easier in a
multi-interface environment.
1. New command line arguments
frottle --mode master|client --interface <interface> --port <port>
--mode master | client
This argument is used to select an alternative configuration file:
master /etc/frottle-master.conf
client /etc/frottle-client.conf
Mode parameters still need to be set in the configuration file.
--interface <interface>
This argument allows the interface specified in the configuration file
to be overridden.
--port <port>
This argument allows the Master port specified in the configuration file
to be overridden. Use this argument with care, as only the correct
combination of Master IP address and Master port will allow a client to
connect to a master instance.
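For example, the options above might be combined as follows (the interface
name and port number are placeholders):

```
# Run as a client on wlan0, overriding the configured master port
frottle --mode client --interface wlan0 --port 999
```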
2. Support for a renamed binary
If the frottle binary is launched under the name frottlemaster (whether
renamed, hard linked, or soft linked), it will default to --mode master.
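A sketch of the idea, using a throw-away stub in place of the real frottle
binary (the paths and the stub itself are illustrative only; frottle inspects
its invocation name the same way):

```shell
# Create a temporary directory with a stub standing in for frottle
BIN=$(mktemp -d)
printf '#!/bin/sh\necho "invoked as $(basename "$0")"\n' > "$BIN/frottle"
chmod +x "$BIN/frottle"

# A soft link named frottlemaster; a hard link or renamed copy works the same way
ln -s "$BIN/frottle" "$BIN/frottlemaster"

# Launched under the alternate name, the stub (like frottle) sees that name in argv[0]
"$BIN/frottlemaster"
```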
3. New configuration file options
#pidFile
#pidfile /var/run/frottle.pid
If this option is defined, the process ID will be written to the specified file.
No checking for multiple instances is done at this stage. If the file already
exists, the pid is appended to it. On shutdown the file is unlinked
(this will be a problem when multiple instances are running).
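A sketch of how the pid file might be consulted from a script, assuming a
single instance (a temporary file and the current shell's pid stand in for
/var/run/frottle.pid and the daemon):

```shell
# Stand-in for /var/run/frottle.pid; frottle writes its pid here on startup
PIDFILE=$(mktemp)
echo $$ > "$PIDFILE"

# kill -0 checks whether the recorded process is still alive without signalling it
if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    STATUS=running
else
    STATUS=stale
fi
echo "frottle pid file says: $STATUS"
rm -f "$PIDFILE"
```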
Pre- and post-scripts are supported for Master and Client configurations.
The specified script will be called with the following arguments:
scriptname interface masterport
#Master pre-script
#masterprescript /usr/local/frottle_master_pre.sh
#Client pre-script
#clientprescript /usr/local/frottle_client_pre.sh
#Master post-script
#masterpostscript /usr/local/frottle_master_post.sh
#Client post-script
#clientpostscript /usr/local/frottle_client_post.sh
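A minimal example of such a script (the path and the echo body are
illustrative; the two arguments are the interface and master port, as
described above):

```shell
# Hypothetical pre-script; frottle invokes it as: scriptname interface masterport
cat > /tmp/frottle_master_pre.sh <<'EOF'
#!/bin/sh
IFACE=$1       # e.g. wlan0
MASTERPORT=$2  # e.g. 999
echo "pre-script: interface=$IFACE masterport=$MASTERPORT"
EOF
chmod +x /tmp/frottle_master_pre.sh

# Simulate the call frottle would make at startup
/tmp/frottle_master_pre.sh wlan0 999
```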
#Client re-register timeout
#clientreregister 10
The client retry time was hard coded; it is now configurable. If the client
does not hear from the master within this time, it assumes it has been dropped
and will try to re-connect. The hard-coded default was 10, which may be a
little high.
#Client timeout
#clienttimeout 60
The client will time out if nothing is heard from the master within this
period, and will fall out of the frottle configuration. This was hard coded
and is now configurable.
#Client link speed
#clientlink 0
In some cases the IOCTL used to query the link speed fails on the client,
leaving the client at the fastest-link-speed default, which may not be
desirable. Setting clientlink to a non-zero value (the link rate) overrides
the internal default. Use this to reduce the TX window of clients on a slow
link and prevent them hogging the available time.
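For example, an 802.11b client stuck at the default could be pinned to its
real rate (the value 11 here is illustrative):

```
# Client link speed: 0 = use the rate reported by the driver
# Pin an 802.11b client to 11 so its TX window is sized for the slow link
clientlink 11
```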
4. Bug fixes
Bandwidth allocation was testing for link speed >=5, else ==2, else other.
This was changed to >=5, else >=2, else other.
Errors in the management of the client connection state meant that once a
connection had timed out, it would repeatedly flush the queued packets when
this was not necessary.
5. Known Issues
More work is needed on the thread join for multiple-interface support. This
only affects shutdown and seems to work OK.
The config file parsing logic has an error where it looks for blank lines.
This is cosmetic but should be fixed at some stage.
The link speed setting from the wireless interface divides the value returned
by the IOCTL call by 1,000,000; it may be that it should be divided by
10,000,000. As it stands, if the value returned is 54,000,000 the result will
be 54. In the code the time slice is allocated according to >=5, >=2, <2, so
all links will fall into the top allocation category. (This needs checking.)
6. Notes on the iptables QUEUE target
QUEUE is a special target, which queues the packet for userspace processing. For this to be useful, two further components are required:
- a "queue handler", which deals with the actual mechanics of passing packets between the kernel and userspace; and
- a userspace application to receive, possibly manipulate, and issue verdicts on packets.
The standard queue handler for IPv4 iptables is the ip_queue module, which is distributed with the kernel and marked as experimental.
The following is a quick example of how to use iptables to queue packets for userspace processing:
# modprobe iptable_filter
# modprobe ip_queue
# iptables -A OUTPUT -p icmp -j QUEUE
With this rule, locally generated outgoing ICMP packets (as created with, say, ping) are passed to the ip_queue module, which then attempts to deliver the packets to a userspace application. If no userspace application is waiting, the packets are dropped.
To write a userspace application, use the libipq API. This is distributed with iptables. Example code may be found in the testsuite tools (e.g. redirect.c) in CVS.
The status of ip_queue may be checked via:
/proc/net/ip_queue
The maximum length of the queue (i.e. the number of packets delivered to userspace with no verdict issued back) may be controlled via:
/proc/sys/net/ipv4/ip_queue_maxlen
The default value for the maximum queue length is 1024. Once this limit is reached, new packets will be dropped until the length of the queue falls below the limit again. Nice protocols such as TCP interpret dropped packets as congestion, and will hopefully back off when the queue fills up. However, it may take some experimenting to determine an ideal maximum queue length for a given situation if the default value is too small.
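For example, the limit can be inspected and raised through the proc interface
(requires root; 2048 is an arbitrary illustrative value):

```
# Check the current limit, then double it
cat /proc/sys/net/ipv4/ip_queue_maxlen
echo 2048 > /proc/sys/net/ipv4/ip_queue_maxlen
```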