We assume here that you have a network running with connectivity to the Internet. The network can be set up in two ways: (a) Bering as a router/gateway between the network and the Internet, or (b) Bering as a bandwidth manager running in bridged mode, which avoids reconfiguring the network. The first step is therefore to set up Bering as a router or as a bridge, as required. By default Bering is configured as a router, so the standard Bering setup is sufficient. If Bering needs to be set up as a bridge, the bridging setup is described in another chapter of this user's guide (Configuring Bering as a bridge). The second step is described here.
I'm just documenting the use of qos-htb; the package was created by Jaime and Juan, the creators of the Lince branch. I had been using tc on a stock Linux machine and decided to implement it on LEAF. I hunted around for an htb.init LRP package, found one on a site in Korea, and could not get it to work as I did not know the differences between bash and ash. During my mailing list interactions I got talking to Jaime and Juan, who had packaged htb.init into a workable lrp. It was sent to me for review and we got the configuration down from multiple files to a single file. Juan was very open to suggestions and has incorporated what I suggested.
The HTB.init LEAF package (qos-htb.lrp) can be downloaded from here. It has been packaged by Juan J. Prieto <jjprieto at eneotecnologia.com>.

Comments on this section should be addressed to its maintainer: S Mohan <smohan at vsnl.com>.
Before we go on to the configuration, a few concepts on bandwidth management need to be understood in order to design the bandwidth allocation methodology. Here goes.
Traffic shaping works at the device queue level just prior to the packet getting pumped out of the interface.
Packets are queued, and an iproute2-patched kernel provides a method of building multiple virtual queues for every physical device.
Queues do not exist for virtual devices like eth0:n. (Am I wrong here?)
Each queue has a queue discipline which decides how the packet scheduling takes place.
In a router scenario, many users have often asked how bandwidth could be allocated between incoming and outgoing traffic.
Since scheduling can happen only for outgoing traffic, incoming traffic can only be "policed", i.e. rate-limited on arrival; rules based on source or destination cannot be applied to it. The ingress device is used to police incoming packet rates.
In a router scenario, if bandwidth is shaped on the Internet interface, it shapes outgoing traffic. Similarly, if bandwidth is shaped on the internal LAN interface, it simulates bandwidth management of incoming traffic on the Internet interface. Obviously, this does not cover traffic to the router itself or to the DMZ interface; bandwidth shaping on the DMZ interface takes care of that segment. By and large, this meets all the requirements of shaping incoming and outgoing traffic.
In the scenario above, the bandwidth required by a node/application is distinctly split into incoming and outgoing; since the two are shaped on different interfaces, bandwidth cannot be borrowed between them. In many cases, such as ISPs, the bandwidth allocation is for incoming and outgoing combined. For such situations, stock Linux has a virtual device called IMQ through which all traffic can be passed; shaping on IMQ therefore shapes total traffic rather than incoming and outgoing separately. IMQ is currently not available in LEAF.
iproute2 allows creation of a hierarchy of nodes, and traffic can enter the queue at any node. This redirection of flow is done through filters. The uppermost node is normally called the root (the name is only mnemonic; tc works using numeric handles, so any name can be chosen). The maximum bandwidth for the root on an interface is set to 96-99% of the actual link speed. This ensures that the queue builds up and is managed on the LEAF system, and not on the router or modem that handles the physical link.
The last node in the hierarchy is normally referred to as a leaf. The intermediate nodes and leaf nodes are created using the tc class command, and qdiscs are attached to them. The behaviour of the qdiscs is documented well at http://www.docum.org.
A class has two queue disciplines (schedulers) attached to it: one schedules packets while the class is within its specified limits, the other arbitrates spare bandwidth. For example, htb is the packet scheduler within limits and sfq is the scheduler for distributing spare bandwidth.
Packets are redirected to classes by filters. Modules used for filtering are called classifiers e.g. u32, fwmark etc. qos-htb uses the u32 classifier to classify traffic.
Classes have priorities and so do filter rules. Class priorities define bandwidth sharing priorities while filter priorities define order in which filters will be applied to classify packets and redirect traffic to classes.
For bandwidth management to work well, it is recommended that the sum of the rates of leafs/nodes equals the rate of the parent node.
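As an illustrative arithmetic check (the figures below are assumptions, not package defaults): with a parent node at 512Kbit, the leaf rates should add back up to the parent's rate.

```shell
# Hypothetical split of a 512Kbit parent across four leaf classes;
# the leaf rates should sum to the parent's rate.
parent=512
total=0
for rate in 128 192 96 96; do
    total=$((total + rate))
done
echo "parent=${parent}Kbit leaves=${total}Kbit"
```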
Qos-htb uses a queuing discipline called htb written by Martin Devera (http://luxik.cdi.cz/~devik/qos email: [email protected] ). Qos-htb is a tc script generator: it generates tc scripts (just as Shorewall generates iptables scripts) according to the higher-level definitions in the qos-htb configuration file.
Boot a Bering floppy image. Once the LEAF menu appears get access to the linux shell by (q)uitting the menu. Edit the syslinux.cfg file.
Download tc.lrp and qos-htb.lrp.
Your syslinux.cfg file could look like (adjust to your tastes):
display syslinux.dpy
timeout 0
default linux initrd=initrd.lrp init=/linuxrc root=/dev/ram0 boot=/dev/fd0u1680:msdos PKGPATH=/dev/fd0u1680 LRP=root,etc,local,modules,ppp,keyboard,bridge,dnscache,weblet,qos-htb,tc
The last two lines ("default linux ... qos-htb,tc") must be typed as a single line in syslinux.cfg.
In order to have the bandwidth management working, you need to have classifier, qdisc, virtual device support enabled through the appropriate kernel modules. You also need to declare the modules in /etc/modules. The modules can be obtained from http://leaf.sourceforge.net/devel/jnilo/bering/1.0-stable/modules/2.4.18/kernel/net/sched/ .
Copy these modules to /lib/modules. To configure your modules, go to the LEAF Packages configuration menu and choose modules. Enter 1) to edit the /etc/modules file and enter the following information:
#########################################################
# tc support files for scheduling and class definitions
#########################################################
cls_fw
cls_route
cls_rsvp
cls_rsvp6
cls_tcindex
cls_u32
sch_cbq
sch_csz
sch_dsmark
sch_gred
sch_htb
sch_ingress
sch_prio
sch_red
sch_sfq
sch_tbf
#sch_teql
These should load properly as they do not have any dependencies.
At the shell prompt, load all the modules listed above using insmod. Then load the tc and qos-htb packages by giving:
# lrpkg -i tc.lrp
# lrpkg -i qos-htb.lrp
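The insmod step can be scripted. The sketch below is a dry run that only prints the commands, so it can be checked first; remove the echo to actually load the modules (requires root), and adjust the /lib/modules path if your modules live elsewhere.

```shell
# Print (dry run) an insmod command for each scheduler/classifier module.
MODULES="cls_fw cls_route cls_rsvp cls_rsvp6 cls_tcindex cls_u32 \
sch_cbq sch_csz sch_dsmark sch_gred sch_htb sch_ingress \
sch_prio sch_red sch_sfq sch_tbf"
for m in $MODULES; do
    echo insmod "/lib/modules/$m.o"
done
```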
The tc package is the utility to create/configure the queues, classes, filters etc. Test if tc works properly as under:
# tc qdisc add dev eth0 root
# tc -s qdisc
should list the qdisc created. If all this works as listed, then the basic modules and utilities are working fine.
Invoke lrcfg -> packages -> qos-htb. The configuration is a single file that contains the declarations for all classes, interfaces etc. The script has definitions for all three interfaces: NET for the Internet, DMZ for the DMZ and LOC for the local network. The standard value for NET is eth0. If your interface to the Internet is a PPP interface, the appropriate change needs to be made here; e.g. eth0 is the interface to the Internet where traffic is shaped in this script, so if your Internet connectivity is through ppp0, replace eth0 with ppp0 in the variable assignment. These variables have to be used appropriately in the qdisc definitions: /sbin/cat > /etc/sysconfig/htb/$NET-2: <<-EOF will create the eth0-2: file for processing by the script.
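A sketch of what the interface variable assignments might look like (the variable names come from the package; the device assignments here are assumptions for a typical three-interface Bering box):

```shell
NET=eth0   # Internet interface - shapes outgoing traffic; use ppp0 for a PPP link
LOC=eth1   # LAN interface - shapes traffic heading in from the Internet
DMZ=eth2   # DMZ interface
```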
Please note that we use the same variable names NET, DMZ and LOC as Shorewall, for consistency. Declaring these in Shorewall does not mean the same values will be taken here: the two are independent, and it is up to the user to keep the interface declarations matched.
The default configuration has four classes per interface (10, 20, 30 and 40), and rules can be built to classify traffic into them.
The classes have been built to cater as under:
Class 10 is for minimum latency typically for interactive traffic.
Class 20 is for maximum throughput typically ftp traffic.
Class 30 is for http traffic.
Class 40 is for other traffic.
For each class creation, the section starts as
/sbin/cat > /etc/sysconfig/htb/$NET-2: <<-EOF ... EOF
Caution: The cat and the EOF are very important lines and should not be deleted.
The declarations that can be used between these two statements in a block are as follows (relevant parts reproduced from the inline documentation in htb.init):
### HTB class parameters
RATE=5Mbit
CEIL=6Mbit
BURST=10Kb
PRIO=5

### Filtering parameters
# RULE=[[saddr[/prefix]][:port[/mask]],][daddr[/prefix]][:port[/mask]]
#
# RULE=10.1.1.0/24:80
#     selects traffic going to port 80 in network 10.1.1.0
#
# RULE=10.2.2.5
#     selects traffic going to any port on single host 10.2.2.5
#
# RULE=10.2.2.5:20/0xfffe
#     selects traffic going to ports 20 and 21 on host 10.2.2.5
#
# RULE=:25,10.2.2.128/26:5000
#     selects traffic going from anywhere on port 25 to
#     port 5000 in network 10.2.2.128
#
# RULE=10.5.5.5:80,
#     selects traffic going from port 80 of single host 10.5.5.5
#
# MARK=<mark>
#     These parameters make up "fw" filter rules that select
#     traffic for each of the classes according to firewall
#     "mark". Mark is a decimal number packets are tagged with if
#     firewall rules say so. You can use multiple MARK fields
#     per config.
Each class can have multiple RULEs to classify traffic. E.g. $NET-2:20 (max throughput) can have a rule each for destination ports 21, 100 and 200. This means all traffic out to these ports will go through this class.
If no CEIL is specified, RATE is taken as the CEIL, and this class cannot borrow spare bandwidth from peer classes even when they are not using their allocation.
If PRIO is not specified, default PRIO is 3 and all filter rules carry the same PRIO.
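Putting the parameters together, a class block for $NET-2:20 (the maximum-throughput class) might look like the following. The rates, priority and ports are illustrative assumptions, not package defaults:

```shell
# Hypothetical class file for the max-throughput class on the NET interface.
# Traffic to ftp (ports 20 and 21) goes here; it is guaranteed 128Kbit
# and may borrow up to 512Kbit when spare bandwidth is available.
/sbin/cat > /etc/sysconfig/htb/$NET-2:20 <<-EOF
RATE=128Kbit
CEIL=512Kbit
PRIO=2
RULE=:20/0xfffe
EOF
```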
The default LEAF class qdisc is SFQ. With it, spare bandwidth at a level is shared among the nodes at that level in the ratio of their rates. E.g. if we have three nodes at a level with 50%, 25% and 25% allocations, and the last one has 15kb spare, that spare bandwidth is distributed as an additional 10kb and 5kb to the first and second nodes respectively.
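The arithmetic above can be checked with a few lines of shell (the 50/25 split and 15kb spare are the figures from the example):

```shell
# Spare bandwidth is shared in the ratio of the remaining nodes' rates.
spare=15            # kb spare at the third node
r1=50; r2=25        # allocations (in %) of the first two nodes
share1=$(( spare * r1 / (r1 + r2) ))   # 15 * 50/75 = 10
share2=$(( spare * r2 / (r1 + r2) ))   # 15 * 25/75 = 5
echo "node1 gets ${share1}kb, node2 gets ${share2}kb"
```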
The MARK filter uses packet marks; this is the tc support Shorewall provides in its configuration. It is useful when SNAT is in use, since the source IP is replaced. Thus packets are marked per source IP (or any other rule) in Shorewall prior to NAT, and that MARK can be used here for traffic control. This feature can also be used to classify traffic according to TOS bit settings/marks.
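As an illustration (the mark value, host address and rule layout are assumptions): Shorewall can mark packets from a given LAN host before SNAT rewrites the source, and the class file then selects on that mark instead of on the rewritten address.

```shell
# In /etc/shorewall/tcrules (hypothetical entry) - tag packets
# from 192.168.1.5 with mark 1 before NAT rewrites the source:
#   1    192.168.1.5    0.0.0.0/0    all
#
# In the corresponding /etc/sysconfig/htb class file, match that mark:
MARK=1
```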
In addition to the above declarations, qos-htb also supports time based declarations where the rate allocation can be changed depending on the time.
### Time ranging parameters
# TIME=[<dow><dow>.../]<from>-<till>;<rate>[/<burst>][,<ceil>[/<cburst>]]
# TIME=60123/18:00-06:00;256Kbit/10Kb,384Kbit
# TIME=18:00-06:00;256Kbit
#
# This parameter allows you to change class bandwidth during
# the day or week. You can use multiple TIME rules. If there
# are several rules with overlapping time periods, the last
# match is taken. The <rate>, <burst>, <ceil> and <cburst>
# fields correspond to parameters RATE, BURST, CEIL and CBURST.
#
# <dow> is a single digit in range 0-6 and represents day of
# week as returned by date(1). To specify several days, just
# concatenate the digits together.
For the above declaration to take effect, the following entry needs to be made in the cron table to run every hour.
0 * * * * /sbin/htb.init
This runs the script every hour and redefines rules depending on the declaration.
The node hierarchy is built automatically from the file names (the ones in the /sbin/cat > ... statements).
eth0-2:---------eth0-2:10--------eth0-2:10:100
        |
        --------eth0-2:20
        |
        --------eth0-2:30
        |
        --------eth0-2:40
eth0-2: is the parent for eth0-2:10 which is the parent for eth0-2:10:100
The complete configuration can be saved by giving lrcfg -> b -> l -> q -> q
For the sake of ease of use, I have defined a command alias in my profile to save the configuration as follows:
alias savecfg='echo -e "b\nc\nl\nq\nq\n" | lrcfg'
This simulates keyboard input of lrcfg -> b -> c -> l -> q -> q
Reboot.
The following commands can be used to check if the configuration works fine:
# tc -s qd                     displays all qdiscs
# tc -s class                  displays all classes
# tc -s fi                     displays all filters
# tc qdisc del dev eth0 root   clears all traffic control on eth0
If you do not want Shorewall to destroy your nice tc orders generated through htb.init, you need to make some Shorewall configuration changes. From Shorewall 1.3.13 onwards you just need to edit /etc/shorewall/shorewall.conf and set:
TC_ENABLED=Yes
CLEAR_TC=No
Prior to that release, you need to restart htb.init every time after modifying Shorewall (there is an option in Shorewall's admin menus just for this).