network_traffic_control

Package Summary

A ROS node that allows control of network emulation parameters such as bandwidth, loss and latency for a Linux network interface. Traffic control is separate for each direction: egress and ingress.

ROS API

traffic_control_node.py

traffic_control_node.py provides control of the network emulation parameters of a network interface. The node is controlled via a dynamic_reconfigure interface.

Parameters

Node parameters
These are the startup parameters of the node.

  • ~interface (string): the network interface on which traffic control is performed
  • ~interface_ifb (string, default: ifb0): the ifb interface used for ingress traffic control
  • ~filter_egress (string, default: u32 match u32 0 0 (i.e. match all packets)): tc filter selecting which egress packets are subject to traffic control
  • ~filter_ingress (string, default: u32 match u32 0 0 (i.e. match all packets)): tc filter selecting which ingress packets are subject to traffic control
Dynamically Reconfigurable Parameters
See the dynamic_reconfigure package for details on dynamically reconfigurable parameters.

  • ~bandwidth_egress (double, default: 0.0): egress bandwidth limit in bit/s (0.0 means no bandwidth control)
  • ~latency_egress (double, default: 0.0): egress latency in seconds (0.0 means no latency emulation)
  • ~loss_egress (double, default: 0.0): egress packet loss in percent
  • ~bandwidth_ingress (double, default: 0.0): ingress bandwidth limit in bit/s
  • ~latency_ingress (double, default: 0.0): ingress latency in seconds
  • ~loss_ingress (double, default: 0.0): ingress packet loss in percent
  • ~packet_size (int, default: 1500): packet size in bytes used to dimension the tbf and netem queues
  • ~status (string, default: OK): status of the last reconfiguration operation
  • ~errmsg (string, default: ""): error message if the last reconfiguration failed (empty otherwise)
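As a quick illustration (not part of the package), the parameters can be changed at runtime from Python with the standard dynamic_reconfigure client. The node name "traffic_control_node" used below is an assumption and depends on how the node was launched:

import rospy
import dynamic_reconfigure.client

rospy.init_node("traffic_control_configurator")
# The node name is an assumption; use the actual name of your traffic_control_node.py instance
client = dynamic_reconfigure.client.Client("traffic_control_node", timeout=10)
# Limit egress to 1 Mbit/s with 40 ms latency and 10% loss
client.update_configuration({
    "bandwidth_egress": 1000000.0,
    "latency_egress": 0.04,
    "loss_egress": 10.0,
})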

How it works

The tbf qdisc, used for bandwidth control, and the netem qdisc, used for latency and loss emulation, interact in ways that affect the resulting link emulation metrics. This section describes the expected network emulation result depending on the usage scenario.

A few definitions:

  • bandwidth_limit: the configured bandwidth limit, i.e. the emulated link capacity
  • latency, loss: the configured latency and loss emulation parameters
  • packet_size: the configured packet size (in bytes)
  • tx_bandwidth: the bitrate at which traffic is actually sent over the link
  • tx_bandwidth_loss_adjusted: tx_bandwidth reduced by the configured loss, i.e. tx_bandwidth * (100% - loss)
  • packet_send_time_at_capacity: the time needed to send one packet of packet_size bytes at bandwidth_limit
  • measured_bandwidth, measured_latency, measured_loss: the metrics observed at the receiving end

Using these definitions we distinguish a few scenarios and the actual network metrics that result from the network emulator implementation (a sketch of this logic in code follows the list):

  1. When the link is not saturated, the metrics are as expected:

    • measured_bandwidth == tx_bandwidth_loss_adjusted

    • measured_latency == latency

    • measured_loss == loss

  2. When the link is saturated and only bandwidth control is in place (i.e. latency and loss are 0.0), the metrics are as follows. Note that under this scenario all packets with size greater than packet_size will be dropped!

    • measured_bandwidth == bandwidth_limit

    • measured_latency == packet_send_time_at_capacity; it is not 0ms because of the time each packet spends in the internal tbf queue (whose size is one packet)

    • measured_loss == 100% - (bandwidth_limit/tx_bandwidth) * 100%, i.e. the percentage by which the send bitrate overruns the link capacity

  3. When the link is saturated and both bandwidth control and latency and/or loss control are in place:

    • measured_bandwidth == min(bandwidth_limit, tx_bandwidth_loss_adjusted), either the link capacity or the loss adjusted tx_bandwidth, whichever is smaller.

    • measured_latency is the specified latency, but adjusted up to the nearest multiple of packet_send_time_at_capacity

    • measured_loss is either the specified loss or the loss due to capacity overrun, whichever is greater
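As a rough illustration, here is a minimal Python sketch of the projection logic described above. The function name and the rounding of the adjusted latency are assumptions made for illustration; this is not the actual implementation of projected_link_metrics.py, whose results may differ slightly in the saturated case.

import math

def project_metrics(bandwidth_limit, latency, loss, packet_size, tx_bandwidth):
    # Projects (bandwidth [bit/s], latency [s], loss [%]) from the emulation
    # parameters, following the scenarios described above (illustrative sketch only).
    packet_send_time = packet_size * 8.0 / bandwidth_limit        # time to send one packet at capacity
    tx_loss_adjusted = tx_bandwidth * (100.0 - loss) / 100.0      # send rate after the configured loss

    if tx_loss_adjusted <= bandwidth_limit:
        # Scenario 1: link not saturated
        return tx_loss_adjusted, latency, loss

    # Link saturated: capacity caps the bandwidth
    measured_bandwidth = bandwidth_limit
    # Loss due to overrunning the capacity, or the configured loss, whichever is greater
    overrun_loss = 100.0 * (1.0 - bandwidth_limit / float(tx_bandwidth))
    measured_loss = max(loss, overrun_loss)
    if latency == 0.0 and loss == 0.0:
        # Scenario 2: only bandwidth control in place
        measured_latency = packet_send_time
    else:
        # Scenario 3: latency adjusted up to a multiple of packet_send_time
        measured_latency = math.ceil(latency / packet_send_time) * packet_send_time
    return measured_bandwidth, measured_latency, measured_loss

bw, lat, l = project_metrics(1000000, 0.0, 0.0, 1500, 1500000)
print("bandwidth %.2fkbit/s latency %.2fms loss %.2f%%" % (bw / 1000.0, lat * 1000.0, l))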

Command-line tools

There are two tools intended to help in determining the effect of a combination of link emulation parameters and send bitrate on the measured (or "received") metrics.

The first tool, projected_link_metrics.py, infers these values based on the algorithm described in the previous section, while the second, measure_link_node.py, actually measures them using the loopback interface.

projected_link_metrics.py takes as parameters the bandwidth limit (bit/s), latency (s), loss (%), packet size (bytes) and TX bandwidth (bit/s), in that order, and prints the projected network emulation metrics: bandwidth, latency and loss.

For example, for a link with a capacity of 1Mbit/s that is saturated because the TX rate is 1.5Mbit/s:

# rosrun network_traffic_control projected_link_metrics.py 1000000 0.0 0.0 1500 1500000
Projected metrics: bandwidth 1000.00Kbit/s latency 12.00ms loss 33.33%

The bandwidth is the link capacity, the latency is the time needed to send one packet (i.e. 1500 bytes at 1Mbit/s) and the loss is due to overrunning the link capacity.
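In numbers, the projection for this example works out to:

  latency = 1500 bytes * 8 / 1000000 bit/s = 12 ms
  loss = 100% * (1 - 1000000/1500000) = 33.33%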

Other examples:

# rosrun network_traffic_control projected_link_metrics.py 1000000 0.04 0.0 1500 1500000
Projected metrics: bandwidth 1000.00Kbit/s latency 60.00ms loss 33.33%

# rosrun network_traffic_control projected_link_metrics.py 1000000 0.04 80.0 1500 1500000
Projected metrics: bandwidth 300.00Kbit/s latency 40.00ms loss 80.00%

# rosrun network_traffic_control projected_link_metrics.py 1000000 0.02 20.0 1500 500000
Projected metrics: bandwidth 400.00Kbit/s latency 20.00ms loss 20.00%

To experimentally verify the theoretical projections of the projected_link_metrics.py tool, a node (measure_link_node.py) and an associated launch file (measure_link.launch) have been created. They apply the network emulation to the lo (loopback) interface and use the network_monitor_udp package to measure the resulting metrics.

This node and the associated launch file live in the network_control_tests package in the test/ subdirectory.

Here's an example, whose results agree quite closely with the theoretical projection made previously:

# roslaunch measure_link.launch tx_bandwidth:=1500000 bandwidth_limit:=1000000 latency:=0.0 loss:=0.0

[...]

[INFO] 1288966581.041664: Link measurement completed!
[INFO] 1288966581.042628: Link parameters: bandwidth_limit 1000.00kbit/s latency 0.00ms loss 0.00% tx_bandwidth 1500.00kbit/s
                 packet_size 1500bytes max_allowed_latency 100.00ms max_return_time 0.00ms
                 direction egress duration 10.00s
[INFO] 1288966581.043424: RESULTS: measured_bandwidth 974.48kbit/s measured_latency 8.41ms measured_loss 35.02%

The launch file (and node) takes the following parameters: tx_bandwidth, bandwidth_limit, latency and loss (passed on the command lines above and below), plus packet_size, max_allowed_latency, max_return_time, direction and duration (whose values are echoed in the "Link parameters" lines of the output).

Some more examples:

# roslaunch measure_link.launch tx_bandwidth:=1500000 bandwidth_limit:=1000000 latency:=0.04 loss:=0.0

[...]

[INFO] 1288967237.290097: Link measurement completed!
[INFO] 1288967237.291093: Link parameters: bandwidth_limit 1000.00kbit/s latency 40.00ms loss 0.00% tx_bandwidth 1500.00kbit/s
                 packet_size 1500bytes max_allowed_latency 100.00ms max_return_time 0.00ms
                 direction egress duration 10.00s
[INFO] 1288967237.291811: RESULTS: measured_bandwidth 978.08kbit/s measured_latency 69.74ms measured_loss 34.80%

# roslaunch measure_link.launch tx_bandwidth:=1500000 bandwidth_limit:=1000000 latency:=0.04 loss:=80.0

[...]

[INFO] 1288967298.388572: Link measurement completed!
[INFO] 1288967298.389555: Link parameters: bandwidth_limit 1000.00kbit/s latency 40.00ms loss 80.00% tx_bandwidth 1500.00kbit/s
                 packet_size 1500bytes max_allowed_latency 100.00ms max_return_time 0.00ms
                 direction egress duration 10.00s
[INFO] 1288967298.390246: RESULTS: measured_bandwidth 279.58kbit/s measured_latency 39.58ms measured_loss 81.36%

# roslaunch measure_link.launch tx_bandwidth:=500000 bandwidth_limit:=1000000 latency:=0.02 loss:=20.0

[...]

[INFO] 1288967485.547524: Link measurement completed!
[INFO] 1288967485.548481: Link parameters: bandwidth_limit 1000.00kbit/s latency 20.00ms loss 20.00% tx_bandwidth 500.00kbit/s
                 packet_size 1500bytes max_allowed_latency 100.00ms max_return_time 0.00ms
                 direction egress duration 10.00s
[INFO] 1288967485.549177: RESULTS: measured_bandwidth 394.65kbit/s measured_latency 20.19ms measured_loss 21.28%

Implementation details

Egress

An htb qdisc is created at the root of the interface with a single htb class with a very high rate limit (10Gbit/s). This htb class at the root is needed in order to attach the filter, as only classful qdiscs can have filters (and tbf is classless).

Next, a tbf (Token Bucket Filter) qdisc for bandwidth control is attached. Finally, if latency or loss control is enabled, a netem qdisc child is attached to the tbf qdisc.
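A rough sketch of this egress setup, shelling out to tc from Python; the interface name, handles and rate/delay/loss values are illustrative assumptions, not the node's exact invocation:

import subprocess

def run(cmd):
    # Run a tc command, raising on failure
    subprocess.check_call(cmd.split())

dev = "eth0"   # assumed interface name
# Root htb qdisc with one class at a very high rate, so that a filter can be attached
run("tc qdisc add dev %s root handle 1: htb default 1" % dev)
run("tc class add dev %s parent 1: classid 1:1 htb rate 10gbit" % dev)
# Match-all u32 filter directing traffic into class 1:1
run("tc filter add dev %s parent 1: protocol ip prio 1 u32 match u32 0 0 flowid 1:1" % dev)
# tbf qdisc for bandwidth control (rate/buffer/limit values are examples)
run("tc qdisc add dev %s parent 1:1 handle 10: tbf rate 1mbit buffer 1500 limit 1500" % dev)
# Optional netem child for latency/loss emulation
run("tc qdisc add dev %s parent 10:1 handle 20: netem delay 40ms loss 10%% limit 4" % dev)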

Ingress

An ingress qdisc is created on the interface and an ifb interface is created. A filter is attached to the ingress qdisc that redirects matching packets to the ifb interface. A setup identical to that described for egress is then created on this ifb interface.
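A corresponding sketch of the ingress redirection, again with assumed names and values; the ifb kernel module must already be loaded so that ifb0 exists:

import subprocess

def run(cmd):
    subprocess.check_call(cmd.split())

dev, ifb = "eth0", "ifb0"   # assumed interface names (modprobe ifb creates ifb0)
run("ip link set %s up" % ifb)
# Ingress qdisc on the real interface
run("tc qdisc add dev %s ingress" % dev)
# Match-all filter redirecting ingress traffic to the ifb interface
run("tc filter add dev %s parent ffff: protocol ip prio 1 u32 match u32 0 0 action mirred egress redirect dev %s" % (dev, ifb))
# The same htb/tbf/netem hierarchy as for egress is then built on the root of ifb0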

tbf parameters

For bandwidth control there are three tbf parameters of interest: rate (set to the requested bandwidth), buffer (the maximum burst, dimensioned from packet_size, which is why packets larger than packet_size are dropped when only bandwidth control is active) and limit (the queue size, kept at roughly one packet, which produces the packet_send_time_at_capacity latency when the link is saturated).

netem parameters

The parameter of interest for netem is limit, which defines the size of an internal queue in packets. If no bandwidth control is in place, this parameter is set to a high value (1000). If bandwidth control is in place, its value is chosen as a function of the latency and packet size: it is equal to the number of packets whose transfer time at link capacity equals the specified latency.
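For instance, assuming the limit is obtained by dividing the configured latency by the per-packet send time (the exact rounding the node uses is an assumption here):

bandwidth_limit = 1000000                                 # bit/s
packet_size = 1500                                        # bytes
latency = 0.04                                            # s
packet_send_time = packet_size * 8.0 / bandwidth_limit    # 0.012 s per packet at capacity
netem_limit = int(round(latency / packet_send_time))      # about 3 packets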
