April 20, 2013
Network latency and packet loss emulation is a technique for testing a program or system against a simulated network connection that is faulty or error prone. It is a valuable way to make sure your application works when the network suffers packet loss, jitter or high latency. Loss emulation lets you simulate a bad network connection using a system on your local LAN. Using the local LAN keeps your testing environment secure and allows full control over the topology.
What if you are a game developer and you want to make sure your online game works when a percentage of the players have high latency (i.e. a ping of 200 ms or higher) or high packet loss (2% or greater)? If you have eight (8) players in a game and one (1) player is on a dial up connection with a slow system, what happens to your game? Do all of the players slow down because of this one person, or does the game just crash (Command and Conquer Red Alert)? Does a warning go out to all players saying that the slow person is the cause of the issues (Starcraft 2)? Does your game lose sync with all players and give useless warnings (Supreme Commander)? We have seen many games released without this type of testing. They were probably developed and tested on a local LAN; a flawless low latency, high bandwidth connection. Then, when the product was released, users and reviewers rated it horribly due to its jittery, unplayable on-line play.
Network emulation is accomplished by introducing a device on the LAN which alters the packet flow in a way that imitates the behavior of application traffic in less than ideal circumstances. This device may be either a general-purpose computer running software (Linux or FreeBSD bridge) to perform the network emulation or a dedicated emulation device. The device or system incorporates a variety of network attributes including the round-trip time across the network (latency), the amount of available bandwidth, a given degree or range of packet loss, duplication of packets, reordering packets, and/or the severity of network jitter.
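As a sketch, most of these attributes map directly onto netem options. The interface name (eth0) and the values below are example assumptions, not recommendations, and all of the commands need root:

```shell
## one impairment per command; only one root qdisc can be active at a time,
## so delete the previous rule with "tc qdisc del dev eth0 root netem"
## before adding the next one
tc qdisc add dev eth0 root netem delay 100ms 20ms           # latency with +/- 20ms jitter
tc qdisc add dev eth0 root netem loss 1%                    # random packet loss
tc qdisc add dev eth0 root netem duplicate 0.5%             # duplicated packets
tc qdisc add dev eth0 root netem corrupt 0.1%               # single bit errors in packets
tc qdisc add dev eth0 root netem reorder 5% 50% delay 50ms  # reordering; needs a delay to work
```

Note that limiting available bandwidth is usually handled by a separate queueing discipline such as tbf rather than by netem itself.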
Testing your on-line applications before they are released is a good way to make sure your product is ready for prime time. If your application is not able to handle all types of network problems, we guarantee your users will notice the issues and blame your application. It does not matter what type of connection they have, how far they are from the server or if their system is too slow to run the application. They will complain and they will take out their frustrations by writing bad reviews. Let's take a look at some testing environment solutions which are free and quite easy to set up.
Netem allows you to set up a delay profile on the outgoing packets of a network interface. This setup can be used on the Ubuntu machine itself or on a Linux operating system inside a virtual machine like VirtualBox. It is most useful on the network if you set up the Ubuntu machine as a transparent bridge between the client and server machines. Let's go through the basic setup now; further down the page we look at setting up a Linux based bridge.
In order to use netem, you need to install the iproute package through the Ubuntu package manager.
sudo apt-get install iproute
Once iproute is installed we can use the binary "tc" to delay traffic. Only root can modify a network interface, so either su to root or use sudo. The simplest netem setup adds a static amount of latency to packets leaving an interface. Here we are adding 250ms of latency to outgoing packets of the external interface, eth0. Understand that netem only adds the delay to packets leaving the interface, so a 250 ms outgoing delay produces roughly the same round trip time as 125 ms out plus 125 ms for the packet to come back.
First find an ip address you can ping. As a baseline we will ping the router on our internal LAN.

user@machine:~$ ping -c 3 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_req=1 ttl=255 time=0.869 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=255 time=0.796 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=255 time=0.784 ms

--- 192.168.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.784/0.816/0.869/0.044 ms
As you can see the average latency for pinging the machine was 0.816 ms. This is the expected result of a local LAN. It is a flawless network setup.
Now, lets configure netem using the tc binary to add 250ms of latency to our interface eth0.
sudo tc qdisc add dev eth0 root netem delay 250ms
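As a quick optional check that the rule took effect, tc can print the active queueing discipline on the interface:

```shell
# list the active qdisc on eth0; a netem entry showing the 250ms delay
# should appear in the output
tc qdisc show dev eth0
```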
Now, ping the exact same ip address we pinged in the previous step.
user@machine:~$ ping -c 3 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_req=1 ttl=255 time=251 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=255 time=251 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=255 time=250 ms

--- 192.168.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 250.920/251.276/251.477/0.252 ms
The result is an average of 251 ms of latency per ping packet round trip. This is right around what we would expect after adding the 250 ms of latency.
To disable the netem delay products on the interface we can delete the rules.
sudo tc qdisc del dev eth0 root netem
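If you only want to adjust the delay rather than remove it entirely, a netem rule can also be modified in place by substituting "change" for "add"; the syntax is otherwise the same:

```shell
# replace the current netem settings on eth0 without deleting the qdisc
sudo tc qdisc change dev eth0 root netem delay 100ms
```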
In this example we are going to emulate an Internet path with variable packet loss, latency, re-ordering and jitter. This setup emulates the real world connection of an Internet link from the east coast of the United States to the Samoan Islands in the central South Pacific (www.samoa.ws). It is also a pattern you would see internationally between low tier ISP's. When we tested the real world network we noticed the following happen to our traffic: around 200 ms of average latency with as much as 40 ms of jitter, roughly 15% packet loss, about 1% of packets duplicated, about 0.1% of packets corrupted and around 5% of packets arriving out of order.
Using the properties of the network stated above we can use netem (tc) to emulate this network. Enable the new delay product scheme using the following:
sudo tc qdisc add dev eth0 root netem delay 200ms 40ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%
Now, to test the setup using just our local LAN we are going to ping the router again. This time we are going to send 100 packets to give a greater possibility of network corruption or loss. The output here has been shortened for brevity since we are only interested in the ping result summary.
user@machine:~# ping -c 100 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_req=1 ttl=255 time=167 ms
64 bytes from 192.168.0.1: icmp_req=2 ttl=255 time=236 ms
64 bytes from 192.168.0.1: icmp_req=3 ttl=255 time=192 ms
...
64 bytes from 192.168.0.1: icmp_req=100 ttl=255 time=209 ms

--- 192.168.0.1 ping statistics ---
100 packets transmitted, 97 received, +2 duplicates, 3% packet loss, time 99138ms
rtt min/avg/max/mdev = 161.466/203.143/241.433/24.642 ms
The results show an average latency of 203.143 ms and we saw 3% packet loss and 2 duplicate packets. This is a great setup to test an intercontinental link as well. With just a little bit of time you can take the ping data from any network you want to test and make a corresponding netem profile. The results of the test above were really close to what we saw in the real world ping from the eastern US to Samoa.
To disable the netem delay products on the interface just delete the rule.
sudo tc qdisc del dev eth0 root netem
This is a simple bash shell script to enable and disable a netem testing environment. We are going to call the script calomel_netem.sh, but you can call it anything you like. To use the script just execute it with an argument of the function you want to execute. For example, if you want to disable all netem profiles use "./calomel_netem.sh off". To enable the profile of the Samoa example from above use "./calomel_netem.sh samoa" and to show the current running profile use "./calomel_netem.sh show". You should easily be able to modify this script to suit your needs by adding methods to the end of the script.
#!/bin/bash
#
## Calomel.org
## https://calomel.org/network_loss_emulation.html
#
# Usage: ./calomel_netem.sh $argument

if [ $# -eq 0 ]
 then
   echo ""
   echo "     Calomel.org  calomel_netem.sh"
   echo "------------------------------------------"
   echo "off    = clear all netem profiles"
   echo "show   = show active profiles"
   echo "samoa  = enable Samoa netem profile"
   echo ""
   exit
fi

if [ "$1" = "off" ]
 then
   echo "netem profile off"
   tc qdisc del dev eth0 root netem
   exit
fi

if [ "$1" = "show" ]
 then
   echo "show netem profile"
   tc qdisc show dev eth0
   exit
fi

if [ "$1" = "samoa" ]
 then
   tc qdisc add dev eth0 root netem delay 200ms 40ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%
   exit
fi

#### EOF #####
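As an example of extending the script, a hypothetical "dialup" profile could be pasted in just before the "#### EOF" line. The profile name and the delay and loss values here are made up purely for illustration:

```shell
## hypothetical profile: a high latency, jittery, lossy dial up style link
if [ "$1" = "dialup" ]
 then
   tc qdisc add dev eth0 root netem delay 400ms 100ms 25% loss 3%
   exit
fi
```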
If you need to stress test the link to a server from multiple client machines, this is best accomplished by setting up a transparent bridge. A bridge is simply a device on the network that all traffic must pass through before it gets to the server. When the server responds to the client the traffic passes back through the bridge. The bridge is not seen by any of the machines, thus the description "transparent."
The setup is really easy. Physically, connect the server to a switch and the switch to one of the network interfaces on the bridge. Then connect the other network interface of the bridge to a different switch that the clients are connected to. In order for traffic to get to and from the server it must pass through the bridge.
Now we can set up the interfaces of the bridge. From the console on the box which is going to be the bridge, we put both internal and external interfaces in promiscuous mode. Then we create the bridge virtual interface and add both interfaces (eth0 and eth1) to the virtual network. That is all it takes to configure a transparent bridge. If you would also like to access the bridge from the network, for example to ssh to it, then set up an ip address on the bridge interface as well. Now you have a transparent bridge which you can also ssh to in order to modify the netem configuration.
## set interfaces to promiscuous mode
ifconfig eth0 0.0.0.0 promisc up
ifconfig eth1 0.0.0.0 promisc up

## add both interfaces to the virtual bridge network
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1

## optional: configure an ip on the bridge to allow remote access
ifconfig br0 192.168.0.111 netmask 255.255.255.0 up
route add default gw 192.168.0.1 dev br0
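To verify the bridge came up with both member interfaces attached, brctl can list it:

```shell
# show the bridge and its member interfaces; eth0 and eth1 should
# both appear in the interfaces column
brctl show br0
```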
Now, when you configure netem you can add the profile to both the eth0 and eth1 interfaces. This will make the bridge delay the packets in _BOTH_ the incoming and outgoing directions and better represent the true network path. For example, these two lines add the "samoa" netem example profile to eth0 (the interface on the server side) and eth1 (the interface on the client side). With this flexibility you could even change the profile so that traffic coming back from the server is faster than the traffic being sent from the clients. You have the flexibility to emulate any topology you need.

sudo tc qdisc add dev eth0 root netem delay 200ms 40ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%
sudo tc qdisc add dev eth1 root netem delay 200ms 40ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%
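As a sketch of an asymmetric setup, you might impair the client side (eth1) heavily while leaving the server's return path (eth0) relatively clean. The values here are made up for illustration:

```shell
## slow, lossy path from the clients toward the server
sudo tc qdisc add dev eth1 root netem delay 150ms 30ms loss 2%

## faster, cleaner return path from the server to the clients
sudo tc qdisc add dev eth0 root netem delay 50ms 10ms
```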
How can I find out the timings of a network and make my own rules?
The best method we have found is to simply ping the host on the real network you are looking to emulate. For example, ping the remote host 20 times with 100 packets per iteration. Then look at the packet loss, jitter and delays and set up your netem profile to match. With just a few iterations you should be able to get the delay products spot on.
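As a small helper, the avg and mdev fields of a ping summary line can be pulled out with awk and used directly as the netem delay and jitter values. The summary line below is the one from the Samoa test earlier on this page:

```shell
# extract avg (latency) and mdev (jitter) from a ping summary line and
# print a matching netem delay setting
summary='rtt min/avg/max/mdev = 161.466/203.143/241.433/24.642 ms'
echo "$summary" | awk -F'[/= ]+' '{printf "netem delay %.0fms %.0fms\n", $7, $9}'
# prints: netem delay 203ms 25ms
```

The same one liner can be fed the summary line from any real world ping run to produce a starting point for your own profile.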