Performance Tests
Test setup
Two nodes with:
- 2x Intel(R) Xeon(R) CPU L5420
- 16 GiB RAM
- Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Kernel driver e1000e)
- Debian Linux 7.0 (Kernel 2.6.32-26-pve)
- Connected by Cisco Catalyst 4506 switch
VpnCloud version: v0.1.0 (without crypto support, protocol version 1)
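For reproducing this setup, the link speed, NIC, and kernel version can be verified with standard tools. These commands are only illustrative (the interface name eth0 is an assumption) and were not part of the original test procedure:
$> ethtool eth0 | grep Speed     # expect: Speed: 1000Mb/s
$> lspci | grep -i ethernet      # NIC model, e.g. 80003ES2LAN
$> uname -r                      # kernel version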
Test 1: Unencrypted throughput
(In the commands below, MTU is a placeholder for the MTU value under test: 1400 or 16384.)
Node 1:
$> ./vpncloud -t tap -l NODE1:3210 -c NODE2:3210 \
--ifup 'ifconfig $IFNAME 10.2.1.1/24 mtu MTU up' &
Node 2:
$> ./vpncloud -t tap -l NODE2:3210 -c NODE1:3210 \
--ifup 'ifconfig $IFNAME 10.2.1.2/24 mtu MTU up' &
$> iperf -s &
$> top
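The CPU usage figures below were read from top; a per-process alternative (an assumption, not part of the original procedure) is pidstat from the sysstat package:
$> pidstat -p $(pgrep vpncloud) 1     # reports the CPU usage of the vpncloud process once per second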
First, the test is run without VpnCloud:
$> iperf -c NODE2 -t 60
and then via VpnCloud:
$> iperf -c 10.2.1.2 -t 60
Results:
- Throughput without VpnCloud: 938 Mbits/sec
- Throughput via VpnCloud (MTU=1400): 363 Mbits/sec
- CPU usage for VpnCloud (MTU=1400): maxed out at ~105% (slightly more than one full core)
- Throughput via VpnCloud (MTU=16384): 946 Mbits/sec (slightly above the raw-link measurement; the cause is unclear)
- CPU usage for VpnCloud (MTU=16384): ~73% of one core
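One plausible reading of the MTU effect (an interpretation, not stated in the original measurements): the cost is dominated by per-packet processing, so the packet rate needed to fill the link matters more than the byte rate. Back-of-the-envelope packet rates for a 1 Gbit/s link:
$> echo "1000000000 / (1400 * 8)" | bc      # ~89000 packets/s needed at MTU 1400
$> echo "1000000000 / (16384 * 8)" | bc     # ~7600 packets/s needed at MTU 16384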
Test 2: Unencrypted ping
Node 1:
$> ./vpncloud -t tap -l NODE1:3210 -c NODE2:3210 \
--ifup 'ifconfig $IFNAME 10.2.1.1/24 mtu 1400 up' &
Node 2:
$> ./vpncloud -t tap -l NODE2:3210 -c NODE1:3210 \
--ifup 'ifconfig $IFNAME 10.2.1.2/24 mtu 1400 up' &
Each test is first run without VpnCloud:
$> ping NODE2 -c 1000 -i 0.01 -s SIZE -U -q
and then with VpnCloud:
$> ping 10.2.1.2 -c 1000 -i 0.01 -s SIZE -U -q
SIZE: 50 bytes
- Without VpnCloud: Ø= 200 µs, stddev= 28 µs
- With VpnCloud: Ø= 318 µs, stddev= 26 µs
SIZE: 500 bytes
- Without VpnCloud: Ø= 235 µs, stddev= 29 µs
- With VpnCloud: Ø= 351 µs, stddev= 26 µs
SIZE: 1400 bytes
- Without VpnCloud: Ø= 303 µs, stddev= 32 µs
- With VpnCloud: Ø= 421 µs, stddev= 31 µs
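The added round-trip time is nearly constant across packet sizes; averaging the three measured deltas (simple arithmetic, not part of the original procedure):
$> echo "((318-200) + (351-235) + (421-303)) / 3" | bc   # ≈ 117 µs extra round-trip time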
Conclusion
- VpnCloud achieves about 360 Mbit/s with default MTU settings (1400 in this test).
- With an increased MTU (16384), VpnCloud is able to saturate a Gigabit link.
- VpnCloud adds about 120 µs to the round-trip time, i.e. roughly 60 µs of additional latency per direction.