How to easily reroute traffic between servers running Proxmox VMs

Last weekend I ran out of IP addresses for something I wanted to test on my SYS box (176.31.102.105). I did not want to buy additional IP addresses just for testing, as they would have increased my monthly cost (I know, I am very greedy). I still had an IP address (5.39.19.229) left on my OVH VPS (213.32.31.12) that I use for monitoring, so I decided to recycle this address. However, it is not possible to move addresses from OVH servers to SYS / Kimsufi servers, so I needed a workaround.

Box 1 (monitoring)

Tunnel Setup

sysctl -w net.ipv4.ip_forward=1
ip link add tunnel0 type gretap remote 176.31.102.105 local 213.32.31.12
ip link set up tunnel0
# ip route add 10.10.10.0/24 dev tunnel0 <- not actually necessary (used for debugging)
ip addr add 10.10.10.2/24 dev tunnel0
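A quick sanity check: the detailed link output shows the gretap parameters, so you can confirm that remote and local are the right way around.

ip -d link show tunnel0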

Routing

arp -s 5.39.19.229 fa:16:3e:cb:b8:b9 -i eth0 pub
ip route add 5.39.19.229/32 dev tunnel0
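The published ARP entry makes the VPS answer ARP requests for 5.39.19.229 on eth0 with its own MAC address, so OVH keeps delivering traffic for that IP to the VPS, and the host route then forwards it into the tunnel. You can check that the route is in place (the output should mention dev tunnel0):

ip route get 5.39.19.229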

Box 2 (SYS server)

Tunnel Setup

ip link add tunnel0 type gretap remote 213.32.31.12 local 176.31.102.105
ip link set up tunnel0
# ip route add 10.10.10.0/24 dev tunnel0 <- not actually necessary (used for debugging)
# ip addr add 10.10.10.1/24 dev tunnel0 <- not actually necessary (used for debugging)
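If you want to test the tunnel itself before setting up the bridge, temporarily assign the debugging address from the commented-out line above and ping the other end:

ip addr add 10.10.10.1/24 dev tunnel0
ping -c 3 10.10.10.2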

Tunnel bridge

brctl addbr vmbr2
brctl addif vmbr2 tunnel0
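tunnel0 should now show up as a port of the bridge:

brctl show vmbr2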

Why do I use a bridge? I want to use this IP address directly inside a virtual machine. This is also the reason why I had to use a gretap tunnel instead of a plain gre tunnel: only gretap tunnels seem to support bridging. So my Proxmox configuration for this virtual machine looks like this:
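Roughly, the network line in the VM config (/etc/pve/qemu-server/<vmid>.conf) comes down to a VirtIO NIC attached to vmbr2; the MAC address is whatever Proxmox generated for the VM:

net0: virtio=42:4c:ed:91:75:81,bridge=vmbr2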

Simple? Yup. You do not even need to care about the MAC address. Just get the bridge interface right and you’re all set (well almost).

The next step is the network configuration of the virtual machine:

# The primary network interface
auto ens18
allow-hotplug ens18
iface ens18 inet static
	address 5.39.19.229
	netmask 255.255.255.255
	dns-nameservers 213.186.33.99
	broadcast 5.39.19.229
	gateway 10.10.10.2

Bring the interface up and the VM should start pinging. Simple, isn’t it? Well, not exactly.

Debugging

I noticed that the SSH connection was very laggy and froze pretty much instantly. I managed to run an iperf test at some point (right before the connection timed out again):

~# iperf -c iperf.ovh.net -i 1
------------------------------------------------------------
Client connecting to iperf.ovh.net, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 5.39.19.229 port 53252 connected with 188.165.12.136 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  79.2 KBytes   649 Kbits/sec
[  3]  1.0- 2.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  2.0- 3.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  3.0- 4.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  4.0- 5.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  0.0-10.2 sec  79.2 KBytes  63.3 Kbits/sec

Something seems very wrong.

Let’s check the interface configuration on the virtual machine:

2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:4c:ed:91:75:81 brd ff:ff:ff:ff:ff:ff
    inet 5.39.19.229/32 brd 5.39.19.229 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::404c:edff:fe91:7581/64 scope link 
       valid_lft forever preferred_lft forever

and compare it to the configuration of the tunnel on the host:

32: tunnel0@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1462 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether be:3f:6c:19:87:94 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/32 scope global tunnel0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc3f:6cff:fe19:8794/64 scope link 
       valid_lft forever preferred_lft forever

Wrong MTU on the VM: the gretap encapsulation adds 38 bytes of overhead (20 bytes outer IPv4 header, 4 bytes GRE header, 14 bytes inner Ethernet header), so the tunnel only carries 1462-byte packets while the VM tries to push 1500-byte ones through it. Add the MTU to the network config:

# The primary network interface
auto ens18
allow-hotplug ens18
iface ens18 inet static
        address 5.39.19.229
        netmask 255.255.255.255
        dns-nameservers 213.186.33.99
        broadcast 5.39.19.229
        gateway 10.10.10.2
        mtu 1462
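To verify the MTU end-to-end you can send non-fragmentable pings at the limit: 1434 bytes of ICMP payload plus 8 bytes ICMP header plus 20 bytes IP header is exactly 1462, so this should go through, while -s 1435 should be rejected as too long:

ping -M do -s 1434 -c 3 iperf.ovh.net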

A new iperf test looks fine:

# iperf -c iperf.ovh.net -i 1 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to iperf.ovh.net, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  5] local 5.39.19.229 port 33302 connected with 188.165.12.136 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 1.0 sec  11.5 MBytes  96.5 Mbits/sec
[  5]  1.0- 2.0 sec  11.4 MBytes  95.4 Mbits/sec
[  5]  2.0- 3.0 sec  9.88 MBytes  82.8 Mbits/sec
[  5]  3.0- 4.0 sec  10.5 MBytes  88.1 Mbits/sec
[  5]  4.0- 5.0 sec  9.88 MBytes  82.8 Mbits/sec
[  5]  5.0- 6.0 sec  9.75 MBytes  81.8 Mbits/sec
[  5]  6.0- 7.0 sec  9.88 MBytes  82.8 Mbits/sec
[  5]  7.0- 8.0 sec  11.1 MBytes  93.3 Mbits/sec
[  5]  8.0- 9.0 sec  11.0 MBytes  92.3 Mbits/sec
[  5]  9.0-10.0 sec  10.9 MBytes  91.2 Mbits/sec
[  5]  0.0-10.1 sec   106 MBytes  87.6 Mbits/sec
[  4] local 5.39.19.229 port 5001 connected with 188.165.12.136 port 43046
[  4]  0.0- 1.0 sec  7.83 MBytes  65.7 Mbits/sec
[  4]  1.0- 2.0 sec  9.35 MBytes  78.4 Mbits/sec
[  4]  2.0- 3.0 sec  7.04 MBytes  59.0 Mbits/sec
[  4]  3.0- 4.0 sec  10.6 MBytes  88.8 Mbits/sec
[  4]  4.0- 5.0 sec  10.9 MBytes  91.4 Mbits/sec
[  4]  5.0- 6.0 sec  10.9 MBytes  91.1 Mbits/sec
[  4]  6.0- 7.0 sec  10.7 MBytes  89.9 Mbits/sec
[  4]  7.0- 8.0 sec  11.1 MBytes  92.8 Mbits/sec
[  4]  8.0- 9.0 sec  11.1 MBytes  92.9 Mbits/sec
[  4]  9.0-10.0 sec  10.7 MBytes  89.5 Mbits/sec
[  4]  0.0-10.2 sec   102 MBytes  83.9 Mbits/sec

Making this permanent

Add a new interface on the VPS:

auto tunnel0
iface tunnel0 inet static
        address 10.10.10.2
        network 10.10.10.0
        netmask 255.255.255.0
        pre-up ip link add tunnel0 type gretap remote 176.31.102.105 local 213.32.31.12
        pre-up arp -s 5.39.19.229 $(cat /sys/class/net/eth0/address) -i eth0 pub
        post-up ip route add 5.39.19.229/32 dev tunnel0
        post-up ip route add 10.10.10.0/24 dev tunnel0
        post-down ip link del tunnel0
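To bring it up without a reboot, delete the manually created tunnel first (otherwise the pre-up ip link add will complain that the device already exists):

ip link del tunnel0
ifup tunnel0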

Enable IPv4 packet forwarding permanently:

echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf

Add new interfaces on the Proxmox host (untested because server is in production, but it should™ work):

auto tunnel0
iface tunnel0 inet manual
	pre-up ip link add tunnel0 type gretap remote 213.32.31.12 local 176.31.102.105
	post-down ip link del tunnel0

auto vmbr2
iface vmbr2 inet manual
	bridge_ports tunnel0
	bridge_stp off
	bridge_fd 0
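Bringing everything up by hand should then just be (again, untested on my side; check afterwards that tunnel0 hangs off the bridge):

ifup tunnel0
ifup vmbr2
ip link show master vmbr2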

Comments

    1. Hello Paul,

      I have no information about blocked GRE tunnels.

      Did you configure the IP address on the tunnel and add the route?
      A: ip addr add 10.10.10.1/24 dev tunnel0
      A: ip route add 10.10.10.0/24 dev tunnel0

      B: ip addr add 10.10.10.2/24 dev tunnel0
      B: ip route add 10.10.10.0/24 dev tunnel0
