Firezone Docker, ipvlan, with masquerade disabled for split-tunnel VPN

On the off chance anyone else has issues routing packets while using ipvlans.

I could see return packets making it to the docker host and being passed to the vlan interface on the docker host, but they were never passed into the container.

I had no direct limitation or use case requiring ipvlans over macvlans, so I converted to macvlans and everything now routes as expected. I believe that because each container has its own MAC address, the docker host can route traffic correctly. With ipvlans, the container shares the host's MAC address, so packets appeared to have reached their final destination once they hit the host's eth0 device.
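For reference, switching drivers is a one-word change in the network definition. A minimal sketch of the macvlan equivalent (the interface name, subnet, and network name below are placeholders, not values from my setup):

```shell
# Create a macvlan network attached to the host's physical NIC.
# eth0, 10.0.50.0/24, and the gateway are placeholders -- substitute your own values.
docker network create \
  --driver macvlan \
  -o parent=eth0 \
  --subnet 10.0.50.0/24 \
  --gateway 10.0.50.1 \
  firezone_macvlan
```

The same command with `--driver ipvlan` produces the shared-MAC behavior described above.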

Additionally, the sysctls declarations in the docker-compose file are not needed when using macvlans (they may not be needed with ipvlans either; untested). No changes to the sysctl.conf file on the host are needed either. These were troubleshooting steps I tried and reverted after the macvlan change.

A route on the common gateway is still necessary to direct the WG client IPs to the Firezone container.
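As a sketch, on a Linux-based gateway that static route might look like the following (the WireGuard client subnet and the container address are placeholder values, not from my deployment):

```shell
# Route the WireGuard client subnet to the Firezone container's macvlan IP.
# 10.3.2.0/24 and 10.0.50.2 are examples only; use your own values.
ip route add 10.3.2.0/24 via 10.0.50.2
```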

@gobijerboa, may I know how you got this configured?

I did set the driver to macvlan, but the firezone-firezone-1 docker container isn't starting; it fails because it can't reach postgres:

    %DBConnection.ConnectionError{message: "tcp connect (postgres:5432):

    driver: macvlan

You need to specify the postgres instance's IP address in the DATABASE_HOST environment variable.
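For illustration, if you start the container with docker run instead of compose, the variable can be passed with -e (the address and network name below are placeholders; the image name is shown as firezone/firezone for illustration):

```shell
# Point Firezone at the postgres container's address explicitly.
# 10.0.50.3 and macvlanXXXX are placeholders for your own values.
docker run -e DATABASE_HOST=10.0.50.3 --network=macvlanXXXX firezone/firezone
```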

I also have a second bridge network attached to each service. This is the network I use for the DB connection. It's optional, but there's no need for the database to have access to anything but Firezone.

@gobijerboa , I tried setting DATABASE_HOST to the host IP, and I set -p on the postgres container too, but still didn't get it working.

Is it possible to send your docker-compose file?

Compose:

    version: '2.2'
    networks:
      db_connection:
        driver: bridge
        internal: true

I looked through my documentation as well, and my notes suggest that the Firezone container won't execute the database init script. I had to start the database container manually, which I did via docker run, and then create the database, "firezone". Once created, I could start Firezone normally and Firezone will build the DB schema. This can be done with the postgres commands or with an application like pgadmin, which is what I used.
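If you'd rather use the postgres commands than pgadmin, a minimal sketch is to exec psql inside the running container (the container name below is a placeholder; the docker run command I used didn't assign one):

```shell
# Create the "firezone" database inside the running postgres container.
# "postgres_tmp" is a placeholder container name.
docker exec -it postgres_tmp psql -U postgres -c 'CREATE DATABASE firezone;'
```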

Here is the docker run command I used to start the container so I could create a DB.

docker run -v /var/docker/firezone/data/postgres:/var/lib/postgresql/data -e POSTGRES_PASSWORD=XXXX --network=macvlanXXXX --rm postgres:13

Thanks @gobijerboa ,

How did you create the macvlanXXXX external vlan?

Is this the command?
podman network create --driver macvlan macvlanXXXX --subnet --ipam-driver=host-local

I'm using docker and not podman; my understanding is they are pretty much interchangeable.

podman network create --driver macvlan macvlanXXXX -o parent=eth0.XXXX --subnet --gateway --ipam-driver=host-local

If the vlan interface doesn't exist on the parent interface, podman will automatically create it (or at least this is how docker functions).
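For completeness, the vlan sub-interface can also be created on the host manually with iproute2 before creating the network (the interface name and VLAN ID below are placeholders):

```shell
# Manually create an 802.1Q sub-interface for VLAN 1234 on eth0.
# eth0 and 1234 are placeholders -- substitute your own interface and VLAN ID.
ip link add link eth0 name eth0.1234 type vlan id 1234
ip link set eth0.1234 up
```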

@gobijerboa ,

I created it as below:
podman network create --driver macvlan -o parent=ens3 --subnet --ipam-driver=host-local firezone

All 3 services are started: caddy, firezone, and postgres.

firezone is able to talk to postgres, but caddy is not able to reach the firezone IP on port 13000:

    {"level":"error","ts":1672351094.6469665,"logger":"http.log.error","msg":"dial tcp i/o timeout","request":{"remote_ip":"","remote_port":"54654","proto":"HTTP/2.0","method":"GET","host":"","uri":"/","headers":{"User-Agent":["curl/7.87.0"],"Accept":["*/*"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":""}},"duration":3.004223364,"status":502,"err_id":"tszb85cg6","err_trace":"reverseproxy.statusError (reverseproxy.go:1272)"}

service firezone network


service postgres network


    driver: bridge
    internal: true
    ipam:
      driver: default
      config:
        - subnet:
    external: true

and caddy service

    container_name: caddy
    image: caddy:2
    volumes:
      - ${FZ_INSTALL_DIR:-.}/caddy:/data/caddy
    # See Caddy's documentation for customizing this line
    command:
      - /bin/sh
      - -c
      - |
        cat <<EOF > /etc/caddy/Caddyfile && caddy run --config /etc/caddy/Caddyfile

        https:// {
          reverse_proxy *
    network_mode: "host"
    deploy:
      <<: *default-deploy
can you please share your caddy service docker-compose?

I'm not using caddy as my reverse proxy; I have a third-party external proxy in my configuration. It looks like the issue is that Caddy doesn't know how to reach the Firezone container.

It looks like Caddy is set up to use host networking mode. You'll either need to add the macvlanXXX network to the caddy service, or your host's network/gateway needs a route to the macvlan network.
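One caveat with the second option: a host normally can't reach its own macvlan containers directly through the parent NIC. A common workaround is a macvlan "shim" interface on the host; a sketch, where the interface name, addresses, and subnet are all placeholders:

```shell
# Create a macvlan shim on the host so it can reach macvlan containers.
# ens3, 10.0.50.254, and 10.0.50.0/24 are placeholders -- use your own values.
ip link add macvlan-shim link ens3 type macvlan mode bridge
ip addr add 10.0.50.254/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 10.0.50.0/24 dev macvlan-shim
```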

@gobijerboa , actually I am not able to reach firezone on port 13000 via the podman external network.

podman network inspect firezone

    [
         {
              "name": "firezone",
              "id": "914b73c655f5711a9422a75ad87fa315435af825b3ec47d04fe4740dff909382",
              "driver": "macvlan",
              "network_interface": "ens3",
              "created": "2022-12-29T22:18:13.77082825Z",
              "subnets": [
                   {
                        "subnet": "",
                        "gateway": ""
                   }
              ],
              "ipv6_enabled": false,
              "internal": false,
              "dns_enabled": false,
              "ipam_options": {
                   "driver": "host-local"
              }
         }
    ]

From my host machine I can't reach it; there is no interface created on the host machine for this network.

I think we have to go back to the network declaration. You define the VLAN interface in the format parent.vlanID. So it should be "-o parent=ens3.1234" (or whatever you have set as the VLAN ID in your network design).
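Putting that together, the create command from earlier would become something like the following (the VLAN ID, subnet, and gateway are placeholders for your own values):

```shell
# Attach the macvlan network to a tagged VLAN sub-interface of ens3.
# 1234, 10.50.0.0/24, and 10.50.0.1 are placeholders.
podman network create --driver macvlan \
  -o parent=ens3.1234 \
  --subnet 10.50.0.0/24 --gateway 10.50.0.1 \
  --ipam-driver=host-local firezone
```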

Your gateway needs to have the route to the vlan subnet as well.

@gobijerboa , OK, but that is 802.1Q tagging, which I don't have in my cloud network.

Ah, I see. In that case macvlan is not the right network type for this deployment. You'd just create a new bridge network.
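A minimal sketch of that (the subnet and network name below are placeholders):

```shell
# Plain bridge network; no VLAN tagging or parent interface required.
# 10.89.0.0/24 is a placeholder subnet.
podman network create --driver bridge --subnet 10.89.0.0/24 firezone_bridge
```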

Thanks @gobijerboa ,

In that case I'll stay with docker host networking, which is working:

network_mode: "host"

Sounds good. Best of luck to you.