Migrate to Docker failed

Hi all, I'm running the latest version as Omnibus on Debian 11, following this guide: Migrate to Docker | Firezone. I installed Docker using the links provided on the same page (the hello-world container works), and I get the error below:

root@vpn:~# sudo -E bash -c "$(curl -fsSL https://github.com/firezone/firezone/raw/master/scripts/docker_migrate.sh)"
This script will copy Omnibus-based Firezone configuration to Docker-based Firezone configuration.
It operates non-destructively and leaves your current Firezone services running.
Proceed? (Y/n): y
Enter the desired installation directory (/root):
Would you like Firezone to attempt to migrate your existing database to Dockerized Postgres too?
We only recommend this for Firezone installations using the default bundled Postgres.
Proceed? (Y/n): y
Dumping existing database to ./firezone.sql
pg_dump: error: connection to database "firezone" failed: could not connect to server: Connection refused
        Is the server running on host "127.0.0.1" and accepting
        TCP/IP connections on port 15432?

An error occurred running this migration. Your existing Firezone installation has not been affected.
root@vpn:~#

Also, I just discovered that when I run sudo firezone-ctl start afterwards, I get 502 Bad Gateway.

After restoring the original backup, I discovered something else: when I run apt update, I get this lot:

root@vpn:~# apt list --upgradable
Listing... Done
dbus/stable-security 1.12.24-0+deb11u1 amd64 [upgradable from: 1.12.20-2]
firezone/bullseye 0.6.4-1 amd64 [upgradable from: 0.5.11-1]
hyperv-daemons/stable-security 5.10.149-1 amd64 [upgradable from: 5.10.140-1]
isc-dhcp-client/stable-security 4.4.1-2.3+deb11u1 amd64 [upgradable from: 4.4.1-2.3]
isc-dhcp-common/stable-security 4.4.1-2.3+deb11u1 amd64 [upgradable from: 4.4.1-2.3]
libdbus-1-3/stable-security 1.12.24-0+deb11u1 amd64 [upgradable from: 1.12.20-2]
libksba8/stable-security 1.5.0-3+deb11u1 amd64 [upgradable from: 1.5.0-3]
linux-image-amd64/stable-security 5.10.149-1 amd64 [upgradable from: 5.10.140-1]
tzdata/stable-updates 2021a-1+deb11u7 all [upgradable from: 2021a-1+deb11u6]
root@vpn:~#

And running apt upgrade = 502 Bad Gateway afterwards.

Looks like postgres isn’t running. Are you sure you’re running the latest Firezone Omnibus (0.6.4)? What’s the output of firezone-ctl version?
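You can double-check whether anything is listening on the bundled Postgres port with a quick probe (a sketch; assumes the postgresql-client tools are installed on the host):

# Probe the port the Omnibus-bundled Postgres listens on
pg_isready -h 127.0.0.1 -p 15432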

No; it looks like postgres isn’t running because the first step of the instructions is to stop Firezone, which in turn seems to stop postgres.

root@vpn:~# sudo firezone-ctl stop
ok: down: nginx: 0s, normally up
ok: down: phoenix: 0s, normally up
ok: down: postgresql: 0s, normally up
ok: down: wireguard: 1s, normally up
root@vpn:~#

Also, when I try to update to 0.6.4 the whole thing breaks with 502 Bad Gateway afterwards. Will this matter for the migration?

Thanks for the detailed info. I’ve updated the docs to remove that step; the migration script handles stopping the services for you.

For the upgrade, you’ll need to make sure the Omnibus services are up. So the procedure would be:

# Refresh the package lists, then upgrade the firezone package to 0.6.4
apt-get update
apt-get upgrade
# Reconfigure
firezone-ctl reconfigure
# Restart
firezone-ctl restart
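Then confirm the upgrade took effect before re-running the migration script:

firezone-ctl version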

At this point, assuming your Omnibus services are online and healthy, you should be able to run the migration script.

If you’re still seeing the gateway error pre-migration, it means phoenix is down. The Omnibus services should be healthy before migrating. There should be something relevant in /var/log/firezone/phoenix/current; post that here and it should shed some light.
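A quick way to check service state and pull the recent phoenix output (a sketch; the log path is the Omnibus default):

# Show which Omnibus services are up or down
sudo firezone-ctl status
# Print the last 100 lines of the phoenix log
sudo tail -n 100 /var/log/firezone/phoenix/current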

Something odd going on here. I was able to update to 0.6.4 OK now and everything worked, so I installed Docker and ran the migration script; all seemed to go OK. However, going to the Firezone IP in Firefox gave SSL_ERROR_INTERNAL_ERROR_ALERT, and Opera and Edge gave ERR_SSL_PROTOCOL_ERROR. I tried clearing the cache, private browsing, etc.

Rebooted Debian and everything is working again, but I guess it’s the Omnibus version serving, since no containers are running; see below.

root@vpn:~# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
root@vpn:~# docker container ls
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
root@vpn:~#

I installed Portainer to see what was going on. All 3 containers had state ‘exited’; when I try to start them, postgres starts but the other 2 say ‘Failed with status code 500’.

I see in the caddy logs that it’s trying to pull a ZeroSSL cert. Ports 443/80 are in use by another web server, and I don’t really want the admin UI exposed to the internet anyway. I assume this is the problem?
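For reference, this is how I checked what was bound to 80/443 (a standard ss one-liner):

# List listening TCP sockets and owning processes on ports 80/443
sudo ss -tlnp | grep -E ':(80|443)\b'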

Hey @HenrysCat, yeah, you’ll want to customize the docker-compose.yml caddy service’s command line to specify --internal-certs. Make sure it’s something like:

    command: caddy reverse-proxy --to firezone:13000 --from ${EXTERNAL_URL:?err} --internal-certs
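After editing docker-compose.yml, recreate the caddy service so the new command takes effect (a sketch; assumes you run it from the directory containing docker-compose.yml):

    # Recreate only the caddy container so it picks up the new command
    docker compose up -d caddy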

Thanks, I tried modifying the command in Portainer, but I still get ‘status code 500’.

Has anyone been able to get this to work?

Hi @HenrysCat – the “request failed with status code 500” is, I think, coming from Portainer.

Could you please post the logs of the caddy and firezone services? You’re likely hitting SSL issues with Caddy and need to configure it with internal TLS.
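To collect those, run from the directory containing docker-compose.yml (a sketch; service names taken from the compose file above):

docker compose logs caddy
docker compose logs firezone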

The ERR_SSL_PROTOCOL_ERROR is coming from Caddy.

Additionally, you’d want to disable the Omnibus systemd unit to prevent Omnibus services from starting up. Newer versions of the migration script handle this for you.

sudo systemctl disable firezone-runsvdir-start
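You can also stop any still-running instance and confirm the unit is disabled (same unit name as above):

# Stop the Omnibus supervisor if it's still running
sudo systemctl stop firezone-runsvdir-start
# Should print 'disabled' after the disable command above
systemctl is-enabled firezone-runsvdir-start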

Just tried again, so I should be on the latest version of the migration script.
Caddy logs:

{"level":"warn","ts":1666636063.6912866,"logger":"admin","msg":"admin endpoint disabled"}
{"level":"info","ts":1666636063.6932266,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"proxy","https_port":443}
{"level":"info","ts":1666636063.6933522,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"proxy"}
{"level":"info","ts":1666636063.7004302,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000acfc0"}
{"level":"warn","ts":1666636063.7150276,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
{"level":"info","ts":1666636063.7154758,"msg":"Warning: \"certutil\" is not available, install \"certutil\" with \"apt install libnss3-tools\" or \"yum install nss-tools\" and try again"}
{"level":"info","ts":1666636063.7155437,"msg":"define JAVA_HOME environment variable to use the Java trust"}
{"level":"info","ts":1666636063.753252,"msg":"certificate installed properly in linux trusts"}
{"level":"info","ts":1666636063.7536955,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1666636063.7538772,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
{"level":"info","ts":1666636063.7540197,"logger":"http.log","msg":"server running","name":"proxy","protocols":["h1","h2","h3"]}
{"level":"info","ts":1666636063.754092,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1666636063.7541406,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["vpn.mydomain.com"]}
Caddy proxying https://vpn.mydomain.com -> firezone:13000
{"level":"info","ts":1666636063.7547328,"logger":"tls.obtain","msg":"acquiring lock","identifier":"vpn.mydomain.com"}
{"level":"info","ts":1666636063.755072,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1666636063.7552214,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1666636063.7624998,"logger":"tls.obtain","msg":"lock acquired","identifier":"vpn.mydomain.com"}
{"level":"info","ts":1666636063.7627184,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"vpn.mydomain.com"}
{"level":"info","ts":1666636063.7638454,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"vpn.mydomain.com"}
{"level":"info","ts":1666636063.76395,"logger":"tls.obtain","msg":"releasing lock","identifier":"vpn.mydomain.com"}
{"level":"warn","ts":1666636063.7643042,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [vpn.mydomain.com]: no OCSP server specified in certificate","identifiers":["vpn.mydomain.com"]}

Firezone log:

18:29:32.611 [info] Migrations already up
18:29:34.609 [info] Running FzHttpWeb.Endpoint with cowboy 2.9.0 at 0.0.0.0:13000 (http)
18:29:34.614 [info] Access FzHttpWeb.Endpoint at http://localhost:13000

Tried again and I'm now getting SSL_ERROR_INTERNAL_ERROR_ALERT; adding --internal-certs does not work.
Tried access with host.mydomain.com and get Error code: SEC_ERROR_UNKNOWN_ISSUER.
Could you not add an option to the migration script to use self-signed certs, or even omit SSL altogether?

Update:
caddy reverse-proxy --from :80 --to firezone:13000 --internal-certs

The above gets me to the login page, but when I enter my email and password and click Sign In, the page just says ‘Forbidden’ in plain text in the top-left corner.

Try setting the SECURE_COOKIES ENV var to false for the Firezone container if you’d like to disable SSL. Browsers won’t send cookies marked Secure over plain HTTP, which is likely what’s behind the ‘Forbidden’ response.
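A minimal sketch of how that might look, assuming your compose setup passes a .env file through to the firezone container (e.g. via env_file); otherwise put it under the service's environment: key in docker-compose.yml:

# In the .env file next to docker-compose.yml
SECURE_COOKIES=false

Then recreate the container so the variable is picked up:

docker compose up -d firezone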

Thank you, it finally works after 8 days of trying.

Caddy service’s command line:
caddy reverse-proxy --from :80 --to firezone:13000 --internal-certs

Set the SECURE_COOKIES ENV var to false.

Add this to your docs to help others :wink: