Docker, Nginx Proxy Manager - 502 Bad Gateway, [error] connect() failed (111: Connection refused)

Hi all,

I am new to Firezone and attempting to install and configure it as a proof-of-concept system. I am looking to move a few systems from OpenVPN (OpenVPN Access Server) to WireGuard, with Firezone as the management system. I am having problems and only get 502 Bad Gateway from the clients. I have attempted to access the system via its local IP (as stated in the documentation) but no luck on that front either.

The environment is a NAT'd private subnet, 192.168.0.0/24. The Docker server dedicated to Firezone is running Ubuntu Server 20.04 LTS in a Proxmox VM (1 core, 2 GB RAM, 16 GB disk), with Docker 24.0.5, at IP 192.168.0.36. The network is fronted by Nginx Proxy Manager running on a different Docker server at 192.168.0.250.

I have set the environment variables to the following for testing (the whole 192.168.0.0/24 subnet):

PHOENIX_EXTERNAL_TRUSTED_PROXIES=["192.168.0.0/24"]
PHOENIX_PRIVATE_CLIENTS=["192.168.0.0/24"]
SECURE_COOKIES=false

Browsers pointing at the server get 502 Bad Gateway errors, whether from its private network or routed from the public web. When attempting to connect from a browser directly to the Firezone server I get “Secure Connection Failed”: “An error occurred during a connection to 192.168.0.36. Peer reports it experienced an internal error. Error code: SSL_ERROR_INTERNAL_ERROR_ALERT”

When using curl from the local network (the server running NPM) to the Firezone IP:

root@dlf-docker:/# curl -v https://192.168.0.36
*   Trying 192.168.0.36:443...
* TCP_NODELAY set
* Connected to 192.168.0.36 (192.168.0.36) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, internal error (592):
* error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error

When using curl from the local network (the server running NPM) to the Firezone URL:

root@dlf-docker:/# curl -v https://fzeh01.XXXXXXX.com/
*   Trying 108.173.66.19:443...
* TCP_NODELAY set
* Connected to fzeh01.XXXXXXX.com (108.173.66.19) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=fzeh01.XXXXXXX.com 
*  start date: Aug 28 07:01:44 2023 GMT
*  expire date: Nov 26 07:01:43 2023 GMT
*  subjectAltName: host "fzeh01.XXXXXXX.com " matched cert's "fzeh01.XXXXXXX.com "
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5648f1f24300)
> GET / HTTP/2
> Host: fzeh01.XXXXXXX.com 
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 502
< server: openresty
< date: Tue, 29 Aug 2023 23:45:37 GMT
< content-type: text/html
< content-length: 154
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty</center>
</body>
</html>
* Connection #0 to host fzeh01.XXXXXXX.com left intact

Thanks in advance for the help.

If you’re terminating SSL you’ll need to remove SECURE_COOKIES=false.

502 means the upstream is down. Can you share the firezone container logs? You should see the culprit in there.

I did that before posting, based on your replies helping others.

I have been at it for several days trying to get Firezone working and am not getting anything more than clues from the system.

All inbound HTTP and HTTPS to the subnet is first passed to Nginx Proxy Manager and then forwarded to the appropriate server. NPM is handling all SSL certs (Let’s Encrypt). I cannot pass TCP 80 through to Firezone.

How can I install a self-signed certificate for https between NPM and Caddy?
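(For reference, one way to do this, assuming Caddy v2, is the `tls internal` site directive, which has Caddy serve a certificate from its own locally generated CA instead of requesting one via ACME. NPM does not verify upstream certificates by default, so it should accept it. A sketch:)

```caddyfile
https://fzeh01.XXXXXXX.com {
  # Serve a self-signed cert from Caddy's local CA; no ACME needed.
  tls internal
  reverse_proxy 172.25.0.100:13000
}
```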

I have reinstalled several times.
Updated SECURE_COOKIES="false" to PHOENIX_SECURE_COOKIES="false"
Added debug logging to Caddy in the docker-compose.yml

https://fzeh01.XXXXXXX.com  {
  log {
    level DEBUG
    output stdout
  }
  reverse_proxy * 172.25.0.100:13000
}

The Caddy logs are showing that the acme_client is attempting to get a cert, which is not possible on this system.

2023-09-01T06:08:51.301719878Z ERR ts=1693548531.3014922 logger=http.acme_client msg=validating authorization identifier=fzeh01.XXXXXXX.com  problem={"type":"","title":"","detail":"","instance":"","subproblems":[]} order=https://acme.zerossl.com/v2/DV90/order/ZcsyykMtSHiVeo962MzSbQ attempt=1 max_attempts=3
2023-09-01T06:08:51.301724564Z ERR ts=1693548531.3015406 logger=tls.obtain msg=could not get certificate from issuer identifier=fzeh01.XXXXXXX.com  issuer=acme.zerossl.com-v2-DV90 error=HTTP 0  - 
2023-09-01T06:08:51.301727291Z ERR ts=1693548531.3015683 logger=tls.obtain msg=will retry error=[fzeh01.XXXXXXX.com ] Obtain: [fzeh01.XXXXXXX.com ] solving challenge: fzeh01.XXXXXXX.com : [fzeh01.XXXXXXX.com ] authorization failed: HTTP 0  -  (ca=https://acme.zerossl.com/v2/DV90) attempt=5 retrying_in=600 elapsed=656.346639265 max_duration=2592000
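(Since TCP 80 never reaches Caddy, the HTTP-01 challenge can never complete, which is why the acme_client keeps retrying. A sketch of silencing this, assuming Caddy v2, is the `auto_https off` global option, so Caddy serves plain HTTP and leaves TLS termination to NPM:)

```caddyfile
{
  # Disable automatic HTTPS and certificate management entirely;
  # NPM terminates TLS in front of this server.
  auto_https off
}

http://fzeh01.XXXXXXX.com {
  reverse_proxy 172.25.0.100:13000
}
```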

When forwarding HTTPS from NPM to Caddy there are no entries in the Caddy logs.
When I set NPM to send all traffic for fzeh01.XXXXXXX.com (192.168.0.233) HTTP 80 -> Caddy 192.168.0.36, a lot showed up in the logs, but with fzeh01.XXXXXXX.com (192.168.0.233) HTTPS 443 -> Caddy 192.168.0.36 there were no new log entries.

Any ideas on where to go with this?

Thanks in advance

Got the management interface of Firezone working. The issue seems to have been with Nginx Proxy Manager connecting to Caddy the entire time.
Simple fix: remove Caddy from the equation.
After using the Firezone automatic install, remove all the Firezone containers and images from Docker.
Edit the docker-compose.yml so it contains only Firezone and Postgres.
Expose port 13000 on the Firezone container.
Edit the Nginx Proxy Manager configuration for Firezone.
Keep the old .env file.
docker compose up -d

Modified to work with Nginx Proxy Manager
NPM is running on a different docker server from Firezone

Nginx Proxy Manager:
Scheme - HTTP
Forward Hostname / IP - Address of your Firezone server
Forward Port - 13000
Block Common Exploits - On
Websockets Support - On

SSL
I use a Let's Encrypt cert via NPM
Force SSL - ON
http/2 Support - On
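For reference, the NPM settings above correspond roughly to the following nginx server block (a sketch only — NPM generates its own config, and the certificate paths here are illustrative):

```nginx
server {
    listen 443 ssl http2;
    server_name fzeh01.XXXXXXX.com;

    # Let's Encrypt cert managed by NPM (paths illustrative)
    ssl_certificate     /etc/letsencrypt/live/npm-1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;

    location / {
        # Scheme HTTP, Forward Port 13000
        proxy_pass http://192.168.0.36:13000;

        # Websockets Support - On
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Forwarding headers Firezone needs behind a trusted proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```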

docker-compose.yml used

############################
# Example compose file for production deployment on Linux.
#
# Note: This file is meant to serve as a template. Please modify it
# according to your needs. Read more about Docker Compose:
#
# https://docs.docker.com/compose/compose-file/
#
# 09/01/2023 V:.01
# Modified to work with Nginx Proxy Manager 
#    NPM is running on a different docker server from Firezone
# Nginx Proxy Manager:
#   Scheme - http
#   Forward Hostname / IP - Address of your Firezone server
#   Forward Port - 13000
#   Block Common Exploits - On
#   Websockets Support - On
#  SSL
#   I use a Let's Encrypt Cert
#   Force SSL - ON
#   http/2 Support - On
#

version: '3.7'

x-deploy: &default-deploy
  restart_policy:
    condition: unless-stopped
    delay: 5s
#    max_attempts: 3
    window: 120s
  update_config:
    order: start-first

networks:
  firezone-network:
    enable_ipv6: true
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/16
        - subnet: fcff:3990:3990::/64
          gateway: fcff:3990:3990::1

services:
  firezone:
    image: firezone/firezone:${VERSION:-latest}
    ports:
      - ${WIREGUARD_PORT:-51820}:${WIREGUARD_PORT:-51820}/udp
      - 13000:13000
    env_file:
      # This should contain a list of env vars for configuring Firezone.
      # See https://www.firezone.dev/docs/reference/env-vars for more info.
      - ${FZ_INSTALL_DIR:-.}/.env
    volumes:
      # IMPORTANT: Persists WireGuard private key and other data. If
      # /var/firezone/private_key exists when Firezone starts, it is
      # used as the WireGuard private key. Otherwise, one is generated.
      - ${FZ_INSTALL_DIR:-.}/firezone:/var/firezone
    cap_add:
      # Needed for WireGuard and firewall support.
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      # Needed for masquerading and NAT.
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv4.ip_forward=1
      - net.ipv6.conf.all.forwarding=1
    depends_on:
      - postgres
    networks:
      firezone-network:
        ipv4_address: 172.25.0.100
        ipv6_address: fcff:3990:3990::99
    deploy:
      <<: *default-deploy

  postgres:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${DATABASE_NAME:-firezone}
      POSTGRES_USER: ${DATABASE_USER:-postgres}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD:?err}
    networks:
      - firezone-network
    deploy:
      <<: *default-deploy
      update_config:
        order: stop-first

# Postgres needs a named volume to prevent perms issues on non-linux platforms
volumes:
  postgres-data: