
Nginx reverse proxy unavailable upstreams in Docker

In this post we will see how to fix unavailable upstreams in Nginx when using its reverse proxy capabilities with servers that aren't reachable at the time Nginx starts.

When using an nginx Docker container to reverse proxy and load balance to other containers, sometimes there is a need to define some sort of start order in our stack, due to inter-dependencies between services.

For instance, we recently had an issue here @fiercely where having a stack starting order defined with nginx first was causing trouble, something similar to a circular dependency, where you start questioning which came first, the chicken or the egg.

First of all, there shouldn't be a starting order, and circular dependencies shouldn't exist: we should be able to get our containers/servers running independently of each other, and none of them should fail. However, sometimes, and especially on large projects, things might not be that simple.

Yet nginx, much like haproxy or any other load balancer / reverse proxy, should never, ever depend on the prior existence of the services it will proxy. It should not fail to perform its duty if any of the services it is proxying fails to start, nor cause a blackout in mid-production on all services proxied by nginx.

Hence the need for resolvers, something we already approached in our Haproxy SSL termination in docker post, and which we will now address for nginx as well.

Resolvers allow us to overcome the issues addressed below.

Nginx Failing when upstreams are unavailable

If nginx fails for some reason when starting, go check out the logs by running

cat /var/log/nginx/error.log

If you happen to find a log message similar to the one below,

NameOfService could not be resolved (1: Host not found)

and your nginx.d/someserver.conf has a configuration with an upstream directive, such as the one below:

upstream upstreamName {
    server nameofhost:8443;
}

server {
    listen 80;
    listen [::]:80;
    server_name nameofhost;
    return 307 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nameofhost;

    location / {
        proxy_pass http://upstreamName;
        proxy_http_version 1.1;
        proxy_set_header Host               $host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto  $scheme;
    }
}

then this is exactly the above-mentioned error: upstreamName cannot be resolved when the nginx server starts.
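
You can confirm that this is the failure you are hitting by testing the configuration. A minimal sketch, assuming the default configuration paths and a container simply named nginx (the container name is just an example):

# Validate the configuration; with an unresolvable upstream hostname nginx
# refuses to start and reports a "host not found in upstream" [emerg] error.
nginx -t

# The same check, run from outside a running Docker container.
docker exec nginx nginx -t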

Using nginx resolvers

What is actually causing the issue is the proxy_pass directive, which tries to resolve the upstream server that is currently unavailable.

So how do we solve the problem of an upstream server that is not resolvable at boot time when reverse proxying?

By using the resolver approach. The resolver directive can be defined at the http, server or location block level; here we will set it inside the location block, within the server block scope.

What the resolver directive does, together with a variable in proxy_pass, is defer name resolution to request time: nginx no longer throws a non-recoverable error at startup when it can't resolve the server we are going to proxy, and it will keep trying to resolve it until it can.

The resolver directive actually points to a DNS server, defaulting to the standard DNS port 53, so we only need to point it at a DNS server that can tell nginx whether the upstream server name resolves or not.

So here is an example server configuration using the resolver directive with a public DNS server, Google's (8.8.8.8).

server {
    listen 80;
    listen [::]:80;
    server_name nameofhost;
    return 307 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nameofhost;

    location / {
        resolver 8.8.8.8 valid=10s;
        set $upstreamName nameofhost:8443;
        proxy_pass http://$upstreamName;
        proxy_http_version 1.1;
        proxy_set_header Host               $host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto  $scheme;
    }
}

Yes, just set the upstream address in a variable; this allows for dynamic IP resolution provided by the DNS server.

So the important part of the configuration is:

resolver 8.8.8.8 valid=10s;
set $upstreamName nameofhost:8443;
proxy_pass http://$upstreamName;

However, this will only work if your services actually sit behind a domain name resolvable by Google DNS.

You can use any private or public DNS server to accomplish this.
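
Before wiring a DNS server into nginx, it is worth checking that it can actually resolve your upstream. A quick sketch using dig (from the dnsutils/bind-utils package), with service.example.com and 10.0.0.2 as placeholders for your real domain and your own DNS server:

# Ask Google's public DNS for the upstream hostname directly.
dig @8.8.8.8 service.example.com +short

# The same query against a private DNS server of your own (placeholder IP).
dig @10.0.0.2 service.example.com +short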

Since we are solving the issue in Docker, where it is most prominent given the ephemeral nature of some containers, let's check below how to set it up in Docker.

Nginx resolvers in Docker-Compose

When using nginx in a docker-compose service declaration, the services, unless specified otherwise, will all reside on the same network, and each container will have access to a Docker-created DNS server that always lives at the IP 127.0.0.11. This grants the containers a service discovery mechanism that keeps working when containers are scaled, stopped, and so on.

So how do we set up the nginx configuration file if we have access to this DNS server? It's actually quite simple.

server {
    listen 80;
    listen [::]:80;
    server_name nameofhost;
    return 307 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nameofhost;

    location / {
        resolver 127.0.0.11 valid=10s;
        set $upstreamName nameofhost:8443;
        proxy_pass http://$upstreamName;
        proxy_http_version 1.1;
        proxy_set_header Host               $host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto  $scheme;
    }
}
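
With the stack running, you can check from inside the nginx container that the embedded DNS answers for the service name, and reload nginx after configuration changes. A small sketch, assuming the compose service is simply called nginx and a glibc-based image where getent is available (use docker-compose exec on older Compose installs):

# Resolve the upstream service name through the container's resolver,
# which in a compose network points at Docker's embedded DNS (127.0.0.11).
docker compose exec nginx getent hosts nameofhost

# Reload nginx inside the container after editing the configuration.
docker compose exec nginx nginx -s reload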

Nginx resolvers in a single-container network

However, you may use Docker as a way to provision nginx easily, but without a service stack and the simplicity of docker-compose. Then you won't have access to a DNS server out of the box.

If, for example, you use hostnames defined in /etc/hosts to keep things simple, then you won't be able to use a public DNS such as Google's 8.8.8.8 for name resolution, as most of the time you won't even have a public domain for that service; you might even be proxying the service for that very reason.

For such cases we can use dnsmasq, a very lightweight private and local DNS server.

The installation is quite simple, as it is in the main package repositories of most Linux distros.

Installing dnsmasq on CentOS/RedHat

sudo yum install dnsmasq

Installing dnsmasq on Debian/Ubuntu

sudo apt-get install dnsmasq

And starting dnsmasq

sudo systemctl start dnsmasq

When you start dnsmasq, it will run a DNS server at 127.0.0.1 and read the /etc/hosts file as well as /etc/resolv.conf.

This allows you to have your hosts defined in /etc/hosts and resolved by dnsmasq, which then responds to nginx's resolver queries.

If the service is down or doesn't exist in the file, there is no harm done, as nginx will keep querying for it based on the valid time set on the resolver.
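
As a quick sanity check, you can add an entry to /etc/hosts and query dnsmasq directly. A small sketch with a placeholder container IP (172.17.0.3), assuming dig is installed:

# Map the upstream hostname to its container IP (172.17.0.3 is a placeholder).
echo "172.17.0.3 nameofhost" | sudo tee -a /etc/hosts

# dnsmasq only reads /etc/hosts at startup, so restart it (more on this below).
sudo systemctl restart dnsmasq

# dnsmasq should now answer for the hostname on 127.0.0.1.
dig @127.0.0.1 nameofhost +short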

So this is what our example file would look like

server {
    listen 80;
    listen [::]:80;
    server_name nameofhost;
    return 307 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nameofhost;

    location / {
        resolver 127.0.0.1 valid=10s;
        set $upstreamName nameofhost:8443;
        proxy_pass http://$upstreamName;
        proxy_http_version 1.1;
        proxy_set_header Host               $host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto  $scheme;
    }
}

Problem using dnsmasq with /etc/hosts

Dnsmasq will only evaluate the hosts file when it starts, so any subsequent change to the file won't be visible to dnsmasq.

If you plan on adding new entries to /etc/hosts dynamically or by hand, then you must reload dnsmasq.

You can do it by triggering

systemctl restart dnsmasq

If you are doing it dynamically, then an inotify-based script would be your best bet, triggering the restart automatically on file changes.
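
A minimal sketch of such a watcher, assuming inotify-tools is installed; it restarts dnsmasq whenever /etc/hosts is written to:

#!/bin/sh
# Watch /etc/hosts and restart dnsmasq on every change.
while inotifywait -e modify -e close_write /etc/hosts; do
    systemctl restart dnsmasq
done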

And that's it, hope this post helped you somehow.

This post is licensed under CC BY 4.0 by the author.