Hardening Admin Access With Nginx

I run several web applications on my server, and many of them have admin interfaces. Since I rarely need admin access, I want these interfaces to be reachable only from my LAN, not from the Internet. This requires a different approach for each application, because every web application is structured differently. In this post, I want to showcase some of these approaches, starting with a simple example and moving on to more complex ones.

The simplest example is a web application that exposes its entire admin interface under a single URI. This is easy: we can define that URI as a location block and put the specific security options in that block.

upstream application {
    server unix:/var/run/application.socket;
}

server {
    server_name application.example.com;

    location /admin/ {
        allow 192.168.1.0/24;
        deny all;
        proxy_pass http://application;
    }

    location / {
        proxy_pass http://application;
    }

}

This works very well if your application only exposes admin functionality under a single endpoint. Most applications don't: frequently they have separate login and admin panel URLs. So even if you protect the admin panel, logging in as an administrative user is still possible.

To solve this problem, we first have to create a location block for the login endpoint, in which we disallow administrative users from logging in from the Internet. To figure out how to do that, we have to understand the login process.

Most web applications log in with a POST request whose Content-Type is application/x-www-form-urlencoded. In this case, the body of the HTTP message is a string containing all information as key-value pairs, separated by ampersands (&). For a login, this looks something like:

userName=admin&password=rosebud

We need the name of the key in order to parse it in Nginx. The easiest way to find it is with the developer tools of your favorite browser: watch the Network tab while logging in, and look at the request body of the relevant POST request (this is also a good way to get the URL for your location block).

Nginx can't natively parse the bodies of these requests, but the form-input-nginx-module can (it depends on the ngx_devel_kit module). Please refer to the module's GitHub page for installation instructions.
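If you built it as a dynamic module, it also needs to be loaded at the top of nginx.conf, together with ngx_devel_kit. A minimal sketch (the exact .so paths depend on your build):

```nginx
# Illustrative paths; assumes both modules were compiled as dynamic modules.
load_module modules/ndk_http_module.so;             # ngx_devel_kit (dependency)
load_module modules/ngx_http_form_input_module.so;  # provides set_form_input
```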

This module is super easy to use. We declare an Nginx variable named after the form key we want, and the module saves the submitted value in it.

set_form_input $userName;

After we have parsed the value into a variable, we just need to determine whether someone is trying to log in as our admin user. We can do that with an if statement.

NB: if statements are notorious in Nginx as something you need to be careful with: there is even a whole piece of documentation about how if is evil in location contexts. If you want to fully understand why I do what I do, please read it.

tl;dr: it's fine to use in a location context if there's no alternative module, and if the only thing we do inside the if body is return (which, via error_page, jumps to another location).

This is realised by defining our target location as an error page and then returning that error code. In my first version, the location block looked like this:

location /user/login {
    set_form_input $userName;

    error_page 403 = @with_admin;
    if ($userName = admin) {
        return 403;
    }
    proxy_pass http://application;
}

This location block has a problem though: can you figure out what it is?

The problem is that exact matching allows an attacker to pad the string with whitespace: a body like userName=%20admin (a leading space, sent as %20 or +) will not match our string, but since most web applications strip whitespace from login fields, the login will still work.

Thanks to Besen for spotting this problem!

It's better to use substring matching. This stops an attacker from padding the string with characters that might be stripped by the application itself (note that it also blocks any username that merely contains "admin"):

location /user/login {
    set_form_input $userName;

    error_page 403 = @with_admin;
    if ($userName ~ admin) {
        return 403;
    }
    proxy_pass http://application;
}

After that, we just need to define the location block that our return statement jumps to. In that block we define our security settings and pass the credentials on to our proxied application.

location @with_admin {
    allow 192.168.1.0/24;
    deny all;
    proxy_pass http://application;
}

As a recap, the full configuration will look something like this:

upstream application {
    server unix:/var/run/application.socket;
}

server {
    server_name application.example.com;

    location /admin/ {
        allow 192.168.1.0/24;
        deny all;
        proxy_pass http://application;
    }

    location /user/login {
        set_form_input $userName;

        error_page 403 = @with_admin;
        if ($userName ~ admin) {
            return 403;
        }
        proxy_pass http://application;
    }

    location @with_admin {
        allow 192.168.1.0/24;
        deny all;
        proxy_pass http://application;
    }

    location / {
        proxy_pass http://application;
    }
}

This way, we protect our admin interface and only allow logging in as the admin user from the LAN.

This will not work for all web applications. Some applications use JSON to communicate with their backend, and as far as I know Nginx has no native way to parse JSON bodies. To deal with those, we can use OpenResty, a distribution that turns Nginx into a full-fledged Lua application server. I will write about this topic in a future blog post, so stay tuned if this is something that interests you!
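As a small preview, a rough sketch with OpenResty could look something like the following. This is illustrative only: /api/login and the userName field are assumed names, and the JSON parsing uses the cjson library bundled with OpenResty.

```nginx
location /api/login {
    error_page 403 = @with_admin;

    access_by_lua_block {
        -- Read the request body into memory (may be nil if spooled to disk
        -- for very large bodies; fine for a small login payload).
        ngx.req.read_body()
        local cjson = require "cjson.safe"  -- safe variant returns nil on bad JSON
        local body = cjson.decode(ngx.req.get_body_data() or "") or {}

        -- "userName" is an assumed field name; adjust for your application.
        -- Plain-text find() gives the same substring match as the regex above.
        if type(body.userName) == "string"
           and body.userName:find("admin", 1, true) then
            return ngx.exit(ngx.HTTP_FORBIDDEN)  -- jumps to @with_admin via error_page
        end
    }

    proxy_pass http://application;
}
```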
