Nginx is the web server powering one-third of all websites in the world. Detectify Crowdsource has detected some common Nginx misconfigurations that, if left unchecked, leave your website vulnerable to attack. Here’s how to find some of the most common misconfigurations before an attacker exploits them.
UPDATE: Detectify Security Advisor, Frans Rosen, published some research that deep dives into some novel web server misconfigurations on Detectify Labs in his post: Middleware, middleware everywhere – and lots of misconfigurations to fix
Nginx is one of the most commonly used web servers on the Internet due to it being lightweight, modular, and having a user-friendly configuration format. At Detectify, we scan for misconfigurations and security vulnerabilities in Nginx for thousands of customers. Our Crowdsource network regularly submits new and interesting vulnerabilities affecting Nginx, which we then implement as security tests in our web application scanner.
We analyzed almost 50,000 unique Nginx configuration files downloaded from GitHub with Google BigQuery. With this data, we could find out how common different misconfigurations are.
This article will shine some light on the following Nginx misconfigurations:
- Missing root location
- Unsafe variable use
- Raw backend response reading
- merge_slashes set to off
Missing root location
server {
    root /etc/nginx;

    location /hello.txt {
        try_files $uri $uri/ =404;
        proxy_pass http://127.0.0.1:8080/;
    }
}
The root directive specifies the root folder for Nginx. In the above example, the root folder is /etc/nginx, which means that we can reach files within that folder. The above configuration does not have a location for / (location / {...}), only for /hello.txt. Because of this, the root directive will be globally set, meaning that requests to / will take you to the local path /etc/nginx.
A request as simple as GET /nginx.conf would reveal the contents of the Nginx configuration file stored in /etc/nginx/nginx.conf. If the root is set to /etc, a GET request to /nginx/nginx.conf would reveal the configuration file. In some cases it is possible to reach other configuration files, access logs and even encrypted credentials for HTTP basic authentication.
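One way to avoid this, sketched below with an assumed web root of /var/www/html and a catch-all 404 (both choices are illustrative, not from the original configuration), is to point root at a directory that only contains files meant to be public and to add an explicit location / block so unmatched requests don’t fall through to the global root:

server {
    # assumption: public static content lives in a dedicated web root, not /etc/nginx
    root /var/www/html;

    location / {
        # unmatched paths are rejected instead of being served from the root folder
        return 404;
    }

    location /hello.txt {
        try_files $uri $uri/ =404;
    }
}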
Of the nearly 50,000 Nginx configuration files we collected, the most common root paths were the following:
Off-By-Slash
server {
    listen 80 default_server;
    server_name _;

    location /static {
        alias /usr/share/nginx/static/;
    }

    location /api {
        proxy_pass http://apiserver/v1/;
    }
}
With the Off-by-slash misconfiguration, it is possible to traverse one step up the path due to a missing trailing slash. Orange Tsai made this technique well known in his Black Hat talk “Breaking Parser Logic!”, in which he showed how a missing trailing slash in the location directive combined with the alias directive can make it possible to read the source code of the web application. What is less well known is that this also works with other directives like proxy_pass. Let’s break down what is happening and why this works.
location /api {
    proxy_pass http://apiserver/v1/;
}
With an Nginx server running the above configuration, reachable at http://server, it might be assumed that only paths under http://apiserver/v1/ can be accessed.
http://server/api/user -> http://apiserver/v1//user
When http://server/api/user is requested, Nginx will first normalize the URL. It then looks to see if the prefix /api matches the URL, which it does in this case. The prefix is then removed from the URL, so the path /user is left. This path is then added to the proxy_pass URL, which results in the final URL http://apiserver/v1//user. Note that there is a double slash in the URL, since the location directive does not end in a slash while the proxy_pass URL path ends with a slash. Most web servers will normalize http://apiserver/v1//user to http://apiserver/v1/user, which means that even with this misconfiguration everything will work as expected and it could go unnoticed.
This misconfiguration can be exploited by requesting http://server/api../, which will result in Nginx requesting the URL http://apiserver/v1/../ that is normalized to http://apiserver/. The impact that this can have depends on what can be reached when this misconfiguration is exploited. It could, for example, lead to the Apache server-status being exposed with the URL http://server/api../server-status, or it could make paths accessible that were not intended to be publicly accessible.
One sign that an Nginx server has this misconfiguration is that it still returns the same response when a slash in the URL is removed. For example, if both http://server/api/user and http://server/apiuser return the same response, the server might be vulnerable. This would lead to the following requests being sent:
http://server/api/user -> http://apiserver/v1//user
http://server/apiuser  -> http://apiserver/v1/user
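One way to close this gap, sketched here against the apiserver backend from the example above, is to keep the location prefix and the proxy_pass (or alias) path consistent, for example by ending both with a slash so that a request to /api../ no longer matches the prefix:

location /api/ {
    # the trailing slash on the prefix means /api../ no longer matches this location
    proxy_pass http://apiserver/v1/;
}

location /static/ {
    # the same applies to alias: keep the prefix and the alias path consistent
    alias /usr/share/nginx/static/;
}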
Unsafe variable use
Some frameworks, scripts and Nginx configurations unsafely use the variables stored by Nginx. This can lead to issues such as XSS, bypassing HttpOnly-protection, information disclosure and in some cases even RCE.
SCRIPT_NAME
With a configuration such as the following:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
The main issue is that Nginx will send any URL ending in .php to the PHP interpreter even if the file doesn’t exist on disk. This is a common mistake in many Nginx configurations, as outlined in the “Pitfalls and Common Mistakes” document created by Nginx. An XSS will occur if the PHP script tries to define a base URL based on SCRIPT_NAME:
<?php
if (basename($_SERVER['SCRIPT_NAME']) == basename($_SERVER['SCRIPT_FILENAME']))
    echo dirname($_SERVER['SCRIPT_NAME']);
?>

GET /index.php/<script>alert(1)</script>/index.php

SCRIPT_NAME = /index.php/<script>alert(1)</script>/index.php
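A common mitigation is to only hand requests to the PHP interpreter when the requested file actually exists on disk; a sketch based on the configuration above:

location ~ \.php$ {
    # return 404 instead of forwarding non-existent .php paths to the interpreter
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}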
Usage of $uri can lead to CRLF Injection
Another misconfiguration related to Nginx variables is using $uri or $document_uri instead of $request_uri. $uri and $document_uri contain the normalized URI, and the normalization in Nginx includes URL-decoding the URI. Volema found that $uri is commonly used when creating redirects in the Nginx configuration, which results in a CRLF injection.
An example of a vulnerable Nginx configuration is:
location / {
    return 302 https://example.com$uri;
}
The new line characters for HTTP requests are \r (Carriage Return) and \n (Line Feed). URL-encoding the new line characters results in %0d%0a. When these characters are included in a request like http://localhost/%0d%0aDetectify:%20clrf to a server with this misconfiguration, the server will respond with a new header named Detectify, since the $uri variable contains the URL-decoded new line characters.
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.19.3
Content-Type: text/html
Content-Length: 145
Connection: keep-alive
Location: https://example.com/
Detectify: clrf
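A safer version of the redirect, sketched below, uses $request_uri instead, since it contains the original, still URL-encoded request URI, so %0d%0a is never decoded into real line breaks in the response headers:

location / {
    # $request_uri is not URL-decoded, so %0d%0a stays encoded in the Location header
    return 302 https://example.com$request_uri;
}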
Learn more about the risks of CRLF injection and response splitting at https://blog.detectify.com/2019/06/14/http-response-splitting-exploitations-and-mitigations/.
Any variable
In some cases, user-supplied data can be treated as an Nginx variable. It’s unclear why this may be happening, but it’s not that uncommon, nor is it easy to test for, as seen in this H1 report. If we search for the error message, we can see that it is found in the SSI filter module, revealing that this is due to SSI.
One way to test for this is to set a referer header value:
$ curl -H 'Referer: bar' 'http://localhost/foo$http_referer' | grep 'foobar'
We scanned for this misconfiguration and found several instances where a user could print the value of Nginx variables. The number of vulnerable instances found has declined, which could indicate that this has been patched.
Raw backend response reading
With Nginx’s proxy_pass, there’s the possibility to intercept errors and HTTP headers created by the backend. This is very useful if you want to hide internal error messages and headers so they are instead handled by Nginx. Nginx will automatically serve a custom error page if the backend answers with one. But what if Nginx does not understand that it’s an HTTP response?
If a client sends an invalid HTTP request to Nginx, that request will be forwarded as-is to the backend, and the backend will answer with its raw content. Then, Nginx won’t understand the invalid HTTP response and just forward it to the client. Imagine a uWSGI application like this:
def application(environ, start_response):
    start_response('500 Error', [('Content-Type', 'text/html'), ('Secret-Header', 'secret-info')])
    return [b"Secret info, should not be visible!"]
And with the following directives in Nginx:
http {
    error_page 500 /html/error.html;
    proxy_intercept_errors on;
    proxy_hide_header Secret-Header;
}
proxy_intercept_errors will serve a custom response if the backend responds with a status code of 300 or higher. In our uWSGI application above, we send a 500 Error, which will be intercepted by Nginx.
proxy_hide_header is pretty much self-explanatory; it will hide any specified HTTP header from the client.
If we send a normal GET request, Nginx will return:
HTTP/1.1 500 Internal Server Error
Server: nginx/1.10.3
Content-Type: text/html
Content-Length: 34
Connection: close
But if we send an invalid HTTP request, such as:
GET /? XTTP/1.1
Host: 127.0.0.1
Connection: close
We will get the following response:
XTTP/1.1 500 Error
Content-Type: text/html
Secret-Header: secret-info

Secret info, should not be visible!
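Most HTTP clients refuse to send a malformed request line like the one above, so one way to reproduce this is to write the raw bytes to the socket yourself. A minimal sketch using printf and netcat, assuming the setup above is listening on localhost port 80:

$ printf 'GET /? XTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n' | nc localhost 80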
merge_slashes set to off
The merge_slashes directive is set to “on” by default, which is a mechanism to compress two or more forward slashes into one, so /// would become /. If Nginx is used as a reverse proxy and the application that’s being proxied is vulnerable to local file inclusion, using extra slashes in the request could leave room for exploiting it. This is described in detail by Danny Robinson and Rotem Bar.
We found 33 Nginx configuration files with merge_slashes set to “off”.
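For illustration, such a configuration might look like the sketch below (the backend upstream name is hypothetical); unless the proxied application genuinely needs duplicate slashes preserved, leaving merge_slashes at its default of “on” is the safer choice:

server {
    # risky: duplicate slashes such as /// are forwarded to the backend unmerged
    merge_slashes off;

    location / {
        proxy_pass http://backend;
    }
}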
Try it yourself
We have created a GitHub repository where you can use Docker to set up your own vulnerable Nginx test server with some of the misconfigurations discussed in this article and try finding them yourself!
https://github.com/detectify/vulnerable-nginx
Conclusion
Nginx is a very powerful web server platform, and it is easy to understand why it is widely used. But its flexible configuration also leaves room for mistakes that can have a security impact. Don’t make it too easy for an attacker to hack your site by leaving these common misconfigurations unchecked. Detectify can detect all of these misconfigurations and help you secure your site from would-be attackers if you don’t have time to check manually yourself. Sign up for a free 2-week trial today to get started!
via https://blog.detectify.com/2020/11/10/common-nginx-misconfigurations/