
After listing files and setting up per-directory access, we would like to allow some users to upload files.

This solution is very hacky, but it has the advantage of requiring only a standard nginx server and a bit of JavaScript.

Nginx Configuration

First, we define our upload endpoint:

...
server {
    ...
    location ^~ /___ngxp/upload/ {
        limit_except GET POST     { deny all; }
        if ($user_authorized = 0) { return 401; }  # auth works here as well

        client_body_temp_path      /home/user/uploads/; # upload path
        client_body_in_file_only   on; # store on disk
        client_body_buffer_size    16m;
        client_max_body_size       256m;
        client_body_timeout        2h;
        proxy_set_body             off; # do not send file to proxy
        proxy_set_header           X-fileno "$upload_fileno"; # send it back to client
        proxy_pass                 http://[::1]:4000/$request_method;
    }
}

map $request_body_file $upload_fileno { # upload filename path part
    ~([^\/]*)$ $1; # set $upload_fileno to request filename
    default    "";
}

client_body_temp_path stores request bodies at the specified path on the file system. However, nginx will actually write them to disk only if the location has a proxy_pass defined.

To work around this, we define another server that listens only on localhost. With proxy_set_body off;, nginx replaces the proxied request body with the literal string "off", so the uploaded file itself is never sent to the proxy.

server {
    listen [::1]:4000;
    location /POST { return 201 "$http_x_fileno"; }
    location /GET  { return 200 "ngxp upload"; }
}

Nginx will create an ever-incrementing numbered file for every request body; the increment is non-predictable thanks to nginx's true randomness.
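
You can sanity-check the round trip from the browser console before writing any real client code (a minimal sketch, assuming the config above is live on the same origin):

fetch('/___ngxp/upload/', { method: 'POST', body: new Blob(['hello']) })
    .then((r) => r.text())
    // nginx answers 201 with the numbered temp file, e.g. "0000000001"
    .then((fileno) => console.log(fileno));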

JavaScript code

To upload files larger than client_max_body_size and preserve the original filename for the administrator, we'll have to write some client-side code.

Using JavaScript, we can split the file into chunks and upload each chunk with an XHR request. Conveniently, browser File objects have a slice function.

We also send a meta file to allow the server operator to reconstruct the file from its chunks. This file includes metadata such as a magic header, chunk size, chunk count, the size of the last chunk and the filename of each chunk on the server as sent back by nginx (X-fileno).

// files is [File]
Array.from(files).forEach((f) => {
    var chunk_cnt = 1;
    var chunk_size = f.size;
    var chunk_last_size = 0;
    if (upload_max_size > 0 && f.size > upload_max_size) {
        // upload in chunks
        chunk_cnt = f.size / upload_max_size | 0;
        chunk_size = upload_max_size;
        chunk_last_size = f.size % upload_max_size | 0;
        if (chunk_last_size > 0) {
            chunk_cnt += 1;
        }
    }
    var seeker = 0; // current offset into the file
    var promise_chain = Promise.resolve([null, []]);
    for (var i = 0; i < chunk_cnt; i++) {
        var chsz = chunk_size;
        if ((seeker + chsz) > f.size) {
            chsz = chunk_last_size;
        }
        let chunk = f.slice(seeker, seeker + chsz); // <--- slice
        seeker += chsz;
        promise_chain = promise_chain.then(([xhr, chunk_fileno]) => {
            if (xhr !== null) {
                // collect the fileno nginx sent back for the previous chunk
                chunk_fileno.push(xhr.responseText);
            }
            return upload(upload_endpoint, chunk, chunk_fileno);
        });
    }
    // finally upload the meta file
    return promise_chain.then(([xhr, chunk_fileno]) => {
        if (xhr !== null) {
            chunk_fileno.push(xhr.responseText);
        }
        var meta = meta_info(f, chunk_cnt, chunk_size, chunk_last_size, chunk_fileno);
        return upload(
            upload_endpoint, meta, chunk_fileno
        );
    }).then(upload_success, upload_error);
});
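
The snippet relies on helpers I haven't shown (upload, meta_info, upload_success, upload_error). Here is a minimal, hypothetical sketch of the two interesting ones; the project's actual implementations may differ:

// Illustrative sketches only. upload() POSTs a Blob and resolves with the
// xhr, whose responseText is the fileno echoed back by nginx, plus the
// running fileno list so the promise chain keeps threading it through.
function upload(endpoint, blob, chunk_fileno) {
    return new Promise((resolve, reject) => {
        var xhr = new XMLHttpRequest();
        xhr.open('POST', endpoint);
        xhr.onload = () => xhr.status === 201 ? resolve([xhr, chunk_fileno]) : reject(xhr);
        xhr.onerror = () => reject(xhr);
        xhr.send(blob);
    });
}

// meta_info() builds the meta file: the 16-byte marker on its own line,
// then one line of JSON with the fields the reassembly script expects.
function meta_info(f, chunk_cnt, chunk_size, chunk_last_size, chunk_fileno) {
    return new Blob(['#ngxpupload_meta\n', JSON.stringify({
        name: f.name,
        chunk_cnt: chunk_cnt,
        chunk_size: chunk_size,
        chunk_last_size: chunk_last_size,
        chunk_fileno: chunk_fileno,
    })]);
}

Putting the 16-byte #ngxpupload_meta marker on its own first line leaves the JSON on a line of its own, which is exactly the layout the reassembly script below relies on.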

With the promise_chain, each chunk of the file will be uploaded one after the other!

The chunks then wait on the server until they are reassembled.

Reassemble with a bash script

Once a file is uploaded, we're left with numbered files in the previously specified directory.

We can reconstruct the original files by searching for all files that start with the marker value #ngxpupload_meta. With this meta file, we find all chunks of a file and concatenate them into a file named $name. Finally, we remove all used chunks.

find "$1" -type f | while read -r h; do
    if [ ! -f "$h" ]; then continue; fi                                                # file may already be consumed
    read -r -n 16 head < "$h" || true                                                  # read first 16 bytes
    if [ "$head" != "#ngxpupload_meta" ]; then continue; fi                            # check the marker value
    IFS='/' read -r name chk_cnt chk_sz chk_lsz < <(
        jq -Rr 'fromjson? | [(.name | sub("/";"_";"g")), (.chunk_cnt|tonumber), (.chunk_size|tonumber), (.chunk_last_size|tonumber)] | join("/")' "$h"
    ) # extract json
    eval "chk_fileno=( $( jq -Rr --arg d "$1" 'fromjson? | .chunk_fileno[] | select(test("^[0-9]*$")) | "\($d)\(.)" | @sh' "$h" ) )"
    stats=$(stat -c '%n %s' "$h" "${chk_fileno[@]}" | sort | uniq -f1 -c)
    stats=${stats% [0-9]*}
    stats=${stats// }
    stats=${stats//$'\n'}
    expected="$(( chk_cnt - ( chk_lsz > 0 ) ))${chk_fileno[0]}${chk_sz}"
    if (( chk_lsz > 0 )); then
        expected+="1${chk_fileno[-1]}${chk_lsz}"
    fi
    [ "$stats" = "${expected}1${h}" ] || { echo "$h meta invalid" >&2; break; }
    cat "${chk_fileno[@]}" > "$name"
    rm -f "$h" "${chk_fileno[@]}"
done
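
One caveat: the script takes the upload directory as its first argument and expects a trailing slash (e.g. /home/user/uploads/), since the chunk paths are built by plain concatenation of $1 and each fileno.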

That's the gist of how nginx explorer uploads work!

There are a lot of other cool features in the UI part that I haven't written about yet, and some I've yet to implement.

Don't hesitate to go look at the project page and test nginx explorer: ./ngxp.sh servethis.