🚀 DevDrop



DevDrop is a lightweight, self-hosted file upload server built for developers. It enables fast uploads via raw binary streaming, direct file access via URL, ShareX integration, and simple deployment on a VPS or panel server.

Unlike traditional upload servers that rely on multipart/form-data, DevDrop accepts raw request bodies — this eliminates encoding overhead, making uploads significantly faster and easier to automate from CLI tools, bots, or screenshot utilities.


⚡ Quick Start

Clone the repository and install dependencies:

git clone https://github.com/KevinSiddhpura/devdrop.git
cd devdrop
npm install

Copy the example environment file and fill in your values:

cp .env.example .env

Start the server:

npm start

The server will be available at:

http://localhost:3000

You can verify it's running with:

curl http://localhost:3000
# Expected: {"status":"OK"}

🤔 Why DevDrop?

  • ⚡ Raw uploads — No multipart encoding. Files are streamed directly from the request body to disk, reducing CPU overhead and upload time, especially for large binary files.
  • 🧩 No database — Files and metadata live entirely on the filesystem. Nothing to provision, migrate, or back up beyond the uploads/ directory.
  • 📸 ShareX ready — Hit /config to download a ready-made ShareX custom uploader configuration.
  • 🔐 Simple auth — A single AUTH_TOKEN in your .env protects all write endpoints. Simple, auditable, easy to rotate.
  • 🌐 Deployable anywhere — Works on a bare VPS, a Pterodactyl panel, Railway, Render, or any environment with Node.js ≥ 18.
  • 🧠 Minimal architecture — The entire codebase is readable in one sitting. No magic, no framework bloat.

📌 Overview

DevDrop is designed around four principles:

Fast — Raw upload streaming means no base64 encoding, no multipart parsing, and no buffering the entire file into memory before writing. Files stream directly to disk.

Simple — The storage layer is just a folder. List files with ls, delete with rm, back up with rsync. No ORM, no query language.

Secure — Every write operation (upload, delete) requires a Bearer token. File paths are sanitised to prevent directory traversal attacks. Upload size is enforced before the file is written.

Automation-friendly — The API is plain HTTP. Any tool that can send a POST request — cURL, ShareX, Python, a Discord bot — can upload files without a dedicated SDK.

Ideal for:

  • Developers who want a personal file host without vendor lock-in
  • Screenshot uploading workflows (ShareX, Flameshot)
  • CLI pipelines that need a place to store build artifacts or logs
  • Bots that need to upload and share files via URL

✨ Features

📤 Upload System

DevDrop uses true raw binary streaming for uploads. Instead of sending a multipart form, you pipe the file content directly as the request body and pass the filename via a header. The body is piped straight to disk through a write stream, so files are never buffered in memory.

  • Accepts any file type (binary or text)
  • Smart filename handling — if a file with the same name already exists, a counter suffix is appended automatically to prevent collisions
  • No temporary files; data streams directly to the final destination
  • File size is enforced during the stream — oversized uploads are aborted and partial files are cleaned up

📂 File Management

  • List all uploaded files via GET /files — returns a JSON array with name, size, and creation date for each file
  • Delete a specific file via DELETE /files/:name — protected by Bearer token
  • Access any file directly via GET /files/:name — served as a static file with the correct Content-Type

📊 Monitoring

  • Storage stats via GET /storage — returns total size used and file count
  • Request logging — every request is logged to the console and written to daily log files in the logs/ directory (e.g. 2026-04-15.log)
  • File size is tracked and enforced at upload time via MAX_FILE_SIZE

πŸ” Security

  • Bearer token authentication β€” all mutating endpoints and sensitive endpoints (/config, /storage, /files) require Authorization: Bearer <TOKEN> in the request header
  • File size limits β€” requests exceeding MAX_FILE_SIZE bytes are rejected during streaming before the full file is written
  • Path traversal protection β€” filenames are sanitised using path.basename(), and names containing .., null bytes, or path separators are rejected
  • Rate limiting β€” configurable per-IP rate limits for general requests and uploads, preventing abuse and disk-fill attacks

⚡ Integrations

  • ShareX — download a ready-made custom uploader config from /config (requires authentication) and import it directly into ShareX
  • CLI / cURL — any shell script can upload files with a single curl command
  • REST API — standard JSON responses make integration with bots and scripts straightforward

πŸ—οΈ Architecture

User / Client
     β”‚
     β”‚  HTTPS request
     β–Ό
  Nginx (reverse proxy)
     β”‚  Terminates TLS, forwards to local port
     β”‚
     β–Ό
DevDrop (Node.js on port 3000)
     β”‚  Validates token, streams file to disk
     β”‚
     β–Ό
  uploads/  (local filesystem)

Nginx sits in front of DevDrop and handles:

  • TLS termination (HTTPS) via Let's Encrypt
  • Public-facing port 80/443 β†’ internal port 3000
  • Optional: request rate limiting, body size enforcement at the proxy layer

Node.js handles:

  • Authentication (Bearer token check)
  • File streaming (pipe req directly to fs.createWriteStream)
  • Filename collision avoidance
  • JSON API responses

Filesystem stores everything:

  • uploads/ β€” all uploaded files
  • logs/ β€” access and error logs

📦 Requirements

Minimum (local or development)

Requirement   Version
nvm           Latest (used to install and manage Node.js)
Node.js       ≥ 18 (installed via nvm)
npm           ≥ 8 (bundled with Node 18, managed by nvm)

Install nvm if you don't have it:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

Reload your shell so the nvm command becomes available:

source ~/.bashrc
# or if you use zsh:
source ~/.zshrc

Install and use Node 18:

nvm install 18
nvm use 18

Set Node 18 as the default so it's used automatically in new terminal sessions:

nvm alias default 18

Check your versions:

node --version   # Should print v18.x.x
npm --version
nvm --version

Recommended (production)

Component          Purpose
Ubuntu 22.04 VPS   Stable, well-documented Linux environment
Nginx              Reverse proxy + TLS termination
Certbot            Free Let's Encrypt SSL certificates
PM2                Process manager — keeps DevDrop alive after crashes and reboots
A domain name      Required for valid HTTPS

πŸ“ Project Structure

devdrop/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ config/         # Environment variable loading and validation
β”‚   β”œβ”€β”€ middleware/      # Auth token check, file size enforcement, logging
β”‚   β”œβ”€β”€ routes/          # Express route handlers for each endpoint
β”‚   β”œβ”€β”€ services/        # Business logic: file I/O, storage stats
β”‚   β”œβ”€β”€ utils/           # Helper functions: filename sanitisation, collision avoidance
β”‚   └── app.js           # Express app setup: mounts middleware and routes
β”‚
β”œβ”€β”€ uploads/             # All uploaded files land here (auto-created on first run)
β”œβ”€β”€ logs/                # Request logs with timestamps and response times
β”œβ”€β”€ server.js            # Entry point: loads config, starts HTTP server
β”œβ”€β”€ package.json         # Dependencies and npm scripts
└── .env                 # Your local environment config (never commit this)

Key files to know:

  • server.js β€” Start here. Reads .env, initialises the Express app, and starts listening.
  • src/app.js β€” Wires together middleware and routes.
  • src/routes/ β€” One file per logical group of endpoints (upload, files, storage, config).
  • src/middleware/ β€” The auth check lives here. If you want to add rate limiting, this is where it goes.

πŸ” Environment Configuration

Create your .env file by copying the example:

cp .env.example .env

Then open it in your editor and fill in every value:

nano .env

A complete .env looks like this:

# Port DevDrop listens on internally
PORT=3000

# Your public-facing domain (used to construct file URLs in responses)
DOMAIN=https://your-domain.com

# Secret token — used as the Bearer token for all write operations
# Generate a strong one with: openssl rand -hex 32
AUTH_TOKEN=your_token_here

# Directory where uploaded files are stored (relative to project root)
UPLOAD_DIR=./uploads

# Directory where logs are written
LOG_DIR=./logs

# Maximum allowed upload size in bytes
# 52428800 = 50 MB  |  104857600 = 100 MB  |  10485760 = 10 MB
MAX_FILE_SIZE=52428800

# Rate limiting — values use human-readable durations (ms syntax)
# Examples: '15m', '1h', '30s', '2d'
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100
UPLOAD_LIMIT_WINDOW=15m
UPLOAD_LIMIT_MAX=20

# Proxy — set to 1 if behind a reverse proxy (Nginx, Cloudflare, etc.)
# Required for rate limiting to use real client IPs
TRUST_PROXY=1

Generate a strong token:

openssl rand -hex 32
# Example output: a3f8c2e1d7b904f6a5c318e2d09f4b7c8a1e5d6f2c904b3a7f1e8d5c2b6a9f4e

Copy that output as your AUTH_TOKEN. Never share it or commit it to version control.


🔑 Environment Variables Reference

Variable             Required  Default    Description
PORT                 No        3000       The local port Node.js listens on
SERVER_PORT          No        —          Overrides PORT when running inside a Pterodactyl panel
DOMAIN               Yes       —          Your public URL (e.g. https://files.example.com). Included in upload response URLs
AUTH_TOKEN           Yes       —          Secret string required in the Authorization header for uploads and deletes
UPLOAD_DIR           No        ./uploads  Path to the directory where files are stored
LOG_DIR              No        ./logs     Path to the directory where log files are written
MAX_FILE_SIZE        No        52428800   Maximum request body size in bytes (default: 50 MB)
TRUST_PROXY          No        0          Set to 1 if behind a reverse proxy (Nginx, Cloudflare). Required for rate limiting to use real client IPs
RATE_LIMIT_WINDOW    No        15m        Time window for the general rate limiter (uses ms syntax)
RATE_LIMIT_MAX       No        100        Max requests per IP within the general rate limit window
UPLOAD_LIMIT_WINDOW  No        15m        Time window for the upload rate limiter
UPLOAD_LIMIT_MAX     No        20         Max uploads per IP within the upload rate limit window
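The window values use ms-style durations such as 15m or 1h. As a rough illustration of how such a value maps to milliseconds (the real implementation likely delegates to the ms package, which supports more forms than this sketch):

```javascript
// Illustrative duration parser for the '15m' / '30s' / '1h' / '2d' windows.
const UNITS = { s: 1000, m: 60 * 1000, h: 60 * 60 * 1000, d: 24 * 60 * 60 * 1000 };

function windowToMs(value) {
  const match = /^(\d+)([smhd])$/.exec(String(value).trim());
  if (!match) throw new Error(`Invalid duration: ${value}`);
  return Number(match[1]) * UNITS[match[2]];
}

module.exports = { windowToMs };
```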

▶️ Running the Server

Development (manual restarts):

npm start

Development with auto-restart on file changes (requires nodemon):

npx nodemon server.js

Check the server is responding:

curl http://localhost:3000/
# {"status":"OK"}

🌐 API Documentation

All endpoints that modify data require the Authorization header:

Authorization: Bearer YOUR_AUTH_TOKEN

Responses are always JSON unless the endpoint serves a file directly.
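For scripts that call several endpoints, a tiny helper keeps the header consistent. The helper name is illustrative, not part of DevDrop; fetch is built into Node ≥ 18:

```javascript
// Illustrative helper for building authenticated request headers.
function authHeaders(token, extra = {}) {
  return { Authorization: `Bearer ${token}`, ...extra };
}

// Example (not executed here): list files with the built-in fetch.
// const res = await fetch('https://your-domain.com/files', {
//   headers: authHeaders(process.env.AUTH_TOKEN),
// });

module.exports = { authHeaders };
```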


❤️ Health Check

Verify the server is running. No authentication required.

GET /

Request:

curl http://your-domain.com/

Response:

{ "status": "OK" }

📤 Upload a File

Upload any file by streaming its content as the raw request body.

POST /upload

Required headers:

Header          Value               Description
Authorization   Bearer YOUR_TOKEN   Authentication
x-filename      yourfile.txt        The filename to save the upload as

Body: Raw file content (binary or text). Do not use multipart/form-data.


Upload a text string:

curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: hello.txt" \
  --data "Hello, DevDrop!"

Upload a file from disk:

curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: photo.png" \
  --data-binary @/path/to/photo.png

Upload and pipe from another command:

cat /var/log/syslog | curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: syslog.txt" \
  --data-binary @-

Response:

{
  "url": "https://your-domain.com/files/hello.txt"
}

The url field is the direct public link to your uploaded file.


📂 List All Files

Returns a JSON array of all files currently in the uploads/ directory, including their name, size, and creation date.

GET /files

Request:

curl https://your-domain.com/files \
  -H "Authorization: Bearer YOUR_TOKEN"

Response:

[
  { "name": "hello.txt", "size": "14.00 Bytes", "created": "2026-04-15T10:30:00.000Z" },
  { "name": "photo.png", "size": "1.20 MB", "created": "2026-04-15T10:31:00.000Z" }
]
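The human-readable size field in the listing above could be produced by a formatter along these lines (illustrative sketch; DevDrop's actual formatter lives in src/utils/ and may differ):

```javascript
// Illustrative byte formatter matching the "14.00 Bytes" / "1.20 MB" style.
function formatSize(bytes) {
  const units = ['Bytes', 'KB', 'MB', 'GB', 'TB'];
  let n = bytes;
  let i = 0;
  while (n >= 1024 && i < units.length - 1) {
    n /= 1024;
    i += 1;
  }
  return `${n.toFixed(2)} ${units[i]}`;
}

module.exports = { formatSize };
```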

πŸ—‘οΈ Delete a File

Permanently deletes a file by name from the uploads/ directory.

DELETE /files/:name

Request:

curl -X DELETE https://your-domain.com/files/hello.txt \
  -H "Authorization: Bearer YOUR_TOKEN"

Response:

{ "message": "File deleted successfully" }

📊 Storage Statistics

Returns the total number of files and disk space used by the uploads/ directory.

GET /storage

Request:

curl https://your-domain.com/storage \
  -H "Authorization: Bearer YOUR_TOKEN"

Response:

{
  "totalFiles": 3,
  "totalSize": 2516582,
  "totalSizeFormatted": "2.40 MB"
}

📥 Access / Download a File

Serves a file directly. No authentication required — this is the public download URL returned by the upload endpoint.

GET /files/:name

Request:

curl https://your-domain.com/files/hello.txt
# Hello, DevDrop!

Or simply open the URL in a browser. The Content-Type header is set automatically based on the file extension.


📸 ShareX Configuration

Returns a ready-made ShareX custom uploader configuration as a downloadable .sxcu file. Requires authentication — your auth token is embedded in the config so ShareX can upload on your behalf.

GET /config

Request:

curl https://your-domain.com/config \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -o devdrop.sxcu

📸 ShareX Setup

ShareX is a free, open-source screenshot tool for Windows. DevDrop integrates with it out of the box.

Steps:

  1. Open your browser and navigate to:

    https://your-domain.com/config
    
  2. Your browser will download a .sxcu file (ShareX Custom Uploader config).

  3. Open ShareX. It will automatically detect and prompt you to import the config. Click Yes.

  4. In ShareX, go to Destinations → Image Uploader and select DevDrop (or whatever name appears).

  5. Take a screenshot — ShareX will upload it automatically and copy the direct URL to your clipboard. ✅


🌍 Deployment Guide


🟢 VPS Deployment (Recommended)

This is the most reliable way to run DevDrop in production. The steps below assume a fresh Ubuntu 22.04 server.


Step 1 — Update the system

apt update && apt upgrade -y

Step 2 — Install Node.js via nvm

nvm (Node Version Manager) is the recommended way to install Node.js on a server. It lets you install multiple Node versions side-by-side, switch between them instantly, and upgrade without touching system packages — far cleaner than using apt or NodeSource.

Install nvm:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

The installer appends a small loader block to your ~/.bashrc. Reload it so the nvm command is available in your current session:

source ~/.bashrc

Verify nvm is working:

nvm --version
# Expected: 0.39.7 (or whichever version you installed)

Install Node 18 and set it as default:

nvm install 18
nvm use 18
nvm alias default 18

nvm alias default 18 ensures that when PM2 or any other tool spawns a new shell (e.g. on reboot), it uses Node 18 rather than falling back to the system Node — or worse, finding no Node at all.

Verify Node and npm are available:

node --version   # Should print v18.x.x
npm --version    # Should print 9.x.x or higher
which node       # Should print something like /root/.nvm/versions/node/v18.x.x/bin/node

Useful nvm commands for future reference:

nvm ls                  # List all installed Node versions
nvm ls-remote           # List all available Node versions to install
nvm install 20          # Install a different version (e.g. Node 20)
nvm use 20              # Switch to Node 20 in this session
nvm alias default 20    # Make Node 20 the default going forward
nvm uninstall 18        # Remove a version you no longer need

Step 3 — Install Nginx

apt install -y nginx

Start and enable it so it runs on boot:

systemctl start nginx
systemctl enable nginx

Step 4 — Clone and configure DevDrop

git clone https://github.com/KevinSiddhpura/devdrop /opt/devdrop
cd /opt/devdrop
npm install

Create your environment file:

cp .env.example .env
nano .env

Fill in at minimum:

PORT=3000
DOMAIN=https://your-domain.com
AUTH_TOKEN=<output of: openssl rand -hex 32>
NODE_ENV=production
MAX_FILE_SIZE=52428800
TRUST_PROXY=1
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100
UPLOAD_LIMIT_WINDOW=15m
UPLOAD_LIMIT_MAX=20

Step 5 — Install PM2 and start DevDrop

PM2 is a process manager that keeps your Node.js app running after crashes and across reboots.

npm install -g pm2
pm2 start server.js --name devdrop

Save the PM2 process list so it survives a reboot:

pm2 save

Configure PM2 to start automatically when the server boots:

pm2 startup
# PM2 will print a command — run that command as instructed

Useful PM2 commands:

pm2 status              # See if devdrop is running
pm2 logs devdrop        # Stream live logs
pm2 restart devdrop     # Restart after changing .env or code
pm2 stop devdrop        # Stop the process
pm2 delete devdrop      # Remove from PM2 entirely

Step 6 — Configure Nginx as a reverse proxy

Create a new Nginx site config:

nano /etc/nginx/sites-available/devdrop

Paste the following, replacing your-domain.com with your actual domain:

server {
    listen 80;
    server_name your-domain.com;

    # Increase client body size to match MAX_FILE_SIZE in your .env
    # This value must be >= MAX_FILE_SIZE, otherwise Nginx will reject large uploads before they reach Node
    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the real client IP to Node.js (useful for logging)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        # Required for raw body streaming — disable request buffering
        # so the file streams through Nginx to Node without being held in memory
        proxy_request_buffering off;

        # Extend timeouts for large file uploads (default is 60s, which is too short)
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        proxy_connect_timeout 10s;
    }
}

Enable the site by creating a symlink:

ln -s /etc/nginx/sites-available/devdrop /etc/nginx/sites-enabled/

Test the Nginx config for syntax errors:

nginx -t
# Expected: syntax is ok / test is successful

Reload Nginx to apply changes:

systemctl reload nginx

Step 7 — Enable HTTPS with Let's Encrypt

Install Certbot:

apt install -y certbot python3-certbot-nginx

Obtain and install a free SSL certificate (Certbot will auto-edit your Nginx config):

certbot --nginx -d your-domain.com

Follow the interactive prompts. Certbot will:

  • Verify domain ownership via an HTTP challenge
  • Obtain the certificate
  • Update your Nginx config to redirect HTTP → HTTPS
  • Set up automatic renewal

Test automatic renewal:

certbot renew --dry-run

Your DevDrop instance is now live at https://your-domain.com. ✅


🟡 Pterodactyl Panel Deployment

Pterodactyl is a Docker-based game/application server panel. Because DevDrop runs inside a container, a few differences apply.


Step 1 — Create a new server in Pterodactyl

  • Egg: Node.js (any standard Node.js egg works)

  • Startup command:

    node server.js


Step 2 — Set environment variables in the panel

In your server's Startup tab, set:

SERVER_PORT=2031
DOMAIN=https://your-domain.com
AUTH_TOKEN=your_secure_token
NODE_ENV=production
MAX_FILE_SIZE=52428800
TRUST_PROXY=1

Use SERVER_PORT instead of PORT when running inside Pterodactyl. The panel allocates a specific port for your server; using a different value will cause the container's port mapping to fail. You should also keep TRUST_PROXY=1 because traffic goes through the host machine's proxy.


Step 3 — Configure Nginx on your host machine (outside the container)

Pterodactyl containers are not directly accessible on port 80/443. You need Nginx on the host machine (the physical server running the panel) to proxy traffic to the container's allocated port.

server {
    listen 80;
    server_name your-domain.com;

    client_max_body_size 100M;

    location / {
        # Replace SERVER_IP with your VPS/host IP
        # Replace SERVER_PORT with the port allocated in Pterodactyl (e.g. 2031)
        proxy_pass http://SERVER_IP:SERVER_PORT;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        proxy_request_buffering off;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}

After saving, test and reload:

nginx -t && systemctl reload nginx

Then add HTTPS:

certbot --nginx -d your-domain.com

Pterodactyl Notes

  • Do not use localhost in DOMAIN or proxy_pass on the host machine β€” containers run in an isolated network namespace, so localhost on the host does not reach the container. Use the host machine's actual IP address.
  • Pterodactyl manages the Node.js process inside the container, so you do not need PM2 when using the panel.
  • File storage (uploads/) lives inside the container's allocated volume. Back it up via the panel's file manager or rsync from the host.

πŸ” Security Recommendations

Use a strong, random token. Weak tokens are trivially brute-forced if your server is public.

openssl rand -hex 32

Never commit .env to version control. Add it to .gitignore:

echo ".env" >> .gitignore

Always use HTTPS. Sending your Bearer token over plain HTTP exposes it to any network observer. Let's Encrypt is free — there is no reason not to use it.

Set client_max_body_size in Nginx. This must be greater than or equal to MAX_FILE_SIZE in your .env. If Nginx's limit is smaller, Nginx will reject the upload before it reaches Node.js and return a 413 Request Entity Too Large error.

Tune rate limits for your use case. The default limits (100 requests / 20 uploads per 15 minutes per IP) are sensible for personal use. If you're running a shared instance or a bot that uploads frequently, increase UPLOAD_LIMIT_MAX in your .env.


⚡ Performance Tips

Use SSD storage. Spinning-disk HDDs have high seek times. Since DevDrop is I/O-bound (every upload and download is a disk read or write), SSD storage will dramatically reduce latency for concurrent requests.

Put Cloudflare in front of it. Cloudflare provides:

  • Global CDN caching for file downloads
  • DDoS protection
  • Free HTTPS with minimal setup (just point your domain's nameservers)

Set Cloudflare's SSL mode to Full (strict) so traffic is encrypted end-to-end.

Tune Nginx's client_max_body_size. Set it just above your expected maximum file size. Leaving it at the default 1m will cause all uploads larger than 1 MB to fail at the Nginx layer.

Set proxy_request_buffering off in Nginx. Without this, Nginx buffers the entire upload to disk before forwarding it to Node.js — effectively doubling disk writes for every upload. With it off, the request streams directly through to Node.


⭐ Support

If you find DevDrop useful, consider starring the repository on GitHub ⭐

Found a bug or want a feature? Open an issue or submit a pull request — contributions are welcome.
