DevDrop is a lightweight, self-hosted file upload server built for developers. It enables fast uploads via raw binary streaming, direct file access via URL, ShareX integration, and simple deployment on a VPS or panel server.
Unlike traditional upload servers that rely on multipart/form-data, DevDrop accepts raw request bodies. This eliminates encoding overhead, making uploads significantly faster and easier to automate from CLI tools, bots, or screenshot utilities.
Clone the repository and install dependencies:
```bash
git clone https://github.com/KevinSiddhpura/devdrop.git
cd devdrop
npm install
```

Copy the example environment file and fill in your values:

```bash
cp .env.example .env
```

Start the server:

```bash
npm start
```

The server will be available at:

```
http://localhost:3000
```

You can verify it's running with:

```bash
curl http://localhost:3000
# Expected: {"status":"OK"}
```

- **Raw uploads** – No multipart encoding. Files are streamed directly from the request body to disk, reducing CPU overhead and upload time, especially for large binary files.
- **No database** – Files and metadata live entirely on the filesystem. Nothing to provision, migrate, or back up beyond the `uploads/` directory.
- **ShareX ready** – Hit `/config` to download a ready-made ShareX custom uploader configuration.
- **Simple auth** – A single `AUTH_TOKEN` in your `.env` protects all write endpoints. Simple, auditable, easy to rotate.
- **Deployable anywhere** – Works on a bare VPS, a Pterodactyl panel, Railway, Render, or any environment with Node.js ≥ 18.
- **Minimal architecture** – The entire codebase is readable in one sitting. No magic, no framework bloat.
DevDrop is designed around four principles:
- **Fast** – Raw upload streaming means no base64 encoding, no multipart parsing, and no buffering the entire file into memory before writing. Files stream directly to disk.
- **Simple** – The storage layer is just a folder. List files with `ls`, delete with `rm`, back up with `rsync`. No ORM, no query language.
- **Secure** – Every write operation (upload, delete) requires a Bearer token. File paths are sanitised to prevent directory traversal attacks. Upload size is enforced before the file is fully written.
- **Automation-friendly** – The API is plain HTTP. Any tool that can send a POST request (cURL, ShareX, Python, a Discord bot) can upload files without a dedicated SDK.
Ideal for:
- Developers who want a personal file host without vendor lock-in
- Screenshot uploading workflows (ShareX, Flameshot)
- CLI pipelines that need a place to store build artifacts or logs
- Bots that need to upload and share files via URL
DevDrop uses true raw binary streaming for uploads. Instead of sending a multipart form, you pipe the file content directly as the request body and pass the filename via a header. The request body is piped directly to disk via a write stream; files never buffer into memory.

- Accepts any file type (binary or text)
- Smart filename handling – if a file with the same name already exists, a counter suffix is appended automatically to prevent collisions
- No temporary files; data streams directly to the final destination
- File size is enforced during the stream – oversized uploads are aborted and partial files are cleaned up
- List all uploaded files via `GET /files` – returns a JSON array with name, size, and creation date for each file
- Delete a specific file via `DELETE /files/:name` – protected by Bearer token
- Access any file directly via `GET /files/:name` – served as a static file with the correct `Content-Type`
- Storage stats via `GET /storage` – returns total size used and file count
- Request logging – every request is logged to the console and written to daily log files in the `logs/` directory (e.g. `2026-04-15.log`)
- File size is tracked and enforced at upload time via `MAX_FILE_SIZE`
- Bearer token authentication – all mutating and sensitive endpoints (`/config`, `/storage`, `/files`) require `Authorization: Bearer <TOKEN>` in the request header
- File size limits – requests exceeding `MAX_FILE_SIZE` bytes are rejected during streaming, before the full file is written
- Path traversal protection – filenames are sanitised using `path.basename()`, and names containing `..`, null bytes, or path separators are rejected
- Rate limiting – configurable per-IP rate limits for general requests and uploads, preventing abuse and disk-fill attacks
- ShareX – download a ready-made custom uploader config from `/config` (requires authentication) and import it directly into ShareX
- CLI / cURL – any shell script can upload files with a single `curl` command
- REST API – standard JSON responses make integration with bots and scripts straightforward
```
User / Client
      │
      │ HTTPS request
      ▼
Nginx (reverse proxy)
      │ Terminates TLS, forwards to local port
      │
      ▼
DevDrop (Node.js on port 3000)
      │ Validates token, streams file to disk
      │
      ▼
uploads/ (local filesystem)
```
Nginx sits in front of DevDrop and handles:
- TLS termination (HTTPS) via Let's Encrypt
- Public-facing ports 80/443 → internal port 3000
- Optional: request rate limiting, body size enforcement at the proxy layer
Node.js handles:
- Authentication (Bearer token check)
- File streaming (pipe `req` directly to `fs.createWriteStream`)
- Filename collision avoidance
- JSON API responses
Filesystem stores everything:
- `uploads/` – all uploaded files
- `logs/` – access and error logs
| Requirement | Version |
|---|---|
| nvm | Latest (used to install and manage Node.js) |
| Node.js | ≥ 18 (installed via nvm) |
| npm | ≥ 8 (bundled with Node 18, managed by nvm) |
Install nvm if you don't have it:
```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
```

Reload your shell so the `nvm` command becomes available:

```bash
source ~/.bashrc
# or if you use zsh:
source ~/.zshrc
```

Install and use Node 18:

```bash
nvm install 18
nvm use 18
```

Set Node 18 as the default so it's used automatically in new terminal sessions:

```bash
nvm alias default 18
```

Check your versions:

```bash
node --version   # Should print v18.x.x
npm --version
nvm --version
```

| Component | Purpose |
|---|---|
| Ubuntu 22.04 VPS | Stable, well-documented Linux environment |
| Nginx | Reverse proxy + TLS termination |
| Certbot | Free Let's Encrypt SSL certificates |
| PM2 | Process manager – keeps DevDrop alive after crashes and reboots |
| A domain name | Required for valid HTTPS |
```
devdrop/
├── src/
│   ├── config/        # Environment variable loading and validation
│   ├── middleware/    # Auth token check, file size enforcement, logging
│   ├── routes/        # Express route handlers for each endpoint
│   ├── services/      # Business logic: file I/O, storage stats
│   ├── utils/         # Helper functions: filename sanitisation, collision avoidance
│   └── app.js         # Express app setup: mounts middleware and routes
│
├── uploads/           # All uploaded files land here (auto-created on first run)
├── logs/              # Request logs with timestamps and response times
├── server.js          # Entry point: loads config, starts HTTP server
├── package.json       # Dependencies and npm scripts
└── .env               # Your local environment config (never commit this)
```
Key files to know:
- `server.js` – Start here. Reads `.env`, initialises the Express app, and starts listening.
- `src/app.js` – Wires together middleware and routes.
- `src/routes/` – One file per logical group of endpoints (upload, files, storage, config).
- `src/middleware/` – The auth check lives here. If you want to add rate limiting, this is where it goes.
Create your .env file by copying the example:
```bash
cp .env.example .env
```

Then open it in your editor and fill in every value:

```bash
nano .env
```

A complete `.env` looks like this:

```bash
# Port DevDrop listens on internally
PORT=3000

# Your public-facing domain (used to construct file URLs in responses)
DOMAIN=https://your-domain.com

# Secret token - used as the Bearer token for all write operations
# Generate a strong one with: openssl rand -hex 32
AUTH_TOKEN=your_token_here

# Directory where uploaded files are stored (relative to project root)
UPLOAD_DIR=./uploads

# Directory where logs are written
LOG_DIR=./logs

# Maximum allowed upload size in bytes
# 52428800 = 50 MB | 104857600 = 100 MB | 10485760 = 10 MB
MAX_FILE_SIZE=52428800

# Rate limiting - values use human-readable durations (ms syntax)
# Examples: '15m', '1h', '30s', '2d'
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100
UPLOAD_LIMIT_WINDOW=15m
UPLOAD_LIMIT_MAX=20

# Proxy - set to 1 if behind a reverse proxy (Nginx, Cloudflare, etc.)
# Required for rate limiting to use real client IPs
TRUST_PROXY=1
```

Generate a strong token:

```bash
openssl rand -hex 32
# Example output: a3f8c2e1d7b904f6a5c318e2d09f4b7c8a1e5d6f2c904b3a7f1e8d5c2b6a9f4e
```

Copy that output as your `AUTH_TOKEN`. Never share it or commit it to version control.
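As a quick sanity check, the `MAX_FILE_SIZE` example values above are exact binary megabytes:

```javascript
// The byte counts quoted in the .env comments are N * 1024 * 1024.
const MiB = 1024 * 1024;

console.log(10 * MiB);   // 10 MB
console.log(50 * MiB);   // 50 MB, the default
console.log(100 * MiB);  // 100 MB
```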
| Variable | Required | Default | Description |
|---|---|---|---|
| `PORT` | No | `3000` | The local port Node.js listens on |
| `SERVER_PORT` | No | – | Overrides `PORT` when running inside a Pterodactyl panel |
| `DOMAIN` | Yes | – | Your public URL (e.g. `https://files.example.com`). Included in upload response URLs |
| `AUTH_TOKEN` | Yes | – | Secret string required in the `Authorization` header for uploads and deletes |
| `UPLOAD_DIR` | No | `./uploads` | Path to the directory where files are stored |
| `LOG_DIR` | No | `./logs` | Path to the directory where log files are written |
| `MAX_FILE_SIZE` | No | `52428800` | Maximum request body size in bytes (default: 50 MB) |
| `TRUST_PROXY` | No | `0` | Set to `1` if behind a reverse proxy (Nginx, Cloudflare). Required for rate limiting to use real client IPs |
| `RATE_LIMIT_WINDOW` | No | `15m` | Time window for the general rate limiter (uses `ms` syntax) |
| `RATE_LIMIT_MAX` | No | `100` | Max requests per IP within the general rate limit window |
| `UPLOAD_LIMIT_WINDOW` | No | `15m` | Time window for the upload rate limiter |
| `UPLOAD_LIMIT_MAX` | No | `20` | Max uploads per IP within the upload rate limit window |
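The `15m`-style windows use `ms`-syntax durations. A rough sketch of what those strings expand to in milliseconds (the project presumably relies on the `ms` npm package rather than hand-rolling this; `parseDuration` here is purely illustrative):

```javascript
// Hand-rolled approximation of the duration strings shown above.
// Returns milliseconds, or null for anything it does not recognise.
function parseDuration(str) {
  const match = /^(\d+)\s*(s|m|h|d)$/.exec(str);
  if (!match) return null;
  const unitMs = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 };
  return Number(match[1]) * unitMs[match[2]];
}
```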
Development (restarts manually):
```bash
npm start
```

Development with auto-restart on file changes (requires nodemon):

```bash
npx nodemon server.js
```

Check the server is responding:

```bash
curl http://localhost:3000/
# {"status":"OK"}
```

All endpoints that modify data require the `Authorization` header:

```
Authorization: Bearer YOUR_AUTH_TOKEN
```
Responses are always JSON unless the endpoint serves a file directly.
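The Bearer check can be pictured as a tiny Express-style middleware. This is a sketch under assumptions – the function shape and the error body are ours, not necessarily what DevDrop's `src/middleware/` actually does:

```javascript
// Express-style middleware: reject any request whose Authorization
// header is not exactly "Bearer <token>".
function requireAuth(expectedToken) {
  return (req, res, next) => {
    const header = req.headers['authorization'] || '';
    if (header !== `Bearer ${expectedToken}`) {
      // Response body shape is illustrative.
      return res.status(401).json({ error: 'Unauthorized' });
    }
    next();
  };
}
```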
Verify the server is running. No authentication required.
```
GET /
```

Request:

```bash
curl http://your-domain.com/
```

Response:

```json
{ "status": "OK" }
```

Upload any file by streaming its content as the raw request body.
```
POST /upload
```

Required headers:

| Header | Value | Description |
|---|---|---|
| `Authorization` | `Bearer YOUR_TOKEN` | Authentication |
| `x-filename` | `yourfile.txt` | The filename to save the upload as |

Body: Raw file content (binary or text). Do not use `multipart/form-data`.
Upload a text string:
```bash
curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: hello.txt" \
  --data "Hello, DevDrop!"
```

Upload a file from disk:

```bash
curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: photo.png" \
  --data-binary @/path/to/photo.png
```

Upload and pipe from another command:

```bash
cat /var/log/syslog | curl -X POST https://your-domain.com/upload \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "x-filename: syslog.txt" \
  --data-binary @-
```

Response:

```json
{
  "url": "https://your-domain.com/files/hello.txt"
}
```

The `url` field is the direct public link to your uploaded file.
Returns a JSON array of all files currently in the uploads/ directory, including their name, size, and creation date.
```
GET /files
```

Request:

```bash
curl https://your-domain.com/files \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Response:

```json
[
  { "name": "hello.txt", "size": "14.00 Bytes", "created": "2026-04-15T10:30:00.000Z" },
  { "name": "photo.png", "size": "1.20 MB", "created": "2026-04-15T10:31:00.000Z" }
]
```

Permanently deletes a file by name from the `uploads/` directory.
```
DELETE /files/:name
```

Request:

```bash
curl -X DELETE https://your-domain.com/files/hello.txt \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Response:

```json
{ "message": "File deleted successfully" }
```

Returns the total number of files and disk space used by the `uploads/` directory.
```
GET /storage
```

Request:

```bash
curl https://your-domain.com/storage \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Response:

```json
{
  "totalFiles": 3,
  "totalSize": 2516582,
  "totalSizeFormatted": "2.40 MB"
}
```

Serves a file directly. No authentication required; this is the public download URL returned by the upload endpoint.
```
GET /files/:name
```

Request:

```bash
curl https://your-domain.com/files/hello.txt
# Hello, DevDrop!
```

Or simply open the URL in a browser. The `Content-Type` header is set automatically based on the file extension.
Returns a ready-made ShareX custom uploader configuration as a downloadable `.sxcu` file. Requires authentication; your auth token is embedded in the config so ShareX can upload on your behalf.
```
GET /config
```

Request:

```bash
curl https://your-domain.com/config \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -o devdrop.sxcu
```

ShareX is a free, open-source screenshot tool for Windows. DevDrop integrates with it out of the box.
Steps:
1. Open your browser and navigate to `https://your-domain.com/config`.
2. Your browser will download a `.sxcu` file (ShareX Custom Uploader config).
3. Open ShareX. It will automatically detect and prompt you to import the config. Click Yes.
4. In ShareX, go to Destinations → Image Uploader and select DevDrop (or whatever name appears).
5. Take a screenshot – ShareX will upload it automatically and copy the direct URL to your clipboard.
This is the most reliable way to run DevDrop in production. The steps below assume a fresh Ubuntu 22.04 server.
```bash
apt update && apt upgrade -y
```

nvm (Node Version Manager) is the recommended way to install Node.js on a server. It lets you install multiple Node versions side by side, switch between them instantly, and upgrade without touching system packages – far cleaner than using apt or NodeSource.

Install nvm:

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
```

The installer appends a small loader block to your `~/.bashrc`. Reload it so the `nvm` command is available in your current session:

```bash
source ~/.bashrc
```

Verify nvm is working:

```bash
nvm --version
# Expected: 0.39.7 (or whichever version you installed)
```

Install Node 18 and set it as default:
```bash
nvm install 18
nvm use 18
nvm alias default 18
```

`nvm alias default 18` ensures that when PM2 or any other tool spawns a new shell (e.g. on reboot), it uses Node 18 rather than falling back to the system Node – or worse, finding no Node at all.

Verify Node and npm are available:

```bash
node --version   # Should print v18.x.x
npm --version    # Should print 9.x.x or higher
which node       # Should print something like /root/.nvm/versions/node/v18.x.x/bin/node
```

Useful nvm commands for future reference:

```bash
nvm ls                 # List all installed Node versions
nvm ls-remote          # List all available Node versions to install
nvm install 20         # Install a different version (e.g. Node 20)
nvm use 20             # Switch to Node 20 in this session
nvm alias default 20   # Make Node 20 the default going forward
nvm uninstall 18       # Remove a version you no longer need
```

Install Nginx:

```bash
apt install -y nginx
```

Start and enable it so it runs on boot:

```bash
systemctl start nginx
systemctl enable nginx
```

Clone DevDrop and install dependencies:

```bash
git clone https://github.com/KevinSiddhpura/devdrop /opt/devdrop
cd /opt/devdrop
npm install
```

Create your environment file:

```bash
cp .env.example .env
nano .env
```

Fill in at minimum:
```bash
PORT=3000
DOMAIN=https://your-domain.com
AUTH_TOKEN=<output of: openssl rand -hex 32>
NODE_ENV=production
MAX_FILE_SIZE=52428800
TRUST_PROXY=1
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100
UPLOAD_LIMIT_WINDOW=15m
UPLOAD_LIMIT_MAX=20
```

PM2 is a process manager that keeps your Node.js app running after crashes and across reboots.

```bash
npm install -g pm2
pm2 start server.js --name devdrop
```

Save the PM2 process list so it survives a reboot:

```bash
pm2 save
```

Configure PM2 to start automatically when the server boots:

```bash
pm2 startup
# PM2 will print a command - run that command as instructed
```

Useful PM2 commands:

```bash
pm2 status            # See if devdrop is running
pm2 logs devdrop      # Stream live logs
pm2 restart devdrop   # Restart after changing .env or code
pm2 stop devdrop      # Stop the process
pm2 delete devdrop    # Remove from PM2 entirely
```

Create a new Nginx site config:
```bash
nano /etc/nginx/sites-available/devdrop
```

Paste the following, replacing `your-domain.com` with your actual domain:

```nginx
server {
    listen 80;
    server_name your-domain.com;

    # Increase client body size to match MAX_FILE_SIZE in your .env
    # This value must be >= MAX_FILE_SIZE, otherwise Nginx will reject large uploads before they reach Node
    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the real client IP to Node.js (useful for logging)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        # Required for raw body streaming - disable request buffering
        # so the file streams through Nginx to Node without being held in memory
        proxy_request_buffering off;

        # Extend timeouts for large file uploads (default is 60s, which is too short)
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
        proxy_connect_timeout 10s;
    }
}
```

Enable the site by creating a symlink:

```bash
ln -s /etc/nginx/sites-available/devdrop /etc/nginx/sites-enabled/
```

Test the Nginx config for syntax errors:

```bash
nginx -t
# Expected: syntax is ok / test is successful
```

Reload Nginx to apply changes:

```bash
systemctl reload nginx
```

Install Certbot:
```bash
apt install -y certbot python3-certbot-nginx
```

Obtain and install a free SSL certificate (Certbot will auto-edit your Nginx config):

```bash
certbot --nginx -d your-domain.com
```

Follow the interactive prompts. Certbot will:

- Verify domain ownership via HTTP challenge
- Obtain the certificate
- Update your Nginx config to redirect HTTP → HTTPS
- Set up automatic renewal

Test automatic renewal:

```bash
certbot renew --dry-run
```

Your DevDrop instance is now live at `https://your-domain.com`.
Pterodactyl is a Docker-based game/application server panel. Because DevDrop runs inside a container, a few differences apply.
- Egg: Node.js (any standard Node.js egg works)
- Startup command: `node server.js`

In your server's Startup tab, set:

```bash
SERVER_PORT=2031
DOMAIN=https://your-domain.com
AUTH_TOKEN=your_secure_token
NODE_ENV=production
MAX_FILE_SIZE=52428800
TRUST_PROXY=1
```

> Use `SERVER_PORT` instead of `PORT` when running inside Pterodactyl. The panel allocates a specific port for your server; using a different value will cause the container's port mapping to fail. You should also keep `TRUST_PROXY=1` because traffic goes through the host machine's proxy.
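The `PORT`/`SERVER_PORT` precedence described above can be sketched as a one-liner (illustrative only; DevDrop's actual config loader lives in `src/config/`):

```javascript
// SERVER_PORT (set by Pterodactyl) wins over PORT; 3000 is the fallback.
function resolvePort(env) {
  return Number(env.SERVER_PORT || env.PORT || 3000);
}
```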
Pterodactyl containers are not directly accessible on port 80/443. You need Nginx on the host machine (the physical server running the panel) to proxy traffic to the container's allocated port.
```nginx
server {
    listen 80;
    server_name your-domain.com;

    client_max_body_size 100M;

    location / {
        # Replace SERVER_IP with your VPS/host IP
        # Replace SERVER_PORT with the port allocated in Pterodactyl (e.g. 2031)
        proxy_pass http://SERVER_IP:SERVER_PORT;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        proxy_request_buffering off;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```

After saving, test and reload:

```bash
nginx -t && systemctl reload nginx
```

Then add HTTPS:

```bash
certbot --nginx -d your-domain.com
```

- Do not use `localhost` in `DOMAIN` or `proxy_pass` on the host machine – containers run in an isolated network namespace, so `localhost` on the host does not reach the container. Use the host machine's actual IP address.
- Pterodactyl manages the Node.js process inside the container, so you do not need PM2 when using the panel.
- File storage (`uploads/`) lives inside the container's allocated volume. Back it up via the panel's file manager or `rsync` from the host.
Use a strong, random token. Weak tokens are trivially brute-forced if your server is public.
```bash
openssl rand -hex 32
```

Never commit `.env` to version control. Add it to `.gitignore`:

```bash
echo ".env" >> .gitignore
```

Always use HTTPS. Sending your Bearer token over plain HTTP exposes it to any network observer. Let's Encrypt is free – there is no reason not to use it.
Set `client_max_body_size` in Nginx. This must be greater than or equal to `MAX_FILE_SIZE` in your `.env`. If Nginx's limit is smaller, Nginx will reject the upload before it reaches Node.js and return a `413 Request Entity Too Large` error.

Tune rate limits for your use case. The default limits (100 requests / 20 uploads per 15 minutes per IP) are sensible for personal use. If you're running a shared instance or a bot that uploads frequently, increase `UPLOAD_LIMIT_MAX` in your `.env`.
Use SSD storage. Spinning-disk HDDs have high seek times. Since DevDrop is I/O-bound (every upload and download is a disk read or write), SSD storage will dramatically reduce latency for concurrent requests.
Put Cloudflare in front of it. Cloudflare provides:
- Global CDN caching for file downloads
- DDoS protection
- Free HTTPS with minimal setup (just point your domain's nameservers)
Set Cloudflare's SSL mode to Full (strict) so traffic is encrypted end-to-end.
Tune Nginx's `client_max_body_size`. Set it just above your expected maximum file size. Leaving it at the default `1m` will cause all uploads larger than 1 MB to fail at the Nginx layer.

Set `proxy_request_buffering off` in Nginx. Without this, Nginx buffers the entire upload to disk before forwarding it to Node.js, effectively doubling disk writes for every upload. With it off, the request streams directly through to Node.
If you find DevDrop useful, consider starring the repository on GitHub.

Found a bug or want a feature? Open an issue or submit a pull request; contributions are welcome.