# DeadDrop
Temporary download link service for developers. Upload a file and get a short-lived URL for secure sharing.
## Why DeadDrop
You use DeadDrop when you need to share build artifacts, logs, or debug bundles without setting up long-lived object storage links. Links expire automatically, and expired files are removed by a background cleanup task.
## 30-second quick start

```shell
python -m venv .venv
. .venv/bin/activate
pip install -e .
DD_API_TOKEN=change-me ddrop serve -p 8080 -s ./storage
```
In another terminal:

```shell
. .venv/bin/activate
ddrop upload ./build.tar.gz --ttl 2h --token change-me
```

Expected output:

```
Upload complete: http://localhost:8080/Ab12Cd34
```
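The short ID (the last path segment, `Ab12Cd34` in the example output above) is what the other `ddrop` subcommands take. A minimal shell sketch, assuming the example URL:

```shell
# Example URL as printed by `ddrop upload` above.
url="http://localhost:8080/Ab12Cd34"

# The short ID is the last path segment of the URL.
short_id="${url##*/}"
echo "$short_id"   # Ab12Cd34

# With the quick-start server still running, plain curl can download it:
# curl -fSL -o build.tar.gz "$url"
```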
## Security defaults
DeadDrop requires API auth by default.
- All `/api/*` endpoints require `Authorization: Bearer <token>` when `DD_API_TOKEN` (or `DD_API_TOKEN_FILE`) is configured. `GET /<short_id>` download links do not require auth.
- If `DD_API_TOKEN` is not set, the server exits unless you explicitly set `DD_ALLOW_ANON=true`.
- Never enable `DD_ALLOW_ANON=true` on a network-exposed deployment.
## Tutorial: first run
### Prerequisites

- Python 3.10+
- `pip`
- Optional: Docker and Docker Compose
### Install and run locally

Use the quick start commands above. The server stores metadata in `storage/links.db` and files in `storage/files/` by default.

For local-only development without auth, explicitly opt in:

```shell
DD_ALLOW_ANON=true ddrop serve -p 8080 -s ./storage
```
## How-to: common tasks

### Upload with a custom TTL

```shell
ddrop upload ./build.tar.gz --ttl 30m --token change-me
```
### List active links

```shell
ddrop list
```
### View storage stats

```shell
ddrop stats
```
### Expire a link now

```shell
ddrop expire abc123xy
```
### Fetch (download) a file by link ID

```shell
ddrop fetch abc123xy
```

Save to a specific path:

```shell
ddrop fetch abc123xy --output ./downloads/artifact.tar.gz
```
### Upload via the stash alias

`stash` is an alias for `upload`; both do the same thing:

```shell
ddrop stash ./build.tar.gz --ttl 2h --token change-me
```
### Set CLI defaults

```shell
export DD_API_URL=http://localhost:8080
export DD_API_TOKEN=change-me
ddrop upload ./build.tar.gz --ttl 2h
```
### Start server on custom host/port/storage

```shell
ddrop serve -p 9000 -h 127.0.0.1 -s /tmp/deaddrop
```
### Rotate API token

1. Stop the running server.
2. Set a new token value (`DD_API_TOKEN` or `DD_API_TOKEN_FILE`).
3. Restart the server and update CLI clients to use the new token.
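A minimal sketch of the token-update step for file-based setups (the `openssl` invocation and the `secrets/dd_api_token.txt` path are illustrative; any sufficiently long random value works):

```shell
# Generate a long random token and write it where DD_API_TOKEN_FILE points.
mkdir -p secrets
new_token="$(openssl rand -hex 32)"   # 64 hex characters
printf '%s\n' "$new_token" > secrets/dd_api_token.txt
chmod 600 secrets/dd_api_token.txt
```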
## How-to: run with Docker (local)

```shell
cp example.env .env
mkdir -p secrets
printf '%s\n' "replace-with-long-random-token" > secrets/dd_api_token.txt
docker compose up --build

# verify container does not run as root
docker compose run --rm deaddrop id -u
```
The server is reachable on http://localhost:8080.
Tip: `.env` is consumed by Docker Compose, not your interactive shell. If you want CLI defaults (without `--token`), export `DD_API_TOKEN` in your shell session.
Compose reads auth from `DD_API_TOKEN_FILE` (`/run/secrets/dd_api_token`), backed by `./secrets/dd_api_token.txt`. For secrets-based setups (Swarm/Podman), point `DD_API_TOKEN_FILE` at your mounted secret path. `DD_API_TOKEN` and `DD_API_TOKEN_FILE` are mutually exclusive.

The compose file in this repository is for local testing and binds to loopback (127.0.0.1) by default. `secrets/dd_api_token.txt` is required in Compose; startup fails if it is missing.

Compose uses a named volume (`deaddrop_storage`) for `/data/storage` to avoid host bind-mount permission issues with the non-root container user. Because the container runs with a read-only root filesystem, Compose also mounts `/tmp` as tmpfs for multipart upload buffering. If you raise `DD_MAX_FILE_SIZE_MB`, increase the `/tmp` tmpfs size accordingly.
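A rough sizing rule, assuming each in-flight upload can buffer its full size in `/tmp` (the concurrency and headroom numbers here are illustrative assumptions, not repo defaults):

```shell
max_file_mb=100        # DD_MAX_FILE_SIZE_MB
max_concurrent=4       # expected simultaneous uploads (assumption)
headroom_mb=64         # slack for other temporary files (assumption)

tmpfs_mb=$(( max_file_mb * max_concurrent + headroom_mb ))
echo "size the /tmp tmpfs at >= ${tmpfs_mb}m"
```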
## Deployment guidance (production)

- Keep the app behind a reverse proxy that terminates TLS (for example Traefik or nginx).
- Do not publish the app directly to the public internet.
- Keep `DD_ALLOW_ANON=false`.
- Prefer `DD_API_TOKEN_FILE` from your secret manager over plain env values.
- Apply request rate limiting at the reverse proxy/API gateway layer (see below).
- Use a persistent storage path/volume for `/data/storage`.
- When using `read_only: true`, mount `/tmp` as writable tmpfs and size it for upload buffering (>= `DD_MAX_FILE_SIZE_MB`, with headroom for concurrent uploads).
## Rate limiting

DeadDrop relies on the reverse proxy for request rate limiting. The app enforces a global storage quota (`DD_MAX_STORAGE_MB`) and per-file size limits (`DD_MAX_FILE_SIZE_MB`), but does not throttle request rates or limit upload concurrency at the application layer.

Configure your reverse proxy to enforce at minimum:

- Request rate: cap API requests per source IP (e.g. nginx `limit_req_zone`, Traefik `rateLimit` middleware).
- Upload rate: apply a stricter limit to `POST /api/upload` than to read-only endpoints.
- Connection concurrency: limit simultaneous connections per IP to prevent resource exhaustion from parallel uploads.
Example nginx snippet:

```nginx
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
limit_req_zone $binary_remote_addr zone=upload:10m rate=5r/m;

location /api/upload {
    limit_req zone=upload burst=2 nodelay;
    proxy_pass http://deaddrop:8080;
}

location /api/ {
    limit_req zone=api burst=10 nodelay;
    proxy_pass http://deaddrop:8080;
}
```
These values are starting points; adjust to your deployment's expected traffic patterns.
## Build with buildx

The repository includes a `docker-bake.hcl` file for Docker `buildx bake` builds.
```shell
# local build (native platform, loads into Docker)
docker buildx bake deaddrop-local

# multi-platform build (amd64 + arm64)
docker buildx bake

# tagged release build (sets both version and latest tags)
VERSION=0.1.0 docker buildx bake deaddrop-release

# override image name for a registry push
VERSION=0.1.0 docker buildx bake deaddrop-release \
  --set '*.tags=ghcr.io/owner/deaddrop:0.1.0' \
  --set '*.tags=ghcr.io/owner/deaddrop:latest'
```
Standalone image (without bake):

```shell
docker build -t deaddrop .

# run with auth
docker run --rm -p 8080:8080 -e DD_API_TOKEN=change-me deaddrop

# verify image runtime user
docker run --rm deaddrop id -u
```
Expected `id -u` output is a non-zero UID (the container runs as an unprivileged user).
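In CI, that check can be made fail-fast. A small sketch (the `check_nonroot` helper is hypothetical, not part of the repo):

```shell
# Fail if a container reports UID 0 (root).
check_nonroot() {
  if [ "$1" -eq 0 ]; then
    echo "refusing: container runs as root" >&2
    return 1
  fi
  echo "ok: non-root uid $1"
}

# Usage (needs the image built locally):
# check_nonroot "$(docker run --rm deaddrop id -u)"
```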
## Reference: environment variables

| Variable | Default | Used by | Description |
|---|---|---|---|
| `DD_API_TOKEN` | (none) | both | Bearer token for API auth (server validation + CLI default token) |
| `DD_API_TOKEN_FILE` | (none) | server | File path containing the bearer token (for secrets mounts); mutually exclusive with `DD_API_TOKEN` |
| `DD_API_URL` | `http://localhost:8080` | client | CLI base URL |
| `DD_MAX_FILE_SIZE_MB` | `100` | server | Max upload size (1-500 MB) |
| `DD_MAX_STORAGE_MB` | `1024` | server | Max total storage for uploaded files (MB) |
| `DD_MAX_TTL` | `48h` | server | Max allowed upload TTL (`<int>m`, `<int>h`, or `inf`) |
| `DD_ALLOW_ANON` | `false` | server | Allow anonymous API access (unsafe, dev-only) |
| `DD_DEFAULT_EXPIRE_HOURS` | `1` | server | Default upload expiration window (hours) |
| `DD_CLEANUP_INTERVAL_SECONDS` | `300` | server | Interval for the expired-link cleanup task |
| `DD_PORT` | `8080` | server | Server port |
| `DD_HOST` | `0.0.0.0` | server | Server host |
| `DD_STORAGE_PATH` | repo `storage/` (or `/data/storage` in container) | server | Storage directory |
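The `DD_MAX_TTL`/`--ttl` format (`<int>m`, `<int>h`, or `inf`) can be illustrated with a shell equivalent. The server's real parser lives in `server/main.py`; this `ttl_to_seconds` helper is only a sketch of the documented grammar:

```shell
# Convert a TTL string to seconds; -1 stands for "never expires".
ttl_to_seconds() {
  # Assumes a well-formed "<int>m" / "<int>h" value, per the table above.
  case "$1" in
    inf) echo -1 ;;
    *m)  echo $(( ${1%m} * 60 )) ;;
    *h)  echo $(( ${1%h} * 3600 )) ;;
    *)   echo "invalid ttl: $1" >&2; return 1 ;;
  esac
}

ttl_to_seconds 30m   # 1800
ttl_to_seconds 2h    # 7200
ttl_to_seconds inf   # -1
```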
## Reference: API

All `/api/*` endpoints require a bearer token unless `DD_ALLOW_ANON=true`.

| Endpoint | Description |
|---|---|
| `POST /api/upload` | Upload a file and return `{id, url, expires_at}` |
| `GET /<short_id>` | Download a file by short ID (404 if missing, 410 if expired) |
| `GET /api/links` | List links and current aggregate stats |
| `POST /api/links/{id}/expire` | Expire a specific link immediately |
| `GET /api/stats` | Return counts plus storage usage and capacity |

`bytes` in stats responses reflects on-disk usage under `storage/files` (including orphaned files), which is also what quota enforcement uses.
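To compare stats output against `DD_MAX_STORAGE_MB`, the `bytes` value converts as follows (the commented `curl` call assumes the quick-start server and token; `bytes_to_mb` is an illustrative helper, not part of the CLI):

```shell
# Convert a stats `bytes` value to MB (integer division).
bytes_to_mb() {
  echo $(( $1 / 1024 / 1024 ))
}

bytes_to_mb 52428800   # 50

# Against a running server (token from the quick start):
# curl -s -H "Authorization: Bearer change-me" http://localhost:8080/api/stats
```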
## Troubleshooting

- `RuntimeError: DD_API_TOKEN or DD_API_TOKEN_FILE is required`: set `DD_API_TOKEN` or use local-only `DD_ALLOW_ANON=true`.
- `both DD_API_TOKEN and DD_API_TOKEN_FILE are set`: remove one; these settings are exclusive.
- `Invalid or missing token` (401): ensure the CLI token matches the server token (`--token` or `DD_API_TOKEN`).
- `413 File too large`: increase `DD_MAX_FILE_SIZE_MB` (range 1-500) or upload a smaller file.
- `507 Storage limit exceeded`: increase `DD_MAX_STORAGE_MB` or expire old links.
- `Invalid expiration window` on upload: ensure `--ttl` is valid (`30m`, `2h`) and does not exceed `DD_MAX_TTL`.
- `400 {"detail":"There was an error parsing the body"}` on large uploads (common in Swarm): ensure `/tmp` is writable and mounted as tmpfs in the service, then size it for your max upload and concurrency.
- `unsupported schema` at startup (unreleased builds): remove the legacy `links.db` or reset the Compose volume.
```shell
# reset persisted compose storage volume
docker compose down
docker volume rm deaddrop_deaddrop_storage
```
## Testing

```shell
. .venv/bin/activate
pytest -q
```