Self-hosted service that ingests abuse reports from many sources (web servers,
IDS, fail2ban-like agents) and distributes tailored, decay-weighted block lists
to firewalls and proxies. Ships as a Docker Compose stack: `api` (JSON
backend), `ui` (PHP+Twig BFF), and optional `mysql` / `scheduler` sidecars.

Who it's for: ops engineers who run their own infrastructure, want a single
place to collect abuse signal across hosts, and need consumer-shaped output
(one firewall = one tailored list).
The full design is in `SPEC.md`. Per-milestone progress is in
`PROGRESS.md`. Documentation for operators and future
frontend authors lives in `doc/`.
## Quickstart

```shell
git clone <this-repo> irdb && cd irdb
cp .env.example .env
# Generate secrets — see "Generating secrets" below for the exact commands.
$EDITOR .env
docker compose -f docker-compose.yml -f compose.scheduler.yml up -d
```
That's it. The UI is at http://localhost:8080, the api at
http://localhost:8081, and the API reference viewer at
http://localhost:8081/api/docs.

Log in with the local admin credentials you set in `.env`
(`LOCAL_ADMIN_USERNAME` / `LOCAL_ADMIN_PASSWORD_HASH`). OIDC works too —
see `doc/auth-flows.md`.
## Generating secrets

Every value in `.env` marked with a comment like "32-byte hex" is a
secret you need to generate. Use these one-liners:

```shell
# 32-byte hex (UI_SECRET, APP_SECRET, INTERNAL_JOB_TOKEN)
openssl rand -hex 32

# IRDB-format service token (UI_SERVICE_TOKEN — looks like irdb_svc_…)
docker compose run --rm -T api php -r 'require "/app/vendor/autoload.php";
echo (new App\Domain\Auth\TokenIssuer())->issue(App\Domain\Auth\TokenKind::Service);'

# Local admin password hash (LOCAL_ADMIN_PASSWORD_HASH — Argon2id)
php -r "echo password_hash('your-admin-password', PASSWORD_ARGON2ID);"
# Note: in your .env, double every $ in the hash to $$ so docker-compose
# variable substitution doesn't eat it.
```
The api validates required env vars on boot, so a misconfiguration crashes
`docker compose up` instead of surfacing on the first user click.
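The `$`-doubling note above can be seen in isolation. This is a hedged sketch using a truncated sample value, not a real credential:

```shell
# docker-compose expands $VAR inside .env values, so every literal "$" in
# the Argon2id hash must be written "$$" before it lands in the file.
SAMPLE='$argon2id$v=19$m=65536,t=4,p=1$c29tZXNhbHQ'
ESCAPED=$(printf '%s' "$SAMPLE" | sed 's/\$/$$/g')
echo "$ESCAPED"
# → $$argon2id$$v=19$$m=65536,t=4,p=1$$c29tZXNhbHQ
```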
## First report

Once the stack is up, log in to the UI as the local admin and:

1. Categories → New. Slugs are snake_case (`brute_force`, `web_attack`).
2. Reporters → New. Trust weight defaults to `1.0`; lower it to dampen a
   noisy source.
3. Tokens → New, kind = `reporter`, pick the reporter you just made. Copy
   the raw token now — it is shown once and never displayed again.

Then post a report from the command line:
```shell
curl -X POST http://localhost:8081/api/v1/report \
  -H "Authorization: Bearer irdb_rep_…" \
  -H "Content-Type: application/json" \
  -d '{"ip":"203.0.113.42","category":"brute_force","metadata":{"url":"/wp-login.php"}}'
# → 202 {"report_id":1,"ip":"203.0.113.42","received_at":"…"}
```
For the distribution side: create a consumer (Consumers → New, pick a
policy — `moderate` is a good default), create a consumer token, then
pull the blocklist:

```shell
curl http://localhost:8081/api/v1/blocklist -H "Authorization: Bearer irdb_con_…"
# → text/plain: one IP or CIDR per line
```

Add `?format=json` for richer per-entry data. Use the ETag /
`If-None-Match` round-trip to skip retransfer if nothing changed.

End-to-end examples for fail2ban, iptables-restore, nginx, and HAProxy
are in `examples/`.
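The ETag round-trip can be sketched as a consumer-side poll. The URL, token, and state paths below are placeholders from the quickstart, and curl 7.68+ is assumed for `--etag-save` / `--etag-compare`, which handle `If-None-Match` natively:

```shell
#!/bin/sh
# Hypothetical consumer fetch: downloads the list only when it changed.
URL="${IRDB_URL:-http://localhost:8081/api/v1/blocklist}"
TOKEN="${IRDB_TOKEN:-irdb_con_REPLACE_ME}"
STATE="${IRDB_STATE:-/tmp/irdb-consumer}"
mkdir -p "$STATE"
touch "$STATE/etag"

# -w prints the HTTP status; "000" means the request never completed.
code=$(curl -s -o "$STATE/blocklist.new" -w '%{http_code}' \
  --etag-compare "$STATE/etag" --etag-save "$STATE/etag" \
  -H "Authorization: Bearer $TOKEN" "$URL")

case "$code" in
  200) mv "$STATE/blocklist.new" "$STATE/blocklist"
       echo "updated: $(wc -l < "$STATE/blocklist") entries" ;;
  304) echo "unchanged; cached copy kept" ;;
  *)   echo "fetch failed (HTTP $code)" ;;
esac
```

Run it from cron on the firewall host and feed `$STATE/blocklist` to whatever applies the rules (see `examples/` for real integrations).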
## Reverse proxy

The default compose deployment exposes plain HTTP on :8080 (UI) and
:8081 (api). For production, front them with Caddy / nginx / Traefik
and route by hostname:

```
reputation.example.com     → ui:8080
reputation-api.example.com → api:8081
```

A working Caddy config is in `examples/reverse-proxy/Caddyfile` — it
terminates TLS via Let's Encrypt and forwards both hostnames.
Single-hostname routing (everything under reputation.example.com with
`/api/*` → api, `/*` → ui) is documented as an alternative in the
example file.
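As a rough sketch of the two-hostname layout (the committed `examples/reverse-proxy/Caddyfile` is authoritative), a Caddy config would be approximately:

```
# Caddy obtains Let's Encrypt certificates automatically for public
# hostnames; only the proxy targets need declaring.
reputation.example.com {
    reverse_proxy ui:8080
}

reputation-api.example.com {
    reverse_proxy api:8081
}
```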
## MySQL instead of SQLite

SQLite (default) is fine for single-host deployments. For networked
storage or multi-replica api scaling, switch to MySQL:

1. Enable the `mysql` service block in `docker-compose.yml`.
2. Set `DB_DRIVER=mysql` and the `DB_MYSQL_*` vars in `.env`.
3. `docker compose up -d`.

The `migrate` container runs the same Phinx migrations against MySQL on
boot. Phinx detects the adapter; the only schema-shape difference is
adapter-aware `DATETIME(6)` vs SQLite `TEXT` for timestamps (handled
in `BaseMigration`).

Networked storage warning: SQLite's WAL mode is unreliable on NFS / SMB / EFS. If you use networked storage, use MySQL.
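For illustration, the switch might look like the fragment below. `DB_DRIVER=mysql` comes from the text above, but the individual `DB_MYSQL_*` names are guesses; `.env.example` has the real ones:

```
DB_DRIVER=mysql
# Hypothetical variable names — consult .env.example:
DB_MYSQL_HOST=mysql
DB_MYSQL_PORT=3306
DB_MYSQL_DATABASE=irdb
DB_MYSQL_USERNAME=irdb
DB_MYSQL_PASSWORD=change-me
```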
## OIDC login

Walkthrough in `doc/auth-flows.md`, sections
"Entra setup" and "OIDC configuration variables". Set the
`OIDC_*` vars in `.env`, restart the `ui` container, and the login page
gains a "Sign in with Microsoft" button.

Group → role mapping lives in the `oidc_role_mappings` table. Until
the dedicated admin UI ships, populate it directly:

```shell
docker compose exec -T api sh -c \
  "sqlite3 /data/irdb.sqlite \\
  \"INSERT INTO oidc_role_mappings(group_id, role) VALUES('<entra-group-id>', 'admin');\""
```

The default role for unmapped users is `viewer`; set
`OIDC_DEFAULT_ROLE=none` in `.env` to deny their logins instead.
## Scheduled jobs

Periodic jobs (recompute scores, refresh GeoIP, prune audit log,
enrich pending IPs) are exposed at `/internal/jobs/*` on the api. Three
ways to drive them:

1. Sidecar (default in `compose.scheduler.yml`) — busybox `crond` posts
   to the api once a minute. No host setup required. Started by:

   ```shell
   docker compose -f docker-compose.yml -f compose.scheduler.yml up -d
   ```

2. Host cron — install `examples/scheduler/host.crontab` into the system
   crontab. Suitable when you don't want a sidecar.

3. systemd timer — install `examples/scheduler/irdb-tick.service` and
   `examples/scheduler/irdb-tick.timer` into `/etc/systemd/system`, then
   `systemctl enable --now irdb-tick.timer`.

All three drive the same `/internal/jobs/tick` endpoint, which is the
dispatcher: it asks `job_runs` what's due and invokes those jobs in
turn. The endpoint is bound to RFC1918 + loopback only (Caddyfile
config in `api/docker/Caddyfile`); external requests get a 404.
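As a hedged sketch of what the systemd pair might contain (the committed files in `examples/scheduler/` are authoritative, and whether the tick endpoint wants `INTERNAL_JOB_TOKEN` as a bearer header is an assumption):

```ini
# Hypothetical /etc/systemd/system/irdb-tick.timer
[Unit]
Description=irdb job dispatcher tick

[Timer]
OnCalendar=minutely
Persistent=true

[Install]
WantedBy=timers.target

# Hypothetical /etc/systemd/system/irdb-tick.service
[Unit]
Description=POST the irdb /internal/jobs/tick endpoint

[Service]
Type=oneshot
# EnvironmentFile path and the Authorization header are assumptions;
# check the committed unit file for the real wiring.
EnvironmentFile=/etc/irdb/scheduler.env
ExecStart=/usr/bin/curl -fsS -X POST \
    -H "Authorization: Bearer ${INTERNAL_JOB_TOKEN}" \
    http://localhost:8081/internal/jobs/tick
```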
## Updating

Pull new code and rebuild — Docker's layer cache keeps the rebuild fast when only app code changes:

```shell
git pull
docker compose -f docker-compose.yml -f compose.scheduler.yml up --build -d
docker compose logs -f   # Ctrl-C once migrate exits 0 and api/ui are healthy
```

`up --build -d` rebuilds the local images and recreates only the containers
whose image hash or config changed. The `migrate` container reruns Phinx
automatically; new migrations in `db/migrations/` are picked up in order.
The `irdb-data` volume persists, so SQLite state, GeoIP MMDBs, and the
audit log carry forward across updates.

Don't use `docker compose restart` to pick up new code — that just bounces
the existing containers. Don't use `docker compose down -v` either — that
deletes the volume.

Edge cases (failed migrations, force-rebuild, rollback, fixing volume ownership after a uid change, disk cleanup, scheduler ops) are covered in the admin manual.
## Backup and restore

The api's persistent state lives in one of two places.

SQLite (default) — online-safe via the SQLite backup API:

```shell
docker compose exec api sh -c \
  'sqlite3 /data/irdb.sqlite ".backup /data/irdb-backup.sqlite"'
docker compose cp api:/data/irdb-backup.sqlite ./irdb-backup-$(date +%F).sqlite
```

The `.backup` command is the only correct way to copy a live SQLite
database with WAL — it quiesces the journal and produces a consistent
snapshot.
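An optional follow-up check, sketched here with the `sqlite3` CLI assumed on the host: `PRAGMA integrity_check` reads every page of the copied-out snapshot and prints `ok` when it is consistent. For runnability a scratch database stands in for the real `irdb-backup-*.sqlite`:

```shell
# Point BACKUP at your real backup file; the scratch database below only
# exists so the sketch runs end to end.
BACKUP="${BACKUP:-$(mktemp -u /tmp/irdb-check.XXXXXX)}"
[ -f "$BACKUP" ] || sqlite3 "$BACKUP" 'CREATE TABLE sanity(x INTEGER);'
result=$(sqlite3 "$BACKUP" 'PRAGMA integrity_check;')
echo "$result"   # "ok" means every page is readable and consistent
```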
SQLite — whole-volume tarball (alternative; requires the api to be stopped or quiesced):

```shell
docker compose stop api
docker run --rm -v irdb-data:/data -v "$(pwd):/backup" alpine \
  tar czf /backup/irdb-backup.tar.gz -C /data .
docker compose start api
```

Restore: `docker compose down`, drop or empty the volume, then extract:

```shell
docker run --rm -v irdb-data:/data -v "$(pwd):/backup" alpine \
  sh -c 'rm -rf /data/* && tar xzf /backup/irdb-backup.tar.gz -C /data'
docker compose up -d
```
MySQL:

```shell
docker compose exec mysql sh -c \
  'mysqldump --single-transaction --routines --quick \
  -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
  > irdb-mysql-$(date +%F).sql
```

Restore (the api must be stopped during restore so it doesn't observe a half-loaded schema):

```shell
docker compose stop api migrate
docker compose exec -T mysql sh -c \
  'mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
  < irdb-mysql-2026-04-29.sql
docker compose up -d migrate api
```

The schema is small (under 20 tables); a multi-GB dump is a red flag —
`audit_log` and `reports` are the only tables that grow with use, and
`cleanup-audit` + `score_hard_cutoff_days` bound them.
What NOT to back up:

- Raw API tokens — the `api_tokens` table is included in the database
  backup automatically, but the raw token strings shown once on creation
  aren't recoverable. If a token is lost, revoke and re-issue.
- GeoIP databases (`/data/geoip/*.mmdb`) — re-downloadable via the
  `refresh-geoip` job on first run after restore.
- Secrets — `UI_SERVICE_TOKEN` etc. live in `.env`, not in the database;
  back up the env file separately if you need to redeploy from a blank
  node.

See `doc/architecture.md` → Disaster Recovery for the end-to-end recovery
checklist.
## Architecture

Three containers (`api`, `ui`, `migrate`) plus optional `mysql` and
`scheduler` sidecars. The split is a BFF pattern: `api` is the
JSON backend (owns the database, business logic, RBAC); `ui` is the
browser-facing PHP+Twig frontend that holds sessions and forwards
calls with a service token + impersonation header.

Full diagram + rationale in `doc/architecture.md`.
## API documentation

The OpenAPI document is the source of truth: visit
http://localhost:8081/api/docs for the interactive viewer, or
fetch the YAML at `/api/v1/openapi.yaml`.

Higher-level prose (token kinds, auth flows, common conventions) lives
in `doc/api-overview.md`. For machine clients specifically, `examples/`
has copy-paste shell + Python scripts for both reporters and consumers.
## Frontend development

The PHP+Twig UI is deliberately replaceable. The api's contract, auth model, and token kinds are stable; the UI is one of several possible frontends (Vue, native desktop, and mobile clients are explicitly anticipated).

If you're building a replacement, start at
`doc/frontend-development.md`. It describes the three integration
patterns (BFF replacement, SPA + thin BFF, direct API), the minimum API
surface a fully-featured UI uses, and what NOT to do.
## Development

```shell
./scripts/ci.sh
```

Runs cs/stan/test for both subprojects and verifies that the docker compose images build. Requires Docker; no PHP/Node toolchain is needed on the host.

```shell
./scripts/check-doc-endpoints.sh
```

Doc accuracy guard: greps `doc/*.md` for `/api/v1/*` paths and fails if any
mentioned path is not in `api/public/openapi.yaml`. Run it after editing
either side.

```shell
./tests/e2e/demo.sh
```

End-to-end smoke check — boots compose, creates a reporter, a consumer, and their tokens, posts a report, pulls the blocklist, and tears down. Mirrors the quickstart documented above.
## License

Licensed under the Apache License, Version 2.0.
Copyright 2026 Alessandro Chiapparini <irdb@chiapparini.org>.
See `NOTICE` for required attribution when redistributing.