# Deployment
Ekklesia deploys to Railway as two separate services — an API service and a frontend (web) service — plus a dedicated pgvector database. Both services build from Dockerfiles in the repository.
## Services overview

```mermaid
graph LR
    subgraph Railway
        PG[(pgvector\nDatabase)]
        API[API Service\nFastAPI + uvicorn]
        WEB[Web Service\nnginx + React bundle]
    end
    BROWSER[Browser] -->|HTTPS| WEB
    WEB -->|private network| API
    API -->|private network| PG
    API -->|HTTPS| GEMINI[Google Gemini API]
    API -->|HTTPS| LOGFIRE[Logfire]
```
Private networking uses Railway's internal hostnames (`{service}.railway.internal`). Requests between services never leave Railway's network.
## Prerequisites

- Railway account and CLI (`npm install -g @railway/cli`)
- Google Gemini API key
- Logfire project + token (optional but recommended)
## Railway setup

### 1. Create project

Create a new project in the Railway dashboard (or with `railway init` from the CLI).

### 2. Add pgvector database

Standard PostgreSQL on Railway does not include the pgvector extension, so you must provision a dedicated pgvector service. In the Railway dashboard:

1. Click **+ New → Database** and search for **pgvector**
2. Select the pgvector template
3. Deploy it — Railway provisions a PostgreSQL instance with pgvector pre-installed

Note the internal hostname after deployment: `{database-service-name}.railway.internal:5432`.
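To sanity-check the provisioned database (this is not a step from the repo, just a quick verification), connect with `psql` and confirm the extension is present:

```sql
-- Should return one row with the installed pgvector version
SELECT extname, extversion FROM pg_extension WHERE extname = 'vector';

-- If the template did not pre-create it, this is safe to run repeatedly
CREATE EXTENSION IF NOT EXISTS vector;
```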
### 3. Deploy API service

Railway will auto-detect `railway.toml` at the repo root:

```toml
[build]
builder = "DOCKERFILE"
dockerfilePath = "Dockerfile"

[deploy]
startCommand = "sh start.sh"
healthcheckPath = "/health"
healthcheckTimeout = 600
restartPolicyType = "ON_FAILURE"
restartPolicyMaxRetries = 5
```
The `startCommand` delegates to `start.sh`, which prints startup diagnostics (`PORT`, `DATABASE_URL` prefix), runs Alembic migrations, then starts uvicorn. Migrations are idempotent — re-deploys are safe.
### 4. Deploy web service

The web service's `railway.toml` lives in `web/`:

```toml
[build]
builder = "DOCKERFILE"
dockerfilePath = "Dockerfile"

[deploy]
healthcheckPath = "/"
restartPolicyType = "ON_FAILURE"
restartPolicyMaxRetries = 5
```
## Environment variables

### API service

| Variable | Required | Example | Notes |
|---|---|---|---|
| `DATABASE_URL` | Yes | `postgres://user:pass@host:5432/db` | Railway injects this automatically if you link the pgvector service; scheme is normalised to `postgresql+asyncpg://` |
| `LLM_API_KEY` | Yes | `AIza...` | Google Gemini API key |
| `LLM_MODEL` | No | `gemini-2.0-flash-001` | Defaults to `gemini-2.0-flash`; use a pinned version (e.g. `gemini-2.0-flash-001`) in production |
| `LLM_PROVIDER` | No | `gemini` | LLM provider; defaults to `gemini` |
| `LOGFIRE_TOKEN` | No | `pylf_...` | Logfire project token; omit to disable tracing |
| `CORS_ORIGINS` | Yes | `https://ekklesia-web.up.railway.app` | Must include `https://`; comma-separated for multiple origins |
| `LOG_LEVEL` | No | `INFO` | Python logging level |

Set these in the Railway dashboard under **API service → Variables**.
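The exact parsing lives in the app's settings, but a comma-separated `CORS_ORIGINS` value is typically split like this (the helper name is illustrative, not from the codebase):

```python
def parse_cors_origins(raw: str) -> list[str]:
    """Split a comma-separated CORS_ORIGINS value into clean origin strings."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]


origins = parse_cors_origins(
    "https://ekklesia-web.up.railway.app, http://localhost:5173"
)
# Each origin keeps its scheme, which the CORS middleware requires
```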
### Web service

| Variable | Required | Example | Notes |
|---|---|---|---|
| `API_HOST` | Yes | `mainline.proxy.rlwy.net:29582` | Railway TCP proxy address for the API service — found in API service variables as `$RAILWAY_TCP_PROXY_DOMAIN:$RAILWAY_TCP_PROXY_PORT` |
The nginx entrypoint reads the container's DNS resolver from `/etc/resolv.conf` and substitutes `$API_HOST`, `$PORT`, and `$NAMESERVER` into the config template at startup:

```sh
#!/bin/sh
API_HOST="${API_HOST:-api:8000}"
PORT="${PORT:-80}"
RAW_NS=$(grep nameserver /etc/resolv.conf | awk '{print $2}' | head -1)

# nginx requires IPv6 resolver addresses wrapped in brackets
case "$RAW_NS" in
    *:*) NAMESERVER="[$RAW_NS]" ;;
    *)   NAMESERVER="$RAW_NS" ;;
esac

export API_HOST PORT NAMESERVER
envsubst '$API_HOST $PORT $NAMESERVER' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'
```
The resolver is read dynamically because Railway uses an IPv6 DNS server (`fd12::10`) while Docker Compose uses `127.0.0.11` — hardcoding either breaks the other environment.
## Docker services

### API Dockerfile (root)

Two-stage build:

1. **Builder stage** — installs dependencies via `uv sync --no-dev --frozen`
2. **Runtime stage** — copies `.venv`, `src/`, `scripts/`, `data/`, `alembic/`

`start.sh` prints `PORT` and the first 30 chars of `DATABASE_URL` before running migrations — useful for diagnosing Railway startup failures.
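The repo's Dockerfile is the source of truth; the following is only a representative sketch of the two stages described above, with base images, `alembic.ini`, and exact paths as assumptions:

```dockerfile
# Builder stage: resolve and install locked dependencies with uv
FROM python:3.12-slim AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --no-dev --frozen

# Runtime stage: copy the prepared virtualenv and application code only
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /app/.venv .venv
COPY src/ src/
COPY scripts/ scripts/
COPY data/ data/
COPY alembic/ alembic/
COPY alembic.ini start.sh ./
ENV PATH="/app/.venv/bin:$PATH"
CMD ["sh", "start.sh"]
```

Keeping the uv toolchain out of the runtime stage keeps the final image small; only the populated `.venv` crosses the stage boundary.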
### Web Dockerfile (web/)

Two-stage build:

1. **Builder stage** — `npm ci` + `npm run build` (no `VITE_API_URL` set; the bundle uses relative URLs)
2. **Runtime stage** — `nginx:alpine` + static `dist` files + `nginx.conf.template` + `entrypoint.sh`
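Again a sketch rather than the repo's actual file (base image tags and filenames are assumptions), showing how the template and entrypoint end up in the runtime image:

```dockerfile
# Builder stage: install pinned deps and produce the static bundle
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: nginx serves the bundle; the entrypoint renders the config template
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf.template /etc/nginx/conf.d/default.conf.template
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```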
### nginx config (web/nginx.conf.template)

```nginx
resolver ${NAMESERVER} valid=10s ipv6=off;
set $api_upstream http://${API_HOST};

location ~ ^/(sermons|lookup|passage|health) {
    proxy_pass $api_upstream;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;  # critical for SSE
    proxy_cache off;
    chunked_transfer_encoding on;
}
```
Three things are required for SSE to work end-to-end:

1. `proxy_buffering off` — without this, nginx buffers events until the connection closes; the browser sees nothing until the pipeline finishes.
2. `set $api_upstream` with a `resolver` — deferring the DNS lookup to request time prevents nginx from failing to start when the API isn't yet reachable. The resolver IP is injected at container startup from `/etc/resolv.conf`.
3. An `X-Accel-Buffering: no` response header on the `/sermons` endpoint — this signals Railway's edge proxy not to buffer the stream (without it, Railway cuts the connection after ~30 seconds).
## Local development

### Start the stack

Run `docker compose up --build` from the repo root. This starts three services:

- **db** — PostgreSQL + pgvector on port 5433 (host), 5432 (internal)
- **api** — FastAPI on port 8000
- **web** — nginx serving the React bundle on port 5173

The API container runs `alembic upgrade head` automatically on start.
### Ingestion

Run the ingestion scripts after the database is healthy:

```sh
# Lexicon and cross-references (no Gemini API key needed)
docker compose exec api python scripts/ingest_lexicon.py
docker compose exec api python scripts/ingest_cross_references.py

# Corpus ingestion (requires LLM_API_KEY in .env)
docker compose exec api python scripts/ingest_bible.py
docker compose exec api python scripts/ingest_commentary.py
```

Full ingestion takes several minutes — the embedding API is called in batches for each passage.
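The batching pattern behind those embedding calls is roughly the following sketch (batch size and names are illustrative, not taken from the scripts):

```python
from collections.abc import Iterator


def batched(items: list[str], size: int = 100) -> Iterator[list[str]]:
    """Yield fixed-size batches so each embedding API call stays within request limits."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


passages = [f"passage-{i}" for i in range(250)]
batches = list(batched(passages))
# 250 passages at batch size 100 -> batches of 100, 100, and 50
```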
### Environment file

Copy `.env.example` to `.env` and fill in the required values:

```sh
DATABASE_URL=postgresql+asyncpg://ekklesia:ekklesia@localhost:5433/ekklesia
LLM_API_KEY=your_gemini_api_key_here
LOGFIRE_TOKEN=          # optional
CORS_ORIGINS=http://localhost:5173
```
## Database migrations

Migrations are managed with Alembic. The migration history lives in `alembic/versions/`.

```sh
# Check current migration state
alembic current

# Apply all pending migrations
alembic upgrade head

# Generate a new migration from schema changes
alembic revision --autogenerate -m "description"
```

`alembic/env.py` uses `Settings().DATABASE_URL`, so scheme normalisation (`postgres://` → `postgresql+asyncpg://`) applies to migration runs as well as the application server.
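The normalisation itself is a simple scheme rewrite. A sketch consistent with the behaviour described here (the real logic lives in the app's `Settings`; this helper is illustrative):

```python
def normalise_database_url(url: str) -> str:
    """Rewrite Railway's postgres:// scheme to SQLAlchemy's asyncpg dialect."""
    if url.startswith("postgres://"):
        return url.replace("postgres://", "postgresql+asyncpg://", 1)
    return url


print(normalise_database_url("postgres://user:pass@host:5432/db"))
# postgresql+asyncpg://user:pass@host:5432/db
```

URLs that already carry the `postgresql+asyncpg://` scheme (like the local `.env` example above) pass through unchanged.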
### Key migrations

| Revision | Description |
|---|---|
| `initial` | Creates all tables, HNSW indexes, GIN indexes |
| `b2f4c8e1d937` | Resizes embedding columns from `vector(384)` to `vector(768)`; drops and recreates HNSW indexes; existing embeddings are set to `NULL` (re-ingestion required) |
## Logfire

Logfire is configured in `src/ekklesia/api/app.py` before any other local imports:

```python
logfire.configure(token=_settings.LOGFIRE_TOKEN or None, send_to_logfire="if-token-present")
logfire.instrument_sqlalchemy()
logfire.instrument_httpx()
logfire.instrument_pydantic_ai()
```

`send_to_logfire="if-token-present"` means traces are sent to Logfire when `LOGFIRE_TOKEN` is set and silently no-op'd when it isn't, so the same Docker image can run with or without observability enabled.
The Logfire dashboard shows:

- Per-request FastAPI traces
- Per-stage `sermon_pipeline` spans
- Individual `hybrid_search` and `embed_query` spans with result counts
- Every LLM call via pydantic-ai, with token usage and tool invocations
- SQLAlchemy query traces