# Gen Backend
The axion-gen Helm chart deploys the Gen API and Gen Worker in one release.
## Workloads deployed

| Component | Role | See |
|---|---|---|
| `api` | REST API for dashboards, datasets, big-data SQL (DuckDB federation), maps (PMTiles, hexagons, clusters), roles, agent surface. OIDC auth, Postgres metadata, S3 for PMTiles. Hangfire client only — enqueues jobs but does not execute them. | Gen API (L3) |
| `worker` | Hangfire server. Runs `TableProfileRefreshJob`, `MapTilesGenerationJob`, `OntologyTilesGenerationJob`. Tippecanoe is baked into this image; the API image is chiseled. Optionally serves an OIDC-gated `/hangfire` dashboard. | Gen Containers (L2) |

Both pods share the same Postgres (Hangfire owns its own schema in the `axion_gen` database) and the same S3 bucket.
## Prerequisites

- Kubernetes cluster (v1.27+)
- Helm 3.12+
- `kubectl` configured for your cluster
- PostgreSQL 15+ database provisioned and reachable from the cluster (one logical database for both API and Worker — Hangfire keeps its tables in a separate schema)
- S3-compatible object storage (AWS S3, MinIO, etc.) for PMTiles and agent artifacts
- OIDC provider (Keycloak, Azure AD, Auth0, etc.)
- OpenFGA instance (v1.12+) reachable from the cluster — optional; leave `openfga.apiUrl` empty to disable OpenFGA and run `IPermissionService` in allow-all mode
- cert-manager (only if you enable `api.certificate.enabled`)
## 1. Prepare your values file

Create `custom-values.yaml` and fill in all required values below. Helm will refuse to render the chart if any of them are missing or empty.
### Required values

| Value path | Description |
|---|---|
| `api.image.repository` | Full API image path in your private registry |
| `api.image.tag` | API image version tag |
| `worker.image.repository` | Full Worker image path in your private registry (separate image — runs the Hangfire server: PMTiles generation, table profile refreshes, ontology tiles) |
| `worker.image.tag` | Worker image version tag |
| `oidc.authority` | OIDC issuer URL (e.g. `https://idp.example.com/realms/main`) |
| `oidc.clientId` | OAuth2 client ID |
| `oidc.clientSecret` | OAuth2 client secret |
| `s3.host` | S3-compatible endpoint (e.g. `s3.amazonaws.com`, `minio.local:9000`) |
| `s3.accessKey` | S3 access key |
| `s3.secretKey` | S3 secret key |
| `postgres.api.host` | PostgreSQL hostname (shared by API and Worker; Hangfire owns its own schema) |
| `postgres.api.database` | Database name (e.g. `axion_gen`) |
| `postgres.api.username` | Database user |
| `postgres.api.password` | Database password |
| `openfga.apiUrl` | OpenFGA HTTP API URL (e.g. `http://openfga.svc:8080`). Optional — leave empty to disable OpenFGA. |
| `openfga.apiKey` | OpenFGA pre-shared API key |
Secrets (`oidc.clientSecret`, `s3.secretKey`, `postgres.api.password`, `openfga.apiKey`) can be passed via `--set` instead of storing them in the file; append these flags to the `helm upgrade --install` command in step 3:

```bash
--set oidc.clientSecret=xxx \
--set s3.secretKey=xxx \
--set postgres.api.password=xxx \
--set openfga.apiKey=xxx
```
### Conditionally required

| Value path | When required |
|---|---|
| `api.apiKeys[].key` | If you add entries to `api.apiKeys` |
| `api.apiKeys[].serviceName` | If you add entries to `api.apiKeys` |
| `api.ingress.hosts` | If `api.ingress.enabled: true` |
| `api.certificate.issuerRef.name` | If `api.certificate.enabled: true` |
| `api.certificate.dnsNames` | If `api.certificate.enabled: true` |
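For illustration, an `api.apiKeys` entry in `custom-values.yaml` might look like this (the key and service name are placeholders; only the field names come from the table above):

```yaml
api:
  apiKeys:
    - key: "replace-with-a-long-random-token"
      serviceName: "reporting-service"
```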
### Optional (sensible defaults provided)

| Value path | Default | Notes |
|---|---|---|
| `api.replicaCount` | `1` | Increase for HA |
| `api.service.type` | `ClusterIP` | Change to `LoadBalancer`/`NodePort` if needed |
| `api.cors.allowedOrigins` | `[]` | Set to your frontend origin(s) |
| `api.resources` | 256Mi/512Mi mem | Tune per workload |
| `worker.replicaCount` | `1` | Hangfire scales horizontally — bump to run more jobs concurrently |
| `worker.resources` | 256Mi/1Gi mem | Bump for heavy tippecanoe runs (PMTiles for large datasets) |
| `worker.hangfire.dashboardEnabled` | `false` | Set `true` to expose the `/hangfire` UI on the Worker (OIDC-protected; uses the same `oidc.*` settings as the API) |
| `worker.hangfire.workerCount` | `2` | Parallel job threads per Worker pod |
| `worker.hangfire.queues` | `[default, critical]` | Queue names this Worker listens on |
| `worker.hangfire.allowedEmails` | `[]` | Optional whitelist for dashboard access; empty = any authenticated OIDC user |
| `s3.secured` | `true` | Set `false` for plain HTTP (e.g. local MinIO) |
| `postgres.api.port` | `5432` | |
| `postgres.api.maxPoolSize` | `20` | |
| `oidc.audience` | (= `clientId`) | Override if your IdP uses a separate audience |
| `oidc.requireHttpsMetadata` | `true` | Set `false` only for local dev |
| `openfga.storeName` | `"axion-gen"` | OpenFGA store name |
| `api.migration.enabled` | `true` | Disable if you run migrations externally |
| `global.imagePullSecrets` | `[]` | Set if your registry is private |
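As a sketch, overriding a few of these Worker defaults in `custom-values.yaml` (the numbers and the email address are illustrative placeholders, not recommendations):

```yaml
worker:
  replicaCount: 2
  hangfire:
    dashboardEnabled: true
    workerCount: 4
    allowedEmails:
      - ops@partner.example.com
```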
### Minimal `custom-values.yaml`

A working file with only the required values (everything else uses chart defaults):

```yaml
api:
  image:
    repository: "registry.partner.com/axion/gen-api"
    tag: "1.0.0"
worker:
  image:
    repository: "registry.partner.com/axion/gen-worker"
    tag: "1.0.0"
oidc:
  authority: "https://keycloak.partner.com/realms/main"
  clientId: "axion-gen-api"
  clientSecret: "change-me"
s3:
  host: "s3.eu-west-1.amazonaws.com"
  accessKey: "AKIAIOSFODNN7EXAMPLE"
  secretKey: "change-me"
postgres:
  api:
    host: "postgres.partner.internal"
    database: "axion_gen"
    username: "axion_user"
    password: "change-me"
openfga:
  apiUrl: "http://openfga.partner.internal:8080"
  apiKey: "change-me"
```
## 2. Get Helm chart and container images

### 2.1 Obtain a service-account key

Request `sa-key.json` from @v.loboda. This key grants read access to the Axion artifact registry.

### 2.2 Pull container images

The deployment ships two images — the API and the Worker (Hangfire job processor). Both must be pulled and then pushed to your private registry (step 2.4).

```bash
cat sa-key.json | docker login -u _json_key --password-stdin "https://europe-west1-docker.pkg.dev"
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/gen-api:latest
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/gen-worker:latest
```

### 2.3 Pull the Helm chart

```bash
cat sa-key.json | helm registry login europe-west1-docker.pkg.dev -u _json_key --password-stdin
helm pull oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-gen --version 0.0.0-latest
```
### 2.4 Push images to your private registry

```bash
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/gen-api:latest registry.example.com/axion/gen-api:latest
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/gen-worker:latest registry.example.com/axion/gen-worker:latest
docker push registry.example.com/axion/gen-api:latest
docker push registry.example.com/axion/gen-worker:latest
```

If your registry requires authentication:

```bash
kubectl create namespace axion-gen
kubectl create secret docker-registry registry-creds \
  -n axion-gen \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>
```
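Then reference the secret from your values file. Assuming the chart uses the standard Kubernetes list-of-names shape for pull secrets:

```yaml
global:
  imagePullSecrets:
    - name: registry-creds
```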
## 3. Install

```bash
helm upgrade --install axion-gen \
  oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-gen \
  --version 0.0.0-latest \
  -n axion-gen --create-namespace \
  -f custom-values.yaml \
  --atomic --wait --timeout 10m
```

The `--atomic` flag rolls back automatically on failure.
4. Verify¶
# pods should be Running — expect both API and Worker pods
kubectl get pods -n axion-gen
# check the migration job completed
kubectl get jobs -n axion-gen
# release status
helm -n axion-gen status axion-gen
# API health
kubectl port-forward svc/axion-gen-api 8080:80 -n axion-gen
curl http://localhost:8080/health/ready
# Worker health
kubectl port-forward svc/axion-gen-worker 8081:80 -n axion-gen
curl http://localhost:8081/health/ready
## 5. Sync ontology data

After verifying, hydrate the ontology. Each part syncs independently — flip flags to `false` to skip.

Territory synchronization triggers PMTiles generation: the API enqueues a Hangfire job and the actual tippecanoe run happens on the Worker pod (typically 1–2 minutes). Make sure the Worker is `Running` before triggering the sync; otherwise the job sits in `Enqueued` until a Worker comes online. All other parts sync inline and complete with the API response.
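A quick way to confirm the Worker is up before triggering the sync, using the same component label as in the Troubleshooting section below:

```bash
kubectl get pods -l app.kubernetes.io/component=worker -n axion-gen
```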
```bash
curl -X POST 'https://<YOUR_API_HOST>/api/v1/ontology/sync' \
  -H 'Authorization: Bearer <TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "relations": true,
    "entities": true,
    "kpiValues": true,
    "territories": true
  }'
```
To sync only specific parts, set the others to `false`:

```bash
curl -X POST 'https://<YOUR_API_HOST>/api/v1/ontology/sync' \
  -H 'Authorization: Bearer <TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{ "relations": true, "entities": true, "kpiValues": false, "territories": false }'
```
## 6. Expose the service (optional)

### Option A — Ingress

```yaml
api:
  ingress:
    enabled: true
    className: nginx
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    hosts:
      - host: api.partner.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - hosts:
          - api.partner.example.com
        secretName: axion-gen-api-tls
```
### Option B — NodePort / LoadBalancer
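Per `api.service.type` in the options table above, this amounts to switching the Service type in `custom-values.yaml`; a minimal sketch:

```yaml
api:
  service:
    type: LoadBalancer   # or NodePort
```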
## 7. Upgrade and rollback

Update `api.image.tag` and `worker.image.tag` (typically to the same version) and re-run the install command, as sketched below. Database migrations run automatically as a pre-upgrade hook.
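For example, to move both images to a hypothetical `1.1.0` without editing the values file, re-run the step 3 command with `--set` overrides:

```bash
helm upgrade --install axion-gen \
  oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-gen \
  --version 0.0.0-latest \
  -n axion-gen \
  -f custom-values.yaml \
  --set api.image.tag=1.1.0 \
  --set worker.image.tag=1.1.0 \
  --atomic --wait --timeout 10m
```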
To inspect release history and roll back to a previous revision:

```bash
helm -n axion-gen history axion-gen
helm -n axion-gen rollback axion-gen <REVISION> --wait --timeout 10m
```
## 8. Uninstall
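The release is removed with a standard `helm uninstall`; the namespace and external resources (the Postgres database, the S3 bucket) are left untouched:

```bash
helm -n axion-gen uninstall axion-gen
```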
## Troubleshooting

```bash
# render manifests locally to validate values
helm template axion-gen oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-gen \
  --version 0.0.0-latest -f custom-values.yaml

# API logs
kubectl logs -l app.kubernetes.io/component=api -n axion-gen --tail=100

# Worker logs (Hangfire job processor)
kubectl logs -l app.kubernetes.io/component=worker -n axion-gen --tail=100

# migration job logs
kubectl logs -l app.kubernetes.io/component=db-migration -n axion-gen

# describe pod for events/errors
kubectl describe pod -l app.kubernetes.io/component=api -n axion-gen
kubectl describe pod -l app.kubernetes.io/component=worker -n axion-gen

# Kubernetes events
kubectl -n axion-gen get events --sort-by=.lastTimestamp
```