# Sense Backend

The `axion-sense` Helm chart deploys the Sense backend tier in one release.
## Workloads deployed

| Component | Role | See |
|---|---|---|
| `api` | REST + gRPC entry point. OIDC auth, Postgres metadata, ClickHouse reads/writes, Kafka producer for frame events. | Sense API (L3) |
| `worker` | Kafka consumer + Hangfire job runner. ClickHouse batch insert, map-matching writeback, custom-layers sync, coverage tile builds. | Sense Worker (L3) |
| `vision.detections-api` | HTTP ingress for detection results from external detectors. | Frame upload + ML recognition flow |
| `vision.worker` | Kafka consumer fanning frames out to detector backends. | (same flow) |
| `vision.quality-worker` | Frame-quality classifier results. | (same flow) |
| `vision.clusterization-worker` | Groups detections into objects. | (same flow) |
| `vision.matching` | Map-matching worker (Valhalla or external API). | Valhalla |
| `vision.triton` | NVIDIA Triton inference server (separate image). | (Vision pipeline) |
| `vision.valhalla` (optional) | Bundled OSM routing engine for map-matching. Off by default. | Valhalla |
The Vision tier is enabled by default and can be disabled with `vision.enabled: false` if the partner runs detection externally.
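For example, a values override that turns the whole Vision tier off (all other values unchanged) would look like this:

```yaml
# custom-values.yaml excerpt: disable the ML pipeline entirely
vision:
  enabled: false
```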
## Prerequisites

- Kubernetes cluster (v1.27+)
- Helm 3.12+
- `kubectl` configured for your cluster
- PostgreSQL 17+ — two databases provisioned and reachable from the cluster (one for the API, one for the Worker / Hangfire); both can live in the same PostgreSQL cluster
- ClickHouse 26+ reachable from the cluster
- Apache Kafka 3.x (or Confluent Cloud) reachable from the cluster
- S3-compatible object storage (AWS S3, MinIO, Oracle OS, etc.) for frame images
- OpenFGA service reachable from the cluster (for fine-grained authorization)
- OIDC provider (Keycloak, Azure AD, Auth0, etc.)
- cert-manager (only if you enable `*.certificate.enabled`)

For the less-common dependencies see ClickHouse, OpenFGA, and Valhalla.
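Before installing, it can be worth confirming the backing services are actually reachable from inside the cluster. A quick sketch using throwaway busybox pods (the hostnames are the placeholders used in `custom-values.yaml` below; substitute your own):

```bash
# DNS resolution check for the Postgres host
kubectl run netcheck --rm -i --restart=Never --image=busybox -- \
  nslookup postgres.partner.internal

# ClickHouse HTTP interface check; a healthy server replies "Ok."
kubectl run netcheck --rm -i --restart=Never --image=busybox -- \
  wget -qO- http://clickhouse.partner.internal:8123/ping
```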
## 1. Prepare your values file

Create `custom-values.yaml` and fill in all required values below. Helm will refuse to render the chart if any of them are missing or empty.
### Required values

| Value path | Description |
|---|---|
| `api.image.repository` | Full image path for the API in your private registry |
| `api.image.tag` | API image version tag |
| `worker.image.repository` | Full image path for the Worker in your private registry |
| `worker.image.tag` | Worker image version tag |
| `oidc.authority` | OIDC issuer URL (e.g. `https://idp.example.com/realms/main`) |
| `oidc.clientId` | OAuth2 client ID |
| `oidc.clientSecret` | OAuth2 client secret |
| `kafka.bootstrapServers` | Kafka bootstrap servers (`host:9092` or comma-separated list) |
| `s3.host` | S3-compatible endpoint (e.g. `s3.amazonaws.com`) |
| `s3.framesBucketName` | Bucket where frame images are stored |
| `s3.accessKey` | S3 access key |
| `s3.secretKey` | S3 secret key |
| `postgres.api.host` | PostgreSQL hostname for the API database |
| `postgres.api.database` | API database name (e.g. `axion_sense`) |
| `postgres.api.username` | API database user |
| `postgres.api.password` | API database password |
| `postgres.worker.host` | PostgreSQL hostname for the Worker / Hangfire database |
| `postgres.worker.database` | Worker database name (e.g. `axion_sense_tasks`) |
| `postgres.worker.username` | Worker database user |
| `postgres.worker.password` | Worker database password |
| `clickhouse.host` | ClickHouse hostname |
| `clickhouse.username` | ClickHouse user |
| `clickhouse.password` | ClickHouse password |
| `openfga.apiUrl` | OpenFGA HTTP API URL |
| `openfga.apiKey` | OpenFGA pre-shared API key |
| `vision.image.repository` | Vision image path (shared by detections-api, worker, quality, clusterization, matching) |
| `vision.image.tag` | Vision image version tag |
| `vision.triton.image.repository` | Triton inference server image path |
| `vision.triton.image.tag` | Triton image version tag |
| `vision.config.detectors` | At least one detector backend (name, apiUrl, optional authToken) |
Vision is enabled by default. To run without the ML pipeline, set `vision.enabled: false` — the five `vision.*` rows above then become unnecessary.
Secrets (any `*.password`, `*.secretKey`, `*.apiKey`, `oidc.clientSecret`) can be passed via `--set` instead of storing them in the file:

```bash
--set oidc.clientSecret=xxx \
--set s3.secretKey=xxx \
--set postgres.api.password=xxx \
--set postgres.worker.password=xxx \
--set clickhouse.password=xxx \
--set openfga.apiKey=xxx
```
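Combined with the values file, a full install command might then look like the following sketch (the environment variables are illustrative):

```bash
# secrets come from the shell environment instead of custom-values.yaml
helm upgrade --install axion-sense \
  oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense \
  --version 0.0.0-latest \
  -n axion-sense --create-namespace \
  -f custom-values.yaml \
  --set oidc.clientSecret="$OIDC_CLIENT_SECRET" \
  --set postgres.api.password="$PG_API_PASSWORD"
```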
### Conditionally required

| Value path | When required |
|---|---|
| `api.apiKeys.keys[].key` | If you add entries to `api.apiKeys.keys` |
| `api.apiKeys.keys[].serviceName` | If you add entries to `api.apiKeys.keys` |
| `api.ingress.hosts` | If `api.ingress.enabled: true` |
| `api.ingressGrpc.hosts` | If `api.ingressGrpc.enabled: true` |
| `api.certificate.issuerRef.name` | If `api.certificate.enabled: true` |
| `api.certificate.dnsNames` | If `api.certificate.enabled: true` |
| `worker.ingress.hosts` | If `worker.ingress.enabled: true` (Hangfire) |
| `kafka.saslUsername` / `saslPassword` | If `kafka.securityProtocol` is set (e.g. Confluent Cloud) |
| `vision.detectionsApi.bearerTokens` | If you require auth on the Detections API |
| `worker.customLayersSync.connectionString` / `password` | If `worker.customLayersSync.enabled: true` |
| `vision.config.mapMatching.mapMatchingApi.baseUrl` | If `vision.config.mapMatching.backend: "map_matching_api"` (otherwise enable bundled `vision.valhalla`) |
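As one concrete case, connecting to a SASL-secured Kafka (e.g. Confluent Cloud) might look like the sketch below. The exact `securityProtocol` string accepted by the chart is an assumption here, so verify it against the chart's default values:

```yaml
kafka:
  bootstrapServers: "pkc-xxxxx.europe-west1.gcp.confluent.cloud:9092"  # illustrative host
  securityProtocol: "SASL_SSL"   # assumed value; check the chart defaults
  saslUsername: "confluent-api-key"
  saslPassword: "change-me"
```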
### Optional (sensible defaults provided)

| Value path | Default | Notes |
|---|---|---|
| `api.replicaCount` | `1` | Increase for HA |
| `worker.replicaCount` | `1` | Hangfire job storage is in PG; scale carefully |
| `api.service.type` | `ClusterIP` | Change to `LoadBalancer`/`NodePort` if needed |
| `api.cors.allowedOrigins` | `[]` | Set to your frontend origin(s) |
| `s3.secured` | `true` | Set `false` for plain HTTP (e.g. local MinIO) |
| `postgres.api.port` | `5432` | |
| `postgres.api.maxPoolSize` | `20` | |
| `postgres.worker.maxPoolSize` | `10` | |
| `clickhouse.port` | `8123` | HTTP interface |
| `clickhouse.database` | `axion_sense` | |
| `openfga.storeName` | `axion-sense` | Auto-created on first migration if missing |
| `oidc.audience` | (= `clientId`) | Override if your IdP uses a separate audience |
| `oidc.requireHttpsMetadata` | `true` | Set `false` only for local dev |
| `api.migration.enabled` | `true` | Disable if you run migrations externally |
| `worker.migration.enabled` | `true` | Disable if you run migrations externally |
| `global.imagePullSecrets` | `[]` | Set if your registry is private |
| `kafka.filterByEnv` | `false` | Set with `global.env` to share Kafka topics across environments |
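For instance, a production-leaning override of a few of these defaults (values illustrative) could be:

```yaml
api:
  replicaCount: 3                # run the API highly available
  cors:
    allowedOrigins:
      - "https://app.partner.example.com"
postgres:
  api:
    maxPoolSize: 40              # larger pool for the extra replicas
```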
### Minimal custom-values.yaml

A working file with only the required values (everything else uses chart defaults):
```yaml
api:
  image:
    repository: "registry.partner.com/axion/sense-api"
    tag: "1.0.0"
worker:
  image:
    repository: "registry.partner.com/axion/sense-worker"
    tag: "1.0.0"
oidc:
  authority: "https://keycloak.partner.com/realms/main"
  clientId: "axion-sense-api"
  clientSecret: "change-me"
kafka:
  bootstrapServers: "kafka.partner.internal:9092"
s3:
  host: "s3.eu-west-1.amazonaws.com"
  framesBucketName: "axion-sense-frames"
  accessKey: "AKIAIOSFODNN7EXAMPLE"
  secretKey: "change-me"
postgres:
  api:
    host: "postgres.partner.internal"
    database: "axion_sense"
    username: "axion_user"
    password: "change-me"
  worker:
    host: "postgres.partner.internal"
    database: "axion_sense_tasks"
    username: "axion_user"
    password: "change-me"
clickhouse:
  host: "clickhouse.partner.internal"
  username: "default"
  password: "change-me"
openfga:
  apiUrl: "http://openfga.partner.internal:8080"
  apiKey: "change-me"
vision:
  image:
    repository: "registry.partner.com/axion/sense-vision"
    tag: "1.0.0"
  triton:
    image:
      repository: "registry.partner.com/axion/sense-vision-triton"
      tag: "1.0.0"
  config:
    detectors:
      - name: "primary-detector"
        apiUrl: "http://detector.partner.internal/api/v1/upload"
        authToken: "change-me"   # optional
        timeoutSeconds: 30.0
        maxRetries: 1
    mapMatching:
      backend: "map_matching_api"   # or "valhalla" + set vision.valhalla.enabled=true
      mapMatchingApi:
        baseUrl: "https://map-matching.partner.internal"
        apiKey: "change-me"
```
## 2. Get Helm chart and container images

### 2.1 Obtain a service-account key

Request `sa-key.json` from @v.loboda. This key grants read access to the Axion artifact registry.
### 2.2 Pull container images

```bash
# authenticate with the registry
cat sa-key.json | docker login -u _json_key --password-stdin "https://europe-west1-docker.pkg.dev"

# pull (replace :latest with a pinned commit-sha tag for production)
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/sense-api:latest
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/sense-worker:latest

# vision images (only if vision.enabled is true)
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/sense-vision:latest
docker pull europe-west1-docker.pkg.dev/axionx-infra/axion/sense-vision-triton:latest
```
### 2.3 Pull the Helm chart

```bash
cat sa-key.json | helm registry login europe-west1-docker.pkg.dev -u _json_key --password-stdin
helm pull oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense --version 0.0.0-latest
```
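With the registry login in place you can also inspect the chart's full default values, which is handy when cross-checking the tables above:

```bash
# print the chart's default values.yaml to stdout
helm show values oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense \
  --version 0.0.0-latest
```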
### 2.4 Push images to your private registry

```bash
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/sense-api:latest registry.example.com/axion/sense-api:latest
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/sense-worker:latest registry.example.com/axion/sense-worker:latest
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/sense-vision:latest registry.example.com/axion/sense-vision:latest
docker tag europe-west1-docker.pkg.dev/axionx-infra/axion/sense-vision-triton:latest registry.example.com/axion/sense-vision-triton:latest

docker push registry.example.com/axion/sense-api:latest
docker push registry.example.com/axion/sense-worker:latest
docker push registry.example.com/axion/sense-vision:latest
docker push registry.example.com/axion/sense-vision-triton:latest
```

If your registry requires authentication, create an image-pull secret and reference it in `global.imagePullSecrets`:

```bash
kubectl create namespace axion-sense
kubectl create secret docker-registry registry-creds \
  -n axion-sense \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>
```
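Then reference the secret in your values file. A minimal sketch, assuming the chart follows the usual Kubernetes `imagePullSecrets` list-of-names shape:

```yaml
global:
  imagePullSecrets:
    - name: registry-creds   # must match the secret created above
```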
## 3. Install

```bash
helm upgrade --install axion-sense \
  oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense \
  --version 0.0.0-latest \
  -n axion-sense --create-namespace \
  -f custom-values.yaml \
  --atomic --wait --timeout 15m
```

The `--atomic` flag rolls back automatically on failure. The 15-minute timeout covers two migration jobs (API: Postgres + ClickHouse + OpenFGA model; Worker: Postgres + Kafka topic creation), which can take several minutes on a fresh database.
## 4. Verify

```bash
kubectl get pods -n axion-sense
kubectl get jobs -n axion-sense    # both migration jobs should be Complete
helm -n axion-sense status axion-sense

kubectl port-forward svc/axion-sense-api 8080:80 -n axion-sense
curl http://localhost:8080/health
```
## 5. Expose the services (optional)

### Option A — Ingress (HTTP + gRPC)

```yaml
api:
  ingress:
    enabled: true
    className: nginx
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    hosts:
      - host: api.partner.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - hosts:
          - api.partner.example.com
        secretName: axion-sense-api-tls
  ingressGrpc:
    enabled: true
    className: nginx
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    hosts:
      - host: grpc.partner.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - hosts:
          - grpc.partner.example.com
        secretName: axion-sense-api-tls
```
To expose the Hangfire dashboard, enable `worker.service` and `worker.ingress` similarly, as in the sketch below.
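A minimal sketch, assuming `worker.ingress` mirrors the `api.ingress` schema (the hostname is illustrative):

```yaml
worker:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: hangfire.partner.example.com
        paths:
          - path: /
            pathType: Prefix
```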
### Option B — NodePort / LoadBalancer
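No ingress controller is needed for this option; per the optional-values table, switching `api.service.type` is enough. For example:

```yaml
api:
  service:
    type: LoadBalancer   # or NodePort; the default is ClusterIP
```

Once the service receives an external address, point clients at it directly.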
## 6. Upgrade and rollback

Update `api.image.tag` / `worker.image.tag` (and Vision tags if applicable) and re-run the install command from step 3. Migrations run as pre-upgrade hooks.
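For example, bumping only the image tags on the command line (the `1.1.0` tag is illustrative):

```bash
helm upgrade --install axion-sense \
  oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense \
  --version 0.0.0-latest \
  -n axion-sense -f custom-values.yaml \
  --set api.image.tag=1.1.0 \
  --set worker.image.tag=1.1.0 \
  --atomic --wait --timeout 15m
```

To review release history and roll back: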
```bash
helm -n axion-sense history axion-sense
helm -n axion-sense rollback axion-sense <REVISION> --wait --timeout 15m
```
Rollback restores Kubernetes objects but does not revert Postgres, ClickHouse, or OpenFGA schema changes — coordinate schema-breaking upgrades carefully.
## 7. Uninstall
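Remove the release with the standard Helm command:

```bash
helm uninstall axion-sense -n axion-sense
```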
The chart leaves PostgreSQL databases, ClickHouse data, S3 objects, OpenFGA store, and Kafka topics untouched. Drop them manually if you want a clean teardown.
## Troubleshooting

```bash
# render manifests locally to validate values
helm template axion-sense oci://europe-west1-docker.pkg.dev/axionx-infra/axion/charts/axion-sense \
  --version 0.0.0-latest -f custom-values.yaml

# component logs
kubectl logs -l app.kubernetes.io/component=api -n axion-sense --tail=100
kubectl logs -l app.kubernetes.io/component=worker -n axion-sense --tail=100

# migration job logs
kubectl logs -l app.kubernetes.io/component=db-migration -n axion-sense
kubectl logs -l app.kubernetes.io/component=worker-db-migration -n axion-sense

# describe pod for events/errors
kubectl describe pod -l app.kubernetes.io/component=api -n axion-sense

# inspect Kubernetes events
kubectl -n axion-sense get events --sort-by=.lastTimestamp
```