Configuration Reference
This document provides a comprehensive reference for all configuration values in the Rulebricks Helm chart. Values are organized by component with detailed explanations of their purpose, defaults, and recommendations.
Core Configuration
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
global.domain | string | "" | Yes | The domain name for your Rulebricks instance. Must be a domain you control. |
global.email | string | "support@rulebricks.com" | Yes | Email address for Let's Encrypt certificate registration. |
global.licenseKey | string | "evaluation" | Yes | Your Rulebricks Enterprise license key. |
global.tlsEnabled | boolean | false | No | Enable HTTPS. Set to true after DNS is configured or when using external-dns. |
global.externalDnsEnabled | boolean | false | No | Enable external-dns annotations on ingresses for automatic DNS management. |
Domain Configuration
Your domain should follow the pattern: rulebricks.yourdomain.com
When self-hosting Supabase, an additional subdomain is created: supabase.rulebricks.yourdomain.com
```yaml
global:
  domain: 'rulebricks.acme.com'
  email: 'devops@acme.com'
```

Important: The domain must resolve to your cluster's load balancer before enabling TLS. See DNS Configuration for automatic setup.
SMTP Configuration
SMTP is required for user authentication flows (invitations, password resets, email verification).
| Parameter | Type | Default | Description |
|---|---|---|---|
global.smtp.host | string | "smtp.mailtrap.io" | SMTP server hostname |
global.smtp.port | integer | 2525 | SMTP server port (typically 25, 465, 587, or 2525) |
global.smtp.user | string | "demo-user" | SMTP authentication username |
global.smtp.pass | string | "demo-password" | SMTP authentication password |
global.smtp.from | string | "no-reply@rulebricks.com" | Sender email address |
global.smtp.fromName | string | "Rulebricks" | Sender display name |
Production SMTP Providers
| Provider | Host | Port | Notes |
|---|---|---|---|
| AWS SES | email-smtp.<region>.amazonaws.com | 587 | Requires verified domain |
| SendGrid | smtp.sendgrid.net | 587 | API key as password |
| Mailgun | smtp.mailgun.org | 587 | Domain verification required |
| Postmark | smtp.postmarkapp.com | 587 | Server token as password |
```yaml
global:
  smtp:
    host: 'email-smtp.us-east-1.amazonaws.com'
    port: 587
    user: 'AKIAIOSFODNN7EXAMPLE'
    pass: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
    from: 'no-reply@yourdomain.com'
    fromName: 'Your Company - Rulebricks'
```

Supabase Keys
These JWT keys are used for authentication between components.
| Parameter | Type | Description |
|---|---|---|
global.supabase.anonKey | string | Public/anonymous key for client-side auth |
global.supabase.serviceKey | string | Service role key for server-side operations |
global.supabase.jwtSecret | string | JWT signing secret (self-hosted only) |
Security: For production, generate new keys using Supabase's key generator or a secure random string generator. The default keys are for demo purposes only.
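For illustration, a self-hosted values override with freshly generated keys might look like the following sketch; all three values are placeholders you must replace with your own generated keys.

```yaml
global:
  supabase:
    # Placeholders - generate real values with Supabase's key generator
    anonKey: '<generated-anon-role-jwt>'
    serviceKey: '<generated-service-role-jwt>'
    jwtSecret: '<random-secret-of-at-least-32-characters>'
```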
Managed Supabase (Cloud)
When using Supabase Cloud instead of self-hosting:
| Parameter | Type | Description |
|---|---|---|
global.supabase.url | string | Project URL (e.g., https://abcd1234.supabase.co) |
global.supabase.projectRef | string | Project reference ID (derived from URL if empty) |
global.supabase.accessToken | string | Personal access token for Supabase CLI |
```yaml
global:
  supabase:
    url: 'https://abcd1234.supabase.co'
    anonKey: '<from-supabase-dashboard>'
    serviceKey: '<from-supabase-dashboard>'
    accessToken: '<personal-access-token>'
supabase:
  enabled: false # Disable self-hosted Supabase
```

Email Templates
Customize authentication email subjects and templates:
```yaml
global:
  supabase:
    emails:
      subjects:
        invite: 'Join your team on Rulebricks'
        confirmation: 'Confirm Your Email'
        recovery: 'Reset Your Password'
        emailChange: 'Confirm Email Change'
      templates:
        invite: 'https://your-cdn.com/templates/invite.html'
        confirmation: 'https://your-cdn.com/templates/verify.html'
        recovery: 'https://your-cdn.com/templates/password_change.html'
        emailChange: 'https://your-cdn.com/templates/email_change.html'
```

AI Features
Enable AI-powered rule generation (optional):
| Parameter | Type | Default | Description |
|---|---|---|---|
global.ai.enabled | boolean | false | Enable AI features |
global.ai.openaiApiKey | string | "" | OpenAI API key |
```yaml
global:
  ai:
    enabled: true
    openaiApiKey: 'sk-...'
```

SSO Configuration
Enable Single Sign-On (SSO) via OpenID Connect (OIDC).
| Parameter | Type | Default | Description |
|---|---|---|---|
global.sso.enabled | boolean | false | Enable SSO |
global.sso.provider | string | "" | Provider name (azure, google, okta, keycloak, ory, other) |
global.sso.url | string | "" | Provider URL (required for all except Google) |
global.sso.clientId | string | "" | OAuth client ID |
global.sso.clientSecret | string | "" | OAuth client secret |
Provider Types
Configure your provider in the Authentication tab of your Supabase dashboard. If you don't see your provider listed, use Keycloak – our proxy uses it as a bridge.
- Native Providers (`azure`, `google`, `okta`, `keycloak`): Supabase handles authentication directly.
- Custom Providers (`ory`, `other`): The app uses a built-in OIDC proxy we manage to translate paths for your IdP.
Identity Provider Setup
You must configure your Identity Provider (IdP) with the following settings:
- Scopes: `openid`, `email`, `profile`
- Response Type: `code`
- Grant Types: `authorization_code`, `refresh_token`
- Auth Method: `http body`
- Redirect URIs: `https://<your-domain>/api/sso-proxy/callback` and `<your-supabase-url>/auth/v1/callback`
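Putting the pieces together, an Okta configuration might look like this sketch; the hostname and credentials are placeholders.

```yaml
global:
  sso:
    enabled: true
    provider: 'okta'
    url: 'https://your-org.okta.com' # Required for all providers except Google
    clientId: '<oauth-client-id>'
    clientSecret: '<oauth-client-secret>'
```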
External Secrets
For enterprise deployments using external secret management:
| Parameter | Type | Description |
|---|---|---|
global.secrets.secretRef | string | Name of existing Kubernetes secret |
global.secrets.secretRefKeys.* | object | Key name mappings |
```yaml
global:
  secrets:
    secretRef: 'rulebricks-external-secrets'
    secretRefKeys:
      licenseKey: 'RULEBRICKS_LICENSE'
      smtpUser: 'SMTP_USERNAME'
      smtpPass: 'SMTP_PASSWORD'
      supabaseAnonKey: 'SUPABASE_ANON'
      supabaseServiceKey: 'SUPABASE_SERVICE'
      supabaseAccessToken: 'SUPABASE_TOKEN'
      openaiApiKey: 'OPENAI_KEY'
```

Rulebricks Application
The core application and high-performance solver (HPS).
Application Image
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.app.image.repository | string | "index.docker.io/rulebricks/app" | Docker image repository |
rulebricks.app.image.tag | string | "1.X.X" | Image version tag |
rulebricks.app.image.pullPolicy | string | "IfNotPresent" | Image pull policy |
Logging Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.app.logging.enabled | boolean | true | Enable decision logging |
rulebricks.app.logging.kafkaBrokers | string | "" | Kafka brokers (auto-discovered if empty) |
rulebricks.app.logging.kafkaTopic | string | "logs" | Kafka topic for logs |
rulebricks.app.logging.loggingDestination | string | "Console (stdout)" | Display label in UI |
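As an example, the sketch below pins the brokers explicitly rather than relying on auto-discovery; the broker address is illustrative and should match your Kafka service.

```yaml
rulebricks:
  app:
    logging:
      enabled: true
      kafkaBrokers: 'rulebricks-kafka:9092' # Illustrative - leave empty for auto-discovery
      kafkaTopic: 'logs'
```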
Ingress
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.ingress.enabled | boolean | true | Enable ingress |
rulebricks.ingress.className | string | "traefik" | Ingress class name |
rulebricks.ingress.paths | list | [{...}] | List of paths to route |
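A minimal override might look like the sketch below; the path entry shape follows the standard Kubernetes ingress schema and is an assumption here, so check your chart version's values for the exact structure.

```yaml
rulebricks:
  ingress:
    enabled: true
    className: 'traefik'
    paths:
      - path: /          # Assumed standard ingress path shape
        pathType: Prefix
```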
Redis
Redis is used for caching and session management.
| Parameter | Type | Default | Recommendation |
|---|---|---|---|
rulebricks.redis.resources.requests.cpu | string | "200m" | Increase for high traffic |
rulebricks.redis.resources.requests.memory | string | "256Mi" | — |
rulebricks.redis.resources.limits.cpu | string | "500m" | — |
rulebricks.redis.resources.limits.memory | string | "4Gi" | Increase for large rule sets |
rulebricks.redis.persistence.enabled | boolean | true | Keep enabled for production |
rulebricks.redis.persistence.size | string | "4Gi" | — |
rulebricks.redis.persistence.storageClass | string | "gp3" | Match your storage class |
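A production-leaning override based on the recommendations above could look like this sketch; the sizes are starting points, not measured guidance.

```yaml
rulebricks:
  redis:
    resources:
      requests:
        cpu: '500m'      # Raised from the 200m default for higher traffic
        memory: '512Mi'
      limits:
        cpu: '1000m'
        memory: '8Gi'    # Raised from 4Gi for large rule sets
    persistence:
      enabled: true      # Keep enabled for production
      size: '8Gi'
      storageClass: 'gp3'
```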
High Performance Server (HPS)
HPS handles rule execution with horizontal scaling.
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.hps.enabled | boolean | true | Enable HPS |
rulebricks.hps.replicas | integer | 3 | Number of HPS gateway replicas |
rulebricks.hps.workers.enabled | boolean | true | Enable worker pods |
rulebricks.hps.workers.replicas | integer | 4 | Base worker replica count |
HPS Image
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.hps.image.repository | string | "index.docker.io/rulebricks/hps" | Docker image repository |
rulebricks.hps.image.tag | string | "1.X.X" | Image version tag |
rulebricks.hps.image.pullPolicy | string | "Always" | Image pull policy |
HPS Resources
| Parameter | Type | Default |
|---|---|---|
rulebricks.hps.resources.requests.cpu | string | "1000m" |
rulebricks.hps.resources.requests.memory | string | "1Gi" |
rulebricks.hps.resources.limits.cpu | string | "2000m" |
rulebricks.hps.resources.limits.memory | string | "2Gi" |
KEDA Autoscaling for HPS Workers
| Parameter | Type | Default | Description |
|---|---|---|---|
rulebricks.hps.workers.keda.enabled | boolean | true | Enable KEDA autoscaling |
rulebricks.hps.workers.keda.minReplicaCount | integer | 4 | Minimum workers |
rulebricks.hps.workers.keda.maxReplicaCount | integer | 12 | Maximum workers |
rulebricks.hps.workers.keda.pollingInterval | integer | 10 | Seconds between metric checks |
rulebricks.hps.workers.keda.cooldownPeriod | integer | 300 | Seconds before scale-down |
rulebricks.hps.workers.keda.lagThreshold | integer | 50 | Kafka lag threshold |
rulebricks.hps.workers.keda.cpuThreshold | integer | 25 | CPU percentage threshold |
Tuning Recommendations:
```yaml
# High-throughput configuration
# Replicas should match partitions
rulebricks:
  hps:
    replicas: 4
    workers:
      replicas: 8
      keda:
        minReplicaCount: 12
        maxReplicaCount: 32
        lagThreshold: 5
        cpuThreshold: 20
```

Database (Supabase)
Self-Hosted vs. Managed
| Mode | supabase.enabled | Use Case |
|---|---|---|
| Self-hosted | true | Full control, air-gapped environments |
| Managed (Cloud) | false | Simplified operations, managed backups |
Self-Hosted Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
supabase.enabled | boolean | true | Deploy self-hosted Supabase |
supabase.secret.db.username | string | "postgres" | Database username |
supabase.secret.db.password | string | "postgres-password-change-me" | Change this! |
supabase.secret.db.database | string | "postgres" | Database name |
supabase.secret.dashboard.username | string | "supabase" | Studio dashboard username |
supabase.secret.dashboard.password | string | "dashboard-password-change-me" | Change this! |
Database Resources
| Parameter | Type | Default | Production Recommendation |
|---|---|---|---|
supabase.db.resources.requests.cpu | string | "500m" | "1000m" or higher |
supabase.db.resources.requests.memory | string | "1Gi" | "2Gi" or higher |
supabase.db.persistence.enabled | boolean | true | Always true for production |
supabase.db.persistence.size | string | "10Gi" | Based on data volume |
supabase.db.persistence.storageClassName | string | "gp3" | Use fast storage |
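Combining the two tables above, a hardened self-hosted configuration might look like the following sketch; both passwords are placeholders you must replace.

```yaml
supabase:
  enabled: true
  secret:
    db:
      username: 'postgres'
      password: '<strong-random-password>'   # Placeholder - change this
    dashboard:
      username: 'supabase'
      password: '<strong-random-password>'   # Placeholder - change this
  db:
    resources:
      requests:
        cpu: '1000m'  # Production recommendation from the table above
        memory: '2Gi'
    persistence:
      enabled: true
      size: '50Gi'    # Size to your data volume
      storageClassName: 'gp3'
```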
Kong Ingress
| Parameter | Type | Default | Description |
|---|---|---|---|
supabase.kong.ingress.enabled | boolean | true | Enable Supabase API ingress |
supabase.kong.ingress.className | string | "traefik" | Must match Traefik class |
Message Queue (Kafka)
Kafka handles async rule execution and logging.
Basic Settings
| Parameter | Type | Default | Description |
|---|---|---|---|
kafka.enabled | boolean | true | Deploy Kafka |
kafka.kraft.enabled | boolean | true | Use KRaft mode (no Zookeeper) |
kafka.zookeeper.enabled | boolean | false | Disable Zookeeper |
Controller/Broker Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
kafka.controller.replicaCount | integer | 1 | Number of Kafka nodes |
kafka.controller.resources.requests.cpu | string | "500m" | CPU request |
kafka.controller.resources.requests.memory | string | "2Gi" | Memory request |
kafka.controller.resources.limits.cpu | string | "2000m" | CPU limit |
kafka.controller.resources.limits.memory | string | "3Gi" | Memory limit |
kafka.controller.persistence.size | string | "10Gi" | Storage size |
kafka.controller.heapOpts | string | "-Xmx1g -Xms1g..." | JVM heap settings |
Kafka Tuning
The default configuration includes extensive JVM and Kafka tuning:
```yaml
kafka:
  overrideConfiguration:
    auto.create.topics.enable: 'true'
    log.retention.hours: '24' # Adjust based on log volume
    default.replication.factor: '1' # Increase for HA
    offsets.topic.replication.factor: '1'
    num.partitions: '12' # Increase for parallelism
  controller:
    extraEnvVars:
      - name: KAFKA_JVM_PERFORMANCE_OPTS
        value: '-XX:MaxDirectMemorySize=256M -Djdk.nio.maxCachedBufferSize=262144'
      - name: KAFKA_CFG_QUEUED_MAX_REQUESTS
        value: '10000'
      - name: KAFKA_CFG_NUM_NETWORK_THREADS
        value: '8'
      - name: KAFKA_CFG_NUM_IO_THREADS
        value: '8'
      - name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
        value: '1048576'
      - name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES
        value: '1048576'
      - name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
        value: '209715200'
      - name: KAFKA_CFG_LOG_RETENTION_BYTES
        value: '4294967296'
      - name: KAFKA_CFG_LOG_SEGMENT_BYTES
        value: '1073741824'
      - name: KAFKA_CFG_NUM_REPLICA_FETCHERS
        value: '4'
      - name: KAFKA_CFG_REPLICA_SOCKET_RECEIVE_BUFFER_BYTES
        value: '1048576'
      - name: KAFKA_CFG_LOG_CLEANER_DEDUPE_BUFFER_SIZE
        value: '268435456'
      - name: KAFKA_CFG_LOG_CLEANER_IO_BUFFER_SIZE
        value: '1048576'
      - name: KAFKA_CFG_MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION
        value: '10'
  listeners:
    client:
      protocol: PLAINTEXT
    controller:
      protocol: PLAINTEXT
    interbroker:
      protocol: PLAINTEXT
```

High-Availability Configuration:
```yaml
kafka:
  controller:
    replicaCount: 3
  overrideConfiguration:
    default.replication.factor: '3'
    min.insync.replicas: '2'
```

Ingress (Traefik)
Traefik handles all incoming traffic and TLS termination.
| Parameter | Type | Default | Description |
|---|---|---|---|
traefik.enabled | boolean | true | Deploy Traefik |
traefik.ingressClass.name | string | "traefik" | Ingress class name |
traefik.autoscaling.enabled | boolean | true | Enable HPA |
traefik.autoscaling.minReplicas | integer | 1 | Minimum replicas |
traefik.autoscaling.maxReplicas | integer | 2 | Maximum replicas |
Resources
| Parameter | Type | Default | High-Traffic |
|---|---|---|---|
traefik.resources.requests.cpu | string | "100m" | "500m" |
traefik.resources.requests.memory | string | "256Mi" | "512Mi" |
traefik.resources.limits.cpu | string | "1000m" | "2000m" |
traefik.resources.limits.memory | string | "2Gi" | "4Gi" |
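For high-traffic deployments, the sketch below raises both the autoscaling bounds and the resource envelope using the High-Traffic column as a guide; the replica counts are illustrative.

```yaml
traefik:
  autoscaling:
    enabled: true
    minReplicas: 2   # Illustrative - defaults are 1/2
    maxReplicas: 4
  resources:
    requests:
      cpu: '500m'
      memory: '512Mi'
    limits:
      cpu: '2000m'
      memory: '4Gi'
```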
Ports
| Parameter | Default | Description |
|---|---|---|
traefik.ports.web.port | 8000 | Internal HTTP port |
traefik.ports.web.exposedPort | 80 | External HTTP port |
traefik.ports.websecure.port | 8443 | Internal HTTPS port |
traefik.ports.websecure.exposedPort | 443 | External HTTPS port |
Persistence
| Parameter | Type | Default | Description |
|---|---|---|---|
traefik.persistence.enabled | boolean | false | Enable persistence |
Autoscaling (KEDA)
KEDA provides event-driven autoscaling for HPS workers.
| Parameter | Type | Default | Description |
|---|---|---|---|
keda.enabled | boolean | true | Deploy KEDA |
keda.crds.install | boolean | false | CRDs managed by parent chart |
Note: KEDA CRDs are included in the `crds/` directory and installed automatically.
Certificates (cert-manager)
cert-manager provisions TLS certificates from Let's Encrypt.
| Parameter | Type | Default | Description |
|---|---|---|---|
cert-manager.enabled | boolean | true | Deploy cert-manager |
cert-manager.installCRDs | boolean | false | CRDs managed by parent chart |
The chart creates:
- A `ClusterIssuer` for Let's Encrypt production
- `Certificate` resources for your domain(s)
Logging (Vector)
Vector aggregates logs from Kafka and forwards them to configured sinks.
| Parameter | Type | Default | Description |
|---|---|---|---|
vector.enabled | boolean | true | Deploy Vector |
vector.role | string | "Stateless-Aggregator" | Vector role |
vector.replicas | integer | 2 | Number of replicas |
Resources
| Parameter | Type | Default |
|---|---|---|
vector.resources.requests.cpu | string | "50m" |
vector.resources.requests.memory | string | "128Mi" |
vector.resources.limits.cpu | string | "200m" |
vector.resources.limits.memory | string | "256Mi" |
Service
| Parameter | Type | Default | Description |
|---|---|---|---|
vector.service.enabled | boolean | true | Enable Vector service |
vector.service.ports | list | [{...}] | Service ports |
Environment Variables
| Parameter | Type | Default | Description |
|---|---|---|---|
vector.env | list | [{...}] | Environment variables (e.g., KAFKA_BOOTSTRAP_SERVERS) |
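For example, to point Vector at an external Kafka cluster, you might set the variable that the source config below interpolates; the address is illustrative.

```yaml
vector:
  env:
    - name: KAFKA_BOOTSTRAP_SERVERS
      value: 'kafka.example.internal:9092' # Illustrative external broker address
```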
Custom Sinks
Configure log destinations in `vector.customConfig.sinks`:
```yaml
vector:
  customConfig:
    sources:
      kafka:
        type: kafka
        bootstrap_servers: '${KAFKA_BOOTSTRAP_SERVERS:-rulebricks-kafka:9092}'
        topics:
          - logs
        group_id: vector-consumers
        auto_offset_reset: latest
    sinks:
      # Console output (default)
      console:
        type: console
        inputs: [kafka]
        encoding:
          codec: json
      # S3 sink example
      s3:
        type: aws_s3
        inputs: [kafka]
        bucket: 'your-logs-bucket'
        region: 'us-east-1'
        key_prefix: 'rulebricks/logs/%Y/%m/%d/'
        compression: gzip
        encoding:
          codec: json
```

DNS (external-dns)
external-dns automatically creates DNS records for your ingresses.
| Parameter | Type | Default | Description |
|---|---|---|---|
external-dns.enabled | boolean | false | Deploy external-dns |
external-dns.provider | string | "route53" | DNS provider |
external-dns.sources | list | ["ingress"] | Resource types to watch |
external-dns.domainFilters | list | [] | Restrict to specific domains |
external-dns.policy | string | "upsert-only" | Record management policy |
Provider Configuration
AWS Route53
```yaml
external-dns:
  enabled: true
  provider: route53
  # Uses IRSA - ensure service account has Route53 permissions
```

Cloudflare

```yaml
external-dns:
  enabled: true
  provider: cloudflare
  extraEnvVars:
    - name: CF_API_TOKEN
      valueFrom:
        secretKeyRef:
          name: cloudflare-api-token
          key: api-token
```

Google Cloud DNS

```yaml
external-dns:
  enabled: true
  provider: google
  google:
    project: 'your-gcp-project'
```

Azure DNS

```yaml
external-dns:
  enabled: true
  provider: azure
  azure:
    resourceGroup: 'your-resource-group'
    subscriptionId: 'your-subscription-id'
```

Monitoring (Prometheus)
Optional Prometheus stack for metrics collection.
| Parameter | Type | Default | Description |
|---|---|---|---|
monitoring.enabled | boolean | false | Enable monitoring |
kube-prometheus-stack.alertmanager.enabled | boolean | false | Deploy Alertmanager |
kube-prometheus-stack.grafana.enabled | boolean | false | Deploy Grafana |
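Enabling the stack with Grafana might look like this sketch:

```yaml
monitoring:
  enabled: true
kube-prometheus-stack:
  grafana:
    enabled: true
  alertmanager:
    enabled: false # Leave disabled unless you route alerts somewhere
```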
Prometheus Storage
```yaml
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      retention: 30d
      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: gp3
            resources:
              requests:
                storage: 50Gi
```

Storage
StorageClass
The chart can create a gp3 StorageClass for AWS EBS:
| Parameter | Type | Default | Description |
|---|---|---|---|
storageClass.create | boolean | true | Create StorageClass |
storageClass.name | string | "gp3" | StorageClass name |
storageClass.provisioner | string | "ebs.csi.aws.com" | CSI provisioner |
storageClass.type | string | "gp3" | EBS volume type |
storageClass.fsType | string | "ext4" | File system type |
storageClass.reclaimPolicy | string | "Delete" | Reclaim policy |
storageClass.volumeBindingMode | string | "WaitForFirstConsumer" | Binding mode |
storageClass.allowVolumeExpansion | boolean | true | Allow expansion |
For non-AWS clusters, set storageClass.create: false and ensure a compatible StorageClass exists.
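For instance, on a cluster with a preexisting class you might disable creation and reference that class wherever persistence is configured; the class name 'standard' below is illustrative.

```yaml
storageClass:
  create: false

rulebricks:
  redis:
    persistence:
      storageClass: 'standard'      # Illustrative existing class name

supabase:
  db:
    persistence:
      storageClassName: 'standard'  # Illustrative existing class name
```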