
Recipes

Each recipe below is self-contained and assumes you have the Ilum CLI installed and a Kubernetes cluster reachable via kubectl.

Migrating from Manual Helm to CLI

Context: You have an existing Ilum installation that was deployed manually with helm install or helm upgrade. You manage it through raw Helm commands and custom --set flags.

Goal: You want to bring that installation under CLI management so you can use ilum module enable, ilum upgrade, ilum status, and other CLI commands going forward -- without redeploying or losing state.

Solution:

$ ilum connect
⠋ Scanning for Ilum releases...
Found 1 Ilum release:
Release: ilum
Namespace: analytics
Version: 6.6.0
Status: deployed

? Connect this release to profile 'default'? Yes
✓ Profile 'default' updated with release 'ilum' in namespace 'analytics'.
✓ 12 modules synced to config.
✓ Values snapshot saved (drift detection enabled).

Next steps:
  ilum status          Show release status
  ilum upgrade         Upgrade to a newer version
  ilum module enable   Enable additional modules

The connect command auto-detects the release name, namespace, chart version, and currently enabled modules by reading the live Helm values. It saves everything into a CLI profile and takes a values snapshot for future drift detection.

If you have multiple Ilum releases across namespaces, connect presents a selection menu:

$ ilum connect
⠋ Scanning for Ilum releases...
╭─ Found Ilum Releases ───────────────────────────────────────╮
│ # Release Namespace Version Status │
├─────────────────────────────────────────────────────────────┤
│ 1 ilum-dev dev 6.7.0 deployed │
│ 2 ilum-staging staging 6.6.0 deployed │
│ 3 ilum-prod production 6.5.0 deployed │
╰─────────────────────────────────────────────────────────────╯
? Select release [1-3]: 2
✓ Profile 'default' updated with release 'ilum-staging' in namespace 'staging'.
✓ 9 modules synced to config.
✓ Values snapshot saved (drift detection enabled).

To connect each release to its own named profile, use --profile:

$ ilum connect --release ilum-dev --namespace dev --profile dev
$ ilum connect --release ilum-staging --namespace staging --profile staging
$ ilum connect --release ilum-prod --namespace production --profile prod

After connecting, all CLI commands work against the active profile. Switch profiles with ilum config set active_profile <name>.
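When scripting against several environments, the three connect commands above follow a single pattern. A minimal POSIX-shell sketch that prints each command for review before you run it (profile and namespace names are the ones from this recipe; the loop itself is illustrative, not part of the CLI):

```shell
#!/bin/sh
# Print the per-environment connect commands (review, then run them).
# The prod release lives in the 'production' namespace, per the table above.
for env in dev staging prod; do
  ns=$env
  if [ "$env" = "prod" ]; then ns=production; fi
  echo "ilum connect --release ilum-$env --namespace $ns --profile $env"
done
```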

Multi-Namespace Production Setup

Context: You have a Kubernetes cluster where you need isolated environments for development, staging, and production.

Goal: You want to deploy Ilum into three separate namespaces with different module sets and configurations, each managed independently through CLI profiles.

Solution:

Create a profile for each environment and install into it: use ilum init --profile to create the profile, switch to it with ilum config set active_profile, then configure and install:

$ ilum init --profile dev --yes
$ ilum config set active_profile dev
$ ilum config set cluster.namespace ilum-dev
$ ilum config set release_name ilum-dev
$ ilum install -m jupyter --yes
✓ Namespace 'ilum-dev' does not exist — creating it.
✓ Namespace 'ilum-dev' created.
✓ Ilum installed successfully (release: ilum-dev).

$ ilum init --profile staging --yes
$ ilum config set active_profile staging
$ ilum config set cluster.namespace ilum-staging
$ ilum config set release_name ilum-staging
$ ilum install -m sql -m airflow --yes
✓ Namespace 'ilum-staging' does not exist — creating it.
✓ Namespace 'ilum-staging' created.
✓ Ilum installed successfully (release: ilum-staging).

$ ilum init --profile prod --yes
$ ilum config set active_profile prod
$ ilum config set cluster.namespace ilum-prod
$ ilum config set release_name ilum-prod
$ ilum install \
    -m sql -m airflow -m superset -m monitoring -m loki \
    --values prod-values.yaml \
    --yes
✓ Namespace 'ilum-prod' does not exist — creating it.
✓ Namespace 'ilum-prod' created.
✓ Ilum installed successfully (release: ilum-prod).
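The four setup commands repeated per environment above can be generated by a small helper. A sketch that prints the commands so you can review them before piping to a shell (setup_env is our illustrative name, not an ilum command, and it deliberately omits the environment-specific install step):

```shell
#!/bin/sh
# Print the profile/namespace/release setup for one environment.
# setup_env is a hypothetical helper, not part of the ilum CLI.
setup_env() {
  echo "ilum init --profile $1 --yes"
  echo "ilum config set active_profile $1"
  echo "ilum config set cluster.namespace ilum-$1"
  echo "ilum config set release_name ilum-$1"
}
setup_env staging
```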

Switch between environments with:

$ ilum config set active_profile staging
$ ilum status # Shows staging release

Or target a specific release directly using --release and --namespace flags:

$ ilum status --release ilum-prod --namespace ilum-prod
Note:

Each namespace gets its own isolated set of PVCs, Services, and ConfigMaps. There is no cross-namespace interference. The CLI tracks each profile's module list independently.

Enabling the Full Data Stack (SQL + Airflow + Superset)

Context: You have a running Ilum installation with the default modules (core, ui, mongodb, postgresql, minio, jupyter, gitea).

Goal: You want to add a complete data analytics workflow: SQL queries via Kyuubi, pipeline orchestration via Airflow, and dashboards via Superset.

Solution:

First, inspect the dependency chain to understand what will be pulled in:

$ ilum module show sql
╭─ Module: sql ────────────────────────────────────────────────────────╮
│ sql (sql) │
│ Ilum SQL engine (Kyuubi) │
│ │
│ Default enabled: no │
│ Requires: postgresql, core │
│ │
│ Resolved enable chain (including dependencies): │
│ --set postgresql.enabled=true │
│ --set ilum-core.enabled=true │
│ --set ilum-sql.enabled=true │
│ --set ilum-core.sql.enabled=true │
╰──────────────────────────────────────────────────────────────────────╯

$ ilum module show airflow
╭─ Module: airflow ────────────────────────────────────────────────────╮
│ airflow (orchestration) │
│ Apache Airflow workflow orchestration │
│ │
│ Default enabled: no │
│ Requires: postgresql │
╰──────────────────────────────────────────────────────────────────────╯

$ ilum module show superset
╭─ Module: superset ───────────────────────────────────────────────────╮
│ superset (analytics) │
│ Apache Superset BI dashboards │
│ │
│ Default enabled: no │
│ Requires: postgresql │
╰──────────────────────────────────────────────────────────────────────╯

All three share postgresql as a dependency, which is already enabled by default. Enable them in a single command:

$ ilum module enable sql airflow superset --yes
╭─ Enable Summary ─────────────────────────────────────────╮
│ Action Enable │
│ Release ilum │
│ Namespace default │
│ Modules sql, airflow, superset │
╰──────────────────────────────────────────────────────────╯

ℹ Auto-resolved dependencies: core, postgresql

╭─ Values Diff ────────────────────────────────────────────╮
│ Key Before After │
├──────────────────────────────────────────────────────────┤
│ ilum-sql.enabled false true │
│ ilum-core.sql.enabled false true │
│ airflow.enabled false true │
│ superset.enabled false true │
│ postgresql.enabled true true │
│ ilum-core.enabled true true │
╰──────────────────────────────────────────────────────────╯

⠋ Enabling modules...
✓ Enabled: sql, airflow, superset

Verify the modules are running:

$ ilum module status --category sql
╭─ Module Status (release: ilum, namespace: default) ──────────────────╮
│ Module Category Status Pods Health │
├──────────────────────────────────────────────────────────────────────┤
│ sql sql Enabled 1/1 Healthy │
│ hive-metastore sql Disabled -- -- │
│ nessie sql Disabled -- -- │
│ unity-catalog sql Disabled -- -- │
│ trino sql Disabled -- -- │
╰──────────────────────────────────────────────────────────────────────╯

$ ilum module status --category orchestration
╭─ Module Status (release: ilum, namespace: default) ──────────────────╮
│ Module Category Status Pods Health │
├──────────────────────────────────────────────────────────────────────┤
│ airflow orchestration Enabled 3/3 Healthy │
│ kestra orchestration Disabled -- -- │
│ n8n orchestration Disabled -- -- │
│ nifi orchestration Disabled -- -- │
│ mageai orchestration Disabled -- -- │
╰──────────────────────────────────────────────────────────────────────╯

To later remove one piece of the stack without affecting the others:

$ ilum module disable superset --yes
✓ Disabled: superset

PostgreSQL remains running because sql, airflow, and gitea still depend on it.

Setting Up Monitoring (Prometheus + Loki + Grafana)

Context: You have a running Ilum installation and you want observability into cluster health, application metrics, and centralized logs.

Goal: You want to enable the Prometheus + Grafana monitoring stack for metrics and Grafana Loki for log aggregation.

Solution:

Check what the monitoring modules include:

$ ilum module show monitoring
╭─ Module: monitoring ─────────────────────────────────────────────────╮
│ monitoring (monitoring) │
│ Prometheus + Grafana monitoring stack │
│ │
│ Default enabled: no │
│ Enable flags: kube-prometheus-stack.enabled=true │
│ Disable flags: kube-prometheus-stack.enabled=false │
╰──────────────────────────────────────────────────────────────────────╯

$ ilum module show loki
╭─ Module: loki ───────────────────────────────────────────────────────╮
│ loki (monitoring) │
│ Grafana Loki log aggregation │
│ │
│ Default enabled: no │
│ Enable flags: global.logAggregation.enabled=true, │
│ global.logAggregation.loki.enabled=true, │
│ global.logAggregation.promtail.enabled=true │
╰──────────────────────────────────────────────────────────────────────╯

Neither module has external dependencies, so enabling them is straightforward:

$ ilum module enable monitoring loki --yes
╭─ Enable Summary ─────────────────────────────────────────╮
│ Action Enable │
│ Release ilum │
│ Namespace default │
│ Modules monitoring, loki │
╰──────────────────────────────────────────────────────────╯

╭─ Values Diff ────────────────────────────────────────────╮
│ Key Before After │
├──────────────────────────────────────────────────────────┤
│ kube-prometheus-stack.enabled false true │
│ global.logAggregation.enabled false true │
│ global.logAggregation.loki.enabled false true │
│ global.logAggregation.promtail.enabled false true │
╰──────────────────────────────────────────────────────────╯

⠋ Enabling modules...
✓ Enabled: monitoring, loki

Verify the monitoring stack is healthy:

$ ilum module status --category monitoring
╭─ Module Status (release: ilum, namespace: default) ──────────────────╮
│ Module Category Status Pods Health │
├──────────────────────────────────────────────────────────────────────┤
│ monitoring monitoring Enabled 3/3 Healthy │
│ loki monitoring Enabled 2/2 Healthy │
│ graphite monitoring Disabled -- -- │
╰──────────────────────────────────────────────────────────────────────╯

The monitoring module deploys the kube-prometheus-stack, which includes Prometheus for metrics collection, Grafana for dashboards, and Alertmanager for alerts. The loki module adds Loki for log storage and Promtail for log collection from all pods in the namespace.

Access Grafana through the Ilum UI at http://localhost:31777/external/grafana/, or verify that its pods are up with:

$ ilum doctor --check pods

If you also want Graphite metrics export (for legacy integrations), add it as a third module:

$ ilum module enable graphite --yes

Dry-Run Workflow for Change Management

Context: You are managing a production Ilum deployment and need to review infrastructure changes before applying them -- for compliance, peer review, or just caution.

Goal: You want a preview-then-apply workflow where every change is reviewed as a diff before execution.

Solution:

Every mutating CLI command supports --dry-run. The flag shows the full operation summary, values diff, and generated Helm command -- then exits without executing anything.

Preview a version upgrade:

$ ilum upgrade --version 6.8.0 --dry-run
╭─ Upgrade Summary ───────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Chart ilum/ilum │
│ Version 6.7.0 → 6.8.0 │
│ Modules core, gitea, jupyter, minio, mongodb, │
│ postgresql, ui │
│ Atomic True │
╰──────────────────────────────────────────────────────────╯

ℹ Upgrade notes: https://ilum.cloud/docs/upgrade-notes/

Command: helm upgrade ilum ilum/ilum \
--namespace default \
--version 6.8.0 \
--timeout 10m \
--atomic \
--reuse-values

ℹ Dry-run mode — no changes applied.

Preview enabling a module:

$ ilum module enable airflow --dry-run
╭─ Enable Summary ─────────────────────────────────────────╮
│ Action Enable │
│ Release ilum │
│ Namespace default │
│ Modules airflow │
╰──────────────────────────────────────────────────────────╯

ℹ Auto-resolved dependencies: postgresql

╭─ Values Diff ────────────────────────────────────────────╮
│ Key Before After │
├──────────────────────────────────────────────────────────┤
│ airflow.enabled false true │
│ postgresql.enabled true true │
╰──────────────────────────────────────────────────────────╯

Command: helm upgrade ilum ilum/ilum \
--namespace default \
--timeout 10m \
--reuse-values \
--set airflow.enabled=true \
--set postgresql.enabled=true

ℹ Dry-run mode — no changes applied.

Preview an uninstall:

$ ilum uninstall --delete-data --dry-run
╭─ Uninstall Summary ─────────────────────────────────────╮
│ Release ilum │
│ Namespace default │
│ Delete PVCs Yes │
│ Delete Namespace No │
╰──────────────────────────────────────────────────────────╯

⚠ PersistentVolumeClaims will be DELETED.

Command: helm uninstall ilum --namespace default

ℹ Dry-run mode — no changes applied.

A typical change management workflow looks like this:

  1. Run the command with --dry-run and capture the output.
  2. Share the diff with your team for review.
  3. Once approved, re-run the same command without --dry-run (and with --yes if running from a script).
$ ilum module enable sql --dry-run 2>&1 | tee change-request-42.txt
$ # ... team reviews change-request-42.txt ...
$ ilum module enable sql --yes
Note:

The --dry-run flag is available on ilum install, ilum upgrade, ilum uninstall, ilum module enable, and ilum module disable. Each command exits with code 0 in dry-run mode regardless of what would happen, so you can safely run dry-runs in CI pipelines. See the Command Reference for the full list of supported flags per command.
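A sketch of that CI review step in shell. Hedged: the stub below exists only so the example runs anywhere without the real CLI on PATH; an actual pipeline calls the real binary and archives the artifact through its own mechanism:

```shell
#!/bin/sh
# Capture the dry-run output as a reviewable change artifact and fail
# fast if nothing was produced (the dry-run itself always exits 0, so
# guard on the artifact, not the exit code).
if ! command -v ilum >/dev/null 2>&1; then
  # Illustrative stand-in for environments without the CLI installed.
  ilum() { echo "Dry-run mode — no changes applied."; }
fi
ilum module enable sql --dry-run 2>&1 | tee change-request-42.txt
test -s change-request-42.txt && echo "artifact saved for review"
```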