How to Write Interactive Spark Jobs in Python (IlumJob)
This guide teaches you how to develop interactive Spark jobs in Python using the IlumJob interface. You'll learn how to structure your code, pass parameters at execution time, and leverage the benefits of this approach for production workloads on Kubernetes.
What is the IlumJob Interface?
The IlumJob interface is a Python base class used to create reusable, parameterized Spark jobs that run on interactive Ilum services. Unlike traditional spark-submit scripts, IlumJob allows you to:
- Receive configuration at runtime: Parameters are passed as a dictionary, allowing the same job to handle different inputs without code changes.
- Return structured results: The `run` method returns a string, making it easy to extract and display results.
- Run on-demand: Jobs can be triggered via the UI, REST API, or CI/CD pipelines.
```python
from ilum.api import IlumJob

class MySparkJob(IlumJob):
    def run(self, spark, config) -> str:
        # Your Spark logic here
        return "Job completed successfully"
```
Structure of an Interactive Spark Job
Every interactive job consists of three essential parts:
- Import the interface: `from ilum.api import IlumJob`
- Define a class: Create a class that inherits from `IlumJob`.
- Implement `run`: Write your Spark logic inside the `run(self, spark, config)` method.
| Parameter | Type | Description |
|---|---|---|
| `spark` | `SparkSession` | Pre-initialized Spark session, ready to use. |
| `config` | `dict` | A dictionary containing parameters passed at execution time. |
| Return | `str` | A string result that is displayed in the UI or returned via the API. |
How to Pass Parameters to Spark Jobs
Parameters are passed as a JSON object when executing the job. Inside your `run` method, you access them with standard dictionary methods such as `config.get()`.
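As a minimal sketch of that access pattern (the key names `table`, `database`, and `limit` are illustrative, not part of any Ilum contract), a helper can separate required keys, optional keys, and defaults:

```python
def read_params(config: dict) -> dict:
    """Extract and validate parameters from the execution config.

    'table' is required; 'database' and 'limit' are optional.
    The key names here are illustrative.
    """
    table = config.get('table')
    if not table:
        raise ValueError("Config must provide a 'table' key")
    return {
        'table': table,
        'database': config.get('database'),      # None when absent
        'limit': int(config.get('limit', 100)),  # falls back to a default
    }
```

Calling such a helper at the top of `run` keeps the Spark logic free of parameter plumbing.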
Example: Table Inspector
This example demonstrates reading database and table parameters to inspect a Hive table.
```python
from ilum.api import IlumJob
from pyspark.sql.functions import col, sum as spark_sum

class TableInspector(IlumJob):
    def run(self, spark, config) -> str:
        # Read required parameters
        table_name = config.get('table')
        database_name = config.get('database')  # Optional
        if not table_name:
            raise ValueError("Config must provide a 'table' key")

        # Switch to the requested database if provided
        if database_name:
            spark.catalog.setCurrentDatabase(database_name)

        # Check that the table exists in the current database
        if table_name not in [t.name for t in spark.catalog.listTables()]:
            raise ValueError(f"Table '{table_name}' not found in catalog")

        df = spark.table(table_name)

        # Build report
        report = [
            f"=== Table: {table_name} ===",
            f"Total rows: {df.count()}",
            f"Total columns: {len(df.columns)}",
            "",
            "Schema:",
        ]
        for field in df.schema.fields:
            report.append(f"  {field.name}: {field.dataType}")

        report.append("")
        report.append("Sample (5 rows):")
        for row in df.take(5):
            report.append(str(row.asDict()))

        # Null counts per column
        report.append("")
        report.append("Null counts:")
        null_df = df.select([spark_sum(col(c).isNull().cast("int")).alias(c) for c in df.columns])
        for c, v in null_df.collect()[0].asDict().items():
            report.append(f"  {c}: {v}")

        return "\n".join(report)
```
Execution Parameters (JSON)
When executing via UI or API, provide parameters like this:
```json
{
  "database": "ilum_example_product_sales",
  "table": "products"
}
```
To run an interactive job, you first need to create and deploy a Job-type Service in Ilum. This service provides the Spark environment where your jobs execute.
When creating the service:
- Type: Select
Job - Language: Select
Python - Py Files: Upload your job file (e.g.,
table_inspector.py)
👉 Learn how to deploy a Job Service — step-by-step guide with UI screenshots and configuration options.
Executing Jobs
You can execute interactive jobs in three ways:
- Ilum UI
- REST API
- CI/CD Pipeline
From the Ilum UI:

1. Go to Services → select your Job service.
2. In the Execute section, set:
   - Class: `table_inspector.TableInspector`
   - Parameters: `{"database": "sales", "table": "orders"}`
3. Click Execute.
The result string is displayed immediately in the UI.
Before executing jobs via API:
- Expose the API: See Accessing the API for port forwarding, NodePort, or Ingress setup
- Get your Group ID: Run `curl http://localhost:9888/api/v1/group` and copy the `id` field of your Job Service
```bash
curl -X POST "http://ilum-core:9888/api/v1/group/{groupId}/job/execute" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "interactive_job_execute",
    "jobClass": "table_inspector.TableInspector",
    "jobConfig": {
      "database": "sales",
      "table": "orders"
    }
  }'
```
The response contains the result string and execution metadata.
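The same call can be scripted with Python's standard library. The sketch below only builds the request; the endpoint path and payload shape mirror the curl example, and the host and group ID are placeholders you would substitute with your own:

```python
import json
import urllib.request

def build_execute_request(base_url: str, group_id: str,
                          job_class: str, job_config: dict) -> urllib.request.Request:
    """Build the POST request for the interactive job execute endpoint."""
    url = f"{base_url}/api/v1/group/{group_id}/job/execute"
    payload = {
        "type": "interactive_job_execute",
        "jobClass": job_class,
        "jobConfig": job_config,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it against a reachable ilum-core:
# with urllib.request.urlopen(request) as resp:
#     result = json.load(resp)
```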
Trigger job execution from GitLab CI/CD or similar:
```yaml
execute_interactive_job:
  stage: run
  script:
    - |
      curl -s -X POST \
        -H "Content-Type: application/json" \
        -d '{
          "type": "interactive_job_execute",
          "jobClass": "table_inspector.TableInspector",
          "jobConfig": {
            "database": "sales",
            "table": "orders"
          }
        }' \
        http://ilum-core:9888/api/v1/group/${GROUP_ID}/job/execute
  variables:
    GROUP_ID: "your-group-id-here"  # Get this from: curl http://ilum-core:9888/api/v1/group
```
See CI/CD with GitLab for a complete pipeline example including group creation.
Benefits of the IlumJob Approach
| Benefit | Description |
|---|---|
| Reusability | Write once, run many times with different parameters. |
| No Cold Starts | Interactive services keep Spark warm, so subsequent executions are instant. |
| Parameterization | Pass configuration at runtime—no need to hardcode values. |
| Observability | Results are captured and visible in the UI/API for easy debugging. |
| API-Driven | Execute jobs programmatically from orchestrators, CI/CD, or external systems. |
| Version Control | Store job code in Git and deploy via pipelines. |
Interactive Jobs vs. Batch Jobs (Spark Submit)
| Feature | Interactive Jobs (IlumJob) | Batch Jobs (spark-submit) |
|---|---|---|
| Startup Time | Instant (uses warm executors) | Slow (provisions new pods) |
| Context | Shared Spark Context | Isolated Spark Context |
| Use Case | Ad-hoc queries, API backends, quick reports | Long-running ETL, heavy processing |
| Result | Returns string result to API/UI | Logs to driver stdout/file |
| Resources | Shared within the service | Dedicated per job |
Best Practices
1. Validate Input Parameters
Always validate required parameters and provide helpful error messages.
```python
def run(self, spark, config) -> str:
    required_keys = ['table', 'output_path']
    for key in required_keys:
        if key not in config:
            raise ValueError(f"Missing required parameter: '{key}'")
```
2. Use Default Values
For optional parameters, use `config.get('key', default_value)`.

```python
batch_size = int(config.get('batch_size', 1000))
```
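Depending on how the job is triggered, numeric parameters may arrive as strings (for example, when typed into a form), so coercing defensively gives clearer errors. This is a sketch of such a helper, not part of the ilum API:

```python
def get_int(config: dict, key: str, default: int) -> int:
    """Read an integer parameter, tolerating string values like '250'."""
    value = config.get(key, default)
    try:
        return int(value)
    except (TypeError, ValueError):
        raise ValueError(f"Parameter '{key}' must be an integer, got {value!r}")
```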
3. Structure Your Output
Return a well-formatted string for readability in the UI.
```python
lines = ["=== Job Summary ==="]
lines.append(f"Processed: {count} records")
lines.append(f"Duration: {elapsed_time}s")
return "\n".join(lines)
```
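If several jobs share the same report style, the formatting can be factored into a small helper. This is a sketch (the `===` framing follows the examples in this guide; the helper itself is not part of the ilum API):

```python
def format_report(title: str, stats: dict) -> str:
    """Render a simple key/value report in the '=== title ===' style."""
    lines = [f"=== {title} ==="]
    for key, value in stats.items():
        lines.append(f"{key}: {value}")
    return "\n".join(lines)
```

A job's `run` method can then end with `return format_report("Job Summary", {...})`.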
4. Handle Errors Gracefully
Wrap risky operations in try/except and return meaningful messages.
```python
try:
    df.write.saveAsTable(output_table)
    return f"Successfully wrote to {output_table}"
except Exception as e:
    return f"Error writing table: {str(e)}"
```
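The same pattern can be made reusable with a wrapper that runs any zero-argument callable and always returns a readable string. This is a sketch of one way to do it, not part of the ilum API:

```python
def run_safely(action, success_message: str) -> str:
    """Run a zero-argument callable; return a success or error string."""
    try:
        action()
        return success_message
    except Exception as e:
        return f"Error: {type(e).__name__}: {e}"
```

Inside a job it would be used like `return run_safely(lambda: df.write.saveAsTable(output_table), f"Successfully wrote to {output_table}")`.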
Complete Example: Transaction Report Generator
This job generates a transaction summary report based on the `transaction_anomaly_detection.transactions` table.
```python
from ilum.api import IlumJob
from pyspark.sql.functions import sum as spark_sum, count, col

class TransactionReportGenerator(IlumJob):
    def run(self, spark, config) -> str:
        # Optional merchant filter
        merchant_filter = config.get('merchant')

        # Load data from the default Ilum transactions table
        df = spark.table("transaction_anomaly_detection.transactions")
        if merchant_filter:
            df = df.filter(col("Merchant") == merchant_filter)

        # Aggregate by TransactionType
        summary = df.groupBy("TransactionType").agg(
            count("TransactionID").alias("transaction_count"),
            spark_sum("Amount").alias("total_amount")
        ).collect()

        # Build report
        report = [
            "=== Transaction Report ===",
            f"Merchant Filter: {merchant_filter or 'All'}",
            "",
            "Summary by Transaction Type:",
        ]
        for row in summary:
            report.append(
                f"  {row['TransactionType']}: {row['transaction_count']} txns, "
                f"${row['total_amount']:,.2f}"
            )

        return "\n".join(report)
```
Execute with:
```json
{
  "merchant": "AcmeCorp"
}
```
Next Steps
- Interactive Job Service: Learn how to deploy and manage Job-type services.
- Interactive Code Service: For ad-hoc exploratory analysis with persistent sessions.
- CI/CD with GitLab: Automate job deployments via pipelines.
Frequently Asked Questions
Can I use Scala for interactive jobs?
The IlumJob interface described here is Python-specific; for Scala and other languages, check the Interactive Job Service documentation for current support details.
How do I debug an interactive job?
Since interactive jobs run on a remote cluster, you can't use a local debugger directly. Instead:
- Use `print()` statements or a logger; the output appears in the driver logs.
- Return error messages as part of the string result in your `try/except` blocks.
- Check the Spark UI for the specific job execution to analyze tasks and stages.
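For the logging route, a setup like the following writes to stdout, where the messages end up in the driver logs. The logger name and format are illustrative; the guard against re-adding handlers matters because the same warm service executes the job repeatedly:

```python
import logging
import sys

def get_job_logger(name: str = "ilum_job") -> logging.Logger:
    """Configure a logger that writes to stdout (captured in driver logs)."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on repeated executions
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```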
What happens if my job fails?
If your code raises an unhandled exception, the execution will fail, and the error trace will be returned in the API response. It is best practice to wrap your logic in a try/except block to return a user-friendly error message.