How to Write Interactive Spark Jobs in Python (IlumJob)

This guide teaches you how to develop interactive Spark jobs in Python using the IlumJob interface. You'll learn how to structure your code, pass parameters at execution time, and leverage the benefits of this approach for production workloads on Kubernetes.

What is the IlumJob Interface?

The IlumJob interface is a Python base class used to create reusable, parameterized Spark jobs that run on interactive Ilum services. Unlike traditional spark-submit scripts, IlumJob allows you to:

  • Receive configuration at runtime: Parameters are passed as a dictionary, allowing the same job to handle different inputs without code changes.
  • Return structured results: The run method returns a string, making it easy to extract and display results.
  • Run on-demand: Jobs can be triggered via the UI, REST API, or CI/CD pipelines.

Basic Structure

from ilum.api import IlumJob

class MySparkJob(IlumJob):
    def run(self, spark, config) -> str:
        # Your Spark logic here
        return "Job completed successfully"

Structure of an Interactive Spark Job

Every interactive job consists of three essential parts:

  1. Import the interface: from ilum.api import IlumJob
  2. Define a class: Create a class that inherits from IlumJob.
  3. Implement run: Write your Spark logic inside the run(self, spark, config) method.

Parameter | Type         | Description
--------- | ------------ | -----------
spark     | SparkSession | Pre-initialized Spark session, ready to use.
config    | dict         | A dictionary containing parameters passed at execution time.
Returns   | str          | A string result that will be displayed in the UI or returned via API.

How to Pass Parameters to Spark Jobs

Parameters are passed as a JSON object when executing the job. Inside your run method, you access them using standard dictionary methods.

Example: Table Inspector

This example demonstrates reading database and table parameters to inspect a Hive table.

table_inspector.py
from ilum.api import IlumJob
from pyspark.sql.functions import col, sum as spark_sum

class TableInspector(IlumJob):
    def run(self, spark, config) -> str:
        # Read required parameters
        table_name = config.get('table')
        database_name = config.get('database')  # Optional

        if not table_name:
            raise ValueError("Config must provide a 'table' key")

        # Set database if provided
        if database_name:
            spark.catalog.setCurrentDatabase(database_name)

        # Check if table exists
        if table_name not in [t.name for t in spark.catalog.listTables()]:
            raise ValueError(f"Table '{table_name}' not found in catalog")

        df = spark.table(table_name)

        # Build report
        report = [
            f"=== Table: {table_name} ===",
            f"Total rows: {df.count()}",
            f"Total columns: {len(df.columns)}",
            "",
            "Schema:",
        ]
        for field in df.schema.fields:
            report.append(f"  {field.name}: {field.dataType}")

        report.append("")
        report.append("Sample (5 rows):")
        for row in df.take(5):
            report.append(str(row.asDict()))

        # Null counts
        report.append("")
        report.append("Null counts:")
        null_df = df.select([spark_sum(col(c).isNull().cast("int")).alias(c) for c in df.columns])
        for c, v in null_df.collect()[0].asDict().items():
            report.append(f"  {c}: {v}")

        return "\n".join(report)

Execution Parameters (JSON)

When executing via UI or API, provide parameters like this:

{
  "database": "ilum_example_product_sales",
  "table": "products"
}

Before You Start

To run an interactive job, you first need to create and deploy a Job-type Service in Ilum. This service provides the Spark environment where your jobs execute.

When creating the service:

  • Type: Select Job
  • Language: Select Python
  • Py Files: Upload your job file (e.g., table_inspector.py)

👉 Learn how to deploy a Job Service — step-by-step guide with UI screenshots and configuration options.

Executing Jobs

You can execute interactive jobs from the Ilum UI, through the REST API, or from CI/CD pipelines. From the UI:

  1. Go to Services → Select your Job service
  2. In the Execute section:
    • Class: table_inspector.TableInspector
    • Parameters: {"database": "sales", "table": "orders"}
  3. Click Execute

The result string is displayed immediately in the UI.
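
For programmatic execution, the same job can be triggered over HTTP. The sketch below is illustrative only: the host, port, endpoint path, and payload field names are assumptions, so check your Ilum instance's API reference for the actual contract.

Execute via REST API (sketch)
import requests

# NOTE: host, port, endpoint path, and payload fields below are
# hypothetical -- consult your Ilum API reference for the real contract.
ILUM_API = "http://ilum.example.com:9888"  # hypothetical host/port
service_id = "my-job-service-id"           # ID of your deployed Job service

response = requests.post(
    f"{ILUM_API}/api/v1/group/{service_id}/job/execute",  # hypothetical path
    json={
        "jobClass": "table_inspector.TableInspector",
        "jobConfig": {"database": "sales", "table": "orders"},
    },
    timeout=300,
)
response.raise_for_status()
print(response.json())  # the job's string result is part of the response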


Benefits of the IlumJob Approach

Benefit          | Description
---------------- | -----------
Reusability      | Write once, run many times with different parameters.
No Cold Starts   | Interactive services keep Spark warm, so subsequent executions are instant.
Parameterization | Pass configuration at runtime; no need to hardcode values.
Observability    | Results are captured and visible in the UI/API for easy debugging.
API-Driven       | Execute jobs programmatically from orchestrators, CI/CD, or external systems.
Version Control  | Store job code in Git and deploy via pipelines.

Interactive Jobs vs. Batch Jobs (Spark Submit)

Feature      | Interactive Jobs (IlumJob)                  | Batch Jobs (spark-submit)
------------ | ------------------------------------------- | -----------------------------------
Startup Time | Instant (uses warm executors)               | Slow (provisions new pods)
Context      | Shared Spark context                        | Isolated Spark context
Use Case     | Ad-hoc queries, API backends, quick reports | Long-running ETL, heavy processing
Result       | Returns string result to API/UI             | Logs to driver stdout/file
Resources    | Shared within the service                   | Dedicated per job

Best Practices

1. Validate Input Parameters

Always validate required parameters and provide helpful error messages.

Validate Parameters
def run(self, spark, config) -> str:
    required_keys = ['table', 'output_path']
    for key in required_keys:
        if key not in config:
            raise ValueError(f"Missing required parameter: '{key}'")

2. Use Default Values

For optional parameters, use config.get('key', default_value).

Use Default Values
batch_size = int(config.get('batch_size', 1000))

3. Structure Your Output

Return a well-formatted string for readability in the UI.

Structure Output
lines = ["=== Job Summary ==="]
lines.append(f"Processed: {count} records")
lines.append(f"Duration: {elapsed_time}s")
return "\n".join(lines)

4. Handle Errors Gracefully

Wrap risky operations in try/except and return meaningful messages.

Handle Errors
try:
    df.write.saveAsTable(output_table)
    return f"Successfully wrote to {output_table}"
except Exception as e:
    return f"Error writing table: {str(e)}"

Complete Example: Transaction Report Generator

This job generates a transaction summary report based on the transaction_anomaly_detection.transactions table.

transaction_report.py
from ilum.api import IlumJob
from pyspark.sql.functions import sum as spark_sum, count, col

class TransactionReportGenerator(IlumJob):
    def run(self, spark, config) -> str:
        # Parameters
        merchant_filter = config.get('merchant')  # Optional filter

        # Load data from the default Ilum transactions table
        df = spark.table("transaction_anomaly_detection.transactions")

        if merchant_filter:
            df = df.filter(col("Merchant") == merchant_filter)

        # Aggregate by TransactionType
        summary = df.groupBy("TransactionType").agg(
            count("TransactionID").alias("transaction_count"),
            spark_sum("Amount").alias("total_amount")
        ).collect()

        # Build report
        report = [
            "=== Transaction Report ===",
            f"Merchant Filter: {merchant_filter or 'All'}",
            "",
            "Summary by Transaction Type:",
        ]
        for row in summary:
            report.append(f"  {row['TransactionType']}: {row['transaction_count']} txns, ${row['total_amount']:,.2f}")

        return "\n".join(report)

Execute with:

Execute with Payload
{
  "merchant": "AcmeCorp"
}

Frequently Asked Questions

Can I use Scala for interactive jobs?

Yes, although the IlumJob interface is currently documented primarily for Python. Check the Interactive Job Service documentation for language support details.

How do I debug an interactive job?

Since interactive jobs run on a remote cluster, you can't use a local debugger directly. Instead:

  1. Use print() statements or a logger, which will appear in the driver logs (see the sketch after this list).
  2. Return error messages as part of the string result in your try/except blocks.
  3. Check the Spark UI for the specific job execution to analyze tasks and stages.
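
For example, a standard-library logger needs no extra dependencies and its output lands in the driver logs. A minimal sketch (the class and messages are illustrative, not part of the Ilum API):

Logging Sketch
import logging

from ilum.api import IlumJob

logger = logging.getLogger(__name__)

class DebuggableJob(IlumJob):
    def run(self, spark, config) -> str:
        # Messages logged here appear in the driver logs,
        # which you can inspect alongside the Spark UI.
        logger.info("Received config: %s", config)
        row_count = spark.range(10).count()
        logger.info("Counted %d rows", row_count)
        return f"Counted {row_count} rows"
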
What happens if my job fails?

If your code raises an unhandled exception, the execution will fail, and the error trace will be returned in the API response. It is best practice to wrap your logic in a try/except block to return a user-friendly error message.
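
A minimal sketch of that pattern, wrapping the whole job body so the caller always gets a readable string back (the class and table parameter are illustrative; traceback is from the standard library):

Catch-All Sketch
import traceback

from ilum.api import IlumJob

class SafeJob(IlumJob):
    def run(self, spark, config) -> str:
        try:
            count = spark.table(config['table']).count()
            return f"Success: counted {count} rows"
        except Exception:
            # Return the traceback as the result instead of failing
            # the execution with an unhandled exception.
            return f"Job failed:\n{traceback.format_exc()}"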