Run Spark Jobs via REST API

Overview

Ilum provides a robust REST API that allows you to manage, submit, and execute Apache Spark jobs programmatically. This capability is essential for organizations running Spark on Kubernetes who need to automate their data workflows.

Using the API is particularly effective for:

  • CI/CD Integration: Seamlessly trigger Spark jobs from GitLab CI, Jenkins, GitHub Actions, or Airflow.
  • Custom Orchestration: Build your own data platforms or internal tools on top of Ilum.
  • Automation: Replace manual spark-submit CLI commands with reliable, code-driven API calls.

REST API vs. Spark CLI

| Feature | REST API | Spark CLI (spark-submit) |
| --- | --- | --- |
| Primary Use Case | Automation, CI/CD, web apps | Ad-hoc testing, local development |
| Client Requirement | curl or any HTTP client | Spark binaries & Java installed |
| Feedback Loop | JSON response (Job ID) | Console logs (streamed) |
| Firewall Friendly | Yes (single HTTP port) | No (requires random ports) |

In this guide, you will learn how to:

  1. Submit a Spark job using the multipart/form-data endpoint.
  2. Monitor the job's status via the API.

Prerequisites

To follow this example, you will need:

  • The curl command-line tool.
  • A sample Spark JAR file, such as spark-examples_2.13-4.1.1.jar, which ships with the Apache Spark distribution under examples/jars/.

Accessing the API

The Ilum Core API is exposed by default on port 9888. Depending on your environment, you can access it using one of the following methods:

API Base URL

In the examples below, replace http://localhost:9888 with your actual Ilum Core address.

1. Port Forwarding (Development)

If you are running Ilum in a local Kubernetes cluster (such as Minikube or MicroK8s), you can use kubectl port-forward to access the API locally:

Port Forward
kubectl port-forward svc/ilum-core 9888:9888

The API will then be available at http://localhost:9888/api/v1.

2. NodePort

If your Ilum installation is configured with a NodePort service type, you can access it via any Kubernetes node IP:

Get Nodes & Services
# Get the node IP
kubectl get nodes -o wide

# Get the assigned NodePort
kubectl get svc ilum-core

Access the API at http://<NODE_IP>:<NODE_PORT>/api/v1.

3. Ingress (Production)

For production environments, use an Ingress controller to expose the API. This allows you to use a custom domain and SSL/TLS encryption.

Example Ingress Path
- path: /api/v1/(.*)
  pathType: ImplementationSpecific
  backend:
    service:
      name: ilum-core
      port:
        number: 9888

Access the API at https://your-domain.com/api/v1.

Which Method Should I Use?

| Method | Best For | Requirement |
| --- | --- | --- |
| Port Forwarding | Local development, one-off tests | kubectl access to the cluster |
| NodePort | Internal lab environments, simple setups | Access to Kubernetes node IPs |
| Ingress | Production, team collaboration, CI/CD | Ingress controller (Nginx, Traefik, etc.) |

Submit Apache Spark Jobs Programmatically

To submit a new Spark application, use the POST /api/v1/job/submit endpoint. This endpoint accepts multipart/form-data requests, allowing you to upload your application JAR or Python script along with the job configuration. This method is the programmatic equivalent of spark-submit.

Example: Submitting MiniReadWriteTest

The following curl command submits the MiniReadWriteTest example job (from the downloaded JAR). This job writes a file and then reads it back to verify the setup.

Submit Job (Spark 4)
curl -X POST "http://localhost:9888/api/v1/job/submit" \
  -F "name=MiniReadWriteTest" \
  -F "clusterName=default" \
  -F "language=SCALA" \
  -F "jobClass=org.apache.spark.examples.MiniReadWriteTest" \
  -F "jobConfig=spark.executor.instances=2" \
  -F "args=/opt/spark/examples/src/main/resources/kv1.txt" \
  -F "jars=@spark-examples_2.13-4.1.1.jar"
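The same request can be issued from Python rather than curl. The sketch below builds the multipart/form-data body using only the standard library; the field names mirror the curl example above, while the base URL and JAR path are placeholders for your environment.

```python
import io
import json
import urllib.request
import uuid

def build_multipart(fields, files):
    """Encode plain fields and file uploads as a multipart/form-data body.

    fields: dict of field name -> string value
    files:  dict of field name -> (filename, raw bytes)
    Returns (content_type, body) ready for an HTTP POST.
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        part = (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        )
        buf.write(part.encode())
    for name, (filename, data) in files.items():
        head = (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        )
        buf.write(head.encode())
        buf.write(data)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", buf.getvalue()

def submit_job(base_url, jar_path, fields):
    """POST the job to Ilum and return the jobId from the JSON response."""
    with open(jar_path, "rb") as f:
        files = {"jars": (jar_path.rsplit("/", 1)[-1], f.read())}
    content_type, body = build_multipart(fields, files)
    req = urllib.request.Request(
        f"{base_url}/api/v1/job/submit",
        data=body,
        headers={"Content-Type": content_type},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["jobId"]
```

A call equivalent to the curl example would pass the same form fields, e.g. `submit_job("http://localhost:9888", "spark-examples_2.13-4.1.1.jar", {"name": "MiniReadWriteTest", "clusterName": "default", "language": "SCALA", "jobClass": "org.apache.spark.examples.MiniReadWriteTest", "jobConfig": "spark.executor.instances=2", "args": "/opt/spark/examples/src/main/resources/kv1.txt"})`.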

Parameter Reference

| Parameter | Type | Description | Required | Example |
| --- | --- | --- | --- | --- |
| name | string | A unique identifier for your job. | Yes | MiniReadWriteTest |
| clusterName | string | The name of the Kubernetes cluster registered in Ilum. | Yes | default |
| language | string | The programming language of the job (SCALA or PYTHON). | Yes | SCALA |
| jobClass | string | Scala: the fully qualified main class name. Python: the script filename (without extension). | Yes | org.apache.spark.examples.MiniReadWriteTest |
| jobConfig | string | Semicolon-separated list of Spark configuration properties in key=value format. | No | spark.executor.instances=2 |
| args | string | Semicolon-separated list of arguments to pass to the job's main method. | No | /path/to/input.txt |
| jars | file | The application JAR file. Use the @ prefix in curl to upload the file. | Yes (for Scala) | @app.jar |
| pyFiles | file | The main Python script or ZIP package. | Yes (for Python) | @job.py |
Full API Specification

For a complete list of all available parameters and their detailed descriptions, refer to the Ilum API Documentation.
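Because jobConfig and args are semicolon-separated strings, a small helper keeps multi-value submissions readable. This is a convenience sketch for building the strings, not part of the Ilum API itself:

```python
def join_config(conf: dict) -> str:
    """Render a dict of Spark properties as a semicolon-separated jobConfig string."""
    return ";".join(f"{key}={value}" for key, value in conf.items())

def join_args(args: list) -> str:
    """Render positional job arguments as a semicolon-separated args string."""
    return ";".join(args)

# Dicts preserve insertion order, so the output is deterministic:
# join_config({"spark.executor.instances": "2", "spark.executor.memory": "2g"})
# -> "spark.executor.instances=2;spark.executor.memory=2g"
```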

Monitor Spark Job Status

Upon successful submission, the API returns a JSON response containing the jobId. You can use this ID to poll for the job's completion status, making it easy to build wait-logic into your automation scripts.

{
  "jobId": "20251222-0931-f56pqk5y1ap"
}

You can use this jobId to check the current status of your job:

Get Job Status
curl "http://localhost:9888/api/v1/job/{jobId}"

The response provides a comprehensive overview of the job's configuration, state, and execution timing.

{
  "jobId": "20251222-0931-f56pqk5y1ap",
  "jobName": "MiniReadWriteTest",
  "jobType": "SINGLE",
  "language": "SCALA",
  "appId": "spark-92b3da7ee0fa4d1e965b521ba356544c",
  "state": "FINISHED",
  "submitTime": 1766395898079,
  "startTime": 1766395899941,
  "endTime": 1766395905785,
  "jobConfig": {
    "spark.executor.instances": "2",
    "spark.kubernetes.namespace": "default",
    "spark.eventLog.enabled": "true",
    "...": "..."
  }
}

Key fields to monitor include:

  • state: The current lifecycle phase (e.g., SUBMITTED, RUNNING, FINISHED, FAILED).
  • appId: The Spark Application ID assigned by the cluster manager.
  • startTime / endTime: Epoch timestamps (ms) for performance tracking.
  • error: If the state is FAILED, this field will contain the error message or stack trace.
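The status endpoint makes wait-logic straightforward. Below is a minimal polling loop, assuming the response shape shown above; the fetch function is injected so the loop works with any HTTP client and is easy to test:

```python
import time

TERMINAL_STATES = {"FINISHED", "FAILED"}

def wait_for_job(fetch_status, job_id, timeout_s=600, interval_s=5):
    """Poll fetch_status(job_id) until the job reaches a terminal state.

    fetch_status: callable returning the parsed JSON dict for the job.
    Returns the final job dict on FINISHED; raises RuntimeError on FAILED
    and TimeoutError if the job is still running after timeout_s seconds.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch_status(job_id)
        state = job.get("state")
        if state in TERMINAL_STATES:
            if state == "FAILED":
                raise RuntimeError(f"Job {job_id} failed: {job.get('error')}")
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_s}s")
```

In practice, fetch_status would wrap a GET to /api/v1/job/{jobId}, e.g. `lambda jid: requests.get(f"{base_url}/api/v1/job/{jid}").json()`.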

Troubleshooting Common Issues

If you encounter issues while submitting jobs, refer to the table below for common error codes and solutions.

| HTTP Code | Error | Possible Cause & Solution |
| --- | --- | --- |
| 400 | Bad Request | Missing parameters: ensure jobClass, clusterName, and jars (for Scala) are provided correctly in the form data. |
| 401 | Unauthorized | Auth failure: check whether your cluster requires an API token or Basic Auth header. |
| 404 | Not Found | Invalid cluster: the clusterName specified does not exist. Verify active clusters via GET /api/v1/cluster. |
| 500 | Internal Server Error | Cluster connection: Ilum cannot reach the Kubernetes API server. Check the ilum-core logs for connectivity issues. |

Frequently Asked Questions (FAQ)

Can I upload Python dependencies?

Yes. For PySpark jobs, use the pyFiles parameter to upload your .py script or a .zip archive containing your Python modules.

How do I secure the API?

We recommend placing the Ilum API behind an Ingress Controller with Basic Auth or OAuth2 enabled. You can then pass the credentials via standard HTTP headers.
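With Basic Auth on the Ingress, the credential travels in a standard Authorization header (RFC 7617); nothing Ilum-specific is involved. A sketch of constructing it, with placeholder credentials:

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic Auth header: base64-encoded "user:password"."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Attach the header to any API call, e.g. in curl form:
#   curl -H "Authorization: Basic <token>" http://localhost:9888/api/v1/job/{jobId}
```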

What is the maximum JAR size?

The default limit is usually 100MB (configured in your Ingress or Spring Boot settings). For larger JARs, we recommend uploading them to S3/HDFS first and referencing them via spark.jars config, rather than uploading directly.
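Following that recommendation, a pre-staged JAR can be referenced through Spark configuration instead of uploaded. The form fields below are a sketch: the s3a:// bucket path, the main class, and the availability of S3A credentials on your cluster are all assumptions about your environment, and whether the jars upload can be omitted entirely depends on your Ilum version (consult the API documentation).

```python
# Submission fields pointing Spark at a JAR already staged on S3
# (hypothetical bucket/path and class name, for illustration only).
fields = {
    "name": "LargeJobFromS3",
    "clusterName": "default",
    "language": "SCALA",
    "jobClass": "com.example.LargeJob",
    # spark.jars makes Spark fetch the artifact itself at launch time,
    # bypassing the HTTP upload size limit.
    "jobConfig": "spark.jars=s3a://my-bucket/artifacts/large-app.jar;spark.executor.instances=2",
}
```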