Run Spark Submit (spark-submit) on Kubernetes

A simple Spark job in Ilum operates just like one submitted via the standard spark-submit command, but with additional enhancements for ease of use, configuration, and integration with external tools.

You can use the JAR file with Spark examples from your local Spark installation or any custom JAR you have.

Below is a step-by-step guide to setting up and running a simple Spark job using spark-submit on Ilum. This guide demonstrates the core configuration needed and shows how to monitor your job’s progress within the Ilum platform. For a complete overview of Ilum's architecture, check the Architecture Overview.


Quick Start (TL;DR)

How do I run a Spark job on Kubernetes with spark-submit?

To run a Spark job on Ilum (Kubernetes), ensure Java 17 and Spark are installed, upload your JAR, and run:

Quick Start: Spark Submit on K8s
./bin/spark-submit \
--master k8s://http://<ilum-core-address>:<ilum-core-port> \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--conf spark.driver.memory=4g \
--conf spark.ilum.cluster=default \
--conf spark.kubernetes.container.image=ilum/spark:4.1.0 \
--conf spark.kubernetes.submission.waitAppCompletion=true \
s3a://ilum-files/ilum/default/spark-examples_2.13-4.1.0.jar

Note: Replace <ilum-core-address> with your actual Ilum Core endpoint.
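
Once the submission is accepted, you can confirm that the pods were scheduled (a quick check, assuming the standard spark-role labels that Spark on Kubernetes attaches to driver and executor pods):

Check Spark Pods
# list the Spark driver pod(s) in the current namespace
kubectl get pods -l spark-role=driver
# list the executors once the driver is running
kubectl get pods -l spark-role=executor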

Step-by-Step Guide

1. Prerequisites

  • Ensure Java 17 is installed and correctly set in your JAVA_HOME.
  • Download and extract the appropriate version of Apache Spark:
Download Spark 4
wget https://archive.apache.org/dist/spark/spark-4.1.0/spark-4.1.0-bin-hadoop3.tgz
tar -xzf spark-4.1.0-bin-hadoop3.tgz
cd spark-4.1.0-bin-hadoop3
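
Before moving on, a quick sanity check helps (the exact version strings will differ with your installation):

Verify Installation
# confirm a Java 17 runtime is on the PATH
java -version
# confirm JAVA_HOME points at the same install
echo $JAVA_HOME
# confirm the Spark distribution unpacked correctly
./bin/spark-submit --version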

2. Connect to Ilum

If Ilum is deployed on Kubernetes, forward the service port to your local machine to make Ilum accessible at localhost:9888.

Forward Core
kubectl port-forward svc/ilum-core 9888:9888

Production Tip

If you're communicating from within the same Kubernetes cluster, you can use Kubernetes DNS-based service addresses (e.g., http://ilum-core.namespace.svc.cluster.local) or expose services using Ingress.
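
For instance, a pod running inside the cluster could submit directly against the service DNS name and skip the port-forward (a sketch that mirrors the REST submission shown in step 3 below; the ilum namespace is an assumption, so adjust it to your deployment):

In-Cluster REST Submit
./bin/spark-submit \
--master spark://ilum-core.ilum.svc.cluster.local:9888 \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--conf spark.master.rest.enabled=true \
--conf spark.ilum.cluster=default \
s3a://ilum-files/ilum/default/spark-examples_2.13-4.1.0.jar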

3. Submit Your Spark Job

Choose the submission method that best fits your workflow. The REST-based flow below is suitable for quick local testing.

1. Upload your JAR File

For demonstration, we assume the JAR is uploaded manually to MinIO.

Locate the example JAR: examples/jars/spark-examples_2.13-4.1.0.jar

Upload it to MinIO (bucket ilum-files, path ilum/default/): s3a://ilum-files/ilum/default/spark-examples_2.13-4.1.0.jar
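
If you prefer to script the upload, the MinIO client (mc) can handle it (a sketch; the endpoint http://localhost:9000 and the credential placeholders must be replaced with the values from your MinIO deployment):

Upload with mc
# register your MinIO endpoint under the alias "ilum" (placeholder credentials)
mc alias set ilum http://localhost:9000 <access-key> <secret-key>
# copy the example JAR into the bucket/path Ilum expects
mc cp examples/jars/spark-examples_2.13-4.1.0.jar ilum/ilum-files/ilum/default/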

2. Submit via REST

Limitation

spark.ilum.pyRequirements is not supported in this mode, as REST does not support PySpark submissions.

Run the following command:

REST Submit (Spark 4)
./bin/spark-submit \
--master spark://localhost:9888 \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--conf spark.master.rest.enabled=true \
--conf spark.ilum.cluster=default \
--conf spark.app.name=my-spark-job \
s3a://ilum-files/ilum/default/spark-examples_2.13-4.1.0.jar

Parameters:

| Parameter | Description |
| --- | --- |
| --master | Ilum Core address via REST (e.g. spark://localhost:9888). |
| --conf spark.master.rest.enabled=true | Enables REST submission. |
| s3a://... | JAR file path in MinIO. |

Expected Output
Running Spark using the REST application submission protocol.
25/03/12 12:58:01 INFO RestSubmissionClient: Submitting a request to launch an application in spark://localhost:9888.
25/03/12 12:58:03 INFO RestSubmissionClient: Submission successfully created as 20250312-1158-qdnioef2rny. Polling submission state...
25/03/12 12:58:03 INFO RestSubmissionClient: Submitting a request for the status of submission 20250312-1158-qdnioef2rny in spark://localhost:9888.
25/03/12 12:58:03 INFO RestSubmissionClient: State of driver 20250312-1158-qdnioef2rny is now SUBMITTED.
25/03/12 12:58:03 INFO RestSubmissionClient: Driver is running on worker ILUM at ILUM_UI_ADDRESS/workloads/details/job/20250312-1158-qdnioef2rny.
25/03/12 12:58:03 INFO RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
"action" : "CreateSubmissionResponse",
"serverSparkVersion" : "4.1.0",
"submissionId" : "20250312-1158-qdnioef2rny",
"success" : true
}
25/03/12 12:58:03 INFO ShutdownHookManager: Shutdown hook called
25/03/12 12:58:03 INFO ShutdownHookManager: Deleting directory /tmp/spark-fa2603be-488a-4e2a-9b7f-5e49825d379b
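
The submissionId from the response can also be used to poll or cancel the job from the command line (a sketch, assuming Ilum Core exposes the same v1/submissions endpoints that RestSubmissionClient uses under the hood):

Manage a Submission via REST
# check the current state of a submission
curl http://localhost:9888/v1/submissions/status/20250312-1158-qdnioef2rny
# cancel a running submission
curl -X POST http://localhost:9888/v1/submissions/kill/20250312-1158-qdnioef2rny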

4. Monitor and Troubleshoot

Using the Ilum UI:

  • Monitor Job Progress: Track executors, memory usage, and job stages.
  • Review Results: Access logs and the integrated Spark History Server.
  • Troubleshoot: Diagnose failures by checking detailed executor logs.

For more details on monitoring metrics, see the Monitoring Guide.
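
For a quick CLI alternative, driver logs can also be tailed directly with kubectl (assuming the standard spark-role labels and a kubectl context pointing at the Ilum cluster):

Tail Driver Logs
# follow the most recent log lines from the Spark driver pod(s)
kubectl logs -l spark-role=driver --tail=100 -f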


Comparison: Classic spark-submit vs Ilum Approach

Running Spark directly on Kubernetes requires significant administrative effort. Ilum simplifies this by automating infrastructure management.

Traditional Approach (Native Spark on K8s) vs Ilum

| Feature | Native Spark on K8s | Ilum (Managed Spark) |
| --- | --- | --- |
| Setup | Manual Docker image build & complex spark-submit args. | Automated. Use existing JARs; Ilum handles images. |
| Config | Verbose (Service Accounts, Volumes, Secrets). | Simplified. Minimal args; configs are injected automatically. |
| Storage | Manual Hadoop/S3 configuration per job. | Integrated. Automatic credential injection for S3/GCS/Azure. |
| Monitoring | CLI-based (kubectl logs), ephemeral. | Centralized UI. Persistent logs, metrics, and history. |
| Observability | Basic Spark UI (if exposed). | Advanced. Data Lineage, detailed resource metrics. |

Key Benefits of Ilum:

  1. Automatic Image Selection: Ilum selects a compatible Spark Docker image matching the cluster version.
  2. Advanced Observability: Ilum provides deep lineage observability and advanced monitoring capabilities.
  3. Simplified Configuration: Cut the number of spark-submit parameters by a factor of 3 to 4.
  4. Integrated Storage Access: Credentials for all configured storages are automatically injected.
  5. Instant Monitoring: Logs and metrics (CPU/RAM) appear in the Ilum UI immediately.

For a developer, this means less time fighting with infrastructure and error-prone configurations, and more time delivering business logic.

For advanced customization, refer to the official Spark documentation.