
Scheduling Apache Spark Jobs

Scheduling in Ilum allows you to automate the execution of Apache Spark jobs on Kubernetes clusters at specified intervals using CRON expressions. This is essential for setting up reliable ETL pipelines, regular data analysis, or maintenance tasks that need to run without manual intervention.

Example JAR Source

You can use the JAR file containing the official Spark examples, available at the link below:

Spark 4 / Scala 2.13: spark-examples_2.13-4.1.1.jar

Step-by-Step Guide: Scheduling a Spark Job

  1. Navigate to Schedules: Access the Schedules section in your Ilum dashboard.

  2. Create New Schedule: Click the New Schedule + button to start setting up your automated job.

  3. Fill Out Schedule Details:

    • General Tab:

      • Name: Enter ScheduledMiniReadWriteTest
      • Cluster: Select your target cluster
      • Class: Enter org.apache.spark.examples.MiniReadWriteTest
      • Language: Select Scala
    • Timing Tab:

      • CRON Expression: Select the Custom tab.
      • Custom expression: Enter @daily

      This configuration will trigger the job to run once every day at midnight. You can adjust this to any valid CRON expression (e.g., 0 */12 * * * for every 12 hours).

    • Configuration Tab:

      • Arguments: Enter /opt/spark/examples/src/main/resources/kv1.txt
    • Resources Tab:

      • Jars: Upload the JAR file from the link above.
    • Memory Tab:

      • Leave all settings at their default values for this example.
  4. Submit and Monitor:

    • Click Submit to create the schedule.
    • You can see your new schedule in the list.
    • When the scheduled time arrives, a new job instance will be launched automatically. You can view these instances in the Jobs section.
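The CRON timing configured above determines when each job instance fires. As a rough illustration (this is not Ilum's internal scheduler, just a minimal sketch of the semantics), the following computes the next fire time for a simplified 5-field expression, interpreting only the minute and hour fields and treating the day fields as `*`:

```python
from datetime import datetime, timedelta

# Sketch of CRON semantics (not Ilum's implementation): find the next
# fire time for a simplified expression. Only minute and hour fields
# are interpreted; day-of-month, month, and day-of-week are assumed "*".
MACROS = {"@daily": "0 0 * * *", "@hourly": "0 * * * *"}

def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):          # step values, e.g. */12
        return value % int(field[2:]) == 0
    return value == int(field)

def next_run(expr: str, now: datetime) -> datetime:
    expr = MACROS.get(expr, expr)
    minute, hour, *_ = expr.split()
    t = now.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while not (field_matches(minute, t.minute) and field_matches(hour, t.hour)):
        t += timedelta(minutes=1)
    return t

print(next_run("@daily", datetime(2024, 5, 1, 15, 30)))      # 2024-05-02 00:00
print(next_run("0 */12 * * *", datetime(2024, 5, 1, 9, 0)))  # 2024-05-01 12:00
```

So a schedule created at 15:30 with `@daily` launches its first instance at the following midnight, and `0 */12 * * *` fires at 00:00 and 12:00 each day.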

Schedule Configuration Reference

Below is a detailed breakdown of all available settings, organized by tab as they appear in the UI.

| Parameter   | Description |
| ----------- | ----------- |
| Name        | A unique identifier for the schedule. |
| Cluster     | The target cluster where the scheduled jobs will be executed. |
| Class       | The fully qualified class name of the application (e.g., org.apache.spark.examples.SparkPi) or the filename for Python scripts. |
| Language    | The programming language used for the job (Scala or Python). |
| Description | An optional description to explain the purpose of this schedule. |
| Max Retries | The maximum number of times Ilum will attempt to restart the job if it fails. |
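The Max Retries behavior can be sketched as a simple retry loop. This is an illustration of the semantics described above, not Ilum's actual code; the function names are hypothetical:

```python
# Sketch of "Max Retries" semantics (not Ilum's implementation):
# re-submit a failing job up to max_retries additional times before
# propagating the failure. A real scheduler would also back off
# between attempts (e.g. time.sleep with increasing delays).
def run_with_retries(submit, max_retries: int):
    attempts = 0
    while True:
        try:
            return submit()
        except RuntimeError:
            attempts += 1
            if attempts > max_retries:
                raise

# Usage: a hypothetical job that fails twice, then succeeds.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "FINISHED"

print(run_with_retries(flaky_job, max_retries=3))  # prints "FINISHED"
```

With `max_retries=3` the job above succeeds on its third attempt; with `max_retries=1` the second failure would be raised to the caller.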

Frequently Asked Questions


Can I schedule PySpark jobs using Ilum? Yes, Ilum fully supports scheduling for both Scala/Java (JARs) and Python (PySpark) jobs. Simply select "Python" as the language in the General tab and provide your script.


How does the retry mechanism work? If a scheduled job fails, Ilum can automatically attempt to restart it based on the "Max Retries" configuration. This ensures transient issues don't break your pipelines.


What CRON formats are supported? Ilum supports standard Unix-style CRON expressions (e.g., 0 12 * * *) as well as predefined macros like @daily, @hourly, etc.
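The predefined macros expand to ordinary 5-field Unix expressions. The mapping below follows standard cron conventions (it is not taken from Ilum's source, so check the product docs for the authoritative list):

```python
# Standard cron macro expansions (conventional Unix meanings; assumed
# here, not confirmed against Ilum's implementation).
CRON_MACROS = {
    "@hourly":  "0 * * * *",   # minute 0 of every hour
    "@daily":   "0 0 * * *",   # midnight every day
    "@weekly":  "0 0 * * 0",   # midnight every Sunday
    "@monthly": "0 0 1 * *",   # midnight on the 1st of each month
    "@yearly":  "0 0 1 1 *",   # midnight on January 1st
}

def normalize(expr: str) -> str:
    """Return a plain 5-field expression, expanding macros if needed."""
    expr = expr.strip()
    if expr.startswith("@"):
        return CRON_MACROS[expr]
    if len(expr.split()) != 5:
        raise ValueError(f"expected 5 fields, got: {expr!r}")
    return expr

print(normalize("@daily"))      # "0 0 * * *"
print(normalize("0 12 * * *"))  # unchanged: "0 12 * * *"
```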