Batch Jobs on Roar Collab

On RC, users can run jobs by submitting scripts to the Slurm job scheduler. A Slurm script must do three things:

  1. Prescribe the resource requirements for the job
  2. Set the job’s environment
  3. Specify the work to be carried out in the form of shell commands

The portion of the job script that prescribes the resource requirements contains the resource directives. Resource directives in Slurm submission scripts are denoted by lines beginning with the #SBATCH keyword. The rest of the script, which both sets the environment and specifies the work to be done, consists of bash commands. The very first line of the submission script, #!/bin/bash, is called a shebang; it tells the command-line environment to interpret the rest of the script as bash commands.

Below is a sample Slurm script for running a Python task:

 

#!/bin/bash

#SBATCH --job-name=apythonjob   # give the job a name
#SBATCH --account=open          # specify the account
#SBATCH --partition=open        # specify the partition
#SBATCH --nodes=1               # request a node
#SBATCH --ntasks=1              # request a single task (one CPU core by default)
#SBATCH --mem=1G                # request the memory required per node
#SBATCH --time=00:01:00         # set a limit on the total run time

python pyscript.py

 

In this sample submission script, the resource directives request a single node with a single task. Slurm is a task-based scheduler, and a task is equivalent to a processor core unless otherwise specified in the submission script. The scheduler directives then request 1 GB of memory per node for a maximum of 1 minute of runtime. The memory can be specified in KB, MB, GB, or TB by using a suffix of K, M, G, or T, respectively. If no suffix is used, the default is MB. Lastly, the work to be done is specified, which is the execution of a Python script in this case.
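As an illustration of these memory units, each of the directives below requests the same 1 GB of memory per node; only one such line should appear in a given script, and the values here are purely illustrative:

#SBATCH --mem=1G       # 1 GB per node, using the G suffix
#SBATCH --mem=1024M    # the same request expressed in megabytes
#SBATCH --mem=1024     # no suffix, so the value is interpreted as MB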

The resource directives should be populated with resource requests that are sufficient to complete the job yet minimal enough that the scheduler can place the job reasonably quickly. The total time to completion of a job is the time it spends in the queue plus the time it takes to run once placed. The queue time grows as the amount of requested resources grows, so it is minimized by requesting only what the job actually needs. The run time is minimized when the job efficiently utilizes all of the computational resources available to it. The total time to completion, therefore, is minimized when the requested resources closely match what the job can efficiently use. During the development of a computational job, keep track of an estimate of the resources it actually uses, then add roughly a 20% margin on top of that estimate to produce the resource requests used in the resource directives.
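As a sketch of how to gather such an estimate, Slurm's sacct command can report the resources a completed job actually consumed, assuming job accounting is enabled on the cluster. The job ID below is illustrative; MaxRSS shows the peak memory used and Elapsed shows the actual run time:

$ sacct -j 123456 --format=JobID,Elapsed,MaxRSS,AllocCPUS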

If the above sample submission script is saved as pyjob.slurm, it can be submitted to the Slurm scheduler with the sbatch command:

 

$ sbatch pyjob.slurm

 
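If the submission is accepted, sbatch prints the ID assigned to the job; the job ID shown here is just an example:

Submitted batch job 123456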

The job can be submitted to the scheduler from any node on RC. The scheduler holds the job in the queue until it gains sufficient priority to run on a compute node. Depending on the nature of the job and the availability of computational resources, the queue time can vary from seconds to days. To check the status of queued and running jobs, use the squeue command:

 

$ squeue -u <userid>
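The output lists one row per job, similar to the following; the job ID, user ID, and other values shown are illustrative. A state (ST) of PD indicates the job is pending in the queue, while R indicates it is running.

  JOBID PARTITION     NAME     USER ST  TIME NODES NODELIST(REASON)
 123456      open apythonj  abc1234 PD  0:00     1 (Priority)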