
Running Batch Jobs on Roar Collab

To submit batch jobs, use the `sbatch` command to pass a submit script to the scheduler. To highlight how this works, let's use this basic script as an example:

#!/bin/bash
module load python/3.6

 To submit this script as a batch job using the default parameters, the command would be: 

$ sbatch <submit_script>

Job submit scripts are batch scripts with added #SBATCH directives that tell the scheduler what resources the job requires. These directives are placed at the top of the submit script, as shown below:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=1GB
#SBATCH --time=1:00:00
#SBATCH --partition=open

module load python/3.6
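
A script like this can be created and submitted entirely from the command line. In the sketch below, the file name `demo_submit.sh` is just an illustrative choice, and the final `sbatch` call itself requires access to a Slurm cluster:

```shell
# Write a minimal submit script using a here-document
# (the file name demo_submit.sh is illustrative).
cat > demo_submit.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=1GB
#SBATCH --time=1:00:00
#SBATCH --partition=open

module load python/3.6
EOF

# Sanity-check the directives before submitting:
grep -c '^#SBATCH' demo_submit.sh   # → 5

# Submit it (requires a Slurm cluster):
# sbatch demo_submit.sh
```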


Alternatively, directives can be specified inline as options to the `sbatch` command. Here is an example of inline directives requesting a single core on a single node with 1 GB of RAM for 1 hour on the Open queue:

$ sbatch -N 1 -n 1 --mem=1GB -t 1:00:00 -p open <submit_script>
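
The short options in the command above are one-for-one abbreviations of the long-form directives (this is standard Slurm option equivalence; `<submit_script>` is a placeholder for your script name):

```shell
# Equivalent long-form submission (requires a Slurm cluster):
#   -N 1        =  --nodes=1
#   -n 1        =  --ntasks=1
#   -t 1:00:00  =  --time=1:00:00
#   -p open     =  --partition=open
sbatch --nodes=1 --ntasks=1 --mem=1GB --time=1:00:00 --partition=open <submit_script>
```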

Here are some commonly used directives:

--nodes (-N): Number of nodes requested
--time (-t): Maximum wall time for the job, in DD-HH:MM:SS format
--ntasks (-n): Total number of tasks; used to request a specific number of cores
--ntasks-per-node: Number of tasks per node; this value multiplied by the number of nodes requested equals the total allocated cores
--mem: Memory (RAM) allocated to the job, per node; KB, MB, and GB units can be used (the default unit is MB). Request less memory than the node's total: the maximum requestable on a 512 GB RAM node is 500 GB, and on a 256 GB RAM node it is 250 GB.
--mem-per-cpu: Minimum memory required per allocated CPU
--output: Filename to which all STDOUT will be directed; the default is slurm-%j.out, where %j is the job ID
--error: Filename to which all STDERR will be directed; by default STDERR is written to the same slurm-%j.out file as STDOUT
--job-name: The name under which the job appears in the queue
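
Putting several of these directives together, a submit-script header might look like the following sketch. The job name, output filenames, and resource numbers here are illustrative choices, and `%j` is Slurm's placeholder for the job ID:

```shell
#!/bin/bash
#SBATCH --job-name=my_analysis        # name shown in the queue (illustrative)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4           # 1 node x 4 tasks = 4 cores total
#SBATCH --mem-per-cpu=2GB             # 4 CPUs x 2 GB = 8 GB for the job
#SBATCH --time=0-02:00:00             # 2 hours, in DD-HH:MM:SS format
#SBATCH --output=my_analysis_%j.out   # %j expands to the job ID
#SBATCH --error=my_analysis_%j.err    # keep STDERR in a separate file
```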

More #SBATCH directives can be found in the Slurm documentation.