
How to Specify Resources When Running Jobs in Open OnDemand

When running jobs in Open OnDemand, you'll assess your resource requirements just as you do when submitting a job through an SSH client.

Jobs will run when dedicated resources are available on the compute nodes. Roar uses Moab and Torque as its scheduler and resource manager. Jobs can be run in either batch or interactive mode; both are submitted using the qsub command.

Both batch and interactive jobs must provide a list of requested resources to the scheduler so that they can be placed on a compute node with the required resources available. These requests are given either in the submission script or on the qsub command line. If they are given in a submission script, they must appear before any non-PBS command.
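As a sketch of the two submission styles (the script name submit.sh, the resource values, and the allocation name allocName are placeholders):

# Batch: the requested resources are read from the #PBS lines in submit.sh
qsub submit.sh

# Batch: the same resource requests can instead be given on the command line
qsub -l nodes=1:ppn=4,walltime=2:00:00,pmem=2gb -A allocName submit.sh

# Interactive: -I opens an interactive session on a compute node with the requested resources
qsub -I -l nodes=1:ppn=1,walltime=1:00:00 -A allocName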

Typical PBS Directives

#PBS -l walltime=HH:MM:SS
This specifies the maximum wall time (real time, not CPU time) that a job should take. If this limit is exceeded, PBS will stop the job. Keeping this limit close to the actual expected time of a job can allow a job to start more quickly than if the maximum wall time is always requested.

#PBS -l pmem=SIZEgb
This specifies the maximum amount of physical memory used by any processor assigned to the job. For example, if the job would run on four processors and each would use up to 2 GB (gigabytes) of memory, then the directive would read #PBS -l pmem=2gb. The default for this directive is 1 GB of memory.

#PBS -l mem=SIZEgb
This specifies the maximum amount of physical memory used in total for the job. This should be used for single-node jobs only.

#PBS -l nodes=N:ppn=M
This specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use. PBS treats a processor core as a processor, so a system with eight cores per compute node can have ppn=8 as its maximum ppn request. Note that unless a job has some inherent parallelism of its own through something like MPI or OpenMP, requesting more than a single processor on a single node is usually wasteful and can impact the job start time.

#PBS -l nodes=N:ppn=M:O
This specifies the node type (node type = O). You can only specify the node type when using the "Open Queue". The node types available on Roar are:

Node Type (O)    CPU                             RAM
basic            Intel Xeon E5-2650v4 2.2 GHz    128 GB total
lcivybridge      Intel Xeon E5-2680v2 2.8 GHz    256 GB total
scivybridge      Intel Xeon E5-2680v2 2.8 GHz    256 GB total
schaswell        Intel Xeon E5-2680v3 2.5 GHz    256 GB total
himem            Intel Xeon E7-4830v2 2.2 GHz    1024 GB total

#PBS -A allocName
This identifies the account to which the resource consumption of the job should be charged (SponsorID_collab). This flag is necessary for all job submissions. For jobs being submitted to a system's open queue you should use -A open.

#PBS -j oe
Normally when a command runs it prints its output to the screen. This output is often normal output and error output. This directive tells PBS to put both normal output and error output into the same output file.
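Putting several of these directives together, a batch submission script might look like the sketch below. The resource values, allocation name, and the program run at the end are placeholders, not values specific to Roar.

#!/bin/bash
#PBS -l nodes=1:ppn=4
#PBS -l walltime=4:00:00
#PBS -l pmem=2gb
#PBS -A allocName
#PBS -j oe

# The #PBS directives above must appear before the first non-PBS command.
cd $PBS_O_WORKDIR        # start where qsub was issued (see PBS Environment Variables below)
./my_program > my_program.log

The script would then be submitted with qsub, for example: qsub submit.sh.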

Node Types Available on Roar

Node Types    Specifications
Basic         2.2 GHz Intel Xeon Processor, 24 CPU/server, 128 GB RAM, 40 Gbps Ethernet
Standard      2.8 GHz Intel Xeon Processor, 20 CPU/server, 256 GB RAM, FDR Infiniband, 40 Gbps Ethernet
High          2.2 GHz Intel Xeon Processor, 40 CPU/server, 1 TB RAM, FDR Infiniband, 10 Gbps Ethernet
GPU           2.5 GHz Intel Xeon Processor, 2 Nvidia Tesla K80 computing modules/server, 24 CPU/server, Double Precision, FDR Infiniband, 10 Gbps Ethernet
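For jobs submitted to the open queue, a node type label from the PBS directives table above can be added to the node request. An illustrative command-line example (the script name and resource values are placeholders):

qsub -A open -l nodes=1:ppn=20:scivybridge,walltime=2:00:00 submit.sh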

PBS Environment Variables

Submitted jobs automatically have several PBS environment variables set that can be used within the job submission script and in scripts run by the job. A full list of PBS environment variables can be obtained by viewing the output of

env | grep PBS > log.pbsEnvVars

run within a submitted job.

Variable Name    Description
PBS_O_WORKDIR    The directory in which the qsub command was issued.
PBS_JOBID        The job's ID.
PBS_JOBNAME      The job's name.
PBS_NODEFILE     A file in which all relevant node hostnames are stored for a job.
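As a short sketch of how these variables might be used inside a submission script (the output file names are placeholders):

# Change to the directory the job was submitted from
cd $PBS_O_WORKDIR

# Record the job's ID and name
echo "Job $PBS_JOBID ($PBS_JOBNAME)" > job_info.txt

# The node file typically lists one hostname per assigned processor
NP=$(wc -l < $PBS_NODEFILE)
echo "Running on $NP processors" >> job_info.txt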