Using Containers
Jobs on HPRC run inside an Enterprise Linux 6 container by default. If you want a job to run in a custom Singularity container instead, you can specify that container either on the qsub command line or within your submission script.
Here is an example on the qsub command line:
qsub -I -v SINGULARITY_CONTAINER="/storage/work/abc123/singularity/bionic-base.simg"
For example, such an interactive job would produce output like the following; note that /etc/os-release now reports the container's Ubuntu operating system rather than the host system:

[abc123@aci-lgn-008 abc123]$ qsub -I -A ics-hprc -q hprc -l nodes=1:ppn=20 -l mem=32gb -l walltime=1:23:30:00
qsub: waiting for job 12907285.torque01.util.production.int.aci.ics.psu.edu to start
qsub: job 12907285.torque01.util.production.int.aci.ics.psu.edu ready

[abc123@comp-vc-1645 abc123]$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Here is an example where the container is specified within your submission script:
#PBS -l walltime=1:30:00
#PBS -l nodes=1:ppn=8
#PBS -l mem=8gb
#PBS -j oe
#PBS -r n
#PBS -m bae
#PBS -M abc123@psu.edu
#PBS -q hprc
#PBS -A ics-hprc
#PBS -v SINGULARITY_CONTAINER="/storage/work/abc123/singularity/bionic-base.simg"
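The directives above reserve resources and select the container, but a complete script also needs a body; those commands execute inside the container once the job starts. Here is a minimal sketch (the final program name is an illustrative placeholder, not a real tool on the system):

#!/bin/bash
#PBS -l walltime=1:30:00
#PBS -l nodes=1:ppn=8
#PBS -l mem=8gb
#PBS -q hprc
#PBS -A ics-hprc
#PBS -v SINGULARITY_CONTAINER="/storage/work/abc123/singularity/bionic-base.simg"

# Everything below runs inside the Ubuntu container.
cd $PBS_O_WORKDIR        # move to the directory the job was submitted from
cat /etc/os-release      # confirm the Ubuntu container environment
./my_ubuntu_program      # placeholder for your actual workload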
Specifying a Custom Bash Environment on HPRC
Some jobs are best served by a custom bash environment, especially jobs run inside containers that do not support the module system or other environment settings available directly on ACI-b. A custom bash environment can be specified with a qsub variable, either on the command line or within your submission script.
Here’s an example on the command line:
qsub -I -v SINGULARITY_CONTAINER="/storage/work/abc123/singularity/bionic-base.simg" -v BASH_ENV="/storage/abc123/.bashrc_ubuntu" -q hprc -A epo2-hprc -l nodes=1:ppn=20 -l pmem=5gb -l walltime=16:30:00
Here’s an example within the submission script:
#PBS -l walltime=1:30:00
#PBS -l nodes=1:ppn=8
#PBS -l mem=8gb
#PBS -j oe
#PBS -r n
#PBS -m bae
#PBS -M abc123@psu.edu
#PBS -q hprc
#PBS -A ics-hprc
#PBS -v SINGULARITY_CONTAINER="/storage/work/abc123/singularity/bionic-base.simg"
#PBS -v BASH_ENV="/storage/abc123/.bashrc_ubuntu"
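For reference, the file named by BASH_ENV is an ordinary bash startup file that is sourced when the job's shell starts. A minimal sketch, with illustrative contents (the tool paths are assumptions, not required values):

# /storage/abc123/.bashrc_ubuntu -- sourced at job startup via BASH_ENV
# Put Ubuntu-side tools and libraries on the search paths.
export PATH=/storage/work/abc123/ubuntu-tools/bin:$PATH
export LD_LIBRARY_PATH=/storage/work/abc123/ubuntu-tools/lib:$LD_LIBRARY_PATH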