Using the cluster
Queues and limits
You can submit jobs to any of the following SLURM partitions (otherwise known as queues):
- singlenode
Jobs requiring one node, with a maximum of 32 cores. Maximum run-time 30 days.
- parallel
Jobs running in parallel across multiple nodes. There is no upper limit on the number of nodes (other than how many there are), but remember that you are using a shared resource. Maximum run-time 30 days.
- gpu
Jobs requesting one or both of kennedy's GPU nodes. Maximum run-time 30 days. Note that you still need to request the GPUs with the --gres flag. The following line would request both GPUs on a node:
#SBATCH --gres=gpu:2
- debug
Small and short jobs, usually meant for tests or debugging. This partition is limited to one node and a maximum of two hours run-time. A minimal job header for this partition is sketched after this list.
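As an illustration, the following minimal job header would request a short test in the debug partition. This is only a sketch: the job name, core count and time limit are placeholders to adjust for your own test.
#!/bin/bash
#SBATCH -J quicktest
#SBATCH -p debug
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
# commands to run the test go here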
Submitting and monitoring jobs
Calculations need to be submitted to the SLURM queues and will be scheduled to run on the requested compute nodes when those become available. A job script is submitted using the sbatch command:
sbatch <job-script>
Example job scripts for different programs can be found in the directory /usr/local/examples. Please copy one of those and adjust it to your requirements.
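For instance, you could copy one of the examples into your home directory, edit it, and then submit it (the file names here are placeholders):
cp /usr/local/examples/<example-script> ~/myjob.slurm
sbatch ~/myjob.slurm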
You can monitor submitted batch jobs with the squeue command. On its own, it shows all jobs currently in the queues. To show only your own, use it with the -u flag:
squeue -u <user>
The output will be similar to the following:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
280521 parallel CeO2 hf63 PD 0:00 4 (Resources)
279068 singlenod g09 hf63 R 8-00:53:21 1 kennedy36
The meaning of the different fields is as follows:
JOBID: Job number. You need this to find more information about the job or to cancel it (see below).
PARTITION: The partition (queue) the job was submitted to.
NAME: Job name as given with the -J flag in the sbatch command or the #SBATCH -J directive in the job script.
USER: The owner of the job.
ST: Status: R (running), PD (pending, i.e. waiting) or CG (completing, i.e. finishing).
TIME: Run-time so far.
NODES: Number of nodes requested.
NODELIST(REASON): Nodes in use (if running); reason for not running otherwise:
(Resources): Waiting for the requested nodes to become available.
(Priority): There are other jobs with higher priority.
A batch job can be deleted with the command
scancel <jobid>
If it is already running, it will be stopped.
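For example, to cancel the pending job from the squeue output above:
scancel 280521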
Example job
Here is a typical batch job:
#!/bin/bash
#SBATCH -J mpbtest
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -p parallel
# Set up MPI and libraries
source /opt/rh/devtoolset-8/enable
export PATH=/gpfs1/apps/software/devts8/bin:$PATH
export LD_LIBRARY_PATH=/gpfs1/apps/software/devts8/lib:$LD_LIBRARY_PATH
ulimit -s unlimited
export OMP_NUM_THREADS=1
mpirun -n $SLURM_NPROCS \
/gpfs1/apps/software/devts8/bin/mpb-mpi slab.ctl
The job script consists of two sections: SLURM directives determining the requested resources and the job's appearance in the queue, and commands to be executed once the job is running. This job will show up as "mpbtest" in the squeue output. It requests two nodes, with 32 cores (the maximum available) on each node, and will run in the parallel partition. The execution section in this case sets up the path and environment for a specific compiler/MPI combination and then runs the application in parallel using MPI.
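Assuming the script above has been saved as, say, mpb-job.slurm (the name is arbitrary), it would be submitted and monitored with the commands described earlier:
sbatch mpb-job.slurm
squeue -u $USER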
Installing your own software
If you require software that needs to be installed in a location other than your home directory, or that may be of use to other users, you can request its installation in a publicly accessible location. Your fellow users will thank you for it. It is, however, possible and allowed to install software in your own directory. It is your responsibility to ensure that any software you install is legal and ethical, and that it is used for the purposes of your research. This means in particular that licensed software is only used within the conditions of its license. The method of installation will depend on the software in question, but the most commonly used approaches are CONDA and compilation from source.
CONDA
Many popular open source Python (and other) packages are available within the CONDA framework. Kennedy provides a location for installing your packages in the directory /gpfs1/apps/conda/<user>. The command for setting up your initial environment is
install-conda
Accept all defaults. It will ask permission to add its initialisation to your .bashrc file. This is OK for most purposes and simplifies future use.
One common problem (not specific to our setup) is that the execute permissions are not set correctly. You can fix this after installation with the command
chmod u+x /gpfs1/apps/conda/$USER/conda/bin/*
One of the main advantages of CONDA is that you can maintain different environments for different applications, even if they require different compiler or Python versions. Instructions for creating and managing environments can be found online. You may have to repeat setting file permissions for the environment-specific bin directories:
chmod u+x /gpfs1/apps/conda/$USER/conda/envs/*/bin/*
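As a sketch, a new environment with a specific Python version could be created and activated as follows (the environment name and Python version are placeholders), followed by the permissions fix shown above:
conda create -n myenv python=3.10
conda activate myenv
chmod u+x /gpfs1/apps/conda/$USER/conda/envs/myenv/bin/*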
Compiling your own software
Software distributed as source code (Fortran, C or C++, and possibly parallelised using MPI) needs to be compiled before execution. Several compilers and MPI versions are installed. For most purposes, the Intel compilers, combined with Intel's Math Kernel Library (for linear algebra and FFTs) and the MVAPICH flavour of MPI, produce the most efficient code. You activate this environment with the commands
# Set up environment for Intel OneAPI and MVAPICH
source /opt/intel/oneapi/setvars.sh
export PATH=/gpfs1/apps/software/OneAPI+MVAPICH/bin:$PATH
export LD_LIBRARY_PATH=/gpfs1/apps/software/OneAPI+MVAPICH/lib:$LD_LIBRARY_PATH
export MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING=1
Note that this does not include Intel MPI, so the MPI compilers are mpicc, mpicxx and mpif90 (not mpiifort etc.).
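Once this environment is active, an MPI program is compiled with the wrapper compilers named above, for example (a sketch; the source and output file names are placeholders, and any MKL link flags depend on your code and compiler version):
mpicc -O2 -o myprog myprog.c
mpif90 -O2 -o myprog myprog.f90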
For software that requires the GNU compilers, we recommend the devtoolset-8 versions. You use them with the following commands:
# Set up environment for gcc 8.3.1 (in devtoolset-8) and MVAPICH
source /opt/rh/devtoolset-8/enable
export LD_LIBRARY_PATH=/gpfs1/apps/software/devts8/lib:$LD_LIBRARY_PATH
export PATH=/gpfs1/apps/software/devts8/mvapich2/bin:$PATH
export LD_LIBRARY_PATH=/gpfs1/apps/software/devts8/mvapich2/lib:$LD_LIBRARY_PATH
The most commonly used libraries for this setup (openblas for BLAS/LAPACK, fftw3) can be found in /gpfs1/apps/software/devts8/lib.
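With this environment active, a program that uses BLAS/LAPACK and FFTW could be compiled along these lines (a sketch; the file names are placeholders, and the libraries your code actually needs may differ):
mpif90 -O2 -o myprog myprog.f90 -L/gpfs1/apps/software/devts8/lib -lopenblas -lfftw3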
Using multiple accounts
The CPU time you use will normally be charged to your default account (usually your School at the University of St Andrews). It is possible to run a batch job under a different account. This will be necessary if you have high-priority access, or if you are working on different projects connected to different Schools. To charge a job to the account herbert-hp (a high-priority account that sadly does not exist...), you need to add the line
#SBATCH -A herbert-hp
to your batch job, or submit it with
sbatch -A herbert-hp <jobscript>
The priority of your job in the queue will be calculated based on the account it is charged to. To be able to use a non-default account, the system administrator needs to add you to the account in question.
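If you are unsure which accounts you can charge jobs to, you can query the SLURM accounting database (a sketch, assuming the accounting information is readable by ordinary users on kennedy):
sacctmgr show associations user=$USER format=Account,User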