Running Shared-Memory Parallel Jobs

This section explains how to submit an OpenMP (multi-threaded) job. Software in this category can use more than one CPU core to speed up computation, but all of those cores must be on the same physical server.

For an explanation of the basics of submitting a job, please visit the Running Software page.

A typical batch file for this type of job is shown below:

#!/bin/bash

# Set the name of the job
# (this gets displayed when you get a list of jobs on the cluster)
#SBATCH --job-name="My OpenMP Job"

# Specify the maximum wall clock time your job can use
# (Your job will be killed if it exceeds this)
#SBATCH --time=3:00:00

# Specify the amount of memory your job needs per CPU core (in MB)
# (Your job will be killed if it exceeds this for a significant length of time)
#SBATCH --mem-per-cpu=1024

# Specify the number of cpu cores your job requires
#SBATCH --ntasks=4

# Specify the number of cpu cores per node (server)
#SBATCH --ntasks-per-node=4

# Set up the environment
module load gcc/10.2.0

# Run the application
export OMP_NUM_THREADS=$SLURM_NTASKS   # match the thread count to the allocated cores
echo "My job has started"
./my_openmp_program
echo "My job has finished"

This is identical to the basic job submission file except for the --ntasks and --ntasks-per-node lines.

If we assume our job will run on 4 CPU cores, we specify 4 for the number of tasks and also require 4 tasks per node. SLURM will then ensure that all four CPUs are allocated on the same server.
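On many SLURM installations the same single-server allocation can also be requested with --cpus-per-task, which asks for several cores belonging to one task and maps more directly onto the OpenMP threading model. Whether your site prefers this form is an assumption worth checking with your cluster documentation; a sketch of the alternative directives:

```shell
# Alternative: one task with 4 CPU cores attached to it
# (SLURM then exports SLURM_CPUS_PER_TASK for use in the script)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
```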

It is also important to note that mem-per-cpu is the memory required for each CPU core, not the total memory required by the application: the total is mem-per-cpu multiplied by the number of tasks.
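For the example script above, the arithmetic works out as follows (the variable names here are purely illustrative, not SLURM settings):

```shell
# Total memory requested = mem-per-cpu × number of tasks
MEM_PER_CPU_MB=1024   # from --mem-per-cpu=1024
NTASKS=4              # from --ntasks=4

TOTAL_MB=$((MEM_PER_CPU_MB * NTASKS))
echo "Total job memory: ${TOTAL_MB} MB"   # 4096 MB, i.e. 4 GB
```

So a job asking for 1024 MB per CPU across 4 tasks will be granted 4 GB in total.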