SLURM Commands

Tip

We also recommend checking out our Sample SLURM Scripts page, which provides valuable information on writing your own scripts.

While extensive documentation is available on the SLURM web page, we provide a few popular commands here to give users an introduction to using SLURM on HiPerGator.

Check Job/Queue Status

See our SLURM Status Commands page for commands that give you helpful information about your ongoing jobs.

Submit a Job

Submit a job script to the SLURM scheduler with:

sbatch your_script
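
Here your_script is a plain-text file containing #SBATCH directives followed by the commands to run; see the Sample SLURM Scripts page for complete examples. A minimal sketch (the resource values are only illustrative):

#!/bin/bash
#SBATCH --job-name=example         # job name shown in the queue
#SBATCH --output=example_%j.out    # %j expands to the job ID
#SBATCH --ntasks=1
#SBATCH --mem=1gb
#SBATCH --time=00:10:00

hostname; date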

Passing variables into a job at submission

For a list of common environment variables, visit Sample SLURM Scripts.

It is possible to pass variables into a SLURM job when you submit it by using the --export flag. For example, to pass the values of the variables A and b into the job script named jobscript.sbatch, you can use either of the following:

  • sbatch --export=A=5,b='test' jobscript.sbatch

  • sbatch --export=ALL,A=5,b='test' jobscript.sbatch

The first example will replace the user's environment with a new environment containing only values for A and b and the SLURM_* environment variables. The second will add the values for A and b to the existing environment.
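
For illustration, a jobscript.sbatch that uses the passed-in values might look like the following sketch (the variables simply arrive as ordinary environment variables inside the job):

#!/bin/bash
#SBATCH --job-name=export_demo
#SBATCH --output=export_demo_%j.out
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# A and b were set on the sbatch command line via --export
echo "A=${A} b=${b}"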

Using variables to set SLURM job name and output files

SLURM does not support the use of variables in the #SBATCH lines within a job script. However, values passed on the command line take precedence over values defined in the job script, so the job name and output/error files can be set on the sbatch command line:

A=5
b=test
sbatch --job-name=$A.$b.run --output=$A.$b.out --export=A=$A,b=$b jobscript.sbatch
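
Taken one step further, the same pattern makes it easy to submit a family of related jobs from a loop; a sketch, assuming jobscript.sbatch reads $A and $b as above:

b=test
for A in 1 2 3; do
    sbatch --job-name=$A.$b.run --output=$A.$b.out --export=A=$A,b=$b jobscript.sbatch
done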

Interactive Session

An interactive SLURM session, i.e. a shell prompt within a running job, can be started with:

  • srun <resources> --pty bash -i

For example, a single-node job with 2 CPU cores and 2gb of RAM, running for 90 minutes, can be started with:

  • srun --ntasks=1 --cpus-per-task=2 --mem=2gb -t 90 --pty bash -i
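
Once the prompt appears you are inside the job, which you can confirm from the environment variables SLURM sets, for example:

echo $SLURM_JOB_ID $SLURM_CPUS_PER_TASK

Typing exit ends the interactive session and releases the allocated resources.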

Canceling Jobs

scancel jobID

or, to cancel every job that was submitted with a given job name:

scancel --name=job_name
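
A couple of other commonly useful forms, both using standard scancel options:

scancel -u $USER            # cancel all of your own jobs
scancel -u $USER -t PENDING # cancel only your pending jobs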

Using sreport to view group summaries

The basic command is sreport. The full documentation for sreport is available on the SLURM web page, but we hope these examples are useful as they are and as templates for further customization.

To view a summary of group usage since a given date (May 1st in this example):

sreport cluster AccountUtilizationByUser Start=0501 Accounts=group_name

Or for a particular month (the month of May):

sreport cluster AccountUtilizationByUser Start=0501 End=0531 Accounts=group_name

Or, with usage reported in hours over an explicit date range:

sreport -t Hours cluster AccountUtilizationByUser Start=2022-01-01T00:00:00 End=2022-01-31T23:59:59 Accounts=group_name
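
To narrow the report to a single member of the group, a Users= filter can be added (user_name is a placeholder):

sreport -t Hours cluster AccountUtilizationByUser Start=2022-01-01 End=2022-01-31 Accounts=group_name Users=user_name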

Viewing Resources Available to a Group

To check the resources available to a group for running jobs, you can use the sacctmgr command (substitute group_name with your group's name):

sacctmgr show qos group_name format="Name%-16,GrpSubmit,MaxWall,GrpTres%-45"
or for the burst allocation:
sacctmgr show qos group_name-b format="Name%-16,GrpSubmit,MaxWall,GrpTres%-45"
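
To see which account and QOS combinations your own user can submit under, the association table can be queried as well; a sketch using standard sacctmgr options:

sacctmgr show assoc user=$USER format="Account%-16,User%-16,QOS%-40"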

Using sinfo to view partition information and node features

sinfo allows users to learn about the resources managed by SLURM: it reports the configuration of partitions and the details of the nodes within each partition. Using sinfo, users can view the features attributed to the nodes and then use those features as constraints when submitting jobs, for example to request only nodes with Intel processors.

sinfo -s

Provides a summary of the partitions and the nodes within each, including the number of nodes that are Allocated, Idle, Other, and Total (the A/I/O/T column).

sinfo -o "%P,%D,%c,%X,%m,%f"

or

module load ufrc
nodeInfo

Shows the partitions, number of nodes, number of cores per node, number of sockets per node, amount of RAM per node, and the features associated with the nodes. These features can be used to request constraints in sbatch. For example:

#SBATCH --partition=hpg2-compute
#SBATCH --constraint='hpg2'

would constrain a job to run on one of the 32-core AMD nodes from HiPerGator 2.

While constraints can be used to target particular resources, users should realize that using constraints also limits where a job can run and may delay scheduling a job.
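
If a job can run on more than one node type, SLURM's constraint syntax allows features to be OR-ed together, which keeps more nodes eligible; a sketch (the feature names are examples, check nodeInfo for the features actually defined):

#SBATCH --constraint='hpg2|hpg1'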

Get stored historic job scripts and environment variables

In SLURM v22 and later, job scripts and environment variables are automatically stored and indexed in the accounting database and can be recalled conveniently:

sacct --batch-script -j <JOB_ID>

The above command prints the job script used by the historic job to standard output.

Recall the environment variables of a historic job:

sacct --env-vars -j <JOB_ID>

The above command prints the environment variables used by the historic job to standard output.
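
Either output can be redirected to a file, for example to recreate a past submission:

sacct --batch-script -j <JOB_ID> > recovered.sbatch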

Bypassing X11 Requirement

If a program you need to run expects an X11 display, it may be necessary to set up a virtual X11 environment with Xvfb. Place the following code in the job script between the module load line and the command to be run, and adapt as needed.

# Pick a random display number (note the leading colon, which X clients require)
# and start a virtual framebuffer on it
export DISPLAY=:${RANDOM}
Xvfb ${DISPLAY} -screen 0 1280x960x24 &
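
In context, a job script section using this pattern might look like the sketch below (the module and program names are placeholders):

module load some_gui_app

# Start a virtual X server so the program finds a usable DISPLAY
export DISPLAY=:${RANDOM}
Xvfb ${DISPLAY} -screen 0 1280x960x24 &
XVFB_PID=$!

some_gui_command --input data.file

# Shut the virtual X server down when the work is done
kill ${XVFB_PID}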