SLURM Job Script Examples
Below are a number of sample scripts that can be used as templates for building your own SLURM submission scripts for use on HiPerGator 2.0. These scripts are also located at: /data/training/SLURM/, and can be copied from there. If you choose to copy one of these sample scripts, please make sure you understand what each #SBATCH directive means before using the script to submit your jobs. Otherwise, you may not get the result you want and may waste valuable computing resources.
Note: There is a maximum limit of 3000 jobs per user.
See Annotated SLURM Script for a step-by-step explanation of all options.
This is a walk-through of a basic SLURM scheduler job script for a common case: a multi-threaded analysis. If the program you run is single-threaded (it can use only one CPU core), use only the '--ntasks=1' line for the CPU request instead of all three lines listed below. Annotations are marked with bullet points.
You can click on the link below to download the raw job script file without the annotation. Values in brackets are placeholders; you need to replace them with your own values. E.g. change '<JOBNAME>' to a name for your job.
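For the single-threaded case mentioned above, the CPU request reduces to a single '--ntasks=1' line. A minimal sketch is below; the job name is a placeholder, and the memory and time values are illustrative, not recommendations:

```shell
#!/bin/bash
# Minimal single-threaded job: only --ntasks=1 is used for the CPU request,
# instead of the --nodes/--ntasks/--cpus-per-task trio shown in the
# multi-threaded script below.
#SBATCH --job-name=<JOBNAME>
#SBATCH --ntasks=1
#SBATCH --mem=2gb
#SBATCH --time=01:00:00

date;hostname;pwd
```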
#!/bin/bash
- Common arguments
- Name the job to make it easier to see in the job queue
#SBATCH --job-name=<JOBNAME>
- Your email address to use for all batch system communications
- Note: directives with a doubled '#' ('##SBATCH') are commented out and ignored by SLURM; remove one '#' to activate them
##SBATCH --mail-user=<EMAIL>
##SBATCH --mail-user=<EMAIL-ONE>,<EMAIL-TWO>
- What emails to send
- NONE - no emails
- ALL - all emails
- END,FAIL - only email if the job fails and email the summary at the end of the job
#SBATCH --mail-type=FAIL,END
- Standard Output and Error log files
- Use file patterns
- %j - job id
- %A-%a - Array job id (A) and task id (a)
- You can also use --error for a separate stderr log
#SBATCH --output=<my_job-%j.out>
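As a sketch of the file patterns above, an array job could write separate stdout and stderr logs per task; the 'my_job' file name prefix is illustrative:

```shell
# Separate stdout/stderr logs for an array job: %A is the array job id and
# %a is the task id, so each array task writes its own pair of files.
#SBATCH --output=my_job-%A-%a.out
#SBATCH --error=my_job-%A-%a.err
```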
- Number of nodes to use. For all non-MPI jobs this number will be equal to '1'
#SBATCH --nodes=1
- Number of tasks. For all non-MPI jobs this number will be equal to '1'
#SBATCH --ntasks=1
- Number of CPU cores to use. This number must match the number of threads the program you run is told to use.
#SBATCH --cpus-per-task=4
- Total memory limit for the job. Default is 2 gigabytes, but units can be specified with mb or gb for Megabytes or Gigabytes.
#SBATCH --mem=4gb
- Job run time in [DAYS-]HOURS:MINUTES:SECONDS
- [DAYS-] is optional; note that the day count is separated from the hours by a dash
#SBATCH --time=72:00:00
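When a long walltime is easier to read in day form, the day count goes before a dash. For example, the 72-hour request above could equivalently be written as:

```shell
# 3 days, 0 hours, 0 minutes, 0 seconds - equivalent to --time=72:00:00
#SBATCH --time=3-00:00:00
```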
- Optional:
- A group to use if you belong to multiple groups. Otherwise, do not use.
#SBATCH --account=<GROUP>
- A job array, which will create many jobs (called array tasks) that differ only in the '$SLURM_ARRAY_TASK_ID' variable
#SBATCH --array=<BEGIN-END>
- Example of five tasks
- #SBATCH --array=1-5
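Inside the job script, each array task can read $SLURM_ARRAY_TASK_ID to select its own input. A sketch for the five-task example above, assuming input files named input_1.fa through input_5.fa (those names are hypothetical):

```shell
# Each of the five array tasks picks the input file matching its task id.
# The ${VAR:-1} fallback lets the snippet run outside SLURM for testing.
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}
INPUT="input_${TASK_ID}.fa"
echo "Array task ${TASK_ID} will process ${INPUT}"
```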
- Recommended convenient shell code to put into your job script
- Add host, time, and directory name for later troubleshooting
date;hostname;pwd
Below is the shell script part - the commands you will run to analyze your data. The following is an example.
- Load the software you need
module load ncbi_blast
- Run the program
blastn -db nt -query input.fa -outfmt 6 -out results.tsv -num_threads 4
date