Available Node Features
HiPerGator users can finely control the selection of compute hardware for a SLURM job, such as a specific processor family or processor model, by using the
--constraint
directive to specify node features.
Example:
Use one of the following directives to select between the Turin and Milan CPU microarchitectures:
#SBATCH --constraint=turin
#SBATCH --constraint=milan
Basic boolean logic can be used to request combinations of features. For example, to request nodes that have AMD processors AND an InfiniBand interconnect, use:
#SBATCH --constraint='amd&infiniband'
To request processors from either the AMD Rome OR Turin CPU families, use:
#SBATCH --constraint='rome|turin'
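Constraints apply to the whole job, so in practice the directive sits alongside your usual resource requests. Below is a minimal sketch of a complete submission script; the job name, resource amounts, and walltime are hypothetical placeholders to replace with your own values:
#!/bin/bash
#SBATCH --job-name=constraint_demo     # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4              # placeholder resource request
#SBATCH --mem=8gb
#SBATCH --time=01:00:00
#SBATCH --constraint='amd&infiniband'  # AMD CPU AND InfiniBand, as above

# Report where the job landed so the constraint can be verified.
hostname
lscpu | grep 'Model name'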
All Node Features
You can run the nodeInfo
command from the ufrc environment module to list
all available SLURM features. In addition, the table below shows
automatically updated nodeInfo output as well as the corresponding CPU
models.
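For example, from a HiPerGator login shell:
module load ufrc
nodeInfo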
Partition | NodeCores | Sockets | SocketCores | HT | Memory | Features | CPU |
---|---|---|---|---|---|---|---|
bigmem | 128 | 8 | 16 | 1 | 4023Gb | bigmem;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
bigmem | 128 | 8 | 16 | 1 | 4023Gb | bigmem;amd;rome;infiniband;el9 | AMD EPYC 7702 64-Core Processor |
hpg-b200 | 112 | 2 | 56 | 1 | 2010Gb | ai2;su1;intel;emerald;infiniband;gpu;b200;el9 | Intel(R) Xeon(R) Platinum 8570 |
hpg-default | 128 | 8 | 16 | 1 | 1003Gb | hpg3;amd;rome;infiniband;el8 | AMD EPYC 7702 64-Core Processor |
hpg-default | 128 | 8 | 16 | 1 | 1003Gb | hpg3;amd;rome;infiniband;el9 | AMD EPYC 7702 64-Core Processor |
hpg-dev | 64 | 8 | 8 | 1 | 500Gb | hpg3;amd;milan;infiniband;el9 | AMD EPYC 75F3 32-Core Processor |
hpg-milan | 64 | 8 | 8 | 1 | 500Gb | hpg3;amd;milan;infiniband;el9 | AMD EPYC 75F3 32-Core Processor |
hpg-turin | 96 | 1 | 96 | 1 | 752Gb | hpg4;amd;turin;infiniband;gpu;l4;el9 | AMD EPYC 9655P 96-Core Processor |
hwgui | 96 | 1 | 96 | 1 | 752Gb | hpg4;amd;turin;infiniband;gpu;l4;el9 | Intel(R) Xeon(R) Gold 6242 |
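Any token in the Features column can be used as a --constraint value. For instance, assuming el9 denotes nodes running an Enterprise Linux 9 image, you could target them with:
#SBATCH --constraint=el9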
Note: the bigmem partition is maintained for calculations requiring large amounts of memory. To submit jobs to this partition, add the following directive to your job submission script:
#SBATCH --partition=bigmem
Since our regular nodes have about 1 TB of available memory, we do not recommend using bigmem nodes for jobs with memory requests lower than that.
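For instance, a job that genuinely needs more memory than a regular node provides would pair the partition directive with a correspondingly large memory request (the 2000gb figure below is a hypothetical placeholder):
#SBATCH --partition=bigmem
#SBATCH --mem=2000gb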
Note: See GPU Access for more details on GPUs, such as available GPU memory.