Partitions

This page lists the available Slurm partitions on Perun and their configuration limits. For a general explanation of partitions and usage, see the Shared Guides → Partitions section.

Available Partitions

Partition      Nodes             Time limit   Job size limit   GPUs   Priority factor
                                 (d-hh:mm)    (nodes)
testing        login01–login04   0-00:30      1                0      –
cpu_short      cn001–cn045       1-00:00      2                0      1
cpu_long       cn001–cn045       4-00:00      1                0      0
cpu_hm_short   cn046–cn060       1-00:00      1                0      1
cpu_hm_long    cn046–cn060       4-00:00      1                0      0
gpu_short      gn001–gn076      1-00:00      4                yes    2
gpu_medium     gn001–gn076      2-00:00      2                yes    1
gpu_long       gn001–gn076      4-00:00      1                yes    0
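
For reference, a minimal batch script targeting one of these partitions could look like the following sketch; the job name and program are placeholders, while the partition, node, and time values respect the limits in the table above.

#!/bin/bash
#SBATCH --partition=cpu_short    # partition from the table above
#SBATCH --nodes=2                # within the cpu_short job size limit of 2 nodes
#SBATCH --time=0-12:00           # within the cpu_short time limit of 1-00:00
#SBATCH --job-name=example       # placeholder job name

srun ./my_program                # placeholder executable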

High-Memory Partitions

Perun provides a dedicated high-memory node group:

  • cn046–cn060
  • exposed via cpu_hm_short and cpu_hm_long

These partitions differ from standard CPU partitions in two ways:

  • Higher memory per CPU:
      • standard: 6800 MB / CPU
      • high-memory: 13600 MB / CPU
  • Lower memory billing weight, which encourages use of these nodes for memory-intensive workloads.

Use high-memory partitions when:

  • jobs exceed standard node memory capacity
  • memory per core is the limiting factor rather than CPU count

Note

High-memory partitions are limited to 1 node per job.
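
A minimal sketch of a high-memory job, assuming the per-CPU memory value quoted above (the program name is a placeholder):

#!/bin/bash
#SBATCH --partition=cpu_hm_short
#SBATCH --nodes=1               # high-memory partitions allow at most 1 node per job
#SBATCH --mem-per-cpu=13600     # MB, matching the high-memory nodes' 13600 MB per CPU
#SBATCH --time=0-08:00          # within the 1-00:00 limit of cpu_hm_short

srun ./memory_heavy_program     # placeholder executable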

Viewing Partition Status

The current state of partitions and nodes can be displayed using the sinfo command:

sinfo

sinfo output

PARTITION    AVAIL  TIMELIMIT  NODES  STATE NODELIST
cpu_short*      up 1-00:00:00     43   idle cn[001-045]
cpu_long        up 4-00:00:00     43   idle cn[001-045]
cpu_hm_short    up 1-00:00:00     15   idle cn[046-060]
cpu_hm_long     up 4-00:00:00     15   idle cn[046-060]
gpu_short       up 1-00:00:00     71   idle gn[001-076]
gpu_medium      up 2-00:00:00     71   idle gn[001-076]
gpu_long        up 4-00:00:00     71   idle gn[001-076]
testing         up      30:00      2  drain login[01-04]

The * in the partition name indicates the default partition. Nodes may appear multiple times if they are currently in different states such as idle, mix, or alloc.
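
To narrow the output to a single partition, or to get a per-node view, sinfo's standard filters can be used:

sinfo -p cpu_hm_short           # show only the cpu_hm_short partition
sinfo -N -l -p cpu_hm_short     # one line per node, long format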

Viewing Detailed Partition Configuration

To inspect full partition parameters:

scontrol show partitions

or for a specific partition:

scontrol show partition cpu_short

scontrol show partition cpu_short output

  PartitionName=cpu_short
  AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
  AllocNodes=ALL Default=YES QoS=N/A
  DefaultTime=1-00:00:00 DisableRootJobs=NO ExclusiveUser=NO ExclusiveTopo=NO GraceTime=0 Hidden=NO
  MaxNodes=2 MaxTime=1-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
  Nodes=cn[001-045]
  PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
  OverTimeLimit=NONE PreemptMode=OFF
  State=UP TotalCPUs=14400 TotalNodes=45 SelectTypeParameters=NONE
  JobDefaults=(null)
  DefMemPerCPU=6800 MaxMemPerNode=UNLIMITED
  TRES=cpu=14400,mem=49438485M,node=45,billing=14400
  TRESBillingWeights=CPU=1.0,Mem=0.15G
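
The TRESBillingWeights line determines how jobs in this partition are billed. Assuming Slurm's default behavior of summing the weighted TRES (i.e. the MAX_TRES priority flag is not set), a job in cpu_short is billed roughly as:

billing = CPUs × 1.0 + memory in GB × 0.15

For example, a job using 32 CPUs with the default 6800 MB each (≈ 212.5 GB in total) accrues a billing value of about 32 + 212.5 × 0.15 ≈ 63.9.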

Notes on Scheduling Behavior

  • Short partitions (higher priority) are scheduled faster but impose stricter limits.
  • Long partitions (lower priority) allow extended runtime but may queue longer.
  • High-memory partitions are constrained but optimized for memory-heavy workloads.
  • GPU partitions carry significantly higher billing weight, so inefficient usage is penalized.
  • The testing partition on the login nodes is restricted to short validation jobs only.
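
To see how these factors translate into a pending job's actual priority, the sprio command breaks the priority down per factor (the job ID below is a placeholder):

sprio -j 12345    # per-factor priority breakdown for job 12345
sprio -l          # long format for all pending jobs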

Walltime Estimation and Job Efficiency

The maximum allowed runtime for regular jobs on Perun is 4 days (4-00:00:00), available through the long partitions (cpu_long, cpu_hm_long, gpu_long); the short and medium partitions trade runtime for higher scheduling priority. Estimating job runtime accurately can significantly improve scheduling efficiency. As your workloads mature, check the performance of previous jobs using the seff command and adjust the #SBATCH -t parameter in your job scripts accordingly; shorter jobs are typically scheduled sooner.
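
For example, after a job completes you can review its resource usage and runtime (replace the job ID with one of your own):

seff 12345

If the job finished well under its requested walltime, tighten the limit in the job script while keeping a safety margin:

#SBATCH -t 0-06:00    # reduced request based on the runtime seff reported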

Created by: Andrej Sec