# Partitions
This page lists the available Slurm partitions on Devana and their configuration limits. For a general explanation of partitions and how to use them, see the Shared Guides → Partitions section.
## Available Partitions
| Partition | Nodes           | Time limit (d-hh:mm) | Job size limit (nodes/cores) | GPUs | Priority factor |
|-----------|-----------------|----------------------|------------------------------|------|-----------------|
| testing   | login01,login02 | 0-00:30              | 1/16                         | 1    | 0.0             |
| gpu       | n141-n148       | 2-00:00              | 1/64                         | 4    | 0.0             |
| short     | n001-n140       | 1-00:00              | 8/512                        | 0    | 1.0             |
| medium    | n001-n140       | 2-00:00              | 4/256                        | 0    | 0.5             |
| long      | n001-n140       | 4-00:00              | 1/64                         | 0    | 0.0             |
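As a usage sketch, a job script targeting the `medium` partition within these limits might look as follows; the executable, node count, and walltime are placeholders rather than site defaults (the 64 cores per node follow from the `scontrol` output further below: 8960 CPUs over 140 nodes).

```bash
#!/bin/bash
#SBATCH --partition=medium      # partition from the table above
#SBATCH --nodes=2               # within medium's 4-node / 256-core limit
#SBATCH --ntasks-per-node=64    # one task per core on a 64-core node
#SBATCH --time=1-12:00:00       # safely under medium's 2-day limit

srun ./my_app                   # placeholder executable
```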
## Viewing Partition Status
The current state of partitions and nodes can be displayed with the `sinfo` command:

```bash
sinfo
```
sinfo output:

```
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
ncpu up 1-00:00:00 22 drain* n[014-021,026-031,044-051]
ncpu up 1-00:00:00 10 mix n[001-002,025,052,058,067,073,079,081,105]
ncpu up 1-00:00:00 86 alloc n[003-008,012-013,022-024,032-033,036-043,053-057,059-066,068-072,074,077-078,080,082-094,097-099,102-104,106-116,119-127,131,135-136,140]
ncpu up 1-00:00:00 22 idle n[009-011,034-035,075-076,095-096,100-101,117-118,128-130,132-134,137-139]
ngpu up 2-00:00:00 4 mix n[141-143,148]
ngpu up 2-00:00:00 1 alloc n144
ngpu up 2-00:00:00 3 idle n[145-147]
testing up 30:00 2 idle login[01-02]
gpu up 2-00:00:00 4 mix n[141-143,148]
gpu up 2-00:00:00 1 alloc n144
gpu up 2-00:00:00 3 idle n[145-147]
short* up 1-00:00:00 22 drain* n[014-021,026-031,044-051]
short* up 1-00:00:00 10 mix n[001-002,025,052,058,067,073,079,081,105]
short* up 1-00:00:00 86 alloc n[003-008,012-013,022-024,032-033,036-043,053-057,059-066,068-072,074,077-078,080,082-094,097-099,102-104,106-116,119-127,131,135-136,140]
short* up 1-00:00:00 22 idle n[009-011,034-035,075-076,095-096,100-101,117-118,128-130,132-134,137-139]
medium up 2-00:00:00 22 drain* n[014-021,026-031,044-051]
medium up 2-00:00:00 10 mix n[001-002,025,052,058,067,073,079,081,105]
medium up 2-00:00:00 86 alloc n[003-008,012-013,022-024,032-033,036-043,053-057,059-066,068-072,074,077-078,080,082-094,097-099,102-104,106-116,119-127,131,135-136,140]
medium up 2-00:00:00 22 idle n[009-011,034-035,075-076,095-096,100-101,117-118,128-130,132-134,137-139]
long up 4-00:00:00 22 drain* n[014-021,026-031,044-051]
long up 4-00:00:00 10 mix n[001-002,025,052,058,067,073,079,081,105]
long up 4-00:00:00 86 alloc n[003-008,012-013,022-024,032-033,036-043,053-057,059-066,068-072,074,077-078,080,082-094,097-099,102-104,106-116,119-127,131,135-136,140]
long up 4-00:00:00 22 idle n[009-011,034-035,075-076,095-096,100-101,117-118,128-130,132-134,137-139]
```
The `*` after a partition name marks the default partition. A partition appears on several lines when its nodes are currently in different states, such as `idle`, `mix`, or `alloc`.
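To narrow the listing to a single partition, `sinfo` accepts a partition filter, and `-N` switches to a one-line-per-node view:

```bash
sinfo -p gpu        # show only the gpu partition
sinfo -N -p gpu     # list each gpu node on its own line
```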
## Viewing Detailed Partition Configuration
To display detailed configuration parameters for all partitions, use:

```bash
scontrol show partitions
```

or, for a specific partition:

```bash
scontrol show partition long
```
scontrol show partition long output:

```
PartitionName=long
AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
AllocNodes=ALL Default=NO QoS=N/A
DefaultTime=4-00:00:00 DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
MaxNodes=1 MaxTime=4-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED
Nodes=n[001-140]
PriorityJobFactor=0 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
OverTimeLimit=NONE PreemptMode=OFF
State=UP TotalCPUs=8960 TotalNodes=140 SelectTypeParameters=NONE
JobDefaults=(null)
DefMemPerCPU=4000 MaxMemPerNode=UNLIMITED
TRES=cpu=8960,mem=35000G,node=140,billing=8960
TRESBillingWeights=CPU=1.0,Mem=0.256G
```
This command provides information such as node lists, runtime limits, memory settings, and scheduling parameters.
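The `TRESBillingWeights` line controls how usage is charged. As a back-of-the-envelope sketch, assuming Slurm's `MAX_TRES`-style billing (billing = the largest weighted TRES; the totals above are consistent with this, since 35000 GB × 0.256 = 8960, exactly the listed billing figure), a hypothetical job would be billed roughly as follows:

```bash
# Sketch, assuming MAX_TRES-style billing: billing = max(cpus*1.0, mem_GB*0.256).
# With the default 4000 MB per CPU, memory bills at ~1.02 per core, so billing
# roughly tracks the core count unless a job requests extra memory.
cpus=64; mem_gb=256    # hypothetical job: one full node at default memory
awk -v c="$cpus" -v m="$mem_gb" \
    'BEGIN { cb = c * 1.0; mb = m * 0.256; print "billing ~", (cb > mb ? cb : mb) }'
```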
Additional partitions

The `sinfo` and `scontrol show partitions` commands also list the partitions `ncpu` and `ngpu`. These are internal scheduler aliases that map to the `short` and `gpu` partitions, respectively.
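For scripting, a compact per-partition summary can also be assembled with `sinfo`'s output format option; the field selection below is just one possible choice:

```bash
sinfo -o "%P %l %D %c"   # partition, time limit, node count, CPUs per node
```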
## Notes on Scheduling Behavior
- The `short` partition carries the highest priority factor, so jobs there are typically scheduled faster, but its time limit is the strictest (1 day).
- The `long` partition allows extended runtimes (up to 4 days) but has the lowest priority factor and a one-node job size limit, so jobs may queue longer.
- High-memory partitions are more constrained but optimized for memory-heavy workloads.
- GPU partitions carry a significantly higher billing weight, so inefficient usage is penalized.
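To see how these priority factors play out for your own pending jobs, Slurm's `sprio` utility breaks a job's priority down into its components:

```bash
sprio -l           # priority components for all pending jobs, long format
sprio -j 123456    # a single job (123456 is a placeholder job ID)
```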
Walltime estimation and job efficiency

The maximum allowed runtime for regular jobs on Devana is 4 days (`4-00:00:00`). Estimating job runtime accurately can significantly improve scheduling efficiency. As your workloads mature, check previous job performance using the `seff` command and adjust the `#SBATCH -t` parameter in your job scripts accordingly. Shorter jobs are typically scheduled faster.
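A possible workflow, with a placeholder job ID and an illustrative walltime:

```bash
seff 123456    # report CPU/memory efficiency and elapsed time of a finished job

# If the job used well under its request, tighten the walltime in the script, e.g.:
#   #SBATCH -t 0-06:00:00   # request 6 hours instead of the 4-day maximum
```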