GROMACS on Perun

GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions it can also be used for dynamics of non-biological systems, such as polymers and fluid dynamics.
User Guide
Available Versions
The following versions of GROMACS are currently available:
- GROMACS/2025.3-foss-2025b
  - Runtime dependencies: none; all required libraries and dependencies are loaded automatically with the GROMACS/2025.3-foss-2025b module.
You can load the GROMACS module with the following command:
module load GROMACS/2025.3-foss-2025b
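After loading the module you can quickly confirm that the environment is set up, for example by printing the version of the MPI-enabled binary (a minimal sanity check; `gmx_mpi` is the binary this module's run script uses below):

```shell
# Load the GROMACS environment and verify the MPI-enabled binary is on PATH.
module load GROMACS/2025.3-foss-2025b
gmx_mpi --version    # prints GROMACS version and build information
```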
Example Run Script
You can copy this script to gromacs_run.sh, modify it as needed, and submit the job to a compute node with the command sbatch gromacs_run.sh.
#!/bin/bash
#SBATCH --job-name= # Name of the job
#SBATCH --account= # Project account number
#SBATCH --partition= # Partition name (cpu_short, cpu_long, cpu_hm_short, cpu_hm_long)
#SBATCH --nodes= # Number of nodes
#SBATCH --ntasks= # Total number of MPI ranks
#SBATCH --cpus-per-task= # Number of threads per MPI rank
#SBATCH --time=hh:mm:ss # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err # Standard error
#SBATCH --mail-type=END,FAIL # Notifications for job done or failed
#SBATCH --mail-user= # Email address for notifications
# === Metadata functions ===
log_job_start() {
    echo "================== SLURM JOB METADATA =================="
    printf " Job ID : %s\n" "$SLURM_JOB_ID"
    printf " Job Name : %s\n" "$SLURM_JOB_NAME"
    printf " Partition : %s\n" "$SLURM_JOB_PARTITION"
    printf " Nodes : %s\n" "$SLURM_JOB_NUM_NODES"
    printf " Tasks (MPI) : %s\n" "$SLURM_NTASKS"
    printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
    printf " Account : %s\n" "$SLURM_JOB_ACCOUNT"
    printf " Submit Dir : %s\n" "$SLURM_SUBMIT_DIR"
    printf " Work Dir : %s\n" "$PWD"
    printf " Start Time : %s\n" "$(date)"
    echo "========================================================"
}
log_job_end() {
    printf " End Time : %s\n" "$(date)"
    echo "========================================================"
}
# === Load required module(s) ===
module purge
module load GROMACS/2025.3-foss-2025b
# === Set working directories ===
# Use shared filesystems for cross-node calculations
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"
# === Input/output file declarations ===
INPUT_FILES="" # Adjust as needed
OUTPUT_FILES="" # Adjust as needed
# === Copy input files to scratch ===
cp $INPUT_FILES "$WORK_DIR"
# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }
log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
# === Run GROMACS ===
# One MPI rank per Slurm task, with ${SLURM_CPUS_PER_TASK} OpenMP threads per
# rank; replace "..." with your mdrun options (input .tpr, output names, etc.)
mpiexec -np ${SLURM_NTASKS} gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} ...
# === Copy output files back ===
cp $OUTPUT_FILES "$INIT_DIR"
# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"
log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
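When filling in --ntasks and --cpus-per-task in the header above, keep in mind that the total number of cores the job requests is their product; try to match it to whole nodes on your partition. The values below are purely illustrative, not a recommendation:

```shell
#!/bin/sh
# Illustrative Slurm sizing arithmetic (assumed example values):
# total cores = MPI ranks (--ntasks) x OpenMP threads per rank (--cpus-per-task)
NTASKS=4            # e.g. --ntasks=4
CPUS_PER_TASK=16    # e.g. --cpus-per-task=16
TOTAL_CORES=$((NTASKS * CPUS_PER_TASK))
echo "Total cores requested: $TOTAL_CORES"    # -> 64
```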