PLUMED on Perun¶

PLUMED is an open-source library for enhanced sampling and analysis of molecular simulations. It is typically used together with molecular dynamics engines such as GROMACS to perform techniques like metadynamics, umbrella sampling, and free energy calculations.
User Guide¶
Available Versions¶
The following versions of PLUMED are currently available:
- PLUMED/2.9.3-foss-2024a
  - Runtime dependencies: none; required libraries and dependencies are loaded automatically with the PLUMED/2.9.3-foss-2024a module.
- PLUMED/2.9.4-foss-2025b
  - Runtime dependencies: none; required libraries and dependencies are loaded automatically with the PLUMED/2.9.4-foss-2025b module.
You can load a PLUMED module with the following command:
module load PLUMED/2.9.4-foss-2025b
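After loading the module, you can confirm that PLUMED is available in your environment; a quick check, assuming the 2.9.4 module (`plumed info --version` is part of PLUMED's command-line interface):

```shell
# Load the module and verify the PLUMED executable is on PATH
module load PLUMED/2.9.4-foss-2025b
plumed info --version   # prints the loaded PLUMED version
```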
GROMACS
PLUMED is a plugin library and is typically used together with GROMACS. Make sure to load a compatible GROMACS module when running simulations.
Usage with GROMACS¶
PLUMED is used by passing a plumed.dat input file to GROMACS during the simulation.
srun gmx_mpi mdrun -s topol.tpr -plumed plumed.dat
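The plumed.dat file defines the collective variables and the bias acting on them. A minimal sketch for well-tempered metadynamics on a single distance follows; the atom indices, Gaussian width, height, and deposition pace are placeholder example values, not recommendations for your system:

```
# Distance between atoms 1 and 10 as a collective variable (indices are placeholders)
d1: DISTANCE ATOMS=1,10
# Well-tempered metadynamics on d1; all parameters are example values
METAD ARG=d1 SIGMA=0.05 HEIGHT=1.2 PACE=500 BIASFACTOR=10 TEMP=300 FILE=HILLS
# Write the CV value every 100 steps to the COLVAR file
PRINT ARG=d1 STRIDE=100 FILE=COLVAR
```

The HILLS and COLVAR files are written to the working directory; make sure they are included in the files you copy back after the run.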
Example Run Script¶
#!/bin/bash
#SBATCH --job-name= # Name of the job
#SBATCH --account= # Project account number
#SBATCH --partition= # Partition name (cpu_short, cpu_long, cpu_hm_short, cpu_hm_long)
#SBATCH --nodes= # Number of nodes
#SBATCH --ntasks= # Total number of MPI ranks
#SBATCH --cpus-per-task= # Number of threads per MPI rank
#SBATCH --time=hh:mm:ss # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err # Standard error
#SBATCH --mail-type=END,FAIL # Notifications for job done or failed
#SBATCH --mail-user= # Email address for notifications
# === Metadata functions ===
log_job_start() {
echo "================== SLURM JOB METADATA =================="
printf " Job ID : %s\n" "$SLURM_JOB_ID"
printf " Job Name : %s\n" "$SLURM_JOB_NAME"
printf " Partition : %s\n" "$SLURM_JOB_PARTITION"
printf " Nodes : %s\n" "$SLURM_JOB_NUM_NODES"
printf " Tasks (MPI) : %s\n" "$SLURM_NTASKS"
printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
printf " Account : %s\n" "$SLURM_JOB_ACCOUNT"
printf " Submit Dir : %s\n" "$SLURM_SUBMIT_DIR"
printf " Work Dir : %s\n" "$PWD"
printf " Start Time : %s\n" "$(date)"
echo "========================================================"
}
log_job_end() {
printf " End Time : %s\n" "$(date)"
echo "========================================================"
}
# === Load required modules ===
module purge
module load GROMACS/2025.3-foss-2025b
module load PLUMED/2.9.4-foss-2025b
# === Set working directories ===
# Use shared filesystems for cross-node calculations
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"
# === Input/output file declarations ===
INPUT_FILES=""   # space-separated list; must include topol.tpr and plumed.dat
OUTPUT_FILES=""  # space-separated list of result files to copy back (e.g. trajectories, HILLS, COLVAR)
# === Copy input files to scratch ===
cp $INPUT_FILES "$WORK_DIR"
# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }
log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
# === Run GROMACS with PLUMED ===
mpiexec -np ${SLURM_NTASKS} gmx_mpi mdrun -s topol.tpr -plumed plumed.dat -ntomp ${SLURM_CPUS_PER_TASK}
# === Copy output files back ===
cp $OUTPUT_FILES "$INIT_DIR"
# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"
log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
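The COLVAR file written by PLUMED's PRINT action is plain text: a header line of the form `#! FIELDS time d1 ...` followed by whitespace-separated numeric columns. A small Python sketch for loading such a file into field names and rows (the sample data here is made up for illustration):

```python
# Parse PLUMED COLVAR-format text: "#! FIELDS ..." header lines name the
# columns; every other non-empty line is a row of floats.
def read_colvar(lines):
    fields, rows = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("#!"):
            parts = line.split()
            if len(parts) > 2 and parts[1] == "FIELDS":
                fields = parts[2:]
        elif line:
            rows.append([float(x) for x in line.split()])
    return fields, rows

# Hypothetical two-row COLVAR excerpt (time and one distance CV)
sample = """#! FIELDS time d1
0.0 0.31
0.2 0.29
""".splitlines()

fields, rows = read_colvar(sample)
print(fields)  # ['time', 'd1']
```

For real analyses, tools such as `plumed sum_hills` (for free-energy estimates from HILLS) may be more convenient; this sketch only covers reading the raw CV trace.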