PLUMED on Devana

PLUMED is an open-source library for enhanced sampling and analysis of molecular simulations. It is typically used together with molecular dynamics engines such as GROMACS to perform techniques like metadynamics, umbrella sampling, and free energy calculations.

User Guide

Available Versions

The following versions of PLUMED are currently available:

  • PLUMED/2.7.3-foss-2021b
  • PLUMED/2.8.1-foss-2022a
  • PLUMED/2.9.0-foss-2022b
  • PLUMED/2.9.0-foss-2023a
  • PLUMED/2.9.2-foss-2023b
  • PLUMED/2.9.3-foss-2024a

None of these builds has additional runtime dependencies; all required libraries are loaded automatically together with the module itself. You can load a chosen version with the following command, for example:

module load PLUMED/2.9.2-foss-2023b
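After loading a module, you can confirm that PLUMED is active in your environment. A short sketch using standard Lmod commands (the version chosen below is one of the builds listed above):

```shell
# List the PLUMED builds installed on the cluster
module avail PLUMED

# Load one version and verify it is on your PATH
module load PLUMED/2.9.2-foss-2023b
which plumed
plumed info --version
```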

GROMACS

PLUMED is a plugin library and is typically used together with GROMACS. Make sure to load a compatible GROMACS module when running simulations.
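A quick way to check whether the loaded GROMACS build supports the PLUMED interface is to look for the -plumed option in mdrun's help output (a sanity check, assuming both modules are already loaded):

```shell
# A PLUMED-enabled GROMACS build accepts the -plumed flag in mdrun
gmx_mpi mdrun -h 2>&1 | grep -i plumed
```

If the command prints nothing, the loaded GROMACS build does not support PLUMED and you should load a different combination of modules.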

Usage with GROMACS

PLUMED is used by passing a plumed.dat input file to GROMACS during the simulation.

Example run command

mpiexec -np ${MPI_RANKS} gmx_mpi mdrun -s topol.tpr -plumed plumed.dat
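The plumed.dat file defines the collective variables and actions PLUMED applies during the run. A minimal illustrative example that monitors a single distance (the atom indices, stride, and output file name are placeholders, not taken from this page):

```
# plumed.dat — monitor the distance between two atoms
d1: DISTANCE ATOMS=1,10
PRINT ARG=d1 FILE=COLVAR STRIDE=100
```

PLUMED writes the requested values to the COLVAR file every STRIDE steps; see the PLUMED manual for the full set of collective variables and biasing actions.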

CPU Example


#!/bin/bash
#SBATCH --job-name=                     # Name of the job
#SBATCH --account=                      # Project account number
#SBATCH --partition=                    # Partition name (short, medium, long)
#SBATCH --nodes=                        # Number of nodes
#SBATCH --ntasks=                       # Total number of MPI ranks
#SBATCH --cpus-per-task=                # Number of threads per MPI rank
#SBATCH --time=hh:mm:ss                 # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out          # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err           # Standard error
#SBATCH --mail-type=END,FAIL            # Notifications for job done or failed
#SBATCH --mail-user=                    # Email address for notifications

# === Metadata functions ===
log_job_start() {
    echo "================== SLURM JOB METADATA =================="
    printf " Job ID        : %s\n" "$SLURM_JOB_ID"
    printf " Job Name      : %s\n" "$SLURM_JOB_NAME"
    printf " Partition     : %s\n" "$SLURM_JOB_PARTITION"
    printf " Nodes         : %s\n" "$SLURM_JOB_NUM_NODES"
    printf " Tasks (MPI)   : %s\n" "$SLURM_NTASKS"
    printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
    printf " Account       : %s\n" "$SLURM_JOB_ACCOUNT"
    printf " Submit Dir    : %s\n" "$SLURM_SUBMIT_DIR"
    printf " Work Dir      : %s\n" "$PWD"
    printf " Start Time    : %s\n" "$(date)"
    echo "========================================================"
}

log_job_end() {
    printf " End Time      : %s\n" "$(date)"
    echo "========================================================"
}

# === Load required modules ===
module purge
module load GROMACS/2024.4-foss-2023b-CUDA-12.4.0
module load PLUMED/2.9.2-foss-2023b

# === Set working directories ===
# Use shared filesystems for cross-node calculations
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"

# === Input/output file declarations ===
INPUT_FILES=""                          # e.g. "topol.tpr plumed.dat"
OUTPUT_FILES=""                         # e.g. "COLVAR *.log *.edr *.xtc"

# === Copy input files to scratch ===
cp $INPUT_FILES "$WORK_DIR"

# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }

log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"

# === Run GROMACS with PLUMED ===
mpiexec -np ${SLURM_NTASKS} gmx_mpi mdrun -s topol.tpr -plumed plumed.dat -ntomp ${SLURM_CPUS_PER_TASK}

# === Copy output files back ===
cp $OUTPUT_FILES "$INIT_DIR"

# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"

log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
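Save the script (for example as run_plumed.sh, a name chosen here for illustration) and submit it to the scheduler:

```shell
sbatch run_plumed.sh        # submit the job script
squeue -u $USER             # check the job's status in the queue
```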
Created by: Marek Štekláč