Wannier90 on Perun

Wannier90 is a post-processing tool for electronic structure calculations. It is commonly used together with density functional theory (DFT) codes such as Quantum ESPRESSO to compute maximally localized Wannier functions, band interpolation, and electronic properties.

User Guide

Available Versions

The following versions of Wannier90 are currently available:

  • Runtime dependencies:
    • None; all required libraries and dependencies are loaded automatically with the Wannier90/3.1.0-foss-2024a module.

You can load the Wannier90 module with the following command:

module load Wannier90/3.1.0-foss-2024a

  • Runtime dependencies:
    • None; all required libraries and dependencies are loaded automatically with the Wannier90/3.1.0-intel-2025b module.

You can load the Wannier90 module with the following command:

module load Wannier90/3.1.0-intel-2025b

  • Runtime dependencies:
    • None; all required libraries and dependencies are loaded automatically with the Wannier90/3.1.0-NVHPC-25.1 module.

You can load the Wannier90 module with the following command:

module load Wannier90/3.1.0-NVHPC-25.1

MPI parallelization

These builds of Wannier90 are not parallelized at the MPI level.


Usage

Wannier90 is typically used in combination with Quantum ESPRESSO. The workflow consists of:

  1. Performing a DFT calculation (e.g., with pw.x)
  2. Preparing the Wannier90 input file (seedname.win)
  3. Running Wannier90 preprocessing
  4. Running Wannier90 main calculation

# Preprocessing (step 3; generates seedname.nnkp):
wannier90.x -pp seedname.win

# Main calculation (step 4):
mpiexec -np ${MPI_RANKS} wannier90.x seedname.win
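Both commands read their parameters from the seedname.win file. A minimal sketch of such a file is shown below, assuming a hypothetical bulk-silicon example; all numerical values and the seedname are illustrative, not taken from this guide:

```
! seedname.win -- minimal illustrative Wannier90 input (hypothetical example)
num_wann = 8          ! number of Wannier functions to compute
num_iter = 200        ! maximum number of minimization iterations

begin unit_cell_cart
ang
0.000 2.715 2.715
2.715 0.000 2.715
2.715 2.715 0.000
end unit_cell_cart

begin atoms_frac
Si  0.00 0.00 0.00
Si  0.25 0.25 0.25
end atoms_frac

begin projections
Si : sp3              ! 4 projections per atom, 8 in total = num_wann
end projections

mp_grid = 4 4 4       ! must match the k-point grid of the DFT calculation

begin kpoints
! explicit list of the k-points, one per line (omitted here)
end kpoints
```

The k-point list and projections must be consistent with the preceding DFT calculation; consult the Wannier90 user guide for the full set of keywords.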

Example Run Script

You can copy this script, modify it as needed, save it as wannier_run.sh, and submit the job to a compute node with the command sbatch wannier_run.sh.


#!/bin/bash
#SBATCH --job-name=                   # Name of the job
#SBATCH --account=                    # Project account number
#SBATCH --partition=                  # Partition name (cpu_short, cpu_long, cpu_hm_short, cpu_hm_long)
#SBATCH --nodes=                      # Number of nodes
#SBATCH --ntasks=                     # Total number of MPI ranks
#SBATCH --cpus-per-task=              # Number of threads per MPI rank
#SBATCH --time=                       # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out        # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err         # Standard error
#SBATCH --mail-type=END,FAIL          # Notifications for job done or failed
#SBATCH --mail-user=                  # Email address for notifications

# === Metadata functions ===
log_job_start() {
    echo "================== SLURM JOB METADATA =================="
    printf " Job ID        : %s\n" "$SLURM_JOB_ID"
    printf " Job Name      : %s\n" "$SLURM_JOB_NAME"
    printf " Partition     : %s\n" "$SLURM_JOB_PARTITION"
    printf " Nodes         : %s\n" "$SLURM_JOB_NUM_NODES"
    printf " Tasks (MPI)   : %s\n" "$SLURM_NTASKS"
    printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
    printf " Account       : %s\n" "$SLURM_JOB_ACCOUNT"
    printf " Submit Dir    : %s\n" "$SLURM_SUBMIT_DIR"
    printf " Work Dir      : %s\n" "$PWD"
    printf " Start Time    : %s\n" "$(date)"
    echo "========================================================"
}

log_job_end() {
    printf " End Time      : %s\n" "$(date)"
    echo "========================================================"
}

# === Load required modules ===
module purge
module load Wannier90/3.1.0-intel-2025b

# === Set working directories ===
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"

# === Input/output file declarations ===
INPUT_FILES="seedname.win"
OUTPUT_FILES="seedname.wout"

# === Copy input files to scratch ===
cp $INPUT_FILES "$WORK_DIR"

# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }

log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"

# === Run Wannier90 ===
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# For the main calculation, the preprocessing output and the overlap/projection
# files from the DFT interface (e.g. seedname.mmn, seedname.amn) must also be present.
mpiexec -np ${SLURM_NTASKS} wannier90.x seedname.win
# wannier90.x seedname.win   (for the non-MPI-parallelized version)

# === Copy output files back ===
cp $OUTPUT_FILES "$INIT_DIR"

# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"

log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
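Once submitted, the job can be monitored with standard Slurm commands; `<jobid>` below is a placeholder for the ID that sbatch prints:

```
sbatch wannier_run.sh                       # submit; prints "Submitted batch job <jobid>"
squeue -u $USER                             # list your pending and running jobs
sacct -j <jobid> -o JobID,State,Elapsed     # accounting summary after the job ends
```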
Created by: Marek Štekláč