Quantum ESPRESSO on Perun
Quantum ESPRESSO is a suite for first-principles electronic-structure calculations and materials modeling. It is based on DFT, plane wave basis sets, and pseudopotentials (both norm-conserving and ultrasoft). The core plane wave DFT functions of QE are provided by the PWscf (Plane-Wave Self-Consistent Field) component, a set of programs for electronic structure calculations within density functional theory and density functional perturbation theory, using plane wave basis sets and pseudopotentials.
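As a quick reference, a minimal pw.x input for a self-consistent field (scf) calculation has the shape shown below. The silicon structure, cutoff, and pseudopotential file name are illustrative placeholders only, not values specific to Perun; adapt them to your own system and pseudopotential library.

```fortran
&CONTROL
   calculation = 'scf'
   prefix      = 'si'
   outdir      = './out'
   pseudo_dir  = './pseudo'
/
&SYSTEM
   ibrav     = 2        ! fcc lattice
   celldm(1) = 10.2     ! lattice parameter in Bohr
   nat       = 2
   ntyp      = 1
   ecutwfc   = 30.0     ! plane-wave cutoff in Ry
/
&ELECTRONS
   conv_thr = 1.0d-8
/
ATOMIC_SPECIES
   Si  28.086  Si.pbe-n-rrkjus_psl.UPF
ATOMIC_POSITIONS (alat)
   Si  0.00 0.00 0.00
   Si  0.25 0.25 0.25
K_POINTS (automatic)
   4 4 4 0 0 0
```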
User guide
Available Versions
The following versions of the QuantumESPRESSO package are currently available:

- QuantumESPRESSO/7.5-foss-2025b
  - Runtime dependencies: none; the required libraries and dependencies are loaded automatically with the QuantumESPRESSO/7.5-foss-2025b module.
You can load the QuantumESPRESSO module with the following command:
module load QuantumESPRESSO/7.5-foss-2025b
Example Run Script
Copy this script to qe_run.sh, modify it as needed, and submit the job to a compute node with the command sbatch qe_run.sh.
#!/bin/bash
#SBATCH --job-name= # Name of the job
#SBATCH --account= # Project account number
#SBATCH --partition= # Partition name (cpu_short, cpu_long, cpu_hm_short, cpu_hm_long)
#SBATCH --nodes= # Number of nodes
#SBATCH --ntasks= # Total number of MPI ranks
#SBATCH --cpus-per-task= # Number of threads per MPI rank
#SBATCH --time=hh:mm:ss # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err # Standard error
#SBATCH --mail-type=END,FAIL # Notifications for job done or failed
#SBATCH --mail-user= # Email address for notifications
# === Metadata functions ===
log_job_start() {
echo "================== SLURM JOB METADATA =================="
printf " Job ID : %s\n" "$SLURM_JOB_ID"
printf " Job Name : %s\n" "$SLURM_JOB_NAME"
printf " Partition : %s\n" "$SLURM_JOB_PARTITION"
printf " Nodes : %s\n" "$SLURM_JOB_NUM_NODES"
printf " Tasks (MPI) : %s\n" "$SLURM_NTASKS"
printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
printf " Account : %s\n" "$SLURM_JOB_ACCOUNT"
printf " Submit Dir : %s\n" "$SLURM_SUBMIT_DIR"
printf " Work Dir : %s\n" "$PWD"
printf " Start Time : %s\n" "$(date)"
echo "========================================================"
}
log_job_end() {
printf " End Time : %s\n" "$(date)"
echo "========================================================"
}
# === Load required modules ===
module purge
module load QuantumESPRESSO/7.5-foss-2025b
# === Set working directories ===
# Use shared filesystems for cross-node calculations
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"
# === Input/output file declarations ===
INPUT_FILE=""  # Input file for pw.x
OUTPUT_FILE="" # Output file
# === Copy input files to scratch ===
cp "$INIT_DIR/$INPUT_FILE" "$WORK_DIR"
# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }
log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
# === Set OpenMP threads ===
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}  # Default to 1 if --cpus-per-task is unset
# === Run Quantum ESPRESSO ===
mpiexec -np ${SLURM_NTASKS} pw.x -input "$INPUT_FILE" > "$OUTPUT_FILE"
# Optional parallelization flags (see the QE documentation for details):
#   -nimage N   (image parallelization, e.g. for NEB calculations)
#   -npool N    (k-point parallelization)
#   -nband N    (band-group parallelization)
#   -ntg N      (task-group parallelization of FFTs)
#   -ndiag N    (diagonalization parallelization; also spelled -northo)
# === Copy results back ===
cp "$OUTPUT_FILE" "$INIT_DIR"
# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"
log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
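When using k-point parallelization, the pool count passed via -npool must divide the number of MPI ranks, and it should ideally also divide the number of k-points so all pools carry equal work. A small shell sketch for picking such a value; the rank and k-point counts here are hypothetical, taken neither from Perun nor from any particular input:

```shell
#!/bin/bash
# Hypothetical values: 32 MPI ranks, 8 k-points in the pw.x input.
ntasks=32
nkpts=8

# Pick the largest npool that divides both the MPI rank count
# and the number of k-points.
npool=1
for p in $(seq 1 "$nkpts"); do
  if [ $((ntasks % p)) -eq 0 ] && [ $((nkpts % p)) -eq 0 ]; then
    npool=$p
  fi
done
echo "npool=$npool"   # prints "npool=8" for these values
```

In the job script, the chosen value would then be added to the pw.x invocation, e.g. mpiexec -np ${SLURM_NTASKS} pw.x -npool "$npool" -input "$INPUT_FILE".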