SIESTA on Perun

SIESTA is a density functional theory (DFT) code designed for efficient electronic structure calculations and ab initio molecular dynamics of large systems. It uses localized basis sets and is suitable for materials science and condensed matter simulations.

User Guide

Available Versions

The following version of SIESTA is currently available:

  • Siesta/5.4.1-foss-2024a
    • Runtime dependencies: none; all required libraries are loaded automatically with the module.

You can load the SIESTA module with the following command:

module load Siesta/5.4.1-foss-2024a


Example Run Script

You can copy this script to siesta_run.sh, modify it for your job, and submit it with sbatch siesta_run.sh:
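Before filling in the #SBATCH fields, it helps to size the job so that MPI ranks times OpenMP threads per rank matches the cores you request. A quick sketch with hypothetical numbers (the core count below is illustrative, not a Perun value; check your partition's hardware):

```shell
# Illustrative sizing for a hybrid MPI+OpenMP SIESTA run.
NODES=2            # value for --nodes
CORES_PER_NODE=32  # assumed physical cores per node; verify for your partition
CPUS_PER_TASK=4    # value for --cpus-per-task (OpenMP threads per MPI rank)

# --ntasks should satisfy: ntasks * cpus-per-task = nodes * cores-per-node
NTASKS=$(( NODES * CORES_PER_NODE / CPUS_PER_TASK ))
echo "ntasks = $NTASKS"   # prints: ntasks = 16
```

With these assumptions you would request --nodes=2, --ntasks=16 and --cpus-per-task=4 in the header below.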


#!/bin/bash
#SBATCH --job-name=                     # Name of the job
#SBATCH --account=                      # Project account number
#SBATCH --partition=                    # Partition name (cpu_short, cpu_long, cpu_hm_short, cpu_hm_long)
#SBATCH --nodes=                        # Number of nodes
#SBATCH --ntasks=                       # Total number of MPI ranks
#SBATCH --cpus-per-task=                # Number of threads per MPI rank
#SBATCH --time=hh:mm:ss                 # Time limit (hh:mm:ss)
#SBATCH --output=stdout.%j.out          # Standard output (%j = Job ID)
#SBATCH --error=stderr.%j.err           # Standard error
#SBATCH --mail-type=END,FAIL            # Notifications for job done or failed
#SBATCH --mail-user=                    # Email address for notifications

# === Metadata functions ===
log_job_start() {
    echo "================== SLURM JOB METADATA =================="
    printf " Job ID        : %s\n" "$SLURM_JOB_ID"
    printf " Job Name      : %s\n" "$SLURM_JOB_NAME"
    printf " Partition     : %s\n" "$SLURM_JOB_PARTITION"
    printf " Nodes         : %s\n" "$SLURM_JOB_NUM_NODES"
    printf " Tasks (MPI)   : %s\n" "$SLURM_NTASKS"
    printf " CPUs per Task : %s\n" "$SLURM_CPUS_PER_TASK"
    printf " Account       : %s\n" "$SLURM_JOB_ACCOUNT"
    printf " Submit Dir    : %s\n" "$SLURM_SUBMIT_DIR"
    printf " Work Dir      : %s\n" "$PWD"
    printf " Start Time    : %s\n" "$(date)"
    echo "========================================================"
}

log_job_end() {
    printf " End Time      : %s\n" "$(date)"
    echo "========================================================"
}

# === Load required modules ===
module purge
module load Siesta/5.4.1-foss-2024a

# === Set working directories ===
# Use shared filesystems for cross-node calculations
INIT_DIR="${SLURM_SUBMIT_DIR}"
WORK_DIR="/work/${SLURM_JOB_ACCOUNT}/${SLURM_JOB_ID}"
mkdir -p "$WORK_DIR"

# === Input/output file declarations ===
INPUT_FILE="input.fdf"
OUTPUT_FILE="siesta.out"

# === Copy input files to scratch ===
# SIESTA also reads the pseudopotential files referenced in the input,
# so copy any .psf/.psml files along with the .fdf file.
cp "$INIT_DIR/$INPUT_FILE" "$WORK_DIR"
cp "$INIT_DIR"/*.psf "$INIT_DIR"/*.psml "$WORK_DIR" 2>/dev/null || true

# === Change to working directory ===
cd "$WORK_DIR" || { echo "Failed to cd into $WORK_DIR"; exit 1; }

log_job_start >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"

# === Set OpenMP threads ===
# SLURM_CPUS_PER_TASK is unset when --cpus-per-task is omitted; default to 1.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

# === Run SIESTA ===
mpiexec -np "${SLURM_NTASKS}" siesta < "$INPUT_FILE" > "$OUTPUT_FILE"

# === Copy results back ===
cp "$OUTPUT_FILE" "$INIT_DIR"

# === Optional: clean up scratch ===
# rm -rf "$WORK_DIR"

log_job_end >> "$INIT_DIR/jobinfo.$SLURM_JOB_ID.log"
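
The script above expects a SIESTA input file named input.fdf in the submit directory. As a minimal sketch of what such a file might contain, here is an illustrative bulk-silicon input; the geometry and parameters are examples only, and the run additionally needs the matching pseudopotential file (e.g. Si.psf) next to it:

```
SystemName          Bulk silicon (example)
SystemLabel         si
NumberOfAtoms       2
NumberOfSpecies     1

%block ChemicalSpeciesLabel
 1  14  Si
%endblock ChemicalSpeciesLabel

LatticeConstant     5.430 Ang
%block LatticeVectors
 0.000  0.500  0.500
 0.500  0.000  0.500
 0.500  0.500  0.000
%endblock LatticeVectors

AtomicCoordinatesFormat  Fractional
%block AtomicCoordinatesAndAtomicSpecies
 0.00  0.00  0.00  1
 0.25  0.25  0.25  1
%endblock AtomicCoordinatesAndAtomicSpecies

MeshCutoff          300. Ry
XC.functional       GGA
XC.authors          PBE
```

Consult the SIESTA manual for the full set of fdf keywords and sensible values for your system.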
Created by: Marek Štekláč