Shared Guides¶
This section contains documentation applicable across all clusters. The guides provide general instructions and recommended practices for working efficiently in the HPC environment.
The documentation is organized into several thematic areas covering storage management, software environments, application compilation, and job submission. Each section focuses on practical workflows that users typically encounter when running computations on the cluster.
Scope of this section
The guides in this section apply to all clusters and most users. They describe common workflows and shared infrastructure used across the HPC environment.
Storage¶
The storage guides describe how data are organized on the system and how users should manage files and directories. These pages explain file permissions, storage quotas, recommended practices for handling data, and efficient data transfer.
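As a concrete illustration of managing file permissions, the following sketch restricts a data file so that only the owner can modify it and group members can read it. The filename is a hypothetical example, not a path from this documentation.

```shell
# Create an example file (hypothetical name).
touch results.csv

# Owner: read/write; group: read-only; others: no access.
chmod u=rw,g=r,o= results.csv

# Inspect the resulting permission bits in octal form.
stat -c '%a' results.csv    # prints 640
```

The symbolic form (`u=`, `g=`, `o=`) is often less error-prone than octal modes, since each class is set explicitly.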
Special attention is given to workloads that generate a large number of small files, which can negatively impact filesystem performance on parallel storage systems.
Good storage practices
Efficient data management improves filesystem performance and helps users stay within their storage quotas. Users are strongly encouraged to follow the recommended storage practices described in this section.
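One common remedy for the small-files problem is to pack many small files into a single archive before storing or transferring them. The sketch below generates a directory of small files as a stand-in for real output data; all names are hypothetical.

```shell
# Generate a directory of many small files (stand-in for real output data).
mkdir -p run_output
for i in $(seq 1 100); do
    echo "sample $i" > "run_output/part_$i.txt"
done

# Pack everything into one compressed archive: a single large file is far
# friendlier to a parallel filesystem than thousands of small ones.
tar -czf run_output.tar.gz run_output

# List the archive contents without extracting.
tar -tzf run_output.tar.gz | head
```

Individual files can later be extracted selectively with `tar -xzf run_output.tar.gz <path>`, so archiving does not prevent access to single files.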
Environment¶
The environment section explains how software environments are managed on the cluster. Most software is provided through the module system, which allows users to load compilers, libraries, and applications in a controlled way.
Users can also run applications inside containers or create isolated software environments when specific dependencies are required.
Guides are available for:
- Lua module environment
- Singularity containers
- Conda environments
- Python virtual environments
- Running Jupyter notebooks
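Of the options listed above, a Python virtual environment is the quickest to sketch. The directory name below is arbitrary; consult the dedicated guide for cluster-specific recommendations such as where to place environments.

```shell
# Create an isolated Python environment (directory name is arbitrary).
python3 -m venv myenv

# Activate the environment; python and pip now resolve inside it.
. myenv/bin/activate
which python

# Leave the environment when finished.
deactivate
```

Packages installed with `pip` while the environment is active go into `myenv/` rather than the system-wide installation, which keeps project dependencies isolated.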
Compilation¶
Some applications need to be compiled directly on the cluster. This section explains how to build software using the available toolchains and libraries.
The documentation includes instructions for building software using EasyBuild as well as manual compilation with Intel and GNU compilers. GPU-enabled software compilation using CUDA is also covered.
When compilation is required
Many commonly used scientific applications are already installed on the system. Compilation is typically required only for custom software or applications that are not available through modules.
Job Submission¶
All computations on the cluster are executed through the job scheduler. Users must submit jobs using job scripts that define resource requirements such as CPU cores, GPUs, memory, and runtime.
This section introduces the job submission workflow and explains how to create job scripts, choose appropriate partitions, and monitor running jobs.
Additional topics include:
- Job priorities
- Job states
- Resource accounting
- Array jobs
- GPU jobs
- E-mail notifications
- Practical command examples
Direct execution
Computational workloads should not be executed directly on login nodes. All production workloads must be submitted to the scheduler using a job script.
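A minimal job script might look like the sketch below, assuming a Slurm-style scheduler (consistent with the partitions, array jobs, and accounting mentioned above). The partition, module, and application names are placeholders, not values from this documentation.

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=compute       # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4         # CPU cores for the task
#SBATCH --mem=8G                  # memory for the whole job
#SBATCH --time=01:00:00           # wall-time limit (HH:MM:SS)

# Load required software via the module system, then run the program.
module load MyApp                 # placeholder module name
srun ./my_application
```

Such a script is submitted with `sbatch job.sh`, and running jobs can be monitored with `squeue -u $USER`; the guides in this section cover these commands in detail.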