
Storage Overview

Effective data management is essential for maintaining high performance and productivity when working on the Devana HPC cluster. This guide provides an overview of the available storage systems, their intended use cases, and general recommendations for optimal usage.

No Backups Available

There are no backup services for any directory (/home, /projects, /scratch, /work). Users are fully responsible for safeguarding their data and ensuring that important results are stored in appropriate locations.

For detailed information about storage quotas, see the Storage Quotas guide.


Storage Systems

After logging into the Devana cluster, several storage locations are available. Each filesystem is designed for a specific part of the computational workflow.

Overview of Available Filesystems on Devana

  • /home

    • Personal directory unique to each user.
    • Intended mainly for configuration files, scripts, and smaller personal results.
  • /projects

    • Shared directory accessible to all project members.
    • Used for storing project-related data and results.
  • /scratch

    • Shared high-performance storage for large datasets.
    • Accessible to all project members.
    • Intended for calculations that exceed the local disk capacity.
  • /work

    • Local storage on compute and GPU nodes.
    • Intended for calculations whose data fit within the local disk capacity.
    • Accessible only during an active job.

Where to Run Calculations?

Mountpoint              Capacity   Accessible From           Performance (Write / Read)
/home/username          547 TB     Login & compute nodes     3 GB/s / 6 GB/s
/projects/<project_id>  269 TB     Login & compute nodes     XXX GB/s / XXX GB/s
/scratch/<project_id>   269 TB     Login & compute nodes     7 GB/s / 14 GB/s
/work/$SLURM_JOB_ID     3.5 TB     Nodes 001–048, 141–148    3.6 GB/s / 6.7 GB/s
/work/$SLURM_JOB_ID     1.5 TB     Nodes 049–140             1.9 GB/s / 3.0 GB/s

Choosing the Right Filesystem

The optimal filesystem depends on workload size, I/O patterns, and storage requirements. In general, /work provides the highest performance when the available local capacity is sufficient.
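
Before choosing, you can check how much space is currently free on the shared filesystems with a standard df query from a login node (mountpoints as listed in the table above):

df -h /home /projects /scratch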


Where to Store Data?

Storage locations are categorized according to their intended purpose.

Path (Mountpoint)   Quota   Retention   Protocol
/home/username/     1 TB    None        NFS

Personal home directory for each user. This location should be used primarily for scripts, configuration files, and smaller datasets.

The path can be verified using:

echo $HOME

Path (Mountpoint)        Quota       Retention                     Protocol
/projects/<project_id>   Unlimited   6 months after project ends   NFS

Shared directory accessible to all members of a project. This location should be used for shared project data, intermediate results, and final outputs.

Path (Mountpoint)       Quota       Retention                     Protocol
/scratch/<project_id>   Unlimited   6 months after project ends   BeeGFS

Temporary storage intended for computational workloads.

  • Designed for large datasets and high-throughput I/O.
  • Accessible from all compute nodes.
  • Data should be considered non-persistent.
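
As a sketch of typical usage (the directory layout is illustrative, not mandated), a batch job can create its own subdirectory under the project's scratch space and work there:

SCRATCHDIR=/scratch/<project_id>/$SLURM_JOB_ID   # replace <project_id> with your own, e.g. p70-23-t
mkdir -p "$SCRATCHDIR"
cd "$SCRATCHDIR"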

Path (Mountpoint)     Quota       Retention                                    Protocol
/work/$SLURM_JOB_ID   Unlimited   Deleted automatically after job completion   XFS

Local node storage available only during the execution of a job.

  • Highest I/O performance available on the system.
  • Suitable for workloads fitting within node-local capacity.
  • Data is automatically removed after job completion.

Detailed quota information is available in the Storage Quotas guide.


Home

Mountpoint   Per-User Limit   Backup   Total Capacity   Performance (Write / Read)   Protocol
/home        1 TB             No       547 TB           3 / 6 GB/s                   NFS

The /home directory is the default storage location after login and contains the personal directory for each user. A quota of 1 TB per user is enforced.
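
To see how much of the 1 TB quota your home directory currently occupies, a standard du query works (it may take a while on large directory trees; Devana may also provide a dedicated quota-reporting tool not covered here):

du -sh $HOME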


Projects

Mountpoint   Per-User Limit   Backup   Total Capacity   Performance   Protocol
/projects    None             No       269 TB           XXX GB/s      NFS

Each user is assigned one or more project IDs, which are required for accessing project-related storage and computational resources. You can list all projects associated with your account with the sprojects command; to view additional details, such as storage allocations and shared directories, use the sprojects -f command. Alternatively, you can check your project memberships by running:

id

Example output:

uid=187000000(user) gid=187000000(user) groups=187000000(user),187000062(p70-23-t),187000064(p81-23-t)

Your project memberships (here p70-23-t and p81-23-t) are listed in the groups field after your personal group.

Each project has an associated storage directory located under /projects. You can access your project directory using the following path structure:

/projects/<project_id>

Replace <project_id> with your specific project identifier. For example, if your project ID is p70-23-t, your project directory would be:

/projects/p70-23-t

Data Retention Policy

Data in /projects is preserved for 6 months after the project concludes.

Scratch

Mountpoint   Per-User Limit   Backup   Total Capacity   Performance (Write / Read)   Protocol
/scratch     None             No       269 TB           7 / 14 GB/s                  BeeGFS

The /scratch directory provides temporary storage for computational data and is implemented as a BeeGFS parallel filesystem connected via 100 Gb/s InfiniBand.
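
If the BeeGFS client tools are available to users (an assumption; this guide does not state it), standard BeeGFS commands can inspect the filesystem, for example the striping settings of a project's scratch directory:

beegfs-ctl --getentryinfo /scratch/<project_id>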

The directory structure follows the same layout as /projects, with each project having its own directory:

/scratch/<project_id>

Data Retention Policy

Data in /scratch is preserved for 6 months after the project concludes.
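
Because data in /scratch is preserved only for this limited window and is never backed up, results worth keeping should be copied to project storage, for example (paths are illustrative):

rsync -a /scratch/<project_id>/results/ /projects/<project_id>/results/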


Work

Mountpoint   Per-User Limit   Backup   Capacity (per node)   Performance (Write / Read)         Protocol
/work        None             No       1.5 TB / 3.5 TB       1.9 / 3.0 or 3.6 / 6.7 GB/s        XFS

Like /scratch, the /work directory is intended for temporary computational data. Unlike /scratch, however, it is local storage attached directly to each compute node, which gives it the highest I/O performance on the system.

This storage is accessible only during an active job and is typically used for I/O-intensive workloads.
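
A common pattern is to stage input into /work/$SLURM_JOB_ID at job start, compute on the fast local disk, and copy results back to persistent storage before the job ends, since the directory is deleted automatically afterwards. A minimal sketch of such a batch script, with the project ID, input path, and application name purely illustrative:

#!/bin/bash
#SBATCH --job-name=work_staging
#SBATCH --nodes=1

# Node-local job directory (removed automatically when the job finishes)
WORKDIR=/work/$SLURM_JOB_ID
mkdir -p "$WORKDIR"   # harmless if the scheduler has already created it

# Stage input from persistent project storage to the fast local disk
cp -r /projects/p70-23-t/input "$WORKDIR"/
cd "$WORKDIR"

# Run the computation on local storage (application name is illustrative)
./my_application input

# Copy results back before the job ends; /work contents do not survive the job
cp -r results /projects/p70-23-t/results_$SLURM_JOB_ID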

Node-Specific Capacity

  • Nodes 001–048 and 141–148 provide 3.5 TB of /work storage.
  • Nodes 049–140 provide 1.5 TB.

For additional hardware information, see the Storage Hardware Section.

Created by: Andrej Sec