# Storage Overview
Effective data management is essential for ensuring high performance and productivity when working on the Devana HPC cluster. This guide outlines the available storage systems, their intended uses, and best practices for optimal usage.
No Backups Available
There are no backup services for any directory (/home, /projects, /scratch, /work). Users are responsible for safeguarding their data.
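Because no backups are taken, a simple way to protect important results is to copy them off the cluster yourself, for example with rsync over SSH. The following is only a sketch: the remote user, host, and all paths are placeholders.

```bash
# Copy a results directory from Devana to a machine you control.
# PROJECT_ID, "myuser", "backup-host.example.org" and the target path are
# placeholders; replace them with your own values.
PROJECT_ID=p70-23-t
rsync -avz --progress \
    "/projects/$PROJECT_ID/results/" \
    myuser@backup-host.example.org:/backups/devana/results/
```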
## Where to Run Calculations?
| Mountpoint | Capacity | Accessible From | Performance (Write/Read) |
|---|---|---|---|
| /home/username | 547 TB | Login & Compute Nodes | 3 GB/s / 6 GB/s |
| /projects/project_id | 269 TB | Login & Compute Nodes | XXX |
| /scratch/project_id | 269 TB | Login & Compute Nodes | 7 GB/s / 14 GB/s |
| /work/SLURM_JOB_ID | 3.5 TB | Compute/GPU (Nodes 001-048, 141-148) | 3.6 GB/s / 6.7 GB/s |
| /work/SLURM_JOB_ID | 1.5 TB | Compute (Nodes 049-140) | 1.9 GB/s / 3.0 GB/s |
Choosing the Right Filesystem
The optimal filesystem depends on various factors. In general, /work provides the best performance for workloads where storage capacity is not a primary concern.
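To check the current capacity and usage of the shared filesystems yourself, standard tools on a login node are sufficient; the sketch below simply queries the mount points listed in the table above.

```bash
# Show size, used and available space for the shared filesystems.
df -h /home /projects /scratch
```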
## Where to Store Data?
Storage locations are categorized based on their intended use.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /home/username/ | 1 TB | 3 months after project ends | NFS |

Details
A personal home directory. Check the path with `echo $HOME`.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /projects/<project_id> | Unlimited | 3 months after project ends | NFS |

Details
A shared project directory accessible to all project members.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /scratch/<project_id> | Unlimited | 3 months after project ends | BeeGFS |
| /work/$SLURM_JOB_ID | Unlimited | Automatically deleted after job completion | XFS |

Details
Temporary storage directories for calculations, accessible only during a running job; a short usage sketch follows the list below.

- /scratch/<project_id> – shared scratch directory, available from all compute nodes.
- /work/$SLURM_JOB_ID – local storage, specific to the allocated compute node.
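As a minimal sketch of how these two locations might be referenced inside a running job (the project ID p70-23-t is the example used later in this guide, and $SLURM_JOB_ID is set by Slurm for every job):

```bash
# Inside a running Slurm job: both temporary locations are available.
SHARED_SCRATCH=/scratch/p70-23-t      # shared, visible from all compute nodes
LOCAL_WORK=/work/$SLURM_JOB_ID        # node-local, removed when the job ends

ls "$SHARED_SCRATCH"
ls "$LOCAL_WORK"
```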
## Storage Systems
Upon logging into the Devana cluster, multiple storage locations are available, each designed to support specific aspects of computational workflows:
Overview of Available Filesystems on Devana
- /home
  - A personal directory that is unique to each user.
  - Intended for storing personal results.
- /projects
  - A shared directory that all project members can access.
  - Used for storing project-related results.
- /scratch
  - A shared directory designed for large files, accessible to all project members.
  - Intended for calculations involving files exceeding the local disk capacity.
- /work
  - Local storage on each compute and GPU node.
  - Suitable for calculations with files not exceeding the local disk capacity.
  - Only accessible during an active job.
### Home
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /home | 1 TB | No | 547 TB | 3 GB/s write, 6 GB/s read | NFS |
The /home directory is the default storage location after logging in, containing each user's personal directory. A quota of 1 TB per user is enforced. For details on storage quotas, refer to the home quotas section.
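A quick, if approximate, way to see how much of the 1 TB quota you are using is to summarize the size of your home directory (this may take a while if it contains many small files):

```bash
# Report the total size of your home directory.
du -sh "$HOME"
```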
### Projects
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /projects | None | No | 269 TB | XXX GB/s | NFS |
Each user is assigned one or more project IDs, which are required for accessing project-related storage and computational resources. You can find your project ID(s) with the `sprojects` command, which lists all projects associated with your account. To view additional details, such as storage allocations and shared directories, use the `sprojects -f` command. Alternatively, you can check your project memberships by running `id`:

```console
$ id
uid=187000000(user) gid=187000000(user) groups=187000000(user),187000062(p70-23-t),187000064(p81-23-t)
```
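If your project groups follow the naming pattern shown in the example above (p70-23-t, p81-23-t), a small shell filter can extract just the project-style names; the regular expression below is an assumption based on that pattern and may need adjusting.

```bash
# Print group names one per line and keep those that look like project IDs.
# The pattern is derived from the example IDs above and is not authoritative.
id -Gn | tr ' ' '\n' | grep -E '^p[0-9]+-[0-9]+'
```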
Each project has an associated storage directory located under `/projects`. You can access your project directory using the path structure `/projects/<project_id>`, replacing `<project_id>` with your specific project identifier. For example, if your project ID is `p70-23-t`, your project directory would be `/projects/p70-23-t`.
Data Retention Policy
Data in /projects is preserved for 3 months after the project concludes.
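When several members need to write to the same project area, it can help to create a shared subdirectory and give the project group write access. This is only a sketch: it assumes the Unix group name matches the project ID, as in the id output above, and the directory name is illustrative.

```bash
# Create a shared subdirectory for the example project p70-23-t and let the
# project group write to it; the setgid bit keeps new files group-owned.
mkdir -p /projects/p70-23-t/shared_results
chgrp p70-23-t /projects/p70-23-t/shared_results
chmod g+rwxs /projects/p70-23-t/shared_results
```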
### Scratch
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /scratch | None | No | 269 TB | XXX GB/s | BeeGFS |
The `/scratch` directory is temporary storage for computational data and is implemented as a BeeGFS parallel filesystem with 100 Gb/s InfiniBand connectivity. The `/scratch` storage follows a similar structure to `/projects`, with each project having a dedicated directory located at `/scratch/<project_id>`.
User Responsibility
Users are required to transfer important data from /scratch to /home or /projects once calculations are complete and to remove any temporary files.
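A typical post-processing step might therefore look like the sketch below, which copies results into the project directory and then removes the scratch copy; all directory names are illustrative.

```bash
# Copy finished results from scratch to the project directory, then clean up.
rsync -av /scratch/p70-23-t/my_run/results/ /projects/p70-23-t/my_run/results/
rm -rf /scratch/p70-23-t/my_run
```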
### Work
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Compute Nodes | /work | None | No | 1.5 / 3.5 TB (node-dependent) | XXX GB/s | XFS |
The /work directory, similar to /scratch, is a temporary storage space specifically for calculations. However, it consists of local storage on individual compute nodes, accessible only during an active job.
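A common pattern is to stage input data onto /work at the start of a job, compute there, and copy the results back before the job ends and the directory is deleted. The batch script below is only a sketch: the project paths, the application name, and the Slurm options are illustrative and not specific to Devana's partitions.

```bash
#!/bin/bash
#SBATCH --job-name=work-staging-example
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Illustrative paths; /work/$SLURM_JOB_ID is the per-job local directory
# described above and exists only while this job is running.
INPUT_DIR=/projects/p70-23-t/inputs
RESULT_DIR=/projects/p70-23-t/results/$SLURM_JOB_ID
WORK_DIR=/work/$SLURM_JOB_ID

mkdir -p "$RESULT_DIR"

# Stage inputs to fast node-local storage and run the calculation there.
cp -r "$INPUT_DIR"/. "$WORK_DIR"/
cd "$WORK_DIR"
./my_simulation > output.log          # placeholder for the actual application

# Copy everything back before the job ends and /work/$SLURM_JOB_ID is removed.
cp -r "$WORK_DIR"/. "$RESULT_DIR"/
```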
Node-Specific Capacity
- Nodes 001-049 and 141-148 offer 3.5 TB of /work storage.
- Other nodes provide 1.8 TB.
For additional hardware details, visit the Storage Hardware Section.