
Storage Best Practices

Efficient use of shared storage is important for maintaining high performance of the HPC system and ensuring fair access to resources for all users. HPC filesystems are optimized for large parallel I/O workloads and may perform poorly when used in ways typical of personal computers.

The following recommendations help users avoid common performance problems and manage their data effectively.


Use the Appropriate Storage Location

Different storage areas are designed for different purposes.

Location     Intended Use
/home        Source code, scripts, configuration files, small datasets
/projects    Shared research data and long-term project storage
/scratch     Temporary files produced during running jobs

Tip

Use /scratch for active computations and large temporary datasets. Scratch filesystems are optimized for high I/O throughput and are intended for job runtime data.

Large simulation outputs, intermediate files, and checkpoint data should be written to scratch storage, not to home directories.

After a job finishes, important results should be moved from /scratch to /projects or another long-term storage location.
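
For example, finished results can be copied to project storage and the scratch copy removed afterwards (the paths below are illustrative and differ between systems):

# copy results to persistent project storage
rsync -avh /scratch/<user>/my_job/results/ /projects/<project_id>/results/

# remove the scratch copy once the transfer has been verified
rm -rf /scratch/<user>/my_job/results/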


Avoid Storing Large Data in Home Directories

Home directories are typically backed up and optimized for small files. Storing large datasets or simulation outputs in /home can negatively affect overall filesystem performance.

/home directory usage

Home directories are not intended for large simulation outputs. Storing large datasets in /home may impact backup systems and degrade performance for other users.

Use /projects or /scratch for large data instead.
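
For example, a large dataset that already sits in a home directory can be relocated to project storage (the directory names are illustrative):

# move the dataset out of /home into project storage
mv ~/large_dataset /projects/<project_id>/large_dataset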


Clean Temporary Data Regularly

Scratch storage is intended only for temporary files created during job execution.

/scratch directory usage

Files stored in /scratch may be periodically cleaned or removed automatically. Important results should always be transferred to persistent storage.

Users should periodically remove unnecessary files to free space for other users.
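
One way to identify candidates for deletion is to list scratch files older than a given age, here 30 days (the path and threshold are illustrative; consult your system's cleanup policy):

# list scratch files not modified within the last 30 days
find /scratch/<user> -type f -mtime +30

# delete them once you are sure they are no longer needed
find /scratch/<user> -type f -mtime +30 -delete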


Limit the Number of Small Files

Parallel filesystems are optimized for large files and high throughput. Workflows that generate millions of small files can significantly degrade filesystem performance.

Small files

Creating very large numbers of small files places heavy load on filesystem metadata servers and may slow down both your jobs and the system for other users.

When possible:

  • combine files into archives (tar)
  • store results in larger aggregated files
  • avoid very deep directory hierarchies with many entries

For example:

tar czf results.tar.gz results_directory/

See the section Working with a large number of small files for more information.


Perform Intensive I/O on Scratch Storage

Applications that perform heavy input/output operations should read and write data from scratch storage whenever possible.

I/O-intensive operations

Running I/O-intensive workloads on /scratch can significantly improve performance compared to persistent storage locations.

Scratch filesystems typically provide higher bandwidth and are designed for temporary job data.
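
A common pattern is to stage input data to scratch at the start of a job and copy results back afterwards. A minimal sketch, assuming a batch job and illustrative paths and program names:

# stage input data to fast scratch storage before the run
mkdir -p /scratch/<user>/<job_name>
cp -r /projects/<project_id>/input /scratch/<user>/<job_name>/
cd /scratch/<user>/<job_name>

# run the application against the scratch copy
./my_application input output

# copy results back to persistent project storage when the run finishes
mkdir -p /projects/<project_id>/results
cp -r output /projects/<project_id>/results/<job_name>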


Transfer Data Efficiently

For large datasets, use tools designed for efficient data transfer, such as:

  • rsync
  • scp
  • sftp

rsync efficiency

rsync is usually the most efficient option for transferring large directory trees because it transfers only changed files and can resume interrupted transfers.

Example:

rsync -avh data/ user@cluster:/projects/<project_id>/data/
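
If a large transfer is interrupted, the --partial option keeps partially transferred files so a repeated run can continue from where it stopped, and --progress reports transfer progress:

rsync -avh --partial --progress data/ user@cluster:/projects/<project_id>/data/
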

Organize Your Data

Keeping files organized makes it easier to manage storage usage and avoid exceeding quotas.

Recommended practices include:

  • separating input data, outputs, and scripts
  • archiving completed simulations
  • periodically reviewing disk usage
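
For example, a project directory might be organized as follows (the layout is illustrative):

/projects/<project_id>/
    input/      raw input datasets
    scripts/    job scripts and analysis code
    output/     results of completed runs
    archive/    compressed archives of finished simulations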

A useful command to view directory sizes:

du -sh *


Monitor Your Disk Usage

Users should regularly monitor their storage consumption to avoid exceeding quota limits.

Example:

quota -s

or

du -sh /projects/<project>/

Running out of space

Jobs may fail if storage quotas are exceeded or if a filesystem runs out of available space.

Monitoring usage helps prevent job failures caused by insufficient disk space.
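
To check how much free space remains on a filesystem as a whole (rather than your own usage), df can be used:

df -h /scratch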


Protect Important Data

Although some filesystems may be backed up, users should maintain copies of important data outside the HPC system whenever possible.

Backup policy

HPC systems are not intended as long-term archival storage. Users are responsible for maintaining external backups of important research data.

Critical research data should be archived in long-term storage systems or institutional repositories.
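
For example, a completed project can be compressed and copied to an external backup location (the hostname and paths are illustrative):

# create a compressed archive of the finished project
tar czf my_project.tar.gz /projects/<project_id>/my_project/

# copy the archive to external storage
rsync -avh my_project.tar.gz user@backup-server:/archive/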

Created by: Andrej Sec