
Welcome to the SAS HPC Documentation

Welcome to the SAS User Documentation Portal, your central hub for information about our supercomputers, Devana and Perun. Together, these clusters are designed to support research in areas such as artificial intelligence, bioinformatics, quantum chemistry, and related computationally intensive disciplines.

The mission of the SAS HPC centre is to provide reliable, high-performance computational resources and a stable software environment to accelerate scientific and technological research.

Learn more about:


Perun cluster

A new high-performance computing cluster, Perun, has been deployed and made available to users in Q2 2026. Perun provides significantly higher performance and improved energy efficiency compared to the Devana cluster.

Action Required

Please review the Perun documentation and follow the Get Access guide to request access.

Software Stack Update on Devana cluster

To keep our software environment up to date, secure, and performant, we rebuilt the entire Devana software stack in Q3 2025.

The old software stack was retired on November 10th, 2025, and remains temporarily accessible via the ssx command. For more information, see the documentation section on Software Stack Update.


Stay Updated

We’re committed to keeping you informed about updates and opportunities. Here are some ways to stay in the loop:

  • Account & Project Management: Access your dashboard to manage your account, projects, SSH keys, and more.
  • Devana Status and Perun Status: Monitor real-time system performance and availability metrics.
  • Computing Center SAS: Explore services available at Computing Center of the Slovak Academy of Sciences, including project calls for Devana supercomputer.
  • Workshops and Training: Join hands-on sessions to boost your HPC skills.
  • Helpdesk: Submit support tickets, report issues, or get assistance with account access, job submissions, and other HPC-related inquiries.

How to Begin?

If you’re new to HPC, our portal is here to guide you every step of the way.

  1. Get Access: Learn how to request access and set up your user account.
  2. Get Project: Find out how to create and manage projects for your computational work.
  3. Connect to Cluster: Follow step-by-step instructions to connect to our clusters from your local machine.
  4. Submit Your First Job: Create your own sbatch script with our Job Builder and start running computations by following our simple job submission guides.
  5. Explore Advanced Features: Optimize your workflows with detailed documentation on modules, software available on Devana or Perun, GPU jobs, and more.
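The steps above can be sketched as a short shell session. Note that the login address, script name, and job ID below are placeholders, not actual Devana or Perun values; consult the Connect to Cluster guide for the real hostnames:

```shell
# Steps 1-3: connect to a login node (placeholder address).
ssh your_username@login.hpc.example.sk

# Step 4: submit a batch script to the Slurm workload manager.
sbatch my_first_job.sh

# Check the state of your queued and running jobs.
squeue --me

# Inspect the job's output once it finishes
# (Slurm writes to slurm-<jobid>.out by default).
cat slurm-12345.out
```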

How to Use the Documentation?

Our documentation is designed to be clear and easy to navigate, with resources tailored to users at all experience levels. Here are some tips and best practices to help you make the most of it:

  1. Start with the Basics: If you're new to HPC calculations, begin with the previous section. It will walk you through getting access to the supercomputer, setting up your account, basic Linux commands, and running your first job. Don't rush into advanced topics; work through the documentation step by step and gradually build your understanding.

  2. Follow Examples: Use the provided code examples and job script templates as a guide to write and submit your own jobs. Make sure to adapt them to fit your specific project needs.

  3. Use the Search Function: If you're looking for something specific, use the search bar to quickly locate the documentation you need. When looking for answers, try to use specific terms or commands. For example, instead of searching for "job," search for "submit job script".

  4. Focus on Relevant Sections: Once you're familiar with the basics, you can dive deeper into sections like:

    • Environments: Learn how to set up and manage different environments for your projects, including module management and environment variables.
    • Compilation: Understand the steps involved in compiling code on HPC systems, including how to use compilers, link libraries, and optimize your code for the supercomputer.
  5. Provide Feedback: Your experience is important to us! If you have suggestions to improve this portal or face any challenges, please let us know by submitting feedback, so that together, we can make this documentation portal even better.
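A typical environment-and-compilation workflow on a cluster like Devana might look like the sketch below. The module name, compiler flags, and file names are illustrative assumptions; run `module avail` on the cluster to see what is actually installed:

```shell
# List available compiler modules and load one
# (GCC/13.2.0 is a hypothetical version, not a confirmed Devana module).
module avail gcc
module load GCC/13.2.0

# Compile a C code with optimizations for the compute nodes.
gcc -O2 -march=native -o my_code my_code.c -lm

# Environment variables can steer threaded codes at run time.
export OMP_NUM_THREADS=8
./my_code
```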

Commonly Used Terminology

  • Principal Investigator (PI): The person responsible for the project and for the utilization of the computational resources allocated to it.
  • Collaborator: A person participating in the investigation of the project.
  • Project: A research task identified by a project ID, under investigation by the PI, with allocated resources.
  • Job: A calculation running on the supercomputer; the job allocates and utilizes the supercomputer's resources for a certain time.
  • Jobscript: A script to be executed by the Slurm workload manager.
  • Code: A program that performs calculations.
  • Node: A computer, interconnected via a network with other computers, used for running calculations.
  • Task: A single process of work in an MPI-based parallel application.
  • Core: A processing unit of a CPU that executes calculations.
  • Billing Unit (BU): A metric of usage, see definition.

Admonitions

Throughout this documentation, you’ll encounter admonitions—special callouts designed to emphasize important information. These help you easily identify key points, tips, warnings, and examples that are critical for effective use of the Devana supercomputer. These admonitions include:

Note

The supercomputer is optimized for high-performance workloads. Ensure your job scripts are tailored to utilize resources effectively.

Tip

Before submitting a job, test your script on a small scale to avoid unnecessary consumption of node-hours.

Warning

Make sure your job scripts specify the correct resource requirements. Over-allocation can delay your job and affect system performance.

Danger

Never share your login credentials. Unauthorized access may lead to the suspension of your account.

Example

Here’s an example of a basic job script to get started with running jobs on Devana:

sbatch my_first_job_script.sh
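A minimal my_first_job_script.sh might look like the following sketch. The partition name and resource values are illustrative assumptions, not actual Devana settings; use the Job Builder or the job submission guides for values that match your project:

```shell
#!/bin/bash
#SBATCH --job-name=my_first_job
#SBATCH --partition=short        # hypothetical partition name
#SBATCH --nodes=1                # Node: one computer in the cluster
#SBATCH --ntasks=1               # Task: one MPI process
#SBATCH --cpus-per-task=4        # Core: four CPU cores for that task
#SBATCH --time=00:10:00          # wall-time limit (hh:mm:ss)

# The commands below run on the allocated compute node.
echo "Running on $(hostname)"
# ./my_code                      # replace with your application
```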

Thank you for choosing SAS and Devana for your computational needs. We’re excited to support your research and innovation journey.